Thursday, February 5, 2009

Zen and the Art of Open Source Hardware – Day 3

Today, Chris and I drove to and finally arrived in Las Vegas. The odd thing is, this place is a ghost town. Sure, it’s not the middle of the summer right now, and it’s a Wednesday, but when we talked to some of the “locals” who work here at restaurants, they pretty much agreed… the recession has hit. Now I’m not much for casinos, and I never gamble, so I’m writing this from the hotel room, while Chris is hacking away on the TouchShield Slide. Earlier today, I spent a lot of time thinking and debating with Chris about how Open Source hardware might help in the recession, and why it needs to exist in the first place… at some point it was obvious that neither of us could define what it was we were doing when we shared schematic files and source code with each other… so we dissected it.


Yesterday’s article was written like an email letter I might have written to a close friend of mine. Today, though, I’ve had some heavier and more serious thoughts on my mind, so the tone will be more like an editorial or paper.


In one of my recent interviews, I was reminded of and humbled by the observation that many fields of study have cleverly avoided spending too much time trying to define their central organizing themes: biology has never satisfactorily defined “life”, medicine has avoided the definition of “health”, and physics avoids specifying the “big bang.” Perhaps there are some concepts so subjective that they resist attempts at crisp, clean definitions.


An economic and complexity theoretical explanation for why Open Source exists…


Looking back to the first calculators (Babbage machines, digital accountants, and mechanical computers), the “hard” in hardware was literal: unchangeable, static, immovable. Back then, the hardware of mechanical computing devices consisted of gears, cranks, and pushrods. Now, hardware means silicon devices that are one-way programmable gate arrays, or components connected to each other on immovable substrates (e.g. printed circuit boards). Perhaps there’s something to be learned from this comparison. After all, the old mechanical computers had much more to do with “physical” and “tangible” products, so the metaphor might help:


Mechanical computers – consisted of static gears, pushrods, pins – assembled and built from blueprints – programmed by encoded holes, knobs, or punchcards


Digital computers – consist of static gates, component connections – assembled from gerber and schematic files – programmed through chip flashers, bootloaders and source code compilers


Both share a common “fixed” dimensionality. In other words, they tend to be physically constrained: the creation and assembly of each involves fixing parts, components, or objects in 3-dimensional space. Also, a part’s relationship to other parts is fixed by the design of the computer, or in other words the “architecture” is pre-determined and largely immobile.


But is it always the case that hardware circuit configurations need to be immobile and static?


When hardware begins to have options, or configurations, or possible “states”, it begins to need instructions (or software) to meaningfully configure it and alter its behavior. Borrowing from Wolfram’s complexity framework, there might be a simple tiered ranking of complexity in instructions:



Simple / basic – a static configuration state; this merely indicates how to set up a machine once, e.g. where to move the parts, connecting rods, etc. (like an abacus, or a simple hand-held calculator with no memory or repeat-and-store function)


Complete – capable of implementing many complex behaviors and algorithms; a lot like simple calculators that support programs (like the HP-90 or TI-89 calculators)


Complex – a self-referential, self-modifying, fused relationship between the physical parts and the instructions; this is what excited individuals like Turing, Shannon, and Wiener – the idea that computers could be self-aware (like Core Wars, or Brainf*ck, or Forth, or cellular automata)


If a set of instructions is always simple or basic, they’re by definition open source, because it’s immediately obvious what they do. The instructions are a simple mapping, like “put this here” and “put that there,” that is obvious just through observation or description. A lot of math papers read this way, since they contain complete descriptions of their algorithms, mapping tables, and sometimes even test vectors. At the simple level, pictures or descriptions of the device are by definition Open Source.



If a set of instructions is complete, there is now a meaningful distinction between binary and source code, and there is now a role for a compiler. There is a non-obvious mapping that obscures the source and makes understanding it difficult. Tools like high-level languages, macros, compilers, assemblers, and linkers are created to simplify the process and abstract the complexity to a higher level, making individuals more productive at manipulating behaviors. Open Source is a choice to release the high-level, efficient instructions at the beginning of the compiler tool chain.
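The source/binary split, and the role of the compiler, can be seen concretely in Python’s own toolchain. This is just a toy illustration of the idea, not anything hardware-specific: the source line below is legible on sight, the bytecode the compiler emits is a non-obvious encoding, and the `dis` disassembler is exactly the kind of tool built to claw some legibility back.

```python
import dis

# The "source": a legible, high-level description of a behavior.
source = "total = sum(i * i for i in range(10))"

# The compiler applies a non-obvious mapping from source to bytecode.
code = compile(source, "<example>", "exec")

print(code.co_code)  # raw bytes: the opaque "binary" side of the split
dis.dis(code)        # a disassembler: a tool built to recover some legibility
```

Releasing `source` is the open-source choice; shipping only the raw bytes would force others to reverse the compiler’s mapping by hand.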


Finally, if a set of instructions is complex, it contains self-referential code structures where storage and instructions are blurred, losing the distinction between “programs” and “data.” The old game “Core Wars,” or the programming languages “Brainf*ck” and Forth, come to mind – to run them and interact with them, the architecture, data, and instructions are always exposed, dynamic, and interactive. Under this circumstance, the instructions, architecture, and system are by definition “open source,” because the architecture supports it.
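To make the blurring concrete, here is a minimal Core Wars-flavored sketch in Python (the opcodes and memory layout are invented for illustration, not taken from the real game): instructions and data share one memory, so a running program can overwrite its own next instruction.

```python
# Toy von Neumann machine: one memory list holds both instructions and data,
# so the distinction between "program" and "data" disappears.
# Opcodes (invented for this sketch):
#   ("MOV", src, dst)  copy cell src over cell dst -- a data move, or a code move!
#   ("ADD", n, dst)    add n to the data cell at dst
#   ("HLT",)           stop

def run(memory, pc=0, max_steps=100):
    for _ in range(max_steps):
        op = memory[pc]
        if op[0] == "HLT":
            break
        if op[0] == "MOV":
            memory[op[2]] = memory[op[1]]
        elif op[0] == "ADD":
            memory[op[2]] = ("DATA", memory[op[2]][1] + op[1])
        pc += 1
    return memory

memory = [
    ("MOV", 3, 1),   # 0: overwrite the *next instruction* with cell 3
    ("HLT",),        # 1: would halt immediately -- but gets rewritten first
    ("HLT",),        # 2: the real halt
    ("ADD", 5, 4),   # 3: the replacement instruction the program copies in
    ("DATA", 10),    # 4: an ordinary data cell
]
run(memory)
print(memory[4])   # the self-rewritten program ran ADD: ("DATA", 15)
```

There is no meaningful “binary” to withhold here: reading the memory is reading the program is reading the data.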


Oddly enough, in this complexity framework, “open source” as a “choice” only becomes relevant in a narrow “complexity band,” which is determined largely by the complexity of the instruction set, and the presence, availability or necessity of simplifying tools.


This feels like progress.


This recasts the question of what “open source hardware” is as a social and relativistic one. And this complexity framework helps explain many of the current debates in Open Source Hardware communities, which center largely on perspective and comparative subjectivity. What is “open source” to some may not be to others, and so “open source” has evolved an ambiguous, amorphous, and unsettling implied definition.


Under the complexity framework, simple or basic electronic circuits like LED blinkers, analog knob circuits, and pushbutton switches are like building elementary block structures out of Legos. Some people can just “look” at the device, or a picture of it, and “know” how to replicate it or build it. Some people I know - like Matt B. - can look at a picture of a complex Lego structure from only one angle, and deduce precisely how to build it from scratch. Others - like Chris L. and Limor - can do the same with circuits, even ones that are quite complex!


On the polar opposite end of the complexity spectrum, some circuits are so “complex” that any interaction with them requires working directly with the underlying architecture and components, and “source” and “instructions” themselves lose meaning. In a Forth programming terminal or a Ruby scripting environment, all words are available to the user to build upon; compiled binaries are impossible in the architecture by design. Like programming in Forth or Brainf*ck, writing a Core Wars bot, or running one of Wolfram’s cellular automata, the program is the data is the instruction set is the interface. The interface is the interpreter is the program is the data. The system is fully transparent to begin with, and so “openness” is a function or attribute of the architecture, not of the availability of “code” or “instructions.” (One could even argue that complex architectures like this are “open source” if documented, and proprietary if not, but that’s quite a loose definition.) Physical circuit analogies include hi-fi synth amps with feedback knobs, and many analog projects built from capacitors, resistors, and inductors – here, behavior does not come from instruction sets or programs; it comes from direct interaction.
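As a sketch of what “all words are available to the user to build upon” means, here is a tiny Forth-flavored interpreter in Python. This is not real Forth – the `: name … ;` defining syntax and stack words follow Forth convention, but the machinery is invented for illustration. The point is that the dictionary of words is an ordinary, inspectable dict that the user extends directly, so there is nothing closed to expose in the first place.

```python
stack = []
# The entire "architecture" is on the table: a stack and a word dictionary.
words = {
    "+":   lambda: stack.append(stack.pop() + stack.pop()),
    "*":   lambda: stack.append(stack.pop() * stack.pop()),
    "dup": lambda: stack.append(stack[-1]),
}

def run(program):
    tokens = program.split()
    i = 0
    while i < len(tokens):
        tok = tokens[i]
        if tok == ":":                            # ": name body ;" defines a word
            end = tokens.index(";", i)
            name, body = tokens[i + 1], " ".join(tokens[i + 2:end])
            words[name] = lambda b=body: run(b)   # new words are built from old ones
            i = end + 1
            continue
        if tok in words:
            words[tok]()
        else:
            stack.append(int(tok))                # anything else is a number literal
        i += 1

run(": square dup * ;  7 square")
print(stack)   # [49]
```

The user’s definition of `square` lives in the same dictionary, at the same level, as the built-ins – the program and its interpreter are one interactive, transparent system.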


So what about the middle zone?


Somewhere in the middle of this complexity spectrum is a zone where tools have been created to optimize productivity, human understanding, and efficiency of design. Here, tools have been built to create efficiencies of design and production, similar to building assembly lines or large pieces of equipment in the Henderson Experience Curve world. And under this theory, it’s why digital electronics and software companies can exist and make a profit in the first place: they reduce the cost and effort of making a device by finding scalable tools and processes. If the architecture is sufficiently transparent and obvious, there would be no complexity arbitrage opportunity, no simplification or productivity efficiency someone else would be willing to pay for, and so no need for a company (imagine trying to sell instruction books for obvious topics like breathing, eating, or sleeping). On the other hand, if the architecture is complex and pervasively interactive, the economic opportunity doesn’t come from proprietary distribution of instruction packages; it comes from individual interaction (examples include services, art, aesthetic design, and healthcare treatment).


So when tools are necessary to advance efficiency and labor productivity, there is a choice to become open source. And this choice is about the distribution of the higher-level instructions, tutorials, and file formats that reduce complexity and improve efficiency and productivity for other individuals.


Seems like I’m finally ready to assert a socially-relativistic, complexity economic definition of “open source hardware”:


In an economic sense, the degree of open sourceness of a hardware project is measurable on the basis of what measures are taken to reduce labor, complexity, and learning curves for others, and consequently maximize efficiency, productivity, and gains for others.


So, what does this mean for someone wanting to make open source hardware, like me?


In no particular order, I’d propose this list of 10 assertions about what Open Source is - a bit like Asimov's laws of robotics:



  1. Open source will always be a qualitative – not quantitative – feature assigned to a hardware project
  2. Open source is not a binary attribute applied to a project
  3. Open source principles mean different things to individuals with varying levels of talent, knowledge, and experience
  4. Open source is only a meaningful decision to be made for projects of complexity where meaningful tools have been created to accelerate productivity
  5. Releasing source files, schematics, etc. is not by itself sufficient to make a project as “open source” as possible; indeed, it may be impossible to be “as open source as possible,” given that it means different things to different communities
  6. Open source hardware is a function of target community and tool choice, and this can be maximized to varying degrees; the opposite suggests that purposefully selecting tools can be a way to prohibit community sharing while still “appearing” open source
  7. Releasing a file (e.g. gerber) in a format that requires expensive commercial tools to interpret is more open source amongst a community of professional practitioners than it is to a community of individual hobbyists
  8. Releasing a schematic picture is more “open source” amongst a community of engineers or professionals than it is to a community of artists
  9. Open source reduces the economic opportunity for companies because it reduces the ability to optimize and arbitrage product complexity through tools and accumulated knowledge, so companies need to profit in other ways
  10. Open source requires a community and audience, and is a function of that community’s shared knowledge, common infrastructure, and tools



Of course, Open Source Hardware will evolve as new projects come on the scene, and perhaps with the arrival of new more powerful, readily available production tools!


I’m exhausted, but also excited. This is the clearest I’ve ever felt about “open source hardware”, but I’m sure not everyone will agree. And that’s perfectly ok, because I’d love to know how other people would define “open source hardware,” and what it means to them…

