One of the critical factors that has permitted open source to flourish has been the relative standardness of processors (they are all 8086s, effectively, sigh…), and their relative cheapness (generally speaking, less expensive than a used car, and these days very much so).
We are able to experiment with different designs for almost any aspect of an open source operating system.
We can even try dangerous things: for instance, what happens if I try to keep my processor cool by switching C-states and P-states, rather than turning on the fan? When I encounter a bug in my code, I get a meltdown. Oh, well. Not a cheap situation, and the landfill result is unfortunate, but if I succeed, I might be able to significantly reduce the power consumption of laptops, extending their battery life (both the run time per charge, and the time before I have to landfill the batteries).
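The experiment above amounts to a crude thermal governor: read the temperature, and clamp the frequency (P-state) instead of spinning a fan. A minimal sketch, assuming the Linux cpufreq/thermal sysfs interfaces; the paths, frequencies, and thresholds here are illustrative assumptions, not a tested design (as the text says, getting this wrong cooks the chip):

```python
# Sketch of a thermal governor that throttles P-states instead of using a fan.
# ASSUMPTIONS: Linux cpufreq/thermal sysfs paths; illustrative frequencies
# and thresholds. Not a safe, production control loop.

TEMP_SENSOR = "/sys/class/thermal/thermal_zone0/temp"                # millidegrees C
FREQ_KNOB = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq"  # kHz

def pick_max_freq(temp_c,
                  freqs=(800_000, 1_600_000, 2_400_000),  # kHz, low to high
                  thresholds=(85, 70)):                   # degrees C
    """Map a temperature to a frequency cap: hotter means a deeper throttle."""
    if temp_c >= thresholds[0]:
        return freqs[0]   # hottest: lowest P-state
    if temp_c >= thresholds[1]:
        return freqs[1]   # warm: middle P-state
    return freqs[2]       # cool: full speed

def step():
    """One control-loop iteration; a real loop would sleep and repeat."""
    with open(TEMP_SENSOR) as f:
        temp_c = int(f.read()) / 1000
    with open(FREQ_KNOB, "w") as f:
        f.write(str(pick_max_freq(temp_c)))
```

The point of the essay survives in the sketch: the whole experiment is a few lines against a commodity interface, and the worst case is one dead, cheap processor.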
I picked this example on purpose. Can I do the same thing with a nuclear power plant? The answer is no. There aren’t enough of them, they are too expensive to build, we have no idea how to “landfill” them afterwards, and the negative impact of a meltdown is too significant to contemplate.
This means that I can’t think about the 30 ways that Homer Simpson can screw things up, and figure out how to build a system that fails safely with him in control.
Similarly, I couldn’t have done the above work in 1972 either. The one computer my university owned was far too valuable for me to be permitted to mess with the operating system. That was only for professionals (the person from IBM who came with the 360). Instead, they gave me virtual 360s on which I could play with virtual paper tape readers.
The lesson is that big is not better for open source. It creates a small number of critical components/systems.
Small, cheap, distributed components/systems permit experimentation. I point to “Back to the Future II” – the DeLorean is now powered by “Mr. Fusion”: throw some garbage in it, and it generates power. It’s SMALL. It probably doesn’t blow up if you put the wrong things in; it probably just turns itself into a paperweight, the way a P4 without a fan does.
That’s why nuclear power is bad — not because it’s atomic, but because we have a very small number of very centralized, hypercritical components, and we have, as a result, no idea how to take them apart.
Nature gets open source – she puts a copy of the source code in every friggin bacterium, and creates opportunities for mutation. And there are billions and billions and billions of them.
Greens are generally seen as opposed to nuclear power. Unfortunately, most of them are still caught in the NDP/left hysteria about it. They don’t understand it. Fundamentally, it is not radiation that scares them – what scares them is that they know they aren’t in control of the political machine that controls nuclear power (and the mentally related nuclear weapons).
No amount of technical explanation really helps – I know subatomic physics graduates who are emotionally afraid of nuclear power. This is why the nuclear power FAQ doesn’t really help people: it presents technical facts that just make them feel like there is an attempt to brainwash them.
Greens should be afraid of centralized control over the means of production. Greens, like Republicans and libertarians, oppose communism and Stalinism – we should be concerned when the means of production is in the hands of the few.
This is why greens have a fundamental principle of decentralization.
A critical element of this is that technology be designed such that it can easily be replicated, that it can be adapted (in the field if necessary – think of Mr. Scott or Mr. Spock aboard the Enterprise in Star Trek: they can’t beam back to Redmond if you need an operating system patch…), and that it can be repaired.
Software monopolies are the opposite of decentralization – they are Stalinism.