We human beings are individuals. We are, literally, in-dividable: unable to divide. Like atoms. Unique. And despite each of us being a unique, in-dividable being, we have a lot in common. But it is in that domain of commonality that we often take decisions in favor of the common rather than the individual.
I have observed many times that the majority of commonality- or service-oriented decisions we make lead to one-size-fits-all concepts such as generalization, categorization, standardization or centralization. I believe these decisions are mostly driven by fear, laziness or plain incompetence. Fear of not achieving maximum efficiency. Fear of not achieving maximum synergy. Fear of having to cope with more complexity. Laziness because it makes systems operations easier. Laziness because it makes design easier. Laziness because it makes change management or project management easier. Incompetence simply because we have not yet worked out how to design complex systems for complex worlds.
But how would our world look if we stopped generalizing and started designing our world just as it is: a world that is complicated and full of unique living beings with unique service requirements? Instead of falling back on the easy way out of abstracted, generalized “types” that favor a minority above the majority. Time for change!
Do you know what the picture represents? It’s called a Binary Kite. You can find some more examples here. I stumbled upon this picture and was inspired to write an article about centralization. For me, this Binary Kite represents something you can see happening all the time in our society: the drive for centralization. Most original ideas or innovations start out decentralized. Then, when they mature and are ready for growth, we almost automagically see synergy potential in centralizing them. And before you know it, you have centralized so much that the Kite has become more of a disabler than the enabler it was originally meant to be. So what should you do when you see a concept growing and growing, becoming massive, dominant, slow, costly and inflexible? Decompose the centralized concept into more or less autonomous but smaller subsystems, and introduce modularity and loose coupling between the subsystems where appropriate. Or develop a Creative Destruction strategy and let the environment build up a new, more decentralized ecosystem in parallel: one that is less massive, less dominant, faster, less costly and more flexible than the centralized original. It’s that easy.
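To make “modularity and loose coupling between the subsystems” concrete, here is a minimal sketch in Python. Everything in it is illustrative (the topic name, the payload, the subsystem functions are my own invention): two subsystems that never call each other directly, coupled only through a tiny event bus.

```python
from collections import defaultdict

class EventBus:
    """Loose coupling: publishers and subscribers only share topic names,
    never references to each other."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers[topic]:
            handler(payload)

bus = EventBus()
received = []

# Subsystem A: emits events; knows nothing about subsystem B.
def subsystem_a(bus):
    bus.publish("order.created", {"id": 1})

# Subsystem B: reacts to events; knows nothing about subsystem A.
bus.subscribe("order.created", lambda payload: received.append(payload))

subsystem_a(bus)
print(received)  # [{'id': 1}]
```

Either subsystem can now be replaced, scaled or removed without touching the other; the bus is the only shared contract.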
We all know that planning fails when the complexity of the problem exceeds the capacity of the planners to reason about it. So the natural tendency is to fight the complexity so we don’t have to deal with it. This tendency is often induced by fear: fear of not being able to oversee or handle the complexity. That fear has led us to develop techniques that are commonly used to fight complexity: think of Isolation, Separation, Centralization and Integration. We are accustomed to using these techniques both in organizational design patterns (social complexity) and in technological design patterns (technological complexity). Isolation is a technique often seen in projects: we isolate the problem as much as possible so we don’t have to deal with the complexity of the “whole”. This is one of the reasons many projects fail. We use separation techniques when we organize something into domains or hierarchies; this is in fact a variant of isolation. By separating into domains, departments, groups, teams, disciplines and so on, we hope to have distributed the total complexity. We all know that in practice this leads to suboptimization. Then there is centralization. By centralizing things, they more or less have to become standardized; that is the price we pay to get rid of the distributed complexity that existed before the centralization. But we didn’t really reduce the total complexity with this technique; we just distributed it in another way. The total complexity is still there, and now pops up in other areas. And finally there is integration: we leave things as they are and fight the (coordination) complexity with integration techniques. This, however, leads to yet another distribution of complexity: we have moved it into the integration domain, but it is still there. Now let’s look at examples where highly complex “systems” are handled without any major problems.
Inspired by nature, we see that a flock of birds self-organizes around a few very simple principles. 1: All birds must know the same principles. 2: Each bird must control the distance to its direct neighbors (not too close, not too far). 3: Each bird must match its own speed with that of its direct neighbors. With just these few simple principles you can make an arbitrarily large (or small) flock of birds act as one. You can let it follow any goal or direction (a single leader who gives the direction would be sufficient). These principles of nature have more or less been copied into the design of the Internet, which operates autonomously. Compared with the flock-of-birds principles you could say: 1: All routers must know the same routing algorithm. 2: All routers must be able to calculate the distance to their direct neighboring routers; more isn’t needed. 3: All routers must know how many hops it takes to get a message to its endpoint. You can find more on the Internet’s design in the network topologies designed by Paul Baran.
Now what would happen if we organized work that needs synergy according to similar principles? Let’s give it a try. Principle 1: all workers must know the shared principles (we must therefore have a shared language; that is one of the most crucial things to centralize!). Principle 2: all workers must know the direction their direct neighbors want to go (nothing more, nothing less; if you design this cleverly, you also solve information overload and underload). Principle 3: all workers must know the innovation or change speed of their direct neighbors. That’s about it. No more control needed (at least, in theory). Sounds simple, doesn’t it? Maybe it is just that simple. Or, to quote K.A. Richardson: “Each element in the system is ignorant of the behavior of the system as a whole […] If each element ‘knew’ what was happening to the system as a whole, all of the complexity would have to be present in that element.” This implies that workers have to trust each other that the total job gets done partly by others, and that a worker does not need to know the total complexity, only that of its direct neighborhood. Now how does this translate to architecture design? I advocate that architects design distributed architectures wherever possible, because that makes the total architecture a lot simpler, and centralize only those parts for which no good alternative exists. This approach would prevent us from having to fall back on techniques like isolation, separation and centralization. Good luck if you want to experiment with it, and please let me know if it worked for you!
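The worker principles above can be sketched the same way. In this illustrative toy (the graph, the names and the “direction” value are all my own invention), each worker repeatedly adjusts its direction toward the average of its direct neighbors only; a single leader keeps a fixed direction. No worker ever sees the whole system, yet everyone ends up aligned with the leader, matching the source’s claim that one leader giving the direction would be sufficient.

```python
# A small collaboration graph: who counts as whose direct neighbor.
neighbors = {
    "lead": [],
    "a": ["lead", "b"],
    "b": ["a", "c"],
    "c": ["b", "d"],
    "d": ["c"],
}
# Each worker's current "direction", as a number between 0 and 1.
direction = {"lead": 1.0, "a": 0.0, "b": 0.0, "c": 0.0, "d": 0.0}

for _ in range(200):
    updated = {}
    for worker, nbrs in neighbors.items():
        if not nbrs:
            updated[worker] = direction[worker]  # the leader never moves
        else:
            # Principle 2: look only at direct neighbors, nothing more.
            updated[worker] = sum(direction[n] for n in nbrs) / len(nbrs)
    direction = updated

print(direction)
```

After enough rounds every worker’s direction is within a hair of the leader’s 1.0, even though worker “d” has never heard of “lead” directly: the information travels purely through local neighborhoods.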