Our society is becoming increasingly complex and hyper-integrated. The growing number of connections could, on the one hand, help solve all kinds of problems we now face together, but on the other hand, the sheer complexity could also lead to increased fear. Fear of not being able to cope with complexity anymore. So what do we need? Simplification seems to be the first answer that comes bubbling up. But simplify what? Make fewer connections? Seems uncontrollable. Invent fewer technologies? That would hinder invention and innovation, so not a good idea. Fewer processes? We need processes and many already work fine; maybe we should accept this as a fact. Fewer methodologies? Would be nice, but it could hinder evolution by limiting diversity. Fewer organizations? Let it go. Organizations are already organizing themselves through open market principles. What about simply less control? Less control would help in letting things go and would allow for some self-organization. This is already happening (experimentally) in certain areas. And there seems to be no hard evidence that limiting control increases problems. Nevertheless, good luck with your search for more or less complexity, whatever works best for your situation.
Have you encountered this in your daily interactions with others too? I mean the almost automatic or intuitive human reaction to complex problems: most of the time, people try to simplify complex cases by stripping things away. Things that might at first sight seem non-essential, but on closer inspection turn out to be very essential.
Suppose you present a complex picture like the exploded view of the motor shown in the picture; the standard reaction can be to leave out details so people can have a better overview. This is where it can go wrong. If people (not you and me, but all the others of course) abstract complex things without knowing the essentials of the complexity, they might make the wrong (or sometimes even disastrous) decision. So the key to abstraction is that you translate a certain given complexity to a simpler viewpoint WITHOUT leaving out the essential aspects. In the example case, the abstracted picture could be a motorcycle, where the exploded view of the motor is now merely an integral part of the total concept. But the details inside are still essential. And thus you cannot leave them out. Even if you think they are too complex or too costly or whatever.
I think Einstein got it right when he stated: “Everything should be made as simple as possible, but not simpler”. Map this to the exploded view of the engine and you know what I mean. So we should keep the complexity, not fight it, and abstract it only on the communication level, not on the architectural level! And that’s where communication skills come in handy: if you can’t explain it to your grandmother or to your children, you should simplify your viewpoints, but not the architectural essentials! Leave the multi-layer approach (figure inspired by a Wikipedia article) intact! Happy decomplexing!
Picture source here. Traditionally, ownership is a key theme in many organizations. For many business-related “problems” we often tend to think that ownership will be the way to solve them. And that is of course very true. Without ownership, no one feels “responsible” and we tend to let things go. So ownership helps. But what do we often see: in larger organizations there is a natural tendency to centralize things. We tend to centralize, integrate, unify, standardize or (out)source on several topics: processes, organizational roles, functionalities, job descriptions, tasks, components, services, ICT, etc. And from an efficiency point of view there seems at first sight nothing wrong with that. But it can, and often will, also introduce new problems. By centralizing something that was decentralized before, we need to rethink the ownership problem. And centralization may seem to make certain problems less complex, but that is not always true. Sometimes we only redistribute the complexity by centralization and move the problem into another area. The total complexity remains or might even get worse. So what should we do instead? If we want to make people responsible for something, we must design architectures that are optimized for decentralization, as much as possible down to the personal level. The more precisely personal ownership can be pinpointed, the better. What do we lose by this approach? We lose some efficiency because we add redundancy. But we gain effectiveness, we reduce the total complexity (because it is now distributed) and we have also spread risks enormously by decentralization. So in my opinion, in a human-centrically designed distributed architecture there can be, on an overall (enterprise) level, more advantages than disadvantages. What is your opinion?
Figure source. Just a crazy thought, but suppose organizations could make all of their decisions in real time, just like a flock of birds does? What kind of information would we need to make real-time decisions? Well, if the goal is just to “survive” (“keep flying”), it would probably be sufficient to have information on the innovation speed and direction of our direct “neighbors”. Suppose we could facilitate (supported by modern ICT) a distributed network of decision-making information, and each node (decision-making “entity”) in the network would get the information relevant for making the most important (“survival”) decisions. So what are the most important decisions? If we compare with the flock of birds, it’s making sure your “local” innovation speed and direction match those of your direct environment. It’s almost as if we were applying Ashby’s Law of Requisite Variety here. The law states that “variety absorbs variety”: it defines the minimum number of states necessary for a controller to control a system of a given number of states. So here you have it: you need to match your own variety with that of your direct environment. Don’t make it more complex or less complex, but make sure the complexity matches. And to reduce the decision-making complexity you could try to limit the number of decision-making connections, because every extra connection makes the distributed decision-making process more complex: the complexity of the network grows quadratically with the number of connected nodes. But the value of a network also grows with (roughly the square of) the number of connected nodes (Metcalfe’s Law). So we need large networks to create more value. The large networks could consist of interconnected small “local” networks to reduce the total decision-making complexity.
So here’s a wrap-up: use the distributed topology Paul Baran† designed for the Internet as a reference model for distributed decision making and for distributed value creation, combine that with Ashby’s Law of Requisite Variety to reduce the “local” decision-making complexity, and combine that with Metcalfe’s Law to increase the total value of our network. If we really did that, I wonder what our world would look like?
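To get a feel for the numbers, here is a rough back-of-the-envelope sketch (my own illustration, not a formal model): it compares the number of pairwise connections, as a crude proxy for coordination complexity, in one fully connected network of n nodes versus the same nodes organized as small fully connected “local” clusters chained together by single bridging links.

```python
# Sketch: connection counts as a proxy for coordination complexity.
# One big fully connected network grows quadratically; the same nodes
# split into small interconnected "local" clusters need far fewer links.

def full_mesh_links(n: int) -> int:
    """Links in one fully connected network of n nodes: n*(n-1)/2."""
    return n * (n - 1) // 2

def clustered_links(n: int, cluster_size: int) -> int:
    """Links when n nodes form fully connected clusters of `cluster_size`,
    chained together with one bridging link between consecutive clusters."""
    clusters, remainder = divmod(n, cluster_size)
    links = clusters * full_mesh_links(cluster_size) + full_mesh_links(remainder)
    total_clusters = clusters + (1 if remainder else 0)
    return links + max(total_clusters - 1, 0)  # add the bridging links

if __name__ == "__main__":
    n = 120
    print(full_mesh_links(n))     # one big mesh: 7140 links
    print(clustered_links(n, 6))  # 20 local clusters of 6 plus 19 bridges: 319 links
```

The quadratic term never disappears, but it only applies inside each small cluster, which is exactly the “interconnected small local networks” idea above.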
- Tip 1: respectfully say bye to traditional (red ocean, greed-centered) strategy schools, say hi to (blue ocean) society oriented strategies.
- Tip 2: respectfully say bye to traditional power- or status oriented (Taylor-“made”) management schools, say hi to “tailor-made” schools that focus more on organizing people rather than on managing people.
- Tip 3: respectfully say bye to efficiency as a primary goal, say hi to effectiveness as a primary goal.
- Tip 4: respectfully say bye to common fears that block true renewal (fear of isolation, fear of incompetence, fear of inconsistency, fear of imperfection, fear of separation, fear of ignorance, fear of complexity, fear of loss of control, fear of learning, fear of letting go of the past, etc.) and say hi to their powerful counterparts such as love, respect and a learning culture.
- Tip 5: respectfully say bye to traditional sharing strategies (greed, egoism, selfishness) and say hi to joyful sharing strategies.
- Tip 6: respectfully say bye to innovation strategies aiming at free markets and say hi to strategies aiming at societal goals.
- Tip 7: respectfully say bye to old-style thinking and doing and say hi to new-style thinking and doing.
- Tip 8: respectfully say bye to fakeness and dishonesty and say hi to authenticity and honesty.
- Tip 9: respectfully say bye to traditional scarcity thinking and say hi to abundance thinking.
- Tip 10: respectfully say bye to purely rational (or scientifically proven) decision making and say hi to (spiritual) decision making based on your intuition and your heart.
Of course you don’t need to say goodbye to all of the above, but a better balance wouldn’t hurt our society. So good luck, and please let me know what worked for you.
Einstein once said: everything should be made as simple as possible, but not simpler. He was very right. If you take a look at some of the “systems” society has created, they have a tendency to become increasingly complex. In some cases this is fine, if the system under consideration is highly accepted, used and promoted; for example, the Internet. But in other cases, we have built upon base systems and added complexity on top by layering. It’s as if we didn’t want to take proper time for redesign. By hiding the details (creating opacity) we were able to avoid true redesign. Now these types of systems, which have what I call redundant complexity, could perhaps better be unwrapped to rediscover their original, true intention or meaning, and then be redeveloped from these new insights. That way, we would at least limit the total complexity. For example, the world’s financial system might be a good candidate, given the crises it has put us all in. Or the availability of hundreds of thousands of “standards”, sometimes developed with so much complexity that they hinder a true level playing field. Or systems that are designed with abundant features where a “good is good enough” design would have been sufficient. So for these types of systems, a “let’s unwrap first before adding an extra wrap around this mummy” approach might be just right, honoring Einstein. Source of the mummy figure is here.
We all know that planning fails when the complexity of the problem exceeds the capacity of the planners to reason about it. So the natural tendency is to fight the complexity so we don’t have to deal with it. This tendency is sometimes induced by fear: fear of not being able to oversee or handle the complexity. This fear has led us to develop techniques that are commonly used to fight complexity. Think of techniques such as isolation, separation, centralization and integration. And we are accustomed to using these techniques both in organizational design patterns (social complexity) and in technological design patterns (technological complexity). Isolation is a technique often seen in projects. We isolate the problem as much as possible so we don’t have to deal with the complexity of the “whole”. This is one of the reasons many projects fail. We use separation techniques when we organize something into domains or hierarchies. This is in fact a variant of isolation. By separating into domains, departments, groups, teams, disciplines, etc. we hope to have distributed the total complexity. We all know that in practice this leads to suboptimization. And then there is centralization. By centralizing things, they have to become more or less standardized. This is the price we pay to get rid of the distributed complexity that was there before the centralization. But we didn’t really reduce the total complexity with this technique. We just distributed it in another way. The total complexity is still there and now pops up in other areas. And finally, there is integration. We leave things as they are and fight the (coordination) complexity with integration techniques. This, however, leads to yet another distribution of complexity: we moved it into the integration domain. But it is still there. Now let’s take a look at examples where highly complex “systems” are handled without any major problems.
Inspired by nature, we see that a flock of birds is self-organizing around a few very simple principles. 1: All birds must know the same principles. 2: A bird must control the distance to its direct neighbors (not too close, not too far). 3: A bird must match its own speed with that of its direct neighbors. So by having a few very simple principles, you can make an endlessly huge (or small) flock of birds all act in the same way. You can let them follow any goal or direction (only one leader who gives the direction would be sufficient). These principles of nature have more or less been copied into the design of the Internet. It operates autonomously. Compared to the flock-of-birds principles you could say: 1: All routers must know the same routing algorithm. 2: All routers must be able to calculate the distance to their direct neighboring routers; more isn’t needed. 3: All routers must know how many hops to make to get a message to the endpoint. You can find more on the Internet design in the network topologies designed by Paul Baran†.
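The speed-matching rule (principle 3) can be sketched in a few lines of code. Below is a toy one-dimensional illustration of my own: each bird sees nothing but its direct neighbors and nudges its own speed toward their average, yet the whole flock ends up flying at one shared speed.

```python
# Toy sketch of principle 3 (speed matching), in one dimension:
# each bird only knows the speeds of its direct neighbors in the line.

def step(speeds):
    """One update: every bird moves its speed halfway toward its neighbors' mean."""
    new = []
    for i, s in enumerate(speeds):
        neighbors = speeds[max(i - 1, 0):i] + speeds[i + 1:i + 2]
        target = sum(neighbors) / len(neighbors)
        new.append(s + 0.5 * (target - s))  # purely local adjustment
    return new

speeds = [1.0, 5.0, 2.0, 8.0, 3.0]  # wildly different starting speeds
for _ in range(300):
    speeds = step(speeds)
print(speeds)  # all speeds are now (nearly) identical
```

No bird ever sees the whole flock; the common speed emerges from the repeated local rule alone.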
Now what would happen if we organized work that needs synergy according to similar principles? Let’s give it a try. Principle 1: All workers must know the shared principles (we must therefore have a shared language; that’s one of the most crucial things to centralize!). 2: All workers must know the direction their direct neighbors want to go in (nothing more, nothing less, so if you design this cleverly, you also solve information overload or information underload). 3: All workers must know the innovation or change speed of their direct neighbors. That’s about it. No more control needed (at least, theoretically). Sounds simple, doesn’t it? Maybe it is just that simple. Or to quote K.A. Richardson: “Each element in the system is ignorant of the behavior of the system as a whole (this implies we have to trust each other that the total job gets done partly by others). [...] If each element ‘knew’ what was happening to the system as a whole, all of the complexity would have to be present in that element” (this implies workers do not need to know the total complexity, only that of their direct neighborhood). Now how does this map to architecture design? I propose that architects design distributed architectures where possible, because that makes the total architecture a lot simpler, and only centralize parts for which no good alternative exists. This approach would prevent us from having to fall back on other techniques like isolation, separation and centralization. Good luck if you want to experiment with it! Please let me know if it worked for you!
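Principle 2 can be illustrated with a small sketch (the names and team layout are hypothetical, just to make the point): one “leader” worker knows the direction, every other worker only ever asks its direct neighbors, and yet the direction reaches the whole network.

```python
# Hypothetical sketch of principle 2: workers only consult direct neighbors.
# One leader knows the goal; everyone else copies a neighbor who already knows.

def spread_direction(neighbors, leader, direction):
    """neighbors: dict mapping each worker to a list of its direct neighbors."""
    known = {leader: direction}
    while len(known) < len(neighbors):          # until every worker knows
        for worker, adjacent in neighbors.items():
            if worker not in known:
                for other in adjacent:
                    if other in known:
                        known[worker] = known[other]  # copy a neighbor's direction
                        break
    return known

# A chain of four workers; only "ann" knows where to go.
team = {"ann": ["bob"], "bob": ["ann", "cas"], "cas": ["bob", "dex"], "dex": ["cas"]}
print(spread_direction(team, "ann", "north"))  # every worker ends up knowing "north"
```

Note that no worker ever holds a picture of the whole team, which is exactly the Darkness Principle quoted above.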
Our society is getting more complicated (“ingewikkeld” in Dutch, as the picture to the left shows) by the day. And yet we are still looking for ways to simplify things, while the total complexity will certainly not diminish. So we need to accept total complexity as a given and find clever ways to handle it. That is where abstraction comes in as one of the possible tools. It’s a favourite tool of architects and designers. With abstraction you simply hide certain complexity by abstracting it and putting a (virtual) box, domain, layer or whatever you want to call it around it. Take the PC as an example: a highly complex device, both HW- and SW-wise, but the dirty details have been cleverly hidden by its designers. So an average user does not need to bother with the internal complexity anymore. He or she can start adding extra new complexity on top of it. But abstracting also has drawbacks: we can lose oversight because we don’t understand the inner workings of our black boxes anymore. If they work, they work fine, but if they fail, we are in trouble. People who really understand what is happening inside these insanely complex HW/SW ecosystems, accessed through black boxes we call PCs, are getting scarcer and scarcer. And then there is society itself: societal complexity, or the way we all interact. It is not so simple to abstract complex social interactions. So maybe in this domain, chaos science can help us a little. Consider the Emergence Principle in the splendid article Living on the Edge: “capable of rising to increasingly higher levels of complexity and order all by themselves”. Sounds like a possible (scientific) solution to handle societal complexity questions. But will it also work in our day-to-day life? The mummy picture originated here.
We are entering the Knowledge Era. Our society is becoming more and more complex, as are the technologies we all use. The need to invent or further develop more advanced decision-making techniques becomes more and more urgent, but how do we do that? Or to refer to one of the key principles of Het Rijnlandse Boekje: “Wie het weet mag het zeggen”, or in English, “(s)he who knows may tell how”. Let’s start from the Darkness Principle, as Jurgen Appelo so nicely explains in his article “Why We Delegate: The Darkness Principle” by referencing K.A. Richardson: “Each element in the system is ignorant of the behavior of the system as a whole [...] If each element ‘knew’ what was happening to the system as a whole, all of the complexity would have to be present in that element”. As long as we are not able to oversee all the complexity, we will have to distribute it somehow. And remember: the smarter we try to get by adding more complexity dimensions to our own decision-making context, the dumber we in fact get, because we then do not reuse the knowledge already available adjacent to us. So the more complex the “system” whose problem we are trying to solve, the greater the need to delegate the decision making. I also see a parallel in the network topologies designed by Paul Baran†. From the three distinct topology styles available (centralized, decentralized and distributed) he picked the distributed style for the fundamental design of the Internet. And today it still works! It is an example of a highly self-healing, self-supporting architecture in which components (routers, switches, etc.) make autonomous decisions based on the complexity of their direct environment (they do not need to know the total environment, as is the case in traditional centralized, decentralized or hierarchical decision-making styles).
So it seems obvious: if the complexity of the total system we are trying to “control” increases, a good approach is to organize the decision-making process in a network (distributed) style and no longer in a centralized or decentralized style. Each cell in the network is then allowed to make autonomous decisions within its own “jurisdiction”. For any decision that goes beyond that, the cell needs to organize some form of collective decision making by informing its direct adjacent cells before deciding. Informing anyone further is not necessary, because the adjacent cells do that. Now only one final question remains: is there an optimum network size? Or should the network size depend on the problem you are trying to solve (for example, solving a complex problem by crowdsourcing)?
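As a thought experiment, the decision rule above might look something like this in code. This is a speculative sketch: the class, the topic names and the simple consent rule are my own assumptions, not an existing design.

```python
# Speculative sketch: a cell decides alone within its own jurisdiction,
# otherwise it polls only its direct neighbors (never the whole network).

class Cell:
    def __init__(self, name, jurisdiction):
        self.name = name
        self.jurisdiction = set(jurisdiction)  # topics this cell owns
        self.neighbors = []                    # direct adjacent cells only

    def consents(self, topic):
        # Assumed consent rule: a neighbor objects only if the topic is its own turf.
        return topic not in self.jurisdiction

    def decide(self, topic):
        if topic in self.jurisdiction:
            return True                        # autonomous decision
        # Collective decision: poll direct neighbors, nothing more.
        return all(n.consents(topic) for n in self.neighbors)

a, b, c = Cell("a", ["billing"]), Cell("b", ["support"]), Cell("c", ["ops"])
a.neighbors, b.neighbors = [b, c], [a]
print(a.decide("billing"))   # True: within its own jurisdiction
print(a.decide("support"))   # False: direct neighbor b owns that topic
```

The point of the sketch is only that each cell’s decision logic never references the network as a whole, just its own jurisdiction and its direct neighbors.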