Walking fish? Piece of cake… This post was inspired by the following Tweet from @Mmglouw @ModerneGezegden: #vroeger, toen er nog geen water was en de vissen nog moesten lopen (in the early days, when there was no water yet and fish had to walk).
Archive for November, 2011
No, this is not one of Harry Potter’s spells. He used the Reducto spell to blast solid objects aside. In this post we talk a little bit about reductionism and the way it is sometimes used to make false ontological representations of “things”. According to this excerpt from Wikipedia it works as follows: Reductio ad absurdum (Latin: “reduction to the absurd”) is a form of argument in which a proposition is disproven by following its implications logically to an absurd consequence. There are several varieties of reductionism, of which a particularly interesting one is ontological reductionism. Let’s zoom in on that one. This type of reductionism seems to be happening in the real world all the time, and it can lead to all kinds of “translation” problems. That is because we human beings like to talk to each other about real-world “things” for which we each have our own ontological meaning. For example, if one person talks about a Customer, (s)he might mean a Consumer, whereas another person might assume that this Customer is a Prosumer. So you cannot reliably exchange information if you don’t add the proper context (or indirectly refer to that context in some way). This is where information exchange can go wrong: we exchange information that has become meaningless because the context isn’t exchanged, or is simply assumed to be known, and so we introduce translation problems.

For example, let’s assume I send you a Duck (it’s a Duck of Vaucanson, but I didn’t tell you that) and at the receiving end you try to understand what’s inside. I only told you I sent a Duck; I didn’t tell you it was a mechanical Duck. You then use your own Reductio ad Absurdum strategy at the receiving side to determine its ontological meaning. Judging by its outer, visible attributes, you might falsely conclude it’s not a mechanical Duck. This wouldn’t have happened if I had also sent you the context.
So if we exchange information with each other that can be Reduced ad Absurdum to its original ontological meaning, we also need to “send” the ontological context. It’s another way of saying we need Semantic Interoperability…
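The Duck story can be sketched in a few lines of code. This is a minimal illustration under my own assumptions (the envelope format, function names and the “automata” context label are all hypothetical, not from the post): the sender wraps the payload together with an explicit context, and the receiver refuses to interpret anything that arrives without one.

```python
import json


def pack(payload: dict, context: str) -> str:
    """Wrap the payload in an envelope that names the context it belongs to."""
    return json.dumps({"context": context, "payload": payload})


def unpack(message: str) -> dict:
    """Refuse to interpret a payload that arrives without an explicit context."""
    envelope = json.loads(message)
    if "context" not in envelope:
        raise ValueError("cannot interpret payload: no context supplied")
    return envelope


# Sender: a 'Duck' that is, in fact, mechanical.
msg = pack({"kind": "Duck", "maker": "Vaucanson"}, context="automata")

# Receiver: interprets the Duck within the sender's context, not its own.
envelope = unpack(msg)
print(envelope["context"], envelope["payload"]["kind"])  # → automata Duck
```

The point is not the JSON details but the contract: the context travels with the data, so the receiver never has to reduce the Duck back to a guessed meaning.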
Our society is getting more complicated (“ingewikkeld” in Dutch, as the picture to the left shows) by the day. And yet we are still looking for ways to simplify things, while the total complexity will certainly not diminish. So we need to accept total complexity as a given and find clever ways to handle it. That is where abstraction comes in as one of the possible tools. It’s a favourite tool of architects and designers. With abstraction you simply hide certain complexity by abstracting it away and putting a (virtual) box, domain, layer or whatever you want to call it around it. Take the PC as an example: a highly complex device, both HW- and SW-wise, but the dirty details have been cleverly hidden by its designers. So an average user no longer needs to bother with the internal complexity. He or she can start adding extra new complexity on top of it. But abstracting also has drawbacks: we can lose oversight because we no longer understand the inner workings of our black boxes. If they work, they work fine, but if they fail, we are in trouble. People who really understand what is happening inside these insanely complex HW/SW ecosystems, accessed through the black boxes we call PCs, are getting more and more scarce. And then there is society itself: societal complexity, or the way we all interact. It is not so simple to abstract complex social interactions. So maybe in this domain chaos science can help us a little. Consider the Emergence Principle in the splendid article Living on the Edge: “capable of rising to increasingly higher levels of complexity and order all by themselves“. That sounds like a possible (scientific) way to handle societal complexity questions. But will it also work in our day-to-day life? The mummy picture originated here.
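The “box around the complexity” idea is exactly what software interfaces do. Here is a toy sketch (all names are my own illustration, not from the post) of the PC-as-black-box: the user sees one simple method, while the dirty details stay hidden behind it.

```python
class BlackBox:
    """Exposes one simple interface; hides the internal complexity."""

    def press_power_button(self) -> str:
        # The user never sees (or needs to see) these internal steps.
        self._check_voltage()
        self._load_firmware()
        return "running"

    def _check_voltage(self) -> None:
        pass  # stands in for a pile of hidden hardware detail

    def _load_firmware(self) -> None:
        pass  # stands in for a pile of hidden software detail


print(BlackBox().press_power_button())  # → running
```

The drawback named above shows up here too: if `_load_firmware` ever fails, the user who only knows `press_power_button` has no idea where to look.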
We are entering the Knowledge Era. Our society is becoming more and more complex, as are the technologies we all use. The need to invent or further develop more advanced decision-making techniques becomes more and more urgent, but how do we do that? Or, to refer to one of the key principles of Het Rijnlandse Boekje: “Wie het weet mag het zeggen”, or in English “(s)he who knows may tell how”. Let’s start from the Darkness Principle, as Jurgen Appelo so nicely explains in his article “Why We Delegate: The Darkness Principle” by referencing K.A. Richardson: “Each element in the system is ignorant of the behavior of the system as a whole [...] If each element ‘knew’ what was happening to the system as a whole, all of the complexity would have to be present in that element”. As long as we are not yet able to oversee all the complexity, we will have to distribute it somehow. And remember: the smarter we try to get by adding more complexity dimensions to our own decision-making context, the dumber we in fact get, because we then do not re-use the knowledge already available adjacent to us. So the more complex the “system” for which we are trying to solve some problem, the greater the need to delegate the decision making. I also see a parallel in the network topologies designed by Paul Baran†. From the three distinct topology styles available (centralized, decentralized and distributed) he picked the distributed style for the fundamental design of the Internet. And today it still works! It is an example of a highly self-healing, self-supporting architecture in which components (routers, switches etc.) make autonomous decisions based on the complexity of their direct environment (they do not need to know the total environment, as is the case in traditional centralized, decentralized or hierarchical decision-making styles).
So it seems so obvious: if the complexity of the total system we are trying to “control” increases, a good approach is to organize the decision-making process in a network (distributed) style and no longer in a centralized or decentralized style. Each cell in the network is then allowed to make autonomous decisions within its own “jurisdiction”. For any decision that goes beyond that, the cell needs to organize some form of collective decision making by informing its directly adjacent cells before deciding. Informing anyone further is not necessary, because the adjacent cells do that. Only one final question remains: is there an optimum network size? Or should the network size depend on the problem you are trying to solve (for example, solving a complex problem by crowdsourcing)?
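The cell-and-neighbours style above can be sketched with a small simulation. This is my own illustrative assumption, not the post’s model: ten cells in a ring, each repeatedly adopting the majority value of itself and its two adjacent cells. No cell ever sees the whole network, yet a stable network-wide outcome emerges from purely local decisions.

```python
def local_majority(cells: list[int]) -> list[int]:
    """One round: every cell adopts the majority of its 3-cell neighbourhood."""
    n = len(cells)
    return [
        1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2 else 0
        for i in range(n)
    ]


state = [1, 0, 1, 1, 0, 0, 0, 1, 1, 1]
for _ in range(5):           # a few purely local rounds...
    state = local_majority(state)
print(state)                 # → [1, 1, 1, 1, 0, 0, 0, 1, 1, 1] (stable)
```

The isolated 0s and 1s get smoothed out by their neighbourhoods, while solid blocks survive: each cell decided within its own “jurisdiction”, and informing anyone beyond the adjacent cells was never needed.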
I often wonder why it seems so difficult for society to integrate value-added activities in such a way that they benefit society as a whole, so that we could collectively and proudly say we have achieved a synergetic beauty together. Just in the way Carlotti (the famous Italian painter) originally meant it: beauty is when all parts are working together in such a way that nothing needs to be added, altered or taken away. But instead of this we often see separation where integration could have been beneficial, or integration where separation would have been more beneficial. Maybe this is just the way we are used to handling integration and separation. It often seems that we use separation to handle “separation of concerns” and integration to handle “integration of contexts”. Why not do it the other way around? Separation only to separate contexts, and integration only to integrate concerns. How would our world look if we tried this more and more?