Organizational terraforming: recursion
Apr. 11th, 2012 07:54 am

Earlier, I mentioned a pattern that seems present in many organizations, and in sufficiently complex goal-seeking entities in general: one builds a barrier or border layer that divides reality into an "inner side" and an "outer side". Inside the barrier, things are easy to accomplish; the barrier itself translates between the sides, either to change the outer side (from inner onto outer) or to maintain the inner side (from outer onto inner, canceling out unwanted changes).
Once I looked for instances of it, I found that in many places, it appears to be employed in a recursive manner. Within a large area of "managed reality" (to coin a phrase), you can find smaller areas of yet differently managed reality, and smaller areas within that, too.
Earlier, to explain "terraforming", I used the example of a domed Martian city. The inner reality is the terraformed Earthlike part, where people can live. On the outside you have the harsh and inhospitable Martian environment. The dome of the city, along with scrubbers and any other active environmental maintenance mechanisms, constitute the boundary/border layer.
But that is not the only level. The city, though pleasant compared to the outside, is not perfect. Inside the city, you have buildings (which you might consider another boundary layer, depending on how active their "translation" is). Inside the buildings, you have people, whose bodies have boundary layers (the skin). Inside the people, you have organs made of cells, and even the cells have membranes.
Where does the recursion come from? That's where I start speculating, and it may be top-down or bottom-up. Bottom-up is easier to explain. Say that you have a small structure, like a person in the domed city example. The small structure notices that it's not doing a very good job of maintaining inner reality[1], so it builds a larger structure that handles the most gross interference. This larger structure can't be as personalized or detailed as the smaller structures in concert, but it makes the job much easier for the smaller structures. In the domed city example: it's much more convenient to have a dome (and environmental processors) handle the worst temperature and atmosphere problems than to have each person wear a space suit. More generally speaking, the large structures seem to be to the smaller ones what mass production is to individualized production: capable of acting on a much larger scale, but with less variety.
As a consequence, in bottom-up arrangements, you'd expect to find some kind of signaling from the smaller structures to the larger, so that if the target for the larger structure changes, the smaller ones can notify it. Or, to be more precise: you'd expect a way for the smaller units to notify the larger that they can't themselves deal with the changes to outer reality[2]; and perhaps also something for the converse, so that the larger unit does not burden itself with something the smaller ones can handle.
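The nesting-plus-escalation idea can be made concrete with a toy sketch. Everything here is illustrative rather than from the post: the `Layer` class, its `capacity` number, and the dome/building/skin figures are all invented to show the shape of the mechanism, in which each layer absorbs what disturbance it can and signals the remainder upward to its enclosing layer.

```python
# Toy sketch: recursive boundary layers with bottom-up escalation.
# All names and numbers (Layer, capacity, the dome/building/skin values)
# are hypothetical -- they just illustrate the signaling pattern.

class Layer:
    """A boundary layer that cancels disturbances up to its capacity.

    Whatever it cannot absorb, it escalates to its enclosing (larger,
    coarser) layer -- the "smaller notifies the larger that it can't
    deal with the change" signal described above.
    """

    def __init__(self, name, capacity, parent=None):
        self.name = name
        self.capacity = capacity  # how much disturbance this layer can absorb
        self.parent = parent      # the enclosing layer, if any
        self.log = []             # (incoming, absorbed) pairs, for inspection

    def handle(self, disturbance):
        """Absorb what we can; escalate the remainder to the parent."""
        absorbed = min(disturbance, self.capacity)
        remainder = disturbance - absorbed
        self.log.append((disturbance, absorbed))
        if remainder > 0 and self.parent is not None:
            return self.parent.handle(remainder)
        return remainder  # unhandled residue at the outermost layer

# Nesting mirrors the domed-city example: skin inside building inside dome.
dome = Layer("dome", capacity=100)
building = Layer("building", capacity=10, parent=dome)
skin = Layer("skin", capacity=1, parent=building)

leftover = skin.handle(25)  # far too much for the skin alone
```

Running this, the skin absorbs 1 unit and escalates 24 to the building, which absorbs 10 and escalates 14 to the dome, which absorbs all of it. The mass-production analogy shows up in the capacities: the dome handles the gross bulk cheaply, while the inner layers handle only the fine remainder.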
A top-down approach has to differ. If you're a planner, builder, or organizer, you can pick any organizational structure you want. Some will be more effective than others, some will fail altogether, but you can initially pick any you'd like. So assuming the planners are reasonable, if the organization is organized recursively, there should be a reason for it. If I were to speculate even more wildly, I'd say I've briefly touched upon the reason above: while large structures can effect great change, they can't customize the change very well. By differentiating and specializing, the larger change can be broken down into parts, each of which can be treated differently, and then rebuilt.
So let's put on the other hat. This all sounds reasonable, but what might be wrong about it? Well, it's very clean. I have no assurance that it really is that easy. Dynamics of an organization might make the comparison to something as orderly as actual terraforming invalid. The smaller parts aren't units in a hive, and the larger parts aren't inert computerized things directed by the smaller ones from some control room, either.
(And I can't quite shake the feeling I'm building a grand model with internal consistency but where I can't really know its external consistency. It matches itself, but does it match the world?)
[1] Systems with no direct perception of their own can be said to indirectly "notice" things to the degree that they selectively encourage parts that conform to the thing in question. For instance, evolution "notices" that something has higher survival value simply in that organisms that make use of the pattern outlast those that don't.
[2] And then this begins to look an awful lot like the Viable System Model, even though I didn't set out to recreate it.