The problem with capitalism? Well, there are two: one obvious and one sneaky.
Socialism is widely understood not to work very well because you have to care as much about other people as you do about yourself (or your immediate kin group), and people aren't like that.
Capitalism requires you to prefer losing to cheating, and people aren't like that, either. This gets us to the regulated mixed economy: an attempt to make cheating expensive enough to be rare.
Various people then claim that a pure capitalist system would work better, be more productive, and so on. It would necessarily be more merciless, in a win-or-die sort of way, but supposedly reputation-labelling mechanisms and various techno-utopian information-transfer systems would identify cheaters, and then everything would work; improved tools would beat the obvious people-aren't-like-that problem.
That gets us to the sneaky problem. Communication works, is authenticated and trusted, because there's a cost to the signal. Lions "believe" antelope because the leap into the air (stotting, the behavioural ecologists call it) that says "see, healthy; not much chance of catching me, try another antelope" is expensive: if you're not a healthy antelope, it shows. The leap is visibly and obviously sub-standard, and the lion can decide this one is worth a pursuit effort. (And probably because the leap itself costs time; straight up is not away from the lion, which is a disadvantage if the lion is feeling lucky and decides to try it anyway.)
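For the scheme-minded, here's a minimal sketch of that costly-signalling logic in Python. The payoff numbers are invented for illustration; the point is only that the signal stays honest while faking it costs more than being believed is worth.

```python
# Minimal costly-signalling check: a signal stays "honest" (faking doesn't
# pay) when producing it costs a weak sender more than being believed is
# worth. All numbers are illustrative, not measured.

def signalling_pays(benefit_if_believed, signal_cost):
    """Return True if sending the signal yields a net gain."""
    return benefit_if_believed - signal_cost > 0

# A healthy antelope leaps cheaply; a sick one finds the same leap ruinous.
benefit = 10.0       # value of the lion giving up the chase
cost_healthy = 2.0   # stotting is cheap if you're fit
cost_sick = 15.0     # and prohibitively expensive if you're not

print(signalling_pays(benefit, cost_healthy))  # True: honest types signal
print(signalling_pays(benefit, cost_sick))     # False: fakers are priced out
# Springy shoes break this: once cost_sick drops below benefit, cheating
# pays, and the lion can no longer trust the leap.
```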
What makes this interesting is that, if communication can be faked—if the antelope could buy really springy shoes—the resulting system isn't stable. (This happens, and has been studied extensively, with songbirds and mate selection.)
If most of the population cheats, being honest is disproportionately advantageous. If most of the population is honest, cheating is disproportionately advantageous. The proportion of "honest" and "cheats" will reach an equilibrium, but only for a specific environment. If the environment changes—as it will—the point of equilibrium moves. Behaviour moves in response, trying to optimize individual results. (Or, rather, the behaviour that works best moves, and fewer of those following the works-best-now strategy fail to reproduce, which changes their share of the total population.)
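A toy replicator-dynamics model shows the shape of this. The payoff matrix below is hypothetical; what matters is that each strategy does best when rare, so the mix settles at an equilibrium, and changing the payoffs (the environment) moves that equilibrium.

```python
# Replicator-dynamics sketch of the honest/cheat mix. The payoff matrix is
# hypothetical; the point is the behaviour's shape, not the numbers.
# p is the fraction of cheaters in the population.

def step(p, payoff, dt=0.01):
    """One replicator step: a strategy's share grows in proportion to how
    much better than the other it's currently doing."""
    f_cheat  = payoff["CH"] * (1 - p) + payoff["CC"] * p
    f_honest = payoff["HH"] * (1 - p) + payoff["HC"] * p
    return p + dt * p * (1 - p) * (f_cheat - f_honest)

def equilibrium(payoff, p=0.1, steps=20000):
    for _ in range(steps):
        p = step(p, payoff)
    return p

# Environment 1: cheating a trusting partner pays well (CH = 5), but
# cheaters do badly against each other (CC = 0).
env1 = {"HH": 3, "HC": 1, "CH": 5, "CC": 0}
# Environment 2: the same game, but cheating is harder to get away with.
env2 = {"HH": 3, "HC": 1, "CH": 4, "CC": 0}

print(f"{equilibrium(env1):.3f}")  # ~0.667: two-thirds of the population cheats
print(f"{equilibrium(env2):.3f}")  # ~0.500: same game, new environment, new mix
```

With few cheaters, cheating's payoff beats honesty's and its share grows; with many, the reverse, which is exactly the frequency-dependence described above. Shift one payoff and the whole population re-sorts itself.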
That's with little chirpling birds, where the environment changes due to pure contingency (mostly; there's nest building, population effects on food supply, and so on, but so far as anyone can tell they're not executing planned changes to their environment). In human interactions, most of the people involved are actively trying to alter the environment to their advantage. So you can't get a stable result from a "these are the simple rules" system. (Well, assuming it doesn't crash completely and stop being a system; that's a stable result of a sort.)
What this means functionally is that all of the great simple philosophical approaches to coming up with a short set of rules don't work. They may approximate working for a little while in a particular place, but then the environment changes and things start to become a tangle of approximations and it-seemed-like-a-good-idea-at-the-times.
What can work is regularly revisited, quantified evaluation of results: is this policy, mechanism, market, or other system producing the desired results? If not, it needs to be changed.
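In code terms, that's nothing more exotic than a feedback loop. A minimal sketch, with invented metric names and thresholds:

```python
# A minimal sketch of "regularly re-visited quantified evaluation": compare
# a policy's measured results against its stated target and flag it for
# change when it drifts. Names and numbers are invented for illustration.

def review(policy_name, measurements, target, tolerance):
    """Average recent measurements; if they miss the target by more than
    the tolerance, the policy needs changing, not defending."""
    observed = sum(measurements) / len(measurements)
    if abs(observed - target) > tolerance:
        return f"{policy_name}: observed {observed:.3f} vs target {target}; revise"
    return f"{policy_name}: within tolerance; re-check next period"

# Hypothetical example: a market mechanism meant to hold fraud near 2%.
print(review("fraud-rate cap", [0.031, 0.044, 0.052],
             target=0.02, tolerance=0.01))
```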
This also shifts the argument into what the desired results are, rather than what simple philosophical system will surely produce the best results, but that's another post.
Undreamt of in your philosophy
30 January 2009