Tuesday, December 7, 2010

Human Behavior Modeling Failures

This blog entry, entitled Millenium bridge, endogeneity and risk management, features two examples of faulty modeling, in bridges and in value-at-risk (VaR) models, whose root cause is a failure to take human behavior into account. One cannot but wonder whether people were dealing with unknown unknowns when the initial modeling was performed. In a different direction, when models meet human behavior there is always the possibility that subgroups will game the system and render the intended modeling worthless. Here is an example related to ranking academics and researchers.
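To make the gaming point concrete, here is a minimal, purely hypothetical sketch in Python (the h-index and the citation counts are an illustration, not taken from the linked example): once researchers know which metric is used for ranking, a coordinated citation ring can inflate it without any change in the underlying work.

def h_index(citations):
    # Largest h such that h papers have at least h citations each.
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

honest = [9, 7, 5, 4, 2, 1, 0]    # hypothetical citation counts
gamed = [c + 3 for c in honest]   # a citation ring adds 3 citations per paper

print(h_index(honest))  # 4
print(h_index(gamed))   # 5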

4 comments:

Anonymous said...

Like your previous post about the solution of "Selling from Novosibirsk", this is an instance of the irreducible difference between Métis and Techné.
The "unknown unknowns" (and even the known unknowns) are outside the reach of modeling and rationality; this is why "rationalists" like you and Beauzamy are fascinated by these cases.
This will always elude you; the best you can do is to accept seemingly irrational approaches and swallow your pride.

P.S. Enforcing registration for commenting while you are also moderating sucks and does not give you any extra guarantees about the content or authorship (another rationality failure ;-) )

Igor said...

I said as much for the "unknown unknowns" previously so no surprise there.

However, I would beg to differ on the "known unknowns" that "refer to circumstances or outcomes that are known to be possible, but it is unknown whether or not they will be realized (no data or very low probability/extreme events)."

Your point is that we are facing an irreducible difference between Metis and Techne (Techne: abstract technical knowledge; Metis: practical, experiential knowledge), whereby one would distinguish between the modeler removed from the experience and the practitioner. So a bird's-eye, dare I say philosophical, view of the issue would make it look like we can never know. But you'd be wrong: most of the time the practitioners of these models have their hands in the practical/experiential knowledge and are striving to include all of it in these models. Deep down, the engineers working on the debris (Columbia, RMM #2) knew they were outside their database. The inability to convey that information at higher levels of the hierarchy was the faulty part. The same thing happened with the Challenger. Somehow the information degradation needed for hierarchical structures also seems to be at play in the "smoothing out" of low-probability events.

It really is not a question of fascination: in many instances of pure engineering, like the building of a bridge or a nuclear power plant, you are *required* to have some grasp of what those extreme/very-low-probability events are (as a matter of fact, most bridges these days follow guidelines set up in the wake of some of these early very-low-probability events). To do this, a substantial amount of funding is dedicated to this modeling. Just take a look at the RMM examples.

The people funding these models probably would not buy seemingly irrational approaches because, frankly, there are too many of them. You are also presupposing that most engineering does not try these irrational approaches. Again, I can think of numerous examples where seemingly weird conditions are studied; most casual observers would not know about them, only a few specialists would. But eventually the issue is how you constrain the cost of investigating a totally open set of conditions.


PS: I noticed that moderation was not strong enough to keep out some spam. I agree with you, it sucks.

Anonymous said...

Unfortunately it looks like my link to "seemingly irrational approaches" got mangled (or is this another anti-spam feature :-) ), so you didn't get the full context of my remark:

http://econlog.econlib.org/archives/2010/11/the_park_ranger_2.html#124167

You say:
most times the practitioners of these models have their hands in the practical/experiential knowledge and are striving to include all their knowledge in these models.

This is of course very commendable and obvious, but the point about the (reciprocal, BTW) irreducibility between Métis and Techné is that, no matter what and how, the efficiency of Métis cannot be shoehorned into any kind of model, because it always stays fuzzy.
This is why I say that it will elude you: you are trying to recast a foreign method into your "Techné paradigm", which has been known to be impossible since Aristotle.

Yet it IS possible to make good use of Métis: how do you think Roman engineers mostly worked, given the paucity of their actual "hard knowledge"?

Thus, no:

The people funding these models probably would not buy seemingly irrational approaches because, frankly, they are too many.

There aren't "too many irrational approaches"; this is what the work of Gerd Gigerenzer is about: evolution (surprise, surprise) has shaped some heuristics which are actually effective under irreducible uncertainty but look profoundly irrational.

What makes me talk of "fascination" are, for instance, the papers of Beauzamy about Robust Mathematical Methods for Extremely Rare Events or The information associated with a [single] sample; unfortunately, this approach will not capture the valuable information under uncertainty that you are longing for.

Igor said...

Thanks for the link! I was not quite sure what the underlining was doing.

Let me absorb some of what you said. Thanks for the feedback.

Cheers,

Igor.