Spying Black Swans....   -  pfh 2/06/09

 


The "expert errors"  caused by modeling complex systems with "normal" distribution statistics.  

Part of why financial instruments designed by advanced statistical methods failed so drastically in the great collapse was the common error of trusting "the bell curve" distribution for complex system events.   When the economy ran into abnormal changes, the real-world distribution of events turned out to have a very "fat tail" of "abnormal" outcomes that the models never allowed for.   Nassim Taleb popularized how experts get into that trap with his book The Black Swan.
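As a rough illustration of the size of that modeling gap, here is a minimal Python sketch.  The Student-t distribution is only a stand-in for a "fat tailed" market, and the thresholds are arbitrary illustrations, not anything taken from actual market data:

# Minimal sketch: how badly a "bell curve" model can understate extreme events.
# A Student-t with 3 degrees of freedom stands in for a fat-tailed process; it is
# rescaled to unit variance so both curves are compared at the same number of
# standard deviations.  All numbers are illustrative only.
import math
from scipy import stats

for k in (3, 4, 5):                                  # "k-sigma" event thresholds
    p_normal = stats.norm.sf(k)                      # chance of exceeding k sigma on the bell curve
    p_fat = stats.t.sf(k * math.sqrt(3.0), df=3)     # same k-sigma event on the fat-tailed curve
    print(f"{k}-sigma event:  normal {p_normal:.1e}   fat-tailed {p_fat:.1e}   "
          f"~{p_fat / p_normal:,.0f}x more likely")

The only point of the sketch is the last column: the further out in the tail you look, the more the bell-curve assumption understates what a fat-tailed world actually delivers.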

What Taleb seemed to understand, but did not emphasize, is that there is not really any better way to allow for divergent events within a statistical distribution than to combine the statistical method with a good way of watching out for divergences developing.   Generally and specifically, Black Swans are not random events but developmental events that catch you by surprise because they are not predicted by prior patterns.   Our world has a better rule than 'normality'; it's 'abnormality'.   Any given direction is sure to be reversed, and you need to watch to see when.   The "expert error" comes from expecting the future to be "normal" in a world that changes, which allows completely confounding events to occur unnoticed.   How to develop a prospective view is the trick, and there is a lot on my site about that.
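For what "watching out for divergences developing" might look like in the simplest possible terms, here is a small sketch.  The data, window size and thresholds are all invented for illustration, not a method described in the posts below; it just keeps asking whether recent behavior is drifting progressively further from the behavior the model was originally fitted to:

# Minimal sketch of watching for a developing divergence: compare a recent window
# of observations against the baseline period the model was built on, and flag
# when the gap keeps growing.  Every number here is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 500)                            # the "normal" period the model was fit to
drifting = rng.normal(0.0, 1.0, 300) * np.linspace(1.0, 4.0, 300)  # volatility quietly multiplying
series = np.concatenate([baseline, drifting])

sigma = baseline.std()                                          # the spread the model assumed
window = 50
prev_gap = 0.0
for end in range(window, len(series) + 1, window):
    recent = series[end - window:end]
    gap = abs(recent.std() - sigma) / sigma                     # how far recent spread has moved off baseline
    growing = gap > prev_gap * 1.2                              # crude "is the departure still growing?"
    status = "DIVERGING" if gap > 0.5 and growing else "ok"
    print(f"t={end:4d}  spread vs baseline: {gap:5.2f}  {status}")
    prev_gap = gap

Nothing about the check is sophisticated; the point is only that the question being asked is prospective (is the pattern leaving its old range, and is the departure still growing?) rather than retrospective.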

The further complication, other than death and taxes... is that it has been the design of our whole economic system to generate ever more abnormal events, putting people who expect that not to happen at a particular disadvantage (all of us, really).   It leaves us trying to regulate a system for producing ever more unusual, complex and unregulated events.   That is precisely what we were, and have always been, and still are trying to do.   Part of what changed is that it stopped being fun!   We should have looked out for that.

 


current issues


Quotes from pfh FRIAM posts 7/15/08 to 11/18/08 below in  chronological order

Well, the reliance on competence is relative to the difficulty of the task. As our world explodes with new connections and complexity, that's sort of in doubt, isn't it? Isn't Taleb's observation that when you have increasingly complex problems with increasingly 'fat tailed' distributions of correlation, then you'd better not rely on analysis? Anyone who takes that job is probably running into 'black swans', aren't they?
--
No, taking on impossible tasks is what true stupidity is about, not expertise, and the best way to hire a stupid expert is to hire people ready to do it. Come on... heading into impenetrable walls of complexity is the stupidest thing any 'expert' could possibly recommend but we've gone and hired an entire world full of so called 'experts' doing exactly that. It ain't gonna work.
--
Of course!, the reason they fooled everyone so completely was that they were designed to be completely sensible. That's what is meant by the "black swan".
--
Gosh, I don't know why it's so hard to convey, but it's important to understand.

The error in planning on things working as if they respond to a normal curve distribution (i.e., admirably designed for all the usual problems) is relying on that when there is a growing 'fat tail' of abnormal events (the black swan). That situation is sure to develop when trying to regulate a system for producing ever more unusual and complex events. That is precisely what we were, and have always been, and still are trying to do.

A bubble pops at its weakest point, not its strongest. When the 'containment' is a regulatory design, the certainty is that the breach will occur at the most critical place that no one checked. By definition it's a small error that multiplies to an irreversible point before anyone quite realizes it. The CAUSE of that is not the rare event (the pin prick). It's not the weak point in the containment that no one checked (patches of poor design or regulation or greed, etc.). It's not the size of the bubble (how big the gradient is from high to low). It's the pump.

The cause is pumping the bubble of complications in an accelerating way that guarantees people will miss the problems developing. It's operating a growth system and making the first regulatory error, accepting the learning curve of exploding complication required to stabilize it forever.

There are a lot of 'why' answers, and some of them are circular. I think the correctable reason why is the common scientific error of reading the future in the past. We plan on the future behaving like the past by trusting old stereotypes and patterns for changing things. We plan on the future being like the past EVEN for systems we design for the purpose of changing things at continually multiplying rates. The solution is to notice the cognitive dissonance... you could say, and just ask the dumb questions.

Phil Henshaw  
--
Yes, how people build bad models. Getting back to Taleb's point with the 'black swan', which he should have stated more clearly: it's always dangerous to do complex analysis with fat tailed distributions. You might be still more clear about it by making a list of behaviors that become complex and fat tailed, to watch out for. That includes things like growth and collisions and changing distributions generally that progressively diverge from their original behavior, all of which suggest that the system being modeled isn't the same anymore. Covering that up with an easy tweak of the noise factors in a model then doesn't address the problem. ;-)
--
Ken
> Phil,
> You speak of causality and "why" answers as if they "ought" be
> deterministic in some scientific paradigm. Uncle Occam cautions that
> may be one assumption too many.

[ph] Ah yes... but the value of deterministic answers does not remove the value of closely focused anticipatory questions. One needs to be careful not to waste one's time, but there really are some clear actionable signs of change that are every bit as useful as deterministic answers.

> Therefore, I sense that the underlying assumption in your observation
> is that science is "supposed" to be the search for truth from events
> of the past. I refer you to Henry Pollack's book "Uncertain Science,
> Uncertain World", and Michael Lynton's observation that the purpose of
> science is to "separate the demonstrably false from the probably true".

[ph] Yes, and of course what science "is" is whatever scientists "do", and that's a quite broad spectrum of things. Some habits scientists consistently return to, over and over, include coming up with new questions when the patterns of what we're looking at just don't fit... I'm just pointing to a better methodology for identifying that 'cognitive dissonance' in the environment that prompts the need for new questions. It's a way of scanning the environment for signs of impending systems change, and things like that. Relative to *that* point of view, the purposes of science seem to be to describe only what is changeless, and to distract us from many of the exciting new questions all around us.

> I would add that "probably true" actually means "probably not false",
> so even that logical state is approached asymptotically. This is an
> artifact (meaning a residual error or inaccuracy) from a Newtonian era
> paradigm of science that has long been shown erroneous yet still
> permeates most peoples thinking.

[ph] Well, from an anticipatory science view "probably true" means "worth checking out", so even if past data is inconclusive at the moment, the progression observed may suggest future data will become so. As we build an ever more complex and rapidly changing world, the "probably true" expectation that the system will expose ever larger errors we didn't see is, in my estimate, worth looking into.

> The problem caused by planning on things working is what I call The
> Sunny Day Paradox - usually faced by those who believe in a
> deterministic scientific paradigm. In other words, in spite of
> successful surgery, the patient died.

[ph] Well, that is also aptly pointed out with the statistical world view of the turkey in the month before Thanksgiving... He's being treated unusually well and feeling fit as a fiddle... statistically speaking, he has great prospects for a long life.

> One last point, by "pump" do you mean "probabilistic wavefunction"?

[ph] No, I don't mean equations. I mean a reciprocating process of accumulation. The simple handle pump for water in a farmyard or the spritzer of a lady's perfume are analogous in various ways to the complex processes that systems use to accumulate changes. If the spritzer worked the way a growth system does, starting with an almost imperceptible discharge and then using some of the output to multiply the output, a person patient and persistent enough to get it to operate at all is likely to be suddenly drenched with perfume and faint to the floor if they don't pay very close attention to just when to stop it. It's the relation between the doubling rate of the pump and the lag time in the control system responses that determines the range of outcomes. The big lag time to watch, of course, is the time it takes to switch it off of automatic.
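A minimal sketch of that relation between doubling rate and control lag is below. The run_pump function, trigger level and lag values are all invented for illustration; it isn't a model of any particular system, just the bare arithmetic of a pump that keeps doubling while the shutoff response lags behind:

# Minimal sketch: a "pump" that doubles its accumulation on a fixed doubling time,
# with a controller that only shuts it off some lag time after the trigger level
# is reached.  Every number here is an arbitrary illustration.
def run_pump(doubling_time: float, control_lag: float,
             trigger_level: float = 100.0, dt: float = 0.1) -> float:
    """Final accumulation of a doubling 'pump' whose shutoff lags the trigger."""
    level, t, shutoff_at = 1.0, 0.0, None
    step_growth = 2.0 ** (dt / doubling_time)     # per-step multiplier for this doubling rate
    while True:
        level *= step_growth
        t += dt
        if shutoff_at is None and level >= trigger_level:
            shutoff_at = t + control_lag          # the response only lands after the lag
        if shutoff_at is not None and t >= shutoff_at:
            return level

for lag in (0.0, 1.0, 2.0, 4.0):
    final = run_pump(doubling_time=1.0, control_lag=lag)
    print(f"control lag = {lag:3.1f} doubling times -> "
          f"overshoot ~{final / 100.0:4.1f}x the trigger level")

The last column is the whole point: the overshoot past the trigger grows as roughly 2 raised to (lag / doubling time), so the faster the pump doubles, the less lag the controls can afford before someone gets drenched.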

It's actually a quite useful question. The wonder to me is why science has not yet seemed to acknowledge that systems of change change things.

Phil
--

I'd agree Taleb does not communicate his main insights consistently, and uses fuzzy generalities that you need to "grok" to make sense of. I don't think one needs to deal with all that to get the main point, though.

The reason why *statistical analysis fails for subjects of increasing non-homogenous complexity* seems invaluable. It's a principle that might be derived simply from any number of directions, and is an important point. Our world, it appears, is making that critical error in any number of ways.

It's also, interestingly, central to the complexity theory that W M Elsasser developed in the '50s and '60s. He's an extraordinarily clear-thinking theoretical physicist/biologist who points to that as a gap in statistical mechanics that needs to be considered for any attempt to model non-homogenous systems like life.

I even find that "strategy of the gaps" remotely similar to how Rosen points out why divergent sequences can't be represented in closed systems of equations, yet are clearly part of life, and so need to be allowed for in any attempt to model such non-homogenously developing and changing systems as life.
--

So very many Berliners seem to rate the city above Paris, London and NY. I'm wondering how you might not be aware of the status so many people see in living there! Your second comment goes right to the point, though: the Jekyll & Hyde feature of feedback loops is their special beauty and mystery at the same time.

Awareness of that is also a key to watching them do it, how they switch from multiplying good to multiplying harm in the relative "blink of an eye". It's also one of their highly predictable features. The way markets can promote a growth in wealth to a point, and then beyond its point of diminishing returns promote a growth of instability... is one that would be exceptionally profitable for us to pay close attention to, for example.

The 'bitter pill' seems to be that nature changes her rules as the circumstances are altered, and we seem to define our identities in terms of which rules we believe in, and that itself is a big mistake.
--

“Expert error” and “expert confusion” are unquestionably the direct cause of our world collapsing right now, for example.    It’s expert-designed systems that are doing it.    I’ve been pointing very directly to the critical errors being made.    So has Taleb, from another approach.   It’s important not only to look for solutions, but also to see what is unsolvable.    It explains why previously trustworthy systems can go hopelessly out of control.    A usual part of expert error, of course, is the “dismissal before content” reading habit of the usual peer review process.   Is that truly as unsolvable as it seems?
--


jlh synapse9/signals