- McClelland & Stewart (October 12, 2010)
- ISBN-10: 0771035195
Book Review and Commentary by Mike Darwin
The success of cryonics, both in absolute and relative terms, arguably depends upon the accuracy and precision with which we (cryonicists) can predict the future. Our ability as seers is important in the absolute sense, because failure to accurately anticipate the requisite social, economic and scientific developments necessary for the success of cryonics would mean that we are wasting our time, energy and money – and perhaps should concentrate those assets on other strategies for survival (or, more simply, stop tilting at windmills and enjoy our lives in the here and now). Our predictive ability is also important in the relative sense, since failure to accurately foresee the short- to intermediate-term future of cryonics is very likely to erode our credibility with both the general public and the professional and scientific communities, and to result in failure to anticipate lethal problems that might otherwise have been avoided.
If you doubt that this is so, there is a simple on-line “game” that you can “play,” developed by cryonicist and computer programmer Brook Norton. It is called The Cryonics Calculator: Derivation of Cryonics Probabilities, and it allows you to enter the risk of various possible failure modes for your hypothetical (or real) cryonics organization and then see what happens to the probability that you will remain cryopreserved long enough to be revived: http://www.cryonicscalculator.com/index.php?option=com_content&view=article&id=2&Itemid=3. The results might be described as the reverse of compound interest: small risks over any short period of time become lethal risks over long periods of time. In plugging scenarios into the Cryonics Calculator, I was also reminded of how liable to failure complex systems with hundreds or thousands of critical components are, even if the per-component reliability is 99%. Spacecraft, as any Shuttle engineer will tell you, are a good example of this phenomenon.
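Both effects – compound risk in reverse, and the fragility of many-component systems – reduce to a few lines of arithmetic. The following is an illustrative calculation only, not the Cryonics Calculator’s actual model; the 1% annual risk and the 1,000-component count are assumed figures for the example:

```python
# "Reverse compound interest": a small constant annual risk of
# organizational failure compounds into a large cumulative risk.
annual_risk = 0.01          # assumed: 1% chance of failure in any given year
years = 100
p_survive = (1 - annual_risk) ** years
print(f"P(still cryopreserved after {years} years) = {p_survive:.3f}")

# A complex system with many critical components: even at 99%
# per-component reliability, 1,000 single points of failure make
# overall success vanishingly unlikely.
component_reliability = 0.99
n_components = 1000
p_system = component_reliability ** n_components
print(f"P(all {n_components} components work) = {p_system:.2e}")
```

Under these assumptions, a patient facing a 1% annual risk has only about a 37% chance of remaining cryopreserved after a century, and the 1,000-component system works only a few times in a hundred thousand tries – which is the Shuttle engineer’s point.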
So, how do we do in predicting the future? That question isn’t hard to answer in the case of most cryonicists, because there is a fairly large base of written material available to peruse in making an assessment. The answer is that we do horribly. Really horribly.
Of course, cryonicists are by no means the only people interested in predicting the future. To some extent, everybody wants to know what tomorrow holds. Economists, politicians, investors, corporations – in fact just about every human institution and enterprise – have a strong incentive to accurately predict what lies ahead. Indeed, many people make their livings doing just that; stock market analysts, commodities advisers, government intelligence analysts, and even the neighborhood fortune teller are all paid to peer into the future and tell us what lies in store. In answer to the question of how well these more conventional (and vastly more respected) seers perform, Canadian journalist Dan Gardner wrote the book Future Babble: Why Expert Predictions Fail and Why We Believe Them Anyway. Gardner’s conclusion, informed heavily by the research of Philip Tetlock, Professor of Psychology at the University of Pennsylvania, is that the experts, be they economists, petroleum experts, futurists, or political pundits, are about as accurate in forecasting the future as a group of “dart-throwing monkeys.”
In fact, on average, you’d be better off making decisions about what is to come based on a simple coin toss, or deciding that “things will stay about the same.” The first question that comes to mind is, “why are the experts (and indeed humans in general) so bad at predicting the future?” Gardner explores the answers to this question in clear, easy-to-understand terms in text that is as concise as it is fast-paced. At the most basic level, predicting the future suffers from the problems of complexity and chaos that are inherent in the real world. Want to know when “peak oil” production will occur? How hard can that be to figure out? There is clearly a finite amount of oil on the planet, it would seem we know how much is left, and it is certainly easy enough to plug in various numbers for the rate at which oil is being consumed. What’s so difficult about that?
As it turns out, even such a seemingly simple problem is enormously complex. Knowing where and how much untapped oil exists is more difficult than it seems. Technological advances can not only make formerly unreachable oil accessible, they can also make long-abandoned oil fields formerly considered “exhausted” highly productive. And, as prices rise, previously economically nonviable sources of oil, such as oil sands, become cost effective to recover. While there is no question that oil will eventually run out, there is a huge difference between its happening in the 1980s and its still not having happened 20 years later. Accuracy isn’t enough; precision is critically important as well.
As if complexity weren’t a bad enough problem, to it must be added the problem of chaos, as in chaos theory. Modern chaos theory originated with the work of mathematician and meteorologist Edward N. Lorenz, who noticed that even infinitesimal changes to the numbers used in mathematical models of weather prediction resulted in radically altered outcomes. It was Lorenz who discredited linear statistical models in meteorology and who famously asked, “Does the flap of a butterfly’s wings in Brazil set off a tornado in Texas?” The answer is, yes, it can, and thus was born the term “the butterfly effect.” Chaos powerfully limits both accuracy and precision in predicting the behavior of complex systems, of which the everyday world is certainly one.
A central point that Gardner considers is Tetlock’s study (and resulting book) Expert Political Judgment: How Good Is It? How Can We Know? (2005), which describes his 20-year-long prospective study in which 284 experts in many fields, from university professors to journalists, and with many ideological orientations, from ultra-left Marxists to libertarian free-marketeers, were asked to make 28,000 predictions about the future. Tetlock found their performance dismal: they were only slightly more accurate than chance. His study was complex, but his conclusion was brutally simple: the experts were not only worse than run-of-the-mill statistical models, they could barely eke out a tie with the proverbial dart-throwing chimps. And ideology made no difference; capitalists and Marxists performed equally poorly.
None of this should be too surprising. Lots of other authors have explored this phenomenon in detail, most notably Tetlock himself (i.e., Expert Political Judgment), and Nassim Taleb, in his superb book Fooled by Randomness (and later in The Black Swan). The useful things about Gardner’s book are that it presents these ideas in a highly readable and accessible format, and that it explores the underlying psychology and biology of why we humans are such “seer-suckers.” We just can’t help coming back for more – usually from the same “discredited” experts who misled us only a few years, months or even weeks before.
Implications for Cryonics
Recently, in preparation for another piece of writing, I hauled out my copy of science fiction author Robert Heinlein’s 1980 book, Expanded Universe. Included in the book are his essays “1950 Where To?” and “The Third Millennium Opens.” The former contains his predictions about the year 2000 made in 1950, and the latter his predictions about the year 2001, made from the vantage point of 1980. In reading these, it is impossible to conclude anything other than that Heinlein was terrible, in fact ridiculously terrible, at predicting the future. “Where To?” is 7 pages long, whereas his attempt to justify and waffle on the failed predictions he makes there runs to (a pathetic) 29 pages! Heinlein was neither stupid nor ignorant; he had access to some of the best scientific, technical and military minds of his day (as did future forecasters Herman Kahn and Robert Prehoda) and yet he failed utterly to see what lay even 20 years ahead of him, as did virtually all of the other technological seers before him.
What does this mean for cryonics? At first glance the news would seem to be all bad. It is pretty clear that we can’t predict the future, even the very near term future (5-10 years), either in terms of technological advances or man-made or natural catastrophes. The future remains as it has always been; not just to be seen “through a glass darkly,” but not to be seen at all. However, there is some more hopeful news summarized in Gardner’s book (and present in considerably greater detail in Tetlock’s superb book Expert Political Judgment), which I believe has real and useful application to cryonics. Not all seers in Tetlock’s study were equally bad. Some were truly terrible, and those were invariably the experts who informed their decision making on the basis of an ideological agenda. It did not matter if the experts were Marxists or Capitalists; to the extent their decision making was ideologically based, it was invariably less accurate. The best decision makers relied on multiple sources of data, entered the problem solving process with minimal biases, and had little or no ego investment in their conclusions. In other words, they were willing to revise their thinking, admit errors and reevaluate their conclusions as necessary. That’s a fairly uncommon trait in humans, even amongst scientists.
The Directors, Officers and in particular the Chief Executive Officers of cryonics organizations are the ones on whom the proximate responsibility rests for shepherding the organization’s members and patients into the future. In the past, no attention has been given to how these people should be selected. In large measure this has been because the pool of candidates has been vanishingly small, and all too often almost anyone willing to serve had to be accepted, for lack of any alternative. Hopefully, the future will offer more choice, and if and when it does, it would behoove us to carefully examine the background and the corpus of writing of those whom we choose to lead us. We should look for the accuracy and precision of their past decision making, as well as for the extent to which they are “calibrated” in their decision making. If a person says (on average) that he is ~80% confident his predictions will come true, and in fact ~80% of them do prove correct, then he is well calibrated. This is important, because knowing how much confidence to place in your judgment is often crucial. Overconfidence can be a killer, as can endless waffling and the inability to act.
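Calibration, as described above, is straightforward to check against a written track record. A minimal sketch, with an invented track record for illustration: group a forecaster’s predictions by stated confidence and compare each group’s hit rate to the confidence claimed.

```python
from collections import defaultdict

# Hypothetical track record: (stated confidence, did the prediction come true?)
forecasts = [
    (0.8, True), (0.8, True), (0.8, True), (0.8, True), (0.8, False),
    (0.6, True), (0.6, False), (0.6, True), (0.6, False), (0.6, True),
]

# Bucket outcomes by the confidence level the forecaster claimed.
buckets = defaultdict(list)
for confidence, came_true in forecasts:
    buckets[confidence].append(came_true)

# A well-calibrated forecaster's observed hit rate in each bucket matches
# the stated confidence (e.g. ~80% of "80% confident" predictions come true).
for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"claimed {confidence:.0%} -> observed {hit_rate:.0%} "
          f"({len(outcomes)} predictions)")
```

In this toy record the forecaster is perfectly calibrated (4 of 5 “80%” predictions and 3 of 5 “60%” predictions came true); a real corpus of writings would of course show gaps between claimed and observed rates, and the size of those gaps is what we should examine.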
Beyond the leader as seer there are, of course, many duties and qualities required. These are beyond the scope of consideration here. However, it seems a good place to start that we not empower people to decide our futures who are demonstrably terrible at predicting it. Not just ‘flip of the coin bad,’ but truly terribly bad. Such people, it turns out, are fairly easy to spot by examining the corpus of their past work and decision making. This is quite different from looking at “markers,” such as economic success. A used car salesman, a stockbroker, or a huckster of commemorative coins may be tremendously financially successful. The question that should be asked in such cases is, “At whose expense?”
A few months ago, I was scanning (digitizing) some back issues of Cryonics magazine from 1988, and I happened to notice I had written (with assistance from Steve Harris, M.D.) an article predicting the future of medicine 20 years hence, entitled “The Future of Medicine,” in Cryonics, January 1988, pp. 31-40: http://www.alcor.org/cryonics/cryonics8802.txt and in Cryonics, February 1988, pp. 10-20: http://www.alcor.org/cryonics/cryonics8803.txt. I had forgotten I’d even written the article! You can read it and see how well (or poorly) I did.
That article led me to more comprehensively review my writings over the years. The results were interesting. For those of you who write, publicly or privately, I can promise you that rereading your writings in the decades to come will be a fascinating undertaking. Socrates famously said, “The unexamined life is not worth living.” Well, maybe, but I think that just perhaps, the unexamined life may be a lot more fun.