The unprecedented circumstances of Covid-19 have made mission-driven research more urgent, mobilizing research labs to discover drugs and vaccines, compressing the usual timeline for such discoveries and bypassing standard operating rules. But what is the right balance between mission rules and human ingenuity?
On June 11, 1969, NASA announced that Apollo 11 had received the go-ahead for landing on the moon. The mission was full of unknowns, uncertainties and risks, and NASA developed “mission rules” as a system to address them.
Mission rules
“The genesis of the concept had come early in the Mercury program from veteran engineers in the NASA Space Task Group. Early on, they decided they had better formally record every one of their important thoughts and observations about the Mercury capsule, about the rocket that was to launch Mercury, about each flight control system, and every flight situation. As Chris Kraft related, ‘We noted a large number of what-ifs, too, along with what to do about them. Then we printed the whole bunch in a booklet and called it our mission rules’” [1]. Preparing the mission rules for Apollo 11 took many months, from the first version on May 16, 1969 to last-minute updates on the day of the launch. “There were rules to cover every conceivable problem and contingency. In each section, there was a summary of all the ‘Go/NoGo’ situations” (p. 193). There were so many that no one could remember all the rules.
Between rules and human judgement
“For many of the rules, there was a margin, some leeway, a little give. Some mission rules could be interpreted to leave an ultimate decision in the hands of the astronauts, but that sort of independent, on-the-spot judgement was not something that NASA managers wanted to encourage” (p. 193).
Armstrong was not very comfortable with all these predetermined procedures. Somehow, he had his own rules for landing. He remembered: “I had high respect for mission rules and how they were developed. But I would admit that if everything seemed to be going well and there was a mission rule that interrupted and said we have to do such-and-such, I would have been willing to use my commander’s prerogative on the scene and overrule the mission rule if I thought that was the safest route.” On the other side, “Chris Kraft was also bothered by the fact that he could not be sure what Armstrong might do to override mission rules and force a lunar landing to happen”. Symmetrically, “Neil worried that an overzealous flight controller would abort a good descent, based on faulty information. ‘I’m going to be in a better position to know what is happening than the people back in Houston,’ he said over and over” (pp. 194-195).
An uncertain compromise
Finally, Kraft and Armstrong agreed. “That mission rule stayed as written,” Kraft recalled. “But I could tell from Neil’s frown that he wasn’t convinced. I wondered then if he’d overrule all of us in lunar orbit and try to land without a radar system” (p. 195).
Algorithmic thinking vs. human intuition
Rules represent a collective-intelligence answer to problem-solving, usually codified in an algorithm or process operating under well-defined conditions. However, in many cases, expert intuition, individual decisions and human judgement deviate radically from established rules and routines. Rules vs. expert intuition is a typical dilemma in both decision theory and innovation theory.
Kahneman’s work weighs in favour of rules and collective thinking. “Statistical algorithms greatly outdo humans in noisy environments for two reasons: they are more likely than human judges to detect weakly valid cues and much more likely to maintain a modest level of accuracy by using such cues consistently” [2]. The broad argument of Kahneman’s research is that humans are intuitive thinkers, that human intuition is imperfect, and that human judgements and choices often deviate substantially from the predictions of economic models. Algorithms do better.
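The consistency argument can be made concrete with a small simulation. The sketch below is our own illustration, not drawn from Kahneman’s text: an outcome depends weakly on several cues plus noise; a “statistical” judge applies the same modest weights to every case, while a “human” judge uses the very same cues but with weights that wobble from case to case, plus some judgement noise. The consistent judge tracks the outcome more closely.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_cues = 10_000, 5

# Each cue is only weakly valid: the outcome is a noisy sum of the cues.
cues = rng.normal(size=(n_cases, n_cues))
outcome = cues.sum(axis=1) + rng.normal(scale=4.0, size=n_cases)

# "Statistical" judge: the same modest weights, applied identically every time.
algorithmic = cues @ np.ones(n_cues)

# "Human" judge: the same cues, but the weighting fluctuates from case to case
# (inconsistency), with extra judgement noise added on top.
wobbly_weights = 1.0 + rng.normal(scale=1.0, size=(n_cases, n_cues))
human = (cues * wobbly_weights).sum(axis=1) + rng.normal(scale=1.0, size=n_cases)

print("algorithm vs outcome:", np.corrcoef(algorithmic, outcome)[0, 1])
print("human     vs outcome:", np.corrcoef(human, outcome)[0, 1])
```

The point is not that the fixed weights are clever (here they are simply all ones), but that they are applied identically to every case, which is exactly the advantage the quotation describes.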
On the other side, in innovation theory, rules represent routines that have proven effective and that work well until they are replaced by more effective routines. Innovation can be understood as a succession of routines from less to more effective, justified by a substantial increase in performance. Boschma and Frenken argue that “competition acts as a selection device causing ‘smart’ fit routines to diffuse and ‘stupid’ unfit routines to disappear” [3]. In their view, Evolutionary Economic Geography aims to understand the spatial distribution of routines over time, analysing the creation and diffusion of new routines and the mechanisms through which ‘fitter’ routines spread. But the evolution toward more effective routines, setting aside the systemic character of innovation, is also an act of human intelligence and ingenuity. Some of the knowledge that routines store has tacit components, so routines are deeply rooted in human cognition and intuition [4]. Routines have a cognitive dimension, as they encompass the knowledge base of an organisation, but also a motivational dimension: they control intra-organisational conflict and work as ‘truces’ among conflicting interests [5].
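The selection mechanism Boschma and Frenken invoke can be caricatured with a toy replicator-dynamics model. This is our illustrative sketch, not their formalism: each routine is assigned a fixed “fitness”, and its population share grows or shrinks according to how that fitness compares with the population average.

```python
import numpy as np

# Toy replicator dynamics: the population share of each routine grows in
# proportion to how much its fitness exceeds the population average.
fitness = np.array([1.0, 1.2, 1.5])    # effectiveness of three routines
shares = np.array([0.90, 0.09, 0.01])  # the fittest routine starts rare

for step in range(60):
    mean_fitness = shares @ fitness
    shares = shares * fitness / mean_fitness  # discrete replicator update
    if step % 15 == 0:
        print(step, np.round(shares, 3))

# The 'smart' fit routine diffuses; the 'stupid' unfit ones disappear.
print("final:", np.round(shares, 3))
```

Even a routine that starts almost absent eventually dominates once its fitness advantage compounds, which is the diffusion-of-fitter-routines story in miniature.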
Is there a balance between collective intelligence, routines and rule-based thinking on the one hand and human intelligence and intuition on the other? Under what conditions does algorithmic thinking do better than human judgement, and under what conditions does human ingenuity outperform algorithms? In our digital age, the proliferation of algorithm-driven processes makes understanding this relationship of balance and conflict all the more urgent.
When things go well, rules prevail
Returning from the moon, Armstrong spoke in favour of rules and of having alternatives. Reporters asked him about the most dangerous phase of the Apollo 11 flight. Well, he said, “as in any flight, the things that give one most concern are those which have not been done previously, things that are new”, and continued, “Now, there are other things that we always concern ourselves about greatly, and those are the situations where we have no alternative method to do the job, where we have only one” (p. 199).
We end this short piece with a reference to Montgomery and Svenson on this subject [6]. They suggested that a decision in a complex situation can be described as a sequential process in which different rules and modes of information processing are used at different points in time. The decision is guided by a cognitive space in which information processing takes place. First, the decision-maker may use some system for representing the alternatives, such as a multiattribute representation system. Secondly, various procedures for processing the information can be applied, and alternatives are evaluated on the same attribute before another attribute is considered. Thirdly, the decision-maker may use various decision rules to eliminate alternatives or to find out whether one alternative is better than the others. Fourthly, more than one decision rule may be used before the final decision is made, and these decision rules might be applied sequentially in time (a schematic sketch of such a two-stage process follows below). It seems that rules are replaced only by rules, and the role of human ingenuity is to define another set of rules. Until then, the current rules prevail.
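To make that sequential picture concrete, here is a minimal sketch under our own assumptions: the alternatives, attributes, thresholds and weights are all invented for illustration, and the two rules chosen, a conjunctive screening rule followed by a weighted-additive rule, are just one plausible instance of decision rules applied in sequence.

```python
# A schematic two-stage multiattribute choice, applying decision rules in
# sequence: a conjunctive elimination rule, then a weighted-additive rule.
alternatives = {                      # hypothetical options, scored 0-10
    "A": {"price": 7, "quality": 4, "speed": 9},
    "B": {"price": 5, "quality": 8, "speed": 6},
    "C": {"price": 9, "quality": 7, "speed": 3},
}
thresholds = {"price": 4, "quality": 5, "speed": 3}  # minimum acceptable levels
weights = {"price": 0.5, "quality": 0.3, "speed": 0.2}

# Rule 1 (conjunctive): eliminate any alternative that falls below the
# threshold on some attribute.
survivors = {
    name: attrs for name, attrs in alternatives.items()
    if all(attrs[a] >= t for a, t in thresholds.items())
}

# Rule 2 (weighted additive): rank the survivors by a weighted sum of attributes.
def score(attrs):
    return sum(weights[a] * v for a, v in attrs.items())

best = max(survivors, key=lambda name: score(survivors[name]))
print("survivors:", sorted(survivors), "-> choice:", best)
```

Here alternative A is screened out on quality before any trade-offs are weighed, and the weighted sum only ever compares the survivors: two different rules, applied one after the other in time, as Montgomery and Svenson describe.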
[1] Hansen, J. R. (2018). First Man: The Life of Neil A. Armstrong. Simon and Schuster, p. 192.
[2] Kahneman, D. (2011). Thinking, Fast and Slow. Macmillan.
[3] Boschma, R. A., & Frenken, K. (2006). Why is economic geography not an evolutionary science? Towards an evolutionary economic geography. Journal of Economic Geography, 6(3), 273-302 (p. 278).
[4] Becker, M. C., Lazaric, N., Nelson, R. R., & Winter, S. G. (2005). Applying organizational routines in understanding organizational change. Industrial and Corporate Change, 14(5), 775-791.
[5] Cohendet, P., & Llerena, P. (2003). Routines and incentives: the role of communities in the firm. Industrial and Corporate Change, 12(2), 271-297.
[6] Montgomery, H., & Svenson, O. (1976). On decision rules and information processing strategies for choices among multiattribute alternatives. Scandinavian Journal of Psychology, 17(1), 283-291.