Human-Focused Turing Tests: A Framework for Judging Nudging and Techno-Social Engineering of Human Beings


FRISCHMANN B. M., Human-Focused Turing Tests: A Framework for Judging Nudging and Techno-Social Engineering of Human Beings, Cardozo Law School Legal Studies Research Paper No. 441, 22.09.2014

Type Article
Abstract This article makes two major contributions. First, it develops a methodology to investigate techno-social engineering of human beings. Many claim that technology dehumanizes, but this article is the first to develop a systematic approach to identifying when technologies dehumanize. The methodology depends on a fundamental and radical repurposing of the Turing test. The article develops an initial series of human-focused tests to examine different aspects of intelligence and distinguish humans from machines: (a) mathematical computation, (b) random number generation, (c) common sense, and (d) rationality. All four are plausible reverse Turing tests that generally could be used to distinguish humans and machines. Yet the first two do not implicate fundamental notions of what it means to be a human; the third and fourth do. When these latter two tests are passed, we have good reason to question and evaluate the humans and the techno-social environment within which they are situated.

Second, this article applies insights from the common sense and rationality tests to evaluate the ongoing behavioral law and economics project of nudging us to become rational humans. Based on decades of findings from cognitive psychologists and behavioral economists, this project has influenced academics across many disciplines and public policies around the world. There are a variety of institutional means for implementing "nudges" to improve human decision making in contexts where humans tend to act irrationally or contrary to their own welfare. Cass Sunstein defines nudges more narrowly and carefully as "low-cost, choice-preserving, behaviorally informed approaches to regulatory problems, including disclosure requirements, default rules, and simplification." These approaches tend to be transparent and more palatable. But there are other approaches, such as covert nudges like subliminal advertising. The underlying logic of nudging is to construct or modify the "choice architecture" or the environment within which humans make decisions. Yet as Lawrence Lessig made clear long ago, architecture regulates powerfully but subtly, and it can easily run roughshod over values that don’t matter to the architects. Techno-social engineering through (choice) architecture is rampant and will grow in scale and scope in the near future, and it demands close attention because of its subtle influence on both what people do and what people believe to be possible. Accordingly, this article evaluates nudging as a systematic agenda where institutional decisions about particular nudges aggregate and set a path that entails techno-social engineering of humans and society.

The article concludes with two true stories that bring these two contributions together. Neither is quite a story of dehumanization where humans become indistinguishable from machines. Rather, each is an example of an incremental step in that direction. The first concerns techno-social engineering of children’s preferences. It is the story of a simple nudge, implemented through the use of a wearable technology distributed in an elementary school for the purpose of encouraging fitness. The second concerns techno-social engineering of human emotions — the Facebook Emotional Contagion Experiment. It is not (yet) a conventional nudge, but it relies on the underlying logic of nudging. Both can be seen as steps along the same path.

Topics Business Model, Personality, Technology


FRISCHMANN deals with the manipulation of human personality that technology brings about.


Cass SUNSTEIN defines "nudging" as "low-cost, choice-preserving, behaviorally informed approaches to regulatory problems, including disclosure requirements, default rules, and simplification". FRISCHMANN specifies that these nudging tools are transparent and more palatable, but that there are also more covert nudging tools, such as subliminal advertising.

The logic underlying nudging is the construction or modification of the choice architecture, i.e. the environment within which humans make decisions.

Nudging is therefore a tool to manipulate people: "[n]udges are an example of techno-social engineering through manipulation of the choice architecture" (p. 57).

Supporters of nudging defend it by noting that mechanisms which force people to choose actively are also a way of manipulating people, and are therefore exposed to the same criticism addressed to nudging. Moreover, people often decide to act irrationally because making an informed choice requires too much effort: default rules that nudge them toward a decision that enhances their welfare may therefore be beneficial to them.

FRISCHMANN doesn't deny that there is truth in these arguments. However, in his opinion, "active choosing might provide beliefs, preferences, and even skills that enable people to exercise autonomy in many other contexts with respect to many other decision points through their lives. [...] In other words, if we are optimizing or maximizing autonomy, the static autonomy gains of preserving the freedom to choose not to choose in specific institutional contexts may be less than the dynamic autonomy gains from active choosing. Moreover, depending on the context, active choosing might provide the critical opportunity to develop the beliefs, preferences, and skills that enable people to exercise other human capabilities, ranging from the development and sharing of common sense to empathy for and conscientiousness towards others" (emphasis added). In effect, "[t]here is more to being human than autonomy" (p. 47).

For example, we can think about identity, internal autonomy, and competence: even if FRISCHMANN doesn't call them by name, his examples lead us to consider these other aspects of human personality as well. Concerning identity and internal autonomy, he says that existing preferences are not always a reliable guide, because they may depend on environmental, cultural and technological factors (e.g. advertising) (p. 46). Concerning competence, he takes the example of GPS: "when people rely on defaults or on other nudges, rather than on their own active choices, some important capacities will fail to develop or may atrophy" (p. 45).

The boiling frog soup story

"Do you know how to make frog soup? If you begin with a live frog, you cannot just drop it into a pot of boiling water because it will jump out. You need to place the frog in a kettle of room temperature water and increase the temperature of the water slowly enough that the frog doesn’t notice it’s being cooked. “As the water gradually heats up, the frog will sink into a tranquil stupor, exactly like one of us in a hot bath, and before long, with a smile on its face, it will unresistingly allow itself to be boiled to death.” The story often is used as a metaphor to comment on the difficulties we face in dealing with the gradual changes in our environment that can have drastic, irreversible consequences. The gradual change may be difficult to identify, or each incremental step, or change in temperature, may in the moment seem desirable. The end state may be difficult to anticipate or comprehend, and in the end, it may not seem to matter. After all, it doesn’t really matter (to the frog) whether the frog knows at the end that it is frog soup or whether the soup is tasty and nourishing. What matters (to the frog) is the fact that water temperature is rising slowly, how that occurs, who controls the heat, and perhaps even why?" (pp. 56-57).

FRISCHMANN tells this story to argue that we need tools for identifying and evaluating the evolving relationship between human beings and their technological environment.

Applying this reasoning to the Internet of Things, we may say that it is a technological phenomenon whose consequences are not yet foreseeable. But people probably don't need to know the end state; rather, they need tools to understand what role the IoT is playing in their lives and what influence it can have on their behaviour, their preferences, and their autonomy. There is no need, FRISCHMANN says, to reject technologies such as wearable computing, which, among other things, enable surveillance and may influence people's choices and preferences; but there is a need for open dialogue and awareness about how these technologies affect human beings (pp. 51-52).