By Maya Ganesh
As this article was being written, a debate broke out on Twitter among noted AI practitioners. The chief scientist at a California tech company tweeted, "it may be that large neural networks are slightly conscious" (https://bit.ly/3Blgo7L). A thread of responses quickly formed. How was consciousness even being defined? asked one scientist; the uncertainty around its definition vexes the entire field of cognitive science. Consciousness is impossible to approximate without significantly larger computational architectures, said another. But neural networks are only "engineered artifacts, much like a toaster," argued a third. Thus, within a single thread, a neural network was understood in distinct terms: as sentience, computing power, infrastructure, and household appliance.
This Twitter debate is more than just quibbling across groups of elite U.S. scientists. It is an instance of "terminological anxiety": experts from different disciplines converging in a struggle over the boundaries of definitions of concepts [1]. Studying how experts, such as philosophers, engineers, or tech CEOs, talk about AI reveals the influences shaping it as one kind of technology or another. Their talk, reflecting scientific and industrial research and development, legal language, marketing, popular culture, and the institutions around them, is rich in metaphors. This article compiles metaphors of AI from 13 countries and in nine languages to emphasize how cultural meanings and world-making unfold through language.
→ Metaphoric language conveys emergent phenomena that are difficult to describe because they are so new.
→ Metaphors "stick" and hence shape our perceptions even when they have outlived their purpose.
→ Describing what AI does rather than adopting metaphors of being human could bring greater clarity and accountability.
Metaphors are figures of speech that help to convey experiences or observations that are difficult to describe because they may be unfamiliar, new, emergent, or complex. Metaphors slip into common parlance, easily making us forget that they are indeed just metaphors; they can be misleading, and they can also work as self-fulfilling prophecies because they fully occupy an unfamiliar phenomenon even after it has become familiar. This is neither unequivocally good nor bad, but it requires attention: Language matters, and it is political, because it structures and brings worlds into being.
The Greek word Μεταφορά (metafora), from which we get metaphor, can be spotted on the backs of many trucks in Greece because it means, literally, "to transport." But metaphors are more than just transportation; they are also the road, the map, and the destination. As theorist Hans Blumenberg argues, absolute metaphors can "give structure to the world, representing the non-experienceable, non-apprehensible totality of the real" [2]. Science makes use of metaphor to convey new and unfamiliar scales and complexity. For instance, plate tectonics describes Earth's surface as a thin crust "floating" on a viscous mantle, as if it were crème brûlée or an egg. Earth's crust is in fact solid rock, up to some 30 miles thick, but to convey the science of earthquakes, plate tectonics renders the surface of Earth unstable.
Thus metaphoric speech contains—in the sense of being housed in as well as limited to—what the world is and how we act in it [2,3]. For instance, if you think that AI has consciousness, you might give it the care, play, and love you extend to a pet [4]. Or, if you think it is a toaster, then you will give it a steady supply of electricity, buy toastable foods, and want toaster engineers to write safe production and use standards. AI "metaphorology" allows us to recognize the performative force of AI imaginaries through the social, economic, and political vectors within them, and how these shape what we think AI is, or can be.
Consider how we think of AI as a black box, and thus, in accounting for its harms, demand transparency or explainability of algorithms rather than of the institutions that create and maintain them. The image of research struggling to survive a "winter" speaks of a crisis that requires resources. As a neural network, a computational-mathematical-visual metaphor for how the human brain is thought to work, AI is already brainlike. But AI has long been captured in terms of other complex systems: enzyme substrate, gland, ecological self-organizing system, autopoietic organism, landscape [5]. Each proposes a different way that information is communicated and regulated. Metaphors have also led AI research down meandering paths. The paper "Is Chess the Drosophila of AI?" finds that what the fruit fly, Drosophila melanogaster, was to genetics, chess was to AI: fundamental to the early research agenda and to how "intelligence" was shaped. Coined by AI researchers, this analogy reveals that the field imagined itself to produce knowledge as if it were an experimental, natural science rather than a highly applied, multidisciplinary field. This only amplified the stakes in the development of AI. But chess did not actually take AI research any further; all it did was result in computers that played chess very, very well [6].
The dichotomy of AI as potential tool or existential threat appears to transcend political geography. For example: Fiebre del oro (gold fever). A Shabbas goy. A rising sea. A train that you cannot miss. A hammer. A silver bullet. Shah Rukh Khan. A police officer. A gorilla. A superhuman. These tool-or-threat metaphors generate the leitmotif of a haunted gap between what is perceived as human and what is considered its other, such as machines, nonhuman beings, and nature. Contrary to the conventional assumption that AI metaphors must differ across political geographies and cultural contexts, this work finds that they are fairly similar across the 13 countries surveyed. This suggests the centrality of the human in the development of AI, as well as a coherent transfer of language among experts, industry, the media, and the public. What is unique is how local political economies, social contingencies, and anxieties leave their imprint on what being human means.
Here is another set of phrases about AI: algorithmic optimization, extreme spreadsheet, automated capital, automated compliance, software, autopoietic system, infinite game. This set was generated by experts in response to the question: What would you call it if you could not call it "artificial intelligence"? Interestingly, this set avoids the tool-or-threat metaphor and refers to what AI technologies actually do, or purport to do. The avoidance of metaphors is a recognition that they can end up becoming unhelpfully entangled with the thing itself. Experts surveyed here urged disentangling our understanding of AI from its metaphoric language, following the feminist epistemologist Sandra Harding, who writes that all metaphors carry "social fingerprints" [7]. Metaphors can reveal the political economies and material politics they emerge from.
Political-Geographic Metaphors of AI
The metaphor of AI as a "golden" tool is driven by business and government; in Russia, India, South Africa, Mexico, and Spain, AI holds economic promise because these countries see their populations as reserves of data, likened to a natural resource to be mined to power AI. The Russian government wants to supplement its crude oil exports with "crude data"; the metaphor "data is the new oil" remains robust. In Spain, AI is "gold fever" (fiebre del oro), suggesting a gold rush, a frenetic and speculative race. Yet, as Kate Crawford writes, the extractive industrial processes essential to building AI infrastructures come with steep costs to the planet that are obscured in dense supply chains. And in the benchmarking and calibration of AI systems, human data is extracted with little attention to its costs for marginalized people [8].
Bureaucrats and policymakers in India, Kenya, Ethiopia, and South Africa hope to capitalize on data as the fuel for AI to "reset the future" and "catch up" (with the West) [9,10]. AI is a train that you "cannot miss." These metaphors suggest a clock or meter of progress that circumvents the past and the present to miraculously catapult these economies into the future. African experts note that the possibilities and promises being invested in data and AI will never redound to Africans' benefit so long as African governments continue to function as inefficient and corrupt bureaucracies that stymie actual innovation and local development [11]. AI development, then, cannot be separated from research and practice that is continuous with digital access, freedom of expression, democracy, and social and political equity.
If AI presents an economic opportunity in one part of the world, it provokes anxieties elsewhere. For instance, a 40-year review of Der Spiegel cover stories about AI found they were inevitably about the threat of automation and robots taking away German jobs [12]. But perhaps this tells us more about the German state's anxiety about what technological change means for its contract with its citizens. MIT scholar Kate Darling cautions, "The robot is a lightning rod, a red herring": The panic associated with robots taking away jobs must be disentangled from the reality of economic contexts [13]. The economic concern in Europe and much of the industrialized North might be that the jobs are just not there to be taken away, and that the problem is not technology but rather economic stagnation, austerity, the defunding of social and public infrastructure, and debt, among other factors [14].
In China and the U.S., AI is a tool that will work for us and make us more creative, efficient, and productive; it is not a simulacrum of a human or something that will take away jobs. This is the message of Microsoft's AI advertising in the U.S., which presents AI as something that comes into its full potential when picked up and used by humans; hence the tagline of its advertising spot, "What's a hammer without someone to swing it?" Similarly, Xiaomi's digital assistant, Xiao AI, presents as either a clever companion or a dutiful domestic helper of the kind that many middle-class Chinese households would employ.
Faith and Trust. A collage of Google image search results. By Serife Wong (2019).
Japan has a more sanguine relationship with robots and AI. Astro Boy, an android boy, is a 1950s anime character who remains a source of inspiration in robotics [15]. He cares for humans, and thus the gap thought to exist between human and machine is one of poignancy rather than horror. Japan strongly resists immigration to supplement its cleaning and care workforce, so there are incentives to develop robots that fill these roles. This is also a country where the economic boom of the 1980s and 1990s was built on the backs of workers like the salaryman, dehumanized to the point of death from overwork. The blurring of human into robotic worker is not just about merging with the machine, but also about the dystopian reality of living within an economy that is itself a relentless machine [16].
Life in highly automated, algorithmically governed economies now includes multiple configurations of humans working with, being directed by, and helping algorithmic systems, including the banal captchas we complete to train computer vision systems. Though it might feel like the boundaries between human body, data, and machine are increasingly blurred, the power to control and account for this blurring rests with an elite few.
It is impossible to be fully outside of verbal language, but we can be reflexive about it as a material and political force that manifests our values in policy documents, tweets, geopolitics, and business media. For instance, Georgetown University's Center on Privacy and Technology says it will never use the words AI or machine learning because they are misleading and obfuscating marketing-speak, and thus suggests clearer uses of language [17].
The study of metaphors should give us pause to consider how the uncertainty and not-knowing about the human condition that AI provokes are being filled in. We don't actually know how most of human cognition works, which is why it is so hard to program machines to do things that scientists classify as common sense [18]. AI's metaphors work to entrench a narrow notion of humanness and intelligence, and the belief that benefits will accrue to human societies through the sped-up automation of this type of intelligence. Central to AI's telos is the replication and delimitation of "the human" in terms of vision, hearing, coordinated motor skills, proprioception, language, way-finding, affect recognition, and moral reasoning. In parallel, there is the eugenicist development of computer vision to differentiate between humans on the basis of physiognomic or biological differences, what Luke Stark and Jevan Hutson refer to as physiognomic AI [19]. If earlier metaphors of data as lakes, exhausts, or streams were stripped of humans [20], current AI research reinforces specific aspects of being human.
Human relations with other humans, and nonhumans like animals, the planet, and machines, are largely incomputable, not because they are mysterious, but because they are complex, affective, fragmentary, and difficult to frame as computationally legible formulations. To live with such illegibility is what has always made human life a profound and poetic struggle. K. Allado-McDowell is an artist who set up Google's Artists and Machine Intelligence program. Their book Pharmako-AI [21] is (according to the publisher's website) a "hallucinatory…literary intervention," created as a cyclical process of supplying and responding to prompts to a neural network called GPT-3. GPT-3 was also the subject of the tweet this essay opened with. Consider how it responds on writing, thinking, and language:
The question is not about the emergence of consciousness in artificial intelligence. The question is the emergence of experience, meaning, and reality in and as the material world… that meaning is the result of experience, at all levels of being [21].
Allado-McDowell was using GPT-3 to reflect back what artists have always committed themselves to: conveying how differently this ineffable profundity is experienced. Humans have always been in a dynamic process of exploring the meaning of their own humanity through their technologies. As we encounter AI, we might serve ourselves better by acknowledging how its metaphors both shape and limit how we make meaning of our shared yet also separate, lived, material human realities.
Conducted between January 2020 and February 2021, this research collated metaphors in popular and business media, and academic scholarship from 13 countries and in nine languages: Germany, the Anglophone North Atlantic, California, Spain, Mexico, Russia, India, Israel, South Korea, Japan, Kenya, Brazil, and China. This was made possible thanks to the support of the Rockefeller Foundation and the Berggruen Institute. Nine researchers worked on this project: Alphoncia Lyamuya (University of Massachusetts, Amherst); Jasmin Schädler (Germany); Ajay Kumar (University of Münster, Germany); Dmitry Muravyov (Higher School of Economics, Moscow); Jeong Woo Park (University of Texas at Austin); Ana Carolina de Assis Nunes (Oregon State University); Xinyi Cai (Berggruen China Center); Jenny Bourne (Berggruen Institute); and Mirto (Sub.marin.li). Thirty-four experts were interviewed for or contributed to this work between February and December 2020: Shazeda Ahmed, Urvashi Aneja, Jenny Bourne, Ranjini CR, Kate Darling, Timnit Gebru, Nils Gilman, Sandra González-Bailón, Mark Greif, Karen Hao, Mai Hassan, Christian Katzenbach, Kevin Kelly, Devangana Khokar, George Lakoff, Margaret Levi, John Markoff, Tim Maughan, Oarabile Mudongo, Keiko Nishimura, Jennifer Pan, Somya Rao, Venkatesh Rao, Noopur Raval, Tobias Rees, Tui Shaub, Matt Sheehan, Kathleen Siminyu, Brian Cantwell Smith, Denis Therien, Angeline Wairegi, Alice Walker, Serife Wong, and Mushon Zer-Aviv.
1. Seaver, N. Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society (2017); https://doi.org/10.1177/2053951717738104
2. Blumenberg, H. Paradigms for a Metaphorology. R. Savage, transl. Cornell Univ. Press, Ithaca, NY, 1960/2010.
3. Lakoff, G. and Johnson, M. Metaphors We Live By. Univ. of Chicago Press, Chicago, IL, 1980/2003, 124–130.
4. Darling, K. The New Breed: What Our History with Animals Reveals about Our Future with Robots. Henry Holt, 2021.
5. West, D.M. and Travis, L.E. From society to landscape: Alternative metaphors for artificial intelligence. AI Magazine 12, 2 (1991), 71.
6. Ensmenger, N. Is chess the Drosophila of artificial intelligence? A social history of an algorithm. Social Studies of Science 42, 1 (Feb. 2012), 5–30; http://www.jstor.org/stable/23210226
7. Harding, S.G. The Science Question in Feminism. Cornell Univ. Press, 1986.
8. Crawford, K. Atlas of AI. Yale Univ. Press, New Haven, 2021.
9. Khanna, D. and Wong, J. Harnessing AI to reset the future: How to channel AI for social good? Rockefeller Foundation. Nov. 4, 2020; https://www.rockefellerfoundation.org/blog/harnessing-ai-to-reset-the-future-how-to-channel-ai-for-social-good/
10. AI readiness index 2020. Oxford Insights; https://www.oxfordinsights.com/government-ai-readiness-index-2020
11. From interviews with Kathleen Siminyu and Angeline Wairegi (Oct. 14, 2020) and Timnit Gebru (Oct. 29, 2020).
12. From interview with Christian Katzenbach (Oct. 29, 2020).
13. From interview with Kate Darling (Dec. 7, 2020).
14. Benanav, A. A world without work? Dissent (Fall 2020); https://www.dissentmagazine.org/article/a-world-without-work
15. Sabanovic, S. Inventing Japan's 'robotics culture': The repeated assembly of science, technology, and culture in social robotics. Social Studies of Science 44, 3 (2014), 342–367.
16. Semley, J. Cyberpunk is dead. The Baffler 48 (Nov. 2019); https://thebaffler.com/salvos/cyberpunk-is-dead-semley
17. Center on Privacy and Technology at Georgetown Law. Artifice and intelligence; https://medium.com/center-on-privacy-technology/artifice-and-intelligence%C2%B9-f00da128d3cd
18. Mitchell, M. Why AI is harder than we think. arXiv:2104.12871v2, 2021; https://doi.org/10.48550/arXiv.2104.12871
19. Stark, L. and Hutson, J. Physiognomic artificial intelligence. Fordham Intellectual Property, Media & Entertainment Law Journal (Sep. 20, 2021); https://ssrn.com/abstract=3927300
20. Hwang, T. and Levy, K. 'The Cloud' and other dangerous metaphors. The Atlantic. Jan. 20, 2015; https://www.theatlantic.com/technology/archive/2015/01/the-cloud-and-other-dangerous-metaphors/384518/
21. Allado-McDowell, K. Pharmako-AI. Ignota Books, 2021; https://ignota.org/products/pharmako-ai
Maya Indira Ganesh is a feminist technology researcher and writer whose work investigates the social, cultural, and political dimensions of digital technologies such as AI. She is a senior research fellow at the Leverhulme Centre for the Future of Intelligence, and an assistant teaching professor who co-leads the M.St. in AI, Ethics and Society at the University of Cambridge, U.K. [email protected]