Clarisse de Souza
If I had a magic wand, I would use it to grant HCI a seat at the core of theoretical computer science. This might sound like a strange wish—even a presumptuous one. But I ask for it gently, with a wave of my hand, using the enchanted word Abracadabra.
I was educated as a linguist, a Chomskyan linguist. But just as I was doing my Ph.D. in the mid-1980s, the "pragmatic turn" began to radically and definitively change most linguists' perspective on language description [1]. Noam Chomsky's seminal work—at least for linguistics and computer science—was deeply rooted in the separation between "language in the abstract" and "language in use." Language description should account for general (ideally universal) formal principles and avoid the myriad idiosyncratic and contingent variations that come about when individuals use language in real situations. Those other topics were left to others: philosophers, logicians, semioticians, sociolinguists, psycholinguists, and so on. The pragmatic turn was the result of overwhelming evidence that depriving language description of an account of language use could not only lessen the object of linguistic studies, but also enable, and somehow promote, the development of independent, unrelated theories at both ends of the divide.
In the late 1980s, as a new immigrant in the Land of Computer Science, I started my research career in natural language processing, artificial intelligence, and text generation. One of the things that fascinated me at the time was how to explain the behavior of computer systems, especially expert systems. Among the hard problems that many of us wanted to solve then was how to deal with the pragmatics of explanations. What constituted an adequate explanation of an expert system's behavior? Depending on whether the explanations were targeted at knowledge engineers, experts, or non-expert end users, the language, the content, and even the purpose of explanations varied. Could computation, that is, the system's behavior that called for explanation, vary in the same way? And what did "the same" mean, anyway?
After my second encounter with the linguistic divide, I moved to human-computer interaction to learn more about the pragmatics of computing. My toolbox was loaded with concepts, models, methods, and theories from semiotics and linguistics. Semiotics helped me to keep culture, language, communication, logic, and philosophy always in sight, while linguistics—especially formal linguistics—helped me to connect with the heart of computer science.
In recent years, HCI has been challenged by the confluence of technologies such as machine learning, big data, smart objects, and the Internet of Things [2,3]. Once again, there is an urgent need to understand how intelligent systems work, not only to design productive and enjoyable interaction with them, but also, and more important, to make sure they do not turn into what Cathy O'Neil refers to as weapons of math destruction [4]. Again, it seems to be the other disciplines' job (HCI among them) to deal with the good and bad uses of clever algorithms and increasingly sophisticated representations. Theoretical computer science has apparently chosen not to be involved.
We often hear these days that computers no longer need humans to program them. They can do it themselves. They can even develop an exclusive language of their own to communicate with each other and carry on with their affairs. Interestingly, when I tell New AI enthusiasts that humans have created the programs that do this and, therefore, that there is another version of the story to be told, they usually react very strongly. Computers now are—or at least can be—completely autonomous! Think of self-driving cars! Think of bots learning by themselves how to participate in social network conversations! I cannot help but wonder whether the notions of control and predictability have, in many cases, overshadowed the notions of initiative and accountability. Who is accountable for the advent and behavior of autonomous systems?
In December 2017, as part of an ongoing collaborative study, I started to probe a sample of more than 10,000 papers in theoretical computer science, trying to see if and how they talked about humans. The first, superficial step of analysis in this study was to search the titles, abstracts, and keywords of papers published between 2007 and 2017 in two journals and three conference proceedings series. Titles, abstracts, and keywords are the best external indicators of a paper's content. Therefore, we can expect to find in them what the authors consider most important about their work. The searched terms were quite plain and generic (e.g., human(s), user(s), person(s), people). We assumed that if papers dealt with more specific human-related topics (e.g., games, e-commerce, social networks, Internet security), they would mention one of our search terms in the abstract (but this will be checked at later stages). Initial results showed that fewer than 3 percent of the sampled publications mention humans, users, or people in their title, abstract, or keywords. A colleague commented that he was not surprised. He reminded me that many factors might explain this, such as CS theoreticians' lack of interest in applied computation, or the sociocultural values and kinds of discourse that define, identify, and consolidate scientific communities. But, I ask, could the act of theorizing about computation without any reference to how computation is used somehow lessen the central object of investigation of core computer science? And, in keeping with the implicit analogy, could this promote the development of unrelated theories about the nature of computing, on the one side, and how computation is used, on the other? At whose expense?
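To give a concrete sense of the kind of search this first step involves, here is a minimal sketch in Python. It assumes paper metadata has already been gathered into records with title, abstract, and keywords fields; the field names, input format, and term matching are illustrative assumptions, not the study's actual instruments.

```python
import re

# Plain, generic human-related search terms, as described above.
TERMS = ["human", "humans", "user", "users", "person", "persons", "people"]
PATTERN = re.compile(r"\b(" + "|".join(TERMS) + r")\b", re.IGNORECASE)

def mentions_humans(paper: dict) -> bool:
    """Return True if the paper's title, abstract, or keywords mention any term."""
    text = " ".join([
        paper.get("title", ""),
        paper.get("abstract", ""),
        " ".join(paper.get("keywords", [])),
    ])
    return PATTERN.search(text) is not None

def proportion_mentioning(papers: list[dict]) -> float:
    """Fraction of papers whose front matter mentions humans at all."""
    if not papers:
        return 0.0
    return sum(1 for p in papers if mentions_humans(p)) / len(papers)

# Toy records for illustration only (the study's initial finding was
# that fewer than 3 percent of sampled publications match).
sample = [
    {"title": "Approximation bounds for scheduling", "abstract": "...", "keywords": []},
    {"title": "User-centered query interfaces", "abstract": "...", "keywords": ["users"]},
]
print(f"{proportion_mentioning(sample):.1%}")
```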
Computing and computer programming are very often associated with (if not described as) algorithmic problem solving, without mention of human signification, interpretation, and intent. Yet computing starts and ends with people. It will be déjà vu, then, if our study concludes that the theory of computer science delegates an account of its relations with people to other disciplines. I wonder if, given our privileged position among such disciplines, some of our great HCI theorists would consider the need, or the opportunity, to effect computer science's own pragmatic turn.
Together with many inspired colleagues and students, I have been investigating how human meanings are inscribed in computer technologies [5]. Our work in semiotic engineering [6] is a long-term, long-shot attempt to connect computer representations with the wide range of intent and interpretations that different people in different contexts pragmatically assign to computer behavior, in theory and in practice. Could our digging into the many layers of semiotic sediment that support and constitute current computer systems eventually touch the core of computation? Maybe. But ours is only a short chapter of a long, complex story. It would take many more chapters, by many more HCI theorists, to add a pragmatic component to the description of formal computer languages. What we now see is that, as in the early days of Chomskyan linguistics, this description is still centered only on vocabulary, syntax, and semantics. So, I wish I could wave the magic wand in my hand et voilà! The heart of computer science would be changed, and so would we all—researchers, practitioners, and users.
1. The Cambridge Handbook of Pragmatics. K. Allan and K.M. Jaszczolt, eds. Cambridge Univ. Press, Cambridge, U.K., 2012.
2. Cook, D.J. and Das, S.K. Pervasive computing at scale: Transforming the state of the art. Pervasive and Mobile Computing 8, 1 (2012), 22–35.
3. Fisher, D., DeLine, R., Czerwinski, M., and Drucker, S. Interactions with big data analytics. Interactions 19, 3 (May–June 2012), 50–59.
4. O'Neil, C. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Publishers, New York, NY, 2016.
5. de Souza, C.S., Cerqueira, R.F.G., Afonso, L.M., Brandão, R.M.R., and Ferreira, J.S.J. Software Developers as Users: Semiotic Investigations in Human-centered Software Development. Springer International, Cham, Switzerland, 2016.
6. de Souza, C.S. The Semiotic Engineering of Human-Computer Interaction. The MIT Press, Cambridge, MA, 2005.
Clarisse de Souza is a professor in the Department of Informatics of the Pontifical Catholic University of Rio de Janeiro (PUC-Rio) and a SIGCHI CHI Academy member. She is currently working at IBM Research Brazil, on sabbatical leave. She is the author of the first full-blown semiotic theory of HCI. [email protected]
Copyright held by author