Forums

XXIX.5 September–October 2022

Practical routes in the UX of AI, or sharing more beaten paths


Authors:
Henriette Cramer


The practice of the UX of AI has been going on for a while. The Dartmouth summer workshop, often credited as AI's origin, took place in 1956. ACM SIGCHI has turned 40, with CHI officially starting as a conference in 1983. This forum, focused on the intersection of UX and AI, has been running for two and a half years in ACM Interactions, a blip in a decades-long history. Human-centered machine learning has also been around for decades at least. The first issue of the journal AI & Society came out in 1987 (https://link.springer.com/journal/146/volumes-and-issues/1-1), and it included articles on "AI and accountability," "socially useful AI," and "human-centred systems" [1]. The second issue (https://link.springer.com/journal/146/volumes-and-issues/1-2) included similarly pertinent calls to monitor the culture of artificial intelligence itself, alongside topics such as technology, work, and unionization.

Insights

Whether going back or forward in time, the more we abstract, the more things may look the same across technologies.
However, preserving beaten-but-hidden paths requires space for sharing very concrete case studies on practices of communities themselves.
It's crucial to gain access to how the production of AI happens in actuality, not only for students or future tech workers but also for researchers and practitioners who want to influence current practice.

Even the online proceedings of the first International Joint Conference on Artificial Intelligence (IJCAI), the AI conference running since 1969, start with a session on "Man-Machine Symbiosis in Problem Solving." Its first (!) article discusses experiences building a programming aid that corrects spelling errors (Figure 1), distilling user feedback into a set of principles that wouldn't be out of place in an intro HCI x AI course: expressing uncertainty, enabling user control, correcting a system when it's wrong, and turning it all off. Understanding those domain and application histories requires continued investment in professional communities as well as historical deep dives that relate to current practices (a potential role for the ACM).

Figure 1. Excerpt from Warren Teitelman's "Toward a Programming Laboratory," IJCAI '69.
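As a minimal sketch, and emphatically not Teitelman's original code, those four principles translate directly into a few lines of modern Python: a hypothetical spelling aid (all names and vocabulary here are invented for illustration) that surfaces its confidence instead of silently rewriting, leaves the user in control, accepts correction when it guesses wrong, and can be switched off entirely.

import difflib

# Hypothetical command vocabulary; not from the 1969 paper.
KNOWN_COMMANDS = ["print", "define", "lambda", "setq"]

def suggest(token, enabled=True, cutoff=0.6):
    """Return a (candidate, confidence) suggestion for a possibly misspelled token."""
    if not enabled:  # principle: the aid can be turned off entirely
        return None
    matches = difflib.get_close_matches(token, KNOWN_COMMANDS, n=1, cutoff=cutoff)
    if not matches:
        return None
    confidence = difflib.SequenceMatcher(None, token, matches[0]).ratio()
    # principle: express uncertainty rather than silently rewriting the input
    return matches[0], confidence

def apply_correction(token, user_accepts):
    """Principle: the user stays in control. A suggestion is applied only on
    confirmation, and a rejection corrects the system when it's wrong."""
    suggestion = suggest(token)
    if suggestion and user_accepts:
        return suggestion[0]
    return token

print(suggest("prnit"))                               # ('print', 0.8)
print(apply_correction("prnit", user_accepts=False))  # 'prnit': user overrides

That a 1969 framing maps so cleanly onto a 2022 snippet is exactly the point: The principles, not the implementation, are what endure.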


Whether going back or forward in time, the more we abstract, the more things may look the same across technologies. For example, Koen Vermeir [2] describes the curiosity of the 17th-century courts and gentry about new technological marvels, with the magic lantern as the latest novelty. Vermeir discusses the projection demonstrations and a sociohistorical context in which "social uncertainty and anxiety were expressed in a cultural fascination for illusion." Ghostly projections were a performance of both technological possibilities and people's interpretations of them. That magic-lantern slides were later turned into a mass medium for education and entertainment brings up modern parallels: Whether there is a ghost in the machine is still a common popular diversion in 2022, and demonstrations of new tech and models remain just as fascinating to wide audiences as lantern performances once were. In 1987, Cooley [1] described the AI community as being at "a unique historical turning point, where new technology-related decisions can have a profound effect on how humans relate to each other, their work and nature." Decades later, we're still there; our decisions have had impacts and will have further impacts, likely far beyond what's expected. This means we need pragmatic advice.

Meta-articles reflecting on the state of overenthusiastic opinion pieces about the great promise of computing are not new. In 1972, after developing his chatbot Eliza, Joseph Weizenbaum [3] reflected on computer overenthusiasm, even including a 17th-century technology reference (the microscope) and how it changed what could be envisioned and understood. Extremely useful historical perspectives, such as Simone Browne's Dark Matters: On the Surveillance of Blackness [4], provide the means to recognize situations as recurrences of systems and patterns, and provide opportunities for countermeasures. The examples in Browne's book are not high-level conceptual abstractions; they are detailed, in-depth descriptions relating past and present. As Jasmine McNealy discussed in a previous article for this forum [5], informed imagination—or the lack thereof—comes before a system is built. Detail allows for that imagination.

Ruha Benjamin [6] points out that it's crucial to understand who gets to imagine and build the world and who just gets to live in it. This has a very practical implication: It's crucial to gain access to how this production happens in actuality, not only for students or future tech workers but also for researchers and practitioners who want to influence current practice. For practitioners, it means knowing whom to ask and whom to team up with for crucial input. Rather than high-level comparisons, we need concrete how-to experiences and in-depth case studies on how to get things done, or stopped.

Researchers working in the UX of AI space outside of industry, however, face a double challenge: Both AI and UX increasingly appear to be "done more easily," or rather are better resourced, in industry. The resources necessary for many current machine-learning advances are huge in both computing power and data, even as access to existing models as services is becoming easier and easier for smaller actors. The same asymmetry applies to UX, user, and design research. Large industry players have greater access to professional design teams, to A/B testing, to user behavior at scale, to product strategy and operations, to editors and data-curation teams, and to internal and external design and user-research support. There are decades of work in practice developing AI-adjacent applications that aren't easily accessible. Data curation, design, user research, and QA teams have been around from the early days of large-scale Web companies. Sharing the design or research insights around these adjacent activities and infrastructures, however, is much less incentivized, so the ML infrastructure and the work and teams crucial to practical success are much less visible. This compounds the issue that AI and UX development have usually been the domain of well-resourced organizations in specific locales.


Much UX work and many data-curation practices are unlikely to be written up and shared, and practitioners wouldn't necessarily even be aware of the relevant research venues for sharing their experiences and insights. Information about how this work fits within the context of building a business, what types of functions exist in practice, how it gets funded, what teams are involved, and how to navigate industry environments isn't always accessible. Research opportunities that could have an immediate impact in practice can easily be missed by researchers without that access; they can also face their research being dismissed, even wrongly, by practitioners who perceive a research project as missing practical nuance or as not reflecting "how things actually work."

Preserving beaten-but-hidden paths requires research on the practice of communities themselves. Helpful articles describing which traditions underlie design practices in AI are available (e.g., [7]). Histories of how communities built and used technologies, such as Black Software by Charlton McIlwain [8], similarly need investment to preserve concrete stories, strategies, and tactics. Examples of inward-looking surveys have also been around for decades. In 1987, Massimo Negrotti [9] reported on a survey of 671 AI researchers at four conferences between 1983 and 1985 (90 percent male, mostly from the U.S., EU, U.K., and Japan), noting the differential impacts of local development cultures and pondering whether this difference could be a resource. Such surveys, however, preserve only the histories of those actually present at these venues. Continuity in preserving, but also expanding, those practical and local histories is necessary (and certainly a role that the ACM could continue to play, and ramp up). This also includes recognizing that AI development and its histories have involved and affected different regions disproportionately, with those histories not equally represented or accessible. Reflexive practices, such as those Desmond Patton described in an earlier UX Meets AI forum article [10], are then necessary to enable the practical acknowledgment of organizational and personal dynamics, roles, and values.

Building on existing networks and expertise is crucial. Case studies are a great start—see, for example, the practitioner enthusiasm to present at RecSys—but they are relatively slow for urgent issues. For practitioners, it can be surprisingly difficult to connect with other experts, especially when more-advanced, urgent problems occur. Having access to examples from other projects and other companies, informed by perspectives from multiple domains, countries, and locales, along with historical grounding, can make all the difference to a practitioner who is trying to convince their organization that things can indeed be done. It also means creating spaces where practitioners can meet one another, and where academic researchers can meet practitioners outside the confines of publishing and conferences. This requires experimenting, while also protecting individuals if a collaboration does not work out; (new) incentive structures are necessary to accommodate these needs. For students and researchers, it requires knowing where, and from whom, to get practical advice on whether things actually work in practice the way they are assumed to.

All of this means a plurality of contribution types is necessary, including examples of concrete situations: routes for funding expert advice, examples of collaborations that did and did not work, cases of how teams are organized, practical product journeys, and practical strategies for convincing stakeholders of alternative directions. It also means having existing professional organizations like the ACM facilitate quick consultations. We need investment in those networks, meetings, and contacts so that practitioners and researchers can come together, especially when it urgently matters in protecting users and their rights—which means now—while also building on decades of prior work. For future editions of this forum, we'd like to invite those very practical stories. After all, being able to imagine what to do requires a community to show you others who came before.

References

1. Cooley, M. Creativity, skill and human-centred systems. In Knowledge, Skill and Artificial Intelligence. Springer, 1987, 127–137.

2. Vermeir, K. The magic of the magic lantern (1660–1700): On analogical demonstration and the visualization of the invisible. The British Journal for the History of Science 38, 2 (2005), 127–159.

3. Weizenbaum, J. On the impact of the computer on society: How does one insult a machine? Science 176, 4035 (May 1972), 609–614.

4. Browne, S. Dark Matters: On the Surveillance of Blackness. Duke Univ. Press, 2015.

5. McNealy, J. Before the algorithm, what's in the imagination? Interactions 29, 3 (May–Jun. 2022), 66–68; https://doi.org/10.1145/3529761

6. Benjamin, R. Race After Technology. Polity, 2019.

7. Auernhammer, J. Human-centered AI: The role of human-centered design research in the development of AI. Proc. of DRS 2020.

8. McIlwain, C. Black Software: The Internet and Racial Justice, from the AfroNet to Black Lives Matter. Oxford Univ. Press, 2019.

9. Negrotti, M. The AI people's way of looking at man and machine. Applied Artificial Intelligence 1, 1 (1987), 109–116.

10. Patton, D.U. Social work thinking for UX and AI design. Interactions 27, 2 (Mar.–Apr. 2020), 86–89.

Author

Henriette Cramer is a principal research scientist at Spotify, where she leads its algorithmic responsibility effort. She is particularly interested in how design and organizational (non)decisions affect algorithmic outcomes, and in pragmatic ways to translate between research and unwieldy practice. She has worked on natural-language interactions, recommendation, and ad applications at Spotify and Yahoo, and investigated location-based interactions and human-robot interaction at SICS. She holds a Ph.D. from the University of Amsterdam. [email protected]


Copyright held by author. Publication rights licensed to ACM.

