In his article, Joffrey Becker (2022) gives an overview of the three main challenges that robotics and AI pose for social anthropology. These challenges also raise questions in other scientific fields such as engineering, the philosophy of technology, or science and technology studies. For the sake of simplicity, I will, like Becker, keep the two concepts of robotics and AI together, although I think that AI is only one of many attributes robots can have. Others would be life-likeness, robustness, anthropomorphism, or resilience in hostile environments, but none of them necessarily comes together with AI.
Kasprowicz, Dawid. 2022. “Maintaining Relations and Re-Engineering the Social: A Reply to Becker’s ‘The Three Problems of Robotics and AI’.” Social Epistemology Review and Reply Collective 11 (8): 50-56. https://wp.me/p1Bfg0-74G.
🔹 The PDF of the article gives specific page numbers.
❧ Becker, Joffrey. 2022. “The Three Problems of Robots and AI.” Social Epistemology Review and Reply Collective 11 (5): 44-49. https://wp.me/p1Bfg0-74G
What we face today with the manifold types of robots and artificial agents are combinations and hybridizations of these qualities with AI. The three problems Becker lists also represent such hybrid embodiments of robotics and AI. The first problem deals with the “global transformation” of the way we think “about life, social relations or even territories” (44) due to the increasing use of robotics and AI in the future. The second addresses our techniques of interacting with life-like objects, and the third concerns organizations as a “hybrid environment where humans and objects act together” (47). I will briefly sketch the arguments Becker develops for each problem and then comment on them.
Three Categories of Problems
Generally, I agree that these three “categories of problems” represent key challenges—not only for sociological and philosophical research on robotics and AI. They question our imaginations of life, our techniques and routines of interacting with non-humans, and, not least, the way we reconfigure our world between animals and objects (48). However, there are two aspects I miss in his argumentation. They do not pose a fundamental problem to the three categories, but they could extend their significance and avoid some misunderstandings.
The first aspect addresses the way we encounter these black boxes. I agree that these new life-like “objects engage people to interpret their behavior” (46), and that these interactions need a proper description. But another crucial factor would be the very practices and instructions of building up and maintaining an interaction with the robotic object. Since communication is a complex and rather improbable event—based on notations, phonetics, or codified bodily movements—these practices of maintaining (or neglecting) interaction will become more important for keeping up social relations between persons being taken care of and their caregivers, and between employees and their non-human collaborators.
The second aspect refers to the question of the “Machines We Live In” (47). The importance of maintaining stable relationships with technical objects also becomes obvious today in the machines some of us drive. Although autonomous cars seemed to be one of the feasible technologies of the near future, full control by the autonomous vehicle is prohibited in most states (Stilgoe 2018). But what becomes more crucial in these machines than the withdrawal of the human driver are the interfaces that communicate what the “self-driving” car (or the robot) is doing. By interface, I mean here displays in autonomous cars as well as the bodies of animated objects that respond to their environment or to our actions or non-actions. Whether for autonomous cars or human-animal-machine configurations, a new conception of the interface will be necessary to analyze not only the communication between ontologically heterogeneous agents but also the new materialization of their relationships. I will come back to these two aspects but will first address the question of the life-likeness of robotic artifacts, the first topic mentioned by Becker.
The Staging of Life
What seems interesting to me in Becker’s argumentation is that he never reduces his concept of life-like objects to a simple artifact-observer relationship. Life-like objects are always part of reconfigurations between the object itself, the engineers, and the other objects that, as parts of the electrical circuit, enable biological functions like perceiving or orienting. It is in this sense that we can no longer talk of life-like objects as mere imitations of life. These reconfigurations represent what I call a stage for a new way of technical and scientific sense-making of life.
In the history of science, the experiment was often the only acknowledged method for making scientific statements about life. In the case of robots like Scratch (mentioned by Becker), the cognitive abilities and biological functions, written down in models and tested via computer simulations, perform life-like actions and initiate what Becker calls a “retroactive effect on knowledge” (45). In his study of the early cyberneticists, Andrew Pickering coined the term “ontological theater” (2009) for the performance of the early cybernetic artifacts as embodied cognitive agents. In the case of artifacts like Scratch, the theater would be the staging of life, one that also produces its own retroactive effects on scientific knowledge, just like experiments or computer simulations.
Hence, analyzing the retroactive effects of staged, life-like objects on their networks of engineers, scientists, and artifacts seems to me a promising challenge for social anthropologists. However, there is an importance of “staging life”, especially in computer science and engineering, that needs to be highlighted. In the early days of science, scientists were also skillful craftsmen (or knew some); they had to present their discoveries to an audience to be witnessed as scientists.
As one of the most famous examples, the physicist Alessandro Volta literally staged his battery in the grand salons, making the loading and unloading of electric current perceivable to his audience. Volta, who built his voltaic pile out of zinc and copper layers after his cooperation with the Italian anatomist and frog experimenter Galvani, represents a famous figure in the staging of life-likeness (via animation) before the professionalization of science. Later, in the 19th and 20th centuries, experiments were conducted in laboratories, in closed rooms behind the doors of academic institutions. To make experiments perceivable to an outside public meant, most of the time, publishing scientific articles. However, with the rise of the engineering sciences at the end of the 19th century, the making and staging of scientific knowledge via engineered objects gained a new importance, one often related to economic and political interests as well. It is not by coincidence that the rise and popularization of magical entertainment fall within this very same period (Smith 2015).
This leads me to the addition I would like to suggest concerning life-like objects. The question will not only be how machines or objects will have life-like functions and retroact on our knowledge of life. I assume that the way these machines and objects are staged also plays an important role. Whether these machines and objects appear in physical presence, on YouTube channels, or on one of the many social media platforms changes the access to what Wally Smith has called the enactment of patterns of simulation and dissimulation when it comes to “computerized life” (2015, 335). While Smith compares the techniques of staging in computer science to practices of magic, it could be a promising endeavor to look at how the techniques of staging life differ in science and engineering—if they differ at all. This could be a complementary level to the questions on manufacturing life that Becker raises.
The question of “staging life” leads me to the next point, the relation of humans to their new companions. There have been lively debates about the social status of robots (Jones 2017), about how robots could affect empathy in children (Matsuzoe and Tanaka 2012), and about their acting in human groups (Seibt 2018). Taking a less normative (what robots should do) or functional (what they could do) approach, Becker first poses the question of how to describe these new interactions with our technical counterparts. This need for finding the right language of description turns up particularly in artistic contexts, when the functional use of the object is not clear. Here, the aesthetics of the objects impose an ambivalence upon their human observers. These “objects engage people to interpret their behavior”, as Becker writes (46). They cause what he calls an “uncertain situation” (47), which holds up the dynamic of ongoing social relations between humans and their environment. In this sense, I agree with Becker that these uncertainties need to be observed and described “on a wider scale”, not only in the context of artistic object design, but more and more with regard to our daily experiences with technology.
Hence, following Becker, there are two points that would be crucial about the social behavior of these technical objects and robots. First, they initiate our beliefs and speculations about how they are animated and by whom. And second, due to the possible ambivalence in their behavior, they provoke reflections about their ontological status: are they technical objects that you leave behind in the corner when you no longer need them, like a (classical) hoover? But to tackle the challenge of describing the interactions with our new objects, I think we can add a third point, which circles around the question of how to withdraw as a subject from the interaction. This point needs a bit more clarification, but I think it is an important addition to the two mentioned by Becker. To do so, I change the scenery and take an example from industrial robotics.
At first view, industrial robots seem to be an unfitting example of objects or robots with an ambivalent status and odd behavior. The reason to produce industrial robots is to make production more efficient, timesaving, and safe. In that sense, their design is primarily guided by functional and economic criteria. In the classical scene of industrial robotics, an employee works separately from the robot, which operates in its own cell. As soon as she steps into the robot’s cell, the machine stops and the robot freezes. In the last twenty years, more and more so-called collaborative robots have been developed, which are much lighter but also equipped with more sensors, especially torque sensors. These robots can move towards their human collaborators, stopping at a short distance in front of them. They are also able to recognize when a human hand wants to guide them somewhere. This “liberation of the robots” has led to numerous studies and tests on the gestures and body postures employees should learn in order to collaborate in a fluent and non-stagnant manner. Thus, these human-machine interactions are not exhausted by the measurable efficiency of the robot in the production process. The question is also how much freedom a robot should have. Since the idea of collaborating implies bringing together skills for a common goal, the employee cannot use the robot like a technical tool and dominate the interaction, making it “ready-to-hand”, to put it in a Heideggerian way. She has to withdraw from a means-to-ends scenario into one that includes the challenge of maintaining a social relationship, of putting meaning forward into the interaction to keep the robot in the loop as the collaborator it is intended to be. Otherwise, the robot falls back from a collaborator to a machine or to a dysfunctional tool.
In her ethnomethodological description of everyday encounters with robots, the social studies of science scholar Morana Alač emphasizes the multiple possibilities of accounting for these technical objects as material things and as agents of a social relationship. In her words:
As we move away from the idea that the robot’s sociality has to be understood as intrinsic and categorial property of the robot’s inside […], the robot is a technology that can be enacted in one breath, as an agent and a thing. Each of the facets that hand-in-hand maintain each other can become the “theme” […] at specific moments in interaction while its other profile coexists concurrently or as a possibility that can itself take center stage as the encounter develops (Alač 2016, 520).
With regard to the mentioned problems of robotics and AI, it will be a challenge to describe the (bodily) techniques of how to “maintain” these dynamics of enacting the robot (as an object or an agent) and of how to be enacted by it. As I have shown with the case of industrial robotics, even very functional uses of robots have to be coordinated through formalized movements and gestures. Safety guidelines are not the only reason for this. Another reason is the ongoing multiplicity of facets the robot can take on by acting on its human collaborator or by being enacted with a certain meaning, a “theme” that is set into the world by maintaining the relationship, as Alač writes with reference to phenomenology. It is here where the attributes of the social can emerge, but they do not have to. That is why the notion of maintaining fits so well. It is not so much about the use of technology or the expectations of the human; it refers to new scenarios of human-machine interaction that imply the creation, the sustaining, and the neglecting of a social relationship to objects or machines.
In French, the bodily component is even more explicit: “maintenir” consists of the two words “main” (hand) and “tenir” (to hold). In this sense, we face new human-robot or human-object interactions, as Becker emphasizes. But I think it would also be helpful to look closer at the practices of the engineers who design interactions, and at the bodily postures and gestures that are improvised by humans entangled in these fragile situations. As Becker writes, paraphrasing Gregory Bateson, we have here “[…] a particular type of communication that deals with the very possibility to communicate”—the consequence would then be to look at the (bodily) practices and materialities that make communication a probable event, one that can maintain a social relation.
Co-Being in One Space
The factory does not only serve as a space where new human-robot interactions take place. It also represents what Becker calls a “hybrid environment where humans and objects act together”. This statement, which I share, carries an implicit notion of the interface that is not made explicit. In this context, the interface does not only refer to screens or displays made for those “sociotechnical systems”. It assembles a wide range of knowledge about production processes, architecture, safety standards, and, not least, the sensorimotor and cognitive capabilities of the agents involved. Thus, analyzing the materialization of this knowledge would require a descriptive approach that not only emphasizes the relations between humans, animals, and technical agents. It would need to highlight the knowledge embedded and transformed to organize the flows of these heterogeneous bodies. In this sense, the interface exceeds its technical meaning as the area where human and machine are brought together. More generally, it can be understood, in the words of Brandon Hookway, “as a form of relation” (Hookway 2014). On the one hand, every interface presupposes knowledge to be integrated, formalized, and finally materialized. On the other hand, interfaces, after their materialization, also become media that redistribute the agency of their agents.
I think that these redistribution processes are at the center of Becker’s interest when he talks of hybrid environments. But he does not make any statement about the interfaces in those systems. To illustrate my final argument here, I come back to my example of the collaborative robots in the factory. With the removal of the fence between the human’s and the machine’s work cells, the question arises of how to formalize the new interaction space without intimidating the employee on the one hand and risking the productivity of the robot on the other. For the design of this bodily encounter, engineers have referred to socio-anthropological knowledge, more precisely to “proxemics”, a term coined by the American anthropologist Edward T. Hall. For Hall, proxemics dealt with “the how of distance-setting” (Hall 1968, 84), the techniques that are needed to create and maintain social space in the environment. This could be the space between two bodies, or between a table and a chair in the dining room—briefly, an interface between somebody and the world that constitutes the kind of social relation she is involved in.
While Hall examined the proxemics of different cultures and developed his own notation system to compare them, engineers refer to proxemics as a technique for formalizing the bodily postures and gazes of the robot and its human collaborator. These so-called “interaction potentials” should help to find metrics that determine when and with what intensity the robot should communicate (Mead and Mataric 2017). Especially in industrial robotics, these formalizations of body techniques do not end with proxemics but extend to the design of workplaces and the architecture of factories. Norms in industrial labor had to be adjusted due to the new interaction space between the human and her collaborator; there have even been requests to reform the European standardization norms to increase the degrees of freedom for the movement of the robots (Haddadin et al. 2009). Hence, this brief detour through proxemics and the interface should emphasize the need to examine those materializations in which social relationships are negotiated (Kasprowicz 2021). As Becker points out, the encounters with robots or AI-enhanced objects reconfigure our idea of social relations. But there is also a knowledge that is remediated through the design of interfaces in these encounters.
A Short Outlook on AI
With his three “categories of problems”, Becker addresses two challenges for social anthropology, but also for other disciplines: first, the description of transformative social processes as a consequence of the participation of robotic and intelligent systems in our lifeworld; and second, the search for a heuristic ground on which to debate these transformations through robotics and AI. His three problems cover the field of the life sciences, interaction design, and the organization of the lifeworld. I have mentioned the need to highlight the staging of computerized life in science and engineering. I have then shown how social relations are maintained via bodily routines, gazing techniques, and non-verbal communication. As a consequence, these relations retroact on our knowledge, which changes the way interfaces are conceived and designed—I would even say that this retroacts on the concept of the interface in general and withdraws it from its ocularcentric semantics of facing.
With regard to current developments in AI and robotics, one could also argue that the emphasis on bodily interactions and interfaces that I have raised here already seems a bit old-fashioned. Today’s robotic systems are increasingly equipped with visual sensors and machine learning models that enable them to instantly detect objects and their contexts. With the help of neural networks and stochastic models, robotic systems classify objects and assign a “meaning” to them. Their probabilistic reasoning is based on data samples and learning algorithms that improve their performance in unknown environments with changing agents in their interactions. Programming anthropological gesture models like proxemics has less relevance in these methods, since the idea is to extract the relevant information from thousands of gesture images. However, the new machine learning methods do not negate the importance of the interface as the dynamic space where different species meet to negotiate their relationships. No matter how “smart” the machine will act, the contingency of communication between two bodies and the need for maintaining relations will continue.
Dawid Kasprowicz, email@example.com, RWTH Aachen University.
Alač, Morana. 2016. “Social Robots: Things or Agents?” AI & Society 31 (4): 519-535.
Becker, Joffrey. 2022. “The Three Problems of Robots and AI.” Social Epistemology Review and Reply Collective 11 (5): 44-49.
Haddadin, Sami et al. 2009. “Requirements for Safe Robots: Measurements, Analysis, and New Insights.” The International Journal of Robotics Research 28 (11-12): 1507-1527.
Hall, Edward T. 1968. “Proxemics.” Current Anthropology 9 (2-3): 83-108.
Hookway, Brandon. 2014. Interface. Cambridge (MA): MIT Press.
Jones, Raya. 2017. “What Makes a Robot ‘Social’?” Social Studies of Science 47 (4): 556-579.
Kasprowicz, Dawid. 2021. “New Labor, Old Questions: Practices of Collaboration with Robots.” In Connect and Divide: The Practice Turn in Media Studies, edited by Erhard Schüttpelz, Ulrike Bergermann, Monika Dommann, Jeremy Stolow, and Nadine Taha, 247-261. Zurich/Berlin: Diaphanes.
Matsuzoe, Shizuko and Fumihide Tanaka. 2012. “How Smartly Should Robots Behave? Comparative Investigation on the Learning Ability of a Care-Receiving Robot.” In The 21st IEEE International Symposium on Robot and Human Interactive Communication, 339-344. Paris.
Mead, Ross and Maja J. Mataric. 2017. “Autonomous Human-Robot Proxemics: Socially Aware Navigation Based on Interaction Potential.” Autonomous Robots 41: 1189-1201.
Pickering, Andrew. 2009. The Cybernetic Brain: Sketches of Another Future. Chicago: University of Chicago Press.
Seibt, Johanna. 2018. “Classifying Forms and Modes of Co-Working in the Ontology of Asymmetric Social Interactions (OASIS).” In Envisioning Robots in Society, edited by Mark Coeckelbergh, 133-146. Amsterdam: IOS Press.
Smith, Wally. 2015. “Technologies of Stage Magic: Simulation and Dissimulation.” Social Studies of Science 45 (3): 319-343.
Stilgoe, Jack. 2018. “Machine Learning, Social Learning and the Governance of Self-Driving Cars.” Social Studies of Science 48 (1): 25-56.