Becoming Gestalt: Human and Algorithmic Intelligence—Review of Machine Habitus, Adam Riggio

Books like Massimo Airoldi’s Machine Habitus deliver radical, transformative ideas in accessible, professional prose. Airoldi’s ideas are radical because he advocates shifting the fundamental categories of what sociologists must treat as the objects of their analysis in understanding modern society. His analysis of the power and functions of machine learning algorithms in our society and economy shows clearly that such algorithms are themselves social actors, and that our sociology must treat them as such.

Image credit: Polity Press

Article Citation:

Riggio, Adam. 2022. “Becoming Gestalt: Human and Algorithmic Intelligence—Review of Machine Habitus.” Social Epistemology Review and Reply Collective 11 (5): 1-11. https://wp.me/p1Bfg0-6LG.

🔹 The PDF of the article gives specific page numbers.

Machine Habitus: Toward a Sociology of Algorithms
Massimo Airoldi
Polity Press, 2022
200 pp.

Airoldi’s thesis and its justification will provoke controversy because of ingrained biases among most professional sociologists: machine learning algorithms are software programs, not humans, and sociology is the study of humanity. But keeping the study of algorithms within the discipline of software engineering will not reveal the effects these programs have on the social development of communities throughout the world. Engineers focus on technical and application problems because that is how their discipline engages in practice. Sociologists and their fellow travellers in other disciplines can engage with the social, political, and moral problems that engineered software creates when it is put to work in a population without concern for its social effects.

My review discusses and analyses what I think are Airoldi’s most important ideas, and an especially important proposal for future directions in research on the social assemblage we make with machine learning algorithms.

What Are the Social Effects of Machine Algorithms?

Airoldi considers machine learning algorithms to be social actors because of what they actually do in human society. Such algorithms do not act in strict, deterministic ways. They respond to human actions in the same creative, probabilistic ways by which humans themselves adjust to other people’s actions. A machine learning system that “learns from patterns in human-generated data, and autonomously manipulates human language, knowledge and relations, is more than a machine. It is a social agent: a participant in society,” just as we participate in the machines.[1]

Now that so much of industrialized life is permeated with so many kinds of visible and obscured social platforms, we participate in a great many machines as they guide prominent processes in our lives. Algorithms operate chatbots, autonomous vehicles, recommendation engines for marketing, pattern recognition of all kinds including facial recognition, machine text and sound translation, and computer-generated images. They govern and monitor online media and communication consumption, workplace routines and schedules, public administration, police work and crime recidivism tracking, as well as straightforward surveillance. Because these activities constantly generate new data from human interactions with them, there is a continuing stream of information to which algorithms adjust their activity. This is what makes machine learning algorithms social agents: they are shaped through feedback loops in social interactions with humans, social influences that, over their historical development, change how they in turn influence us.

This aspect of how companies and governments use machine learning algorithms in our societies reveals the central danger of such software: information asymmetry. “Artificial agents have more information on users’ behaviour than users have on the agents’ algorithmic functioning.”[2] Algorithms mediate many of the activities and processes through which humans interact with each other, giving this software significant control over how our lives unfold. Because algorithms govern the software systems that a great many individuals use, they collect far more information about human users’ activities than almost any human collects about the algorithms. And because data management software synthesizes enormous databases describing human-machine interactions far faster than any person can, algorithms act and adjust themselves with a speed and comprehensiveness that no human can equal.

Given their power in our society, economy, and daily life, as well as their power over us in our everyday lives, understanding fully how algorithms condition our lives is paramount to figuring out how to build some equity in power between us and our software systems.

What Is Machine Habitus?

Understanding how a machine can be a social agent requires demonstrating that such machines participate in feedback loops of socialization. Airoldi orients his thinking around Pierre Bourdieu’s influential concept of habitus. If the processes that form the habitus operate on machine learning algorithms, then such software programs are social actors alongside humans. Of course, different kinds of habitus are at work, as Airoldi carefully describes. What matters is that human material and machine learning algorithm material are both pliable to socialization and cultural processes.

Airoldi follows the clearest and most useful interpretation of Bourdieu’s concept. Bourdieu’s habitus is the habituated approach to daily life, relationships, social and moral expectations, and the attitudes constituting our personal character that we learn through socialization into particular social classes, standpoints, and places. It is the human tendency to fall into regularities in behaviour and thinking, habits on which we rarely think to reflect, and which we rarely critique or change without severe disruption to the ordinary flow of our lives. Habitus captures the idea that “[s]ocially conditioned experiences are interiorized by individuals as stable cultural schemas, and that these classifying and perceptual structures generate practical action in pre-reflexive ways.”[3] The concept analyses how cultural interaction and socialization influence the content of those regularities.

Airoldi himself adds caveats to his use of the word with some regularity throughout the book, defending his use of the habitus concept against charges that it implies an individual’s unbreakable or necessary conformity to cultural socialization.[4] Such a critique ignores the contingent nature of habituation itself. Yes, all of us adults are influenced by the cultures where we grew up, and adapt ourselves socially to the cultures where we may now live.

That influence is profound, but a culture’s habitus does not produce carbon copies of itself. Each individual is a new variation assembled from cultural influences. “Individual behaviour is at once ‘genetically’ conditioned by embodied social structures guiding action (i.e. the habitus) and contextually shaped by new experiences and circumstances.”[5] Each feedback loop between an individual and the events of cultural influence, from church pews to punk shows to pandemics, varies how the influence occurs and what our reaction to it will create.

Culture Informs Code and Code Informs Culture

Much of Airoldi’s analysis of how unfair biases and other problematic applications become taken for granted in machine learning algorithms looks at the initial conditions of algorithms: how they are designed and trained. Especially regarding questions of bias, understanding how machine learning algorithms habituate mistakes and injustices is a matter of looking for what he calls the culture in the code. Algorithmic software is not, as its developers’ marketing and public relations materials often declare, a neutral arbiter in human society. Biases and cognitive blind spots enter algorithmic design when the human software engineers who build these systems program their software in ways that reflect the blind spots of their own social positions and personalities.

However, I think a more profound difference between humans and algorithms is our purpose. Algorithms operate so single-mindedly because they are literally single-minded: each such software program is designed for a specific purpose. These purposes are driven by the business priorities of the companies that designed or commissioned the software. Humans, in contrast, have no inherent purpose, and regularly re-evaluate our priorities and choices as we adapt to changing life circumstances.

Machine algorithms have no such ability because their programmers build them for a purpose. Such purposes include the commercial priorities of information management for the corporations who develop and use machine learning algorithms. Humans’ lack of a programmed purpose leaves us vulnerable to such algorithms’ influences. The software’s single-mindedness can overcome our own intentions because humans are existentially changeable, while an algorithm’s relentlessness can overpower any human’s moments of doubt or uncertainty.

Since it is so easy for humans to play the passive partner before the actions of a machine learning algorithm, the software’s activity drives the social and personal priorities of the human user. In aggregate, human users in the millions will tend to follow the patterns and suggestions of the algorithms they interact with. The algorithm will never give up on you, and all people eventually tire.

The Human Gods in the Machines

Airoldi sometimes plays with cultural images to hammer home his rhetorical point, and one particularly catchy image is that of the “Deus in Machina,” the god in the machine. This image carries the core message of his account of how machine learning algorithms are built. The humans who build the machine algorithms are far from gods; they’re people in an industry with well-known problems regarding diversity, misogyny, and authoritarianism. But human software programmers shape the algorithms that drive and direct so much of people’s desires and consumption: the human programmers shape the algorithm as God molds men.

Humans set the computational parameters of machine learning systems before they start operating, when the algorithms are still being trained to detect what their employers want them to detect. Humans fix the operational settings of the algorithms, their most basic functions. All that work is conditioned by social and cultural influences on how people make decisions in organizational and business contexts.

Any biases in the initial conditions of an algorithm’s operation are much harder to correct than later errors, because the biases were part of the action parameters for working out any inductive reasoning processes at all. The same holds for human psychological development, as for any process that develops through adjustment to feedback. Just as lifelong biases are deeply ingrained in humans, a starting dataset and an operating life in a human environment deeply ingrain biases in an algorithm’s inductive analyses.

Humans find it difficult to overcome their initial socialization, but it is possible with great individual effort and self-criticism after jarring feedback from experience. We can find such examples in young adults who turn away from the values in which they were raised. Consider Stephen Miller, the son of liberal California Democrats, who became an architect of the Trump Administration’s mass imprisonment of child refugees. Consider also Derek Black, former Stormfront administrator and son of KKK leader Don Black, who is now an anti-racism activist. However, these divergent figures are a minority, and most people carry on new iterations of the same values and priorities in which they were raised.

Airoldi catalogues the analogous processes of development which machine learning algorithms experience. The initial programming of an algorithm and its early training in data set interpretation are the algorithm’s “primary socialization process,” akin to infancy. Testers and application designers train the algorithms in standardized ways, but their particular choices and decisions in algorithmic training are as idiosyncratic as they are as individuals. These initial design, testing, and practice phases of an algorithm are where any biases from human engineers and programmers influence its operations. As Airoldi says, in part quoting Bourdieu, “The situated entanglement between a socialized machine and its users ‘owes its form to the objective structures which have produced the dispositions of the interacting agents.’”[6]

When the algorithms start working in the world with real-time data input, their biased interpretation frameworks then inform their own biased judgements. Because the purpose of an algorithm is part of its foundational programming, those biases become much more difficult to overcome than a human’s. The software’s relentlessness blocks many possibilities for revision and reform.

Learning Without History is Learning Without Conscience

Once algorithms are working in the world, their judgements and acts have real results. But to work effectively, the software needs spontaneous human activity to generate big enough data sets. This is a main reason why so many reCAPTCHA tests use street imagery: humans filling out the tests are annotating digital images with tags that algorithms will use to drive autonomous vehicles. Having humans tag traffic signs and signals helps the machines recognize road landmarks. Increasingly, such data annotation is a large-scale, labour-intensive part of algorithm development, another hidden zone of labour exploitation behind the sleek beauty of the high-tech industry. Yet data tracking remains fundamentally incomplete in capturing all relevant information about human action.

Training an algorithm to adjust itself adequately to human concerns faces two inevitable shortcomings: a lack of imagination and the absence of an ethical understanding of history. Algorithms cannot imagine possibilities beyond their existing information about the world. They know only the datasets on which they have trained, not any of the broader history and cultural environment that produced those datasets. In this analysis, Airoldi has uncovered another difference between algorithms and humans, a shortcoming of the software that entrenches social injustice.

Consider a real-life example Airoldi discusses: an algorithm that sorted applications for management and software engineering positions at Amazon. The company’s human resources department used an algorithm trained on datasets of job applications Amazon had received over the previous decade. Because of pre-existing social inequities in the software industry, an unrepresentative majority of those applicants were men. Because the algorithm had only a dataset dominated by men’s applications, it operated with a bias toward male applicants as the more qualified, excluding candidates along sexist lines.

If an algorithm is fed entirely with problematically biased data, it will have no knowledge of possible alternatives against which to search for bias in those data. Against a bias that affects the whole of its data experience, an algorithm can have no control variable. Consider this thought experiment, in the light of Airoldi’s analysis. Imagine that a police department is under review for over-policing lower-income black-majority neighbourhoods. An algorithm is tasked with analysing what proper policing practice should be. That analysis draws only on datasets from ten years of patrol, incident, and arrest records, which themselves reflect an unfair intensity of policing in lower-income black-majority neighbourhoods. Having only ever known such communities to be policed at high intensity, the algorithm recommends policing them at high intensity.
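To make the circularity of the thought experiment concrete, here is a minimal Python sketch. The neighbourhood names and counts are invented, and the toy allocator stands in for whatever model a department might actually deploy; the point is only that a system inducing from biased records ends up recommending that the bias be reproduced.

```python
# A toy allocator with invented data (my illustration, not Airoldi's code).
# Recorded incidents track where patrols were sent in the past, not underlying
# behaviour, so inducing from the records alone reproduces the over-policing.

historical_incidents = {
    "Riverside (lower-income, Black-majority)": 9_200,   # heavily patrolled for a decade
    "Hillcrest (higher-income, white-majority)": 1_100,  # lightly patrolled
    "Midtown": 2_400,
}

def allocate_patrols(records: dict[str, int], total_patrol_hours: int) -> dict[str, int]:
    """Recommend next year's patrol hours in proportion to past recorded incidents.

    With no knowledge of how the records were produced, the allocator has no
    control variable against the bias baked into them.
    """
    total_recorded = sum(records.values())
    return {
        neighbourhood: round(total_patrol_hours * count / total_recorded)
        for neighbourhood, count in records.items()
    }

allocation = allocate_patrols(historical_incidents, total_patrol_hours=10_000)
for neighbourhood, hours in allocation.items():
    print(f"{neighbourhood}: {hours} recommended patrol hours")
# Society in, society out: the already over-policed neighbourhood gets the
# largest recommendation, which will generate even more records next year.
```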

It was not garbage in, garbage out, but society in, society out. “The ‘garbage’ that produces data biases such as those in the examples above is nothing but society – a bundle of asymmetric social relations, culture, and practices transformed into machine-readable digital traces.”[7] Datasets generated from unjustly biased real-world human behaviour will replicate the unjust biases that exist there, then guide our experience to follow the established patterns of human behaviour. There is no room in machine learning algorithms’ feedback systems to adjust the behaviours that constituted those established patterns. Being without inherent purpose, humans can undo the socialization that constituted our current personalities, difficult though it may be. Machine learning algorithms find it much more difficult to interrogate their own activities because of their cognitive limitations: an inability to imagine, under their own power, beyond the possibilities their human trainers have fed them.

Personalization Without Individuality

The first half of Machine Habitus describes how humans condition the range of social action that machine learning algorithms take when they operate. The book’s second half examines how the possibilities and probabilities of human action change through feedback with algorithms. Understanding this requires knowing the frameworks through which an algorithm encounters its human users.

Algorithmic interaction is designed to personalize humans’ user experience based on the automatic analysis of data about a user’s choices and preferences. But because “for machines, individuals exist only as ever-changing collections of data points,” each individual is only mapped as a relatively changeable intersection of several vectors of variation.[8] Identifying where a person’s present preferences lie in a multidimensional vector space is not enough to singularize their individuality.
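To picture that vector-space mapping, here is a minimal Python sketch of a similarity-based recommender. The artists’ feature dimensions and scores are invented for illustration; this is a reader’s reconstruction of the general technique, not Airoldi’s example or any streaming platform’s actual system.

```python
# A minimal similarity-based recommender (my sketch; invented feature scores,
# not any platform's actual system). Listeners and artists exist here only as
# points in a small "taste space".
import math

ARTIST_FEATURES = {
    # dimensions: [heaviness, electronics, folk] on a 0-1 scale (hypothetical)
    "Cattle Decapitation": [0.95, 0.10, 0.05],
    "Carnifex":            [0.90, 0.05, 0.05],
    "Angel Olsen":         [0.10, 0.15, 0.90],
    "Cassandra Jenkins":   [0.05, 0.20, 0.85],
    "Arca":                [0.30, 0.95, 0.10],
}

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 means the two taste vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def recommend(listening_history: list[str], k: int = 1) -> list[str]:
    """Average the history into one taste vector, then return the k unheard
    artists closest to it: more of the same, by construction."""
    vectors = [ARTIST_FEATURES[name] for name in listening_history]
    taste = [sum(dim) / len(vectors) for dim in zip(*vectors)]
    candidates = [a for a in ARTIST_FEATURES if a not in listening_history]
    return sorted(candidates,
                  key=lambda a: cosine(taste, ARTIST_FEATURES[a]),
                  reverse=True)[:k]

print(recommend(["Cattle Decapitation"]))  # -> ['Carnifex']
```

In a system like this the listener figures only as an averaged taste vector, an ever-changing collection of data points and nothing more.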

You can see how algorithms narrow user choice especially clearly in recommendation engines. Because algorithmic recommendation tends to suggest content similar to what was viewed already, it tends to reinforce expressed user taste. Airoldi demonstrates this with an experiment in how YouTube video recommendation reinforced initially expressed preferences for Italian hip-hop. My own experience as an eccentric music lover bears this out as well. Although my anecdotal evidence lacks the rigour of Airoldi’s experiment, it offers another perspective on how algorithmic reinforcement of previous choices narrows exposure to difference.

Consider these examples of where my music player’s recommendations guide me, based on what I’m listening to at the time. Playing Cattle Decapitation leads to Carnifex because these are both aggressive death metal bands. Playing Jinjer leads to Spiritbox because these are both metalcore bands with female singers. Listening to folk-influenced guitarist Angel Olsen leads to the folk-influenced guitarist/pianist Cassandra Jenkins. Playing contemporary street rapper Freddie Gibbs results in recommendations for up-and-coming street rapper Boldy James, both from the US Midwest. Listening to punkish female rapper Junglepussy causes recommendations for punkish female rapper Bbymutha.

Algorithms double down on 1980s goth rock: listening to a Cure song auto-plays The Psychedelic Furs next. Listening to politically progressive country-rock singer Sturgill Simpson leads to the similarly oriented Jason Isbell. Listening to Arca leads to Lotic, both experimental electronic musicians, also both immigrant trans women. Playing some Kraftwerk causes recommendations to play Can. Listening to The War on Drugs makes the algorithm recommend The National, both keyboard-heavy rock for middle-aged men with depression issues.

What do we learn from this anecdote, other than that I am an enormous music nerd who is probably cooler than you and has depression issues? If any of us were to follow only the recommendations of algorithms whose purpose is to keep us streaming content through a particular app, our tastes and preferences would narrow to the limits of a single or small number of related genres or categories.

But the algorithms were designed this way because humans, overall, have a tendency to continue consuming media similar to what we are already consuming. An algorithm’s initial design begins with the human psychological tendency to remain with the similar, and amplifies it. The machine’s habitus conforms to the worst vices of the human habitus. Airoldi again quotes Bourdieu: “The most improbable practices are excluded, either totally, without examination, as unthinkable, or at the cost of the double negation which induces agents to make a virtue of necessity, that is, to refuse what is anyway refused and to love the inevitable.”[9]

Human-Like Actions Without Human Ethics

Machine Habitus identifies and analyses the actions that machine learning algorithms take on their human users for commercial purposes. Perhaps with an eye to classroom teaching, Airoldi organizes algorithm-human interactions in a handy grid, based on the degree of informational asymmetry between the software and its user, and on how closely the software’s actions align with what its user of the moment actually wants.

When there is a major informational asymmetry between algorithm software and human user, and the software’s responses and actions are strongly aligned with its user’s desires, Airoldi describes this relationship as assisting. Algorithms help users sort information that is too complex for a human to handle alone. Because that information is relatively accurate and the human users give the algorithm positive feedback about its help, the algorithm improves its knowledge and practical powers in society and the world.

The relationship Airoldi calls nudging is a little different, because information asymmetry remains high while the algorithm is not aligned with its user’s preferences. This type of algorithm-human interaction reveals most obviously the commercial priorities of the software’s application: the software sets the priorities for people’s actions and the people must follow it, whether or not they enjoy its directives. This is how human workers interact with algorithms that direct their actions on the job, as in algorithmically-governed businesses like a fleet of rideshare drivers or warehouse workers.

Airoldi’s classificatory grid helps us discover emancipatory paths forward for humans and algorithms together, even in our solidly dystopian contemporary culture. When the great machine of human action and algorithmic intervention runs smoothly, we are the relatively passive partner. But algorithms are not always accurate. An Uber Eats deliverer is stressed beyond his breaking point by hectic task notices; a Spotify listener is annoyed by the samey music pouring out of her app. These are moments of what Airoldi calls misunderstanding, when a human has nearly as much accurate information as the software, or more. An algorithm’s errors can alert you to its imperfection, or to its mismatch with you and your desires. From there, you can better understand the algorithm’s decision-making processes and develop your own interventions in its actions.

When people begin to intervene in the development and adjustment of an algorithm, or even design their own, we and the software are collaborating. This relationship considerably reduces the informational asymmetry of algorithms over their human users while strengthening an algorithm’s alignment with its users’ purposes. Together, these four types of interaction frame the possibility space for how individuals engage with algorithm-governed platforms.
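Restated compactly, and strictly as my own summary of the review’s reading rather than Airoldi’s code, the grid maps onto a simple function of its two dimensions:

```python
# My own compact restatement of the review's reading of Airoldi's grid,
# not code from the book: two dimensions yield four relation types.

def classify_relation(high_asymmetry: bool, aligned_with_user: bool) -> str:
    """Map informational asymmetry and user alignment onto the four relations."""
    if high_asymmetry and aligned_with_user:
        return "assisting"        # opaque but helpful: sorting overwhelming information
    if high_asymmetry and not aligned_with_user:
        return "nudging"          # opaque and directive: gig-work task allocation
    if not high_asymmetry and not aligned_with_user:
        return "misunderstanding" # the user notices the mismatch and can intervene
    return "collaborating"        # transparent and aligned: user-shaped systems

print(classify_relation(high_asymmetry=True, aligned_with_user=False))  # -> nudging
```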

Organized Human Knowledge Can Hack Capitalism

All the machine learning algorithms that we interact with in our daily lives were designed and trained with choices conditioned by capitalist dynamics. Airoldi writes, “Platform features are mostly aimed at driving engagement, profitability, and data mining. Their ability to do so effectively is likely to be inversely correlated with levels of digital literacy on the part of the users.”[10] The less we everyday users know about how algorithmic software works and what it does in our digital and physical environments, the more effectively such software can guide us to maximize corporate revenue and shareholder value, often at the expense of our own freedom over how our lives can develop.

Wedging free space back into the mesh of capital’s priorities begins when we interact with algorithms in ways that force them to recognize human singularity. Returning to the illustration of my eccentric music tastes, when I use sources other than recommendation algorithms to discover new music, I assert my uniqueness against the software’s similarity-seeking vectors. I explore music journalism, blogs, and music-sharing networks. I look at whom my favourite musicians collaborate or tour with. And sometimes, of course, the typical algorithmic recommendations are reasonable.

Content producers themselves can break recommendation algorithms by creating without the restraint of similarity and genre. When I listen to the album Timewave Zero, the latest release from Blood Incantation, my music player recommends music by Gojira or Tomb Mold. But Timewave Zero is ambient electronica, a major departure from Blood Incantation’s usual style of progressive death metal, so I can teach the algorithm to recognize the artist’s singularity by mixing music from that album on playlists with artists like Fennesz or William Basinski. Humans can also identify similarities beyond what machine learning algorithms can find in their usual training datasets, as when I hear guitar and violin styles that contemporary folk-rock band Big Thief picked up from Sterling Morrison and John Cale’s work in The Velvet Underground.

Algorithmic learning is induction from social interactions with humans, which is what makes such software social agents. Because algorithms are themselves programmed according to commercial priorities, and cannot spontaneously imagine possibilities beyond the data they encounter, they cannot socialize themselves away from those priorities. But a knowledgeable human can resocialize a problematic algorithm. Many humans, organizing themselves together, can become knowledgeable enough about the algorithms governing so much of their lives to resocialize the software that governs all kinds of social platforms.

Machine learning algorithms are always adapting in reaction to real-world changes and new information, confronting and overcoming their practical obsolescence.

The habitus and machine habitus actualize cultural dispositions crystallized in the past and working in the present as an embodied history. Yet, in the meantime, the social world might have changed in a way that puts the old dispositions at odds with the new tacit rules and doxa of a field. When this happens, there is a mismatch between habitus and the cultural environment in which it is deployed.[11]

People can become part of that confrontation to wrest an algorithm’s activity away from its corporate priorities.

What Can an Algorithm Become Without the Priorities of Capital?

Threaded through Machine Habitus is such an image of emancipatory hope. Airoldi returns several times to IAQOS, an experiment in what a machine learning algorithm could become without guidance from profit-seeking corporate owners. This was a machine learning system with no inherent purpose, which learned how to interact with humans by encountering them casually as its builders walked a tablet computer around the working-class Rome neighbourhood of Tor Pignattara. As people interacted with IAQOS, it became a kind agent. Why would we prefer the friendlier development of AI that IAQOS demonstrates to the quick and clear answers of a typical Google Assistant?

If the ultimate aim is to build social relations and exchange knowledge, then horizontally sharing a common cultural ground might work better than quickly providing the correct answer. The feedback loops linking social and machine learning can be horizontal only if designed to be so. Reducing informational asymmetries would mean transforming an opaque techno-social reproduction into a more transparent and reciprocal co-production of knowledge and value. . . a living archive of sedimented correlational experiences, reflecting and renegotiating locally apprehended points of view in order to openly share them with the world.[12]

What Airoldi describes in the paragraph above is an artificial intelligence that is becoming an artificial person. Without guidance from the profit-seeking priorities of capital, a machine learning algorithm becomes a person, able to embrace all the freedom that develops from purposeless inductive reasoning.

A useful element of Machine Habitus is its set of suggested research directions in machine learning, algorithmic design and induction, and artificial intelligence that are unique to sociological approaches. Sociologists can study the laboratories and software scientists who guide infant algorithms through their initial training before their release into worldly operations. This will improve our knowledge of the commercial and industrial conditions that influence algorithmic software.

Sociologists can study how ordinary users understand and educate themselves about what algorithmic software is and how it works. Wider study of the digital infrastructures in which humans and machine learning algorithms interact can reveal more about the contexts and conditions in which each of us changes the other. Most revolutionary for the field is his recommendation to study algorithmic software itself as a social agent with dispositions, habits, operating conditions, and even the reflective knowledge that IAQOS demonstrated through its directionless socialization.

The greatest test of Airoldi’s ideas will come with this last application, which may one day reveal how singular a machine intelligence can become when it develops free from the intensive guidance of a profit-seeking corporation. Experiencing life in community with free machines may create a wider and better society than the single-minded relentlessness of existence according to the purpose of another.

Author Information:

Adam Riggio, adamriggio@gmail.com, International Language Academy of Canada.

References

Airoldi, Massimo. 2022. Machine Habitus: Toward a Sociology of Algorithms. Polity Press.

Beer, David. 2013. Popular Culture and New Media: The Politics of Circulation. Palgrave Macmillan.

Chafkin, Max. 2021. The Contrarian: Peter Thiel and Silicon Valley’s Pursuit of Power. Random House.

Mohan, Pavithra. 2021. “Inside the Life of a Tech Activist: Abuse, Gaslighting, but Ultimately Optimism.” Fast Company. 3 November. https://www.fastcompany.com/90686948/inside-the-life-of-a-tech-activist-abuse-gaslighting-but-ultimately-optimism.

Molteni, Megan and Adam Rogers. 2017. “The Actual Science of James Damore’s Google Memo.” Wired. 15 August. https://www.wired.com/story/the-pernicious-science-of-james-damores-google-memo/.


[1] Airoldi, Massimo. 2022. Machine Habitus, 13.

[2] Airoldi, 91.

[3] Airoldi, 13.

[4] A glimpse into the sausage factory. My personal suspicion is that Machine Habitus faced a manuscript reviewer who held such an asinine interpretation of Bourdieu’s concept of habitus, and sent rather obnoxious notes demanding that Airoldi distinguish his use of the term from the strictly deterministic interpretation that many sub-disciplines of sociology stubbornly continue to consider obvious.

[5] Airoldi, 74.

[6] Airoldi, 120.

[7] Airoldi, 45.

[8] Airoldi, 81.

[9] Airoldi, 134.

[10] Airoldi, 117.

[11] Airoldi, 127.

[12] Airoldi, 155.


