Sheldon Richmond is a philosopher who spent his working career in IT. He has written an ambitious and courageous book on the philosophy of computer technology, one that integrates the philosophy of science with the cultural and political issues raised by the existence, and role, of an elite which controls this technology. The COVID crisis, with its overt interventions in information transfer on Facebook and Twitter, has made the issue of the power of the techno-elite clear. But Richmond is concerned with a more fundamental issue. He cuts through the various formulations of the “problem” with IT and suggests that they miss the mark. The core problem is the fraught user-expert relationship: users are frustrated by the technology; IT professionals blame the users. His question is basic: “Do we need technology experts and professionals to assist us in learning the use of computer technologies” (7)? This leads to a larger question: can we “design our social institutions so that democratic control is in place, for the institutions themselves as well as for the technology” (6)?
Turner, Stephen. 2021. “Our Techno-Masters and their Philosophical Cheerleaders: A Review of Richmond’s A Way Through the Global Techno-Scientific Culture.” Social Epistemology Review and Reply Collective 10 (12): 66-68. https://wp.me/p1Bfg0-6om.
🔹 The PDF of the article gives specific page numbers.
A Way Through the Global Techno-Scientific Culture
Cambridge Scholars Publishing, 2020
The obstacle to doing so is the mystification of computing itself. Richmond wants to see computers as our servants. To get there we need an account of the relevant knowledge. He sees three options from the philosophy of science: a Popperian one, which emphasizes trial-and-error learning; a Polanyian one, which stresses tacit knowledge; and a Kuhnian one, which treats computing knowledge as a kind of culture. Each is wrong, but each adds something, and he builds on the insights they provide to construct a synthetic account. The object of the relevant knowledge is elusive. He makes the point that computing takes place in a network, and that while the network is in theory open, access is controlled by systems administrators and computer companies through what they call “policies,” meaning blocks on what the user can see, modify, or get to. This is the source of their power. But although the techno-elite controls its pieces of the interconnected system that results from these “policies,” it does not understand the system, and cannot keep up with its changes.
This is not to say that computing, the techno-elite’s conception of it, and the mystification that attaches to it do not have profound effects. Richmond points to the “dummification” that results from transferring work from knowledgeable workers to unintelligent machine processes which are mystified into a novel form of intelligence. He argues that this transfer, which centralizes power in IT professionals and centralized systems, is not inevitable. Computers are not mysterious, but have only been made so; the power in question is a matter of a social construction of smart machines for dumb people that can be reversed, so that the machines are seen as dumb and people as smart. This requires something, however: skills in using computers, which can be taught, and computers that can be redesigned. But this is not really enough: we are still trapped in the world of symbols, of digital nominal reality rather than the objective world, which is analog, like us. How can we break out of that trap?
Richmond thinks that we can. But his proposal is demanding. It is to cut out the IT intermediary and the secrecy, create participatory democracy by breaking down the regulatory distinctions between professionals and users that close off access to parts of the system, and allow “people to develop mastery through trial and error and sharing ideas and skills. Social regulation will apply across the board to all participants, not by giving administrators special powers of control.” This would require educational reform and a Socratic style of teaching, and would turn society into an “experimenting society,” as Donald Campbell used to say, leading to techno-plurality rather than the monolithic top-down systems of the present.
The key to this is the idea of extending the Socratic democratic participatory subculture to all spheres of public life as an alternative to domination by the techno-elite. The obstacle is the culture of this elite, which closes people out. Richmond turns to Karl Popper and Michael Polanyi to describe an alternative: Popper for his trial-and-error view of objective reality, Polanyi for his account of tacit knowledge and the culture of science, both of which have a role in his solution. “User error” is the key concept here: it is hierarchical rather than dialogical. The alternative is “design flaw.” That allows for dialogue, and breaks the barriers by turning the participants in what is inherently a distributed system, one that no one fully understands and in which different people have different kinds of knowledge, into equal partners. To do this we need a new social architecture, one which supports cross-(sub)cultural and cross-group dialogue. He uses his own work experience in IT to indicate how this can be done, and how top-down methods of communication fail. We are, he thinks, only at the first stage of developing the right architecture.
His next question is this: where are the philosophers in all this? As in the case of science itself, there are those philosophers who become mouthpieces: who find a way to celebrate whatever emerges, and whoever is in the dominant group. He calls these techno-submissives. But his larger project, as the book’s title makes clear, is to find a way through the technology that is not merely submission, and that benefits everyone. The strategy is to replace elite control with client-centered control, with collaboration between client and technician on a small-group scale where dialogue and equality are possible, and to match the distributed character of the internet itself with the distribution of control downwards, to the group levels where the core problem of usability can be addressed.
This requires, philosophically, the recognition that the root of the usability problem is the mismatch between human analog intelligence and machine digital intelligence. The celebration of “artificial intelligence” has done the opposite: it reduces the human, contextual, and analog to the digital, and treats machines not as the dumb tools they are but as superior models of the defective and limited intelligence of humans. What philosophers should be doing is breaking this paradigm down, renewing critical inquiry, and participating in the transformation of the pervasive digital world into a democratic internet based on the sharing of intelligence among small groups close to the user, which would result in a world of workers with genuine autonomy.
What does criticism imply here? Richmond dismisses the standard view of philosophical criticism as judgmental in favor of his teacher Joseph Agassi’s view of it as logically rather than judgmentally negative, more a form of debugging, which accepts a world of differences of opinion. Luciano Floridi is given as the exemplar of the idea that the world should be viewed from the point of view of information, and should serve the information system, which Floridi thinks would flatten values and allow democracy to emerge. But people disappear in this vision, becoming only sources of friction, and in the Weltanschauung of the techno-elite, something to be replaced by self-driving systems.
Is there a humane response to this? Richmond’s answer combines Popper and Polanyi, particularly his view of mentorship: “we need to open up computers to everyone so that we allow everyone to learn computer technology through trial and error and through consultation with our mentors, colleagues, and friends” (139). Everyone needs to participate in the development of a new social architecture for technology and in full dialogic social decision-making. We need to regain the cognitive functions of personal knowledge we have replaced by machines.
Is this utopian? I am reminded of de Maistre’s barb at Rousseau: “Sheep are born carnivorous, but everywhere they eat grass.” Are we born to drive cars, but nevertheless prefer to let our Tesla drive for us? Isn’t that “easier”? Doesn’t it diminish us? Or is diminishing us an inevitable part of progress? Along with submission to an elite? And the submission of philosophers to it? These are big questions, too big for a book review, but they are, as Richmond shows, the questions we should be thinking about. There are many signs that people are eager to assert themselves, speak, and resist, outside of the framework of submission. As Richmond says, we are in the early stage. Going beyond resistance is the hard next stage.
Stephen Turner, email@example.com, University of South Florida.
Stephen Turner asks very important but hard-to-answer questions at the end of his review. Such questions have been called “wicked problems”: problems that cannot be answered simply, if at all, because of the multiple, interrelated variables involved. I simplify horribly: if we are in a bad situation, do we throw up our hands and seek an escape route, only for the few? Or do we realize that the situation is of human creation, and as such can be changed and remade by humans? That is our choice, regarding not only remaking the hype world of delusion created by the techno-elite, but also repairing the environmental mess of our planet. My suggestion is: let us at least talk plainly and honestly across party lines and leave behind the labels of our schools of thought, as well as the bounds of disciplines: discuss problems, not terminology. Is that utopian? I agree with Stephen that it is: but if we don’t begin the task now, when, and who will?