Note from Steve Fuller: On 6 November 2015, Michael Crow, President of Arizona State University, circulated a letter announcing the creation of a School for the Future of Innovation in Society, under the directorship of David Guston. It is the latest phase in Crow’s fashioning of what he calls the ‘New American University’, a vision that has received considerable notice in higher education circles worldwide. Crow’s aim in the letter was to solicit ideas about the general orientation and specific issues that the School should adopt both in the classroom and in the research setting, especially given the rapidly expanding frontiers of science and technology into areas of direct concern to the human condition. What follows is Steve Fuller’s response to Crow’s letter.
My most general point is that the future of the human condition—not in terms of sheer survival but in terms of what counts as ‘flourishing’—will depend on whether our default setting is to treat risk as a threat or as an opportunity: that is, a precautionary or a proactionary attitude. You can read a short introduction to the implications of this distinction here and here. I adopt a proactionary stance, which corresponds not only to the entrepreneurial spirit but also to what Donald Campbell called the ‘experimenting society’. It is a world in which people (in individual, collective and corporate form) are encouraged to conjecture boldly and to demonstrate their successes and make their mistakes in public, so that everyone may benefit. (Karl Popper’s ‘open society’ is my template.) It is a world that aims to remove taboos and criminal sanctions from trying out radical new ideas, while at the same time recognizing that harms will be committed along the way, and these require recognition and compensation.
I believe that two features of today’s world bias academia against the proactionary and towards the precautionary approach.
One bias is more general and pertains to the decadent state of social democracy, an ideology which academics (including myself) generally support but which is nowadays more concerned with protecting than empowering people. This point is very clear in Europe, where the precautionary principle is inscribed in EU innovation-relevant legislation—resulting in, among other things, the public relations debacle surrounding ‘genetically modified organisms’. Against this backdrop, neo-liberalism can seem like a breath of fresh air. Even though neo-liberalism doesn’t provide adequate recognition and compensation for failure, at least it removes paternalistic obstacles from people—including government and industry—trying out new things. Moreover, because the Left tends to focus on the losers in any given scheme, it tends to overlook the flexibility and adventurousness of neo-liberal regimes.
The second bias is more specific to academia and is exemplified by institutional review boards. The mentality informing these is anchored in the Nuremberg Trials. It serves to reinforce the precautionary principle in ways that instil a needlessly adversarial relationship between science and the public. Thus, a potential research subject is configured as someone who might be personally abused (and hence safeguards must be in place to prevent that outcome) rather than as someone who might contribute to a larger human project. Of course, this is not to deny the need for regulatory oversight of research, including the need for personal consent. But rather than pitting science against the public, the two should be joined in combat against some common enemy, be it defined as ‘disease’, ‘death’ or even ‘extinction’. In this respect, institutional review boards might even be reconceptualised as vehicles for brokering joint-stock companies formed by researchers and subjects for mutual benefit. And in terms of worst-case scenarios from adventurous research, the legal orientation should be more towards compensation than prohibition.
Here one might devote an entire research programme or even institute to ‘securitized risk-taking’ as a general world-view, which should attract banks and insurance companies as potential funders. The point is to look at ways in which people have tried—not always with success!—to build trust and achieve results in a highly volatile world. Here one thinks, for example, of ‘megaprojects’, in which great achievements result from great faith combined with great underestimation of cost. There could even be a national/patriotic dimension, given that US history has been punctuated by this sort of self-understanding from the early colonial days to the era of space exploration.
On the teaching side, why not have a liberal arts curriculum that is focused on ‘courage’ as the operative virtue to which all incoming undergraduates need to be exposed? This would not only provide historical and philosophical depth to entrepreneurship but also would help academics to re-engage the military, whose existence, if acknowledged positively at all, has been honoured more in the breach than in the observance. Yet, the military has been more consistent than even business in fostering a ‘strategic’ mentality that plans for short-term setbacks and losses in service of long-term progress and victory.
As it stands, academia trails behind ‘Silicon Valley’ in the consistent cultivation of a proactionary attitude towards risk. By ‘Silicon Valley’ I mean less the actual place than the global ideology that emanates from there. In this sense, ‘Silicon Valley’ is comparable to ‘Manchester’ in the 19th century, as the name for a radical liberalism that created an alternative and durable knowledge base outside the university sector, centring on manufacturing and including a much wider range of people than universities had hitherto taken seriously. For their part, academics spent most of the 19th and the early 20th centuries playing catch up by introducing science and technology-based education and research facilities into their campuses—as well as opening up their doors (somewhat more slowly) to the populace as a whole. Academia managed to evolve in the face of the ‘Manchester’ challenge and came out a stronger and more complex creature as a result. Clark Kerr’s ‘multiversity’ captures this change well. Universities are in a similar position vis-à-vis Silicon Valley today. 2015 is the new 1815.
Like the Manchester liberals, the Silicon Valley liberals are in their own high-tech way vulgar utilitarians, contemptuous of established institutions. However, they are not without ideas—and capital—to get things done, with or without universities. Academia needs to be more positive and creative in response to this development—and a look at how it adapted to the original Industrial Revolution would not go amiss. Generally speaking, academia should not try to compete with the private sector in terms of capitalizing innovation. But academia can play—and has played—a more substantial role than that of a mere handmaiden supplying relatively cheap intellectual labour for start-up companies. Indeed, universities are where the ‘normative horizons’ of innovation are set, which means establishing standards of technical performance and cognitive frameworks that enable innovation to be understood systematically so that it can be taken to the next level. Moreover, all of this is streamed through a regularly revised curricular structure that allows people from all backgrounds to participate in the process.
In this context, an aim of general education must be to make people at least as ‘smart’ as the environments in which they increasingly live and work. To a large extent, the villain is personified in the late Steve Jobs, who created products with such smart interfaces that they effectively dumbed down their users, by channelling their responses within an expected range. Even people who formally work in the ‘IT sector’ don’t necessarily know much about coding or algorithms, let alone the emerging political economy in which this new capital is being generated. Alongside its classical goal of plugging students into established and ‘classic’ forms of academic knowledge, general education needs to address this very serious blindspot in contemporary culture.
The issue of general education raises the final point about the future of humanity, which returns me to the original theme of competing attitudes towards risk. In the last few years I have written of ‘Humanity 2.0’, which presumes that ‘humanity’, understood as an upgraded upright ape, has reached a crossroads in its development, whereby it can identify with either (1) where we have come from (i.e. our status as one among many species on planet Earth) or (2) where we might go (i.e. the prospect of substantially altering if not abandoning those animal origins, including existing in some silicon form and/or in outer space). The former is what I call ‘down-wing’ (and is associated with the precautionary principle) and the latter ‘up-wing’ (and is associated with the proactionary principle). The difference is briefly explained here. I believe that this polarity will replace the existing right-left ideological polarity in the 21st century. The question then is how to teach it effectively. Here our species’ relationship to the environment will provide a significant context. Will that relationship be defined as one of greater co-dependency with nature, à la down-wingers, even if that means scaling down humanity’s reach over the planet? This has been the traditional stance of the ecology movement and certainly dominates contemporary discussions of global warming. Or, will our relationship be defined as one of greater ‘decoupling’, say, through the discovery of energy-dense materials (e.g. nuclear) that require much less biomass so as to enable us to progress as we have? This is the way of the up-wingers, most notably the ‘ecomodernists’, whose recent manifesto can be found here.
There is much more that could be said, but I only wished here to outline the general contours of my thinking about a university that can meet the challenges of the 21st century.