
Author Information: Frank Scalambrino, Duquesne University, franklscalambrino@gmail.com.

Scalambrino, Frank. “Reviewing Nolen Gertz’s Nihilism and Technology.” Social Epistemology Review and Reply Collective 7, no. 12 (2018): 22-28.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-44B


There are three parts to this review, each of which brings a philosophical and/or structural issue regarding Dr. Gertz’s book into critical focus.

1) His characterization of “nihilism.”

a) This is specifically about Nietzsche.

2) His (lack of) characterization of the anti- and post-humanist positions in philosophy of technology.

a) Importantly, this should also change what he says about Marx.

3) In light of the above two changes, going forward, he should (re)consider the way he frames his “human-nihilism relations.”

1) Consider that: If his characterization of nihilism in Nietzsche as “Who cares?” were correct, then Nietzsche would not have been able to say that Christianity is nihilistic (cf. The Anti-Christ §§6-7; cf. The Will to Power §247). The following organizes a range of ways he could correct this, from the most to least pervasive.

1a) He could completely drop the term “nihilism.” Ultimately, I think the term that fits best with his project, as it stands, is “decadence.” (More on this below.) In §43 of The Will to Power, Nietzsche explained that “Nihilism is not a cause, but only the rationale of decadence.”

1b) He could keep the term “nihilism” on the cover, but re-work the text to reflect technology as decadence, and then frame decadence as indicating a kind of nihilism (to justify keeping nihilism on the cover).

1c) He could keep everything as is; however, as will be clear below, his conception of nihilism and human-nihilism relations leaves him open to two counter-arguments which – as I see it – are devastating to his project. The first suggests that from the point of view of Nietzsche’s actual definition of “nihilism,” his theory itself is nihilistic. The second suggests that (from a post-human point of view) the ethical suggestions he makes (based on his revelation of human-nihilism relations) are “empty threats” in that the “de-humanization” of which he warns refers to a non-entity.

Lastly, I strongly suggest anyone interested in “nihilism” in Nietzsche consult both Heidegger (1987) and Deleuze (2006).

1. Gertz’s Characterization of “Nihilism”

Nietzsche’s writings are notoriously difficult to interpret. Of course, this is not the place to provide a “How to Read Nietzsche.” However, Dr. Gertz’s approach to reading Nietzsche is peculiar enough to warrant the following remarks about the difficulties involved. When approaching Nietzsche you should ask three questions: (1) Do you believe Nietzsche’s writings are wholly coherent, partially coherent, or not coherent at all? (2) Do you believe Nietzsche’s writings are wholly consistent, partially consistent, or not consistent at all? (3) Does Nietzsche’s being consistent make a “system” out of his philosophy?

The first question is important because you may believe that Nietzsche was a “madman.” And, the fallacy of ad hominem aside, you may believe his “madness” somehow invalidates what he said – either partially or totally. Further, it is clear that Nietzsche does not endorse a philosophy which considers rationality the most important aspect of being human. Thus, it may be possible to consider Nietzsche’s writings as purposeful or inspired incoherence.

For example, this latter point of view may find support in Nietzsche’s letters, and is exemplified by Blanchot’s comment: “The fundamental characteristic of Nietzsche’s truth is that it can only be misunderstood, can only be the object of an endless misunderstanding.” (1995: 299).

The second question is important because across Nietzsche’s writings he seemingly contradicts himself or changes his philosophical position. There are two main issues, then, regarding consistency. On the one hand, “distinct periods” of philosophy have been associated with various groupings of Nietzsche’s writings, and establishing these periods – along with affirming position changes – can be supported by Nietzsche’s own words (so long as one considers those statements coherent).

Thus, according to the standard division, we have the “Early Writings” from 1872-1876, the “Middle Writings” from 1878-1882, the “Later Writings” from 1883-1887, and the “Final Writings” of 1888. Dr. Gertz’s bibliography makes clear that he privileges Nietzsche’s “Later” and “Unpublished” writings. On the other hand, as William H. Schaberg convincingly argued in his The Nietzsche Canon: A Publication History and Bibliography, despite all of the “inconsistencies,” from beginning to end Nietzsche’s writings represent the development of what he called the “Dionysian Worldview.” Importantly, Dr. Gertz neither addresses these exegetical issues nor even mentions Dionysus.

The third question is important because throughout the last century of Nietzsche scholarship there have been various trends regarding the first two questions above, and often the “consistency” and “anti-system” issues have been conflated. Thus, scholars in the past have argued that Nietzsche must be inconsistent – if not incoherent – because he is purposefully an “anti-systematic thinker.”

However, as Schaberg’s work, among others, makes clear: To have a consistent theme does not necessitate that one’s work is “systematic.” For example, it is not the case that all philosophers are “systematic” philosophers merely because they consistently write about philosophy. That the “Dionysian Worldview” is ultimately Nietzsche’s consistent theme is not negated by any inconsistencies regarding how to best characterize that worldview.

Thus, I would be interested to know the process through which Dr. Gertz decided on the title of this book. On the one hand, it is clear that he considers this a book that combines Nietzsche and philosophy of technology. On the other hand, Dr. Gertz’s allegiance to (the unfortunately titled) “postphenomenology” and the way he takes up Nietzsche’s ideas make the title of his book problematic. For instance, the title of the first section of Chapter 2 is: “What is Nihilism?”

What About the Meaning of Nihilism?

Dr. Gertz notes that because the meaning of “nihilism” in the writings of Nietzsche is controversial, he will not even attempt to define nihilism in terms of Nietzsche’s writings (p. 13). He then, without referencing any philosopher at all, defines “nihilism” stating: “in everyday usage it is taken to mean something roughly equivalent to the expression ‘Who cares?’” (p. 13). Lastly, in the next section he uses Jean-Paul Sartre to characterize nihilism as “bad faith.” All this is problematic.

First, is this book about “nihilism” or “bad faith”? It seems to be about the latter, which (more on this to come) leads one to wonder whether the title and the supposed (at times forced) use of Nietzsche were not a (nihilistic?) marketing ploy. Second, though Dr. Gertz doesn’t think it necessary to articulate and defend the meaning of “nihilism” in Nietzsche, just a casual glance at the same section of the “Unpublished Writings” (The Will to Power) that Gertz invokes can be used to argue against his characterization of “nihilism” as “Who cares?”

For example, Nietzsche is far more hardcore than “Who cares?” as evidenced by: “Nihilism does not only contemplate the ‘in vain!’ nor is it merely the belief that everything deserves to perish: one helps to destroy… [emphasis added]” (1968b: 18). “Nihilism” pertains to moral value. It is in this context that Nietzsche is a so-called “immoralist.”

Nietzsche came to see the will as, pun intended, beyond good and evil. It is moralizing that leads to nihilism. Consider the following from Nietzsche:

“Schopenhauer interpreted high intellectuality as liberation from the will; he did not want to see the freedom from moral prejudice which is part of the emancipation of the great spirit… Fundamental instinctive principle of all philosophers and historians and psychologists: everything of value in man, art, history, science, religion, technology [emphasis added], must be proved to be of moral value, morally conditioned, in aim, means and outcome… ‘Does man become better through it?’” (1968b: pp. 205-6).

The will is free, beyond all moral values, and so the desire to domesticate it is nihilistic – if for no reason other than in domesticating it one has lowered the sovereignty of the will into conformity with some set of rules designed for the preservation of the herd (or academic-cartel). Incidentally, I invoked this Nietzschean point in my chapter: “What Control? Life at the limits of power expression” in our book Social Epistemology and Technology. Moreover, none of us “philosophers of the future” have yet expressed this point in a way that surpasses the excellence and eloquence of Baudrillard (cf. The Perfect Crime and The Agony of Power).

In other words, what is in play are power differentials. Thus, oddly, as soon as Dr. Gertz begins moralizing by denouncing technology as “nihilistic,” he reveals himself – not technology – to be nihilistic. For all these reasons, and more, it is not clear why Dr. Gertz insists on the term “nihilism” or precisely how he sees this as Nietzsche’s position.

To be sure, the most recent data from the CDC indicate that chlamydia, gonorrhea, and syphilis are presently at an all-time high; do you think this has nothing to do with the technological mediation of our social relations? Yet, the problem of bringing in Nietzsche’s conception of “nihilism” is that Nietzsche might not see this as a problem at all. On the one hand, we have all heard the story that Nietzsche knew he had syphilis; yet, he supposedly refused to seek treatment, and subsequently died from it.

On the other hand, at times it seems as though the Nietzschean term Dr. Gertz could have used would have been “decadence.” Thus, the problem with technology is that it is motivated by decadence and breeds decadence. Ultimately, the problem is that – despite the nowadays obligatory affirmation of the “non-binary” nature of whatever we happen to be talking about – Dr. Gertz frames his conception in terms of the bifurcation: technophile v. technophobe. Yet, Nietzsche is, of course, a transcendental philosopher, so there are three (not two) positions. The third position is Amor Fati.

The ‘predominance of suffering over pleasure’ or the opposite (hedonism): these two doctrines are already signposts to nihilism… that is how a kind of man speaks who no longer dares to posit a will, a purpose, a meaning: for any healthier kind of man the value of life is certainly not measured by the standard of these trifles [pleasure and pain]. And suffering might predominate, and in spite of that a powerful will might exist, a Yes to life, a need for this predominance. (Nietzsche, 1968b: p. 23).

In terms of philosophy of technology, if it is our fate to exist in a world torn asunder by technological mediation, well, then, love it (in this wise, even the “Death of God” can be celebrated). And, here would be the place to mention “postmodern irony,” which Dr. Gertz does not consider. In sum, Dr. Gertz’s use of the term “nihilism” is, to say the least, problematic.

Technology’s Disconnect From Nietzsche Himself

Nietzsche famously never took to the typewriter. It was invented during his lifetime, and, as the story goes, he tried to use the technology but couldn’t get the hang of it, so he went back to writing by hand. This story points to an insight that Dr. Gertz’s book doesn’t seem to consider. For Nietzsche, human existence is the point of departure, not technology.

So, according to Nietzsche’s actual logic of “nihilism,” the very idea that technological mediation will lead to a better existence (even if “better” only means “more efficient,” as it could in the case of the typewriter) should be seen as either a symptom of decadence or an expression of strength; however, these options do not manifest in the logic of Gertz’s Nietzsche analysis.

Rather, Dr. Gertz moralizes the use of technology: “Working out which of these perspectives is correct is thus vital for ensuring that technologies are providing us leisure as a form of liberation rather than providing us leisure as a form of dehumanization.” (p. 4). Does the “Who cares?” logic of Gertz’s “nihilism” necessarily lead to an interpretation of Nietzsche as a kind of “Luddite”?

Before moving on to the next part of this review, a few last remarks about how Dr. Gertz uses Nietzsche’s writings are called for. There are nine chapters in Nihilism and Technology. Dr. Gertz primarily uses the first two chapters to speak to the terminology he will use throughout the book. He uses the third chapter to align himself with the academic-cartel, and the remaining chapters are supposed to illustrate his explication of what he calls Nietzsche’s five “human-nihilism relations.” All of these so-called “human-nihilism relations” revolve around discussions which take place only in the “Third Essay” of Nietzsche’s On the Genealogy of Morals – except one foray into The Gay Science.

Two points should be made here. First, Dr. Gertz calls these “nihilism relations,” but they are really just examples of “Slave Mentality.” This should come as no surprise to those familiar with Nietzsche because of where in his writings Dr. Gertz is focused. Moreover, there is not enough space here to fully explain why, but it is problematic to simply replace the term “Slave Mentality” with “nihilism relation.”

Second, among these “nihilism relations” there are two glaring misappropriations of Nietzsche’s writings regarding “pity” and “divinity.” That is, when Dr. Gertz equates “pity sex” (i.e. having “sexual intercourse,” of one kind or another, with someone ostensibly because you “pity” them) with Nietzsche’s famous discussion of pity in On the Genealogy of Morals, it both overlooks Nietzsche’s comments regarding “Master” pity and trivializes the notion of “pity” in Nietzsche.

For, as already noted above, if in your day to day practice of life you remain oriented to the belief that you need an excuse for whatever you do, then you are moralizing. (Remember when we used to think that Nietzsche was “dangerous”?) If you are moralizing, then you’re a nihilist. You’re a nihilist because you believe there is a world that is better than the one that exists. You believe in a world that is nothing. “Conclusion: The faith in the categories of reason is the cause of nihilism. We have measured the value of the world according to categories that refer to a purely fictitious world.” (Nietzsche, 1968b: p. 13).

Lastly, Dr. Gertz notes: “Google stands as proof that humans do not need gods, that humans are capable of fulfilling the role once reserved for the gods.” (p. 199). However, in making that statement he neither accurately speaks of the gods, in general, nor of Nietzsche’s understanding of – for example – Dionysus.

2) The Anti- and Post-Humanist Positions in Philosophy of Technology

In a footnote Dr. Gertz thanks an “anonymous reviewer” for telling him to clarify his position regarding humanism, transhumanism, and posthumanism; however, despite what sounds like his acknowledgement, he does not provide such a clarification. The idea is supposed to be that transhumanism is a kind of humanism, and anti- and post-humanism are philosophies which deny that “human” refers to a “natural category.” It is for this reason that many scholars talk of “two Marxisms.” That is to say, there is the earlier Marxism which takes “human” as a natural category and aims at liberation, and there is the later Marxism which takes “human” to be a category constructed by Capital.

It is from this latter idea that the “care for the self” is criticized as something to be sold to “the worker,” eventually transforming the worker’s work into the work of consumption – this secures perpetual demand, as “the worker” is transformed into the “consumer.” Moreover, this is absolutely of central importance in the philosophy of technology. For, from a point of view that is truly post-human, Dr. Gertz’s moralizing warning that technology may lead to “a form of dehumanization” (p. 4) is an empty threat.

On the one hand, this fidelity to “human” as a natural category comes from Don Ihde’s “postphenomenology.” For Gertz’s idea of “human-nihilism relations” was developed from Ihde’s “human-technology relations” (p. 45). Gertz notes, “Ihde turns Heidegger’s analysis of hammering into an exemplar of how to carry out analyses of human-technology relations, analyses which lead Ihde to expand the field of human-technology relations beyond Heidegger’s examples” (p. 49).

However, there are two significant problems here, both of which point back, again, to the lack of clarification regarding post-humanism. First, Heidegger speaks of Dasein and of Being, not of “human.” Similarly, Nietzsche could say, “The will to overcome an affect is ultimately only the will of another affect, or of several other affects.” (Nietzsche, 1989a: §117), or “There is no ‘being’ behind doing … the ‘doer’ is merely a fiction added to the deed – the deed is everything.” (Nietzsche, 1989b: p. 45).

Second, the section of Being & Time from which “postphenomenology” develops its relations of “co-constitution” is “The Worldhood of the World,” not “Being-in-the-World.” In other words, Dasein is not an aspect of “ready-to-hand” hammering; the ready-to-hand is an aspect of Dasein. Thus, “human” may be seen as a “worldly” “present-at-hand” projection of an “in order to.” This is also why Gertz doesn’t characterize Marxism (p. 5) as “two Marxisms”: he does not consider the anti- or post-humanist readings of Marx.

Hence, the importance of clarifying the incommensurability between humanism and post-humanism: Gertz’s characterization of technology as nihilistic due to its de-humanizing may turn out to be itself nihilistic in terms of its moralizing (noted in Part I, above) and in terms of its taking the fictional-rational category “human” as more primordial than the (according to Nietzsche) non-discursive sovereign will.

3) His “human-nihilism relations”

Students of the philosophy of technology will find the Chapter 3 discussion of Ihde’s work helpful; going forward, we should ask whether Ihde’s four categories – in the context of post-humanism and cybernetics – are exhaustive. Moreover, how might each of these categories look from a point of view which takes the fundamental alteration of (human) be-ing by technology to be desirable?

This is a difficult question to navigate because it shifts the context for understanding Gertz’s philic/phobic dichotomy away from “care for the self” and toward a context of “evolutionary selection.” Might public self-awareness, in such a context, influence the evolutionary selection?

So long as one explicitly takes a stand for humanism, one could argue that the matrix of human-technology relations is symptomatic of decadence. Interestingly, such a stance may make Nihilism and Technology, first and foremost, an ethics book rather than a philosophy of technology book. Yet presenting only the humanistic point of view – especially, though perhaps not exclusively – leaves one open to the counter-argument that the “intellectual” and “philosophical” relations to “technology” that allow for such an analysis of these various discursive identities betray a kind of decadence. It would not be much of a stretch to conclude that Nietzsche would consider “academics” decadent.

Further, it would also be helpful for philosophy of technology students to consider – from a humanistic point of view – the use of technology to extend human life in light of “human-decadence relations.” Of course, whether or not these relations, in general, lead to nihilism is a separate question. However, the people who profit from the decadence on which these technologies stand will rhetorically-bulwark the implementation of their technological procedures in terms of “saving lives.” Here, Nietzsche was again prophetic, as he explicitly considered a philosophy of “survive at all costs” to be a sign of degeneracy and decay.

Contact details: franklscalambrino@gmail.com

References

Blanchot, Maurice. (1995). The Work of Fire. C. Mandell (Trans.). Stanford, CA: Stanford University Press.

Deleuze, Gilles. (2006). Nietzsche and Philosophy. H. Tomlinson (Trans.). New York: Columbia University Press.

Heidegger, Martin. (1987). D.F. Krell (Ed.). Nietzsche, Vol. IV: Nihilism. F.A. Capuzzi (Trans.). New York: Harper & Row.

Nietzsche, Friedrich. (1989a). Beyond Good and Evil: Prelude to a Philosophy of the Future. W. Kaufmann (Trans.). New York: Vintage.

_____. (1989b). On the Genealogy of Morals /Ecce Homo. W. Kaufmann (Trans.). New York: Vintage Books.

_____. (1968a). Twilight of the Idols/The Anti-Christ. R.J. Hollingdale (Trans.). Middlesex, England: Penguin Books.

_____. (1968b). The Will to Power. W. Kaufmann and R.J. Hollingdale (Trans.). New York: Vintage Books.

Schaberg, William H. (1995). The Nietzsche Canon: A Publication History and Bibliography. Chicago: University of Chicago Press.

Author Information: Joshua Earle, Virginia Tech, jearle@vt.edu.

Earle, Joshua. “Deleting the Instrument Clause: Technology as Praxis.” Social Epistemology Review and Reply Collective 7, no. 10 (2018): 59-62.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-42r


Damien Williams, in his review of Dr. Ashley Shew’s new book Animal Constructions and Technological Knowledge (2017), foregrounds in his title what is probably the most important thesis in Shew’s work. Namely that in our definition of technology, we focus too much on the human, and in doing so we miss a lot of things that should be considered technological use and knowledge. Williams calls this “Deleting the Human Clause” (Williams, 2018).

I agree with Shew (and Williams), for all the reasons they state (and potentially some more as well), but I think we ought to go further. I believe we should also delete the instrument clause.

Beginning With Definitions

There are two sets of definitions that I want to work with here. One is the set of definitions argued over by philosophers (and referenced by both Shew and Williams). The other is a more generic, “common-sense” definition that sits, mostly unexamined, in the back of our minds. Both generally invoke both the human clause (obviously with the exception of Shew) and the instrument clause.

Taking the “common-sense” definition first, we, generally speaking, think of technology as the things that humans make and use. The computer on which I write this article, and on which you, ostensibly, read it, is a technology. So is the book, or the airplane, or the hammer. In fact, the more advanced the object is, the more technological it is. So while the hammer might be a technology, it generally gets relegated to a mere “tool” while the computer or the airplane seems to be more than “just” a tool, and becomes more purely technological.

Peeling apart the layers therein would be interesting but is beyond the scope of this article; you get the idea. Our technologies are what give us functionalities we might not have otherwise. The more functionalities it gives us, the more technological it is.

The academic definitions of technology are a bit more abstract. Joe Pitt calls technology “humanity at work,” foregrounding the production of artefacts and the iteration of old into new (2000, pg 11). Georges Canguilhem called technology “the extension of human faculties” (2009, pg 94). Philip Brey, referencing Canguilhem (but also Marshall McLuhan, Ernst Kapp, and David Rothenberg) takes this definition up as well, but extending it to include not just action, but intent, and refining some various ways of considering extension and what counts as a technical artefact (sometimes, like Soylent Green, it’s people) (Brey, 2000).

Both the common sense and the academic definitions of technology use the human clause, which Shew troubles. But even if we alter instances of “human” to “human or non-human agents,” there is still something that chafes. What if we think about things that do work for us in the world but are not reliant on artefacts or tools – are those things still technology?

While each definition focuses on objects, none talks about what form or function those objects need to perform in order to count as technologies. Brey, hewing close to Heidegger, even talks about how using people as objects, as means to an end, would put them within the definition of technology (Ibid, pg. 12). But this also puts people in problematic power arrangements and elides the agency of the people being used toward an end. It also raises the question: can we use ourselves to an end? Does that make us our own technology?

This may be the ultimate danger that Heidegger warned us about, but I think it’s a category mistake. Instead of objectifying agents into technical objects, if, instead we look at the exercise of agency itself as what is key to the definition of technology, things shift. Technology no longer becomes about the objects, but about the actions, and how those actions affect the world. Technology becomes praxis.

Technology as Action

Let’s think through some liminal cases that first inspired this line of thought: Language and Agriculture. It is arguable whether either of these things fits any definition of technology other than mine (praxis). Don Ihde would definitely disagree with me, as he explicitly states that one needs a tool or an instrument to be technology, though he hews close to my definition in other ways (Ihde, 2012; 2018). If Pitt’s definition, “humanity at work,” is true, then agriculture is, indeed, a technology . . . even without the various artifactual apparati that normally surround it.

Agriculture can be done entirely by hand, without any tools whatsoever; it is iterative and produces a tangible output: food, in greater quantity/efficiency than would normally exist. By Brey’s and Canguilhem’s definitions, it should fit as well, as agriculture extends our intent (for greater amounts of food more locally available) into action and the production of something not otherwise existing in nature. Agriculture is basically (and I’m being too cute by half with this, I know) the intensification of nature. It is, in essence, moving things rather than creating or building them.

Language is a slightly harder case, but one I want to explicitly include in my definition; I would also say it fits Pitt’s and Brey’s definitions, IF we delete or ignore the instrument clause. While language does not produce any tangible artefacts directly (one might say the book or the written word, but most languages have never been written at all), it is the single most fundamental way in which we extend our intent into the world.

It is work, it moves people and things, it is constantly iterative. It is often the very first thing that is used when attempting to affect the world, and the only way by which more than one agent is able to cooperate on any task (I am using the broadest possible definition of language, here). Language could be argued to be the technology by which culture itself is made possible.

There is another way in which focusing on the artefact or the tool or the instrument is problematic. Allow me to illustrate with the favorite philosophical example: the hammer. A question: is a hammer built, but never used, technology[1]? If it is, then all of the definitions above no longer hold. An unused hammer is not “at work” as in Pitt’s definition, nor does it iterate, as his definition requires. An unused hammer extends nothing, contra Canguilhem and Brey, unless we count the potential for use, the potential for extension.

But if we do, what potential uses count and which do not? A stick used by an ape (or a person, I suppose) to tease out some tasty termites from their dirt-mound home is, I would argue (and so does Shew), a technological use of a tool. But is the stick, before it is picked up by the ape, or after it is discarded, still a technology or a tool? It always already had the potential to be used, and can be again after it is discarded. But such a definition would count anything and everything as technology, which renders the definition meaningless. So, the potential for use cannot be enough to be technology.

Perhaps instead the unused hammer is just a tool? But again, the stick example renders the definition of “tool” in this way meaningless. Again, only while in use can we consider a hammer a tool. Certainly the hammer, even unused, is an artefact. The being of an artefact is not reliant on use, merely on being fashioned by an external agent. Thus if we can imagine actions without artefacts that count as technology, and artefacts that do not count as technology, then including artefacts in one’s definition of technology seems logically unsound.

Theory of Technology

I believe we should separate our terms: tool, instrument, artefact, and technology. Too often these get conflated. Central, to me, is the idea that technology is an active thing, it is a production. Via Pitt, technology requires/consists in work. Via Canguilhem and Brey it is extension. Both of these are verbs: “work” and “extend.” Techné, the root of the word technology, is about craft, making and doing; it is about action and intent.

It is about bringing-forth, or poiesis (à la Heidegger, 2003; Haraway, 2016). To this end, I propose that we define “technology” as praxis, as the mechanisms or techniques used to address problems. “Tools” are artefacts in use, toward the realizing of technological ends. “Instruments” are specific arrangements of artefacts and tools used to bring about particular effects, particularly inscriptions which signify or make meaning of the artefacts’ work (à la Latour, 1987; Barad, 2007).

One critique I can foresee is that it would seem that almost any action taken could thus be considered technology. Eating, by itself, could be considered a mechanism by which the problem of hunger is addressed. I answer this by maintaining that there be at least one step between the problem and solution. There needs to be the putting together of theory (not just desire, but a plan) and action.

So, while I do not consider eating, in and of itself, (a) technology, producing a meal — via gathering, cooking, hunting, or otherwise — would be. This opens up some things as non-human uses of technology that even Shew didn’t consider, like a wolf pack’s coordinated hunting or dolphins’ various clever ways to get rewards from their handlers.

So, does treating technology as praxis help? Does extracting the confounding definitions of artefact, tool, and instrument from the definition of technology help? Does this definition include too many things, and thus lose meaning and usefulness? I posit this definition as a provocation, and I look forward to any discussion the readers of SERRC might have.

Contact details: jearle@vt.edu

References

Barad, K. (2007). Meeting the universe halfway: Quantum physics and the entanglement of matter and meaning. Duke University Press.

Brey, P. (2000). Theories of Technology as Extension of Human Faculties. Metaphysics, Epistemology, and Technology. Research in Philosophy and Technology, 19, 1–20.

Canguilhem, G. (2009). Knowledge of Life. Fordham University Press.

Haraway, D. J. (2016). Staying with the Trouble: Making Kin in the Chthulucene. Duke University Press.

Heidegger, M. (2003). The Question Concerning Technology. In D. Kaplan (Ed.), Readings in the Philosophy of Technology. Rowan & Littlefield.

Ihde, D. (2012). Technics and praxis: A philosophy of technology (Vol. 24). Springer Science & Business Media.

Ihde, D., & Malafouris, L. (2018). Homo faber Revisited: Postphenomenology and Material Engagement Theory. Philosophy & Technology, 1–20.

Latour, B. (1987). Science in action: How to follow scientists and engineers through society. Harvard University Press.

Pitt, J. C. (2000). Thinking about technology. Seven Bridges Press.

Shew, A. (2017). Animal Constructions and Technological Knowledge. Lexington Books.

Williams, D. (2018). “Deleting the Human Clause: A Review of Ashley Shew’s Animal Constructions and Technological Knowledge.” Social Epistemology Review and Reply Collective 7, no. 2: 42-44.

[1] This is the philosophical version of “For sale: Baby shoes. Never worn.”

Author Information: William Davis, California Northstate University, William.Davis@csnu.edu.

Davis, William. “Crisis. Reform. Repeat.” Social Epistemology Review and Reply Collective 7, no. 10 (2018): 37-44.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-422

Yale University, in the skyline of New Haven, Connecticut.
Image by Ali Eminov via Flickr / Creative Commons

 

If you have been involved in higher education in recent decades, you have noticed shifts in how courses are conceived and delivered, and in what students, teachers, and administrators expect of each other. Also, water feels wet. The latter statement offers as much insight as the former. When authors argue the need for new academic models, indeed that a kind of crisis in United States higher education is occurring, faculty and administrators in higher education may be forgiven if we give a yawning reply: not much insight there.

Another Crisis

Those with far more experience in academia than I will likely shake their heads and scoff: demands for shifts in educational models and practices seemingly occur every few years. Not long ago, I was part of the SERRC Collective Judgment Forum (2013) debating the notion that Massive Open Online Courses (MOOCs) are the future of higher education. The possibilities and challenges portended by online education would disrupt (“disruptive technologies” often represent the goals, not the fears, of the California culture where I live and work) the landscape of colleges and universities in the United States and the rest of the world.

Higher education would have to adapt to meet the needs of burgeoning numbers of people (at what point does one become a ‘student’?) seeking knowledge. The system of higher education faced a crisis; the thousands of people enrolling in MOOCs indicated that hordes of students might abandon traditional universities and embrace new styles of learning that matched the demands of twenty-first century life.

Can you count the number of professional crises you have lived through? If the humanities and/or social sciences are your home, then you likely remember quite a few (Kalin, 2017; Mandler, 2015; Tworek, 2013). That number, of course, represents calamity on a local level: crises that affect you, that loom over your future employment. For many academics, MOOCs felt like just such a threat.

Historian of technology Thomas Hughes (1994)[i] describes patterns in the development, change, and emergence of technologies as “technological momentum.” Technological momentum bridges two expansive and nuanced theories of technological development: determinism—the claim that technologies are the crucial drivers of culture—and constructivism—the idea that cultures drive technological change. MOOCs might motivate change in higher education, but the demands of relevant social groups (Pinch and Bijker 1984) would alter MOOCs, too.

Professors ought not fear their jobs would disappear or consolidate so precipitously that the profession itself would be transformed in a few years or decade: the mammoth system of higher education in the U.S. has its own inertia. Change would happen over time; teachers, students, and universities would adapt and exert counter-influences. Water feels wet.

MOOCs have not revolutionized models of higher education in the United States. Behind the eagerness for models of learning that will satisfy increasing numbers of people seeking higher education, of which MOOCs are one example, lies a growing concern about how higher education is organized, practiced, and evaluated. To understand the changes that higher education seems to require, we ought first to understand what it currently offers. Cathy Davidson (2017), as well as Michael Crow and William Dabars (2015), offer such histories of college and university systems in the United States. Their works demonstrate that a crisis in higher education does not approach; it has arrived.

Education in an Age of Flux

I teach at a new college in a university that opened its doors only a decade ago. One might expect that a new college offers boundless opportunity to address a crisis: create a program of study and methods of evaluating that program (including the students and faculty) that will meet the needs of the twenty-first century world. Situated as we are in northern California, and with faculty trained at Research 1 (R1) institutions, our college could draw from various models of traditional higher education like the University of California system or even private institutions (as we are) like Stanford.

These institutions set lofty standards, but do they represent the kinds of institutions that we ought to emulate? Research by Davidson (2017) and by Crow and Dabars (2015) would recommend we not follow the well-worn paths that established universities (those in existence for at least a few decades) in the United States have trodden. The authors seem to adopt the perspective that higher education functions like a system of technology (Hughes 1994); the momentum exerted by such systems has determining effects, but the possibility of directing the course of the systems exists nevertheless.

Michael Crow and William Dabars (2015) propose a design for reshaping U.S. universities that does not require the total abandonment of current models. The impetus for the needed transformation, they claim, is that the foundations of higher education in the U.S. have decayed; universities cannot meet the demands of the era.

The priorities that once drove research institutions have been so assiduously copied, like so much assessment based on memorization and regurgitation that teachers of undergraduates might recognize, that their legibility and efficacy have faded. Crow and Dabars target elite, private institutions like Dartmouth and Harvard as exemplars of higher education that cannot, under their current alignment, meet the needs of twenty-first century students. Concerned as they are with egalitarianism, the authors note that public institutions of higher education born from the Morrill Acts of 1862 and 1890 fare no better at providing for the needs of the nation’s people (National Research Council 1995).

Crow and Dabars’s New American University model (2015, pp. 6-8) emphasizes access, discovery, inclusiveness, and functionality. Education ought to be available to all (access and inclusiveness) that seek knowledge and understanding of the world (discovery) in order to operate within, change, and/or improve it (functionality). The Morrill Acts, on a charitable reading, represent the United States of America’s assertion that the country and its people would mutually benefit from public education available to large swaths of the population.

Crow and Dabars, as well as Davidson (2017), base their interventions on an ostensibly similar claim: more people need better access to resources that will foster intellectual development and permit them to lead more productive lives. The nation benefits when individuals have stimulating engagement with ideas through competent instruction.  Individuals benefit because they may pursue their own goals that, in turn, will ideally benefit the nation.

Arizona State University epitomizes the New American University model. ASU enrolls over 70,000 students—many in online programs—and prides itself on the numbers of students it accepts rather than rejects (compare such a stance with Ivy League schools in the U.S.A.). Crow, President of ASU since 2002, has fostered an interdisciplinary approach to higher education at the university. Numerous institutes and centers (well over 50) have been created to focus student learning on issues/topics of present and future concern. For instance, the Decision Center for a Desert City asks students to imagine a future Phoenix, Arizona, with no, or incredibly limited, access to fresh water.

To engage with a topic that impacts manifold aspects of cities and citizens, solutions will require perspectives from work in disciplines ranging from engineering and the physical sciences to the social sciences and the humanities. The traditional colleges of, e.g., Engineering, Law, Arts and Sciences, etc., still exist at ASU. However, the institutes and centers appear as semi-autonomous empires with faculty from multiple disciplines, and often with interdisciplinary training themselves, leading students to investigate causes of and solutions to existing and emerging problems.

ASU aims to educate broad sections of the population, not just those with imposing standardized test scores and impressive high school GPAs, to tackle obstacles facing our country and our world. Science and Technology Studies, an interdisciplinary program with scholars that Crow and Dabars frequently cite in their text, attracted my interest because its practitioners embrace ‘messy’ problems that require input from, just to name a few, historians, philosophers, political scientists, and sociologists. While a graduate student in STS, I struggled to explain my program of study to others without referencing existing disciplines like philosophy, history, etc. Though I studied in an interdisciplinary program, I still conceptualized education in disciplinary silos.

As ASU graduates more students, and attracts more interdisciplinary scholars as teachers, we ought to observe how their experiment in education impacts the issues and problems their centers and institutes investigate as well as the students themselves. If students learn from interdisciplinary educators, alongside other students that have not been trained exclusively in the theories and practices of, say, the physical sciences or humanities and social sciences, then they might not see difficult challenges like mental illness in the homeless population of major U.S. cities as concerns to be addressed mainly by psychology, pharmacology, and/or sociology.

Cathy Davidson’s The New Education offers specific illustrations of pedagogical practices that mesh well with Crow and Dabars’s message. Both texts urge universities to include larger numbers of students in research and design, particularly students that do not envision themselves in fields like engineering and the physical sciences. Elite, small universities like Duke, where Davidson previously taught, will struggle to scale up to educate the masses of students that seek higher education, even if they desired to do so.

Further, the kinds of students these institutions attract do not represent the majority of people seeking to further their education beyond the high school level. All colleges and universities need not admit every applicant to align with the models presented by Davidson, Crow and Dabars, but they must commit to interdisciplinary approaches. As a scholar with degrees in Science and Technology Studies, I am an eager acolyte: I buy into the interdisciplinary model of education, and I am part of a college that seeks to implement some version of that model.

Questioning the Wisdom of Tradition

We assume that our institutions have been optimally structured and inherently calibrated not only to facilitate the production and diffusion of knowledge but also to seek knowledge with purpose and link useful knowledge with action for the common good. (Crow and Dabars 2015, 179)

The institutions that Crow, Dabars, and Davidson critique as emblematic of traditional models of higher education have histories that range from decades to centuries. As faculty at a college of health sciences established the same year Crow and Dabars published their work, I am both excited by their proposals and frustrated by the attempts to implement them.

My college currently focuses on preparing students for careers in the health sciences, particularly medicine and pharmacy. Most of our faculty are early-career professionals; we come to the college with memories of how departments were organized at our previous institutions.

Because of my background in an interdisciplinary graduate program at Virginia Tech, and my interest in the program’s history (originally organized as the Center for the Study of Science in Society), I had the chance to interview professors that worked to develop the structures that would “facilitate the production and diffusion of knowledge” (Crow and Dabars 2015, 179). Like those early professors at Virginia Tech, our current faculty at California Northstate University College of Health Sciences come from distinct disciplines and have limited experience with the challenges of designing and implementing interdisciplinary coursework. We endeavor to foster collaboration across disciplines, but we learn as we go.

Crow and Dabars’s chapter “Designing Knowledge Enterprises” reminds one of what a new institution lacks: momentum. At meetings spread out over nearly a year, our faculty discussed and debated the nuances of a promotion and retention policy that acknowledges the contributions of all faculty while satisfying administrative demands that faculty titles, like assistant, associate, and full professor, reflect the practices of other institutions. What markers indicate that a scholar has achieved the level of, say, associate professor?

Originally trained in disciplines like biology, chemistry, physics, or English (coming from the interdisciplinary program of Science and Technology Studies, I am a bit of an outlier), our faculty have been disciplined to think in terms of our own areas of study. We have been trained to advance knowledge in increasingly particular specialties. The criteria to determine a faculty member’s level largely match what other institutions have developed. Although the faculty endeavored to create a holistic rubric for faculty evaluation, we confronted an administration more familiar with analytic rubrics. How can a university committee compare the work done by professors of genetics and composition?[ii]

Without institutional memory to guide us, the policies and directives at my college of health sciences develop through collective deliberation on the needs of our students, staff, faculty, college, and community. We do not invent policy. We examine publicly available policies created at and for other institutions of higher learning to help guide our own decisions and proposals. Though we can glean much from elite private institutions, as described by Crow and Dabars, and from celebrated public institutions like the University of California or California State University systems that Davidson draws upon at times in her text, my colleagues know that we are not like those other institutions and systems of higher education.

Our college’s diminutive size (faculty, staff, and students) lends itself to agility: when a policy is flawed, we can quickly recognize a problem and adjust it (not to say we rectify it, but we move in the direction of doing so, e.g., a promotion policy with criteria appropriate for faculty, and administrators, from any department). If we identify student, staff, faculty, or administrator needs that have gone unaddressed, we modify or add policies.

The size of our college certainly limits what we can do: we lack the faculty and student numbers to engage in as many projects as we would like. We do not have access to the financial reservoirs of large or long-standing institutions to purchase all the equipment one finds at a University of California campus, so we must be creative and make use of what materials we do possess or can purchase.

What our college lacks, somewhat counterintuitively, sets us up to carry forth with what Davidson (2017) describes in her chapter “The Future of Learning”:

The lecture is broken, so we must think of better ways to incorporate active learning into the classroom . . . . The traditional professional and apprentice models don’t teach students how to be experts, and so we must look to peer learning and peer mentoring, rich cocurricular experiences, and research to put the student, not the professor or the institution, at the center. (248-9)

Davidson does not contend that lecture has no place in a classroom. She champions flipped classrooms (Armbruster, Patel, Johnson, and Weiss 2009) and learning spaces that emphasize active student engagement (Elby 2001; Johnson and Johnson 1999) with ideas and concepts—e.g., forming and critiquing arguments (Kuhn 2010).

Claiming that universities “must prepare our students for their epic journey . . . . should give them agency . . . to push back [against the world] and not merely adapt to it” (Davidson 2017, 13) sounds simultaneously like fodder for a press release and a call to action. It will likely strike educators, a particular audience of Davidson’s text, as obvious, but that should not detract from its intentions. Yes, students need to learn to adapt and be flexible—their chosen professions will almost certainly transform in the coming decades. College students ought to consider the kinds of lives they want to live and the people they want to be, not just the kinds of professions they wish to pursue.

Ought we demonstrate for students that the university symbolizes a locale to cultivate a perspective of “sympathy, empathy, sensitivity, and responsiveness” (Held 2011, p. 479)? Do we see ourselves in a symbiotic world (Margulis and Sagan) or an adversarial world of competition? Davidson, Crow, and Dabars propose a narrative of connectivity, not just of academic disciplines, but of everyday problems and concerns. Professors ought to continue advancing knowledge, even in particular disciplines, but we must not imagine that we do it alone (individually, in teams, in disciplines, or even in institutions).

After Sifting: What to Keep

Crow and Dabars emphasize the interplay between form and function as integral to developing a model for the New American University. We at California Northstate also scrutinize the structure of our colleges. Though our college of health sciences has a life and physical science department, and a department of humanities and social sciences, our full-time faculty number fewer than twenty. We are on college and university committees together; we are, daily, visible to each other.

With varying levels of success so far, we have developed integrated course-based undergraduate research experiences for our students. In the coming year, we aim to integrate projects in humanities and social sciences courses with those from the physical sciences. Most of our students want to be health practitioners, and we endeavor to demonstrate to them the usefulness of chemistry along with service learning. As we integrate our courses, research, and outreach projects, we aim to provide students with an understanding that the pieces (courses) that make up their education unify through our work and their own.

Team teaching a research methods course with professors of genetics and chemistry in the fall of 2017, I witnessed the rigor and the creativity required for life and physical science research. Students were often confused: the teachers approached the same topics from seemingly disparate perspectives. As my PhD advisor, James Collier, often recounted to me regarding his graduate education in Science and Technology Studies (STS), graduate students were often expected to be the sites of synthesis. Professors came from traditional departments like history, philosophy, and sociology; students in STS needed to absorb the styles and techniques of various disciplines to emerge as interdisciplinarians.

Our students in the research methods class that fall saw a biologist, a chemist, and an STS scholar and likely thought: I want to be none of those things. Why should I learn how to be a health practitioner from professors that do not identify as health practitioners themselves?

When faculty adapt to meet the needs of students pursuing higher education, we often develop the kinds of creole languages elaborated by Peter Galison (1997) to help our students see the connections between traditionally distinct areas of study. Our students, then, should be educated to speak in multiple registers depending on their audience, and we must model that for them. Hailing from disparate disciplines and attempting to teach in ways distinct from how we were taught (e.g., flipped classrooms) and from perspectives still maturing (interdisciplinarity), university faculty have much to learn.

Our institutions, too, need to adapt: traditional distinctions of teaching, scholarship, and service (the hallmarks of many university promotion policies) will demand adjustment if they are to serve as accurate markers of the work we perform. Students, as stakeholders in their own education, should observe faculty as we struggle to become what we wish to see from them. Davidson, Crow, and Dabars argue that current and future crises will not be resolved effectively by approaches that imagine problems as solely technical, social, economic, cultural, or political. For institutions of higher education to serve the needs of their people, nations, and environments (just some of the pieces that must be served), they must acclimate to a world of increasing connectivity. I know: water feels wet.

Contact details: William.Davis@csnu.edu

References

Armbruster, Peter, Maya Patel, Erika Johnson, and Martha Weiss. 2009. “Active Learning and Student-centered Pedagogy Improve Student Attitudes and Performance in Introductory Biology.” Cell Biology Education—Life Sciences Education 8: 203-13.

Bijker, Wiebe. 1993. “Do Not Despair: There Is Life after Constructivism.” Science, Technology and Human Values 18: 113-38.

Crow, Michael, and William Dabars. 2015. Designing the New American University. Baltimore, MD: Johns Hopkins University Press.

Davidson, Cathy. 2017. The New Education: How to Revolutionize the University to Prepare Students for a World in Flux. New York: Basic Books.

Davis, William, Martin Evenden, Gregory Sandstrom and Aliaksandr Puptsau. 2013. “Are MOOCs the Future of Higher Education? A Collective Judgment Forum.” Social Epistemology Review and Reply Collective 2 (7): 23-27.

Elby, Andrew. 2001. “Helping Physics Students Learn How to Learn.” American Journal of Physics (Physics Education Research Supplement) 69 (S1): S54-S64.

Galison, Peter. 1997. Image and Logic: A Material Culture of Microphysics. Chicago, IL: The University of Chicago Press.

Hughes, Thomas. 1994. “The Evolution of Large Technical Systems.” The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology. Cambridge, MA: MIT Press.

Johnson, David, and Roger T. Johnson. 1999. “Making Cooperative Learning Work.” Theory into Practice 38 (2): 67-73.

Kalin, Mike. 2017. “The Crisis in the Humanities: A Self-Inflicted Wound?” Independent School, Winter 2017. https://www.nais.org/magazine/independent-school/winter-2017/the-crisis-in-the-humanities-a-self-inflicted-wou/

Kuhn, Deanna. 2010. “Teaching and Learning Science as Argument.” Science Education 94 (5): 810-24.

Mandler, Peter. 2015. “Rise of the Humanities.” Aeon Magazine, December 17, 2015. https://aeon.co/essays/the-humanities-are-booming-only-the-professors-can-t-see-it

National Research Council. 1995. Colleges of Agriculture at the Land Grant Universities: A Profile. Washington, D.C.: National Academy Press.

Pinch, Trevor and Wiebe Bijker. 1984. “The Social Construction of Facts and Artifacts: Or How the Sociology of Science and the Sociology of Technology Might Benefit Each Other.” Social Studies of Science 14: 399-441.

Smith, Merritt Roe, and Leo Marx, eds. 1994. Does Technology Drive History? The Dilemma of Technological Determinism. Cambridge, MA: MIT Press.

Tworek, Heidi. 2013. “The Real Reason the Humanities Are ‘in Crisis.’” The Atlantic, December 18, 2013. https://www.theatlantic.com/education/archive/2013/12/the-real-reason-the-humanities-are-in-crisis/282441/

[i] My descriptions here of technological determinism and social constructivism lack nuance. For specifics regarding determinism, see the 1994 anthology from Leo Marx and Merritt Smith, Does Technology Drive History? For richer explanations of constructivism, see Bijker (1993), “Do Not Despair: There Is Life after Constructivism,” and Pinch and Bijker (1984), “The Social Construction of Facts and Artifacts: Or How the Sociology of Science and the Sociology of Technology Might Benefit Each Other.”

[ii] Hardly rhetorical, that last question is live on my campus. If you have suggestions, please write me.