
Author Information: Paul R. Smart, University of Southampton, ps02v@ecs.soton.ac.uk

Smart, Paul R. “(Fake?) News Alert: Intellectual Virtues Required for Online Knowledge!.” Social Epistemology Review and Reply Collective 7, no. 2 (2018): 45-55.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Uq


Image by BTC Keychain via Flickr / Creative Commons


Richard Heersmink’s (2018) article, A virtue epistemology of the Internet: Search engines, intellectual virtues, and education, provides an important and timely analysis of the Internet from the standpoint of virtue epistemology.[1] According to Richard, the Internet is an important epistemic resource, but it is one that comes with a range of epistemic hazards. Such hazards, he suggests, motivate a consideration of the ways in which individuals should interact with the Internet.

In particular, Richard appeals to a specific branch of virtue epistemology, known as virtue responsibilism, arguing that certain kinds of cognitive trait (e.g., curiosity and open-mindedness) are useful in helping us extract maximal epistemic benefit from the Internet. Given the utility of such traits, coupled with the epistemic importance of the Internet, Richard suggests that educational policy should be adapted so as to equip would-be knowers with the cognitive wherewithal to cope with the epistemic challenges thrown up by the online environment.

There is, no doubt, something right about all this. Few would disagree with the claim that a certain level of discernment and discrimination is important when it comes to the evaluation of online content. Whether such ‘virtues’ are best understood from the perspective of virtue responsibilism or virtue reliabilism is, I think, a moot point, for I suspect that in the case of both virtue responsibilism and virtue reliabilism what matters is the way in which belief-forming informational circuits are subject to active configuration by processes that may be broadly construed as metacognitive in nature (Smart, in pressa). That, however, is a minor quibble, and it is one that is of little consequence to the issues raised in Richard’s paper.

For the most part, then, I find myself in agreement with many of the assumptions that motivate the target article. I agree that the Internet is an important epistemic resource that is unprecedented in terms of its scale, scope, and accessibility. I also agree that, at the present time, the Internet is far from an epistemically safe environment, and this raises issues regarding the epistemic standing of individual Internet users. In particular, it looks unlikely that the indiscriminate selection and endorsement of online information will do much to bolster one’s epistemic credentials.

We thus encounter something of a dilemma: As an epistemic resource, the Internet stands poised to elevate our epistemic standing, but as an open and public space the Internet provides ample opportunities for our doxastic systems to be led astray. The result is that we are obliged to divide the online informational cornucopia into a treasure trove of genuine facts and a ragbag collection of ‘false facts’ and ‘fake news.’ The information superhighway, it seems, promises to expand our epistemic power and potential, but the road ahead is one that is fraught with a dizzying array of epistemic perils, problems, and pitfalls. What ought we to do in response to such a situation?

It is at this point that I suspect my own views start to diverge from those of the target article. Richard’s response to the dilemma is to focus attention on the individual agent and consider the ways in which an agent’s cognitive character can be adapted to meet the challenges of the Internet. My own approach is somewhat different. It is born of three kinds of doubt: doubts about the feasibility (although not the value) of virtue-oriented educational policies, doubts about the basic validity of virtue theoretic conceptions of knowledge, and doubts about whether the aforementioned dilemma is best resolved by attempting to change the agent as opposed to the environment in which the agent is embedded. As always, space is limited and life is short, so I will restrict my discussion to issues that I deem to be of greatest interest to the epistemological community.

Reliable Technology

Inasmuch as intellectual virtues are required for online knowledge—i.e., knowledge that we possess as a result of our interactions and engagements with the Internet—they are surely only part of a much broader (and richer) story that includes details about the environment in which our cognitive systems operate. In judging the role of intellectual virtue in shielding us from the epistemic hazards of the online environment, it therefore seems important to have some understanding of the actual technologies we interact with.

This is important because it helps us understand the kinds of intellectual virtue that might be required, as well as the efficacy of specific intellectual virtues in helping us believe the truth (and thus working as virtues in the first place). Internet technologies are, of course, many and varied, and it will not be possible to assess their general relevance to epistemological debates in the present commentary. For the sake of brevity, I will therefore restrict my attention to one particular technology: blockchain.

Blockchain is perhaps best known for its role in supporting the digital cryptocurrency, Bitcoin. It provides us with a means of storing data in a secure fashion, using a combination of cryptographic hashing and data linking techniques. For present purposes, we can think of a blockchain as a connected set of data records (or data blocks), each of which contains some body of cryptographically secured data. In the case of Bitcoin, of course, the data blocks contain data of a particular kind, namely, data pertaining to financial transactions. But this is not the only kind of data that can be stored in a blockchain. In fact, blockchains can be used to store information about pretty much anything. This includes online voting records, news reports, sensor readings, personal health records, and so on.

Once data is recorded inside a blockchain, it is very difficult to modify. In essence, the data stored within a blockchain is immutable, in the sense that it cannot be changed without ‘breaking the chain’ of data blocks, and thereby invalidating the data contained within the blockchain. This property makes blockchains of considerable epistemic significance, because it speaks to some of the issues (e.g., concerns about data tampering and malign forms of information manipulation) that are likely to animate epistemological debates in this area.
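To make the ‘breaking the chain’ idea concrete, the following is a minimal Python sketch of a toy blockchain—not drawn from Richard’s paper or from any production system, and with block structure and sample data that are purely illustrative. Each block stores the hash of the block before it, so altering the data in any block causes every later link to fail verification.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents (data plus link to the previous block) as canonical JSON.
    payload = json.dumps({"data": block["data"], "prev": block["prev"]}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def make_chain(records):
    # Build a chain in which each block records the hash of the block before it.
    chain, prev = [], "0" * 64
    for record in records:
        block = {"data": record, "prev": prev}
        chain.append(block)
        prev = block_hash(block)
    return chain

def is_valid(chain):
    # Recompute every link; tampering with any earlier block breaks a later link.
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["sensor reading #1", "sensor reading #2", "sensor reading #3"])
print(is_valid(chain))   # True
chain[0]["data"] = "tampered reading"
print(is_valid(chain))   # False: the downstream hash links no longer match
```

The point of the sketch is simply that the cost of tampering is global rather than local: changing one record invalidates everything recorded after it, which is what underwrites the ‘immutability’ discussed above.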

This does not mean, of course, that the information stored within a blockchain is guaranteed to be factually correct, in the sense of being true and thus yielding improvements in epistemic standing. Nevertheless, there are, I think, reasons to regard blockchain as an important technology relative to efforts to make the online environment a somewhat safer place for would-be knowers. Consider, for example, the title of the present article. Suppose that we wanted to record the fact that a person known as Paul Smart—that’s me—wrote an article with the title:

(Fake?) News Alert: Intellectual Virtues Required for Online Knowledge!

We can incorporate this particular piece of information into a blockchain using something called a cryptographic hash function, which yields a practically unique identifier for the block and all of its contents. In the case of the aforementioned title, the cryptographic hash (as returned by the SHA-256 algorithm[2]) is:

7147bd321e79a63041d9b00a937954976236289ee4de6f8c97533fb6083a8532

Now suppose that someone wants to alter the title, perhaps to garner support for an alternative argumentative position. In particular, let’s suppose they want to claim that the title of the article is:

Fake News Alert: Intellectual Virtues Required for Online Knowledge!

From an orthographic perspective, of course, not much has changed. But the subtlety of the alteration cannot be used to sow confusion about the actual wording of the original title—the title that I intended for the present article. (Neither can it be used to cast doubt on the provenance of the paper—the fact that the author of the paper was a person called Paul Smart.) To see this, note that the hash generated for the ‘fake’ title looks nothing like the original:

cc05baf2fa7a439674916fe56611eaacc55d31f25aa6458b255f8290a831ddc4
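For readers who want to reproduce this kind of check, a minimal Python sketch of the comparison is given below. The exact digests depend on the precise characters and encoding of the input string, so the sketch illustrates the method rather than guaranteeing reproduction of the particular values printed above.

```python
import hashlib

original = "(Fake?) News Alert: Intellectual Virtues Required for Online Knowledge!"
altered  = "Fake News Alert: Intellectual Virtues Required for Online Knowledge!"

def sha256_hex(text):
    # SHA-256 digest of the UTF-8 encoded string, returned as a hexadecimal string.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# An orthographically tiny change yields a completely different digest
# (the 'avalanche effect'), which is what makes tampering easy to detect.
print(sha256_hex(original))
print(sha256_hex(altered))
print(sha256_hex(original) == sha256_hex(altered))  # False
```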

It is this property that, at least in part, makes blockchains useful for recording information that might otherwise be prone to epistemically malign forms of information manipulation. Imagine, for the sake of argument, that climatological data, as recorded by globally distributed sensors, was stored in a blockchain. The immutability of such data makes it extremely difficult for anyone to manipulate the data in such a way as to falsely confirm or deny the reality of year-on-year changes in global temperature. Neither is it easy to alter information pertaining to the provenance of existing data records, i.e., information about when, where, and how such data was generated.

None of this should delude us into thinking that blockchain technology is a panacea for Internet-related epistemic problems—it isn’t! Neither does blockchain obviate the need for agents to exercise at least some degree of intellectual virtue when it comes to the selection and evaluation of competing data streams. Nevertheless, there is, I think, something that is of crucial epistemological interest and relevance here—something that makes blockchain and other cybersecurity technologies deserving of further epistemological attention. In particular, such technologies may be seen as enhancing the epistemic safety of the online environment, and thus perhaps reducing the need for intellectual virtue.

In this sense, the epistemological analysis of Internet technologies may be best approached from some variant of modal epistemology—e.g., epistemological approaches that emphasize the modal stability of true beliefs across close possible worlds (Pritchard, 2009, chap. 2). But even if we choose to countenance an approach that appeals to issues of intellectual virtue, there is still, I suggest, a need to broaden the analytic net to include technologies that (for the time being at least) lie beyond the bounds of the individual cognitive agent.

Safety in Numbers

“From an epistemic perspective,” Richard writes, “the most salient dimension of the Internet is that it is an information space” (Heersmink, 2018, p. 5). Somewhat surprisingly, I disagree. Although it is obviously true that the Internet is an information space, it is not clear that this is its most salient feature, at least from an epistemological standpoint. In particular, there is, I suggest, a sense in which the Internet is more than just an information space. As is clear from the explosive growth in all things social—social media, social networks, social bots, and so on—the Internet functions as a social technology, yielding all manner of opportunities for people to create, share and process information in a collaborative fashion. The result, I suggest, is that we should not simply think of the Internet as an information space (although it is surely that), we should also view it as a social space.

Viewing the Internet as a social space is important because it changes the way we think about the epistemic impact of the Internet, relative to the discovery, production, representation, acquisition, processing and utilization of knowledge. Smart (in pressb), for example, suggests that some online systems function as knowledge machines, which are systems in which some form of knowledge-relevant processing is realized by a socio-technical mechanism, i.e., a mechanism whose component elements are drawn from either the social (human) or the technological realm.

An interesting feature of many of these systems is the way in which the reliable (or truth-conducive) nature of the realized process is rooted in the socio-technical nature of the underlying (realizing) mechanism. When it comes to human computation or citizen science systems, for example, user contributions are typically solicited from multiple independent users as a means of improving the reliability of specific epistemic outputs (Smart, in pressb; Smart and Shadbolt, in press; Watson and Floridi, 2018). Such insights highlight the socially-distributed character of at least some forms of online knowledge production, thereby moving us beyond the realms of individual, agent-centric analyses.

On a not altogether unrelated note, it is important to appreciate the way in which social participation can itself be used to safeguard online systems from various forms of malign intervention. One example is provided by the Google PageRank algorithm. In this case, any attempt to ‘artificially’ elevate the ranking assigned to specific contributions (e.g., a user’s website) is offset by the globally-distributed nature of the linking effort, coupled with the fact that links to a specific resource are themselves weighted by the ranking of the resource from which the link originates. This makes it difficult for any single agent to subvert the operation of the PageRank algorithm.
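To illustrate why a single agent struggles to game this kind of weighting, here is a much simplified, PageRank-style power iteration in Python. The link graph, damping factor, and iteration count are illustrative assumptions on my part; Google’s production algorithm is far more elaborate and is not reproduced here.

```python
# A simplified PageRank-style power iteration over a small, made-up link graph.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                share = damping * rank[page] / len(pages)
                for p in pages:
                    new_rank[p] += share
                continue
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

# A page's rank depends on who links to it and on the rank of those linkers,
# so repeated links from a single low-ranked page do little to inflate a target.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "spam": ["C", "C", "C"],  # one agent pointing repeatedly at C
}
print(pagerank(links))
```

The design point is the one made in the paragraph above: because the weight a page passes on is itself a function of the weight it has received from the rest of the network, subverting the ranking requires coordinated effort across many independently ranked sources, not just one determined agent.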

Even ostensibly non-social technologies can be seen to rely on the distributed and decentralized nature of the Internet. In the case of blockchain, for example, multiple elements of a peer-to-peer network participate in the computational processes that make blockchain work. In this way, the integrity of the larger system is founded on the collaborative efforts of an array of otherwise independent computational elements. And it is this that (perhaps) allows us to think of blockchain’s epistemically-desirable features as being rooted in something of a ‘social’ substrate.

All of this, I suggest, speaks in favor of an approach that moves beyond a preoccupation with the properties of individual Internet users. In particular, there seems to be considerable merit in approaching the Internet from a more socially-oriented epistemological perspective. It is easy to see the social aspects of the Internet as lying at the root of a panoply of epistemic concerns, especially when it comes to the opportunities for misinformation, deception, and manipulation. But in light of the above discussion, perhaps an alternative, more positive, take on the Internet (qua social space) starts to come into sharper focus. This is a view that highlights the way in which certain kinds of online system can work to transform a ‘vice’ into a ‘virtue,’ exploiting the social properties of the Internet for the purposes of dealing with reliability-related concerns.

Image by Dariorug via Flickr / Creative Commons


Filter Bubblicious

Search engines form one of the focal points of Richard’s analysis, and, as with previous work in this area, Richard finds at least some aspects of their operation to be highly problematic. A particular issue surfaces in respect of personalized search. Here, Richard’s analysis echoes the sentiments expressed by other epistemologists who regard personalized search algorithms as of dubious epistemic value.

In fact, I suspect the consensus that has emerged in this area fails to tell the whole story about the epistemic consequences of personalized search. Indeed, from a virtue epistemological position, I worry that epistemologists are in danger of failing to heed their own advice—prematurely converging on a particular view without proper consideration of competing positions. In my new-found role as the virtue epistemologist’s guardian angel (or should that be devil’s advocate?), I will attempt to highlight a couple of reasons why I think more empirical research is required before we can say anything useful about the epistemological impact of personalized search algorithms.

My first worry is that our understanding of the extent to which search results and subsequent user behavior are affected by personalization is surprisingly poor. Consider, for example, the results of one study, which attempted to quantify the effect of personalization on search results (Hannak et al., 2013). Using an empirical approach, Hannak et al. (2013) report a demonstrable personalization effect, with 11.7% of search results exhibiting differences due to personalization. Interestingly, however, the effect of personalization appeared to be greater for search results with lower rankings; highly ranked results (i.e., those appearing at the top of a list of search results) appeared to be much less affected by personalization.

This result is interesting given the observation that college students “prefer to click on links in higher positions even when the abstracts are less relevant to the task at hand” (Heersmink, 2018, p. 6). From one perspective, of course, this tendency looks like a vice that jeopardizes the epistemic standing of the individual user. And yet, from another perspective, it looks like the preference for higher ranked search results is poised to negate (or at least reduce) the negative epistemological effects of personalized search. What we seem to have here, in essence, is a situation in which one kind of ‘intellectual vice’ (i.e., a tendency to select highly-ranked search results) is playing something of a more positive (virtuous?) role in mitigating the negative epistemological sequelae of a seemingly vicious technology (i.e., personalized search).

None of this means that the epistemic effects of personalized search are to the overall benefit of individual users; nevertheless, the aforementioned results do call for a more nuanced and empirically informed approach when considering the veritistic value of search engines, as well as other kinds of Internet-related technology.

A second worry relates to the scope of the epistemological analysis upon which judgements about the veritistic value of search engines are based. In this case, it is unclear whether analyses that focus their attention on individual agents are best placed to reveal the full gamut of epistemic costs and benefits associated with a particular technology, especially one that operates in the socio-technical ecology of the Internet. To help us understand this worry in a little more detail, it will be useful to introduce the notion of mandevillian intelligence (Smart, in pressc; Smart, in pressd).

Mandevillian intelligence is a specific form of collective intelligence in which the cognitive shortcomings and epistemic vices of the individual agent are seen to yield cognitive benefits and epistemic virtues at the collective or social level of analysis, e.g., at the level of collective doxastic agents (see Palermos, 2015) or socio-epistemic systems (see Goldman, 2011). According to this idea, personalized search systems may play a productive role in serving the collective cognitive good, providing a means by which individual vices (e.g., a tendency for confirmation bias) are translated into something that more closely resembles an epistemic virtue (e.g., greater cognitive coverage of a complex space of thoughts, ideas, opinions, and so on). Consider, for example, the way in which personalized search may help to focus individual attention on particular bodies of information, thereby restricting access to a larger space of ideas, opinions, and other information.

While such forms of ‘restricted access’ or ‘selective information exposure’ are unlikely to yield much in the way of an epistemic benefit for the individual agent, it is possible that by exploiting (and, indeed, accentuating!) an existing cognitive bias (e.g., confirmation bias), personalized search may work to promote cognitive diversity, helping to prevent precipitant forms of cognitive convergence (see Zollman, 2010) and assisting with the epistemically optimal division of cognitive labor (see Muldoon, 2013). This possibility reveals something of a tension in how we interpret or evaluate the veritistic value of a particular technology or epistemic practice. In particular, it seems that assessments of veritistic value may vary according to whether our epistemological gaze is directed towards individual epistemic agents or the collective ensembles in which those agents are situated.

The Necessity of Virtue

As Richard notes, virtue epistemology is characterized by a shift in emphasis, away from the traditional targets of epistemological analysis (e.g., truth, justification and belief) and towards the cognitive properties of would-be knowers. “Virtue epistemology,” Richard writes, “is less concerned with the nature of truth and more concerned with the cognitive character of agents” (Heersmink, 2018, p. 2). This is, no doubt, a refreshing change, relative to the intellectual orientation of traditional philosophical debates.

Nevertheless, I assume that virtue epistemologists still recognize the value and priority of truth when it comes to issues of epistemic evaluation. Someone who holds false beliefs is not the possessor of knowledge, and this remains the case irrespective of whatever vices and virtues the agent has. In other words, it does not matter how careful, attentive and assiduous an agent is in selecting and evaluating information, if what the agent believes is false, they simply do not know.

What seems to be important in the case of virtue epistemology is the role that intellectual virtue plays in securing the truth of an agent’s beliefs. In particular, the central feature of virtue epistemology (at least to my mind) is that the truth of an agent’s beliefs stems from the exercise of intellectual virtue. It is thus not the case that truth is unimportant (or less important) when it comes to issues of positive epistemic standing; rather, what matters is the role that intellectual virtue plays in establishing the truth of an agent’s beliefs. An agent is thus a bona fide knower when they believe the truth and the truth in question is attributable to some aspect of their cognitive character, specifically, a cognitive trait (virtue responsibilism) or cognitive faculty (virtue reliabilism).

What then makes something a vice or virtue seems to be tied to the reliability of token instantiations of processes that are consistent with an agent’s cognitive character. Intellectual virtues are thus “cognitive character traits that are truth-conducive and minimalise error” (Heersmink, 2018, p. 3), while intellectual vices are characterized as “cognitive character traits that are not truth-conducive and do not minimalise error” (Heersmink, 2018, p. 3). It is this feature of the intellectual virtues—the fact that they are, in general, reliable (or give rise to reliable belief-relevant processes)—that looks to be important when it comes to issues of epistemic evaluation.

So this is what I find problematic about virtue theoretic approaches to knowledge. (Note that I am not an epistemologist by training, so this will require a generous—and hopefully virtue-inspiring—swig of the ole intellectual courage.) Imagine a state of affairs in which the Internet was (contrary to the present state of affairs) a perfectly safe environment—one where the factive status of online information was guaranteed as a result of advances in cyber-security techniques and intelligent fact-checking services. Next, let us imagine that we have two individuals, Paul and Sophia, who differ with respect to their cognitive character. Paul is the less virtuous of the two, unreflectively and automatically accepting whatever the Internet tells him. Sophia is more circumspect, wary of being led astray by (the now non-existent) fake news.

Inasmuch as we see the exercise of intellectual virtue as necessary for online knowledge, it looks unlikely that poor old Paul can be said to know very much. This is because the truth of Paul’s beliefs is not the result of anything that warrants the label ‘intellectual virtue.’ Paul, of course, does have a lot of true beliefs, but the truth of these beliefs does not stem from the exercise of his intellectual virtues—if, indeed, he has any. In fact, inasmuch as there is any evidence of virtue in play here, it is probably best attributed to the technologies that work to ensure the safety of the online environment. The factive status of Paul’s beliefs thus has more to do with the reliability of the Internet than it does with the elements of his cognitive character.

But is it correct to say that Paul has no online knowledge in this situation? Personally, I do not have this intuition. In other words, in a perfectly safe environment, I can see no reason why we should restrict knowledge attributions to agents whose beliefs are true specifically as the result of intellectual virtue. My sense is that even the most unreflective of agents could be credited with knowledge in a situation where there was no possibility of them being wrong. And if that is indeed the case, then why insist that it is only the exercise of intellectual virtue that underwrites positive epistemic standing?

After all, it seems perfectly possible, to my mind, that Sophia’s epistemic caution contributes no more to the minimization of error in an epistemically benign (i.e., safe) environment than does Paul’s uncritical acceptance. (In fact, given the relative efficiency of their doxastic systems, it may very well be the case that Sophia ends up with fewer true beliefs than Paul.) It might be claimed that this case is invalidated by a failure to consider the modal stability of an agent’s beliefs relative to close possible worlds, as well as perhaps their sensitivity to counterfactual error possibilities. But given the way in which the case is characterized, I suggest that there are no close possible worlds that should worry us—the cybersecurity and fact checking technologies are, let us assume, sufficiently robust as to ensure the modal distance of those worrisome worlds.

One implication of all this is to raise doubts about the necessity of intellectual virtue, relative to our conceptual understanding of knowledge. If there are cases where intellectual virtue is not required for positive epistemic standing, then intellectual virtue cannot be a necessary condition for knowledge attribution. And if that is the case, then why should intellectual virtue form the basis of an approach that is intended to deal with the epistemic shortcomings of the (contemporary) Internet?

Part of the attraction of virtue epistemology, I suspect, is the way in which a suite of generally reliable processes are inextricably linked to the agent who is the ultimate target of epistemic evaluation. This linkage, which is established via the appeal to cognitive character, helps to ensure the portability of an agent’s truth-tracking capabilities—it helps to ensure, in other words, that wherever the agent goes their reliable truth-tracking capabilities are sure to follow.

However, in an era where our doxastic systems are more-or-less constantly plugged into a reliable and epistemically safe environment, it is not so clear that agential capabilities are relevant to epistemic standing. This, I suggest, raises doubts about the necessity of intellectual virtue in securing positive epistemic status, and it also (although this is perhaps less clear) encourages us to focus our attention on some of the engineering efforts (as opposed to agent-oriented educational programs) that might be required to make the online world an epistemically safer place.

Conclusion

What, then, should we make of the appeal to virtue epistemology in our attempt to deal with the epistemic hazards of the Internet? My main concern is that the appeal to virtue epistemology (and the emphasis placed on intellectual virtue) risks an unproductive focus on individual human agents at the expense of both the technological and social features of the online world. This certainly does not rule out the relevance of virtue theoretic approaches as part of our attempt to understand the epistemic significance of the Internet, but other approaches (e.g., modal reliabilism, process reliabilism, distributed reliabilism, and systems-oriented social epistemology) also look to be important.

Personally, I remain agnostic with regard to the relevance of different epistemological approaches, although I worry about the extent to which virtue epistemology is best placed to inform policy-related decisions (e.g., those relating to education). In particular, I fear that by focusing our attention on individual agents and issues of intellectual virtue, we risk overlooking some of the socio-epistemic benefits of the Internet, denigrating a particular technology (e.g., personalized search) on account of its failure to enhance individual knowledge, while ignoring the way a technology contributes to more collective forms of epistemic success.

In concluding his thought-provoking paper on virtue epistemology and the Internet, Richard suggests that “there is an important role for educators to teach and assess [intellectual] virtues as part of formal school and university curricula, perhaps as part of critical thinking courses” (Heersmink, 2018, p. 10). I have said relatively little about this particular issue in the present paper. For what it’s worth, however, I can see no reason to object to the general idea of Internet-oriented educational policies. The only caveat, perhaps, concerns the relative emphasis that might be placed on the instillation of intellectual virtue as opposed to the inculcation of technical skills, especially those that enable future generations to make the online world a safer place.

No doubt there is room for both kinds of pedagogical program (assuming they can even be dissociated). At the very least, it seems to me that the effort to resolve a problem (i.e., engineer a safer Internet) is just as important as the effort to merely cope with it (i.e., acquire a virtuous cognitive character). But, in any case, when it comes to education and learning, we should not lose sight of the fact that the Internet is itself something that is used for educational purposes. Perhaps, then, the more important point about education and the Internet is not so much the precise details of what gets taught, so much as the issue of whether the Internet (with all its epistemic foibles) is really the best place to learn.

Contact details: ps02v@ecs.soton.ac.uk

References

Goldman, A. I. (2011). A guide to social epistemology. In A. I. Goldman and D. Whitcomb (Eds.), Social Epistemology: Essential Readings, pp. 11–37. New York, New York, USA: Oxford University Press.

Hannak, A., P. Sapiezynski, A. Molavi Kakhki, B. Krishnamurthy, D. Lazer, A. Mislove, and C. Wilson (2013). Measuring personalization of Web search. In D. Schwabe, V. Almeida, H. Glaser, R. Baeza-Yates, and S. Moon (Eds.), Proceedings of the 22nd International Conference on World Wide Web, Rio de Janeiro, Brazil, pp. 527–538. ACM.

Heersmink, R. (2018). A virtue epistemology of the Internet: Search engines, intellectual virtues, and education. Social Epistemology 32 (1), 1–12.

Muldoon, R. (2013). Diversity and the division of cognitive labor. Philosophy Compass 8 (2), 117–125.

Palermos, S. O. (2015). Active externalism, virtue reliabilism and scientific knowledge. Synthese 192 (9), 2955–2986.

Pritchard, D. (2009). Knowledge. Basingstoke, England, UK: Palgrave Macmillan.

Smart, P. R. (in pressa). Emerging digital technologies: Implications for extended conceptions of cognition and knowledge. In A. J. Carter, A. Clark, J. Kallestrup, O. S. Palermos, and D. Pritchard (Eds.), Extended Epistemology. Oxford, UK: Oxford University Press.

Smart, P. R. (in pressb). Knowledge machines. The Knowledge Engineering Review.

Smart, P. R. (in pressc). Mandevillian intelligence. Synthese.

Smart, P. R. (in pressd). Mandevillian intelligence: From individual vice to collective virtue. In A. J. Carter, A. Clark, J. Kallestrup, O. S. Palermos, and D. Pritchard (Eds.), Socially Extended Epistemology. Oxford, UK: Oxford University Press.

Smart, P. R. and N. R. Shadbolt (in press). The World Wide Web. In J. Chase and D. Coady (Eds.), The Routledge Handbook of Applied Epistemology. New York, New York, USA: Routledge.

Watson, D. and L. Floridi (2018). Crowdsourced science: Sociotechnical epistemology in the e-research paradigm. Synthese 195 (2), 741–764.

Zollman, K. J. S. (2010). The epistemic benefit of transient diversity. Erkenntnis 72 (1), 17–35.

[1] This work is supported under SOCIAM: The Theory and Practice of Social Machines. The SOCIAM Project is funded by the UK Engineering and Physical Sciences Research Council (EPSRC) under grant number EP/J017728/1 and comprises the Universities of Southampton, Oxford and Edinburgh.

[2] See http://www.xorbin.com/tools/sha256-hash-calculator [accessed: 30th January 2018].

Author Information: Damien Williams, Virginia Tech, damienw7@vt.edu

Williams, Damien. “Deleting the Human Clause: A Review of Ashley Shew’s Animal Constructions and Technological Knowledge.” Social Epistemology Review and Reply Collective 7, no. 2 (2018): 42-44.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Uh

Animal Constructions and Technological Knowledge is Ashley Shew’s debut monograph and in it she argues that we need to reassess and possibly even drastically change the way in which we think about and classify the categories of technology, tool use, and construction behavior. Drawing from the fields of anthropology, animal studies, and philosophy of technology and engineering, Shew demonstrates that there are several assumptions made by researchers in all of these fields—assumptions about intelligence, intentionality, creativity and the capacity for novel behavior.

Many of these assumptions, Shew says, were developed to guard against the hazard of anthropomorphizing the animals under investigation, and to prevent those researchers ascribing human-like qualities to animals that don’t have them. However, this has led to us swinging the pendulum too far in the other direction, engaging in “a kind of speciesist arrogance” which results in our not ascribing otherwise laudable characteristics to animals for the mere fact that they aren’t human.[1]

Shew says that we consciously and unconsciously appended a “human clause” to all of our definitions of technology, tool use, and intelligence, and this clause’s presumption—that it doesn’t really “count” if humans aren’t the ones doing it—is precisely what has to change.

In Animal Constructions, Shew’s tone is both light and intensely focused, weaving together extensive notes, bibliography, and index with humor, personal touches, and even poignancy, all providing a sense of weight and urgency to her project. As she lays out the pieces of her argument, she is extremely careful about highlighting and bracketing out her own biases, throughout the text; an important fact, given that the whole project is about the recognition of assumptions and bias in human behavior. In Chapter 6, when discussing whether birds can be said to understand what they’re doing, Shew says that she

[relies] greatly on quotations…because the study’s authors describe crow tool uses and manufacture using language that is very suggestive about crows’ technological understanding and behaviors—language that, given my particular philosophical research agenda, might sound biased in paraphrase.[2]

In a chapter 6 endnote, Shew continues to touch on this issue of bias and its potential to become prejudice, highlighting the difficulty of cross-species comparison, and noting that “we also compare the intelligence of culturally and economically privileged humans with that of less privileged humans, a practice that leads to oppression, exploitation, slavery, genocide, etc.”[3] In the conclusion, she elaborates on this somewhat, pointing out the ways in which biases about the “right kinds” of bodies and minds have led to embarrassments and atrocities in human history.[4] As we’ll see, this means that the question of how and why we categorize animal construction behaviors as we do has implications which are far more immediate and crucial than research projects.

The content of Animal Constructions is arranged in such a way as to make a strong case for the intelligence, creativity, and ingenuity of animals, throughout, but it also provides several contrast cases in which we see that there are several animal behaviors which might appear to be intentional, but which are the product of instinct or the extended phenotype of the species in question.[5] According to Shew, these latter cases do more than act as exceptions that test the rule; they also provide the basis for reframing the ways in which we compare the behaviors of humans and nonhuman animals.

If we can accept that construction behavior exists on a spectrum or continuum with tool use and other technological behaviors, and we can come to recognize that animals such as spiders and beavers make constructions as a part of their instinctual, DNA-based, phenotypical natures, then we can begin to interrogate whether the same might not be true for the things that humans make and do. If we can understand this, then we can grasp that “the nature of technology is not merely tied to the nature of humanity, but to humanity in our animality” (emphasis present in original).[6]

Using examples from animal studies reaching back several decades, Shew discusses experimental observations of apes, monkeys, cetaceans (dolphins and whales), and birds. Each example set moves further away from the kind of animals we see as “like us,” and details how each group possesses traits and behaviors humans tend to think only exist in ourselves.[7] Chimps and monkeys test tool-making techniques and make plans; dolphins and whales pass hunting techniques on to their children and cohort, have names, and observe social rituals; birds make complex tools for different scenarios, adapt them to novel circumstances, and learn to lie.[8]

To further discuss the similarities between humans and other animals, Shew draws on theories about the relationship between body and mind, such as embodiment and extended mind hypotheses from philosophy of mind, which say that the kind of mind we are is intimately tied to the kind of body we are. She pairs this with work from disability studies which forwards the conceptual framework of “bodyminds,” saying that body and mind aren’t simply linked; they’re the same.[9] This is the culmination of her descriptions of animal behaviors and a prelude to a redefinition and reframing of the concepts of “technology” and “knowledge.”

Editor's note - My favourite part of this review roundtable is scanning through pictures of smart animals

Dyson the seal. Image by Valerie via Flickr / Creative Commons


In the book’s conclusion, Shew suggests placing all the products of animal construction behavior on a two-axis scale, where the x-axis is “know-how” (the knowledge it takes to accomplish a task) and the y-axis is “thing knowledge” (the information about the world that gets built into constructed objects).[10] When we do this, she says, we can see that every made thing, be it object or social construct (a passage with important implications) falls somewhere outside of the 0, 0 point.[11] This is Shew’s main thrust throughout Animal Constructions: That humans are animals and our technology is not what sets us apart or makes us special; in fact, it may be the very thing that most deeply ties us to our position within the continuum of nature.

For Shew, we need to be less concerned about the possibility of incorrectly thinking that animals are too much like us, and far more concerned that we’re missing the ways in which we’re still and always animals. Forgetting our animal nature and thinking that there is some elevating, extra special thing about humans—our language, our brains, our technologies, our culture—is arrogant in the extreme.

While Shew says that she doesn’t necessarily want to consider the moral implications of her argument in this particular book, it’s easy to see how her work could be foundational to a project about moral and social implications, especially within fields such as animal studies or STS.[12] And an extension like this would fit perfectly well with the goal she lays out in the introduction, regarding her intended audience: “I hope to induce philosophers of technology to consider animal cases and induce researchers in animal studies to think about animal tool use with the apparatus provided by philosophy of technology.”[13]

In Animal Constructions, Shew has built a toolkit filled with fine arguments and novel arrangements that should easily provide the instruments necessary for anyone looking to think differently about the nature of technology, engineering, construction, and behavior, in the animal world. Shew says that “A full-bodied approach to the epistemology of technology requires that assumptions embedded in our definitions…be made clear,”[14] and Animal Constructions is most certainly a mechanism by which to deeply delve into that process of clarification.

Contact details: damienw7@vt.edu

References

Shew, Ashley. Animal Constructions and Technological Knowledge. Lanham, MD: Lexington Books, 2017.

[1] Ashley Shew, Animal Constructions and Technological Knowledge p. 107

[2] Ibid., p. 73

[3] Ibid., p. 89, n. 7

[4] Ibid., pp. 107–122

[5] Ibid., pp. 107–122

[6] Ibid., p. 19

[7] On page 95, Shew makes brief mention of various instances of octopus tool use; more of these examples would really drive the point home.

[8] Shew, pp. 35–51; 53–65; 67–89

[9] Ibid., p. 108

[10] Ibid., pp. 110–119

[11] Ibid., p. 118

[12] Ibid., p. 16

[13] Ibid., p. 11

[14] Ibid., p. 105

Author Information: Robert Frodeman, University of North Texas, robert.frodeman@unt.edu

Frodeman, Robert. “The Politics of AI.” Social Epistemology Review and Reply Collective 7, no. 1 (2018): 48-49.

The pdf of the article provides specific page references. Shortlink: https://wp.me/p1Bfg0-3To

This robot, with its evocatively cute face, would turn its head toward the most prominent human face it could see.
Image from Jeena Paradies via Flickr / Creative Commons


New York Times columnist Thomas Friedman has been a cheerleader for technology for decades. He begins an early 2018 column by declaring that he wants to take a break from the wall-to-wall Trump commentary. Instead, ‘While You Were Sleeping’ consists of an account of the latest computer wizardry that’s occurring under our noses. What Friedman misses is that he is still writing about Trump after all.

His focus is on quantum computing. Friedman revisits a lab he had been to a mere two years earlier; on the earlier visit he had come away impressed, but feeling that “this was Star Wars stuff — a galaxy and many years far away.” To his surprise, however, the technology had moved quicker than anticipated: “clearly quantum computing has gone from science fiction to nonfiction faster than most anyone expected.”

Friedman hears that quantum computers will work 100,000 times faster than the fastest computers today, and will be able to solve unimaginably complex problems. Wonders await – such as the NSA’s ability to crack the hardest encryption codes. Not that there is any reason for us to worry about that; the NSA has our best interests at heart. And in any case, the Chinese are working on quantum computing, too.

Friedman does note that this increase in computing power will lead to the supplanting of “middle-skill and even high-skill work.” Which he allows could pose a problem. Fortunately, there is a solution at hand: education! Our educational system simply needs to adapt to the imperatives of technology. This means not only K-12 education, and community colleges and universities, but also lifelong worker training. Friedman reports on an interview with IBM CEO Ginni Rometty, who told him:

“Every job will require some technology, and therefore we’ll need to revamp education. The K-12 curriculum is obvious, but it’s the adult retraining — lifelong learning systems — that will be even more important…. Some jobs will be displaced, but 100 percent of jobs will be augmented by AI.”

Rometty notes that technology companies “are inventing these technologies, so we have the responsibility to help people adapt to it — and I don’t mean just giving them tablets or P.C.s, but lifelong learning systems.”

For that’s how it works: people adapt to technology, rather than the other way around. And what if our job gets outsourced or taken over by a machine? Friedman then turns to education-to-work expert Heather McGowan: workers “must reach up and learn a new skill or in some ways expand our capabilities as humans in order to fully realize our collaborative potential.” Education must become “a continuous process where the focused outcome is the ability to learn and adapt with agency as opposed to the transactional action of acquiring a set skill.” It all sounds rather rigorous, frog-marched into the future for our own good.

Which should have brought Friedman back to Trump. Friedman, Rometty, and McGowan are failing to connect their prescriptions to the results of the last election. Clinton lost the crucial states of Pennsylvania, Wisconsin, and Michigan by a total of 80,000 votes. Clinton lost these states in large part because of the disaffection of white, non-college educated voters, people who have been hurt by previous technological development, who are angry about being marginalized by the ‘system’, and who pine for the good old days, when America was Great and they had a decent paycheck. Of course, Clinton knew all this, which is why her platform, Friedman-like, proposed a whole series of worker re-education programs. But somehow the coal miners were not interested in becoming computer programmers or dental hygienists. They preferred to remain coal miners – or actually, not coal miners. And Trump rode their anger to the White House.

Commentators like Friedman might usefully spend some of their time speculating on how our politics will be affected as worker displacement moves up the socio-economic scale.

At root, Friedman and his cohorts remain children of the Enlightenment: universal education remains the solution to the political problems caused by run-amok technological advance. This, however, assumes that ‘all men are created equal’ – and not only in their ability, but also in their willingness to become educated, and then reeducated again, and once again. They do not seem to have considered the possibility that a sizeable minority of Americans—or any other nationality—will remain resistant to constant epistemic revolution, and that, rather than engaging in ‘lifelong learning,’ they are likely to channel their displacement by artificial intelligence into angry, reactionary politics.

And as AI ascends the skills level, the number of the politically roused is likely to increase, helped along by the demagogue’s traditional arts, now married to the focus-group phrases of Frank Luntz. Perhaps the machinations of turning ‘estate tax’ into ‘death tax’ won’t fool the more sophisticated. It’s an experiment that we are running now, with a middle-class tax cut just passed by Congress, but which diminishes each year until it turns into a tax increase in a few years. But how many will notice the latest scam?

The problem, however, is that even if those of us who live in non-shithole countries manage to get with the educational program, that still leaves “countries like Egypt, Pakistan, Iran, Syria, Saudi Arabia, China and India — where huge numbers of youths are already unemployed because they lack the education for even this middle-skill work THAT’S [sic] now being automated.” A large cohort of angry, displaced young men ripe for apocalyptic recruitment. I wonder what Friedman’s solution is to that.

The point that no one seems willing to raise is whether it might be time to question the cultural imperative of constant innovation.

Contact details: robert.frodeman@unt.edu

References

Friedman, Thomas. “While You Were Sleeping.” New York Times. 16 January 2018. Retrieved from https://www.nytimes.com/2018/01/16/opinion/while-you-were-sleeping.html

Author Information: Emma Stamm, Virginia Tech, stamm@vt.edu

Stamm, Emma. “Retooling ‘The Human.’” Social Epistemology Review and Reply Collective 7, no. 1 (2018): 36-40.

The pdf of the article gives specific page numbers. Shortlink: https://wp.me/p1Bfg0-3SW

Ashley Shew’s Animal Constructions and Technological Knowledge challenges philosophers of technology with the following provocation: What would happen if we included tools made and used by nonhuman animals in our broad definition of “technology?”

Throughout Animal Constructions, Shew makes the case that this is more than simply an interesting question. It is, she says, a necessary interrogation within a field that may well be suffering from a sort of speciesist myopia. Blending accounts from a range of animal case studies — including primates, cetaceans, crows, and more — with pragmatic theoretical analysis, Shew demonstrates that examining animal constructions through a philosophical lens not only expands our awareness of the nonhuman world, but has implications for how humans should conceive of their own relationship with technology.

At the beginning of Animal Constructions, Shew presents us with “the human clause,” her assessment of “the idea that human beings are the only creatures that can have or do use technology” (14). This misconception stems from the notion of homo faber, “(hu)man the maker” (14), which “sits at the center of many definitions of technology… (and) is apparent in many texts theorizing technology” (14).

It would appear that this precondition for technology, long taken as dogma by technologists and philosophers alike, is less stable than has often been assumed. Placing influential ideas from philosophers of technology in dialogue with empirical field and (to a lesser extent) laboratory studies conducted on animals, Shew argues that any thorough philosophical account of technology not only might, but must include objects made and used by nonhuman animals.

Animal Constructions and Technological Knowledge lucidly demonstrates this: by the conclusion, readers may wonder how the intricate ecosystem of animal tool-use has been so systematically excluded from philosophical treatments of the technical. Shew has accomplished much in recasting a disciplinary norm as a glaring oversight — although the oversight may be forgivable, considering the skill set required to correct it. The author’s ambitions demand not only fluency with interdisciplinary research methods, but acute sensitivity to each of the disciplines the book mobilizes.

Animal Constructions is a philosophical text wholly committed to representing science and technology on their own terms while speaking to a primarily humanities-based audience, a balance its author strikes gracefully. Indeed, Shew’s transitions from the purely descriptive to the interpretive are, for the most part, seamless. For example, in her chapter on cetaceans, she examines the case of dolphins trained to identify man-made objects of a certain size category (60), noting that the success of this initiative indicates that dolphins have the human-like capacity to think in abstract categories. This interpretation feels natural and very reasonable.

Importantly, the studies selected are neither conceptually simple, nor do they appear cherry-picked to serve her argument. A chapter titled “Spiderwebs, Beaver Dams, and Other Contrast Cases” (91) explores research on animal constructions that do not entirely fit the author’s definitions of technology. Here, it is revealed that while this topic is necessarily complicated for techno-philosophers, these complexities do not foreclose the potential for the nonhuman world to provide humans with a greater awareness of technology in theory and practice.

Ambiguous Interpretations

That being said, in certain parts, the empirical observations Shew uses to make her argument seem questionable. In a chapter on ape and primate cases, readers are given the tale of Santino, a chimpanzee in a Swedish zoo with the pesky habit of storing stones specifically to throw at visitors (40). Investigators declared this behavior “the first unambiguous evidence of forward-planning in a nonhuman animal” (40) — a claim that may seem spurious, since many of us have witnessed dogs burying bones to dig up in the future, or squirrels storing food for winter.

However, as with every case study in the book, the story of Santino comes from well-documented, formal research, none of which was conducted by the author herself. If factual claims such as this one were discovered to be erroneous, that would not be a flaw of the book itself. Moreover, so many examples are used that the larger arguments of Animal Constructions will hold up even if parts of the science on which it relies come to be revised.

In making the case for animals so completely, Animal Constructions and Technological Knowledge is a success. The book also makes a substantial contribution with the methodological frameworks it gives to those interested in extending its project. Animal Constructions is as much conceptual cartography as it is a work of persuasion: Shew not only orients readers to her discipline — she does not assume readerly familiarity with its academic heritage — but provides a map that philosophers may use to situate the nonhuman in their own reflection on technology. This is largely why Animal Constructions is such a notable text for 21st century philosophy, as so many scholars are committed to rethinking “the human” in the wake of recent innovations in technoscience.

Animal Knowledge

Animal Constructions is of particular interest to critical and social epistemologists. Its opening chapters introduce a handful of ideas about what defines technical knowledge, concepts that bear on the author’s assessment of animal activity. Historically, Shew writes, philosophers of technology have furnished us with two types of accounts of technical knowledge. The first sees technology as constituting a unique case for philosophers (3).

In this view, the philosophical concerns of technology cannot be reduced to those of science (or, indeed, any domain of knowledge to which technology is frequently seen as subordinate). “This strain of thought represents a negative reaction to the idea that philosophy is the handmaiden of science, that technology is simply ‘applied science,’” she writes (3). It is a line of reasoning that relies on a careful distinction between “knowing how” and “knowing that,” claiming that technological knowledge is, principally, skillfulness in the first: know-how, or knowledge about “making or doing something” (3) as opposed to the latter “textbook”-ish knowledge. Here, philosophy of technology is demarcated from philosophy of science in that it exists outside the realm of theoretical epistemologies, i.e., knowledge bodies that have been abstracted from contextual application.

If “know-how” is indeed the foundation for a pragmatic philosophy of technology, the discipline would seem to openly embrace animal tools and constructions in its scope. After all, animals clearly “know how” to engage the material world. However, as Shew points out, most technology philosophers who abide by this dictum in fact lean heavily on the human clause. “This first type of account nearly universally insists that human beings are the sole possessors of technical knowledge” (4), she says, referencing the work of philosophers A. Rupert Hall, Edwin T. Layton, Walter Vincenti, Carl Mitcham, and Joseph C. Pitt (3) as evidence.

The human clause is also present in the second account, although it is not nearly so deterministic. This camp has roots in the philosophy of science (6) and “sees knowledge as embodied in the objects themselves” (6). Here, Shew draws from the theorizations of Davis Baird, whose concept “thing knowledge” — “knowledge that is encapsulated in devices or otherwise materially instantiated” (6) — recurs throughout the book’s chapters specifically devoted to animal studies (chapters 4, 5, 6 and 7).

Scientific instruments are offered as perhaps the most exemplary cases of “thing knowledge,” but specialized tools made by humans are far from the only knowledge-bearing objects. The parameters of “thing knowledge” allow for more generous interpretations: Shew offers that Baird’s ideas include “know-how that is demonstrated or instantiated by the construction of a device that can be used by people or creatures without the advanced knowledge of its creators” (6). This is a wide category indeed, one that can certainly accommodate animal artefacts.

Image from Sergey Rodovnichenko via Flickr / Creative Commons

 

The author adapts this understanding of thing knowledge, along with Davis Baird’s five general ideals for knowledge (detachment, efficacy, longevity, connection, and objectivity; 6), as a scale within which some artefacts made and used by animals may be thought of as “technologies” and others not. Positioned against “know-how,” “thing knowledge” serves as the other axis of this framework (112-113). Equally considered is the question of whether animals can set intentions and engage in purpose-driven behavior. Shew suggests that animal constructions which result from responses to stimuli, instinctive behavior, or other byproducts of evolutionary processes may not count as technology in the same way that artefacts which seem to come from purposiveness and forward planning would (6-7).

Noting that intentionality is a tenuous issue in animal studies (because we can’t interview animals about their reasons for making and using things), Shew indicates that intentionality can, at least in part, be inferred by exploring related areas, including “technology products that encode knowledge,” “problem-solving,” and “innovation” (9). These characteristics are taken up throughout each case study, albeit in different ways and to different ends.

At its core, the manner in which Animal Constructions grapples with animal cognition as a precursor to animal technology is an epistemological inquiry into the nonhuman. In the midst of revealing her aims, Shew writes: “this requires me to address questions about animal minds — whether animals set intentions and how intentionality evolved, whether animals are able to innovate, whether they can problem solve, how they learn — as well as questions about what constitutes technology and what constitutes knowledge” (9). Her answer to the animal-specific queries is a clear “yes,” although this yes comes with multiple caveats.

Throughout the text, Shew notes the propensity of research and observation to alter objects under study, clarifying that our understanding of animals is always filtered through a human lens. With a nod to Thomas Nagel’s famous essay “What Is It Like To Be A Bat?” (34), she maintains that we do not, in fact, know what it is like to be a chimpanzee, crow, spider or beaver. However, much more important to her project is the possibility that caution around perceived categorical differences, often foregrounded in the name of scholarly self-reflexivity, can hold back understanding of the nonhuman.

“In our fear of anthropomorphization and desire for a sparkle of objectivity, we can move too far in the other direction, viewing human beings as removed from the larger animal kingdom,” she declares (16).

Emphasizing kinship and closeness over remoteness and detachment, Shew’s pointed proclamations about animal life rest on the overarching “yes:” yes, animals solve problems, innovate, and set intentions. They also transmit knowledge culturally and socially. Weaving these observations together, Shew suggests that our anthropocentrism represents a form of bias (108); as with all biases, it stifles discourse and knowledge production for the fields within which it is imbricated — here, technological knowledge.

While this work explicitly pertains to technology, the lingering question of “what constitutes knowledge overall?” does not vanish in the details. Shew’s take on what constitutes animal knowledge has immediate relevance to work on knowledge made and manipulated by nonhumans. By the book’s end, it is evident that animal research can help us unhinge “the human clause” from our epistemology of the technical, facilitating a radical reinvestigation of both tool use and materially embodied knowledge.

Breaking Down Boundaries

The book’s approach also has implications for taxonomies that divide not only humans from animals, but humans and animals from entities outside the animal kingdom. Although it is beyond the scope of this text, the methods of Animal Constructions can easily be applied to digital “minds” and artificial general intelligence, along with plant and fungus life. (One can imagine a smooth transition from the discussion of spider web-spinning, p. 92, to the casting of spores by algae and mushrooms.) In that it excavates taxonomies and affirms the violence done by categorical delineations, Animal Constructions bears a surface resemblance to the work of Michel Foucault and Donna Haraway. However, its commitment to positive knowledge places it in a tradition that more boldly supports the possibilities of knowing than do the legacies of Foucault and Haraway. That is to say, the offerings of Animal Constructions are not designed to self-deconstruct or ironically self-reflect.

In its investigation of the flaws of anthropocentrism, Animal Constructions implies a deceptively straightforward question: what work does “the human clause” do for us? In other words, what has led “the human” to become so inexorably central to our technological and philosophical consciousness? Shew does not address this head-on, but she does give readers plenty of material to begin answering it for themselves. And perhaps they should: while the text resists ethical statements, there is an ethos to this particular question.

Applied at the societal level, an investigation of the roots of “the human clause” could be leveraged toward democratic ends. If we do, in fact, include tools made and used by nonhuman animals in our definition of technology, it may undermine the popular image of technological knowledge as a sort of “magic” or erudite specialization accessible only to certain types of minds. There is clear potential for this epistemological position to be advanced in the name of social inclusivity.

Whether or not readers detect a social project among the conversations engaged by Animal Constructions, its relevance to future studies is undeniable. The maps provided by Animal Constructions and Technological Knowledge do not tell readers where to go, but they will certainly come in useful for anybody exploring the nonhuman territories of the 21st century. Indeed, Animal Constructions and Technological Knowledge is not only a substantive offering to the philosophy of technology, but also a set of tools whose true power may only be revealed in time.

Contact details: stamm@vt.edu

References

Shew, Ashley. Animal Constructions and Technological Knowledge. Lanham, MD: Lexington Books, 2017.

Author Information: Robyn Toler, University of Dallas

Toler, Robyn. “The Progress and Technology of City Life.” Social Epistemology Review and Reply Collective 6, no. 2 (2017): 78-85.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3t9

Please refer to:


Image credit: Simon & His Camera, via flickr

The Progress and Technology of City Life

The products of today’s technology deserve scrutiny before they are accepted. The mass media and pop culture exert a powerful influence on Americans, young and old alike. Opportunities for quiet reflection are few and far between despite our much-trumpeted age of convenience. Scientific advancement is not necessarily accompanied by wisdom; in embracing the new and shiny, it is easy to cast aside the tested and proven. Sometimes we tear down fences before finding out why they were built. The cool, rational consideration such scrutiny requires is all the harder to engage in because of the onslaught of the sensational. Still, prudence suggests that the best course is to take a step back and ponder the choices before us.

Under the influence of the “culture industry,” as described by Theodor Adorno and others, perpetual distraction and artificial consensus crowd out of people’s lives the solitude and individuality required for cultivating critical, independent thought, and the courage to follow their own reasoned convictions. Sensitization to the mechanisms used by the culture industry can help audiences more effectively resist them, and preserve or regain an authentic experience and view of life. Some view technological advancements as unqualified goods by virtue of their nature as modern and scientific; however, the gains produced by these technologies bring their own attendant complications, such as compromised privacy, continuous availability to the workplace, and the stress of an externally imposed life rhythm over a natural, personal ebb and flow of work and leisure.

This article challenges the argument that technological advances have made work easier, created more time for leisure, decreased stress, increased satisfaction in relationships, simplified tasks, and made jobs less time consuming, resulting in a net benefit to lived experience.

While people in rural as well as urban locations in many parts of the world can easily be involved with technology, including being connected to the internet, city life has some clear contrasts with country life. Population density is higher in the city. The environment is noisier. Traffic, construction equipment, and the many people in close proximity all contribute to the volume. Limited green spaces reduce exposure to a variety of natural features like plants, birds, and bodies of water. The city is also filled with opportunities to interface with technology. Subway tickets, toll tags, video displays, elevators and escalators, point-of-sale terminals, smart phones, identification badges for areas with controlled access, passports, and games are just a few of the high-tech items an average person deals with in a normal day in the city.

Adorno’s writings cover a wide range of topics, including music and literary criticism, aesthetics, mass culture, Hegel, existentialism, sociology, epistemology, and metaphysics, and they examine the Western philosophical tradition especially from Kant onward; however, his work on what he termed “The Culture Industry” is especially pertinent to understanding the dynamics of life in the city. In his article “How to Look at Television,” from The Culture Industry, Adorno reveals his thoughts on the influence of popular culture. He warned in 1957, soon after the advent of television, that it can produce intellectual passivity and gullibility (166). Current consumers of internet entertainment should heed his admonition, guard their powers of reason, and be wary of technology’s ability to hypnotize and immobilize (cf. Reider 2015).

Boredom, unlike the hypnotic effect Adorno warned against, is not only an unavoidable part of life; it is also a wellspring of creativity. Overscheduling, avoiding monotony at all costs, robs potential artists, poets, scientists, and inventors of their motivation to generate plans and projects. Boredom has developed a bad reputation as a companion to depression and vice and a precursor to mischief; but “research suggests that falling into a numbed trance allows the brain to recast the outside world in ways that can be productive and creative at least as often as they are disruptive” (Carey 2008). As another researcher indicated,

When children have nothing to do now, they immediately switch on the TV, the computer, the phone or some kind of screen. The time they spend on these things has increased. But children need to have stand-and-stare time, time imagining and pursuing their own [emphasis added] thinking processes or assimilating their experiences through play or just observing the world around them. [It is this sort of thing that stimulates the imagination while the screen] tends to short circuit that process and the development of creative capacity (Richardson 2013, 1013).

Some would argue that “switching on the TV” is “doing something;” however, Richardson asserts that imagining, observing, and mentally processing experiences are more valuable.

Leisure and the Workweek

Max Gunther’s The Weekenders takes an amusing yet probing look at the leisure time of Americans. It is particularly interesting to note that this book was published in 1964. While some of the pastimes available have changed, human beings are mostly the same. One of the most distinctive features of city life is scheduling. Buses run on a schedule. School bells, business meetings, and garden clubs stay on schedule so their participants can meet their next obligations. The use of leisure time and how it is incorporated into schedules is especially revealing. Insights can be gained by studying the movement from an organic life rhythm to an arbitrarily imposed “five days on, two off” schedule, the perceived pressure to be productive during hours away from one’s paid employment, and the tendency to be connected continuously to one’s work through the technological mediation of devices such as smart phones (cf. Drain and Strong 2015).

People in the pre-internet years tended to look at their leisure time, primarily the weekend, as wholly separate from and different from the workweek. Of course, there were always the workaholics, but as a national trend, the weekend seemed different. People wore different clothing and participated in different activities, all with a different attitude. Sixty-two hours were partitioned off from “work” to be spent in “leisure.” Divisions during the week between different professions were blurred on the weekend, and everyone, except that unfortunate segment whose businesses hummed on throughout the weekend, took up similar pursuits. “[City dwellers] can no longer work and play according to the rhythms of personal mood or need but are all bound to the same gigantic rhythm: five days on, two off” (Gunther 10-12; cf. Ellul 1964; Kok 2015). While one would expect that all this “leisure time” provided by the efficiency of industrialization would lead to a slower pace conducive to relaxation, quite the contrary seems to be the case.

Families plunged into furious activity on those days ostensibly set aside for leisure. The weekend was by and for the middle class. Ads were aimed almost exclusively at them. Students were weekenders in training. Sports, play, eating and drinking, cultural arts, church, and civic volunteering all took their share of available time. Although these activities sound pleasant, the real result was a vague insecurity and bewildering Monday fatigue. As Gunther appropriately pondered, it is not clear whether the fatigue was generated by the energy expended in reaching goals or by pent-up, unrelieved tension (Gunther 13-15).

Travel also occupied the weekenders of the 60s. Weekend trips, day trips, outings to events and places of interest, and visits with friends vied for attention. This may have been genuine curiosity about the world and fellowship with neighbors and loved ones, or something else. All that travel and dining out was expensive, even back then. Aggressive driving increased on the weekends, too. It is unclear what drove this restlessness, what inner devil goaded those mid-century weekenders, what they were so desperately seeking. Yet, it is clear that there were high expectations for leisure time, and somehow despite all the recreation, those two days off frequently disappointed (Gunther 16, 21). Technological advances have continued, but the expectations and restlessness do not seem to have abated.

Close and So Far Away

Even though residents in the city are in close proximity to one another, the trend toward social media and away from direct personal interaction has grown. Relationships in the city are heavily influenced by technological mediation. Perhaps limited access to natural settings pushes city dwellers indoors, and into virtual spaces. Sites designed to facilitate dating, networking, creative pursuits, and games, among other activities, have sprung up. Online, the “surfing” is always fine, because someone else is constantly adding new, tantalizing information. It is the epitome of content “crowdsourcing.” Social networking sites provide crowds of people who create content, usually out of their own experiences, for the entertainment of others as they browse. Potential romantic interests, job openings, and decorating ideas are perpetually at the ready, with new ones popping up moment by moment. This makes it difficult to break away. Suspense and expectation create enticement. Every genre of social media has its niche and its devotees, but perhaps the most pervasive and invasive of them all is Facebook, with its plethora of “friends.” It is ironic that in cities with their high population density, online, virtual “meetings” are so popular.

Begun as a forum for college students, this social media giant has grown to include anyone who wants to join, with a non-stop, real-time feed of “Status Updates.” Founded by Mark Zuckerberg and some of his college classmates at Harvard, the site had over 1,200 registrants within 24 hours of its launch. Private investors became involved and the company expanded. Facebook acquired a feed aggregator and then the photo-sharing site Instagram. The company made its initial public offering (IPO) in 2012, valued at $104 billion. A new search feature was rolled out in 2013, and changes continue, including an opt-out feature that makes it the user’s responsibility to raise security settings from their lower, default positions. Today one in seven people is a member (Zeevi 2013). The desire to know instantly about the next update to appear in the feed (a great picture, word of something earthshaking in a “friend’s” life, a joke, a political rallying cry) can be addicting. In cities large and small, people often observe each other online in addition to, or instead of, from their front porches.

The moment-to-moment observation of others’ activities through monitoring their posts is not the only aspect of social media that makes it enticing, though. The ability to stay connected with all the people you have ever known—provided they are on Facebook—is a big draw. Consider the evolution of the address book. Years ago a small booklet next to the telephone held all the names, addresses, and phone numbers of the people one most frequently called or corresponded with. As social circles expanded and families became more mobile, address books expanded as well. The inconvenience of constant marking out and erasing information of friends and relatives that moved led to loose-leaf notebooks and index card files. The Rolodex system with its easily interchangeable cards was born, facilitating an ever-growing collection of constantly changing contact information.

Now leap ahead to the electronic version of the address book, the Palm Pilot. It was a utilitarian miracle and a status symbol in one! Then, just as carrying an address-book gadget plus a cellular phone became tiresome, the technology merged to produce one convenient device to do both jobs: the smart phone. Cloud data storage debuted to protect data from hardware problems and to make information accessible anywhere with connectivity, cellular or Wi-Fi. Mail made a similar metamorphosis from postal mail (“snail mail,” referencing its comparatively slow delivery time) to electronically delivered “email,” to web-based systems like Gmail. Now networking platforms like Facebook, and LinkedIn for professionals, are widening the messaging options further. Contacts are accumulated over time, surviving any number of physical moves by users, and stored remotely for ubiquitous access. For better or worse, the days of hunting for a scrap of paper with someone’s number on it are over.

Even though city dwellers have all those connections with all those people, and they could be interacting face to face with those nearby, they all too often choose online forums over personal meetings. A large segment of their connectivity is online instead of in person, and it has a negative side. Virtual personalities allow a spectrum of falsity ranging from simply curating one’s image to advantage, to manufacturing a fully fake identity. The self-absorbed use Facebook to promote themselves, not connect with others. Furthermore, instead of enhancing the ability to read social cues and body language, excessive time online erodes these crucial social skills (Kiesbye 55, 58-9). Facebook actually interferes with friendships rather than strengthening them. It seems that social needs would be more effectively met by simply arranging to meet in person, in the city environment with its physical proximity and variety of venues, instead of retreating behind a computerized mediator.

Some cite city crime statistics as a reason to retreat from malls, parks, and other public places. But new categories of crime and vice have arisen or proliferated on the internet. In contrast to face-to-face encounters on sidewalks and in elevators (though the difference is not absolute), it is difficult to know with whom you are dealing on social media. Despite assurances by site administrators, malevolent users can easily misrepresent themselves, luring the young and naïve into dangerous, sometimes fatal, encounters. Teens’ desire for premature autonomy and willingness to lie to their parents in order to sneak off and meet someone surreptitiously complete the potentially tragic scenario (Luna 196-8). “Sexting” over cell phones, and now “sextortion,” have entered the picture. Teenagers are particularly vulnerable to this kind of deception. They are notoriously “easy to intimidate, and embarrassed to tell their parents” when their judgment proves poor and plans go awry (Luna 196-7). A compromising photo that a young person carelessly snaps of himself (or that a companion captures digitally) can be parlayed by an online predator into a file of self-incriminating pornography.

Privacy and anonymity can be viewed two ways in the city. There can be anonymity in a crowd, yet we are captured on camera throughout the day at businesses, traffic lights, and elsewhere. With the exception of satellite surveillance, that type of tracking is rare outside the city. It is difficult to estimate how we modify our behavior because of this “watching.” City life also presents an opportunity for deception and abuse in privacy breaches. Privacy issues in public, in private, and on social networking sites concern politicians and culture critics. The high quantity of pictures posted on Facebook is a valuable source of data for anyone trying to match faces with identities.

In a study led by Alessandro Acquisti of Carnegie Mellon University, information from social media sites including Facebook was combined easily with cloud computing and facial recognition software to identify students on a campus (Luna 121). Whether or not students object to this, their parents may find it disconcerting that the children they have just released into the next phase of their growing independence can be surveilled in this way. Citizens who value their privacy will have a difficult time maintaining it in the age of Facebook, whether or not they are or ever have been subscribers. Friend lists yield copious amounts of information, and trails remain to anyone mentioned or pictured. Even non-subscribers can gain access through search engines (Luna 204-5).

Those who think they are too old or too cautious to become crime victims should consider how their online personas could still have negative repercussions for them. Potential employers and college admissions personnel routinely check their applicants’ presences on social networking sites. Students are careless about their passwords, allowing “friends” to make embarrassing posts in their names. Employers and administrators do not know or care who created the posts, but when they see information that makes a user look bad, they are likely to move on to more appealing candidates to fill their available positions (Luna 199). Facebook does not cause people to lose opportunities, but it guarantees that if you make a mistake, many people will see it.

If predators, lowered productivity, narcissism, and shortened attention spans are not enough incentive to reconsider one’s entanglement with social media, here is a puzzle to ponder: anyone actively attempting to conceal his identity or whereabouts will have a difficult time in the age of social media. This is a coin with two sides. While it seems appealing for local law enforcement and federal Homeland Security to be able to track and locate a suspect, honest citizens who just want to remain anonymous may rightly feel violated knowing that their every traffic decision, subway stop, casual comment, and convenience store errand is at least captured, and possibly monitored in real time. Movies and television shows like Fox’s popular series 24 demonstrate the use of this technology and promote its acceptance, even demand for it. Before capitulating to the easy solution of simply watching everybody all the time, think about whether that kind of scrutiny is really desirable or acceptable.

A Perfect Day

For a comparison between today’s city life full of electronic gadgets and software and a time before computers, or even electricity, had reached much of rural America, the following poem depicts a different way of life. Neither electronic entertainment nor boredom would have intruded on the grandmother portrayed in this poem. While physically busy, she would have had more opportunity for contemplation than most modern city dwellers.

Perfect Day

Grandmother, on a winter’s day,
Milked the cows and fed them hay;
Slopped the hogs, saddled the mule,
And got the children off to school.
Did a washing, mopped the floors,
Washed the windows and did some chores,
Cooked a dish of home-dried fruit,
Pressed her husband’s Sunday suit.
Swept the parlor, made the bed,
Baked a dozen loaves of bread,
Split some firewood and lugged it in
Enough to fill the kitchen bin.
Cleaned the lamps and put in oil,
Stewed some apples she thought might spoil,
Churned the butter, baked a cake,
Then exclaimed, “For mercy sake,
The calves have got out of the pen!”
Went out, and chased them in again.
Gathered the eggs and locked the stable,
Back to the house and set the table,
Cooked a supper that was delicious,
And afterward washed all the dishes.
Fed the cat, and sprinkled the clothes
Mended a basket full of hose,
Then opened the organ and began to play:
“When you come to the end of a perfect day!”—Author Unknown (Kaetler 54-5).

Cooking, cleaning, farm chores, organization, time management, nurturing behaviors, and aesthetics are all on display in this narration. Facebook would have been a shallow substitute for the creative work accomplished on this day, leaving the industrious grandmother with the same vague dissatisfaction as Gunther’s “Weekenders,” mentioned earlier.

Technological progress is here to stay, with or without a given individual’s active participation; but users can take steps to stay in control of their data and their minds. The advantages of dialing back technology are delightfully and creatively narrated in the book Better Off. In it, author Eric Brende chronicles the lifestyle journey he and his wife made in search of the minimal amount of electronics and machinery necessary to optimize life for them. After spending an extended time living in a rural community that rejected almost all labor-saving devices, they concluded that they were happier and “better off” without most of the expensive, encumbering accouterments of 21st century life that most of us take for granted. He ends his book by saying:

… in all cases [technology] must serve our needs, not the reverse, and we must determine these needs before considering the needs for technology. The willingness and the wisdom to do so may be the hardest ingredients to come by in this frenetic age. Perhaps what is needed most of all, then, are conditions favorable to them: quiet around us, quiet inside us, quiet born of sustained meditation and introspection. We must set aside time for it, in our churches, in our studies, in our hearts. Only when we have met this last requisite, I suspect, will technology yield its power and become a helpful handservant (Brende 232-3).

Brende and his wife found the life balance that suited them away from the city before rejoining it. His emphasis on quiet and control is apt.

Adorno stated that modern mass culture has been transformed “into a medium of undreamed of psychological control. The repetitiveness, the selfsameness, and the ubiquity of modern mass culture tend to make for automatized reactions and to weaken the forces of individual resistance” (Adorno 2006, 160; cf. Guizzo 2015; Scalambrino 2015). Preserving solitude, concentration, independent thought, and courage is worth the effort it takes to resist the popular culture. The culture industry will continue to usurp the territory of life wherever it is allowed to, within or outside of the city; but vigilance can give it boundaries.

References

Adorno, Theodor W. The Culture Industry: Selected Essays on Mass Culture. London: Routledge, 2006.

Brende, Eric. Better Off: Flipping the Switch on Technology. New York: HarperCollins Publishers, 2004.

Carey, Benedict. “You’re Bored But Your Brain is Tuned In.” New York Times August 5, 2008. http://www.nytimes.com/2008/08/05/health/research/05mind.html?_r=0 (accessed May 13, 2014).

Drain, Chris, and Richard Charles Strong. “Situated Mediation and Technological Reflexivity: Smartphones, Extended Memory, and Limits of Cognitive Enhancement.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 187-197. London: Rowman & Littlefield International, 2015.

Ellul, Jacques. The Technological Society. Translated by J. Wilkinson. New York: Vintage Books, 1964.

Guizzo, Danielle. “The Biopolitics of the Female: Constituting Gendered Subjects through Technology.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 145-155. London: Rowman & Littlefield International, 2015.

Gunther, Max. The Weekenders. Philadelphia, Pennsylvania: J. B. Lippincott Company, 1964.

Kiesbye, Stefan, editor. Are Social Networking Sites Harmful? Detroit, Michigan: Greenhaven Press, 2011.

Kok, Arthur. “Labor and Technology: Kant, Marx, and the Critique of Instrumental Reason.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 137-144. London: Rowman & Littlefield International, 2015.

Firestone, Lisa. “Are You Present for Your Children?” Sussex Publishers, LLC. May 5, 2014. http://www.psychologytoday.com/blog/compassion-matters/201405/are-you-present-your-children (accessed May 10, 2014).

Luna, J. J. How to Be Invisible. New York: Thomas Dunne Books. 2012.

Reider, Patrick. “The Internet and Existentialism: Kierkegaardian and Hegelian Insights.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 59-69. London: Rowman & Littlefield International, 2015.

Richardson, Hannah. “Children Should Be Allowed to Get Bored, Expert Says.” BBC. March 22, 2013. http://www.bbc.com/news/education-21895704 (accessed May 14, 2014).

Scalambrino, Frank. “What Control? Life at the Limits of Power Expression.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 101-111. London: Rowman & Littlefield International, 2015.

Snopes.com. “Erma Bombeck’s Regrets: A dying Erma Bombeck penned a list of misprioritizations she’d come to regret?” September 29, 2009. http://www.snopes.com/glurge/bombeck.asp. (accessed May 13, 2014).

Snopes.com. “Grandma’s Wash Day: Description of how laundry was done in bygone days?” August 23, 2008. http://www.snopes.com/glurge/washday.asp (accessed May 13, 2014).

Unknown. Grandparents.net. Australian Media Pty. Ltd., 2000. http://www.grandparents.net/perfectday.htm (accessed May 13, 2014).

Zeevi, Daniel. “The Ultimate History of Facebook [INFOGRAPHIC].” SocialMediaToday. February 21, 2013. http://socialmediatoday.com/daniel-zeevi/1251026/ultimate-history-facebook-infographic (accessed May 13, 2014).

Zendaya, Sheryl Burk. Between U and Me. New York, New York: Disney-Hyperion Books, 2013.

Zuidervaart, Lambert, “Theodor W. Adorno.” The Stanford Encyclopedia of Philosophy (Winter 2015 Edition), edited by Edward N. Zalta. https://plato.stanford.edu/archives/win2015/entries/adorno/. (accessed May 13, 2014).