Visioneering a Better Future: The Hieroglyph Project, STS, and the Future of Science and Technology, Joshua Earle

SERRC —  November 10, 2014

Author Information: Joshua Earle, Virginia Tech

Earle, Joshua. “Visioneering a Better Future: The Hieroglyph Project, STS, and the Future of Science and Technology.” Social Epistemology Review and Reply Collective 3, no. 12 (2014): 67-83.

The PDF of the article gives specific page numbers. Shortlink:

Please refer to:


On October 2nd, 2014, authors, scientists, policy experts and journalists gathered to ask how the future of science and technology intersects with fiction and storytelling. Future Tense—a partnership between the New America Foundation, Arizona State University and Slate magazine—and Issues in Science and Technology hosted “Can We Imagine Our Way to a Better Future?” at the National Academies in Washington D.C. Inspired by Neal Stephenson’s 2011 piece “Innovation Starvation” and the resultant Hieroglyph: Stories and Visions for a Better Future anthology, the panelists tackled questions ranging from the ethics of robot babysitters and drones, to who will get to imagine for the human race, how neuroscience might improve lives and the ethics therein, surveillance and privacy concerns, and the place of fiction in tackling wicked problems. I will take you through a brief description of the Hieroglyph project, then introduce each of the panels with embedded videos, and then discuss some of the issues raised and some criticism of the discussions (including reactions of some of the people participating in the online discussion during the event), as well as identify places where Science, Technology, and Society scholars may be able to leverage our own expertise to effect some beneficial change. 

The Hieroglyph Anthology

A Hieroglyph is a simple, recognizable symbol, on whose meaning everyone agrees. The Hieroglyph anthology was created to use the power of narrative, of fiction, to try to inspire scientists to take the next moonshot, to imagine the next big thing, to give them a recognizable goal that everyone can agree is possible.

Conceived by Neal Stephenson, author of the bestselling Baroque Cycle, Snow Crash, and other novels, together with the president of Arizona State University and its Center for Science and the Imagination, the stories in the Hieroglyph anthology are visions of ways we can create future technologies that help humanity. There were a few rules for the stories that went into the book: no hyperspace, no holocausts, and no hackers. No hyperspace meant that they wanted the future tech to be possible. Stephenson’s piece, “Atmosphaera Incognita,” about a twenty-kilometer tower (based on a 2003 proposal by Geoff Landis and Vincent Denis), for example, uses steel, not some new, likely impossible, material. Basically, they wanted to make sure that engineers wouldn’t tune out at the first hint of handwavium. No holocausts meant that they wanted to avoid dystopias and ruinous futures, and no hackers meant that instead of writing stories where ragtag groups of people take advantage of current technology, the authors should be thinking about the creators of new technologies.

The anthology runs the gamut, from Stephenson’s story about the tower and the design challenges it brings, to a story about the power of literacy and the ethics of drugs that change how the brain works. Others deal with surveillance, others with 3D printing, still others with solar energy and hotels in Antarctica. Each posits a technology (or more than one) that can change the future for the better. That is not to say there is no conflict, or that every technology is shown as entirely positive, without its share of dangers and drawbacks. We see people killed by drones, but also saved by them. We see oppressive surveillance drive people to do dangerous things, but also people helped by it. All in their own way, the stories show us new paths our technology can take that could better the future of all mankind.

Can We Imagine Our Way to a Better Future?

After introductions by Kevin Finneran, director of the National Academies’ Committee on Science, Technology, and Public Policy, and Ed Finn, Director of ASU’s Center for Science and the Imagination, the day was kicked off by a short speech by Neal Stephenson. He outlined what the Hieroglyph project is, and noted some things he had learned since he helped create the project in 2011. You can see his talk below.

The only real issue I have with this particular talk is that I’m not sure I buy the premise that we are no longer doing big things. No, we haven’t put people on the moon in many years, we don’t have people on Mars, and we can’t live anywhere beyond a space station. But we have put a 2-ton rolling laboratory with a laser and HD cameras on Mars (and got to watch it land live online), and smashed atoms so completely that previously theoretical particles have been revealed. The Curiosity rover and the Large Hadron Collider are big things. So are SpaceX, 3D printers, prosthetics with sensory feedback, brain-computer interfaces, and crowdsourced science. If putting people in hazardous locations is how Stephenson categorizes “big things,” then it is probably true that we’ve gotten away from that. Yes, our space program has faltered since the seventies (though I would say the ISS, Cassini, Messenger, MAVEN, Rosetta and others show that we haven’t completely given up). And yes, NASA’s budget has been shrinking pretty steadily over the years, but it has plans to put people on Mars (and bring them back) by 2030 (as mentioned later in the seminar), so this lack of big things may just be a pause, or a “nudging of the steering wheel” by the political powers that be toward more pressing matters. I think that there is plenty of science and tech out there to be excited about. Perhaps the issue is more that science and technology are catching up to science fiction so quickly (and in many ways surpassing it, as was a fear about this anthology) that it’s hard for science fiction writers to imagine a possible future without it becoming science fact before the book gets published.

That said, I fully agree that the scarce resource we need to get these dormant clever ideas going is the technoscience CEO who can enroll people, overcome political inertia, and, most importantly, loosen the grip that large corporations and wealthy individuals keep on the money needed to get these big ideas done. Though I also think the institutional risk-aversion that has gripped much of Western civilization, coupled with a frightening lack of science literacy (and downright science denial) among people of power and wealth, is not a problem that should be underestimated. Perhaps that inspiring CEO with a vision can, indeed, get the capital. If so, I think fostering these people should be one of the main goals of anyone who wants these big things to happen. I also think that this is the perfect place where STS can help. We stand at the intersection of science, technology, and society (hence the acronym), so we are better positioned than probably any other discipline to grease the wheels between clever ideas, visionary CEOs, political organizations, and the money needed to make it all work.

Delivery Drones and Robot Babysitters

Our first full panel is named “Delivery Drones and Robot Babysitters.” Moderated by Kathryn Cramer, an editor of the Hieroglyph anthology, the panelists were Ryan Calo, Assistant Professor of Law at the University of Washington; Patric Verrone, a writer and producer of Futurama; and Dan Kaufman, Director of the Information Innovation Office at DARPA (the Defense Advanced Research Projects Agency). The panel is embedded below:

A couple of things I really like about this panel are Calo’s discussion of the FAA in relation to drones, and the policy story that goes along with both drones and autonomous vehicles. These are areas that we need to get right, and fairly quickly (Patric Verrone’s “Good luck with that!” comment nicely sums up those prospects with regard to our current legislature). Calo mentions the Nevada laws recently changed in response to Google’s driverless cars as a model for how to deal with the situation, and he rightly notes that our current policy regarding both drones and autonomous vehicles is pretty lacking. I also like that Dan Kaufman realizes that using drones to deliver cake is a serious case of solving #firstworldproblems, and that we should aspire to more.

I really wanted the conversation to go more in this direction, looking at where we could do more. Instead of wondering whether our robot babysitters and drones might be used for evil (spying/surveillance), what can we do to use these technologies to create a more equal world and help those in need? A lot of the discussion throughout the day was focused on improving the lives of Americans. Very little time was spent on how these technologies might benefit the world on a larger scale, and especially the most vulnerable. A lot of privilege was on that stage (true throughout the day), and most of the discussion was focused on maintaining or enhancing that privilege (indirectly, of course). I commend Kaufman for his wish for greater goals, and his optimism that there is a wide range of technological uses between delivering food to the hungry and spying on people that deserves to be talked about. One of the Twitter conversers, Cara Mast (@digicara), a recent graduate of Boston College in International Studies, was particularly bothered by the lack of focus on economic access to these new technologies. When I asked her about the particulars of her reservations, she wrote:

… I think my issue… is tied to whose future we are trying to imagine. Privileged groups tend to think “up, out, away!” reaching for the cosmos because they have what they want in the present (and are often uncomfortable being in the same space as have-nots struggling to get by). So in a way, looking for that “new moon” to shoot for seems suspiciously like averting one’s gaze from the bevy of large, society-wide problems that could be addressed on earth.

Another important point that I think deserves some focus is the idea that technologists cannot foresee or determine how their technology might be used. Humans are a creative, tinkering bunch, and if we can use a technology in a novel way to solve a particular problem, we will. The Siri example is a great one, but the point extends to any and all technologies. That uncertainty is one of the sticky wickets of technology and policy. Sure, we can ban 3D printers because they could produce a weapon, but if, as in the anthology, we could use them to build habitats on the Moon or Mars (or in the desert for a hippie festival), then we may lose more than we gain. The same goes for drones: the danger of surveillance is a real one, but the benefits probably outweigh it, and we need to manage it well through policy. In fact, I think that our policy answers to these disruptive technologies are the most important challenge we face. The tech will keep coming, keep disrupting, and keep changing the way our society functions. And the reality is that technology does change society, in more ways than just raising expectations. Autonomous vehicles could fundamentally change the way we design automobiles and infrastructure. 3D printers could completely disrupt many current technologies and businesses. The internet has all but killed media as a big-box-store shelf item, taking down Blockbuster, Borders, and many other large companies.

One of the main struggles with policy is the difficulty of communication. Both Kaufman and Calo touched on this point: bad communication between technologists and legislators produces bad policy, and it can be difficult for legislators to see the policy needs of disruptive technologies. Here, again, is where STS scholars can help. These intersections are where we live, so we should become more active in fostering constructive communication between technologists and policymakers. Understanding, without dumbing-down or patronizing, is how we get good policy that protects against real dangers without overly stifling creative uses of new technology.

Who and What Will Get to Think the Future?

The second panel of the day covers the topic of “Who and What Will Get to Think the Future.” Moderated by Ed Finn, mentioned above, and with but a single panelist, Ted Chiang, author of Stories of Your Life and Others, it starts off with the question of what thinking is and whether that might change in the future. You can see the panel below.

The important takeaway from this conversation, I believe, is the invisibility of the effects of our technologies on our ways of thinking. The fact that these influences are difficult for us to see opens us up to manipulation (both intentional and not) by the creators of the technology, and in particular by the writers of the algorithms. Granted, these influences have always been there. We think about math the way we do because of how our math textbooks are written, about politics the way we do because of how our newspapers are written, and about our history the way we do because of the authors of the texts we read. The creators of the information we use have always had a profound effect on our thought processes. In this day and age, one would think that the wide range of views available for consumption would broaden our intake of points of view, but instead we see the reverse happening. Because everyone can find the places that most closely follow their own views, what happens is the creation of echo chambers (most profoundly evident in the 2012 elections) where one view holds the majority and very little outside information is allowed in, regardless of its veracity. Granted, this is a little different from the cognitive outsourcing that Chiang and Finn are talking about, but we need to look at the history of our information gathering, because it will inform how we might proceed today.

We need to shine a light on the possible influence of the algorithms behind predictive text, autocomplete, and the more complex programs that suggest topics and applicable research, as in the story Chiang told about a chapter whose origin was unclear: did he come up with it, or did the program? Later in the day, a panelist suggests that we have to program our biases into our technology (I’ll likely tackle this more when discussing that particular panel), and one of the commenters in the Twitter conversation immediately asked whether it wasn’t more accurate to say we have to program them OUT of our technology. Ostensibly, this is because most biases are invisible to us. We have to make a conscious effort to notice when our biases or privilege are being enacted. This is true also of the creation of algorithms that can direct and shape our thoughts. Eventually, when we cede even more of our thinking to our devices (or merge with them to more seamlessly harness their power), this is going to become an even more important question. We will need to investigate, and become aware of, the hidden biases and agendas of the creators of this cognitive technology, be they intentional or accidental.

Neuroscience and the Future of Ethics

The third panel is titled “Neuroscience and the Future of Ethics.” It was moderated by Jamelle Bouie, a staff writer at Slate magazine, and the panelists were Elizabeth Bear, author of “Covenant” from the Hieroglyph anthology; Jonathan Moreno, the David and Lyn Silfen University Professor of Ethics at the Perelman School of Medicine at the University of Pennsylvania; and Kathleen Ann Goonan, author of “Girl in Wave: Wave in Girl” from the Hieroglyph anthology. You can see the panel below.

This panel, not surprisingly, was one of the more controversial in the Twitter discussion. One of the main points, brought up by Dave Clifford (@DCDave), was that a panel on neuroscience comprised two science fiction authors and an ethicist, with not an actual neuroscientist in sight. The problems they bring up are (for the most part) compelling, but the panel is strikingly unqualified to answer most of them. What neuroscience is doing and may do in the future, and the actual functionality of the brain, are things this panel cannot speak to, and a lot of the assumptions they make are wildly inaccurate.

Particularly troubling was Kathleen Goonan’s assertion that most human studies are in some way poorly set up and don’t yield the information you want or need. She follows this up by painting the entire spectrum of animal testing with the unethical brush. I wonder, then, how she (a non-scientist) expects any medical study to ever get done, much less get done well. On what information does she base the claim that most studies are bad and give us bad information? According to whom are animal tests bad? That is her opinion, obviously, and she is completely within her rights to hold it. I might not even argue the point personally in a lot of circumstances, but making sweeping declarations that these studies are a “bad thing,” without qualification, is something I do not think she is in a position to do.

That aside, the issue of ethics, especially when it comes to neuroscience and treatments for neurologically atypical people as well as cognitive enhancement, may be the defining issue of medicine for many years to come. Who determines what is neurologically typical? To whom do we cede the authority to medicate people against their will? What do we allow, or require, for people, and especially for children? Bear brings up the excellent point that the mechanisms of mental health care have often been used as very powerful instruments of social control. Science fiction has tackled this problem extensively, not only in the Hieroglyph anthology but elsewhere as well: films such as Equilibrium, Gattaca, and THX 1138, and books like 1984 and The Speed of Dark (which Bear later mentions). But our own past has shown us quite willing to use our conception of neurological typicality as a lever by which to enforce an unequal social order.

The flip side, of course, is our ethical obligation to treat people who need it. We struggle with this today with antipsychotics and people who may not be dangerous to society or themselves, but who cannot function in society and often end up homeless or addicted or worse because of their illness. What do we do when they refuse treatment? What if, or when, we can fix autism (as in The Speed of Dark), or PTSD, ADHD, sociopathic tendencies, or anxiety disorders? What if we can “fix” homosexuality? Where do we draw the line between normal and abnormal? Might we mandate treatment for these abnormalities, as we do vaccines for schoolchildren? What about personal autonomy, and where do we as a society step in? As Bear says, our concept of personal autonomy is not based on whether or not people will thank us for it later. Also of note is the historical fact that we do not have the best track record of applying justice equally, either in the legal system or in the mental health system.

Unfortunately, this panel doesn’t have many good answers for these questions, except to raise them and show that the problems exist. I don’t know that I do either, but I think STS may be able to help companies and policymakers navigate this minefield. Every day, our field brings together disparate disciplines and opinions into some semblance of order, negotiating many different sets of rules, norms, and social structures to describe the workings of society with regard to science and tech. These skills could be supremely valuable to schools, laboratories, pharmaceutical companies, and policymakers. Moreno admits that most neuroscientists don’t generally consider the ethical implications of their work, and perhaps that is where we can start. Education is the great communicator, and knowledge and understanding can help to begin the conversation. A meeting of minds among scientists, policy folk, and the general public will be necessary to bring forth a world where we can fix problems and lessen burdens for people with psychological hardships.

Who Gets to Imagine for the Human Race?

The fourth panel takes on “Who Gets to Imagine for the Human Race?” Bill O’Brien, Senior Advisor for Program Innovation at the National Endowment for the Arts, is our moderator. The panelists are Tom Kalil, Deputy Director for Policy at the White House Office of Science and Technology Policy, and Laurie Silvers, a founder of the SyFy Channel and Hollywood Media.

This panel discusses some very cool and interesting new technologies being funded by DARPA, the NSF, and others, inspired by the White House’s Grand Challenges. The White House has put forth what it calls “21st Century Moonshots”: goals that are big but achievable, like making solar as cheap as coal, finding all of the potentially hazardous asteroids, or learning more about how the brain works. There is even a Grand Challenge Scholars program, which enlists graduate and undergraduate students to create their own plans of study to achieve these goals. This is also the panel that is probably the most egregious offender in terms of the privileged viewpoint and first-world thinking.

First of all, we have an advisor to the White House through the Clinton and Obama administrations, and a person who launched a cable network (among other things). These are people who have the means to engage with the most expensive and powerful organizations in the world. A question about the rise of democratic sources of innovation and imagination, like Kickstarter and YouTube, is briefly discussed, but quickly subsumed by discussion of larger organizations, wealthy individuals like Elon Musk and SpaceX, and the role of government in imagining. Right before they went to audience questions, O’Brien mentioned government organizations’ attempts to engage the developing world, and the negative effects of ignoring those countries and denying them agency in their own future. I was obviously not the only one concerned by the focus on large corporations, wealthy individuals, and governmental programs, because most of the questions from the audience focused on the imbalance between governments and powerful organizations on one side and the layperson on the other, especially those in developing countries. Vandana Singh had wonderful questions about those roles that I think Ms. Silvers danced around inadequately before they ran out of time.

The amount of time given to Silvers’ story of the creation of the SyFy network, I believe, negatively impacted the panel’s ability to tackle its titular question. Discussion of the place of government, entrepreneurs, and wealthy individuals is necessary, but a long, and to me off-topic, story about the founding of a television channel was not very helpful to the pressing questions posed by Ms. Singh at the end. The disparity in access and involvement by nationality, race, and gender is a huge issue in technological development. There are vast gulfs in access to even the most basic health care and modern amenities. If the “who” of this panel’s title covers only wealthy people, corporations, and governments, then the negative ramifications alluded to by Mr. O’Brien are only going to continue. Cara Mast, whom I quoted earlier, also had some issues here. In our conversation about the seminar, she wrote:

I think there should have been more consideration of the question “A better future for whom?” …that’s kind of where I wish things had gone in the panels regarding tech advances. How can we use new tech, and develop future tech, to dismantle some of the systemic problems on earth? M-pesa is cell phone-based financial services, reaching populations throughout Africa that don’t necessarily have access to banks (I see this as a kind of democratic technology, as it works with what access a large number of people in the region already had/have). There are so many examples… of ways in which the technology we have now, or could reasonably develop in the near future, could tackle real, perpetual problems in the world we live in right now. We don’t have to put people on other planets to start making progress and inspiring people.

This gets to the crux of the problem that I don’t think received enough focus in this seminar: we are members of the most powerful and most technologically advanced country in the world. Our conception of what is needed or wanted for “the” future is necessarily shaped by our place in this world. The very idea that there is “a” future toward which we should reach is, I believe, both shortsighted and ignorant of the wider context of an unequal world. Access to new technology is, to my mind, THE problem that the 21st century will need to address. As technology advances, and those advancements continue to accelerate, the gap between the haves and the have-nots will only widen unless we make a conscious decision to fight it. This panel was the perfect place to talk about this, but it got bogged down in stories about Isaac Asimov and Gene Roddenberry. Perhaps it is asking too much of the panelists to fly quite so far out of their comfort zones, but as this is probably the most pressing and dynamic question facing us, especially in the West, I was disappointed that they did not address those issues more directly.

The obvious answer to the question posed as the topic of this panel is: everybody. The (also obvious) follow-up, however, is how do we engage “everybody” and, maybe more importantly, how do we get those in power to listen to, and take seriously, the issues important to those with little to no power? The issue of access to this level of agency and autonomy for underdeveloped countries is, as I said, probably the foremost problem facing the worlds of science and technology. STS scholars are focusing on these issues now and will be in the future. Perhaps embracing a bit of advocacy, and not just academe, will allow us to use our knowledge and our place within societies to bring the developing world into the conversation, and to make the developed world listen and take it seriously.

Lost in Space: How Should We Approach Our Final Frontier?

The fifth panel is titled “Lost in Space: How Should We Approach Our Final Frontier?” It was moderated by Patric Verrone, and included Ellen Stofan, Chief Scientist at NASA, and Neal Stephenson as panelists.

One of my first memories of NASA and the space program is the Challenger explosion, so when Neal Stephenson says that going to space is difficult, expensive, and dangerous, that resonates with me. However, also much like Stephenson, I have a great desire to see humans expand beyond the Earth, and a belief that we can do it, if only we can find the societal will. He believes that that will has been lost, and that we don’t have a good hieroglyph to rally behind, to give us a good reason to explore. I covered a lot of this in my discussion of Stephenson’s introduction, so let me instead focus on what is being done, and on where we might evoke our spirit of competition and exploration to make a trip to Mars or Europa possible within our lifetimes.

In a humorous anecdote, Stephenson tells of a NASA engineer who said that Communism’s greatest accomplishment was the Apollo moon landings. Fierce competition with the Soviet space program fueled the political will to make the moon landing a reality. Since then, we as a country have not had a motivation like that. China and India are currently building their own space programs, and we hitch rides on Russian rockets to get our astronauts to the ISS, but even with our currently anemic ability to get people to space, no other country can truly compete with the US, and certainly not in the way that the Soviet Union, with the added threat of the Cold War, once did. In the sixties, President Kennedy was able to put down the hieroglyph that Stephenson mentioned at the beginning of the panel: we will put a man on the moon within the decade. That declaration captured the imagination of the entire country and, with the additional motivation of the Soviet space program (especially since it, via Yuri Gagarin, beat us to the first milestone), got us to the Moon. Nowadays, when President Obama puts forth the Grand Challenges, including putting people on Mars by 2030, there is not much political momentum behind them.

I think this ties in with the origin of the Hieroglyph project, in that stories and science fiction can capture people’s imagination. Perhaps the swing toward fantasy has negatively affected the national psyche. Perhaps the ubiquity of social media and the nanoscale attention span of the public have also made it more difficult to rally support behind programs that take a decade or more to accomplish. We also have a rampant anti-science movement in this country, and a severe lack of science literacy (to which Vandana Singh pointed in her questions to the previous panel). There is also the political landscape, where partisanship scuttles many a good idea just because it came from the other party, and our enemies have little to no interest in competing with us for dominion of space. But, much like the writers in this anthology, I believe that fiction can capture the attention of the people. Look at what Harry Potter did: for nearly twenty years, every kid on the planet has wanted to become a wizard because of those books and movies. So perhaps it is up to the science fiction writers of the world to capture our imagination again, but I think it must be a coordinated effort among educators, scientists, politicians, pop culture icons, and even STS scholars to rally around a hieroglyph that can engage the popular will and catapult us back into space.

Reimagining the Future of the Internet, Surveillance, and Privacy

After a brief break for lunch we then come back for our final three panels, the first of which is titled “Reimagining the Future of the Internet, Surveillance, and Privacy.” This panel is moderated by Kristal Lauren High, Co-Founder and Editor in Chief of Politic365, and has Barton Gellman, a reporter for the Washington Post who covered the Snowden papers, Madeline Ashby, author of “By the Time We Get to Arizona” from the Hieroglyph anthology, and Kevin Bankston, Policy Director for the Open Technology Institute and New America Foundation as panelists.

In what was probably the second most controversial panel, we discuss the issue of privacy and surveillance. In light of many recent events, such as the Snowden papers, the massive iCloud hack and leak of celebrity photos, the hack of Snapchat, and the credit card security breaches at Target and Michaels (and others), this is a topic that is very much in the public eye. And while we might complain about surveillance and the violation of private data, there is not much behavior-shifting going on in the public, so the response we are giving is a confused one. Most people accept the default privacy settings of the programs they use, and few use proxy servers or do-not-track tools to shield their browsing. Even for the tech-unsavvy, there are simple options that are just not widely used or known about. So the questions become: how much surveillance, and how much privacy, are we comfortable having? To whom, and for what purpose, might we loosen those restrictions? How can we educate people in ways to secure their information? And to what extent do we have control over any of this at all?

Madeline Ashby posits that our comfort with this surveillance may be a by-product of a cultural history of religious belief in a deity who is always watching everything we do. She also notes that we as a species have always been fascinated by observing. The part of us that is coded to observe everything, which kept us alive thousands of years ago, now fosters a culture of voyeurism that makes us kind of okay with privacy breaches and surveillance. It’s a pair of provocative hypotheses, and I cannot help but think she may be on to something. This is somewhere an intrepid STS scholar could investigate the cultural and biological underpinnings of our acceptance of, and resistance to, surveillance, and perhaps our own voyeuristic tendencies.

Aside from that, the discussion, correctly in my view, focused largely on the power differential between the actors in a surveillance culture. It was repeatedly pointed out that there can be no equality in effect between an organization that can bring billions of dollars to bear and a person or organization that cannot. The Snowden papers brought the gaze of millions onto the actions of the NSA, but very little has actually changed policy-wise. Kevin Bankston notes the beginnings of a shift to a more secure internet among companies that deal with large amounts of data, and the use of better encryption. However, companies still gather mountains of data on users, even if they protect that information from the government or other outside intrusion. Those companies can still bring to bear far more economic and institutional power than any one, or even all, of their users, so the dynamics remain. A few fictional examples of transparent societies, societies where everyone has access to the surveillance apparatus, were discussed, but the consensus seems to be, and I agree, that there is not, as Ashby put it, an equal assumption of risk. A powerful government cannot be shamed into behaving by the gaze of its people (as evidenced by the lack of legislative progress on curtailing the NSA) in the same way a person can be shamed by the government’s gaze.

The issue of power is brilliantly put, and we get the first instance of someone on these panels acknowledging the privilege they inhabit, when Bankston says it is really easy, as a white American male, to claim that if we give everyone the ability to surveil we will get some kumbaya moment, or that it will eliminate the power differential and equalize all people. Even if such a democratization of this technology were possible, power differentials would still exist, and one’s ability to leverage a technology depends directly on the amount of power one can enroll, be it monetary, popular, or scientific. While the power differential might shrink, it would not go away, and we would likely find the very powerful companies and governments simply shifting to new avenues of control and data collection that remain secretive and deniable.

One part of this panel that garnered some criticism from the online discussion was the speakers’ looseness with narratives versus the actual facts, technology, and science being discussed. Dave Clifford, whom I quoted earlier, specifically noted Ashby’s conflation of the Pennsylvania school district that used laptops to spy on its students with hackers using webcams to sexually exploit minors. There were concerns about such actions in that case, but the DA brought no charges relating to lewd or pornographic images against school administrators. Also of concern to him was the loose way in which Gellman talked about Facebook’s ability to track what you do across platforms. These are important points about which we need to be clear. Overstating companies’ ability to get your data helps no one, and we need to treat wrongdoing with technology with enough nuance to avoid painting a whole swath of people and companies with a brush that does not apply to them. Fear-mongering only leads to an overreaction in the opposite direction, and that does not help stem the problem.

Visions of an Alternative Internet

The penultimate talk, “Visions of an Alternative Internet,” is given by Lee Konstantinou, author of “Johnny Appledrone vs. the FAA” from the Hieroglyph anthology.

This is a short talk aimed, basically, at solving all of the problems brought up by the previous panel. The basic argument is that there are multiple ways the Internet (and thus the ways in which governments and corporations can surveil us) may develop in the future, and that the issue therein is one of governance. We will not remove the power brokers from the data equation, government will not go away, and some anarchist version of our world where everyone is equal is a pipe dream. Knowing that, the question becomes: how do we shape our governance so as to create the future we want? If we are concerned about climate change, how do we enroll the right powers to effect change? If we are concerned with surveillance and privacy, how do we help create a more private Internet? The story posits one way, via a mesh network of drones that operates outside of normal governmental controls (and explores the possible reactions the government might have toward those who create and maintain it). There are obviously others, and the danger of the AOL-style walled garden, whose beginnings we are already seeing in brand exclusivity such as iPhone apps being available only through the Apple App Store, is one we might want to avoid.

Can Stories Solve Wicked Problems that are Bigger than Our Imagination?

Our final panel is titled “Can Stories Solve Wicked Problems that are Bigger than Our Imagination?” It is moderated by Dan Sarewitz, the Co-Director of the Consortium for Science, Policy & Outcomes, and includes Vandana Singh, author of “Entanglement” from the Hieroglyph Anthology, David Rejeski, Director of the Science & Technology Innovation Program at Woodrow Wilson International Center for Scholars, and Karl Schroeder, author of “Degrees of Freedom,” from the Hieroglyph anthology.

A wicked problem is a problem that emerges in a complex system and for which there is no single solution. Any attempt to solve it will necessarily have wider effects and may alter the complex system in unforeseen ways that make the problem worse or create new problems. The focus of the panel was on using narrative and story to bring people to a place where these wicked problems can be examined. Without context and history, we could look at something like climate change, so big and complex with all the environmental, societal, governmental, and sovereignty issues entangled in it, and just throw up our hands in futility. Stories, which can explore these wicked problems with context, are a perfect way to approach them and start to unravel a series of solutions that might help. As Vandana Singh says, the arts are a place that can condense the complexity of a problem into a single experience.

The challenge comes, of course, in how to use narratives to effect change. What sort of narratives can we craft to move legislators to take on problems like climate change? How do we use stories to find the tipping points and feedback loops of these complex systems, and perhaps figure out ways to leverage those points for positive change?

Both Singh’s and Karl Schroeder’s stories in the Hieroglyph anthology deal with small groups using technology in novel ways to effect positive change. Both more or less violate the third H (no Hackers) that Stephenson mentioned in his opening, with Singh’s protagonists using hacked drones to combat climate change and Schroeder’s native population effectively hacking the government to gain more equal representation. Considering the gridlock that has become the new normal in Washington (and the general inertia of large governmental systems), this sort of paradigm may be the easiest avenue open to us for solving these wicked problems. The realm of citizen science and activism has thrived thanks to the Internet and social media, and narratives can capture the imagination of a wide group of people; enrolling citizen scientists, activists, social media, and celebrity may, I believe, give us the momentum we need to start tackling these wicked problems.

This may, I believe, be one of the main places where STS scholars can help. Actor-Network Theory and its offshoots thrive on describing complex systems, breaking them down into their most important parts, and describing the ways in which changes to the network affect the whole. Also important is what I mentioned earlier: the positioning of STS at the intersection of technology and technology-producers, policy-makers, and the public. As we individually study realms like biotechnology, meteorology, nanotechnology, and remote sensing, we should remember to bring our disparate studies together to form larger-scale solutions not only to the problems within those fields, but to the larger wicked problems at their intersection, always remembering that at the heart of our study should be a desire to shape the changing world in a more productive, equal, and livable way. Through engagement not only with the science and technology sides of our study, but with society and the powerful actors therein, we can visioneer a better future by pointing out the flaws of the past and the ramifications of actions in the present. STS scholars can be the catalyst by which activists, scientists, and policy-makers come together and create more positive self-fulfilling prophecies.

2 responses to Visioneering a Better Future: The Hieroglyph Project, STS, and the Future of Science and Technology, Joshua Earle


    Mr. Earle, thank you for your summary of the Hieroglyph event in Washington, D.C. on October 2, 2014.

    I am Kathleen Ann Goonan, a participant in this event and a Professor of the Practice in the School of Literature, Media, and Communication at Georgia Institute of Technology. Georgia Tech’s culture of research and cutting-edge innovation is deeply connected to the kind of thinking associated with science fiction; I was invited to teach at GT in 2010 because of my expertise in that field. I teach courses that investigate the confluence of science, technology, politics, the arts, gender studies, and history. I also teach the art of writing fiction.

    I have been publishing science fiction since 1990; my work includes seven novels and about fifty short stories in markets that include Discover Magazine and Popular Science. I was featured in Scientific American, in an article that included Greg Bear and Neal Stephenson, as a writer with a deep understanding of nanotechnology and the myriad fields that nanotech encompasses. My invitational talks have included venues such as RIT, the University of South Carolina, Idaho State, Virginia Tech (my alma mater), international literary seminars and festivals such as Kosmopolis and Utopiales. As a member of SIGMA, I am a consultant for the US Government and NGOs. My web page is

    In your paper, you state
    “Particularly troubling was Kathleen Goonan’s assertion that most human studies have some way in which they are not well set up and don’t yield the information you want or need. She follows this up with painting the entire spectrum of animal testing with the unethical brush. I wonder, then, how she (a non-scientist) expects any medical study to ever get done, much less get done well? On what information does she claim that most studies are bad and give us bad information? According to whom are animal tests bad? Her opinion is thus, obviously, and she is completely within her rights to feel that way. I may not even argue that point personally in a lot of circumstances, but to make wide declarations that they are a “bad thing” without qualification is a statement I do not think she is in a position to make.”

    It is indeed troubling that many human studies have design flaws. Human studies face serious design challenges because of the resources necessary to do double-blind controlled studies (the gold standard). Many human studies compromise by using insufficient numbers of subjects, or incorporate other design characteristics that leave them open to criticism. I recommend that anyone who doubts this talk to an epidemiologist to better understand the challenges facing those who do research on humans.

    I raised what I consider to be serious ethical concerns about animal research, and it is increasingly feasible to do good medical research without causing unnecessary harm to animals. In my view, only after all options are exhausted for non-animal research should animals be included in research design.

    I am not alone in hoping that we can find alternatives to this practice; others are committed to advancing this process. Here is one example: .

    Obviously, I do not have the power to bring about any such radical change, except in fiction ( ).

    As I mentioned, research on chimpanzees was banned in the U.S. in 2013, for ethical reasons.

    Thanks again for paying attention to this event.

    Kathleen Ann Goonan
    Professor of the Practice
    School of Literature, Media, and Communication
    Georgia Institute of Technology
    Atlanta, Georgia
    FB Kathleen Ann Goonan


      Ms. Goonan,

      Thank you for taking the time to read my piece and comment. I did not expect to see any response from the folks at the event, much less one of the panelists, and it is a pleasant surprise, indeed.

      I understand that when speaking on a panel it can be hard to expound in detail on issues this complicated. It’s not like you can drop a link to examples when speaking, and it’s difficult to have an informed back-and-forth about issues that come up on the fly. My main concern was that particular examples were being used (implicitly and explicitly) to paint the whole of medical science with a brush of unethicalness and/or ineptitude. And even if the entirety (or even the majority) of it were as poorly done as suggested, how then might we do it better? There are plenty of stories of bad science out there. A recent CDC report shows a distressing lack of rigor in many pharmaceutical trials. However, I would hesitate to say that this means there is no good science out there, or that we should throw the baby out with the bathwater when it comes to human or animal trials.

      And while the double-blind is, indeed, the gold standard, circumstances often force us to use other methods. The case study is one that is widely used, and not just in human medical situations. Astronomers don’t get to line up supernovae when they want to, with control groups and such… they have to wait for one to happen, get all the data they can, and extrapolate from there. I wouldn’t call what they do “bad science” any more than I would a drug trial for a rare disease, even if it couldn’t reach the 1000-ish number necessary for a “perfect” statistical model. And while the negative stories do seem to loom large, it is because they are so rare that they stand out. The testing process we use is, in general, so sturdy that it takes gross incompetence or corporate greed or outright fraud to make the science bad enough to make the news. There is a reason there hasn’t been a Thalidomide-level event in this country since the 1950s.

      As to the suggestions for alternative methods listed in the PETA link, I have a few issues there. First of all, it should be stated that PETA itself has its own issues with a certain… arbitrariness… in how it applies its moral norms, and a vested interest in reading scientific studies in a certain way. That aside, let’s look at the suggestions. First is the “organ on a chip,” which is a very new development and offers a very limited range of tests. Organs on chips are very useful, but an organ is not an organism, and one of the things we most often look for in testing is unintended consequences outside of the target area for the drug/chemical/procedure. The organ on a chip cannot, by its very nature, tell us what might happen elsewhere in the body. The second suggestion is computer modeling, and this is extremely problematic, because it requires a level of computation, and a knowledge of physiology, that far exceeds our current levels. It may be useful now for very simple tasks, but we are decades away from being able to accurately model complex drug interactions in the human body. Even the study they link to as a success only showed success in modeling the distribution of the particles of an inhaled asthma drug in the lungs, not its effects or any possible side effects. Useful data, but hardly an alternative to animal testing. The third suggestion is volunteer human testing, which should be worrisome on the face of it, since we usually use animal testing to weed out the really harmful potential drugs before they get to human trials, and it would of course run into the same issues you already have with human trials to begin with. And even the PETA site, hidden between visceral descriptors, admits that the use of microdosing can only replace “certain tests on animals.” The use of fMRI to replace animal brain studies might work eventually, but it has some serious limitations right now. Sure, we can see the firing of a single neuron, but we’re still really bad at seeing large areas of the brain working in great detail. The final suggestion I have no issue with, because it certainly seems to legitimately do better than the animal version, and it looks like it’s already been adopted most places anyway.

      But when it comes down to brass tacks, there are trials that just cannot be done on human patients with any semblance of ethicality. I think Mr. Moreno from the panel even brought up a good example. How do we test the efficacy of an ebola vaccine? We obviously can’t risk giving ebola to people. And those who are sick are past the point where it would help them. So do we test it on chimpanzees? Is there another option? Can we use a computer model to see if it works? What about an organ on a chip? Or do we do nothing because we are paralyzed by our ethical quandary?

      So again I’ll pose the questions I did in my piece, but in a more direct way. First, how would you suggest we structure our human testing so as to guarantee the quality of outcomes you desire, and how would you make that standard enforceable? And second, what techniques and methods would you suggest we switch to in order to gather all relevant data we would get through animal testing, but without harming any animals? Note that the suggestions in the PETA piece will only take care of a very narrow sliver of the total data gathered from animal testing. Or would your suggestion be that we halt all animal testing until we can produce alternative methods? And, if so, what sort of ethical repercussions are you willing to accept in the human population due to the slowing of those tests? And at what cost, both monetarily and in time and other resources, do we consider other alternatives “exhausted?”

      I doubt you can answer those questions. I know I can’t answer them. And my original point was really just that there are layers and layers of nuance and ethical problems that were glossed over about this issue during the panel. The inclusion of a neuroscientist, I think, would have helped keep the discussion a bit more on track, and perhaps given the panel some more specific knowledge of how (and how well) the science is done. Anyway, as you can see, even in a short response to a short response to a couple of sentences spoken in a panel, these issues unfold in wicked ways that we both are probably ill-equipped to answer. And my belief is that dropping those rhetorical bombs in such a sweeping way really did the discussion a disservice. Even the nuance of what you wrote here — “only after all other options have been exhausted” — was glossed over in the panel. The intersection of human and animal testing would have been a great place to knuckle down and get into some real issues in that panel. That it was almost a throwaway line and not unpacked in a more serious way was a big part of why I was disappointed in that particular exchange. I am interested in getting better science. I am also interested in doing it in a way that is least harmful to both people and animals. I believe most scientists want the same thing. I wonder, then, if there are ways to do what we are doing, only better, why aren’t we doing it? What obstacles exist, and how do we overcome them? And if you can answer those questions, then perhaps you can bring about such radical change.

      Anyway, despite my criticisms of it, I thoroughly enjoyed your panel (and the entire conference… and the book). And if a Negative Hieroglyph book ever comes out and you all do this again, I’ll have to make sure to be there in person to poke panelists when they get too broad with their rhetorical brush.

      Thank you again for your comments, and I hope this made my position a little clearer (and explains why I didn’t go into it too deeply, as the article was already pushing 7000 words).


      ~Joshua Earle

      Masters/PhD Student in Science and Technology Studies
      Virginia Tech
