© David L. Stearns and Figure/Ground Communication
Dr. Stearns was interviewed by Laureano Ralon on April 20th, 2012
Dr. David L. Stearns is an adjunct lecturer in history at Seattle Pacific University, Seattle, USA. Prior to his return to academia, he was a software developer and designer for nearly twenty years. At present, he teaches history at various universities around the Seattle area and writes on the interplay between technology and culture. As a researcher, he specializes in the history of the VISA electronic payment system. His most recent book, Electronic Value Exchange, recaptures the origins of the electronic payment network known as VISA. The book examines in detail the transformation of the VISA system from a collection of non-integrated, localized, paper-based bank credit card programs into the cooperative, global, electronic value exchange network it is today.
How did you decide to become a university professor? Was it a conscious choice?
It was a conscious choice, though I think my reasons for moving in that direction initially were a little different than why I continue to work as a scholar now.
My undergraduate degree was actually in business and information systems, and I worked in the software industry for about 12 years before returning to graduate school (the first time) to study the history and sociology of technology. I enjoyed building software: it’s an interesting blend of engineering and artisanal craft work, and at the end of the day, you can point to the screen and say “I built that.”
By the late 1990s, however, I had started to become a little disenchanted with the utopian rhetoric one commonly heard in the software industry then (and perhaps still hears now). After reading Neil Postman’s book Technopoly, I felt even more disenchanted. Although I continued working in software for a few more years, I made up my mind at that point that I wanted to return to graduate school and study the technology/society relationship.
After I finished my PhD at Edinburgh in 2008, I actually returned to the software world once more, as there were no teaching jobs available in our area, and I needed the income and medical benefits. This time I built open-source software for cancer researchers, which seemed more rewarding than building commercial software, but I never felt like it was the right place for me to be. In 2009, I got the opportunity to teach as an adjunct in the history department of Seattle Pacific University, so I transitioned out of software and into academics.
Although I haven’t found a full-time faculty position yet, I’ve decided to stay in academics because I love interacting with students and helping them develop intellectually. I sometimes miss being able to point at the screen and say “I built that,” but it’s even more rewarding to see your students’ minds open up as they think about new things and come to understand the world in a new way.
Who were some of your mentors in graduate school and what were some of the most important lessons you learned from them?
I actually had two graduate school experiences. When I left software for the first time, I spent one year at University of Washington (UW) studying the history of science and technology. My primary mentors that year were Bruce Hevly and Phil Thurtle. Hevly grounded me in the history of technology, and Thurtle awakened my imagination toward media studies, science and technology studies, and questions of artificial life.
I didn’t continue at UW though; my wife got into a PhD program at St Andrews in Scotland, so we sold most of what we owned, put the rest in a shipping crate, and moved there. After working for a few years for a small software company in Dundee, I decided to return to graduate school and pursue a PhD.
I contacted Donald MacKenzie at Edinburgh (I had read some of his work during that year at UW), and he kindly agreed to take me on as a PhD student. Donald has shaped my thinking probably more than anyone else. He helped me understand the problems with technological determinism, as well as the other extreme of social determinism. He also showed me how to apply concepts from the sociology of science to technology, and gave me a new way of looking at how we construct shared knowledge about devices and systems.
While at Edinburgh, I also benefited from a number of interactions with Robin Williams, John Henry, Luciana D’Adderio, and Ivan Crozier.
Joshua Meyrowitz’s thesis in No Sense of Place is that when media change, situations and roles change. In your experience, how has the role of university professor evolved since you were an undergraduate student?
Well, I can only speak from my perspective, which is limited to the humanities side of a small liberal arts college in the United States, but I don’t think there is much question that higher education is currently under significant pressure to adapt to the Internet era. It is also pretty obvious that the context in which higher education takes place has changed quite a lot since my time as an undergraduate in the late 1980s.
When I was an undergraduate, our access to information was pretty much limited to the university library. We did have some of these new-fangled CD-ROM research databases to augment the stacks, but we had nothing like the world wide web. Students today have unlimited access to a ridiculous amount of content, but without the skills necessary to critically evaluate and make sense of that content, it really just overwhelms them more than it helps. This change of context means that I have to teach a little differently than my professors taught me.
For example, when I teach our world history survey course, I see my job as being less about giving the students content via lectures, and more about helping them learn how to read and evaluate historical narratives in the first place. I try to convey that a historical narrative is really closer to a legal argument than to a neutral recounting of past events. Every author writes from a perspective and has an agenda. If the students learn how to divine the perspective and agenda of our textbook author, I’ve taught them a skill that will help them evaluate anything they read in the future. They can always use their phones to look up any historical detail they want at any time, but Wikipedia won’t help them learn how to read critically. That’s my job.
Inside the classroom, how do you manage to command attention in an age of interruption characterized by fractured attention and information overload?
Students have always found ways to dissociate when in class. When I was an undergraduate, we would doodle in our notebooks, pass notes, read for other classes, or simply fall asleep. Today’s students can also use their computers and other mobile devices, which are perhaps more alluring, but in the end, the professor has to set clear expectations about what will and will not be tolerated. I also tend to limit the amount of time I lecture at the students (which is when they most often dissociate), and increase the amount of time they spend discussing questions or working on problems in small groups. Of course, I periodically do have to call out students who are distracting themselves or others, but the more interesting I can make the class, the less I have to do that.
I don’t think it’s helpful to ban computers and phones from the classroom. I am aware of the argument that taking notes by hand helps you better remember the content (due to some mysterious connection between memory and making shapes via handwriting), but I’m not really convinced by it, partially because it has never worked for me. But even if that were true, I don’t think we’d be helping students by training them to work in a way that will be contrary to how they will work once they graduate. We need to help students learn how to use these information tools effectively and responsibly so that they can easily make the transition into their post-college lives.
In 1964, Marshall McLuhan declared, in reference to the university environment that, “departmental sovereignties have melted away as rapidly as national sovereignties under conditions of electric speed.” This claim can be viewed as an endorsement of interdisciplinary studies, but it could also be regarded as a statement about the changing nature of academia. Do you think the university as an institution is in crisis or at least under threat in this age of information and digital interactive media?
Well, I can only comment on my experience, which again is somewhat limited. From what I have seen, I don’t think departmental sovereignties have yet melted away in any significant sense, except perhaps in newer universities or newer extensions of existing universities. For example, the University of Washington Bothell extension seems much more interdisciplinary than the core University of Washington campus in Seattle. In the older, more established universities, it seems that disciplinary and departmental boundaries are still very strong and well-guarded.
In 2009, Francis Fukuyama wrote a controversial article for the Washington Post entitled “What are your arguments for or against tenure track?” In it, Fukuyama argues that the tenure system has turned the academy into one of the most conservative and costly institutions in the country, making younger untenured professors fearful of taking intellectual risks and causing them to write in jargon aimed only at those in their narrow subdiscipline. In short, Fukuyama believes the freedom guaranteed by tenure is precious, but thinks it’s time to abolish this institution before it becomes too costly, both financially and intellectually. Since then, there has been a considerable amount of debate about this sensitive issue, both inside and outside the university. Do you agree with the author? What are your arguments for and/or against academic tenure?
I looked up that article and it seems to be more assertion than argument, perhaps because it’s just a short piece meant to instigate discussion. Still, he offers no real data about costs, nor any suggested way of measuring “conservativeness.” I have certainly seen cases where a tenured professor stays well past his or her prime, but that has more to do with the issue of mandatory retirement ages than tenure.
I think there are some obvious arguments for tenure that are still valid. Scholars who want to develop controversial ideas that are threatening to the status quo should be protected from retribution, as their ideas may turn out to be important or even revolutionary. The arguments against tenure tend to focus on the aged professor who is clinging to a job when he/she should retire, but eliminating tenure would allow universities to dismiss any professor who ran afoul of the administration.
I don’t think eliminating tenure would solve the two core problems Fukuyama is alluding to. Even if you got rid of all the old professors, there would still be way too many PhDs chasing too few academic posts; I routinely hear about hundreds of PhDs applying for a single entry-level post. And the use of inscrutable jargon has more to do with the expectations of the field, and with who controls the ability to publish in peer-reviewed journals. Even if you eliminated tenure, universities would still expect their professors to publish, and doing so seems to require the use of inscrutable jargon in most fields.
What advice would you give to young graduate students and aspiring university professors, and who are the thinkers today that you believe young scholars should be reading?
If you want to study the technology/society relationship, I think you should spend some time reading Thomas Hughes, Donald MacKenzie, Trevor Pinch, and Bruno Latour, in addition to the standard big names from the previous generations (e.g., Mumford, Heidegger, Ellul, McLuhan, Postman, Ong, Borgmann, etc). Hughes has a wonderful way of situating technologies in their encompassing socio-technical systems. MacKenzie leads you into really interesting case studies in order to help you understand more general principles that will often completely reorient your perspective (e.g., see the conclusion to his book Inventing Accuracy). Pinch’s work is essential for understanding the role “users” play in shaping new technologies during adoption, and his work is often really fun to read (especially his book on the Moog synthesizer). And Latour is…well, Latour. You may not always agree with him, but he is such an inventive thinker, with a talent for helping you see things in entirely new ways (e.g., see his book Reassembling the Social).
I have also been an enduring fan of Sherry Turkle’s work. She is a psychologist who gets to the heart of our relationship with technology by studying how children and the elderly interact with new devices. Her latest book, Alone Together, takes a more pessimistic turn, but she still ends it on a hopeful note, and stresses that we have the ability to influence the way technology develops.
You mentioned off the record that, prior to becoming a university professor, you worked for several years in the software industry. How does your professional background reinforce or complement your academic background?
My background as a technological practitioner has deeply influenced my academic work, but my academic work has also helped me think more clearly about my experience as a practitioner. I see them as going hand-in-hand, each informing the other.
When I research a particular technology, I’m able to open its “black box” and get much farther inside it than someone who has never worked in a technical industry. When you do that, you discover what kinds of choices were made as it was designed, which elements of the finished design were really essential to its function, and which were more accidental. This allows one to realize how the particular technology could have been made differently, and how those differences might have altered the way in which it was adopted and used. It also helps one understand how it could be redesigned to better fit a culture’s desired social values.
Your blog, www.techsoulculture.org, explores the connections between technology, culture, and Christian spirituality. What, in your view, are some of the note-worthy resonating intervals between these areas of human activity?
I think that the Christian commentary on technology is rather antiquated and in need of reform, so part of what I am trying to do with that blog is introduce thoughtful Christians to some of the more helpful concepts from recent science and technology studies (i.e., literature published since the 1980s).
When Christians talk about current technology, and especially social networking and new media, they seem to just recycle ideas from Ellul and McLuhan and treat them as gospel (e.g., Shane Hipps’s recent book Flickering Pixels). While those scholars had important things to say, the context in which they thought and wrote is fairly different from our current one. They also wrote from a technological determinist perspective, which was common in their time, but is no longer considered to be entirely accurate by those who study technology and society today. Therefore, I think we need to rethink the Christian commentary on technology and incorporate some of the more recent scholarship, which I find to be more balanced and nuanced.
What have you learned from thinkers who explored similar connections: Ellul, McLuhan and Heidegger to name a few?
To continue from my answer to the previous question, the problem with earlier scholars like Ellul and McLuhan is that they wrote from the perspective of technological determinism. This perspective assumes that technologies “impact” societies in a kind of one-way, deterministic relationship. Technologies are assumed to be strong, non-neutral forces that carry with them intrinsic meanings, morals, and consequences. Strong technological determinists (which I think Ellul and McLuhan are) also believe that technology progresses autonomously according to its own, unstoppable logic.
There are two problems with this position. The first is that it just doesn’t match up to what we see in detailed historical case studies. The same technologies are adopted in different ways by different societies, and don’t always have the same consequences. Most new devices also tend to have some degree of “interpretive flexibility” (Bijker & Pinch), allowing early adopters to play an active role in deciding what a new device actually is, and what it is good for. And social groups often play important roles in shaping the direction in which a new device or system evolves, calling into question the idea that technology has its own autonomous and unstoppable development trajectory.
Of course, I’m not saying that technologies are entirely neutral, nor that they have no significant effects on societies that adopt them, nor am I saying that their social meanings are infinitely flexible. That kind of social determinism is just as flawed as technological determinism. Instead, I subscribe to what is known as the “social shaping of technology” position, which acknowledges that technologies often do have significant consequences for the societies that adopt them, but that these consequences are rarely deterministic, and that users play active roles during adoption.
The second problem with technological determinism is that if you accept its premises, it leaves you with a rather bleak choice: play along and suffer the inevitable consequences, or leave the game. This, I think, is why Ellul’s writings in particular come across as so fatalistic. In The Technological Society, he paints such a bleak picture and then leaves the reader with no real advice as to how one might avoid his supposed future. The reader is left thinking that the only way out is to entirely reject modern technology, which is not a realistic option for most people.
The more recent science and technology studies literature allows us to see that there is a third option: we can rewrite the rules. For example, Heidi Campbell has investigated the ways in which faith communities are playing an active role in their adoption of new media, sometimes even reshaping the devices and systems to better match their desired social values (e.g., the Kosher mobile phone).
This year, you announced on your blog that you were giving up Facebook for Lent, but returned to Facebook shortly after. What is so addictive about social media?
First, I think you have to be careful about assuming that everyone uses social media like Facebook in the same way, or for the same reasons. Many of my friends have a very casual relationship with Facebook—they rarely check it, and mostly use it as an easy way to share news and pictures with grandparents and other relatives or friends. For them, Facebook is not addictive at all. For others, Facebook can be far more addictive, especially if they are trying to use it to alleviate loneliness or boredom.
Sherry Turkle has a useful set of concepts for this: affordances and vulnerabilities. Affordances are suggestions built into the design of an artifact or system, or communicated via marketing, that suggest how it should be used (i.e., what it can do for you). Vulnerabilities are those deep-seated needs, wants, and insecurities that, when manipulated, can easily lead to addictive behavior. It’s when a device’s affordances align with a particular user’s vulnerabilities that you get trouble.
What I like about these concepts is that they realign our way of thinking and focus our attention on the particular relationship between device and user. Technological critics love to make general, normative statements about the dangers of a particular device, as if every single person will have the same experience when using that device (again, this is a result of a technological determinist perspective). But that is simply not the case. Just as some people can walk into a casino, gamble a bit, and then walk out without issue, some people can use Facebook or other social media in a non-addictive way.
A recent article in The Atlantic, “Is Facebook Making Us Lonely?,” claims that social media have made us more densely networked than ever. Yet for all this connectivity, new research suggests that we have never been lonelier (or more narcissistic), and that this loneliness is making us mentally and physically ill. Do you agree with this statement?
That article is tricky, as Marche doesn’t really develop the argument you’re implying in that question. He begins in a way that makes you think he’ll reach that conclusion, but he doesn’t end up going there. Here are a few key quotes from later in the article that make this apparent:
“Still, Burke’s research does not support the assertion that Facebook creates loneliness. The people who experience loneliness on Facebook are lonely away from Facebook, too, she points out; on Facebook, as everywhere else, correlation is not causation. The popular kids are popular, and the lonely skulkers skulk alone. Perhaps it says something about me that I think Facebook is primarily a platform for lonely skulking.
“Loneliness is certainly not something that Facebook or Twitter or any of the lesser forms of social media is doing to us. We are doing it to ourselves. Casting technology as some vague, impersonal spirit of history forcing our actions is a weak excuse. We make decisions about how we use our machines, not the other way around.”
But he also acknowledges that Facebook encourages us to behave in narcissistic ways, or to use it as a substitute for, rather than a supplement to, our social lives. He echoes Sherry Turkle when he writes:
“The beauty of Facebook, the source of its power, is that it enables us to be social while sparing us the embarrassing reality of society—the accidental revelations we make at parties, the awkward pauses, the farting and the spilled drinks and the general gaucherie of face-to-face contact. Instead, we have the lovely smoothness of a seemingly social machine. Everything’s so simple: status updates, pictures, your wall.”
That distinction between encouraging and determining makes all the difference. It’s not that Facebook is making us lonely or narcissistic, as if there is only one possible way to use Facebook and everyone using it has all the same vulnerabilities. Instead, Facebook encourages those who are socially timid, anxious, or awkward to use it in lieu of face-to-face real-time social interaction. In other words, Facebook’s affordances align well with those kinds of human vulnerabilities, and it’s that group of people who are experiencing more loneliness despite the increased social interaction.
The better question to ask is why are these kinds of articles so popular? Why are we seeing such a sudden rash of articles entitled “is pick-your-new-technology making us stupid/narcissistic/lonely/shallow/etc.?”
Overall, do you feel as though digital interactive media has done more to empower or alienate people?
I think Neil Postman was right when he said that technological change is ecological, not incremental. Whenever a society adopts a new technology, it usually brings about many kinds of changes, some foreseen, but others not.
I differ from Postman in that I don’t think those consequences are entirely determined by the technology itself, but I do agree that most innovations result in a wide array of benefits and costs, neither of which is equally distributed amongst the population. Whether you think a particular innovation is “good” or “bad” largely depends on whether you happened to receive more of the benefits and less of the costs, or vice versa.
Looking forward, what do you see as the main benefits and the main hazards of computer technology for life on planet earth?
I’m a historian, so I’m leery of making predictions about the future. I often go back and read predictions that were made in the past about the present, and very few ever come close to the mark!
What are you currently working on?
My current research area is money and payments—that is, money and the sociotechnical systems that make money move. My first book was a history of the VISA electronic payment system, and I’m now expanding out to the idea of the “cashless society” in general. A couple of colleagues and I wrote a paper on the origins of the concept, and we also created a short piece derived from that for Bloomberg.com. We are now looking at writing a book on the subject, contrasting the approaches taken by different countries throughout the last 60 years. We hope to put the current discussions of mobile payments in a historical context, as these same discussions were happening in the 1960s and 70s around some of the early electronic funds transfer systems.
I’m also helping to organize a conference that will take place this June at Seattle Pacific University. The conference is entitled “The Digital Society: Rethinking the Christian Commentary on Technology for the 21st Century.” We hope to gather both academics and technological practitioners to develop new Christian perspectives on technology that are more balanced and ultimately more helpful to both fellow Christians and the wider secular society.
© David Stearns and Figure/Ground Communication. Excerpts and links may be used, provided that full and clear credit is given to David Stearns and Figure/Ground Communication with appropriate and specific direction to the original content.
Your feedback is welcome and appreciated! If you like what you see, please consider voting, commenting or donating to help us grow. Figure/Ground is currently on the lookout for collaborators to help with the expansion of this section into the largest repository of scholarly interviews on the net. For specific suggestions regarding future/potential interviewees or to obtain permission to republish any of the interviews already on the site, please contact me directly at firstname.lastname@example.org.