© Mark Johnson and Figure/Ground Communication.
Dr. Johnson was interviewed via Skype by Laureano Ralon on November 15th, 2011.
Mark L. Johnson is Knight Professor of Liberal Arts and Sciences in the Department of Philosophy at the University of Oregon. He is well known for his contributions to embodied philosophy, cognitive science, and cognitive linguistics, some of which he has co-authored with George Lakoff, such as Metaphors We Live By and Philosophy in the Flesh: The Embodied Mind and its Challenge to Western Thought (Basic Books, 1999). His latest book, The Meaning of the Body: Aesthetics of Human Understanding (Chicago, 2007), further investigates aspects of embodied meaning and cognition that have traditionally been ignored or under-valued in mainstream philosophy.
How did you decide to become a university professor? Was it a conscious choice?
Well, I was a kid in Kansas back in the 1960s, and when I went off to college I did not even know what a doctoral program was. I just knew that after high school you go to college. As it turns out, I fell in love with philosophy right off, but I never had any idea of ending up on a faculty track. As so often happens, however, I had a couple of professors who encouraged me along the way, and all they had to say was “you’re doing well.” I was excited enough about it so that when I went off to graduate school at the University of Chicago, it finally dawned on me that I really wanted to teach. Of course, at some point you make a conscious decision to pursue some course of action, but in my case it was not the idea of a career in philosophy that was guiding me throughout my graduate years. When I entered grad school in 1971 at the University of Chicago, the American Philosophical Association had sent out a letter for the first time saying “if you think you are going to go into graduate work in philosophy, you may want to rethink your decision,” because the job market was so bad at the time. Still, I was so naïve: I just loved what I was doing and so that warning letter did not stop me. About half way through I started seeing that what I really wanted to do was teach, and as soon as I got on the job – my first job – it was only then that I realized that I had a certain excitement about doing research.
Who were some of your mentors in graduate school, and what were some of the most important things you learned from them?
One of the most important was Paul Ricoeur, the French hermeneutic phenomenologist. Ricoeur held the Louvain Chair of Philosophical Theology at the University of Chicago; he spent half his time in Paris and the other half in Chicago, and something that he drove home to all of his students was that no philosophical or scientific orientation had a complete hold on truth. By this he meant that a person should pay attention to, and immerse themselves in, every philosophical tradition they could. This is something that completely structured my thinking about the need for a pluralistic approach to any problem; that you need different methods, different traditions at play, and that we shouldn’t be in a position whereby we focus on trashing traditions as though they are completely worthless. We may just not agree with some of their assumptions, but each one of them has some critical and constructive insight to contribute! Ricoeur was a model of a philosopher always looking at a problem from a number of different perspectives.
Ricoeur was on my doctoral committee, but my dissertation advisor was Ted Cohen. He did some analytic philosophy, aesthetics, and philosophy of language, with an Austinian influence evident in his teaching and writing. Professor Cohen, like two or three other faculty I was interested in, was always concerned with bringing philosophical perspectives to bear on your lived experience – that has been important to me. I never was interested in technical problems, such as formal problems in logic; rather, it has always been the case for me that if a philosophical orientation does not make a difference for values and how you live your life, then it is just a kind of game. So, from Ricoeur, Cohen, and others, I acquired a pluralistic view of philosophy and an insistence that philosophy should make a difference for how you live.
Something that I find extremely significant for every student is that, if you find a faculty member who takes you seriously, gives you time and is encouraging – that makes a huge difference; and in my case, that had an impact on my teaching and my sense of what I needed to do as a faculty member, and how I should engage my students.
So mentorship is important…
Yes, absolutely. Working with students and letting them know that some of what they are doing is good, that can make a big difference to people who are struggling or not sure about their future. I have seen that happen over and over again and it really lets people blossom, come into their own.
Joshua Meyrowitz’ thesis in No Sense of Place is that when media change, situations and roles change. In your experience, how did the role of university professor evolve since you were an undergraduate student?
That’s a tough one. It is interesting because there is a student named Jeremy Swartz in the PhD program in communication here. He is interested in what he calls metamedia, and he has been working with me and educating me about the dramatic changes that happen as these new media capacities emerge. A simple example of this is the way things changed when we went from typewriters to word-processing. I have a paper from 1983, which was the last paper that I ever wrote without a word processing program, and as I remember, it was very labour intensive: if you had to edit something, you literally had to cut out the parts of it that you wanted to keep and then tape them on to a sheet of paper, on which you had typed a new sentence or paragraph – unless you wanted to completely retype every time. Well, that is a simple example, but as soon as you have word-processing capacities, it means infinite revisability. I am working on a book now: you start looking at a chapter, and instead of going to where you were before and continuing, you are always tempted to start again from the beginning, editing and adding new material along the way. It probably makes it better, but it is just a different way of interacting.
Having access to the web, one of the things it does for you, it seems to me, is giving you all of this very broad but often superficial information. I am grateful for that, but I do not think it particularly encourages a depth of reflective thinking. So I am becoming more aware of how these technologies affect the way you approach issues and how you think about yourself in terms of the kind of work that you are doing. And this graduate student I mentioned earlier that I work with on media, he of course draws extensively from McLuhan. He says: “look, the basic ideas that McLuhan had are being worked out on a large scale today, and we need to pay attention to that because it affects the way we think about and practice education; the way students access information, and so on.” I think one of the issues coming up is that we are going to do away with hard copy books eventually. I don’t know whether that will prove to be a good or a bad turn of events, but it will certainly change our research culture and how we access information. Sometimes I think I’ve been fortunate to have lived mostly in a slower, simpler time than what we are experiencing today. You could actually master the material that was written on topics, whereas now, if you have more than one or two interests, the amount of material out there is completely overwhelming, almost disconcerting.
What about student-professor relations – do you think that might have changed?
It might have. You know, that is hard to tell. I am pretty wedded to person-to-person communication and exchange of ideas. The personal, bodily, interactive dimensions of the educational process seem pretty important to me. For example, I believe in face-to-face contact where we experience the mutual give and take of verbal exchange and we can read each other’s body language and gestures. Today a lot of business can be conducted via email, and that is okay, because a lot of it is just routine stuff – although probably, if students paid attention to what you said in class or to the syllabus, they would know what the answer was much of the time. In any event, I am a big believer in face-to-face encounters, and I teach an Intro to Philosophy class with 340 students. I use PowerPoint, but I will not put the PowerPoint slides up on Blackboard, because some of them would not come to class if I did that. I believe there is something about being there, and I cannot quite articulate it, but I try to carry on a conversation even with a 340-student group. I happen not to be a fan of the online university, even though I know that a lot of things can be dealt with very efficiently that way. I guess I have not yet seen some sort of dramatic change that I am aware of, frankly. Maybe it is the kind of person I am: I always spend a lot of time with the students and work intensely with my graduate students; it has always been that way, so I do not think things have changed for me in 35 years of teaching.
What makes a good teacher today? How do you manage to command attention in an “age of interruption” characterized by fractured attention and information overload?
That is an important question and I talk about this with all of our new faculty when they come in. First, sometimes there are large classes — for example, this 340-student Intro to Philosophy I was just talking about — and this may be obvious but if you let any element of chaos or indeterminacy enter in, it will spread through the class like wildfire and things will come apart for you. This does not mean that you have to rule with an iron hand and be some sort of strong authoritarian figure, but it is absolutely essential to command some sort of respect – both you respecting them and them respecting one another as well as you. I make a big deal of that as necessary for the kind of activity we are engaged in, so I do not want people texting their girlfriends or surfing the web in class. Nowadays everybody has a laptop so you cannot have complete control over it, but I think you can still communicate a sense that we are here to engage each other – in my case philosophically – on issues that I think are important for your life. And I am going to ask for your participation and concentration for a certain number of minutes. The teacher does not have to be a grouch about it; you can use humour – and I have some humorous ways of addressing these issues –, but I think it is very important to establish a set of expectations about how we are going to respect each other. That also happens in smaller classes, but it is not such an issue there because you still have that face-to-face intimacy. In a class of 20 people, you are right there and they are held accountable for their behaviour in such a way that they cannot have anonymity. So I think there are all these kinds of distractions, but I have found ways to at least minimize the negative effects. Still, it is always an issue and we as a faculty, at the beginning of the year, often end up talking about how to manage those kinds of situations.
What advice would you give to young graduate students and aspiring university professors?
Well, there is no generic kind of advice that applies to everyone, but one of the things that concerns me is the fact that we have elevated the level of expectation for performance in virtually all of our disciplines to a level where I am not sure it is healthy for people in the long run. So I try to get people to pace themselves a little bit. I have all these very bright students who are expected to have publications to improve their job chances. This means that they have to start on this fairly early in their graduate career, in part because in philosophy getting things published takes a while. Students need guidance about getting the stuff out there, and they also need to understand that things that were published before they became professors are not going to count toward their tenure and promotion, so they may want to pace themselves in that transition between graduate school and their first job. Like I say, I think we are putting too much pressure on people publication-wise, and it is producing a lot of junk. This also causes graduate students to try to find some little thing they can write on and make a contribution to; they get exclusively focused on that, and I just do not think that is good or healthy – it is not what produces mature, reflective, interesting scholarship. I see this happening all over the place, so I try to communicate to my students that life is not just their jobs. They are all go-getters and they are going to be committed to their work, but it can eat you up, too, if you are not careful. I worry about younger generations of students who will be going to the job market – I hope I am wrong about this – but I am expecting to see in their 40s and 50s some kind of burn-out more than we have seen in the past. And this is partly media and access; it is just accelerating the pace of everything we do.
This actually makes me think back about an earlier question you asked: once you have email and computational capacities – that was supposed to make everything easier…
Yes, that was the idea…
…and administratively it was all going to be easier. But you know what it did? It meant that they could ask you to generate data sets, do analyses, produce reports, and so on. It sped everything up – letters, memos, reports, evaluations – with demands for ever more available information and replies to communications. Well, it just skyrocketed, and everybody I know says now that they have less time than they had before. I think that is one of those unintended consequences of those technologies. This is what students face and there is no easy solution to it, but they need to think about the quality of their life, too. They have to understand that they are going to be in harness for the first few years. I see a lot of them pretty successfully being able to balance that and actually have a life.
I guess this leads nicely into the next question. In 1964, Marshall McLuhan declared, in reference to the university environment, that “departmental sovereignties have melted away as rapidly as national sovereignties under conditions of electric speed.” This claim can be viewed as an endorsement of interdisciplinary studies, but it could also be regarded as a statement about the changing nature of academia. Do you think the university as an institution is in crisis or at least under threat in this age of information and digital interactive media?
You might expect this to be the case, but I have not seen that happening in a really significant way yet. I am a great believer in interdisciplinary/cross-disciplinary work. And that follows from what I said to you early on, which is that there is no single method for getting truth or insight. So we need to draw pluralistically on the methodologies from a number of different disciplines and try to find ways to blend those together and find converging evidence. Everybody knows that starting twenty years ago, or maybe even earlier, universities were all boasting about how important interdisciplinary work was – and they wanted you to do it. Well, I really believed in that, and I still believe in the importance of cross-disciplinary research, but the fact of the matter is that universities are still, administratively, centered around departments; and I am not seeing great changes there. I know there are some interdisciplinary centers out there, but those things, in my experience, especially at State universities, where there is not a lot of funding, do not have an independent life of their own. People participate in interdisciplinary projects and research centers because they care about it, but they tend not to get a lot of support to do it. So I do not see those disciplinary boundaries melting away very fast in the new technological age. The reason for this inertia is mostly administrative. Departments remain invested in getting money, resources, faculty lines, and professional recognition, and all of that still works against the dissolution of disciplinary boundaries.
Now, what you were really asking is what becomes of the university in this age of open source. Of course, we are seeing some changes in terms of online material, but I think it is pretty clear that the university is a conservative institution by nature. Yes, there are changes but I do not see the university melting away or being in crisis about that at this point. I know there are all kinds of pressures, of course. One thing that happened is that we have become obsessed with the metaphor of Education as Business, and this transformation is driving everything. Everybody knows it, and by now there is even a certain necessity to that, but oftentimes it is just a cart-before-the-horse situation. Good academic priorities and practices are too often being led around by the nose by business concerns. I do not think that has anything to do so much with technological changes; what will bring money and students has taken over other considerations. That said, I think the university structure is going to be around for a very long time.
In Understanding Media (1964) McLuhan claims that “all media are active metaphors in their power to translate experience into new forms” (p. 57). Would it be fair to say that the philosophy of language, so in vogue in the 1970s and 1980s, has been assimilated by what is nowadays called the philosophy of information?
Good question. I wish that were true in a way. I have a particular perspective on this, because much of my career has been in a way a criticism of traditional philosophy of mind and language – views about meaning and cognition. Traditional philosophy of language was pretty much armchair. It did not pay much attention to actual linguistics. It did not look at the science of language and thought. It just sat around and made conceptual distinctions and worried about problems of reference and truth, and I find that whole approach problematic, to say the least. In much of my work I have provided a criticism of and an alternative to mainstream analytic philosophy of mind and language, and the alternative is one that incorporates the work of the cognitive sciences. What that means is that you bring in empirical work on mind, on conceptualization, on reasoning, on symbol systems, on emotions, and on neuroscience, and I do think we are making progress there. But cognitive science is not just information science in a narrower sense; it is cognitive science, part of which is information science. I think the meteoric rise of the cognitive sciences has been therapeutic, but I must say that if you look around the major programs in the USA, most of them are still doing old-fashioned analytic philosophy of mind and language. I think that cannot stand, however, because there is too much good empirical work being done – including neuroscience and information science and various aspects of cognitive science – that changes how we understand cognition and language, that no informed and reflective person can ignore. I think the younger people coming up are more attuned to this, and they see the promise it has. Nowadays there are more and more programs in my areas of research that appreciate the value and contribution of the cognitive sciences. So a new blend is emerging. In my own work I am trying to contribute to what that new perspective would be.
It is what George Lakoff and I call a kind of “empirically responsible philosophy”. So my hope is that traditional philosophy of language will actually be superseded by a new cooperation of many different empirical approaches and methods that focus on natural language, meaning, conceptualization, and thought. And I hope that happens during my lifetime rather than later! I sometimes wonder how some people can look themselves in the face when they continue to ignore empirical work on language, thought and cognitive processing, and still claim to be doing philosophy of language and mind.
Your work on metaphor has shown that there is an embodiment dimension, an ontological weight, a constitutive function to language. Yet many people today continue to think of language – and information – in analytic and atomistic terms. What would you say to those who still believe that language represents a pre-given reality and information amounts to a set of symbols?
As you know, my whole career has been about getting people over that formalist and atomistic conception of meaning. What would I say to those who persist with this view? I would say: “Look at what we are learning about human cognitive processing. It raises deep and serious questions about that older conception of thought and language, according to which thought is the manipulation of meaningless symbols, under the guidance of formal rules, that somehow end up being related to actual states of affairs in the world. The cognitive science research also calls into question many of the founding assumptions of generative linguistics, information-processing psychology, and early artificial intelligence models. By “cognitive science research” I am not referring to highly speculative work, but rather to widely accepted experimental bodies of empirical work. No doubt, we can expect that not all of this research will stand the test of time, but blithely ignoring this empirical research amounts to putting on blinders. It reminds me of a child who sticks his fingers in his ears and goes: “la, la, la, la, la. I am not going to listen to that. Don’t tell me that. I don’t want to hear that.” That is how I feel. If you pay any attention, I think, to the last thirty years of cognitive science, you cannot sustain those traditional models.
People are invested in their projects and I understand how they get invested, but it seems to me that they have narrowly and restrictively defined the problems they deal with, and they have ignored vast stretches of human meaning. That is one of the things they do; they say, “well, language is about concepts and propositional structure, and that is what meaning is about – it’s about reference and truth conditions that are propositionally expressible.” And then they ignore all of the vast dimensions of human meaning that go beyond the propositional, as though that is not relevant to their project. They just dismiss it as something they do not have to deal with. And then they come up with this story: “Well, we have these innate modules that generate language and logic and values.” Well, yes, there is some modularity in brain systems, but nobody who does the neuroscience thinks that you can explain everything in terms of modularity. You have to have the interaction of these systems with reciprocal feedback and so on. You have to have massive parallel processing and neural binding across functionally different brain regions.
So that is what I would say to these people, and my own work on metaphor is an example of this. Critics were very resistant to seeing metaphor as a central process of human cognition, because it upset their nice little picture that we have literal sentences, that they are the bearers of meaning, and that they map on to the world or they do not, and that is how knowledge is possible. Well, Conceptual Metaphor Theory challenged virtually every aspect of that received view, and nowadays everybody has a theory of metaphor and they are fighting this out. Some of the metaphor research led into explorations of the role of the body and brain structures in the constitution of meaning and in how we think. I think the water is under the bridge on that now. We cannot go back. And this new empirical work requires us to do dramatic rethinking of those received views about language. So I find it almost bewildering and grotesquely irresponsible that people can ignore all of the thought and language research from cognitive science. I realize that you can look at empirical work and have different interpretations of what it means and what it entails for our view of mind, and there is no way to avoid these kinds of discussions, but at least they are addressing the empirical research.
The following question was drafted by Professor Lance Strate: “The linguistic relativism associated with Edward Sapir, Benjamin Lee Whorf, and Dorothy Lee was successfully challenged by Noam Chomsky and his followers, and it is only recently in linguistics, in the post-Chomsky era, that renewed interest in that perspective has been in evidence from scholars such as Lera Boroditsky. Your own work on metaphor seems more consistent with what was known as the Sapir-Whorf Hypothesis than with Chomsky’s notion of a universal grammar. What is your view on Sapir, Whorf, and Lee, and the recent revival of interest in their perspective?”
Okay, I will just say two things about this. First, George Lakoff wrote a chapter of his book, Women, Fire, and Dangerous Things, on Whorf. He said: here are X number of dimensions concerning aspects of cognition and language use, and so you can have several possible interpretations of the Sapir-Whorf hypothesis about language structuring our thinking. Some of these interpretations represent a much stronger linguistic relativity view than others. So the first thing to realize is that there are degrees of linguistic relativity; and what Lakoff tries to do in a really nice way I think is this: given where we are now in terms of cognitive science, a certain set of these interpretations seems supported, while others seem too strong. So it is not just the Sapir-Whorf hypothesis, as if there was only one interpretation. The hypothesis can mean a number of different things. It has been almost 30 years since that book came out, so there is probably new work that could be done even now to fine tune Lakoff’s analysis.
The second thing is: Lakoff and I said that metaphors are conceptual. By this, we were not saying that the language we use determines how we think in the strongest sense; what we were saying is that metaphors are not just a matter of words that point us to a more basic, literal conceptualization. We were saying that human cognitive processes rely on metaphors as one of the primary devices of abstract conceptualization and reasoning. These underlying metaphorical concepts are going to show up, not just in linguistic expressions, but in all forms of human symbolic interaction. The fact that a language rests partly on a certain number of systematic conceptual metaphors means that our language will activate those metaphors for us and create a framework for how we think about a topic. The meaning is generated by the conceptual mapping. Now we have added on to that a neural theory of mapping, and people are beginning to develop it. So we are saying that the strongest version of the Sapir-Whorf hypothesis – that language determines how we think – probably does not hold up across the board; however, that language is a manifestation of these cognitive or conceptual metaphors, which vastly frame and systematically organize the way we think, does seem to be accurate. So ours is not the strongest version of the Sapir-Whorf hypothesis.
Now, I am wondering whether it wouldn’t make more sense to speak of an “ontological metaphor” instead of a conceptual one…
Well, at one point in my writing I used the word experiential metaphor, but it never really caught on. With conceptual metaphors, it is not like they are reflections on a pre-given reality. It is not that our world comes to us with all of its structures in a determinate fashion, and then our language just mirrors it – that is not at all what happens! So, I do think that you are right: in some contexts it is useful to say that there is an ontological dimension to metaphor, because reality is not a kind of complete externality that we as an organism encounter. I happen to be a Deweyan pragmatist on this, and I think the cognitive sciences support Dewey’s view that the basic locus of all the experience, thought, and action is an organism-environment interaction, and you cannot articulate this organism-environment whole adequately if you treat the organism and the environment as independent of each other. So when you are talking about cognition you are talking about processes of organism-environment interaction, and that is ontological because that is the way your world is being revealed to you or enacted by and for you. Our world is not just given to us as an objective fact, nor do we just make it up; rather, it is a product of an ongoing, developing, dynamic organism-environment transaction. The organism has its purposes, its interests, its goals, its past history, and a body with certain functional structures – and the way the world opens itself up to us will depend on all of that. So that is an ontological process, you are absolutely right, and metaphor is just one part of that process. We never said that it was going to be the be-all and end-all of all thinking; we focused on it because it had been ignored, and it was a way to open up some new ways of thinking about language and meaning and experience.
Of course, some people do not like it when you speak of “ontological metaphors,” but those who do not like it are probably still working with a kind of dualist, form/content, subject/object sort of approach. The pragmatist orientation – and it’s not just pragmatism, because you find it in phenomenology, in Merleau-Ponty and many other people – says that the world is an interactive process. And “interactive” is not even the best word, because “interactive” suggests that you have two independent things (organism and environment, mind and body) that come into relation. Well, what Merleau-Ponty and Dewey and William James said was that the distinctions we make emerge for creatures like us as significant, but they don’t exist ontologically independent of one another. We select out aspects of this experiential enactment, because this allows us to recognize important dimensions of our experience that we care about. So the whole story is an ontological one, you are right. We have to overcome the metaphysical dualism that underlies most of our traditional thinking about language, meaning and thought – and of course, all forms of communication.
I think Heidegger would be in agreement with you here, especially as it pertains to the adhesive character of intentionality – how when I go out the door there is no me over here and door over there, but rather a me-going-out-the-door sort of flow – and the notion of discourse as articulating the “background of intelligibility” – that which enables beings to come into their own and be in language.
Well, I am not a trained Heideggerian, but when you put it like that I am completely on board. In that sense, Heidegger contributed profoundly to our appreciation of how we inhabit our world and of all of the background conditions that enable the very possibility of any foreground or focal experience – and he was marvellous on that. The part that I resist on Heidegger is when he starts talking about Being; I get uncomfortable about that for various reasons. But at the level which he is focusing on being-in-the-world, the embodied and situated character of our intentionality, our thrownness and all of that, yes. And that illustrates where we started this interview, when I was talking about the need for these different methodologies. Here is phenomenology saying something that cognitive linguistics supports, that cognitive neuroscience supports, and that pragmatism agrees with too. So we get a kind of convergence through at least four different philosophical orientations.
What are you currently working on?
I am working on morality and values. My last book, The Meaning of the Body, was a continued treatment of the role of qualities and emotions in the constitution of human meaning, and the fact that all these aspects of meaning are really aesthetic dimensions of experience. Currently I am returning to my earlier concern with moral deliberation that I first took up in my book Moral Imagination. In light of all the naturalistic approaches to ethics that are popular today, I am trying to work out my view about values and the nature of moral deliberation, given this revised picture of the human being and the role of embodiment and aesthetic dimensions in meaning and value.