© Frank Macke and Figure/Ground Communication
Dr. Macke was interviewed by Laureano Ralon on February 14th, 2012
Frank Macke is Professor of Communication Theory and Rhetoric and Chair of the Department of Communication Studies & Theatre at Mercer University in Macon, Georgia. His work is frequently published in the fields of continental philosophy, semiotics, phenomenological psychology, and communication theory. He is a Fellow of the International Communicology Institute and serves on the Editorial Board of the Interdisciplinary Coalition of North American Phenomenologists.
How did you decide to become a university professor? Was it a conscious choice?
As an undergraduate student, I geared my entire academic program toward law school. I majored in Political Science and was deeply involved in intercollegiate tournament debating. I pivoted toward graduate school and university teaching during my last semester, because a very good friend and mentor chose to be candid with me. He told me that I was simply not cut out for a legal career. I was stunned to hear those words, but soon it became quite a relief to me. He was right. There was nothing about my way of thinking and living in those years that would have enabled me to psychologically survive (much less enjoy) a career in law. A number of people important to my intellectual development at that time were reading and discussing Heidegger and Merleau-Ponty. It became quite clear to me that I needed to be reading and thinking with those texts as well. Once I started graduate school in 1978, I naturally assumed I would be a university professor. That, I think, is a statement about the 1970s. Universities were not strapped for cash in those days. There were quite a few fellowships and graduate assistantships available in many areas of study. And then, upon completion, college and university positions were available. Everyone I knew from the era in which I was a graduate student who completed his or her doctorate found a suitable teaching job. I think it is much more difficult for students to make this choice now. Money to subsidize graduate study is available only to a relative few. Many doctoral graduates are underemployed if they are employed at all.
Who were some of your mentors in graduate school and what were some of the most important lessons you learned from them?
When I began my doctoral study in the philosophy of communication program at Southern Illinois University in 1980, I had the good fortune to take a number of classes with Thomas Pace, Richard Lanigan, and Stanley Deetz. All three of them were fully grounded in Continental philosophy and had detailed knowledge of advanced theoretical work in communication theory. Beginning in the 1970s, the philosophy of communication program at Southern Illinois had assembled a remarkable faculty—and it succeeded in attracting as graduate students some of the most brilliant people now working in the field. It was quite a stroke of luck for me to be there. It was not that other programs in communication were without continental philosophers. It was that, initially, the unique combination of Deetz, Lanigan, and Pace sparked conversations that weren’t happening in other places—and many dissertations came out of that program that were years ahead of their time. About the time that Deetz departed in the 1980s (first for Rutgers, and then for the University of Colorado), Lanigan began publishing work that put Merleau-Ponty and Foucault in dialogue with one another. It was a profound and important intellectual accomplishment. As of 1988 Lanigan had published two books intertwining Foucault and Merleau-Ponty, with a third published in 1992. In 1988, after a six-year hiatus from working on my degree, I devoted myself to rereading Merleau-Ponty’s Phenomenology of Perception along with every translated Foucault text I could get my hands on. By the time I finished my dissertation in 1993 under Lanigan, I felt quite confident in the sense I had of Foucault’s concept of discourse and what it implied for communicative experience. My work with Lanigan’s writing and thinking has been ongoing.
I was among the select group of scholars who laid the foundations for the International Communicology Institute (of which Lanigan is the director), and the Institute has had five international symposia in the last decade or so.
Joshua Meyrowitz’s thesis in No Sense of Place is that when media change, situations and roles change. In your experience, how has the role of university professor evolved since you were an undergraduate student?
In a number of ways the role has not changed much at all, particularly for faculty at research universities. The central set of tasks remains the same. The work itself is, no doubt, conducted much more efficiently and with much greater range of access to research materials. But for the vast majority of students and faculty, yes, Meyrowitz’s thesis is very much supported by the highly visible changes that have taken place—especially at universities that are primarily devoted to the teaching of undergraduate students (which describes my case). To begin, quite a bit of the formal distance between students and faculty has collapsed. I can clearly remember my relationships with faculty when I was a student. It was a situation, a circumstance in which the only time faculty could be contacted for the purposes of having any sort of conversation was in the classroom, before and after class, and during office hours. Telephone calls tended to be brief; as I recall, when telephone conversations became complex, I was usually encouraged to make an appointment to speak with the professor in his or her office.
These days I am flooded with email. Students use it to conduct all sorts of business and they feel free to ask open-ended questions that would no doubt be better discussed either face to face or in the forum of the classroom. University business, of course, is all electronic, and in the last decade the amount of email and electronic documentation that a faculty member is expected to process in a day, a week, a semester, can be overwhelming. As well, email protocol seems to have emerged with a cultural expectation that it be answered within an hour or so of it having been sent.
I got my first full-time teaching job in 1982. So I taught for well over a decade before I was provided with any sort of computer for my office, much less a modem. I can well recall having a fair amount of quiet time in my office. I was able to do a fair amount of reading and writing when I wasn’t teaching in the classroom or keeping office hours. I would receive via campus mail a few memoranda each day from university colleagues or administrators that would demand my attention—which entailed, of course, a day or two to respond in most cases. Now, it seems all I do when I am in my office is sift through and respond to a continuous stream of email. There’s very little time for anything else. Once in a while I can grade a few exams and mark a few papers, but typically I prefer the quiet hours that I can find in my office at home.
The other day, purely by accident, I happened to discover that Raymond Geuss, who is a faculty member at Cambridge and an outstanding philosopher, refuses to do any business with anyone via email. I had to take note of what he further instructs on his faculty webpage: “Please do not attempt to use the secretarial staff in the Faculty Office as an indirect e-mail conduit; such attempts will be unsuccessful and will merely add to their work-load unnecessarily.” Mind you, I find nothing intrinsically wrong with making this sort of choice. He does provide a phone number, and I can only assume that his relations with students, colleagues, and administrators are satisfactory. (This particular case happens to embody the reason why I began the answer to this question as I did.) What I find stunning is his ability to pull this off in this era! So, I played on the web for a bit and decided to look up the webpages of scholars who I assessed to be of a stature comparable (or greater) to Raymond Geuss. Every single one I could think of is fully reachable via email—even, and especially, Noam Chomsky, whom you’ve interviewed for Figure/Ground! Geuss may not be the only prominent scholar who has availed himself of at least some of the communicative distance that was a part of a previous era of university life, but I find him significant in terms of his exceptionality.
Now, there is a tremendous benefit to students in being able to traverse this once-formal distance. I very much prefer the relationships I have with students now over the relations I had before the advent of electronic communication. It is easier to talk with them in the classroom when I feel as though we know each other a little better. And, overall, they do seem much more relaxed around me when talking face-to-face than they once were. But the structural loss of distance, in my experience, can be debilitating to the life of a teacher/scholar unless he or she is able to find a way to sequester time and space to think and write. The bottom line, for me, is that the greatest service that a faculty member can provide to her or his university is to develop intellectually by way of her or his scholarly commitment and to engage students as fully as possible with the insights and energy that derive from this commitment.
I simply do not keep up with my colleagues, especially the younger ones, in their commitment to cellphone use and social-networking. I do not carry a cellphone and I am not on Facebook. That’s not going to change any time soon. (I am a rather private person.) But my wife (who is a Professor of Theatre at my University) is a fairly active user on Facebook, and a number of my colleagues have come to enjoy it very much. A number of my colleagues are relationally interconnected with students via an array of social networks—practically running their courses or programs by way of such networks. I can see the changes taking place, and they are, generally, quite positive. But this is not a part of my experience. I am not an eager adopter of new strategies and gadgets for engineering communicative contact—I am mostly a sceptic—and so in this regard I am probably not the best person to ask about sensing new horizons for the role of university professors.
What makes a good teacher today? How do you manage to command attention in an age of interruption characterized by fractured attention and information overload?
I honestly do not think that the communicative elements that combine to produce effective university teaching have changed in any significant way. There are, no doubt, numerous potential sources of distraction, but I do not find students to be any more distracted or inattentive now than they were twenty years ago.
At the undergraduate level, only senior-level students, in my estimation, are ready to seriously undertake the seminar mode of classroom pedagogy. I have argued with my peers about this assessment. Nevertheless, after 30 years of undergraduate classroom teaching, I just can’t agree that even advanced lower division students can sufficiently measure up to the intellectual demands of a seminar in the human sciences; they need at least one solid theory course to give them a chance to attune their voices to the patterns of discourse that have come to define the theoretical foundations of the human sciences.
Personally, I prefer what Stanley Fish, years ago, labelled a “theatrical” mode of classroom teaching. (I do not recommend it for every teacher, and if my teaching were to occur at the graduate level, I would take a somewhat different tack.) This mode invites students’ voices into a dialogue, but it intersperses lectures and illustrations within this dialogue for the purpose of making or amplifying an argument in the readings that have been assigned. I can well remember my own level of comprehension when I entered college. I remember taking a class titled “Types of Eastern Religions” with a professor, now emeritus, named Dale Bengston. Needless to say the reading assignments for each class were daunting, especially for a college freshman. (At the time I was barely comfortable speaking knowingly about Western religion.) I kept up with the readings, but was far from able to speak confidently about them. Going to class, listening, and taking extensive notes was my primary means of survival in that course and it was my only hope of making sense of what I was assigned to read. There were 150 students packed into the lecture hall and during each class session I was swept away by the professor’s ability to bring all of these difficult concepts to life. I would exit the lectures and go back over what I thought I had read and, voila, everything seemed to make sense. My subsequent undergraduate and early graduate education in philosophy and social theory very much confirmed the virtue of this method. And so I have stuck by it.
Colleagues of mine have experimented with teaching styles, often in response to what I perceive as an ongoing premise that seminar-style approaches are, somehow, always pedagogically superior. I’ve seen numerous articles critical of the “sage on stage” model of teaching, accusing it of transforming students into passive receptacles for dead information. But I’ve not seen a lot of data documenting learning differentials for undergraduate students exposed to different models. I think students do respond better when they are actively engaged with the literature of the course and, again, I think graduate students benefit greatly from seminar approaches. Nonetheless, my experience guides me to believe that undergraduate students benefit from witnessing new ideas embodied by levels of language comfortable to their habits of mind, and then entering into the discussion from where they happen to be at that moment.
What advice would you give to young graduate students and aspiring university professors?
I sincerely believe aspiring professors should continue in their studies. Worries about jobs should not pull dedicated minds off of the path of scholarship. It likely will be the case that work will be hard to find for human science scholars in the next decade or so—and, perhaps, beyond, if the current political forces constraining national investment in higher education cannot be disabused of the foolish concept of “doing more with less” while simultaneously expecting quality higher education opportunities for an expanding population. But the alternatives to scholarship in the career possibilities of people seeking meaningful work in the current era are rather depressing.
Someone who finds tremendous meaning in the investigation of ontological or epistemological problems will not be able to find appropriate sublimation in, say, insurance, public relations, real estate, or banking. Now, in the natural sciences things are always different. It will, most likely, take ever more education for dedicated students of the natural sciences to stay competitive with market demands around the globe, but until there is a major philosophical shift in what the world values, persons well-educated in the natural sciences will find their education to comport well with the fundamental materialism of the market.
In 1964, Marshall McLuhan declared, in reference to the university environment that, “departmental sovereignties have melted away as rapidly as national sovereignties under conditions of electric speed.” This claim can be viewed as an endorsement of interdisciplinary studies, but it could also be regarded as a statement about the changing nature of academia. Do you think the university as an institution is in crisis or at least under threat in this age of information and digital interactive media?
I do think “the University” is heading toward a crisis, but I see the reason as entirely financial. McLuhan’s remark about departmental sovereignties was important and quite prescient during the age in which it was made. (One of my favourite phrases from McLuhan concerns academic inquiry suffering from “hardening of the categories.”) And, I think, it is still true that a number of departments and programs of study have been embarrassingly slow to adapt to social and cultural changes that have transpired during the information age. (Harvard, Yale, and Princeton, despite all of their history and prestige, may very well feature the least adaptive curriculum to the events of the latter half of the Twentieth Century!) But “the University,” in general is not nearly as conservative as it once was.
I teach at a university that one might hastily infer to be an intellectually conventional university. We more-or-less follow a Southeastern tradition with an arts and letters approach to matters of culture, language, and history; as well, we have a “Great Books” program that is prominently featured in recruitment materials (despite the fact that most students are not enrolled in it). Nonetheless, we have what I think are very forward-looking programs in Media Studies and Communicology. A large and ever-increasing portion of our faculty are involved in some aspect of interdisciplinary teaching and research. In fact, for well over a decade we have been hiring faculty with appointments in Interdisciplinary Studies rather than in the departmental structures left over from the mid-Twentieth Century.
I cite this example not to praise my own university but to make the case that universities in general have come quite a long way in their adaptation to the changes in information technology and social connection. To be somewhat more direct: if my university is actively pursuing such a forward-thinking curriculum with respect to media and society, you should have a strong sense of the direction the vast majority of North American universities are headed—Harvard, Yale, and Princeton, be damned. No doubt there is always going to be a set of challenges in appropriately problematizing discourse and knowledge in order to address technological and social networking changes that occur (and are experienced), as events, always in advance of received theory. But these challenges have been met and are being met within the ranks of University culture. Douglas Rushkoff and Jaron Lanier, as outsiders to the culture of academia, have been fascinating sources of insight with respect to new media and culture, but they are significant by virtue of their exceptionality, not their typicality. The Media Ecology Association (a group whose beginnings are very much tied to a commitment to McLuhan’s scholarship) is, with a few exceptions, an overwhelmingly academic group.
The critical matter, I think, is that there is, at present, simply no alternative to the university as a site of inventio and radical intellectual work. As Russell Jacoby, in The Last Intellectuals, carefully argued, years ago there was a thriving non-academic intellectual culture that could be found in major cities. Figures like Dwight Macdonald, Paul Goodman, Malcolm Cowley, and Kenneth Burke were able to contribute to Western intellectual life without becoming beholden to the restrained conventionalism of academic culture at that time. But, for decades now, the world outside of university culture and academic life has become a vast wasteland. When right-wing political voices decry the state of university life as having become bastions of radical thought, they are not entirely wrong. I agree with Jacoby that there ought to be some other place for intellectuals and artists to live and work and interact. Yes, it would be nice. Yet even two decades into the new economy of the world wide web, there are no such places. There is, to be sure, a proliferation of voices on the web, but as of now the most important intellectual developments to be found on the web, especially regarding media and culture, still emanate from university intellectuals.
If you look closely at the way in which McLuhan employs the phrase “departmental sovereignties,” I think you might see that he is not arguing against the notion of disciplines or even departments, per se. Departments, after all, are a logical convenience of academic systems to organize their faculties. There is nothing particularly foreboding about the concept of an academic department. The name and personnel of my academic department have changed three times since I have been at Mercer, and I would not be surprised were it all to change a couple more times before I retire. McLuhan’s scepticism, as I see it, was a matter of witnessing the resistance of academic departments to examining changes that were occurring right in front of their noses. Perhaps another way of phrasing the matter is that the first third of the twentieth century couldn’t believe its ears, the second third couldn’t believe its eyes (which is the point at which McLuhan was thinking and working), and the last third can’t believe its flesh. But curiosity, discourse, and scholarship are not standing still within the academy as critical questions arise.
As a Foucaultian, I find the concept of discipline to be a matter of tremendous epistemic significance. Although it does not take nearly as much time to form as it did in previous ages, an academic discipline is a significant intellectual accomplishment. It is, of course, contingent on other disciplines, flowing in and out of their histories and methods, and like all systems it is bound to experience entropy. But it serves a critical function for intellectual work. A discipline entails a set of conditions for asking questions, addressing experience, assembling data, and reading texts. The discipline of sociology concerns the socius, the social body, as a thematic and problematic for the analysis of the comportment of groups. The discipline of psychology concerns the psyche, the Geist (spirit) or mind, as a thematic and problematic for the analysis of meaning, intention, and individual comportment. The discipline of communicology, as my colleagues and I have recently come to formulate and define it, concerns the communis, the relational body—the chiasm (or flesh), as Merleau-Ponty puts it. The chiasm, understood, again, as the relational-body, or the speaking-perceiving body, is not interchangeable with the socius or the psyche. The speaking-perceiving body is defined by its mortality—by the inarticulate vulnerability of its birth and infancy and, then, by the processes of maturity, aging, and departure.
Communicology, psychology, and sociology are, all three, quite recent developments in the history of intellectual work. The three of them, along with the systematic study of language and representation (in the philological tradition as: semiotics, linguistics, rhetoric, and poetics) have immense potential to constitute a vital and responsive Geisteswissenschaften for the 21st Century, particularly if the North American scholars in these fields would, once and for all, let go of positivism and behaviorism. Simply, I have no idea how one can productively generate theory and insight into matters of human experience with the same general methods used for the analysis of the natural world.
So, going back to your original question, I do not think “the University” is in any sort of danger for reasons having to do with a failure to adapt to the revolution in social media and information technology. What threatens “the University” is an ideological turn toward a particular species of governmental austerity that refuses to see expenditures in science, understanding, and education as necessary investments in the future of civilization and the future of the planet. Right now, colleges and universities pay their bills by a combination of tuition, charitable contribution (both personal and corporate), and various forms of government support. The bill for tuition has gone up dramatically in the last couple of decades—and it is a sum that is borne by the student and his/her family. When borne by the student and his/her family, it seems to be a massive amount—an insurmountable sum to pay back, even with limited interest rates on loans. If trends continue, college education will be priced beyond the reach of most families. Unless government contributions are able to pick up the slack, the University—the intellectual superstructure of contemporary existence—is likely to crumble either into some sort of subspecies of corporate hegemonic life, or a rarefied intellectual retreat. This sounds, I am sure, melodramatic, but I can’t really phrase it any other way. Given what the developed world spends on its military, educational investments shrink in comparison—even considering a significant increase in commitment to offset the direct investments of families and students themselves. Without getting into the micropolitics of budgeting and efficiencies, my argument ultimately becomes focused on a matter of values and priorities.
What will it take to sustain a civilization such as ours as the planet becomes ever more crowded and endangered by the various toxins secreted by a population that has made the sorts of commitments ours has made toward the generation of power and the production of food? As an intellectual, my immediate response is to say that the cultivation of an ever-greater percentage of literate and tolerant people, educated in the various sciences, and less inclined toward panic and violence is an unquestionable good. But the political trends, at the moment, are not moving in this direction. In both Europe and America, dominant political voices are calling for an attitude of austerity toward matters of domestic and social spending in an era of government indebtedness. Rather than increasing government revenue or seriously contemplating reasonable cuts to the military, the strong consensus is toward cutting funding for health, education, human welfare, and domestic infrastructure. The various attempts to make higher education into a for-profit market venture have, it seems to me, been dismal failures. The cost remains high for students and the quality that is offered pales in comparison to what traditional colleges and universities can provide. It seems that unless there is a systemic shift in values and priorities, colleges and universities will continue to seriously diminish overall quality under the premise of “doing more with less” and tuition costs are going to separate more and more young people from higher education.
In 2009, Francis Fukuyama wrote a controversial article for the Washington Post entitled “What are your arguments for or against tenure track?” In it, Fukuyama argues that the tenure system has turned the academy into one of the most conservative and costly institutions in the country, making younger untenured professors fearful of taking intellectual risks and causing them to write in jargon aimed only at those in their narrow subdiscipline. In short, Fukuyama believes the freedom guaranteed by tenure is precious, but thinks it’s time to abolish this institution before it becomes too costly, both financially and intellectually. Since then, there has been a considerable amount of debate about this sensitive issue, both inside and outside the university. Do you agree with the author? What are your arguments for or against academic tenure?
For persons who have never worked within the academy, the tenure system must seem like a quaint set of practices. I’m in my 30th year of university teaching. I’ve observed the tenure process quite closely over the years at universities in which I have studied and taught, and I’ve paid quite a bit of attention to the experiences of friends and colleagues at a number of institutions who have had to fight battles through various aspects of the tenure process. I’ve heard the arguments made by Fukuyama expressed by a variety of people through the years, so I am not taken aback by hearing them again.
In general and in specific, I find these arguments quite unpersuasive. It is well within reason that the tenure process can be improved. But I find the notion of abolishing it, or even changing it in some radical way, to be profoundly dangerous for the future of academic life. The economic situation for intellectuals in America has been deteriorating for decades. Without an assurance that a professor can expect to build his or her relationship to the community of scholars comprising her or his college and discipline, he or she is not going to emerge as a scholar in the mode that Fukuyama envisions—that is, taking intellectual risks and, put another way, not repeating themselves in print and in the classroom. Fukuyama himself was raised in a richly intellectual environment in Manhattan and, later, State College, PA. His parents were established academics. As such, he does not have much of a personal point of reference to a life lived by moving from one teaching job to another, year after year—as is the case for a growing number of people who have completed their terminal degrees in the sciences and humanities. It’s not as though teaching jobs have disappeared because of a demographic shortfall in demand. No, the number of people needing a college education has grown every year alongside the growth in the national population. I would argue that academia’s increasing reliance on thousands upon thousands of adjunct positions each year, each semester has done far more to damage the intellectual work of university faculties than any perceived threat from tenure ever could.
If there were genuine enthusiasm about improving the tenure process so that faculty could focus on scholarship without having to load their belongings onto moving vans on a regular basis, I would be interested in having a serious conversation about the subject. But the only enthusiasm I detect when the subject comes up involves the implicit goal of further destabilizing the lives of academics in order to make them a less powerful work force. People want to pay less for a university education, or at least get some control over the rising costs. On the administrative and corporate side of the issue, the amount of money paid each year to compensate faculty has seemed an effective place to make cuts—particularly if university faculties are increasingly powerless to fight back. Shrinking the number of secure teaching posts in academia has had the effect of producing what appears to be an “oversupply” of highly qualified people holding doctorates (or other terminal degrees) in their academic fields. But it is my considered belief that there really is no such “oversupply”—or at least there shouldn’t be. If universities were to actually hire into reasonably secure teaching posts the people they truly need to teach their preferred curricula to the growing number of people who want a college degree, the glut of qualified academics would shrink considerably. It might even disappear. Now, of course that is not going to happen anytime soon! The corporate attitude now pervading the economy of higher education will continue to find virtue in squeezing university faculties for as long as possible. Getting rid of tenure would simply be the coup de grâce.
Let’s move on. You are a Professor of Communication Theory and Rhetoric at Mercer University and also serve on the Editorial Board of the Interdisciplinary Coalition of North American Phenomenologists; in fact, one of your areas of specialization is phenomenological psychology. What are, in your view, the most important points of contact between Phenomenology and Communication Theory?
I see the phenomenological tradition, exemplified in the work of Husserl, Heidegger, and Merleau-Ponty, in particular, to be less a matter of intersection with communication theory and more a matter of foundation. There has been an unfortunate tendency among many academics, even and especially academics who identify themselves as “communication scholars,” to conflate the theoretical premises of information theory with the theoretical premises of communication theory. For far too many academics, the word “communication” itself is used in an astonishingly careless way. Information theory concerns messages, codes, and feedback. In any context, for an information theorist, a message is a discrete entity and the relationship of the message to the code is one of logic. “Decoding” a message is a rational process, which is one of the key reasons machines do it so well! The ongoing project in cybernetics and artificial intelligence is fundamentally one about discrete messages and efficient codes. However it is not a project concerning human communication. Hubert Dreyfus made this point compellingly clear in 1972 in What Computers Can’t Do, and the argument, for me, has only gotten stronger over the years. Dreyfus does not address the concept of communication per se, but he does talk extensively about culture and context. Though it is true that there are many aspects to how humans think and make decisions that can be simulated on various sorts of computational models, what he argues is that there are critical aspects of human thought processes—even including the nature of thought itself—that cannot be generalized, that are specific to the trajectory of human lives, that are tied to moments of crushing anxiety and unanticipated pleasure, that are embedded in tribal, cultural, and cosmological myths.
Ultimately, communication entails a transformation (perhaps merely a shift, but ultimately a change) in consciousness and meaningfulness. Roman Jakobson, a brilliant, pioneering linguist and semiotician whose work drew heavily from the Prague School, Russian Formalism, Saussure, Husserl, Bühler, and Peirce, is the thinker that communicologists have in mind when we speak of the theory and model of human communication. Using his model we can take note of some fundamental distinctions between information theory and communication theory. For Jakobson, a communicative event entails six elements, each associated with a communicative function. The “message” is but one of these elements, and according to Jakobson its function is poetic. That is, it opens the textual product (and residue) of a code up for interpretation. It is that which is taken as the meaningful text in a communicative encounter—which is also to say it can only be remembered as an interpretation, and it ought not be confused with the intention or intentionality of the communicative encounter. The meaning of the encounter itself exceeds the message. A communicative encounter creates intersubjective relations.
When a psychotherapist interviews a client, what she or he wants to know is the network of relations that substantiate the client’s selfhood. Communicology is concerned with the way in which selfhood is embodied through our relationships. I have taken quite an interest in family systems as the primary circle of intimacy through which we experience our childhood attachments, our growth, and our will to explore. In particular, the “crisis” of adolescence fascinates me inasmuch as it provides an account of subjectivity, by dint of necessity, breaking away from the necessary tribalism of family order. (I say it’s a necessary tribal order simply because families cannot operate by modern principles of democratic order; parents need to be in charge.) In adolescence, the potential for a new regime of relationships and relational intimacies produces, simultaneously, a tension with the various attachments and suasions of history and tradition embodied in the work of parents raising children. It’s a relatively recent development in human history, really. For a very long time the experience of puberty was overwhelmed by tribal and, then, religious imperatives. In such a context, there really was no way of experiencing “community” aside from commitment to religious and/or tribal order. Adolescence became increasingly decoupled from procreative impulses when people started living longer and, then, fast-forwarding to the modern age, it became a time of maturation—the launch of the individuated ego and selfhood. In the current era adolescence can last for a considerably longer period of time than the experience of puberty. It is a psychological transition rather than a physiological one. One could, I believe, make an interesting case that the experience of so-called “mid-life crisis” is really just the rupture of a prolonged adolescence.
I mention adolescence in the context of your question because, in so many ways, it represents the fragility of selfhood and the difficulty of transformation—as well as the anxiety that obtains from both entering and resisting the crucible of intimacy that tests each incarnation of identity. Phenomenological inquiry is built upon a dialogic premise, that what we ultimately mean when we are talking about “reality” is that we are talking about something that is intersubjectively constituted. Not merely subjectively constituted (that is, out of one’s mind and imagination) or objectively constituted (as a collection of things in themselves and therefore having a fixed meaning on that basis), but a reality whose meaning is derived from a set of social habits, customs, myths, symbolizations—in other words, culture, understood as a collective intelligence having both a history and a sense (mostly vague and inarticulate) of where it might be headed.
Each of our identities is contingent. For a personal identity to work it must be recognized. Other people must confirm in some way or another what it is that person believes him or herself to be. This process, of course, begins in infancy. A child’s hunger for affirmation will continue until he or she gains both comfort and confidence in who he or she is—and his or her existence is one of belongingness. Obviously he or she is “a member of a family,” but the child seeks to know what sort of member she or he is. And the stability of that identity is placed in jeopardy every time he or she falls out of favor, gets taken for granted, is yelled at (for good reason or for no reason), is ridiculed, humbled, confused, ignored, and so forth. This continues throughout life, of course, and becomes a game of a different order when family affirmation becomes secondary to social affirmation. In this age, rather than accepting family-arranged marriages and family-specified careers, we pursue our own lovers and negotiate, often against great odds and disappointment, our own professions. The anxiety can be tremendous. It wears a body out. The statistics regarding the number of people in developed countries on prescription antidepressants are staggering. Nevertheless, there really isn’t another path to a meaningful existence. Anxiety is the fundamental condition of life for a person in search of meaning, transformation, insight, bodhi, wisdom, enlightenment, whatever you might wish to call it. This is what I take to be at the core of Heidegger’s Sein und Zeit, and it’s also quite elegantly explicated in Hermann Hesse’s Siddhartha. The tranquility that finally takes hold of the body of the aging Siddhartha who finally has learned how to “listen to the river” could never have come from following “the Buddha” during an earlier phase of his life.
It would have meant nothing without all of the adventures and failures, without all of the stress and work undertaken in pursuit of something greater, something transcendent, something beyond the givenness of his comfortable place in the community.
In the later work of Foucault, this mode of being is seen as “the art of living”—or the choice to make of one’s life a work of art. (This was an important theme for John Dewey as well.) On one hand, it requires acts of parrhesia, which is a term from classical Greek philosophy that refers to the telling of difficult truths. Of course, we’ve known for a long time that a healthy society depends on the courage and insights of its parrhesiasts. But on the other hand, it necessitates a certain indifference to the rules of living, which some might see as an indifference to society itself. Nonetheless, it is very much implied in the concepts of individuality and differentiation that persons are unique, that everybody is “an exception to the rule.” So, to preserve itself as a system, to avoid entropy, a society will deploy all sorts of normalizations, even as it encourages its people to be “special,” to make their lives extraordinary. And, so, the artist, the philosopher, the poet, the cultural theorist, the communication theorist, has to navigate these inescapable ambiguities. Ultimately, this is how Foucault came to his own unique embrace of the figure of Socrates. It takes a great deal of work and patience to be able to grasp and appreciate these various negentropies and, at the same time, understand the necessity of change and transformation. It means that someone working in the human sciences can never be comfortable with received knowledge or with established ways of looking at things. It means paying attention to that which hides or is hidden in the shadows as well as that which is defined by light.
The existential phenomenological philosophies of Heidegger and Merleau-Ponty provide a rich set of insights into the ambiguity of human relatedness as human-being-in-the-world. Richard Lanigan has emphasized the significance of the French concept of le même et l’autre, which means both “self and other” and “the same and the different.” (Vincent Descombes, a philosopher who holds appointments at the École des Hautes Études en Sciences Sociales and the University of Chicago, titled his important 1979 work on 20th Century French philosophy: Le même et l’autre.) Not only is the concept vital to understanding phenomenology’s perspective on the human condition, it is critical to understanding the complex interlacing of human connectedness. Merleau-Ponty came to see that this interlacing of connection and consciousness describes the very essence of the human body as an instrument of life, thought, and pleasure that is simultaneously perceptive and expressive. The body, as a medium of life and experience, is communication. I am both self and other. As well, any personal relationship that one might have, for example, especially a family, is both a sameness and a difference. It is a difference in that it is unlike any other relationship one might have with another person, but it is also a sameness in the same way that a system persists in its form and structure to avoid entropy. When we reconnect with a person, we don’t have to reinvent the wheel. Instead of asking, “which is it really—is a friendship a sameness or a difference?” one has to say it is both, as a person, any person, is herself or himself both a sameness and a difference. We embed our sameness in our identity, our habits, our memories, and our ethics, but, at the same time, nothing about us—particularly time, biology, chemistry, thought, emotion, and memory—ever stands still.
Inasmuch as communication theory obliges us to understand the meaning of human community and communion, it follows that we must pay attention to the manner in which human experience is constituted in relationship. Even with incredible scientific advances, such as in vitro fertilization, or even cloning, human beings are not ultimately created in laboratories. We are, each, carried to term in a womb. Upon birth, even after the umbilical cord is cut, we cannot survive apart from human nurturing. Our existence is relational. Our experience is relational. Our meaning is relational. From where I sit the only way of asking the right questions about experience is by way of existential phenomenology and, after the strong influence of Richard Lanigan, semiotic phenomenology (which he has termed “communicology”).
You have recently partaken in a book edited by Corey Anton, Valuation and Media Ecology: Ethics, Morals and Laws. What attracted you to Media Ecology and how did your background in Rhetoric inform your contributions to these fields?
I think anyone interested in the body, especially following the work of Merleau-Ponty and Foucault, will find immediate satisfaction from reading McLuhan. Many years ago I had read The Medium is the Massage. It was not assigned for any of my classes. In fact, a roommate had picked up a used copy at a bookstore, and I started reading and experiencing it as soon as he was finished. This was the 1970s, and though I had vaguely heard of Marshall McLuhan (apart from seeing him in Annie Hall in 1977), I was unprepared for what I was about to read. Since then, I have assigned it numerous times to students—and it still catches them by surprise. Many of them don’t know what to make of the book at first. In any case, that book was an excellent gateway into the mind of McLuhan for me. I started a careful reading of Understanding Media in the 1990s after I had become conversant in the work of Walter Ong and Eric Havelock. Ong, himself, was well-versed in Merleau-Ponty, and includes a number of references to the Phenomenology of Perception in Orality and Literacy. I decided to spend some time thinking about media, the body, semiotics, technology, and human experience after having read a fair amount of work by Baudrillard. Baudrillard always struck me as thinking along lines similar to McLuhan, but he never explicitly references McLuhan’s work—which is not all that uncommon for European scholars—and while reading Baudrillard has always been fun, I thought that if I was going to comment on media and experience, I would be better off grounding my thinking in McLuhan’s work. So, in 1998, Lance Strate organized a conference calling for a reassessment and recovery of McLuhan’s work at the Lincoln Center campus of Fordham. I did not have a paper to present, so I just showed up to listen and take notes, and it was well worth it.
With the exception of Corey Anton, who joined the group a year or so later, all of the major contributors to what was soon thereafter going to be named the Media Ecology Association were there presenting papers or in attendance. Plus, Mark Poster, a distinguished Baudrillard scholar, gave a presentation, as did Donald Theall and Neil Postman. The listserv for the Media Ecology Association was created, initially, from all of the participants and observers at that conference, so I have been getting all of the emails ever since. It was from the MEA listserv that I discovered the call for Corey Anton’s book project.
I have known Corey for a number of years. When we first met, he had not yet fully developed his interests in media ecology. That came later. But I knew that any book project that he was going to undertake was going to be excellent. I happened to have a completed paper sitting around on my desktop that seemed quite relevant to the book project as he described it. Needless to say, I was quite pleased that he was able to include the paper as one of the chapters in the collection.
You ask about my background in rhetoric. Eleven of my years as a full-time professor were spent directing intercollegiate tournament debating programs. So, a great deal of my time was spent working with my students researching, writing, and refining arguments and cases dealing with matters of ideology and public policy. All of that effort and all of that time (not to mention my four years of undergraduate debating and my four years in graduate school working with college debate teams) has contributed to a fairly sharp sense of what is and is not defensible. It has enabled me to do the sort of theoretical writing that has come to characterize my work as a scholar over the years. Basically, human science theorists think through problems, ask fundamental questions, and find interesting ways to make cases. Thinking through problems, however, must simultaneously avoid what phenomenologists have termed the “natural attitude” (i.e., following one’s own “common sense,” or accepting either the discourse or the definitions of received narratives—what journalists have come to call “conventional wisdom,” which, nonetheless, fairly saturates the discourse of the vast majority of journalistic work) as well as resist the temptation to let ideological positions of any sort frame the manner in which the theory will emerge. I took a course in the Summer of 1989 with Richard Lanigan titled “Foucault’s Rhetoric.” In addition to being a comprehensive course in the thought of Foucault, as well as a window onto the rich intellectual connections between Foucault and Merleau-Ponty, it was a course in the rhetorical and semiotic strategies Foucault himself employed in his writing and thinking. As I said, that was back in 1989, and that course has had more of an effect on my thinking, my writing, and the pattern of my intellectual development than any other academic experience I can think of.
All of the subsequent projects, institutes, postdoctoral coursework, as well as the degree in psychotherapy I finished in 2006, have, I think, been intellectually and psychologically translated through that incredibly rich course in Foucault’s semiotic and rhetorical methods.
What are you currently working on?
At the moment I am finishing the editorial work on a book that is due to be published this year with Fairleigh Dickinson University Press. It will be titled The Experience of Human Communication: Body, Flesh, and Relationship. Quite a bit of what we’ve spoken about regarding phenomenology and communication theory is developed in the book, as well as some of the work I have done with the writing of Georges Bataille. There are some fascinating theoretical overlaps between Bataille’s theory of the body and flesh and that of Merleau-Ponty, although there’s virtually no evidence that the two of them were in any sort of direct communication. There is no doubt that they knew of one another. In fact, they were both participants in a seminar conducted by Alexandre Kojève in the 1930s. What I have for some time found quite interesting is Bataille’s set of reflections on the connection between eroticism and communicative experience.
Next year I will start another book. It is tentatively titled What Is Communicology? I have been granted a sabbatical next year to get as much of it finished as I can. What I hope to do in the text is conceptualize communicology—which, again, Lanigan has defined as a semiotic phenomenology—as a course of study, or more specifically, as an academic discipline. Now, I am not interested in writing a “textbook”—at least as academic textbooks are typically conceived. It would be a graduate level text that would, from my unique perspective, provide a history and framework for the concepts and premises that differentiate communicology from cybernetics (that is, information theory), psychology, sociology, anthropology, philosophy, cultural studies, and so forth. As well, I think the text is needed because the field that now goes by the name “communication studies” or, more ambiguously, “communication,” has been without clear theoretical and methodological parameters for some time. The first intellectually significant academic textbook purporting to be about the subject of “communication” came out in 1951. It was written by Gregory Bateson and a psychiatrist by the name of Jurgen Ruesch. It was widely read and well received at the time, but its concepts were absorbed into the fields of psychology and psychiatry as though they had always belonged there. Speech and rhetoric scholars who had become interested in general systems theory folded the insights from the text into their curricula, along with another well-known text from a group of authors working at the Mental Research Institute (MRI) in Palo Alto, California. The MRI comprised a group of psychiatrists and psychotherapists who had collaborated with and were strongly influenced by the work of Bateson.
The text was Pragmatics of Human Communication. It was written in 1967 by Paul Watzlawick, Janet Beavin, and Don Jackson, and it soon became required reading for graduate students in speech and rhetoric. By the 1960s and 1970s, courses had begun to crop up in American departments of speech that had nothing to do with argumentation, public speaking, or rhetoric: group dynamics, interpersonal communication, non-verbal communication. By the 1970s, some of these departments were changing their names from “speech” to “speech communication”—eventually, by the 1980s, to simply “communication” (or its more recent variant, “communication studies”). But there really was no intellectual synthesis taking place. Theoretically, the departments that evolved were just a hodgepodge, a medley of courses that had something to do with talking, connecting, understanding, listening, relating, deciding, and organizing, along with the standard fare of courses in speech, argumentation, and rhetoric. If the courses were not in such demand and the departmental majors were not as incredibly popular as they soon became, it is quite hard for me to imagine a college or university curriculum committee approving, from scratch, a curriculum like the ones that emerged in colleges and universities all over the country by the end of the 20th Century! As such, it’s not uncommon, even now, to hear scholars in “communication studies” defining rhetoric as communication or, as I mentioned earlier, completely confusing communication theory with information theory.
The term “communicology” has come to describe the project of explicating the theoretical and methodological coherence that should have emerged following the work of 20th Century scholars such as Bateson, Ruesch, Jakobson, Sapir, Whorf, Simmel, and Cassirer, as well as the theories of embodiment, expression, and intersubjectivity that flow from the phenomenological tradition. In any case, that is the story that I seek to tell through What Is Communicology?