In Defense of Catch-Alls

I had planned on blogging on a different topic this week, but yesterday’s reading, specifically New Media Old Media: A History and Theory Reader (2006, edited by Wendy Hui Kyong Chun and Thomas Keenan), presented a really interesting connection to a number of fields and concepts that I work with. In the introduction to the reader, Chun unpacks the etymology of the term “new media,” and reveals how it came to its current usage. New media refers to a wide variety of media objects, ranging from hypertext fiction to software studies to video games, and at times applying the term new media to all of them seems too reductive to describe such variety. Chun comments on this and a number of other limitations with the term, claiming that it is not accommodating:

“it portrayed other media as old or dead; it converged rather than multiplied; it did not efface itself in favor of a happy if redundant plurality” (1).

Add to this the problem of the “new” part of the phrase—how long are we going to call video games “new” media?—and it seems there are more issues with the term than benefits. One of the only unifying descriptions it offers, at least at first glance, is the general effects that new media has: “It was fluid, individualized connectivity, a medium to distribute control and freedom” (1). Consider how easily that description could apply to language itself!

Yet I argue, as Chun does later in her introduction, that the benefit of a term like “new media” is not in establishing a discrete set of objects for study, or a unified methodology, or a single theory that guides the whole enterprise. New media is something of a catch-all for its many objects, methodologies, and theories, and while that frustrates projects of definition, it also presents a rich, living, evolving toolbox for the study of media broadly construed. New media brings together a plethora of differing perspectives based on the relative closeness of their interests—it is not that they share the same objects, etc., but rather that they have some relationship to each other. They exist in the same constellation, or on the same “map” of new media studies, even if there’s a lot of ground between them (4). There’s a lot to be gained if the places on that map are in dialogue with each other, trading concepts and perspectives that build up their projects, often in unexpected ways. And, as our current political moment makes abundantly clear, there’s a lot to be lost by a sort of intellectual isolationism, looking only to a specific object, methodology, or theory while ignoring the wide-ranging relations that they have.

Of course one also needs specificity in order to have clarity and understanding. If our terms are too amorphous and fuzzy, then we can never get a conceptual grip on the things we study. This is where a catch-all like new media needs further contextualization and historicization, but one can readily see that in the field’s subdivisions—in the case of New Media Old Media, new media archaeology and new media cultural critique (4). The term new media has many referents, but it is not so difficult to identify which one is in question at a given moment.

A number of other fields and concepts enjoy the same benefits and limitations of catch-all status, such as Digital Humanities, electronic literature, and game studies. Indeed, it seems that any field title is something of a catch-all. However in each case I think we can see the same process sketched out above. A blanket term brings together a lot of meaning, but also demands further inquiry and context. For example, last week was the Global Digital Humanities Symposium at MSU, and afterward my colleague, Laura McGrath, tweeted the following:

“I find the label “DH” to be empty/meaningless, but also utilitarian.”

There’s an interesting dynamic at play here, where something that encompasses everything means nothing. Where meaning becomes overloaded and loses all meaning. So I share Laura’s frustration—I often joke that studying games makes me a digital humanist simply because games are often digital. But I also see the “utilitarian” aspect of the term. DH, like new media, brings together many scholars from many areas around a set of interests and questions—what does it mean to do humanities in a digital age? How does one make meaning with digital tools and objects? What can the humanities do to critique and foster newer and better uses for the digital? “Digital” is doing a lot of work here, and its apparent emptiness reflects how it creates a space for work–for generating meaning. I think there is a lot of work to be done, and where there’s a lot of work, one needs plenty of hands. So let’s use our catch-alls, all the while maintaining a critical eye to what they mean, what they include, and what they exclude.

The Holodeck 20 Years Later

As I move into the final weeks before taking my comprehensive exams, I’m taking the opportunity to blog about my reading and thinking as a way to reflect in preparation for being tasked to bring it all together. Of course writing is a practice too, and I think it grows through trying out ideas in informal settings such as conversations and blogs like this one. So for anyone reading this, I hope this provides you with a similar opportunity for reflection, and for trying out concepts old and new.

Today I reread Janet Murray’s Hamlet on the Holodeck, a seminal work for studies of narrative in new media. While I expected Holodeck to show its age after twenty years of media development, I was repeatedly surprised by how relevant Murray’s thoughts still are today. For example, Murray begins her book with a wary and defensive posture. She notes how new media are always frightening and seemingly fraught with danger: “Any industrial technology that dramatically extends our capabilities also makes us uneasy by challenging our concept of humanity itself” (1). She even feels the need to declare that the computer is “not the enemy of the book” (8), as though computers were or are out to get traditional media. It’s hard not to compare this to similar political and disciplinary moves made a few years later by ludologists eager to defend games from the specter of literary imperialism. Such posturing is regrettable if inevitable (I’m doing something similar right now!), but perhaps with the benefit of two decades’ distance we can reveal the blindspots it introduced.

The medium Murray is talking about is the computer, which itself might be a sign of age—these days we talk more about software applications and interfaces as media than we do the computer as a whole. This is necessary in a culture of media convergence where computers do more and more things all the time, and it seems reductive to amalgamate all media involving computers into the monolith of “the computer.” Yet in the 1990s the computer medium was more limited and less ubiquitous, and consequently more stable as a concept. To be clear, it is not that the computer is no longer a usable term, but rather that its possible referents have changed significantly in two decades.

What has happened to the computer has also happened to narrative—Murray’s other theme in the book. Murray was spot on in noting the fragmentation of linearity in computer narratives (be it hypertext, electronic literature, games, etc.), and several of her terms and metaphors, such as “multiform” (30) and “kaleidoscopic” (159) stories, remain helpful today. Likewise, her four essential properties of digital environments are still relevant: digital narratives are procedural, participatory, spatial, and encyclopedic, though that last property might be better expressed in terms of archiving and system memory. All of these properties remain in play, and indeed they have proliferated over the years into an incredible constellation of media and narratives.

Yet they also raise a question that is notably absent in Holodeck: what have these properties done to the concept of narrative itself, and how do we define narrative after such transformation and fragmentation? Some have pounced on this question to claim that narrative is no longer relevant (or at least primary) in digital media, and recently Markku Eskelinen has gone as far as to charge Murray and others with being “unacademic” in failing to define narrative. Exactly how we might define narrative will have to wait for another blog (or, more likely, another dissertation—I’m feeling coy), but Murray might inadvertently point the way. Just as the concept of the computer has fragmented and scattered now, narrative has done the same. This does not mean that these concepts are no longer applicable, useful, or significant. Indeed, I think they indicate the need to reexamine them in the field, as it were, and to unearth the similarities between the objects and places they have dispersed to. Computer and narrative apply broadly now, but there are still common elements between their broad applications. Let’s follow the paths they have trodden, and see what trails they have left behind. Only then will we be closer to seeing what they are and what they mean, at least until they change again.

What do we (English, the humanities) do?

The oft-cited crisis in the humanities of the past decades and the turn toward things like cognition, neuroscience, or the digital humanities generally are topics that everyone seems to have an opinion on. I realize that my ramblings here will be just another one of those opinions, but as a young, inexperienced scholar of the digital and the new (for now) I would be remiss to not deal with these topics in some way. The CFP we looked at this week seemed to be onto something when it, in true humanities fashion, wrote the following in a paragraph-long sentence of jargonese: “This will include exploring the extent to which discourses engendering neuroscience in fact do match neuroscience’s real world (social) effects; but it will also include interrogating the anatomy of the neuro-discourses themselves. . .” (it goes on at length from there). What stands out to me here is the focus on “discourses” and “interrogating”–in other words, on what is said and how it matches with what is done.

What seems to be the crux of many of these discussions is this: what do the humanities do, or what should they do? We often talk about how the humanities make the world a better, richer, more aesthetic, etc. place, but does this ever amount to more than rhetoric? To take it in another direction, I recall in my undergraduate years when one of my professors commented to me (this is a bit of a paraphrase), “English is always borrowing from other disciplines because it has no territory of its own”. This comment may seem a bit reductive, but it has always stuck with me because it bothered me so much. Why do we in English always need outside insight to do what we do? You’ll notice I’ve slipped into talking about English rather than the humanities more generally–let’s take it as something of a case study close to home.

What I suggest, and here I’m drawing on (always speaking through others) Derrida’s 1984 essay “No Apocalypse, Not Now”, is that we in English do discourse and the texts of all sorts through which it operates. What this means in practice, however, is that we do not do anything save through talking about how, where, when, and why other things are done. This isn’t a knock against us English folks, that is unless the how, where, when, and why things are said and done don’t matter. And here we arrive at something invaluable about the field of English–it reminds us that these things do matter.

To bring it back to current topics with cognition, neuroscience, and the digital humanities, it seems crucial that we always ask ourselves what these things do, and how. These trends have become very popular, and as such they bear great potential and demand great discernment. We shouldn’t do these things just because they are popular, or trendy, or even because they can “save” the humanities. We should do them because they are meaningful–because they offer something new to our pursuit of discourse. In doing so, they also alter fields supposedly distinct from the humanities, including science. As Cohen writes in “Next Big Thing in English”: “The road between the two cultures — science and literature — can go both ways”. Not only can it, it must. It isn’t that science somehow legitimizes what we do in English, as though discourse generally needs science (itself a discourse) to operate. It is that we, if we are honest in our work, traffic in discourses and texts of all types. That is what we do.

Maybe Kristin Chenoweth can help a bit here. Mostly I just want to link a song from Wicked.


Avatars, Narrative, and Absent Minds

My post for this week is going to be something a little different from previous weeks. I’ll be using this opportunity to introduce everyone to a particular game that relates to some of the questions we have been pursuing this semester–Gone Home (2013, PC/Mac) by indie game company Fullbright.

Gone Home is a first-person exploration game that tells the story of Kaitlin, a college student who returns home to find her family curiously missing. Kaitlin explores the house trying to find out what has happened to her family, and discovers quite a bit about them while doing so. Without revealing too much, the game has become noteworthy for its endearing portrayal of LGBTQ characters and their struggles.

What makes Gone Home so interesting with regard to our course is its focus on discovering and encountering the minds of other characters through the objects they have left behind. As we read in Bailenson’s “The Virtual Laboratory” for this week, “virtual behavior is, in fact, ‘real’” (94). Through a series of experiments with virtual reality, Bailenson and his team were able to show that “agents” and avatars encountered in virtual spaces are perceived in much the same way actual people are in actual space. I use the term actual space quite intentionally–as anthropologist Tom Boellstorff has noted with his studies in Second Life, it is not very apt to call it “real” space when what happens in both actual and digital spaces is “real”. Bailenson and Boellstorff (amongst many others) have thus shown us that our cognitive processes in virtual/digital realities are not so different from such processes in the actual world.

But Gone Home presents a different case. So we encounter avatars similar to how we encounter real people, but what happens when there are no avatars to encounter? What happens when those avatars are absent, and all we have is whatever they have left behind (or, to complicate things, what the designers created and made to look left behind)? We still get a sense of character in Gone Home, but that character must be discovered as part of an emergent narrative found and created by the player. I suggest that we use similar theory of mind processes to construct and interpret characters in Gone Home, but that these processes have been broken up. In other words, we are still encountering minds, but minds that have been fragmented into different objects that can be discovered or ignored by the player. This necessarily requires space–space for the objects to dwell in, and for the player to move in.

A further point to consider in Gone Home is that every act becomes a narrative one (a significant point in the game narrative study our group is designing). Unified character has been removed, and in its absence character must be recovered through interaction with objects. Because of this, even the simple act of moving within the game world has narrative import by virtue of navigating the space and objects that comprise the entirety of the game’s story. Play in Gone Home is narrative, exactly what our group is trying to prove in other games.

These are just a few threads to pursue as an introduction to the game. We will play Gone Home together in class on Tuesday–I look forward to seeing what everyone has to say about it!

Narrative, Play, and ASD

At last year’s International Narrative conference, I had the great pleasure of attending a panel chaired by Lisa Zunshine on “Cognitive Approaches to Narrative”. One of the panelists, Ralph James Savarese, gave a fascinating talk on using fiction to help persons with ASD to develop better social skills and the ability to understand other minds (talk was titled “Reading Ceremony with Autist Jamie Burke”). At the time I remember being very intrigued by the prospect of using theory of mind to help others in this way, and (if memory serves) I recall Savarese also mentioning this activity being similar to using games and play to help persons with ASD to simulate interacting with others. Unfortunately this was little more than a fleeting thought at the time, and I have never returned to it until this week.

If narrative creates space for play and play moves narrative–things games are making us realize–then what implications do these things have for persons with ASD? What caught my attention about the description of ASD on PubMed was its effects on “creative or imaginative play”, a “crucial area of development” (PubMed Health). I understand how ASD affects creativity and imagination, but why play in particular? Of course such a question opens up on a whole host of other ones dealing with play as a cognitive tool for exploration and growth, so it may be helpful to narrow it down a bit here. Is it that persons with ASD do not play imaginatively or creatively, or that they do not play at all?

The answer to the latter question seems clearly to be no–as we can see in our primary reading for this week (The Curious Incident of the Dog in the Night-Time), people like Christopher certainly do play. One of the objects in Christopher’s pocket when he is picked up by the police is a piece of a wooden puzzle (13), his mother later buys him another wooden puzzle that he plays with (216-217), and he even plays a game of imagining the trains to help himself cope at the train station (179). He also often plays Minesweeper when he is at home in his room with Toby. So it isn’t that someone with ASD (and here I know it’s problematic to draw general conclusions from a portrayal of a single fictional character, but bear with me) cannot play, nor is it that they cannot imagine or create. The puzzles Christopher solves are often of the brain-teaser variety, and require him to think very creatively in order to solve them. And yet there is something different about the way Christopher plays.

I suggest that that something relates to the structure and end-state of the play Christopher engages in. Christopher’s play is almost always rigidly structured, and more importantly it is play that must have a solution. Christopher does not like open-ended play, as seen in the imaginative play at the train station I mentioned: “And normally I don’t imagine things that aren’t happening because it is a lie and it makes me feel scared . . .” (179-180). Unfettered imagination is scary for Christopher because it presents too many possibilities that are impossible to bring down to one solution, and the stimulation and uncertainty of that are terrifying for him. Imaginative play must be tied to what is really happening, and failing that it must have a purpose and solution. This seems to me a crucial clarification of the PubMed definition of ASD–it is not that Christopher or anyone like him cannot imagine and create in their play, but rather that that imagination and creativity need to be structured with a purpose/solution. As seems to often be the case, Christopher is not dealing with a disability or lack of capability so much as a different form of ability, a capability that requires certain rules and structures to function.

Emotion, Feelings, and All Sorts of Nope

It’s rare that I find myself mostly opposed to a text, but one of this week’s readings provided just such an instance. I wrote in a previous week’s blog about the strange and apparently irresistible call to evolution in cognitive studies, as though the origins of every cognitive process can be explained with “the Hamburglar (I mean evolution!) did it!” This week’s reading in Damasio’s Joy, Sorrow, and the Feeling Brain provides yet another example of this trend, with its seeming reduction of emotion to evolutionary hardwiring. I say seeming reduction here because it’s quite possible that these arguments get fleshed out more elsewhere in the book, but alas they do not here. I’ll try to avoid simply restating the problems with assuming cognitive processes are evolutionary though, and take this post in a different direction with Damasio’s argument.

One of Damasio’s basic claims in Chapter 2 is that emotions and feelings are not the same, which could be the beginning of a really fruitful discussion about how a seemingly singular cognitive/physiological process is operating in a few different ways. However Damasio defines these different concepts in problematic ways in an effort to isolate them for study. He writes: “Emotions play out in the theater of the body. Feelings play out in the theater of the mind” (28). On the one hand this conception of emotion and feeling clearly delineates them, making them easily observable and testable. However this comes at the cost of reifying the mind/body distinction that is so endemic and problematic in Western thought. What do we lose when we reduce emotion to simply being a physical or physiological process? And are we merely enforcing an arbitrary distinction here, dividing emotion and feeling when they are always already bound up with one another?

The distinction becomes even more problematic when we encounter Damasio’s description of feelings as “always hidden, like all mental images necessarily are, unseen to anyone other than their rightful owner, the most private property of the organism in whose brain they occur” (28). If feelings truly are hidden in a manner that emotions are not, then we run into a problem of seeing where the hidden and the unhidden interface with each other. In other words, if we cannot see feelings, then how can we make claims about what the content of feelings is in relation to emotions? This problem does not stop Damasio from claiming that feelings “are mostly shadows of the external manner of emotions” (29), indicating that feelings come after emotions. Even taken within his own argument that these processes are bound closely together, this is a shaky assumption at best.

I found myself thinking about these problems with Damasio’s argument throughout my reading of Persepolis (by Marjane Satrapi). When we see Marjane’s mother and grandmother remembering the difficult life of her grandfather (24-26), can we truly say that the emotion is coming first, and the feelings second? The opposite seems to be the case. They are not sad until their feelings surrounding the memory of their loved one make them sad. Damasio would likely attribute this to the example coming from a fictional narrative that places feeling before emotion, but at the very least it seems to demonstrate that the connection he is tracing can work both ways. The anger present in revolution in the graphic novel seems to point to this as well–it isn’t that the revolutionaries are immediately angry and then find their feelings afterward. Rather, they perceive a narrative of a particular feeling, giving rise to emotion in equal measure. If emotion and feeling are truly separable here, they are interwoven in a feedback loop that makes them seem inseparable, and this definitely complicates any effort to locate a beginning and end to the loop. While there are parts of Damasio’s argument that seem to bear weight, as used they play host to a great many problematic assumptions.

The Logic of Nonsense: Stein’s Meaning in the Meaningless

It’s fascinating that we come to this week’s topic, Psychoanalysis & the Critical Interpretation of Narrative, through texts that strive to be profoundly un-narrative. Or perhaps queerly narrative? Unnatural narrative? In any case, there seems to be a definite trend in human knowledge-making to only see things clearly when they cease to work normally, or when they take up a position of enough distance and difference.

Let’s start with narrative when it succeeds though, and here we probably mean that it succeeds when it is communicated correctly. In their article “Speaker-listener neural coupling underlies successful communication”, Stephens, Silbert, and Hasson discuss their findings in an fMRI study of storytelling. Specifically, they note that brain activity seems to undergo “coupling” in communication, meaning that the brains of speaker and listener demonstrate remarkably similar activity in the process of relaying information (with delays accounting for the time it takes to speak and then hear the information). Furthermore, the closer the neural coupling, the more successful communication becomes (14428). These findings suggest that the processes of producing and comprehending speech (and thus auditory narrative) are similarly engaged in by both speaker and listener in communication. The implications of this for narrative are profound. It provides more evidence for what game narratives have been suggesting to us for some time–that narratives of all kinds are inherently interactive, involving listeners (and readers/players?) in creative and interpretive processes of storytelling.

What happens when neural coupling is frustrated or blocked, however? Do communication and meaning themselves just stop? Our readings in Stein this week might suggest otherwise. While Stein’s writing often seems to forego meaning altogether, it also operates on an internal logic that progresses through both repetition and sudden turns. For example, consider this passage from “Rooms”: “A lilac, all a lilac and no mention of butter, not even bread and butter, no butter and no occasion, not even a silent resemblance, not more care than just enough haughty.” Here several words are repeated and iterated upon as the sentence progresses. “Lilac” leads to “all a lilac”, taking a sudden turn to “butter”, repeated in “not even bread and butter”, and finally taking a sudden turn to “occasion” and “a silent resemblance”. While the sudden turns render the narrative here fragmentary, a sense of progression remains to both the sentence and the concepts it contains thanks to the repetitions and additions of words. This internal logic obfuscates meaning while also suggesting it, forcing the reader to search for meaning that is perhaps absent and to recognize the relative limitations of meaning in doing so.

Stein’s writing is by no means the first to accomplish this internal logic of nonsense, and it appears prominently throughout Lewis Carroll’s Alice’s Adventures in Wonderland and Through the Looking Glass. For example (just one of many), the exchange between Alice and the Red Queen in TTLG demonstrates the relative meaning of nonsense in a similar way: “‘You may call it ‘nonsense’ if you like,’ [the Red Queen] said, ‘but I’ve heard nonsense, compared with which that would be as sensible as a dictionary!'” (140). This statement is part of a longer passage where the Red Queen repeatedly contradicts Alice with nonsensical comparisons. Notice how the structure here is similar to Stein’s–repetition, addition, and a sudden turn (in this case an inversion).

What stands out in both these cases is how nonsense–an apparent rejection of meaning–can never fully escape meaning either. The instant anything enters language (or perhaps even consciousness itself), it becomes a thing, and importantly a thing that cannot be entirely divorced from meaning. Lerer recognizes this in his chapter, “Gertrude Stein: The Structure of Language”: “Because words are always interconnected by syntax, they can never say nothing” (166). Despite the difficulty of identifying any stable meaning in nonsense (if meaning can ever be really stable in any condition), the reader inevitably engages in the interpretive and creative acts of finding such meaning, even if only on a surface level. This point about reading and language speaks to a larger difficulty that nothing as a concept poses to consciousness, a difficulty I think is similarly posed by the concept of the infinite. The active conscious mind cannot truly inhabit or comprehend nothing, as the instant nothing is recognized it becomes something. At the same time, nothing always lurks beyond the boundaries of consciousness, much the same way meaninglessness lurks beyond the boundaries of language. And there always seems to be something generative about grasping after the ungraspable, as there is meaning in grasping after nonsense.

Making Things Up: Memory, Narrative, and Play

As I was completing my MA thesis in 2013, I ran into something of a conundrum. I was trying to talk about narrative in video games, and fighting against the notion that narrative in games is just something added onto play experiences after the fact. As Markku Eskelinen famously remarked, “if I throw a ball at you, I don’t expect you to drop it and wait until it starts telling stories” (Simons, “Narrative, Games, Theory”). This argument always struck me as something of a straw man–it’s not like anyone talking about narrative in games expects inanimate objects to suddenly start speaking. Nevertheless, it has proved to be a remarkably stubborn argument holding on in game studies. I recall my thesis advisor asking me something to the effect of, “But surely you don’t mean to say that playing kick the can in an alley is narrative?”.

Actually, that is exactly what I mean to say (more or less). Narrative isn’t just the unfortunate byproduct of experience, the redheaded stepchild showing up late to the party. Rather it is inherent to experience, always-already present and bound up in the very cognition of events. How would one even begin to prove this though–to the extent that one can *prove* anything of the sort? I was stumped by this question, until I made a truly serendipitous discovery when I was reading through the October 2014 issue of the journal Narrative, in which Hilary Dannenberg points out the importance of narrative in memory and the field of trauma therapy. As she says, “memory is narrative” (“Gerald Prince and the Fascination of What Doesn’t Happen”, 309). If memory, itself so experiential, is narrative, then other experiential things like play certainly can be too. But this is pretty speculative and has wandered pretty far from this week’s topics of memory and forgetfulness, so I should return to those.

The point that Dannenberg makes about narrative is precisely the point Jonah Lehrer makes about Proust and memory in Proust Was a Neuroscientist (2007). Lehrer is not dealing specifically with narrative in his text, but he is arguing extensively for a Proustian view of memory as something always changing: “Simply put, [Proust] believed that our recollections were phony. Although they felt real, they were actually elaborate fabrications” (82). Memories are not events, feelings, and experiences captured in stillness, but rather are “fabrications” or stories–constantly shifting, never quite the same as the experience when it happened. Lehrer goes on to say that memories get more inaccurate with each act of remembering, or, perhaps more aptly, misremembering (89). The narrative of memory shifts with each telling of the story, and this is not a bad thing. Indeed, this ever-changing process is how memory endures.

Lest memory feel lonely in its projects of making up stories and fabrications, it is important to remember that such processes are crucial to knowledge-building in general. Lehrer’s own project with Proust and neuroscience demonstrates this quite well. As much as there is apparently a link of ideas between a French writer who died almost 100 years ago and contemporary neuroscience, it would be a pretty large leap to sincerely think that today’s neuroscience is built on Proust, and neuroscientists in training can probably be forgiven for never having read his writings. The connection between the two is itself a fabrication–an incredibly apt one that reveals exactly what Lehrer and Proust are talking about with memory. It isn’t mere coincidence that a writer musing on his own life and past could come up with valid theories of memory. Proust observed tendencies in his own personal experiences with memory, and then built stories and theories on those observations. Is this not a similar or even the same process we use in scientific experimentation? Thus while Proust was not in reality a scientist, he provides an excellent example of how scientific processes and fabrication–making things (such as theories) up–are never too far apart. This relationship does not render all science less real any more than it makes all fiction more real. It simply reminds us that our mental processes might not be as easily compartmentalized as we’d like to think.

As further food for thought, here’s an image from the video game Bioshock Infinite, which also plays with the plasticity of memory:

[Screenshot from Bioshock Infinite]

By the Bye: A Defense of Distraction

The past 12 or so hours have been very distracting–my focus on reading things like Proctor and Johnson’s Attention: Theory and Practice and Laurence Sterne’s much earlier Tristram Shandy has been repeatedly derailed by MSU’s sudden win over Michigan. While this has been annoying in terms of productivity, it actually relates really well to the concepts of attention, distraction, and perception that this week brings us to. What does it mean to pay attention to something in terms of cognition, and how much can we pay attention to at once? How are attention and perception related to each other? Why does any of this matter?

In The Principles of Psychology from 1890, William James defines attention as the mind drawing specific objects out of a host of other ones: “[Attention] implies withdrawal from some things in order to deal effectively with others, and is a condition which has a real opposite in the confused, dazed, scatterbrained state which in French is called distraction, and Zerstreutheit in German” (404). James argues throughout his chapter on attention that attention necessarily excludes or subordinates the sensing and cognition of some stimuli–in other words, focusing shoves some stimuli to the periphery or even out of the picture entirely. What I find so interesting here, however, is how distraction–normally presented as attention’s opposite–is referred to negatively or dismissively. Distraction is “confused”, “dazed”, and “scatterbrained”, and a truly great education would involve minimizing it and training the mind to always return to attention (424). Distraction is the unimportant and insignificant; attention is the important and significant.

It would be easy to assume that this view of distraction has more to do with the values and attitudes of when James is writing, but the devaluation of distraction persists in modern studies of attention as well. In Attention: Theory and Practice (2004), Addie Johnson and Robert Proctor detail the history of attention studies from philosophy to psychology, and they begin to do so by introducing the example of an aircraft pilot. A pilot must focus on the task at hand by navigating a plethora of stimuli available to them, correctly deciding which information is important in order to successfully fly the plane (1-2). Here again we have mention of distraction as the negative–that which is unimportant and must be excluded in favor of what should be paid attention to. This makes sense from the perspective of performing a task; after all, paying attention to everything is not possible and in the case of flying a plane is actually really dangerous. So it seems logical to want to maximize attention and minimize distraction in order to get things done successfully. Still–doesn’t distraction itself have a role in this? Are there ways in which distraction is not negative, but is rather generative?

Tristram Shandy certainly thinks so. In Volume I, Tristram makes a defense of his constant digressions in his narrative by claiming that the digressions are actually crucial to the continuing of the story: “In a word, my work is digressive, and it is progressive too,––and at the same time” (52). Tristram will go on to say (for what is his narrative if not itinerant) that digressions are “the life, the soul of reading” (52). At first glance these remarks might appear simply as weak justification for a truly bizarre narrative–the musings of a silly gentleman. However this passage might be the closest thing a reader of Tristram Shandy gets to a real point. The narrative of the novel would be fundamentally different if its events and characters were arranged otherwise, and certainly the characterization of Tristram would altogether change. The digressions of the novel and the distractions they pose are crucial to accessing the mind of Tristram and gaining perspective on the events of his life–something we have to assume will become important *somewhere* down the line. Furthermore a reframing of Tristram Shandy would diminish its critical power. Without its ability to upend traditional forms and expectations, the novel becomes just another example of social drama and the usual narrative in the period. Distraction in the form of digression is thus quite generative in Tristram Shandy, and one could even say (as Tristram does) that the focus and attention of the novel are built on it.

While attention might seem better than distraction in terms of accomplishing mental and physical tasks, I would argue that attention is not possible without distraction. Rather distraction is what draws attention along, allowing it to focus on new and different things. As a result, distraction is generative in that it provides perspective and direction otherwise lacking in attention. I cannot help but think of serendipity here as well–it seems that emergence, innovation, and discovery must always contain some element of distraction by way of drawing off from a given focus and giving it a new route. So it is never the case that we can simply maximize attention and minimize distraction in order to gain knowledge–the two need each other in order to progress.

Edgar Huntly, the Senses, and Madness

This week’s readings take us in a slightly different direction from previous weeks–rather than focusing on processes and conceptualizations of minds, this week we look at the mind agitated, afflicted, and even overwhelmed. In order to cover these topics, I will refer to Charles Brockden Brown’s American Gothic tale Edgar Huntly (1799) in conjunction with Gabrielle Starr’s “Multisensory Imagery” in Introduction to Cognitive Cultural Studies (2010). While over two centuries separate these two works, there are several ways we can see Starr’s commentary on the senses in literature playing out in Edgar Huntly.

Starr’s “Multisensory Imagery” lays out what she calls the “structure of cognition” (276) and later the similar “architecture of the imagery of the senses” (291), all built on our “imaginary perceptions” (276). Her basic argument with these terms is that thought and perception take certain structures, and that these structures are directly related to the interplay of our senses, whether they be visual, auditory, olfactory, etc. This is especially true of art and fiction, where our senses are as often as not imagined–we do not actually see Spot run, but we imagine we do. It is the combination of different sensory images in fiction that builds up our thoughts, experiences, and cognition of a story. What interests me here, however, is not how this process works, but how it falls apart. If the senses have an architecture, what happens when that architecture becomes overwhelmed and cannot bear its load? Do the senses break down? Do they freeze? Do they operate at diminished capacity? Edgar Huntly helps us to start thinking about these questions.

Edgar Huntly is at first the story of a man (Edgar Huntly) trying to solve the murder of his friend, all related as a lengthy letter to his fiancée Mary Waldegrave. Very early on in the story the reader encounters how Edgar’s “perturbations” have very physical manifestations: “Till now, to hold a steadfast pen was impossible; to disengage my senses from the scene that was passing or approaching; . . .” (5). Edgar’s mind and senses have been afflicted to such an extent that he has been both physically and mentally shaken, causing him to lose basic faculties like holding a pen. A similar affliction appears later in the novel in Clithero, the man Edgar initially suspects of murdering his friend. While relating his story, Clithero suddenly falls into a fit that prevents speech: “At this period of his narrative, Clithero stopped. His complexion varied from one degree of paleness to another. His brain appeared to suffer severe constriction. . . . In a short time he was relieved from this paroxysm, and resumed his tale with an accent tremulous at first, but acquiring stability and force as he went on” (46). In both of these instances the senses of the communicator (one in writing, one in speech) are overwhelmed and arrested, and their abilities to communicate are temporarily terminated. Additionally, in both cases it appears to be a recollection or reimagining of traumatic events that leads to the attack. Relating back to Starr’s work, in Edgar Huntly we encounter the possibility of multisensory imagery not just shaping cognition and experience, but also potentially overloading and paralyzing those very same processes. Recovery is definitely possible, but it requires decompression or release from the brain “constriction”. Many other examples of this exist in the novel, including Clithero’s freezing at the point of his attempted murder and suicide.

All of this sensory overload bears a strange relationship to madness in the text, and the paroxysms and somnambulism demonstrated by both Edgar and Clithero seem to incriminate them or at least suggest heavy guilt. The strangest and best example of this is the aftermath of Clithero’s killing of Wiatte, and the consequent buildup to his attempted murder of his patroness. The logic that leads Clithero to conclude he must kill his patroness is extremely circular, and appears to form a mental feedback loop that can only lead to the one end it has already designed. First, Clithero realizes and repeatedly emphasizes that he has killed his patroness’ brother–this is the initial fixation. The next fixation is on the completeness of his guilt, and the dreadful effect he assumes it must have on his patroness–it can do nothing else but kill her: “The same blow that bereaved him of life, has likewise ratified her doom” (54). To simplify, the mental feedback loop here always comes back to death, going something like death->guilt->death->guilt. Clithero is unable to conceptualize any possible outcome other than death, and ends up concluding that it would be merciful to kill his patroness outright rather than with the knowledge of her brother’s death. We witnessed this same sort of fixation and feedback loop earlier in Othello–the worst must be true because it can be nothing other than true, so it becomes true. The feedback loop climaxes in the overload of the mind and the senses, paralyzing the person and rendering them unable to act rationally. Madness takes hold…

Which means it’s probably time for a tea party.

[Image: Mad Hatter]