When we discussed it in class last week, I wasn’t feeling a strong connection between Berlin’s social epistemic rhetoric and my own class, Technical Writing (ENGL 3323). In the current curriculum we focus explicitly on workplace writing, which I think is entirely appropriate for my students, most of whom will soon become engineers. I do not share the belief, implicit in some of the theoretical work on this subject, that preparing students for their future professional lives (or even their other academic courses) merely serves to further the “evils” of capitalism. I’m not ashamed of focusing exclusively on students’ ability to write, or of situating that writing in the context of their future work. As I mentioned in class, my attitude is that my students make things. They make machines, construct buildings, and ensure that those things function safely, as with the 15% of my students studying Fire Protection and Safety. If I can better prepare them to do that work, I feel my own efforts have social value.

But after our discussion, I spent the next few days thinking about our curriculum, and my own defense of it. I can’t remember how vehement I was about it during class (lately I seem to be picking a new fight every day), but I had strong feelings about the subject, and would have gladly defended our approach to the course. I have long felt that composition pedagogy emphasizes political action in ways that I would not be comfortable with in my own classroom, even if I were to teach First-Year Composition instead of tech writing. But my occasional self-righteousness is often tempered by the realization that I can be (and often am) completely full of s***. And I was curious about what Dr. Lewis had mentioned about incorporating ethics into her previous 3323 classes. So I decided to do a bit of reading about ethics in technical writing.

Based on my efforts so far, it seems to me that incorporating ethics into the curriculum revolves around two key issues: 1) how language enacts different versions of reality, and 2) ethical decisions in document creation and design. Issue #1 addresses the long-held belief among members of the scientific community (and many technical writing pedagogues) that language merely needs to be clear, so that it objectively describes what exists in the real world. This is known as the “windowpane” theory of language, and it quickly becomes problematic when issues of science and technology interact with sociopolitical reality. For example, Dr. Lewis described the Nazi memo that some of you read in the History of Technical Writing course last year (see next image). The author, a German bureaucrat named Just, writes to his superiors with recommendations for improving the efficiency of the trains that brought millions of victims to the death camps. Far from being neutral, his language reflects and makes possible the dehumanizing attitude that was prevalent toward Jews and other victims of the Nazi regime. Their humanity is entirely absent from the memo, which refers to them merely as “pieces” or “load.” For example: “when the doors are shut, the load always presses against them.” Elsewhere they are rendered invisible by the use of nominalization:

Because of the alarming nature of darkness, screaming always occurs when the doors are closed.

The point is that although some topics addressed by technology are entirely objective, such as the width of a connecting O-ring in an assembly, others are culturally defined. And if we ignore how we as a society create these definitions, focusing purely on the technical aspects of a problem, we risk ignoring its real human consequences. I think this is at least part of the purpose of showing students the Just memo, although those of you from last year’s History course are more familiar with it than I am, so feel free to correct me if I’m getting that wrong.

The second issue concerns ethical decision making. Many of the textbooks that address ethics focus on case studies in which engineers made important ethical choices with life-and-death consequences. For example, before the Three Mile Island nuclear disaster (1979), another plant run by the same company had difficulty with its reactor core. The head engineer at the plant reported the design flaw to management but did not want to emphasize the magnitude of the problem, presumably for reasons of political expediency. As a result, he buried that information in the middle of his final report, a section many readers typically skip over, and he did not take additional steps to highlight the potential consequences, which would have made it harder for management to ignore the problem.

In contrast, before the Space Shuttle Challenger explosion (1986), engineers at Morton Thiokol took a firm ethical stand against pressure from management over a design flaw involving simple rubber O-rings. Supervisors pushed them to underplay the potential dangers for fear of postponing a launch that had already been delayed, in part because Thiokol was negotiating a $1 billion contract with NASA. But in the memo below, dated seven months before the explosion, the engineer assigned to investigate the flaw refuses to gloss over the seriousness of the problem:

The result would be a catastrophe of the highest order- loss of human life…

There’s much more to the concept of integrating ethics, but those are some of the key points. Another topic worth exploring in the context of a social epistemic approach is service learning. I’ll talk about that in a few weeks during my paper presentation.
