As I noted here last Monday, I spent last week at the Tennessee Valley Interstellar Workshop (TVIW), handling the conference’s social media activities. As an interested participant, I had my own interests to pursue as well. And given the awfulness of the shooting in Las Vegas, I had the problem of human evil/self-destructiveness high in my mind amidst the soaring aspirations of sending people to the stars. How does one get ethical concerns “elevated” in a technical organization? Follow along, friends. I’ll offer some ideas.
The light conversation phase
As at most technical meetings, the most animated or entertaining discussions often happened in the hallways or at the bar after the formal sessions, and TVIW was not immune from this tendency. So I started asking random folks at the conference a variation of the following question:
Human beings have moments of insanity or violent self-destructiveness that can include harm to others. What do we do about that tendency on a theoretical starship?
The responses varied from stunned disbelief to horror to dismissiveness ("That'll never happen"). Those who chose to engage with the question offered solutions ranging from societal ("Include an internal security force") to medical ("Have a team of psychologists on board to watch for exactly that sort of behavior") to technological ("Install psychotherapeutic software that automatically puts the unhinged into a therapy session if certain brain chemistry goes in the wrong direction" [Mine]).
Aside from the hallway conversations, the only speakers who concentrated on human behavior dynamics in the conference were ethicists, and the attendance at their talks was not as high as it was for some of the technical tracks. They were more concerned with the thinking/social mechanisms that went into a mission before it launched to prevent larger disruptions like civil disobedience or civil war. The technologists, perhaps as a function of their interests and training, were more concerned with figuring out how to get the hardware to work without addressing what might happen if someone decided to misuse or damage it.
Raising concerns formally
If you don’t get anywhere raising a concern in one-on-one conversations or working up the chain, the next thing to do would be to raise the concern in a public forum/meeting. This is especially important if technical people who know the system/hardware agree with you that there is a problem but have not spoken up.
This method carries risks, especially if you've been speaking with individuals one on one and been told a) your concern is not a problem and b) not to mention your concern to a larger group.
However, if you think the issue is worth risking your job over, you could raise your concern in a forum that prevents leadership obfuscation, or at least makes it very difficult, because if you have the concern, odds are others will as well.
In the end, the best answers I got were, unsurprisingly, from the science fiction writers. I suppose this is a function of my SF fandom as well as the potential for “drama” that human cussedness can create in space. It makes for great storytelling, but stressful mission operations.
- “It hasn’t happened in space yet. Maybe we’ve been very lucky, or maybe space changes us. Either way, we’d better hope that continues.” (Allen Steele)
- “That’s what the airlock is for!” (James Cambias)
- “We write stories about them.” (Geoffrey Landis)
- “Designing societies is pointless. You don’t. They [the crew] are going to go there and live it.” (Gregory Benford)
- “There’s a strong libertarian bent in the space science fiction community. Yet I can’t help wondering if libertarianism works best in a place like Earth, where water and oxygen are free. The farther you get from Earth, the more likely society might be optimized for totalitarianism.” (Landis)
- “They’ll evolve in ways we can’t anticipate.” (Steele)
- “We’d better anticipate them. That’s what we get paid for.” (Larry Niven)
- “Humans will take their culture and behavior with us. We have to plan for it.” (Teri Weisskopf)
Some of these responses were meant to be humorous, but they were at least taking the issue seriously. If technical writers can add value or find a “heroic” role for themselves in the technocratic workplace, it’s in getting the techies to think about the “what if?” questions, not just the “how?” questions. The trick, I believe, is finding the right individuals/groups within a community that share a common understanding of the problem you raise and, more importantly, have the biggest stake in the outcome.