It isn't all that often that people take the time to send an email in response to one of my Inroads columns, or about my computers and society book. When people do write, they are more often than not thoughtful, and they always give me something useful to think about. Recently, I received a communiqué (i.e., email) from a person who had gone to some lengths to suggest how I might improve my book to better align with their course material. We had a pleasant conversation, enlightening, I suspect, for both of us.
I wonder how long it will be before it catches on in the greater community that "societal issues in computing" is not a synonym for "ethics." Nor should it be.
[For some odd reason I'm suddenly itching to launch into the discussion in terms of equivalence classes, negation, intersections and tail recursion, but I'm restraining myself for the greater public good.]
You see, what my correspondent was attempting to help me with was how I could make my book a better "ethics" text: by including such things as a history of the development of Western ethical thought starting with the Greeks, the Precautionary Principle in European Union law as compared to the United States' orientation towards a "Postcautionary Principle," seminal cases and legal precedent in technology patent and trade law, and....
Boring. I'm sorry, but: Boring. Unless you are a nerdy academic of a very certain ilk or a lawyer.
Certainly boring to most students.
Not to mention, there are only so many students (and hence future computing professionals) who are going to be attracted by the abstract mathematical qualities of computing. We know that many students want to make a positive difference in their world. We need to show potential computing students the exciting, rubber-hits-the-road aspects of computing. We shouldn't bore them into studying something else.
The other problem with the "computing & society == ethics" idea is that ethics discussions in "real life" contexts tend inevitably to focus on problems: either evaluating problems after they occur or attempting to prevent problems in the first place (hence the Pre- and Postcautionary Principles discussion). When we do this, the implicit message sent, not just to students but to the public in general, is: "when we consider computers in a societal context, we are referring to bad things; to causing, either directly or indirectly, harm." We drill down on correcting harm or avoiding harm.
Not necessarily boring, but often harmful. Why?
For one thing, by equating computing's societal issues only with problems, we are likely to chase away more of those people who want to be of benefit to society. Off they go to study Biology. (Personally, I love almost all things Biology, but that is beside the point.)
We need to show, at all levels, in all places, that we do not have to "choose" between technological progress and helping people; that it isn't inevitable that computing's impact on society at large amounts to assigning people to run around cleaning up messes.
There is a lot more to say on this, but I have an Inroads column deadline breathing down my neck, and I sense an opportunity awaiting.
Meanwhile: Think Good Thoughts, Say Good Words, Do Good Things.