Wednesday, February 23, 2011

The Pledge of the Computing Professional

There is a movement afoot to form something known as The Pledge of the Computing Professional. There is a web site, and an article about it appeared in the CACM Virtual Extension. I found the CACM blurb, followed it to the Virtual Extension, and then to my friend and colleague Anne Applin, who is involved in developing the project. To quote Anne,

"The idea is fairly simple, to acknowledge that the computing professional has an obligation to use their specialized knowledge for the good of society and do no purposeful harm.  It's for every graduate, not just those with the best grades (UPE) or those who join ACM."


I had not heard of this idea before, so off I went asking questions and digging for information. I learned that the idea of the Pledge came from the Order of the Engineer, which in turn was inspired by the Canadian "Ritual of the Calling of the Engineer", which in turn draws inspiration (as I see it) from organizations going back to the Middle Ages, when apprentices were formally inducted into trade guilds and all manner of secret or exclusive societies developed.

In truth, one can go back even further to the Romans and the Greeks, who had formal associations for various craftsmen (I suspect no women were involved, but I defer to the historians among you to confirm that). Or perhaps earlier - the Freemasons claim very early historical origins. To make sure we are not being Eurocentric, note that there were similar organizations in early India and China.

These societies always embodied both a practical and a mystical side. The mystical side imbued them with something special and sometimes secret, while the practical side conferred an obligation or responsibility: upholding values and traditions of integrity and honesty - the Hippocratic Oath comes to mind.

So I wonder... if computer science is to follow in this tradition, one that may be as old as civilization, there are clear areas for developing an ethical and moral stance. But what would be the mystical side? What mystical properties does computer science embody?

Monday, February 21, 2011

Healthy Mouse Batteries?

My mouse has been bothering me. I persist in using a mouse tethered to my computer via the umbilical cord because I try not to use batteries. Batteries are just plain toxic. But, I confess, the tail of my mouse gets in the way. It gets tangled up and sometimes it has to be shoved out of the way or the mouse runs over its own tail. Poor thing.

I know, small potatoes in the name of being Green. Everywhere I look we are switching to wireless mice, so the time may come soon when my mouse dies with a small squeak (or not) and I cannot replace it with anything but the tail-less version. So I have been researching batteries. In general, throughout my living space I use rechargeable batteries. Fewer heavy metals and other noxious chemicals go into the landfills. But eventually they do go into the landfills. And as the following site describes, though they beat normal batteries hands down, even a rechargeable battery is on some level bad news:

http://www.greenbatteries.com/aa-battery-faqs.html

The page claims that NiMH (nickel-metal hydride) rechargeable batteries are environmentally friendly, but this is true only by comparison to the other rechargeables. Although the disposal hazard is reduced because there is no (highly toxic) cadmium in NiMH batteries, the site neglects to mention that there can be significant environmental degradation from current mining practices and from processing of the base components.

What is a mouse user to do...

I found something really cool... How about a living mouse! No, I'm not suggesting you train a furry creature to let you hold it and scoot it around on the table with an antenna of some sort attached to it. PETA would probably be on *my* tail really fast - with justification. (Besides, you know what mice like to do besides eat.)

Not yet ready for prime time, but MIT is at work on a virus-driven battery!  What a totally cool idea. The viruses "create a cathode by coating themselves with iron phosphate and then grabbing hold of carbon nanotubes" (full article below). Cool. I think. It gives new meaning to "reach out and touch someone".  Now if we all had these little critters powering our mice just think of the good it would do for the environment. I could get rid of my umbilical tail. And then maybe we could replace the battery in my laptop as well?

I wonder what viruses they use...

In the interests of science I am currently looking for a friend who would like to test one under my supervision as soon as they become available.


Article:
http://www.businessgreen.com/bg/news/1802467/mit-team-touts-sci-style-virus-battery

Publications page: http://belcher10.mit.edu/publications/

Tuesday, February 15, 2011

A New Spin on "Should I get a smart phone?"

An article in yesterday's New York Times (link below) discussed advances in artificial intelligence, the future of AI, and what it means to be human. As a computer scientist, I found that many of the questions and topics posed were the same ones we have discussed for decades, such as:

What are the possibilities and implications of using AI to augment human capability (i.e., to help humans do better at what we already do well, or could do well) versus using AI to replace humans? The former has always been a goal with positive implications for most people, while the latter has always been a goal that provokes uneasy, mixed reactions. That duality has not changed, although the article makes the point that the lines between the two are blurring.

An interesting example is the pervasive use of smart phones. No argument: they augment human capabilities. We can find information faster, sooner, and in more detail; add to it, pass it on, discuss it, and disseminate it. They provoke thought and enlightening discussion, and let us be "on" whenever we want.

I don't have a smart phone and people sometimes say to me: "Lisa, you of all people? No smart phone?". It isn't that I don't want one. My gadget nerd side wants one - bad. On the other hand, there is the issue of the blurring line between my human self and, dare I call it, my technology self. Something there says: maybe not a good idea?

I am already mainlined into my computer almost 24/7, and when I think about the withdrawal symptoms that occur when I am off for very long... hmmm. Is my computer a part of my bloodstream and nervous system? When did that happen anyway? If we agree that yes, my computer is an extension of me, what happens if I get a smart phone and it never leaves my pocket? I have friends (you know who you are :) who post from their smart phones wherever they are (the library, the donut shop, the laundromat, 15 feet away from the parking lot), and I wonder what would happen if they were forced to stop. Could they do it without becoming as grouchy as a caffeine addict (of which I am one) who misses their morning coffee? Not a pretty sight.

The very last line in the NYT article is a quote that really bothered me: “The essence of being human involves asking questions, not answering them,” he [John Seely Brown] said.


Say what???????? Evolution would argue otherwise. Besides which, life would be SO BORING if I didn't spend time seeking answers to the endless barrage of questions my mind comes up with. 

So... as we become more and more connected, and the lines blur between augmenting human abilities and replacing the need for humans to actually do things for ourselves ("hey, if I can have the computer take care of it, why should I do it?" goes the refrain), do I really want a smart phone?

Yes.
No.
Yes.
NO.
YES.
NES, YO.

This is why I have not bought one yet. I need to work this out - am I enhancing myself in a beneficial way, or am I creating a part of myself that will supplant me while I'm obliviously typing away?

I want to be the one answering the question. I don't want a computer to answer it for me. Much as I love them.


NYT Article that inspired this post (and discusses many topics that have nothing to do with what I discussed here): http://www.nytimes.com/2011/02/15/science/15essay.html?pagewanted=1&_r=1&nl=todaysheadlines&emc=tha26

Thursday, February 10, 2011

Writing well about computing - What does it take?

For the past 10 days or so I have been following a conversation in the LinkedIn group Science Writers. The conversation started when someone asked for opinions about whether or not it was necessary to have a science background to succeed as a science journalist. Her exact words were "to be a good science journalist". Oddly enough, the woman who asked the question has had a successful career as a science journalist for 20 years. The question turned out to be more about whether or not people felt she should follow her interest in gaining additional education in environmental journalism. By most standards she would no doubt be considered "good" already, as she is publishing in widely read science magazines.

The comments back to her were interesting, though, from the point of view of perceptions and marketing of computer science. Many of the respondents were people without a science background, so they predictably replied that no, you don't need a science background. Yes, it was a self-selected respondent group. However, they had some points worth considering. One that stood out was the claim that it is more important to be a good writer than to have a science background, and that it is easier to acquire "scientific literacy" than the "fundamental craft of writing".

Now, I would take some issue with that comment. I do believe that, at least in computer science - given the widespread misconceptions about what we are, what we do, and what we don't do, and the frequency of journalism articles that are off target in one area or another - it IS important to have a background in some area closely related to computing if you want to tackle head-on the public perceptions of the what, the who, and the how. That takes more than "scientific literacy", or in this case the narrower "digital or computing literacy", terms that we are currently wrestling to define.

There are no doubt writers who are so darned good that they can do the background research, talk to the computer scientists, and write an article that is accurate, digestible, interesting, and full of solid content for public consumption. But as with any other interdisciplinary situation, those crossover professionals are a rare breed.

That said, I agree that a writer has to be a "good" writer to get across the computing points so that the general public is interested and gets the correct information. An academic writing style flops with the general public. Writing well is as much a creative and technical endeavor as computing. Thus I would argue that neither the computing background nor the writing background trumps the other.

Thursday, February 3, 2011

Medical Informatics Musing

Not for the first time, I am thinking about the complex world of health care informatics. At the moment I am studying efforts underway to standardize and put into electronic form medical data from personal health records, laboratory and research centers, existing specialized databases and archival hard copy. There are people devoting hundreds of hours to this effort, sometimes on a volunteer basis. And you want to talk about opportunity to do cool things...

Several organizations promote standards for medical terminology, and for the most part they are complementary. They have their own histories and apply themselves to different subsets of data. In the United States, several federal agencies manage data (in addition to many private and nonprofit organizations too numerous to list). Examples include the National Library of Medicine, the National Cancer Institute, the Centers for Disease Control and Prevention, the Agency for Healthcare Research and Quality, and the Office of the National Coordinator for Health Information Technology. Different data sets, different functions, different structures. Medical clinicians I have recently spoken to tell me the data gathered, maintained, and disseminated by government, nonprofit, and private groups have played an incredibly positive role in improving patient care nationally and internationally.

However, there is still a lot of work to do, and much of it hinges on harmonizing data standards and developing the most effective ways to computerize these data as we move forward.

The most widely used international medical terminology set is SNOMED CT, shorthand for Systematized Nomenclature of Medicine - Clinical Terms. In addition to providing detailed terminology definitions in machine-readable form, SNOMED (as it is often called for short) provides a logical structure of relationships between concepts that covers virtually all areas of medicine.
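To make the idea of machine-readable concepts and relationships a little more concrete, here is a minimal sketch in Python of how such a terminology might be held in code: each concept gets an identifier, a display term, and "is-a" links to broader concepts. The codes, terms, and the ancestors helper are invented for illustration; they are not actual SNOMED CT content or tooling.

# Toy concept graph: each concept has an identifier, a display term, and
# "is-a" links to broader concepts. Codes and terms are made up, not SNOMED CT.
concepts = {
    "C100": {"term": "Disorder of lung", "is_a": []},
    "C101": {"term": "Pneumonia", "is_a": ["C100"]},
    "C102": {"term": "Viral pneumonia", "is_a": ["C101"]},
}

def ancestors(code, graph):
    """Follow the is-a links upward and collect every broader concept."""
    found, stack = set(), list(graph[code]["is_a"])
    while stack:
        parent = stack.pop()
        if parent not in found:
            found.add(parent)
            stack.extend(graph[parent]["is_a"])
    return found

# "Viral pneumonia" is-a "Pneumonia" is-a "Disorder of lung"
print(sorted(ancestors("C102", concepts)))   # ['C100', 'C101']

The real terminology is vastly larger and richer than this, of course, but the basic shape - coded concepts plus a graph of relationships - is what makes it usable by software rather than just by human readers.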

Another large, internationally accepted set of medical terminology and structure is LOINC, an acronym for Logical Observation Identifiers Names and Codes. The data maintained in LOINC format comes primarily from research labs and is available for use by hospitals, physicians, and others.

The third large and highly visible standardization effort is RxNorm, a terminology and relationship structure for clinical drugs (to most of us that means prescription drugs) approved for use in the US. As with SNOMED and LOINC, RxNorm defines terms in machine-readable format.
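Much of the harmonization work mentioned above boils down to mapping: two systems record the same lab test or drug under different local names, and a crosswalk translates each local name into one shared standard code. Here is a toy illustration in the same spirit as the sketch above; the hospital names, local terms, and every code in it are invented placeholders, not real LOINC or RxNorm identifiers.

# Two hypothetical hospital systems use their own local names for the same
# lab test and drug; a crosswalk maps each (site, local name) pair to one
# shared standard code. All codes here are made up for illustration.
local_to_standard = {
    ("hospital_a", "GLU, fasting"): "LAB-0001",
    ("hospital_b", "Fasting glucose"): "LAB-0001",
    ("hospital_a", "amoxi 500 cap"): "DRUG-0042",
    ("hospital_b", "Amoxicillin 500 mg capsule"): "DRUG-0042",
}

def harmonize(site, local_term):
    """Translate a site-specific term into the shared code, if one is mapped."""
    return local_to_standard.get((site, local_term), "UNMAPPED")

print(harmonize("hospital_a", "GLU, fasting"))          # LAB-0001
print(harmonize("hospital_b", "Fasting glucose"))       # LAB-0001
print(harmonize("hospital_b", "Aspirin 81 mg tablet"))  # UNMAPPED

The hard part, of course, is not the lookup but building and maintaining the mappings themselves across millions of terms and thousands of local systems - which is where so many of those volunteer hours go.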

These are just the three biggest names associated with computerized standardization efforts. And if you think there are a lot of acronyms here, that is just the beginning. I suppose there may be no more acronyms in the medical world than in the computer science world. Just as non-computer scientists often find our discussions puzzling and difficult to comprehend, non-medical personnel may initially find it eye-boggling to wrap their heads around medical terminology and standards documents.

I'm neck deep in reading about these standardization efforts right now, and it IS fascinating. There is a real coming together of two fields and two worlds here, and a huge opportunity for computing and medicine. I am impressed with the enormity of the task of bringing all these data into harmony with one another. Computing technology and medicine are each advancing so fast ... cutting edge on both sides.