Wednesday, July 28, 2010

Contentious Definition of "Social Good" in Computing

What does it mean for a computing project to be good for society? It depends upon whom you ask. The question is inevitably loaded, in part because of the vagueness of the terms "society" and "good" - cultural context affects what each word means to the speaker. It would be nice if we could resolve this question of definitions, but that isn't likely to happen any time soon. What would you answer right now if asked to explain the phrase "computing that is socially beneficial" in three sentences or fewer?

Because it is so difficult to come to a common understanding, some computing projects are considered socially beneficial by some people and socially destructive by others. Hindsight always helps answer these questions, but hindsight ignores the present, and we don't have the benefit of future-sight. Why is this important? Because we have to decide, on an ongoing basis, which cutting-edge computing initiatives to support or become involved in. The worst option is to display Ostrich Syndrome: that lets others decide for you.

Take, for example, the highly contentious Google Books Library Project and the Google Books Partner Program. If you follow this issue at all, you are aware of the vociferous arguments (and legal battles) over these initiatives. For those who may not be familiar with the arguments pro and con, here are a few (by no means all-inclusive) points.

Those in favor of the projects, or of some variant of them, point to:

  • The potential benefits of having an online library available anywhere on the globe that has Internet access
  • The potential benefits of shared knowledge ("knowledge is power")
  • Access to out-of-print and other rare books
  • The inevitability of digital publishing and e-books - let's be at the forefront of this technology
Those opposed to the projects, or who want them substantially restructured, point to:
  • Will authors of in-copyright books be fairly compensated for sales? How do we verify this?
  • There will be a disincentive to purchase traditional books, with economic ramifications for a variety of groups already struggling to adapt to the digital future of reading and writing
  • Can any large and powerful for-profit organization (Google in this case) be trusted? e.g. is there a "dark side" based upon monopolization of the e-market?
For a seriously detailed set of information, arguments, and resources, see The Public Index.

Important and fascinating as the twists and turns of this case are, the bigger picture of what is socially beneficial goes beyond Google and their book projects. 

It is incredibly hard to define what is "good" for modern society when it comes to technological innovation - computing innovation in particular. That is why, when I ask people for examples of socially beneficial computing projects, their first response is almost always a reference to a project that uses technology in an underdeveloped country. It is easy to label something as socially beneficial if lives are being saved. There are some fantastic projects out there and I hope we keep right on doing them. But once my discussant and I have gone over some of those projects and I ask for other ideas, she or he tends to get stuck.

However, if we want to put computing to its maximal use to make the world a better place, we need to get unstuck. 

If we don't consider, discuss, debate, and initiate "beneficial" computing projects in our own backyard, we have several problems:
  • Corporations, nonprofit organizations, and any other group you can think of will be free to make their own decisions about what is socially beneficial, and the rest of us will be left crossing our fingers that it all works out nicely. Maybe it will; but is this the approach we really want?
  • We miss out on the opportunity to use our professional skills and experience to shape the future application of computing technology. Hey, we studied and trained a long time for our expertise - let's put it to work!
  • We miss the opportunity to grapple with the reality of necessary compromises and accommodations. This is simply life - multi-colored and multi-faceted - and we should view that as a good thing.
Computing opportunities that will affect our lives appear as fast as we can read our screens. How are we going to steer projects in a positive direction?

First we need to ask and discuss: How do you define a project as "socially beneficial"?

Thursday, July 22, 2010

Open Source and Socially Beneficial Projects?

Some time ago a colleague suggested to me that the Open Source movement was "socially beneficial". I have been pondering the claim. With cross-country flight time on my hands yesterday, I took the opportunity to read The Cathedral and the Bazaar. (Fittingly, it is also available online free from several locations; just do a search for the many ways to get it.) Published a few years ago, it has become a de facto "must read" for anyone interested in the history, philosophy and future of Open Source.

Eric Raymond's book lays out both the economic arguments and the philosophical arguments for the societal benefits of Open Source. I learned a lot about Open Source that I did not know, some of which surprised me. Quiz question: What is the difference between "free software" and "Open Source software"?

I recommend the book as a fast and informative read if you have even the slightest interest in Open Source, the history of computing, hackers and hacking, or if you are working in software development of any sort. Or if you could not answer the quiz question without doing a search for the answer :)

After finishing the book, somewhere over the Grand Canyon, I was still not satisfied that I had the answer to my initial question: How has Open Source had a direct, indisputably positive social impact, in the sense that I am interested in? What I am looking for are examples: examples of where Open Source development has been used to directly benefit specific people, or the planet, or some other definition of "society" - the term can be interpreted broadly. I'm not suggesting that it has not done so; I simply am not familiar with any examples. I would like to learn about some solid, fully operationalized projects.

Do such projects exist out there? Recent projects - within the past couple of years? Is anyone working on something right now?

Tuesday, July 20, 2010

Optimizing the Management of Digital Images for Sick Children

After almost a week of in-depth meetings with people about pediatric medical issues, computing-medical infrastructure innovations and other related topics, all interspersed with a bit too much coffee and pastries, I am very happy with how welcoming and helpful everyone here in Philadelphia and beyond has been.

Trying to decide what to write about has been a challenge because there are so many interesting computing projects that The Children's Hospital of Philadelphia (CHOP) is involved in. The decision for today's post topic was clinched yesterday by a personal memory that surfaced in my meeting with Chris Tomlinson, the Administrative Director of Radiology. We had been discussing Vendor Neutral Archives (VNA), a state-of-the-art process for managing digital images and their associated metadata that Chris has successfully championed at the hospital.

My insight came while we were discussing how patient care has been improved by installation of the VNA system. A large hospital, especially a large children's hospital, will often have years' worth of digital image data for each patient. For example, data collection may start when the child is born prematurely and continue as the child grows older and is monitored developmentally; or if a child acquires a life-threatening condition that requires ongoing interventions, the volume of digital imaging data can grow exponentially as well. When a patient comes to the hospital for any reason, it is logical to want to pull all historical image data as part of that child's current assessment.

In a non-VNA environment, digital image data may be stored on servers in different departments (ophthalmology, radiology, etc.) on incompatible systems, or archived offsite somewhere secure but difficult to access rapidly. As a result, when a patient walks in the door for an appointment, their prior history may not be fully available to the doctor they are seeing that day.
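The contrast can be sketched in miniature. Everything below is my own toy invention, not CHOP's system or any real VNA product; a real archive would normalize each vendor's metadata into a common standard such as DICOM. The point is only that a single index over normalized metadata lets a clinician pull a patient's full imaging history with one query, no matter which department's system produced each image:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class ImageRecord:
    patient_id: str
    department: str   # e.g. "radiology", "ophthalmology"
    modality: str     # e.g. "X-ray", "fundus photo"
    acquired: str     # ISO date; a real VNA would carry full DICOM metadata

class ToyVendorNeutralArchive:
    """Toy index: one lookup point for images from every department."""

    def __init__(self):
        self._by_patient = defaultdict(list)

    def ingest(self, record: ImageRecord) -> None:
        # In practice, ingestion would translate each vendor's proprietary
        # metadata into a common schema before indexing.
        self._by_patient[record.patient_id].append(record)

    def history(self, patient_id: str) -> list:
        # The whole imaging history, oldest first, in a single query.
        return sorted(self._by_patient[patient_id], key=lambda r: r.acquired)

vna = ToyVendorNeutralArchive()
vna.ingest(ImageRecord("p1", "radiology", "X-ray", "2008-03-01"))
vna.ingest(ImageRecord("p1", "ophthalmology", "fundus photo", "2010-06-15"))
print([r.department for r in vna.history("p1")])
```

In the non-VNA world described above, each of those `ingest` calls would instead land in a separate departmental silo, and assembling `history` would mean chasing records across incompatible systems.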

As Chris and I were speaking, I flashed back to the times I have gone in for a medical appointment and been told that my "file has not arrived". I also remembered the times I have had to tell a care provider that I will fail a standard tuberculosis (TB) test because I received the BCG vaccine as a child. Yet because this information seems to periodically vanish from my records, I submit to taking (and failing) another TB test. Three office visits typically result. Trip 1: get poked for the TB test. Trip 2: return to have the failed test examined and watch the staff get very, very nervous. Trip 3: come in for a chest x-ray, then wait for the doctor to receive the x-ray and report to me that I am healthy.

Frustrating. Time consuming. Expensive. Excess radiation.

Imagine the effect of these inefficiencies upon a sick child and their family.

Chris Tomlinson and his computing team at CHOP are working very hard to ensure optimal and full availability of patient imaging data at any time of day or night. It is hard to describe how real and non-academic this effort becomes when you walk the hallways in Radiology or the Neonatal Intensive Care Unit and observe room after room of state-of-the-art digital imaging equipment in constant use helping sick children.

Saturday, July 17, 2010

Medical Data Mining in the Pre-Computer Era

Yesterday I found out what my "surprise" here in Philadelphia was (referred to in my last post). I am here to gather information about interesting intersections of computing and medicine and so it was only fitting that on Day 1, along with my host Peter DePasquale, I visited the Mutter Museum, part of The College of Physicians of Philadelphia. To quote their literature, this incredible museum contains a vast historical collection of "unique and pathological specimens".

Not for the squeamish. Fascinating for anyone interested in the history of medicine. Row upon row, shelf upon shelf of human bones, internal organs, miscarriages, and many other preserved specimens. Some were quite unusual, such as the human colon that was 8' 4" long - pity the poor man who carried that in his body well into adulthood (he eventually died of complications tied to constipation). Or kidney stones the size of my clenched fist. At the time these stones were collected, they came out of the body in one of two ways: surgery without anesthesia, or "passed" naturally. Ouch!

But for me the most interesting aspect of this museum was the information about how these specimens were used for training and research. Not all specimens were pathological. The hundreds of skulls provide a perfect example. In addition to an impressive group of deformed skulls, there was an enormous collection of "normal" skulls; in other words, either no abnormality, or only a minor problem such as a deviated septum or tooth overbite. Each skull was labeled with the name, age and gender of the person, and if available, the cause of death.

The United States in the 19th Century was much more conservative than Europe about allowing dissections, so one way that students learned was to study such collections. They would look at skull after skull after skull, in order to discern patterns and natural variations. Each skull was one example, and by itself could not teach very much. But after extensive study of hundreds or even thousands of skulls, a student would learn what was considered normal variation and be better able to discern a true pathological condition. True, a patient would present themselves in the doctor's office with a lot of stuff on top of the skull (like skin), but one worked with what one had.

Each skull was a data point and an ever growing collection of skulls became a database to mine for information. With dedicated time and experience the true pathological skull (or live human head) became much easier to identify.

As I looked at this collection, and began observing patterns myself, it occurred to me that these 19th Century medical students were doing what we do now with the aid of computers. We do it faster and may leave much of the initial pattern matching to software, but basically the process is unchanged.

To compare a modern example to the 19th Century study of skulls, let's use the still elusive causes and origins of Autism (more formally, "Autism Spectrum Disorders", to acknowledge that not all Autism is alike):

  • Gather as many individual pieces of data as possible across as broad a spectrum as possible (19th Century: as many skulls as one could legally obtain; 21st Century: as many cases of individuals with an Autism diagnosis as possible).
  • Note and save the details assumed relevant (19th Century: the skull itself and the label data listed above; 21st Century: possible Autism markers such as cognitive or motor difficulties and environmental influences). Hindsight may prove that we left out important data, but we do our best, then and now.
  • Put the item of interest and associated notes in an organized collection (19th Century: shelves and drawers of skulls with handwritten notes; 21st Century: a database on a server).
  • Analyze in as many ways as possible - pattern match, look for themes, trends, deviations (19th Century: students and researchers would pore over each specimen; 21st Century: students and researchers pore over pattern data spit out, perhaps, by an intelligence engine).
  • Arrive at hypotheses and possible theories about the topic of interest; use them to work on treatments.
  • Repeat all of the above.
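The loop above - collect, record, organize, analyze for deviation - can be sketched in a few lines. The measurements, field names, and the two-standard-deviation threshold below are all invented for illustration; the point is that "studying hundreds of skulls to learn what is normal" is exactly what simple outlier detection does with a database:

```python
import statistics

# Hypothetical specimen records; the fields loosely mirror the museum
# labels (here just an id and a made-up cranial measurement in cm).
specimens = [
    {"id": i, "cranial_cm": c}
    for i, c in enumerate([55.2, 56.1, 54.8, 55.7, 56.4, 55.0, 71.3])
]

# "Study skull after skull": learn normal variation from the collection.
values = [s["cranial_cm"] for s in specimens]
mean = statistics.mean(values)
stdev = statistics.stdev(values)

# Then flag specimens far outside the learned variation, just as a
# student steeped in hundreds of examples would spot a true pathology.
outliers = [s for s in specimens if abs(s["cranial_cm"] - mean) > 2 * stdev]
print(outliers)
```

Swap skull measurements for Autism markers and shelves for a database table, and the 21st Century version of the pipeline is structurally the same, only faster and at far greater scale.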

I appreciate, now more than ever, that I was not born in the 19th Century. On the other hand, 22nd Century citizens may view our medical research methods as being as scary as I considered the skulls with 6-inch holes from the practice of Trepanning: drilling holes in the skull to relieve, among other things, mental disorders.

Wednesday, July 14, 2010

Healthcare Reform and Computing - I'm on my way.

Healthcare reform has got to be one of the most contentious topics of the previous year. If it weren't for the BP Oil Spill, which pulled many people away from the healthcare debate, I think we'd still be duking it out publicly. Well, reform is still in the air, reform legislation was passed, and reform is on the way. Oh - the duking is still going on, just less visibly.

Computing professionals in the healthcare industry are already being impacted by the changes, and for someone in the computing-healthcare field it must be quite exciting. Out with the old (by most accounts unwieldy and inefficient), in with the new.

Most of us in computing who are on the lay side of health care talk mostly about Electronic Medical Records (EMRs): issues of privacy, security, and who gets to see what.

If you want to see the latest information on standards-setting for EMRs, there is a wonderful blog that I follow, written by CIO/physician John D. Halamka, MD, who, according to his blog's URL, calls himself a "geekdoctor". How cool is that? His post today has hot-off-the-press information about standards (and more). If you aren't versed in medical IT terminology there is a steep reading curve, but it is worth it if you want to know the latest about what is happening in healthcare IT.

But there is yet something else exciting afoot. Digital Images - you know, the results of those MRIs, CT scans, Mammograms, etc. Personally, I'll never have an MRI because I have metal in my head and have no desire to be either permanently glued face first to the wall or made into a bloody shredded mess.  But I bet many of you have had an MRI, perhaps several. Or one of the other digital images, too numerous to list. And that is where some of the really interesting computing work is going on in healthcare. Computing professionals are hard at work on new and creative systems to deal with all these digital images.

I'm excited because I'm going to go have a good look at how one hospital is breaking new ground in this area.  I'm off tomorrow to visit The Children's Hospital of Philadelphia and to learn first hand about how Radiology and IT are changing their systems to improve patient care.

I expect to be blogging about some of my adventures in Philly over the next week. I'm also told that there is going to be a little "surprise" in store for me too. I'll keep you posted.

Sunday, July 11, 2010

Does Computing Ethics Have to be Negative?

It seems that most of the time when we discuss "computing and society", especially in educational settings, we talk about ethics. For a long time I have had mixed feelings about this. On the one hand, I do believe that we need to talk more about societal issues, and if ethics gets the conversation going - good. On the other hand, conversations about ethics lean towards discussing negative events - either those that have occurred or those that could occur. If we focus on negative issues much of the time, then it doesn't help computing put its best foot forward to ... anyone.

In the July issue of the Communications of the ACM there is a Viewpoint article on Computing Ethics by Jason Borenstein about the potential challenges facing the workplace if (as) robots become increasingly commonplace. He makes many good points about the challenges for displaced or deskilled workers, the diminishing of creative opportunity and other important topics that have been discussed before and should probably continue to be discussed.

But I couldn't help coming away thinking that here I had just read another depressing article, another warning, about how computing technology is likely to be used. By one of my peers. I wished that equal time had been given to the positive potential of robotics in the workplace.

Is ethics by definition negative? I grabbed my weighty (yes, a physical volume) copy of The American Heritage Dictionary of the English Language and looked up "ethics". According to the weighty tome, "ethics" is: a) a set of principles of right conduct; b) a theory or a system of moral values; c) the study of the general nature of morals and of the specific moral choices to be made by a person; moral philosophy.

Those definitions do not say that ethics is inherently a negative topic. As I read it, ethics is about challenges and choices. This interpretation does not insist that we primarily discuss how things can go wrong, yet we often choose to do just that. As further frustrating evidence, I pulled five college computing ethics textbooks off my shelf - each one discussed problems, problems and more problems.

Can we discuss computing ethics with less of an overriding negativity? If so, why don't we? Is it just more exciting to talk about horror stories?

If computing ethics is entrenched in the negative, for whatever reason, then why is the curricular (or media, for that matter) topic "computing and society" so often only about ethics? Can we talk about positive societal computing issues in an exciting, motivating way? Is positive equated with boring???

A colleague recently told me that a focus on positive computing stories was likely to give a false sense of "feel good".  Is that how we feel about ourselves?