Thursday, July 28, 2011
Computing Technology in My Courtroom
Many people have been asking me about the jury trial that took up most of the last two weeks. Two weeks of being able to say only "I'm on a jury trial," especially as the trial was pretty intense, was challenging. Interestingly enough, this particular court was trying out computing technology in a way it hadn't before, so we were guinea pigs. What was most interesting to me was that the technology itself wasn't particularly unusual or radical, but the people in the court were using it for the first time, and I got to see how they chose to use it, what worked and didn't, how it contributed (or not) to the trial process, and how everyone (jurors, judge, lawyers, witnesses, plaintiffs, defendant) reacted to it. There was a tech guy in charge, and I wasn't supposed to talk to him (or anyone else). I wanted to find him after the trial was over and pick his brain, but he had vanished by the time we were out of deliberations. Too bad.
The trial was a messy, complex personal injury lawsuit (now I am allowed to say anything I want). The "facts" were not at all clear. Enormous sums of money were at stake, and people's lives on all sides hung in great part on the credibility of witnesses and the technology they worked with.
A key issue was whether or not one of the plaintiffs had suffered a particular back injury, and what options (surgical or otherwise) were warranted if he had. Spinal injuries are far more complex than I ever could have imagined. Surgery can involve pulling out all your innards and laying them...somewhere... in order to get access to the spine. Ew. We the jury heard DAYS of testimony from doctors and surgeons about the spine and soft tissue injuries and viewed shot after shot of digital imaging tests, including MRIs, X-rays, and discograms. The courtroom was set up with a giant-screen TV in front of the jury box; the judge had a monitor, the witness on the stand had a monitor, the plaintiff and lawyer tables had their own monitors, and a tech guy in the back controlled who could see what and when. Sometimes control was handed over to one of the lawyers. Cables ran across the floor.
Sometimes they wanted all of us to see a discogram image, for example (a somewhat controversial procedure where they stick 8-inch needles into your spine and ask if it hurts). Other times they wanted all of us to watch a 3D MRI or a simple document display (usually endless spreadsheets of mind-numbing medical or financial data). The images were pretty interesting actually, especially as I have done research into digital image use in medicine and now got to see it put to use in a legal setting. Other times they wanted the witness to see something along with the judge and lawyers but not the jury. Basically, pick your combination: at some point in the trial, any given subset of people was supposed to see something.
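Out of professional curiosity: the routing logic at the heart of this setup is easy to model, even if operating it live clearly wasn't. Here is a minimal sketch in Python with entirely invented names; I never saw (and wasn't allowed to ask about) the actual control software the tech guy used:

```python
# A toy model of the courtroom display routing, for illustration only.
# All screen names are invented; the real control system was never visible to us.

SCREENS = {"jury_tv", "judge", "witness", "plaintiff_table", "defense_table"}

def route_exhibit(exhibit_id, visible_to):
    """Show an exhibit only on the approved subset of screens."""
    unknown = visible_to - SCREENS
    if unknown:
        raise ValueError(f"unknown screens: {unknown}")
    for screen in sorted(SCREENS):
        status = f"showing {exhibit_id}" if screen in visible_to else "blank"
        print(f"{screen:16s} {status}")

# The witness, judge, and lawyers see an exhibit the jury must not (yet) see.
route_exhibit("exhibit-17", {"judge", "witness", "plaintiff_table", "defense_table"})
```

Maintaining that visible_to set correctly, live, under courtroom pressure turned out to be the hard part.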
Sounds easy, right? By the end of the trial they more or less had it down. Here are some highlights of their learning curve, from a usability perspective in the juror box.
The giant TV sat on a stand about six feet high. It was hard to move, trailed a long cable, and had to be shoved around the room without tripping over the other cables. One of the lawyers did trip on a cable early on (not badly, but it threw him momentarily off stride). The TV blocked some of the jury from seeing the witnesses (not good) and blocked the judge from seeing some of the jurors (not good). We paused while they figured out where best to put the TV. The lawyers and judge could not see what was displayed on the juror TV without walking around in front of it, which led to some significant early "oops" moments. For example, one witness was looking at his screen (along with the lawyers and judge at theirs), describing something about the spine in great detail, while we jurors were completely in the dark about what was being referred to. Eventually someone noticed, time was taken to work out why the connection to our TV wasn't working, and we started over again.
In another case, the plaintiffs' lawyer thought we were all looking at some emotionally laden photographs the witness was describing; however, we couldn't see them. There were quite a few photographs and it went on a while. It turned out that our not being able to see the photographs was a good thing, because the lawyer hadn't requested and obtained permission to show them as exhibits (not good). Near the end, one picture momentarily flashed on the screen. Our generally genial judge looked rather annoyed when all this came to light moments later. A backroom huddle between the judge and the lawyers took place, and I suspect some strong words were exchanged. In the end we saw none of the photographs and ignored the one we had glimpsed. Had we been shown all those photos, which were eventually ruled inadmissible, I suspect there would have been some truly severe backroom lecturing by the judge. As it was, both lawyers were pissed off from what I could tell from their faces, though nary another word was spoken on the matter.
Then there was the witness who, when the monitor on the stand stopped working, was asked to come down to the big TV. He stood smack in front of it, back to the audience (ever had a teacher do that? :), and had to be gently and repeatedly asked by the lawyer to move out of the way so we could see what he was talking about. The lawyer actually helped him move at one point.
One time the judge's monitor stopped working, and they tried to swap it with the witness monitor a few feet away. The cord wasn't long enough. Our very cheerful and friendly judge disappeared under his bench - one moment he was there and the next moment he was gone. I had glanced away at the tech guy, and when I looked back - no judge. Huh? Moments later he popped up, in black robe, happily holding a cable and cheerfully announcing he had figured it out. It was an amusing moment in an otherwise not-at-all-happy trial, and we appreciated every rare light moment that came our way.
There was the first tech guy (not the one referred to above) who fell asleep sitting at his little table at the back of the courtroom. The lawyers had been in the habit of requesting him to do something without turning around, and it was only when there was no response that we all noticed the poor guy snoozing. He was probably bored to tears by all the talk of the T4 and L4-L5 spinal disks, protruding jelly globs of spinal material, endless lists of complicated drugs I had never heard of (and hope to heck never to have to take, after hearing about their side effects), and the gory details of spinal disk replacement vs. disk fusion vs. inserting tubes full of pain medication around the spine... (there was more, but I'll stop). After that we got the second tech guy.
I was pleasantly impressed with the grace and patience with which all the court personnel handled the technical experiment. It must have interfered with their usual mode of doing business and forced them into cognitive context switching when they least wanted it. From a juror's perspective, I wonder how they conducted such a trial before: all those complex digital images, some rotating and zooming. The computing setup was very effective in presenting the information under discussion and made it quite clear why there was a lack of medical agreement on various procedures and possible outcomes. One variable, at least, was made less abstract and easier to evaluate in our deliberations.
As a passing note, this experience, in spite of the time it took up and the emotional stress it induced in all the jurors (we compared notes extensively afterwards), was absolutely worth it. My opinion of our trial system has risen significantly - everyone took it very seriously, worked hard together, and we did our very best in deliberations. It was a fair process. If you haven't served on a jury, I suggest you take advantage of the opportunity when it comes.
Labels: community, digital imaging, health, law, medicine, Social Issues in Computing
Sunday, July 24, 2011
A Tale of Two Valuations: Academia Next
I initially thought this post would be a piece of cake compared to the previous post about the corporate perspective. Not so. Perhaps because of my extensive academic experience, I am far more aware of the variance in how professionals are valued in the world of higher education. So I have been looking for points of common ground within academia... I am almost afraid someone will throw a rotten tomato at me because I know too much (I've been reading a murder mystery, where the person who knows too much often comes to a messy end).
To state the obvious (to academics at least), there are three classic areas of valuation for faculty: Teaching, Research and Service. Service (pretty much anything not teaching- or research-related, such as committee work, outreach, or student advising) can safely be said to come last in the pile. Do no committee work and you will get dinged; do too much and you will get dinged (I know someone who was denied tenure because he was told he performed too much service work). So the trick is to find the middle ground according to the culture of your department and institution.
From there it gets murkier.
Teaching: In some institutions this is virtually all that counts, but how it is measured varies widely. In some cases, it is all about teaching evaluations. Period. Get those numbers up and keep them high, or else. The pressure can be intense, and in extreme cases there is a completely predictable desperation to "please" students above all else. In my experience this is not the norm, fortunately, because it is incredibly destructive to the learning process. More often, in a teaching-oriented institution, evaluations are important but are only one indicator of how a faculty member is evaluated; more sophisticated methods of assessing effective learning are used as well. And by effective I'm not talking about statistical measures; I'm talking about qualitative evaluation. That is healthy, in my opinion. Institutions with a well-rounded process for evaluating teaching can produce amazing students who go on to do amazing things inside and outside the classroom and after graduation. And the faculty feel professionally successful and appreciated.
Research: In some institutions this is virtually all that counts. Again, the worst case is where not only are publications counted (literally), but the venues for publication are ranked. If you don't get into "the top" pubs, forget it. You are toast. Even within computing, there are disagreements about what counts as a quality publication venue. Then there are grants. Worst case, you need a fixed number of grants and big dollars - millions would be nice. Not healthy: there is only so much money to go around, and that sets up a system of guaranteed winners and losers regardless of quality - very much like grading on a normalization curve, something I never used and never will, because it has all sorts of negative side effects that educators are well enough aware of that I won't repeat them here.
Quite a bit of gloom in those paragraphs, so now to inject a positive perspective: BALANCE. It is all about balance. The original idea behind Teaching, Research and Service was to promote balance; some of all three are needed from every faculty member. Many institutions, although they weight teaching and research differently by design, maintain a healthy balance. What factors indicate successful teaching in such places? Each professional is evaluated in the context of both institutional need and established understandings of pedagogy and cognitive learning. Other factors come into play to varying degrees depending on the context: local needs, student needs, and so on. What factors indicate successful research? Very similar ones, actually. Institutional needs and established standards of rigor in scientific research lead to an evaluation of individual contributions (a reminder that we are talking about computing and related areas; I can't speak to fields such as the arts and humanities). Grants in these institutions aren't just about how much money is brought in, but about the effect the work is likely to have on science - or, in the case of educational research such as computer science education research, on the discovery and dissemination of improved teaching and learning theory and practice.
It feels like I've short-circuited my comments, but that comes from knowing too much.
A summation might be: for faculty in academia, professional valuation is based on teaching, research and service, and a healthy environment strikes a contextually appropriate balance among them. Value is not just numbers, nor is it vague and undefinable.
I feel like I'm stating the obvious, but that is only true if you are an academic. Based on some of the comments I received on my last post, about valuation in the corporate world, there are many readers to whom this post will be news.
One of my next tasks will be to see where I can locate opportunities for common ground.
But first, I'll ask you to graciously do what you did before and provide your perspective on:
1) What can you add about how academics in higher education are judged to provide value to their organizations? What can you add that is concrete - i.e., can be said in very concise form?
2) Where do you see common ground between corporate and academic valuation of professional contributions to their organizations?
(I think we have all heard about the areas where there is supposedly no common ground. Let's look for the positive now).
Sunday, July 17, 2011
A Tale of Two Valuations: Hi-tech Corporations First
Sometimes you have to hit a problem head on and just lay it out there. I've been thinking about the best way to discuss some of what I've been hearing and learning, on and offline, in public and private communications, as I tackle this question of corporate versus academic valuation of a professional's contribution. As I stated earlier, I believe there is more in common than may appear on the surface, but as I discuss this topic with people, I am not yet finding the concrete information I want to ferret out. I'm not ready by a long shot to give up. We need this information to be able to bridge the cultural divides and learn from each other. So I have been debating how to get away from the more abstract and/or philosophical discussions and into the reality of day-to-day existence.
I am going to write up some of what I have heard from two angles: first the corporate angle, and second the academic angle - both, I might add, informed by my own experience in both arenas.
First, perhaps the more challenging one for me: the corporate angle. Although I have worked in the hi-tech world, more of my career has been in academia. Let's stick to the realm of hi-tech and management-level positions - here you are likely to find more interdisciplinary activity, however defined, and innovative companies recognize that. Here goes...
I have been told that, from an American corporate perspective, a person is valued primarily on their monetary contributions, regardless of where they are coming from professionally. To be more precise, here is a close paraphrase of what someone said to me:
"you have to show quantitatively how much $$$ you have brought the organization; you have to show how much $$$ you saved the organization; you have to show the resources (physical) you obtained for the organization. That is what they expect. That is what they want to see. That is what they care about. That is what matters".
Is that IT?
Is that ALL?
Is that REALLY the only thing that counts or the thing that weighs far heavier than anything else?
Is it a fact that, no matter what else one may have accomplished, if one can't show precise fiscal contributions one is not highly valued?
This question burns at me when I look at professional value from the corporate angle. If the answer is yes, I'm feeling rather uneasy; the implications for bridging a cultural divide with academia become more daunting (but still not impossible, my optimistic side says).
So let me put it out there to those of you in corporate hi-tech, especially if you have held, or currently hold, a management-level position of any sort.
Do you agree with the above statements about how you and your peers are valued? If you agree, are you OK with this, or are you trying to change it? If you do not agree, or if there is more to it, then what else in your experience really counts?
Labels: business, creativity, industry issues, innovation, professional issues
Sunday, July 10, 2011
Random Weekend Thoughts About Creative Computer Use
I'm going to lay off the super-serious stuff this evening, as I have a little list of "items" ripe for a creative computer scientist to evaluate.
- A GPS system might have been useful for the small child I saw this afternoon on the beach, happily walking about a mile away from his parents. My friend followed the toddler, who was totally unconcerned, and with the aid of a lifeguard got him back to his mother. Her only comment about how far her kid had wandered on the packed beach was an unconcerned "there are a lot of people around." Who do you think most needs the GPS system: a) the child, b) the mother, c) the lifeguard, d) all of the above, or e) none of the above? (A toy sketch of the geofence check such a system would need appears after this list.)
- I have jury duty next week, and San Diego Mass Transit's software connection to Google Maps is determined to route me to Tijuana, Mexico; Monterrey, Mexico; or Paraguay. Where do you think the problem lies? (I truly believe I will not be crossing the international border tomorrow, regardless of what the software insists.)
- A friend was very concerned that software was not being used to assist more efficiently in redistributing food to the homeless. He described in great detail what such software would look like and asked why no one had created it. I realized as he spoke that software to do what he described does exist, but is used in other applications. Last week I spent a good hour with a visiting family member at the harbor watching a Dole (as in bananas and pineapples) ship unload box after box of food onto trucks bound for destinations all over the western US. The plaque on the harbor walkway briefly described the complex distribution system used to get those bananas I eat every morning from Central America - effectively, efficiently, and in just the right state of green - to all the places they have to go. What would it take to get such a system into the hands of large-scale nonprofits doing essentially the same thing? (A toy allocation sketch appears after this list.)
- While I am sitting in the stuffy courthouse waiting room (NOT in Paraguay), I am going to be on the lookout for just how computerization is or isn't being used... and how it could be. I might even get a bit inquisitive and ask questions; that would not be unusual behavior for me - as long as it doesn't land me on the wrong side of the court system.
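On the GPS item above: the core of such a system is a simple geofence check. Here is a toy sketch in Python; all coordinates, function names, and thresholds are invented for illustration:

```python
# Toy geofence check, assuming the child carries a GPS tag and a guardian's
# phone reports its own position. Names and numbers are hypothetical.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def wandered_too_far(child_pos, guardian_pos, radius_m=100):
    return haversine_m(*child_pos, *guardian_pos) > radius_m

# Roughly a mile apart on the beach -> True, time to alert someone.
print(wandered_too_far((32.7500, -117.2520), (32.7640, -117.2520)))
```

The hard questions, of course, are not in the code: who wears the tag, who gets the alert, and what it says about us that we might need one on a packed beach.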
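And on the food-redistribution item: at its core, my friend's system is an allocation problem of the kind the banana distributors already solve at scale. A deliberately tiny sketch follows, with every name and number invented:

```python
# Toy sketch of the core of a food-redistribution scheduler: match surplus
# lots to recipient sites, most perishable first. All data are fabricated;
# real systems add routing, refrigeration, fleets, and shelf-life forecasts.

surplus = [  # (lot, pallets, days_until_spoiled)
    ("bananas", 40, 2), ("bread", 15, 1), ("canned goods", 60, 90),
]
sites = [  # (site, pallets_needed)
    ("downtown shelter", 30), ("eastside pantry", 50), ("harbor kitchen", 20),
]

def allocate(surplus, sites):
    """Greedy: send the lots that spoil soonest out first, filling each site in turn."""
    plan = []
    demand = {name: need for name, need in sites}
    for lot, pallets, _ in sorted(surplus, key=lambda s: s[2]):
        for site in demand:
            if pallets == 0:
                break
            sent = min(pallets, demand[site])
            if sent:
                plan.append((lot, site, sent))
                demand[site] -= sent
                pallets -= sent
    return plan

for lot, site, n in allocate(surplus, sites):
    print(f"Send {n} pallets of {lot} to {site}")
```

A production system would layer on geography, vehicle routing, and spoilage forecasting - exactly the machinery the commercial produce networks already have - so the question stands: what would it take to put it in nonprofit hands?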
Labels:
business,
community,
creativity,
Social Issues in Computing
Wednesday, July 6, 2011
Computing and the Reduction of Global Conflict
I came across some creative examples of university faculty using computing for societal benefit. I located them through podcasts produced out of New Zealand by "The Sustainable Lens." One faculty member is taking an empirical approach to studying the factors that promote peace.
A broadcast from 5/13/11 profiles the work of Juan Pablo Hourcade at the University of Iowa. Hourcade earned his doctorate in computer science with a focus on HCI. One of his goals is to convince people in the computing field that computing technologies can be used to reduce global conflict. He recognizes that a key to making the study of peace acceptable is to apply empirical scientific methodologies to the research. There are many aspects to this work. One of the most fascinating is the mining of masses of data to identify factors that increase or decrease the chances of conflict. These data are drawn from a myriad of sources, including demographic, historic, financial and economic, supply chain, social and human condition, gender and inequality, environmental stress, social stress, and consumer behavior data. The power of computing is also leveraged to provide transparency into connections between individuals and transactions.
Computing is used to identify the factor(s) that matter most in fueling or reducing conflict; the data are drawn from contemporary and historic sources - some going back several thousand years. Predictive modeling has a role as well, and visualization renders complex results easier to understand (there is a small pun in there, by the way). The precision of computing makes it possible to zero in on the interaction of critical factors, providing the all-important empirical (rather than philosophical) basis for making large-scale policy decisions. Hourcade also discusses at some length the implications for personal decision making.
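To make the data-mining idea concrete, here is a drastically simplified stand-in for that kind of factor analysis - not Hourcade's actual method or data, just an illustration of reading risk factors off a fitted model, using fabricated numbers:

```python
# Fabricated illustration: fit a model on historical indicators and read off
# which features push conflict risk up (+) or down (-). Feature names,
# coefficients, and data are all invented for the sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["inequality", "environmental_stress", "trade_openness", "youth_unemployment"]
X = rng.normal(size=(500, len(features)))
# Fake ground truth: inequality and unemployment raise risk, trade lowers it.
logits = 1.2 * X[:, 0] + 0.4 * X[:, 1] - 0.9 * X[:, 2] + 0.8 * X[:, 3]
y = (logits + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)
for name, coef in sorted(zip(features, model.coef_[0]), key=lambda p: -abs(p[1])):
    print(f"{name:22s} {coef:+.2f}")  # sign: raises (+) or lowers (-) risk
```

The real research turns on data quality and causal interpretation, not on fitting a model; the sketch only shows why "which factors matter, and in which direction" is a question computing is well suited to ask.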
Drawing on known results in human psychology, Hourcade talks about how social media can be actively used to promote compassion - which, he claims, psychology has shown is key to reducing or altogether avoiding conflict. Social media can bring together people who see things from different perspectives. Psychology refers to this as reducing personal distance, a proven and highly effective method of "humanizing" those who appear threatening but do not necessarily need to be.
Although he touched on the topic for only one sentence during the interview, my ears perked up when Hourcade said he saw a role in conflict reduction for electronic voting systems. As I have learned through researching this topic for my book project (here is an earlier post I wrote about internet voting), electronic voting is incredibly controversial and often provokes passionate conflict! I wish there had been more time in the interview to pursue Hourcade's view on the role of electronic voting.
Hourcade made the interesting observation that there has been a significant amount of research in the computing field into ways to improve warfare and very little research aimed at reducing it. Good point.
Why not put the power of computing to work for the cause of global conflict reduction?
Is there any plausible reason not to pursue this line of research?
What ideas do you have about why computing research for peace has not been explored as much as, say, economics? (Much of the data comes from the same sources.)
Labels: community, data mining, ethics, HCI, modeling and simulation, New Zealand, problem solving, psychology, public policy, research, Social Issues in Computing