Thursday, May 5, 2011

Interdisciplinary Computing Meeting Number 2: Day 2, Part 1

The second (and final) day of our meeting was jam-packed. I would need a giant suitcase to contain all the notes I could write about our discussions. So... I'll start again with some of the items that jumped out at me from the day.

  • Interdisciplinary work need not be (as some have feared) a zero-sum game. In fact, Interdisciplinary Computing (IC) can change the zero-sum game (either you go into field X or field Y). With IC, one department does not lose students to another department ("student stealing"). Quite the opposite: departments that implement IC programs, minors, or the variety of forms I have discussed previously often end up with students having a foot in multiple departments. Depending upon how an institution calculates FTE, a student may be counted towards FTE in more than one department, thus giving "credit" to both departments - a good thing. (FTE = Full-Time Equivalent, or to the layperson: a calculation of how many students are considered to be in a class/department/division/school. FTE numbers often have a powerful effect on fund distribution, hiring, etc.)
  • Here is an interesting observation: look at the National Science Foundation Discoveries site, where the NSF highlights innovation. Most of the featured projects are IC - computing plus another science.
  • Big Science and Small Science. These terms were introduced, and there was a bit of discussion about what they mean and their implications. One table (I was not sitting there) later spent time discussing the topic in detail, but from the more general discussion, here is the gist I came away with. It started when someone posited that computer science projects would more likely be "small science" (aka "long tail science"), i.e. local projects that collectively produce a lot of data; "large science" projects are large multi-institutional / multi-group / multi-person projects such as those conducted by NOAA (National Oceanic and Atmospheric Administration). The "problem" would be that larger projects are more likely to get large funding, and thus dissemination. What to do?
I'm not sure I totally buy this definition or the gloomy conclusion. There are certainly computing projects that obtain multi-million dollar funding - when I was working on my PhD, some faculty I knew received that kind of funding, and I was recently one of several PIs on a grant in that category. Even at smaller funding levels, multi-institutional grants of $500,000 or $1,000,000 are nothing to sneeze at and can certainly produce results and wide dissemination. Sure, it helps to be a government agency, but you do not have to be one to get your results "out there". You may have to work a little harder at dissemination, but there are channels. In fact, sometimes you know exactly which groups have an interest in your type of work, and you can target them in a direct, semi-personalized manner. And we haven't even mentioned the fact that industry is working on IC projects and has its own mechanisms for dissemination, sometimes working with academia and other times going it alone.

But really, the question is: why is "small" vs. "large" important for a discussion of IC? Our conversation turned to how small science (say, two departments such as CS and History working together) can leverage large science. NASA, historical archives, and a variety of other organizations make large amounts of data publicly available; these data can be used, and added to, at the local level. Perceptions of both fields can be changed, and information made available that might otherwise gather dust. Here is a great example: check out Digital Durham, a fascinating integration of computing and history that one of the meeting participants is working on.
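Going back to the FTE point in the first bullet: here is a minimal sketch of how split counting might work. The 15-credit full-time load and the specific numbers are hypothetical - every institution has its own formula - but the idea is the same: a student enrolled in two departments contributes fractional FTE to each, rather than "belonging" to only one.

```python
# Hypothetical sketch of split FTE accounting; real formulas vary by institution.
FULL_TIME_CREDITS = 15  # assumed full-time credit-hour load

def fte_by_department(enrollments):
    """Sum fractional FTE per department.

    enrollments: list of (department, credit_hours) pairs covering
    every course a student (or group of students) is taking.
    """
    totals = {}
    for dept, credits in enrollments:
        totals[dept] = totals.get(dept, 0.0) + credits / FULL_TIME_CREDITS
    return totals

# One IC student taking 9 CS credits and 6 History credits gives
# partial FTE "credit" to both departments.
print(fte_by_department([("CS", 9), ("History", 6)]))  # {'CS': 0.6, 'History': 0.4}
```

Under a formula like this, neither department "loses" the student - which is exactly why IC programs need not be zero-sum.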

Arg. I'm out of space (assuming the general protocol of not making a post tooooo long). To be continued...
