
Thursday, March 17, 2011

Computing has an Important Role to Play in Earthquake Preparedness and Response

When devastation as large as that currently happening in Japan occurs, it can be hard to know what to say or do. If you are like me, you have been reading the news daily (or more often), caught up in a mix of complicated reactions. This morning, for example, I watched computer-generated weather simulations of possible flow patterns of radioactive contamination (via the BBC). As the simulation looped over and over, I couldn't help but be transfixed by one large multicolored plume as it slid like a mutant amoeba over Southern California. Right here, in other words. The colors registered different levels of radiation. Computers generated those simulations, and unsettling as they were, I'm glad to be able to see them. It is better to have knowledge from a reliable source than no knowledge, even when that knowledge is based on probabilities and a great deal of the unknown.

I was very grateful for computer science when the recent earthquake struck New Zealand. A friend lives in Christchurch, and it was only a matter of a few nerve-wracking days before a brief post appeared on Facebook telling all of us that she was ok - no doubt considerably freaked out, but ok. Thank you to the computer scientist creators of social networking.

The situation was very different in 2004 when the tsunami hit Sri Lanka and someone I know was very near the coast. It was over a week before we learned that he and his family were alive. There was no email access, no smart phones, no Facebook page, nothing but waiting and telephone calls to the US State Department (who were terrific by the way). The computing communication infrastructure either was not there to start with or had been completely disabled by the dual natural disasters of earthquake and tsunami.

Watching the developing situation in Japan, with the triple disaster unfolding as the possibility of nuclear contamination is added to the realities of earthquake and tsunami, and watching the weather models, I was reminded of the researchers around the world who work full time developing earthquake simulation models and performing seismic hazard analysis. They work on these models so that we can know as much as possible about what can happen, how it can happen, how we can best prepare, where to erect buildings and other structures, and how to protect them as best we can.

Developing 3-D and 4-D maps and models is a classic computing problem of large-scale data analysis: selecting and applying the "best" constraints, knowing that the model you develop will depend upon choices about the possible epicenter (the point on the earth's surface directly above the underground origin of the earthquake), focal depth (how deep that origin is), magnitude (the amount of energy released), and the possible paths the seismic waves may follow. There are innumerable factors to include or leave out of this type of model - local and regional variations, ground type, land masses, rock type... just for starters. It is all about improving probabilities and predictions.
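To give a flavor of what such a model juggles, here is a minimal sketch in Python of a toy probabilistic hazard calculation: sample hypothetical scenarios (epicenter along a fault, focal depth, magnitude), apply a crude attenuation formula, and count how often shaking at a site exceeds a threshold. Everything here - the site and fault coordinates, the magnitude distribution, the attenuation formula, the threshold - is invented for illustration; real seismic hazard analysis uses carefully calibrated ground-motion models.

```python
import math
import random

# Toy probabilistic hazard sketch: sample hypothetical earthquake scenarios
# and estimate how often shaking at a site exceeds a threshold.
# All coordinates, distributions, and constants are illustrative only.

SITE = (0.0, 0.0)                        # site coordinates (km), made up
FAULT = ((-50.0, 30.0), (50.0, 30.0))    # straight fault segment 30 km away

def random_scenario():
    """Draw one hypothetical earthquake: epicenter on the fault, depth, magnitude."""
    t = random.random()
    x = FAULT[0][0] + t * (FAULT[1][0] - FAULT[0][0])
    y = FAULT[0][1] + t * (FAULT[1][1] - FAULT[0][1])
    depth = random.uniform(5.0, 20.0)            # focal depth, km
    magnitude = 5.0 + random.expovariate(1.5)    # crude heavy-tailed magnitude draw
    return (x, y), depth, magnitude

def shaking_at_site(epicenter, depth, magnitude):
    """Very rough attenuation: intensity grows with magnitude, falls with distance."""
    dx = epicenter[0] - SITE[0]
    dy = epicenter[1] - SITE[1]
    hypocentral_dist = math.sqrt(dx * dx + dy * dy + depth * depth)
    return 10 ** (0.5 * magnitude) / (hypocentral_dist ** 2 + 1.0)

def exceedance_probability(threshold, trials=100_000):
    """Fraction of sampled scenarios whose shaking exceeds the threshold."""
    hits = 0
    for _ in range(trials):
        epicenter, depth, magnitude = random_scenario()
        if shaking_at_site(epicenter, depth, magnitude) > threshold:
            hits += 1
    return hits / trials

if __name__ == "__main__":
    print(f"P(shaking > 1.0) ~ {exceedance_probability(1.0):.3f}")
```

Swap in a different magnitude distribution or attenuation relation and the answer changes, which is exactly the point: the predictions depend heavily on which constraints you choose to apply.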

The paths of seismic waves are not always what one might expect. For example, one reason Los Angeles gets hit so hard by some earthquakes on the famous San Andreas fault is because there is a natural "funnel" that directs ground motion directly into the city from a section of the fault well east of the city. Complex modeling and a solid knowledge of the land revealed this important information. 

You have to know your hardware, firmware, and software; you have to know how to work with the latest and most sophisticated high performance computing networks. You need operating system, algorithm, and programming language optimization; databases to hold all those data; and sophisticated networks to link the distributed grids of computers.

If you have an interest in earth science and scientific computing (you don't need expertise in both - this is where collaboration between fields comes in) then here is an area where you can work to make a difference in people's lives.

Saturday, January 8, 2011

What CS Gains From Interdisciplinary Computing

Following up on last night's post about the reasons why many people engage in interdisciplinary computing work, I'd like to briefly list some examples that came out in our meeting discussion today. At one point we decided to get specific and share examples of how the computer science discipline has directly benefited from interdisciplinary collaborations.

Here are some of them, written as close to verbatim as I could manage while taking notes on the fly. I'm certainly not an expert in many of them, so if anything I say is missing an important or interesting piece, hopefully someone will chime in and amplify for me. However! This is a classic facet of interdisciplinary collaboration - no one individual can know multiple fields at the same depth and accuracy. That is part of why it is such productive work!

The Folding@home project involves experts in biomedicine, biotechnology, distributed computing, and high performance computing (HPC). As the site implies, computer science has been stimulated in HPC, algorithm development, networking, and simulation. We have expanded the boundaries of computational understanding in all these areas.

Distributed computation, not only in that project but in other very large scale projects, has pushed the boundaries of computational efficiency to new levels as we develop the algorithms needed to tackle ever larger, seemingly intractable problems - problems that, it turns out, are not necessarily so intractable.
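As a rough illustration of the "independent work units" pattern that projects like Folding@home rely on (the real systems add scheduling, fault tolerance, result verification, and the actual science on top), here is a minimal Python sketch in which a coordinator farms simulated work units out to a pool of worker processes and aggregates the results. The function simulate_unit is a made-up stand-in for the real computation.

```python
from multiprocessing import Pool

# Minimal sketch of the "independent work units" pattern used by large
# distributed projects: a coordinator hands out units, workers compute
# them independently, and results are aggregated at the end.

def simulate_unit(unit_id: int) -> float:
    """Pretend to do an expensive, independent simulation step."""
    total = 0.0
    for i in range(1, 200_000):
        total += (unit_id % 7 + 1) / (i * i)   # arbitrary busywork
    return total

def run(num_units: int = 32, workers: int = 4) -> float:
    """Distribute the units across worker processes and sum the results."""
    with Pool(processes=workers) as pool:
        results = pool.map(simulate_unit, range(num_units))
    return sum(results)   # trivial aggregation step

if __name__ == "__main__":
    print(f"Aggregate result: {run():.4f}")
```

The appeal of problems shaped like this is that adding more machines adds more throughput almost linearly, which is what lets volunteer and grid projects scale to enormous sizes.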

Online auctions. I missed the intro to this conversation, but when I picked up, the discussion was about how the movement from manual to electronic auctions required a change in how economists worked on what turn out to be NP-hard problems. Computer science theory is making advances so that these auctions can function properly. I could use some help filling in what I missed in this conversation, because it sounds very interesting!
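My guess - and it is only a guess, not something I can confirm from the meeting - is that this touched on combinatorial auctions, where bidders bid on bundles of items and the winner determination problem is NP-hard. The brute-force sketch below, with invented bids, finds the revenue-maximizing set of non-overlapping bids by trying every subset; it works for a handful of bids and collapses at scale, which is exactly where the CS theory work (integer programming formulations, approximation algorithms) comes in.

```python
from itertools import combinations

# Toy winner determination for a combinatorial auction: choose a set of
# bids with disjoint item bundles that maximizes total revenue.
# Brute force over all subsets of bids -- fine for a handful of bids,
# hopeless at scale (the problem is NP-hard). Bids are hypothetical.

def best_allocation(bids):
    """bids: list of (bidder, frozenset of items, price) tuples."""
    best_value, best_set = 0, ()
    for r in range(1, len(bids) + 1):
        for subset in combinations(bids, r):
            bundles = [b[1] for b in subset]
            # bundles must not overlap: each item can be sold at most once
            if len(frozenset().union(*bundles)) == sum(len(s) for s in bundles):
                value = sum(b[2] for b in subset)
                if value > best_value:
                    best_value, best_set = value, subset
    return best_value, best_set

if __name__ == "__main__":
    bids = [
        ("alice", frozenset({"A", "B"}), 10),
        ("bob",   frozenset({"B", "C"}), 8),
        ("carol", frozenset({"C"}),      5),
    ]
    value, winners = best_allocation(bids)
    print(value, [b[0] for b in winners])   # 15 ['alice', 'carol']
```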

Working with the film industry has driven the development of both 3D graphics and user interfaces. Many of the "old" rules of thumb (meaning circa the early 90s) no longer apply, and computer science has stepped up to revamp our understanding of what we can do with graphics at very fundamental levels. User interface theory and application have evolved right along with it. For example, imagine how far we have come from Star Wars (1977) to Toy Story (1995) to Avatar (2009).

These advances in computer science from interdisciplinary work with the film industry in turn spurred development of GPUs (graphics processing units), which were then deployed in so many areas of computing, from games to simulations and beyond, that they cannot be easily itemized.

Music downloads. The virtually ubiquitous desire to stream music in one form or another has led to advances in basic streaming technologies and supporting algorithms that are now used in far-flung applications such as digital image processing in medicine and the transfer of image data (MRIs, for example) to medical service providers on short notice across long distances. Recognition algorithms were mentioned specifically as a subset of algorithms that have advanced - I could use some supplementary information on this one!

Just a few ideas to whet your appetite. Speaking of appetite, I may not have one for a week. Two days of intense cognitive load and equally intense gastric load have left me with wonderful memories and a bit of a tummy ache. Brings a new meaning to "Brain Food".