Thursday, June 30, 2011

Assessing Value in the Corporate and Academic Worlds: An Interdisciplinary Problem?

During all of these ponderings about innovation and interdisciplinary computing I have been working with the question of where interesting ideas coming from the corporate world can (should? do?) apply in the academic world. Sometimes the crossover seems quite reasonable with a little twist in thinking, as in this summative statement from The Innovator's Dilemma (the book behind the past five or so posts, in case you haven't been following the conversation):

"...historically, disruptive technologies involve no new technologies - rather they consist of components built around proven technology (ies) and put together in a novel product architecture that offers a set of attributes never before available".

It only takes a few word changes to see how this applies to interdisciplinary computing. I bet you can figure it out if you have been following this conversation (can you?).

A larger question looms over me now: just how far, and in how many directions, can experiences of successful or unsuccessful processes in the business world be applied in academia? And conversely, what can the corporate world learn from successful and unsuccessful processes in academia?

On the one hand the cultures are vastly different. On the other hand, I am becoming convinced that there is a lot that can transfer from one to the other. Because my reading came from a book about the corporate world, I was looking at how the academic world sometimes fits the author's theories of successful "product" innovation.

I am equally curious about how understandings of success and failure in academia can be useful to corporate success.

There are a lot of hard questions when you start from scratch. I'll start with a very basic one: the concept of "value".

The corporate world often places value upon monetary contributions (Person X saved the corporation $Y by doing XYZ, or Person X brought in $Y or Z physical resources to the corporation). Lately, I have been asking some professional acquaintances how they would place "value" upon academic contributions quantitatively.

I was not surprised by the reactions I have been receiving, which show just how differently "value" is determined in higher education and in corporate America. The more difficult I find it to bridge this cultural divide (I'm starting to think of it as an interdisciplinary computing problem of a new sort), the more I want to do it.

How can the academic and corporate worlds arrive at a common understanding of value that does not involve one unilaterally imposing its "definitions" on the other?

How is VALUE going to be defined?

  • If you are an academic and you had to express your "value" to a corporate executive in a way they would appreciate, what would you say?

  • If you are in the corporate world and you had to find "value" in an academic's career in a way that acknowledged their worth honestly, what would you say?
Ideas about these questions could lead to a really productive conversation.

Thursday, June 23, 2011

What is Innovative Interdisciplinary Computing Anyway?

So I promised to write about innovation - what it is and how to identify it, particularly with regard to interdisciplinary computing. The more I have pondered this issue, the more it feels like a trick question.

Innovation is something unusual, different, new - pick your vocabulary, but the basic idea is that it is something no one has thought of before. Now there is successful innovation and unsuccessful innovation, a distinction that sometimes gets forgotten. After all, there are plenty of innovative ideas that never gain traction. Purple star-shaped Twinkies, anyone? Some ideas are unique, but unlikely to gather a following.

It is easy to identify an innovative idea in retrospect. Interdisciplinary computing programs and activities such as I have discussed across many blog posts provide some excellent examples: pattern recognition and computational emotion, computational journalism, bioinformatics, Charles Babbage in the front seat of your car. Successful innovations eventually mainstream themselves. Bioinformatics as an interdisciplinary field, for example, is now perceived as mainstream. But it wasn't originally - not when I was in grad school not too many years ago. Conversely, the talking Babbage GPS has a ways to go before we all have one in the passenger seat. Assuming we ever do.

When each of these areas of innovative computing appeared they were probably only recognized as interesting by a limited number of people. That is the nature of true innovation - if everyone could think of it then it wouldn't be innovative. It is easier to discuss what successful innovation is using past examples than it is to identify it at the moment of inception. (hence the feeling of a trick question)

For quite a while interdisciplinary computing as a concept didn't exist. It was (and in ground breaking areas arguably still is) considered a strange term. Before we recognized the idea of equal contribution of two fields to create a new creative field, we tended to think in terms of: computing; other field; applications of computing in other field.

The whole idea of interdisciplinary computing was very innovative. It was disruptive to traditional computing, as evidenced by the objections, denials, dismissals, and "it isn't real computer science" reactions. The Innovator's Dilemma book I have been jumping off from for the past several posts calls this being "trapped in reactivity" - perfectly normal, predictable, human, and a great way to miss the boat: to not recognize the force of merging fields until they are upon us.

Interdisciplinary computing on a high level is becoming mainstream, and the innovations are now occurring within its subdivisions - what new fields will emerge and gain traction? What do we need to be on the lookout for?

We can't rely on psychic powers to spot significant innovative potential in interdisciplinary computing. However, there are a few guidelines to identifying these ideas if we continue to follow the theory presented in The Innovator's Dilemma. As we cruise along in our professional lives, we can keep alert to changes around us and ask questions.

1. Is there a problem? In other words, is something not right? For example, there has been a "problem" for many years with declining interest in studying computing in school (at any level) and an increase in computing-savvy students choosing to study other fields. We knew that a long time ago. It can take a while, and I believe it did, for the severity of the problem to be acknowledged - and this is "the real problem". Seemingly logical hypotheses occupied our energies: "enrollments are down because of the booming economy and thus students don't need a computing degree to get a job in a computing field" or "enrollments are down because of the poor economy and the perception that all the computing jobs are going overseas". The economy definitely has an effect on computing enrollment - no argument there. However, it can't always be the economy! If that were the case, we could all throw in the towel, because the economy will ALWAYS be either good or bad or heading from one to the other!

So. First: recognize a problem. Second: recognize when the common responses may be reactive and may not reflect a full understanding of the problem.

2. Do I understand the problem? Really understand the problem? If you aren't sure, how do you come to understand it? You don't run a lot of surveys and focus groups (so goes the theory) and ask people what they think or want. Instead you watch what they do. In our example, we would have watched and seen that students were going to study biology/law/engineering/economics (whatever) and learning the computing they needed for that field through those studies, or on the job. Taking it one step further, watching many of those people would have shown that many of them *enjoy* computing, though they might have said otherwise if you asked the question. This is still the case: many of the crossover people don't consider themselves computer scientists, but they thoroughly enjoy computational thinking and using sophisticated computing skills. Hmm.... Realizing that early on might have triggered a different way of thinking about how to tackle computing enrollment challenges. Many in the computing community now understand the problem and are taking action. The truly innovative ones recognized it early and jumped on the opportunity.

3. How can I view the problem as an opportunity? Once you understand, really understand the problem, look at it as a chance to think outside the box, take risks and do something really different in response. Be prepared to learn as you go - as many successful interdisciplinary computing programs have in fact done (here was one nice example I profiled a few months ago). The challenge/opportunity includes searching out the market (substitute: students) for your new idea rather than trying to convert the existing market (substitute: students who have expressed interest in computing but not followed up). In our example, that might mean looking for the students who demonstrate through their actions that they are multi-disciplinary by nature and interested in the intersection of fields. Many successful interdisciplinary programs have done just that: attract people who are deeply interested in subjects that appear unrelated to computing.

In other words: look for the direction that people are already going (into biology/law, etc.) and aim your program or project at them. They might very well love your idea. And if you don't succeed the first time, try again - plan for this. Keep it simple. Assume you won't get your program or project right the first time and reserve resources (people, time, energy, money, etc.) for re-tooling, and re-tooling again. Acknowledge that you don't know where you will end up, other than that you will end up somewhere new where there is a need now - not sometime in the future.

4. How can I view the plan's so-called weaknesses as strengths? There will be doubters and the weaknesses of your interdisciplinary efforts will be pointed out to you. Turn it around. Because the weaknesses may well be unrecognized strengths - that is part of what makes the plan innovative!

Reading The Innovator's Dilemma has been one of the more fascinating and eye-opening ways to look at computing, whether in industry or academia. Interdisciplinary computing is a perfect ongoing case study for testing the book's ideas and perhaps moving computing forward in new ways.

Monday, June 20, 2011

Might "Silo-ing" Be a Good Thing for Interdisciplinary Computing?

Two questions about innovation and interdisciplinary computing are on my mind at the moment (continuing my conversation from the last three posts). One question is "What is innovation, and how does one identify it?" The second is "How does one support innovation?"

I'm going to tackle the second question this time, and the first question in the next post. In a comment on my last post, Jim L. spoke about the changes that have taken place at RIT over the last decade, and as I replied, I see a lot of innovation at work in the diversifying and splitting off of related computing degree programs.

The approach that RIT appears to have taken (I am not privy to internal institutional decisions) fits in some ways with one of the primary tenets of how to nurture and succeed with truly innovative change (the book refers to these changes and technological advances as "disruptive"). The claim is that in order to succeed, an organization must spin off into a separate organization the processes, resources and values needed to make the change fly. Sometimes the split is geographic, but whether physically remote or not, firmly established boundaries are drawn between the existing institutional culture and the new culture the innovation needs in order to flourish. So when RIT (and again, I am speculating here) broke off its computing programs into different areas run by different faculty, I would hazard a guess that they were in effect creating new cultural structures. The fact that the programs were very successful, and that "students are pouring in the door", reflects doing something seriously right - the Innovator's Dilemma (ID) book is littered with examples of organizations that attempted to create innovative change and failed because the prevailing cultural norms of doing business impeded the change.

Now, where I am particularly curious is Jim's comment that the groups are silo-ing - and that this is a problem. We traditionally view silo-ing as negative. However, from the theoretical stance of the ID book as I interpret it, this behavior may be a positive. Case studies in the book point to example after example where separate organizations were initially successful but flopped when forced by external pressures to re-merge into a larger or pre-existing organizational structure.

Let me be heretical for a moment: Is "silo-ing" in fact a way to maintain healthy boundaries for innovation? 

BUT, and I just throw this out there - intended as a thought that applies well beyond any one school - can silo-ing be viewed as something different? What if we alter our assumptions of what is "good" and "bad"? Just as "traditional ways of doing business" sometimes fail at supporting innovation, and thus one has to re-evaluate what is "good management" in those contexts, is it possible that "silo-ing" or the separation of innovative interdisciplinary groups is a productive thing to maintain?

Following that thought, if successful separate organizations (in the academic departmental/major lingo this would mean degree programs) succeed because they split off and form their own values, processes, resources => culture, is maintaining that separation perhaps a positive event?

There is a wonderful table on page 177 of the ID book that I don't dare scan in, for fear of copyright violation, but it lays out really nicely the ways to fit the requirements of an innovation to an organization's capabilities (note: not the same as the capabilities of the people who work in that organization). It is worth looking at, because other educational institutions have succeeded (and failed) at similar innovations, and I found the insight gained from studying this table fascinating.

Before I leave off on this post, I want to point to two other educational organizations that are following the most important approach to supporting innovation laid out in The Innovator's Dilemma: creation of a separate protected organization.

One is well established, one is just getting off the ground.

The Rose-Hulman Institute of Technology created a separate entity called Rose-Hulman Ventures, a think tank and incubator of sorts that involves students and industry in collaborative efforts to spark innovation and entrepreneurship (disclosure: I used to work at Rose-Hulman). Ventures, as it is called locally, is in a physically separate location from the main campus and operates on a very different model from the academic programs. They have had some noteworthy successes.

A very new venture, still in the early stages of development, is the creation of a graduate program in Wireless Health through Case Western Reserve's school of engineering. Case Western is located in Ohio; this program is being set up in San Diego. You might say (devil's advocate speaking here): "What? A whole program in wireless health?" That would be a typical response from an establishment pov to a radical, risky venture - according to the theories of why innovation sometimes gets shut down. On the other hand, they are creating a clear boundary (at least geographically) between the main campus and the location of this program. Whether this program succeeds will of course depend on many other factors besides location - as we have been discussing. But it is a very interesting example to watch develop and see what happens.

Wednesday, June 15, 2011

Did Academic Computing Overlook a Big Threat to Its Sustainability?

I'm going to go out on a limb and float an idea for your consideration. It may be provocative, but I'm going for it because it is worth serious evaluation.

Take 2 related and heavily discussed issues:

1. What "is" computer science? The verbal wars that have been waged over this question.....yikes. Especially every time a new set of ACM/IEEE curricular guidelines are developed. I (and possibly you) have heard people publicly go for the jugular defending one point of view or another.

2. Why do computing programs continue to suffer high attrition, and have trouble attracting students in the first place - especially so-called "under-represented" students? By the way, to state the (hopefully) obvious, computing students overall are under-represented, in terms of enrollment numbers, compared to any one of a large number of other fields.

As I continue to barrel through the book The Innovator's Dilemma (see last two posts) I wonder:

  • Is it possible that part of the reason computing struggles for legitimacy among students is because the faculty did not recognize soon enough an increased use of computer science in other fields? 

  • Is it possible that computing faculty did not recognize the deeper and deeper incursion of computing into other fields for what it was to become - a serious curricular challenge? 

  • Is it possible that faculty were focusing so heavily on what they had been doing all along, and focusing innovation on traditional areas within the discipline, that they simply were unable to recognize a need to embrace radical curricular and research change and maybe (out on my limb here for sure) a radical re-evaluation of the definition of computer science? 

These ideas cross my mind because:

The Innovator's Dilemma demonstrates case after case where established companies (substitute "academic departments") steadfastly focus their innovations on existing customers (substitute "traditional ideas of what a computing student should be interested in and good at"), such that they do not recognize "entrant" companies (substitute "other departments that increasingly rely on computing to support their cutting edge advances") as threats until it is too late and they lose most of their business (substitute "students").

Anecdotal Evidence: I have heard many discussions (as I suspect have you) where the claim is made that computing used in another discipline is "not real computer science". Hence not to be seriously worried about.

  • Is it possible that computing programs got into the position they are in with regards to enrollment problems because they did not recognize the nature of future competition?

Maybe yes, maybe no. I'm not going to step into that tar pit.

More important: the historical answer becomes moot if the current reality is that other disciplines attract more students than computing does, that students find those other disciplines more tractable, and that along the way students acquire enough computer science skills to go on to successful careers where computational thinking is required - minus a degree in computing and all the critical skills and experience it brings with it.

If you have not read The Innovator's Dilemma and do not already know this, I want to strongly point out that in no way does the above imply companies/departments were doing anything "wrong". In fact, the theory goes out of its way to show how successful corporations that were eventually overrun were following well-accepted good management practices and "doing all the right things". That is much of what makes the whole notion of good management missing the boat so fascinating.

To transpose the situation onto computing departments would mean that computing departments were playing to their strengths, following established understandings of what success in computer science teaching and research entailed, and working very hard at attracting those students who had historically been successful. In point of fact, there is a lot (a LOT) of pressure on successful corporations (and by extension disciplines) to keep doing what they have expertise in. And a lot of pressure not to branch out in other risky directions.

I'd sure like to know what computing faculty think about these ideas.

So? What do you think?

Friday, June 10, 2011

The Tricky Problem of Anticipating the Future

I am full of questions today about how innovation and innovative thinking can be encouraged. I have more questions than answers so far. The issue applies equally to academia and to industry. Although the details of an approach to fostering innovative thinking may well differ, my gut tells me that the process is likely to be very similar. If this is true, there is a lot that industry and academia can learn from each other and as a result become closer allies. Interdisciplinary computing is a great example.

Here is the fundamental problem (as in the last post I am pulling my initial thoughts from reading The Innovator's Dilemma). One of the threats to ongoing success is an inability to anticipate new technology (broadly defined) and to switch to it in a timely manner. "Timely manner" as presented in the book means before the technology is widely accepted and in demand.

Let's talk Interdisciplinary Computing (IC). I recall, some 10 years ago when the term first started to be widely used, that many of the computer scientists I spoke with about the subject interpreted IC as one branch of computing working in sync with another - networking and AI, for example. Perhaps, at a stretch, the definition of IC might include computing and a form of engineering, usually electrical or computer engineering. The notion of IC as bringing computing together with an entirely different field was not something people were ready to embrace.

Today, when I speak of IC to people, there is a far greater recognition that computing expertise can work with expertise in other fields. There still tends to be a comfort zone containing the natural sciences, engineering and math, but as I have written about elsewhere, other fields are entering the picture.

Right now we are on the cutting edge of IC and exciting synergies are occurring. But history tells us that even if we embrace every conceivable field as one that can play ball in the IC world, that will not be the end of the story.

Something will change, if only because we will eventually run out of other disciplines to partner with. It may be sooner than you think. The problem, of course, is that we don't know when.

So here come my questions:

  • What will the next big wave be?
  • How do we spot it while it is in its infancy, so we can grab hold of it and run with it?
  • Why do some people seem to "see" these things (the innovators)?
  • Do we have to just rely on these visionary innovators to appear? (I hope not - that is a passive waiting game full of risks)
  • What can be done to encourage more people to "see" or anticipate the future direction of IC?

What ideas do you have about these questions?

Wednesday, June 8, 2011

Can We Surf the Wave of Innovation Without Falling Off?

I am thinking about interdisciplinary computing and the innovation involved with pulling it off as akin to surfing (Southern California is growing on me). The people who cross traditional disciplinary boundaries to create interdisciplinary programs or projects tend to be innovators. They are also skilled surfers. This is especially true in academia where the institutional culture supports specialization within one discipline. It takes a certain amount of guts and a balancing act to propose, plan and carry out a synthesis of very different fields.

What makes a new program or project successful? Many interdisciplinary innovators are what I'd call "think outside the box" people. Or, as one person I know recently described himself and the way he runs his business compared to many of his competitors: a contrarian. Whatever your word choice, in my experience innovators have some serious personal drive and are less concerned with following the establishment than non-innovators are - academic or corporate. At the same time, they have to understand the establishment and be able to function successfully within it.

What is bothering me is an idea I am kicking around about the future of interdisciplinary computing in academia. I only just started reading The Innovator's Dilemma and am already wondering what it will take for academia not to fall prey to the problem outlined in this book. Sure, the book discusses the corporate world, but the idea that innovators can get so good at implementing their vision that they completely miss, or refuse to see, the next wave until it rolls over them and they drown is completely relevant. It takes a long time to get a new academic program up and off the ground. When you have a great idea that may not be welcomed with open arms by the institution, you may have to develop tunnel vision to keep afloat.

All that work, all that expended political capital, all that time and sacrifice. Eventually an amazing interdisciplinary program is created and virtually everyone finally acknowledges just how great it is. Then the pressure, direct or indirect, self imposed or external: just keep doing what you are doing. Don't make any radical changes to what is working.

That terrible phrase "if it ain't broke don't fix it". A path to stagnation.

It was spotting the need for a radical change and making change happen that led to success. More radical changes will be needed in the future. The advance of computing technology makes it inevitable. Who will be watching for the next wave, the next incarnation of interdisciplinary computing while the initial creators are keeping it all together and running smoothly?

This concerns me. Innovators are by definition not in the majority. Interdisciplinary computing in academia requires insight, innovation, passion and dedication. Given competing demands and extended implementation times, there is a real possibility of running out of innovators. How does interdisciplinary computing development maintain an influx of new insight, new eyes, new energy? We need it, if for no other reason than to keep the original innovators from burning out or simply getting too tied up in day to day planning to catch that next wave.

Friday, June 3, 2011

Pattern Matching and Information Discovery in Professional Journalism

My day might have been called Tangled in Twitter. It morphed into a recursion that spiraled inward, but then morphed again, this time into clusters and patterns that caused seemingly unrelated events to make a lot of sense. In some ways a typical day, but in the end, not really. Jonathan Stray, a journalist at the Associated Press (see my last post if you want a full introduction), is part of a team working to use search engine technology to cluster and categorize "big nasty document sets" such that information emerges that would probably otherwise never have been found. When you are dealing with millions of data points, you could use some algorithmic help.
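Not being privy to the AP team's actual tooling, here is a minimal sketch of the general idea, under my own assumptions (Python with scikit-learn, four made-up one-line "documents", and an arbitrary cluster count): turn each document into a weighted word-frequency vector, then let a clustering algorithm group the similar ones.

```python
# A minimal sketch (not Jonathan's actual pipeline) of clustering a document set.
# The documents, the cluster count, and the library choice are my own assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = [
    "casualty report filed near the northern checkpoint",
    "supply convoy delayed by road conditions",
    "clinic reports a spike in civilian injuries",
    "convoy rerouted after bridge closure",
]

# Turn each document into a weighted word-frequency (TF-IDF) vector.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(documents)

# Group similar vectors; k=2 is an arbitrary choice for this sketch.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

for doc, label in zip(documents, kmeans.labels_):
    print(label, doc)
```

Real systems work on millions of much longer documents with far richer features, but the shape of the pipeline - vectorize, cluster, then read what fell out together - is the same.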

Tonight a light bulb went off in my head about the important social potential of computationally driven pattern matching when applied to enormous linguistic data sets. Without my own almost overwhelming set of seemingly unrelated activities today, I don't know if I would have made the connection quite so solidly. So I'll fill you in. I'll also point out the globally significant ways computing is starting to be used in Journalism and where Artificial Intelligence could be used in the future if people like Jonathan keep doing what they are doing.

It all starts with data points. Lots and lots and lots of data that initially seem unrelated. My day's data points included: a morning Skype call that left my brain a bit sore; literally minutes later, before I could even make it 5 feet to the caffeine, an unplanned Skype call from a colleague who wanted to discuss project paperwork issues (groan); a tear up and down the freeway to run an important errand; within seconds of walking in the door, a request for another unscheduled Skype call to discuss, among other things, "bandwidth issues" (in retrospect I find this really amusing); a round of phone calls to a clinic about a topic I have been trying to make sense of for 6 weeks; the next unscheduled Skype call; at one point, getting annoyed at Twitter for being dense and impenetrable when I least wanted it to be; and woven around all of this, getting lost in journalism-related website after website, trying to figure out where all the behind-the-scenes computing technology was located, what it was doing and how it was constructed (that was fun). Last but not least, this evening I had yet another mind-stretching Skype call, this time to Africa, so part of the day was spent on logistical planning for that.

The cool moment, when the patterns of my day fell into place, came in the evening after I bailed for a while, went to a yoga class and worked on getting my legs around behind my head (very non-cognitive, thus freeing the mind up to become receptive to new things). I came home and listened to a recording of Jonathan giving a talk about the infinite number of ways in which documents (with all their text data) can be arranged; he reminded his audience that the algorithm we choose for any analysis is based upon preconceptions we hold about the end results; those preconceptions impose a framework which in turn affects the results. Stop and think about that for a few minutes.

[pause...]
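To make that point concrete, here is a tiny follow-on to the earlier sketch (same made-up documents and scikit-learn assumption - this is my own illustration, not anything from Jonathan's talk): the analyst has to decide up front how many groups the documents "should" fall into, and that single preconception changes which events end up reported together.

```python
# A small illustration of the point above: the "preconception" baked into the
# analysis - here, how many clusters we expect - changes what the results say.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = [
    "casualty report filed near the northern checkpoint",
    "supply convoy delayed by road conditions",
    "clinic reports a spike in civilian injuries",
    "convoy rerouted after bridge closure",
]
X = TfidfVectorizer(stop_words="english").fit_transform(documents)

# Run the same clustering under two different assumptions about structure.
for k in (2, 3):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(f"assuming {k} clusters -> groupings: {list(labels)}")
```

Same data, same algorithm family, different framework imposed - and a different story falls out of the documents.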

The group Jonathan works with isn't concerned about my preconceptions, personal bandwidth or discoveries about how I allocate my time, who I choose to allocate it to, and what communication methods I use. Yet thinking about the personal internal "algorithms"  I use to structure my actions and make my choices, as well as what I bring to that analysis, led to a mental reorganization of my day. The light bulb turned fully on after I listened to Jonathan's talk (filled with absolutely nifty visuals of course) about mining information from Iraq and Afghanistan war logs for previously unknown patterns of casualties - and other information, really, you just have to watch the video - AND after I thought about the conversation we had a few days ago about the potential of Artificial Intelligence to aid the process of rapid discovery and dissemination of information to the public.

Jonathan is active in the machine learning and semantic web communities. Where he finds the time to read all the reports he reads, I don't know, but he follows the latest advances from academia, industry and the government, including DARPA reports (which, if you have read any official government reports, you know are sometimes tortuous). He follows Twitter feeds, open publications by the intelligence community, and reports and advances in the fields of law and finance. Well, I guess the ability to suck up and absorb information like an industrial vacuum cleaner is part of what makes a successful journalist. But it makes even more sense to me now why a computer scientist/journalist would see the enormous potential in harnessing AI to mine for information, scrape all the social media outlets, suck up data in real time and dynamically transform it into useful public information.

This is what Jonathan wants to do more of in Journalism: get those tech-savvy journalists and set them to work analyzing the gobs and gobs (my word choice) of data out there that has been (and is being) collected - data that is only going to increase exponentially. And why not? Suddenly this whole idea of "computational journalism", which two months ago seemed a puzzling term, makes a whole lot of sense. As I see it, incorporating AI into document analysis is a logical, practical and viable way to go. For example, what do you think an artificial neural network might make of some of these data sets?


The video you must watch that shows clustering at work on big nasty document sets (and explains how it works too).