
Monday, August 12, 2013

ICER Day 1: An Unexpected Foray into Learning & Design

A Layout

Synchronicity can be a freaky and wonderful thing. Within the first two hours of the ICER conference today I realized that I had not left Design behind (see my last several posts), and that a book I am currently reading (and will eventually post about) on egotistical versus empathic people in the corporate world was directly relevant to pedagogy and computing education.

The keynote speaker today was Scott Klemmer, who, in spite of what that link says, has just moved here to UC San Diego, apparently split between the Cognitive Science and Computer Science departments. He was originally trained as a designer, and it showed as he took us on an interesting adventure around UI design from the back end - code, in other words. He started out with the statement "View Source is a great example of UI design". That got my attention - he then spent the next hour talking about the UI (UX) aspects of something users never see (code) but computer scientists see day in and day out. How very, very cool.

In contrast to the industry-oriented conference I reported on a few weeks ago, ICER is an academic conference. Thus, Scott's talk was loaded with wonderful theoretical backup, explanations, references and citations, thought-provoking ideas, and suggestions that researchers love to roll around in. It was great.

So when Scott said "intuition is a difficult thing to teach students", it was the opening volley into a wide-ranging but highly focused and well-supported discussion about using examples to aid learning, to generate creative design, to generate effective design, and to arrive at quality prototypes while building group (i.e. team) rapport. One theme: sharing and adapting examples is good technically and good for learning purposes - but in a much deeper way than you might think. The discussion then rolled on into the role of peer assessment in design and how it can be used in software tools. Drool.

One of the interesting areas Scott has worked on is researching single vs. parallel design development. We know (the "research We", for those not used to the lingo) that people become ego-attached to their own ideas and are loath to let go of them, sometimes even to their own detriment. This is called functional fixedness and was written about back in 1945 by Karl Duncker. But we are learning that this ego attachment is reduced significantly if people develop multiple ideas in parallel. It becomes much easier to hear critique and drop one idea for another if you have several ideas to compare and contrast, as opposed to having only one idea whose success or failure you are completely invested in.

Lots of very practical application along with some wonderful theory - established theory and theory being built. Some of what Scott spoke about was targeted at formal pedagogy, but I can easily see it adapted without too much effort to an industry setting. For example, he spent some time on how to incorporate self-assessment and peer assessment into the development of designs and prototypes in the computing classroom. One of the big challenges, as educators are all too painfully aware, is that novices aren't always very good at providing useful feedback. Even when the faculty supplies an assessment rubric as a guide, all sorts of issues arise: if the rubric is too specific, it can stifle creativity and lead students to just check off the boxes without any deep thinking; yet if it is too abstract, students may have no idea what to do with it and either flail or provide random, not very helpful, feedback.

Here comes the useful research note that provides insight into not only the classroom, but the business world too (I refuse to say "the real world", because schooling is real; wherever you are at any moment in time is your real life).

Novices have a very hard time with abstract rubrics. The question becomes: how do you operationalize a rubric for the particular level of students you are working with (CS1, CS2, an upper-division course)? We have research about how novices become experts and how, along the way, they gradually become more able to deal with abstraction. Thus, the rubric that works for a sophomore-level software engineering class is probably not the same one you'd use for a senior-level software engineering class, a newly hired software engineer, or a seasoned developer.

One technique, which Scott is working with, is to create the rubric so that up to a certain point it is fairly concrete (i.e. you can get 90% of the possible points) and after that it is more abstract (you want an A? Then you have to go above and beyond). What he didn't discuss, but where my thoughts were going, was that you could take the same (or a similar) rubric and start shifting that abstraction point downwards as you work with more advanced students or professionals. A rough sketch of the idea appears below.
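
To make that concrete, here is a minimal sketch in Python - my own illustration, not anything Scott presented - in which a single parameter controls how many of a rubric's points come from concrete, checkable criteria versus abstract, holistic ones. The criteria and weights are invented for the example.

    from dataclasses import dataclass

    @dataclass
    class Criterion:
        text: str
        points: int
        concrete: bool  # True: objectively checkable; False: holistic judgment

    def build_rubric(abstract_fraction: float, total: int = 100) -> list[Criterion]:
        """Split `total` points between concrete and abstract criteria.

        abstract_fraction = 0.10 mirrors the 90%-concrete rubric described
        above; raising it shifts the abstraction point downward.
        """
        abstract_pts = round(total * abstract_fraction)
        return [
            Criterion("prototype runs; all required features present",
                      total - abstract_pts, concrete=True),
            Criterion("design goes meaningfully above and beyond the spec",
                      abstract_pts, concrete=False),
        ]

    # The 90%-concrete rubric vs. a hypothetical senior/professional version:
    for level, frac in [("CS1", 0.10), ("senior/professional", 0.40)]:
        for c in build_rubric(frac):
            kind = "concrete" if c.concrete else "abstract"
            print(f"{level:20s} {kind:8s} {c.points:3d}  {c.text}")

The point of the parameter is exactly the shift I was imagining: as learners mature, more of the grade rides on open-ended judgment and less on checkable boxes.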

Fascinating idea to consider. Fascinating idea to strategize, design, prototype, implement.

Monday, March 25, 2013

Suggestions to a School Considering External STEM

Prepackaged All-In-One Brain Food
I had an opportunity recently to do a little investigation into an organization that is selling its system of STEM learning to K-12 school systems. A friend of mine is the Principal of a K-8 public school that is looking at ways to beef up its technology offerings and STEM in general. I had a meeting with my friend and another of the school administrators to discuss their needs and to hear about a company that has been recommended to them.

I did my due diligence prior to the meeting and reviewed the Common Core State Standards, the Next Generation Science Standards (NGSS), and of course, the CSTA recommendations for computing in K-8. I also reviewed PARCC, and existing state standards in their state.

For those of you not intimately familiar with all of the above, here are a few items to set the context: The Common Core is sweeping the nation; most states are on board and at some stage of planning and implementation. The Common Core covers English/Language Arts (ELA) and Math. Science is not included, and technology merits a mere mention under "Media and Technology" where it is suggested that technology be integrated throughout as a support tool for other subjects. The NGSS covers science, but explicitly omits computing, suggesting that it belongs under math (for more details see my post on the K-12 Computing Hot Potato).

It was painful to look at the STEM guidelines web pages of my friend's state department of education. Practically nothing was there; sparse would be putting it mildly, especially compared to the finely detailed descriptions for other areas of study. Oddly enough, for Primary School the STEM listings indicate STEM as Physical Science, Math and Art (Art?). In Middle School it is the same, with the addition of some mention of technology under Career Technical Education (CTE for short, and commonly known as vocational education).

Now it became clear to me why a slickly developed set of web pages held such appeal, promising to provide a full STEM curriculum complete with detailed learning goals, class lectures, exercises, assessments, materials, and professional teacher training and support.

The selling organization knows how to do its PR. They use all the right lingo. I was intrigued. I was curious. Of course none of this comes free; there is a hefty price tag. Not necessarily unreasonable - after all, someone put in a lot of work developing this educational system. I sure wish we paid our teachers commensurately with the work they do! I decided to investigate further.

Once I got past the glossy print and graphics, I realized there were very few substantive details. Transparency was not the goal here. Not in the least deterred, I donned full archeological gear and went website digging. This was fun.

[dig...dig...dig...]

If you know what you are looking for you can glean some interesting things from such digging. You read the bios of the leadership team, and the board of directors. What is their educational background, professional education career experience, non-academic experience, affiliations, interests? What is the legal status of the organization? Who is likely to benefit from sales of this system? It's often all there if you look hard enough, in this case about 7 levels down the site tree.

What wasn't there: any examples of syllabi, curricula, assessments, or detailed learning goals. One has to buy their system, and ... here is the kicker: you have to commit to teaching their two base classes.

In the spirit of being open minded, I concede that the company wouldn't want to give away their methodology enough that someone else could scoop them.

It just felt a bit weird, considering all the brouhaha about the need for transparency and accountability in public education.

All of which leads to fascinating speculation about the role and potential contribution from corporate education. Any given system and materials might be of high quality. On the other hand they might not.

What is a school, feeling perhaps left in the dust by the less than transparent maneuvers of their various state agencies and politicians, to do?

After sharing all the interesting specifics I learned about the company in question with my friend and her colleague, I recommended a few things:

- Ask the organization to supply their curricula for examination. The school could offer to sign a non-disclosure agreement (NDA). This happens all the time in industry, so although perhaps strange to a small school, it shouldn't come as a novel idea to the educational company. There is no reason I can think of why this offer shouldn't result in a "yes".

- If that fails to produce the desired result, and you are still determined to check out this curriculum, track down some current users and interview them. In fact, I'd recommend doing that anyway. The organization should be more than willing to supply names as references. Ask those adopter schools the questions that matter to your school and see what they have to say. Also, ask them to share some of their curricular materials with you. Again, this is common, smart strategy in the business world - the same goes when interviewing for a job! Check out the company before you commit!

- One last thing. If, as in this case, the company requires you to commit to using a significant portion of their system, thus locking you in, ask to what extent the required curricular materials can be customized to fit your local situation. You don't want to be forced to teach in a way that conflicts with your mission, culture, or students in any substantive way.

Wouldn't it be interesting to do an in-depth comparison of industrial K-8 educational models with traditional public education models? 


Monday, September 5, 2011

How To (and How Not To) Hear What Someone is Telling You

A few days ago I was standing at a store counter and fell into conversation with a programmer who was feeling on top of the world and wanted to talk about his accomplishments. I had never met this guy. When he asked what I do, I decided to share how much I enjoy working with people and discovering what they want, need, or perhaps are stuck on with regard to computing and technology. These discoveries can lead to innovative, or more importantly, truly helpful solutions.

I had barely uttered a sentence when the guy enthusiastically told me that he "does that all the time" and launched into a monologue about how he conducts the entire software engineering life cycle and, in particular, knows all about requirements gathering and specification development. I could hardly get a word in edgewise, but tried to tell him that I was talking about something more holistic than "reqs and specs" - I was talking about a bigger picture that involves much deeper work. No such luck - he repeated that yes, yes, he worked with customers regularly. I have no doubt he was sincere in his belief that he interacts effectively with users.

Nonetheless, as he continued to talk a mile a minute it dawned on me that here was a perfect example of missing the boat. His overly assertive interaction style caused me to rapidly lose interest in the conversation and revert to a surface-level discussion - after all, it was clear he wasn't really hearing me. He was completely unaware of the dampening effect he was having on his listener. Unless he undergoes a personality change at work, he probably interacts similarly with customers and clients. Will the code he develops be well targeted to address their unique needs? Will it be innovative in a way that is meaningful to them?

If you want to really understand a customer, a client, a student (current or potential) you have to stop talking and listen with more than your ears. This is sometimes called "active listening". A contradiction? Not at all - you can be incredibly active without moving, without talking, without forming premature conclusions or judgements. Active listening is how you find out what is really going on - critical subtext, or something that takes a while to rise to the surface, shows up through words, body language, intonations, and many other clues.

Active listening is a way of being - it is not something you turn on or off "when needed". Interesting and important information doesn't appear on cue or when you will it. Human psychology doesn't work like that. There are many skills and techniques you can develop to facilitate meaningful insights, but the moment you stop listening actively something critical will fly by and you'll never know it.

Test out active listening with a small experiment. The next time you interact with someone you don't know, such as the guy behind the deli counter, your hairdresser, or the woman whose dog wants to play with your dog at the park:

1. Talk less - let them do more of the talking. You don't have to pull a silent routine, but subtly encourage them to talk while holding back on directing the conversation as much as you can.

2. Resist coming to premature conclusions or drifting off on mental tangents ("What a boring job...why don't they use more modern cash registers?...I wonder how much money s/he makes?...did I remember to lock the car?...do those receipts really contain dangerous chemicals on them?"). Just take it all in.

3. Listen with more than your ears. Be alert to any little voice inside that wants to point something out to you about this person's motivations and perspective. Register the information for later and keep listening.

What do you hear when you actively listen? Based upon your interaction, what have you learned about this person's perceptions and what is important to them?

Sunday, July 24, 2011

A Tale of Two Valuations: Academia Next

I initially thought this post would be a piece of cake compared to the previous post about the corporate perspective. Not so. Perhaps because of my extensive academic experience I am far more aware of the variance of how professionals are valued in the world of higher education. So I have been looking for points of common ground within academia...I am almost afraid someone will throw a rotten tomato at me because I know too much (I've been reading a murder mystery, where the person who knows too much often comes to a messy end).

To state the obvious (to academics at least) there are the three classic areas of valuation for faculty: Teaching, Research and Service. Service (pretty much anything not teaching or research related such as committee work, performing outreach or being a student adviser) can safely be said to come last in the pile. Do no committee work and you will get dinged; do too much and you will get dinged (I know someone who was denied tenure because he was told he performed too much service work). So the trick is to find the middle ground according to the culture of your department and institution.

From there it gets murkier.

Teaching: In some institutions this is virtually all that counts. But how it is measured varies widely. In some cases, it is all about teaching evaluations. Period. Get those numbers up and get them high, or else. The pressure can be intense, and in extreme examples there can be a completely predictable desperation to "please" students above all else. In my experience, this is not the norm, and it is incredibly destructive to the learning process. More often, in a teaching-oriented institution, evaluations are important, but only one indicator of how a faculty member is evaluated. More sophisticated methods of assessing effective learning are used. And by effective I'm not talking statistical evaluation; I'm talking qualitative evaluations. That is healthy, imo. Institutions with a well-rounded process for evaluating teaching can produce amazing students who go on to do amazing things inside and outside the classroom and after graduation. And the faculty feel professionally successful and appreciated.

Research: In some institutions this is virtually all that counts. Again, the worst-case scenario is where not only are publications counted (literally) but the venues for publication are ranked. If you don't get into "the top" pubs, forget it. You are toast. Even within computing, there are disagreements as to what counts as a quality publication venue. Then there are grants. Worst-case scenario: you need a fixed number of grants and big dollars - millions would be nice. Not healthy, as there is only so much money to go around, and that sets up a system of guaranteed winners and losers regardless of quality. Very much like using a normalization curve in grading - something I never used and never will, because it has all sorts of negative side effects that educators are well aware of, so I won't repeat them here.

Quite a bit of gloom up in those paragraphs. Now to inject a positive perspective. BALANCE. It is all about balance. The original idea behind Teaching, Research and Service was to promote balance. Some of all three are needed from every faculty member. Many institutions, although they do by design weight teaching and research differently, maintain a healthy balance. What are the factors that indicate successful teaching in such situations? Each professional is evaluated in the context of both institutional need and known pedagogical / cognitive learning understandings. Other factors come into play to varying degrees depending upon the context: local needs, student needs, etc. What are the factors that indicate successful research? Very similar, actually. The institutional needs and established understandings of rigor in scientific research lead to an evaluation of individual contributions (a reminder here that we are talking computing and related areas; I can't speak to areas such as the arts and humanities). Grants in these institutions aren't just about how much money is brought in, but about the effect the work is likely to have on science - or, in the case of educational research such as computer science education research, on the discovery and dissemination of improved teaching and learning theory and application.

It feels like I've short circuited my comments, but that comes from knowing too much.

A summation might be: for faculty in academia, professional valuation is based on teaching, research, and service, and in a healthy environment these are kept in a contextually appropriate balance. Value is not just numbers, nor is it vague and undefinable.

I feel like I'm stating the obvious, but that is only the case if you are an academic reading this. Based upon some of the comments I received to my last post about valuation in the corporate world, there are many to whom this post will be news.

One of my next tasks will be to see where I can locate opportunities for common ground.

But first, I'll ask you to graciously do what you did before, and provide your perspective on:

1) What can you add about how academics in higher education are judged to provide value to their organizations? What can you add that is concrete - i.e., can be said in very concise form?

2) Where do you see common ground between corporate and academic valuation of professional contribution to their organizations?

(I think we have all heard about the areas where there is supposedly no common ground. Let's look for the positive now).

Monday, March 28, 2011

Computing and a Toothbrush?

I was hunting for something quite different (and far more serious) when I found this unusual toothbrush, and it was just too intriguing to pass up. There exists this computer scientist and inventor who got into toothbrushes (among other things). I just had to read on. The CS inventor, Richard Trocino, lives in Austin, Texas, one of my favorite cities and a former home. That clinched it. My textbook writing project was temporarily diverted.

This toothbrush is pretty slick if you are into gadgets. A gift for the person who has it all (make a note on your holiday or birthday list). Called the OHSO, it is a refillable toothbrush with an oxygen intrusion prevention setup so that the toothpaste will never gum up (so go the marketing materials). You can put your favorite toothpaste in - I wonder if that includes a homeopathic paste made from baking soda and water. "Suction technology" makes it easy to fill. That makes me want to see just how much sucking is done and what else could get sucked in.

It self-dispenses in different ways depending upon whether you twist the little knob or tilt it a certain way. There is a little window where you can peer in at the stuff to see how much paste is left. No leaking, airtight all around and with replaceable parts. I am dying to play with one of these.

I have read everything I can find and I don't see where computing technology comes into the OHSO. Unless there is a little microchip hidden in there somewhere. But you'd think they'd advertise it. I would like to take it apart and find out.

Maybe the computer science only comes in via the fact that they don't do formal advertising but rely on word of mouth and social media to generate sales.

Is this a socially useful device? There are testimonials on the site from business travelers to active-duty military personnel and everyone in between swearing how much they love this toothbrush. Clean teeth and the prevention of cavities and receding gum lines are definitely good things. Keeping one's teeth is a good thing. There is something perhaps "green" in a toothbrush that might last for years. I can't tell if it is made primarily from plastic or metal; that would add to or subtract from the beneficial environmental aspects. But you have to be the judge on this one.

What I really want is to approach it as a technological device and test it in every way possible, including taking it apart and putting it back together. I know I could put my research design skills to work on coming up with some very creative experiments.

Hopefully I would have better luck than the time back in my 20s when, out of curiosity, I took the front passenger door off my 1969 Dodge Dart and couldn't put it back on (the door was too heavy - those wonderful cars were tanks). I drove 20+ miles down the highway at full highway speed without a door in order to find another pair of arms. But a toothbrush is a far cry from a solid steel car door. What could possibly go wrong? There is that suction aspect to consider, I suppose. Unlike a garbage disposal, however, I wouldn't have to stick my hand into a dark place full of nasty sharp blades.

But really, the scientist in me hears about something this unusual and drives me to want to take an OHSO toothbrush and put it through some serious paces. Then I can find out or infer if computing plays a role in the device and if the device is socially beneficial.

If someone will provide me an OHSO I promise to take it traveling on business, into the mountains on retreat, to the dentist (just have to see what she'd say about this) and in fact I'd stick it in my pocket, take it everywhere, and brush at random intervals wherever I happened to be. I might even alternate between randomness and statistically planned brushing events.

In between taking it apart and putting it back together.

I'll devote the full range of my assessment and evaluation experience to the task.

How about it OHSO - want to send me one? Consider it free product testing.

Friday, January 28, 2011

Questions from an Industry Point of View

Yesterday I had another conversation with an industry acquaintance about interdisciplinary computing. Once again the conversation drained my latte but left my head full of interesting ideas. Part of our conversation was intentionally oriented towards what a large corporation might want to know about interdisciplinary computing initiatives. It is always refreshing to look at things from new perspectives and ask the tough questions. I'd like to share some of the questions that arose and my initial thoughts on them.

For fun, I'm going to pose them in the form of a fictional conversation between myself and a CEO. I'll call her CEO C (for computing :) . This is *not* a report of an actual conversation with a CEO.

What might CEO C want to know?

CEO C asks: Having had discussions of the benefits and challenges of interdisciplinary computer science initiatives (see Lisa's posts going back to Jan 7th in particular), how can you then create action? Cynical CEO C says she has seen many great ideas that don't make it past the idea or prototype stage.

My Thoughts: There are examples of success out there (see my post on the 10 year track record of an IC program in Missouri). We need to ferret out as many examples of success as we can find and look carefully at what made them succeed. The people involved in the Missouri program have a wealth of information to share. There are other programs out there in academia - we need to do our research and look for patterns. We also need to nail down what did not work and why. We don't need to stick to Computer Science. We can learn a lot from other disciplines that have a record of interdisciplinary integration. Two areas come to mind. The Physics Education community has done an excellent job of gaining acceptance and respect for education research from within Physics departments. Math Education has also done well. These are examples of interdisciplinary success of a particular type - we can learn from them how they got there.

Bottom Line: We do not have to reinvent all the wheels. We can learn from others.

CEO C then asks: How will you get measurable results?

My Thoughts: This sounds like a classic Goals, Outcomes, Measurables discussion that needs to take place. Given that this is an area I consult in, I could go on for a long time about the topic. But I'll keep it brief and just say that if the time is put in up front to develop these items clearly and concretely and in the proper order, understanding what each term means, then we will be in a position to obtain those measurables.

CEO C then asks: Ok, are you gaining a better workforce out of these initiatives? Global competitiveness is very important.

My Thoughts: I'm sure that others will have excellent ideas to add, but my thoughts on this are that one way to answer the question is to build it into the development of Goals/Outcomes/Measurables. Ask the right questions, develop the right assessment mechanisms and follow up longitudinally. Include industry deeply in the conversation.

CEO C says as a comment: If you accomplish all of the above, we will spur the economy and overall economic growth.

I respond: Yes :)

Wednesday, December 1, 2010

Alice -> Excel in the APCS Principles Pilot

Midterm follow-up, but I'm afraid there isn't anything earth-shatteringly new to add. As Beth Simon had promised, the (34) questions on the midterm looked very much like the in-class quiz questions. If you have been following the podcasts, then you know exactly what those questions look like. There were exam questions on virtually all the major topics covered in the course.

After starting to tally the exam questions by topic, I decided that this effort probably wasn't going to help those of you who are still interested in details about the excellent midterm results. I believe that the varied analyses of the project that will take place after the course concludes will produce more engaging information than a list of topic frequency counts.

It is more interesting to briefly discuss what happened in the class sessions in which Beth discussed Excel. As mentioned in a previous post, she went to great lengths to make a smooth transition between Alice and Excel, showing the relationships between two seemingly disparate programs. Alice is an animation-oriented programming system and Excel is a spreadsheet program, albeit a now quite sophisticated piece of software. However, in working through complex concepts such as relative vs. absolute addressing (often a tricky distinction for learners), Beth demonstrated through example how an Alice exercise (such as a singing group called The Beetles - no, that is not a typo; they were insects) had underlying code similar to formulas the students could create in Excel. She discussed similarities and differences in concepts such as loops and conditionals, all very creatively. (A small sketch of the addressing distinction follows.)
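
For readers who have not wrestled with the distinction, here is a small Python sketch - mine, not from the course - of what Excel-style "fill down" does to the two kinds of references. The formula text is invented for illustration.

    def fill_down(start_row: int, end_row: int) -> None:
        """Show how the formula '=A<row>*$B$1' rewrites as it is copied down."""
        for row in range(start_row, end_row + 1):
            # The relative reference (A2, A3, ...) shifts with each row;
            # the absolute reference ($B$1) stays pinned to one cell.
            print(f"row {row}: =A{row}*$B$1")

    fill_down(2, 5)
    # row 2: =A2*$B$1
    # row 3: =A3*$B$1
    # row 4: =A4*$B$1
    # row 5: =A5*$B$1

The relative part tracks whatever row the copy lands on, while the absolute part keeps pointing at one fixed cell - the same distinction Beth was mapping back onto the underlying Alice code.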

This type of teaching supports what the learning literature describes as Transfer - the ability of a learner to successfully apply understanding from one learning experience to another and to rapidly learn new related information. Observing the class it appeared that transfer was likely occurring because students asked intelligent and thoughtful questions about the Excel exercises and remained engaged throughout the lectures.

This is also a good time to note that the course has been supporting two other types of well-researched learning that I have not specifically referred to previously: problem-based learning and collaborative learning. If you are not familiar with the research in these areas, an excellent read is How People Learn, put out by the National Academies Press. It has become a classic and very accessible reference on learning in educational settings. Well worth the read.

Friday, November 26, 2010

Visiting Scholar Quintin Cutts & the APCS Principles pilot (part 1)

For those of us in the States who, as a result of the Thanksgiving holiday yesterday, may feel we do not need any caloric intake for the next week, exercise is a good thing. Most of my exercise today came in the form of talking (wondering how many calories are burned by vocalization - anyone studied that?) to Quintin Cutts, a Senior Lecturer in the School of Computing Science at the University of Glasgow. Quintin is here in San Diego on sabbatical to work with Beth Simon on the APCS Principles pilot class. Quintin, showing good form, rode his bike to our meeting, facing a daunting straight up hill on the way back. So extra points for Quintin on this post-holiday day. For agreeing to meet with me, and for exercising at the same time.

Choosing to take his sabbatical here made perfect sense. Quintin has been teaching computing with Peer Instruction and clickers himself for some time, and has just finished 4+ years work on a large grant-funded outreach project to the pre-college schools in Scotland. Quintin has extensive experience with project development and curricular issues that bridge university and pre-college computing, and he has taught classes for both non computing and mixed majors for nearly 15 years. He is also involved with revisions to Scotland's computing curriculum (those of us in the States might sigh wistfully to learn that there is a national computing curriculum in Scotland and all secondary schools teach computing).

Asked about his interests and how they relate to working on this particular class, Quintin didn't pause for more than a second before saying: a focus on finding blockages in learning; getting feedback to students as quickly as possible; keeping students engaged. These three items are clearly present in the structure of the class:

One of the pedagogical approaches Quintin has used in the past and which he helped implement in his collaboration with Beth Simon, he refers to as "turning the teaching model on its head".  In a common form of the "traditional model" the student does the "easy" part first: attend lecture. Not to imply that the material in a traditional lecture is necessarily easy - not at all. Content may be seriously complex and challenging from both the student and lecturer point of view. What is meant by "easy" is that in many lecture situations, the typical student is passive. She or he listens, takes notes, perhaps asks a question, perhaps not.

Later comes the "hard" part, where the student goes off and works by themselves on homework, lab assignments etc. Blockages in understanding may become show stoppers and timely helpful feedback difficult to obtain.

This class implements a different approach. Students do a homework assignment *before* attending lecture. The homework is a prerequisite for success in class, and the assignment is part of their grade - hence a two-fold motivation to take it seriously. When students arrive in class Beth engages with them, and they with each other, participating in "hard stuff" with the presence and support of the instructor, the teaching assistants, and their peers. These activities include the interactive quizzes, group discussions, and other activities that keep them sitting up and engaged (ok, good posture is neither always present nor required). With that experience and immediate feedback about their understanding under their belts, the students do additional reinforcement activities in their labs.

Quintin and Beth are gathering real-time data about this new process to monitor how well it is working. In addition to the typical measures of scored assignments and exams, the pair are regularly collecting feedback sheets from students. They are also audio recording (with student knowledge) class discussions, in order to understand what is happening in all those group conversations about the clicker quiz questions. There are 4 audio recorders circulating through the class during each lecture period.

When asked how he felt the course objectives were going so far, Quintin was quite optimistic, citing many of the same items mentioned in this past Tuesday's post: the increasing scores on clicker quiz questions, the midterm scores, and the sense he has from various other sources that students are picking up on complex concepts. He is looking forward to digging into and further analyzing the data as it comes in.

Part 2 of our talk to come....

Tuesday, November 23, 2010

The APCS Principles pilot flies into the last few weeks

The APCS Principles pilot class is moving right along into its final weeks. Several people have been asking how the midterm went - so consider this "part 1" of a response to you.

The midterm grades came back and they were quite good. The average score on the midterm was 86%, even though there is evidence that the exam was not particularly easy. In one of my next posts I hope to have some information about the topics and their frequency on the test to bolster this statement and provide additional information. For now I am able to report that of the well over 500 students in the class, only 4 got a perfect score and approximately 50 scored over 95%. I do not have a statistical standard deviation at this time, but have been told that there was a tight clustering of grades, with almost no students falling into the traditional "failure" range.

In recent weeks Beth Simon has added a new tactic to her exam preparation techniques. After a quiz question and discussion of the correct and incorrect answers she often says: "Write down what you need to remember about this question - for the final exam". She pauses while they do so. In other words, she is helping them prepare by progressively building a study sheet each day during lecture. In addition, to bolster the quality of what goes on these study sheets, part of the class and quiz discussions include asking students to "debug" their choices - to form the habit of mind of learning to problem-solve when dealing with a computer, whether it is code or a different type of computing problem.

Students are now well into working on their final project, which will be presented late next week. There will be a contest in which students will get to vote for best projects. The projects should be interesting as there is room for significant student creativity. More on that when it happens.

Finally, and this will also be the subject of a fuller posting, the class has shifted into using Excel. Rather than having a clean break, Beth is making connections between "coding" that students can do in Excel and what they learned in Alice programming. More on that to come as well.

Ok, that wasn't really "finally". There will also be a report soon about a conversation with another visiting scholar. That is truly finally the last item in this post!

Wednesday, October 20, 2010

Mini-monsoon does not stop attendance at APCS Principles Class; Podcasts available too

For those of you still curious about details of the content of the APCS Principles pilot course, I am very pleased to be able to point you to where you can see and hear podcasts of every lecture. Go to the podcast site at UCSD and look under the course list for CSE3 - Fluency/Information Technology. You can click through to a listing of all of the course lectures and pick which one you want. You are able to hear instructor Beth Simon and see the slides she puts up, complete with her interactive note-taking on them during class. You can experience some of the interesting and innovative pedagogic techniques I have been discussing.  

If you have never listened to a podcast, there are instructions available explaining how to do it. Generally, class podcasts come down shortly after the course completes to make room for new course podcasts, so if these interest you, don't wait until January!

Meanwhile, back at the ranch. In two weeks the students will be taking their midterm. One of the pedagogic goals Beth has is to do everything possible to help students prepare for the midterm. She is thinking about, and has begun to implement, some strategies.

Beth has been encouraging students to utilize all the resources available to help them since the beginning of class, and yesterday she took things one step further. She used the technique of clicker questions to ask the students to click in about how they felt they were doing in the course - with the promise that no one was going to look at the individual responses. She asked them [brief paraphrase] "How are you doing?" and the choices were [also slightly paraphrased for brevity]:

a) Doing fine, I totally understand this stuff
b) Doing fine, I have had to work at it, but I'm pretty sure I get the concepts
c) Not sure I know the concepts as well as expected
d) Pretty lost
e) Have no idea

Note that grades had been posted a few days before so that students could see their personal cumulative progress from the point of view of their instructor. They were also able to see the course distribution and compare where they were in relation to the rest of the class. Armed with that information, they were now asked to report spontaneously how they felt they were doing.

The responses, as percentages, were as follows:
a) 8%
b) 36%
c) 39%
d) 10%
e) 7%

Though these percentages are opinions, influenced by various factors (confidence, self-expectations, whether they were paying attention to the question), they give everyone something to mull over.

Beth would like to see an additional 20% in the b range on this question. So she reinforced then and there the resources available to students - of which there are many. Many alive and breathing resources. Beth spoke more about the best ways to prepare for the exam - practically giving it away by telling them (not for the first time) that the exam would be based in great part on the interactive clicker questions they worked on during every class. These questions cover content, as well as design, debugging and abstract thinking skills. Students are able to return (via the podcasts and hardcopy access to slides) to the clicker questions and re-take them live, not just look at the question and answer. Clicker success rates on questions are going up, although Beth would like to see them consistently in the 90% range. Beth even said that she would be lonely in office hours if no one came - which created smiles across the lecture hall.

This brings up another point. All the evidence indicates that the students love coming to class. We are having very weird weather for October in San Diego. Normally at this time of year we are dealing with the famous Santa Ana winds and threat of wildfire. It rarely rains in October. Well, it has been raining. And raining. And raining. Yesterday there was a deluge reminiscent of living in the Pacific Northwest (I feel qualified to say that, having lived there). Umbrellas? Who has umbrellas here? Nonetheless, the lecture hall was filled - Beth can verify this numerically because she can see on her personal display how many students are in the lecture halls via the clicker registrations. Students came out for class and were as dynamic and engaged as ever (although damp).

While she works on the issue herself, Beth would also love to hear any creative advice (via this Blog) about what to tell students to do to help themselves prepare for the midterm.

Wednesday, September 22, 2010

APCS Principles Pilot: Ready to Go!

Day 1 of the pilot APCS Principles class is tomorrow and things are pretty much ready to go. This leaves instructor Beth Simon with a moment or two to reflect and wonder about something: just how are students in this unique and exciting class going to experience the class? In other words, where will the easy and tough spots be?

Here is another way Beth is thinking about this issue: there is a wide demographic of students in her class. Some may major in ... well, you name it, someone will surely major in it! The student body is very different from what one finds in a standard CS1 course. It is a very interdisciplinary group. What will the best resources turn out to be for these students? The key items to make this the best computing experience they have ever had? (It may also be their first experience, which would make a very nice double win for all.)

It is easy to keep thinking about these questions from different angles. Aside from many in-place methods of getting to know students, working with them, and encouraging their efforts, Beth is considering having a student conduct some random interviews with members of the class, especially right at the beginning of the term, to see what she can learn. Within a few weeks, the use of Peer Instruction and the clickers will start identifying some of the places to pay additional attention to, but meanwhile, holding some friendly low key interviews student to student seems like a good way to get the word on the ground.

Friday, September 17, 2010

How Do You Understand if Students Understand???

Wow. Class starts in six days and we are down to the wire on some hard questions for our APCS Principles pilot course. We have some very nice labs shaping up (we think. we hope). We have some very nice homework assignments shaping up (we think. we hope). We have some awesome lectures shaping up (we think and hope). But arrrrggggggggg. We will want to know, really want to know, if the students understand what they are doing! And we want to know it right away. Silly desire huh? Why would we care about THAT?

There are going to be some well tested uses of Peer Instruction in lecture, which will provide one mode of rapid feedback to us and them (and some other nifty lecture related in-class assessment activities). Good. Good. Good.

BUT....

But what about those labs and homeworks? Let's just talk labs, although a similar principle applies to homework. There was a loooooooong and painful** discussion today about how to assess the labs not just for a grade, but to really understand - and to help the students understand - whether they understand. Do they understand the "while loop" construct or not? Conditional expressions - understand or no? Parameters - understand?

Especially for the purpose of this pilot offering we want to find out what is going on cognitively - immediately. Not just after an exam or a week or so later, but while it is still fresh. And to transmit that in a formative way to the students. So a score or a check-off sheet may produce a grade (recall: 4 assistants per 40-46 students in a 2-hour lab), but we want more. In itself this is not new - it is always a pedagogical goal to have assignments not only produce a grade, but produce deep learning that both student and instructor can be aware of.

For our project data gathering purposes the goal is even more important. We are considering asking a set of questions as the last part of each lab that will serve as formative feedback and thought provocation for the student and give us some concrete info to pore over.

Set aside the large numbers of students for a moment. What exactly do we ask in this short list of questions? This is not the kind of question we ask our CS1 students. ("Do you really and truly and in a deep and meaningful way understand what you just did? Write an answer that thoroughly convinces us one way or the other please")

We are in uncharted territory. But we understand that.

** PAIN: definition (sense b) provided by the Merriam-Webster Online Dictionary: acute mental or emotional distress or suffering.

Tuesday, September 7, 2010

UCSD APCS Principles Pilot Course: Lectures

Following up on my previous post about the UCSD implementation of the pilot APCS Principles course under Beth Simon, here is an on the ground report into some of our most recent discussions about lecture development, and with that, more background material about the approach the course will take:

Lectures: As I reported, Beth will be delivering interactive, clicker-based Peer Instruction (she reminds me that I can refer the interested reader to additional information on the use of clickers). Beth is looking at different ways to integrate societal concerns into her lectures - this is going to be fun and challenging at the same time. You see, ideally, the lectures will include excellent examples of computing applications (among other possibilities) that are being used for good or ill (balance is desired) and that will plug into the course content on a given day. Beth has been doing some digging into previously published material. Another area that we may draw upon is the research I have been doing recently into computing-centric, socially beneficial "real world" projects. Alternately, we may look for inspiration to other schools' curricular implementation of projects in "Computers and Society" courses. Or... as you can see, the conversation is just beginning. What is the most fruitful way to smoothly integrate societally interesting (from the students' point of view) issues into the lecture material?

Speaking of the material, I should mention that our base applications will be Alice for about 2/3 of the term, followed by Excel for approximately the last 1/3. These applications were chosen after several years' worth of meetings (started prior to the APCS Principles project coming into existence) with the divisions that traditionally require this course for their students. Psychology specifically requested Excel for their majors, and representatives from a wide range of perspectives speaking for the freshmen decided that Alice should work nicely for the non-computing (so far :) first-years. Alice, smoothly transitioning into Excel, while following the ideals of the APCS Principles project. And don't forget those 750 students :)

Next posts will most likely continue with some discussion of labs and assessment development.

Friday, September 3, 2010

APCS Principles Course Development at UCSD

In an earlier post I discussed the new APCS Principles Course and the terrific opportunities it provides for integrating interdisciplinary subject matter and social issues with computing - aimed at a broad audience.  One of the pilots of the course is being conducted this year by Beth Simon at The University of California San Diego. I have agreed to work with Beth on this project. Although I will be collaborating wherever needed, my particular emphases are currently in two areas: lab development and evaluation / assessment.

This pilot poses some exciting challenges. First of all, Beth will be delivering twice weekly lectures to approximately 750 students. Yes, 750. The students will be from two very different audiences: one group will be upper division Psychology students and the other will be freshmen who may end up majoring in any area.  Most have not currently expressed a preference for computing (otherwise they would likely be enrolled in the CS1 course). All of the students are required to take this course. Students will be seated in three adjacent lecture halls. Using various pedagogical techniques that Beth has been refining over several years, including innovative use of clickers to create dynamic interchanges between student and instructor, this pilot will aim to demonstrate that the Principles course can be scaled to the largest of classroom audiences.

Among other things, Beth and I have discussed the need to integrate social and ethical issues into the course rather than take a typical and known to be ineffective approach of tacking them on somewhere such as the last week of class. This is an area where I will be heavily involved, starting with the labs I am developing. I intend to write a weekly update of our progress with this course. Although I will discuss anything interesting that comes up, I shall often focus on our progress with including interdisciplinary and societal issues within the course.