Grazing - a personal blog from Steve Ehrmann

Steve Ehrmann is an author, speaker, and consultant.

Sunday, December 16, 2012

Can studying hard, long, and thoughtfully make a student 'smarter?'

The issue of how student (and faculty) beliefs affect teaching, learning, and learning outcomes is intriguing, but not simple. I became interested initially in faculty's implicit theories of teaching (what Don Schon and Chris Argyris called "theories in use").

Studies by Kenneth Bain (What the Best College Teachers Do) and by Schneider, Klemp and Kastendiek, found that faculty widely regarded as excellent teachers also typically believed that almost any student was capable of excellence, if taught properly and if they invested themselves in an appropriate way.

More recently, I've learned about the work of Carol Dweck and her colleagues.  Do students believe:

  • I have specific, unchangeable talents and intelligence ("I'm smart"; "I'm not wired to be good at math."), or 
  • Intelligence, ability, and talent are like muscular strength: the more you practice (which is often painful), the stronger (smarter) you can become.  

Research seems particularly strong in support of the idea that, when students believe that arduous effort is rewarded with learning, they'll often invest themselves in the effort. On the other hand, when the students, or the faculty member, believe that a student's ability is fixed:
  • Some students who believe they lack the needed talent may see it as irrational to try hard;
  • Some students who believe that they are talented may slack off, because they think their ability will enable them to succeed easily. (I recall a study of student learning involving computer software: students with high confidence in their computer skills had more problems than students with only moderate confidence, because the highly confident students ignored manuals, briefings, etc.)
In sorting some of these ideas out, I found this article from the Chronicle of Higher Education to be quite helpful.

My next post is going to look more broadly at student ideas about learning that are counter-productive (e.g., "When the expert is lecturing, I'm learning the way I should. If I have to learn something by doing assignments in my room, the instructor has let me down.")

Monday, November 5, 2012

Faculty research, institutional economics, and the reward system

Some folks at research universities act as though research grants are a way for a university to make money.  Not exactly.

At universities such as George Washington, most revenue comes from student learning, either directly (tuition and fees) or indirectly (alumni gifts).

Doing research should help keep our educational programs fresh and at the cutting edge of their disciplines. (As Ernest Boyer pointed out, several kinds of research can do that. More on that below.)

Research costs the university money, even grant-funded research - it must almost always be subsidized with some of those teaching-derived dollars.

So any research university walks a tightrope:

  • Divert too many of those teaching-derived dollars to non-teaching uses, and we risk our reputation as a good teaching institution (and the income we get from that reputation);
  • Divert too few teaching-derived dollars to research and our teaching could stagnate, with perhaps the same result.
I've written this down because, until recently, it wasn't obvious to me. So I thought perhaps some readers might find this useful (or worth arguing with). In either case, feel free to post a comment!

PS. Ernest Boyer, in his book Scholarship Reconsidered, described four types of scholarship, each of which, in its own way, can make teaching more vital and current:

  1. The scholarship of discovery, which includes original research that advances knowledge.
  2. The scholarship of integration, which involves synthesis of information across disciplines, across topics within a discipline, or across time.
  3. The scholarship of application (later also called the scholarship of engagement), which goes beyond the service duties of a faculty member to those within or outside the university and involves the rigor and application of disciplinary expertise, with results that can be shared with and/or evaluated by peers.
  4. The scholarship of teaching and learning, which is the systematic study of teaching and learning processes. It differs from scholarly teaching in that it requires a format that allows public sharing and the opportunity for application and evaluation by others.
We're talking more at GW about the possibilities of encouraging all four kinds of scholarship, especially (in current conversations) for associate professors building a record for promotion to full professor.

Friday, August 31, 2012

Observations on learning from the Lilly Conference on College and University Teaching

This past June, a number of our faculty and I attended the Lilly Conference on College and University Teaching here in DC.

One of the keynoters, Terry Doyle of Ferris State, spoke on what brain research suggests about college teaching.  At one point, he asserted that, if a person can't apply what they were taught after a sustained period of disuse of that knowledge or skill, then they never truly learned it in the first place.

When you learn something new and practice it a bit, new dendrites form in your brain.  But if the practice doesn't recur periodically, the brain reabsorbs that material. Your brain can eat your homework, eventually.

Doyle asked, If you watch someone lift weights three times a week for fourteen weeks, how much stronger will you become? It's an important question for students and for faculty: the one who does the work does the learning.

But in a typical college course, Doyle asserted, only one person in the classroom does any heavy lifting, mentally. That's the faculty member. To practice thinking as a scholar thinks, even at a novice level, is hard work. It's even harder for the novice than for the expert. But just taking notes on what the expert concluded is not practicing expert thinking at all. It's learning part of what you need to do the exercise, without actually exercising.

Suppose you had to judge the success of a course you were teaching this fall. You decide to do that by waiting a year and then seeing what your students can do in Fall 2013. If that were your plan, what kind of practice in thinking would you give them this fall? That's what Doyle was inviting us to do.

PS.   The Lilly Conference is in DC every year.  The faculty who attended this year seemed to get a lot out of the sessions and workshops.  If you'd like my office to pay for your registration in June 2013, let us know now. We prepurchased 10 registrations last year. We could provide more if there's demand. (College Park sent almost 45 faculty, as I recall.)

If you have thoughts (or objections), please post them or email me.

Saturday, June 16, 2012

Disdaining an Emerging Innovation: Why That Makes Sense

It makes sense for mainstreamers to ignore or disdain emerging innovations, especially those that eventually trigger revolutions:

Odds Are Against Any One Innovation Being Any Good: First, overwhelming numbers of innovations turn out not to have been worth attention.

It is a Poor Imitation:  The most revolutionary and disruptive innovations can, in their infancy, appear like a cheap imitation of the real thing, a fraud perpetrated on the uninformed.  As Clayton Christensen has illustrated with cases from many fields, disruptive innovations often get their initial foothold as cheap, low value products or services that, initially, are only used by people who can't or won't get the real thing.  (Of course, this is one of the things the early adopters love: its ability to engage those who have previously been excluded.)  For example,
  • Personal computers began their evolution as cheap toys that were laughably called microcomputers.  
  • In its early days, the World Wide Web would never have been used by anyone who could find (far better) information in a real library.
  • When Socrates tried to imagine educational uses of reading, he assumed the document would be read without conversation, writing, or any other activity.  The reader would only be able to repeat what had been read, fooling himself and others into thinking he had learned something.  Real learning, Socrates argued, could only come from rigorous dialogue.
It Seems Like a Dodge to Increase the Innovator's Status: As Elting Morison pointed out decades ago, many advocates of emerging innovations are the young or outsiders who don't have what it takes to climb the ladder in the normal way.  (Or at least that is how they are seen by the people who already have climbed that ladder.) The innovation is merely a cheap trick for gaining power.

The innovation will corrupt the young: Learning the mainstream practice requires extensive training, selfless behavior, courage, hard-won experience, and deep understanding.  Climbing the traditional ladder has been ennobling.  In contrast, the innovation lets the young achieve miracles without effort. So they will never acquire the deep understanding and moral strength that can only be developed by learning in the traditional way.  Elting Morison described how this fear motivated officers of the US Navy in the 19th century to violently oppose a new technology that made it extraordinarily easy to aim and fire guns at 100 times the traditional range.

The Innovation Poses a Real Threat:  Disruptive innovations eventually wreck and replace a whole chain of jobs and companies.  That's what 'disruptive' means. Think of Tower Records, Blockbuster Video, and Borders Books. Their former employees all lost their jobs.  Economist Joseph Schumpeter called this creative destruction because it creates opportunities and resources for better ways of doing things. But, when you're an insider, creative destruction is still destruction.

It's rational for most people to ignore most innovations. And the greater an innovation's potential for transformation and disruption, the more reasonable it is for many people to ignore or oppose it.

Christensen, Clayton, THE INNOVATOR'S DILEMMA

Tuesday, June 5, 2012

Questions: When good 'teaching' can defeat learning

What is it called when a student sits listening to lectures week after week, thinking, "Yes, I understand," but then suddenly, in a later lecture, homework assignment, project, or test, the student realizes, "I'm completely lost!"?

Is there a name for the student's delusion of understanding? or for the sickening realization that it was indeed a delusion?  It's such a common phenomenon, but I don't know what to call it. 

Ironically, this phenomenon may be most common when the lecturer is "good" in a traditional way: crystal clear in explaining things, visibly caring about students, perhaps charismatic and riveting.  

There's another learning problem, also unnamed, that can afflict students of particularly good professors. I still have it often myself. The lecturer makes a provocative point, and I start thinking about it. A moment or two later I suddenly begin hearing the lecture again and, if I'm unlucky, am completely lost, sometimes for a few minutes and sometimes for the remainder of the talk. It happens to me all the time at conferences. I wonder how often it happens to students. Does it happen more often to good students who are really actively listening? Is there anything a faculty member can do to give students the time needed to process what they've just heard before trying to cram more invaluable information and insight into their short-term memories? Please post a comment or send me an email.

Friday, May 18, 2012

The troubling question of grades

I wasn't in charge of a college course until I was 62.  So please forgive me for yapping about something that many readers of this blog have thought about for years.

As I was grappling with the problem of grading my students ("How much should they be able to do in order to rate a 'B+'?"), I talked with my old friend Tom Angelo, who reassured me that grading was the shameful secret of teaching: 'No one can really tell you how to do it properly.' For example, faculty will disagree forever on such profound questions as "Are there many courses and circumstances in which it would be appropriate for all students in a large class to earn an A?" At GW and many other institutions, many faculty are upset about 'grade inflation' (such a large fraction of students being given A's or B's).

Against this backdrop, here are three pieces of information.

The first I encountered years ago: there is virtually no relationship between undergraduate GPA and anything about the student's experience in life after college (except for graduate school grades). Grades don't predict job success, voting behavior, likelihood of winning a Nobel Prize, or anything else.
  • Cohen, Peter (1984). “College Grades and Adult Achievement: A Research Synthesis,” Research in Higher Education, 20(3), 281-293.
I've only seen one study of GPAs in professional schools - it was about a business school - and they didn't find any relationship between an MBA student's grades and later success in business, either.

The second piece of information comes from Ken Bain's classic book from 2004, What the Best College Teachers Do. On page 32, he writes (I'm paraphrasing): "For years, psychologists have studied what happens when someone has a strong intrinsic interest in something, and they're also offered an extrinsic reward to do it (e.g., a high grade, money, etc.). Would that reward increase their fascination with the subject, leave it unaffected, or decrease it?"

In one set of experiments by Deci and others, students were asked to play an unfamiliar game under supervised conditions. Some students were paid money for winning, while the others received no rewards for their performance. When left 'alone,' the students who'd been competing for prizes stopped playing the game. In contrast, students who had not played for any reward often continued playing after the supervisor left. Variations of this experiment showed that, in order to provide external feedback while doing minimal damage to the student's own interest in the subject, it's best to combine feedback about performance ('Here's what you did right; here's how to do it better.') with selective praise for performance ('The way you castled your pieces was quite ingenious!').

Bain reports that decades of research by different investigators indicate that, 'when given a reward for good performance, students' own interests in the subject actually go down and sometimes disappear.'

But why should instructors worry about whether grading is sapping the student's own interest in the subject if students are doing well enough in the course?

If literature students no longer love novels so much, or if engineering students lose their taste for creating things, they're less likely to keep using their academic learning on their own.  We're trying to help students develop their 'thinking muscles.'   If students leave a course (still) loving what they've learned, they're more likely to find ways to use that capability and, in the process, get even better.  (If the receipt of a final grade also stops the use of those thinking skills, on the other hand, it seems possible that those skills will atrophy.)

Those great college faculty that Bain studied do everything possible to increase students' intrinsic motivation, and tap that energy. That fits what I've seen over the years. For example, I've written about Jon Dorbolo's experience as a novice teacher of introductory philosophy. When his first course went sour, Dorbolo asked his students to rate every assignment anonymously. To his dismay, a majority of his students rated every single assignment useless and boring. However, Dorbolo then noticed that every assignment was rated useful and interesting by a minority of students. Different students liked different sets of assignments. Some students liked only religious philosophers. Some liked only utilitarians. Dorbolo realized that his most important goal was to help students learn the rudiments of philosophical thinking; an implicit goal was not to teach them to hate philosophy.

So, by the next time Dorbolo taught the course, he'd created different, overlapping reading lists. Each set of assignments helped students learn skills of close reading and analysis. But each student got to choose the set of readings most likely to motivate them to think hard, work hard, and appreciate what they were learning. Dorbolo also converted the course away from lecture (since it made no sense to lecture about one reading when most students in the room hadn't seen it); instead, class sessions were devoted mainly to small-group discussion. Dorbolo saved a lot of time that would otherwise have been spent creating and delivering lectures. Instead he focused on managing the discussions, assessing learning, and studying what helped and hindered student learning.

But what about grades?  Is it possible that grade inflation is a good thing, if it motivates us to provide feedback and documentation of student learning in other, less destructive ways?

Do you know of any faculty who've looked at research on assessing and valuing student performance? motivating learning? Tried any experiments?  If so, please add a comment.

PS. If you have any trouble posting comments on the blog, please email me.

Post slightly revised on May 19, 2012

Tuesday, May 15, 2012


I just came across some mentions on the ProfHacker site of THATcamp (The Humanities and Technology camp), a series of self-organizing get-togethers around themes. There's one at George Mason next month around the theme of "History and New Media" (registration now capped at 150). Sounds like folks are staying at a motel and meeting in a research center, not in tents! Have you ever taken part in, or organized, a THATcamp?

PS. If you try to post a comment on this blog, and have problems doing so, please email me and let me know. Thanks!

Sunday, May 13, 2012

Dysfunctional Illusions of Rigor

Several years ago, Craig Nelson wrote a book chapter attacking 'dysfunctional illusions of rigor.'  There were seven ideas about rigor which he no longer believed:
  1. Hard courses weed out weak students. When students fail it is primarily due to inability, weak preparation, or lack of effort.
  2. Traditional methods of instruction offer effective ways of teaching content to undergraduates. Modes that pamper students teach less.
  3. Massive grade inflation is a corruption of standards. Unusually high average grades are the result of faculty giving unjustified grades.
  4. Students should come to us knowing how to read, write, and do essay and multiple-choice questions.
  5. Traditional methods of instruction are unbiased and equally fair to a range of diverse students of good ability.
  6. It is essential that students hand in papers on time and take exams on time. Giving them flexibility and a second chance is pampering the students.
  7. If we cover more content, the students will learn more content.
"Rigor" is a term I've heard used frequently in conversations about college courses, but rarely (until now) defined. Nelson's book chapter, adapted in Tomorrow's Professor 1058 and 1059, explains the problems inherent in each of those illusions, and suggests a way of defining rigor that is more likely to educate students than to repel or reject them.  (For example, he discovered that giving every exam twice and allowing students to use the better of the two grades actually improved student learning because most of them studied twice for each exam.)  Although the extensive data Nelson cites come mainly from science and mathematics education research in colleges, most of his argument applies equally well to other fields. 

Wednesday, May 9, 2012

A GW Hybrid Master's Program in GSEHD

Patty Dinneen (TLC) and I just attended a master's portfolio defense at the Graduate School of Education and Human Development (GSEHD).  This hybrid program (Early Childhood Special Education) takes advantage of study on-campus, on-site (clinical experience) and online.

The ECSE program is organized around a set of about fourteen capabilities that students develop cumulatively as they take courses (e.g., the ability to use a variety of assessment procedures as they work with children).  Each ECSE course helps students take another step in developing several of those capabilities. And in each course the student uploads projects to their portfolio and writes about how those documents provide evidence of what they can now do.

As the program comes to a climax, the students use their portfolios to describe their personal philosophy of the practice and document their achievements in each of the fourteen areas as they relate to that philosophy.  It was obvious that this particular student, Rebecca Parlakian, was exceptionally good at articulating what she had learned and how she now thought as she worked in the profession; she drew interesting lessons from each of the experiences she had documented.

It's looking more likely that the Innovation Task Force will help us work with GW departments at creating hybrid master's programs.  The ECSE program is just one example of what such a hybrid degree program might look like.

Monday, May 7, 2012

Interdisciplinarity at Stanford

The article linked to the title of this post, and also this Ken Auletta piece from the New Yorker, paint an intriguing picture of an effort at Stanford to develop a more interdisciplinary undergraduate education there. Worth reading!

Sunday, April 22, 2012

Assessment - Improving Learning AND Accountability?

In a recent op-ed in the New York Times, "Testing the Teachers," David Brooks contrasted the increasing price of higher education with the evidence that results are getting worse.   For example, he cited studies showing that the reasoning ability of graduating seniors is disappointingly poor, significantly worse than a couple of decades ago. A co-author of one of those studies, Josipa Roksa, spoke at our Teaching Day this past October. (Click here to see a video of her talk.) People have faith that going into years of debt is worth the price. Taxpayers have faith that spending large amounts of tax money on student aid and research is justified. What if that faith begins to falter? Brooks suggests that academics study learning in order to figure out the problems, and fix them, before it's too late.

Fortunately, many faculty (like Nobel Laureate Carl Wieman who spoke here at GW last October) have been working on the same problem because of their own doubts about whether their students are actually mastering even the most elementary ideas and capabilities in their disciplines.

As Wieman showed, the problem is that hard-working faculty can spend enormous effort on teaching with methods that sometimes don't work very well.  (As a rueful Gregory Peck remarked in "Twelve O'Clock High," "I'm choppin' but no chips are flyin'.")

Wieman reported research that compared two sections of physics, each of about 270 students.  In one, students were taught by an experienced faculty member using traditional, respected methods of instruction. The other section was led by an inexperienced GTA who had been trained to use new methods that continually challenged students to think and work together throughout the class rather than silently taking notes.  The worst of the GTA's students learned as much as the best students learning from traditional methods.  Attendance in the GTA's section was substantially higher, too.   Wieman's study appeared in Science on May 13, 2011 (p. 862-864)  (Click here to see a video of Wieman's talk.)  These kinds of results are not unusual, but, when a Nobel Laureate does the research, it tends to draw attention to the new possibilities.

To oversimplify, here are a few ideas, all of which are already being tried by at least a few GW departments:
1. Figure out what capabilities your program is trying to develop in all your students by the time they graduate.  For example, an engineering program might be trying to educate students who can figure out a design that can solve a problem, convert that design into a production plan, produce a product and test it.
2. Don't expect that one course can develop one capability by itself ("We need our graduates to be ethical so we will require them to take an ethics course.")
3. Assign students projects and other activities in course after course: activities that reveal whether those capabilities are actually developing satisfactorily.  Use capstone courses, senior projects, and student portfolios to see whether, by graduation, students have learned what they need to learn.
4. Where many students are having problems making progress, don't waste energy on the blame game. Instead, find different ways to teach them. While you're at it, look for methods that can also save faculty and students time.
5. Of course, each student will also learn many things beyond those core capabilities. Those same student projects and activities (#3) can help the student and faculty see each student's distinctive achievements (and problems). Taking a good look at that evidence may suggest ways of improving this aspect of students' education, too.

I heard about an example of these ideas in action a few days ago when I talked with Prof. Jay Shotel, Chair and Prof. of Special Education and Disability Studies in GSEHD.   Jay and his colleagues have identified 14 capabilities that all their master's students in Early Childhood Special Education should master.  In each course, students do projects that help them and faculty see how they're progressing in developing those capabilities. And the students continually write about that learning ("reflection").  As part of those courses,  students are also each developing an online portfolio -- a collection of their projects that, along with those reflections, documents the cumulative development of those 14 capabilities.  They present their portfolios as a requirement for graduation. And they can use that evidence to help get jobs.  (Jay, did I get this right?)

PS. I was also fascinated to see that this GSEHD program is a 'hybrid,' i.e., it combines learning at Foggy Bottom, online learning, and clinical experience at off-campus sites.  If the Innovation Task Force approves our plans, we'll be selecting a couple of faculty teams each year that want to take advantage of the hybrid format in order to develop world class master's and graduate certificate programs.  We will support the winning proposals with grants and expert consulting help.  Let me know if you'd like to learn more about these ideas.

Sunday, April 1, 2012

One Reason Why Faculty Resist

I've been enjoying helping Natalie Milman by contributing several "Ends and Means" columns to the magazine Distance Learning this year. Today I'm submitting my most recent contribution (and my last one for a while), called "One Reason Why Faculty Resist." It begins:

"Have you ever heard the phrase “resistance to technology” used to imply that some faculty are irrational dinosaurs? I have, and I don’t like it. In my experience, most such resistance is quite reasonable.  The following story about online discussion in real time suggests what worries these instructors."

My story's point is that, when the terrain of teaching and learning changes, faculty are quite likely to encounter unexpected problems in their courses.  I don't just mean technical problems.  I mean problems with teaching and learning that are frustrating, embarrassing and sometimes potentially threatening. And, when they encounter such problems, they may well blame themselves. And they may feel that student reaction may put them at risk.

We know all this. But most institutions do little or nothing to prepare faculty for those problems.

So one reason for faculty "resistance" is that they sense that the effort to get them to teach online is a bit of a con game: "Come on in, the water's fine!" Young technology staff leading training workshops probably aren't aware of the problems. And, when workshops are led by faculty enthusiasts, they often paint a rosy picture because they discount once-painful problems and don't want to scare their colleagues away.

My column concludes with some suggestions for how to organize self-sustaining, scalable, inter-institutional faculty conversations about teaching a particular course (e.g., "Econometrics 202"). Their online and face-to-face discussions should be comparatively brief, brisk, relaxed, and helpful enough (trading tips, insights, and moral support about what happened last week) that faculty would look forward to next week. That's the theory. Perhaps we can start a few such groups from GW.

If you'd like to learn more, the column should be published in a few months. Or contact me and I'll send you the draft.

Monday, January 2, 2012

Beyond "Comparability"

My colleague Patty Dinneen and I are just finishing up a column for publication in the next issue of "Distance Education."  (In the last issue, Natalie Milman and I wrote a column about ways in which online educational formats make it easier to identify and respond to the different needs of different students in the same course.)

This new column, entitled "Beyond 'Comparability,'" begins:

“Comparability”: an institutional strategy for assuring quality in online and hybrid courses by insisting that the content and, sometimes, the assessments be “comparable” to courses already offered on campus.  

As a standard, “comparability” sounds reasonable enough. After all, this sameness makes it possible to compare the quality of learning outcomes without regard to delivery method. So long as the distant learners get test scores comparable to those of the students on campus, all is well and no further thinking or oversight is needed.

In a similar vein, Richard Clark argued in a classic article that the quality of learning is unrelated to the technology used for teaching. For example, meta-analyses of huge numbers of studies of ‘presentation’ (i.e., making information from a single source available to many students) have shown that students taught by presentation learn just about as much, no matter what the medium of presentation. It doesn’t matter whether they get the presentation via live lecture, videotape, streaming video over the Internet, or textbook. Clark also pointed out that the activity of self-paced instruction (SPI) produces substantial learning gains over the activity of presentation, but SPI implemented on paper produces almost as much learning as SPI using computers. According to Clark, using technology for teaching is analogous to a vehicle delivering your groceries to your home: the quality of the milk is the same whether the delivery truck is made by GM or Ford. Technology and quality are completely unrelated, he argued. (Clark, 1983: 445)

But Clark’s analogy is misleading, and the flaw in his conclusion is also the flaw in the standard of comparability. It’s true that any teaching/learning activity can be implemented with a variety of technologies or facilities. However, for any particular teaching/learning activity, some facilities or technologies are a better fit than others. For example, SPI can be done far more easily and inexpensively with digital technology than with paper; that’s why such tutorials have become more common as computers have become more common, and why paper versions of SPI have become almost extinct. The process approach to writing, a pedagogy, spread once computers became common because rewriting is easier with computers. Course activities involving analysis of video (e.g., video recordings of science experiments in action; film clips) became more common when individual manipulation of video became inexpensive and easy.

Once the medium or tools of learning change, it also becomes easier to change who is involved in the course.  Obviously distance learning makes it possible to involve not only more students but also students with specific kinds of backgrounds or needs.  Equally important, the institution can make different choices about who to use as instructors, or assessors of student work, when those activities can be done online.

Changes in learning spaces and tools can also enable improvements in assessment: self-grading assessments can be administered more readily online, for example.

And the dominoes keep falling. When changes in learning spaces and tools enable improvements in the activities, assessment, and people, the content and goals of the course, or course of study, can be improved, too. In the early 1980s, for example, Prof. Marvin Marcus of the University of California, Santa Barbara, was able to use a new computer lab in mathematics to begin offering the math department’s first minor in applied mathematics, consisting of several on-campus courses and an off-campus internship program in which students applied their skills to solving problems faced by community agencies. A more recent example: the Internet and cheaper international communications have helped Worcester Polytechnic Institute make research abroad so easy, inexpensive, and common that it has become a signature activity of that institution.

When universities change technologies and/or facilities (e.g., from campus-bound to hybrid), faculty ought to take a fresh look at learning goals, content, teaching/learning activities, and assessment. The change of facilities will make some goals harder to pursue than before, others easier; some teaching/learning activities easier, others harder; and so on.  The problem with “comparability” as a standard is that it discourages faculty from thinking about how they might take advantage of new learning spaces and tools in order to offer more valuable hybrid or online courses of study. 

Remember the old tale about the tiger that had been caged since birth. It would roam its cage ceaselessly. One morning it awoke. The bars had been removed. But for a long time, the pacing tiger didn’t notice. It continued to pace within the boundaries of its vanished prison."  ....

The column goes on to summarize nine different ways in which online or hybrid formats create the potential for courses that are more valuable and more effective than campus-bound course formats allow.

The column ends, in part:  "If “comparability” should not be used to provide a quick and easy method of quality assurance, what should we do instead?  Our answer is simple: we should evaluate online and hybrid offerings in the same way we ought to assess campus-bound offerings:
  1. Are we doing the right thing?  Use internal and external points of reference to discuss whether the goals are valuable.  This will almost always involve comparing ‘apples and oranges’ so it’s important to think carefully about what points of reference to use. 
  2. Are we doing the thing right? Ask whether there is a good alignment between that goal, the teaching/learning activities proposed, and the facilities and technologies to be used to support those activities.

Do you think that we should abandon ‘comparability’ as a standard for quality assurance for online and hybrid programs?