In 1983 Richard Clark stated that “studies clearly suggest that media do not influence learning under any conditions.” I agree. It has been shown that we can swap out one medium for another and still get the same results or, at least, results that show “no significant difference” from the original delivery system.
But what happens when we study technology instead of just media? Semantics, you say? Not really. Technology is more than just the media appliances used to deliver content. It also includes the pedagogy guiding the use of those media; it includes the teaching strategies driven by stated objectives; it drives toward desired outcomes through a healthy mix of experiences, activities, and tasks. It mingles collaboration with reflection. Technology is not a buffet where you can pick and choose; it is the entire seven-course meal. You may not polish off every course, but you had better get a good sample from each. If you wish to research the effectiveness of a technology, you must consider more than the efficacy of the hardware.
This is not to say that one must examine every factor present, every issue involved. Such a study would include assessments of cultural impact, administrative concerns, faculty development, learning styles, and student attitudes. It would encompass everything from initial instructional design to post-test results: pedagogy, return on investment, the allocation of resources, economies of scale, and more. Too much information, I say.
But to truly gauge student satisfaction or achievement, the relevant internal elements and external forces must be drawn into the study. In effect, we must examine the issues surrounding and shaping the focus of our study before we can narrow in on the study itself.
We must find the relationships or correlations between the different elements. Comparing social networking to collaborative PBL is of limited value; examining which approach is more effective for different learning styles tells us something we can use to improve specific areas and then extend to broader learner populations.
By continuously studying different systems, we can assess what works and what doesn’t in given situations, then compare those results to related studies conducted under somewhat different conditions. In time, we can capture significant elements in the “statistical crossfire” that results from this multitude of related research. We must move away from head-to-head studies of one medium against another, or our knowledge will always reflect the insight of the blind men describing the elephant.