Lessons about researching technology-enhanced instruction

May 23, 2015 Tony Bates

Meiori, Amalfi Coast – when it’s not raining

Lopes, V. and Dion, N. (2015) Pitfalls and Potential: Lessons from HEQCO-Funded Research on Technology-Enhanced Instruction. Toronto ON: Higher Education Quality Council of Ontario

Since it’s raining heavily here on the Amalfi Coast today for the first time in months, I might as well do another blog post.

What this report is about

HEQCO (the Higher Education Quality Council of Ontario) is an independent advisory agency funded by the Ontario Ministry of Training, Colleges and Universities to provide recommendations for improving quality, accessibility, inter-institutional transfer, system planning, and effectiveness in higher education in Ontario. In 2011, HEQCO:

issued a call for research projects related to technology-enhanced instruction…. Now that the technology studies have concluded and that most have been published, this report draws some broader conclusions from their methods and findings.

What are the main conclusions?

1. There is no clear definition of what ‘technology’ means or what it refers to in many studies that investigate its impact on learning:

One assumes that the nature of the tools under investigation would have an impact on research design and on the metrics being measured. Yet little attention is paid to this problem, which in turn creates challenges when interpreting study findings.

2. There is no clear definition of blended or hybrid learning:

The proportion of online to face-to-face time, as well as the nature of the resources presented online, can both differ considerably. In a policy context, where we may wish to discuss issues across institutions or at a system level, the lack of consensus definitions can be particularly disruptive. In this respect, a universal definition of blended learning, applied consistently to guide practice across all colleges and universities, would be helpful.

3. Students need orientation to/training in the use of the technologies used in their teaching: they are not digital natives in the sense of being intuitively able to use technology for study purposes.

4. Instructors and teaching assistants should also be trained on the use and implementation of technology.

5. The simple presence of technology will rarely enhance a classroom. Instead, some thought has to go into integrating it effectively.

6. New technologies should be implemented not for their own sake but with a specific goal or learning outcome in mind.

7. Many of the HEQCO-funded studies, including several of those with complex study designs and rigorous methodologies, concluded that the technology being assessed had no significant effect on student learning.

8. Researchers in the HEQCO-funded studies faced challenges encouraging student participation, which often led to small sample sizes in situations where classroom-based interventions already limited the potential pool of participants.

9. The integration of technology in postsecondary education has progressed to such a point that we no longer need to ask whether we should use technology in the classroom, but rather which tool to use and how.

10. There is no single, unified, universally accepted model or theory that could be applied to ensure optimal learning in all educational settings.

Comment

I need to be careful in my comments, not because I’m ticked off with the weather here (hey, I live in Vancouver – we know all about rain), but because I’ve spent most of my working life researching technology-enhanced instruction, so what appears blindingly obvious to me is not necessarily obvious to others. So I don’t really know where to start in commenting on this report, except to say I found it immensely depressing.

Let me start by saying that there is really nothing in this report that was not known before the research was done (in other words, if they had asked me, I could have told HEQCO what to expect). I am a great supporter of action or participant research, because the person doing the research learns a great deal. But it is almost impossible to generalise such results, because they are so context-specific, and because the instructor is not usually trained in educational research, there are often (as with these studies) serious methodological flaws.

Second, trying to define technology is like trying to catch a moonbeam. The whole concept of defining a fixed state so that generalisations can be made to the same fixed state is entirely the wrong kind of framework for researching technology influences, because the technology is constantly changing. (This is just another version of the objectivist vs constructivist debate.)

So one major problem with this research is HEQCO's expectation that the studies would lead to generalisations that could be applied across the system. If HEQCO wants that, it needs to use independent researchers and fund the interventions on a large enough scale, which of course means putting much more money into educational research than most governments are willing to risk. It also means sophisticated design that moves away from matched, controlled comparisons to in-depth case studies, albeit using rigorous qualitative research methodology.

This illustrates a basic problem with most educational research. It is done on such a small scale that the interventions are unlikely to lead to significant results. If you tweak just a little bit of a complex environment, any change is likely to be swamped by changes in other variables.

The second problem in most of the studies appears to be the failure to link technology-based interventions to changes in learning outcomes. In other words, did the use of technology lead to a different kind of learning? For instance, did the application of the technology lead students to think more critically or manage information better, rather than merely reproducing or memorizing what was being taught before? So another lesson is that you have to ask the right kind of research questions that focus on different kinds of learning outcomes.

Thus it is pointless to ask whether technology-based interventions lead to better learning outcomes than classroom teaching. There are too many other variables than technology to provide a definitive answer. The question to ask instead is: what are the required conditions for successful blended or hybrid learning, and what counts as success? The last part of the question means being clear on what different learning outcomes are being sought.

Indeed, there is a case to be made that it may be better not to set firm outcomes before the intervention, but to provide enough flexibility in the teaching context to see what happens when instructors and students have choices to make about technology use. This might mean looking backwards rather than forwards by identifying what most would deem highly successful technology interventions, then working back to see what conditions enabled this success.

But fiddling with the research methods won't produce much if the intervention is too small in scale. Nineteen little, independent studies are great for the instructors, but if we are to learn things that can be generalized, we need fewer but larger, more sophisticated, and more integrated studies. In the meantime, we are no further ahead in being able to improve the design of blended or hybrid learning than before these research studies were done, which is why I am depressed.
