Chauhan, A. (2014) Massive Open Online Courses (MOOCs): Emerging Trends in Assessment and Accreditation. Digital Education Review, No. 25
For the record, Amit Chauhan, from Florida State University, has reviewed the emerging trends in MOOC assessments and their application in supporting student learning and achievement.
Holy proliferating MOOCs!
He starts with a taxonomy of MOOC instructional models, as follows:
- BOOCs (big open online course): only one example is given, by a professor from Indiana University with a grant from Google; it appears to be a cross between an xMOOC and a cMOOC and had 500 participants.
- DOCCs (distributed open collaborative course): this involved 17 universities sharing and adapting the same basic MOOC.
- LOOCs (little open online course): in addition to 15-20 tuition-paying campus-based students, these courses allow a limited number of non-registered students to take the course, also for a fee. Three examples are given, all from New England.
- MOORs (massive open online research): again, just one example is given, from UC San Diego, which seems to be a mix of video-based lectures and student research projects guided by the instructors.
- SPOCs (small, private, online courses): the example given is from Harvard Law School, which pre-selected 500 students from over 4,000 applicants; they take the same video-delivered lectures as on-campus students enrolled at Harvard.
- SMOCs (synchronous massive open online courses): live lectures from the University of Texas offered to campus-based students are also available synchronously to non-enrolled students for a fee of $550. Again, just one example is given.
MOOC assessment models and emerging technologies
Chauhan describes ‘several emerging tools and technologies that are being leveraged to assess learning outcomes in a MOOC. These technologies can also be utilized to design and develop a MOOC with built-in features to measure learning outcomes.’
- learning analytics on MIT’s 6.002x, Circuits and Electronics. This is a report of the study by Breslow et al. (2013) of the use of learning analytics to study participants’ behaviour on the course to identify factors influencing student performance.
- personal learning networks on PLENK 2010: this cMOOC was actually about personal learning networks and encouraged participants to use a variety of tools to develop their own personal learning networks.
- mobile learning on MobiMOOC, another connectivist MOOC. The learners in MobiMOOC utilized mobile technologies for accessing course content, knowledge creation and sharing within the network. Data were collected from participant discussion forums and hashtag analysis to track participant behaviour.
- digital badges have been used in several MOOCs to reward successful completion of an end of course test, participation in discussion forums, or in peer review activities
- adaptive assessment: assessments based on Item Response Theory (IRT) automatically adapt to a learner's ability in order to measure performance and learning outcomes. The test items span different difficulty levels, and depending on the learner's response to each item, the difficulty level decreases or increases to match the learner's ability. No example of actual use of IRT in MOOCs was given.
- automated assessments: Chauhan describes two automated assessment tools, Automated Essay Scoring (AES) and Calibrated Peer Review™ (CPR), that are really automated tools for assessing and giving feedback on writing skills. One study on their use in MOOCs (Balfour, 2013) is cited.
- recognition of prior learning: I think Chauhan is suggesting that institutions offering RPL can/should include MOOCs in student RPL portfolios.
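To make the adaptive assessment idea concrete, here is a minimal sketch of how an IRT-style adaptive test works, using the simplest one-parameter (Rasch) model: the next item is chosen to match the current ability estimate, which moves up or down after each response. This is my own illustration, not anything from Chauhan's article; the function names and the shrinking-step update rule are assumptions for the sake of the example, not how a production testing system would estimate ability.

```python
import math
import random

def p_correct(theta, b):
    """Rasch (1PL) probability that a learner of ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, items, used):
    """Pick the unused item whose difficulty is closest to the current
    ability estimate (the most informative item under the Rasch model)."""
    candidates = [i for i in range(len(items)) if i not in used]
    return min(candidates, key=lambda i: abs(items[i] - theta))

def adaptive_test(true_theta, items, n_questions=10, step=0.5, rng=None):
    """Administer a simulated adaptive test: after each response the
    ability estimate moves up (correct) or down (incorrect), and the
    step size shrinks so the estimate settles toward the learner's level."""
    rng = rng or random.Random(0)
    theta, used = 0.0, set()
    for _ in range(min(n_questions, len(items))):
        i = next_item(theta, items, used)
        used.add(i)
        # Simulate the learner's response from their true ability
        correct = rng.random() < p_correct(true_theta, items[i])
        theta += step if correct else -step
        step *= 0.8  # shrink the adjustment so the estimate converges
    return theta
```

With an item bank of difficulties from -3 to +3, a strong learner's estimate drifts upward toward harder items while a weak learner's drifts downward, which is the adaptive behaviour described above. Real IRT systems estimate ability by maximum likelihood over all responses; the fixed shrinking step here is just a simple stand-in for that.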
Assessment in a MOOC does not necessarily have to be about course completion. Learners can be assessed on time-on-task; learner-course component interaction; and a certification of the specific skills and knowledge gained from a MOOC… Ultimately, the satisfaction gained from completing the course can be a potential indicator of good learning experiences.
Alice in MOOCland
Chauhan describes the increasing variation of instructional methods now associated with the generic term 'MOOC', to the point where one has to ask whether the term has any consistent meaning. It's difficult to see how a SPOC, for instance, differs from a typical online credit course, except perhaps in that it uses recorded lectures rather than a learning management system or VLE. The only common factor in these variations is that the course is being offered to some non-registered students, but then if they have to pay a $550 fee, surely that's a registered student? If a course is neither massive, nor open, nor free, how can it be a MOOC?
Further, if MOOC participants are taking exactly the same course and tests as registered students, will the institution award them credit for it and admit them to the institution? If not, why not? It seems that some institutions really haven't thought this through. I'd like to know what Registrars make of all this.
At some point, institutions will need to develop a clearer, more consistent strategy for open learning, in terms of how it can best be provided, how it calibrates with formal learning, and how open learning can be accommodated within the fiscal constraints of the institution, and then where MOOCs might fit with the strategy. It seems that a lot of institutions – or rather instructors – are going into open learning buttock-backwards.
More disturbing for me, though, is the argument Chauhan makes for assessing everything except what participants learn from MOOCs. With the exception of automated tests, all these tools do is describe all kinds of behaviour except learning. These tools may be useful for identifying factors that influence learning, as a post hoc rationalization, but you need to be able to measure the learning in the first place, unless you see MOOCs as some cruel form of entertainment. I have no problem with trying to satisfy students, and I have no problem with MOOCs as un-assessed non-formal education, but if you try to assess participants, at the end of the day it's what they learn that matters. MOOCs need better tools for measuring learning, but I didn't see any described in this article.
Balfour, S. P. (2013) Assessing Writing in MOOCs: Automated Essay Scoring and Calibrated Peer Review. Research & Practice in Assessment, Vol. 8, No. 1
Breslow, L., Pritchard, D. E., DeBoer, J., Stump, G. S., Ho, A. D., & Seaton, D. T. (2013) Studying Learning in the Worldwide Classroom: Research into edX's First MOOC. Research & Practice in Assessment, Vol. 8, No. 1, pp. 13-25