Online Learning Consortium 2014 – How We Discuss the Future of Higher Education
I presented at the Online Learning Consortium’s Annual Conference today about MOOCs as a phenomenon, specifically my Delphi research, done in partial fulfillment of my Ed.D. from Pepperdine (the dissertation can be found here). The room was engaged with the presentation, and I enjoyed the opportunity to discuss the evolution of MOOCs beyond my research and how the expert Delphi panel’s 2013 conversations related to today’s MOOC news.
What is most interesting to me from the first day of #aln14, however, is how my research plays into the emerging discussion of online learning happening at this year’s conference. It started with a Twitter conversation Tuesday night with Alexandre Enkerli and Mathieu Plourde about the MOOC acronym:
— Mathieu Plourde (@mathplourde) October 28, 2014
The planks of this discussion evolved throughout the Day 1 conference proceedings and discussions. Put together, these were my takeaways on how our discourse on education at #aln14 merges with my research:
- Three of the four presentations I attended cited Clayton Christensen’s disruptive innovation as a foundational aspect of their work. This was in contrast to my Delphi study, where a Delphi participant wrote within the research study that disruption as an educational concept “should be taken out back and shot.” It is curious that an expert in the field of MOOCs made such a provocative statement against the disruption ideology in 2013, yet a year later more and more researchers and practitioners are including it within their theoretical basis. Increasingly, the notion of undercutting not only the model of higher education but also its purpose is a core tenet of the learning models and platforms we build and/or research today.
- Discussions of data analytics as inherent to online learning were prevalent as well. In a session I attended, Charles Severance (Dr. Chuck) forecast the LMS of 2020, noting that a future LMS would need to incorporate features such as third-party app collaboration, but also more nuanced data sets that go beyond what basic analytics tell teachers, faculty, administrators and statisticians. I did not get the chance to ask my follow-up question on how this vision of the future squared with Jim Groom’s notion of an LMS future in which the LMS is replaced by student ownership of content through domain registration and a reclamation of the digital space for the individual. Instead, the conversation happened on Twitter and soon included Groom, Dr. Chuck and Michael Berman, with potential for an LMS point/counterpoint at this April’s #et4online conference:
— drchuck (@drchuck) October 29, 2014
- A wonderful presentation from Seattle Pacific University (David Wicks, Baine Craft, Andrew Lumpe, Robin Henrikson & Nalline Baliram) about a research study comparing low-collaborative versus high-collaborative learning resulted in a Q&A that epitomized how we as a society look at data in higher education. The study found no statistical difference in learning outcomes between students in low-collaborative learning and those in a high-collaborative environment, and the high-collaborative environment requires more from a teacher (who must transition from content administrator to context facilitator). The research team noted that the statistics did not take into account a number of variables, including the knowledge creation that happened among the high-collaborative students and the satisfaction aspect of the learning. In the Q&A, though, many people in the audience focused entirely on the stats — if there was no statistical advantage to high-collaborative learning, didn’t that show its lack of effectiveness, especially considering the labor costs to produce it? The researchers described the elements they observed but did not code for, or were unable to adequately capture in a trial research study, but many in the room could not look past the lack of statistical significance.
It feels like a change in the air at #aln14, where learning outcomes have become another buzzword akin to scalability, personalization and analytics. In a quest for measurement, we have picked the low-hanging fruit of content as our measuring stick, and when a research study such as SPU’s high-collaborative learning work questions those results as curriculum-narrow, we cannot see beyond the klaxon of data and outcomes, when in fact the best outcome is the transformed student, not the merely competent one.
In my Delphi study, the main call to action was for professors and educators to better advocate for what they saw as the purpose of education. After today’s presentation and subsequent sessions, this seems just as important, if not more so. Education research has been a field for over 100 years, a soft science depending on qualitative and mixed methods as much as quantitative ones. The heavy push of learning analytics, seen best through the MOOC phenomenon, is changing the way we look at and measure outcomes. If the learning that happens is not easily picked up by the first generation of statistical measurement, we must talk about what was missed and why, and work to build ways to show it to a greater public.

Posted on: October 29, 2014