Advocacy of robo-readers hasn’t won the debate — it still misses the main point

The Hechinger Report has a recent article out about the potential of robo-grading software. The title, Robo-readers aren’t as good as human readers — they’re better, is sensationalistic and sacrifices the tenor of the article’s argument for a round of quick clicks. But the premise of the article is worth testing out: a research study born of instructional technology use at the New Jersey Institute of Technology showed that robo-readers could be put to good use to help students see problems in their writing, in part because students are more willing to engage with their writing through a computer than with a human teacher. The researchers then try to determine why this is so (the software can gamify writing, students see human instructors as punitive and the computer as nonjudgmental, computer feedback can be more individualized than human labor allows). Like a lot of EdTech solutionism, it sounds pretty good, with enough kernels of truth, and enough dismissal of the anti-robograder camp (by casting Les Perelman as the embodiment of that perspective and rendering him a straw man), to gloss over the inherent problem robo-readers pose.

In the essay, author Annie Murphy Paul notes how NJIT sees a positive use for robo-readers in the initial steps of creating an essay: basic writing tutor, proofreader, and revision specialist. The final draft of the essay is then given to a human for grading. NJIT has seen students more willing to engage with their writing in the digital realm under the guidance of the computer, and Paul references a 2011 discussion of robo-readers that extensively quotes NJIT’s Andrew Klobucar, who states that students are writing three times as many words in the computer programs (e-rater, Criterion) as they do in more traditional writing practice.

The argument against robo-readers is not an affirmation of traditional writing practices, but rather an admonishment against leaving human communication to the auspices of an expert system. e-rater is the scoring engine behind the parent software Criterion, and was developed by ETS, an assessment business specializing in testing and research. The software was built to meet assessment needs with an eye toward ease, economy, and standardization. Using the software as everything but an assessment tool looks good, but it ignores the objective of the software and creates a false sense of neutrality. Looking strictly at three times more words and seemingly more engagement from student writers gives the impression of a successful software system that can address a widespread shortfall in student writing (only 24% of students meet government proficiency levels), but it ignores the larger problem: we view writing as a skill-based proficiency rather than as a form of human communication. Gamifying writing, or handing lower-level skills and revision duties to a computer, does not bolster writing as a form of human communication; it reinforces the notion that we write to a standardized metric, a metric antithetical to the human condition.

Much of the problem with how we teach writing is the notion that writing proficiency must be standardized. If writing, a form of human communication, is not important enough for humans to read, then why are we doing it? Skills and proficiencies and rules and knowledge and theories are vital to empowering individuals to communicate effectively and creatively, but that is not accomplished by handing the process over to a gamified system that judges the minute parts at the expense of the whole. The purpose of subject/verb agreement, grammar, argument enumeration, and literary device is to allow people to share their own unique thoughts and beliefs with others. There is space for computers, both hardware and software, to help establish that. But that must be the driving force of the developer’s objective, or we are going to continue to bolster a broken ideology and be disappointed in the students who come out of the system.

Posted on: August 14, 2014
