Fawei and I recently had a trip up to Newcastle to attend the 2019 Turnitin Summit at the Baltic Centre for Contemporary Art. I thought it would be good to give a summary.
Quite a lot has happened with Turnitin in 2019, not least the takeover by the Advance Publications family of companies. In a similar vein to last year, there was a lot of talk about the plague of essay mills and the ever-evolving tactics they employ to stay one step ahead of the game – some essay writers will now guarantee a low similarity score in order to attract customers.
The first talk, from Val Schreiner, was an overview of Turnitin's long-term plans and ambitions.
Turnitin would like more visibility of their "results" within VLEs. There is growing acknowledgement that the most visible metric – the similarity score – is actually a fairly small part of the jigsaw. Most VLEs concentrate on showing a percentage match plus a colour code (green / amber / red), and this is the only indication one sees in a VLE: a high percentage plus a "red" indicator can be quite worrying for students.
As Turnitin expands its attention towards more complex submission types (often from STEM subjects) they must educate their users to be more considerate in their interpretation of high-level metrics. For example, a similarity score of 100% for an answer to a maths problem can mean two entirely different things:
- it is an extremely strong indicator of collaboration if both students got the answer wrong
- it may be a strong indicator of competence if both students got the correct answer (or one may have copied the other’s correct answer!)
Val spoke about a new marking tool for STEM subjects called GradeScope; this supplements GradeMark, which is focused on marking essay-type questions. More about GradeScope later.
There is a plan to offer formative feedback to students as they begin to compose their essays; this would take the form of a Word or Google Docs plugin and would be akin to spell- and grammar-checking tools.
The "Code Similarity" tool (formerly Code Investigate) for highlighting plagiarism within computer programs has moved out of pilot and is now offered as a "beta" tool. Val commented that "adding new programming languages is easy".
Another one of the products announced last year has a new name, it is now known as “Authorship” (instead of “Authorship Investigate”). Authorship is now on its third release.
Turnitin would like to offer more tools to manage "Complex Workflows" (paper and digital) and would like to standardise grading workflows across all assignments and all disciplines. (I took this to mean they plan to merge GradeMark and GradeScope but, in my mind, this seems a long way off.) It also sounds like Turnitin would like to enter the e-exams market, as they spoke about implementing "browser lock-down", "ID verification" and "proctoring" – Val spoke about enabling remote summative exams via the aforementioned initiatives.
There were a couple of announcements about improving reporting dashboards. Two areas mentioned were:
- a “persona dashboard” – this is a longitudinal summary of an individual student
- a high-level “institutional view” for senior managers and committees
Turnitin are planning a new implementation framework whereby the individual components mentioned above can be made available without having to buy the whole gamut. They also plan to allow other tools to be plugged in. They didn't go into details (or give examples) but I assume they are talking about IMS LTI compliant tools.
Next up was Ron Park (CTO) followed by Bill Loller (Product Manager) and then Zemina Hasham (Customer Experience).
Ron spoke about load testing Turnitin – he stated that they were confident that they could handle a load of over 6 times the current peak. (There have been one billion submissions in the eleven months since 1 Jan 2019.)
Ron is also in charge of R&D and mentioned an initiative in using synonym replacement when compiling similarity reports. Apparently there are a number of tools which will "re-phrase" chunks of text using synonym replacement (known as "word spinners"), so in the arms race that is contract cheating, Turnitin must also do their own synonym replacement when attempting to identify textual matches. Ron is also overseeing an initiative to detect plagiarised images (pictures and diagrams).
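To illustrate the idea (this is purely a sketch and not Turnitin's actual pipeline – the synonym table and the sentences are invented), the basic counter-measure to word spinning is to map each word to a canonical synonym before comparing:

```python
# Illustrative sketch: normalise each word to a canonical synonym so that
# "spun" text still matches the original. Table and examples are invented.
CANONICAL = {
    "large": "big", "huge": "big", "big": "big",
    "quick": "fast", "rapid": "fast", "fast": "fast",
}

def normalise(text: str) -> list[str]:
    """Lower-case, tokenise, and replace each word with its canonical form."""
    return [CANONICAL.get(word, word) for word in text.lower().split()]

def matches(original: str, suspect: str) -> bool:
    """Exact match after synonym normalisation."""
    return normalise(original) == normalise(suspect)

# A word spinner might turn "a quick large dog" into "a rapid huge dog";
# after normalisation both become ["a", "fast", "big", "dog"].
print(matches("a quick large dog", "a rapid huge dog"))  # True
print(matches("a quick large dog", "a slow small dog"))  # False
```

A real system would presumably apply this per-window alongside its usual fingerprinting rather than demanding whole-sentence equality, but the principle is the same.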
Bill spoke about upcoming improvements to the Canvas integration – feedback about problems will be available in the Canvas UI – this is for both staff and students. Zemina acknowledged that there have been certain failings in support and announced that they will be implementing a self-service portal where Turnitin admins will be able to access a service dashboard to check on the progress of support tickets. This should come online “early next year”.
The best breakout session was an overview of the very impressive GradeScope product. As mentioned above, this is aimed at STEM subjects and manages the marking process. Apparently LTI 1.3 support is planned for the near future.
GradeScope has a great strap-line: "Turning grading into learning". This aside, the tool does look interesting – it is essentially a tool for marking handwritten exam-type scripts. The process starts with the question sheet being scanned; the areas where questions will be answered are identified and assigned question numbers. The next task is to scan the student submissions. There is also a "homework module" where student submissions can be scanned and uploaded, and the tool will integrate with a VLE (via IMS LTI) and display in an iFrame.
Once the groundwork has been done, the marking can begin. The system supports multiple markers, with individuals either sharing the marking of a single question or each marker looking at a different question.
The idea behind turning grading into learning is that markers can provide 'actionable feedback' but should also be able to spot where a lot of students are making the same mistake or where a question appears to have been ambiguous. They can adjust their teaching / exam question authoring accordingly.
To use GradeScope effectively, one must set up a series of rubrics – where a student has made a mistake, the rubric should include a mark to be subtracted from the final score. This mark can be modified retrospectively and all individual marks will be recalculated. It is also possible to group multiple question answers together and mark them all at once, thus saving time. One can either group manually or employ AI matching routines to group answers; the grouping can be modified if mistakes have been made during the auto-grouping.
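The retrospective recalculation works because each answer records only which rubric items were applied, and scores are always recomputed from the current rubric. A minimal sketch of that idea (invented names and values, not GradeScope's actual data model):

```python
# Hypothetical sketch of rubric-based deduction marking: change a deduction
# once and every affected student's score updates, because scores are derived
# from the rubric rather than stored.
rubric = {"sign_error": 2, "missing_units": 1}  # item -> marks deducted
max_mark = 10

# Each student's answer stores only the rubric items applied to it.
applied = {
    "alice": ["sign_error"],
    "bob": ["sign_error", "missing_units"],
}

def score(student: str) -> int:
    """Recompute the mark from the current rubric deductions."""
    return max_mark - sum(rubric[item] for item in applied[student])

print(score("alice"))  # 8

# The marker later decides a sign error should only cost 1 mark;
# all previously marked scripts are re-scored automatically.
rubric["sign_error"] = 1
print(score("alice"))  # 9
print(score("bob"))    # 8
```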
AI is also used to try to decipher the student's name and link it to the individual in the roster. Audience members pointed out that for summative exams, relying on AI techniques would be risky.
Student Authoring Assistance
In this session, Turnitin outlined their plans for helping students as they compose. The tool will (optionally) offer help with spelling, grammar, referencing, citation styles and paraphrasing, and will offer on-demand similarity checking.
The plenary session focused on contract cheating. It is estimated that 6% of essays fall into the category of "cheating" but only 1% of essays are identified as such. It was also noted that contract cheating tactics are evolving fast and Turnitin is constantly playing catch-up. Apparently the essay mills are now targeting secondary school students, using YouTube influencers and the like to attract attention.
The QAA have been working with the government on criminalising contract cheating companies, which would then allow the law to be used to remove YouTube adverts and take down websites. This initiative has stalled due to the upcoming general election.