Them Algorithms ain't all bad!

The education secretary announced recently he would be culling this summer’s exams amid the escalating coronavirus crisis here in the UK, creating a plethora of questions in the process. What will replace them? How will we assess learners? How will learners stay motivated? Will learners struggle to get university places?

Teachers are not misplacing their angst. We saw “algorithm-gate” last summer, when Ofqual botched their system of assessing learners with a nefarious algorithm designed to promote toffs and stifle the proletariat uprising (jokes). A major constituent of their algorithm appeared to be postcode data: how successful are other learners in your area? The algorithm then fitted teacher predictions against historic data, downgrading outliers whose results strayed too far from the line of best fit.

It’s now a case of once bitten, twice shy. Rightly, teachers are fretting about a journey down the same road. Ivory-towered Westminster residents aren’t hit nearly as hard by this as the teachers sat opposite a sobbing teenager whose future prospects have been shattered by a computer.

Does last year’s experience mean all algorithms are bad? That we’ll never be able to harness “machine learning” to generate a robust assessment system? I’d like to think not.

As glamorous as it sounds, an algorithm is merely a set of instructions, similar to a cake recipe. The success of an algorithm is based on the data it receives, how it interprets that data, and the results it spits out. Rubbish in equals rubbish out, and it appears we plugged rubbish into last year’s algorithm.

How do we ensure that any future algorithm doesn’t suffer the same fate? We merely determine a better set of data by which to make our predictions.

Double, double toil and trouble: our recipe for algo-rhythm!

As a first pass, here are some data points I feel would be relevant in producing a solid outcome.

Every homework and class assessment mark available

Schools record assessment data to the nth degree. These marks form a strong indicator of performance over time. We can also measure performance in relative terms: where does a particular student rank against immediate peers from similar socio-economic backgrounds and school types? This “relative performance” can create an “outlier weighting”, which can then be applied to the end prediction. If a student has a consistent history of sitting two standard deviations above their peers, this should be reflected in their end result.

Collecting this data should not be difficult. Teachers collate marks all the time. Properly recorded (even in a spreadsheet), this type of data can provide strong indicators of end performance and could be plugged into an algorithm.
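To make the “outlier weighting” idea concrete, here’s a minimal sketch of one way it could be computed. The function name, the peer-grouping scheme and the marks are my own assumptions for illustration, not a proposed official method.

```python
from statistics import mean, stdev

def outlier_weighting(student_marks, peer_marks):
    """Z-score of a student's average mark relative to a peer group.

    student_marks: the student's assessment marks over time.
    peer_marks: marks for peers from a similar socio-economic
    background and school type (an assumed grouping).
    """
    peer_mean = mean(peer_marks)
    peer_sd = stdev(peer_marks)
    if peer_sd == 0:
        return 0.0  # no spread among peers, so no outlier signal
    return (mean(student_marks) - peer_mean) / peer_sd

# A student sitting roughly two standard deviations above their peers
print(outlier_weighting([82, 85, 88], [55, 70, 60, 75, 65, 80]))  # ~1.9
```

A student who consistently scores this far above their peer group would carry that weighting forward into the final prediction, rather than being dragged back towards the peer-group average.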

Effort grades, time management & behavioural indicators

Teachers should also attempt to collate behavioural data as, quite often, exam success is a function of good habits. Are you able to organise yourself? Are you able to revise well? Do you work hard? Are you engaged in lessons? Do you attend frequently? These are data points collected by schools all the time. Also, academic performance in terms of “hard results” tends to peak at exam time. Mock exams can produce results lower than expected; the extrapolation we make is that learners will assimilate the content in time for the real exam and then achieve higher results. We can make a better fist of this extrapolation by basing it on behavioural traits. Learners who work hard, revise well and engage in lessons are likely to succeed. Baking these data into our algorithm will make allowances for our learners’ attitudes towards exams.

Also, “teacher predictions” have these elements embedded into them by definition. Your teacher knows their learners’ propensity for hard work, whether they’ll revise, whether they deserve good grades, and all of the “unseen” character traits the best students have. We have means of measuring these types of behaviour, so why not bake them into our algorithm?
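As a sketch of how such behavioural indicators might be folded in, the fragment below encodes a few of them as numeric features. The field names and scales are assumptions for illustration; real schools record these in all sorts of formats.

```python
def behavioural_features(record):
    """Turn school-recorded behavioural indicators into numeric features.

    `record` is an assumed dict shape: attendance and homework
    completion as 0-1 rates, effort and organisation as 1-5
    teacher-awarded grades.
    """
    return [
        record["attendance"],            # 0.0 - 1.0
        record["effort"] / 5.0,          # normalise a 1-5 grade to 0-1
        record["organisation"] / 5.0,
        record["homework_completion"],   # 0.0 - 1.0
    ]

# Example: a learner who attends well and works hard
print(behavioural_features({
    "attendance": 0.96,
    "effort": 4,
    "organisation": 5,
    "homework_completion": 0.9,
}))
```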

Teacher Predictions

We should also collate teacher predictions, as teachers have a very real understanding of their learners. In machine learning, we can assess an algorithm’s accuracy by having humans make judgements alongside the machine. Teachers’ grades can serve as a “training aid” for our algorithm, highlighting areas of discrepancy between teacher predictions and the machine’s guesses. By enabling teachers to predict grades, we can teach our algorithm how accurate it is.
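A minimal sketch of that feedback loop might look like the following, assuming grades are mapped onto a numeric scale (say A* = 8 down to U = 0); the learner names and tolerance are invented.

```python
def flag_discrepancies(teacher_preds, model_preds, tolerance=1):
    """Compare teacher and model predictions on a numeric grade scale.

    Returns learners where the two disagree by more than `tolerance`
    grades - exactly the cases a human should review, and the cases
    that tell us how closely the model tracks teacher judgement.
    """
    flagged = {}
    for learner, teacher_grade in teacher_preds.items():
        gap = model_preds[learner] - teacher_grade
        if abs(gap) > tolerance:
            flagged[learner] = gap
    return flagged

teacher = {"learner_1": 7, "learner_2": 5, "learner_3": 4}
model = {"learner_1": 7, "learner_2": 3, "learner_3": 4}
print(flag_discrepancies(teacher, model))  # {'learner_2': -2}
```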

Core Skills Assessments

Teachers can also judge a learner’s likely success based on their abilities in core areas such as reading, writing, reasoning, critical analysis and numeracy. There are standardised online tests that can measure these skills and are pretty tough to cheat on. Tests of a similar ilk to CAT tests could be taken online, providing a steer on a learner’s ability in fundamental academic skills.

The knowledge we teach in our subjects requires these fundamental academic skills to access. For example, true understanding of trigonometry requires a blend of abstract and numerical reasoning. If we take our homework and internal assessment grades and couple them with an indicator of a learner’s ability in these fundamental skills, we can make a good guess as to a learner’s abilities.

These, in my opinion, are solid data points for judging a young person’s academic capabilities. We might have to play with the weighting each data point carries in determining a grade, but machine learning can help refine this process.
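To show what refining the weightings might look like, here’s a sketch that fits a simple linear model over the data points above. The feature ordering and the tiny dataset are entirely made up; a real system would need far more data and far more scrutiny.

```python
import numpy as np

# Each row: [avg assessment mark, outlier weighting, behavioural score,
#            core-skills score, teacher prediction] - the data points
# discussed above, with invented values.
X = np.array([
    [72, 0.5, 0.90, 68, 6],
    [55, -0.2, 0.60, 50, 4],
    [81, 1.9, 0.95, 79, 7],
    [63, 0.1, 0.75, 60, 5],
    [48, -1.1, 0.40, 45, 3],
    [77, 1.2, 0.85, 74, 7],
])
y = np.array([6, 4, 8, 5, 3, 7])  # final grades on a 0-8 scale

# Least-squares fit: learn how much weight each data point deserves.
X1 = np.hstack([X, np.ones((len(X), 1))])  # add an intercept column
weights, *_ = np.linalg.lstsq(X1, y, rcond=None)
print(weights)  # one learned weighting per data point, plus intercept

# Predict a grade for a new learner (note the trailing 1 for intercept)
new_learner = np.array([70, 0.8, 0.88, 66, 6, 1.0])
print(new_learner @ weights)
```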

Where the difficulties come from

The challenges for assessment arise because grades are used as an admissions tool. As a result, we need to create clear distinctions between people. We might not like to admit it, but we play algorithmic jiggery-pokery every year by altering grade boundaries based on population data. In strong years, the boundaries rise. Each year, the proportion of the population achieving each grade remains fairly constant.
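That boundary-shifting is itself an algorithm: norm referencing. A toy sketch, assuming a fixed proportion of the cohort lands in each grade (the proportions and cohort here are invented):

```python
import numpy as np

def grade_boundaries(cohort_scores, grade_proportions):
    """Mark boundaries so fixed proportions of the cohort get each grade.

    grade_proportions: fraction of the cohort in each grade, lowest
    grade first. In a strong year the same proportions push the
    boundaries up - the jiggery-pokery described above.
    """
    cumulative = np.cumsum(grade_proportions)[:-1]  # cut points
    return np.quantile(cohort_scores, cumulative)

rng = np.random.default_rng(0)
scores = rng.normal(60, 12, 10_000)  # a toy cohort of exam marks
proportions = [0.05, 0.10, 0.15, 0.20, 0.20, 0.15, 0.10, 0.05]
print(grade_boundaries(scores, proportions))
```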

We could argue that grades are required to inform us whether a learner is capable of accessing an A-Level or university course. I’d argue that we’re getting this wrong anyway. Drop-out rates, especially at university, are unacceptable. Academic grades are a poor determinant of university success, where the ability to self-motivate, self-learn, understand concepts from first principles and organise your own time matters far more than it did in the coddled environment many A-Level students arrive from.

If lockdown has taught us anything, it’s that we can shift a lot of our learning online. This might shift admissions policies towards a completely different approach. What if we admitted everyone who applied? We provide the first term of our courses online and let learners access them. The rigours of university or A-Level life will then separate those who can succeed from those who can’t. The unit cost of delivery is reduced significantly, as courses can be recorded and reused. Attention can be shifted to assessment as opposed to delivery, and those with the gumption to succeed will. They can then carry on the course as normal after a successful first term. With this approach, we see who succeeds by exposing learners to the real demands of the course, as opposed to using a proxy such as A-Level grades.

I’d also argue that ongoing assessment like this, where every piece of homework, every attendance and every impression you make on a teacher counts, will serve to create better motivation in lessons. Learners can no longer kick the can down the road in terms of their performance. They have to turn up every day and engage fully in order to succeed. This also creates a safety valve for learners: on any given day, they can up their performance and see a meaningful impact, as opposed to waiting for some mythical results day that seems a long way off and inconsequential to your bog-standard teen.

Incidentally, this is how real life works. There’s no “exam day”. Every day is a results day. You have to produce. School should reflect this. Each day should mean something. We should be teaching our kids that education is a process, that all knowledge matters, that the skills we build on a daily basis are crucial and that relegating knowledge that’s “not on the exam” to second-class status is a loser’s mentality. Teachers should be free of the burden of exams. They suck the soul out of education. People do need incentives – you want to know if you’re winning or losing – however, shorter feedback cycles are better. If each day impacts our grades and we can see it in cold light, we may just pull our socks up.

I’ve probably missed plenty of nuance in this essay, and it’s easy for a non-teacher to wax lyrical about what education should do. However, my purpose is not to dictate what should happen, but to explore my own thinking on the matter, as well as to spark debate. If you’ve any strong views on how this could play out, I’d love to know! I really believe that we can harness technology to make every school day count and to instil the value of lifelong learning. Assessment is fundamental to achieving this. Let’s use this pandemic as a watershed moment to create a system that serves our young people’s best interests.

 

Phill

About Phill

Phillip is co-founder of Kloodle.
