<body>

Friday, January 23, 2009

You can't escape ASSESSMENT... but you could e-Scape assess.....
The GA has been involved, with Goldsmiths College, in an exciting trial of a new method of assessing students' geographical work, using handheld technology (PDAs) to create a digital portfolio, which is then assessed using a method called 'comparative pairs'. This is a more robust method of comparison between individual pieces of work than the traditional method of moderating coursework. It was suggested by Alastair Pollitt, former head of research at Cambridge Assessment, based on earlier work in the 1920s (see later).

The final report on the trial, written by Fred Martin with David Lambert, has now been made available on the GA website, along with further details on the project.

The trial involved schools taking part in a field visit to Porthcawl, and exploring the issue of rebranding on their return. There are links to other projects which involved the use of handheld technologies, and also the idea of media landscapes. The report also mentions a range of other field investigations which Fred Martin produced.
An e-portfolio was created as a result of the process, and this was judged by comparing each portfolio with others in turn and deciding in each case "which is best?" The software that was used was an online system, which meant that judging could take place at a time and place to suit the judges within the (fairly tight) timeframe that we were given.
Over time, the software decided that there were some pairs that didn't need to be compared (if you take the 'best' and the 'worst' piece from a sample, you don't really need to compare them to see which is best, as it's fairly obvious...)
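For the curious, here is a little sketch (my own illustration in Python, not the actual e-scape software) of how a system might skip the "obvious" pairs: only portfolios whose running win totals are still close to each other get put in front of a judge again.

```python
from itertools import combinations

def pairs_worth_judging(wins, threshold=3):
    """Return pairs whose current win totals differ by less than `threshold`.

    `wins` maps portfolio id -> wins so far. Pairs that are already far
    apart (the 'best' against the 'worst') are dropped, since the outcome
    of that comparison is fairly obvious.
    """
    return [(a, b) for a, b in combinations(wins, 2)
            if abs(wins[a] - wins[b]) < threshold]

# Toy example with made-up win totals for four portfolios:
wins = {"P1": 9, "P2": 8, "P3": 1, "P4": 0}
print(pairs_worth_judging(wins))  # only P1 vs P2 and P3 vs P4 remain
```

The threshold and the win-count bookkeeping here are assumptions for illustration; the real software will use something more sophisticated, but the principle of concentrating judges' time on the close calls is the same.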

As one of the judging team, I have to say that this whole process was a fascinating insight into the techniques (and in some cases, deficiencies) of the current systems of assessing large numbers of exam candidates. I certainly learnt a great deal about the way that assessment works. A related issue is that this could form an approach to the management of controlled assessment, as the software on the PDAs could be set up to...

The appendices in the report, which can be downloaded from THIS PAGE of the GA website, would reward closer reading by those who are interested in an alternative approach, which also taps into the...

The later appendices contain too many 'hard sums' for me, but I think they say that I was a reasonable judge - was I more Craig Revel Horwood than Bruno Tonioli?

For those who want a little more, Tony Wheeler has published a useful summary of the whole process on the FUTURELAB website's FLUX section, and there is also a TEACHERS TV programme on e-assessment. Mobile phones are mentioned here too (iPhones perhaps?)

This includes a useful analysis of the comparative pairs method, and the reason why an e-portfolio makes the judging of this a possibility....

Alastair explained how Louis Thurstone had developed this theory of assessment in the 1920s, based on simply comparing one piece of work directly with another. Alastair argued that abstract assessment criteria did not help in the process of marking, as examiners inevitably convert the abstract into concrete exemplars, increasing variability and unreliability. So why not just compare work directly? If enough comparisons between two different pieces of work are made by enough judges, a very reliable rank order emerges (the one that always wins moves to the top, the one that always loses goes to the bottom and the others spread appropriately between). I understand that QCA use this system already to monitor inter-board comparability, basically to ensure an ‘A’ in maths from OCR is the same as an ‘A’ in maths from Edexcel.
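The idea above can be sketched in a few lines of Python (again, my own toy illustration, not the e-scape system): tally the wins from lots of pairwise "which is best?" judgements, and a rank order drops out, even when the individual judges are a bit noisy.

```python
import random
from itertools import combinations

def rank_by_comparative_judgement(portfolios, judge, rounds=20):
    """Rank portfolios by tallying wins over many pairwise judgements.

    `judge(a, b)` returns the winner of one comparison - here a stand-in
    for a human judge's "which is best?" decision.
    """
    wins = {p: 0 for p in portfolios}
    for _ in range(rounds):
        for a, b in combinations(portfolios, 2):
            wins[judge(a, b)] += 1
    # The one that always wins rises to the top, the one that always
    # loses sinks to the bottom, and the rest spread out in between.
    return sorted(portfolios, key=lambda p: wins[p], reverse=True)

# Pretend each portfolio has a hidden quality, and the judge usually -
# but not always - picks the better one:
quality = {"A": 0.9, "B": 0.5, "C": 0.2}

def noisy_judge(a, b):
    return a if quality[a] + random.gauss(0, 0.1) > quality[b] + random.gauss(0, 0.1) else b

random.seed(1)
print(rank_by_comparative_judgement(list(quality), noisy_judge))
```

Despite the noise, enough comparisons recover the underlying order, which is exactly the robustness claim being made for the method.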

The problem lies in the scale of the award. With twenty paper scripts and half a dozen judges it can be done round a table, but when there are thousands of scripts and dozens of judges it becomes a logistical impossibility. However, web-based portfolios, like the e-scape set of portfolios, are available anywhere and at any time each assessor has an internet connection. Multiple copies can be viewed at any time, making the paired process possible in a high-stakes assessment for the first time.
