Writing (or perhaps I should say "creating", for the benefit of UK/Canada/Australia/NZ grammarians) good exams is not a trivial task. You want very much to test certain concepts, and you don't want the exam to measure things you consider comparatively unimportant. For example, the first exam I ever took in college was in honors mechanics; out of a possible 30 points, the mean was a 9 (!), and I got a 6 (!!). Apart from being a real wake-up call about how hard I would have to apply myself to succeed academically, that test was a classic example of an exam that did not do its job. The reason the scores were so low is that the test was considerably too long for the time allotted. Rather than measuring knowledge of mechanics or problem-solving ability, the test largely measured people's speed of work - not an unimportant indicator (brilliant, well-prepared people do often work relatively quickly), but surely not what the instructor cared most about, since there usually isn't a need for raw speed in real physics or engineering.
Ideally, the exam will have enough "dynamic range" that you can get a good idea of the spread of knowledge among the students. If the test is too easy, you end up with a grade distribution that is very top-heavy, and you can't distinguish between the good and the excellent. If the test is too difficult, the distribution is soul-crushingly bottom-heavy (leading to great angst among the students), and again you can't tell apart those who really don't know what's going on from those who just slipped up. Along these lines, you also need the test to be comparatively straightforward to take (step-by-step multipart problems, where there are still paths forward even if one part is wrong) and to grade.