Sometime in 2000 or 2001, I was asked to create a skills assessment for senior-level relational database management systems experts for a very young Brainbench. I was not alone: the company selected a total of four of us, each given the assignment of coming up with forty-five multiple-choice problems related to the general concepts of RDBMS. It took me about two days to complete the task, and a few weeks later, when the test went into public beta, folks could take it and vote on the quality of the questions. The results of the votes were not shown to the public, and we (the consultants who created the questions) were not privy to the voting statistics either. Several weeks after that, the test became official, and I aced it: every one of the forty-five questions was from my packet.
Despite being rather proud of myself for a relatively small accomplishment, I was genuinely surprised; the submissions from the other consultants were all – in my opinion – very, very good. In fact, I felt as though some of my own paled in comparison. It seemed that every other submission came from folks who understood the topic thoroughly and were masters of their field. I did later learn that I was the only one of the four hired who did not have a computer science degree. Talk about humbling.
Today I decided to take a few skills assessment tests on Elance – a leading online freelance marketplace – covering a variety of technical subjects. Among them were tests evaluating one’s comprehension of Linux and Amazon Web Services.
I was disgusted.
The grammar was horrible. The content was filled with fluff and trash. The questions weren’t representative of someone’s working knowledge of the subject – in fact, some even lifted text from “About Us” sections that described the company, not the service provided. And in the Linux test specifically, there were questions where multiple answers were correct but only one could be chosen, and others where no answer was technically correct, yet a choice had to be made. Most appalling of all: questions on obscure, unnecessary trivia like “Which of the following software packages is used to create a closed-circuit television system?” I had to look that one up after the fact. I didn’t take a test on how to set up a video system; I took a test on Linux skills. I highly doubt the MCSE or MVP exams ask for the steps of motion tweening in Flash.
It was quite obvious that the tests were created by folks with limited knowledge of the subject matter. In fact, they were probably written – at least in large part – by the lowest bidder, who may very well have been a non-native-English-speaking administrative assistant. Hell, nowadays anyone with Internet access thinks they have the skills and marketability to work as a professional freelancer. Some do… most – and I really mean MOST – do not. These so-called “skills assessment” tests were proof positive of that; they’re a joke, and anyone serious about testing the skills of others would be ashamed to have them represent their own knowledge of a given subject.
Granted, I can’t speak for all of the tests. There are many available, on a wide variety of topics. I’m sure some are much better than others, and that a few may actually be very good at gauging an individual’s skill. Elance just needs to bring that same quality across the board.
Because if I can take a test on a subject of which I admittedly have almost zero knowledge, be confused more by the spelling and sentence structure of nearly every question and answer than by the material itself, score a barely-passing 65%, and still land in the “Top 10%” of all test-takers, something must be wrong.