Testing candidates?
Hi Everyone - I just started with a new company of 170 employees. During my candidate experience, I was given a computer test on Word, Excel, etc. What I thought was strange was the way it was given: an IT person spoke to me over the phone, asking me questions, while she shadowed my computer screen. After the experience, I have some concerns and would like your thoughts and recommendations on improving the testing process.
Thanks
Comments
[quote user="HRBaby"] Hi Everyone - I just started with a new company of 170 employees. During my candidate experience, I was given a computer test on Word, Excel, etc. What I thought was strange was the way it was given: an IT person spoke to me over the phone, asking me questions, while she shadowed my computer screen. After the experience, I have some concerns and would like your thoughts and recommendations on improving the testing process.
Thanks [/quote]
What are your concerns?
Why do they do it the way they do? Is the IT person the most qualified MS Office professional in the building?
I'm concerned about the consistency of the questions being asked. Also, the results could be subjective because an actual person is giving the test rather than a software system.
I think they do it to save money on testing software, and I'm not sure the IT person is the most qualified.
Thx
[quote user="HRBaby"] I'm concerned about the consistency of the questions being asked. Also, the results could be subjective because an actual person is giving the test rather than a software system.
I think they do it to save money on testing software, and I'm not sure the IT person is the most qualified.
Thx [/quote]
OK -- so a good place to start would be to find out whether the testing process is scripted somewhere. Does the IT person walk every tested candidate through the exact same scenario and ask the same questions? If not, then I would certainly advise that the test be formalized so that test results, however they are construed, are at least apples to apples.
The subjectivity of the results depends on what's being tested. If it's a series of check-offs like "does the candidate demonstrate the ability to change fonts in the middle of a string of text?" then it can be "objective enough." But that won't go very far if the testing module is different for every person because the process is not formalized in a documented procedure that the IT person demonstrates an ability and willingness to follow.
There's nothing wrong with skill testing using a work sample but there are always problems with subjecting different people to different tests for the same job. The more objective the measurement and scoring, the safer the test is to use.
Do you think I should recommend using a 3rd party to conduct the tests? For example, a company I heard of called experttraining?
[quote user="HRBaby"] Do you think I should recommend using a 3rd party to conduct the tests? For example, a company I heard of called experttraining ? [/quote]
That's not really necessary for skills testing so much as for ensuring that the test is equitably administered. Skills tests aren't as open to criticism as psychological tests. What could be a more reasonable measure of a person's ability to perform a job than the person's demonstration of job-related skills and abilities?
You might start with a list of the office skills the job requires (e.g., control of font type, shape, and size; use of section breaks, page breaks, paragraph formatting, etc.) and then create a series of tasks whose completion demonstrates understanding of those skills. Have a plan for how you want to treat people who can efficiently find out how to do a task, versus those who can execute it without looking for instructions or hunting through menus for the option, versus those who can't figure it out or can't do it in a timely fashion. Perhaps put an overall time limit on the test and use that as the measure of efficiency, rather than scoring individual successes and failures.
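To make that concrete, here is a minimal sketch of how such a task list and scoring might be structured, in Python; the task names and weights are illustrative placeholders, not a recommended rubric:
[code]
# Hypothetical weighted task list for an MS Word skills test.
# Task names and weights are illustrative placeholders.
TASKS = [
    ("change font type and size mid-paragraph", 2),
    ("apply paragraph formatting (indent, spacing)", 2),
    ("insert a page break", 2),
    ("insert a section break", 1),
    ("add a footnote", 1),
]

def score(completed):
    """Return the fraction of weighted tasks completed.

    completed: set of task names the candidate finished.
    """
    total = sum(weight for _, weight in TASKS)
    earned = sum(weight for name, weight in TASKS if name in completed)
    return earned / total

# Example: a candidate who completed the first three tasks
print(score({"change font type and size mid-paragraph",
             "apply paragraph formatting (indent, spacing)",
             "insert a page break"}))  # 0.75
[/code]
The point of the weights is that every candidate is scored against the same written standard, no matter who administers the test.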
Glad to help.
Here's a testing idea:
Create a set of tasks to be done under observation. Put the generally more important things at the top of the list (font, format, paragraph control, and page numbers) and the less important things at the bottom (section breaks, footnotes, column control). If the critical parts can be done in, say, 20 minutes by someone who is really, really good, then allow 25 minutes to complete and score based on how deep down the list they get.
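A minimal sketch of that depth-based scoring rule in Python, assuming a placeholder task list ordered from most to least important:
[code]
# Hypothetical ordered task list: most important tasks first.
ORDERED_TASKS = [
    "font control", "basic formatting", "paragraph control",
    "page numbers", "section breaks", "footnotes", "column control",
]

def depth_score(completed):
    """Count how many tasks were completed in order from the top
    of the list before the first miss; deeper is better."""
    depth = 0
    for task in ORDERED_TASKS:
        if task not in completed:
            break
        depth += 1
    return depth

# A candidate who finished everything through page numbers scores 4.
print(depth_score({"font control", "basic formatting",
                   "paragraph control", "page numbers"}))  # 4
[/code]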
If you really throw some time into it, you can do this without observing candidates perform the operations, because the tasks will produce a document file you can examine to score task completion.
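If you go that route, automated checking of the submitted file could look something like the sketch below, assuming Python with the python-docx library; the three checks are examples only, not a complete rubric:
[code]
# Rough sketch of scoring a submitted .docx automatically with
# python-docx; the checks are illustrative examples only.
from docx import Document

def check_document(path):
    doc = Document(path)
    results = {}

    # Font control: did the candidate use more than one font?
    fonts = {run.font.name
             for para in doc.paragraphs
             for run in para.runs
             if run.font.name}
    results["changed fonts"] = len(fonts) > 1

    # Section breaks: each break adds a section to the document.
    results["section break"] = len(doc.sections) > 1

    # Footer content: catches typed footer text; note that page-number
    # fields are not plain text, so they would need a deeper check.
    results["footer content"] = any(
        para.text.strip()
        for section in doc.sections
        for para in section.footer.paragraphs)

    return results

print(check_document("candidate.docx"))
[/code]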