If these are the ways of exams?

Perhaps the question should have stated "should have a minimum range of".
... but if the 'minimum range' only went down to 0.2Ω, one wouldn't necessarily be able to measure resistances below 0.2Ω - which, as I said, I do not believe would be 'adequate for purpose'.

Kind Regards, John
 
No. I didn't mean the minimum value measured.

I meant the minimum requirements for the task. The minimum range.
I.e. a multimeter with a 0.2-2Ω range.

It would verify that the bonding (main?) was adequate.
 
I made a mistake with the values in the above posts (now corrected).

I don't know why I am trying to find excuses.
It was a silly question.
 
The real problem with these unanswerable questions is the unfairness, in that many of the candidates will have wasted time trying to answer them.
From the examiner's point of view, he has to decide how to adjust the marking. If a 'long paper', with 6 questions, it's a real dilemma, but if a 'tick-test' with 100 questions, it's not so bad.
 
No. I didn't mean the minimum value measured. ... I meant the minimum requirements for the task. The minimum range. ... I.e. a multimeter with a 0.2-2Ω range.
I understood (or thought I understood) what you meant, but surely the things you say above are the same. If one has a meter with a range of 0.2Ω to 2Ω, that surely means that the minimum value that can be measured is 0.2Ω, since anything less than 0.2Ω will be displayed as zero, "<0.2Ω", "under" or something like that.
It would verify that the bonding (main?) was adequate.
I wouldn't have thought so. As I said, there appears to be guidance that the resistance of MPB conductors should not be >0.05Ω. How would you know if it were adequate (i.e. that the guidance was satisfied) if all you knew was that it was 'under 0.2Ω'?
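To put some (entirely made-up) numbers on that, here's a minimal sketch, assuming a meter whose lowest range simply reads "<0.2Ω" for anything below its floor:

```python
# Illustrative only: model a meter whose lowest range is 0.2-2 ohms,
# so anything under 0.2 ohm just shows as '<0.2'.

METER_FLOOR = 0.2    # ohms - bottom of the meter's lowest range (assumed)
GUIDANCE_MAX = 0.05  # ohms - indicative maximum for an MPB conductor

def meter_reading(true_resistance):
    """Return what the display would show for a given true resistance."""
    if true_resistance < METER_FLOOR:
        return "<0.2 ohm"
    return f"{true_resistance:.2f} ohm"

for r in (0.03, 0.08, 0.15, 0.50):
    verdict = "meets guidance" if r <= GUIDANCE_MAX else "exceeds guidance"
    print(f"true {r:.2f} ohm -> displays '{meter_reading(r)}' ({verdict})")

# The 0.03, 0.08 and 0.15 ohm cases all display identically, so such a
# meter cannot tell a satisfactory bond from one at three times the
# guidance figure.
```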

Kind Regards, John
 
The real problem with these unanswerable questions is the unfairness, in that many of the candidates will have wasted time trying to answer them. ... From the examiner's point of view, he has to decide how to adjust the marking. If a 'long paper', with 6 questions, it's a real dilemma, but if a 'tick-test' with 100 questions, it's not so bad.
Indeed - but, as I said, with MCQs the handling/processing of the results has become very sophisticated, and I've been somewhat involved in the planning of such processes in my time. To give a few examples:

If a 'higher than expected' proportion of candidates give the same 'wrong' answer, the question (and its answer) will, as one would expect, be revisited.

'Poorly discriminatory' questions will either be excluded or given a low weighting - that basically relates to the situation in which correctness of response to a particular question correlates poorly with candidates' overall performance across all questions (and hence will include those questions which most people get right, or most people get wrong, regardless of their overall performance).

However, it can then get much more clever in terms of looking at patterns of responses across questions and groups/types of questions.
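By way of illustration only - this is my own toy sketch, not any exam board's actual algorithm - the 'poor discrimination' check might look something like this:

```python
# Toy illustration of flagging 'poorly discriminatory' questions:
# correlate each item's correctness with candidates' scores on the
# rest of the paper. All data here is simulated.
import numpy as np

rng = np.random.default_rng(0)
n_candidates, n_items = 200, 10

# Simulate 0/1 responses from a simple ability/difficulty model
ability = rng.normal(size=n_candidates)
difficulty = rng.normal(size=n_items)
p_correct = 1.0 / (1.0 + np.exp(-(ability[:, None] - difficulty[None, :])))
responses = (rng.random((n_candidates, n_items)) < p_correct).astype(int)

# Make item 0 'bad': responses unrelated to ability (an ambiguous question)
responses[:, 0] = rng.integers(0, 2, size=n_candidates)

totals = responses.sum(axis=1)
for item in range(n_items):
    rest_score = totals - responses[:, item]  # exclude the item itself
    r = np.corrcoef(responses[:, item], rest_score)[0, 1]
    flag = "  <- candidate for exclusion/low weighting" if r < 0.2 else ""
    print(f"item {item}: discrimination r = {r:+.2f}{flag}")
```

The ambiguous item shows near-zero correlation with overall performance, which is exactly the pattern that would get it excluded or down-weighted.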

Kind Regards, John
 
I agree but I was just pointlessly trying to find a reason.

Therefore the answer is just wrong.
 
I agree but I was just pointlessly trying to find a reason. Therefore the answer is just wrong.
Exactly.

However, as I've said, given a good marking/review system, if the candidates were better than the examiners - in that a substantial proportion gave the same 'wrong' answer - then they ought to end up getting credit for it (for being right!).

Kind Regards, John
 
With a novice likely to score 10, and an expert still only 15 out of 20, the exam is clearly not fit for purpose.
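For what it's worth, a quick simulation with made-up hit rates (novice 50% per question, expert 75%) shows just how much the two groups' scores overlap on a 20-question paper:

```python
# Made-up probabilities, purely to illustrate the poor separation of scores.
import random

random.seed(1)
N_QUESTIONS = 20

def sit_exam(p_correct):
    """Score one simulated candidate who gets each question right
    with probability p_correct."""
    return sum(random.random() < p_correct for _ in range(N_QUESTIONS))

novices = [sit_exam(0.50) for _ in range(10_000)]  # ~10/20 on average
experts = [sit_exam(0.75) for _ in range(10_000)]  # ~15/20 on average

cut = 12  # an assumed pass mark
print(f"novices passing at {cut}/20: {sum(s >= cut for s in novices) / 10_000:.0%}")
print(f"experts failing at {cut}/20: {sum(s < cut for s in experts) / 10_000:.0%}")
# Wherever the pass mark is set, a fair few candidates end up on the
# wrong side of it - the paper barely discriminates.
```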

Out of interest, I looked at a video on testing an earth rod; it shows an earth rod tester being used, but the questions in the exam only give the option of using an earth loop impedance tester.
 
These questions seem to be aimed at testing the student's recall of what was said in the lessons rather than the student's ability to understand what was being explained.

Knowing which range of resistance to use on the meter doesn't mean the student also understands why that range is the optimum range to use.

The exam does not discover whether the student would be able to use a meter which did not have those ranges available.

Unfair on the students who pass and then discover that the real world is nothing like the classroom.
 
I know - and that's why MCQs have caused me to tear out more hair than most other things - and why I so often felt the need to write 'qualifying statements' on the paper, even though I knew that no-one was going to read what I'd written!
There's no paper to write on when the test is done using a computer.

Nor is there anybody reading the answers - by the time you've left the room, collected your belongings from the locker, your certificate (or not) is waiting for you at the reception desk.


The saving grace these days is that the marking/scoring algorithms are pretty intelligent. If results indicate that most of the candidates were more sensible/knowledgeable than the person who wrote the question, the question will either be ignored or else the matter of the 'correct answer(s)' will be revisited (and, if necessary, results re-scored accordingly).
Possibly. Eventually. But too late to be of any use to those who failed the test right there and then, and needed to pass it as a requirement of their job. Or to stand a chance of getting a job.

What happens (certainly in my case) is that the candidate tries to work out in which way the question setter was wrong, so that they can work out which wrong answer they should pick in order to get a point.
 
Indeed - but, as I said, with MCQs the handling/processing of the results has become very sophisticated, and I've been somewhat involved in the planning of such processes in my time. ... 'Poorly discriminatory' questions will either be excluded or given a low weighting ... However, it can then get much more clever in terms of looking at patterns of responses across questions and groups/types of questions.
And for those people for whom this revisiting came too late, because they didn't get the qualification, and thus they didn't get a good performance rating in their annual review, or didn't get selected for a job interview?
 
But I'll tell you an interesting story about people not performing as expected on a given question.

I attended a talk once given by the Special Educational Needs coordinator for Bucks, in the context of gifted children, and he was saying how hard it was for tests done in class to identify the over-performers compared to the under-performers, sometimes because the highly able children would "think round" questions.

He gave an example of a primary school class doing a test, and one question in it was

Which is the odd one out:

CAVE
BARN
TRACTOR
TENT

A child who the teacher knew to be very bright picked the "wrong" one.

Any idea which was the "right" answer, and which this child picked, and why?
 
CAVE is the one that is not man-made; all the others have other words in them (tenT).

TRACTOR is the one that is mobile in use, and the only one with more than one syllable.

Each one is an odd man out in some way.
 
I know - and that's why MCQs have caused me to tear out more hair than most other things - and why I so often felt the need to write 'qualifying statements' on the paper, even though I knew that no-one was going to read what I'd written!
There's no paper to write on when the test is done using a computer. ... Nor is there anybody reading the answers - by the time you've left the room, collected your belongings from the locker, your certificate (or not) is waiting for you at the reception desk.
Fortunately, my personal experiences pre-dated that - at least I had paper on which to scribble away my frustrations.
Possibly. Eventually. But too late to be of any use to those who failed the test right there and then, and needed to pass it as a requirement of their job. Or to stand a chance of getting a job.
My comments about 'sophisticated' results analysis/processing obviously have fairly limited applicability to 'instant marking' systems such as you are talking about - which are necessarily fairly dumb (since some human input is required for the 'sophisticated' approaches).

Kind Regards, John
 
