@legacy

I definitely know that last time, in December, the passing rate was close to 30% rather than 35%. And I know the gold-standard 70% passing score, so that is what I entered into the linear equations above.

Setting the analysis aside for now:

Here is a philosophical argument against the MPS, if CFA Institute does use this method. The MPS approach does not seem to account for how easy or how tough the test is. Say a very easy test comes along (not too dissimilar from what happened in June 2009). The top 1% would then score very close to 100, let's say 98. The MPS rule would say any candidate who scores 70% of 98 passes, which translates to 68.6. But what the MPS does not account for is HOW MANY candidates, and what percentage of them, fall in the zone between 68.6 and 100. Potentially 50% of the candidates could land in that zone! Percentile-based scoring would eliminate this bias and ensure that only 30% of candidates pass. Of course, the criticism there is that if a smart batch of candidates appears, the average candidate is at a disadvantage. So neither scoring approach is totally fair. Nor is an ad-hoc score, because that would also not account for the easiness or toughness of the test. A 70% score on the Dec 08 test definitely meant more than a 70% score in June 2009.
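To make that concrete, here is a minimal sketch (Python, with made-up scores; the "70% of the top 1% average" rule is only the rumored method being debated in this thread) showing how an easy paper can push a huge share of candidates above the MPS, while a percentile cutoff passes a fixed share by construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical percentage scores for 10,000 candidates on an "easy" paper.
scores = np.clip(rng.normal(loc=68, scale=12, size=10_000), 0, 100)

# Rumored MPS rule: pass mark = 70% of the average score of the top 1%.
top_1pct_avg = np.mean(np.sort(scores)[-len(scores) // 100:])
mps = 0.70 * top_1pct_avg
mps_pass_rate = np.mean(scores >= mps)         # can easily exceed 50% on an easy paper

# Percentile rule: pass exactly the top 30% of candidates, whatever their raw scores.
pct_cutoff = np.percentile(scores, 70)
pct_pass_rate = np.mean(scores >= pct_cutoff)  # ~30% by construction

print(f"MPS = {mps:.1f}, pass rate under the MPS rule = {mps_pass_rate:.1%}")
print(f"Percentile cutoff = {pct_cutoff:.1f}, pass rate = {pct_pass_rate:.1%}")
```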

I do think CFA Institute should have an either/or scoring approach: if a candidate scores more than 70%, he passes for sure, or if the candidate scores in the top 25th percentile, he passes for sure. But if the candidate is neither in the top 35th percentile nor above 65%, then he or she must not pass. That leaves a small wedge in between which can act as a buffer for the toughness or easiness of the test and can be handled at their discretion.
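A quick sketch of that either/or rule (thresholds taken straight from the paragraph above; "top 25th percentile" is read as standing at or above the 75th percentile, and the exact zones are just my proposal, not anything CFA Institute actually does):

```python
def either_or_decision(score, percentile):
    """Apply the proposed rule to one candidate.

    score      -- raw exam score in percent (0-100)
    percentile -- candidate's standing, where 75 means top 25% of all takers
    """
    if score > 70 or percentile >= 75:
        return "pass"               # automatic pass
    if score <= 65 and percentile < 65:
        return "fail"               # automatic fail
    return "discretionary"          # the wedge: adjusted for how tough the paper was

# Example: a 68% raw score at the 70th percentile falls in the buffer zone.
print(either_or_decision(68, 70))
```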





mfin27,

I get what you are saying. The problem I have with percentiles (even though the probability is slim) is what you mentioned about some candidates being disadvantaged in a given environment.

Obviously people smarter than me have thought about this and have likely come up with the best solution using the Angoff method. It does suck that we don't really have at least a general reference if we can't use the top-1% method as a guide. The Angoff method should, in theory, account for whether a test is harder or easier, because the experts would say that the minimally qualified candidate should get, say, 168 questions right on an easy test and 156 on a hard one (just as an example).
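A rough sketch of how an Angoff-style cutoff works (the panel numbers are invented, and the 240-question total is just the Level I format, used here for illustration):

```python
# Each judge estimates how many of the 240 questions a minimally
# qualified candidate would answer correctly on this particular paper.
judge_estimates_easy = [170, 165, 168, 172, 166]   # easier paper -> higher estimates
judge_estimates_hard = [158, 154, 156, 160, 152]   # harder paper -> lower estimates

def angoff_cutoff(estimates, total_questions=240):
    """Angoff passing score: the average of the judges' estimates, as a percentage."""
    return 100 * sum(estimates) / (len(estimates) * total_questions)

print(f"easy paper MPS ~ {angoff_cutoff(judge_estimates_easy):.1f}%")  # ~70%
print(f"hard paper MPS ~ {angoff_cutoff(judge_estimates_hard):.1f}%")  # 65%
```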

My guess is that in reality they use a combination of the two methods to come up with a reasonable outcome and check one against the other for conflicts. If the top 1% were scoring 90%, so that 63% would be the pass mark by that standard, but the panel decided that the minimally qualified candidate should get 65% right, then they could say "wait a minute" and come up with a compromise, assuming a 2% deviation is out of the ordinary and cause for concern, of course.
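For what it's worth, that cross-check could look something like this (the 2% tolerance is just the figure used above, and the whole reconciliation step is my guess at their process, not anything documented):

```python
def reconcile(top_1pct_avg, angoff_mps_pct, tolerance=2.0):
    """Compare the '70% of the top 1%' figure with the Angoff panel's figure."""
    top_performer_mps = 0.70 * top_1pct_avg
    gap = abs(top_performer_mps - angoff_mps_pct)
    if gap >= tolerance:
        return f"conflict: {top_performer_mps:.1f}% vs {angoff_mps_pct:.1f}%, needs a compromise"
    return f"consistent: pass mark around {(top_performer_mps + angoff_mps_pct) / 2:.1f}%"

# The example above: top 1% averaging 90% implies 63%, while the panel says 65%.
print(reconcile(top_1pct_avg=90, angoff_mps_pct=65))
```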

I hear people saying that this test was easier, and I agree, but only because it was my second time taking it; I was better prepared, so I am biased and it should seem easier to me. I assume they know how to write the test properly and keep it fairly consistent.

Prob just semantics at this point. ;-)

Ben


keelim, undoubtedly that would have an impact. That said, I'd say the vast majority (upwards of 90%) of people who are going to pass the test will do so by a margin where 10 minutes makes little difference. My point is that even with both those factors considered, I doubt it moved the average score by more than about 1%.


When I say Indians, I mean candidates appearing from India. That counts Indians and Chinese appearing from the US as Americans, and so on.

