8.12 Station Cut Score
Cut scores are calculated according to the standard-setting method you apply. In its simplest form, a cut score divides candidates into two groups: anyone scoring at or above it passes, and anyone below it fails. The cut score can then be adjusted to set the final passing score.
A cut score can exist for the entire exam as well as for each individual station. Typically, the station cut scores are summed to give the exam cut score, which can then be adjusted by adding a standard error of measurement (SEM).
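The arithmetic described above can be sketched as follows. This is a minimal illustration only; the station data, standard-setting values, and the SEM value are invented examples, not output of the product:

```python
# Hypothetical sketch of the cut-score calculation described above.
# All numbers below are invented example data.

def station_cut_score(max_score: float, standard_setting_value: float) -> float:
    """Cut score = (max score of the station * standard-setting method value) / 100."""
    return max_score * standard_setting_value / 100

stations = [
    {"max_score": 20, "standard_setting_value": 60},  # e.g. a borderline value of 60%
    {"max_score": 30, "standard_setting_value": 55},
    {"max_score": 25, "standard_setting_value": 65},
]

station_cuts = [station_cut_score(s["max_score"], s["standard_setting_value"])
                for s in stations]
exam_cut = sum(station_cuts)  # station cut scores summed -> exam cut score

# Adjust the exam cut score by a number of SEMs (example value of 2.0),
# e.g. raising it to reduce the chance of a false pass.
sem = 2.0
adjusted_cut = exam_cut + 1 * sem

print(station_cuts)   # [12.0, 16.5, 16.25]
print(exam_cut)       # 44.75
print(adjusted_cut)   # 46.75
```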
| Item | Description | Useful links |
|---|---|---|
| Mean score | Average score | |
| Cut score | Calculated as (max score of station × standard-setting method value) / 100 | |
| Max score | Maximum score of the question/station (OSCE: sum of the observation criteria scores) | |
| Standard deviation | `scored.standard_deviation()` → `numpy.std()` | |
| Alpha (if station deleted) | Cronbach’s alpha is a measure of the reliability, or internal consistency, of a set of scale or test items. The reliability of a measurement is the extent to which it is a consistent measure of a concept, and Cronbach’s alpha is one way of measuring the strength of that consistency. It is computed by correlating the score for each scale item with the total score for each observation (usually individual test takers), and comparing that to the variance of the individual item scores. The resulting α coefficient ranges from 0 to 1. If all scale items are entirely independent of one another (i.e. uncorrelated, sharing no covariance), then α = 0; if all items have high covariances, α approaches 1 as the number of items grows. In short, the higher the α coefficient, the more the items share covariance and the more likely they measure the same underlying concept. | https://data.library.virginia.edu/using-and-interpreting-cronbachs-alpha/ |
| Passes | Number of passes per criterion | |
| Fails | Number of fails per criterion | |
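The “Alpha (if station deleted)” statistic can be sketched as below. This is an illustrative implementation of the standard Cronbach’s alpha formula, not the product’s actual code; the candidate-by-station score matrix is invented example data:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a 2-D array: rows = candidates, columns = items/stations."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)       # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)   # variance of candidate totals
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Invented example: 5 candidates scored on 4 stations.
scores = np.array([
    [4, 5, 3, 4],
    [3, 4, 3, 3],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 4, 3, 4],
], dtype=float)

alpha = cronbach_alpha(scores)  # ~0.966 for this data: high internal consistency

# "Alpha if station deleted": recompute alpha with each station removed in turn.
# A higher value after deletion suggests that station is inconsistent with the rest.
alpha_if_deleted = [cronbach_alpha(np.delete(scores, i, axis=1))
                    for i in range(scores.shape[1])]
```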