8.7 Double Marked Stations

It is possible to assign two (or more) Examiners to assess a candidate in an OSCE station. These Examiners are presented with the same station resources and candidate marksheet, which they can mark independently in parallel. Marks from these Examiners can be combined in different ways: by averaging the scores or by adding them together. If global rating criteria are used, Examiners must agree on these in order to create a valid mark for standard setting methods such as borderline regression.

You can run the marking concordance reports to identify any discordances between the markers. As part of Exam Setup you can define whether concordance validation is based on a per-criteria or whole-station basis. Concordance reports are available from the Sessions page, and will only appear if a double marked station is being run.

There are several approaches to the delivery and processing of double marking within Practique. The options below relate to exams assessed by an examiner (OSCE, Oral, MMI, etc.).

Examiners

Practique allows any number of examiners to be assigned to mark a station; however, the most common configuration is two examiners.

Marking Basis

Marks can be processed either on a per-criteria basis or on an overall basis. Per-criteria marking means that each assessment criterion is compared between the two examiners and processed accordingly. An Overall basis looks at the total marks given by each examiner and compares them holistically.

For per-criteria marking, if a station has 5 observation criteria, each with a maximum mark of 5, the marks for each individual criterion (out of 5) will be compared (and then averaged/summed/reconciled as selected). For Overall marking, the total marks out of 25 will be compared (and averaged/summed/reconciled as selected).
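For illustration only, the short Python sketch below (not Practique's implementation; the criterion names and marks are invented) shows the difference between the two bases:

```python
# Illustrative sketch only - not Practique's implementation. The criterion
# names and marks below are invented.
examiner_a = {"c1": 4, "c2": 3, "c3": 5, "c4": 2, "c5": 4}
examiner_b = {"c1": 4, "c2": 5, "c3": 5, "c4": 3, "c5": 4}

# Per-criteria basis: each criterion's marks are compared individually.
per_criteria_diffs = {c: abs(examiner_a[c] - examiner_b[c]) for c in examiner_a}
print(per_criteria_diffs)          # {'c1': 0, 'c2': 2, 'c3': 0, 'c4': 1, 'c5': 0}

# Overall basis: only the station totals (out of 25) are compared.
total_a, total_b = sum(examiner_a.values()), sum(examiner_b.values())
print(abs(total_a - total_b))      # 3
```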

Mark Processing

There are three options for how multiple marks can be processed in Practique (illustrated in the sketch after this list):

  • Summation - the system will simply add the marks of each examiner together, so that the candidate’s final mark for a station is the sum of all marks.

  • Average - the system will average the marks of each examiner, in order to give the final mark for a candidate. In essence, Summation and Average have the same effect, scaled to the total marks of a station.

  • Manual Reconciliation - every discrepancy between examiners must be resolved. This in effect means that they must agree completely on the marks given to a candidate. The reconciliation takes place after the exam.
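The following Python sketch illustrates the effect of the three options; the function name, input shape and behaviour on disagreement are assumptions, not Practique's code:

```python
# Illustrative sketch only - the function name, input shape and behaviour on
# disagreement are assumptions, not Practique's code.
def combine_marks(marks, method):
    """Combine one station's (or one criterion's) marks from multiple examiners."""
    if method == "summation":
        return sum(marks)                   # final mark is the sum of all marks
    if method == "average":
        return sum(marks) / len(marks)      # final mark is the mean of all marks
    if method == "manual":
        # Manual reconciliation: examiners must agree completely; any
        # discrepancy is resolved after the exam.
        if len(set(marks)) == 1:
            return marks[0]
        raise ValueError("Discrepancy - examiners must reconcile this mark")
    raise ValueError(f"Unknown processing method: {method}")

print(combine_marks([18, 21], "summation"))  # 39 (out of 50)
print(combine_marks([18, 21], "average"))    # 19.5 (out of 25)
```

As the two example calls show, summation and averaging rank candidates identically; only the scale of the final mark differs.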

Marking Tolerance

It is possible to set a marking tolerance for an exam, which allows marks to be processed based on the degree to which examiners' marks are concordant. For example, the exam can be configured so that if examiners' marks differ by less than 10% they are averaged, and above this threshold the examiners will need to reconcile the marks with each other.
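As a rough illustration of this behaviour (the function name is hypothetical, and treating the 10% as a fraction of the station maximum is an assumption):

```python
# Illustrative sketch only - the function name is hypothetical, and measuring
# the 10% difference as a fraction of the station maximum is an assumption.
def apply_tolerance(mark_a, mark_b, max_mark, tolerance=0.10):
    """Average concordant marks; flag discordant ones for reconciliation."""
    if abs(mark_a - mark_b) / max_mark < tolerance:
        return ("averaged", (mark_a + mark_b) / 2)
    return ("reconcile", None)    # examiners must resolve this themselves

print(apply_tolerance(18, 19, max_mark=25))  # ('averaged', 18.5) - 4% apart
print(apply_tolerance(18, 21, max_mark=25))  # ('reconcile', None) - 12% apart
```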

Marking Report

It is possible to generate a marking report during an exam, which will show if there are any discrepancies between examiners that need resolution. This can be generated per-examiner, or overall for the exam.
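A minimal sketch of what such a discrepancy listing might cover (the station names, examiner labels and marks are invented, and this is not the format of the actual report):

```python
# Illustrative sketch only - station names, examiner labels and marks are
# invented; this is not the format of Practique's marking report.
marks_by_station = {
    "Station 1": {"Examiner A": 18, "Examiner B": 21},
    "Station 2": {"Examiner A": 20, "Examiner B": 20},
}

# List every station whose examiners disagree and still needs resolution.
for station, marks in marks_by_station.items():
    if len(set(marks.values())) > 1:
        print(f"{station}: discrepancy needs resolution - {marks}")
```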

Borderline Criteria

If there are criteria on the exam that are used as Borderline Markers, these have to be fully concordant between examiners. This is because, in order to use a borderline group or borderline regression standard setting methodology, two parameters are needed for each candidate's station: the station score and the borderline category. The score can be calculated from the average/sum/reconciliation of the examiners' marks; however, the borderline category can only be set by agreement between the examiners.
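A minimal sketch of the agreement rule for the borderline category (the category labels and function are hypothetical, not part of Practique):

```python
# Illustrative sketch only - the category labels and function are hypothetical.
def agreed_borderline_category(categories):
    """Return the borderline category only if every examiner recorded the same one."""
    return categories[0] if len(set(categories)) == 1 else None

print(agreed_borderline_category(["Borderline", "Borderline"]))  # 'Borderline'
print(agreed_borderline_category(["Pass", "Borderline"]))        # None - must be agreed
```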