SAQA (the South African Qualifications Authority) defines assessment as a structured process of gathering evidence and making judgements about an individual’s performance in relation to registered national standards and qualifications. This definition rests on the belief that a competent individual is best placed to assess the competence of another, which stands to reason. The question then arises: can software that uses assessment criteria marking templates be developed to assess candidate competence?
The standards generating bodies see no dilemma here, as long as a competent individual who is a qualified assessor controls and remains responsible for the entire process. SAQA invites innovation and technology into the training and development process, and the various criteria have been written with this in mind. In this scenario, assessment criteria marking templates need to adhere to the approved methods of assessment. The most important method is observation by an assessor competent at the particular task: the candidate performs the function being assessed under the conditions normally expected in the workplace. With this method it becomes clear that templates serve as results-recording tools. No programme can, or should be allowed to, replace the involvement of a trained observer; the software is an aid that streamlines the process and improves efficiency.
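The idea of a template as a results-recording tool, with judgement left to the assessor, can be sketched in code. This is a minimal illustration only; the class and field names are hypothetical and not drawn from any SAQA specification.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a marking template used as a results-recording
# tool during observation. The assessor records what was observed; the
# software only collates the results.

@dataclass
class CriterionResult:
    criterion: str      # the registered assessment criterion observed
    observed: bool      # did the candidate demonstrate the criterion?
    notes: str = ""     # the assessor's free-text observations

@dataclass
class ObservationRecord:
    candidate: str
    assessor: str       # a qualified assessor remains responsible throughout
    results: list[CriterionResult] = field(default_factory=list)

    def record(self, criterion: str, observed: bool, notes: str = "") -> None:
        """Log one observed criterion against this candidate."""
        self.results.append(CriterionResult(criterion, observed, notes))

    def all_criteria_met(self) -> bool:
        """Collate results; the judgement of competence stays with the assessor."""
        return bool(self.results) and all(r.observed for r in self.results)
```

A record like this streamlines the paperwork of an observation session, but the `observed` flag is still set by the trained observer, not by the program.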
The next method in order of importance is questioning. For some tasks, all the requirements of this method can be fulfilled by software-driven assessment criteria marking templates. However, in some scenarios the SAQA assessment criteria stipulate that some or all questioning must be done via interview or panel appraisal.
The method of simulation is already widely practised where expense or danger prevents an actual scenario from being assessed. Consider pilots and firemen. For pilots, the simulator is essential, and various assessment templates are part of the simulation programme: a trained assessor examines the results and draws the conclusions that determine competence, with the software’s output serving as the basis of the evaluation. For firemen and similar jobs, computer software can be integrated to assist in correlating the results of observations. Again, a trained assessor conducts the final analysis of the results produced.
Fourth on the scale of preferred evidence and assessment methods is the evaluation of a final product. The final product, however, does not give a full picture of how a person produced it, so it demonstrates only part of a person’s competence. Once more, assessment criteria marking templates would be employed as tools to correlate results effectively and efficiently.
According to SAQA, a minimum of two of the above methods is required for an assessment of task competence. The ease and effectiveness of the marking software, combined with the associated hands-on verification, proves extremely advantageous here. In addition, various tasks require a specific amount of evidence to be gathered before a candidate in training can be deemed competent. Although testimony is accepted as an assessment method, it can only be used to support the recognised evidence-gathering techniques above.
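The rule just described, that at least two recognised methods are required and that testimony may only supplement them, is simple enough to sketch as a validation step. The method names below are shorthand labels chosen for illustration, not official SAQA terminology.

```python
# Hypothetical check of the rule above: at least two recognised
# evidence-gathering methods must be used for an assessment of task
# competence. Testimony may support the evidence but never counts
# toward the minimum of two.

RECOGNISED_METHODS = {"observation", "questioning", "simulation", "product evaluation"}
SUPPORTING_ONLY = {"testimony"}

def methods_sufficient(methods_used: set[str]) -> bool:
    """Return True if the chosen methods satisfy the two-method minimum."""
    recognised = methods_used & RECOGNISED_METHODS
    # Anything in SUPPORTING_ONLY is ignored for the count.
    return len(recognised) >= 2
```

So a plan combining observation and questioning passes the check, while observation backed only by testimony does not.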
There is substantial room for technology, especially when the varying requirements of different industries are considered. Further changes may be underway as computerised evaluation processes advance. However, we have already seen the significant benefit of such software and assessment criteria marking templates in accelerating and enhancing the corroboration of SAQA assessment criteria by providing evidence of competence.