ded outcomes are higher productivity, improved quality of work, improved communication across horizontal and vertical lines, higher worker morale, greater job satisfaction, increased output and sales, reduced turnover, a reduced scrap rate, and lower absenteeism.
To determine whether these outcomes are achieved, we will measure the effectiveness of the program both before and after the training, allowing enough time to elapse after the program for results to emerge. Our evaluation design also guards against over-optimistic conclusions: one of its most important goals is to find out how, if at all, the training program could be improved.
In the first level of our evaluation design, we will record the participants’ reactions to the overall program immediately after it ends, using instruments such as a questionnaire with both open-ended and closed-ended items (including rating scales) to determine whether participants have a positive attitude toward all components and sub-components of the program. From this we will be able to identify the program’s most important strengths and weaknesses. We will respect the confidentiality of participants’ responses by keeping the instruments anonymous, which should produce more honest answers.
Level two of our design is even more important to the evaluation. Here, we will gauge the participants’ learning by matching their learning outcomes against the learning objectives the trainer stated at the beginning of the program. The assessment will cover three areas: knowledge, skills, and attitudes. For example, if the training program is on computer systems, we will evaluate after the training whether the participants know the difference between Windows 95 and Windows ME (knowledge); whether they can install a new operating system on a computer (skill); and whether their