One of the most significant functions of a manager is to offer training to employees. Training is important because it provides employees with the knowledge to keep up with changing technology and the shifting socio-economic setup around the globe (Koontz & Weihrich, 2006). It is essential to acknowledge the fact that technology influences the effectiveness of training programs. Therefore, managers need to implement training programs for employees, which also act as a motivational aspect of work benefits for the employee. The manager should come up with ways to handle training, how to implement it and how to measure its effectiveness (Batra, 2007).
This is primarily so that the manager can measure the cost of training employees against the benefits gained from training them. One strategy for reducing the cost of employee training programs is to use technology.
In an effort to reduce the cost of training, a manager should use the tools at his disposal to compare the different training methods available (Ferreira, Erasmus, & Groenewald, 2009). Some of the training costs the manager should compare include the cost of booking facilities, travel costs and the hours employees spend away from the job.
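As a rough illustration of this comparison, the cost components named above can be tabulated per training method. The figures below are hypothetical assumptions for the sketch, not data from this study:

```python
# Hypothetical cost comparison between classroom training and e-training.
# All figures are illustrative assumptions, not data from the study.

def training_cost(facilities, travel_per_employee, hours_away, hourly_wage, employees):
    """Total cost = fixed facility cost + per-employee travel and lost work time."""
    return facilities + employees * (travel_per_employee + hours_away * hourly_wage)

# Classroom: booked facilities, travel, and two full days away from the job.
classroom = training_cost(facilities=5000, travel_per_employee=300,
                          hours_away=16, hourly_wage=25, employees=50)

# E-training: no facilities or travel, and roughly half the time away.
e_training = training_cost(facilities=0, travel_per_employee=0,
                           hours_away=8, hourly_wage=25, employees=50)

print(classroom)   # → 40000
print(e_training)  # → 10000
```

Under these assumed numbers, e-training costs a quarter of the classroom option, which is the kind of comparison the manager would weigh against the benefits gained.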
The emergence of e-training in recent years has been a breakthrough for corporate and educational organizations. At least 40% of employees in organizations advocating modern technology have adapted to e-training using computers and the internet. The United States is considered the leading country in terms of adopting e-training of employees. The significance of e-training in motivating employees to become productive has been cited as the primary goal of the evolving concept. However, the need to evaluate the effectiveness of e-training has become critical for managers who want to determine its value on both a short- and long-term basis.
Therefore, evaluating the effectiveness of the e-training modality is key to the continued use of the concept. Kirkpatrick's model is cited as an effective evaluation tool for e-training programs. The model is effective in determining whether personnel, money and time play a critical role in making an e-training program a success. The majority of organizations and managers opt to evaluate the effectiveness of an e-training program to identify the areas with the greatest potential.
Through evaluation, managers are able to align an organization's strategies to achieve predetermined goals. As indicated earlier, the value of an e-training program can only be validated through the use of models such as Kirkpatrick's. In this regard, the significance of professional trainers who adopt modern technology and strategies within an organization is prioritized within the respective system structures. The use of Kirkpatrick's model is considered effective not only in evaluating the implementation of e-training programs, but also participants' and trainers' satisfaction with them.
Firstly, corporate managers are motivated to adopt e-training as a means of motivating employees to accept change and improve performance. Moreover, managers' quest to save organizations from the rising cost of training employees is addressed through e-training. Additional values associated with using e-training to motivate employees include benefits such as convenience, self-paced learning and the acquisition of diverse information. From this perspective, employees are motivated to work toward a common organizational goal.
Although the evolution of e-training has taken shape in the majority of organizations around the world, the program is yet to achieve full success, as discussed in this paper. There have been allegations that poor development of an e-training program's content is a major problem (Rosenberg, 2001). In this context, program developers use incorrect content, which is sometimes directed at the wrong audience. Sometimes, the program does not serve its intended purpose owing to a lack of proper planning. A lack of proper reinforcement or follow-up after the program results in devastating outcomes for the organization and the employees.
Besides the issues mentioned above, which have not been properly researched, the increasing drop-out of employees registered for e-training programs has resulted in declining motivation and willpower to perform in organizations. Researchers and e-training developers have not emphasized the impact of interruptions on a program's outcomes. Nonetheless, the following issues have been cited as major problems that managers have not focused on during the evaluation of e-training programs.
- Lack of motivation
- Technological problems
- Lack of proper program planning
- Incompetent program instructors
- Incompetent managerial oversight
- Employees’ special needs
Over the years before the inception of Kirkpatrick's model of evaluation, other techniques that existed for the same purpose included Treadway Parker's model and Jackson and Kulp's model. The majority of studies do not mention how both models have been integrated into Kirkpatrick's model. In this regard, Kirkpatrick's model is considered a hybrid of the two techniques in terms of the elements involved, as well as the intended purposes.
For example, Parker's model focuses on evaluating job performance through timeliness, quality and workouts. Other important aspects of evaluation according to Parker's model include group performance, participants' satisfaction and knowledge. From this perspective, the majority of studies do not acknowledge that Kirkpatrick's model is an advanced version of Parker's model. Likewise, studies do not explain how Jackson and Kulp's model complements Kirkpatrick's model in evaluating a program's outcomes in terms of reaction, capability, application and worth.
Researchers have failed to bridge the gap between the evaluation models for many years. The importance of an e-training program lies in its overall outcome, which must be evaluated using the most effective model. Nonetheless, other important strategies to ensure that the evaluation of an e-training program derives a positive outcome should utilize the following techniques:
- Performance records
Aim of the study
The aim of the study is to examine the effectiveness of Kirkpatrick's model in evaluating e-training programs. In this regard, the study focuses on participants' reaction to the program's implementation. The study also aims at examining the level of participants' learning in terms of knowledge acquired through e-training programs. Finally, the study evaluates the transfer of training by the participants at the end of the program.
This paper was established to assess whether e-training programs are as effective as traditional training methods. The main objective is to evaluate e-training programs using Kirkpatrick's model of training evaluation. This is done with the hope that it may provide evidence of the benefits of e-training programs.
Levels of Kirkpatrick’s model
Kirkpatrick's model of evaluation is based on four levels. The significance of the four levels of Kirkpatrick's model is to ensure that the implementation and evaluation of an e-training program are gradual. In this context, it becomes simpler to identify areas that require immediate correction or improvement.
Level 1. Reaction
In this stage, participants of the e-training program are assessed in terms of how they feel about the program's experience. In addition, the evaluation focuses on participants' level of satisfaction. Another area of concern at this level is whether the participants find the content of the program relevant. Therefore, the evaluation seeks to know the reaction of the participants and whether they are ready to continue with the e-training program. In any case, immediate corrections are made to the program to address participants' needs. The relevant evaluation questions for this level are:
How do learners feel?
Understanding the reaction of the participants from their feelings is critical for the program. From this perspective, Kirkpatrick's evaluation model seeks to know whether participants derive happiness, flexibility and convenience from e-training. The level of participation in terms of program enrolment reflects the participants' reaction to it. In a study conducted by the ASTD-Masie Center, more than 700 participants took part in a study of e-learning technologies. The results of the study revealed that 87% of the participants were happy with e-training and preferred digital courses.
From this perspective, the reaction was a result of the flexibility that e-training technologies provide to people with other duties to attend to. In addition, 52% of the participants engaged in e-learning while in their respective workplaces. Moreover, 38% of the participants preferred e-training to traditional methods.
How do e-training instructors feel?
It is necessary to know whether the trainer is competent enough to handle e-training. In this regard, the success of the program relies on the familiarity of the trainers with the program. It is important to evaluate whether instructors prefer e-training or traditional methods.
Level 2. Learning
Kirkpatrick perceives learning as a process where the elements of principles, facts and techniques are to be integrated systematically and interpreted for the benefit of the trainee. In this context, benefits of the learning process are evaluated by assessing the level of skills and knowledge acquired by the trainees. Evaluating whether the trainees have changed in terms of attitude and behavior validates the e-training program.
However, evaluating the learning process requires the use of attitude surveys and tests. Reputable organizations such as Sun Corporation's Network Academy have used pretests and posttests to evaluate employees' learning experience and achievement in terms of skills and knowledge acquired (Bylinsky, 2000). At this evaluation level, the program trainer seeks to understand trainees' individual performance. The following questions can be used at this level of evaluation.
Did the learner use the correct training material for the e-training program?
It is important for the trainee to use the correct program material, as well as related content. From this perspective, the trainee acquires the required skills and knowledge consistent with the e-training goals and objectives.
Is the knowledge acquired applicable?
An evaluation of whether the trainees can apply the knowledge and skills acquired in any context is necessary. The e-training program can only be validated if the trainees improve their performance. However, the achievement made from the e-training program must be measured through quantifiable metrics. For example, the trainees should be subjected to practical tests that require solving examples of real-life problems.
Is there a change of attitude after completing the e-training program?
In this context, trainees are observed to determine whether their attitude towards e-training and the program's content has changed over time. Using performance records to ascertain this change of attitude is necessary.
Level 3. Behavior
As indicated earlier, the main goal of an e-training program is to improve the trainee's behavior and performance and to transfer the same to other trainees or organizations. Normally, this level of Kirkpatrick's model of evaluation is critical in understanding the connection between behavioral change and expected outcomes or results. Therefore, the complexity of using empirical measurement to assess the change of behavior becomes a critical issue for the trainers. Nonetheless, gauging the impact of positive behavioral change through the trainees' test results is helpful.
To be precise, business corporations measure customer service through clients' satisfaction, improved sales and profit margins. Reputable organizations such as Unilever use e-training programs for their employees, and then measure customer service and satisfaction using reaction sheets (Hoekstra, 2001). Due to the complex nature of the behavioral level, participants' behavior and its transfer to clients is measured using teleconferencing.
Level 4. Results
This level involves measuring the results of the e-training program and how it directly affects the organization. An e-training program in a school expects improved skills, knowledge and teaching methods from the teachers, as well as improved overall institutional performance. In a corporate environment, business managers expect improved productivity from employees, as well as reduced costs and increased production for the organization.
As indicated earlier, Kirkpatrick's model is a hybrid of two other significant evaluation models. The Treadway Parker and Jackson and Kulp models are ineffective when evaluating an e-training program: Parker's model focuses on performance and participant aspects, while Jackson and Kulp's model focuses on outcomes only. Kirkpatrick's model combines both aspects to achieve a comprehensive evaluation process. In this regard, Kirkpatrick's model is convenient, reliable and flexible for application in corporate and educational contexts.
It is critical to evaluate the effectiveness of e-training programs in both schools and corporations. Establishing an evaluation mechanism that is consistent with the objectives of an e-training program is essential. Kirkpatrick's four levels of evaluation are the most effective for this assessment. From Kirkpatrick's model of evaluation, it is evident that e-learning programs are assessed progressively, level by level. In this regard, it is easy to detect any errors or malpractices exhibited in e-training programs. As indicated earlier, it is important to achieve the overall goal of effectiveness at the end of e-training programs.
Effectiveness of e-training programs based on Kirkpatrick’s model
It is essential to understand that e-training programs are established to achieve effectiveness in terms of functionality and cost. Strother (2002) uses Kirkpatrick’s evaluation model in assessing the effectiveness of corporate e-training programs. Nonetheless, the evaluation model can be replicated in schools’ e-learning programs (Galloway, 2005). In fact, most of the learning institutions are now enrolling students into e-learning courses (Galloway, 2005). Kirkpatrick’s model uses reaction, learning, behavior and results as the major levels of the evaluation process. Unlike other evaluation models, Kirkpatrick’s model uses scientific methodologies to achieve optimal results.
Level 1: Reaction
This level is used to determine how participants of an e-training program feel. In addition, this level evaluates trainees' satisfaction with the e-training program. The trainees' perception of the e-training material is also assessed.
Why measure reaction
The reaction level is significant during the initial stages of implementing an e-training program. For example, school administrators can use Kirkpatrick’s model to determine students’ reaction to methods of teaching, the curriculum and the training material used in the program.
How to measure reaction
In research conducted among corporations establishing e-training programs, Strother (2002) reports that 87% of trainees preferred digital training during office hours. Another 38% of the participants interviewed preferred to undertake e-training lessons rather than classroom sessions. The reaction was measured using a questionnaire provided to the trainees.
Unlike other models, Kirkpatrick's model identifies both positive and negative reactions. In most cases, a positive reaction among the trainees increases the likelihood of an e-training program's success and effectiveness. On the other hand, a negative reaction reduces the effectiveness of the program and requires attention and an immediate correction mechanism. From this perspective, the model assesses how the trainees' reaction affects the next level of evaluation.
Level 2: Learning
This level describes learning as a process in which trainees acquire and understand facts and techniques. In addition, this level evaluates how much new skill and knowledge the trainees have gained.
Why measure learning
Some corporations take great interest in their trainees' progress at this level of evaluation. Therefore, corporations are required to keep records of their employees' performance at this level as a point of reference. Sun Corporation's Network Academy is known to keep records of employees' performance at the learning level of evaluation (Bylinsky, 2000). A comparison between e-training and class-based lessons suggests that the former is efficient and economically advantageous.
How to measure learning
Kirkpatrick's model advises trainers to use pretests to determine trainees' level of understanding before they complete the e-training process. To complete the learning level, a posttest can be issued to trainees to determine their readiness for the next level of evaluation. However, it is important for the evaluating personnel to use a control group to determine the practicability and effectiveness of the learning process.
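The pretest/posttest comparison against a control group can be sketched as a simple gain-score calculation. The scores below are hypothetical values on a 100-point scale, invented purely for illustration:

```python
# Sketch of a pretest/posttest learning-gain comparison with a control group.
# All scores are hypothetical, for illustration only.

def mean(xs):
    return sum(xs) / len(xs)

def learning_gain(pre, post):
    """Average improvement from pretest to posttest for one group."""
    return mean(post) - mean(pre)

# Hypothetical scores out of 100 for an e-training group and a
# traditionally taught control group.
etraining_pre, etraining_post = [52, 48, 60, 55], [75, 70, 82, 73]
control_pre, control_post = [50, 53, 58, 51], [64, 66, 70, 62]

# The e-training effect is the gain over and above the control group's gain.
effect = (learning_gain(etraining_pre, etraining_post)
          - learning_gain(control_pre, control_post))
print(effect)  # → 8.75
```

Subtracting the control group's gain isolates the improvement attributable to the e-training itself rather than to retesting or time on the job.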
Results derived from the evaluation at this learning level should be used to improve e-training programs. A survey conducted at California State University, Northridge reveals that at least 20% of students undergoing e-learning perform better than class-based learners (TeleEducation, 2000). Nelson (2001) performed a similar survey of 406 university students and established that e-learning students acquired better grades than class-based students. Serrano and Alford's (2000) research on infusing technology into the school curriculum suggests that e-learning empowers students. The same research found that students' critical thinking and literacy skills improved significantly.
From the model, trainees can apply acquired skills and knowledge in real life situations. Developing critical thinking and literacy skills are essential goals in any learning environment. Kirkpatrick’s model is effective in assessing the net change in knowledge, behavior and improved skills.
Level 3: Behavior
The effectiveness of the e-training program is evaluated through a behavioral change among the trainees (Hamtini, 2008). Behavioral change among the trainees is termed as a quantitative learning objective.
Why measure behavior
In corporations, a direct link between a change of behavior and improved performance is perceived as the desired result of a training program. The evaluation of behavior does not entirely entail measuring trainees' learning results. In any case, behavioral change is measured by the overall organizational improvement. However, the trainees' performance is also measured against newly acquired behaviors. The performance of the trainees must be tangible for it to qualify as an improved behavioral change. At this level, the effectiveness of an e-training program is perceived by how trainees improve on their previous skills.
How to measure behavior
Behavioral improvement must be preceded by an e-training program on customer service and behavioral management. Unilever is an example of a corporation that improved its sales volume by more than $20 million after subjecting its sales staff to an e-training program (Hoekstra, 2001). However, Unilever's e-training program supervisors argue that it is difficult to measure behavioral change. Nonetheless, it is easy to notice how trainees improve on their previous skills, as evidenced by their excellent customer relations techniques.
Kirkpatrick's model ensures that behavior evaluation is successful through reinforcement methods and follow-up programs. This level involves managers and other stakeholders, such as supervisors, who are responsible for evaluating the trainees' desired behaviors. Therefore, e-training instructors are required to prepare administrators, managers and supervisors for the behavioral evaluation role.
For trainees, a change in behavior is preceded by a desire to change. In addition, trainees must work towards behavioral change and receive a reward upon completing the program successfully. Kirkpatrick's model advocates the use of a control group during the behavioral evaluation process. The model entails a continuous evaluation of behavioral change until full achievement is established.
Level 4: Results
It is critical to assess results of the e-training program against its objectives. This is because results are deemed to affect the overall performance of an organization.
Why measure results
Although it is difficult to measure the direct impact of an e-training program in corporations and other institutions, an assessment of a program's desired results is possible. For example, a reduction in operating costs, improved productivity and customer satisfaction are desired results of an effective e-training program (Zimmerman, 2001). On the other hand, learning institutions desire improved grade scores among students, as well as an excellent discipline record.
How to measure results
Unilever measures the results of its e-training program in terms of sales volume. The same criterion is used by the Etera Nursery Supply Company in measuring its sales volume. Evaluating results requires a control group that is manageable in terms of cost and time. It is important to allow time for the results to be revealed before evaluating them. It is advisable that an evaluation of estimated results be conducted before the commencement of the e-training program.
This makes it easier to understand the impact of results after completion of the program. Experts in Kirkpatrick’s model argue that measuring each result at an appropriate time is necessary. Moreover, the evaluation of results should include a cost-benefit analysis as a critical element of the process. In addition, stakeholders of e-training programs are advised to seek satisfaction of the end-results.
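The cost-benefit element of the results level can be sketched as a simple return-on-investment calculation. The benefit and cost figures used here are hypothetical, chosen only to demonstrate the arithmetic:

```python
# Sketch of the cost-benefit analysis at the results level.
# Benefit and cost figures are hypothetical.

def roi(benefit, cost):
    """Net return per unit of training cost, expressed as a percentage."""
    return (benefit - cost) / cost * 100

# Hypothetical: $120,000 in measurable gains (e.g. extra sales volume)
# against $40,000 of total program cost.
print(roi(benefit=120_000, cost=40_000))  # → 200.0
```

A positive ROI like this would support continuing the program; a negative value would flag the critical areas of failure that the model asks trainers to improve on.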
The model assesses the trainees' ability to apply theoretical knowledge. Therefore, it is critical to review the e-training program's results against the predetermined objectives. In addition, assessing whether a program's results can be applied in an employment or academic setting is of the essence. Assessing whether the trainees' change of behavior had an impact on an organization's performance is also critical. Evaluating and comparing the impact on cost and time before and after the introduction of the e-training program is another important element supported by the model. From the model, trainers and trainees may deem it necessary to improve on critical areas of failure. Finally, Kirkpatrick's model supports a continuous assessment of the program's functionality on a regular basis.
An e-training program was designed and conducted for this research in order to provide the participants with an environment to learn with instructors and other participants. The researchers then used Kirkpatrick's evaluation model as the main assessment tool for this e-training program. The model guided the researchers in formulating the survey questions used in questionnaires and interviews, and in making observations on the sample when analyzing the results.
Kirkpatrick's model of evaluation has four levels of evaluation: reaction, learning, behavior and results (Kirkpatrick, 2009). The reaction level evaluates the attitude of the learner towards the training program. The learning level measures the knowledge gained by the sample population that has undergone training. The behavior level evaluates how well the knowledge gained is applied by trainees. The results level evaluates how well the main objective of the e-training is achieved.
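The four levels and what each one evaluates can be summarized in a simple lookup structure, a sketch for reference only:

```python
# The four levels of Kirkpatrick's evaluation model, as described above.
KIRKPATRICK_LEVELS = {
    1: ("Reaction", "the learner's attitude towards the training program"),
    2: ("Learning", "the knowledge gained by the trained population"),
    3: ("Behavior", "how well the gained knowledge is applied by trainees"),
    4: ("Results", "how well the main objective of the e-training is achieved"),
}

for level, (name, focus) in KIRKPATRICK_LEVELS.items():
    print(f"Level {level} ({name}): evaluates {focus}")
```

The ordering matters: each level is assessed only after the previous one, which is what makes the model's evaluation gradual.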
The program instructors sampled 5 schools in the local district and picked 10 teachers from each. The program was designed to include a digital curriculum for the teachers to learn and implement in their respective schools. The e-training encouraged the teachers to acquire laptop computers and internet access for training that lasted 4 months. In this context, teachers learnt basic computer knowledge and how to develop a curriculum using computer skills. The program's main agenda was to help teachers use computer-based knowledge as an alternative in teaching. Teachers were expected to improve school performance, teaching techniques and behavior, as well as learners' performance.
In this experiment, the researchers assessed the effectiveness of e-training programs on 50 teachers who went through training. The teachers’ main objective was to obtain a diploma in learning resource centers. The duration of the programs was one semester, which is equivalent to four months. The teachers went through the training while simultaneously carrying out their duties.
Demographic profile of participants
| Gender | Demographic attribute | Frequency | Percentage (%) |
| --- | --- | --- | --- |
The Kirkpatrick evaluation model was used as the instrument in this research. The following sections describe the four levels of this model in detail, as well as the procedure for each level.
Kirkpatrick’s evaluation model
Kirkpatrick's main objective in developing this level of evaluation was to measure how favorably the training program was rated by learners (Foreman, 2008). It was also to assure the learners that their feedback was important to the researchers and the trainers.
The main objective was to measure the reaction of the trainees to the training program, the instructors and the environment. Questionnaires were administered to the sample, and a 100% response rate was sought. The questionnaires were administered to learners during and after training, and the reaction was measured qualitatively as favorable or unfavorable to trainees. During training, questionnaires were administered every two months. The same evaluation can be extended to the instructors in order to measure their reaction to the program and the learners. The following questions were used to evaluate the participants' reaction.
- Did you enjoy the training?
- Did you consider the training relevant?
- Was the training a good use of your time?
- Did you like the elements of venue and timing?
- Did you participate in the training program all the time?
- Did you feel at ease and comfortable with the experience?
- Did you make any effort to benefit from the training and learning experience?
- Would you take another training program if presented with another chance?
- Did you like the training environment?
- Did you interact well with the instructor?
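A sketch of how such yes/no responses could be tallied into the favorable/unfavorable rating the researchers measured qualitatively. The answers and the 70% threshold below are invented for illustration:

```python
# Sketch: tallying yes/no reaction-questionnaire answers into a rating.
# The threshold and the sample answers are hypothetical.

def reaction_rating(responses, threshold=0.7):
    """Classify a trainee's reaction as favorable if enough answers are 'yes'."""
    share_yes = sum(1 for r in responses if r == "yes") / len(responses)
    return "favorable" if share_yes >= threshold else "unfavorable"

# One trainee's hypothetical answers to the ten questions above.
answers = ["yes", "yes", "yes", "no", "yes",
           "yes", "yes", "no", "yes", "yes"]
print(reaction_rating(answers))  # → favorable
```

Aggregating these ratings across the sample would show what share of trainees reacted favorably, mirroring the percentages reported in the reaction-level studies cited earlier.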
According to Kirkpatrick, learning is a measure of the skills gained by learners through understanding principles, facts and techniques of the program (Mathison, 2005). Learning is also a measure of change in knowledge and attitudes of the learners due to the training (Ibrahim, Rozar, Razik, & Kormin, 2011). Questionnaires to measure learning in this research were administered at the end of training programs.
The researchers interviewed the supervisor and management of the institution where the trainees were teachers. This was in the form of survey which helped the researchers understand the reasons behind which the training program was implemented. The following are the survey questions (Kirkpatrick, 2009).
- What were the skills that the trainees lacked before training?
- Did the lack of skills and knowledge affect trainees' deliverables at work?
- Is the training necessary for trainees to gather new skills or to improve their skills?
- Was lack of certain skills one of the criteria for identifying the trainees?
- What skills and knowledge do trainees expect to acquire?
- What is the impact of the acquired knowledge on the whole school?
However, the learning level is evaluated using both pretest and posttest questionnaires, as indicated below:
Pretest survey questions
- Are you at the entry level of learning resource centers in education?
- Do you have prior knowledge of learning resource centers in education?
- Do you have any practical experience?
- Do you have any experience prior to the e-training program?
- Did you have any training prior to the e-training program?
- Do you think e-training is best suited for practical skills?
- Do you prefer training in a traditional classroom to increase knowledge base?
- Do you look forward to a new experience with e-training?
Posttests survey questions
- Did you learn what was intended to be taught?
- Did you experience the intended purpose of the e-training program?
- Did you gain practical skills to use on a daily basis?
- Did you learn new principles of a learning process?
- Were all aspects of learning included in the e-training program?
- Was there extra information relevant to the training?
- Did you learn from interacting with other trainees and the instructors?
- Did the e-training encourage critical thinking?
- Did you learn how to use e-learning resources?
- Did the training improve your ability to visualize situations accurately?
The researchers sought to understand whether the management were satisfied with the knowledge and skills gained from the training. The researchers asked the management if they would organize training again based on the experience of the just-concluded program. The researchers also inquired whether the cost incurred was justified by the knowledge base acquired by the trainees.
A pretest of the learners' expectations was carried out before the training program started (Parry, 1997). This sought to figure out what the trainees thought about the e-training program compared to other styles of learning programs. The pretest also sought to evaluate the skills and knowledge of the learners before the training, which would act as a baseline for measuring improvement.
Kirkpatrick's argument on behavior evaluation rests on the importance of behavior and attitudes in learning. Most training programs are considered successful if they help instill behavior change in the workplace (Capps, 2008). Measuring behavior change is a complex study and is more qualitative than quantitative (Teddlie, 2009). Behavior change is mostly observed by a third person, although it may occur as a result of a change in attitude (David, Salleh, & Iahad, 2012). The trainees' test results alone cannot be used to measure a trainee's behavior change after the training. Other factors, such as the trainees' approach to dealing with problems and their attitude while implementing their knowledge, should be considered.
The questionnaire for this level was administered two months after the training programs finished. Observations of the trainees' behavior in the workplace were made regularly without the trainees' knowledge. Other resources used to measure the trainees' behavioral change were interviews with fellow teachers and the trainees' supervisors.
The researchers sought to know the expectations of the trainees' supervisors before and after the training. The researchers also sought to evaluate the change in behavior of the trainees through their colleagues who did not go through the training. The colleagues helped the researchers understand how well the trainees implemented their skills, which in turn inspired the colleagues to undergo the training themselves. This helped the researchers understand the attitude of the trainees while implementing the skills and knowledge gained.
The researchers administered questionnaires to the trainees' supervisors and colleagues to help compile the attitudes and behavior of the trainees before and after the training. The survey included the following questions (Kirkpatrick, 2009).
- Did you put the training into effect at work?
- Did you use relevant skills and knowledge?
- Was there any change in activities and improvement in skills and performance while on duty?
- Was there a change in behavior and improvement on advanced skills and knowledge?
- Was there a transfer of knowledge and skills?
- Has critical thinking helped overcome communication and education barriers?
- Did you get more time for learning through change of attitude and proper planning?
- Are you self-conscious of behavior change and development?
Additional questions to evaluate behavior are as follows:
- How are the trainees' teaching and guidance different after the training?
- As a supervisor, what are the changes in the trainees’ behavior?
- What are the behavioral changes expected from the trainees?
- Was behavior a criterion in determining the trainees' eligibility for this training?
Level 4- Results
Kirkpatrick's level of evaluation of results focuses on the direct impact of training on the job (Combs & Falletta, 2000). The job in question here is education, which means that the evaluation of results is the direct impact on the teaching abilities of the trainees. The objective of this research at this level was therefore to measure the effects of training on the students who learn from the trainees. The outright measure would be the increase in the skills and knowledge of the students handled by the trainees compared with before the training.
The researchers sought to measure the value of the education offered by the trainees before and after the training, using questionnaires administered to the trainees to measure the results of the training (Combs & Falletta, 2000).
The researchers also carried out pre- and post-training evaluations of the trainees' ability to carry out their work effectively, using interviews administered to the students and supervisors. The following survey questions were used for the evaluation of results (Kirkpatrick, 2009).
- As a student, how technical are the trainees in their teaching after the training compared with before?
- How do the trainees inspire their students to use critical thinking in solving problems?
- How do the trainees encourage their students to relate with others in the use of language for learning?
- As the supervisor, how has e-training helped in achieving the set goals of the school?
- What are the benchmarks used to measure the benefits of the training?
This level of evaluation allows the instructors and management of an organization to get feedback on the trainees' perspective of the program. The feedback allows management and instructors to deliberate on improvements that can enhance effectiveness in subsequent training programs. It also allows management to decide which aspects of the program are not beneficial to their employees and the institution. Because the benefits of training become apparent only with time, this evaluation is done a few months after the training.
In this research, a post-training program evaluation was carried out in the form of interviews with the trainees three months after the training program ended. The whole research sample was interviewed using the following questions:
- What aspects of the program benefited you most?
- What aspects of the program were interesting but not relevant to the training?
- What aspects of the program should be omitted in subsequent training? Why?
- What aspects of the program should be added in subsequent training that were missing in the just-concluded one? Why?
- What aspects of the program should be approached differently and why?
- What are the aspects of the program that require more time?
- Is the program good as it is, or should improvements be made?
- Do you have any other comment concerning the program?
The results of the four levels of Kirkpatrick's evaluation model are presented in the following subsections.
Most of the participants agreed or strongly agreed with the statements of the reaction-level questionnaire, showing that most trainees liked the learning process of the e-training program.
Table 1: Questionnaire in reaction level.
| Statement | Strongly disagree | Disagree | Neutral | Agree | Strongly agree |
| --- | --- | --- | --- | --- | --- |
| I liked and enjoyed the training. | 0 | 0 | 1 | 10 | 39 |
| I considered the training relevant. | 0 | 0 | 0 | 14 | 36 |
| It was good use of our time. | 0 | 0 | 0 | 16 | 34 |
| I liked the elements of venue and timing. | 0 | 1 | 0 | 12 | 37 |
| I participated in the training program all the time. | 0 | 0 | 0 | 18 | 32 |
| I felt at ease and comfortable with the experience. | 0 | 0 | 2 | 19 | 29 |
| I made an effort to benefit from the training and the learning experience. | 0 | 0 | 0 | 8 | 42 |
| I would take another training program if presented with a chance. | 0 | 0 | 1 | 4 | 45 |
| I liked the training environment. | 0 | 0 | 0 | 10 | 40 |
| I interacted well with the instructor. | 0 | 0 | 1 | 19 | 30 |
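As a quick illustration (not part of the original study), the favourable-response rate for any row of Table 1 can be computed directly from its counts; the item labels in the sketch below are shorthand for the table's full statements.

```python
# Summarize reaction-level Likert counts from Table 1 (illustrative sketch).
# Each list holds counts in the order: strongly disagree, disagree,
# neutral, agree, strongly agree.
reaction_counts = {
    "enjoyed the training": [0, 0, 1, 10, 39],
    "good use of our time": [0, 0, 0, 16, 34],
    "interacted well with the instructor": [0, 0, 1, 19, 30],
}

def percent_favourable(counts):
    """Share of respondents choosing Agree or Strongly agree, in percent."""
    return 100.0 * (counts[3] + counts[4]) / sum(counts)

for item, counts in reaction_counts.items():
    print(f"{item}: {percent_favourable(counts):.0f}% favourable")
```

For the enjoyment item, for example, 49 of the 50 respondents chose Agree or Strongly agree, a 98% favourable rate, consistent with the claim that most trainees liked the program.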
The following tables show the pre- and post-training questionnaires for the participants. As shown in Table 2A, before the training program started the participants thought that the e-training program was better than other styles of learning programs. For the post-training questionnaire, as shown in Table 2B, most participants agreed or strongly agreed that they had improved after the e-training program.
Table 2A: Pre-training test questionnaire in learning level.
| Statement | Yes | No |
| --- | --- | --- |
| I am at entry level of learning resource centers in education. | 46 | 4 |
| I have prior knowledge of learning resource centers in education but no practical experience. | 47 | 3 |
| I have prior experience with an e-training program. | 23 | 37 |
| I intend to learn practical applications of the training. | 49 | 1 |
| I have prior training but not with an e-training program. | 48 | 2 |
| I think that e-training is best suited for practical skills. | 41 | 9 |
| I would have preferred to undergo training in a traditional classroom to increase my knowledge base. | 39 | 11 |
| E-training is new to me, but I am looking forward to the experience. | 45 | 5 |
Table 2B: Post-training test questionnaire in learning level.
| Statement | Strongly disagree | Disagree | Neutral | Agree | Strongly agree |
| --- | --- | --- | --- | --- | --- |
| I learnt what was intended to be taught. | 0 | 0 | 1 | 39 | 10 |
| I experienced what was intended for us to experience. | 0 | 0 | 1 | 39 | 10 |
| I gained practical skills to use in my daily job. | 0 | 0 | 0 | 9 | 41 |
| I learnt new principles of learning. | 0 | 0 | 0 | 11 | 39 |
| Not all aspects of learning were included in the training. | 5 | 15 | 4 | 19 | 7 |
| There was extra relevant information in the training. | 2 | 17 | 1 | 9 | 21 |
| I learnt from interaction. | 0 | 0 | 0 | 7 | 43 |
| I learnt from the use of e-resources. | 0 | 2 | 1 | 6 | 41 |
| The training encouraged critical thinking. | 0 | 0 | 0 | 14 | 36 |
| The training encouraged me to visualize situations accurately. | 0 | 0 | 0 | 8 | 42 |
The pre-tests are used to evaluate the trainees' preparedness before the program commences, with each pre-test question marked according to the number of respondents who undertook the test. The post-training test questions, on the other hand, are completed by the trainees after undertaking the program and seek to evaluate the trainees' attitude after it.
As shown in Table 3, the majority of participants agreed or strongly agreed with the statements of the behavior questionnaire, meaning that most of them felt a change in their behavior after the e-training program.
Table 3: Questionnaire in behavior level.
| Statement | Strongly disagree | Disagree | Neutral | Agree | Strongly agree |
| --- | --- | --- | --- | --- | --- |
| I put my training into effect at work. | 0 | 1 | 2 | 40 | 7 |
| I used relevant skills and knowledge. | 0 | 3 | 1 | 42 | 4 |
| There was a change in activities and an improvement in skills and performance while on duty. | 0 | 5 | 0 | 39 | 6 |
| There was a change in behavior and an improvement in advanced skills and knowledge. | 0 | 2 | 3 | 37 | 8 |
| There was a transfer of knowledge and skills. | 1 | 13 | 6 | 25 | 5 |
| Critical thinking has helped overcome communication and education barriers. | 0 | 4 | 9 | 30 | 7 |
| I am able to get more time for learning through a change of attitude and proper planning. | 0 | 0 | 0 | 42 | 8 |
| I am self-conscious of behavior change and development. | 0 | 2 | 1 | 6 | 41 |
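Behavior-level items can also be compared with a weighted mean on the 1-5 Likert scale; the sketch below is an illustration (not part of the study itself) applied to two rows of Table 3.

```python
# Weighted mean Likert score: 1 = strongly disagree ... 5 = strongly agree.
# Counts below are taken from two rows of Table 3 (illustrative sketch).
def mean_likert(counts):
    """Average score over all respondents for one questionnaire item."""
    total = sum(counts)
    weighted = sum(score * n for score, n in zip(range(1, 6), counts))
    return weighted / total

# "There was a transfer of knowledge and skills."
transfer = [1, 13, 6, 25, 5]
# "I am self-conscious of behavior change and development."
self_conscious = [0, 2, 1, 6, 41]

print(f"knowledge transfer: {mean_likert(transfer):.2f} / 5")
print(f"self-consciousness: {mean_likert(self_conscious):.2f} / 5")
```

The transfer item scores noticeably lower (3.40) than the self-consciousness item (4.72), matching the table's comparatively high count of Disagree responses for knowledge transfer.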
Level 4- Results
Table 4 presents the striking impact on results after completion of the e-training program, which means the program provided great help to the participants.
Table 4: Questionnaire in results level.
| Question | Response |
| --- | --- |
| What aspects of the program benefited you most? | Using IT to develop the curriculum; using IT as a teaching strategy. |
| What aspects of the program were interesting but not relevant to the training? | IT communication strategies; students' behavioral evaluation techniques. |
| What aspects of the program should be omitted in subsequent training? Why? | Teachers' reward system. This aspect requires input from the local district school authority. |
| What aspects of the program should be added in subsequent training that were missing in the just-concluded one? Why? | Parent and community involvement aspects, because the program requires a social support system from both parents and the community. |
| What aspects of the program should be approached differently and why? | Using IT to develop a curriculum and the teachers' reward system. These aspects require intensive deliberations between school administrators and local education authorities. |
| What are the aspects of the program that require more time? | Development and implementation of the teachers' reward system and the use of IT in curriculum development. |
| Is the program good as it is, or should improvements be made? | It is good at the moment. |
| Do you have any other comment concerning the program? | The program requires constant monitoring and improvement. |
The research showed that most trainees were satisfied with e-training, as it allowed them time to carry out other duties. Most trainees were encouraged to use e-training programs to learn, as e-training allows them to use knowledge and skills immediately after learning them. Although there was minimal interaction between learners, a majority of the trainees agreed that e-training is cost effective for all parties involved. According to the results of the research, most trainees felt that e-training is cheaper for trainees than traditional training in terms of living and traveling expenses. The management of the school where the trainees were teachers thought it was a good idea to use e-training, since it helps the teachers open up to technology, learning both its drawbacks and its importance so as to use it effectively.
The trainees appreciated the platform for exchange that the technology offered through e-training: they were able to hold discussion forums over the internet without physical proximity and at the least cost compared with other forms of exchange. They then encouraged their students to enter into discussions that helped them increase their knowledge of different aspects of learning.
The trainees exhibited a change in behavior and teaching approach as a result of the training, which inspired other teachers to undergo the same training when they get the chance. The students of the trainees noted a change in the teachers' attitude and availability in attending to them, and agreed that the trainees encouraged them to use technology and critical thinking to foster learning.
The trainees were able to use the tools they were provided with to enhance learning. These tools included unlimited internet access and an e-classroom from which to get the materials they needed. The trainees were able to use e-learning tools to communicate with their instructors and with other students undergoing e-training. This shows that the trainees gained not just knowledge of the learning subject but also of the use of technology for interaction, communication, and learning. The ability to use technology for learning was an achievement for the teachers, as it requires personal planning to achieve objectives within the stipulated time, especially because the trainees did not get time off work for the training.
The trainees were also able to pass on this information to their students and even went ahead to apply e-learning with their students in certain aspects of education. This included setting up a portal where the trainees could leave coursework for their students and collect it on time, helping both students and trainees reduce the bulk of hard copies carried around the institution.
Technology is a global phenomenon whose use is recommended in the effort toward globalization. The trainees were therefore able to achieve the link between their localized work and the globalized way of executing their duties in education. This is necessary, as it broadens an individual's breadth of thought and hence produces a better understanding of education and the world.
The management of the schools where the trainees worked was satisfied with the training and appreciated the knowledge and skills gained in comparison with the cost of the training. The management also acknowledged that the trainees showed behavior change in the way they handled their work, and was open to trying out other forms of training depending on the nature of the subject to be taught, because not all subjects are compatible with e-training; some require one-on-one guidance in practical experimentation.
The management would still consider traditional classroom training, especially when training is carried out simultaneously with team-building exercises. The management also felt that e-training encouraged online research, since it is more cheaply available than physical libraries and information centers.
The trainees felt that the management could work together with the instructors to improve the program in subsequent training. They all appreciated that the program was beneficial to them, but improvements in some aspects could help improve learning.
Kirkpatrick's model of evaluation of learning is an essential and effective tool. The researchers agree that all four levels of evaluation offered by Kirkpatrick's model are necessary aspects of training (Alvarez, Salas, & Garofano, 2004). The researchers also agree that e-training is cost effective compared with traditional methods of learning (Strother, 2002), because it saves on time and transport and avoids the inconveniences of weather and other natural occurrences. The researchers acknowledge that e-training is broader than traditional training.
E-training offers a broader research base than traditional classroom training (Clark & Mayer, 2011). It also offers a discussion platform that is not limited by time or physical location, which makes e-training acceptable since learning is offered on a global scale.
Alvarez, K., Salas, E., & Garofano, C. M. (2004). An integrated model of training evaluation and effectiveness. Human Resource Development Review, 3(4), 385-416.
Batra, V. G. (2007). Organisation development systems: A study in organisation behavior and organisation management. New Delhi, ND: Concept Publishing Company.
Bregman, P. & Jacobson, H. (2000). Searching for answers: Yes you can measure the business results of training. Training, 38(8), 68-72.
Bylinsky, G. (2000). Hot new technologies for American factories. Fortune, 142(1), 288A.
Capps, P. (2008). The use of Kirkpatrick's four levels of evaluation by performance improvement practitioners. Ann Arbor, MI: ProQuest LLC.
Clark, R. C., & Mayer, R. E. (2011). E-learning and the science of instruction: Proven guidelines for consumers and designers of multimedia learning. Hoboken, NJ: Wiley.
Combs, W. L., & Falletta, S. V. (2000). The targeted evaluation Process: A performance consultant’s guide to asking right questions and getting the answers you trust. Washington, DC: American Society for Training and Development.
David, O., Salleh, M., & Iahad, N. (2012). The impact of e-learning in workplace: Focus on organizations and healthcare environments. International Arab Journal of E-technology, 2(4), 203-210.
Ferreira, E. J., Erasmus, A. W., & Groenewald, D. (2009). Administrative management. Cape Town, SA: Juta and Company Ltd.
Foreman, S. M. (2008). Kirkpatrick model: Training evaluation practices in the pharmaceutical industry. Ann Arbor, MI: ProQuest LLC.
Galloway, D. L. (2005). Evaluating distance delivery and e-learning: Is Kirkpatrick's model relevant? Performance Improvement, 44(4), 21-27.
Hamtini, T. M. (2008). Evaluating e-learning programs: An adaptation of Kirkpatrick’s model to accommodate e-learning environments. Journal of Computer Science, 4(8), 693.
Hoekstra, J. (2001). Three in one. Online Learning, 5(10), 28-32.
Ibrahim, A., Rozar, N. B., Razik, M. A., & Kormin, K. B. (2011). Comparing effectiveness e-learning training and traditional training in industrial safety and health. International Journal of Online Marketing, 1(3), 46-61.
Kirkpatrick, D. L. (2009). Evaluating training programs: The four levels. Buckingham, NSW: ReadHowYouWant.com.
Koontz, H., & Weihrich, H. (2006). Essentials of management. New Delhi, ND: Tata McGraw-Hill Education.
Mathison, S. (2005). Encyclopedia of evaluation. Thousand Oaks, CA: Sage Publications, Inc.
Nelson, G. (2001). Do no harm: A first measure of effectiveness in small distance education programs. Proceedings of ED-MEDIA 2001: World Conference on Educational Multimedia, Hypermedia and Telecommunications, June 2001. Tampere, Finland.
Parry, S. B. (1997). Evaluating the impact of training: A collection of tools and techniques. Washington, DC: American Society for Training and Development.
Serrano, C. & Alford, R. L. (2000). Virtual languages: An innovative approach to teaching EFL/ESL English as a foreign language on the World Wide Web. In Lloyd, L. (Ed.). Teaching with Technology: Rethinking Tradition (pp. 195-205). Medford, NJ: Information Today, Inc.
Strother, J. (2002). An assessment of the effectiveness of e-learning in corporate training programs. International Review of Research in Open and Distance Learning, 3(1), 1-17.
Teddlie, C. (2009). Foundations of mixed methods research: Integrating quantitative and qualitative approaches in the social and behavioral sciences. Thousand Oaks, CA: Sage Publications Inc.
TeleEducation. (2000). Is distance education any good? Web.
Zimmerman, E. (2001). A competitive edge. Online Learning, 5(10), E2-E7.