Medication Error Risks and Program Evaluation Methodology

Qualitative research usually represents the researcher’s point of view on particular issues, including lived experiences. It is well suited to answering questions that examine general human behaviour, motives, and hindrances. Qualitative research seeks to critically analyse issues that are significant in identifying a research problem, developing tentative answers to the theories formed, and developing sound, applicable concepts. Researchers in healthcare have diverse theoretical backgrounds that largely govern their work; as a result, previous studies have produced different results because of the different qualitative approaches used (Black, 1994).

Evaluation refers to a type of study whose typical aim is to assess how viable a phenomenon is and how effective it is in relation to service practices. Evaluations are important because they determine the channels by which programs can be designed to achieve their intended goals. The procedures identified for solving problems must be tested to verify whether they are appropriate for achieving the set objectives (Owen & Rogers, 1999).

The purpose of this study was to evaluate the processes in a community hospital interface program (CHIP). This section of the paper outlines the evolution of evaluation through several generations and then discusses the methodological issues of evaluating programs. The study uses a qualitative method; data were collected through in-depth semi-structured interviews, clients’ medical records, and the organization’s documents.

Generations of Program Evaluation

The process of evaluation was shaped by the Weiss-Patton debate of the late 1980s. That debate argued that, in most cases, the person conducting an evaluation should pay more attention to generating findings that provide sufficient data for answering the questions accurately. It was further suggested that an active role for researchers is necessary for an effective outcome, since the role played by researchers enhances communication and development within organizations (Guba & Lincoln, 1989).

According to Guba and Lincoln (1989), the first generation of evaluation centred on measurement, which made the evaluator’s work appear more complicated because the exercise required full knowledge of all the instruments used to measure the variables.

The second generation of evaluation focuses on describing the phenomenon. It differs from measurement in that description gives detailed information on strengths and weaknesses, which helps drive the whole process towards its specific objectives. In the third generation, the person conducting the evaluation played the role of overall judge: drawing conclusions about the whole evaluation process and making appropriate recommendations on the program being evaluated and the sections that need improvement (Health Communication Unit at the Centre for Health Promotion, 2009).

Fourth generation evaluation takes on the evaluator’s roles from the previous three generations, redefines and expands them, and incorporates them into an even more highly skilled practitioner. The new elements of the fourth generation evaluator’s role involve the political, ethical, and methodological ramifications of the hermeneutic (philosophy concerned with human understanding and the interpretation of texts) and dialectic (the method of discovering truth through questioning and debate) process (Guba & Lincoln, 1989).

These researchers identified four strategies for effective evaluation. First, the evaluator must share control and must solicit and honour stakeholder input, not only about the substance of constructions but also about the methodology of the evaluation itself; this is a political role that establishes the conditions for full participation. Second, the evaluator must assume the role of teacher and learner rather than investigator: he or she has to go beyond testing hypotheses, learn what the stakeholders’ competing constructions are, and teach them to all stakeholders (Ricketts, 2000).

This process is basic to the hermeneutic (inquiry) mode. Third, the fourth generation evaluator should act as a reality shaper rather than a discoverer. Evaluators must assist in emergent reconstructions and thereby become active participants who shape the product; with shared mutual responsibility, the evaluator cannot escape the consequences of the reconstruction. Fourth, the evaluator recognizes and embraces the role of change agent. The evaluator is a key figure in a process that creates a new and more sophisticated ‘reality’ with built-in direct and immediate implications for action. The evaluator teaches stakeholders the constructions of others and introduces other information, such as documentary analysis from similar evaluations in similar contexts. The evaluator is thus a leading agent in the reconstruction of the existing construction of reality.

The fourth generation evaluation model is a social construction built on experience absorbed in the form of vicarious learning (values, beliefs, settings, and contexts) and on the influence of others’ constructions, from which meaning and possibility are drawn. It involves a process of negotiation in which stakeholders are identified and the evaluation can be described. Potential outcomes, such as decisions about how to proceed from a given point, are developed jointly by stakeholders and can be estimated (Guba & Lincoln, 1989). Fourth generation evaluation provides the foundation on which the conceptual framework for this research is built.

Methodological Issues of Program Evaluation

To understand how programs work and how change occurs, the evaluation processes were conducted in a free and flexible manner (Cook & Reichardt, 1979). Some of the challenges encountered in program evaluation, such as organizational, political, and interpersonal challenges, were not easy to avoid (Mathison, 1988). The decision to use qualitative methods, quantitative methods, or a combination of the two when evaluating a program required considerable thought.

Program Evaluation

Program evaluation, as one method of qualitative analysis, can be defined as the process of systematically collecting information, analysing it, and reporting it appropriately for decision making. In this study, the value and impact of CHIP were assessed by interviewing patients and asking them appropriate questions; conclusions were then drawn from the feedback, and that information was used to improve the outcome of the research. There are three common types of program evaluation: process evaluation, outcome evaluation, and impact evaluation (Centers for Disease Control and Prevention, 1985; Shadish, Cook & Leviton, 1991).

Process evaluation focuses on how the program is carried out, from implementation through to its results. The planning of the program, the activities involved, and how they are conducted during implementation are all part of process evaluation, which also identifies the strong and weak points of the program in question. Impact evaluation focuses on the benefits and consequent results of the program and on its effects on participants’ behaviour patterns.

Outcome evaluation, by contrast, focuses on change within the systems of operation. Such changes may include morbidity rates, mortality rates, and general health status. In the evaluation of the CHIP program, assessing the outcome goals proved somewhat tedious. Outcome evaluation generally focuses on the overall results of the research process.

In this research various groups were interviewed, one of which comprised the stakeholders: people at the community level, health agency officials, and members of community-based organizations. Within this group were those who were fully involved in implementing the program and those whom the program affected. The evaluation was conducted in partnership with other community organizations, members of the healthcare unit, and the individuals under investigation at the community level. For these partnerships to succeed, ethical measures were taken to accommodate the different values held by participants. The active involvement of health practitioners made verification and evaluation of the findings easier (Shreffler et al., 1999).

The stakeholders were identified using credibility criteria, whereby each individual’s opinions and perspective on the research were analysed. This ensured that every participant’s interests were catered for, including those likely to be affected by changes arising from the evaluation process. Researchers were assigned different activities for each evaluation program and were directly involved in the design and evaluation process. The evaluation process included some community-initiated plans, even though some community members lacked formal training in program evaluation; this worked well where there was a commitment to building community health groups. The following table lists the participant groups whose areas of concern were used to identify committed participants.

Participants/stakeholders and their areas of concern:
- Parents
- Students
- Health practitioners
- Healthcare institutions
- Learning institutions
- Town officials
- Government administrators
- Project staff

Engaging the stakeholders ensured that all the researchers were thoroughly familiar with the evaluation process, and that the ultimate results of the evaluation agreed with the researchers’ needs (Fawcett et al., 2010; Fawcett et al., 2009).

A description of the CHIP program was necessary because it provided the frame of reference for all subsequent answers concerning the CHIP evaluation. The program aimed at addressing different problems and enforcing different solutions. The description enabled members of the various evaluation groups to make comparisons, giving a clear link between results and their effects. It also allowed different views and perspectives on CHIP to be heard, which enabled a conclusive agreement to be reached (Connell et al., 1985).

Several elements are considered in a program description, and these enabled a clear focus for the program evaluation. They include the elements that provide concrete ways of developing the program: the expectations, the statement of need, the equipment required, and the activities undertaken (Schoenman, Mohr & Mueller, 2001). There are also elements not used in the technical description of the program, namely the program’s context and its stages of development. The description provided the necessary base on which a well-articulated program was evaluated; the extent of a program description differs for each evaluation process depending on the activities involved (Rossi et al., 1999).

Attention was also paid to the evaluation design, whereby the program was structured to capture all the crucial information agreed upon by the research team. This ensured that the evaluation process satisfied the needs of all recipients by answering all the study questions, and that it took account of problems encountered during the process, such as time and resource availability. Finally, the data collected were synthesized, analysed, and presented, ensuring that the benefits drawn from the research work were justifiable and acceptable to everyone: interviewees, participants, and recipients (Fetterman et al., 1996; Patton, 1997; Taylor-Powell, Steele & Douglas, 1996).

The different results can also be attributed to differences in the data description, analysis, and interpretation used. The main objective of qualitative research is always to produce a vivid description of the phenomenon in question and to interpret its meaning conclusively. Qualitative research also aims to develop concepts from the analysed healthcare data and ultimately link them to appropriate theories. The interview guide used in this process is normally semi-structured and focuses on the areas that carry the most controversy within healthcare (Black, 1994).

Qualitative research involves inquiry processes that help in understanding social and human aspects within a defined environment. It helps build an understanding of how people see and construct their lives as significant processes, how they relate to one another, and how they interpret those relationships within their social environment (Black, 1994; Creswell, 1998). This research seeks to establish the connection between the research objectives and the findings from the interviews. Grounded theory was used to interpret the data collected: in grounded theory, data collection, analysis, and theory are closely related, which makes it well suited to collecting, interpreting, and understanding data and lends relevance to the research undertaken (Strauss & Corbin, 1990).

Previous research has found that patients are often ignored despite wanting to express their opinions on the type of medical services they receive. Patients should not be considered merely passive clients; their active participation is necessary. This calls for the use, during the research process, of common concepts that are understandable to the patients and other participants (Strauss & Corbin, 1990).

References

Black, N. (1994). Why we need qualitative research. Journal of Epidemiology and Community Health, 48, 425-426.

Centers for Disease Control and Prevention. (1985). National Center for Chronic Disease Prevention and Health Promotion, USA.

Cook, T. D., & Reichardt, C. S. (1979). Qualitative and quantitative methods in evaluation research (Vol. 1). Beverly Hills, CA: Sage Publications.

Connell, J. P., Kubisch, A. C., Schorr, L. B., & Weiss, C. H. (1985). New approaches to evaluating community initiatives. New York, NY: Aspen Institute.

Creswell, J. W. (1998). Qualitative inquiry and research design: Choosing among five traditions. Thousand Oaks, CA: Sage.

Fawcett, S. B., Paine-Andrews, A., Francisco, V. T., Schulz, J., & Richter, K. P. (2009). Evaluating community initiatives for health and development. In I. Rootman & D. McQueen (Eds.), Evaluating health promotion approaches. Copenhagen, Denmark: World Health Organization (in press).

Fawcett, S. B., Sterling, T. D., Paine-Andrews, A., Harris, K. J., & Francisco, V. T. (2010). Evaluating community efforts to prevent cardiovascular diseases. Atlanta, GA.

Fetterman, D. M., Kaftarian, S. J., & Wandersman, A. (1996). Empowerment evaluation: Knowledge and tools for self-assessment and accountability. Thousand Oaks, CA: Sage Publications.

Guba, E., & Lincoln, Y. (1989). Fourth generation evaluation. Newbury Park, CA: Sage Publications.

Health Communication Unit at the Centre for Health Promotion. (2009). Evaluating health promotion programs. Toronto: University of Toronto.

Mathison, S. (1988). Why triangulate? Educational Researcher, 17(2), 13-17.

Owen, J., & Rogers, P. (1999). Program evaluation: Forms and approaches. London: Sage Publications.

Patton, M. Q. (1997). Utilization-focused evaluation. Thousand Oaks, CA: Sage Publications.

Ricketts, T. C. (2000). The changing nature of rural health care. Annual Review of Public Health, 21, 639-657.

Rossi, P. H., Freeman, H. E., & Lipsey, M. W. (1999). Evaluation: A systematic approach. Newbury Park, CA: Sage Publications.

Schoenman, J., Mohr, P., & Mueller, C. (2001). Impact of the Rural Hospital Flexibility Program on rural emergency medical services: Evidence from the first two years. Findings from the Field, 2(1).

Shadish, W. R., Cook, T. D., & Leviton, L. C. (1991). Foundations of program evaluation. Newbury Park, CA: Sage Publications.

Shreffler, M. J., Capalbo, S. M., Flaherty, R. J., & Heggem, C. (1999). Community decision making about Critical Access Hospitals: Lessons learned from Montana’s Medical Assistance Facility Program. Journal of Rural Health, 15(2), 180-188.

Strauss, A., & Corbin, J. (1990). Basics of qualitative research: Grounded theory procedures and techniques. London: Sage Publications.

Taylor-Powell, E., Steele, S., & Douglas, M. (1996). Planning a program evaluation. Madison, WI: University of Wisconsin Cooperative Extension.