Conducting an assessment for a study requires developing an assessment plan in which the key factors that facilitate the desired outcome, as well as the assessment instruments, are considered. As this paper shall explore, an assessment practice should first consider the factors that determine whether a researcher will use commercially designed instruments or create their own, as well as the factors that inform the nature and type of instruments a study should adopt. Bachkirova and Smith (2015) cite assessment instruments such as rubrics, interviews, questionnaires, and focus groups as techniques a researcher can use when intending to measure the outcomes of an assessment. Even so, it is paramount to weigh these factors carefully before selecting an instrument (Heidrich & Tiwary, 2013).
Factors to consider when evaluating the utility of a management method
Bopp, Fallon, Bolton, Kaczynski, Lukwago, and Brooks (2012) indicate that many researchers begin with a survey when charged with conducting an assessment in a particular field. A survey approach undertaken without first establishing the factors to follow makes it difficult to meet the desired outcomes. While many management students consider a survey an easy approach, it should be regarded as one of the most challenging steps in assessment; the survey is itself an assessment process (Heidrich & Tiwary, 2013). Therefore, considering these factors before selecting an assessment instrument is crucial.
The process of selecting appropriate instruments for conducting an assessment in a management study requires careful consideration of several factors, including audience expectations, availability of resources, methodology, the type of assessment outcome, and the purpose of the assessment. Schoenherr, Ellram, and Tate (2015) suggest that it is critical for a researcher conducting an assessment to organize these factors and follow them sequentially (Heidrich & Tiwary, 2013).
As such, the assessment should begin by determining the purpose of conducting the management study. Starting with a purpose, as this paper shall explore, leads to the next step: choosing the assessment outcome the researcher will be required to measure. Heidrich and Tiwary (2013) further indicate that the desire to achieve the goal of a study prompts the selection of a data collection method. Various options are available, including quantitative methods and qualitative designs; in other instances, a researcher can adopt multiple methods. Finally, a researcher should take into account what the audience expects from the research and the resources needed to attain the desired outcome (Heidrich & Tiwary, 2013).
An understanding of the purpose of a study on management is cited by management strategists as a key requirement in developing a proper assessment that is relevant to the goals being sought. The role management plays in the business arena makes it an important discipline that demands careful understanding, and researchers are therefore left with little option but to ensure that it is effectively understood.
Importantly, the professional literature recommends that a researcher seek to answer the question “why” before conducting an assessment. Bunch, Cameron, and Mora (2011), in the article “Guidelines to conducting threat susceptibility and identification assessments of pipelines prior to reactivation,” challenge management practitioners to examine why they are conducting an assessment. As such, practitioners should review their strategic goals and missions and organize their assessment instruments, since these are critical for meeting acceptable outcomes. Aligning the design of an assessment with these goals provides the steps necessary for collecting and measuring data.
The article by Harris and Clark (2014) points out that, in order to assess the outcomes of a study effectively, a researcher or practitioner must align missions and goals (Azadegan & Kolfschoten, 2014). Reviewing those goals and missions determines the style of assessing outcomes, and a practitioner’s outcome will be assessed under one of the following categories:
- Customer service,
- Product development, or
- Organizational learning.
Management scholars have cited these three categories as crucial for assessing the outcomes of management practices (Heidrich & Tiwary, 2013).
Assessing the extra-value service an organization provides to customers is a management role concerned with meeting the quality threshold that satisfies the various needs of customers. Developing an understanding of clients’ various needs and ensuring those needs are sufficiently addressed through established quality programs is an important assessment measure. This will help a practitioner understand the need to enhance customer satisfaction and to establish effective policies for improving customer service. Wright and Ogbuehi (2014) indicate that gaining knowledge of the characteristics of clients or users of organizational products or services is critical for assessing service outcomes.
Learning, development, and service provision differ inherently, which requires the application of different instruments and assessment methods. Once an important measure has been identified, the data to be assessed will be collected through either a qualitative or a quantitative method. Sometimes a pre-test or post-test instrument is used to measure learning outcomes. Ghasabeh, Reaiche, and Soosay (2015) point out that determining the type of outcome needed makes it easy to identify the type of data a management practitioner intends to measure (Heidrich & Tiwary, 2013).
It is important to highlight that determining the type of assessment is critical for choosing the data collection method to adopt. According to Bunch, Cameron, and Mora (2011), after a type of outcome is selected, a management practitioner should proceed with the assessment using an appropriate data collection method. Such assessments can be carried out via qualitative or quantitative methods, and it is vital that the researcher knows the advantages and disadvantages of each. Multiple methods can also be used, depending on what best yields the desired outcome (Azadegan & Kolfschoten, 2014).
The article entitled “An assessment framework for practicing facilitators,” published in Group Decision & Negotiation by Azadegan and Kolfschoten (2014), indicates that numerical data can be collected via quantitative methods. The validity and statistical reliability of the findings will depend on the management practitioner’s selection of appropriate statistical procedures (Heidrich & Tiwary, 2013). Quantitative methods are exemplified by standardized pre- and post-test instruments, questionnaires, and surveys. This approach allows a management practitioner to gather data from, and assess, large groups of respondents, and to test the data gathered for statistical significance. However, a qualitative methodology is required to determine the practical significance of an outcome (Azadegan & Kolfschoten, 2014).
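To make the idea of statistical significance from a pre- and post-test design concrete, the sketch below computes a paired t statistic in plain Python. The scores, the function name, and the instrument are hypothetical illustrations, not data from the studies cited above.

```python
import math
import statistics

def paired_t_test(pre, post):
    """Paired t statistic for matched pre/post-test scores (illustrative sketch)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)        # mean improvement
    sd_d = statistics.stdev(diffs)         # sample standard deviation of the differences
    t = mean_d / (sd_d / math.sqrt(n))     # t statistic with n - 1 degrees of freedom
    return t, n - 1

# Hypothetical scores from a standardized instrument given before and after a program
pre = [62, 70, 55, 68, 74, 60, 65, 71]
post = [70, 75, 60, 72, 80, 66, 70, 78]
t, df = paired_t_test(pre, post)
# With df = 7, a two-tailed critical value of about 2.365 marks significance at the 0.05 level
```

In practice a practitioner would rely on a statistical package rather than hand-rolled formulas, but the calculation shows why the selection of the procedure, here a paired rather than an independent-samples test, affects the validity of the reported significance.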
An assessment conducted via qualitative methods focuses on gaining an in-depth understanding of the subject being assessed. Data is collected through the analysis of written material, observation, focus groups, and interviews. Bunch, Cameron, and Mora (2011) indicate that, unlike the quantitative method, a qualitative approach generates rich, descriptive details and meanings. In management, it examines details inductively, looking at measures beyond the scope of a quantitative approach.
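Inductive analysis of interview or focus-group material is largely interpretive, but its simplest mechanical ingredient, tallying how often coded keywords surface under each theme, can be sketched in Python. The codebook, themes, and transcript lines below are invented for illustration and greatly simplify real qualitative coding.

```python
from collections import Counter

# Hypothetical codebook: keywords mapped to inductively derived themes
CODEBOOK = {
    "satisfied": "customer service",
    "complaint": "customer service",
    "prototype": "product development",
    "training": "organizational learning",
    "mentoring": "organizational learning",
}

def tally_themes(transcripts):
    """Count keyword occurrences per theme across a list of transcripts."""
    counts = Counter()
    for text in transcripts:
        words = [w.strip(".,") for w in text.lower().split()]
        for keyword, theme in CODEBOOK.items():
            counts[theme] += words.count(keyword)
    return counts

notes = [
    "Respondent was satisfied with training but raised one complaint.",
    "Mentoring and training dominated the discussion of the prototype.",
]
themes = tally_themes(notes)
```

A real study would code passages in context rather than count bare keywords; the point is only that qualitative evidence, once coded, can still be summarized systematically.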
Hillier, Cannuscio, Griffin, Thomas, and Glanz (2014) elaborate that an effective assessment requires resources to facilitate its completion. As such, a practitioner must consider the resources available before engaging in an assessment or choosing an assessment instrument (Heidrich & Tiwary, 2013). A budget should be drawn up, and multiple resources, as well as assessment skills, factored in. It is also very important to consider the time available for the assessment and the level of departmental support in order to accomplish the assessment smoothly. In management assessment, it is imperative to consider the following:
- Time: What demands compete for your time? In your planning, what time of the year have you set for conducting the assessment? How long will it take to collect, analyze, and evaluate the data?
- Departmental support: Do you have a support office or an assessment office? Does the assessment require technological or technical support? Do you have a team that will aid in the process of collecting and analyzing data?
- Skills: Do you have sufficient skills to code or analyze data? Do you have the skills needed for the research work? As an investigator, are you new to assessment? Are you acquainted with survey software or statistical analysis software?
An assessment project has many audiences. These include an external audience comprising national standards bodies, state officials, trustees, and the board of regents, as well as internal constituents such as an institutional division or department. Lanz (2015) indicates that the audience in an assessment project has a voice: audience members contribute to the assessment by answering questions, and they also determine the instrument the practitioner will use to gather data (Azadegan & Kolfschoten, 2014).
Heidrich and Tiwary (2013) point out that understanding the assessment audience in management is important since the audience influences the design of the assessment and is a source of direct evidence, which is critical in categorizing data. Obtaining both qualitative and quantitative data from an audience or respondents helps ensure that the research questions are framed in a manner specific to the research problem being addressed.
Management is affected by a large number of factors and can be approached from numerous points of view; the same can be said of the role technology plays in enhancing production. The research approach is determined by the nature of the research, which requires the researcher to design the response accordingly. The use of statistical data collection and analysis tools will also help ensure that the approach is carried out smoothly.
References
Azadegan, A., & Kolfschoten, G. (2014). An assessment framework for practicing facilitators. Group Decision & Negotiation, 23(5), 1013-1045.
Bachkirova, T., & Smith, C. (2015). From competencies to capabilities in the assessment and accreditation of coaches. International Journal of Evidence Based Coaching & Mentoring, 13(2), 123-140.
Bopp, M., Fallon, E. A., Bolton, D. J., Kaczynski, A. T., Lukwago, S., & Brooks, A. (2012). Conducting a Hispanic health needs assessment in rural Kansas: Building the foundation for community action. Evaluation & Program Planning, 35(4), 453-460.
Bunch, C., Cameron, G., & Mora, R. G. (2011). Guidelines to conducting threat susceptibility and identification assessments of pipelines prior to reactivation. Journal of Pipeline Engineering, 10(1), 19-29.
Ghasabeh, M., Reaiche, C., & Soosay, C. (2015). The emerging role of transformational leadership. Journal of Developing Areas, 49(6), 459-467.
Harris, J., & Clark, C. (2014). Strengthening methodological architecture with multiple frames and data sources. Statistical Journal of the IAOS, 30(4), 381-384.
Heidrich, O., & Tiwary, A. (2013). Environmental appraisal of green production systems: Challenges faced by small companies using life cycle assessment. International Journal of Production Research, 51(19), 5884-5896.
Hillier, A., Cannuscio, C., Griffin, L., Thomas, N., & Glanz, K. (2014). The value of conducting door-to-door surveys. International Journal of Social Research Methodology, 17(3), 285-302.
Lanz, J. (2015). Conducting information technology risk assessments. CPA Journal, 85(5), 6-9.
Schoenherr, T., Ellram, L., & Tate, W. (2015). A note on the use of survey research firms to enable empirical data collection. Journal of Business Logistics, 36(3), 288-300.
Wright, B., & Ogbuehi, A. (2014). Surveying adolescents: The impact of data collection methodology on response quality. Electronic Journal of Business Research Methods, 12(1), 41-53.