Introduction
In quantitative research, we classify features, count them, and even construct more complex statistical models in an attempt to explain what is observed. Findings can be generalized to a larger population, and direct comparisons can be made between two corpora, as long as valid sampling and significance techniques have been used (Hittleman & Simon, 2002). Quantitative analysis therefore lets us discover which phenomena are likely to be genuine reflections of the behavior of a language or variety and which are merely chance occurrences. The more basic task of simply examining a single language variety allows one to obtain a precise picture of the frequency and rarity of particular phenomena, and thus of their relative normality or abnormality.
Conversely, the picture of the data that emerges from quantitative analysis is less rich than that obtained from qualitative analysis. For statistical purposes, classifications have to be of the hard-and-fast, or “Aristotelian”, type: an item either belongs to class x or it does not. Quantitative analysis is therefore an idealization of the data in some cases. It also tends to sideline rare occurrences. To ensure that certain statistical tests (such as chi-squared) give reliable results, minimum frequencies must be obtained, which means that categories may have to be collapsed into one another, resulting in a loss of data richness (Hittleman & Simon, 2002).
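As an illustrative sketch of this point about minimum frequencies, the following Python fragment (with entirely hypothetical counts) runs a chi-squared test and collapses sparse categories when expected counts fall below the usual rule of thumb of five per cell.

```python
# Sketch: chi-squared test of association with a check on minimum expected
# counts, using hypothetical frequency data. Categories with low expected
# counts may need to be collapsed before the test is trustworthy.
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = variety/corpus, columns = feature category
observed = np.array([
    [48, 31, 6],   # variety A
    [52, 40, 2],   # variety B
])

chi2, p, dof, expected = chi2_contingency(observed)
print("expected counts:\n", expected)

# Common rule of thumb: expected counts should be at least about 5 per cell.
if (expected < 5).any():
    # Collapse the last two columns into one and retest (losing data richness).
    collapsed = np.column_stack([observed[:, 0], observed[:, 1] + observed[:, 2]])
    chi2, p, dof, expected = chi2_contingency(collapsed)

print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```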
Quantitative researchers acknowledge that both the natural and the social sciences strive for testable and confirmable theories that explain phenomena by showing how they are derived from theoretical assumptions. They likewise reduce social reality to variables in the same way as physical reality, and they try to control the variables in question tightly in order to see how other variables are influenced.
Quantitative research seeks to determine the relationship between one thing (an independent variable) and another (a dependent or outcome variable) (Creswell, 2003). In other words, quantitative research is a formal, objective, systematic process in which numerical data are used to obtain information about the world (Scott, 2000). It thus establishes relationships between two or more variables, using statistical techniques to test both the strength and the significance of the relationship (Creswell, 2003).
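A minimal sketch of testing both the strength and the significance of such a relationship, using simulated data for two hypothetical variables, might look like this:

```python
# Sketch: strength and significance of a relationship between an independent
# and a dependent variable, with simulated (hypothetical) numeric data.
import numpy as np
from scipy.stats import pearsonr, linregress

rng = np.random.default_rng(0)
x = rng.normal(size=100)                    # independent variable
y = 0.5 * x + rng.normal(size=100)          # dependent variable with a true effect

r, p = pearsonr(x, y)                       # strength (r) and significance (p)
fit = linregress(x, y)                      # slope quantifies the relationship
print(f"r = {r:.2f}, p = {p:.4f}, slope = {fit.slope:.2f}")
```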
Quantitative research designs
A quantitative research design can be either descriptive, in which case the subjects are normally measured once, or experimental, in which case the subjects are measured before and after a treatment (Creswell, 2003). A descriptive study can only establish associations between variables, whereas an experimental study can establish causality.
Descriptive studies
Descriptive research is employed where information about the current status of phenomena is needed in order to describe “what exists” with respect to the variables or conditions in a situation (Wallen & Fraenkel, 2001). In a descriptive study the researcher does not manipulate behavior or conditions; things are measured as they are. The techniques involved vary: one can use a survey, which describes the status quo; a correlational study, which explores the relationships between variables; or a developmental study, which seeks to identify changes over time.
Descriptive studies are also known as observational studies, because the subjects are observed without any intervention. A simple descriptive study is one in which data are reported on only one subject (Creswell, 2003); descriptive studies involving a small number of cases are known as case series. In cross-sectional studies, the variables of interest in a sample of subjects are measured once and the relationships between them are examined.
In prospective or cohort studies, several variables are measured at the start of the study (for instance, dietary habits), and after a certain length of time the outcomes are determined (for example, the incidence of heart disease). Cohort studies are also referred to as longitudinal studies, although that term is also applied to experiments.
Case-control studies compare cases, i.e. subjects with a particular trait or condition such as an injury, with controls (subjects who lack the trait or condition). The comparison concerns exposure to something suspected of causing the cases, for instance the number of glasses of beer consumed in a day (Wallen & Fraenkel, 2001).
Case-control studies are also referred to as retrospective, because they focus on past events that may have led subjects to become cases rather than controls. It is worth noting that, to obtain accurate estimates of the relationship between variables, a descriptive study requires a sample of hundreds or more subjects, whereas an experiment may need only about ten (Creswell, 2003). The estimate of the relationship is less likely to be biased when there is a high participation rate in a sample selected at random from the population.
Experimental studies
Experimental research designs are based on the assumption that the world operates according to causal laws. These laws are fundamentally linear, however complex and interactive they may be (Creswell, 2003). An experimental study aims to discover these cause-and-effect laws by isolating causal variables.
A more flexible version of the philosophical assumption behind experimental designs is that sometimes, and in some respects, the world works according to causal laws. Such cause-and-effect relationships may not hold with absolute certainty, but demonstrating cause and effect is valuable in some situations (Hittleman & Simon, 2002). Important psychological questions about what causes what are best answered with experimental research designs.
Experimental research designs are used for the controlled testing of causal processes. The general approach is to manipulate one or more independent variables in order to determine their effect on a dependent variable (Creswell, 2003). These designs are employed where there is temporal priority in a causal relationship, where the causal relationship is consistent, or where the magnitude of the correlation is large.
The object of experimental research is thus to establish cause-and-effect relationships between variables. A hypothesis is formulated to the effect that the independent variable caused the changes in the dependent variable; those changes, however, may have been caused by many other factors, the alternative hypotheses (Creswell, 2003). The aim of an experimental design is therefore to eliminate the alternative hypotheses. Where the alternative hypotheses are successfully eliminated, it can be asserted, by a process of elimination, that the independent variable is the cause.
Experimental designs are acknowledged as the most ‘rigorous’ of all research designs, or as the ‘gold standard’ against which other designs are judged. This is true in one sense: if an experimental design can be implemented well, the trial is probably the strongest design with respect to internal validity.
Experimental studies are also known as longitudinal or repeated-measures studies, and as interventions, because the researcher intervenes with the subjects rather than simply observing them (Creswell, 2003). In experiments, the random assignment of subjects to treatments, coupled with the researcher’s control over the treatments, produces unbiased results (Sowell, 2001). In an experiment the researcher takes initial measurements, applies some sort of intervention, and then measures again to see what has changed.
In a time series, the simplest experiment, one or more measurements are taken on all subjects before and after a treatment (Wallen & Fraenkel, 2001). The single-subject design, a classic example of a time series, involves taking measurements many times (e.g. 20 times) before and after intervening with one or a few subjects.
The time series has weaknesses, because changes could be the result of something other than the treatment. For instance, subjects may do better on a subsequent test because of their experience with the first test, or they may change their diets between tests, which could alter their response (Scott, 1999). To remedy this, the cross-over design is employed. Two treatments are given: one is the real treatment, while the other acts as a control or reference treatment. Half the subjects receive the real treatment first and the other half the control first; then, after the effects of the first treatment have washed out, the treatments are crossed over (Wallen & Fraenkel, 2001).
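A minimal sketch of how the resulting within-subject comparison might be analyzed, using simulated scores for a hypothetical two-period cross-over, is shown below.

```python
# Sketch of a two-period cross-over analysis with simulated data: each subject
# receives both the real and the control treatment, and the within-subject
# differences are tested with a paired t-test.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
n = 12
control = rng.normal(loc=50, scale=5, size=n)              # scores under control
treatment = control + rng.normal(loc=3, scale=2, size=n)   # scores under real treatment

t, p = ttest_rel(treatment, control)
print(f"mean difference = {np.mean(treatment - control):.2f}, t = {t:.2f}, p = {p:.3f}")
```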
Where the treatment effects cannot wash out, it is advisable to use a control group. In this design all subjects are measured, but only the experimental group receives the treatment; all subjects are then measured again, and the change in each group is noted and compared (Scott, 2000).
A randomized controlled trial is one in which subjects are allocated randomly to experimental and control groups or treatments. Random assignment reduces the likelihood that either group is unrepresentative of the population. Where the subjects are blind, or masked, to the identity of the treatment, the design is a single-blind controlled trial, and the reference treatment is known as a placebo (Creswell, 2003). Subjects are blinded in order to eliminate the placebo effect. Blinding of the experimenter is also considered essential, as it helps prevent biased treatment of the subjects by the experimenter. In a double-blind study, the experimenter does not know which treatment a subject is receiving until after all the measurements have been taken.
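As a sketch of the allocation step (all subject IDs and codes here are hypothetical), random assignment with the treatment identity hidden behind arbitrary codes could look like this:

```python
# Sketch: random assignment of subjects to two groups, with the treatment
# identity hidden behind codes so that subjects (and, in a double-blind trial,
# the experimenter) do not know who receives what until the key is opened.
import random

subjects = [f"S{i:02d}" for i in range(1, 21)]   # hypothetical subject IDs
random.seed(42)
random.shuffle(subjects)

half = len(subjects) // 2
allocation = {s: "A" for s in subjects[:half]}           # coded group A
allocation.update({s: "B" for s in subjects[half:]})     # coded group B

# The key linking codes to treatments is kept sealed until analysis.
sealed_key = {"A": "real treatment", "B": "placebo"}
print(allocation)
```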
Quasi-experimental designs
Quasi-experimental designs are employed in conditions that are not conducive to experimental control. They have been developed to control as many threats to validity as possible in circumstances where at least one of the three ingredients of a true experiment (manipulation, randomization, a control group) is missing (Creswell, 2003). Quasi-experimental designs come in many different forms, most of them variations on experimental designs. For instance, a researcher may use control and treatment groups that have formed naturally rather than being randomly assigned.
Quality of designs
The quality of the evidence for cause-and-effect relationships varies across designs. The weakest designs are cases and case series, whereas a well-designed cross-sectional or case-control study can provide good evidence for the absence of a relationship. Where such a study does show a relationship, it usually amounts only to suggestive evidence of a causal connection. Prospective studies are cumbersome and time-consuming, but their results are more convincing where cause and effect is at issue (Tuckman, 1999).
Experimental studies provide the most definitive evidence of how one thing affects another, and they therefore need fewer subjects (Sowell, 2001). Double-blind randomized controlled trials are the best experiments.
In descriptive studies that attempt to establish cause and effect there can be a problem of confounding. Confounding occurs when part or all of the association between two variables is actually due to a third variable (Creswell, 2003). An example is the negative association between habitual activity and most types of degenerative disease, both of which are related to old age. To overcome the problem, the confounding factor has to be controlled; in this scenario, one could make sure that all subjects are of the same age (Scott, 2000).
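Another common way to control a confounder is to include it as a covariate in a regression model. The sketch below, with simulated data in which age drives both activity and disease, is only an illustration of that idea.

```python
# Sketch: controlling a confounder (age) by including it as a covariate in a
# regression, using simulated data where age drives both activity and disease.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
age = rng.uniform(30, 80, n)
activity = 100 - age + rng.normal(0, 5, n)      # activity declines with age
disease = 0.05 * age + rng.normal(0, 1, n)      # disease risk rises with age

# Crude model: activity appears to "protect" against disease.
crude = sm.OLS(disease, sm.add_constant(activity)).fit()

# Adjusted model: once age is controlled, the activity effect shrinks.
X = sm.add_constant(np.column_stack([activity, age]))
adjusted = sm.OLS(disease, X).fit()
print(crude.params, adjusted.params)
```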
Samples
In studies one works with a sample rather than the whole population. To be able to generalize the outcome of a study, the sample must be truly representative of the population. To ensure this, a random selection procedure, or alternatively a stratified sampling procedure, should be employed (Creswell, 2003).
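A brief sketch of the two procedures, drawing from a hypothetical sampling frame, is given below; the stratum labels and sizes are invented for illustration.

```python
# Sketch: simple versus stratified random sampling from a hypothetical
# population frame, so that the sample mirrors the population's strata.
import random

random.seed(3)
population = [{"id": i, "stratum": "urban" if i % 3 else "rural"} for i in range(900)]

# Simple random sample of 90 members
simple_sample = random.sample(population, 90)

# Stratified sample: draw proportionally within each stratum
strata = {}
for person in population:
    strata.setdefault(person["stratum"], []).append(person)

stratified_sample = []
for name, members in strata.items():
    k = round(90 * len(members) / len(population))
    stratified_sample.extend(random.sample(members, k))

print(len(simple_sample), len(stratified_sample))
```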
Improper sampling results in what is known as selection bias, the net effect of which is that the expected value of the statistic in the sample differs from its value in the population. Socioeconomic status and age are regarded as sources of bias in population studies, because people with particular values of these variables tend not to participate in such studies (Wallen & Fraenkel, 2001).
Bias can also be introduced by a failure to randomize subjects to the control and treatment groups in experiments. It is therefore vital to assign subjects randomly in a manner that ensures balance on the important variables that can influence the effect of the treatment (e.g. age, gender, physical performance). The number of subjects to study can be determined on the basis of statistical significance, of confidence intervals, or of confidence intervals ‘on the fly’.
Statistical significance: the sample size has to be big enough to detect the smallest worthwhile effect or relationship between the variables. Confidence intervals: this approach requires only enough subjects to place acceptable bounds on the estimate of the population value. A bound here means the 95% confidence limits; ‘acceptable’ means that the upper and lower limits should be close together (Scott, 2000).
On the fly: with small sample sizes the confidence interval is wide, and when the observed effect is close to zero a narrow confidence interval is needed to rule out the possibility that the true population value is substantially positive or substantially negative (Creswell, 2003). The researcher begins with a small sample size and then gradually adds subjects until the confidence interval is acceptable for the magnitude of the effect that emerges.
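The sketch below illustrates both ideas with hypothetical numbers: a power-based sample size for an assumed smallest worthwhile standardized effect of 0.3, and the way a 95% confidence interval for a mean narrows as subjects are added.

```python
# Sketch: two ways of judging sample size. First, the n per group needed to
# detect a small standardized effect at conventional power; second, how the
# half-width of a 95% confidence interval shrinks as n grows (sd assumed = 10).
import numpy as np
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(f"subjects per group for d = 0.3: {n_per_group:.0f}")

sd = 10.0
for n in (10, 40, 160, 640):
    half_width = 1.96 * sd / np.sqrt(n)
    print(f"n = {n:4d}: 95% CI is mean +/- {half_width:.1f}")
```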
Effect of research design on sample size
As indicated above, the type of research design has an impact on the required sample size (Creswell, 2003). Descriptive studies require hundreds of subjects to provide suitable confidence intervals for small effects. Controlled trials generally require far fewer, and cross-over designs fewer still.
Effect of validity and reliability on sample size
The required sample size also depends on the precision of the measurements: the worse the measurements, the greater the number of subjects required to detect the effect. Precision is expressed as validity and reliability. Validity indicates how well a variable measures what it is supposed to measure (Wallen & Fraenkel, 2001); in descriptive studies, poor validity of the main variables calls for more subjects than the usual hundreds. Reliability is the ability of a measure to reproduce similar results on repeated testing; in experimental studies, the more reliable a measure is, the fewer subjects are needed to detect a change in that measure.
Validity can be either internal or external (Scott, 2000). Internal validity indicates that the experiment can be interpreted as intended, whereas external validity concerns generalizability, i.e. whether, and to whom, the results of the experiment can be generalized (Creswell, 2003). Several factors commonly threaten internal validity, for instance history: specific events occurring between measurements in addition to the experimental variable.
Maturation – processes operating within the subjects simply as time passes, such as growing older, hungrier, or more tired (Hittleman & Simon, 2002).
Testing – the effects of taking a test on the scores of subsequent testing.
Instrumentation – changes in the calibration of an instrument, or changes in the observers or scorers, can produce altered measurements.
Statistical regression – this occurs when groups are selected on the basis of their extreme scores.
Selection – bias in the differential selection of respondents for the comparison (treatment and control) groups (Wallen & Fraenkel, 2001).
Experimental mortality – the differential loss of subjects from the comparison groups (Creswell, 2003).
Selection–maturation interaction – this occurs mostly in quasi-experimental designs. Representativeness (external validity) can also be jeopardized by the reactive or interaction effect of testing, in which a pretest sensitizes subjects; by the interaction of selection bias with the experimental variable; by reactive effects of the experimental arrangements; and by multiple-treatment interference where the effects of earlier treatments are not erasable (Scott, 1999).
Pilot studies
Where there is not enough time or resources to gather a sample of the required size, the study can be treated as a pilot study in anticipation of a larger study (Cooper, 1998). Pilot studies are important for developing and checking the feasibility of the methods to be employed and for estimating how big the sample of the final study should be (Tuckman, 1999). In running pilot studies, the sampling procedures and techniques should be followed just as if one were performing the full study.
Meta-analysis
Failure to test enough subjects to obtain an acceptably narrow confidence interval should not prevent one from publishing the findings, as they will at least set bounds on how big or how small the effect can be (Cooper, 1998). Such findings, when combined with the findings of similar studies, form a meta-analysis, which derives a confidence interval for the effect from several studies.
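A minimal sketch of how several studies' effects might be pooled, here with a simple fixed-effect inverse-variance weighting and invented effect estimates, is shown below; real meta-analyses usually also examine heterogeneity between studies.

```python
# Sketch of a fixed-effect meta-analysis: effects from several hypothetical
# studies are pooled with inverse-variance weights, giving a combined estimate
# and a narrower confidence interval than any single study.
import numpy as np

effects = np.array([0.40, 0.25, 0.55, 0.10])   # hypothetical study effects
ses = np.array([0.30, 0.20, 0.35, 0.25])       # their standard errors

weights = 1.0 / ses**2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.2f}, 95% CI [{low:.2f}, {high:.2f}]")
```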
Collecting the data
Data collection in quantitative studies takes different forms, including face-to-face, telephone, and online methods (Cooper, 1998). There is also a growing trend towards the use of mystery shopping and direct observation. Whichever method is employed, there are pros and cons with respect to the research objectives, time, and finances (Wiersma, 2000). Postal or self-completion surveys, for instance, are advisable mainly in certain circumstances.
Question design/questionnaire
Questionnaire design involves writing questions that are clear, consistent, understandable, unambiguous, meaningful, relevant, and tightly defined (Creswell, 2003). The questionnaire must be capable of eliciting deep and relevant information to enable the researcher to make more effective decisions (Sowell, 2001). Each research technique has its advantages and disadvantages in different circumstances and therefore needs to be used with strict observance of sound sampling principles to guarantee acceptable quality (Cooper, 1998).
Background
This study is centered on distributed leadership styles in schools as a reflection of improved performance (Cooper, 1998). Research on school improvement has focused on the ‘traits’ of successful principal leadership, encompassing the principal’s abilities, behavior, and interpersonal style.
Earlier research has provided cogent evidence that strong leadership from principals is positively related to successful school improvement (Johnson & Christensen, 2000). More recent research, however, has broadened the leadership skills associated with school improvement from the sole domain of principals to the teaching staff as a whole; I would nevertheless argue that teachers remain under the direction of principals (Cooper, 1998).
Research has been carried out to examine the relationship between principals’ leadership styles and the enhancement of school performance, including the school learning environment (Johnson & Christensen, 2000). I will modify such research to include leadership behaviors that can affect the relationship dimensions of the school learning environment, in particular student supportiveness and teacher affiliation (Cooper, 1998). It is important to note that transformational principals are better able to manipulate and change environmental constraints in order to achieve their performance objectives.
Theoretical framework
The experimental technique is the only method of research that can truly test hypotheses concerning cause-and-effect relationships. It represents the most valid approach to the solution of educational problems, both practical and theoretical, and the advancement of education as a science.
Under this method, the principal’s role can be treated as either a dependent or an independent variable. Taken as a dependent variable, the principal is seen as influenced by external factors, for instance socioeconomic status or current external circumstances such as technological advancement (Wiersma, 2000). Taken as an independent variable, the principal is seen as the driver of change: directly influencing what teachers do and creating favorable working conditions, producing outcomes such as teachers’ job satisfaction, and indirectly influencing students’ learning outcomes.
Transformational, transactional and laissez-faire leaders
Transformational leadership is said to occur when leaders’ and followers’ goals become fused, so that leader and followers raise each other’s morale in pursuit of the desired goals; the leader develops in his or her followers an eagerness to commit strongly to the desired objectives (Scott, 2000). Transactional leadership occurs where there is a simple exchange of one thing for another. Laissez-faire leadership is theorized to occur where there is no leadership at all (Wiersma, 2000). Transformational leadership models assume that leaders are capable of changing their environments to achieve their desired goals. Transformational principals do this by promoting educational reform and innovation: they emphasize building a vision, encourage mutual participation, and elevate the role of followers (students and teachers) to that of leaders (Hittleman & Simon, 2002).
School learning environment
The school learning environment denotes a set of features said to influence the feel or personality that a school displays. In other words, it comprises the characteristics that distinguish one school from another and that have a bearing on the behavior of its members, both staff and students; these characteristics operate at both the classroom and the school level (Hittleman & Simon, 2002). A wide variety of factors can influence the personality of a school, but this research on the relationship between principals’ styles and school improvement will consider the variables of student supportiveness and affiliation, that is, the extent of the connection between teachers and students and the degree of collegiality among staff members (Tuckman, 1999).
The purpose of the study
While it does not cover everything, the research aims to study the effects of principals’ transformational, transactional, and laissez-faire leadership styles on schools (Wiersma, 2000). The paper will aim to show the effects of transformational and transactional leadership on teacher outcomes (Hittleman & Simon, 2002). In particular, it examines three leadership behaviors: vision, individualized consideration, and laissez-faire leadership.
Method
Schools and their principals should be selected by random sampling, with a staff sample of size n = x. The Multifactor Leadership Questionnaire (MLQ-5X, short form) should be used to examine the transformational and transactional leadership constructs (Cooper, 1998).
The School Learning Environment Questionnaire (SLEQ) is used to examine learning-environment constructs such as student supportiveness, affiliation, professional interest, centralization, innovation, resource availability, and achievement orientation (Wiersma, 2000).
Four scales should be adapted to measure selected principal outcomes (Cooper, 1998). These should encompass satisfaction with the leadership (taken from the MLQ-5X short form), perceptions of teacher effectiveness, perceptions of teacher influence, and perceptions of teacher control (Wiersma, 2000). This should be followed by the identification of the variables. A confirmatory factor analysis technique is proposed for analyzing the data collected from the selected schools and principals (Johnson & Christensen, 2000). Factor scale scores are then calculated for each of the variables, and the analysis is completed with a multilevel modeling technique that examines the relationships between the variables (Wallen & Fraenkel, 2001).
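As an illustration of the factor-scale-score step, the following sketch averages hypothetical item responses into scale scores; the item and scale names are invented and do not reproduce the actual MLQ or SLEQ items.

```python
# Sketch: computing factor scale scores from hypothetical questionnaire item
# responses by averaging the items assigned to each scale.
import pandas as pd

responses = pd.DataFrame({
    "vision_1": [4, 5, 3], "vision_2": [4, 4, 2],
    "indiv_consid_1": [3, 5, 2], "indiv_consid_2": [4, 4, 3],
    "laissez_faire_1": [2, 1, 4], "laissez_faire_2": [1, 2, 5],
})

scales = {
    "vision": ["vision_1", "vision_2"],
    "individual_consideration": ["indiv_consid_1", "indiv_consid_2"],
    "laissez_faire": ["laissez_faire_1", "laissez_faire_2"],
}

scores = pd.DataFrame({name: responses[items].mean(axis=1) for name, items in scales.items()})
print(scores)
```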
Results and discussion
The multilevel modeling technique is employed because of its efficiency in relating the explanatory variables (vision, individualized consideration, and laissez-faire leadership) to the associated school-learning-environment response variables, i.e. student supportiveness and affiliation (Wiersma, 2000). It is well suited to this purpose because it does not assume causality between the variables examined (Cooper, 1998). The data should be standardized so that the variables share a common metric, allowing comparisons between them.
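A minimal sketch of this step, using simulated teacher-within-school data and hypothetical variable names, standardizes the variables and fits a random-intercept model; it only illustrates the kind of multilevel analysis described above.

```python
# Sketch of the multilevel step: variables are standardized to a common metric
# (z-scores) and a mixed model with a random intercept for school is fitted.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_schools, n_teachers = 20, 15
school = np.repeat(np.arange(n_schools), n_teachers)
vision = rng.normal(size=n_schools)[school] + rng.normal(0, 0.5, n_schools * n_teachers)
affiliation = 0.4 * vision + rng.normal(0, 1, n_schools * n_teachers)

df = pd.DataFrame({"school": school, "vision": vision, "affiliation": affiliation})
df[["vision", "affiliation"]] = df[["vision", "affiliation"]].apply(
    lambda col: (col - col.mean()) / col.std()   # z-score standardization
)

model = smf.mixedlm("affiliation ~ vision", df, groups=df["school"]).fit()
print(model.summary())
```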
At the end of the research, one would expect to see that school principals, as leaders of their schools, have the capacity to adjust their behaviors and thereby produce distinct effects on certain aspects of the learning environment (Wiersma, 2000). As discussed above, it is important to note that laissez-faire leadership also needs to be considered when principals wish to shape the school learning environment (Wallen & Fraenkel, 2001).
Reasons for choosing the design
The study can be carried out with experimental techniques, since the number of subjects required for the research is manageable (Hittleman & Simon, 2002). With this method the problem of confounding is controlled by ensuring that the subjects are all drawn from comparable school settings. Its extensive use of questionnaires also makes it cost-effective and easy to administer (Hittleman & Simon, 2002).
In this method the what, how, and why questions are fully taken into consideration: for example, what is happening in a certain school, how that school can achieve improved performance, and why it performs well (Creswell, 2003). With well-formulated hypotheses, the outcome of the study allows the association to be established. In the present study, one can hypothesize that the principal’s leadership style is related to school performance (Johnson & Christensen, 2000). One would be interested to know which leadership skills the principals of high-performing schools possess and how those principals use them (Cooper, 1998).
The design underscores the purpose of the study and fully reveals the leadership styles of school principals. By adhering to this design, several questions can be answered (Cooper, 1998): the type of leadership styles employed by principals to improve school performance; whether leadership styles vary with the principal’s field experience; whether leadership styles vary according to teachers’ field experience; and whether leadership styles change with teachers’ work experience with their current principals (Sowell, 2001). There is also the question of whether principals regard themselves as effective managers and leaders, and whether teachers regard their principals as effective managers and leaders (Scott, 2000).
References
Cooper, H. (1998). Synthesizing research: A guide for literature reviews (3rd ed.). Thousand Oaks, CA: Sage Publications.
Creswell, J. W. (2003). Research design: Qualitative, quantitative, and mixed methods approaches (2nd ed.). Thousand Oaks, CA: Sage Publications.
Hittleman, D. R., & Simon, A. J. (2002). Interpreting educational research (3rd ed.). Upper Saddle River, NJ: Merrill Prentice Hall.
Johnson, B., & Christensen, L. (2000). Educational research: Quantitative and qualitative analysis. Boston: Allyn and Bacon.
Scott, D. (Ed.). (1999). Values and educational research. London: Institute of Education, University of London.
Scott, D. (2000). Realism and educational research: New perspectives and possibilities. London: RoutledgeFalmer.
Sowell, E. J. (2001). Educational research: An integrative introduction. Dubuque, IA: McGraw-Hill.
Tuckman, B. W. (1999). Conducting educational research (5th ed.). Fort Worth, TX: Harcourt Brace College Publishers.
Wallen, N. E., & Fraenkel, J. R. (2001). Educational research: A guide to the process. Mahwah, NJ: Lawrence Erlbaum Associates.
Wiersma, W. (2000). Research methods in education: An introduction (7th ed.). Boston: Allyn and Bacon.