Post factum control

Statistical control based on the methods of analysis of covariance can be regarded as a special case of the general technique of indirect control. This method, widely used in quasi-experimental plans, is known as post factum control. Its essence is that the results obtained in the experiment are grouped on various grounds other than the one specified by the experimental hypothesis, i.e., other than the basic variable. The general scheme of such control is shown in Fig. 14.1.

Fig. 14.1. Scheme of post factum control

Suppose we have reason to believe that a kindergarten teacher's success in establishing emotional contact with a child is determined by the level of her emotional intelligence. To test this hypothesis, we can divide the entire potential sample of subjects into two groups, with high and low levels of emotional intelligence, then estimate for each teacher how effectively she establishes emotional contact with the child, and compare the results of the two groups formed on this classification basis.

The problem here, however, is that with this division we are in no position to control other individual indicators that can also act as predictors of success. Such indicators include the teacher's qualification level, her length of service, her age, the presence or absence of her own children, and so on.

When evaluating the differences between the two experimental groups, we cannot be completely sure that these differences are determined precisely by the factor of emotional intelligence, and not by other factors related to the individual characteristics of our subjects, their household or family circumstances, or their working conditions. It must be shown that the other variables either do not influence the dependent variable at all, or that their influence is weaker than that of the basic variable under study.

To show this, the results of both experimental groups are pooled, and the entire experimental sample is then divided on the basis of some other variable that acts as an analogue of an independent variable, say, the presence or absence of the teacher's own children. The newly obtained groups are compared with each other. The same procedure is then repeated for the next secondary variable, and so on until the list of potential threats to internal validity is exhausted. If no significant statistical differences between the groups are detected in the course of this procedure, or if such differences are less pronounced than the differences between the groups formed on the basis of the experimental hypothesis, the experimental hypothesis can be accepted.
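The logic of this procedure can be illustrated with a minimal Python sketch. The data are synthetic, and the column names (contact_score, high_eq, has_children) are hypothetical placeholders; a real study would substitute its own measures and, typically, more secondary variables.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n = 60
# Synthetic sample: contact success driven mainly by emotional intelligence.
df = pd.DataFrame({
    "high_eq": rng.integers(0, 2, n),       # basic variable (nominal)
    "has_children": rng.integers(0, 2, n),  # secondary variable (nominal)
})
df["contact_score"] = 5 + 2 * df["high_eq"] + rng.normal(0, 1, n)

def compare_groups(data, grouping_var, dv="contact_score"):
    """Split the sample on a nominal variable and compare group means."""
    g0 = data.loc[data[grouping_var] == 0, dv]
    g1 = data.loc[data[grouping_var] == 1, dv]
    return stats.ttest_ind(g0, g1)

# Step 1: split by the basic variable named in the experimental hypothesis.
# Step 2: pool the sample and re-split by each secondary variable in turn.
for var in ["high_eq", "has_children"]:
    result = compare_groups(df, var)
    print(f"{var}: t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
# The hypothesis is retained if the secondary splits give no significant
# (or clearly weaker) differences than the split by the basic variable.
```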

Note that this type of control is usually used when the secondary variables that threaten internal validity are measured on a nominal scale. Otherwise, the procedures of covariance analysis or correlation and regression analysis are preferable.

Features of causal analysis in correlation plans

In a correlational study, just as in a true experiment, it is important to trace not only how two variables are related to each other, but also how one variable can affect the other. A method that allows one to predict the values of one variable from the values of another, provided that the relationship between them is linear, is called linear regression. It involves studying the pairwise correlations among a group of variables, one of which is treated as the dependent variable; it is called the criterion. All the other variables are treated as independent variables; they are called predictors.

This method, in effect, addresses the third-variable problem by means of statistical analysis of the data. The problem is that the changes in two correlated variables may be caused by a third variable that itself correlates with one or both of the variables under study. For example, the correlation between the number of road accidents and the water level in natural reservoirs can be explained by the amount of precipitation: precipitation raises the water level in the reservoirs and also complicates the traffic situation, which in turn increases the number of accidents. Explanations of this kind are called causal models.
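To make the third-variable problem concrete, here is a toy simulation (invented numbers, not real traffic or hydrological data) in which precipitation drives both quantities, so they correlate without either causing the other:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 365
precipitation = rng.gamma(2.0, 2.0, n)  # the third variable
water_level = 10 + 0.8 * precipitation + rng.normal(0, 1, n)
accidents = 3 + 0.5 * precipitation + rng.normal(0, 1, n)

# Water level and accidents correlate, although neither causes the other.
r = np.corrcoef(water_level, accidents)[0, 1]
print(f"r(water_level, accidents) = {r:.2f}")  # clearly positive
```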

In practice, causal models are implemented using structural modeling methods. One such method is regression analysis. Since regression analysis derives from correlation analysis, which assumes a linear relationship between the variables under study, this method is referred to as linear structural modeling.

On the other hand, regression analysis is one of the varieties of the general linear model, to which the methods of factor analysis and analysis of variance also belong. The essence of the method is to represent the dependent variable (the criterion) as a linear combination of the independent variables (the predictors).

Formally, for n independent variables this can be expressed by the following structural equation:

$Y = b_0 + b_1 X_1 + b_2 X_2 + \dots + b_n X_n + e$

where b_1, ..., b_n are the linear regression coefficients, which reflect the influence of each of the independent variables on the dependent variable, b_0 is the intercept, and e is the error term. Since the magnitude of these coefficients depends on the scales on which the predictors are measured, their standardized versions, the β coefficients, are often used instead. In this case the linear regression equation takes the following form:

$Z_Y = \beta_1 Z_{X_1} + \beta_2 Z_{X_2} + \dots + \beta_n Z_{X_n}$

where Z denotes the standardized (z-scored) value of each variable.
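As an illustration, the following sketch (simulated data) estimates both the raw coefficients b and the standardized coefficients β by plain least squares; note how the raw coefficients depend on the predictors' scales while the standardized ones are directly comparable:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x1 = rng.normal(0, 1, n)
x2 = rng.normal(0, 10, n)  # deliberately on a much larger scale
y = 1.0 + 2.0 * x1 + 0.3 * x2 + rng.normal(0, 1, n)

# Raw coefficients (b0, b1, b2) via ordinary least squares.
X = np.column_stack([np.ones(n), x1, x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# Standardized coefficients: z-score every variable, drop the intercept.
def z(v):
    return (v - v.mean()) / v.std()

Xz = np.column_stack([z(x1), z(x2)])
beta = np.linalg.lstsq(Xz, z(y), rcond=None)[0]

print("b:   ", np.round(b, 2))     # depend on the predictors' scales
print("beta:", np.round(beta, 2))  # comparable across predictors
```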

Multiple regression analysis allows us to estimate the contribution of the variance of all the independent variables to the total variance of the dependent variable, as well as to separate out the effect of each variable. In general, it is designed to answer the following three questions.

1. To what extent are all the independent variables (predictors) taken together related to the dependent variable Y that we are investigating?

To answer this question, the coefficient of determination R² is calculated. It shows the proportion of the variance of the criterion that is shared with the variance of all the independent variables. The square root of this quantity, R, is called the multiple correlation coefficient. Note that these coefficients, unlike the bivariate correlation coefficient, are always positive, i.e. they vary from 0 to 1.
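A minimal sketch of this computation, again on simulated data: R² is obtained from the residual and total sums of squares, and R is its square root.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
X = rng.normal(size=(n, 2))             # two predictors
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n)

Xd = np.column_stack([np.ones(n), X])   # design matrix with intercept
b = np.linalg.lstsq(Xd, y, rcond=None)[0]
y_hat = Xd @ b

ss_res = np.sum((y - y_hat) ** 2)       # unexplained variance
ss_tot = np.sum((y - y.mean()) ** 2)    # total variance of the criterion
r2 = 1 - ss_res / ss_tot                # coefficient of determination
R = np.sqrt(r2)                         # multiple correlation, 0..1
print(f"R^2 = {r2:.2f}, R = {R:.2f}")
```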

2. What part of the variance of the dependent variable Y is additively determined by each of the independent variables we are investigating?

The answer to this question involves calculating a special kind of correlation called the part, or semipartial, correlation. By squaring each part correlation, we obtain the share of the total variance of the criterion that is uniquely determined by the variance of the given independent variable.
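A part (semipartial) correlation can be computed by residualizing only the predictor on the other predictors and correlating that residual with the criterion; the sketch below (simulated data) does this by hand:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
x1 = rng.normal(0, 1, n)
x2 = 0.5 * x1 + rng.normal(0, 1, n)   # correlated predictors
y = 2.0 * x1 + 1.0 * x2 + rng.normal(0, 1, n)

def residuals(v, others):
    """Residual of v after regressing it on the other variables."""
    X = np.column_stack([np.ones(len(v))] + list(others))
    b = np.linalg.lstsq(X, v, rcond=None)[0]
    return v - X @ b

# Part correlation of x1 with y: only x1 is residualized on x2.
sr1 = np.corrcoef(y, residuals(x1, [x2]))[0, 1]
print(f"part correlation of x1: {sr1:.2f}")
print(f"unique share of criterion variance: {sr1**2:.1%}")
```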

3. What are the correlation coefficients between the dependent variable and each of the independent variables when all the other variables are held constant?

To answer this question, another kind of correlation coefficient is calculated: the partial correlation.
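The partial correlation differs from the part correlation only in that both the criterion and the predictor are residualized on the remaining variables, i.e. those variables are held constant on both sides (same simulated setup as above):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
x1 = rng.normal(0, 1, n)
x2 = 0.5 * x1 + rng.normal(0, 1, n)
y = 2.0 * x1 + 1.0 * x2 + rng.normal(0, 1, n)

def residuals(v, others):
    """Residual of v after regressing it on the other variables."""
    X = np.column_stack([np.ones(len(v))] + list(others))
    b = np.linalg.lstsq(X, v, rcond=None)[0]
    return v - X @ b

# Partial correlation of x1 with y, holding x2 constant on both sides.
pr1 = np.corrcoef(residuals(y, [x2]), residuals(x1, [x2]))[0, 1]
print(f"partial correlation of x1: {pr1:.2f}")
```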

For each of these statistics, specific statistical hypotheses can be put forward. Most often, the hypothesis is that the corresponding parameter equals zero in the general population. In this way it is possible to estimate the extent to which the variable that the researcher treats as an analogue of the dependent variable is really determined by one or another independent variable.
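Such null hypotheses are routinely tested by standard statistical software; as one possible sketch, statsmodels (simulated data, where only the first predictor actually matters) reports a t-test of the hypothesis that each coefficient is zero, and an F-test for the model as a whole:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 200
X = rng.normal(size=(n, 2))
y = 1.5 * X[:, 0] + rng.normal(0, 1, n)  # only the first predictor matters

model = sm.OLS(y, sm.add_constant(X)).fit()
# The summary reports a t-test of H0: coefficient = 0 for each predictor
# and an F-test of the model as a whole (H0: R^2 = 0 in the population).
print(model.summary())
```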
