# Positional memory effects - Mathematical methods in psychology

## Positional memory effects

H. Ebbinghaus, in his experiments on memorizing sequences of meaningless syllables, already noted that the items remembered most easily are those at the beginning or at the end of the series to be memorized. These effects are called serial position effects, or positional memory effects.

In the general psychological practicum at the L. S. Vygotsky Institute of Psychology of the Russian State University for the Humanities (RSUH), students become acquainted with various schemes for organizing psychological experiments. One typical class is devoted to multilevel designs using the scheme of cross-individual counterbalancing, a typical scheme for an experiment with repeated measurements.

The subjects were presented with a list of 20 words. To control for the task factor, the list was divided into five parts, and the position of each part was counterbalanced across subjects according to a Latin square scheme (R. Gottsdanker [5]). During the experiment the words were shown to the subject one after another on the monitor screen. Each word was displayed for 1.5 seconds, and the interval between two successive words was also 1.5 seconds; thus the rate of presentation was one word every 3 seconds. At the end of the list a beep sounded and a signal to begin recall appeared on the screen. The subject had to immediately reproduce, in any order, all the words he had managed to memorize.

Each student had to run this experiment with five subjects in accordance with the Latin square scheme. For each subject, the number of words correctly reproduced in each of the five positions was counted. These counts were then converted into proportions reflecting the probability of recalling the block of words in each position.

The data obtained in this way by one of the students are presented in Table 4.7.

Table 4.7

Recall of words as a function of their position in the list

| Subject | Positions 1-4 | Positions 5-8 | Positions 9-12 | Positions 13-16 | Positions 17-20 |
|---------|---------------|---------------|----------------|-----------------|-----------------|
| 1       | 1.00          | 0.25          | 0.75           | 0.50            | 0.75            |
| 2       | 1.00          | 0.75          | 0.50           | 0.75            | 1.00            |
| 3       | 0.75          | 0.50          | 0.00           | 0.50            | 0.50            |
| 4       | 1.00          | 1.00          | 0.50           | 0.75            | 1.00            |
| 5       | 1.00          | 0.50          | 0.75           | 0.75            | 1.00            |
| Mean    | 0.95          | 0.60          | 0.50           | 0.65            | 0.85            |

As is evident from Table 4.7, the data do indeed exhibit edge effects. Recall is best at the beginning of the list: in the first four positions subjects recall on average 95% of the words. This is the primacy effect. In second place is recall of the last four positions, where on average 85% of the words are remembered. This is the recency effect. Finally, the words in the middle positions, 9-12, fare worst: they are remembered on average only half the time.
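The bottom row of Table 4.7 is easy to verify; here is a minimal sketch in Python with NumPy (the table values are entered directly):

```python
import numpy as np

# Recall proportions from Table 4.7: rows = subjects, columns = blocks of four positions
data = np.array([
    [1.00, 0.25, 0.75, 0.50, 0.75],
    [1.00, 0.75, 0.50, 0.75, 1.00],
    [0.75, 0.50, 0.00, 0.50, 0.50],
    [1.00, 1.00, 0.50, 0.75, 1.00],
    [1.00, 0.50, 0.75, 0.75, 1.00],
])

# Mean recall probability for each block (the "Mean" row of Table 4.7)
means = data.mean(axis=0)
print(means)  # 0.95, 0.60, 0.50, 0.65, 0.85
```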

This dependence is illustrated more clearly in Fig. 4.2 (the quadratic trend line is shown dashed). It can be assumed that the results reflect a quadratic relationship between the position of a word in the list and the probability of its successful recall: the trend line practically coincides with the empirical dependence we obtained.
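A quadratic trend line like the one in Fig. 4.2 can be fitted, for example, with NumPy's `polyfit`; the sketch below assumes the block index 1-5 as the predictor (the text does not say exactly how the trend in the figure was fitted):

```python
import numpy as np

# Mean recall per position block, from Table 4.7
means = np.array([0.95, 0.60, 0.50, 0.65, 0.85])
x = np.arange(1, 6)  # block index: 1 = positions 1-4, ..., 5 = positions 17-20

# Least-squares quadratic trend p(x) = a*x**2 + b*x + c
a, b, c = np.polyfit(x, means, 2)
trend = np.polyval([a, b, c], x)

# A positive leading coefficient gives the U-shaped serial position curve
print(a, trend)
```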

Let us try to evaluate the results presented in Table 4.7 using analysis of variance with repeated measurements. This time we use the statistical package IBM SPSS Statistics to analyze the data.

Launch SPSS Statistics and go to the Variable View tab. Unlike the processing of results from an ordinary experiment, where the independent and dependent variables must be specified explicitly, the processing of data from experiments with repeated measurements follows a somewhat different logic, as already indicated in subparagraph 4.1.2: the variables themselves are defined only at the stage of statistical analysis, and it is the levels of the independent variable that appear as variables in the statistical package.

Fig. 4.2. Success of remembering monosyllabic words depending on their position in the list

Therefore, on that tab we define five variables corresponding to the five levels of our experimental treatment. Let us name them: 1) pos1_4; 2) pos5_8; 3) pos9_12; 4) pos13_16; 5) pos17_20. For convenience we add appropriate labels for these variables (Figure 4.3).

Fig. 4.3. Defining Variables in an Experiment Exploring Memory Positional Effects

Once the variables are defined, return to the Data View tab and enter the results of the experiment presented in Table 4.7. A data table of five columns and five rows should result (Figure 4.4; the columns represent the levels of the independent variable, the rows the results of the five subjects).

Fig. 4.4. Data for variance analysis with repeated measurements in IBM SPSS Statistics

Now, in accordance with the recommendations given in subparagraph 4.1.2, we select Analyze, then General Linear Model, then Repeated Measures.... The window for defining the within-subjects factor opens. In it we enter the name of the independent variable whose effect we want to investigate, "Position", and indicate the number of its levels, five (Figure 4.5).

Fig. 4.5. Defining an intragroup variable in IBM SPSS Statistics

So, we have defined our independent variable. To save it, click the Add button: the created variable will be added to the list, with its number of levels, five, shown in parentheses. It remains only to click the Define button at the bottom of the same window.

We find ourselves in another window (Figure 4.6). The field on the left lists all the variables we defined on the Variable View tab. Remember, though, that these are not five independent variables but the five levels of the single independent variable we have just specified. That variable appears in the field on the right, labeled "Within-Subjects Variables"; in our case it is the single variable "Position". The field contains a set of empty slots corresponding to the levels of the independent variable defined at the previous step, with the variable's name shown in parentheses at the top.

Fig. 4.6. Configure a variance analysis with repeated measurements in IBM SPSS Statistics

First of all we are interested in a priori contrasts: after all, we need to show that the dependence under investigation is described by a second-degree polynomial, i.e. that there is a quadratic relationship between position and the probability of recall. Therefore we select "Contrasts..." and a new window opens (Figure 4.7). We check that the selected contrast type, Polynomial, matches our goal and press the Continue button.

Now everything is ready for the statistical analysis. Click OK, and we arrive at the output window, which contains a fairly large amount of information.

First of all we should pay attention to the test of homogeneity of the variance-covariance matrix using Mauchly's test of sphericity (Table 4.8). These data indicate a rather pronounced heterogeneity of the estimated matrix: the result lies right at the 5% significance boundary (see the Sig. column). Therefore the results concerning the statistical reliability of the position effect should be treated with extreme caution.

Fig. 4.7. The a priori contrast settings window in IBM SPSS Statistics

Table 4.8

The results of testing the homogeneity of the variance-covariance matrix

Mauchly's Test of Sphericity (Measure: MEASURE_1)

| Within-Subjects Effect | Mauchly's W | Approx. Chi-Square | df | Sig. | Greenhouse-Geisser ε (b) | Huynh-Feldt ε (b) | Lower-bound ε (b) |
|------------------------|-------------|--------------------|----|------|--------------------------|-------------------|-------------------|
| Position               | 0.000       | 19.494             | 9  | 0.049 | 0.323                   | 0.412             | 0.250             |

Tests the null hypothesis that the error covariance matrix of the orthonormalized transformed dependent variables is proportional to an identity matrix.

a. Design: Intercept. Within-subjects design: Position.

b. May be used to adjust the degrees of freedom for the averaged tests of significance. Corrected tests are displayed in the Tests of Within-Subjects Effects table.

The results of the analysis of variance are presented in Table 4.9. Let us consider them in more detail.

Table 4.9

Tests of within-subjects effects in IBM SPSS Statistics

Tests of Within-Subjects Effects (Measure: MEASURE_1)

| Source           |                     | Type III Sum of Squares | df    | Mean Square | F     | Sig.  |
|------------------|---------------------|-------------------------|-------|-------------|-------|-------|
| Position         | Sphericity assumed  | 0.835                   | 4     | 0.209       | 3.976 | 0.020 |
|                  | Greenhouse-Geisser  | 0.835                   | 1.292 | 0.664       | 3.976 | 0.097 |
|                  | Huynh-Feldt         | 0.835                   | 1.646 | 0.507       | 3.976 | 0.078 |
|                  | Lower bound         | 0.835                   | 1.000 | 0.835       | 3.976 | 0.117 |
| Error (Position) | Sphericity assumed  | 0.840                   | 16    | 0.052       |       |       |
|                  | Greenhouse-Geisser  | 0.840                   | 5.167 | 0.163       |       |       |
|                  | Huynh-Feldt         | 0.840                   | 6.586 | 0.128       |       |       |
|                  | Lower bound         | 0.840                   | 4.000 | 0.210       |       |       |

The left column of Table 4.9 lists the sources of variance in our experimental design that are used in the statistical analysis to construct the F ratios. We see two sources: our independent variable Position and the experimental error.

To the right, four methods of statistical evaluation of these effects are presented. The first assumes that the basic structural model considered in subparagraph 4.1.1 holds in full. The remaining variants represent different ways of correcting this model, differing in their degree of conservatism. As indicated above, the conservative strategy for testing statistical hypotheses in designs with repeated measurements is to reduce the degrees of freedom of the numerator and denominator. The mean squares of the experimental effect and of the error change accordingly, but the value of the F statistic does not. Obviously, the most conservative is the last decision model, designated Lower bound: here the degrees of freedom of both numerator and denominator are divided by k − 1, the number of levels minus one. The other two models are less conservative; they calculate the degrees of freedom from the estimated heterogeneity of the variance-covariance matrix and apply the corresponding corrections.
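The decomposition behind Table 4.9, together with the Greenhouse-Geisser ε, can be reproduced by hand. Note that the values computed from the proportions as printed in Table 4.7 come out somewhat different from the SPSS output reproduced above, which was evidently obtained from slightly different data; the logic of the computation, however, is the same. A sketch:

```python
import numpy as np

# Recall proportions from Table 4.7 (rows: subjects, columns: position blocks)
data = np.array([
    [1.00, 0.25, 0.75, 0.50, 0.75],
    [1.00, 0.75, 0.50, 0.75, 1.00],
    [0.75, 0.50, 0.00, 0.50, 0.50],
    [1.00, 1.00, 0.50, 0.75, 1.00],
    [1.00, 0.50, 0.75, 0.75, 1.00],
])
n, k = data.shape                       # 5 subjects, 5 levels of Position
grand = data.mean()

# Sum-of-squares decomposition for a one-way repeated-measures design
ss_total    = ((data - grand) ** 2).sum()
ss_position = n * ((data.mean(axis=0) - grand) ** 2).sum()
ss_subjects = k * ((data.mean(axis=1) - grand) ** 2).sum()
ss_error    = ss_total - ss_position - ss_subjects   # Position x Subject residual

df_pos, df_err = k - 1, (k - 1) * (n - 1)            # 4 and 16
F = (ss_position / df_pos) / (ss_error / df_err)

# Greenhouse-Geisser epsilon from the double-centred covariance matrix
S  = np.cov(data, rowvar=False)          # covariance of the five conditions
C  = np.eye(k) - np.ones((k, k)) / k     # centring matrix
Sc = C @ S @ C
eps_gg = np.trace(Sc) ** 2 / ((k - 1) * np.trace(Sc @ Sc))

# Corrected (more conservative) degrees of freedom, as in Table 4.9
df_pos_gg, df_err_gg = eps_gg * df_pos, eps_gg * df_err
print(ss_position, F, eps_gg)            # eps is bounded: 1/(k-1) <= eps <= 1
```

The lower-bound row of Table 4.9 corresponds to the smallest possible ε = 1/(k − 1) = 0.25.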

The following columns of Table 4.9 contain statistics already familiar to us: sums of squares, degrees of freedom, mean squares, the value of F, and the corresponding significance level.

As you can see, the null hypothesis about the homogeneity of the effects of the independent variable can be rejected at the 5% significance level only under the basic model, which assumes homogeneity of the variance-covariance matrix. The extremely conservative decision model forces us to retain the null hypothesis, while the other two models yield marginal significance.

Given the pronounced heterogeneity of the variance-covariance matrix, we must be cautious about accepting the alternative hypothesis, even though the effect of the independent variable seems outwardly obvious. Apparently this result is due to the insufficient size of the experimental group for our chosen method of measuring the dependent variable: since that measurement is made rather coarsely, in steps of 25%, increasing the sample size would most likely allow us to accept the alternative hypothesis.

Now we turn to evaluating the form of the relationship between the independent and dependent variables. The results of this evaluation are presented in the next table produced by the statistical program (Table 4.10). These are standard analysis-of-variance data, but now we have not two but eight sources of variance. The variance of the experimental effect is divided into four parts, each with one degree of freedom; in sum this gives the four degrees of freedom assumed by our basic structural model. These parts describe the linear and nonlinear components of the relationship between the dependent and independent variables. The variance of the experimental error is divided similarly: each of its parts has four degrees of freedom, which in sum gives us 16 degrees of freedom for the residual variance.

Table 4.10

Assessing the form of the dependence in IBM SPSS Statistics

Tests of Within-Subjects Contrasts (Measure: MEASURE_1)

| Source           | Position  | Type III Sum of Squares | df | Mean Square | F       | Sig.  |
|------------------|-----------|-------------------------|----|-------------|---------|-------|
| Position         | Linear    | 0.001                   | 1  | 0.001       | 0.054   | 0.828 |
|                  | Quadratic | 0.751                   | 1  | 0.751       | 240.286 | 0.000 |
|                  | Cubic     | 0.080                   | 1  | 0.080       | 1.376   | 0.306 |
|                  | Order 4   | 0.003                   | 1  | 0.003       | 0.023   | 0.887 |
| Error (Position) | Linear    | 0.092                   | 4  | 0.023       |         |       |
|                  | Quadratic | 0.012                   | 4  | 0.003       |         |       |
|                  | Cubic     | 0.232                   | 4  | 0.058       |         |       |
|                  | Order 4   | 0.503                   | 4  | 0.126       |         |       |

As is evident from Table 4.10, the most pronounced effect is the quadratic one. While the total sum of squares of the experimental effect is 0.835 (see Table 4.9), the quadratic component accounts for 0.751 of it, i.e. almost 90% of the effect under study. Moreover, this component is the only one that yields a statistically reliable F ratio: F(1, 4) = 240.29; p < 0.001.
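This partition of the Position sum of squares into four single-degree-of-freedom polynomial components can be sketched as follows, using the standard orthogonal polynomial coefficients for five equally spaced levels (as before, values computed from the printed Table 4.7 data differ somewhat in magnitude from the SPSS output, but the quadratic component dominates in the same way):

```python
import numpy as np

# Recall proportions from Table 4.7 (rows: subjects, columns: position blocks)
data = np.array([
    [1.00, 0.25, 0.75, 0.50, 0.75],
    [1.00, 0.75, 0.50, 0.75, 1.00],
    [0.75, 0.50, 0.00, 0.50, 0.50],
    [1.00, 1.00, 0.50, 0.75, 1.00],
    [1.00, 0.50, 0.75, 0.75, 1.00],
])
n, k = data.shape
means = data.mean(axis=0)

# Standard orthogonal polynomial coefficients for five equally spaced levels
contrasts = {
    "linear":    np.array([-2.0, -1.0,  0.0,  1.0,  2.0]),
    "quadratic": np.array([ 2.0, -1.0, -2.0, -1.0,  2.0]),
    "cubic":     np.array([-1.0,  2.0,  0.0, -2.0,  1.0]),
    "order4":    np.array([ 1.0, -4.0,  6.0, -4.0,  1.0]),
}

# Each contrast carries one degree of freedom: SS = n * psi^2 / sum(c^2)
ss = {}
for name, c in contrasts.items():
    psi = c @ means                  # contrast value
    ss[name] = n * psi ** 2 / (c @ c)

# The four single-df components partition SS_Position exactly
ss_position = n * ((means - data.mean()) ** 2).sum()
print(ss)
```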

Thus, on the whole, the hypothesis that the relationship between the position of an element in the list and the success of its reproduction under immediate free recall can be described by a quadratic function finds experimental support. Nevertheless, the heterogeneity of the variance-covariance matrix obliges us to draw this conclusion with a certain degree of caution.

[...]

