The approach proposed by W. Torgerson has a number of limitations that make it excessively rigid.
For example, one such restriction concerns the symmetry of distance estimates. Suppose we have an estimate of the distance from point A to point B. Will the distance from B to A be equivalent to it? If we are dealing with a physical measurement of distance, the answer seems obvious. When the estimates are subjective, however, non-equivalence becomes quite probable. Imagine that we estimate the semantic closeness of two species of birds, just as the subjects did in the above-mentioned study by E. Smith et al. Will the subjective distance from the canary to the ostrich be greater or smaller than the subjective distance from the ostrich to the canary? Likewise, in estimating the distance from point A to point B, might we not give different estimates when the routes in the two directions are, for one reason or another, perceived differently?
To remove this kind of constraint of the metric model, a series of non-metric models was developed. One of the first was the Shepard model. Its essence is to obtain a metric Euclidean space from non-metric estimates, for example estimates presented on an ordinal scale. Thus, in the non-metric model of multidimensional scaling, information about the distances between objects is initially absent; only the degree of similarity between the objects is given. R. Shepard treats similarity as a monotonic function of distance. The purpose of the procedure is to obtain a Euclidean space of minimum dimensionality in which the distances between the evaluated objects are as close as possible to the originally obtained measures of their similarity.
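The idea can be sketched with scikit-learn's `MDS` class run in non-metric mode. This is a modern implementation of the same principle, not Shepard's original algorithm, and the toy data below are invented for illustration: the true pairwise distances are degraded to rank-order (ordinal) information before a 2-D configuration is recovered.

```python
# Non-metric MDS sketch in the spirit of Shepard's procedure:
# recover a low-dimensional Euclidean configuration from purely
# ordinal (rank-order) dissimilarities.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
points = rng.normal(size=(6, 2))                      # hidden "true" configuration
diff = points[:, None, :] - points[None, :, :]
true_dist = np.sqrt((diff ** 2).sum(axis=-1))

# Degrade to ordinal information only: replace each distance by its rank.
ranks = true_dist.flatten().argsort().argsort().reshape(true_dist.shape).astype(float)
ranks = (ranks + ranks.T) / 2.0                       # keep the matrix symmetric
np.fill_diagonal(ranks, 0.0)                          # zero self-dissimilarity

mds = MDS(n_components=2, metric=False,               # metric=False -> non-metric MDS
          dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(ranks)
print(coords.shape)                                   # a 2-D configuration of 6 objects
print(mds.stress_)                                    # residual badness-of-fit
```

The key point is that `fit_transform` never sees the true distances, only their ranks, yet it still produces a Euclidean configuration whose distances are monotonically related to the input similarities.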
The further development of R. Shepard's approach was the work of J. Kruskal. Kruskal's merit is that he developed a procedure for non-metric multidimensional scaling that makes it possible to significantly reduce the dimensionality of the multidimensional space. Whereas R. Shepard's approach could require n - 1 dimensions for n objects, the procedure developed by J. Kruskal made it possible to reduce the number of dimensions substantially. In essence, the solution is obtained by achieving the closest possible correspondence between a monotonic transformation of the researcher's data and the desired multidimensional space. To do this, J. Kruskal applied a steepest-descent procedure based on computing the partial derivatives of the loss function with respect to each model parameter; these derivatives are then used to optimize the desired solution.
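The steepest-descent idea can be illustrated with a minimal numpy sketch that minimizes a raw (metric, un-normalized) stress by following its partial derivatives. This is a deliberate simplification of Kruskal's full procedure, which also interleaves a monotonic-regression step; the data and step size here are invented for illustration.

```python
import numpy as np

def pairwise_dist(X):
    # Matrix of Euclidean distances between the rows of X.
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def raw_stress(X, target):
    # Sum over unordered pairs of squared differences between the
    # configuration distances and the target dissimilarities.
    return ((pairwise_dist(X) - target) ** 2).sum() / 2.0

def stress_gradient(X, target):
    # Partial derivatives of raw_stress with respect to every coordinate:
    # the quantities a steepest-descent step is built on.
    D = pairwise_dist(X)
    np.fill_diagonal(D, 1.0)                 # avoid division by zero at i == j
    coef = (D - target) / D
    np.fill_diagonal(coef, 0.0)
    diff = X[:, None, :] - X[None, :, :]
    return 2.0 * (coef[:, :, None] * diff).sum(axis=1)

rng = np.random.default_rng(1)
target = pairwise_dist(rng.normal(size=(5, 2)))   # dissimilarities to reproduce
X = rng.normal(size=(5, 2))                       # random starting configuration

initial = raw_stress(X, target)
for _ in range(1000):                             # steepest descent
    X -= 0.01 * stress_gradient(X, target)
final = raw_stress(X, target)
print(initial, "->", final)                       # stress decreases along the way
```

Each iteration moves every point against the gradient of the stress, so the configuration's distances drift toward the target dissimilarities; Kruskal's algorithm wraps this descent step together with a monotonic transformation of the data.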
J. Kruskal denoted the degree of correspondence between the obtained solution and the structure of the analyzed data by the term "stress". This characteristic is a measure of the normalized residual variance between the Euclidean distances of the solution and the dissimilarities found in the data.
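As a rough sketch (the exact normalization varies between formulations), Kruskal's Stress-1 is the square root of the residual sum of squares between the configuration distances and the fitted disparities, normalized by the sum of squared distances. The values below are invented for illustration.

```python
import numpy as np

def kruskal_stress1(distances, disparities):
    """Kruskal's Stress-1: normalized root of the residual sum of squares
    between configuration distances d_ij and disparities d-hat_ij."""
    d = np.asarray(distances, dtype=float)
    dhat = np.asarray(disparities, dtype=float)
    return np.sqrt(((d - dhat) ** 2).sum() / (d ** 2).sum())

d = np.array([1.0, 2.0, 3.0])
print(kruskal_stress1(d, d))        # perfect fit gives zero stress
print(kruskal_stress1(d, d + 0.5))  # any misfit gives positive stress
```

A stress of zero means the solution reproduces the disparities exactly; larger values indicate a poorer fit, and in practice the stress is examined across candidate dimensionalities to choose the smallest acceptable space.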
It should also be noted that the multidimensional scaling procedure proposed by J. Kruskal makes it possible, in principle, to obtain a solution not only for Euclidean but also for non-Euclidean spaces.
Multi-dimensional scaling of individual differences
One fundamental limitation of the methods discussed above, both metric and non-metric, is that they apply to only one data matrix. If we wanted to process group data with these methods, we would have to either average the data or process each individual matrix separately and then try to compare the results. Such a solution does not always make real sense.
For example, if we ask a group of voters to assess the similarity of candidates participating in an election, it is clear that the structures of the resulting data can differ significantly. In the same way, the representations of novices and experts in some field may differ substantially if we ask them to assess the similarity or difference of a set of concepts reflecting the main body of knowledge in that field.
To solve this problem, a multidimensional scaling technique was proposed that circumvents this limitation. It is called multidimensional scaling of individual differences. Several variants of this procedure have been suggested.
The first variant combines multidimensional scaling with factor analysis: the data of some subjects are correlated with the results of other subjects, and the resulting correlation matrix is subjected to factor analysis.
The second variant is based on distinguishing two types of individual differences: differences in cognitive style and differences in response style. This procedure defines individual differences as a cross between these two extremes and completely excludes any averaging of the data. Thus, this non-metric procedure allows a large number of individual matrices to be processed simultaneously. It enables the researcher to perform a monotonic transformation of the data either separately for each matrix or for all matrices at once, at the researcher's discretion. As a result, we obtain either a single multidimensional model for the entire data array, thereby eliminating the factor of individual differences, or a separate model for each subject if the individual differences are so large that a single model cannot be built.
The third variant presupposes the existence of a Euclidean space of properties, or attributes, that define individual differences. If at least one subject possesses an attribute, the corresponding dimension is included in this space. The individual differences between subjects are then represented by weight coefficients that determine the coordinates of each subject in this space; if a subject lacks the corresponding property, the weight coefficient for it is set to zero.
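The weighted-space idea can be sketched as a weighted Euclidean distance in which each subject's weight vector stretches or suppresses the shared dimensions; a zero weight removes a dimension from that subject's space entirely. The function name and values below are illustrative, not part of any particular implementation.

```python
import numpy as np

def weighted_distance(x, y, w):
    """Weighted Euclidean distance between points x and y, where the
    nonnegative weights w represent one subject's emphasis on each
    shared dimension. A zero weight excludes that dimension."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt((np.asarray(w, dtype=float) * diff ** 2).sum()))

x, y = [0.0, 0.0], [3.0, 4.0]
print(weighted_distance(x, y, [1.0, 1.0]))  # ordinary Euclidean distance: 5.0
print(weighted_distance(x, y, [1.0, 0.0]))  # second dimension ignored: 3.0
```

Fitting one such weight vector per subject yields a common stimulus space plus a compact description of how each individual distorts it, which is exactly how the weight coefficients above locate each subject in the property space.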
Another group of procedures is represented by ALSCAL (alternating least squares scaling). This algorithm essentially consolidates the previously developed approaches and can process data of almost any volume and configuration; in particular, it is included in the SPSS statistical package.