Non-Experimental Research Design

This section details the methodology that will be used to carry out this study. Methodology refers to the procedures or methods used to conduct a study. The aspects discussed in this section are the research design, research procedures, sampling, instrument, data collection, and data analysis.

3.2 Research Methodology and Design

3.2.1 Research Approach

There are two main approaches in research: the quantitative approach and the qualitative approach. According to Blaxter, Hughes and Tight (as cited in Hughes, 2006), quantitative research is concerned with the collection and analysis of data in numeric form. It emphasizes relatively large-scale and representative sets of data, and is often presented or perceived as being about the gathering of 'facts'. Qualitative research, in contrast, is concerned with collecting and analysing information in as many forms as possible, mainly non-numerical. It focuses on exploring a smaller number of instances or examples which are seen as interesting or illuminating, and aims to achieve 'depth' rather than 'breadth'.

In this research, the data collected by the researcher will be in numerical form, obtained from questionnaires. Thus, the approach used in this research is the quantitative approach.

3.2.2 Research Design

According to Chua Yan Piaw (2006), there are three main research designs, namely true experimental design, non-experimental design, and quasi-experimental design. A true experimental design studies the relationship between independent variables and dependent variables: it manipulates the independent variables and observes the changes in the dependent variables, with respondents randomly assigned to groups. A non-experimental design is employed when such manipulation is not possible because the independent variables occur naturally; it studies the relationship between independent variables and dependent variables without manipulating the independent variables. A quasi-experimental design is normally used to evaluate the effectiveness of a programme when random assignment of the respondents in the study is not possible; it also studies the relationship between independent variables and dependent variables (Chua, 2006).

Non-experimental research design???

According to Chua Yan Piaw (2006), there are many non-experimental designs, including survey, field research, case study, action research, ethnography, and so forth. The survey is one of the most widely used non-experimental research designs; it is employed in several forms of media, such as books, newspapers, and television, to collect data from subjects who respond to a series of questions about behaviours and opinions regarding a certain issue, or to assess the effectiveness of a product or programme. Normally, interviews or questionnaires are used to collect the data. Field research refers to gathering primary data from a natural environment without conducting a laboratory experiment or a survey; the researcher must be willing to step into new settings and observe, participate in, or experience those worlds directly. A case study is an in-depth examination of a single event, situation, or individual, in which the researcher examines existing sources such as documents and archival records, conducts interviews, and engages in direct observation and even participant observation, in order to collect in-depth information about specific behaviours and social conditions. Action research refers to research initiated to solve an immediate problem, or a reflective process of progressive problem solving led by individuals working with others in teams or as part of a "community of practice" to improve the way they address and solve problems.

Survey???

3.3 Sampling

According to Gay and Airasian (2003), sampling is the process of selecting a number of participants for a study in such a way that they represent the larger group from which they were selected (p. 101). Selecting the sample is a very important step in conducting a research study, particularly for quantitative research. The "goodness" of the sample determines the meaningfulness and generalizability of the research results (Gay & Airasian, 2003, p. 103).

3.3.1 Population

The first step in sampling is to define the population. A sample comprises the individuals, items, or events selected from a larger group referred to as a population. The population is the group of interest to the researcher, the group to which the results of the study will ultimately be generalized (Gay & Airasian, 2003, p. 102).

The location of this study is Sabah, and the population of the study is the upper form students in secondary schools in Sabah, who are 16 to 17 years old. Initially, the researcher will obtain information about the upper form students from all the secondary schools in Sabah, and then determine the samples to be involved in this study. The researcher will also obtain the total number of upper form students in each of the schools, since it might influence the outcomes of the study.

3.3.2 Sampling Method

According to Chua Yan Piaw (2006), there are two main sampling methods, namely probability sampling and non-probability sampling (p. 189). According to Gay and Airasian (2003), probability sampling methods, also known as random sampling, allow the researcher to specify the probability, or chance, that each member of a defined population will be selected for the sample, and are based on randomness in the selection of the sample (p. 103). Non-probability sampling methods, also called non-random sampling, do not involve random sampling at any stage of sample selection; they are used when random sampling is not possible, such as when teachers or administrators select students or classes as the samples (p. 114). In this study, a random sampling method will be used to select the sample.

Gay and Airasian (2003) state that there are four basic random sampling techniques or procedures, namely simple random sampling, stratified sampling, cluster sampling, and systematic sampling, which are collectively known as probability sampling (p. 103). Simple random sampling is the process of selecting a sample in such a way that all individuals in the defined population have an equal and independent chance of being selected; the randomness takes the selection of the sample completely out of the researcher's control by letting a random, or chance, procedure select the sample (p. 103). Stratified sampling is the process of selecting a sample in such a way that identified sub-groups in the population are represented in the sample in the same proportion in which they exist in the population (p. 106). Cluster sampling randomly selects groups, not individuals; all the members of the selected groups have similar characteristics, and it is most useful when the population is very large or spread out over a wide geographic area (p. 108). Lastly, systematic sampling is the sampling technique in which individuals are selected from a list by taking every Kth name (p. 110). A brief illustration of these four techniques is sketched below.
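
As a hedged illustration only, the following Python sketch simulates the four probability sampling techniques on a hypothetical student roster; the column names (school, district), the roster itself, and the sample sizes are assumptions for demonstration and are not details of this study.

```python
import random
import pandas as pd

# Hypothetical roster of upper form students (assumed columns for illustration only)
roster = pd.DataFrame({
    "student_id": range(1, 1001),
    "school": [f"School_{i % 20}" for i in range(1000)],
    "district": [f"District_{i % 5}" for i in range(1000)],
})

random.seed(42)

# 1. Simple random sampling: every student has an equal, independent chance of selection.
simple_sample = roster.sample(n=100, random_state=42)

# 2. Stratified sampling: sample from each district in proportion to its size.
stratified_sample = roster.groupby("district", group_keys=False).apply(
    lambda g: g.sample(frac=0.10, random_state=42)
)

# 3. Cluster sampling: randomly select whole schools, then take every student in them.
chosen_schools = random.sample(sorted(roster["school"].unique().tolist()), k=4)
cluster_sample = roster[roster["school"].isin(chosen_schools)]

# 4. Systematic sampling: take every Kth student from the list.
k = 10
systematic_sample = roster.iloc[::k]

print(len(simple_sample), len(stratified_sample), len(cluster_sample), len(systematic_sample))
```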

Although the instrument of students' national identity is designed to be applicable to all students in Malaysia, secondary school students will be chosen to test the instrument in this study. The rationale for choosing upper form students as the target population is that they are older than the lower form students, who are only 13 to 15 years old. They may have a better understanding of the content of the items, and they may be more likely to give their responses honestly. Therefore, the results obtained from the upper form students may be more reliable and dependable than those from lower form students.

All upper form students from the secondary schools in Sabah will be oversampled. Two random samples of 200 students and 2,500 students will be obtained for the two survey studies using the cluster sampling method. A number of schools will be randomly selected from each of the districts in Sabah as the representatives of those districts. The secondary school students in Sabah are eligible to receive the mailed survey.

In the first survey study, the questionnaire will initially be delivered to the random sample of 200 students to examine the test-retest reliability of the instrument and its subscales. Then, a repeat mailing of the survey will be sent to the respondents from the initial mailing. The test-retest reliability of the instrument will be examined using the data from the surveys returned from both the initial and the repeat mailings.

In the second survey study, the sample needs to be large in order to conduct factor analysis to test the construct validity of the instrument. There is general agreement among measurement methodologists that large sample sizes are required for the stability of the results of factor analysis; larger sample sizes in applications of factor analysis yield sample factor loadings that are more precise estimates of population loadings and are more stable, or less variable, across repeated sampling. The researcher will send the refined questionnaire to the second random sample of 2,500 students in order to perform factor analysis, and then to test the construct validity and internal consistency reliability of the instrument.

3.4 Research Procedures

This research will be conducted in two stages. Stage one is the instrument development, and stage two is the instrument testing and refining. Stage one includes three steps and stage two includes five steps. Each of these steps is described in the following sections.

3.4.1 Stage One: Instrument Development

The instrument development stage has three steps: (1) developing conceptual and operational definitions of the construct of student's national identity, (2) generating an item pool, and (3) deciding the format for measurement (or selecting a scaling approach for the measurement). All three steps in this stage involve determination of the content validity of the instrument of student's national identity.

a. Step 1: Developing Conceptual and Operational Definitions of the Construct

The first step in developing this instrument is to determine clearly what the concept of student's national identity is. The definition of the construct, student's national identity, is based on (need more info from Section 2)

b. Step 2: Generating an Item Pool for the Instrument

The goal of the second step is to generate a large pool of items covering all the dimensions of the construct of student's national identity. At this beginning stage, it is better to generate more items; thus, a 60-item scale might evolve from an item pool of over 100 items.

The instrument of students' national identity is designed to measure the degree to which a student. Wording of the items is very important and should reflect the goal of the instrument. Appropriate wording can effectively capture the essence of the construct. The items are evaluative in nature and reflect respondents' views about the desirability of something.

Moreover, besides producing positively worded items that measure students' national identity, the researcher will develop several carefully worded negatively worded items that examine students'. The goal of constructing several negatively worded items is to detect respondents with acquiescence bias from their response pattern, if there is any. Those respondents will not be included in the data analysis, to avoid or lessen the influence caused by acquiescence bias. The issues related to acquiescence bias were reviewed in detail in Section 2.

After a large pool of items that measure students' national identity is established, the dissertation committee will first review all the items before the pool is sent to the expert panel for the content validity review. The researcher will revise the pool based on the feedback of the dissertation committee. Items which are repetitive, inappropriate or badly worded, confusing, or irrelevant to the construct will be removed or revised. After this revision, 84 items will be retained in the instrument.

c. Step 3: Deciding the Format for the Instrument

The researcher considers the format concurrently with the generation of items so that the two are compatible. A Likert scale with five response options is chosen to develop the instrument measuring the student's national identity. Each item is presented as a declarative statement, followed by response options that indicate varying degrees of agreement with the statement. The five response options are: (1) strongly disagree, (2) disagree, (3) either agree or disagree, (4) agree, and (5) strongly agree. This five-point Likert scale is at the ordinal level of measurement. Each item in the instrument is classified into one of two broad categories, favourable (positive) or unfavourable (negative). Scoring is reversed for negatively worded items, such that disagreement with a negatively worded item results in a high score; a brief sketch of this reverse scoring is given below.
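
As a hedged sketch only, the following Python snippet shows how scores on negatively worded Likert items could be reversed before computing scale totals; the item names, the responses, and which items are negatively worded are invented for illustration and are not the actual instrument content.

```python
import pandas as pd

# Hypothetical responses on a 5-point Likert scale (1 = strongly disagree ... 5 = strongly agree)
responses = pd.DataFrame({
    "item1": [5, 4, 2],
    "item2": [1, 2, 4],   # assume item2 is negatively worded (illustrative assumption)
    "item3": [4, 5, 3],
})

negatively_worded = ["item2"]

# Reverse-score negatively worded items: on a 1-5 scale, new score = 6 - old score,
# so strong disagreement with a negative statement yields a high (favourable) score.
scored = responses.copy()
scored[negatively_worded] = 6 - scored[negatively_worded]

# Total scale score after reverse scoring
scored["total"] = scored.sum(axis=1)
print(scored)
```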

At this step, the development stage is completed. The researcher then moves on to the next stage, which is instrument testing and refining.

3.4.2 Stage Two: Instrument Testing and Refining

The instrument testing and refining stage includes five steps: (1) establishing content validity of the instrument, (2) developing directions for responding, (3) preparing a revised draft of the questionnaire, (4) examining test-retest reliability and pretesting internal consistency reliability, and (5) testing construct validity and internal consistency reliability. All steps in this stage involve refining the instrument and evaluating the psychometric properties of the instrument.

a. Step 1: Establishing Content Validity of the Instrument

Determining the number of experts needed has always been somewhat arbitrary in content validity determination. According to Lynn (1986) (as cited in Wynd, Schmidt, & Schaefer, 2003), a minimum of five experts (raters, observers, or judges) and a larger number of categories for data assignment yield greater absolute agreement and reduce the risk of chance agreement; the use of more experts may therefore contribute directly to controlling chance agreement (p. 511). The use of only two judges is not only statistically unjustifiable, but it also places the instrument developer at great risk of an erroneous conclusion that content validity has been achieved when it actually has not (Bu, 2005, p. 73). In this study, five experts who either have conducted research related to student's national identity or have an interest in student's national identity will be asked to review the instrument as the content experts.

A delineation of the full content dimensions of the construct of student's national identity, with specific instructions pertaining to the content relevance of each item, will be provided to the experts by mail or email for review. The five experts will be asked to return the instrument and their feedback within four to five weeks. This review serves multiple purposes related to determining and maximizing the content validity of the instrument.

First, by having the experts review the item pool, the definition of the phenomenon of student's national identity is confirmed or invalidated, and the content validity of the instrument is quantified. The experts will be asked to rate how relevant they think each item is to what the researcher intends to measure. The instrument of students' national identity includes three subscales. The overall objective of the subscale of is to gauge the degree to which the student. Beneath the broad objective, more specific objectives that are assumed to measure relevant items will be provided as well. The intensity of students' national identity can fluctuate over time, but it is assumed to be stable over a certain time frame, typically three to four weeks. These objectives will be sent to the experts along with the instrument. The experts will be asked to rate the content relevance of each item to its general objective and to its more specific objective.

Content validity index (CVI). In addition, the experts will be asked to clarify their reasons and provide suggestions if they disagree with any items included in the instrument.

Second, the experts will be asked to evaluate the items' clarity and conciseness. Sometimes the content of an item may be relevant to the construct, but its wording may be problematic. This creates problems for item reliability, because an ambiguous or otherwise unclear item, to a greater degree than a clear item, may reflect factors extraneous to the latent variable.

Third, in addition to judging each item, the experts will be asked to identify phenomena that were omitted from the instrument as part of the content validity assessment. Thus, by asking the experts to examine the instrument in a variety of ways, the researcher can better capture the phenomenon of interest, and the expert reviewers help the researcher increase the content validity of the instrument.

The researcher will pay careful attention to all suggestions from these content experts and then make an informed decision about how to use their advice. The instrument will be modified to improve content validity with the consensus of the dissertation committee.

b. Step 2: Developing Directions for Responding

The directions for responding to the statements, as well as the meaning of the anchor points on the continuum, will be carefully developed and then reviewed by the dissertation committee chair and colleagues to avoid confusing respondents. Dillman (1978) (as cited in Bu, 2005) suggested guidelines for providing directions to subjects on how to answer questionnaires. He states that the encirclement procedure leads to fewer ambiguous markings and should be encouraged, that it is important that the same marking technique be used throughout the questionnaire, and that lower case letters are preferred for directions because of their better readability (p. 76). The researcher will apply these suggestions when developing the directions for responding.

c. Step 3: Preparing a Draft of the Questionnaire

This step involves constructing a draft of the questionnaire and evaluating it. Three activities will be completed in this step. First, a section of questions for gathering demographic information from participants will be designed and included in the survey along with the instrument of student's national identity. The purpose of developing this section of questions is to gather information that can be used to describe the characteristics of the participants.

Second is the issue of ordering the questions in the questionnaire. In this study, items that measure the same dimension will be grouped together. The questions that request demographic information are placed at the end of the questionnaire. The questionnaire therefore contains two parts.

Part one comprises the items regarding student's national identity. The items represent the . . dimensions.

The format of the instrument of student's national identity is illustrated by the following sample.

Item      Strongly disagree (1)   Disagree (2)   Either agree or disagree (3)   Agree (4)   Strongly agree (5)
Item 1             1                   2                       3                    4               5
Item 2             1                   2                       3                    4               5
Item 3             1                   2                       3                    4               5

Part two of the questionnaire includes the demographic questions, covering characteristics such as age, gender, ethnicity, and geographic location.

Third, the questionnaire will be sent out to . . for a review of the clarity of the directions, the ease of responding, and the time needed to complete the questionnaire.

d. Step 4: Examining Test-Retest Reliability and Pretesting Internal Consistency Reliability

After the questionnaire is produced, the researcher conducts two survey studies to gather data for examining the construct validity and reliability of the instrument. Step 4 is to examine the test-retest reliability of the instrument, preliminarily test the internal consistency reliability of the instrument, and perform item analysis.

It is assumed that the construct of student's national identity will not change in subjects within two to three weeks. The researcher will initially mail the questionnaire to . . In two to three weeks, a repeat survey will be mailed to the respondents from the initial mailing. The test-retest reliability of the instrument of student's national identity and its . . subscales will be examined among the subjects responding to both the initial and the repeat mailings.

Cronbach's alpha of the instrument and its . . subscales will be calculated using the data from the questionnaires returned from the initial mailing, to pretest the internal consistency reliability of the instrument and its . . subscales.

In addition, an item analysis for every subscale will be performed using the data from the questionnaires returned from the initial mailing, for the purpose of refining the instrument. An item needs to meet Likert's criterion of internal consistency in order to be retained in the scale. A given item whose score correlates significantly with the relevant scale score (at least 0.3) is considered to meet the criterion of internal consistency and is retained in the instrument. An item whose score does not correlate significantly with the scale score (below 0.3) will be rechecked and a decision made on its retention, deletion, or revision, depending on theory, the content of the item, and the function of the item in the instrument.

e. Step 5: Examining Construct Validity and Internal Consistency Reliability

In this step, the refined survey will be delivered to . . to obtain data in order to examine the construct validity of the instrument using factor analysis and the internal consistency reliability of the instrument using Cronbach's alpha.

Factor analysis will be performed using data from the returned questionnaires. Factor analysis is used to determine the construct validity of the instrument of student's national identity and to select items for inclusion in the instrument.

Factor analysis is a broad category of methods for determining the structure of relationships among measures (Nunnally & Bernstein, 1994). Factor analysis can be used to determine: (1) groupings of variables, (2) which variables belong to which factor and how strong their relationship is, (3) how many dimensions are needed to explain the relationships among the variables, and (4) a frame of reference to describe the relations among the variables more conveniently.

There are two major approaches to factor analysis: exploratory factor analysis and confirmatory factor analysis. In exploratory factor analysis, one seeks to summarize data by grouping together variables that are intercorrelated. The variables themselves may or may not have been chosen with a potential underlying structure in mind. Exploratory factors are defined to reach a mathematical goal, such as maximizing the variance accounted for. In confirmatory factor analysis, factors are defined directly, combining hypothesized properties such as the number of factors and the content (or variables) of each factor, and the analysis then determines how well these fit the data (Nunnally & Bernstein, 1994). According to Tabachnick and Fidell (1983), exploratory factor analysis is usually performed in the early stages of research to consolidate variables and generate hypotheses about relationships in a reduced data set. Confirmatory factor analysis generally occurs later in the research process, when a theory about structure is to be tested or when hypothesized differences in structure between groups of research units are tested. Variables are specifically chosen to reveal underlying structural processes. Data used in confirmatory factor analysis, then, might differ from those used in exploratory factor analysis.

Cronbach's alpha will be computed for this large sample to determine the internal consistency reliability of the instrument of students' national identity and its three subscales. A minimal sketch of this calculation is given below.
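
As an illustrative sketch only, Cronbach's alpha for a subscale could be computed from item-level data as follows; the data and item counts are hypothetical, and in the actual study the calculation will be carried out in SPSS.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a matrix of respondents (rows) by items (columns)."""
    k = items.shape[1]                          # number of items in the subscale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 respondents x 4 items on a 5-point Likert scale
data = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 4, 3, 3],
])

print(round(cronbach_alpha(data), 2))  # values above .70 are considered satisfactory
```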

Upon completion of this step, the instrument of students' national identity will have been established.

3.5 Data Collection Timetable and Procedures

There are some differences between the data collection schedule and procedures for the test-retest reliability study and for the construct validity study. For the test-retest reliability study, the survey with a cover letter will be mailed to the 200 randomly selected subjects. A stamped return envelope will be included in the mail for the return of the survey. A code number will be attached for each of the 200 subjects, and the respondents to the initial mailing will be identified and sent a repeat survey within two to three weeks after the initial mailing in order to examine test-retest reliability. Thus, anonymity of respondents is not guaranteed in the test-retest reliability study, because of the repeat mailing to those who returned the survey.

Considering the budget limitation for the dissertation, the researcher will make only one contact with the 2,500 randomly selected subjects. In this contact, a mailing that includes the survey, a detailed cover letter explaining the nature of the study and requesting a response, and a business reply envelope will be sent to the 2,500 subjects. Since only one contact is made with the 2,500 students, anonymity of respondents is ensured in this study.

A codebook for data entry will be developed and reviewed. A data entry program will be set up to help in inputting data. Data will be entered as replies to the questionnaire are returned. Entered data will be verified to check for errors in data entry by going over 25% of the returned and entered questionnaires and by running frequencies for every item of the questionnaire. Any differences between the original data entry and the verification will require checking the raw data and correcting the entered data. Data analysis will begin after completion of data entry and will include statistical consultation with the dissertation committee members. A brief sketch of the item-frequency check is given below.
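
As a hedged sketch only, frequencies for each item could be run to flag out-of-range or suspicious entries during data-entry verification; the entered values and the assumed valid range of 1 to 5 are invented for illustration.

```python
import pandas as pd

# Hypothetical entered data for three questionnaire items (valid Likert range assumed to be 1-5)
entered = pd.DataFrame({
    "item1": [4, 5, 3, 2, 5, 1],
    "item2": [3, 3, 7, 4, 2, 5],   # 7 is an out-of-range value, e.g. a keying error
    "item3": [5, 4, 4, 3, 2, 2],
})

for item in entered.columns:
    freq = entered[item].value_counts().sort_index()   # frequency table for the item
    out_of_range = ~entered[item].between(1, 5)
    print(f"{item} frequencies:\n{freq}")
    if out_of_range.any():
        # Rows flagged here would be checked against the raw questionnaires and corrected.
        print(f"  -> check rows {list(entered.index[out_of_range])} against the raw questionnaires")
```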

3.6 Data Analysis and Interpretation

Three types of data analyses will be conducted in this study: analysis related to reliability, analysis related to validity, and descriptive statistics. First, the content validity of the instrument will be determined by the content validity index (CVI). The CVI is the proportion of items given a score of 3 (relevant but requires minor revision) or 4 (very relevant) against the objectives of the measure, on a 4-point ordinal scale, by at least six of the seven experts (86% agreement) in this study. A hedged illustration of this calculation is sketched below.
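
As a hedged illustration only, the CVI for a set of items could be computed from expert relevance ratings as follows; the ratings shown are invented for demonstration and do not represent actual expert data.

```python
import numpy as np

# Hypothetical relevance ratings: rows = items, columns = experts,
# each rating on a 4-point scale (1 = not relevant ... 4 = very relevant).
ratings = np.array([
    [4, 3, 4, 4, 3, 4, 4],
    [3, 4, 4, 3, 4, 4, 2],
    [2, 2, 3, 2, 3, 2, 3],
    [4, 4, 4, 4, 4, 3, 4],
])

agreement_threshold = 6  # at least six of the seven experts (about 86% agreement)

# An item counts toward the CVI if enough experts rated it 3 or 4.
relevant_counts = (ratings >= 3).sum(axis=1)
items_meeting_criterion = relevant_counts >= agreement_threshold

# CVI = proportion of items judged content-valid by the expert panel.
cvi = items_meeting_criterion.mean()
print(f"CVI = {cvi:.2f}")
```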

Second, data from the respondents who returned the questionnaire from the initial mailing in the test-retest study will be used to compute Cronbach's alpha to pretest the internal consistency reliability of the instrument. Data from the questionnaires returned from both the initial and the repeat mailings will be used for examining the test-retest reliability. A total score for each of the subscales and for the whole instrument will be obtained from the initial mailing, and a second total score for each of the subscales and the whole instrument will be obtained from the second mailing. The test-retest reliability coefficient of the instrument and its subscales will be obtained by correlating the initial mailing scores with the second mailing scores using the Pearson Product Moment Correlation Coefficient. A high correlation coefficient suggests high stability, or test-retest reliability, of the instrument; a reliability coefficient above .70 is considered satisfactory. A minimal sketch of this correlation is given below.
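
As a minimal sketch under invented data, the test-retest correlation could be computed as follows; the two score vectors stand in for hypothetical first- and second-administration totals, not actual study data.

```python
from scipy.stats import pearsonr

# Hypothetical total scores for the same respondents at the initial and repeat mailings
time1_totals = [42, 55, 38, 60, 47, 51, 44, 58]
time2_totals = [40, 57, 39, 58, 45, 52, 46, 59]

# Pearson Product Moment Correlation between the two administrations
r, p_value = pearsonr(time1_totals, time2_totals)
print(f"test-retest r = {r:.2f} (p = {p_value:.3f})")  # r above .70 is considered satisfactory
```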

Third, item analysis will be performed using data from the questionnaires returned from the initial mailing. Correlations among items within each subscale, and between each item and the total subscale score, will be examined. Items with low correlations with the relevant subscale score (< 0.3) will be rechecked and considered for revision or deletion. Deletion or revision of any items will be decided based on re-evaluation of the content of those items and the views of the dissertation committee. A hedged sketch of this item-total analysis is given below.
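
As a hedged sketch with invented responses, a corrected item-total correlation (each item against the total of the remaining subscale items) could be computed as follows, with the 0.3 cut-off described above used to flag items for review; the item names and data are assumptions for illustration.

```python
import pandas as pd

# Hypothetical subscale responses: rows = respondents, columns = items
subscale = pd.DataFrame({
    "item1": [4, 3, 5, 2, 4, 3, 5, 2],
    "item2": [5, 3, 5, 2, 4, 4, 4, 1],
    "item3": [4, 4, 5, 3, 5, 3, 4, 2],
    "item4": [2, 4, 1, 5, 2, 3, 2, 4],   # a weak or problematic item, for illustration
})

for item in subscale.columns:
    rest_total = subscale.drop(columns=item).sum(axis=1)   # total of the remaining items
    r = subscale[item].corr(rest_total)                    # corrected item-total correlation
    flag = "retain" if r >= 0.3 else "recheck / revise / delete"
    print(f"{item}: r = {r:.2f} -> {flag}")
```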

Fourth, factor analysis will be performed using data from the questionnaires to determine the construct validity of the instrument of students' national identity. Factor analysis is ( ) In this study, both exploratory and confirmatory factor analysis approaches will be used to examine the construct validity of the instrument.

Exploratory factor analysis will first be performed to extract factors from the instrument of students' national identity and to determine the items to be included in the instrument. If evidence for construct validity exists, the number of factors resulting from the analysis should approximate the number of dimensions assessed by the instrument, and the items with the highest factor loadings defining each factor should match the items designed to measure each of the dimensions of the instrument (Waltz et al., 1991). Exploratory factor analysis consists of two stages: extracting factors and rotating the extracted factors (Nunnally & Bernstein, 1994). The analysis first condenses the variables (items) into the smallest number of factors that explain the most variance.

Principal component analysis (PCA) and principal axis factoring (PAF) are the two most popular methods for condensing data. . .

Then, the three most popular statistical guidelines will be used to determine the number of factors. First, the Kaiser-Guttman rule retains factors with eigenvalues of 1.0 or greater; however, this rule tends to suggest too many factors. The second rule is the scree test, which uses relative changes in these eigenvalues. . A brief sketch of the eigenvalue extraction behind these rules is given below.
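
As an illustrative sketch under simulated data, the eigenvalues of the item correlation matrix could be inspected to apply the Kaiser-Guttman rule and to produce the values plotted in a scree test; the simulated responses are random numbers used only for demonstration and say nothing about the actual instrument structure.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated responses: 300 respondents x 12 Likert-type items (illustration only)
data = rng.integers(1, 6, size=(300, 12)).astype(float)

# Eigenvalues of the item correlation matrix, sorted in descending order
corr = np.corrcoef(data, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

# Kaiser-Guttman rule: retain factors with eigenvalues of 1.0 or greater
n_factors = int((eigenvalues >= 1.0).sum())
print("eigenvalues:", np.round(eigenvalues, 2))
print("factors suggested by the Kaiser-Guttman rule:", n_factors)
# A scree test would plot these eigenvalues in order and look for the 'elbow'.
```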

Since the unrotated factors are usually difficult to interpret, the second stage of exploratory factor analysis is to rotate these factors to make them more meaningful or more interpretable. . Orthogonal and oblique rotations???

Results of the exploratory factor analysis will also be used for the identification and selection of indicators (items) for the instrument.

Confirmatory factor analysis will then be used for validation. In confirmatory factor analysis, the researcher specifies which items load on each factor according to preconceived theory, in order to test the theory. The result of this analysis indicates how well the empirical data actually conform to these specifications, that is, whether the items actually form the theorized constructs.

In order to check the fit of the model to the data, multiple fit indices that reflect somewhat different facets of model fit are suggested. .

Goodness-of-Fit Index (GFI)???

Comparative Fit Index (CFI)???

Adjusted Goodness-of-Fit Index (AGFI)???

Root Mean Square Residual (RMR)???

Root Mean Square Error of Approximation (RMSEA)???

Finally, Cronbach's alpha will be calculated using the data from the returned surveys to determine the internal consistency reliability of the instrument and its subscales. A high Cronbach's alpha indicates that the instrument has high internal consistency reliability. Nunnally and Bernstein suggest that, for a newly developed instrument, a Cronbach's alpha of .70 is satisfactory. In addition, descriptive statistics such as frequency, mean, and standard deviation will be used to describe the characteristics of the sample in both the test-retest reliability study and the construct validity study. Test-retest reliability, item analysis, descriptive statistics, exploratory factor analysis, and Cronbach's alpha will be computed using SPSS version 20.0.0 for Windows. AMOS 5.0 for Windows will be used to perform the confirmatory factor analysis.

3.7 Missing Data

Missing data often occur due to factors beyond the control of researchers, such as the failure of subjects to respond to a question or their attrition from a study. Its seriousness depends on how much of the data are missing and whether the pattern of missing data is random or systematic (need more references to support this). If only a few units of data are missing from a large data set, the problems created are not serious, and nearly every procedure for handling them should yield similar results. There is no clear guideline about how much missing data is too much; even more important than the absolute percentage of missing data is the pattern of missing data. Randomly missing data scattered within a data matrix rarely pose serious problems. Systematically missing values, on the other hand, are always serious; there is no statistical "fix" that will remedy the problems caused by systematically missing values. The way of dealing with missing data among the returned surveys is described as follows.

In the study examining construct validity and internal consistency reliability, among the returned questionnaires, those with 100% complete data on the instrument of students' national identity and those with missing data on the instrument will be identified. The researcher will check the missing data pattern among the questionnaires that contain missing values. Questionnaires with missing data amounting to less than 30% of the total items of each subscale will be included in the analysis, with the missing data imputed with the item mean. All other missing data will be dealt with by listwise deletion, meaning that questionnaires with missing data on more than 30% of one or more subscales are excluded from all computations. Thus, after excluding questionnaires with missing data on more than 30% of one or more subscales, data from the remaining respondents will be used to perform all analyses of the construct validity and internal consistency reliability of the instrument. Likewise, in the test-retest reliability analysis, of the surveys returned from both the initial and repeat mailings, any survey with largely incomplete items in the third subscale (>30%) will be excluded from the data analysis. The remaining questionnaires will be included in the analysis with the missing data imputed with the item mean. A hedged sketch of this rule is given below.
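
As a hedged sketch of the rule described above, the following snippet applies the 30% threshold and item-mean imputation to invented questionnaire data; the subscale groupings and responses are assumptions for illustration only.

```python
import numpy as np
import pandas as pd

# Hypothetical responses (NaN = missing); columns grouped into assumed subscales of four items each
responses = pd.DataFrame({
    "a1": [4, np.nan, np.nan, 3], "a2": [3, 4, np.nan, 4],
    "a3": [5, 4, np.nan, 3],      "a4": [4, 5, 2, 4],
    "b1": [2, 4, 3, 3], "b2": [3, 4, 5, np.nan],
    "b3": [4, 5, 4, 2], "b4": [3, 3, 4, 4],
})
subscales = {"A": ["a1", "a2", "a3", "a4"], "B": ["b1", "b2", "b3", "b4"]}

keep = pd.Series(True, index=responses.index)
for items in subscales.values():
    # Listwise deletion: drop a questionnaire if more than 30% of any subscale's items are missing
    missing_share = responses[items].isna().mean(axis=1)
    keep &= missing_share <= 0.30

retained = responses[keep].copy()

# Impute the remaining, sparse missing values with the item mean
retained = retained.fillna(retained.mean())
print(retained)
```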

3.8 Protection of Human Subjects

This proposed study will be submitted for approval to the . . Potential participants will receive instructions in the cover letter explaining the purpose of the study and will be given the choice of completing the questionnaire. They will be informed that completion of the questionnaire implies consent, and that they have the right not to answer any question if they do not wish to.

In the test-retest study with a sample of 200 subjects, the survey will be mailed twice and a code number will be attached for each of these subjects. Thus the respondents to the initial mailing will be identified and will receive the repeat mailing two to three weeks after the initial mailing. However, the link between the names and the code numbers will be stored in a locked drawer separate from the data.

In the construct validity study, the researcher sends only one mailing to the sample of 2,500 subjects. The mailing includes the questionnaire, the cover letter, and the business reply envelope. Only returned questionnaires receive a code number. Among the random sample of 2,500 subjects, names of subjects are not linked to the questionnaire or to any coded information. Thus, for this mailing, it is not necessary to link names with the mailings and replies, so anonymity will be assured.

References

Hughes, C. (2006). Qualitative and quantitative approaches to social research. Retrieved from http://www2.warwick.ac.uk/fac/soc/sociology/staff/academicstaff/chughes/hughesc_index/teachingresearchprocess/quantitativequalitative/quantitativequalitative/

Chua, Y. P. (2006). Kaedah dan Statistik Penyelidikan: Buku 1, Kaedah Penyelidikan. Malaysia: McGraw-Hill (Malaysia) Sdn. Bhd.

Gay, L. R., & Airasian, P. (2003). Educational Research: Competencies for Analysis and Applications (7th ed.). New Jersey: Merrill Prentice Hall.

Wynd, C. A., Schmidt, B., & Schaefer, M. A. (2003). Two quantitative approaches for estimating content validity. Western Journal of Nursing Research, 25(5), 508-518.

Bu, X. (2005). Development and psychometric analysis of an instrument: Attitudes toward patient advocacy (Doctoral dissertation). Retrieved from ProQuest database. (DOI: 10.1002/nur.20233)
