When To Use Agreement Versus Reliability Measures
Kottner, J., Audigé, L., Brorson, S., Donner, A., Gajewski, B. J., Hróbjartsson, A., et al. (2011). Guidelines for Reporting Reliability and Agreement Studies (GRRAS) were proposed. Int. J. Nurs. Stud. 48, 661-671. doi: 10.1016/j.ijnurstu.2011.01.017

With regard to the use of screening tools, the validity and reliability of an instrument are important indicators of its quality.
They are necessary for assessing the usefulness of evaluations in therapeutic, educational, and research contexts and are therefore of great importance in a variety of scientific disciplines, such as psychology, education, medicine, and linguistics, which often rely on assessments of behaviours, symptoms, or abilities. Validity is defined as "the degree to which evidence and theory support interpretations of test scores related to the proposed uses of tests" (American Educational Research Association et al., 1999). In other words, the validity of an assessment instrument reflects its ability to capture what it intends to measure. Reliability estimates describe the accuracy of an instrument; they refer to its ability to produce consistent and similar results. There are different ways to measure reliability, for example, through different raters evaluating the same participant (inter-rater reliability) or through assessments at different points in time (test-retest reliability; for an in-depth discussion of validity and reliability see, e.g., Borsboom et al., 2004). Reliability estimates, e.g., of children's language abilities, are often limited to linear correlations and lack a precise understanding of the methodological approaches involved, which can lead to significant restrictions on the interpretability and comparability of reported outcomes.
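As a minimal illustration of the inter-rater case, the sketch below (not part of the original article; the toy data, rater labels, and the use of the pingouin package are assumptions) computes intraclass correlation coefficients (ICCs) for two raters scoring the same children. The ICC(2,1) row reflects absolute agreement between single raters, while ICC(3,1) reflects consistency only.

```python
import pandas as pd
import pingouin as pg  # assumption: pingouin is available for ICC computation

# Hypothetical toy data: six children, each rated by two raters on the same scale
df = pd.DataFrame({
    "child": list(range(1, 7)) * 2,
    "rater": ["rater_A"] * 6 + ["rater_B"] * 6,
    "score": [20, 35, 50, 41, 28, 60,    # rater A
              24, 38, 55, 44, 30, 66],   # rater B
})

# Long-format data in, one row per ICC variant out; ICC2 (absolute agreement,
# single rater) and ICC3 (consistency, single rater) are the most relevant rows here
icc = pg.intraclass_corr(data=df, targets="child", raters="rater", ratings="score")
print(icc[["Type", "Description", "ICC", "CI95%"]])
```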
This article therefore aims to provide a methodological tutorial for assessing the reliability, agreement, and correlation of expressive vocabulary assessments. By applying the proposed strategy to a concrete research question, namely whether a screening questionnaire designed to be completed by parents can also be used with childminders (daycare providers), we highlight the potential effects of using different measures of reliability, agreement, and correlation on the interpretation of concrete empirical results. The proposed approach can potentially benefit the analysis of ratings of a wide range of skills and behaviours across disciplines.

While the commonly used correlation analyses (mostly Pearson correlations) provide information on the strength of the relationship between two sets of values, they do not capture the agreement between raters at all (Bland and Altman, 2003; Kottner et al., 2011). Nevertheless, claims about inter-rater agreement are often drawn from correlation analyses (see, e.g., Bishop and Baird, 2001; Janus, 2001; Van Noord and Prevatt, 2002; Norbury et al., 2004; Bishop et al., 2006; Massa et al., 2008; Gudmundsson and Gretarsson, 2009). The error in such conclusions is easy to detect: a perfect linear correlation can be obtained if one group of raters differs systematically (by a nearly constant amount) from the other, even though there is not a single absolutely agreeing rating. Conversely, agreement is reached only if the points lie on the line (or within a defined range around the line) of equality between the two ratings (Bland and Altman, 1986; Liao et al., 2010). Correlation-based analyses therefore do not measure agreement between raters and are not sufficient to comprehensively assess inter-rater reliability.
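To make this pitfall concrete, here is a minimal sketch (the numbers are hypothetical and not taken from the article) in which a second rater scores every child roughly ten points higher than the first: the Pearson correlation is near perfect, while a Bland-Altman style summary reveals a systematic bias of about ten points.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical scores: rater B rates every child roughly 10 points higher than rater A
rater_a = np.array([20.0, 35.0, 50.0, 41.0, 28.0, 60.0])
rater_b = rater_a + np.array([9.0, 11.0, 10.0, 12.0, 8.0, 10.0])

r, _ = pearsonr(rater_a, rater_b)
print(f"Pearson r = {r:.3f}")  # ~0.996: near-perfect correlation, yet no absolute agreement

# Bland-Altman style summary: mean difference (bias) and 95% limits of agreement
diff = rater_b - rater_a
bias = diff.mean()
half_width = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.1f} points, limits of agreement = "
      f"[{bias - half_width:.1f}, {bias + half_width:.1f}]")
```

Reporting only the high correlation here would suggest good agreement; plotting the pairwise differences against the pairwise means (the Bland and Altman, 1986, approach) makes the constant offset immediately visible.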