Statistical Analyses. Agreement on nominal variables describing routes to diagnosis was assessed with two chance-corrected coefficients: Cohen's kappa [17] and Gwet's AC1 [18]. Cohen's kappa statistic is widely used to measure agreement between raters on nominally scaled data. However, it is sensitive to an unbalanced prevalence of the trait: when a large proportion of ratings is either positive or negative, kappa may yield a low value despite high overall percentage agreement [19]. We therefore used AC1 as an alternative agreement coefficient to remedy this issue. Agreement measured by kappa and AC1 was interpreted as: poor (below 0), slight (0–0.2), fair (0.2–0.4), moderate (0.4–0.6), substantial (0.6–0.8) and almost perfect (0.8–1.0).
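To make the prevalence effect concrete, the following is a minimal sketch (not the authors' code; the function name and the counts are hypothetical) that computes both coefficients for two raters rating a dichotomous trait, given a 2×2 table of counts:

```python
# Cohen's kappa and Gwet's AC1 for two raters, dichotomous trait.
# The two coefficients share the observed agreement p_o and differ
# only in the chance-agreement term that is corrected for.

def kappa_and_ac1(a, b, c, d):
    """a = both positive, b = rater1 pos / rater2 neg,
       c = rater1 neg / rater2 pos, d = both negative."""
    n = a + b + c + d
    po = (a + d) / n                              # observed agreement
    p1, p2 = (a + b) / n, (a + c) / n             # marginal positive proportions
    pe_kappa = p1 * p2 + (1 - p1) * (1 - p2)      # chance agreement (kappa)
    pi = (p1 + p2) / 2                            # mean positive proportion
    pe_ac1 = 2 * pi * (1 - pi)                    # chance agreement (AC1)
    kappa = (po - pe_kappa) / (1 - pe_kappa)
    ac1 = (po - pe_ac1) / (1 - pe_ac1)
    return po, kappa, ac1

# Hypothetical unbalanced prevalence: 90 of 100 cases negative for both raters.
print(kappa_and_ac1(a=2, b=4, c=4, d=90))
```

Under these illustrative counts the raw agreement is 92%, yet kappa is about 0.29 (fair) while AC1 is about 0.91, which is why AC1 was reported alongside kappa for prevalence-skewed variables.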