Statistical Analysis. For the agreement between the 2 cytology raters, we calculated the total agreement with a binomial 95% confidence interval (95% CI). We calculated the ▇▇▇▇▇ kappa with 95% CI as a chance-corrected measure of agreement, as described by ▇▇▇▇▇▇▇.18 Because kappa treats all disagreements equally and does not account for the degree of disagreement between categories, we also calculated the linear-weighted kappa with 95% CI for the ordered cytology categories; thus, disagreement between adjacent categories produces a smaller reduction in kappa than disagreement between nonadjacent categories. Kappa values < 0.20 were interpreted as poor, values between 0.21 and 0.40 as fair, values between 0.41 and 0.60 as moderate, values between 0.61 and 0.80 as good, and values > 0.80 as very good. Exact versions of the symmetry (4-category) and ▇▇▇▇▇▇▇ (2-category) chi-square tests were used to test for statistically significant differences in the distribution of the cytologic interpretations between raters. A nonparametric test for trend was used to assess the trend in the percentage of positive results for each biomarker for the risk of AIN2 or higher (AIN2+) with increasing severity of the cytologic interpretation.19 Finally, a ▇▇▇▇▇▇ exact test was used to test for differences in the percentage of positive results for each biomarker between subgroups defined by the paired cytologic interpretations.
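As an illustration only (not the authors' code, which is not given in the text), the chance-corrected agreement measures described above can be sketched in a few lines of numpy. The weight matrix distinguishes the two variants: with all off-diagonal weights equal to 1 the formula gives the plain (unweighted) kappa, while weights proportional to the distance between ordered categories give the linear-weighted kappa, in which adjacent-category disagreement is penalized less than nonadjacent disagreement. The function name and table layout (raters on rows/columns of a k x k contingency table) are assumptions for the sketch.

```python
import numpy as np

def weighted_kappa(table, weights="linear"):
    """Chance-corrected agreement from a k x k rater-by-rater table.

    weights=None   -> plain (unweighted) kappa: every disagreement counts 1.
    weights="linear" -> linear-weighted kappa: the penalty grows with the
    distance between the ordered categories, so adjacent-category
    disagreement reduces kappa less than nonadjacent disagreement.
    """
    table = np.asarray(table, dtype=float)
    n = table.sum()
    k = table.shape[0]
    i, j = np.indices((k, k))
    if weights is None:
        w = (i != j).astype(float)       # all disagreements weigh 1
    else:
        w = np.abs(i - j) / (k - 1)      # linear distance weights in [0, 1]
    observed = table / n                                  # observed proportions
    expected = np.outer(table.sum(1), table.sum(0)) / n**2  # chance agreement
    # kappa = 1 - (weighted observed disagreement) / (weighted expected disagreement)
    return 1 - (w * observed).sum() / (w * expected).sum()

def total_agreement(table):
    """Overall percent agreement: the proportion of paired readings
    falling on the diagonal of the contingency table."""
    table = np.asarray(table, dtype=float)
    return np.trace(table) / table.sum()
```

For a 2 x 2 table the two variants coincide; with 3 or more ordered categories, moving a disagreement from a nonadjacent to an adjacent cell raises the linear-weighted kappa while leaving the unweighted kappa unchanged, which is exactly the property the text relies on.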
Source: Interrater Agreement of Anal Cytology