Copyright © 2020. All Rights Reserved.
Explanations
The Kappa statistic was first described by Cohen in 1960. It is a measurement of concordance, or agreement, between two or more judges in the way they classify or categorise subjects into different groups or categories. It became a popular method of measuring concordance for nominal data.
Cohen (1968) later modified the algorithm for use when there are only two raters or measurements, by attaching a weight to the difference between each pair of measurements. The weight increases with the width of the difference, making the algorithm suitable for ordinal scales. Cohen's (weighted) Kappa is therefore a measurement of concordance when the data are ordinal.

Nomenclature

Ordinal data are data sets where the numbers are in order, but the distances between the numbers are unstated. In other words, 3 is bigger than 2 and 2 is bigger than 1, but 3−2 is not necessarily the same as 2−1. A common ordinal data set is the Likert scale, where 1=strongly disagree, 2=disagree, 3=neutral, 4=agree, and 5=strongly agree. Although these numbers are in order, the difference between strongly agree and agree (5−4) is not necessarily the same as that between disagree and strongly disagree (2−1). In the example on this page, babies are classified as small (1), as expected (2), and large (3). Large (3) is bigger than expected (2), and expected (2) is bigger than small (1); however, the difference between large and expected is not necessarily the same as that between expected and small.

Instrument is any method of measurement: for example, a ruler, a Likert scale (5 point scale from strongly disagree to strongly agree), or a machine (e.g. ultrasound measurement of bone length). In the example on this page, the instrument is the judgement of the two doctors concerned.

Subjects are the subjects of the measurements: the babies in this example.

Example

The example on this page is artificially created to demonstrate the procedure, and does not reflect any real clinical situation. The data purport to be from two doctors evaluating the size of 30 babies in their mothers' abdomens, classifying each baby as smaller than expected (1), size as expected (2), or larger than expected (3). Cohen's Kappa then evaluates how much the two doctors agreed with each other (their concordance).

The data can be entered in two manners: as two columns of paired values, or as a matrix of counts by ranks (see Programs 1 and 2 in the R Codes panel).
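To sketch why weighting matters for ordinal data: the linear weights used by the R code on this page are simply the absolute distances between ranks, so a two-rank disagreement (small vs large) counts twice as much as a one-rank disagreement (e.g. small vs as expected).

```r
# Linear disagreement weights for the 3 baby-size categories
# (a sketch: weight = |i - j|, the weighting used by the Kappa code on this page)
g <- 3                           # 1=small, 2=as expected, 3=large
w <- abs(outer(1:g, 1:g, "-"))   # weight matrix, |i - j|
print(w)
#      [,1] [,2] [,3]
# [1,]    0    1    2
# [2,]    1    0    1
# [3,]    2    1    0
```

Perfect agreement (the diagonal) carries zero weight, and the weight grows with the number of ranks separating the two judgements.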
The results consist firstly of a display of the count matrix, then the Kappa, its Standard Error, and its 95% confidence interval. Two common methods of interpretation can be used.
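One widely used benchmark for interpreting Kappa is the scale of Landis and Koch (1977, cited in the references below). A minimal sketch of that scale as an R function (the function name is for illustration only):

```r
# Interpretation of Kappa using the benchmarks of Landis and Koch (1977):
# <0 poor, 0-0.20 slight, 0.21-0.40 fair, 0.41-0.60 moderate,
# 0.61-0.80 substantial, 0.81-1.00 almost perfect
InterpretKappa <- function(kappa)
{
  if(kappa < 0)     return("Poor")
  if(kappa <= 0.20) return("Slight")
  if(kappa <= 0.40) return("Fair")
  if(kappa <= 0.60) return("Moderate")
  if(kappa <= 0.80) return("Substantial")
  return("Almost perfect")
}
InterpretKappa(0.278481)   # the Kappa from the example on this page -> "Fair"
```

By this benchmark, the two doctors in the example show only fair agreement.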
References

Cohen J. A coefficient of agreement for nominal scales. Educational and Psychological Measurement. 20: 37-46, 1960.
Cohen J. Weighted kappa: nominal scale agreement with provision for scaled disagreement or partial credit. Psychol. Bull. 70: 213-220, 1968.
Fleiss JL, Cohen J, Everitt BS (1969). Large sample standard errors of kappa and weighted kappa. Psychological Bulletin, Vol 72(5): p. 323-327.
Fleiss JL. Statistical Methods for Rates and Proportions, second edition. Wiley Series in Probability and Mathematical Statistics. Chapter 13, p. 212-236.
Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977; 33: 159-174.
wikipedia.org/wiki/Cohen's_kappa
University of York, Department of Health Sciences: Measurement in Health and Disease, Cohen's Kappa. A teaching paper with easy-to-understand and full formulation for Cohen's Kappa, weighted and unweighted, and Standard Error calculations.
R Codes

This panel presents the algorithms for Cohen's Kappa for ordinal data.
Firstly, the subroutine function that calculates Kappa from a matrix of counts by ranks:

# Cohen's Kappa for ordinal data
# Function for the Kappa algorithm, using a matrix of counts by ranks
CalCohenKappa <- function(mx)
{
  print("Matrix of count by ranks")
  print(mx)
  g = nrow(mx)          # number of ranks
  n = 0                 # n = total number of paired values
  mxSq <- matrix(data=0, nrow=g+1, ncol=g+1, byrow=TRUE)  # data matrix with row and col totals added
  for(i in 1:g) for(j in 1:g)
  {
    v = mx[i,j]
    n = n + v
    mxSq[i,j] = v
    mxSq[i,g+1] = mxSq[i,g+1] + v       # row total, stored in the extra column
    mxSq[g+1,j] = mxSq[g+1,j] + v       # column total, stored in the extra row
    mxSq[g+1,g+1] = mxSq[g+1,g+1] + v   # grand total
  }
  # print(mxSq)     # optional print out
  # Calculate Cohen's (weighted) Kappa
  mxp <- matrix(data=0, nrow=g, ncol=g, byrow=TRUE)    # observed proportions
  mxpe <- matrix(data=0, nrow=g, ncol=g, byrow=TRUE)   # expected proportions
  mxw <- matrix(data=0, nrow=g, ncol=g, byrow=TRUE)    # weights
  for(i in 1:g) for(j in 1:g)
  {
    mxp[i,j] = mxSq[i,j] / mxSq[g+1,g+1]
    mxpe[i,j] = mxSq[i,g+1] * mxSq[g+1,j] / mxSq[g+1,g+1] / mxSq[g+1,g+1]
    if(i==j) { mxw[i,j] = 0 } else { mxw[i,j] = abs(i-j) }   # linear weight = distance between ranks
  }
  sumWP = 0
  sumWPe = 0
  sumW2P = 0
  for(i in 1:g) for(j in 1:g)
  {
    sumWP = sumWP + mxw[i,j] * mxp[i,j]
    sumWPe = sumWPe + mxw[i,j] * mxpe[i,j]
    sumW2P = sumW2P + mxw[i,j] * mxw[i,j] * mxp[i,j]
  }
  kappa = 1.0 - sumWP / sumWPe                                  # Cohen's Kappa
  se = sqrt((sumW2P - sumWP * sumWP) / (n * sumWPe * sumWPe))   # Standard Error
  print(paste("Cohen's Kappa=", kappa," SE=", se ))
  print(paste0("95% CI = ", (kappa - 1.96 * se), " to ", (kappa + 1.96 * se)))
}

Program 1: data entry is by pairs of values.

# Program 1: data entry by 2 columns of paired values
datValues = ("
1 1
1 1
1 1
1 1
1 1
2 2
2 2
2 2
2 2
2 2
3 3
3 3
3 3
3 3
3 3
1 2
1 3
1 3
1 2
1 2
2 1
2 3
2 3
2 1
2 1
3 1
3 2
3 1
3 2
3 2
")
datMx <- read.table(textConnection(datValues),header=FALSE)  # 2 columns of paired values
# datMx         # optional printout of original data
n = nrow(datMx)
# convert values into ranks, ranking by range of values and not by number of cases
tmpMx <- datMx                                          # temporary scratch matrix
rankMx <- matrix(data=0, nrow=n, ncol=2, byrow=TRUE)    # data ranked to range of values
minv = 0
rank = 0
cycle = 0
while(minv<1e10 & cycle<2*n)
{
  minv = 1e10
  rank = rank + 1
  cycle = cycle + 1
  minv = min(tmpMx)
  if(minv<1e10)
  {
    for(i in 1:n) for(j in 1:2) if(tmpMx[i,j]==minv)
    {
      rankMx[i,j] = rank
      tmpMx[i,j] = 1e10    # flag the cell so it is skipped in later passes
    }
  }
}
g = rank - 1    # number of ranks
# rankMx        # optional printout of ranks
# Create count matrix
countMx <- matrix(data=0, nrow=g, ncol=g, byrow=TRUE)
for(i in 1:n) countMx[rankMx[i,1],rankMx[i,2]] = countMx[rankMx[i,1],rankMx[i,2]] + 1
# countMx       # optional printout of count matrix
CalCohenKappa(countMx)   # call function to calculate and present results

The results are

[1] "Matrix of count by ranks"
     [,1] [,2] [,3]
[1,]    5    3    2
[2,]    3    5    2
[3,]    2    3    5
[1] "Cohen's Kappa= 0.278481012658228  SE= 0.14691180903751"
[1] "95% CI = -0.00946613305529259 to 0.566428158371748"

Program 2 allows data entry using the count matrix by ranks (if this has already been calculated).

# Program 2: data entry using matrix of counts by ranks
datCount = ("
5 3 2
3 5 2
2 3 5
")
mx <- read.table(textConnection(datCount),header=FALSE)   # matrix of counts by ranks
CalCohenKappa(mx)

The results are the same as those of Program 1

[1] "Matrix of count by ranks"
     [,1] [,2] [,3]
[1,]    5    3    2
[2,]    3    5    2
[3,]    2    3    5
[1] "Cohen's Kappa= 0.278481012658228  SE= 0.14691180903751"
[1] "95% CI = -0.00946613305529259 to 0.566428158371748"
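The loops above can be cross-checked with vectorised base R. The following sketch computes the same weighted Kappa and Standard Error directly from the count matrix used on this page:

```r
# Vectorised cross-check of the weighted Kappa calculation
# (a sketch using base R only; reproduces the Program 2 result)
mx <- matrix(c(5,3,2, 3,5,2, 2,3,5), nrow=3, byrow=TRUE)   # count matrix by ranks
n  <- sum(mx)                                # total number of paired values
g  <- nrow(mx)                               # number of ranks
p  <- mx / n                                 # observed proportions
pe <- outer(rowSums(mx), colSums(mx)) / n^2  # expected proportions from marginals
w  <- abs(outer(1:g, 1:g, "-"))              # linear weights |i - j|
kappa <- 1 - sum(w * p) / sum(w * pe)                               # Cohen's Kappa
se    <- sqrt((sum(w^2 * p) - sum(w * p)^2) / (n * sum(w * pe)^2))  # Standard Error
round(c(kappa = kappa, se = se), 6)   # kappa 0.278481, se 0.146912
```

The vectorised form makes the structure of the formula explicit: Kappa is one minus the ratio of weighted observed disagreement to weighted expected disagreement.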
