A case that is sometimes considered a problem with Cohen's kappa occurs when comparing the kappa calculated for two pairs of raters, where both pairs have the same percent agreement but one pair distributes its ratings similarly across the classes while the other pair distributes them very differently. For example, in the two cases below there is equal agreement between raters A and B (60 out of 100 items in both cases), so we might expect the relative kappa values to reflect that. (In the first case, 70 ratings fall in the "for" class and 30 in the "against" class; in the second case these numbers are reversed.) Calculating Cohen's kappa for each case, however, shows greater agreement between A and B in the second case than in the first. This is because, although the percent agreement is the same, the agreement that would occur "by chance" is considerably higher in the first case (0.54 vs. 0.46).

Cohen's kappa measures the agreement between two raters who each classify N items into C mutually exclusive categories. It is defined as

    kappa = (p_o - p_e) / (1 - p_e),

where p_o is the observed proportion of agreement and p_e is the proportion of agreement expected by chance.

Suppose you are analyzing data for 50 people applying for a grant. Each grant proposal was read by two readers, and each reader said either "yes" or "no" to the proposal. Suppose the counts were as follows, where A and B are the readers, the entries on the main diagonal of the matrix (a and d) count the agreements, and the off-diagonal entries (b and c) count the disagreements.

Kappa is an index that measures observed agreement relative to a baseline agreement. However, investigators must consider carefully whether kappa's baseline agreement is relevant to the research question. Kappa's baseline is often described as agreement due to chance, which is only partially correct.
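Using the percent agreement stated above (0.60 in both cases) and the two chance-agreement values quoted in the text (0.54 and 0.46), the two kappas can be computed directly from the definition kappa = (p_o - p_e) / (1 - p_e). A minimal sketch:

```python
def cohens_kappa(p_o: float, p_e: float) -> float:
    """Cohen's kappa from the observed agreement proportion p_o
    and the chance (expected) agreement proportion p_e."""
    return (p_o - p_e) / (1 - p_e)

# Both pairs of raters agree on 60 of 100 items (p_o = 0.60), but the
# chance agreement p_e differs because the marginal totals differ.
kappa_case1 = cohens_kappa(p_o=0.60, p_e=0.54)
kappa_case2 = cohens_kappa(p_o=0.60, p_e=0.46)

print(round(kappa_case1, 2))  # 0.13
print(round(kappa_case2, 2))  # 0.26
```

Despite identical percent agreement, the second case yields the higher kappa, because less of its agreement is attributable to chance.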
Kappa's baseline is the agreement that would be expected from random allocation, given the quantities specified by the marginal totals of the square contingency table.
Kappa = 0 when the observed allocation appears to be random, regardless of the quantity disagreement constrained by the marginal totals.
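For the two-reader grant example above, kappa can also be computed directly from the 2x2 cell counts a, b, c, d. The counts below are hypothetical (the original table is not reproduced in the text) and are chosen only to illustrate the calculation:

```python
def kappa_from_2x2(a: int, b: int, c: int, d: int) -> float:
    """Cohen's kappa from a 2x2 table: a and d are the agreement counts
    on the main diagonal; b and c are the off-diagonal disagreements."""
    n = a + b + c + d
    p_o = (a + d) / n                        # observed agreement
    p_yes = ((a + b) / n) * ((a + c) / n)    # chance both readers say "yes"
    p_no = ((c + d) / n) * ((b + d) / n)     # chance both readers say "no"
    p_e = p_yes + p_no                       # total chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical counts for 50 proposals: 20 yes/yes and 15 no/no
# agreements, plus 5 + 10 disagreements.
print(round(kappa_from_2x2(a=20, b=5, c=10, d=15), 2))  # 0.4
```

With these counts, p_o = 35/50 = 0.70 and p_e = 0.50, so kappa = 0.40: the readers agree well above the baseline implied by their marginal totals.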