How to compute inter-rater reliability metrics (Cohen’s Kappa, Fleiss’s Kappa, Cronbach’s Alpha, Krippendorff’s Alpha, Scott’s Pi, Intra-class correlation) in Python

Recently, I was involved in some annotation processes involving two coders, and I needed to compute inter-rater reliability scores. There are multiple measures for calculating the agreement between two or more coders/annotators. If you are wondering which measure to use in your case, I would suggest reading Hayes & Krippendorff (2007) …
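As a quick taste of what the full post walks through, here is a minimal sketch of how a few of these measures can be computed with common Python libraries. It assumes scikit-learn and NLTK are installed, and the two coders’ labels are made up purely for illustration:

```python
from sklearn.metrics import cohen_kappa_score
from nltk.metrics.agreement import AnnotationTask

# Hypothetical labels from two coders for the same five items
coder1 = ["yes", "no", "yes", "yes", "no"]
coder2 = ["yes", "no", "no", "yes", "no"]

# Cohen's Kappa handles exactly two coders
print("Cohen's Kappa:", cohen_kappa_score(coder1, coder2))

# NLTK's AnnotationTask takes (coder, item, label) triples and
# also provides Scott's Pi and Krippendorff's Alpha
triples = [("c1", i, lab) for i, lab in enumerate(coder1)] + \
          [("c2", i, lab) for i, lab in enumerate(coder2)]
task = AnnotationTask(data=triples)
print("Scott's Pi:", task.pi())
print("Krippendorff's Alpha:", task.alpha())
```

For the remaining measures, statsmodels offers fleiss_kappa (for more than two coders), and the pingouin package provides cronbach_alpha and intraclass_corr.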


Introduction to the Python Dash Framework for Dashboard Generation

Dashboards play a crucial role in conveying useful and actionable information from collected data, regardless of context. With economically feasible sensors, improved computation and storage, and IoT frameworks, it has become easier to collect enormous amounts of data from a variety of contexts (e.g. military, health-care, education). However, finding insights out of …
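To preview what the post covers, the sketch below is a minimal Dash app. It assumes the dash and plotly packages are installed and uses Plotly’s bundled iris sample data only for illustration:

```python
from dash import Dash, dcc, html
import plotly.express as px

# Sample dataset bundled with Plotly, used here only for illustration
df = px.data.iris()
fig = px.scatter(df, x="sepal_width", y="sepal_length", color="species")

app = Dash(__name__)

# The layout describes the dashboard as a tree of components
app.layout = html.Div([
    html.H1("A Minimal Dash Dashboard"),
    dcc.Graph(figure=fig),
])

if __name__ == "__main__":
    # Starts a local development server (by default at http://127.0.0.1:8050)
    app.run(debug=True)
```

Running the script and opening the printed URL in a browser renders the interactive scatter plot; more complex dashboards add callbacks that update components in response to user input.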