An evaluation of inter-rater reliability of the technicians in a manufacturing environment

Authors

  • Ng WC School of Management, Universiti Sains Malaysia, 11800 Minden, Penang
  • Teh SY School of Mathematical Sciences, Universiti Sains Malaysia, 11800 Minden, Penang
  • Low HC School of Mathematical Sciences, Universiti Sains Malaysia, 11800 Minden, Penang
  • Teoh PC School of Science and Technology, Wawasan Open University, 10050 Penang

Keywords:

Fleiss’ Kappa analysis, Smart Manufacturing, Continuous Improvement, Technicians, Inter-rater reliability

Abstract

In the era of the digital economy, Industry 4.0 has become the aim of manufacturing organisations seeking to transform into smart factories. With the advancement of technology, companies engage in continuous improvement projects to ensure that high-quality products are manufactured. Assessing the strength of agreement between technicians' ratings in quality problem identification is of primary interest because an effective diagnostic procedure depends on a high level of consistency between technicians. In practice, however, discrepancies are often observed between technicians' ratings, and these are considered a major quality issue in monitoring the troubleshooting and repair of equipment. This motivated us to evaluate the accuracy of, and agreement between, technicians' ratings. The primary objective of this study is to evaluate the inter-rater reliability of technicians on continuous improvement projects before actual implementation. A case study was conducted in a smart manufacturing company in the Penang Free Trade Zone. The study used Fleiss' Kappa analysis because it is suitable for situations with more than two raters; here, six technicians were responsible for identifying six problems simulated for a continuous improvement project. The findings show good to excellent agreement and high accuracy in problem identification. Overall, the technicians are capable of understanding the newly developed troubleshooting and repair database and are able to carry out the continuous improvement project effectively. This outcome gives top management insight for evidence-based decision making on fully deploying the newly developed digital database in smart manufacturing.
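As background for the method named in the abstract, the following is a minimal sketch of the Fleiss' Kappa computation (Fleiss, 1971) for an N x k matrix of category counts. The ratings matrix shown is purely illustrative (six simulated problems, six raters, three hypothetical diagnostic categories) and is not the study's data.

```python
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """Fleiss' kappa for an N x k matrix of category counts.

    counts[i, j] = number of raters assigning subject i to category j;
    every row must sum to the same number of raters n.
    """
    N, k = counts.shape
    n = counts.sum(axis=1)[0]                               # raters per subject
    p_j = counts.sum(axis=0) / (N * n)                      # category proportions
    P_i = (np.sum(counts**2, axis=1) - n) / (n * (n - 1))   # per-subject agreement
    P_bar = P_i.mean()                                      # mean observed agreement
    P_e = np.sum(p_j**2)                                    # chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Illustrative example only (not the study's data):
# 6 simulated problems, each rated by 6 technicians into one of 3 categories.
ratings = np.array([
    [6, 0, 0],
    [5, 1, 0],
    [0, 6, 0],
    [1, 5, 0],
    [0, 0, 6],
    [0, 1, 5],
])
print(f"Fleiss' kappa = {fleiss_kappa(ratings):.3f}")  # ~0.749 for this matrix
```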

References

J. Lee, H.-A. Kao, S. Yang, Service innovation and smart analytics for Industry 4.0 and big data environment, Procedia CIRP, Vol. 16, pp. 3-8, 2014.

N. Bhuiyan, A. Baghel, An overview of continuous improvement: From the past to the present, Manage. Decis., Vol. 43, No. 5, pp. 761-771, 2005.

J. Oakland, Total Organizational Excellence – Achieving World-Class Performance. Butterworth-Heinemann, Oxford, 1999.

S. Caffyn, Development of a continuous improvement self-assessment tool, Int. J. Oper. Prod. Man., Vol. 19, No. 11, pp. 1138-1153, 1999.

M. Gallagher, S. Austin, S. Caffyn, Continuous Improvement in Action: The Journey of Eight Companies. Kogan Page, London, 1997.

J. Cohen, A coefficient of agreement for nominal scales, Educ. Psychol. Meas., Vol. 20, pp. 37-46, 1960.

W. A. Scott, Reliability of content analysis: The case of nominal scale coding, Public Opinion Quarterly, Vol. 19, pp. 321-325, 1955.

J. Cohen, Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit, Psych. Bull., Vol. 70, pp. 213-220, 1968.

B. S. Everitt, Moments of the statistics kappa and weighted kappa, Br. J. Math. Stat. Psychol., Vol. 21, pp. 97-103, 1968.

A. E. Maxwell, A. E. G. Pilliner, Deriving coefficients of reliability and agreement for ratings, Br. J. Math. Stat. Psychol., Vol. 21, pp. 105-116, 1968.

J. L. Fleiss, Measuring nominal scale agreement among many raters, Psych. Bull., Vol. 76, pp. 378-382, 1971.

J. L. Fleiss, Statistical Methods for Rates and Proportions. John Wiley & Sons, Inc., New York, 1981.

S. I. Bangdiwala, H. E. Bryan, Using SAS software graphical procedures for the observer agreement chart, in Proc. 12th Annu. SAS Users Group Int. Conf., Texas, 1987, pp. 1083-1088.

W. Barlow, N. Y. Lai, S. P. Azen, A comparison of methods for calculating a stratified kappa, Statist. Med., Vol. 10, pp. 1465-1472, 1991.

M. L. McHugh, Interrater reliability: The kappa statistic, Biochem. Med., Vol. 22, No. 3, pp. 276-282, 2012.

M. Down, F. Czubak, G. Gruska, S. Stahley, D. Benham, Measurement Systems Analysis (4th ed.). Automotive Industry Action Group (AIAG), Michigan, 2010.

N. Gisev, J. S. Bell, T. F. Chen, Interrater agreement and interrater reliability: Key concepts, approaches, and applications, Res. Soc. Admin. Pharm., Vol. 9, No. 3, pp. 330-338, 2013.

Published

2024-02-26

How to Cite

Ng, W. C., Teh, S. Y., Low, H. C., & Teoh, P. C. (2024). An evaluation of inter-rater reliability of the technicians in a manufacturing environment. COMPUSOFT: An International Journal of Advanced Computer Technology, 9(04), 3629–3632. Retrieved from https://ijact.in/index.php/j/article/view/563

Section

Original Research Article
