Student Automatic Correction Using Natural Language Processing
Keywords:
educational assessment, Natural Language Processing, Recurrent Neural Network, NLTK

Abstract
In this paper, we present a system for automatically evaluating the quality of a question paper. The question paper is an important part of educational assessment, and its quality is critical to achieving the assessment's goal. In many educational sectors, question papers are still prepared by hand; analyzing a question paper beforehand can help identify errors and better achieve the assessment's goals. This experiment concentrates on higher education in the technical domain. First, we conducted a student survey to identify the key factors influencing question paper quality, and found three: question relevance, question difficulty, and time constraint. Automatic grading requires cutting-edge technologies such as Natural Language Processing (NLP) and Recurrent Neural Networks (RNNs). In the traditional evaluation system, a designated evaluator for a given subject manually assesses the answers written by students, and evaluating every student's answer script by hand is time-consuming. We therefore propose an Automatic Exam Correction Framework for digital answers (AECF), which enables automatic answer correction and grading. The system can be used in any educational institution to reduce the time spent manually evaluating answer scripts. AECF was implemented in Python using packages such as NLTK; the backend runs on a Python Flask server, and the user interface was built with HTML and CSS.
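To illustrate the kind of automatic answer scoring the abstract describes, the sketch below compares a student's answer to a reference answer by cosine similarity of their term-frequency vectors and scales the result to a mark. This is a minimal, standard-library-only illustration, not the authors' actual AECF implementation; the function names (`tokenize`, `grade`) and the simple regex tokenizer (a stand-in for an NLTK tokenizer) are our own assumptions.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase and split on letter runs (stand-in for an NLTK word tokenizer).
    return re.findall(r"[a-z]+", text.lower())

def cosine_similarity(reference, student):
    # Cosine similarity between the term-frequency vectors of the two answers.
    va, vb = Counter(tokenize(reference)), Counter(tokenize(student))
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def grade(reference, student, max_marks=10):
    # Scale the similarity score to a mark out of max_marks.
    return round(cosine_similarity(reference, student) * max_marks, 1)
```

In a fuller system, such a scoring function would sit behind a Flask endpoint that receives digital answer scripts and returns marks, with NLTK handling tokenization, stop-word removal, and stemming before comparison.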
License
Copyright (c) 2023 A. Saranya, M. Monisha, P. Nandhini, M. Rakshana, R. Tejashree
This work is licensed under a Creative Commons Attribution 4.0 International License.