
International Research Journal of Engineering and Technology (IRJET)    e-ISSN: 2395-0056 | p-ISSN: 2395-0072
Volume: 08 Issue: 04 | Apr 2021    www.irjet.net

Subjective Answer Evaluation using Natural Language Processing and Machine Learning

Abhishek Girkar1, Mohit Khambayat2, Ajay Waghmare3, Supriya Chaudhary4
1, 2, 3 Student, 4 Professor,

Dept. of Information Technology, Padmabhushan Vasantdada Patil Pratishthan’s College of Engineering, Mumbai, Maharashtra – 400022
---------------------------------------------------------------------***---------------------------------------------------------------------

Abstract - Every year, board and university examinations are conducted in offline mode, and a large number of students attempt subjective-type exams. Evaluating such a large number of papers manually requires considerable effort, and the quality of evaluation may vary with the mood of the evaluator. The evaluation work is lengthy and time-consuming. Competitive and entrance exams typically contain objective or multiple-choice questions; because these exams are conducted on machines, they are also evaluated by machine, which makes evaluation easy, saves resources, reduces human involvement, and is therefore largely error-free. Multiple systems are available for evaluating objective (MCQ) questions, but there is no comparable provision for subjective (descriptive) questions. It would be very helpful for educational institutions if the evaluation of descriptive answers were automated so that students’ exam answer sheets could be assessed capably.

2. Literature Survey

[1] A model to evaluate subjective answer papers using a semi-automated evaluation technique is created. First, a question base is built containing the question type, sub-type, question text, and marks. An answer base is then created with the model answers. Each evaluated answer is mapped using a hash index referred to by the question number. The student’s answer is evaluated by considering the semantic meaning and the length of the sentence. [2] The proposed system is designed to evaluate answers from five students providing five different answers. The standard answer is stored in the database along with its keywords, meaning, and description. Each answer is then evaluated by matching its keywords, as well as their synonyms, against the standard answer. The system also checks the grammar and spelling of the words. After evaluation, the answer is graded depending on its correctness.
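The keyword-plus-synonym matching described in [2] can be sketched as follows. This is an illustrative sketch, not the authors' code: a tiny hand-rolled synonym table stands in for the lexical database (such as WordNet) that a real system would query, and the function names are assumptions.

```python
# Minimal sketch of keyword-plus-synonym matching as in [2].
# SYNONYMS is a tiny illustrative sample standing in for WordNet.
SYNONYMS = {
    "fast": {"fast", "quick", "rapid"},
    "big": {"big", "large", "huge"},
}

def keyword_match_score(student_answer, model_keywords):
    """Fraction of model keywords matched directly or via a synonym."""
    tokens = set(student_answer.lower().split())
    matched = 0
    for kw in model_keywords:
        candidates = SYNONYMS.get(kw, {kw})  # fall back to the keyword itself
        if candidates & tokens:
            matched += 1
    return matched / len(model_keywords) if model_keywords else 0.0
```

A score computed this way can then be mapped onto a grade, with the grammar and spelling checks described in [2] applied separately.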

Key Words: NLP, Contextual Similarity, Semantic Analysis, Grammatical Correction, WordNet, TF-IDF, Cosine Similarity, Contradiction, Antonyms, Synonyms, Machine Learning.

[3] An e-assessment system is developed that checks the student’s answer sheet and assigns marks to it. The system uses an algorithm that compares the student’s answer against three reference answers given by three different faculty members; the reference answer with the closest match and highest precision is taken into consideration, and marks are allocated accordingly. The algorithm is based on TF-IDF, grammar checking, Word Mover’s Distance (WMD), and cosine and Jaccard similarity. The two answers need not be exactly the same word for word. This approach can be a quick and easy way to reduce the examiners’ workload.
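Two of the similarity measures named in [3] can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: it uses raw term-frequency vectors rather than full TF-IDF weighting, and omits WMD and grammar checking.

```python
# Sketch of two similarity measures from [3]: cosine similarity over
# term-frequency vectors and Jaccard similarity over token sets.
import math
from collections import Counter

def cosine_similarity(a, b):
    """Cosine similarity between two texts using raw term-frequency vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def jaccard_similarity(a, b):
    """|intersection| / |union| of the two token sets."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0
```

In the system of [3], scores like these would be computed against each of the three reference answers and the closest match used for marking.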

1. INTRODUCTION

In general, students’ academic performance is assessed through examinations, in either a subjective or an objective pattern. During the Covid-19 pandemic, everyone is working virtually, and in the present scenario manual evaluation of subjective answers is a hectic task. There are multiple systems that can evaluate objective-type or MCQ questions quickly: the answers are evaluated by the machine itself once pre-defined correct answers are provided. But this helps only in competitive or objective-type exam evaluation. Subjective examinations are the backbone of all university and board-level examinations, and a large number of students attempt them. On the basis of the descriptive answers, the moderator judges how much knowledge the student has gained academically and assigns marks accordingly. Manual evaluation of subjective answers is a very tedious, time-consuming task and requires a lot of manpower. Evaluation also varies from moderator to moderator according to their way of evaluating, their mood at the time of evaluation, and their relationship with the student, which affects the student’s result. This project aims to automate the evaluation process for subjective answers using ML and NLP.

© 2021, IRJET | Impact Factor value: 7.529

[4] The proposed system receives solution sets from the admin along with the students’ answers. Stop words are removed from both in order to generate keywords. After keyword generation, the system computes similarity, checking how the keywords and their sentences relate to the sentences in the dataset to determine the exact similarity and correctness of each sentence. When sentences match the dataset, a similarity score is generated according to the overlap percentage. Synonyms and similar words are also checked before relating the keywords, to increase the accuracy of the overlap. A data-duplication technique compares each answer with answers previously submitted by students, and grades are generated on the basis of the uniqueness of the answers.
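The first two steps of the pipeline in [4] — stop-word removal to obtain keywords, then scoring by overlap percentage — can be sketched as below. The stop-word list is a tiny illustrative sample, and the function names are assumptions, not taken from the paper.

```python
# Sketch of keyword generation and overlap scoring as described in [4].
# STOP_WORDS is a small illustrative sample, not a full stop-word list.
STOP_WORDS = {"a", "an", "the", "is", "are", "of", "in", "and", "to"}

def extract_keywords(text):
    """Lower-cased tokens with stop words removed."""
    return {t for t in text.lower().split() if t not in STOP_WORDS}

def overlap_score(student_answer, model_answer):
    """Percentage of model-answer keywords found in the student answer."""
    model_kw = extract_keywords(model_answer)
    if not model_kw:
        return 0.0
    student_kw = extract_keywords(student_answer)
    return 100.0 * len(model_kw & student_kw) / len(model_kw)
```

The synonym expansion and data-duplication checks described in [4] would sit on top of this basic overlap measure.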

ISO 9001:2008 Certified Journal | Page 5040

