Enhanced BERT solution for scoring an essay's relevance to the prompt in the Arabic language

Abstract:

In recent years, automated essay scoring systems have shown remarkable improvement, notably through the use of deep learning algorithms. This evolution has been driven by a change in perspective: from general, superficial scoring focused essentially on style and grammar, to detailed, deeper scoring focused on text content. However, work that takes into account the relevance between the essay and the prompt remains limited, especially in the Arabic language. It is in this context that we propose a new approach to scoring the relevance between an essay and its prompt. More specifically, we aim to assign a mark reflecting the degree of adequacy of a student's long response to an open-ended question. Our Arabic-language proposal is built upon AraBERT, the Arabic version of BERT, and is enhanced with specially developed handcrafted features. Our approach gives promising results, achieving a correlation of 0.88 with human scores.
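The abstract describes an architecture that combines AraBERT representations with handcrafted features to predict a relevance score. As a minimal illustrative sketch (not the paper's implementation), the pattern can be shown with synthetic stand-in embeddings: concatenate the contextual features with handcrafted ones, fit a regressor, and evaluate agreement with the reference score via Pearson correlation. The feature names and dimensions below are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Ridge
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Stand-ins for AraBERT [CLS] embeddings of (prompt, essay) pairs.
# In the actual system these would come from an AraBERT encoder.
n_essays, emb_dim = 200, 32
bert_features = rng.normal(size=(n_essays, emb_dim))

# Hypothetical handcrafted features, e.g. prompt-essay lexical overlap
# and essay length (illustrative only, not taken from the paper).
overlap = rng.uniform(0.0, 1.0, size=(n_essays, 1))
length = rng.uniform(50.0, 500.0, size=(n_essays, 1))
handcrafted = np.hstack([overlap, length])

# Concatenate contextual and handcrafted features, then fit a simple
# regressor that predicts a (synthetic) human relevance score.
X = np.hstack([bert_features, handcrafted])
y = 2.0 * overlap.ravel() + 0.1 * bert_features[:, 0] \
    + rng.normal(scale=0.1, size=n_essays)

model = Ridge(alpha=1.0).fit(X, y)
pred = model.predict(X)

# Agreement with the reference score, as in the paper's evaluation metric.
r, _ = pearsonr(pred, y)
print(f"Pearson correlation with reference scores: {r:.2f}")
```

In practice a fine-tuned transformer head would replace the ridge regressor; the sketch only shows how contextual and handcrafted features are combined into one prediction and evaluated by correlation with human scores.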