Abstract:
This paper introduces a new method for Arabic text summary evaluation. The method relies on a machine learning approach that combines multiple features to build models that predict the human score (overall responsiveness) of a new summary. We tried several single and ensemble learning classifiers to build the best model. We experimented with our method at the summary level, where the quality of each text summary is evaluated separately, and at the system level, where the average quality of a text summarization system is calculated. At both evaluation levels, the results show that the correlation between the built models and the human scores is better than the correlation between the baselines and the human scores.
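To make the evaluation pipeline concrete, the following is a minimal sketch of the general idea described above: train a regressor on per-summary feature vectors to predict the human score, then correlate predictions with human judgments at the summary level and at the system level. The feature set, the choice of RandomForestRegressor, the 0-5 score scale, and the synthetic data are all assumptions for illustration; the abstract does not specify the paper's actual features or learners.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical training data: one feature vector per summary,
# paired with its human overall-responsiveness score.
X_train = rng.random((200, 5))   # placeholder features (e.g., surface/overlap features)
y_train = rng.random(200) * 5    # assumed human scores on a 0-5 scale

# An example ensemble learner; single classifiers/regressors could be swapped in.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Summary-level evaluation: correlate predicted and human scores
# over individual summaries.
X_test = rng.random((50, 5))
y_human = rng.random(50) * 5
y_pred = model.predict(X_test)
r_summary, _ = pearsonr(y_pred, y_human)

# System-level evaluation: average the scores per summarization system,
# then correlate the system-level averages (5 hypothetical systems here).
system_ids = rng.integers(0, 5, size=50)
pred_by_system = [y_pred[system_ids == s].mean() for s in range(5)]
human_by_system = [y_human[system_ids == s].mean() for s in range(5)]
r_system, _ = pearsonr(pred_by_system, human_by_system)

print(f"summary-level Pearson r = {r_summary:.3f}")
print(f"system-level Pearson r  = {r_system:.3f}")
```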