Abstract:
In this paper, we present a short study, from the user's perspective, of the capability of LLM-based AI to verbalize mathematical content, i.e., to transcribe symbolic notation and formulas into natural spoken language. For a selected set of mathematical expressions, we run a series of experiments and analyze the verbalizations obtained by prompting the LLM in terms of repeatability and precision. As a reference, we use the output of an efficient rule-based verbalization tool, Equation Wizard. Our experiments are performed with ChatGPT 3.5, a popular, free-of-charge LLM that is frequently and eagerly used by students. We demonstrate the inconsistency and non-repeatability of the LLM's output across repeated verbalization requests, as well as showcase imperfect verbalizations.