University of Limerick
Assessing the robustness of conversational agents using paraphrases

report
posted on 2020-01-22, 10:00 authored by Jonathan Guichard, Elayne Ruane, Ross Smith, Dan Bean, Anthony Ventresque
Assessing a conversational agent’s understanding capabilities is critical, as poor early interactions can seal the agent’s fate, with users abandoning the system at the very start of its lifecycle. In this paper we explore the use of paraphrases as a testing tool for conversational agents. Paraphrases, which are different ways of expressing the same intent, are generated from known working input by performing lexical substitutions. Because the expected outcome for this newly generated data is known, we can use it to assess the agent’s robustness to language variation and detect potential understanding weaknesses. A case study yields encouraging results: the approach helps anticipate potential understanding shortcomings, and the generated paraphrases can then be used to address them.
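The core idea in the abstract, generating paraphrases from a known working utterance via lexical substitution, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the synonym table, the sample utterance, and the one-word-at-a-time substitution strategy are all assumptions made for the example.

```python
# Illustrative sketch of paraphrase generation by lexical substitution.
# The synonym table and utterance below are hypothetical examples;
# the paper's actual substitution resources may differ.

SYNONYMS = {
    "book": ["reserve", "schedule"],
    "flight": ["plane ticket"],
    "cheap": ["inexpensive", "low-cost"],
}

def lexical_paraphrases(utterance):
    """Yield paraphrases by swapping one token at a time for a synonym."""
    tokens = utterance.split()
    for i, token in enumerate(tokens):
        for synonym in SYNONYMS.get(token, []):
            yield " ".join(tokens[:i] + [synonym] + tokens[i + 1:])

paraphrases = list(lexical_paraphrases("book a cheap flight"))
```

Because each paraphrase preserves the original intent, the agent's classification of every generated variant can be checked against the known expected intent; mismatches point to understanding weaknesses, and the failing paraphrases can be added to the agent's training data.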

History

Publication

2019 IEEE International Conference On Artificial Intelligence Testing (AITest);pp. 55-62

Publisher

IEEE Computer Society

Note

peer-reviewed

Other Funding information

SFI

Rights

© 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Language

English
