On Meaning-Preserving Adversarial Perturbations for Sequence-to-Sequence Models

tl;dr: How you should evaluate adversarial attacks on seq2seq models.

Adversarial examples have been shown to be an effective way of assessing the robustness of neural sequence-to-sequence (seq2seq) models, by applying perturbations to the input of a model that lead to large degradations in performance. However, th