Attacking Natural Language Processing Systems with Adversarial Examples
A new subfield of adversarial ML considers challenges similar to adversarial NLP: topological attacks on graphs for fooling graph/node classifiers.
Both problems (NLP & graph robustness) are much more challenging than adversarial attacks on image classifiers due to their combinatorial nature.
For graphs, the canonical notions of robustness w.r.t. perturbation classes defined by lp norms aren't so great (e.g. consider perturbing a barbell graph by removing a bridge edge: a huge topological perturbation, but a tiny lp perturbation!).
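A minimal sketch of that barbell example, assuming networkx and an arbitrary clique size of 10: removing the bridge edge flips only two symmetric entries of the adjacency matrix, yet the graph disconnects.

```python
# Minimal sketch (networkx; clique size 10 is an arbitrary assumption):
# removing the bridge edge is a tiny lp perturbation of the adjacency
# matrix but a huge topological perturbation of the graph.
import networkx as nx
import numpy as np

G = nx.barbell_graph(10, 0)        # two K_10 cliques joined by a single bridge edge
A = nx.to_numpy_array(G)

bridge = next(nx.bridges(G))       # the unique bridge edge
G_pert = G.copy()
G_pert.remove_edge(*bridge)
A_pert = nx.to_numpy_array(G_pert)

# lp view: only two entries of A change.
print("||A - A'||_1 =", np.abs(A - A_pert).sum())    # 2.0

# Topological view: the graph falls apart.
print("connected before:", nx.is_connected(G))       # True
print("connected after: ", nx.is_connected(G_pert))  # False
```

The l1 distance between the adjacency matrices is 2 out of 182 nonzero entries, while the graph goes from connected to two separate components.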
I think investigating robustness for graph classifiers should also help robustness for practical NLP systems, and vice versa. For example, is there any work that investigates robustness of NLP systems, but considers classes of perturbations defined on the space of ASTs?
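I don't know of a canonical reference offhand, but to make the question concrete, here's a hypothetical sketch of one such perturbation class: a semantics-preserving variable rename, which is a unit-size edit in AST space but touches many characters in the surface string. All names here are illustrative, not from any particular paper.

```python
# Hypothetical sketch of an AST-space perturbation: a semantics-preserving
# variable rename. One structured edit on the AST, but a large edit in
# raw token/character space.
import ast

class RenameVariable(ast.NodeTransformer):
    """Rename one identifier everywhere it occurs."""
    def __init__(self, old: str, new: str):
        self.old, self.new = old, new

    def visit_Name(self, node: ast.Name) -> ast.Name:
        if node.id == self.old:
            node.id = self.new
        return node

    def visit_arg(self, node: ast.arg) -> ast.arg:
        # Function parameters are ast.arg nodes, not ast.Name.
        if node.arg == self.old:
            node.arg = self.new
        return node

src = "def f(total):\n    count = total + 1\n    return count * total\n"
tree = ast.fix_missing_locations(RenameVariable("total", "x").visit(ast.parse(src)))
print(ast.unparse(tree))  # same program behavior, very different surface text
```

A distance defined over such AST edits would count this as a unit perturbation, whereas a string-level edit distance sees a large change; that's exactly the mismatch the barbell example exhibits for graphs.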
I'm not going to fill out a Captcha just to see your website.
I always select bridges instead of traffic lights. I like to think I'm part of why Teslas are phantom braking at bridges.
Is that what taxpayer research money is being used for? Oh gods. And I bet they bitch about not being able to get grants.