Gautam Kamath, a professor at the University of Waterloo also working on unlearning, says the problem that project found and fixed is an example of the many open questions remaining about how to make machine unlearning more than just a lab curiosity. His own research group has been exploring how much a system’s accuracy is reduced by making it successively unlearn multiple data points.
Kamath is also interested in finding ways for a company to prove—or a regulator to check—that a system really has forgotten what it was supposed to unlearn. “It feels like it’s a little way down the road, but maybe they’ll eventually have auditors for this sort of thing,” he says.
Regulatory reasons to investigate the possibility of machine unlearning are likely to grow as the FTC and others take a closer look at the power of algorithms. Reuben Binns, a professor at Oxford University who studies data protection, says the notion that individuals should have some say over the fate and fruits of their data has grown in recent years in both the US and Europe.
It will take virtuoso technical work before tech companies can actually implement machine unlearning as a way to offer people more control over the algorithmic fate of their data. Even then, the technology might not change much about the privacy risks of the AI age.
Differential privacy, a clever technique for putting mathematical bounds on what a system can leak about a person, provides a useful comparison. Apple, Google, and Microsoft all fete the technology, but it is used relatively rarely, and privacy dangers are still plentiful.
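To make those "mathematical bounds" concrete: the standard move in differential privacy is to add carefully calibrated random noise to a statistic before releasing it, so that no single person's record can meaningfully change the published result. The snippet below is a minimal illustrative sketch of that idea using a simple counting query and Laplace noise; the function name, numbers, and parameters are hypothetical, and this is not how Apple, Google, or Microsoft actually deploy the technique in their products.

```python
import numpy as np

def noisy_count(true_count, sensitivity=1.0, epsilon=0.5):
    """Release a count with Laplace noise, a textbook way to satisfy
    epsilon-differential privacy for a counting query.

    The noise scale (sensitivity / epsilon) bounds how much any one
    person's presence or absence can shift the published number.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical example: publish roughly how many of 10,000 users
# enabled a feature, without letting the output reveal any one user's choice.
print(noisy_count(true_count=4213))
```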
Binns says that while it can be genuinely useful, “in other cases it’s more something a company does to show that it’s innovating.” He suspects machine unlearning may prove to be similar, more a demonstration of technical acumen than a major shift in data protection. Even if machines learn to forget, users will have to remember to be careful who they share data with.
This story originally appeared on wired.com.