The Key To Protecting Customer Privacy While Embracing AI-Powered Experiences

[Image: privacy and protection increase in importance as AI continues to grow in all aspects of life.]

Today’s AI tools are being taught to interact with customers in more personal ways and to “get to know” them in order to provide individualized service. To make this work, chatbots gather and instantly access all sorts of information: what the customer has bought in the past, where their problems or challenges have been, what customer service issues they’ve faced, and more.

To keep this information coming in, these tools can learn to be “curious” about each person, gathering new information during each interaction. But this process needs limits.

As AI becomes a bigger part of business operations — and a bigger part of our lives — people are understandably wary. The last thing they want is to feel that this technology is becoming “big brother,” knowing far more about them than it should.

“Consumers are keenly aware of situations where brands request excessive amounts of personal information, extending beyond what is deemed necessary for enhancing their overall experience,” Shuyi Zhang of Haaga-Helia University in Finland noted in a recent study. “This heightened awareness underscores the importance of maintaining boundaries between personal and digital spheres while respecting consumer privacy and data protection concerns.”

The problems can involve not only what information the technology collects, but also what it does with that information. The possibility that this information could be given to third parties or even leaked publicly is worrying consumers more and more.

Some of these concerns involve publicly available technologies that certain businesses are adopting. “Asked about the risks to their organization from using public generative AI tools, 39% of respondents pointed to the potential leak of sensitive data,” ZDNet reported earlier this year.


As Nextiva’s Alex Doan explained in a blog post, “Public AI models may not be sufficiently secure to trust with confidential customer conversations. Any exposure to such sensitive information can lead to data breaches or legal consequences.” So before adopting any tool, it’s crucial to discuss it in detail with security and IT teams to ensure customer data is kept confidential.

Redactions Across Multiple Channels

Inside organizations, proprietary AI tools should be programmed carefully to avoid wading unnecessarily into private terrain. These tools can be taught questions to never ask, for example.
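One way to enforce “questions to never ask” is a simple guardrail that screens each question a bot drafts before it reaches the customer. The topic list and function below are hypothetical, a minimal sketch of the idea rather than any particular vendor’s implementation:

```python
# Hypothetical list of topics a customer-facing bot should never ask about.
BLOCKED_TOPICS = [
    "social security number",
    "passport number",
    "medical history",
    "salary",
    "religion",
]

def is_allowed_question(question: str) -> bool:
    """Return False if a drafted bot question touches a blocked topic."""
    lowered = question.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

print(is_allowed_question("What size did you order last time?"))  # True
print(is_allowed_question("Could you share your salary range?"))  # False
```

In practice a keyword list like this would be only the first layer; production systems typically pair it with a classifier, but the principle is the same: the check runs before the question is ever asked.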

But often, certain private information is necessary and helpful in providing customers the experiences they seek. So these tools need secure ways to store the data. And they need to do something else: automatically redact certain information.

For example, in conversations with AI chatbots or real agents, people often provide personally identifiable information (PII). Those details should be redacted from voice recordings as well as transcripts. Among other reasons, this is necessary to reduce risks of lawsuits related to identity theft.

The same goes for written chats. People are increasingly sharing private information this way, even in some of the most sensitive environments. If information shared with healthcare providers in a chat gets out, for example, it risks a HIPAA violation. Similarly, the financial sector is increasingly using chatbots. As John Giordani of the University of Fairfax pointed out in a recent study of chatbots in banking, the industry needs “robust security measures to protect sensitive customer data against privacy violations.”

It’s also important to integrate PII redaction with screen recordings. Customers can unintentionally expose private information when sharing their screens or when conversing with an agent during a screen recording.

The good news is that AI tools can learn to understand what types of information must be redacted and spot those immediately.
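At its simplest, this kind of redaction can be pattern-based: scan a transcript for recognizable PII formats and replace each match with a labeled placeholder. The patterns below are illustrative assumptions, a minimal sketch rather than a complete PII detector (real systems add checksum validation, named-entity models, and locale-specific formats):

```python
import re

# Hypothetical regex patterns for common PII formats in chat transcripts.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each PII match with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

msg = "My card is 4111 1111 1111 1111 and my email is jane@example.com."
print(redact(msg))
```

The same `redact` step can run on voice-call transcripts before they are stored, so the redaction policy stays consistent across channels.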

Incorporating Cultural Differences

As business becomes increasingly global, companies should recognize that customer expectations around privacy vary by location. To address this, give customers easy-to-understand ways to help determine what information is and is not kept.

“Incorporating features that respect privacy or allow users to control the visibility of their information aligns with cultural expectations related to personal boundaries,” a team of researchers from the United States, the UK and Nigeria wrote recently in the International Journal of Management & Entrepreneurship Research.
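Letting customers control what is kept can be as concrete as a per-customer preference record that the system consults before storing anything from a conversation. The field names and categories below are hypothetical, a sketch of the pattern rather than any specific product’s settings:

```python
from dataclasses import dataclass

# Hypothetical per-customer retention preferences, set by the customer.
@dataclass
class PrivacyPreferences:
    keep_purchase_history: bool = True
    keep_chat_transcripts: bool = False
    keep_voice_recordings: bool = False

def filter_for_storage(collected: dict, prefs: PrivacyPreferences) -> dict:
    """Drop any collected data category the customer has opted out of keeping."""
    allowed = {
        "purchase_history": prefs.keep_purchase_history,
        "chat_transcript": prefs.keep_chat_transcripts,
        "voice_recording": prefs.keep_voice_recordings,
    }
    return {k: v for k, v in collected.items() if allowed.get(k, False)}

prefs = PrivacyPreferences(keep_chat_transcripts=True)
data = {"purchase_history": ["sneakers"], "voice_recording": b"..."}
print(filter_for_storage(data, prefs))  # the voice recording is dropped
```

Defaulting the most sensitive categories to "do not keep" also maps cleanly onto opt-in expectations in stricter jurisdictions.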

Be sure to give people opportunities to tell you whether the steps you’re taking are adequate and what else they’d want to see. When they request or suggest new steps, consider trying them out and seeing how other customers respond. This is part of the customer feedback loop that fuels more successful innovation.

Ultimately, a great deal of this effort revolves around communication. The more that organizations work to clearly describe the steps they’re taking to keep conversations secure, the more people will feel comfortable interacting with them in any given channel. That will yield the greatest thing today’s businesses can offer: amazing customer experiences.