I feel insulted when an LLM answers the phone
I recently called an upscale restaurant to order a pizza, and my call was answered by an LLM. There was no disclosure; it was just a young woman's voice saying "Hi, you've reached <business name>, how can I help?" It wasn't until I noticed the unusual delays and the ever-so-subtly robotic speech cadence that I realized I wasn't talking to a human.
This felt crappy. It felt like I was being tricked, as if this company wanted me to think I was speaking with a human. Believing you're speaking with a human when you aren't is embarrassing, and in the process of recognizing that the voice isn't real there is an inevitable moment of awkward self-consciousness, when you aren't sure whether to keep speaking as if another person is listening.
If you don't want to pay people to answer your phones, use a phone tree. Don't insult your customers by trying to trick them into thinking they're getting real service.

I had the same experience recently and felt similarly (more or less; I didn't really find it embarrassing, more irritating). If the restaurant I was calling had simply said "Hi, I'm an automated service" at the beginning, I'd have been less irritated. I guess I'll have to start calls by asking "are you human?" when I call for reservations (or other services, I suppose).

I haven't encountered this yet, but if/when I do, that business will instantly become one I don't use anymore. Personally, I would vote with my wallet.

I went home for the holidays last month. One day, my mom had a complaint about her food delivery and raised a ticket in the app. She was assigned "someone" on chat, and she carefully typed out her issue. Then she got a call from the same "person," who asked her to explain her issue in detail. After the call, she came to me confused and frustrated. She said the "person" on the other end kept giving unrelated solutions, and signed off saying they were happy to have resolved her issue. Of course, you know this "person" on the other end was an LLM, which I figured out once she handed over her phone. I was livid, and despite having better things to do, I wasted the next few hours sending a notice to the company's legal team. They paid a small sum to shut down the issue. Looking back, if the app had at least stated she was talking to a machine and given her an option to escalate to human support, the situation would not have deteriorated. I feel LLMs should never be used for negative interactions like complaints, or transactional interactions like placing orders. Their scope should be limited to answering factual, generic questions, like "What's my order's ETA?"