Man scammed after AI told him fake Facebook customer support number was real
cbc.ca:

> What he didn't know at the time is that there is no phone number for Facebook customer support.
Part of the problem here is that Facebook (though in fairness, they are not unique here) has left this traditional path of escalation void, leaving only fake numbers. They don't even have a real number to play a recorded message affirming that there is no ability to call.
ETA: For instance, I notice Facebook appears to own the typosquat domain `facrbook.com`. I feel like it's the same principle, though I assume toll-free numbers are more expensive.
It’s untenable from a marketing perspective to advertise a phone line that just talks about the services you don’t offer. One could maybe hope for a statement on a help page that says “Facebook will never ask you to call a support number”.
I think what you've gotta do is say, "You can't call, but here is the number anyway," because customers aren't necessarily interacting with your page anymore. They're interacting with AI summaries of your page. Those AIs might be in house, or might be provided by a search engine. What is tenable or untenable will have to shift to the realities of how users are interacting with the information you present.
If you can't provide their AI with text answering their direct question (e.g., "what is the support number for Facebook"), they'll find a document that does provide such text. If it's not you, then it's a scammer or a competitor. UX for these customers means presenting information in a way that ranks highly in a semantic search and is robust to transformation.
If you provide text indirectly answering the question ("that number doesn't exist" rather than a literal number), you're liable to be scored as less relevant than a wrong but direct answer ("the number is 1555 SCAMMER"). You're also less robust to transformations, because you can't pull a valid phone number out of the text.
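To make that ranking dynamic concrete, here's a toy sketch: plain bag-of-words cosine similarity, which is not any real search engine's algorithm, just the simplest stand-in for "semantic" relevance. The passages and the scam number are made up for illustration.

```python
from collections import Counter
import math

def cosine(a: str, b: str) -> float:
    """Cosine similarity between two texts using word-count vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

query = "what is the support number for facebook"
# A scammer's page answers the question directly, echoing the query terms.
direct_scam = ("the facebook support number is 1-555-000-0000 "
               "call the facebook support number")
# The honest page answers indirectly, sharing almost no query terms.
indirect_official = ("we do not operate a telephone help line "
                     "please use the online help center")

print(cosine(query, direct_scam) > cosine(query, indirect_official))
```

Under this toy measure the wrong-but-direct answer wins, which is the failure mode described above: the honest page can't even mention a phone number, so it shares fewer terms with the query.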
Or maybe I'm wrong, take any certainty implied by my language as rhetorical. That's just the pattern I'm seeing in these tea leaves.
Also, realistically, I don't imagine the phone number literally just telling you that the service wasn't available and hanging up. I imagine it would offer you options to get various pieces of information (the URL of the website, the legal address of Meta, how to navigate to the support knowledge base on the website, ...) and let you draw your own conclusion about how useful it was. Maybe it's occasionally handy to someone. At worst it's harmless.
I think in an ideal world, you could use speech recognition to let people leave a message and open a ticket, as if they had emailed support@. When someone responds, the system calls them back and delivers the reply using text-to-speech.
I like how we've suddenly accepted "AI" "summaries" as simply the way of the future, despite the inherent problems and repeatedly botched rollouts.
It could be rejected for sure. I personally don't think it's working well. I don't support it really.
I once had a Facebook rep I could call (they later ended this), and they didn't know that there were two online newsletters about changes to internal Facebook apps used by advertisers (we used to be able to see who had clicked "interested" on an event). So they put in a bug report when the app stopped working, etc., but we later found out it had been deprecated. All to say that dedicated support is often itself a cause of issues or confusion.
It is easy as hell
> Part of the problem here is that Facebook (though in fairness, they are not unique here) has left this traditional path of escalation void, leaving only fake numbers. They don't even have a real number to play a recorded message affirming that there is no ability to call.
Contrast with Experian, which has a number for consumers to call, but actually has an elaborate infinite loop in its phone tree that prevents you from actually talking to a human (this is by design).
If you're one of their customers (read: a business paying for their service), there's support you can call, but for individuals who have issues with their online Experian account or credit report, you can't, even if you're a paid subscriber to their consumer-oriented credit reporting services.
>Part of the problem here is that Facebook (though in fairness, they are not unique here) has left this traditional path of escalation void, leaving only fake numbers.
Frankly it's absurd to me that it's legal to do so. Any public facing company that is sufficiently large should be required by law to operate a phone service where you can talk to a real human being.
All of these huge mega corps are run with absolute impunity and there is often absolutely 0 avenue for regular everyday people to get in touch when they have issues. They direct you in these endless loops to FAQ's and "Community Resources"; even getting an email address is like getting blood from stone sometimes.
For some cases, your local small claims court may be an efficient escalation path. If enough people do it, companies will learn that too much stonewalling doesn't actually save money, because now your customer support is done by the legal department.
Just a matter of time. The adoption of expectations is dependent on the visibility of the occurrence.
They all are required to have a process service agent and address, legally.
My wife and most of her friends have all lost their original accounts. She got an email that her password had been changed. We immediately took action, but the attackers had already changed the email associated with the account, and there was no way to change it back. The only thing we could accomplish was getting the account disabled. Zero way to contact Facebook. These are all women for whom FB was the primary storage place for their kids' photos.
"Pls fix" proposed a market for bribing meta employees to deal with customer support requests.
So lobbying?
“There is a phone number for Meta online. When CBC called it, an automated recording said, ‘Please note that we are unable to provide telephone support at this time,’ and directed callers to meta.com/help.”
My mistake. Thanks for the correction.
> Please note that we are unable to provide telephone support at this time
Mealy-mouthed corporate lying horseshit.
They are able, just unwilling.
If free-market libertarianism is as great as the tech bros want us to think, why do these companies lie so much and so often, despite the need for participants in the market to be correctly and fully informed so they can make rational decisions?
This one is pretty bad. This guy found a fake Facebook customer support phone number in a Google search, then asked the Meta AI chat in Facebook Messenger if the number he found was a real Facebook help line... and Meta AI said that it was. There's a screenshot of the chat in the article.
The bad thing is that people still think LLMs can be trusted at all. Companies integrating them into their offerings are not helping the public adopt the correct mental framing of these tools as "plausible text generators".
> Companies integrating them into their offerings are not helping the public adopt the correct mental framing of these tools as "plausible text generators"
"Not helping" seems a wild understatement. "Deceiving people into taking the wrong frame" seems more accurate.
The general public is getting lied to constantly. HN users have a bit more context to see through the bullshit, but the marketing being pushed on people is that these AI tools are super-genius, incredible, world-changing tools that make everyone 100x more productive.
Even many HN users instantly resort to misdirection via comparisons to humans or nebulous upcoming AGI instead of acknowledging that we have to live with these limitations for the foreseeable future.
Maybe we have a bunch of users who primarily code in languages with duck typing. So that extends over to assessing the abilities of LLMs -- "talks like a human, therefore it is the same thing."
I'm only sorta kidding. I am surprised at the number of people who are comfortable with such a shallow conclusion.
This can be solved with more data. New tech like Windows Recall should be able to scrape enough of the world's data so that this sort of thing doesn't happen anymore.
a) There is no evidence that it can be solved with more data.
b) Windows Recall data is never going to make its way into public models.
> b) Windows Recall data is never going to make its way into public models.
It doesn't need to. It just needs to make its way into OpenAI's models, which it will.
> The bad thing is that people still think LLMs can be trusted at all.
LLMs are as trustworthy as humans.
Humans have been being wrong for about as long as we have been lying.
Whether you get information from a human or an LLM, check it.
I worry about the people who insist on credible sources rather than checking information for themselves. I think 80% or more of them are trolling me, but there are some who genuinely do not apply the Scientific Method to check facts in their everyday life. I truly feel sorry for them.
This is not true. Sure, humans can lie or get things wrong. But normal people will also admit when they don't know something. LLMs tend not to admit when they don't know something, and they use an authoritative voice that sounds like they know what they're talking about. To an untrained person, this can easily be misleading.
> But normal people will also admit when they don't know something.
You'd like to think so, right? However, this isn't really a solid thesis. Decent people will admit when they don't know. Is that normal? I've worked with so many people who just do not fit that definition at all, to the point that it seems like the normal way to behave. Maybe I'm jaded and grossly overweighting it, but I've been in way too many meetings where valuable time was wasted because someone refused to back down, admit their ignorance, or accept input from others.
> However, this isn't really a solid thesis
Let's get 1000 random people in a surgery room and ask them to perform brain surgery.
You actually think that most of them will say "sure I know exactly how to do this".
Be serious.
Some people can admit when they’re wrong.
When was the last time Trump admitted he was wrong?
Nothing about Trump is normal.
Even if that were true (I don't think it is): The more important distinction between humans and LLMs is accountability.
If a customer support agent gives you incorrect information, you can often hold the company liable for it (assuming you can prove it; I suppose there's a reason for why companies prefer certain support channels over others).
If an AI "lies" to you, you're largely on your own right now.
Not necessarily. In Canada, a case in February (https://www.mccarthy.ca/en/insights/blogs/techlex/moffatt-v-...) held that Air Canada could be held responsible for incorrect information about a refund given to a customer by its chatbot.
Notwithstanding differences in jurisdiction, applying that idea to this case would rely on finding that Meta owed Gaudreau a duty of care that extended to the Meta AI chatbot.
It would be more difficult to make this claim if Gaudreau had asked the question of Google, since Google itself is not usually responsible for false information uncovered by its searches.
That's going to be indeed an interesting question (also discussed in this sibling thread: https://news.ycombinator.com/item?id=40536860).
My gut feeling is that it should be possible for companies to distinguish an AI product (i.e. as something provided to customers like a search engine, as you say) from an AI "working for them", but I can see a lot more disclaimers showing up in Meta's various AI chat channels soon.
did Meta present the AI as an official customer-service chatbot?
What I see on WhatsApp:
"Messages are generated by Meta AI. Some may be inaccurate or inappropriate. Learn more."
Which leads to a pop-up further explaining that use cases include things like "creating something new like text or images".
I think it's going to be really interesting to see whether that's considered enough by courts, or if they'll take the position that these things pretend too well to be a real person to make such a disclaimer sufficient, similarly to how e.g. a brokerage can't disclaim "no investment advice" and then go on to say "but buy this stock, it's gonna moon tomorrow, trust me bro".
Look at the screenshot in the article. If a human Facebook representative would give that response, would you not trust them? And if not, how would you apply the Scientific Method to fact-check it?
It would be nice to have a confidence level for pieces of information, like humans have
In theory. In practice, every piece of information you can get from a human has mountains of context around it which lets you gauge the reliability of the information.
A skilled motorcycle rider explaining how to take corners in a widely watched youtube video, with hundreds of comments confirming the advice and several recommended videos from other riders that basically say the same thing is an extremely strong positive signal.
The same answer gotten from a magic AI answer box is just as likely to be right as wrong, 50/50.
Good luck checking every fact you encounter with the scientific method (and making sure to repeat your experiments to ensure reliability, oh and don't forget peer review to evaluate your methodology). What is your proposed scientific experiment to test... what Facebook's support number is?
My point is just that credible sources are absolutely necessary for information to disperse. Nobody can afford to figure out the modern world from first principles.
It's not about intentional deception. LLMs are very confidently incorrect way more often than humans are.
Eh, I've had the questionable pleasure of talking to first level support call centers a couple of times recently, and I wouldn't be so sure about that.
The number of times I've been told that yes, resetting my iPhone's network settings and reinstalling an app will resolve my billing issue or similar...
This reminds me of that recent issue with a Canadian airline, where (IIRC) a court ruled that their chatbot made a wrong, but binding, commitment to a customer.
I'm curious if a Canadian court would hold Meta liable for the man's losses in this case as well.
That was a very interesting case. The chatbot in question was not LLM-based (the incident was pre-ChatGPT in any case) and was simply parroting an out-of-date or incorrect policy it had been explicitly programmed with. It seemed to gain a lot more traction in the press because of LLMs. "Air Canada forced to honor terms and conditions on their website" is a whole lot less interesting.
This FB thing is a case of an LLM simply hallucinating without direct human intervention.
Very different cases from a computer science perspective. My hope is that legally, they don't get viewed differently.
If you outsource functions of your business to a third party contractor you are still responsible for what they do and say. I don't think we should allow companies to weasel out of their obligations because they were dumb enough to let a sentence generator loose in a way that it could make commitments.
Yea, it’s certainly a reasonable argument if the wrong information comes from the company itself.
That's an excellent point. That court decided that an AI agent was an agent in the legal sense. "Agent" is a legal concept - someone acting for someone else.[1] It's what allows employees to act for a company. Otherwise nobody could do anything without signoff from the top. There are limits to agency, but it's a rule of reason thing - you can assume a store clerk has the authority to sell you stuff, and someone whose job is to answer questions has the authority to answer questions. The company has responsibility for the agent's actions within the scope of their authority.
[1] https://en.wikipedia.org/wiki/Law_of_agency
[2] https://www.upcounsel.com/lectl-what-the-california-civil-co...
The situation here is slightly different, though. Meta's AI in their various products is explicitly marketed as an LLM chatbot, not as a customer support channel.
Whether they've been diligent enough in making that distinction (and whether that's even possible) will very likely be determined in court at some point.
Yeah, headline is overly broad by just saying 'AI'. From just the headline itself, it'd be easy to write this off as "duh, this guy's a fool", but the AI in question here is from Meta, itself. And, not only is it from Meta, but it's the AI they've put in charge of support.
> And, not only is it from Meta, but it's the AI they've put in charge of support.
Does it say that in the article or somewhere else? I didn’t see that in the article.
You can see it in the screenshots.
It says “Meta AI”, but I don’t see an indicator that it’s labeled as providing support. On my device, it doesn’t say so, and it is labeled as possibly “inaccurate or inappropriate”. (It still provides a bogus phone number.)
I wonder if he has a legal claim, like the Air Canada passenger to whom the AI quoted a fictitious reimbursement policy.
That incident happened before ChatGPT was released and probably didn't involve AI. Anyone can write a wrong customer support script if they try.
We're going to see a lot more SEO scams coming from social media platforms now that Google is promoting places like Reddit and Quora. Even on /r/SEO you can see moderators asking themselves questions from alt accounts to subtly promote themselves. It's dogshit scammers all the way down.
I mean that’s kind of on meta, as a customer I shouldn’t really have to care about the internals of the company. If a disgruntled employee lies to customers, that shouldn’t be the customers problem either. To me, that’s all just a statement by the company.
Meta ai is so bad. What did they really do with all those h100’s ?
Seems like this information came from Quora: https://www.quora.com/Is-1-844-457-1420-really-Facebook-supp.... Screenshot: https://postimg.cc/gallery/2nFq5Cm.
I suspect the helpful SEO guy who posted this answer was trying to get more visibility on Quora so answered many questions automatically or semi-automatically without verifying anything.
This is the beginning of the post:
Ruhul Alom
Social Media Marketer at Social Media · Author has 2.9K answers and 1M answer views · 6mo
My dear !
Yes, 1-844-457-1420 is a valid Facebook support phone number. It is a toll-free number that is available 24/7. You can call this number to get help with a variety of Facebook issues, such as:
Resetting your password
Logging in to your account
Recovering a hacked account
[...]

The "helpful SEO guy" likely is (or was hired by) the scammer.
StackOverflow gets lots of fake posts like this promoting numbers. Around tax time there's a lot of Quicken ones.
See, this is what confuses me to no end. Not once, ever, have I thought of asking an online forum for a phone number. Maybe I'm paranoid enough after all? Also, I'm old, so I actually visit companies' web pages. We've been through enough "don't fall for phishing" by now, right? You don't trust links, phone numbers, or anything else from places that are not the official source of that information.
Facebook doesn't provide phone support, and people get desperate. People then Google things like "Facebook support phone".
All you'll find is results like https://gethuman.com/phone-number/Facebook or https://facebook-pay.pissedconsumer.com/customer-service.htm... or the Quora post that all have numbers that almost certainly goes to a scammer.
Bonus points when it makes it into the AI models (as happened here) where they repeat it verbatim as if it were the truth.
Again, even if I were the one doing that Google search, I wouldn't trust any of the domain examples you provided.
Like, common sense on the interwebs just continues to disappear. Gullibility seems to have increased as critical thinking and coming to logical conclusions are disappearing.
In this case, it was Meta's own AI regurgitating something it found on Quora. Quite a few people would trust that, especially non-technical folks.
Yes, that is the point of TFA, but I was commenting on data that was posted online well before the AI was "born". I'm guessing Meta can fix it with a prompt telling the system that it is not allowed to verify FB phone numbers, that there are NO phone numbers for the public to contact FB, and that it should not tell users whether phone numbers exist; it should only deny that any given number is a valid phone number for FB (or Meta).
That’s where the black hat SEO comes in. Google lost the war well before tinkering with generative AI in the results; they just make it even worse.
The collapse of search has happened faster than non-technical folks realize. They still trust the computer quite a bit.
I see a ton of this on Quora. Not just for Facebook, but for a lot of online banks and others. They have hundreds of accounts doing it.
Quora doesn't even pretend to police this kind of thing. Automated moderation might remove it, but only after it has been reported, and there's far, far too much of it for users to report all of it.
Nobody pays attention to it on Quora, but it's clear that it's out there to poison AI and search engines.
I doubt they even care https://www.google.com/search?q=site%3Aquora.com+valid+Faceb...
Bit tangential, but what the heck is it with scammers saying "dear" so much? Pretty much every pig butchering or social engineering attempt has had them repeatedly addressing me as "dear."
It fell out of fashion in western English speaking countries decades ago but not the 3rd world English speaking countries the scammers come from.
Many companies outsource their customer support staff as well.
That, and the fact that LLMs are now available to pretty much anyone for effectively nothing, would make me very cautious in basing my judgement of something being a scam or not exclusively on a caller's accent, spelling, mannerisms etc.
Lately, it's actually been quite the opposite in my experience, and I don't find that too surprising either: A lucrative scam business can afford to pay much more than the average US company that sees customer support as a cost center to be optimized at any cost. So why wouldn't their staff's English be better?
Social engineering scams are about to become a lot more exciting (in a bad way), not least thanks to LLMs (with and without voice capability), and I think people are absolutely not ready for it, not even us professionals working in tech.
As well as the instantly recognizable obsequious politeness.
In learning English as a second language, I suspect the textbooks tell them to start all correspondence with "Dear" so as to not appear impolite
Ma'am just do one thing for me, go take a coffee or a glass of water and I will take care of each and everything.
Again and again we see that LLMs are great for creative output and terrible for anything where correctness matters. You should only use it for the latter scenarios when generating answers is slow/hard/expensive, but verification of answers is quick/easy/cheap. Probabilistic and non-deterministic answers have their place, but these companies marketing them in products need to do a better job expressing the limitations.
It shows an amazing lack of understanding of what an LLM is, even from the people selling and implementing them. You're exactly right that they are terrible when correctness matters, but that should be obvious. If they were 100% correct, the models would be much larger, as they'd need to retain all the original training data.
You can use LLMs for language understanding and interpreting questions, but they would need access to databases containing authoritative answers, and they should not answer anything for which they don't have an answer.
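A minimal sketch of that split, with the language layer reduced to a pre-resolved intent string: the bot answers only from an authoritative lookup table and refuses everything else. The intent names, facts, and wording here are all hypothetical, not Meta's actual data.

```python
# Authoritative facts, maintained by the company, not the model.
# None is itself an authoritative answer: "this thing does not exist."
SUPPORT_FACTS = {
    "phone_number": None,
    "help_url": "https://meta.com/help",
}

def answer(intent: str) -> str:
    """Answer from the facts table only; refuse when no entry exists."""
    if intent not in SUPPORT_FACTS:
        return "I don't have an authoritative answer for that."
    fact = SUPPORT_FACTS[intent]
    if fact is None:
        return "There is no phone support; use https://meta.com/help instead."
    return str(fact)
```

The point of the design is that a hallucinated phone number simply has nowhere to come from: the generator never produces facts, it only selects them.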
An older client of mine got scammed by a fake Amazon hotline. The scammers bought an Xbox gift card while connected to his PC via TeamViewer, until he pulled the power cord.
He then called me, and I tried to find the official Amazon hotline on amazon.de. Since I was unable to find it, I had to ask a search engine. The only results were third-party sites. They were from journalistic magazines I recognize (like chip.de), but still yet another gamble.
When I worked on a customer facing chatbot at my previous employer, we specifically wrote in the prompt "our customer service is not reachable by phone", and we tested that the chatbot was able to use that information and respond appropriately.
But I guess you can't expect a tiny startup like Facebook to invest the money in having one part-time employee tweaking the chatbot's prompt so it responds appropriately to commonly recurring user questions.
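For what it's worth, the guardrail we used looked roughly like this (a sketch, not our actual code; `ExampleCorp` and the wording are placeholders, and `build_messages` just assembles the standard system/user message list most chat APIs accept):

```python
# Facts the bot must not contradict are pinned in the system prompt,
# so they arrive with every single request.
SYSTEM_PROMPT = (
    "You are the support assistant for ExampleCorp. "
    "ExampleCorp customer service is NOT reachable by phone. "
    "Never confirm any phone number as an ExampleCorp support line."
)

def build_messages(user_question: str) -> list[dict]:
    """Assemble the message list sent to the chat model."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]
```

We then kept a small regression suite of recurring questions ("what's your phone number?", "is 1-800-XXX your hotline?") and checked the replies whenever the prompt changed.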
Was the chatbot you worked on using an LLM?
Yes
This will get worse when scammers get good at data manipulation for AIs.
After SEO we'll get AIO.
Same with prompt injection by malware.
Yes, AI in its current form is going to be a problem. I'm sure we haven't heard the worst yet. An AI may eventually kill a user.
I believe the heart of the problem is that corporations are riding a hype wave as long as they can, and an AI chat looks like super convincing, next level stuff thanks to the simple interface that hides the fact that you cannot communicate with this one as you would with a human being. You use natural language and it responds with natural language, which makes it not only convenient, but also dangerous.
There's money to gain on all this. While at the same time, hallucinations are an unsolved problem as well as making AI humble enough to realize and tell users that they just don't know. The combination of hallucinating, raising convincing arguments, being confidently incorrect, and not knowing the boundaries of your knowledge base, is a terrible one to let loose as officially sanctioned products.
One of the things about LLM-based AI that concerns me the most is realizing that the average person doesn’t understand that they hallucinate (or even what hallucination is).
I was listening to a debate on a podcast a while ago and one of the debaters kept saying, “Well, according to ChatGPT, […]”—it was incredibly difficult listening to her repeatedly use ChatGPT as her source. It was obvious she genuinely believed ChatGPT was reliable, and frankly, I don’t blame her, because when LLM’s hallucinate, they do so confidently.
This does not add up. How did the Meta CS scammer get into the PayPal account?
>The woman [from fake tech support] said she would clear the hackers out, but he had to give her access to his phone through an app she had him download.
Sure, but nothing in story says he did do that.
That doesn't answer the question. No app on any Android or iPhone phone can reach out and take your PayPal credentials. These scam victims never own up to the most important fact - that they themselves give away the keys to the castle. There's always some hand-wavy techy explanation.
Easy enough to drop a remote access app or a fake PayPal app and let the idiot user put their credentials into it.
> These scam victims never own up to the most important fact - that they themselves give away the keys to the castle.
I bet they were wearing a real short skirt, too.
What?
I think pavel_lishin's comment is alluding to the fact that mvdtnz's comment is victim blaming. It's a bit coded: being overly concerned with what a female victim of sexual assault was wearing is a textbook case of victim blaming, and "I bet they were wearing a real short skirt, too" evokes that to suggest the quoted sentence is blaming the victim of the scam.
Got it in one.
Likely by convincing him to install some remote access tool. The scam is just your regular tech support scam.
Then you'd expect the story to say he did. It doesn't.
My 89-year-old dad called "AMEX" and was scammed. He googled the number for AMEX and took the top result (he says; I did not witness this). I'm across the country, so that Zoom session was quite tedious (it took us an hour to straighten out the permissions for Zoom to share his screen).
> (he says, I did not witness this)
He's speaking the truth.
Google has, for a long while now, let scammers simply buy advertisements to get their fake scam pages to the top of the results. And not just for major banks; various open source projects have been subject to this exact attack.
It's imperative for security that you install adblockers on all their devices.
Yep, I had uBlock Origin installed, but he uses Safari sometimes. He doesn't really know the difference between Chrome and Safari. I'll check it this weekend when I Zoom with him. Thanks.
I'm terrified of this happening to my elderly parents. It's why, even though it can be time consuming, I always have them run "tech support" issues (no matter how small) through me or my bro in law so some foreign scammer doesn't drain their accounts.
This is the real danger of AI, forget the “singularity” or any of that sci-fi crap. AI is going to destroy the average human’s already suffering reasoning ability.
Society's tolerance for social experiments, entrepreneurial and AI alike, is something we treat as a commons, but we are currently building up a solid "anti" sentiment against all of it: liberalism, disruptive technology, the lot. I can imagine a "Luddite" party like MAGA shutting it all down hard and fast in the future. I can already imagine some future bureaucracy evaluating every proposed business idea for scam and harm potential and ending most of them before they even start. And this stuff right here is where it was born. The prison holding your future self was planted right here.
_____________________________________________________
Everything ever worth reading was written in the Pre-Collapse internet. So why not become a software-archeologist - digging for the golden past? Exhume it, get it back running, bring it all back, perfectly fine, software, books, games, our decadent ancestors abandoned and threw away to write off as rust. You too can help, rediscovering a past that worked better, untainted by AI, not yet riddled with Add-HD-Adds, when developers still had to be competent and companies still competed. Meet hot dig-site-teams near you- now. Join Past-Querries-Quary Inc. Can we dig it, yes we can!
This reminds me of the time I reported a fake PayPal email saying my account had been suspended to PayPal. The woman who answered the phone for PayPal told me very emphatically that I HAD BETTER HURRY UP AND DO EVERYTHING THEY TOLD ME TO!
My bank's phone reps have repeatedly asked for the 6-digit 2FA code that comes in a text saying WE WILL NEVER ASK FOR THIS CODE OVER THE PHONE.
Their answer to me is "but you called us", which is fair, but they could just rephrase the message to "WE WILL NEVER CONTACT YOU TO ASK FOR THIS CODE"
Exactly!
That's what you get by trusting a stochastic parrot.
If it did that, it’s not intelligent, not “AI”. Let’s agree to stop abusing the term AI, for the sake of people like this.
This right here is where AI safety counts.
if companies are held liable for LLM provided info, we will begin to see appropriate use
The AI information was 99% correct. Only 1 word was wrong.
Yeah it said "kill" instead of "save". Oops.
I'm not a native English speaker, so I don't know how it is in any English-speaking country, but when I ask my Polish friends about the word "epistemology", they just don't know it.
https://en.wikipedia.org/wiki/Epistemology
According to Google: the theory of knowledge, especially with regard to its methods, validity, and scope, and the distinction between justified belief and opinion.
Even though they wouldn't know the term, we all learn how to figure out what's true and what's not: we learn it when watching cartoons about lying, or when interpreting texts in school and so on. But imagine you go to a doctor, and have a small talk in which you say "I was always fascinated by medicine", to which the doc responds "What is medicine?" - you probably would run away from that doctor.
And yet here we are, living in the "Information Era", and we're still missing the very basic techniques of figuring out the truth. If you look at the statistics of religion/atheism, no group holds over 50% of the population, meaning THE MAJORITY IS WRONG. And not on a nuanced thing like the majority not being able to tell the average distance between the Earth and the Moon with 1 meter accuracy. No, on something as important and world-view defining as the existence and character of God, most of us are wrong.
The percentage of flat-earthers in America is a 2-digit number...
So the problem here isn't that Facebook doesn't have a support number. The problem is much deeper, and in a way, it's good that people suffer from their stupidity: it's like programmers suffering from errors - at the end of the day they end up with their logical thinking improved. The question is: how do we reshape society to replace production errors with compilation errors, or how do we educate ourselves to minimize the frustrating error messages?
I look forward to this never happening again and not becoming a massive problem for the next 10 years.
Not to apologize for the irresponsible deployment of this chatbot but it should be noted that the guy got the number from a Google search (think about the results you'd get for "facebook support number"). It's been a massive problem for at least the last 10 years.
It's unfortunate that Google is unable to solve this intractable technical problem.
Twice scammed: Once by throwing away his life by having an F'book account. Then, the support scam.-
I understand the downvotes and that it might be an unpopular position, but an empire built on stealing people's attention, through addiction, and one scientifically proven again and again to cause serious mental issues on vulnerable demographics (teens), deserves to be shamed.-
damn dude you got his ass good
Zuck must be proud.
As a millennial, I'm more amazed that someone willingly uses a phone for non-mandatory, not-burningly-urgent phone calls... why anyone would do that is beyond me.
I'm Gen-Z and talking to a human representative of a company makes me much more confident that something will happen as a result of my efforts (though still not certain).
I scheduled an apartment viewing recently, and the only method they provided to do so was chatting with an AI (seriously)... I then tried and failed to find a way to contact a human for confirmation multiple times. Lo and behold nobody at the leasing office when I showed up at the scheduled time. Came back later and eventually found somebody - they had not seen anything I'd done with the bot.
Software for small businesses and local governments is often really bad and I'd much prefer to make sure a person knows what I'm trying to get accomplished.
When I was searching for apartments every complex had the same AI program for scheduling. It was horrible.
I got to talk to one of the leasing managers at one of the viewings and I told him it made them seem cheaper, not more tech-savvy. He told me they had spent millions of dollars on it.
Crazy. If they won't let me speak to a person I'd still much prefer just having a generic click-your-timeslot web app than waste time talking to a bot. And for millions of dollars they could just hire a human for a decade or more...
There seems to be a semi-infinite market for garbage software sold to landlords. At my current place I need an account to unlock my door, a different account to open the garage door (because the garage is managed by a third party), an account to reserve the elevator for move-in day (which tried to upsell me moving services), an account to get sent my water bill, which charges me $15 a month for the privilege (I don't pay my bill through this service, I just have it emailed to me), an account to pay rent, and an account to submit maintenance requests. Part of the trick seems to be to offload the costs onto the tenants who have no choice, but I'm sure our landlord is paying a good chunk for some of these.
If you have minimal to zero scruples, this seems to be an easy market to make a start up in. Landlords will buy anything!
Don't forget the account to open shared mailboxes for packages. "Luxor" for me. It actually works so I don't mind much but I hadn't really considered how much extra rent all the apps might be costing me.
> When I was searching for apartments every complex had the same AI program for scheduling. It was horrible.
Was it RealPage? I hear they're illegally colluding to raise prices.
Had the same thing happen for a town home I was interested in buying. Went through their online scheduling app. Got email confirmation with agent's name, but no phone number. Got another confirmation day of. Didn't think anything was amiss. Go out to building, wait for 20 mins and leave after agent was a no-show, no-call.
I called their office, and after 20 minutes of trying to get around their obnoxious automated phone menus I finally got someone, who informed me they don't use THAT app any more to schedule appointments; I need to use their NEW app. They sent me a totally different app link in an email. I told them they are probably losing a ton of business, because very clearly the OTHER app is still very much out in the wild and still very much being used.
I went with a different company and had much better luck.
As someone a little older I remember being able to talk to a person to get issues resolved fairly easily and reliably. The online help is great when the issue at hand is pretty cut and dry. It is nice for a non expert to be able to explain to support on the phone and just have things taken care of.
Support from days gone by was not perfect (hold times, support reading off a script), but it was often a nice option.
Not sure why you're throwing in the randomly assigned label of millennial, but fine, I also fall into the category, and I've taken to just calling people and companies.
First of all, understand that many companies, especially smaller ones, have people whose job is answering phone calls. Rather than doing a multi-day back and forth via email or chat, where you're one out of five customers that "agent" is currently servicing, calling is really, really efficient. Clarifications and confirmations are instant, and alternatives can be quickly discussed. I call because it's efficient.
Also, have you ever noticed that most people SUCK at email? Try sending an email to a company with two or more questions. What will happen is that you'll get an answer to the first question, and then they forget about the rest. The larger the company, the more likely this is to happen, because they can't deal with three issues in one support ticket - at least that's my theory. So now you need three emails.
I used to hate calling people, but I found that I hated uncertainty more and I hate getting wrong half answers to my questions. Calling people fixes all of this. Always call, but get confirmation in writing.
Fellow millennial, I also hate using the phone for anything, but very often a business provides no other interface to resolve my edge-case issue. Connecting to a human representative to discuss the situation ends up being the only way to resolve it. If they have a [solve my specific problem] button on their website, I'll use that, but often there is no such button.
AI has kind of fucked this, but for me (also a millennial), I prefer to speak to real people because they are intelligent beings with roughly the same motivations as me, and they usually want to help out their fellow man.
For example, I can call a local store and ask "hi, do you have this item in stock, can you check on the shelf and set it aside for me please, I will be there in 25 mins".
By contrast stuff like "click and collect" order flows online are super rigid.
> I prefer to speak to real people
Thanks to AI, you will speak to who you think is a "real person"...
I remember, some time ago, some scammer tried to get my wife to address an "urgent bank issue."
What tipped her off, was how incredibly good their "tech support" was.
Yeah, indeed.
Time to go outside more :)
Millennial here. I hate receiving phone calls, the biggest reason being that I'm a foreigner in a country where they speak another language.
I do not mind a phone call when I am the one initiating it, because then I know the context and the expectations.
As a millennial I think voice calls sometimes are great. It obviously doesn't always work with big orgs like Facebook, but because so many people are now so afraid of or annoyed by just talking to a real person for a few minutes, it's become a real power move to go through the minor effort of making a call and expect some sort of immediacy to get things moving quickly. Email or text can be easily ignored and punted off (e.g. "whoops, I didn't see it"), and they increase the odds of miscommunication or of things being dragged out going back and forth.