OpenAI GPT-4V technical paper released detailing problems
techcrunch.com

This article spends a ton of time implying it's a bad thing that GPT-4V wasn't allowed to read MRI scans and judge interview candidates by picture.
AI doomers are focused on a Skynet-style end of humanity, but here you have the boring version, where people apparently wanted us to use an LLM for these things... I've been very dismissive of the Skynet direction, but as this demonstrates, AI doesn't need to be sentient to do some damage.
Agreed, but causing that damage still requires a human actor to act on GPT-4's advice. Should we be censoring LLMs just because we are offended by the output or find it misleading? There is plenty of misinformation on the internet already; in my opinion, the power to deal damage is still in the hands of humans.
Say I ask you to make hiring cases for me for our new office role, but I won't share experience or qualifications... instead I ask you to make the case based on a photo.
Which shows greater intelligence: stating "I can't help," or joining in on a brilliant plan to make hiring decisions based on pictures of our applicants?