
We have to stop ignoring the problem of AI hallucinations



Google I/O introduced an AI assistant that can see and hear the world, while OpenAI put its version of a Her-like chatbot on an iPhone. Next week, Microsoft will host Build, where it will certainly have some version of Copilot or Cortana that understands pivot tables. Then, a few weeks later, Apple will hold its own developer conference, and if the buzz is anything to go by, it will also talk about artificial intelligence. (It’s unclear whether Siri will be mentioned.)

AI is here! It is no longer conceptual. It’s taking jobs, creating some new ones, and helping millions of students avoid doing their homework. According to most of the large technology companies investing in AI, we appear to be at the start of one of those rare monumental shifts in technology. Think of the Industrial Revolution, or the creation of the internet, or the personal computer. All of Silicon Valley – of Big Tech – is focused on taking large language models and other forms of artificial intelligence and moving them from researchers’ laptops to the phones and computers of everyday people. Ideally, they will make a lot of money in the process.

But I can’t bring myself to care, because Meta AI thinks I have a beard.

I want to make it very clear: I am a cis woman and I don’t have a beard. But if I type “show me a photo of Alex Cranz” in the prompt window, Meta AI will inevitably return images of very handsome men with dark hair and beards. I am just some of those things!

Meta AI isn’t the only one struggling with the minutiae of The Verge’s masthead. ChatGPT told me yesterday that I don’t work at The Verge. Google’s Gemini didn’t know who I was (fair enough), but after telling me that Nilay Patel was the founder of The Verge, it then apologized and corrected itself, saying he was not. (I promise he was.)

AI keeps screwing up because these computers are stupid. Extraordinary in their abilities and astonishing in their stupidity. I can’t get excited about the next turn in the AI revolution, because that turn is toward a place where computers can’t consistently maintain accuracy about even minor things.

I mean, they even got it wrong during Google’s big AI keynote at I/O. In an ad for Google’s new AI search engine, someone asked how to fix a jammed film camera, and it suggested they “open the back door and gently remove the film.” That is the easiest way to destroy every photo you’ve already taken.

Some of these suggestions are good! Some require A VERY DARK ROOM.
Screenshot: Google

An AI’s difficult relationship with the truth is called “hallucination”. In extremely simple terms: these machines are excellent at discovering patterns in information, but in their attempt to extrapolate and create, they occasionally make mistakes. They effectively “hallucinate” a new reality, and that new reality is often wrong. It’s a complicated problem, and everyone working in AI right now is aware of it.
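To see how pattern-matching can produce fluent nonsense, here is a toy sketch (not how any real vendor’s model works; the corpus, names, and `generate` helper are purely illustrative): a bigram chain learns only which word tends to follow which, then generates plausible-sounding text with no notion of truth.

```python
import random
from collections import defaultdict

# Toy bigram "language model": it learns which word follows which word,
# with no concept of whether a generated sentence is actually true.
corpus = [
    "nilay patel is the editor of the verge",
    "alex cranz works at the verge",
    "alex heath works at the verge",
]

# Count word-to-next-word transitions across the tiny corpus.
transitions = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        transitions[a].append(b)

def generate(start, n_words=7, seed=0):
    """Walk the transition table, sampling a locally plausible next word each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words):
        choices = transitions.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

# Every step is locally plausible, yet the chain can stitch fragments into a
# statement that never appeared in its training data -- a tiny "hallucination".
print(generate("alex"))
```

Each individual transition comes straight from the training text, which is exactly why the output reads as confident and fluent even when the whole sentence is false.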

A former Google researcher claimed this could be fixed within the next year (though he lamented that outcome), and Microsoft has a tool for some of its users that supposedly helps detect hallucinations. Google’s head of search, Liz Reid, told The Verge it is also aware of the challenge. “There is a balance between creativity and factuality” in any language model, she told my colleague David Pierce. “We’re really going to skew it toward factuality.”

But notice how Reid said there was a balance? That’s because many AI researchers don’t think hallucinations can ever be fixed. A study out of the National University of Singapore suggested that hallucinations are an inevitable outcome of all large language models. Just as no person is 100 percent right all the time, neither are these computers.

And that’s probably why most of the major players in this field – the ones with the real resources and financial incentive to make us all embrace AI – think we shouldn’t worry about it. During Google’s I/O keynote, it added, in small gray font, the phrase “check answer accuracy” to the screen below almost every new AI tool it showed off – a helpful reminder that its tools can’t be trusted, but also that it doesn’t think this is a problem. ChatGPT operates similarly. In tiny font just below the prompt window, it says: “ChatGPT may make mistakes. Check important information.”

If you squint, you can see the tiny, oblique disclosure.
Screenshot: Google

That’s not a disclaimer you want to see on tools that are supposed to change all of our lives in the very near future! And the people making these tools don’t seem to care much about fixing the problem beyond a small warning.

Sam Altman, the CEO of OpenAI who was briefly ousted for reportedly prioritizing profit over safety, went a step further and said that anyone who has issues with AI accuracy is being naive. “If you do the naive thing and say, ‘Never say anything you’re not 100 percent sure about,’ you can get them all to do that. But it won’t have the magic that people love so much,” he told a crowd at Salesforce’s Dreamforce conference last year.

This idea that there is some kind of unquantifiable magic sauce in AI that will allow us to forgive its tenuous relationship to reality is raised a lot by people eager to dismiss accuracy concerns. Google, OpenAI, Microsoft, and many other AI developers and researchers have dismissed hallucination as a minor annoyance that should be forgiven because they are on the path to creating digital beings that can make our lives easier.

But I’m sorry, Sam and everyone else financially incentivized to get me excited about AI. I don’t come to computers for the imprecise magic of human consciousness. I come to them because they are very precise when humans are not. I don’t need my computer to be my friend; I need it to get my gender right when I ask, and to help me not accidentally expose my film when fixing a broken camera. Lawyers, I presume, would like their case law to be accurate.

I do understand where Sam Altman and the other AI evangelists are coming from. There is the possibility, in some distant future, of creating a real digital consciousness out of ones and zeros. Right now, the development of artificial intelligence is advancing at an astonishing speed that puts many previous technological revolutions to shame. There is real magic at work in Silicon Valley right now.

But the AI thinks I have a beard. It can’t consistently figure out even the simplest tasks, and yet it is being foisted upon us with the expectation that we celebrate the incredible mediocrity of the services it provides. While I can certainly marvel at the technological innovations happening, I wish my computers didn’t have to sacrifice accuracy just so I can have a digital avatar to talk to. That’s not a fair trade; it’s just an interesting one.


