It will take a lot of work to solve Google's AI hallucinations

Artificial intelligence against fatigue

Until the advent of ChatGPT and other popular LLMs (Large Language Models), I never found Googling things to be a particularly tiring activity. I don't remember getting to the end of the day and thinking, "Wow, I'm exhausted! It must be all those Google searches today."

Everything indicates that the habit of scanning a list of links to find information is coming to an end. After outsourcing our memory to Google and Wikipedia, we will now also delegate our ability to analyze content and choose what interests us most. The answer, more than ever, will arrive ready-made.

But what happens when AI hallucinates and provides wrong, meaningless, or even dangerous answers?

You should not eat stones, put glue on pizza, or cook with gasoline. Everyone already knows this, but it bears repeating: chatbots built on large language models were designed to generate coherent text. Whether that text is true, verifiable, and safe is another matter entirely.

This second part can be addressed through improvements in how models are trained, customization with specialized databases, human supervision, tools for evaluating and giving feedback on results, increasingly extensive testing, and so on.
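To make one of those ideas concrete, here is a minimal sketch of grounding answers in a curated database, sometimes called retrieval-augmented answering. This is not how Google's systems actually work; the function names, the toy knowledge base, and the keyword-matching retrieval are all hypothetical simplifications chosen for illustration. The key point is the abstention rule: if nothing relevant is retrieved, the system declines to answer instead of generating a fluent guess.

```python
# Illustrative sketch only: a tiny "grounded answering" loop.
# All names and data here are hypothetical, not a real API.

KNOWLEDGE_BASE = {
    "pizza": "Cheese can be kept from sliding off pizza by letting it cool slightly.",
    "stones": "Stones are not edible and should never be eaten.",
}


def retrieve(query: str) -> str | None:
    """Return the first knowledge-base entry whose key appears in the query."""
    q = query.lower()
    for key, fact in KNOWLEDGE_BASE.items():
        if key in q:
            return fact
    return None


def answer(query: str) -> str:
    """Answer only from retrieved facts; abstain instead of guessing."""
    fact = retrieve(query)
    if fact is None:
        # Abstaining is the anti-hallucination step: no source, no answer.
        return "I don't have a verified source for that."
    return fact


if __name__ == "__main__":
    print(answer("How do I keep cheese on my pizza?"))   # grounded answer
    print(answer("Should I cook with gasoline?"))         # abstains
```

Real systems replace the keyword lookup with semantic search over large document collections and pass the retrieved passages to the model as context, but the design principle is the same: tie generated answers to sources that can be checked.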
