Google stock loses $100 billion after new AI chatbot gives wrong answer in demo – zoohousenews.com
- February 12, 2023
(Natural News) Shares of Alphabet, Google’s parent company, fell 7.7 percent on Wednesday, wiping a remarkable $100 billion off the company’s market value after its new AI chatbot gave an inaccurate answer to a question in a public demo this week.
Google’s new AI chatbot tool, dubbed Bard, hasn’t been released to the public yet, but it was the subject of significant hype — at least until the disastrous demo the company posted on Twitter this week.
In the demo, a user asks Bard, “What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?”
The AI tool then gives the user an answer that includes several bullet points about the telescope. One of them claims, “JWST has captured the first-ever images of a planet outside of our own solar system.”
However, NASA reports that the first image of a planet outside our own solar system, known as an exoplanet, was not captured by the James Webb Space Telescope. Instead, it was captured by the European Southern Observatory’s Very Large Telescope in 2004.
This very public embarrassment underscores Google’s struggle to keep pace with ChatGPT, a rival AI chatbot that has been getting a lot of positive attention. ChatGPT can generate answers to questions people typically search Google for, as well as essays and even song lyrics. Its sudden surge in popularity reportedly prompted Google management to release the company’s own version as soon as possible.
Google’s event came just a day after Microsoft announced it would be adding a more advanced rendition of the artificial intelligence used by ChatGPT to its Bing search engine.
AI is error-prone
Some observers believe that conversational AI will radically change the way people search online, but the Bard fiasco could deal a big hit to the reputation of Google’s search engine as a source of reliable information.
Similar to ChatGPT, Bard is based on a large language model. That means it’s been trained on massive amounts of online data to find convincing and realistic-sounding responses to user input. While many of these tools provide answers that sound reasonably natural and colloquial, they can also spread inaccurate information.
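The mechanism described above can be illustrated with a toy sketch. This is not Bard’s or ChatGPT’s actual architecture (those use neural networks trained on far more data); it is a minimal bigram model, assumed here only to show the principle: the model predicts each next word from statistical patterns in its training text, with no notion of whether the result is true.

```python
import random
from collections import defaultdict

# Toy corpus standing in for the "massive amounts of online data"
# a real large language model is trained on (illustration only).
corpus = (
    "the telescope captured an image of a planet . "
    "the telescope captured an image of a star . "
    "a planet outside our solar system is an exoplanet ."
).split()

# Count which words follow which: a bigram model. Vastly simpler
# than a real LLM, but the core idea is the same -- pick the next
# token from patterns seen in training data, not from facts.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8, seed=0):
    """Sample a fluent-sounding continuation; truth plays no role."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

The output reads like natural English because every word pair did occur in the training text, yet nothing checks whether the assembled sentence is accurate, which is exactly how a chatbot can produce a confident-sounding but wrong claim.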
For now, Google is trying to do damage control and says the incident will help it improve the project. In a statement, a Google spokesman said: “This underscores the importance of a rigorous testing process, something we are launching this week with our Trusted Tester program.
“We will combine external feedback with our own internal testing to ensure that Bard’s responses meet high standards of quality, certainty and realism in real-world information.”
While misidentifying the telescope that took a particular photo might seem harmless on the surface, what happens when Google’s Bard gives people inaccurate information about administering first aid, or incorrect instructions for carrying out home improvement projects, in ways that could put them at risk?
The problem is that many of the answers these chatbots provide sound so compelling that it’s hard for people to tell when they’re inaccurate. The appeal of these AI-driven searches lies in their ability to answer queries in plain language rather than presenting a list of links, which helps connect people to answers faster.
However, in addition to concerns about accuracy, these systems have been criticized for their susceptibility to inherent bias in their algorithms, which can skew their results. When used en masse, the potential for spreading false information is overwhelming. Tech news site CNET recently had to correct 77 articles it had authored with an AI tool after significant factual inaccuracies and plagiarism were found in them. AI chatbots are designed to essentially invent things to fill in gaps, and if they become widespread, it could soon be harder than ever to tell fact from fiction online.