What a difference seven days makes in the world of generative AI.
Last week, Microsoft CEO Satya Nadella gleefully announced to the world that Bing’s new AI-powered search engine would “make Google dance,” challenging its long-standing dominance of web search.
The new Bing uses a little thing called ChatGPT—you may have heard of it—which represents a huge leap in computers’ ability to process language. Thanks to advances in machine learning, it essentially taught itself to answer a wide variety of questions by ingesting trillions of lines of text, much of it scraped from the Internet.
Google duly danced to Nadella’s tune by announcing Bard, its answer to ChatGPT, and promising to use the technology in its own search results. Baidu, China’s largest search engine, has said it is working on similar technology.
But Nadella may want to watch where his company’s fancy footwork leads.
In demonstrations held by Microsoft last week, Bing appeared to be able to use ChatGPT to offer complex and comprehensive answers to queries. It developed an itinerary for a trip to Mexico City, generated financial summaries, offered product recommendations that gathered information from numerous reviews, and offered advice on whether a piece of furniture would fit in a minivan by comparing dimensions posted online.
WIRED had some time at launch to test Bing, and while it seemed adept at answering many kinds of questions, it was far from flawless and at times seemed unsure of its own name. And as one astute expert noted, some of the results Microsoft demonstrated were less impressive than they first appeared. Bing seemed to have invented some of the information in the travel itinerary it drew up, and it omitted details that few people would be likely to miss. The search engine also muddled Gap’s financial results, mistaking gross margin for unadjusted gross margin, a serious error for anyone relying on a bot to perform what might seem like the simple task of crunching numbers.
This week brought more problems as the new Bing became available to more beta testers. Reported issues include the bot arguing with a user about what year it is and appearing to suffer an existential crisis when pushed to prove its own sentience. Google’s market capitalization dropped a staggering $100 billion after someone spotted errors in the answers generated by Bard in the company’s demo video.
Why are these tech titans making such mistakes? The answer lies in the strange way ChatGPT and similar AI models actually work, and in the extraordinary hype of the current moment.
What is confusing and misleading about ChatGPT and similar models is that they answer questions by making highly educated guesses. ChatGPT generates what it predicts should follow your question, based on statistical representations of characters, words, and paragraphs. OpenAI, the startup behind the chatbot, refined that basic mechanism to provide more satisfying answers by having people give positive feedback whenever the model produces responses that seem correct.
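The core idea, stripped of the neural network, can be illustrated with a toy sketch. This is not OpenAI’s code, just a minimal bigram model showing what “predicting the next word from statistics of prior text” means: the corpus, function names, and parameters here are all illustrative assumptions.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" standing in for the trillions of lines
# of text a real model ingests.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count which word follows which (a bigram model, the simplest case
# of learning statistical representations of text).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

def generate(start, length):
    """Chain predictions: each guess becomes context for the next one."""
    words = [start]
    for _ in range(length):
        words.append(predict_next(words[-1]))
    return " ".join(words)
```

Nothing in this loop checks whether the output is true; it only checks what is statistically likely, which is exactly why fluent text and factual text are not the same thing.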
ChatGPT can be impressive and entertaining because this process can create the illusion of understanding, which works well enough in some use cases. But the same process will “hallucinate” false information, a problem that may now be one of the most important in the technology industry.
The intense hype and anticipation surrounding ChatGPT and similar bots heighten the danger. When well-funded startups, some of the world’s most valuable companies, and the most prominent tech leaders all say chatbots are the next big thing in search, many people will take it as gospel, prompting those who started the chatter to double down with further predictions of AI omniscience. And chatbots are not the only ones who can be led astray by pattern matching without fact-checking.