Google and Microsoft are in a race to ruin search with AI

Models like ChatGPT are a poor source of information.

It's been a tough year for the tech industry: investors are nervous, layoffs dominate the news, and a pesky tool called ChatGPT threatens to destroy search engines. So, ironically, Google and Microsoft are responding to this threat by rushing to ruin their own search engines.

Let's start at the top: ChatGPT is not a reliable source for any information. It's an interesting tool that can perform several tasks, but it isn't smart, and it lacks any real understanding or knowledge. Unfortunately, chatbots like ChatGPT can seem very authoritative, so a ton of people (including journalists, investors, and tech workers) mistakenly believe them to be knowledgeable and accurate.

Now, you could say that tools like Google Search aren't smart either, and you'd be right. You could also criticize the effectiveness of Google Search or Bing: these services are gamed by websites, and as a result they surface a ton of garbage from the internet. But unlike ChatGPT, Google does not answer questions. It simply links you to websites, leaving you to determine whether something is right or wrong. It's an aggregation tool, not an authority.

But the hype machine has sent ChatGPT into the stratosphere, and Big Tech wants to follow. After tech blogs and random internet personalities hailed ChatGPT as the downfall of Google Search, Microsoft made a big investment in the technology and started integrating it into Bing. Our friends at Google responded with Bard, a custom AI chatbot built on the experimental LaMDA model.

An overview of how Google's Bard intrudes into normal search results.

Microsoft and Google are in a race to see who can rush this technology out the door. Not only will they lend authority to tools like ChatGPT, but they'll introduce this unpolished AI to users who don't know any better. This is irresponsible; it will amplify misinformation and encourage people to set research aside.

Not to mention that this technology will force Microsoft and Google to become the arbiters of what is right or wrong. If we want to treat conversational AI as an authority, it needs to be constantly adjusted and moderated to eliminate inaccuracies. Who will make those decisions? And how will people react when their beliefs or “knowledge” aren't reflected by a sophisticated AI?

I should also note that these conversational models are built on existing data. When you ask ChatGPT a question, it may answer using information from a book or article, for example. And when CBS tried to test an AI writer, it published a ton of plagiarized content. The data that power this technology are a copyright nightmare, and regulation seems like a strong possibility. (If that happens, regulation could dramatically increase the cost of developing conversational AI. Licensing all the content needed to build this technology would cost a fortune.)

Early previews of Google's Bard show that it can be deeply integrated with Search, presenting information both inside the regular search engine and in a dedicated chat box. The same goes for Bing's conversational AI. These tools are currently being tested and finalized, so we expect to hear more in the coming weeks.

By the way, both Google and Microsoft have published sets of AI principles. It may be worth reading them to see how each company approaches AI development.
