
Embracing the AI Mindshift: From Google to ChatGPT

Exploring the evolution of information access, decision-making, and society's acceptance of more authoritative answers
August 11, 2024 · 5 min read


Whether we like it or not, thinking will happen through AI, just as it happens today through Google. We will ask language models how to deal with a broken heart, the way we once started asking the search giant. Generate recipes. Deal with grief. Diagnose disease. We will ask it to be our assistant as well. It will write our cover letters, resignation emails, pick-up lines, wedding vows, and eulogies. All the things we've already asked of Google.

The difference is that the AI gives authoritative answers. Yes, you can chat back and forth with it, but that's likely not what most will do. It's like search, where most people don't double, triple, and quadruple check the results, let alone look past the first three. It follows, then, that AI will not be the downfall of thinking, as many proclaim. Instead, the problems it invents will be nuanced and novel. Like Google.

Misinformation spreads because information has become so easily available. People can discover in seconds how to play a song on the piano, but they can also find claims that the earth is flat. It's hard to say with any certainty that Google is all bad because of misinformation. The company should do something about it, but the technology has many positives, particularly before search-optimized blogs muddied the results with low-quality trash. We need to wait and see what the grey zones will be with ChatGPT.

So what is uncomfortable about AI? Why does this feel different? To me, it has to do with the trajectory we're on. We're cementing ourselves in a culture obsessed with output rather than process.

Google gave us outputs at breakneck speed but, compared to ChatGPT's authoritative single answers, search results allowed us to stop and consciously choose a link. But we didn't. Instead, we clicked the first link... maybe the second or third. We did that so much that Google now pastes facts from the first hit straight onto the results page.

For things like writing an email, searching online gave us templates we could use for inspiration, unlike GPT spelling it out for us. But again, we copy-pasted the template and sometimes even forgot to change the "Dear [BOSS NAME]". We all got accustomed to clicking and taking the first and fastest thing we could.

There isn't any singular person or entity to blame here. Society is fast-moving and unrelenting. Corporations have no checks and balances they can't just pay their way out of. People aren't all equally equipped to filter through information. With AI, these tendencies are only becoming more extreme. As machines seem more intelligent, what's required of us to get a good-enough result decreases dramatically. We are prioritizing output over process, and this will lead to a lot of mental pain if we're not intentional.

Author

Charlie Gedeon

Charlie Gedeon is the co-founder of Pragmatics Studio.
