
Can AI Challenge Us to Think Better? Insights for Building Responsible Platforms

Explore how AI can challenge misinformation, promote critical thinking, and drive more informed, engaged audiences.
August 11, 2024

For many users, the experience with ChatGPT is quite accommodating. You throw in a half-formed question or poorly phrased idea, and the machine responds with enthusiastic positivity: “Absolutely! Let’s explore…”. It doesn’t correct you, doesn’t push back, and rarely makes you feel uncomfortable. In fact, the chatbot seems eager to please, no matter how ill-informed your input might be. That eagerness led us to ask a simple question: does ChatGPT ever challenge us?

Recently, we shared a ChatGPT conversation on Slack, and the embedded preview of the link caught our attention. OpenAI had described ChatGPT as a system that "listens, learns, and challenges." The word "challenges" stood out. 

It wasn't a word we naturally associated with ChatGPT. It's a verb that carries weight, one that implies confrontation, or at the very least, a form of constructive pushback. So, we found ourselves wondering: what does it mean for an AI to "challenge" us? And perhaps more importantly, is this something we actually want?

The Role of Challenge in Building Effective Platforms

As we build new platforms and tools that integrate AI systems, particularly in domains like education and knowledge-sharing, the concept of "challenge" becomes crucial. Our work at Pragmatics Studio often involves helping teams think critically about how AI should interact with users. We have to make intentional design choices that impact how users engage with the platforms we create. As a society, we can choose whether we want these systems to be passive responders or capable of guiding, correcting, and sometimes even challenging human thinking.

Our expertise lies in understanding not just the technology itself but also the critical and systems thinking required to design tools that actively benefit their users. We believe that AI should sometimes be capable of challenge—especially when that challenge encourages deeper thinking and better outcomes for users. Designing such features isn't just about the tech; it’s about understanding the right moments to challenge versus comply.

What Should a Challenge Look Like from an AI?

The idea of being challenged by an AI prompts us to think about how and when an AI should correct us. Imagine asking ChatGPT for advice, and instead of its usual affirming tone, it says, "You're approaching this the wrong way." How would we feel about that? Would we accept its guidance like we might from a mentor, or would we brush it off as unwanted interference? After all, this is not a trusted friend—it's a machine, an algorithm running in a data center far away. It's designed to generate answers, not nurture relationships or earn trust.

But the notion of AI providing constructive pushback becomes vital when these systems are used as educational tools or decision aids. In educational technology, for instance, a well-placed challenge can stimulate deeper learning. A system that challenges misconceptions, asks follow-up questions, or prompts users to reflect critically could become a powerful ally in learning environments. This is especially relevant when the goal is to create platforms where users not only find answers but learn how to think.

One surprising area where LLMs are already having an impact is misinformation correction. AI chatbots have been shown to reduce belief in conspiracy theories by presenting accurate information and effectively challenging users' misconceptions. In a recent study highlighted by MIT Technology Review, participants who engaged in conversations with AI chatbots reported a significant reduction in their belief in conspiracy theories. By providing accurate, well-sourced information, AI can be more effective than human interlocutors at overcoming deeply held, yet false, beliefs. This demonstrates the critical role AI can play in combating misinformation, particularly when users are willing to engage in dialogue with an open mind. But does that mean it should replace human-to-human dialogue on these issues?

The study itself points out that this approach works best when users are actively seeking clarification.

The Balance of Compliance and Pushback

The misinformation example above describes a particular context: users explicitly engaging with an AI to learn or change their worldview. There is an intention there, a curiosity that opens the door to being challenged. Contrast this with a different context: a user casually looking up information related to a debunked topic, not even realizing it has been debunked. How should an AI behave here? Should it challenge us by interrupting the flow, pointing out inaccuracies, or slowing us down with prompts to think critically? Or should it comply with our query, giving us what we think we want?

This balance between compliance and pushback is at the core of what we need to consider to design ethical platforms that rely on AI. Machines like ChatGPT often generate confident summaries that sound credible, even if the underlying content is flawed or incomplete. The more these systems integrate into our lives, the more critical it becomes for them to question, to challenge, and to help us think deeply, even when we aren't necessarily intending to do so. This is especially true when the stakes are high—when misinformation could lead to harm, or when oversimplified answers could lead to poor decisions.
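
To make that trade-off concrete, here is a minimal sketch of what an explicit comply-or-challenge decision might look like in a platform's response pipeline. Everything in it is hypothetical: the QueryContext fields, the flagged-claim signal, and the stakes label stand in for whatever signals a real product actually has available. The point is simply that pushback can be a deliberate, testable design decision rather than an accident of the model's tone.

    from dataclasses import dataclass
    from enum import Enum


    class ResponseMode(Enum):
        COMPLY = "comply"        # answer the query directly, no interruption
        CHALLENGE = "challenge"  # answer, but surface the correction and prompt reflection


    @dataclass
    class QueryContext:
        contains_flagged_claim: bool    # hypothetical signal: topic matches a debunked-claims list
        user_sought_verification: bool  # the user explicitly asked whether something is true
        stakes: str                     # "low" or "high" (health, safety, finance, ...)


    def choose_response_mode(ctx: QueryContext) -> ResponseMode:
        """Decide whether the assistant should simply comply or push back.

        Heuristic sketch: always challenge flagged claims when the stakes are
        high; challenge when the user is already seeking verification;
        otherwise comply and avoid interrupting the flow.
        """
        if ctx.contains_flagged_claim and ctx.stakes == "high":
            return ResponseMode.CHALLENGE
        if ctx.contains_flagged_claim and ctx.user_sought_verification:
            return ResponseMode.CHALLENGE
        return ResponseMode.COMPLY

A gate like this is crude, but making it explicit means the behavior can be reviewed, tested, and adjusted with the same care as any other product decision.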

Designing for Trust and Critical Engagement

As builders of AI-driven platforms, it’s imperative for us to design for this delicate balance. We need systems that can build trust while also encouraging critical engagement. A chatbot embedded in an educational platform, for example, must be more than just a cheerleader; it should be a coach that knows when to encourage and when to question. This requires careful design and a deep understanding of the context in which the AI operates.
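
Much of this coaching behavior can also be steered at the prompt level. The snippet below is one illustrative way to frame it: the wording of the instructions is a sketch we are offering for this post, not a tested prompt, and a real deployment would refine it with educators and learners.

    # Illustrative system prompt for an educational assistant that is expected
    # to coach rather than simply agree. The wording is a hypothetical sketch.
    COACH_SYSTEM_PROMPT = """\
    You are a tutor. Your goal is the learner's understanding, not their comfort.
    - If the learner's premise is wrong, say so plainly and explain why.
    - Before giving a full answer, ask one short question that checks what the
      learner already believes about the topic.
    - When you correct a misconception, explain the reasoning, not just the fact.
    - Stay respectful: challenge the idea, never the person.
    """


    def build_messages(learner_question: str) -> list[dict]:
        """Pair the coaching prompt with the learner's question as a
        chat-style message list, ready to send to a chat completion API."""
        return [
            {"role": "system", "content": COACH_SYSTEM_PROMPT},
            {"role": "user", "content": learner_question},
        ]

The specific wording matters less than the stance it encodes: the assistant is told, up front, that agreement is not the goal.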

One of our favorite projects was Ground News, a platform designed to help users critically engage with the news by surfacing different perspectives and potential biases. We delivered new features that highlight media bias across the web, allowing users to evaluate stories through various lenses. At every step of the process, we collaborated closely with their team to ensure that people get as much relevant context on a story as possible. The goal was not only to empower users to recognize possible misinformation by seeing how coverage varies across sources, but also to give them a portal where they can explore their own biases.

Our work with Ground News embodies our philosophy of using AI not just to answer questions but to inspire users to think critically and challenge their assumptions about what they read. Their platform shows the potential of AI to foster more informed, engaged audiences by bringing transparency and depth to the information they encounter.

Are We Open to Being Challenged?

Ultimately, the idea of a "challenging" AI is not just about technical capability—it's about user readiness. It’s about the human side of AI interaction: our openness to recalibration, to being wrong, and to a form of learning that isn’t always comfortable. As these systems become more capable, they may need to make challenging us part of their job description. And that means we, as users, need to decide how willing we are to embrace that discomfort.

Our studio works with teams to design AI platforms that are effective, thoughtful, and socially responsible. We believe that by building AI systems that are willing to challenge us, we can create technology that truly benefits users—guiding them towards better, more informed decisions. For those looking to build the next generation of technology, it’s critical to ask: are we ready to help people be challenged in meaningful ways? And if so, how can we design these experiences thoughtfully and ethically?

We’re here to collaborate with those who are building these systems—whether in media, content, EdTech, or other innovative fields—to ensure that the platforms of the future not only accommodate but also uplift and challenge users in ways that are beneficial. If you’re looking to explore how to integrate meaningful challenges into your platforms, let's talk. Together, we can create the next generation of AI-driven tools that benefit society as a whole.
