Balancing skepticism and blind trust: critical thinking as the key to responsible and effective use of GenAI

Thema:
Artificial intelligence
14 January 2026

Which generative AI (GenAI) tool should you deploy for your employees? What access should it have? What rules should govern its use? These are questions many organisations are grappling with right now. But there’s another crucial element that deserves attention: how do you keep users motivated to think critically while using these tools? TNO is working with major organisations to develop a critical thinking toolbox with practical resources to explore and improve this.

Wrong information, delivered convincingly

Critical thinking with GenAI is about assessing AI output and knowing when to trust the machine. ‘People are naturally quite capable of critical thinking’, says Jasper van der Waa, senior researcher at TNO’s Human-Machine Teaming department. ‘But GenAI systems sound very convincing, even when they’re wrong.’

His colleague Liv Ziegfeld adds: ‘You start doubting yourself before you doubt the model. That’s what makes it so tricky.’ It’s one of the pitfalls employees need to be prepared for when using GenAI.

The paradox of smart tools

GenAI tools like ChatGPT have become mainstream almost overnight, often without training or guidelines. This creates tension between blind trust (‘AI said it, so it must be right’) and unwarranted skepticism, which leaves potential untapped.

Critical thinking doesn’t mean putting the brakes on AI use, the researchers emphasize. ‘On the contrary, support features within GenAI tools can actually encourage and strengthen critical thinking’, says Jasper. Think of a chat app for a medical helpline: ‘Such a tool could include a traffic-light indicator showing whether the LLM can verify certain statements with sources or not’, Jasper explains. ‘That little nudge can make someone pause and check.’
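
Such an indicator need not be complicated. As a minimal illustrative sketch (the function name, thresholds, and keyword-overlap heuristic are all hypothetical, not a TNO implementation), a statement could be scored by how well retrieved sources cover it:

```python
# Hypothetical sketch of a traffic-light indicator for GenAI output:
# green when sources support a statement, amber when support is partial,
# red when no source backs it up. Real systems would use far better
# verification than keyword overlap; this only illustrates the nudge.

def traffic_light(statement: str, sources: list[str]) -> str:
    """Return 'green', 'amber', or 'red' for a model statement,
    based on naive keyword overlap with retrieved sources."""
    words = {w.lower().strip(".,") for w in statement.split() if len(w) > 3}
    if not words:
        return "red"
    best = 0.0
    for src in sources:
        src_words = {w.lower().strip(".,") for w in src.split()}
        best = max(best, len(words & src_words) / len(words))
    if best >= 0.6:
        return "green"  # well supported, though still worth a glance
    if best >= 0.3:
        return "amber"  # partially supported: pause and check
    return "red"        # unsupported: verify before use
```

Even a crude signal like this changes the interaction: a red light prompts the user to look for sources themselves rather than accept the answer at face value.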

Why smarter GenAI demands even more critical thinking

GenAI tools keep improving, so won’t that solve these problems? Jasper hears this question often from AI enthusiasts. His answer: ‘Some issues are inherent to how these models work. It’s naïve to think they’ll just disappear.’

One deeply rooted trait of language models is that they’re designed to please users. They sound helpful and convincing, regardless of accuracy. As they get better, errors become harder to spot.

Measuring critical thinking: how do your employees score?

Before organisations jump to solutions, they need to know where they stand. ‘If you have an application in mind or a pilot running, start by asking: are employees critical enough?’, Jasper explains. ‘And if not, where exactly is the problem?’

TNO is developing targeted measurement tools for this. ‘We can send out a questionnaire or run a simple A/B test’, says Jasper. ‘That gives insight into how well employees score on critical thinking. That’s the foundation for designing features that support it.’
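
To make the A/B idea concrete, here is an illustrative sketch with hypothetical data (not TNO’s actual method or results): one group uses the GenAI tool as-is, the other gets a critical-thinking support feature, and we compare how often each group catches a deliberately planted error using a standard two-proportion z-test.

```python
# Illustrative A/B comparison of error-detection rates between two groups
# of employees. Group A uses the tool as-is; group B gets a support
# feature. All numbers below are hypothetical.
import math

def two_proportion_z(caught_a: int, n_a: int, caught_b: int, n_b: int) -> float:
    """z-statistic for the difference in error-detection rates
    between group A and group B (pooled standard error)."""
    p_a, p_b = caught_a / n_a, caught_b / n_b
    p_pool = (caught_a + caught_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: 18/50 planted errors caught without support,
# 31/50 caught with the support feature enabled.
z = two_proportion_z(18, 50, 31, 50)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a real difference at the 5% level
```

A result like this would indicate that the support feature measurably improves critical scrutiny, which is exactly the kind of evidence needed before rolling a feature out widely.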

Responsible AI that works

TNO is building responsible AI that works: systems that are not only effective but also ethical. Responsible AI empowers people, increases societal impact, and keeps control with the user. European values such as privacy and security make AI more usable and reliable, and ensure broad acceptance in society.

Working together on solutions for key pitfalls

With organisations, we aim to build a critical thinking toolbox to tackle the main pitfalls in GenAI use. This toolbox should include not only technical solutions but also a list of best practices with guidelines for crucial decisions, such as which tasks should and shouldn’t be delegated to GenAI.

‘The problem is that language models sound very convincing, even when they hallucinate’, Jasper explains. ‘ChatGPT can even invent sources that look real.’ Visual aids in specific tools can encourage users to double-check outputs, but the responsibility to stay critical remains with the user.

‘Many models come from the U.S. Big Tech industry’, says Jasper. ‘That bias, political or cultural, creeps into communication.’ LLMs also amplify our own biases. Medical queries are a prime example: ‘When asking an LLM about a health issue, people often seek confirmation for a self-diagnosis’, Liv notes. ‘GenAI just goes along with it.’

Solutions lie both in the tool and the user. An LLM that challenges you would be a step forward, says Liv. The model could respond: ‘I notice you’re leaning in a certain direction. Shall we reframe the question more broadly? That opens up both perspectives.’

Prompt quality hides more pitfalls, Liv observes: ‘People often aren’t clear on what they want from an LLM. They think: “I have a topic or task, I’ll just ask ChatGPT.” Then they send a half-baked prompt. The answer they get is treated as correct, even though there may be dozens of other relevant options.’

Time pressure makes this worse. Take a helpdesk scenario where staff need quick answers. A toolbox could help here with clear guidelines on when and how to use GenAI, especially in time-critical situations.

A key part of best practices is deciding which tasks to delegate to GenAI at all. ‘We advise organisations on which tasks belong to an LLM and which are core to an employee’s role’, Liv explains. ‘You don’t want to outsource those, otherwise employees lose essential skills.’

‘Imagine’, says Jasper, ‘that report you used to write yourself is now partly generated by an LLM. Tomorrow your manager asks you to present it. You’re stuck.’ As GenAI systems become more advanced, we delegate more and more to them. Research suggests that people don’t internalize the process when they let AI do the work, so they remember less of it.

Organisations must find ways to keep employees actively engaged. Support features in GenAI tools can help, by requiring users to summarize key points or justify decisions, ensuring understanding remains intact.
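
As a toy sketch of such a feature (the function, thresholds, and checks are hypothetical, not a product of TNO), a tool could refuse to mark a generated report as accepted until the user has written a summary in their own words:

```python
# Hypothetical engagement gate: before accepting a GenAI-generated text,
# the user must summarise its key points themselves. The gate checks only
# that the summary is substantive and not a verbatim paste; assessing
# actual understanding would need a much richer check.

def engagement_gate(generated_text: str, user_summary: str,
                    min_words: int = 20) -> bool:
    """Accept the output only if the user wrote a summary of at least
    min_words that is not mostly copied from the generated text."""
    summary_words = user_summary.split()
    if len(summary_words) < min_words:
        return False  # too short to demonstrate engagement
    # reject summaries whose words are overwhelmingly lifted verbatim
    copied = sum(1 for w in summary_words if w in generated_text)
    return copied / len(summary_words) < 0.8
```

The point of such a gate is not policing but friction in the right place: the small extra effort of summarizing forces the user to process the content, which is exactly what delegation otherwise removes.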

Practical steps to get started

Organisations navigating these pitfalls can begin with a few essential steps to strengthen critical thinking in GenAI use:

  1. Start with awareness
    Recognize the pitfalls in your organisation. Identify high-risk applications such as medical diagnoses, legal advice, or financial decisions.
  2. Map critical thinking capacity
    Measure when and with which tools employees rely too much on GenAI or distrust it unnecessarily. What specific risks exist in your context? TNO develops questionnaires and test methods for this.
  3. Define your GenAI task strategy
    Make deliberate choices about which tasks to outsource to GenAI. Preserve essential human skills and avoid competence loss where you can’t afford it.
  4. Implement targeted solutions
    From visual warnings to GenAI that challenges users: choose tools that fit your workflows and culture. Pay extra attention to time-critical situations. TNO is developing prototypes for this.
  5. Create feedback loops
    Ensure continuous evaluation of GenAI use. Learn from mistakes without blaming employees. Share successes and failures to grow together.

AI Agents: why critical thinking matters even more

The importance of critical thinking and responsible use will only grow as tools become more advanced. AI agents will soon access email, calendars, and project tools. ‘You could assign tasks to your AI agent in the morning and have them autonomously executed’, Jasper sketches the near future. ‘The results await you in the afternoon, but if you don’t know what it did, how do you stay accountable?’

The more autonomous GenAI becomes, the more vital it is for humans to assess what’s happening and when to intervene. A healthy critical mindset also ensures GenAI is used more effectively.

Critical thinking as an innovation accelerator

‘AI enthusiasts often think we only see problems’, Jasper admits. ‘But by supporting critical thinking, you actually unlock more value from GenAI. You combine human strength with AI power.’ Liv adds: ‘We’re not saying you should ditch technology. It’s about complementing it to manage risks.’

TNO seeks partners to further develop the critical thinking toolbox. ‘The ideal scenario’, Jasper describes, ‘is an organisation saying: we want to apply GenAI to a specific case, for a specific group, with clear objectives. Then we assess how critical employees are and where support is needed. Our goal is to design the application so it naturally stimulates and supports critical thinking.’

Contact TNO to explore how critical thinking can be the key to successful GenAI implementation in your organisation.

Building Responsible GenAI Implementation together

TNO develops comprehensive tools to help organisations implement GenAI responsibly across all fronts. We seek partners to co-create practical instruments for:

  • Critical thinking for users – measurement tools and support features for employees
  • Governance structures – frameworks for responsible GenAI policies
  • Evaluation of GenAI performance – technical testing and quality assurance

Together, we ensure generative AI becomes a force for positive transformation in your organisation.

Meet our expert

  • Jasper van der Waa

    Senior Researcher Human-Machine Teaming

  • Liv Ziegfeld

    Scientist Human-Machine Teaming

Get inspired


From reactive to proactive: How organisations gain control over GenAI governance

Information type:
Insight
16 December 2025
TNO develops practical pathways that guide organisations through this uncharted governance territory, transforming reactive ‘firefighting’ into proactive control.

How TNO is leading the drive towards sovereign, responsible Dutch AI

Information type:
Insight
23 October 2025

TNO’s Vision for Responsible AI That Works

Information type:
Article
10 October 2025

ObjectivEye: AI-assisted human recruitment

Information type:
Insight
5 September 2025

Large dataset from news organisations for Dutch AI language model GPT-NL

Information type:
News
17 July 2025