
AI Trustworthiness is Your New Readiness Strategy

Written by The One Model Team | Dec 3, 2025 4:03:23 PM


At People Analytics World New York, one of the standout sessions came from our own Hayley Bresina, who emphasized that AI Readiness isn’t a tech strategy; it’s a trust strategy. This post breaks down how preparedness, governance, and strong data practices create AI systems leaders and employees can rely on, and it closes with a bold 30-day challenge. Are you in?

 

The talk's central thesis is simple: ChatGPT, Copilot, insert your favorite Large Language Model (LLM) here; they are all impressive. Using them naively is not. AI tools are only useful when they're built on top of trusted data. So let's start by redefining what AI Readiness means, then look at the factors that both build and break trust when it comes to AI in your people analytics practice.

Defining Real "AI Readiness"

AI Readiness is the ability to make repeatable, auditable people decisions with data that's explainable and responsibly managed before, during, and after AI is applied.

It’s Not a Shopping List Item to Check Off

AI Readiness is the ability to:

  • Make repeatable decisions

  • Provide auditable logic

  • Explain how data moved

  • Demonstrate responsible use before, during, and after AI is applied

In short, if you can't trace a metric's origin, or even properly define it, you’re not ready for AI to scale it.

 

Buying an LLM is Easy

Anyone can spend $20 a month on an AI subscription like ChatGPT or Copilot. That subscription won't make you any more competitive than the other organizations in your industry that have purchased the same LLM.


What actually differentiates organizations is:

  • Trusted, explainable data

  • Tools built specifically for People Analytics

  • Clear human expertise in the loop

  • Repeatable, defensible decisions

 

 

Why Trust Comes Before Tools

Trust is like a seatbelt: you don’t think about it when it works, but its reliability determines whether you feel safe moving forward. The same principle applies to data and AI. If you can’t trust the answers your data provides today, adding more technology won’t fix the foundation. And if you’re giving people insights without being able to explain how you got there, they won’t see you as someone they can rely on for the right answers.

Hayley points out that no one wants AI-generated decisions without expert oversight.
The moment we blindly outsource judgment to tools is the moment we lose credibility.

 

The Honesty Test

The honesty test is simple: If you asked your data or your AI system a question, would you believe the answer enough to make a real decision on it?

Most organizations want to say yes, but when you dig deeper, gaps appear. The honesty test forces you to confront whether your data is clean, your logic is transparent, and your outputs are explainable. If you can’t clearly show how an insight was generated or trace it back to trustworthy inputs, the system fails the honesty test, no matter how sophisticated the technology layered on top may be. It’s a gut-check for readiness and a reminder that trust is earned long before AI enters the equation.

 

Where Trust Breaks: The 5 Fracture Points

Trust is like a bridge: it takes time, engineering, and repeated reinforcement to build, yet one crack can make everyone hesitate to cross. Hayley outlines the most common places CHROs lose confidence in people data. Being open about these issues, and prepared to answer questions about them, will help you build trust with leaders.

1. Data Lineage

Leaders ask: Where did this number come from and why did it change?
If teams can’t answer, trust collapses.

2. Data Quality

One error or inconsistency across systems (HRIS vs. payroll) stalls decisions instantly, and that error amplifies incorrect answers as it flows into downstream analysis.

3. Access

Too much access creates risk and version chaos, while too little access prevents leaders from acting in time.

4. Bias

Historical inequities live inside the data. If unmanaged, AI will scale them; one simple check is sketched just after this list.

5. Security

Not just breaches: simple sloppiness (files on laptops, no backups) breaks trust fast.
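To make the bias point above concrete, here is a minimal sketch of one widely used screen, the four-fifths (80%) rule, which compares selection rates across groups. The group labels, counts, and threshold handling are hypothetical, for illustration only; they are not from Hayley's talk or One Model's product.

```python
# Minimal sketch of a four-fifths (80%) rule check on selection rates.
# The groups and counts below are hypothetical, for illustration only.

def selection_rate(selected: int, total: int) -> float:
    """Share of candidates in a group who were selected."""
    return selected / total if total else 0.0

def adverse_impact_ratio(rates: dict[str, float]) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

# Hypothetical promotion-flagging outcomes by group.
outcomes = {
    "group_a": {"selected": 45, "total": 100},
    "group_b": {"selected": 28, "total": 100},
}

rates = {g: selection_rate(o["selected"], o["total"]) for g, o in outcomes.items()}
ratio = adverse_impact_ratio(rates)

print(f"Selection rates: {rates}")
print(f"Adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Flag for review: ratio falls below the four-fifths guideline.")
```

A check like this is a starting point rather than a verdict, but running it before an AI system scales a decision is exactly the kind of preparation that keeps historical inequities from being amplified.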

 

 

Two Ends of the Trust Spectrum

Hayley introduces Carl and Lori, representing opposite ends of AI readiness. Carl, representing the low-trust end, struggles with data that’s hard to validate. In this scenario, leadership relies mainly on financial metrics, and people data only matters when there’s a crisis. Decisions are reactive, and the effort to even get a number slows strategic thinking. On the other end, Lori embodies higher trust: her organization delivers daily, reliable metrics that stand up to scrutiny, allowing leaders to focus on strategy instead of debating numbers. Through this contrast, Hayley shows that trust isn’t binary; it’s a continuum, and moving toward the Lori end requires intentional preparation, transparency, and governance of data and processes.

Carl – Low Trust

  • Data is hard to validate

  • Executives rely on financials, not people data

  • Decisions only happen in crisis

Lori – High Trust

  • Daily metrics straight from trusted systems

  • Definitions stand up to scrutiny

  • Conversations stay focused on strategy

Most organizations fall closer to Carl than Lori.

 

 

Why Most Organizations Still Aren’t Ready

To understand how ready organizations really are for AI, One Model surveyed CHROs and then paired our findings with Microsoft Research. The research combined real-world insights from executives with quantitative data, giving a clear picture of confidence levels in people metrics, access to AI tools, and preparedness for AI-driven decision-making. This combination of data allowed Hayley to highlight common gaps in data trust, readiness, and governance, showing that while many professionals are experimenting with AI, a majority of organizations still face foundational challenges before AI can be scaled responsibly.

  • Only 33% of CHROs feel highly confident in core people metrics

  • Only 59% of People Analytics pros have company-approved AI tools

  • 82% use AI anyway, leading to unsanctioned shadow use

  • Waiting for data destroys confidence and slows decision-making

Shadow AI isn’t just unsafe; it’s less effective.

Are you AI Ready? Take our Quiz.

 

 

Why LLMs Need Human Expertise

Hayley outlines five major blind spots with LLMs, like ChatGPT, that should give you pause when trusting their raw outputs:

1. They flatter us

They reinforce our ideas, right or wrong.

2. They don’t reason like humans

They predict; they don’t understand.

3. They’re confidently wrong

Fluent output ≠ accurate insight.

4. They rarely say “I don’t know”

They produce plausible answers that sound factual.

5. They weren’t built for People Analytics

They’re trained on the internet, not on performance reviews, comp models, or workforce planning. People Analytics-specific AI must therefore be fine-tuned and validated by experts.

 

 

The Trust Framework: Turning Expertise Into Infrastructure

Hayley introduces “trust movers”: structured checkpoints designed to make AI responsible, repeatable, and defensible. These aren’t just abstract principles; they’re practical mechanisms that turn human expertise into reliable infrastructure, the stepping stones that build trust. From making metrics explainable and standard across the organization, to embedding quality checks, access controls, bias audits, and full audit trails, trust movers ensure every AI-driven decision can be traced, defended, and aligned with ethical and legal standards. By putting these checkpoints in place, organizations move from simply asking colleagues to “trust us” to demonstrating exactly how trust is built into every step of the process. A minimal sketch of one such checkpoint, an audit-trail record, follows the list below.

1. Make AI Legible

  • Explainable metrics

  • Clear definitions

  • Shared organizational glossary

2. Protect System Integrity

  • Quality checks

  • Anomaly flags

  • Access and privacy governance

3. Hold AI Accountable

  • Bias checks

  • Fairness audits

  • Impact assessments

4. Prove the Process

  • Full audit trails

  • Model versioning

  • Documented overrides and approvals
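To show what “proving the process” can look like in practice, here is a minimal sketch of an append-only audit-trail record for an AI-assisted decision. The field names, workflow label, and file format are our own assumptions for illustration, not a prescribed schema.

```python
# Minimal sketch of an audit-trail entry for an AI-assisted people decision.
# Field names and structure are illustrative assumptions, not a standard schema.

import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DecisionAuditRecord:
    workflow: str                 # e.g. "promotion_readiness_flag"
    model_version: str            # which model/prompt version produced the output
    inputs_summary: dict          # metric definitions and source systems used
    ai_output: str                # what the AI recommended
    human_reviewer: str           # who signed off
    override: bool                # did the human change the AI's recommendation?
    override_reason: str = ""     # documented rationale when override is True
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionAuditRecord(
    workflow="promotion_readiness_flag",
    model_version="workforce-model-2025-11-01",   # hypothetical version label
    inputs_summary={"tenure": "HRIS", "performance": "talent_review_2025"},
    ai_output="Flagged as promotion-ready",
    human_reviewer="people_analytics_lead",
    override=True,
    override_reason="Employee returned from parental leave; ratings window incomplete.",
)

# Append-only log so every decision can be traced, defended, and re-checked later.
with open("decision_audit_log.jsonl", "a") as log:
    log.write(json.dumps(asdict(record)) + "\n")
```

The design choice that matters is the append-only log: every output, sign-off, and override is written down at the moment it happens, so the decision can be traced and defended later rather than reconstructed from memory.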

 

 

Industry Standards are Fuel
(Not Handcuffs)

Hayley highlights two major frameworks and notes that while these standards exist, they are optional, not law. She emphasizes that understanding which vendors follow these guidelines can be an important evaluation tool. Let's dive into those standards:

NIST AI Risk Management Framework

The NIST AI Risk Management Framework provides a clear, practical workflow for building trust in AI systems. It guides organizations through four key functions: Govern → Map → Measure → Manage

1. Govern

Establishing ownership and accountability

2. Map

Understanding systems, data flows, and dependencies

3. Measure

Tracking outcomes, performance, and risks

4. Manage

Responding to issues and continuously improving processes

By following this structured approach, organizations can ensure AI is applied responsibly, consistently, and in a way that builds confidence among leaders and employees alike.

ISO 42001

ISO 42001 is the first global standard for AI management, providing a structured approach to demonstrate responsible AI practices. It emphasizes clearly defined roles, robust policies, thorough documentation, and verifiable evidence to show that safeguards and processes are in place. 

ISO 42001 is focused on:

  • Roles
  • Policies
  • Documentation
  • Evidence

Together, the NIST framework and ISO 42001 help organizations not only implement trustworthy AI but also prove that their systems are ethically, legally, and operationally sound. Did you know that One Model is ISO certified?

 

 

How to Evaluate Vendors
(Green Flags vs. Red Flags)

When evaluating AI vendors, it’s important to look beyond marketing promises and focus on signals that indicate whether a solution is truly trustworthy. Green flags show that a vendor has built safeguards into their technology and processes, while red flags highlight shortcuts, a lack of transparency, or missing safeguards that could put your organization, and your people, at risk.

As Hayley emphasizes, in People Analytics, moving fast and breaking things isn’t an option: mistakes don’t just cost money; they affect real careers and trust.

Green Flags 

  • Explainable outputs

  • SOC 2 and privacy certifications

  • Human sign-off points

  • People-centric outcomes

Red Flags

  • Vague training data

  • Limited transparency (“trust us, it’s proprietary”)

  • Rushed deployments

  • No performance or bias testing

 

 

The 30-Day Blueprint:
Start Small, Start Real

Hayley’s practical framework for building trusted AI, one decision at a time:

1. Pick a single “AI-in-the-loop” workflow

Example: flagging promotion-ready employees

  • Define your glossary

  • Add caveats

  • Set human checkpoints

2. Turn on guardrails and logging

  • Sanctioned AI tools only

  • Logged prompts, overrides, and versions

  • Training for tricky situations

3. Run a red-team drill

Stress-test the AI with edge cases (a minimal sketch follows after this list):

  • Parental leave returns

  • Underpromoted groups

  • Missing data scenarios

Repeat for recruiting, pay equity, and more. Trust grows one workflow at a time.
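As an illustration of what a red-team drill might look like once a workflow is in place, here is a minimal sketch of explicit edge-case checks around a hypothetical promotion-readiness flag. The function flag_promotion_ready and its rules are invented for this example; the point is that each edge case from the drill becomes a repeatable, automated test.

```python
# Minimal red-team-style checks for a hypothetical promotion-readiness flag.
# flag_promotion_ready() and its rules are invented for illustration only.

from typing import Optional

def flag_promotion_ready(
    months_in_role: Optional[int],
    performance_score: Optional[float],
    recently_returned_from_leave: bool = False,
) -> str:
    """Return 'ready', 'review', or 'insufficient_data' for a single employee."""
    if months_in_role is None or performance_score is None:
        return "insufficient_data"   # missing data must never be silently scored
    if recently_returned_from_leave:
        return "review"              # route leave returns to a human reviewer
    if months_in_role >= 18 and performance_score >= 4.0:
        return "ready"
    return "review"

def test_missing_data_is_not_scored():
    assert flag_promotion_ready(None, 4.5) == "insufficient_data"

def test_parental_leave_return_goes_to_human_review():
    assert flag_promotion_ready(24, 4.6, recently_returned_from_leave=True) == "review"

def test_strong_candidate_is_flagged():
    assert flag_promotion_ready(24, 4.6) == "ready"

if __name__ == "__main__":
    test_missing_data_is_not_scored()
    test_parental_leave_return_goes_to_human_review()
    test_strong_candidate_is_flagged()
    print("All edge-case checks passed.")
```

A fuller drill would also compare flag rates across underpromoted groups, along the lines of the bias check sketched earlier, and re-run these tests whenever the model or its inputs change.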

 

 

Are you ready to become AI Ready? Connect with us today.