At People Analytics World New York, one of the standout sessions came from our own Hayley Bresina, who emphasized that AI Readiness isn’t a tech strategy; it’s a trust strategy. This post breaks down how preparedness, governance, and strong data practices create AI systems leaders and employees can rely on, and it closes with a bold 30-day challenge. Are you in?
The talk's central thesis is simple: ChatGPT, Copilot, or your favorite Large Language Model (LLM) may be impressive, but using them naively is not. AI tools are only useful when they're built on top of trusted data. So let's start by redefining what AI Readiness means and looking into the factors that both build and break trust when it comes to AI use in your people analytics practice.
AI Readiness is the ability to make repeatable, auditable people decisions with data that's explainable and responsibly managed before, during, and after AI is applied.
AI Readiness is the ability to:
Make repeatable decisions
Provide auditable logic
Explain how data moved
Demonstrate responsible use before, during, and after AI is applied
In short, if you can't trace a metric's origin, or even properly define it, you’re not ready for AI to scale it.
Anyone can spend $20 a month on an AI subscription like ChatGPT or Copilot. That subscription alone is not going to make you more competitive than any other organization in your industry that has also purchased that LLM. What actually sets you apart is:
Trusted, explainable data
Tools fit specifically to People Analytics
Clear human expertise in the loop
Repeatable, defensible decisions
Trust is like a seatbelt: you don’t think about it when it works, but its reliability determines whether you feel safe moving forward. The same principle applies to data and AI. If you can’t trust the answers your data provides today, adding more technology won’t fix the foundation. And if you’re giving people insights without being able to explain how you got there, they won’t see you as someone they can rely on for the right answers.
Hayley points out that no one wants AI-generated decisions without expert oversight.
The moment we blindly outsource judgment to tools is the moment we lose credibility.
The honesty test is simple: If you asked your data or your AI system a question, would you believe the answer enough to make a real decision on it?
Most organizations want to say yes, but when you dig deeper, gaps appear. The honesty test forces you to confront whether your data is clean, your logic is transparent, and your outputs are explainable. If you can’t clearly show how an insight was generated or trace it back to trustworthy inputs, the system fails the honesty test, no matter how sophisticated the technology layered on top may be. It’s a gut-check for readiness and a reminder that trust is earned long before AI enters the equation.
Trust is like a bridge: it takes time, engineering, and repeated reinforcement to build, yet one crack can make everyone hesitate to cross. Hayley outlines the most common places CHROs lose confidence in people data. Being open about these issues, and prepared to answer questions about them, will help you build trust with leaders.
Leaders ask: Where did this number come from and why did it change?
If teams can’t answer, trust collapses.
One error or inconsistency across systems (HRIS vs. payroll) stalls decisions instantly, and errors compound as they feed into downstream analysis.
Too much access creates risk and version chaos, while too little access prevents leaders from acting in time.
Historical inequities live inside the data. If unmanaged, AI will scale them.
Not just breaches: simple sloppiness (files on laptops, no backups) breaks trust fast.
Hayley introduces Carl and Lori, who represent opposite ends of AI readiness. Carl, at the low-trust end, struggles with data that’s hard to validate. In this scenario, leadership relies mainly on financial metrics, and people data only matters when there’s a crisis. Decisions are reactive, and the effort to even get a number slows strategic thinking. On the other end, Lori embodies higher trust: her organization delivers daily, reliable metrics that stand up to scrutiny, allowing leaders to focus on strategy instead of debating numbers. Through this contrast, Hayley shows that trust isn’t binary; it’s a continuum, and moving toward the Lori end requires intentional preparation, transparency, and governance of data and processes.
In Carl’s world:
Data is hard to validate
Executives rely on financials, not people data
Decisions only happen in crisis
In Lori’s world:
Daily metrics straight from trusted systems
Definitions stand up to scrutiny
Conversations stay focused on strategy
Most organizations fall closer to Carl than Lori.
To understand how ready organizations really are for AI, One Model surveyed CHROs and then paired our findings with Microsoft Research. The research combined real-world insights from executives with quantitative data, giving a clear picture of confidence levels in people metrics, access to AI tools, and preparedness for AI-driven decision-making. This combination of data allowed Hayley to highlight common gaps in data trust, readiness, and governance, showing that while many professionals are experimenting with AI, a majority of organizations still face foundational challenges before AI can be scaled responsibly.
Only 33% of CHROs feel highly confident in core people metrics
Only 59% of People Analytics pros have company-approved AI tools
82% use AI anyway → leading to unsanctioned shadow use
Waiting for data destroys confidence and slows decision-making
Shadow AI isn’t just unsafe, it’s less effective.
Hayley outlines five major blind spots with LLMs, like ChatGPT, that should give you pause when trusting their raw outputs:
They reinforce our ideas, right or wrong.
They predict, not understand.
Fluent output ≠ accurate insight.
They produce plausible answers that sound factual.
They’re trained on the internet, not on performance reviews, comp models, or workforce planning. Therefore, People Analytics-specific AI must be fine-tuned and validated by experts.
Hayley introduces “trust movers”: structured checkpoints designed to make AI responsible, repeatable, and defensible. These aren’t just abstract principles; they’re practical mechanisms that turn human expertise into reliable infrastructure, the stepping stones that build trust. From making metrics explainable and standard across the organization to embedding quality checks, access controls, bias audits, and full audit trails, trust movers ensure every AI-driven decision can be traced, defended, and aligned with ethical and legal standards. By putting these checkpoints in place, organizations move from simply asking colleagues to “trust us” to demonstrating exactly how trust is built into every step of the process. (A brief sketch of what a couple of these checkpoints can look like in practice follows the list below.)
Explainable metrics
Clear definitions
Shared organizational glossary
Quality checks
Anomaly flags
Access and privacy governance
Bias checks
Fairness audits
Impact assessments
Full audit trails
Model versioning
Documented overrides and approvals
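None of these checkpoints require exotic tooling. As a purely illustrative sketch (the metric, field names, and logging approach below are assumptions made for this post, not anything Hayley or One Model prescribes), here's what a shared glossary entry and an audit-trail record for a single metric calculation could look like in Python:

```python
import json
from datetime import datetime, timezone

# Hypothetical glossary entry: one shared, explainable definition per metric.
GLOSSARY = {
    "voluntary_turnover_rate": {
        "definition": "Voluntary terminations / average headcount for the period",
        "source_systems": ["HRIS", "payroll"],
        "owner": "People Analytics",
        "version": "1.2",
    }
}

def log_metric_run(metric: str, value: float, approved_by: str | None = None) -> dict:
    """Record which definition produced a number, from which systems, and who signed off."""
    entry = {
        "metric": metric,
        "value": value,
        "definition_version": GLOSSARY[metric]["version"],
        "source_systems": GLOSSARY[metric]["source_systems"],
        "run_at": datetime.now(timezone.utc).isoformat(),
        "approved_by": approved_by,  # documented approval or override, per the trust movers above
    }
    print(json.dumps(entry))  # in practice, write to a durable, queryable audit log
    return entry

log_metric_run("voluntary_turnover_rate", 0.087, approved_by="analyst@example.com")
```

The point isn't the code itself; it's that every number can answer the question leaders actually ask: where did this come from, and why did it change?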
Hayley highlights two major frameworks and notes that while these standards exist, they are optional, not law. She emphasizes that understanding which vendors follow these guidelines can be an important evaluation tool. Let's dive into those standards:
The NIST AI Risk Management Framework provides a clear, practical workflow for building trust in AI systems. It guides organizations through four key functions: Govern → Map → Measure → Manage
Govern: Establishing ownership and accountability
Map: Understanding systems, data flows, and dependencies
Measure: Tracking outcomes, performance, and risks
Manage: Responding to issues and continuously improving processes
By following this structured approach, organizations can ensure AI is applied responsibly, consistently, and in a way that builds confidence among leaders and employees alike.
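To make those four functions a little more concrete, here's a lightweight, illustrative checklist in Python. The questions are our paraphrase for this post, not official NIST language, and a real implementation would live in your governance documentation rather than in code:

```python
# Illustrative prompts for each NIST AI RMF function (paraphrased; not official NIST text).
AI_RMF_CHECKLIST = {
    "Govern": [
        "Who owns this AI-assisted workflow, and who is accountable for its outcomes?",
        "Which policies cover acceptable use, privacy, and escalation?",
    ],
    "Map": [
        "Which systems and data feed this model, and what depends on its outputs?",
        "Which people and decisions are affected?",
    ],
    "Measure": [
        "How are accuracy, bias, and drift tracked over time?",
        "What thresholds trigger a human review?",
    ],
    "Manage": [
        "When an issue is found, how is the response documented?",
        "How do lessons learned feed back into the process?",
    ],
}

for function, questions in AI_RMF_CHECKLIST.items():
    print(function)
    for question in questions:
        print(f"  - {question}")
```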
ISO 42001 is the first global standard for AI management, providing a structured approach to demonstrate responsible AI practices. It emphasizes clearly defined roles, robust policies, thorough documentation, and verifiable evidence to show that safeguards and processes are in place.
ISO 42001 is focused on:
Clearly defined roles and accountability
Robust policies
Thorough documentation
Verifiable evidence that safeguards and processes are in place
Both frameworks help companies build proof of responsible AI: together, the NIST framework and ISO 42001 help organizations not only implement trustworthy AI but also demonstrate that their systems are ethically, legally, and operationally sound. Did you know that One Model is ISO certified?
When evaluating AI vendors, it’s important to look beyond marketing promises and focus on signals that indicate whether a solution is truly trustworthy. Green flags show that a vendor has built safeguards into their technology and processes, while red flags highlight shortcuts, lack of transparency, or missing safeguards that could put your organization, and your people, at risk.
As Hayley emphasizes, in People Analytics, moving fast and breaking things isn’t an option: Mistakes don’t just cost money, they affect real careers and trust.
Green flags:
Explainable outputs
SOC 2 and privacy certifications
Human sign-off points
People-centric outcomes
Red flags:
Vague training data
Limited transparency (“trust us, it’s proprietary”)
Rushed deployments
No performance or bias testing
Hayley’s practical framework for building trusted AI, one decision at a time:
Example: flagging promotion-ready employees
Define your glossary
Add caveats
Set human checkpoints
Sanctioned AI tools only
Logged prompts, overrides, and versions
Training for tricky situations
Stress-test the AI with edge cases (see the sketch after this list):
Parental leave returns
Underpromoted groups
Missing data scenarios
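To make the stress test tangible, here's a minimal sketch. The flag_promotion_ready function, its thresholds, and its fields are invented for illustration only; the edge cases are the ones listed above, and the point is that the logic should defer to a human rather than guess when the data is thin:

```python
# Hypothetical promotion-readiness flag, used only to illustrate edge-case stress testing.
def flag_promotion_ready(employee: dict) -> bool | None:
    """Return True/False, or None when there isn't enough data for a defensible call."""
    if employee.get("months_of_performance_data", 0) < 6:
        return None  # missing-data scenario: route to a human checkpoint instead of guessing
    return employee["last_rating"] >= 4 and employee["time_in_role_months"] >= 18

# Edge cases from the list above: parental leave returns, underpromoted groups, missing data.
test_cases = [
    {"name": "recent parental leave return", "last_rating": 5, "time_in_role_months": 30,
     "months_of_performance_data": 4},   # expect None, not an unfair automatic "no"
    {"name": "long-tenured, never promoted", "last_rating": 4, "time_in_role_months": 60,
     "months_of_performance_data": 24},  # expect True; check whole groups aren't systematically missed
    {"name": "new hire with sparse data", "last_rating": 3, "time_in_role_months": 3,
     "months_of_performance_data": 2},   # expect None
]

for case in test_cases:
    print(case["name"], "->", flag_promotion_ready(case))
```

If the flag can't handle cases like these gracefully, that's a signal to tighten the definitions and human checkpoints before scaling it.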
Repeat for recruiting, pay equity, and more. Trust grows one workflow at a time.