We wrote this paper because we believe AI/ML has the potential to be a powerful technology for supporting better talent decisions in organizations – and also the potential to be mishandled in ways that are unethical and harm individuals and groups of employees.
In this paper, we bring some process-oriented substance to a conversation that has too often been dominated by hyperbolic “AI/ML is great!” and “AI/ML will destroy us!” headlines.
You will find a set of Guiding Principles …
And, most importantly, a set of Processes for Ethical ML Stewardship that we believe you should be discussing (immediately) within your organizations. Each of these processes (and sub-processes) is defined in the paper in plain, readable language to enable the widest possible readership.
We believe we are at a delicate and critical point in time: AI/ML has been embedded into many HR technology solutions without sufficient governance among the buying organizations. Vendors (like One Model) should be challenged to provide sufficient transparency into their AI/ML models – model features, performance measures, bias detection, review/refresh commitments, and so on.
One Model has built its “One AI” machine learning toolset to support the processes our customers can use to ensure ethical model design and outputs.
To be clear, this paper is not a promotional piece for One Model; it is, however, absolutely intended to challenge the sellers and buyers of HR technology to get this right.
Without an appropriate focus on ethics, AI/ML products and projects could become too risky for organizations and be summarily eliminated – along with all of their potential value for individuals and organizations.