Will your People Analytics AI activity create legal concerns?

Blog Post - JG

In a recent editorial (here), Emerging Intelligence columnist John Sumser explains how pending EU Artificial Intelligence (AI) regulations will affect the global use of AI. A summary of those regulations can be found here.

You and your organization should take an interest in these developments. The moral and ethical concerns associated with the application of AI are something we must all understand in the coming years. Ignorance of AI capabilities and ramifications can no longer be an excuse.

Sumser explains how this new legislation will add obligations and restrictions beyond existing GDPR requirements. Legal oversight is expected to follow, and it may create liability for People Analytics users and their vendors. These regulations may bode ill for some People Analytics providers. It is worth your while to review how your current vendor addresses the three primary topics of these regulations:

  • Fairness – This addresses both the training data used in your predictive model and the model itself. Potential bias toward attributes like gender or race may be obvious, but hidden bias often exists. Your vendor should identify biased data and allow you to either remove it or debias it.
  • Transparency – All activity related to your predictive runs should be identifiable and auditable. This includes selection and disclosure of data, the strength of the models developed, and configurations used for data augmentation.
  • Individual control over their own data – Workers must retain control over their personal data; that relationship ultimately exists between the worker and their employer.
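To make the fairness point concrete, here is a minimal sketch of one common bias check, the four-fifths (disparate impact) rule. The group names, data, and threshold below are purely illustrative assumptions, not One Model's implementation or any vendor's actual method:

```python
# Hypothetical sketch of a disparate-impact (four-fifths rule) check.
# Groups, data, and the 0.8 threshold are illustrative, not any vendor's code.

def selection_rate(outcomes):
    """Fraction of favorable outcomes (1s) in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Ratios below 0.8 are commonly flagged for review."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high

# Illustrative data: 1 = favorable model prediction (e.g., "promote"), 0 = not.
men = [1, 1, 1, 0, 1, 1, 0, 1]      # selection rate 0.75
women = [1, 0, 0, 1, 0, 1, 0, 0]    # selection rate 0.375

ratio = disparate_impact(men, women)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50
if ratio < 0.8:
    print("potential bias flagged for review")
```

A check like this is only a starting point; a vendor should also surface which features drive the disparity so you can decide whether to remove or debias the underlying data.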

Sumser’s article expertly summarizes a set of minimum expectations your employees deserve. Our opinion is that vendors should have already self-adopted these types of standards, and we are delighted this issue is being raised.

At One Model we are constantly examining the ethical issues associated with AI. One Model already meets and exceeds the Fairness and Transparency recommendations; not begrudgingly, but happily, because it is the right thing to do.

One Model has long understood that the HR industry has an obligation to develop rigor and understanding around Data Science and Machine Learning. The need for regulation and a legal standard for ethics has grown along with the amount of snake oil and obscurity heavily marketed by some HR vendors.

One Model’s ongoing plan to empower your HR AI initiatives includes:

  • Radical transparency.
  • Full traceability and automated version control (data + model).
  • Transparent local- and model-level justifications for the predictions made by OneAI, our Machine Learning component.
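To illustrate what a local (per-prediction) justification looks like, here is a minimal sketch for a linear model, where each feature's contribution is simply its value times its weight. The feature names, weights, and model below are hypothetical assumptions for illustration, not how OneAI computes its explanations:

```python
# Hypothetical sketch of a local justification for a linear model.
# Feature names and weights are made up; this is not OneAI's method.

def local_justification(weights, bias, features):
    """Break one prediction into per-feature contributions."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    prediction = bias + sum(contributions.values())
    return prediction, contributions

# Illustrative attrition-risk model (assumed weights and features).
weights = {"tenure_years": -0.05, "overtime_hours": 0.02, "pay_ratio": -0.30}
bias = 0.40

risk, why = local_justification(
    weights, bias,
    {"tenure_years": 2, "overtime_hours": 10, "pay_ratio": 0.9})

print(f"predicted risk: {risk:.2f}")
# List contributions from largest to smallest effect on this prediction.
for name, c in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {c:+.2f}")
```

An explanation in this spirit lets an analyst see, for a single employee, which factors pushed a prediction up or down, which is the foundation of the auditability described above.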

By providing justifications and explanations for our decision-making process, One Model builds paths for user education and auditability across both simple and complex statistics. Our objective is to advance the HR landscape by up-skilling analysts in their day-to-day jobs while still providing cutting-edge statistics and machine learning. Clear, educational paths to statistics are at the forefront of our product design and roadmap, and One Model is just getting started.

You should promptly schedule a review of the AI practices being applied to your employee data. Ignoring what AI can offer risks putting your organization at a competitive disadvantage, while deploying AI incorrectly may expose you to legal risk, employee distrust, compromised ethics, and faulty conclusions. One Model is glad to share our expertise in People Analytics AI with you and your team.

High-level information on our OneAI capability is available in a brief video and accompanying documents:

For a more detailed discussion, please schedule a convenient time for a personal conversation.

http://bit.ly/OneModelMeeting

