
John Sumser: 12 Questions to Ask an Artificial Intelligence Solution

John Sumser, one of the most insightful industry analysts in HR, recently wrote an article providing guidance on the selection of machine learning/AI tools. That article can be found HERE and can serve as a rubric for reviewing AI and predictive analytics tools for use in your people analytics practice or HR operations.

Much of our work day is filled with conversations about the One Model tool and how it fits into an organization's People Analytics initiative. This is often a customer contact's first practical exposure to Artificial Intelligence (AI), so we invest a significant amount of time explaining AI and the dangers of misusing it.

Good Questions to Ask About Artificial Intelligence Solutions - And Our Answers!

Our product, One AI, delivers a suite of easy-to-use predictive pipelines and data extensions, allowing organizations to build, understand, and predict workforce behaviors. Artificial Intelligence in its simplest form is about automating a decision process.  We class our predictive modeling engine as AI because it is built to automate the decisions usually made by a human data scientist in building and testing predictive models. In essence, we’ve built our own automated machine learning toolkit that rapidly discovers, builds, and tests many hundreds of potential data features, predictive models, and parameter tunings to ultimately select the best fit for the business objective at hand.  Unlike other predictive applications in the market, One AI provides full transparency and configurability, which implicitly encompasses peer review.  Every predictive output is peer reviewable not only at a given moment in time but for all time.
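To make "automating the decisions of a data scientist" concrete, here is a minimal, generic sketch of automated model selection: several candidate models and hyperparameter grids are scored with cross-validation and the best fit is kept. This is an illustration of the general technique only, not One AI's actual implementation; the synthetic data stands in for a real workforce feature set and target.

```python
# Minimal sketch of automated model selection (illustrative, not One AI's code).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

# Stand-in for an engineered workforce dataset and a binary target
# such as "terminated within 12 months".
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

candidates = {
    "logistic_regression": (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    "random_forest": (RandomForestClassifier(random_state=0), {"n_estimators": [100, 300]}),
    "gradient_boosting": (GradientBoostingClassifier(random_state=0), {"learning_rate": [0.05, 0.1]}),
}

best_name, best_score, best_model = None, -1.0, None
for name, (estimator, grid) in candidates.items():
    # Score every hyperparameter combination with 5-fold cross-validation.
    search = GridSearchCV(estimator, grid, scoring="roc_auc", cv=5)
    search.fit(X, y)
    if search.best_score_ > best_score:
        best_name, best_score, best_model = name, search.best_score_, search.best_estimator_

print(f"selected {best_name} with cross-validated AUC {best_score:.3f}")
```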

This post will follow a Q&A style as we comment on each of John’s 12 critical questions to ask an artificial intelligence company.

1) Tell me about the data used to train the algorithms and models.

Ideally, all data available to One Model is used for feeding the machine learning engine - the more the better.  You cannot overload One AI because it will wade through everything you throw at it, decide which data points are relevant and how much history to use, and then select, clean, and position that data as part of its process.  This means you should feed every system available into the engine - HRIS, ATS, survey, payroll, absence, talent management, everything and the kitchen sink - as long as you are ethically okay with its potential use. This is not a one-size-fits-all algorithm; each model is unique to the customer, their data set, and their target problem.

The content of the training data can also be user-defined. Users decide what type of data is brought into the modeling process, choosing which variables, filters, or cuts will be offered.  If users want to specify how individual fields are treated, they can do so at any time, with the same types of levers they would have when building a model externally.
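As a hedged illustration of what "specifying how individual fields are treated" can look like in practice, the sketch below maps hypothetical HR columns to preprocessing choices. The column names and treatments are assumptions for the example, not One AI's configuration schema.

```python
# Illustrative field-treatment sketch using scikit-learn (not One AI's schema).
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical extract combining HRIS and absence data; column names are illustrative.
employees = pd.DataFrame({
    "department": ["Sales", "Engineering", "Sales"],
    "job_level": ["L2", "L4", "L3"],
    "location": ["Austin", "Brisbane", "Austin"],
    "tenure_months": [14, 52, 8],
    "salary": [62000, 98000, 55000],
    "absence_days": [3, 1, 7],
})

field_treatment = ColumnTransformer(
    transformers=[
        # Categorical HR fields become one-hot indicator columns.
        ("categorical", OneHotEncoder(handle_unknown="ignore"),
         ["department", "job_level", "location"]),
        # Numeric fields are standardized.
        ("numeric", StandardScaler(), ["tenure_months", "salary", "absence_days"]),
    ],
    remainder="drop",  # fields not listed are excluded from training
)

features = field_treatment.fit_transform(employees)
print(features.shape)  # rows x engineered feature columns
```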

2) How long will it take for the system to be trained?

Training time depends on the scope of the data and the machine learning pipeline.  The capacity to create models is intrinsically available in One AI, and training can take anywhere from 5 minutes to more than 20 hours.

For example, a scheduled re-training of a turnover prediction model for a customer with 15,000 employees completes in about 45 minutes.

3) Can we make changes to our historical data?

Yes. Data can be held static, or the model can use fresh data every time it is trained.  One AI acts as a data science orchestration toolkit that automates the data refresh, training, build, and ongoing maintenance of the model.  Models are typically scheduled to refresh on a regular basis, e.g. monthly.

With every run, extensive reports are created, time-stamped, and logged, so users can always return to a summary of what the data looked like, the decisions made, and the model's performance at any given time.
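A minimal sketch of this kind of orchestration appears below: each scheduled run retrains a model and writes a time-stamped summary so past runs can always be reviewed. The file layout, report fields, and synthetic data are illustrative assumptions, not One AI's actual report format.

```python
# Illustrative scheduled-retrain sketch with time-stamped run reports.
import json
from datetime import datetime, timezone
from pathlib import Path

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score


def scheduled_retrain(report_dir: Path) -> None:
    # Stand-in for freshly refreshed HR data pulled at run time.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

    model = RandomForestClassifier(random_state=0)
    auc = cross_val_score(model, X, y, scoring="roc_auc", cv=5).mean()
    model.fit(X, y)

    # Every run leaves behind a time-stamped summary that can be revisited later.
    run_stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    report = {"run": run_stamp, "rows": len(y), "cv_auc": round(float(auc), 4)}
    report_dir.mkdir(parents=True, exist_ok=True)
    (report_dir / f"results_summary_{run_stamp}.json").write_text(json.dumps(report, indent=2))


scheduled_retrain(Path("model_reports"))
```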

4) What happens when you turn it off? How much notice will we receive if you turn it off?

One AI models and pipelines are completely persisted.  They can be turned on and off with no loss of data or logic.  We are a data science orchestration toolset for building and managing predictive models at scale.


Is AI being offered in a solution for your HR Team?

Download our latest whitepaper to get the questions you should ask in the next sales pitch when someone is trying to sell you technology with AI.

Download the Guide



5) Do we own what the machine learned from us? How do we take those data with us?

Yes, customers own the results from their predictive models, and those results are easily downloaded. Results and models are based upon your organization's data.  One Model customers only see their own results, and these results are not combined with other data for any purpose. All the decisions the machine made to select a model are shown and could be used to recreate the model externally as well.

6) What is the total cost of ownership?

Predictive modeling, along with all other features of our One AI product, is included in the One Model suite subscription fee.

7) How do we tell when the models and algorithms are “drifting”?

Every predictive model's generation and results are fully transparent.  Once a One AI run finishes, two reports are generated for review:

  1. Results Summary – This report details the model selected and its performance.
  2. Exploratory Data Analysis – This report details the state of the data that the model was trained on so users can determine if the present-state data has changed drastically.

Models are typically scheduled to be re-trained every month with any new data received.  The new model can be compared to the previous model using the output reports generated. It is expected that models will degrade over time, and they should be replaced regularly with better-performing models that incorporate recent data.  This is a huge burden on a human team, hence the need for data science orchestration that automates the manual process and takes data science delivery to scale.
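The sketch below shows the spirit of comparing a newly trained model's summary against the previous run to flag degradation. The report fields and the 5% tolerance are illustrative assumptions, not values One AI prescribes.

```python
# Illustrative comparison of successive run summaries to flag model drift.
def compare_runs(previous: dict, current: dict, tolerance: float = 0.05) -> str:
    # Flag the run if cross-validated AUC fell by more than the tolerance.
    drop = previous["cv_auc"] - current["cv_auc"]
    if drop > tolerance:
        return (f"ALERT: AUC fell from {previous['cv_auc']:.3f} to "
                f"{current['cv_auc']:.3f}; review the new model before deploying.")
    return f"OK: AUC {current['cv_auc']:.3f} is within {tolerance:.0%} of the previous run."


print(compare_runs({"cv_auc": 0.84}, {"cv_auc": 0.77}))
```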

8) What sort of training comes with the service?

One Model’s customers are trained on all aspects of our People Analytics tool.  Training is offered so that non-data scientists can interpret the Results Summary and Exploratory Data Analysis reports and feel comfortable deploying models. A named One Model Customer Service Manager is available to aid and provide guidance if needed.

9) What do we do when circumstances change?

One AI is built with change in mind.  If the data changes in a way that breaks the model, or the model drifts enough that a retrain is necessary, users can re-run the automated machine learning pipeline to bring in new data and create a new model.  The new model can then be compared to the previous one. One AI also allows work to occur on a draft version of a model while the active model runs in production.

10) How do we monitor system performance?

The Results Summary and Exploratory Data Analysis charts provide extensive model performance and diagnostic data.  Actual real-world results can be used to assess the model's performance by overlaying predictions with outcomes within the One Model application, which is also typically how results are distributed to users through the main analytics visualization toolsets.

When comparing actual results against predictions, One Model cautions users to be aware of underlying data changes or company behaviors that can skew the comparison. For example, an attrition model may identify risk because an employee is under-trained. If that employee is then trained and chooses to remain with the organization, the model may well have been correct, but because the underlying data changed the prediction and the outcome can't really be compared: the employee's risk score today would be lower than it was several months ago, before the training.  The additional training may itself have been the organization's response to the attrition risk, and actions like these that are taken specifically to address risk must also be captured so the model is informed that mitigation has taken place.

In practice, the Results Summary and Exploratory Data Analysis reports typically build enough trust in cross-validation that system performance questions are not an issue.
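As a hedged sketch of "overlaying predictions with outcomes", the example below joins earlier risk scores to what actually happened and computes precision and recall. The column names, sample values, and 0.5 risk threshold are illustrative assumptions, not One Model's reporting conventions.

```python
# Illustrative overlay of past predictions with observed outcomes.
import pandas as pd
from sklearn.metrics import precision_score, recall_score

predictions = pd.DataFrame({
    "employee_id": [1, 2, 3, 4, 5],
    "attrition_risk": [0.82, 0.15, 0.64, 0.30, 0.91],  # scores from a past run
})
outcomes = pd.DataFrame({
    "employee_id": [1, 2, 3, 4, 5],
    "terminated": [1, 0, 0, 0, 1],  # what actually happened since that run
})

# Join predictions to outcomes and treat risk >= 0.5 as a predicted leaver.
merged = predictions.merge(outcomes, on="employee_id")
predicted_leavers = (merged["attrition_risk"] >= 0.5).astype(int)

print("precision:", precision_score(merged["terminated"], predicted_leavers))
print("recall:", recall_score(merged["terminated"], predicted_leavers))
```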

11) What are your views on product liability?

One AI provides tooling to create models along with the reports for model explanation and interpretation of results.  All models and results are based exclusively on a customer’s own data. The customer must review the model’s results and decide whether to deploy it and how to use those results within the organization.  We provide transparency into our modeling, along with explanations, so customers have confidence in and knowledge of what the machine is doing rather than simply trusting that a black-box algorithm is working (or not). This differs from other vendors, who may deliver canned models that were trained on data other than the customer’s own, or that are too inflexible to use the unique customer data set relevant to the problem.  I would be skeptical of any algorithm that cannot be explained or whose performance cannot be tracked over time.

12) Get an inventory of every process in your system that uses machine intelligence.

Each One Model customer decides how specific models will be run for them and how to apply One AI.  These predictive models typically include attrition risk, time to fill, promotability, and headcount forecast.  Customers own every model and result generated within their One Model tool.

One AI empowers our customers to combine the appropriate science with a strong awareness of their business needs.  Our most productive One AI users ask the tool critical business questions, understand the relevant data ethics, and provide appropriate guidance to their organization.

If you would like to learn more about One AI, and how it can address your specific people analytics needs, schedule some time with a team member below.

Request a Personal Demo
