We’re big fans of experimentation at IQN Labs, and not just because we consider ourselves scientists – data scientists, that is. Experiments are a great way to learn how the world works. We can use them to improve our contingent workforce management practices that, in turn, improve your business results!
It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong. – Richard P. Feynman
Experimentation is not just for ivory-tower academics. Web companies regularly run experiments (often called A/B tests) to improve purchase rates, user engagement, process efficiency, or customer satisfaction. They experiment because it generates good evidence about what works and what doesn’t. Experiments can also be a useful tool for figuring out what works to improve contractor hiring. They allow us to move beyond best practices (which may just represent common opinion) to evidence-based practices.
Experiments provide good evidence about what caused what because they address the problem that correlation is not causation. When we analyze observational data, we might think we are seeing a causal relationship between two variables when really all we have is an association. Other factors that we didn’t include in our analysis could be at play. These other factors are known as “confounders.”
For example, if we did a naïve analysis of the relationship between ice cream consumption and drowning, we might conclude that higher ice cream consumption leads to more drownings. If we made a recommendation that people should not eat ice cream lest they drown, we would have missed out on the confounder of seasonal weather patterns. When it’s hotter out, more people eat ice cream and more people drown because they are swimming more. Experimentation can control for confounders like weather patterns.
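To see how a confounder can manufacture a correlation, here is a small Python sketch. All of the numbers are invented for illustration: simulated daily temperature drives both ice cream consumption and drownings, yet the two end up correlated with no causal link between them.

```python
import random

random.seed(42)

# Hypothetical simulation: temperature drives BOTH ice cream sales
# and drownings; neither causes the other.
n_days = 1000
temps = [random.gauss(70, 15) for _ in range(n_days)]  # degrees F
ice_cream = [0.5 * t + random.gauss(0, 5) for t in temps]   # cones sold
drownings = [0.02 * t + random.gauss(0, 0.5) for t in temps]  # incidents

def correlation(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = correlation(ice_cream, drownings)
print(f"correlation between ice cream and drownings: {r:.2f}")
```

Running this produces a clearly positive correlation between ice cream and drownings even though, by construction, neither variable influences the other – temperature is the hidden confounder doing all the work.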
Let’s find some natural experiments
For the best evidence of what to do to achieve the outcomes we want, we should design careful experiments, randomizing subjects (which in our case may be positions or suppliers or hiring managers or some other entity) to the different practices we want to evaluate. Only by doing that can we be sure that differences we see are due to the practices themselves and not other influences, such as differences between positions or suppliers.
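Random assignment itself is simple. The sketch below randomly splits a set of suppliers between two practices we want to compare; the supplier names and practice labels are made up for illustration, not drawn from any real program.

```python
import random

random.seed(7)

# Hypothetical subjects: ten suppliers to be randomized between two
# practices (names are illustrative only).
suppliers = [f"supplier_{i}" for i in range(1, 11)]
random.shuffle(suppliers)

half = len(suppliers) // 2
treatment = suppliers[:half]  # e.g., gets the new practice
control = suppliers[half:]    # keeps the existing practice

print("treatment:", treatment)
print("control:  ", control)
```

Because the split is random, differences between suppliers (size, specialty, relationship with the client) are spread evenly across the two groups on average, so any outcome gap can more credibly be attributed to the practice itself.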
Sometimes it’s not feasible to run experiments ourselves, but we can still treat them as the ideal and look for situations that have the characteristics of experiments but arose in the ordinary course of business. These are sometimes called natural experiments. In a big data set like IQNavigator’s, which spans more than ten years and hundreds of businesses across every major industry vertical, we can evaluate practices across many different settings. Sometimes these situations look a lot like randomized controlled experiments designed and run by scientists.
As an example of a natural experiment, IQN Labs recently learned that one of our clients lowered (by changing a configuration setting) the number of candidates each supplier could submit for a particular position. The company, along with its MSP, hoped this change would decrease the resume load on hiring managers without affecting time to fill or candidate quality. This looks a lot like an experiment, and it offered us a chance to examine how this particular setting affects outcomes.
This doesn’t remove all potential confounding. One threat to the validity of this study is a “history” threat – that is, business and economic conditions differ at different points in time. So when undertaking this analysis it is important to consider whether any such changes could account for the results. For example, we might see that time to fill went up after the change, but that could be due to factors other than the policy setting.
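The simplest version of this analysis is a before/after comparison. The sketch below computes the change in average time to fill around a policy change; all the numbers are invented for illustration and are not real IQNavigator data.

```python
from statistics import mean

# Hypothetical data: time to fill (days) for positions opened before
# and after a submission-limit change. Values are illustrative only.
before = [21, 34, 28, 19, 25, 31, 27, 22, 30, 26]
after = [24, 29, 23, 20, 27, 25, 22, 28, 21, 26]

diff = mean(after) - mean(before)
print(f"mean time to fill before: {mean(before):.1f} days")
print(f"mean time to fill after:  {mean(after):.1f} days")
print(f"change: {diff:+.1f} days")

# A raw before/after difference can't distinguish the policy change
# from a "history" threat (e.g., a shifting labor market), so any
# change here should be checked against comparable positions that
# were NOT subject to the new limit.
```

The comparison-group caveat in the final comment is the key point: a raw before/after difference is suggestive, not conclusive, which is exactly why the history threat has to be considered alongside the numbers.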
Experiments, even the randomized controlled kind, are not necessarily the gold standard everyone thinks they are. Both experimental and observational studies have advantages and disadvantages when it comes to telling us what we should do in the future. Still, experiments are one of the best tools available for identifying practices that will help IQN users find and hire the best candidates in the shortest time.
Stay tuned – in a few weeks we’ll share the results of our analysis of the natural experiment on supplier submission limits. You may be surprised at what we found!