INTRODUCTION


Our Company & Talent Acquisition Team

Before diving into how we use data, let’s review a few things about Opower and our team. Opower is a 600-person company headquartered in the DC area, and our mission is to help utility companies build a clean energy future. Since our founding in 2007, our products have helped utility customers worldwide save more than $1 billion on their bills, and enough energy to power all the homes in a state like New Mexico for a year. Our Talent Acquisition team comprises 15 people, and together we hire about 200 employees per year. Although our budget for tools and resources is pretty lean, we have one person dedicated to analytics, who splits his time between HR and Talent Acquisition analytics.

Like most Talent Acquisition teams, our bandwidth is largely taken up by day-to-day recruiting activities (our job is to fill roles, after all). Over the years, however, we’ve been able to accomplish more by working smarter as opposed to just working harder. Working smart is easier said than done, of course, and we certainly haven’t been perfect. What we can say with confidence is that we’re getting better – and a lot of our improvement has depended on our ability to use data to make thoughtful, informed decisions. 


WHY WE DID IT


Our Spin on Talent Analytics

When we started looking at best practices in talent analytics (an umbrella term for recruiting and HR analytics), virtually all of the literature was geared toward big companies with big budgets. It was intimidating, and we knew we wouldn’t be able to stack up against the Googles of the world, which have entire departments dedicated to analyzing people data. What we did know is that companies with successful talent analytics functions were experiencing significant benefits to the health of their organizations. These benefits include improved quality of hire, increased corporate performance, stronger recruiting metrics, and even increased profit margins (Bersin, 2013).

We wanted to see some of these benefits at Opower, but we had to be realistic and start small.

It wasn’t until a year after we started developing our analytics that we stumbled upon Bersin’s Talent Analytics Maturity Model. This model maps out the journey of building a talent analytics function, and proposes that companies will fall into one of four stages of maturity with regard to how they use their people data:

Bersin’s Maturity Model became our roadmap. It was comforting to know that the majority of companies were stuck right where we had been stuck - pulling metrics as needed, but not really using them to solve problems or predict. Bersin also cautioned that getting un-stuck wouldn’t be easy:

This graph points out a few important things. The first is obvious: the more effort organizations put into talent analytics, the more they get out of it. Second, there is a “choke point” between levels 1 and 2 where organizations tend to get stuck. They’ll work really hard just to get a grip on their advanced reporting, and will start to doubt whether the value of analytics is worth the effort. The real value of analytics isn’t evident until organizations can start using their data to solve problems and predict (Bersin, 2013). This was a major “a-ha” moment for us - we finally got it! The only issue that remained was that we were never going to be big enough to warrant a significant increase in resources for analytics. Our challenge was to make the most of the resources we had.


WHAT WE DID


2014: Getting Over The First-Year Hump

Let’s briefly review what life was like before we discovered Bersin’s research, when we first invested in analytics in 2014. Our Talent Acquisition team was growing, our company was about to go public, and we wanted a dedicated analytics resource to help us stay on track with our goals. Scott Walker, who joined Opower as a sourcer, had already been creating dashboards on the side for about six months and decided to take on analytics full-time. In his new role, Scott’s responsibility was to provide departments with regular recruiting dashes. These dashes contained a wide array of metrics - everything from time to fill, to phone time, to pipeline funnel metrics.

At first, the dashes were a welcome change. Recruiters were more prepared for their meetings, and it was nice to have pretty charts in company colors. After only a few months, though, Scott and the team realized that the dashes were more or less “for show.” They weren’t yielding insights, we weren’t using the data, and hiring managers would look at our metrics and ask, “So what?” We also had a major problem with data accuracy because recruiters were using our system inconsistently.

We were stuck at level 1, right at the dreaded “choke point” on Bersin’s graph.

Another challenge for us when we were getting started is that we didn’t know how to use data to tell a story. It’s kind of like dressing yourself for the first time as a kid - you put on a hodgepodge of things that don’t go together, and the end result is a mess.


This dash is cringe-worthy for many reasons, including but not limited to:

  • The speedometer.

  • Other than our goal for “Q1 Offer Accepts”, it isn’t apparent whether we were performing well or not.

  • The ineffective use of pie charts to compare differences between source of applicants and hires.

  • No indication of where we wanted our audience’s attention directed.

  • Comparing recruiters without taking into account the type and difficulty of their roles.

Pivot Point

By the fall of 2014, our analytics were improving through practice, feedback, and mentorship from a few analytics pros at Opower. We were also in an entirely different place as a company and as a team. Opower had gone public, the direction of our company was pivoting, and our hiring needs had become more unpredictable. Our recruiters, who had become subject-matter experts in specific role types, had to flex in and out of different areas of the business in order to meet shifting hiring needs. Because of this, it was becoming increasingly difficult to manage workload and measure our performance.

Around the same time, Dawn Mitchell, our current Director, was promoted to lead the team. With so much in flux, we needed to evolve how we thought about managing the team and delivering on our goals. We also needed a strategy to inform how we use data. This is when we found Bersin’s research on talent analytics. Almost immediately, we scrapped our regular dashboarding in favor of analytics projects that were aligned to our specific goals.


HOW WE DID IT


2015: Trystorming New Frameworks

Instead of just brainstorming, 2015 was all about trystorming - a fun way of saying that we tested new ideas even if those ideas hadn’t been completely vetted or perfected yet. To do that, we had to get our data in order quickly, and then use that data to inform and test new practices without knowing if they would work. Let’s review what we did.

Clean Applicant Tracking System, Clean Mind

In order to get our analytics on the right track and master level 1, we had to have accurate data. We accomplished this in two steps. First, we spent a month scrubbing our historical data, removing errors and compiling a sample of “clean jobs” with accurate metrics. Second, we taught recruiters to “live and breathe” our applicant tracking system by holding mandatory weekly job reviews for 3 months. It sounds awful, and it was, but the meetings were very simple: recruiters sat in a room for an hour and we would audit one job per recruiter (recruiters wouldn’t know which job until the meeting).

If a recruiter’s data was incorrect, there were no repercussions the first time. We would use it as a learning moment and ask them to fix it for the next meeting. We found that, for the most part, a polite but public nudge was sufficient to keep our data clean. Although perfect data is unrealistic, after 3 months our data was accurate enough to use. We stopped holding the meetings and now conduct periodic audits to stay on track.
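To make audits like these repeatable, the checks can be scripted. Here’s a minimal sketch of the kind of spot-check we’re describing, written in Python with pandas; the file name, columns, and rules are hypothetical placeholders, not our actual ATS schema.

```python
import pandas as pd

# Hypothetical ATS export: one row per job requisition. The file and
# column names are illustrative, not a real schema.
jobs = pd.read_csv("ats_export.csv", parse_dates=["opened_date", "closed_date"])

REQUIRED = ["opened_date", "status", "recruiter", "hiring_manager", "source"]

# Shuffle, then keep one job per recruiter, so no one knows in advance
# which job will be audited at the weekly review.
audit = jobs.sample(frac=1).groupby("recruiter").head(1)

# Flag obvious data problems: missing required fields, or a close date
# that precedes the open date (NaT comparisons evaluate to False).
missing = audit[REQUIRED].isna().any(axis=1)
bad_dates = audit["closed_date"] < audit["opened_date"]

flagged = audit[missing | bad_dates]
print(flagged[["recruiter", "job_id", "status"]])
```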

Recruiter Workload & Performance: The Quadrant Model

After scrubbing our historical data, we tackled our next big challenge: actually using data to solve problems. Since one of our biggest challenges was managing team capacity, we decided to start there. One question we always struggled to answer (and hated getting asked) was “How many recruiters do we need to fill X number of roles?” The answer obviously depends on the roles, but we had no data-driven explanation for why one role was different from the next. So, we tested a framework called the Quadrant Model, a way of looking at recruiter workloads and goals based on recruiting difficulty instead of number of hires.

First, we completed some background research on the recruiting difficulty of our roles, which involved looking at time to fill, pipeline health, market availability, and recruiter workloads across role types. In the majority of cases, we found that difficulty was strongly related to the uniqueness of the skillset in the market and how frequently we hired for the role. We used this to bucket roles into four quadrants. Each quadrant had a point value associated with its recruiting difficulty, and we would add up recruiters’ points to measure their workload. Each quadrant also had a time to fill goal, based on the historical average time to fill by quadrant.

For example, a role in quadrant 2 would be moderately easy to fill, such as a project manager. The skillset is common, we hire for the role all the time, and we fill them fairly quickly. In contrast, quadrant 4 roles are our toughest roles, such as a Vice President of Engineering. The skillset is extremely rare, we don’t hire for the role often, and it will take us at least a few months to find the right fit. The points allotted to quadrant 4 indicate that a recruiter could fill one quadrant 4 role with a similar amount of effort as two quadrant 2 roles.
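To make the mechanics concrete, here’s a minimal sketch of the bookkeeping in Python. The point values and time-to-fill goals are made up for illustration; the only relationship we’ve preserved from above is that one quadrant 4 role “costs” roughly as much effort as two quadrant 2 roles.

```python
# Illustrative values only - not our actual points or goals.
QUADRANT_POINTS = {1: 1, 2: 2, 3: 3, 4: 4}
GOAL_DAYS = {1: 30, 2: 45, 3: 75, 4: 120}   # historical avg time to fill

filled_roles = [
    {"recruiter": "Sally", "quadrant": 4, "days_to_fill": 110},
    {"recruiter": "Sally", "quadrant": 2, "days_to_fill": 40},
    {"recruiter": "Bob",   "quadrant": 1, "days_to_fill": 35},
    {"recruiter": "Bob",   "quadrant": 2, "days_to_fill": 44},
]

def workload_points(roles, recruiter):
    """A recruiter's quarterly workload, measured in quadrant points."""
    return sum(QUADRANT_POINTS[r["quadrant"]]
               for r in roles if r["recruiter"] == recruiter)

def on_time_rate(roles, recruiter):
    """Share of a recruiter's roles filled within their quadrant's goal."""
    mine = [r for r in roles if r["recruiter"] == recruiter]
    hits = sum(r["days_to_fill"] <= GOAL_DAYS[r["quadrant"]] for r in mine)
    return hits / len(mine)

for name in ("Sally", "Bob"):
    print(name, workload_points(filled_roles, name), "points,",
          f"{on_time_rate(filled_roles, name):.0%} on time")
```

Note how the two measures can diverge: a recruiter with a longer average time to fill can still be the most on-time performer once goals are set per quadrant.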

In 2015, the majority of recruiters delivered between 25 and 30 quadrant points per quarter. Their workloads were “fair” because we adjusted them based on the difficulty of the roles. Let’s take a look at Sally and Bob in the graph below, which compares recruiters’ performance metrics:

The graph shows that even though Sally’s time to fill is significantly higher than Bob’s, she’s filling a greater percentage of roles “on time” according to the goals of each quadrant. Sally is also delivering more quadrant points per quarter, and her roles are more difficult to fill than any other recruiter on the team. If we’re concerned about Bob’s performance, we can look deeper and find out what’s going on. Are candidates getting stuck in a particular interview stage? Is Bob phone screening candidates with the right qualifications? Is Bob sourcing effectively? In this way, the quadrant model can serve as a starting point for digging and segmenting the data further.

Most importantly, the quadrant model helped us improve our performance. By having a rationale behind recruiting workloads and goals, we met our goals more consistently. We also decreased our average time to fill from 93 days to 67 days in 2015.

Forecasting & Making A Business Case For Resources

In previous years, our hiring forecast went something like this: we would ask business leaders what they wanted to hire for the year, add in our historical attrition rate to estimate backfill hiring, and voilà: the forecast was complete. Unfortunately, these back-of-the-envelope forecasts bore little resemblance to what actually happened, and we never knew what resources we needed to meet our hiring goals.

In 2015, we obtained a more accurate forecast by using formulas to predict attrition and growth. Below are the new forecasting practices we introduced, with a simplified sketch of the arithmetic after the list:

  • Modeling out attrition trends as opposed to using average historical attrition rates

  • Factoring in rates of “unexpected” roles added to our plan mid-year

  • Factoring in transfer rates and the additional backfills that result from transfers

  • Accounting for a percentage of roles being cut or changing profile mid-search

  • Calculating the probability of recruiters leaving the team, and the amount of time it takes to replace and ramp up new recruiters (for us, total time to hire and ramp is 4-6 months).

  • A general understanding that hiring managers only know what they want to hire right now, not 6 months from now.
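Strung together, the arithmetic looks roughly like the sketch below. Every rate and count is a hypothetical placeholder, not one of our actual figures.

```python
# All numbers below are placeholders - substitute your own history.
headcount = 600
planned_growth_roles = 150        # what business leaders say they'll hire

attrition_rate = 0.12             # modeled from trend, not a flat average
transfer_rate = 0.05              # internal transfers each create a backfill
unexpected_rate = 0.15            # roles added to the plan mid-year
cut_or_changed_rate = 0.10        # roles cut or re-profiled mid-search

backfills = headcount * (attrition_rate + transfer_rate)
unexpected = planned_growth_roles * unexpected_rate
gross_demand = planned_growth_roles + backfills + unexpected
net_demand = gross_demand * (1 - cut_or_changed_rate)

print(f"Forecasted hires: {net_demand:.0f}")
# Recruiter capacity needs the same treatment: losing a recruiter costs
# 4-6 months of hiring and ramp time, so the staffing plan should be
# padded for the probability of recruiter turnover as well.
```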

When we showed our forecast to the business, they had serious doubts. They thought it was “padded”, and that hiring would be light because headcount growth was expected to be light. We agreed to disagree. The business had their forecast and we had ours. Let’s look at what happened:

The graph above shows that our new forecast was much more effective at predicting our hiring plan than the initial business forecast. This was a huge win for us, but the next issue was that our team was only staffed to fill 70% of our roles within the same quarter as their goal date. So, we had to make a business case for more budget. We accomplished this by presenting 3 scenarios to our management team:

  1. Our capacity with current resources

  2. Our capacity with resources that would deliver immediate increases in hires, but expensively

  3. Our capacity with long-term, cost-effective resources that would allow us to meet 85% of our goal.

Let’s take a closer look at these scenarios:

As you might have guessed, we all agreed to go with our cost-effective plan rather than spending a tremendous amount of money on agencies ($350K doesn’t sound that bad when it’s half the cost of the alternative). Note that all of these scenarios were rooted in data and would be tested empirically against our actual results. Here’s how our cost-effective plan worked out for us:

What worked:

  • We met 100% of goal in 2015 with a total of 237 hires

  • Hired.com yielded ~2 hard-to-fill tech hires per month

  • New resources/incentives increased capacity by ~20 roles per quarter

  • Q2 recruiter bonus program effectively increased capacity (time to fill decreased by an average of 4 days and each recruiter filled ~2 more roles than expected).

What Didn’t Work:

Our big miss was the $10K referral bonus awards for engineering hires. We already had a pretty generous referral bonus program, and this additional bonus didn’t increase the number of referral hires at all. What it did do was increase the number of unqualified referrals.

Integrating HR & Talent Acquisition Data

Another important initiative we worked on in 2015 was integrating HR and Talent Acquisition data to form a more cohesive “people” strategy. Linking the two areas can reveal powerful information about interview effectiveness, quality of hire, backfill hiring forecasts, and other insights that are mutually beneficial for both HR and Talent Acquisition. Let’s review some of our recent findings.

Can Our Interviews Predict Performance?

As a starting point for investigating quality of hire, we examined whether interview scores could predict employees’ performance. In a sample of 261 hires, we linked interview scores to performance review scores. We found that interviews could predict performance so long as there were 5 or more interviewers on the team. If a hire was interviewed by fewer than 5 people, however, the correlation disappeared. Although correlations should be taken with a grain of salt, this finding had a significant impact on our recruiting strategy: it was the first evidence we had that hiring decisions based on the opinions of just a couple of people weren’t as good as those made by a full interview team.
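The analysis itself is straightforward to reproduce. Here’s a minimal sketch, assuming a table with one row per hire containing an average interview score, a performance review score, and the panel size (the file and column names are hypothetical):

```python
import pandas as pd

# Hypothetical join of interview scores to performance review scores.
df = pd.read_csv("interviews_vs_performance.csv")

groups = {
    "fewer than 5 interviewers": df[df["num_interviewers"] < 5],
    "5 or more interviewers": df[df["num_interviewers"] >= 5],
}

for label, g in groups.items():
    # Pearson correlation between interview and performance scores.
    r = g["avg_interview_score"].corr(g["performance_score"])
    print(f"{label}: r = {r:.2f} (n = {len(g)})")
```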

Is Source of Hire Related To Measures of Quality of Hire?

While performance ratings are one measure of quality of hire, a second measure is retention. A few months ago, we were curious to see whether retention rates differed at all across source of hire. So we looked at the percentage of employees retained at 1 year versus 2 years across all of our major sources. Here’s what the data looks like:

Interestingly, intern converts and referral hires were much more likely to stay at Opower at both 1 and 2 years than hires from job boards, agencies, or passive candidates sourced by recruiters. Why?

One hypothesis is that referrals and intern converts get the most realistic job preview before joining Opower - they’ve either worked at Opower, or they’ve had candid conversations with friends who already work here. In contrast, both agency and corporate recruiters tend to pitch Opower in a way that highlights the best aspects of the company and the team, so these candidates may not get the most objective view of the company. A second hypothesis is that referral hires and intern converts likely have stronger social networks at Opower.

Either way, we can use this data to make a business case for investing in our employee referral and intern programs, creating a well-defined employee value proposition, and experimenting with retention initiatives.
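For anyone wanting to run a similar cut, here’s a minimal sketch of the retention calculation, assuming hire records with a source, a start date, and an end date that is blank for active employees (file and column names are hypothetical):

```python
import numpy as np
import pandas as pd

# Hypothetical hire records: source, start_date, end_date (blank if active).
hires = pd.read_csv("hires.csv", parse_dates=["start_date", "end_date"])
as_of = pd.Timestamp("2015-12-31")    # date the snapshot was pulled

def retained(row, years):
    """1.0 if still employed `years` after starting, 0.0 if not, and
    NaN if the hire hasn't had the chance to reach that tenure yet."""
    cutoff = row["start_date"] + pd.DateOffset(years=years)
    if cutoff > as_of:
        return np.nan
    return float(pd.isna(row["end_date"]) or row["end_date"] >= cutoff)

for years in (1, 2):
    hires[f"retained_{years}y"] = hires.apply(retained, axis=1, years=years)
    print(hires.groupby("source")[f"retained_{years}y"].mean().round(2))
```

Excluding hires who haven’t yet reached the tenure mark keeps newer cohorts from artificially inflating the retention rates.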

We’ve completed a similar analysis on source of hire by performance reviews and haven’t found any connection (i.e., referrals, job board candidates, and candidates sourced by recruiters have roughly the same probability of being top performers or poor performers). Future projects include seeing if these trends are changing over time, and evaluating specific role competencies of employees to see if our job descriptions match what managers truly value in a particular role.

Anticipating the Bandwidth Required to Replace Attrition

The quadrant model also allows us to estimate the recruiting difficulty of turnover. For one, our historical data shows that 60% of our backfills are in quadrants 3 and 4, and 40% are in quadrants 1 and 2, which means we need to factor in the increased difficulty of backfill roles when staffing our team. Second, as our company grows, the volume of turnover will naturally increase, which directly impacts our hiring plan. Third, we can assign quadrants to the roles of our existing employee population and evaluate differences in tenure and retention risk.
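As a rough illustration of the first point, the sketch below converts expected backfills into recruiter workload points using that 60/40 quadrant mix; the backfill count, mix, and point values are all hypothetical.

```python
# Hypothetical inputs: 72 expected backfills for the year, split so that
# ~60% land in quadrants 3-4 and ~40% in quadrants 1-2, then converted
# into recruiter workload points (point values are illustrative).
expected_backfills = 72
quadrant_mix = {1: 0.15, 2: 0.25, 3: 0.35, 4: 0.25}
points = {1: 1, 2: 2, 3: 3, 4: 4}

backfill_points = sum(expected_backfills * share * points[q]
                      for q, share in quadrant_mix.items())
print(f"Backfill workload: {backfill_points:.0f} quadrant points")
# Divide by a recruiter's quarterly capacity (25-30 points in our case)
# to see how much of the team backfills alone will consume.
```

Here’s what our forecast looked like this year: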


WHAT WE GOT WRONG


Now that our strategies have had some time to play out, let’s review some of the hurdles and missteps we’ve experienced (almost all of these are still current challenges).

The Balancing Act Between Reporting and Analytics

We still get sidetracked by ad-hoc requests for data, which frequently distracts us from “true analytics.” It’s easy to say, “well, let’s just hire more analysts,” but that doesn’t make sense for a team our size. With limited resources, we always have to choose what isn’t going to happen, and we could be more thoughtful about prioritizing.

Tools & Systems

We're still completing most of our analytics within Excel. It's cheap, and gets the job done, but almost everything we do is pretty manual. In 2016, one of our major goals is to move to more automated dashboards by purchasing a tool like Tableau.

Sharing Ownership

Another miss for us is only having one person on the team to keep analytics running. If he's out, or has other projects to work on, there is no one else who can do the work. We could do a better job of training people on the team to be backups.

Making Ourselves Heard

Most people at our company aren't aware of how we use people data in HR and Talent Acquisition. Whether it’s quarterly emails, posters on the wall, or Q&As about major projects, we could do a better job of broadcasting our research to wider audiences.


KEY TAKEAWAYS FOR RECRUITING


Below are some of the “best practices” we use to keep our analytics on the right track. We haven’t mastered all of these by any means, but the more we practice them the stronger our analytics become.

Do’s...

  1. Always be goaling: Define success. Set realistic, measurable goals, and then track them. What gets measured gets improved.

  2. Make your analyst an insider: Empower your analyst and include them in strategy meetings. The more they know, the more they can help.

  3. Construct narratives: Your metrics and graphs won’t always speak for themselves. Make it easy on your audience by summarizing key take-aways and emphasizing specific information on graphs that you want the audience to pay attention to.

  4. Ask why: Recruiting data is often unhelpful when presented in aggregate. Dig, segment, and identify root causes.

  5. Beg, borrow, and steal: If you lack expertise and resources, ask for help! Consult coworkers in other departments with analytical backgrounds, or ask them to partner with you on creating your analytics strategy. At a bare minimum, ask other analysts at your company to critique your work.

Don’ts…

  1. Don’t let the perfect be the enemy of the good: If you insist on perfect, error-free data, you’ve missed the point and you won’t be able to accomplish much. Always ask “what is the business impact of this data being 95% correct vs. 100% correct?”

  2. Don’t overlook quick wins: Start by using the data you already have. Difficult and expensive isn’t always better than simple and cheap.

  3. Don’t focus your time on secondary requests: An analyst’s time can quickly be taken up by requests that aren’t central to your overall strategy. Don’t be afraid to push back.

  4. Don’t get discouraged: Analytics = delayed gratification. In the beginning, you’ll put a ton of effort into getting the basics right and question whether the work has any value. It gets better.

  5. Don’t get comfortable: Keep on iterating; re-evaluate which metrics are still valuable. Switch up what you show in dashboards to keep engagement.


RESOURCES & FURTHER READING

High-Impact Talent Analytics: Building a World-Class HR Measurement and Analytics Function, Bersin by Deloitte, 2013.

Competing on Talent Analytics, Harvard Business Review, 2010.

Storytelling with Data, Cole Nussbaumer Knaflic, 2015.
