Recently, I was in a very intense game of Topgolf with two of my friends and colleagues, Sheik Ayube and Peter Armaly (VPs of Business Development and Customer Success, respectively, here at ESG). I want to be completely honest here at the beginning; no extended metaphors or analogies to get this point across. I am terrible at golf. I’ve watched YouTube tutorials, consulted with friends on my grip, and even started watching golf to see if I could pick up any tips or tricks. No impact…still terrible.
So, you can understand my surprise and slight uncertainty when I realize I’m holding my own; we’re about midway through the game and the scores are relatively close. I only have one strategy—score points with every ball—and since the larger targets off in the distance aren’t a possibility for me, I’m left to focus on the small red, slightly farther yellow, or distant green targets (25, 50, and 90 yards out, respectively). Perhaps being somewhat prideful, I don’t want to focus too much on the red target, so I try to find a groove with the outer rings of the green target—most of the time trying to hit the ball in the general direction and hard enough for it to roll into the outermost ring. Three points.
Just as I get comfortable in my mediocrity, Peter grabs the hybrid club and decides he is going to win, pulling away from Sheik and me quickly and without remorse. (For the record, Peter said he wasn’t a golfer going into the night. Results indicate deceit.) The battle for second place begins.
Sheik shanks a ball or two…then so do I. The final round begins. Two balls per person, except for me. I only have one shot left because I went three in a row in an earlier round—I still think that was a technical issue.
Peter hits the white target almost 200 yards away. Ridiculous. Sheik gets a few points from his first ball, but none from the second.
I grab the hybrid club and look up at the scoreboard. Peter is securely in first. Sheik is in second…but only by five points. As I swipe my club to retrieve the ball, Sheik also notices the score and starts the heckling. One ball. One shot. Six points needed.
Knowing it’s unlikely I’ll get six points with one ball, I decide to live or die by what I do next—something you only see happen in movies or on TV. I place the ball on the tee and position my fingers around the club the only way I know how. I look down at the ball and then glance over my right shoulder to catch Sheik’s eye. Without breaking eye contact, I swing my club and make contact with the ball. I’m still glaring at Sheik as he looks past me to see where the ball lands. No blinking. No speaking. His mouth drops as the ball lands in the inner middle ring of the green target. Six points.
I break eye contact only as I drop the club back in its slot and sit down, pretending that is exactly what I meant to do.
We all know the truth here. I got very lucky—and if you don’t believe that, reference the score from the next game; I couldn’t even hit the red targets as I watched Peter and Sheik slug it out for the win. But as I reflect on this experience, I can’t help but think about forecasting. (Stay with me!)
The logic here is very similar to what I shared a few weeks ago in the scenario-based NPS rant; just like my one epic game of dodgeball didn’t predict advocacy, my one epic golf shot shouldn’t be used to evaluate my ability or forecast my next round. They only stand out because these experiences triggered an emotional response that has been sensationalized…by me mostly. From the moment that ball left the tee, I had no idea what was going to happen; the way my body was turned (staring into Sheik’s soul), it wouldn’t have surprised me if the ball had hooked left and hit the side netting. But by nothing more than chance, the connection was good, the direction was accurate, and I hit my target.
If you’ve ever been in Sales or been in a position to observe Sales Reps…let’s chat for a second.
How many times do you think Sales Reps have closed a big deal, whether a big Fortune 100 logo or a financially significant account, and then instantly had that dreaded pit in their stomach because they know the expectation and forecast for their next quarter was just raised by that historical data point? We love sensationalizing our outliers. We love using the big deals as examples of what should be expected, instead of what they are—anomalies.
Or even better, how many times have those same Sales Reps been asked to review their pipeline and report their sales forecast to management? They likely review the opportunities in the CRM and then scan through their mind for the opportunities they haven’t entered in the CRM (we all know it happens), and report back a relatively conservative number that is close to their target for the quarter to avoid the dreaded “gap plan.”
I don’t want any of the above to come across as accusatory. Sales Reps are genuinely trying to do the best with the information they have; the problem is, that information isn’t enough. I’ve yet to speak with a Sales Rep who has, at their fingertips, all the data (historical or otherwise) needed to present an accurate forecast in a structured, complete, mathematical way. Most forecasting processes are scrappy, manual, and in desperate need of the two big buzzwords: operationalize and automate. Sales Reps are often left with little insight, unremarkable telemetry, and aspirational luck.
So, if Sales Reps aren’t adequately equipped to forecast their business, why would we expect Customer Success Managers (CSMs) to be any better equipped, prepared, and enabled to forecast?
I think the answer is simple, but unfortunate: Customer Success is supposed to be proactive in nature, so it’s easy to assume that those who spend most of their time staying ahead of their customers’ obstacles have better insight into the financial gains those customers will bring. But the moment we start mandating forecasts from CSMs (rather than measuring, say, Customer Success Qualified Leads (CSQLs)), they aren’t CSMs anymore—their mandate to drive adoption and value is put at odds with an obligation, and sometimes compensation, to shift focus to selling more.
There is no exact science to this forecasting thing—if there were, you’d know about it because of its demand and importance, especially in Customer Success. I’ve observed and implemented a variety of forecasting techniques in my career—most of them involved looking back at historical performance, assessing the present pipeline, and factoring in some sort of predictive measure of growth. I’m not going to tell you I know which way is right. I’m a firm believer that forecasting is iterative, ever-changing, and never going to be a concrete model you can implement once and repeat as needed. It will change over time, as new products or services are introduced, and as the company matures.
When it comes to Customer Success and forecasting, there are three big-picture principles that we must get right.
- Customer Success Operations-led
This is a good place to start. While forecasting can be challenging for both Sales and Customer Success, the need to provide it has not gone away, and likely will not anytime soon—leaders will always need to know what to expect and when to expect it. So the question becomes: who owns forecasting? If my personal stance isn’t obvious to you yet, let me provide further clarity here: CSMs should not be expected to forecast. Instead, Customer Success Operations teams should own constructing the methodology of renewal/expansion forecasts, executing the steps necessary to arrive at a forecast, and reporting it to management accordingly. In yet another shameless plug for the importance of CS Analysts on your CS Operations team…this is what we do: we analyze the data and provide insight, i.e., forecast. This doesn’t mean CSMs are completely absolved from the forecasting process. They still have a role to play that I’ll get into in more detail in a moment. But CS Ops, and specifically a CS Analyst, should take accountability and responsibility for forecasting in the Customer Success organization. I’d argue no one in the entire organization is in a better place to do so.
- Cross-functional collaboration
I know we’ve all seen that phrase a lot over the last few years, but hear me out. The most successful forecasting methodologies happen when there is complete alignment between teams and stakeholders…specifically Customer Success, Sales, and Finance. I’ve seen firsthand the metaphorical magic that happens when these teams collaborate on defining, executing, and reporting the forecast. I’ve also seen firsthand what happens when they don’t—the former is much smoother, with fewer enraged emails, than the latter. Coming together to forecast ensures the entire organization is synchronized and that everyone’s input is heard and considered in the calculation. Regardless of who in your organization officially owns the forecasting process, strive toward this type of alignment, if not in the execution of calculating the forecast, then at least in the spirit of fostering openness and collaboration. In my opinion, all organizations (but specifically Sales, CS, and Finance) should have a basic understanding of how the forecast is calculated, even if that’s contained to the management level.
- Multi-factor approach
When it’s time to forecast the next fiscal quarter or year, if you are only looking at how the equivalent variable (territory, Sales Rep, etc.) performed during the same period in the previous fiscal quarter or year, there is room for improvement. It’s a good foundation, but it’s certainly not the most complete or accurate way to forecast. We’ve already reviewed how anomalies and outliers occur in our financial data, so we need more than historics here. We need a methodology that weighs a variety of factors and variables to reach a forecast. For example, we could run a historical analysis of the data, including growth rates and expansions through upsell or cross-sell; identify impacting variables like employee turnover and new-hire ramp-up periods; and review the current pipeline. Then run a more qualitative analysis: this is where CSMs, and even their Sales counterparts, can be consulted for additional insight. By running an arsenal of statistical tests and comparing multiple forecasting methods, we can validate our hypothesis, or quickly realize something isn’t aligned and should either be classified as a risk to the forecast or reevaluated before inclusion in the forecasting process.
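To make the multi-factor idea concrete, here is a minimal sketch in Python. Everything in it—the function name, the blend of a historical baseline with a probability-weighted pipeline, the ramp and qualitative adjustments, and every number in the example—is an illustrative assumption, not a prescribed model; a real forecast would tune these factors against your own data.

```python
# Hypothetical multi-factor forecast sketch. All weights, inputs, and
# adjustment factors below are illustrative assumptions, not a real model.

from dataclasses import dataclass

@dataclass
class Opportunity:
    amount: float             # potential renewal/expansion value
    stage_probability: float  # CRM stage-based close probability, 0 to 1

def multi_factor_forecast(
    prior_period_revenue: float,
    historical_growth_rate: float,        # e.g., trailing average with outliers removed
    pipeline: list,                       # list of Opportunity
    ramp_adjustment: float = 1.0,         # < 1.0 while new hires are still ramping
    qualitative_adjustment: float = 0.0,  # CSM/Sales input, as a +/- dollar amount
) -> float:
    """Blend a historical baseline with a probability-weighted pipeline."""
    baseline = prior_period_revenue * (1 + historical_growth_rate)
    weighted_pipeline = sum(o.amount * o.stage_probability for o in pipeline)
    return (baseline + weighted_pipeline) * ramp_adjustment + qualitative_adjustment

# Illustrative example: $1M prior quarter, 5% historical growth,
# two open opportunities, and a team that is 95% ramped.
pipeline = [Opportunity(200_000, 0.6), Opportunity(100_000, 0.3)]
forecast = multi_factor_forecast(1_000_000, 0.05, pipeline, ramp_adjustment=0.95)
print(round(forecast))  # 1140000
```

The point of the sketch is the structure, not the numbers: each factor (historics, pipeline, ramp, qualitative input) is an explicit input that can be audited, challenged, and iterated on, rather than buried in someone’s gut feel.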
It’s nearly impossible to give a complete, prescriptive, and universal methodology to forecasting in Customer Success. Not only is it going to be different in every company, but it’s going to change over time in your own business as your data becomes more robust, connected, and accessible across your organization. Your data directly impacts your ability to provide a more accurate and resilient forecast, leaving you less likely to have to stare down the VP of Business Development while swinging away for an impossible shot.
Until next week.
Missed last week’s installment of Rants of a Customer Success Analyst? Go back and read! And keep an eye out for the next Reason, Rant, and Resolution next week.