We like Phil Fersht and the HfS Research team. Scrappy and snarky, but respected for building a powerful vision based on desk research, the company has bitten the proverbial bullet, exited the 2x2 grid business and launched into the rankings game. Will this work out well? Time will tell, but in the meantime, let's consider the rationale behind the change.
We're done, the whole quadrant craze is starting to smell pretty bad and we know the industry is fed up with it. Increasingly, many of these 2x2 matrices are missing several of the market leaders (who refuse to participate) and having them all stacked in the top right just smacks of pay-for-play (even if the analyst has fair intentions). Let's be honest, no one trusts these matrices and they are harming the entire credibility of the analyst industry. Sure, there are many honest, quality analysts with integrity, but their craft is being soiled by several quacks who are basing their vendor placements purely on vendor briefings, whether they like a particular vendor, and whether some vendors pony up for their research services. There are many "analysts" out there who do not bother to do sufficient customer research and we all suspect who these characters (and their employers) are...
Fersht is right and wrong. I've heard complaints and grumbles about the 2x2 grids for at least 15 years. Vendors obsess about positioning and then obsess again about dot movement from one year to the next.
On the other hand, the grids serve as a protective tick in the box for CIOs undertaking product selection. The theory goes that if Gartner/Forrester/IDC/(name your favorite analyst firm here) says that X tech is a leader, you choose X, and if the project then fails, well hey, you ticked the box. The logic of that always escapes me, but it is a common refrain from CIOs eager to convince the rest of the C-suite.
The trouble with the grids is that they are fiction masquerading as fact. Over the years, Gartner, in particular, has successfully defended its Magic Quadrant methodology by pointing out that while research-based and extensively reviewed internally, it is, in the end, an opinion. As I like to say from time to time: opinions are like assholes, we all have one and most of them stink. And no more so than in the technology industry.
As I've said before, the problem doesn't lie with the individual analysts, many of whom have impeccable motives and a desire to do the right thing. It's the business model of what HfS correctly characterizes as 'pay for play,' another ongoing grumble, especially among new entrants with limited resources.
What's the alternative?
...we are merely making our research more relevant, more timely and more impactful with the HFS TOP 10 and much more simplified to support the enterprise customer. What's more, when some firms take six to nine months to get a quadrant to market, that market has often already moved on, and the data, despite its credibility, may already be stale. We are in a world that doesn't stand still, where enterprise customers are thirsty for timely, credible data that clearly shows the winners, contenders and laggards in a given market.
Yes and maybe.
The idea that a ranking is any better than a 2x2 grid is hard to evaluate, and here's why.
There are biases inside any assessment criteria, however carefully the matrix of measures used to evaluate a ranking system is constructed. Those same biases get amplified as individuals bring their own pet positions to the table. HfS is very clear - it will take positions, something we also do and applaud. That means any ranking is ultimately a subjective opinion.
Now to the question of timeliness. Fersht is correct to say that technology shifts are operating at an accelerated pace, but does that mean HfS can respond any quicker to market shifts than its much larger brethren? Does it even matter?
When I think about the kinds of technology buying decisions being taken among larger enterprises, in particular, the fact is they move at a much slower pace than the marketing hype machine. It's probably fair to say, for instance, that 'cloud ERP' as a topic hit peak hype about six to eight years ago. Yet I notice that Brian Sommer's recent exposé of excuses for not building multi-tenant cloud ERP proved to be our most popular story by a country mile.
CIOs I've met in the last six months routinely tell me they have no interest in cloud. Would you characterize them as laggards, or are there other factors at play? And what about Computer Economics' determination that DevOps is more aspirational than reality? How many years have we heard analysts warbling on about DevOps? How many yards of digital verbiage have been devoted to the DevOps topic?
I can see an argument for regular updates, say quarterly, to technology rankings that are resonating well with the market. HfS's choice of RPA is a good case in point (see below).
The key comes in understanding why a particular vendor has been placed in a specific position. That's where the devil is in the detail, detail that will always be argued over but which can help buyers determine which firms they should pick. That also happens to make for a good business model, one that benefits the buyer and compensates the analyst for doing real research.
One obvious flaw comes in reference selection. Analysts often try to discover customers through their own Q&A but are frequently faced with asking vendors to supply contact lists. You can bet that vendors will only showcase the best, with nary a sniff of discontent. To that extent, all CX assessments are massively skewed. Smart analysts will ask the kinds of probing questions that elicit the 'gotchas,' but it's not a done deal. Many companies have active AR/PR training and treat analysts with the same suspicion they have for media. I've found that the best and most reliable conversations are those had via chance meetings at vendor events. Does that qualify as research? I think it does, because it allows us to balance what we are fed with what we hunt.
But the biggest flaw with rankings of this kind is that they do not take into consideration why buyers make particular choices. Experience teaches me that most software purchases are irrational. The demos have been done, the slide decks viewed, the due diligence undertaken, and the contract negotiations are going the way most negotiations go. But the final choice is rarely one that can be objectively viewed as sound. Analysts will argue that their views are not tainted by the need to be a buyer, but analysts also need to understand that buyers do have choices and that what fits one buyer is not necessarily a good fit for another.
In that context, I recall a recent conversation with a buyer who selected one of HfS's top three because it could be bought with a credit card. Her other possible choice appeared to be functionally richer but was difficult to buy. I get that trade-off (I've made it myself), but was the decision objectively sound? Time will tell.
A missed trick?
The HfS Top 10 RPA ranking is a good first stab at this type of exercise, especially with the addition of 'voice of the customer' coupled with an explanation that encapsulates the offering in fewer than 140 characters. The Top 10 also explains why certain vendors were excluded, but I think there's a missed trick here that will appeal to Fersht.
Fersht loves English soccer and is a rabid Tottenham Hotspur fan. Given that particular predilection, HfS might have considered a set of league tables that mirror the Premier League, Division 1 and possibly Division 2. Taking that approach, vendors that miss out on the Premier League get a shot at ascending the ladder as their offerings improve, while buyers are alerted to potential rising stars.
I see that approach as valuable because when buying cycles run three, six or nine months, there is every possibility in a fast-paced world that a vendor who didn't make the Top 10 cut today might rapidly emerge as a solid competitor at a later date. Equally, in emerging markets, it is never easy to be certain that the top picks of today are sufficiently stable to maintain their high ranking.
Should HfS choose this route, it will be expensive to operate but will ensure that all the analysis gets aired.
Whether analysts or media, we all like to think we're good at spotting the Next Big Thing, but there is an ongoing herd mentality attached to that. HfS has 'broken ranks' in that regard and should be applauded for being brave enough to offer an alternative approach with significant potential to help buyers make better decisions.
But even they must know that despite a clear desire to step away from vanity metrics and 'pay to play,' the top-ranked vendors will pimp the crap out of this type of report and perhaps more so than they might a Magic Quadrant. Others that are less well ranked will likely bitch and moan, possibly seeking to restrict access to executives, customers and marketing dollars as they seethe with resentment. More fool them.
The good news is that I know this decision has been long in the making. It wasn't something Fersht dreamt up overnight; it has been shaping up over a long period. Is his timing right? We shall see.