If you go to Sales for win/loss information, you will typically find that they are more than happy to talk about their wins - and how their brilliance achieved them. A lost deal, however, is something they are less willing to discuss and, even when they do talk about lost opportunities, the reason given is often the product or the pricing (and never them). This is not to say that product managers shouldn't talk to Sales about wins and losses...just listen to the responses with a healthy amount of skepticism. Sales are an excellent source of information about competitors' selling tactics, as they have to deal with them every day, but I am reluctant to use Sales as the primary source of win/loss information.
Of course, you can, and should, talk to customers and prospects about why they selected (or didn't select) your product. Just be aware that both customers and prospects can give misleading information that will significantly skew your win/loss analysis results. Consider the situation with a customer, i.e. a 'win'...the customer wants to build a good relationship with their supplier - you and your peers in Sales, Consulting, Operations, etc. They will most likely be full of praise for your product and your sales process - they don't want to upset people by being too critical. They also want to justify their decision to buy your product, so they will likely be someone who has been "drinking the Kool-Aid".
Similarly, prospects (losses) will be looking to justify their decision not to buy your product. For example, how many prospects would say "we were never going to buy your product...our purchasing process required us to have three tenders and you were always column fodder"? In many cases this might well be the truth, but the prospect will never admit to it. They will cite some plausible reason, such as price or missing features, and this misinformation will be reflected in your win/loss analysis data. I remember discussing a lost deal with a sales person who told me that we had lost a multi-million dollar deal because "our XML Editor was inferior to the competition". This is what the prospect had told him - he believed it and duly reported it as the reason for losing the deal. In reality, the XML Editor was a very minor piece of functionality in the product suite and, while it might not have been perfect, it was functional. If the prospect had really been serious about this particular feature, they could have used a $30 shareware product to complement our product suite. The truth was that one of our leading competitors was the incumbent vendor with this prospect, who was always going to buy the competing product from the incumbent. We were column fodder, but the prospect couldn't admit to the sales person that he had spent six months working on a deal he had no chance of winning.
So, how can you overcome this 'information bias'? One option is to use a third party to perform the win/loss interviews. If the third party does not disclose who they are working for, they remove a large part of the bias. Third parties posing as research analysts also have the credibility of being viewed as independent, again leading to less bias in the interview responses. I have used a third-party researcher to perform win/loss analysis and the results were very impressive. They managed to elicit information from both customers (wins) and prospects (losses) that we, as the product managers, would never have been told. The interview transcripts were just as valuable as the win/loss analysis results, as the buyers being interviewed were very candid about the products they reviewed and the sales process of all the vendors (some of the comments made me want to go and slap the sales people around the head). This was the type of feedback that we would never have got ourselves. However, third-party consultants are expensive - a few years ago the cost was $5,000 per buyer interview and the consultant estimated that we would need somewhere between 30 and 50 interviews to make the result statistically significant. That's a lot of money for win/loss analysis and a figure that will be difficult to justify in many companies.
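To make "a lot of money" concrete, here is a quick back-of-the-envelope calculation using the figures above ($5,000 per interview, 30-50 interviews) - a simple sketch, not a quote from any particular consultancy:

```python
# Back-of-the-envelope cost for a third-party win/loss interview campaign,
# using the figures mentioned in the text: $5,000 per buyer interview,
# and an estimated 30-50 interviews for statistical significance.
COST_PER_INTERVIEW = 5_000

def campaign_cost(num_interviews: int) -> int:
    """Total cost of a win/loss interview campaign in dollars."""
    return num_interviews * COST_PER_INTERVIEW

low = campaign_cost(30)   # 150,000
high = campaign_cost(50)  # 250,000
print(f"Estimated cost: ${low:,} - ${high:,}")
```

That works out to $150,000-$250,000 for a single round of win/loss analysis, which explains why this approach is a hard sell in most companies.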
Another alternative is to use online survey software (e.g. Survey Monkey) to gather the feedback and then follow up the more interesting survey responses with in-person interviews. Using a tool such as Survey Monkey allows you to hide your identity, i.e. the person responding to the survey does not know who created it. This helps to remove the information bias that I talked about earlier. However, online surveys have a couple of disadvantages:
- The questions are static, i.e. you cannot drill down further into a specific area based on a previous response. If you are interviewing a live person, you can focus on areas of interest that come up in the discussion; you can't do this with an online survey. This means that follow-up interviews will be required to elicit more detail (i.e. the 'reading between the lines' type of information).
- Online surveys generally have a low response rate. Even if you have targeted the survey invites very well, you will likely need to send out hundreds of invites to gather a statistically significant number of responses.
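The invite-volume arithmetic behind that second point can be sketched as follows. The 10% response rate below is an illustrative assumption (survey response rates vary widely), not a figure from my own surveys:

```python
import math

def invites_needed(target_responses: int, response_rate: float) -> int:
    """Invites to send to expect a given number of completed surveys.

    response_rate is a fraction, e.g. 0.10 for 10% (assumed, illustrative).
    """
    return math.ceil(target_responses / response_rate)

# With an assumed 10% response rate, hitting the 50 responses a consultant
# might consider statistically significant means sending 500 invites.
print(invites_needed(50, 0.10))
```

In other words, even a well-targeted survey list needs to be roughly an order of magnitude larger than the number of responses you actually want to analyse.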
The key to making online win/loss surveys effective is the questions you ask. More about this in another post...