This is the fifth and final part of the broader series on the “10 Rules of Innovation Management,” which has been brimming with best practices for managing an online, collaborative innovation management program, whether you’re already well on your way or are just starting to think about launching a program. In our last post, I shared my views on communication and transparency. To wrap things up, we’re moving onto evaluation and recognition. We’ll look into how to evaluate different campaign scenarios and why and how to recognize top participants.
If you’re new to the series, you can catch up on the 10 Rules here:
- Part 1: Alignment, Management Support, and Sponsorship (Rules 1-3)
- Part 2: Catching Ideas With Targeted Campaigns (Rule 4)
- Part 3: Seeding and Advocates (Rules 5 & 6)
- Part 4: Communication and Transparency (Rules 7 & 8)
Now that you’re up to speed, let’s jump into Rule 9 on evaluation!
9. ALIGN YOUR EVALUATION PROCESS WITH THE CAMPAIGN GOALS AND APPROACH
If not communicated properly, the evaluation stage can be a black box for people submitting ideas. It can also be a massive pain in the butt for evaluators when it's not well prepared. And if you didn't set the right criteria to evaluate against, you won't end up with the proper selection of ideas to solve your campaign challenge. Below are some pointers to make your evaluation process as smooth as possible.
While shaping the campaign’s focus, make sure to also align on the meeting where the sponsor will choose from the top ideas and decide on next steps. Who needs to attend, how many ideas can be discussed, what’s the expected quality level and preferred pitching method? Once all this is clear, you can work backward and design your evaluation process. Based on the estimated number of idea submissions, you can decide what evaluation methods to use, who should be involved, and even schedule the meeting in their calendars.
Go from many to few
When inviting all your employees to an idea generation campaign, you shouldn't be surprised to receive hundreds of submissions. And that's a good thing; as Linus Pauling, renowned chemist and the only person to win two unshared Nobel Prizes, said, "The best way to have good ideas is to have a lot of ideas." But after this divergent thinking, there will come a time when you need to narrow this down to a manageable set of ideas and focus on the most promising ones.
For big bulks of ideas, you can use community graduation, which allows your crowd to make an initial selection based on the number of visitors, votes, comments, etc. Be aware that your crowd needs to be sufficiently knowledgeable and not biased about the topic.
Another initial screening method is a triage evaluation, where you ask your evaluator group just one simple question: "Would you like to take this idea forward?" If a clear majority agrees either way, you can follow their verdict. With only a slim majority, I'd recommend assessing the idea further before making a final decision.
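To make the triage rule concrete, here's a minimal sketch. The function name, vote format, and the 70% "clear majority" threshold are all illustrative assumptions, not part of any specific platform:

```python
def triage(votes, clear_majority=0.7):
    """Classify an idea from a list of boolean evaluator votes.

    Returns "advance" or "drop" when a clear majority agrees either way,
    and "assess further" when the split is too close to call.
    """
    share_in_favor = sum(votes) / len(votes)
    if share_in_favor >= clear_majority:
        return "advance"
    if share_in_favor <= 1 - clear_majority:
        return "drop"
    return "assess further"

print(triage([True, True, True, True, False]))   # 80% in favor -> advance
print(triage([True, False, True, False, True]))  # 60% in favor -> assess further
```

The key design choice is the middle band: rather than forcing a yes/no on a near-even split, those ideas are flagged for the deeper assessment recommended above.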
Because you don’t receive qualitative feedback from community graduation or triages, you must be clear and upfront with your crowd that not all ideas will receive elaborate feedback. People will understand if you use this for managing vast amounts of ideas.
Once you’ve narrowed it down to a more manageable set of ideas, you should run a more qualitative evaluation session by using scorecards. Create bubble charts or radar chart visualizations to support the decision-making by the campaign sponsor. Invite all decision-makers and use methods like dot stickering to vote for the best ideas or bring innovators and decision-makers together in a “Dragons’ Den” or “Shark Tank”-like setting.
Run iterative evaluation sessions
Following Eric Ries’ “Build, Measure, Learn” cycle, I advise running iterative evaluation sessions as well. Make sure that you learn from every evaluation round and use that to improve the idea before the next one. In the qualitative evaluations, you can ask what the next step might be and who should be involved to mature the idea. This should give you sufficient input to design the next iteration and grow the idea. Especially when I used “Dragons’ Den”-style decision meetings, I always saw significant idea improvement in the later stages.
Use the right metrics
The later in your evaluation process, the more specific your metrics can and should become. In the early evaluation rounds, you'll simply have too many ideas to examine in depth; later on, you need a more detailed understanding of the evaluators' opinions so you can elaborate on the most important elements of the ideas and make the right decisions. As coined by IDEO, a global design and innovation company, the innovation "sweet spot" is where an idea is "desirable, feasible, and viable." These metrics are relevant in every evaluation iteration, but the further along you are, the more specifically you'll address and investigate them.
Apart from these general assessment criteria, align your evaluation metrics with your campaign topic and question. If you're exploring a new technical development, you'll need to ask about technical feasibility; if you're innovating an internal process, you'll assess desirability with internal end-users rather than with potential customers, as you would for a new product or service. Align with your innovation goals as well: you can be far more specific for incremental innovations than for radical ones, because so much is still unclear for the latter at this stage.
10. RECOGNIZE ALL PEOPLE THAT ADD VALUE TO YOUR PROGRAM
Recognition is a vital element of your innovation management program. You're asking people to do something on top of their day jobs, to share their thoughts on another department's challenge, and to use (innovation) methods that they're most likely not yet fully familiar with. Entice your employees to participate by showing upfront what's in it for them. But carefully consider which behaviors you'd like to see, and recognize exactly those. If you recognize people for the number of ideas they submit, you'll get a lot of ideas, but not necessarily high-quality ones, and this approach does nothing to encourage the online collaboration that improves submissions. It's better to recognize the number of (constructive) comments and selected ideas.
Although some cultures seem to prefer monetary awards, I recommend using non-monetary honors to recognize your top campaign participants. The study “Effects of Externally Mediated Rewards on Intrinsic Motivation” from the Journal of Personality and Social Psychology (1971) covers different experiments done to study how external rewards influence the intrinsic motivation to perform an activity. “The results indicate that (a) when money was used as an external reward, intrinsic motivation tended to decrease, whereas (b) when verbal reinforcement and positive feedback were used, intrinsic motivation tended to increase.”
Author Daniel Pink combined these findings with many other related studies for his book “Drive” (2009) and indicated that monetary awards harm tasks that require some conceptual, creative thinking. He also concludes that autonomy, mastery, and purpose lead to better performance and personal satisfaction.
Based on the above and my insights from various collaborative innovation programs, I advise mainly using events to recognize the top participants. Have senior management publicly hand out certificates or awards, and make sure everyone else in the organization is aware. Offer innovators the opportunity to be involved in the next steps of realizing the idea, or allow them to pitch their idea directly to senior management. Innovators seem to appreciate the opportunity to protect the idea from changing over time, and it gives them new experiences and skills – which directly relates to Pink's mastery.
My final piece of advice on recognition is to incentivize not only idea submitters but everyone who adds value. Ideas start as mere hunches that need input from others to grow into something that can be decided upon and implemented. So also award those who helped develop an idea, evaluators who identified risks and opportunities, and people who promote your program in general. Make it a fun and rewarding process for all involved, because these programs cannot exist without enthusiastic people!
- Plan your evaluation upfront and align with your campaign’s goals
- Be ready for a staged, iterative evaluation to cope with high numbers of ideas
- Recognize all people who add value to your program
Although idea evaluations only take place at the end of your innovation campaigns, make sure you organize them upfront. Based on the idea inflow, you might make some changes along the way, but this preparation allows you to inform stakeholders early on, smoothing execution and creating trust. Then – when you have decided on the best ideas – it's also time to recognize the top participants who played a key role in getting to those best ideas. Don't just acknowledge those who submitted the selected ideas; give props to everybody who helped you get there. You want people to innovate collaboratively, so practice what you preach!
So, there you have it; my views on the 10 areas that need to be in place correctly to collaboratively innovate with your employees online. I hope you enjoyed the read and found useful pointers to make your program a success.
To celebrate the "10 Rules of Innovation Management" series, HYPE Innovation and I worked together to launch the official 10 Rules of Innovation Management interactive e-book! The e-book combines all the content from the 10 Rules series, including blogs and webinars, as well as additional content. Check it out, and happy innovating!