Companies increasingly cite user experience as a primary pillar of their growth and success models. But like any core competency, implementation involves much more than adding a phase gate to the product-design flowchart.
User experience is a capacious concept, encompassing every customer interaction and touch point, from the first media or store-shelf impression to the final moment of discarding the product or completing the service. Defining the full scope would require volumes of text. For physical products, however, as well as for software and cloud-based solutions, a key phase of customer-experience evaluation is beta testing. Beta testing is the first time that non-company customers have a chance to touch, feel, and decipher the product. Done properly, beta testing is tedious and slower than the project team would like. Done properly, it is also enlightening, highlighting real opportunities for measurable improvement.
Before listing the DOs, one DO NOT must be mentioned: DO NOT send out 50 products and then ask participants to complete an online survey. That is feedback, not beta testing. Feedback is a viable tool, but beta testing is a hands-on process for benchmarking the customer experience. Key points of beta testing are:
Witness. Project team members must be onsite to observe every aspect of the beta-test customer experience firsthand. The reasoning is simple but important: customers rarely report failures of intermediate steps; instead, they blame themselves. If a step takes three attempts to succeed, testers will not report it, as long as the end result works. In reality, that scenario is an enormous failure, one that will drive support calls and derail the customer experience in mass production. The onsite witness must recognize and document any confusion the customer shows, and ask probing questions around it, so the design team can identify possible improvements.
Regional Testing. Customers in Bangor have different experiences and viewpoints from customers in New Orleans or Phoenix. If a product or service is to be sold nationwide, then perform user-experience testing nationwide.
Demographic Variability. The product is designed for a particular market and age range, say 25–54. Conduct 10 percent of your tests outside that target range. Yes, this group will offer feedback that does not apply. But it will also preview shortcomings your customers may report later, as your target group shifts over time. Such data lets you document concise contingency plans for quick implementation, should the findings prove true.
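The 10 percent rule above can be sketched as a simple slot allocation. The helper name, the age band, and the tester counts are illustrative assumptions, not prescriptions from the text:

```python
import math

# Hypothetical allocation of beta-test slots: reserve roughly 10%
# for testers outside the target demographic (e.g., outside 25-54).
def allocate_testers(total, out_of_range_share=0.10):
    """Split beta testers between the target demographic and outsiders."""
    outside = math.ceil(total * out_of_range_share)  # round up: always at least one
    return {"target_range": total - outside, "outside_range": outside}

# Example: a 50-unit beta run yields 45 in-range and 5 out-of-range testers.
plan = allocate_testers(50)
```

Rounding up guarantees that even a small beta run includes at least one out-of-range participant.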
Edge-Case Conditions. Every product has limits: memory usage, weight supported, battery life, and wireless range. Work with your customers to create field scenarios that simulate the user experience near those performance limits. The limits will certainly be encountered after launch, and usually sooner rather than later. The user experience at the limits can be managed gracefully if it is properly understood and addressed before launch.
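For software products, the edge-case scenarios above translate naturally into boundary tests that probe just below, at, and just above each limit. This is a minimal sketch; the limit names and values are hypothetical and would come from the actual product specification:

```python
# Hypothetical published limits for a product under beta test.
LIMITS = {
    "payload_bytes": 1_048_576,  # maximum message size
    "battery_pct": 5,            # low-battery warning threshold
    "range_m": 30,               # advertised wireless range
}

def near_limit_values(limit, fractions=(0.9, 0.99, 1.0, 1.01)):
    """Generate test points just below, at, and just above a limit."""
    return [limit * f for f in fractions]

# Build a test matrix: every limit is probed on both sides of its
# boundary, so the beta plan exercises graceful degradation, not
# just nominal operation.
test_matrix = {name: near_limit_values(value) for name, value in LIMITS.items()}
```

The point of the 1.01 factor is deliberate: beta scenarios should include mild over-limit use, because real customers will exceed the specification and the product's behavior there is part of the user experience.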
Follow-Up. Approximately 75 percent of findings will surface during the first few days of user-experience testing. But the project team must revisit customers two to three weeks after the initial review. Is the product still functional? What has been better than expected? What has been worse? Most importantly, does the product effectively serve its intended purpose?
User-experience verification demands a healthy blend of rigorous planning and active observation, much more than a dispassionate survey. It is truly a high-touch process, working side by side with customers every step of the way.