Intro

To understand customer sentiment toward a product and respond accordingly, businesses frequently visit major retailer websites (e.g., Amazon, Best Buy) to read customer reviews and gauge the general consensus. Unfortunately, many online retailers don’t present customer reviews in a way that is quick to digest: reading reviews one at a time never yields a comprehensive picture of a product’s quality, and it demands constant effort from the user to reach the review content they care about. Sentient was my solution to this problem space, intended for users in the global supply chain department at HP.

The Users

There were two target user groups for this project: Supply Chain Engineers and Business Retailers. Supply Chain Engineers provide detailed reports on product performance to the business side and help develop solutions to improve product forecasting accuracy. Business Retailers make informed product forecasting decisions based on those reports and analyses.

The Design

After analyzing my initial interviews and the requirements from the client, it became clear that I needed to provide a “big-picture” understanding of all reviews for a single product. I also needed to bring more transparency to the distinction between reliable and unreliable reviews, and to make any single review more accessible by requiring less scrolling from the user.

I began sketching concepts focused on these aspects, discussed the drawings with the client, and iterated on the design in Adobe Illustrator to give a clearer picture of the user interface.

The first approach was a bubble chart split by star rating, where each word bubble would be sized by its frequency of occurrence and colored by sentiment.
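The bubble-sizing idea boils down to counting how often each meaningful word appears across a product’s reviews. A minimal sketch of that step, with a simplified tokenizer and an illustrative stop-word list (neither is from the actual Sentient implementation):

```python
from collections import Counter

# Illustrative stop-word list; a real pipeline would use a fuller one.
STOP_WORDS = {"the", "a", "an", "and", "is", "it", "this", "to"}

def word_frequencies(review_texts):
    """Count word occurrences across reviews; counts drive bubble size."""
    words = []
    for text in review_texts:
        for token in text.lower().split():
            token = token.strip(".,!?")          # drop trailing punctuation
            if token and token not in STOP_WORDS:
                words.append(token)
    return Counter(words)

counts = word_frequencies(["Great printer, great value.", "The printer jams."])
# "great" and "printer" each occur twice, so their bubbles render largest.
```

Each word’s count would then scale its bubble radius, with a separate sentiment score determining its color.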

The second approach was a bar chart design in which the bars were composed of interactive, individual reviews. When the user clicks an individual review, its detailed content appears to the right. A contextual word cloud above the bar chart provides more insight into the sentiment of what’s being viewed.

After conducting additional user interviews, an overwhelming majority preferred the node bar chart. It was seen as providing a better bird’s-eye view of the dataset, in addition to its potential for advanced filters. As a result, the node bar chart became the base visualization for the design. In the final design (below), the color of each node is determined by the review’s helpfulness rating. This visual encoding was designed to counteract reviewers who don’t actually own the product, reviewers who are “fanboys” of a particular brand, and reviewers who deliberately misrepresent companies and brands they personally dislike. These reviews commonly receive a low helpfulness rating, since readers will typically downvote them.
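The node-color encoding can be sketched as a simple mapping from a review’s helpfulness ratio to a color bucket. The thresholds and color names here are illustrative assumptions, not the actual values used in Sentient:

```python
def node_color(helpful_votes, total_votes):
    """Map a review's helpfulness ratio to an illustrative color bucket."""
    if total_votes == 0:
        return "gray"      # no votes yet, so no helpfulness signal
    ratio = helpful_votes / total_votes
    if ratio >= 0.5:
        return "green"     # most readers found the review helpful
    return "red"           # heavily downvoted, likely unreliable

node_color(9, 12)   # mostly upvoted review -> "green"
node_color(1, 15)   # mostly downvoted review -> "red"
```

A production version might use a continuous color scale rather than discrete buckets, but the principle is the same: the crowd’s helpfulness votes drive the visual weight of each node.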

Filtering reviews by their helpfulness rating can reveal a much more realistic sentiment toward a product. In the result below, after filtering out reviews with a helpfulness rating below 50% (considering only reviews rated by at least 10 people), a more reliable perception of the product emerges.
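The filter described above can be expressed in a few lines. This is a minimal sketch assuming hypothetical review records with `helpful_votes` and `total_votes` fields; the field names and sample data are illustrative, not from the actual dataset:

```python
# Hypothetical review records; fields are illustrative assumptions.
reviews = [
    {"stars": 5, "helpful_votes": 2,  "total_votes": 3,  "text": "Amazing!!"},
    {"stars": 2, "helpful_votes": 9,  "total_votes": 12, "text": "Paper jams often."},
    {"stars": 1, "helpful_votes": 1,  "total_votes": 15, "text": "Worst brand ever."},
    {"stars": 4, "helpful_votes": 30, "total_votes": 34, "text": "Solid printer."},
]

MIN_VOTES = 10         # only trust the ratio once enough readers have voted
MIN_HELPFULNESS = 0.5  # keep reviews at least 50% of readers found helpful

def is_reliable(review):
    """Keep only reviews with enough votes and a helpfulness ratio >= 50%."""
    if review["total_votes"] < MIN_VOTES:
        return False
    return review["helpful_votes"] / review["total_votes"] >= MIN_HELPFULNESS

reliable = [r for r in reviews if is_reliable(r)]
# Only the two well-vetted reviews survive the filter.
```

The minimum-vote threshold matters: a review with one helpful vote out of one total would otherwise score a perfect 100% on no real evidence.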

Initial user testing also made it clear that the ability to compare reviews across two products would fit users’ workflows even better. The alternate view (below) displays a simplified visualization, given the smaller screen real estate with two products side by side, while still preserving the ability to see the helpfulness percentage ratio.

Conclusion

Overall, this has been a successful first step toward improving customer review transparency, and information visualization has helped end users understand their data at a glance. This can help companies not only respond quickly to customer criticism, but also develop higher quality products based on more reliable sources.