Improving Sentiment: Adopt Four “Missing Measures” and Use Root Cause Analysis

A very insightful post by Seth Grimes identifies four missing measures:

  1. Sentiment classified in ways that are meaningful to the business, such as “promoter/detractor,” “satisfied/disappointed,” “happy/sad/angry,” or whatever categories are relevant for your needs. Grimes points out that flexible automated methods and expert or crowd-sourced analysis are up to the task. Netvibes, for example, offers a very sophisticated capability that allows in-house analysts to classify sentiment using any terms that make sense, such as “threat,” “opportunity,” “product improvement,” “unmet need,” and so on.
  2. Sentiment density – does the post use only a few sentiment-bearing words, or is it packed with words that convey a lot of feeling?
  3. Variation – the dispersion of sentiment words around an idea.
  4. Volatility – a measure of the variation of sentiment over time.

Seth isn’t saying that these measures are not being used today – they are; rather, it’s that they are not used broadly enough. Too many sentiment analyses leave it at the positive/negative/neutral level. A minimal sketch of how the last three measures might be computed appears below.
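
To make the last three measures concrete, here is a minimal Python sketch. It assumes posts have already been tokenized and each token scored in [-1, 1] by some sentiment lexicon; the sample data, the scores, and the day-level grouping are invented for illustration and are not taken from Grimes’s post.

```python
from statistics import mean, pstdev

# Hypothetical post records: (day, tokens, per-token sentiment scores in [-1, 1]).
# The data and scores below are placeholders for illustration only.
posts = [
    (1, ["love", "this", "phone"],                [0.9, 0.0, 0.0]),
    (1, ["battery", "dies", "too", "fast"],       [0.0, -0.7, -0.2, 0.0]),
    (2, ["great", "camera", "awful", "support"],  [0.8, 0.0, -0.8, -0.3]),
]

def density(scores):
    """Sentiment density: share of tokens that carry any sentiment at all."""
    return sum(1 for s in scores if s != 0) / len(scores)

def variation(scores):
    """Variation: dispersion of sentiment within a single post (population std. dev.)."""
    return pstdev(scores) if len(scores) > 1 else 0.0

def volatility(posts):
    """Volatility: dispersion of mean daily sentiment across days."""
    by_day = {}
    for day, _tokens, scores in posts:
        by_day.setdefault(day, []).append(mean(scores))
    daily_means = [mean(day_scores) for day_scores in by_day.values()]
    return pstdev(daily_means) if len(daily_means) > 1 else 0.0

for day, tokens, scores in posts:
    print(day, tokens, round(density(scores), 2), round(variation(scores), 2))
print("volatility:", round(volatility(posts), 3))
```

Standard deviation is used here only as a convenient stand-in for “dispersion”; any spread statistic would serve the same illustrative purpose.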

I especially appreciated Seth’s final point, which is not a metric but an exploratory process called “root cause analysis.” It gets us to the “why” underlying the “what” reported by your metrics and indicators.

I’ll Rate as They Rate: Herd Instincts, Online Influence and the Problem of Online Ratings

The ratings we assign are influenced by the ratings we read. New research conducted by Sinan Aral and colleagues at MIT’s Sloan School found that:

“When it comes to online ratings, our herd instincts combine with our susceptibility to positive ‘social influence.’ When we see that other people have appreciated a certain book, enjoyed a hotel or restaurant or liked a particular doctor — and rewarded them with a high online rating — this can cause us to feel the same positive feelings about the book, hotel, restaurant or doctor and to likewise provide a similarly high online rating.”

This important finding was discovered too late to be included in the Field Guide’s entry on “Influencers.” That discussion pointed out the importance of the herd model and urged that it be considered alongside the widely adopted Influencer-Follower (two-step) model. The influencer two-step is very popular because: a) it conforms to the conventional mental models we evoke to explain how advertising works (authority, message, persuasion); b) measures of influence, such as Klout scores, are computed in line with the two-step model, using social media counts such as posts/updates, number of friends/followers/contacts, and shares; and c) herd influence has been under-recognized.
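
As a concrete illustration of that kind of count-based scoring, here is a toy influence score. The fields and weights are invented for the example; this is not Klout’s actual method or any vendor’s formula.

```python
from dataclasses import dataclass

@dataclass
class Account:
    posts: int      # updates published
    followers: int  # friends/followers/contacts
    shares: int     # times this account's content was shared

def naive_influence_score(a: Account) -> float:
    """Toy two-step-style influence score built only from activity counts.

    Weights are invented for illustration. Note what the score ignores:
    whether anyone actually changes behavior, and any herd dynamics
    among the audience.
    """
    return 0.2 * a.posts + 0.5 * a.followers + 0.3 * a.shares

# Example: a mid-sized account
print(naive_influence_score(Account(posts=120, followers=5000, shares=340)))
```

The point is not the particular weights but that such scores are built purely from counts, which is exactly why they miss the herd dynamics described above.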

Despite books and articles on herd instincts in marketing, the concept and its applicability to the work we do are not yet widely known among practitioners. Adding to its invisibility: herd measures are not reported by measurement services, so most of us never encounter the notion; they are not easily derived from standard social media metrics; and methods for researching herd instincts scientifically in marketing and advertising are not in the market research toolkit. This study changes that at last … and to our benefit.

Key Implication: Brands should oversee their ratings sections to minimize fraudulent positive ratings. Those “false positives” can create unrealistic expectations. Instead, encourage people to record authentic ratings that minimize bandwagon effects and foster realistic expectations about the brand. Ratings may then generate better guidance to other readers and to the brand itself.

The study had three experimental conditions: one in which an item’s initial rating was artificially nudged upward, one in which it was nudged downward, and a control with no manipulation. Read on for the five key findings …
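
To make that design concrete, here is a minimal simulation sketch of how a seeded up-vote or down-vote could propagate through later raters. Everything in it is invented for illustration: the herd_strength parameter, the copy-the-crowd rule, and the 50/50 independent opinions are assumptions, not the researchers’ model, code, or data.

```python
import random

def simulate_item(initial_treatment, n_raters=1000, herd_strength=0.3, seed=0):
    """Simulate cumulative up/down votes on a single item.

    initial_treatment: +1 (seeded up-vote), -1 (seeded down-vote), 0 (control).
    herd_strength is the assumed probability that a rater copies the crowd's
    current leaning instead of voting independently. All parameters are
    invented for illustration; they are not the study's model or results.
    """
    rng = random.Random(seed)
    score = initial_treatment  # running vote total visible to later raters
    for _ in range(n_raters):
        if score != 0 and rng.random() < herd_strength:
            vote = 1 if score > 0 else -1           # follow the herd
        else:
            vote = 1 if rng.random() < 0.5 else -1  # independent 50/50 opinion
        score += vote
    return score / n_raters  # mean vote per rater, roughly in [-1, 1]

for label, treatment in [("up-treated", +1), ("down-treated", -1), ("control", 0)]:
    print(label, round(simulate_item(treatment), 3))
```

Comparing the three conditions over many seeds gives a simulated analogue of the bandwagon effect noted above: a single early positive signal can pull the final average upward, which is exactly the distortion the Key Implication warns brands to watch for.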