SAS 2014 continued to push past its origins as a sentiment-only conference to the cutting edge of what organizer Seth Grimes calls “human analytics” and what I’ve dubbed “humetrics.” Whatever it’s called, attendees seemed to be operating with the knowledge that capturing and analyzing social data – what people do, say, and feel – goes well beyond the traditional and impersonal demographics, participation rates, or transaction volumes that tell us “what” but not “why.”
Here are the highlights from my perspective on understanding people:
- Talks on emotions, intentions, and motivations. Dr. Rosalind Picard of MIT, the founder of affective computing, described her lab’s research on recognizing emotions, which can eventually help people with autism, social phobias, and other conditions read others and interact more successfully. Aloke Guha of Cruxly outlined a system to detect people’s intentions and sentiment in near real time. David Rabjohns of MotiveQuest showed how mapping human motivations can help brands position themselves better. Beyond Verbal’s presentation on emotions in speech illuminated a way forward.
- Engagement. Marie Wallace of IBM outlined an approach to “engagement analytics,” describing ways to understand the engagement of an enterprise’s workforce, and ways for workers in the company to appreciate their social standing. I’m intrigued but also skeptical: I’m not sure that people in companies should be rated on their contributions, sharing, and ratings by others.
- Social ROI through Enhanced Text Analytics. Dell Software presented its system for tracking conversations across a variety of channels. What struck me was how straightforward it was, how its taxonomies were based on consumer language, and how the system allowed analysts to drill down or roll up as needed.
- Insider’s Guide to Social Media Analysis: This 3.5-hour workshop focused on how to think about metrics, stressing the importance of having a social media “theory” and a measurement framework, and of fitting metrics to it. Attendees from companies like Dell gave it a big thumbs up because it reinforced the value of a framework; one agency attendee said, “I wish I had attended this a year ago, before we committed to a measurement plan for an important client.” (Full disclosure: I gave the workshop. I’m not one to self-promote; I’m reporting this because I was truly happy that so many attendees derived value from it.)
Really looking forward to the 2015 edition.
A very insightful post by Seth Grimes identifies four missing measures:
- sentiment classified in ways that are meaningful to business, such as “promoter/detractor,” “satisfied/disappointed,” “happy/sad/angry,” or whatever is relevant for your needs. Grimes points out that flexible automated methods, and expert or crowd-sourced analysis, are up to the task. Netvibes, for example, offers a very sophisticated capability that allows in-house analysts to classify sentiments using any terms that make sense, such as “threat,” “opportunity,” “product improvement,” “unmet need,” and so on.
- sentiment density – does the post use just a few feeling words, or is it packed with words that indicate strong feeling;
- variation – the dispersion of words around an idea; and
- volatility – a measure of the variation of sentiment over time. Seth isn’t saying that these measures are not being used today – they are – but rather that they are not used broadly enough. Too many sentiment analyses leave it at the positive/negative/neutral level.
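A minimal sketch of how two of these measures might be computed. The lexicon, function names, and formulas here are my own illustrative assumptions, not Grimes’s definitions:

```python
from statistics import pstdev

# Toy feeling lexicon -- a real system would use a full sentiment dictionary.
FEELING_WORDS = {"love", "hate", "awful", "great", "angry", "happy"}

def sentiment_density(post: str) -> float:
    """Share of a post's words that carry feeling (0.0 to 1.0)."""
    words = post.lower().split()
    return sum(w in FEELING_WORDS for w in words) / len(words) if words else 0.0

def volatility(scores: list[float]) -> float:
    """Dispersion of sentiment scores over time (population std. dev.)."""
    return pstdev(scores)

density = sentiment_density("I love this phone but hate the battery")  # 2 of 8 words
swing = volatility([0.8, 0.7, -0.4, 0.9, -0.6])  # big day-to-day swings
```

A dense, volatile conversation stream is a very different signal from a sparse, steady one, even if both average out to the same positive/negative score.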
I especially appreciated Seth’s final point, not a metric but an exploratory process called “root cause analysis.” That gets us to understanding the “why” underlying the “what” reported by your metrics and indicators.
Ratings we assign are influenced by other ratings that we read. New research conducted by Sinan Aral and colleagues at MIT’s Sloan School found that:
“When it comes to online ratings, our herd instincts combine with our susceptibility to positive ‘social influence.’ When we see that other people have appreciated a certain book, enjoyed a hotel or restaurant or liked a particular doctor — and rewarded them with a high online rating — this can cause us to feel the same positive feelings about the book, hotel, restaurant or doctor and to likewise provide a similarly high online rating.”
This important finding was discovered too late to be included in the Field Guide’s entry on “Influencers.” That discussion pointed out the importance of the herd model and urged that it be considered along with the widely adopted Influencer-Follower (two-step) model. The influencer two-step is very popular because: a) it conforms to the conventional mental models we evoke to explain how advertising works (authority, message, persuasion), b) because measures of influence, such as Klout scores, are computed in line with the two-step model — using social media counts such as posts/updates, number of friends/followers/contacts, and sharing, and c) herd influence has been under-recognized.
Despite books and articles on herd instincts in marketing, knowledge of herd instincts and their applicability to the work we do is not yet widespread among practitioners. Adding to their invisibility: herd-instinct measures are not reported by measurement services, so most of us are unaware of the herd notion; herd measures are not easily derived from social media metrics; and methods for researching herd instincts scientifically in marketing and advertising are not in the market research toolkit. This study changes that at last … and to our benefit.
Key Implication: Brands should oversee their ratings sections to minimize fraudulent positive ratings. Those “false positives” can create unrealistic expectations. Instead, encourage people to record authentic ratings that minimize bandwagon effects and foster realistic expectations about the brand. Ratings may then generate better guidance to other readers and to the brand itself.
The study had three experimental conditions: one where an initial rating was artificially raised, one where it was artificially lowered, and a control with no manipulation. Read on for the five key findings … Continue reading “I’ll Rate as They Rate: Herd Instincts, Online Influence and the Problem of Online Ratings”
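The herding mechanism behind those three conditions can be sketched with a toy simulation. The parameters here – the herd weight, the opinion distribution, the running-mean display – are invented for illustration and are not taken from the Aral study:

```python
import random

def simulate(manipulation: float, readers: int = 1000,
             herd_weight: float = 0.3, seed: int = 1) -> float:
    """Mean final vote when the first visible rating is nudged up, down, or not at all."""
    rng = random.Random(seed)
    visible = manipulation            # +1 up-treatment, -1 down-treatment, 0 control
    total = 0.0
    for i in range(1, readers + 1):
        private = rng.gauss(0.0, 1.0)                     # reader's own opinion
        vote = (1 - herd_weight) * private + herd_weight * visible
        total += vote
        visible = total / i                               # running mean others see
    return total / readers

mean_up = simulate(+1.0)
mean_control = simulate(0.0)
mean_down = simulate(-1.0)
```

Because each reader blends a private opinion with the rating already on display, an early artificial nudge compounds through the running mean – the same bandwagon dynamic the study manipulated experimentally.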