The Shift from a Thinking to Feeling Economy

The main point of the book "The Feeling Economy" by Roland T. Rust and Ming-Hui Huang is that we’ve gone from a Physical Economy (manufacturing) to a Thinking Economy (information) and are now entering a Feeling Economy (empathy).

Your career and future employability will depend on how you add value in a world where AI (artificial intelligence) + HI (human intelligence) are converging. Reading faces (facial coding), voices (e.g., Apple’s Siri), and bodies (via Fitbit) fits a world in which your emotional intelligence skills will be vital.

Here are some signposts of the basic socio-economic change underway from a thinking to a feeling model:

1987: FCC repeals Fairness Doctrine, opening the way for Rush Limbaugh; Fox News will launch in 1996

1995: Daniel Goleman publishes Emotional Intelligence

1997: Big Blue (IBM) defeats world chess champion Garry Kasparov; emojis first appear in Japanese mobile phones

1998: launch of Google & also Sensory Logic (my company, using facial coding to capture/quantify emotions)

2001: release of Steven Spielberg’s movie A.I. Artificial Intelligence

2004: Facebook launches

2005: Malcolm Gladwell publishes Blink (which highlights facial coding)

2007: Fitbit launches; I release my book Emotionomics

2009: Lie to Me TV series based on facial coding launches on Fox (#29 most-viewed show that season); Affectiva and Realeyes switch to applying (automated) facial coding to business in imitation of Sensory Logic

2011: launch of the 1st digital assistant, Apple’s Siri

2014: SoftBank Robotics’ Pepper is the 1st social humanoid robot

2016: Apple buys Emotient, the original facial coding automation company

2017: Humanoid robot Sophia is granted citizenship in Saudi Arabia

Released today: episode #44 of “Dan Hill’s EQ Spotlight,” featuring Ming-Hui Huang, the co-author of The Feeling Economy: How Artificial Intelligence Is Creating the Era of Empathy. Listen to the clip below and click on the image to get to the new episode.

[Image: Ming-Hui Huang and her book "The Feeling Economy," for episode #44 of "Dan Hill’s EQ Spotlight," titled "When A.I. Thinks, Humans Feel." Click on the image for the podcast link.]

Ming-Hui Huang holds a number of posts. She’s a Distinguished Professor at National Taiwan University; a fellow of the European Marketing Academy (EMAC); an International Research Fellow of the Centre for Corporate Reputation, University of Oxford, UK; and a Distinguished Research Fellow of the Center for Excellence in Service, University of Maryland, USA. She is also the incoming Editor-in-Chief of the Journal of Service Research.

Dan Hill, PhD, is the president of Sensory Logic, Inc.

The Digitization of Psychology

It’s often assumed that knowledge alone can or will bring about change in human behavior. That viewpoint ignores the reality that emotions play a major role in changing behavior, because there are so many knots in the wood of human nature.

In my interview with Amy Bucher, I asked her about live experts vs. avatars vs. chatbots. For consumers interacting with a digital product online, which of these three approaches is most successful in effecting change?

What her research suggests is that avatars serve as an ideal middle ground between a mechanistic approach (chatbots) and a costly, fully human approach (live experts). Why? The answer lies in emotions. An avatar’s face on screen simulates an intimate enough connection, while online users don’t feel at risk of being condescended to, as they might by a live expert. In other words, the emotion of trust, of being shown respect, gives the nod to avatars, and gives live experts a reason to improve their EQ skills.

The Psychology of Using Design to Motivate Change

Released today: episode #30 of “Dan Hill’s EQ Spotlight,” featuring Amy Bucher, the author of Engaged: Designing for Behavior Change. Listen to the clip below and click on the image to get to the new episode.

Amy Bucher, PhD, works in Behavior Change Design at Mad*Pow and previously worked at CVS Health and Johnson & Johnson. She received her A.B. from Harvard University and her M.A. and PhD in organizational psychology from the University of Michigan.

This episode addresses both the barriers and the levers to achieving behavioral change. Among the barriers are cognitive biases, like the Status Quo Bias, as well as becoming emotionally and mentally exhausted by changes that demand too much willpower on the part of the user. Opportunities to promote change include accountability buddies to help guide you, and avatars, which have proven highly effective in providing information in a trust-building, nonjudgmental manner.

Could Workplaces Become Semi-Ghost Towns?

Propelled in part by Covid-19, all sorts of changes are afoot in today’s workplace:

  • 70% of companies are offering full-time workers the ability to work from home.
  • Workers are relocating outside of major city centers to feel safe, save money, and have more space, now that they don’t have to be in centralized offices and can work remotely. 83% of employees are in favor of relocating, and 20% did so in 2020.
  • Over 72% of workers favor a hybrid workplace model, allowing for structure and sociability (the office) while also enabling independence and flexibility (the home).

Combine these trends with the need to upgrade skills as Artificial Intelligence makes inroads, and what do we see? In the future, workers will be more on their own than at any time since the shift from farms to factories over a century ago. In navigating change, keeping your eyes open to learning (curiosity) is going to be vital to surviving and thriving on the job.

Making Robots Our Friends, Not Our Overlords

Released today: episode #27 of “Dan Hill’s EQ Spotlight,” featuring Jamie Merisotis, the author of Human Work in the Age of Smart Machines. Listen to the clip below and click on the image to get to the new episode.

Merisotis is a globally recognized leader in philanthropy, education, and public policy. Since 2008, he’s served as the president and CEO of Lumina Foundation, an independent, private foundation dedicated to making opportunities for learning beyond high school available to all.

In this episode, the topics range from why and how the economy is rapidly becoming people-centered, to why power is shifting from employers to workers as part of the 4th Industrial Revolution. The role that academia can play in providing more practical, flexible lifelong learning is also covered.

Dan Hill, PhD, is the president of Sensory Logic, Inc.

The Incoming Tide: How Facial Recognition and Facial Coding Will Feed Into A.I.

I pioneered the use of facial coding in business to capture and quantify people’s intuitive emotional responses to advertising, products, packaging, and much more. So I’m a believer in Cicero’s adage that “All action is of the mind and the mirror of the mind is its face, its index the eyes.” Yes, an awful lot is in the face: four of our five senses are located there, and it serves as the easiest and surest barometer of a person’s beauty, health, and emotions. But Cicero’s adage also leads to the question: whose eyes serve as the interpreter, and how reliable are they?

An article in last Saturday’s edition of The New York Times, “Facial Recognition Is Accurate, If You’re a White Guy,” raises exactly those questions. Usually, in “Faces of the Week” I focus on what I guess you could call the rich and famous. But in this case I’m showcasing Joy Buolamwini, an M.I.T. researcher whose TED talk on algorithmic injustices has already been viewed almost a million times online. Hooray for Buolamwini for documenting just how accurate, or not, facial recognition technology is to date. Take gender, for example. If you’re a white guy, the software has 99% accuracy in recognizing whether you’re male or female. But if you’re a black woman, like Buolamwini, then for now you have to settle for something like 65% accuracy instead.
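To see how a headline accuracy number can coexist with a much lower rate for one group, here is a minimal sketch in Python. The counts are hypothetical, chosen only to echo the 99% and roughly 65% figures above; they are not Buolamwini’s actual data.

```python
# Hypothetical counts for illustration only -- not data from Buolamwini's study.
groups = {
    # group: (faces tested, faces classified correctly by gender)
    "lighter-skinned men":  (800, 792),   # ~99% correct
    "darker-skinned women": (100, 65),    # ~65% correct
}

total = correct = 0
for name, (n, ok) in groups.items():
    total += n
    correct += ok
    print(f"{name}: {ok / n:.0%} accuracy on {n} faces")

print(f"overall: {correct / total:.0%} accuracy")  # ~95%: looks strong, hides the gap
```

The aggregate figure looks impressive precisely because the under-served group is a small share of the test set, which is why per-group reporting of the kind Buolamwini champions matters.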

The implications of such errors are enormous. The Economist, for one, has written about the emerging “facial-industrial complex.” In airports, cars, appliances, courtrooms, online job interviews, and elsewhere, a tidal wave of uses for automated facial recognition software, emotional recognition software (facial coding), and how both will feed into artificial intelligence (A.I.) systems is well under way.  So it’s no laughing matter when, for instance, a Google image-recognition photo app labeled African-Americans as “gorillas” back in 2015.

In my specialty, I’ve doggedly stuck to manual facial coding in researching my newest book, Famous Faces Decoded: A Guidebook for Reading Others (set for release on October 1, 2018). And the reason is accuracy. A knowledgeable, experienced facial coder can exceed 90% accuracy, whereas the emotional recognition software that forms the second wave behind the identity recognition software Buolamwini has investigated is, at best, probably in the 60% range as companies like Apple, Facebook, and others weigh in. As Buolamwini has shown, even getting a person’s gender right can be tricky. Then throw in not one variable (male or female) but seven emotions and the 23 facial muscle movements that reveal those emotions, often in combination, and you can begin to see why the task of automating emotional recognition isn’t a slam-dunk.
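To put rough numbers on that last point, here is a back-of-the-envelope sketch in Python. The arithmetic is mine and purely illustrative; it assumes only the seven emotions and 23 muscle movements mentioned above, treated as simple on/off signals.

```python
from math import comb

AU_COUNT = 23   # facial muscle movements (action units) cited above
EMOTIONS = 7    # emotion categories those movements can signal

# If each movement can simply be present or absent, every subset is a
# potentially distinct expression the software must interpret.
print(f"on/off combinations of {AU_COUNT} movements: {2 ** AU_COUNT:,}")   # 8,388,608

# Even capped at, say, four simultaneous movements, the space stays large.
capped = sum(comb(AU_COUNT, k) for k in range(1, 5))
print(f"combinations of 1-4 movements: {capped:,}")                        # 10,902

print(f"...all to be mapped onto just {EMOTIONS} emotion labels, reliably, per frame")
```

Intensity, timing, and head movement (recall the head-tilt example below) only enlarge that space further.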

Add in the influence of money to be made, and credibility suffers because the big claims of accuracy never go away. Plenty of firms offering automated facial coding services claim in excess of 90% accuracy, knowing that they won’t be able to attract customers by acknowledging a lower rate.

That makes for embarrassing moments. One company claiming 90% accuracy asked my firm to test and confirm that figure, and was appalled when we found it to be at maybe half that level; they rushed to provide us with the results from an academic contest in which they had placed first by achieving 52% accuracy (based on standards we weren’t privy to learning). Another company’s software we tested showed all seven emotions flaring into strong action at a certain moment in time. In actuality, however, the person’s head had merely tilted a little, with no big burst of feelings having actually taken place just then. In another instance, automated facial coding software reported that about 75% of the emoting by the three judges in a mock appellate hearing had consisted of anger. If so, that would have been an astonishing rate considering that the rapper Eminem was, at 73%, the single most frequently angry person in my sample of the 173 celebrities I manually coded for Famous Faces Decoded.

I could go on and on with such examples of automated facial coding not yet being ready for prime time. The case of another firm’s software supposedly detecting emotions in a plastic doll placed in front of a web cam to watch a TV commercial also comes to mind. Meanwhile, the reactions of the three companies Buolamwini tested for the accuracy of their facial recognition software are equally telling. China-based Megvii ignored requests for comment before the NYT’s story was published. Microsoft promised that improvements were under way. As for IBM, the company claimed to be on the verge of releasing identity recognition software nearly 10x better than before at detecting dark-skinned women. What’s the old saying in Silicon Valley? If you’re not embarrassed by your initial launch, then you waited too long.