The Incoming Tide: How Facial Recognition and Facial Coding Will Feed Into A.I.

I pioneered the use of facial coding in business to capture and quantify people’s intuitive emotional responses to advertising, products, packaging, and much more. So I’m a believer in Cicero’s adage that “All action is of the mind and the mirror of the mind is its face, its index the eyes.” Yes, an awful lot is in the face: four of our five senses are located there, and it serves as the easiest and surest barometer of a person’s beauty, health, and emotions. But Cicero’s adage also leads to the question: whose eyes serve as the interpreter, and how reliable are they?

An article in last Saturday’s edition of The New York Times, “Facial Recognition Is Accurate, If You’re a White Guy,” raises exactly those questions. Usually, in “Faces of the Week” I focus on what I guess you could call the rich and famous. But in this case I’m showcasing Joy Buolamwini, an M.I.T. researcher whose TED talk on algorithmic injustices has already been viewed almost a million times online. Hooray for Buolamwini for documenting just how inaccurate facial recognition technology still is. Take gender, for example. If you’re a white guy, the software is 99% accurate in recognizing whether you’re male or female. But if you’re a black woman, like Buolamwini, then for now you have to settle for something like 65% accuracy instead.

[Photo: Joy Buolamwini]

The implications of such errors are enormous. The Economist, for one, has written about the emerging “facial-industrial complex.” In airports, cars, appliances, courtrooms, online job interviews, and elsewhere, a tidal wave of uses for automated facial recognition software and emotional recognition software (facial coding) is well under way, and both will feed into artificial intelligence (A.I.) systems. So it’s no laughing matter when, for instance, a Google image-recognition photo app labeled African-Americans as “gorillas” back in 2015.

In my specialty, I’ve doggedly stuck to manual facial coding in researching my newest book, Famous Faces Decoded: A Guidebook for Reading Others (set for release on October 1, 2018). And the reason is accuracy. A knowledgeable, experienced facial coder can exceed 90% accuracy, whereas the emotional recognition software that forms the second wave behind the identity recognition software Buolamwini has investigated is, at best, probably in the 60% range, even as companies like Apple and Facebook weigh in. As Buolamwini has shown, even getting a person’s gender right can be tricky. Then throw in not one binary variable (male or female) but seven emotions and the 23 facial muscle movements that reveal those emotions, often in combination, and you can begin to see why the task of automating emotional recognition isn’t a slam-dunk.
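To get a feel for the gap in difficulty, consider the raw combinatorics. The short Python sketch below is purely illustrative: it contrasts the two possible labels in a gender classifier with the space of on/off patterns across 23 facial muscle movements (“action units” in Facial Action Coding System terms). The toy emotion mapping uses a few well-known FACS action-unit pairings, but real mappings are many-to-many and context-dependent, so treat it as a back-of-the-envelope argument, not a working coder.

```python
# Illustrative only: why automated emotion recognition is harder than
# a binary gender label. Not a real facial coding implementation.

# Binary gender classification: just two possible labels.
GENDER_LABELS = 2

# Facial coding tracks 23 muscle movements ("action units"), which can
# fire alone or in combination.
NUM_ACTION_UNITS = 23
au_patterns = 2 ** NUM_ACTION_UNITS  # every possible on/off combination

# A toy mapping from a few classic action-unit sets to emotions
# (e.g., AU6 cheek raiser + AU12 lip corner puller = happiness).
toy_emotion_map = {
    frozenset({6, 12}): "happiness",
    frozenset({1, 4, 15}): "sadness",
    frozenset({4, 5, 7, 23}): "anger",
}

print(f"Labels in a gender classifier: {GENDER_LABELS}")
print(f"Possible action-unit patterns: {au_patterns:,}")  # 8,388,608
```

Even before accounting for intensity, timing, and head pose (the source of the head-tilt false alarm described below), the label space is millions of patterns rather than two, which is one intuition for why vendors’ claimed accuracy rates deserve scrutiny.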

Add in the money to be made, and credibility suffers: inflated claims of accuracy never go away. Plenty of firms offering automated facial coding services claim in excess of 90% accuracy, knowing that they won’t be able to attract customers by acknowledging a lower rate.

That makes for embarrassing moments. One company claiming 90% accuracy asked my firm to test and confirm that figure, and was appalled when we found the real rate to be maybe half that. They rushed to provide us with the results of an academic contest in which they had placed first by achieving 52% accuracy (based on standards we weren’t privy to). Another company’s software we tested showed all seven emotions flaring into strong action at a certain moment when, in actuality, the person’s head had merely tilted a little; no big burst of feeling had taken place just then. In another instance, automated facial coding software reported that the three judges in a mock appellate hearing had been so endlessly angry that about 75% of their emoting supposedly consisted of anger during the proceedings. If so, that would have been an astonishing rate considering that the rapper Eminem was, at 73%, the single most frequently angry person among the 173 celebrities I manually coded for Famous Faces Decoded.

I could go on and on with such examples of automated facial coding not yet being ready for prime time. The case of another firm’s software supposedly detecting emotions in a plastic doll placed in front of a webcam to watch a TV commercial also comes to mind. Meanwhile, the reactions of the three companies Buolamwini tested for the accuracy of their facial recognition software are equally telling. China-based Megvii ignored requests for comment before the Times story was published. Microsoft promised that improvements were under way. As for IBM, the company claimed to be on the verge of releasing identity recognition software nearly 10x better than before at faithfully detecting dark-skinned women. What’s the old saying in Silicon Valley? If you’re not embarrassed by your initial launch, then you waited too long.
