New Yorker: Computer software is advancing to the point that machines can interpret not just human speech but also human emotions. They do so by scanning faces and tracking so-called “action units”—key features, such as the eyebrows or the corners of the lips, that move when a person reacts to a situation. Beginning in the 1960s, research psychologist Paul Ekman worked to define human emotions as simple geometrical patterns formed by those deformable points in combination with nondeformable ones, such as the eyes and nose. His taxonomy of facial movements, called the Facial Action Coding System, has become the basis of many software programs used in fields ranging from computer animation to law enforcement. This New Yorker article by Raffi Khatchadourian focuses on Rana el Kaliouby, an Egyptian scientist living near Boston, whose company has developed a facial-reading program called Affdex.