IEEE Spectrum: Lip-reading is a difficult task. Spoken language involves some 50 different sounds, called phonemes, yet the face presents just 10 or so different configurations, called visemes. Even for people with normal hearing, most speech is best understood through both aural and visual cues. Now, Helen Bear and Richard Harvey of the University of East Anglia have improved computer lip-reading software by introducing a two-step algorithm. First, the researchers had the computer map a given viseme to the multiple phonemes it can represent; then, an analysis of video recordings of humans speaking different phonemes allowed the program to zero in on the minute visual clues that differentiate one word from another. Lip-reading software could have many uses, from helping hearing-impaired people to aiding in criminal investigations that involve recorded video footage.
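As a loose illustration of that two-pass idea (a sketch, not Bear and Harvey's actual system), the Python snippet below first assigns an observed lip shape to a viseme class, then picks among that class's candidate phonemes using a separate score for finer visual cues. The viseme table, feature names, and scores are all invented for the example.

```python
# Minimal sketch of a two-pass viseme-then-phoneme classifier.
# All mappings, features, and scores are illustrative placeholders.

# Step 1: each viseme (lip shape) maps to several candidate phonemes.
VISEME_TO_PHONEMES = {
    "bilabial": ["p", "b", "m"],    # lips pressed together
    "labiodental": ["f", "v"],      # lower lip against upper teeth
    "open_mid": ["ae", "eh"],       # open mid-vowel shapes
}

def classify_viseme(frame_features):
    """Hypothetical first pass: pick the viseme class whose template
    lip-openness is closest to the observed value."""
    templates = {"bilabial": 0.1, "labiodental": 0.5, "open_mid": 0.9}
    openness = frame_features["lip_openness"]
    return min(templates, key=lambda v: abs(templates[v] - openness))

def disambiguate_phoneme(viseme, phoneme_scores):
    """Hypothetical second pass: among the phonemes this viseme can
    represent, choose the one whose fine-grained visual cues score highest."""
    candidates = VISEME_TO_PHONEMES[viseme]
    return max(candidates, key=lambda p: phoneme_scores.get(p, 0.0))

# Example: a frame with nearly closed lips, where subtle cues favour "m".
frame = {"lip_openness": 0.12}
scores = {"p": 0.2, "b": 0.3, "m": 0.5}  # stand-in for learned visual-cue scores
viseme = classify_viseme(frame)
print(viseme, "->", disambiguate_phoneme(viseme, scores))
```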
An ultracold atomic gas can sync into a single quantum state. Researchers have uncovered a speed limit for the process, a finding with implications for quantum computing and the evolution of the early universe.