Sydney Morning Herald: Two different groups of researchers have written computer algorithms that can not only pick out individual objects in an image but also describe the entire scene. Both groups, one at Google and the other at Stanford University, were inspired by the neural networks found in the brain. Their systems incorporate two types of networks: one that recognizes images and one that focuses on human language.

The researchers trained the artificial-intelligence software by exposing it to images paired with descriptive sentences written by humans. Once it had learned to pick out patterns in the pictures and descriptions, the researchers tested it on unfamiliar images. Among the sentences the software generated were “man in black shirt is playing guitar” and “girl in pink dress is jumping in air.”

Although still nowhere near as accurate as humans, the new systems are much more advanced than previous designs. Possible uses include sifting through the billions of images and videos online to better catalog and describe them, helping people who are visually impaired to navigate on their own, and monitoring public spaces for illegal activity and alerting the police.