Sydney Morning Herald: Two different groups of researchers have written computer algorithms that can not only pick out individual objects in an image but also describe the entire scene. Both groups, one at Google and the other at Stanford University, were inspired by the neural networks found in the brain. They incorporated two types of networks: one that recognizes images and one that focuses on human language. The researchers trained the artificial-intelligence software by exposing it to images paired with descriptive sentences written by humans. Once it learned to pick out patterns in the pictures and descriptions, the researchers tried it out on unfamiliar images. A few of the sentences the software generated included “man in black shirt is playing guitar” and “girl in pink dress is jumping in air.” Although still nowhere near as accurate as humans, the new system is much more advanced than previous designs. Possible uses include sifting through the billions of images and videos online to better catalog and describe them, helping people who are visually impaired to navigate on their own, and monitoring public spaces for illegal activity and alerting the police.
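The two-network design described above, an image encoder feeding a language decoder, can be illustrated with a minimal, untrained sketch. This is not the researchers' actual code; the vocabulary, dimensions, and weights below are invented stand-ins (the real systems use deep convolutional and recurrent networks trained on large captioned datasets).

```python
import numpy as np

# Toy vocabulary; real systems learn embeddings over thousands of words.
VOCAB = ["<start>", "<end>", "man", "in", "black", "shirt",
         "is", "playing", "guitar"]

rng = np.random.default_rng(0)

def encode_image(image):
    """Stand-in for a convolutional image encoder: collapse the
    pixels into a fixed-length feature vector."""
    flat = image.reshape(-1)
    W = rng.standard_normal((16, flat.size)) * 0.1  # untrained weights
    return np.tanh(W @ flat)

def decode_caption(features, max_len=8):
    """Stand-in for a recurrent language decoder: condition on the
    image features and greedily emit one word per step."""
    Wh = rng.standard_normal((16, 16)) * 0.1          # untrained weights
    Wo = rng.standard_normal((len(VOCAB), 16)) * 0.1  # untrained weights
    h = features
    words = []
    for _ in range(max_len):
        h = np.tanh(Wh @ h + features)       # recurrent state update
        word = VOCAB[int(np.argmax(Wo @ h))]  # greedy word choice
        if word == "<end>":
            break
        if word != "<start>":
            words.append(word)
    return " ".join(words)

image = rng.random((8, 8))            # placeholder "image"
caption = decode_caption(encode_image(image))
```

With random weights the output is gibberish; training adjusts the weights so that captions like "man in black shirt is playing guitar" become the high-probability decodings for matching images.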
The finding that the Saturnian moon may host layers of icy slush instead of a global ocean could change how planetary scientists think about other icy moons as well.
Modeling the shapes of tree branches, neurons, and blood vessels is a thorny problem, but researchers have just discovered that much of the math has already been done.
January 29, 2026 12:52 PM