Look at the picture above and you'll see Albert Einstein. Now walk across the room. Suddenly, he morphs into Marilyn Monroe. Trippy, right? Aude Oliva, an associate professor of cognitive science at MIT, uses images like this one to study how our brains make sense of sight.
Our eyes pick up visual information at both high spatial frequencies (sharp lines) and low ones (blurred shapes). By blending the high frequencies from one picture with the lows from another, Oliva creates hybrid images that change as a function of distance and time—allowing her to parse how humans absorb visual information. Turns out that we perceive coarse features quickly, within the first 30 milliseconds, and then home in on details at around 100 milliseconds. We also focus on the higher frequencies close up and register softer shapes from afar.
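For the curious, here's a minimal sketch of how such a hybrid image can be built: low-pass filter one picture, high-pass filter the other, and add the two. The function name, the Gaussian filter choice, and the blur strength (sigma) are illustrative assumptions, not Oliva's exact recipe—real hybrid images tune the frequency cutoffs carefully for each pair of faces.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid_image(far_img: np.ndarray, near_img: np.ndarray,
                 sigma: float = 6.0) -> np.ndarray:
    """Blend the low spatial frequencies of far_img with the high
    spatial frequencies of near_img (grayscale arrays, same shape).
    sigma is an illustrative blur strength, not a published value."""
    far = far_img.astype(float)
    near = near_img.astype(float)
    # Low-pass: Gaussian blur keeps only coarse shapes (what you see from afar).
    lows = gaussian_filter(far, sigma)
    # High-pass: subtract a blurred copy, leaving only sharp edges (seen up close).
    highs = near - gaussian_filter(near, sigma)
    # Sum the two frequency bands and clip back to displayable pixel values.
    return np.clip(lows + highs, 0, 255).astype(np.uint8)
```

Feed it Monroe as the "far" image and Einstein as the "near" one, and up close the sharp Einstein edges dominate; step back (or squint) and the blurred Monroe shapes take over.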
"It's something we never think about," Oliva says. "But we still don't know how our brains digest new images so seamlessly and so rapidly." The answer could help treat cognitive disorders or assist in the development of more-perceptive bots. Because, let's be honest: What good is a robot if it can't tell the difference between a sexy, troubled icon and Marilyn Monroe?