Google wants devices to know when you’re paying attention

Google has been working on a "new interaction language" for years, and today it's sharing a peek at what it's developed so far. The company is showcasing a set of movements it's defined in this interaction language in the first episode of a new series called In the lab with Google ATAP. The acronym stands for Advanced Technology and Projects; it's Google's more experimental division, the one the company calls its "hardware invention studio."
The idea behind this "interaction language" is that the machines around us could become more intuitive, picking up on our desire to interact with them by better understanding our nonverbal cues. "The devices that surround us… should feel like a best friend," ATAP senior interaction designer Lauren Bedal told Engadget. "They should have social grace."
Specifically (so far, anyway), ATAP is analyzing our movements (as opposed to vocal tones or facial expressions) to see if we're ready to engage, so devices know when to remain in the background instead of bombarding us with information. The team used the company's Soli radar sensor to detect the proximity, direction and pathways of people around it, then parsed that data to determine whether someone was glancing at, passing, approaching or turning towards the sensor.
Google formalized this set of four movements, calling them Approach, Glance, Turn and Pass. These actions can be used as triggers for commands or reactions on things like smart displays or other ambient computers. If this sounds familiar, it's because some of these gestures already work on existing Soli-enabled devices. The Pixel 4, for example, had a feature called Motion Sense that would snooze alarms when you waved at it or wake the phone when it detected your hand coming towards it. Google's Nest Hub Max uses its camera to see when you raise your open palm and pauses your media playback in response.
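Google hasn't published how that classification works, but as a purely illustrative sketch, assuming the radar pipeline exposes per-person cues like distance, speed and head orientation (every field name and threshold below is invented, not Soli's actual output), the four movements could be roughed out as a handful of rules:

```python
# Illustrative only: hand-rolled rules for the four movements described above.
# Field names and thresholds are invented for this sketch; Google's real Soli
# pipeline is not public, and the team trains machine learning models instead.
from dataclasses import dataclass
from typing import Optional

@dataclass
class BodyTrack:
    """One snapshot of a person near the sensor (all fields hypothetical)."""
    distance_m: float         # distance from the sensor
    radial_speed_mps: float   # negative = closing in on the sensor
    lateral_speed_mps: float  # movement across the sensor's field of view
    head_yaw_deg: float       # 0 = facing the sensor directly
    yaw_rate_dps: float       # how quickly the person is rotating

def classify_movement(t: BodyTrack) -> Optional[str]:
    """Map a track snapshot to Approach, Glance, Turn or Pass (or None)."""
    if t.radial_speed_mps < -0.2 and t.distance_m < 3.0:
        return "Approach"   # closing distance toward the device
    if abs(t.yaw_rate_dps) > 45 and abs(t.lateral_speed_mps) < 0.2:
        return "Turn"       # rotating toward or away while staying roughly in place
    if abs(t.head_yaw_deg) < 30 and abs(t.radial_speed_mps) < 0.1:
        return "Glance"     # attention on the device, but not approaching
    if abs(t.lateral_speed_mps) > 0.5:
        return "Pass"       # walking by without engaging
    return None             # nothing the device should react to

if __name__ == "__main__":
    print(classify_movement(BodyTrack(2.0, -0.6, 0.0, 10, 0)))   # Approach
    print(classify_movement(BodyTrack(2.5, 0.0, 0.1, 70, 90)))   # Turn
    print(classify_movement(BodyTrack(2.5, 0.0, 0.0, 5, 0)))     # Glance
    print(classify_movement(BodyTrack(3.5, 0.0, 1.2, 85, 0)))    # Pass
```

A rule set like this is only a stand-in; as the accuracy discussion further down makes clear, ATAP is collecting movement data to train machine learning models rather than relying on fixed thresholds.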
Approach feels similar to existing implementations. It lets a device tell when you (or a part of you) are getting closer, so it can bring up information you might be near enough to see. The Nest Hub already does something like this: when it knows you're close by, it pulls up your upcoming appointments or reminders. It'll also show touch commands on a countdown screen if you're near, and switch to a larger, easy-to-read font when you're further away.
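As a rough sketch of that kind of proximity-aware display (the distance cutoffs and layout values here are invented for illustration, not the Nest Hub's actual behavior), the logic boils down to picking a layout from an estimated viewer distance:

```python
def choose_layout(distance_m: float) -> dict:
    """Pick a display mode from an estimated viewer distance (values are illustrative)."""
    if distance_m < 1.0:
        # Close enough to touch: show interactive controls at normal size.
        return {"font_scale": 1.0, "show_touch_controls": True}
    if distance_m < 3.0:
        # In the room but out of reach: glanceable info in larger text.
        return {"font_scale": 1.6, "show_touch_controls": False}
    # Far away or just passing: keep the screen ambient and easy to read.
    return {"font_scale": 2.2, "show_touch_controls": False}

print(choose_layout(0.5))   # {'font_scale': 1.0, 'show_touch_controls': True}
print(choose_layout(4.0))   # {'font_scale': 2.2, 'show_touch_controls': False}
```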
While Glance may seem like it overlaps with Approach, Bedal explained that it can be for understanding where a person's attention is when they're using multiple devices. "Say you're on a phone call with someone and you happen to glance at another device in the house," she said. "Since we know you may have your attention on another device, we can offer a suggestion to maybe transfer your conversation to a video call." Glance can also be used to quickly display a snippet of information.
What's less familiar are Turn and Pass. "With turning towards and away, we can allow devices to help automate repetitive or mundane tasks," Bedal said. It can be used to determine when you're ready for the next step in a multi-stage process, like following an onscreen recipe, or something repetitive, like starting and stopping a video. Pass, meanwhile, tells the device you're not ready to engage.
It's clear that Approach, Pass, Turn and Glance build on features Google has implemented piecemeal in its products over the years. But the ATAP team also played with combining some of these actions, like passing and glancing or approaching and glancing, which is something we've yet to see much of in the real world.
For all this to work well, Google's sensors and algorithms need to be incredibly adept not only at recognizing when you're making a specific action, but also when you're not. Inaccurate gesture recognition can turn an experience that's meant to be helpful into one that's incredibly frustrating.
ATAP's head of design, Leonardo Giusti, said, "That's the biggest challenge we have with these signals." He noted that devices that are plugged in have more power available to run complex algorithms than a mobile device does. Part of the effort to make the system more accurate is collecting more data to train the machine learning algorithms, including both the correct actions and similar-but-incorrect ones, so they also learn what not to accept.
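One generic way to set up that kind of training data (a sketch of the general technique under assumed features and labels, using scikit-learn for brevity, not ATAP's actual pipeline) is to add an explicit "background" class and seed it with near-miss recordings, so the classifier learns to reject movements that merely look similar:

```python
# Generic sketch of training a gesture classifier with hard negatives.
# The feature vectors and class means are made up; only the idea of including
# "similar but incorrect" samples in a reject class comes from the article.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
GESTURES = ["Approach", "Glance", "Turn", "Pass", "background"]

def synth(mean, n=200):
    """Fake 3-dimensional feature vectors clustered around a per-class mean."""
    return rng.normal(loc=mean, scale=0.5, size=(n, 3))

X = np.vstack([
    synth([2.0, 0.0, 0.0]),    # Approach
    synth([0.0, 2.0, 0.0]),    # Glance
    synth([0.0, 0.0, 2.0]),    # Turn
    synth([-2.0, 0.0, 0.0]),   # Pass
    # "background" mixes true non-gestures with near-misses that almost look
    # like Approach or Glance, so the model learns what not to accept.
    np.vstack([synth([0.0, 0.0, 0.0], 100),
               synth([1.2, 0.0, 0.0], 50),
               synth([0.0, 1.2, 0.0], 50)]),
])
y = np.repeat(GESTURES, 200)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([[2.1, 0.0, 0.1]]))   # likely "Approach"
print(clf.predict([[0.9, 0.1, 0.0]]))   # a near-miss, ideally rejected as "background"
```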
"The other approach to mitigate this risk is through UX design," Giusti said. He explained that the system can offer a suggestion rather than trigger a completely automated response, to allow users to confirm the right input rather than act on a potentially inaccurate gesture.
Still, it's not as though Google devices will be misinterpreting these four movements and frustrating us in the immediate future. "What we're working on is purely research," Bedal pointed out. "We're not focusing on product integration." And to be clear, Google is sharing this look at the interaction language as part of a video series it's publishing. Later episodes of In the lab with Google ATAP will cover other topics beyond this new language, and Giusti said the series is meant to "give people an inside look into some of the research that we are exploring."
But it's easy to see how this new language can eventually find its way into the many things Google makes. The company's been talking about its vision for a world of "ambient computing" for years, where it envisions various sensors and devices embedded into the many surfaces around us, ready to anticipate and respond to our every need. For a world like that to not feel intrusive or invasive, there are many issues to sort out (protecting user privacy chief among them). Having machines that know when to stay away and when to help is part of that challenge.
Bedal, who's also a professional choreographer, said, "We believe that these movements are really hinting to a future way of interacting with computers that feels invisible by leveraging the natural ways that we move."
She added, "By doing so, we can do less and computers can… operate in the background, only helping us in the right moments."