CMU Radar Research: Privacy-Preserving Activity Tracking
The Rise of Camera-less Activity Tracking in Smart Homes
Envision a future where your smart speaker can intelligently respond to your daily routines – perhaps by determining when a room has been cleaned or confirming if the trash has been taken out.
Alternatively, consider the potential for health and fitness tracking, with your speaker accurately counting repetitions during workouts or even functioning as a virtual personal trainer, adjusting resistance on your exercise bike.
Smart Speakers Adapting to Your Life
What if the speaker intuitively recognized dinnertime and automatically selected appropriate background music? This level of proactive assistance is becoming increasingly feasible.
Crucially, these advancements can be achieved without the privacy concerns associated with integrated cameras within your home.
New Research from Carnegie Mellon University
Researchers at Carnegie Mellon University’s Future Interfaces Group have unveiled a groundbreaking approach to activity tracking that bypasses the need for visual surveillance.
Millimeter Wave Doppler Radar as a Sensing Tool
The team investigated the use of millimeter wave (mmWave) Doppler radar as a means of detecting various human activities, addressing the significant privacy risks inherent in connected cameras.
A key challenge was the limited availability of datasets for training AI models to interpret RF noise as human activity, unlike the abundance of visual data used for other AI applications.
Synthesizing Data for AI Training
To overcome this obstacle, the researchers synthesized Doppler data to train a human activity tracking model, creating a software pipeline for developing privacy-preserving AI.
The resulting model, demonstrated in a video, accurately identifies activities like cycling, clapping, waving, and squats solely from the mmWave signal generated by the movements, leveraging publicly available video data for training.
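To make the cross-domain idea concrete, here is a minimal, hypothetical sketch of one way video could be translated into radar-like training data: estimate 3D body keypoints from ordinary video frames, project their velocities toward a virtual radar, and bin them into a Doppler-style spectrogram. The pose estimator stub, radar position, frame rate, and velocity bins are illustrative assumptions, not the published CMU pipeline.

```python
# Sketch only: video frames -> coarse synthetic Doppler-style spectrogram.
import numpy as np

RADAR_POS = np.array([0.0, 1.0, 3.0])        # assumed virtual sensor location (metres)
VELOCITY_BINS = np.linspace(-4.0, 4.0, 64)   # assumed radial-velocity range (m/s)
FPS = 30                                     # assumed video frame rate

def estimate_3d_keypoints(frame):
    """Placeholder: return an (N, 3) array of body keypoints for one video frame.
    In practice this would be any off-the-shelf 3D pose estimator."""
    raise NotImplementedError("plug in a 3D pose estimator here")

def synthetic_doppler(frames):
    """Turn a sequence of video frames into a coarse Doppler-style spectrogram."""
    prev = None
    columns = []
    for frame in frames:
        pts = estimate_3d_keypoints(frame)
        if prev is not None:
            # Unit vectors from each keypoint toward the virtual radar.
            to_radar = RADAR_POS - pts
            to_radar /= np.linalg.norm(to_radar, axis=1, keepdims=True)
            # Radial velocity of each keypoint (positive = moving toward the radar).
            radial_v = np.sum((pts - prev) * FPS * to_radar, axis=1)
            # A histogram of radial velocities approximates one spectrogram column.
            col, _ = np.histogram(radial_v, bins=VELOCITY_BINS)
            columns.append(col)
        prev = pts
    return np.stack(columns, axis=1)  # shape: (velocity_bins - 1, time_steps)
```

A classifier trained on spectrograms like these could then, in principle, be applied to real mmWave measurements, which is the essence of the cross-domain translation the researchers describe.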
Bridging the Gap Between Domains
“We demonstrate the success of this cross-domain translation through experimental results,” the researchers state, believing their approach is a vital step towards simplifying the training of human sensing systems and fostering innovation in human-computer interaction.
Limitations and Capabilities of mmWave Sensing
Researcher Chris Harrison notes that mmWave Doppler radar-based sensing isn’t suitable for detecting “very subtle” cues like facial expressions.
However, it is sensitive enough to recognize less strenuous activities such as eating or reading.
Line-of-Sight Considerations
The effectiveness of Doppler radar relies on a clear line-of-sight between the subject and the sensing hardware, meaning it cannot currently detect activity around corners.
Moving Towards Widespread Adoption
While specialized hardware is required, progress is already underway, with Google integrating radar sensors into devices like the Pixel 4 (Project Soli) and Nest Hub for sleep tracking.
Harrison explains that a lack of compelling applications has hindered wider adoption of radar sensors in mobile devices, but research into activity detection can unlock new possibilities, such as smarter virtual assistants.
Potential in Mobile and Fixed Applications
Harrison believes there are valuable use cases for both mobile and stationary applications, citing the Nest Hub as an example of leveraging existing sensors for enhanced smart speaker functionality, like counting exercise repetitions.
He also points to the potential for using radar in buildings to detect occupancy and even determine the last time a room was cleaned.
Cost Reduction and Privacy Advantages
The cost of these sensors is rapidly decreasing, potentially falling to just a few dollars, enabling their integration into a wide range of devices.
Harrison emphasizes that radar-based sensing offers a significant privacy advantage over camera-based systems, addressing concerns about a “surveillance society.”
Beyond Consumer Applications
Companies like VergeSense are already utilizing sensor hardware and computer vision to provide real-time analytics of indoor spaces for commercial clients, such as monitoring office occupancy.
However, even with local processing of low-resolution image data, privacy concerns surrounding visual sensors persist, particularly in consumer settings.
Radar as a Privacy-Focused Alternative
Radar presents a privacy-conscious alternative to visual surveillance, potentially making it ideal for consumer devices like ‘smart mirrors.’
Harrison questions whether consumers would be comfortable with cameras in bedrooms or bathrooms, highlighting the privacy benefits of radar.
The Value of Multi-Sensor Systems
He also references previous research emphasizing the benefits of incorporating diverse sensing technologies, stating that “the more sensors, the longer tail of interesting applications you can support.”
Cameras have limitations, particularly in low-light conditions, while radar offers a complementary sensing modality.
Addressing Privacy Concerns
While radar-based tracking is generally less invasive than camera-based systems, it’s crucial to acknowledge potential privacy implications.
Data indicating a child’s bedroom occupancy, for example, could be misused depending on data access.
Furthermore, any human activity can generate sensitive information, raising questions about the appropriateness of smart speakers monitoring personal moments.
A Spectrum of Privacy
As Harrison notes, privacy is a spectrum rather than a binary issue.
“Radar sensors happen to be usually rich in detail, but highly anonymizing, unlike cameras,” he explains, emphasizing that leaked Doppler radar data would be far less compromising than leaked camera footage.
Computational Costs and Future Directions
Regarding the computational demands of synthesizing training data, Harrison points to the availability of large video datasets like YouTube-8M.
He argues that downloading and processing video data to create synthetic radar data is more efficient than recruiting participants for in-lab motion capture.
Signal Processing and Real-Time Performance
The Doppler signal is “very high level and abstract,” making real-time processing feasible, even on relatively low-powered processors.
Embedded processors in cars already utilize radar data for collision avoidance and blind spot monitoring.
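As a rough illustration of how lightweight this kind of inference can be, the sketch below scores a sliding window of Doppler spectrogram columns with a tiny linear model. The activity labels, window size, and stand-in weights are assumptions for illustration, not the CMU system; in practice the weights would come from training on synthesized data.

```python
# Sketch only: real-time classification over a stream of Doppler spectrogram columns.
import numpy as np
from collections import deque

ACTIVITIES = ["cycling", "clapping", "waving", "squats", "idle"]  # example labels
WINDOW = 32    # spectrogram columns per classification window (assumed)
N_BINS = 63    # velocity bins per column (matches the synthesis sketch above)

rng = np.random.default_rng(0)
# Stand-in weights; a trained model would replace these.
W = rng.standard_normal((len(ACTIVITIES), WINDOW * N_BINS)) * 0.01

def classify(window_cols):
    """Score one window of Doppler columns with a tiny linear model."""
    x = np.concatenate(window_cols).astype(float)
    x /= (np.linalg.norm(x) + 1e-8)          # simple magnitude normalization
    scores = W @ x
    return ACTIVITIES[int(np.argmax(scores))]

def run(stream):
    """Consume an iterator of Doppler columns and yield one label per full window."""
    buf = deque(maxlen=WINDOW)
    for col in stream:
        buf.append(col)
        if len(buf) == WINDOW:
            yield classify(list(buf))
```

Because the input is a compact spectrogram rather than video frames, a model of this scale runs comfortably on embedded-class hardware, which is the point Harrison makes about automotive radar.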
Ongoing Research and Innovation
This research will be presented at the ACM CHI conference, alongside another project – Pose-on-the-Go – which uses smartphone sensors to estimate full-body pose without wearable devices.
The CMU team has also previously demonstrated cost-effective indoor sensing without cameras and explored using smartphone cameras to enhance AI assistant contextual awareness.
Their ongoing investigations include laser vibrometry, electromagnetic noise sensing, conductive spray paint touchscreens, and wearable technology enhancements.
A More Contextually Aware Future
The future of human-computer interaction promises greater contextual awareness, even as current ‘smart’ devices often fall short of expectations.