RedEye Could Let Your Phone See 24-7

By Rice University | June 21, 2016

Rice University researchers have just the thing for the age of information overload: an app that sees all and remembers only what it should.

RedEye, new technology from Rice’s Efficient Computing Group unveiled today at the International Symposium on Computer Architecture (ISCA 2016) in Seoul, South Korea, could provide computers with continuous vision — a first step toward letting devices see what their owners see and keep track of what they need to remember.

“The concept is to allow our computers to assist us by showing them what we see throughout the day,” said group leader Lin Zhong, professor of electrical and computer engineering at Rice and co-author of a new study about RedEye. “It would be like having a personal assistant who can remember someone you met, where you met them, what they told you and other specific information like prices, dates and times.”

Zhong said RedEye is an example of the kind of technology the computing industry is developing for use with wearable, hands-free, always-on devices that are designed to support people in their daily lives. The trend, which is sometimes referred to as “pervasive computing” or “ambient intelligence,” centers on technology that can recognize and even anticipate what someone needs and provide it right away.

“The pervasive-computing movement foresees devices that are personal assistants, which help us in big and small ways at almost every moment of our lives,” Zhong said. “But a key enabler of this technology is equipping our devices to see what we see and hear what we hear. Smell, taste and touch may come later, but vision and sound will be the initial sensory inputs.”

Zhong said the bottleneck for continuous vision is energy consumption because today’s best smartphone cameras, though relatively inexpensive, are battery killers, especially when they are processing real-time video.

Zhong and former Rice graduate student Robert LiKamWa began studying the problem in the summer of 2012 when they worked at Microsoft Research’s Mobility and Networking Research Group in Redmond, Wash., in collaboration with group director and Microsoft Distinguished Scientist Victor Bahl. LiKamWa said the team measured the energy profiles of commercially available, off-the-shelf image sensors and determined that existing technology would need to be about 100 times more energy-efficient for continuous vision to become commercially viable. This was the motivation behind LiKamWa’s doctoral thesis, which pursues software and hardware support for efficient computer vision.

In an award-winning paper a year later, LiKamWa, Zhong, Bahl and colleagues showed they could improve the power consumption of off-the-shelf image sensors tenfold simply through software optimization.

“RedEye grew from that because we still needed another tenfold improvement in energy efficiency, and we knew we would need to redesign both the hardware and software to achieve that,” LiKamWa said.

He said the energy bottleneck was the conversion of images from analog to digital format.

“Real-world signals are analog, and converting them to digital signals is expensive in terms of energy,” he said. “There’s a physical limit to how much energy savings you can achieve for that conversion. We decided a better option might be to analyze the signals while they were still analog.”
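
As a rough illustration of his point, the back-of-envelope arithmetic below multiplies an assumed per-pixel conversion cost by the pixel rate of a modest video stream. Every figure is an illustrative assumption, not a measurement from the RedEye work.

    # Back-of-envelope estimate of the power spent digitizing continuous video.
    # All figures below are illustrative assumptions, not RedEye measurements.
    PIXELS_PER_FRAME = 640 * 480   # assumed VGA-resolution sensor
    FRAMES_PER_SEC = 30            # assumed video frame rate
    NJ_PER_CONVERSION = 2.0        # assumed energy per pixel conversion and readout, in nanojoules

    conversions_per_sec = PIXELS_PER_FRAME * FRAMES_PER_SEC
    watts = conversions_per_sec * NJ_PER_CONVERSION * 1e-9
    print(f"{conversions_per_sec:,} conversions/s is roughly {watts * 1e3:.0f} mW for digitization alone")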

The main drawback of processing analog signals — and the reason digital conversion is the standard first step for most image-processing systems today — is that analog signals are inherently noisy, LiKamWa said. To make RedEye attractive to device makers, the team needed to demonstrate that it could reliably interpret analog signals.

“We needed to show that we could tell a cat from a dog, for instance, or a table from a chair,” he said.

Rice graduate student Yunhui Hou and undergraduates Mia Polansky and Yuan Gao were also members of the team, which decided to attack the problem using a combination of the latest techniques from machine learning, system architecture and circuit design. In the case of machine learning, RedEye uses a technique called a “convolutional neural network,” an algorithmic structure inspired by the organization of the animal visual cortex.
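
For readers unfamiliar with the term, the fragment below sketches the basic building blocks such a network stacks (a convolution, a nonlinearity and a pooling step) on a tiny random image. It is a minimal illustration of the general technique, not RedEye's network.

    import numpy as np

    # Minimal sketch of CNN building blocks: one convolution, a ReLU nonlinearity
    # and 2x2 max pooling. Illustrates the general idea only; not RedEye's network.
    def conv2d(image, kernel):
        kh, kw = kernel.shape
        h, w = image.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    def max_pool_2x2(x):
        h, w = (x.shape[0] // 2) * 2, (x.shape[1] // 2) * 2
        return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

    image = np.random.rand(8, 8)                       # stand-in for a small sensor patch
    kernel = np.array([[1, 0, -1]] * 3, dtype=float)   # simple vertical-edge detector
    features = max_pool_2x2(np.maximum(conv2d(image, kernel), 0.0))   # ReLU, then pool
    print(features.shape)   # (3, 3): a smaller map of detected edge responses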

LiKamWa said Hou brought new ideas on system architecture and circuit design, drawing on previous experience working with specialized circuits called analog-to-digital converters (ADCs) at Hong Kong University of Science and Technology.

“We bounced ideas off one another regarding architecture and circuit design, and we began to understand the possibilities for doing early processing in order to gather key information in the analog domain,” LiKamWa said.

“Conventional systems extract an entire image through the analog-to-digital converter and conduct image processing on the digital file,” he said. “If you can shift that processing into the analog domain, then you will have a much smaller data bandwidth that you need to ship through that ADC bottleneck.”
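
A quick way to see the scale of that difference is to compare the bytes that would cross the converter if every pixel of every frame were digitized with the bytes needed for a compact recognition result. The frame size, frame rate and result size below are assumptions chosen only for illustration.

    # Rough comparison of data crossing the ADC: full frames vs. compact results.
    # Frame size, frame rate and result size are illustrative assumptions.
    PIXELS_PER_FRAME = 640 * 480    # assumed VGA frames, 1 byte per pixel
    FRAMES_PER_SEC = 30
    RESULT_BYTES_PER_FRAME = 1_000  # assumed size of a recognition result per frame

    raw_rate = PIXELS_PER_FRAME * FRAMES_PER_SEC
    result_rate = RESULT_BYTES_PER_FRAME * FRAMES_PER_SEC
    print(f"full frames:  {raw_rate / 1e6:.1f} MB/s through the ADC")
    print(f"results only: {result_rate / 1e6:.3f} MB/s ({raw_rate // result_rate}x less)")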

LiKamWa said convolutional neural networks are the state-of-the-art way to perform object recognition, and the combination of these techniques with analog-domain processing presents some unique privacy advantages for RedEye.

“The upshot is that we can recognize objects — like cats, dogs, keys, phones, computers, faces, etc. — without actually looking at the image itself,” he said. “We’re just looking at the analog output from the vision sensor. We have an understanding of what’s there without having an actual image. This increases energy efficiency because we can choose to digitize only the images that are worth expending energy to create. It also may help with privacy implications because we can define a set of rules where the system will automatically discard the raw image after it has finished processing. That image would never be recoverable. So, if there are times, places or specific objects a user doesn’t want to record — and doesn’t want the system to remember — we should design mechanisms to ensure that photos of those things are never created in the first place.”
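
One way to picture the rule-based discard he describes is a filter that allows full digitization only when the analog-domain result is of interest and touches nothing on a blocked list. The sketch below is a hypothetical illustration of that policy; the function and label names are invented for the example and are not part of RedEye.

    # Hypothetical privacy rule: digitize a frame only when the analog-domain
    # recognition result is interesting and contains nothing the user has blocked.
    # Names are invented for illustration; this is not RedEye's actual interface.
    BLOCKED_LABELS = {"face", "screen"}                # never create images of these
    LABELS_OF_INTEREST = {"keys", "phone", "document"}

    def should_digitize(detected_labels):
        labels = set(detected_labels)
        if labels & BLOCKED_LABELS:
            return False                           # discard; no raw image is ever created
        return bool(labels & LABELS_OF_INTEREST)   # spend energy only on useful frames

    print(should_digitize(["keys"]))           # True:  worth digitizing
    print(should_digitize(["keys", "face"]))   # False: blocked, never recorded
    print(should_digitize(["chair"]))          # False: not interesting, skip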

Zhong said research on RedEye is ongoing. He said the team is working on a circuit layout for the RedEye architecture that can be used to test for layout issues, component mismatch, signal crosstalk and other hardware issues. Work is also ongoing to improve performance in low-light environments and other settings with low signal-to-noise ratios, he said.

