Since facial recognition is one of the iPhone X’s highlight features, the device has shed a lot of light on how this authentication method is poised to become the next ubiquitous means of mobile security access. Facial recognition is positioned to replace current authentication methods like fingerprints, passwords, and PINs, chiefly because it promises a more convenient process. That said, the technology is still in its infancy and is working through some glaring flaws.
Despite performing well in low and normal light, facial recognition systems have historically struggled to capture 3D depth detail outdoors, especially in bright sunlight. The software has also been fooled by realistic masks and other props that closely resemble faces, a reminder that facial recognition still has a way to go before it is embraced by mainstream consumers.
As previously mentioned, part of the inspiration behind perfecting facial recognition technology is the inconvenience imposed by conventional means of user authentication. We no longer use our mobile and IoT devices the way we operated the technologies, such as desktop PCs, that were mainstream when our current authentication methods were developed.
“You’re going to pick up your phone, use it for a few seconds, and pick it up again a few seconds later,” says George Brostoff, CEO of SensibleVision, which has been developing facial recognition software since 2005. “Traditional methods of authentication make little sense from an efficiency standpoint. People are rejecting them primarily because they’re not transparent or convenient.”
The convenience aspect has been well documented; the implementation of facial recognition software, however, has faced some serious hurdles. SensibleVision is spearheading efforts to mitigate (and eventually eliminate) security flaws in next-generation authentication technology like facial recognition. Its 3DVerify software aims to preserve the convenience facial recognition is supposed to deliver while addressing design flaws and security concerns like those mentioned above.
“It was surprising when Apple chose to use structured light for 3D technology. There are multiple technologies today for 3D cameras and generating depth info. Structured light uses dot/pattern projection that’s always been sensitive to sunlight, which eliminates its ability to capture depth information,” says Brostoff. “Simply put, the sun is a more powerful IR projector than a smartphone laser. When Apple introduced the iPhone X, they went to great lengths discussing success working in low and normal lighting, but didn’t bring up functionality outdoors—a historically known weakness of 3D cameras. It’s surprising Apple didn’t develop a solution like time-of-flight sensors or modulating IR lasers, which let you obtain depth information outdoors and in bright sunlight.”
Aside from worries over the software’s ability to work in different lighting conditions, consumers are skeptical about scenarios where the technology could be used against the owner to compromise their privacy. For Brostoff, the solution is more practical than some might expect.
“People are concerned about using facial recognition on their smartphones because they fear scenarios where a police officer, for example, could unlock the phone by pointing it at the owner’s face. With our implementation of continuous security, the moment an officer took the smartphone away from the owner’s face and tried accessing it, the device would remain secure,” says Brostoff. “What if someone’s child points a parent’s cellphone at their face while they’re asleep, unlocks the device, and makes purchases? The moment the child turns the phone away from the parent’s face, it won’t work anymore. Once the smartphone senses it no longer has a face directly in its field of view, the device locks again.”
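The "continuous security" behavior Brostoff describes can be sketched as a simple state machine: rather than authenticating once at unlock time, the device re-checks for the enrolled face on every camera frame and relocks the instant that face leaves view. The `ContinuousAuthDevice` class and its string-matching logic below are illustrative assumptions, not SensibleVision's actual API.

```python
# Illustrative sketch of continuous (per-frame) authentication.
# The face-matching step is stubbed out with a string comparison;
# a real system would run a biometric match on each camera frame.

class ContinuousAuthDevice:
    def __init__(self, enrolled_face):
        self.enrolled_face = enrolled_face
        self.unlocked = False

    def on_frame(self, face_in_view):
        """Called for every camera frame; relocks as soon as the
        enrolled face is no longer in the field of view."""
        self.unlocked = (face_in_view == self.enrolled_face)
        return self.unlocked

device = ContinuousAuthDevice(enrolled_face="owner")
assert device.on_frame("owner")         # owner looking at the phone: unlocked
assert not device.on_frame(None)        # phone turned away: locked again
assert not device.on_frame("stranger")  # a different face: stays locked
```

The key design point is that `unlocked` is recomputed on every frame, so access ends the moment the enrolled face disappears, covering both the officer and sleeping-parent scenarios in the quote.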
Brostoff believes the key to developing this software is understanding where and how users actually operate the technology. Facial recognition released in recent years by companies like Samsung, Apple, and Google tends to be designed and tested in labs, without considering all of the conditions under which people will use these devices. Brostoff’s company is focusing on user experience and on understanding exactly where the target use cases occur.
“With the lighting issues affecting smartphones’ facial recognition software, for example, making sure the device works in side, back, and outdoor light, with sunshine on your face, is a practical scenario we’ve taken into consideration when designing our software for 3DVerify. We want to make sure that someone at an outdoor bus stop who wants to check the bus schedule on their phone can get quick access, even if the sun is shining directly on them,” says Brostoff. “If this technology fails with any kind of frequency, people will stop using it. It’s critical with voluntary security to have extremely high success rates. Apple treated accuracy on the iPhone X as a given, i.e., not letting the wrong person access the device; otherwise, it’s not a security solution. We must make sure success rates are high in all scenarios where consumers use this technology and, more importantly, that it works at the speed they expect.”
While the hurdles facing facial recognition software are glaringly apparent, their solutions seem achievable and should arrive as newer versions of the software are developed. Conventional authentication methods like fingerprinting and passwords took far longer to sort out their inefficiencies; the fact that developers already have a keen eye on facial recognition’s areas for improvement is a good sign of what this technology can become.