Jaguar Land Rover and Cambridge University are developing ‘no-touch’ touchscreens for cars

The big picture: Carmakers go to great lengths to optimize the UI/UX of their infotainment systems. While much of it has been streamlined thanks to Apple CarPlay and Android Auto, the actual business of interacting with the software is still a mixture of buttons, rotary knobs, touchpads, and touchscreens. Gesture-driven systems have also appeared in recent years, with BMW offering functions like volume control, call accept/reject, and navigation through simple hand gestures on supported models. JLR and Cambridge University have now shared their vision of an AI-powered ‘no-touch’ touchscreen that combines these approaches, letting users operate their infotainment displays from a short distance, potentially improving usability on the move and reducing the risk of transferring pathogens by eliminating the need for touch altogether.

Whether touchscreens are an effective replacement for good old physical switches and knobs is still an open debate. The myriad implementations result in similarly varied experiences, yet the general consensus among carmakers remains that adding more touchscreens wherever possible makes for an upmarket, sophisticated ownership experience.

The modern car’s relationship with our digital lives is now permanent, which means the touchscreen, whether several separate displays or a single giant tablet, is here to stay. The latest idea for how users could one day interact with them comes from JLR and Cambridge University, who have co-developed an AI-powered ‘predictive touch’ technology that lets drivers use a car’s infotainment touchscreen without actually touching it.

The research argues that while operating a touchscreen for navigation, temperature control or entertainment functions, users can “often miss the correct item – for example due to acceleration or vibrations from road conditions – and have to reselect, meaning that their attention is taken off the road, increasing the risk of an accident.”

Cambridge University engineers have developed a solution that combines machine learning with vision-based sensors: the system detects a user’s pointing gesture and predicts, early in the movement, which on-screen item they intend to select.
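
To make the idea concrete, here is a minimal sketch of the kind of gesture-to-target prediction described above. It is purely illustrative: the real system relies on a trained machine-learning model and richer sensing, whereas this example simply extrapolates a tracked fingertip’s motion and picks the nearest on-screen item. All names and values are assumptions, not JLR’s or Cambridge’s actual implementation.

```python
# Illustrative sketch only: extrapolate a tracked fingertip trajectory and
# rank on-screen items by distance to the projected landing point.
# The actual JLR/Cambridge system uses a trained ML model and richer sensing.
from dataclasses import dataclass
import math

@dataclass
class Item:
    name: str
    x: float  # screen coordinates (arbitrary units)
    y: float

def predict_target(trajectory, items, lookahead=3):
    """trajectory: list of (x, y) fingertip samples, oldest first."""
    if len(trajectory) < 2 or not items:
        return None
    (x0, y0), (x1, y1) = trajectory[-2], trajectory[-1]
    vx, vy = x1 - x0, y1 - y0                            # rough per-frame velocity
    px, py = x1 + lookahead * vx, y1 + lookahead * vy    # projected landing point
    return min(items, key=lambda it: math.hypot(it.x - px, it.y - py))

items = [Item("Nav", 40, 20), Item("Volume", 120, 20), Item("Climate", 200, 20)]
samples = [(60, 80), (75, 65), (90, 52), (105, 40)]  # finger moving up toward the buttons
print(predict_target(samples, items).name)           # -> "Volume"
```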

During lab tests, the researchers found that the technology cut screen-interaction effort by up to 50 percent, thanks to its ability to predict the intended selection with high accuracy, resulting in less distraction and safer driving.

The researchers say the tech is also useful given the ongoing pandemic. Since touching the display is no longer required, there is a wide array of settings where deploying it could bring health benefits. Public facilities like ATMs, check-in kiosks, and self-service checkouts, as well as industrial systems equipped with predictive touch, could reduce the risk of spreading coronavirus and other pathogens, as the need for hand contact with interactive displays is virtually eliminated.

The tech, according to its lead developer Dr. Bashar Ahmad, is superior to “basic mid-air interaction techniques or conventional gesture recognition,” as it supports intuitive interactions with “legacy interface designs” and doesn’t require any learning on the user’s end. It can also be seamlessly integrated with existing touchscreen and interactive systems, provided they can feed the right sensory data to the software’s machine learning algorithm.
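
As a rough illustration of that integration point, the sketch below shows how a host infotainment stack might stream per-frame sensor samples into a predictive-touch module, which then returns the item to highlight or activate. The class and method names, and the simple nearest-item placeholder predictor, are hypothetical assumptions rather than the actual API.

```python
# Hypothetical integration sketch: an existing UI loop feeds per-frame sensor
# samples into a predictive-touch module, which returns the item to highlight
# or activate. Names and the nearest-item fallback predictor are assumptions.
from typing import Protocol, List, Tuple

Sample = Tuple[float, float]  # tracked fingertip position in screen coordinates

class GestureSensor(Protocol):
    def read(self) -> Sample: ...  # one sample per rendered frame

class PredictiveTouch:
    def __init__(self, items: List[Tuple[str, float, float]]):
        self.items = items              # (label, x, y) of selectable widgets
        self.history: List[Sample] = []

    def on_frame(self, sensor: GestureSensor) -> str:
        # Keep a short window of recent samples and re-run the prediction;
        # the host UI highlights (or auto-selects) the returned label.
        self.history = (self.history + [sensor.read()])[-10:]
        x, y = self.history[-1]
        label, _, _ = min(self.items,
                          key=lambda it: (it[1] - x) ** 2 + (it[2] - y) ** 2)
        return label
```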
