July 10, 2017

Four Things We Are Excited About This Year

“Prediction is very difficult, especially if it’s about the future.”

Niels Bohr

Every week, with a blatant disregard for the futility of trying to divine the future, our technology scouts and business strategists study the cutting edge of academic invention, and the markets it promises to disrupt, in the hopes that our next incubated venture will be the bridge that unites them.

Here are four things that we think will change the way we interact with technology and the world around us. If any of these opportunities resonate with you, reach out to us: we’re looking for technical leaders who won’t back down from a challenge!

JOIN AS AN EIR


1. A cocktail party solution

The world we experience is such a complex convolution of electromagnetic, acoustic and chemical signals that it is a marvel of animal evolution that we can make any sense of it at all.

With the (re)advent of machine learning, we have seen incredible strides in computer vision and the ability to teach computers to transform the signals generated by light striking a camera into meaningful objects, as well as to extract relationships between them. Unfortunately, when it comes to the semantic understanding of overlapping audio signals, progress has not been as swift. Sounds in a mixture aren't always spatially localized to their sources, so it is easy to see how even an array of microphones can struggle to unmix the hopelessly tangled pressure waves produced by the many sound sources in an average scene.

Prof. Colin Cherry coined the term “cocktail party problem”: the difficulty of following just one conversation while many others go on around us. Picture credit: Imperial College London

This issue, colloquially named the Cocktail Party Problem by Colin Cherry in 1953, is one that humans have mastered through a fusion of binaural audio signal processing and selective attention. Teaching our computers to interpret auditory scenes the way we do opens up some interesting use cases: biologically inspired robot navigation, multi-speaker transcription, context-enriched speech recognition, and sensor fusion that enhances computer vision are just a few of the applications we expect it to enable.
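At its core, this is a source separation problem. As a minimal sketch of the classical formulation (not a production approach, and a far cry from the deep-learning systems now pushing the field), here is blind source separation with independent component analysis: two synthetic "speakers" are mixed at two "microphones", then unmixed with scikit-learn's FastICA.

```python
# Minimal blind-source-separation sketch: two "voices" mixed at two
# microphones, unmixed with independent component analysis (ICA).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000)                # 1 second at 8 kHz

# Two independent source signals standing in for two speakers.
s1 = np.sin(2 * np.pi * 440 * t)           # a 440 Hz tone
s2 = np.sign(np.sin(2 * np.pi * 97 * t))   # a 97 Hz square wave
S = np.c_[s1, s2] + 0.01 * rng.standard_normal((t.size, 2))

# Each microphone hears a different linear blend of the sources.
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
X = S @ A.T                                # shape: (samples, microphones)

# ICA recovers the sources, up to permutation and scale.
ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)
print("estimated mixing matrix:\n", ica.mixing_)
```

Note that ICA only recovers sources up to permutation and scaling, and it needs at least as many microphones as sources; relaxing those constraints is exactly where the open research lives.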


2. Polite code

Software is eating the world, and it’s leaving a mess in the process. More than half of a developer’s time is spent debugging, and the cost of vulnerabilities and errors keeps growing as traditionally non-IT companies find significant parts of their business depending on software. Even the comparatively benign issue of technical debt becomes more severe as disparate systems grow increasingly interconnected through paradigms such as the IoT. How can we even begin to normalize the way we program computers across domains when Google’s codebase alone spans over two billion lines?

The answer may lie at the intersection of semantic language processing and computer science. While there are often many ways to write a piece of software that achieves a given outcome, computer code follows relatively strict syntactic rules (at least compared to human language), so it stands to reason that we may be able to teach computers to understand the semantic meaning behind a block of code and extract the programmer’s intent. With that intent identified, we could design systems that distill the best practices of computer science and apply them to a developer’s code. Pushing this idea to its limit, we could democratize programming so that the average consumer can string together a series of intentions into a fully working application for their personal needs.
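To make "extracting intent" less abstract, here is a deliberately tiny sketch: it parses Python source into an abstract syntax tree and flags a loop that manually accumulates a sum, a pattern whose intent the built-in sum() expresses more directly. Real systems would learn such patterns rather than hand-code them; this only illustrates that intent is visible in syntax.

```python
# Toy sketch: inferring a programmer's intent from syntax alone.
# We parse Python source into an AST and flag loops that manually
# accumulate a sum -- an "intent" the built-in sum() expresses better.
import ast

SOURCE = """
total = 0
for x in values:
    total = total + x
"""

class SumLoopFinder(ast.NodeVisitor):
    def visit_For(self, node):
        for stmt in node.body:
            # Match `acc = acc + <expr>` inside a for-loop body.
            if (isinstance(stmt, ast.Assign)
                    and len(stmt.targets) == 1
                    and isinstance(stmt.targets[0], ast.Name)
                    and isinstance(stmt.value, ast.BinOp)
                    and isinstance(stmt.value.op, ast.Add)
                    and isinstance(stmt.value.left, ast.Name)
                    and stmt.value.left.id == stmt.targets[0].id):
                print(f"line {stmt.lineno}: accumulation loop; "
                      "intent looks like sum() over the iterable")
        self.generic_visit(node)

SumLoopFinder().visit(ast.parse(SOURCE))
```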


3. Computational haptics

Photo: Evan Ackerman / IEEE Spectrum. What if touch and temperature could be brought to VR?

If you own an iPhone 7, you may have noticed the incredible way the home button turns from something that feels like a physical button when the phone is powered on into a flat, unyielding surface when it is powered off. This simple yet compelling demonstration illustrates the power of haptic feedback when applied with an understanding of how humans process the sense of touch.

Currently, our ability to generate expressive surfaces on our mobile devices is limited by the resolution of the haptic feedback we can apply; however, recent advances in transmitting force through transparent surfaces promise the ability to render the feel of buttons and sliders that mimic the UI elements we use to navigate our devices. It’s tempting to dismiss such a feature as merely aesthetic, but how much more intuitive would our devices be if we didn’t have to look at them for every interaction?
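What does it mean to "render" a button you can feel? A common trick is to drive the actuator along a force-displacement curve with a sudden drop, the signature our fingertips read as a click. The toy model below illustrates the idea; every constant is made up for illustration and is not drawn from any shipping device.

```python
# Toy model of haptic "click" rendering: the force the actuator should
# output as a function of finger displacement into a virtual button.
# All constants are illustrative, not taken from any real device.

def button_force(x_mm: float,
                 stiffness: float = 0.6,   # N per mm before the click
                 click_at: float = 1.0,    # displacement of the detent (mm)
                 drop: float = 0.35) -> float:
    """Spring ramp with a sudden force drop at the detent.

    The abrupt drop in resisting force is what our fingertips
    read as a 'click', even on a surface that never moves.
    """
    if x_mm <= click_at:
        return stiffness * x_mm            # linear spring ramp
    # Past the detent: force falls, then stiffens again (bottoming out).
    return stiffness * click_at * (1 - drop) + 2 * stiffness * (x_mm - click_at)

for x in (0.5, 1.0, 1.05, 1.5):
    print(f"{x:.2f} mm -> {button_force(x):.2f} N")
```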

We would also be remiss not to mention the enhancement such a modality would provide to the fledgling VR industry. This particular demonstration by AxonVR blew CES attendees away this year.


4. 3D without 3D

There probably aren’t many experiences that can leave you both exhilarated and nauseated at the same time. When it comes to VR and 3D displays, however, those two sensations often arrive in tandem. To experience imagery in 3D we rely on stereopsis: presenting two images of the same scene viewed from slightly different angles, mimicking human binocular vision. The trouble with this approach is that, unlike in the real world, objects displayed on a screen all fall on one plane. So while stereopsis fools our eyes into converging on objects at different apparent depths, as they would in a three-dimensional world, our ocular lenses remain focused on the plane of the screen.

This ‘vergence-accommodation’ conflict assails most VR users with vertigo and nausea after only a few minutes of use, and while content creators have found innovative ways to design around the problem, VR will remain relegated to the entertainment fringe until we find a real solution. One approach would be a new hardware paradigm such as this optical mapping display, but even by the best estimates such technologies are still years away from consumers’ hands. Instead, we may once again need to turn to human physiology for a solution.
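The numbers make the conflict concrete. Vergence is set by the virtual object's apparent distance, while accommodation is locked to the headset's fixed focal plane; the sketch below computes both for a few object distances, using a typical interpupillary distance and an assumed 2 m focal plane (both figures are illustrative).

```python
# Quick numbers behind the vergence-accommodation conflict: vergence is
# driven by the virtual object's distance, while accommodation (focus)
# is locked to the display's focal plane.
import math

IPD_M = 0.063          # typical interpupillary distance, ~63 mm
SCREEN_M = 2.0         # assumed focal distance of the headset optics

def vergence_deg(distance_m: float) -> float:
    """Angle between the two eyes' lines of sight, in degrees."""
    return math.degrees(2 * math.atan(IPD_M / (2 * distance_m)))

for virtual_m in (0.5, 1.0, 2.0, 10.0):
    mismatch_D = abs(1 / virtual_m - 1 / SCREEN_M)   # diopters of conflict
    print(f"object at {virtual_m:>4.1f} m: "
          f"vergence {vergence_deg(virtual_m):4.1f} deg, "
          f"focus conflict {mismatch_D:.2f} D")
```

Notice how the mismatch in diopters balloons for nearby virtual objects, which is consistent with near-field interactions being the most uncomfortable in today's headsets.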

It may surprise you to learn that stereopsis is just one of over 19 different cues humans use to perceive depth (16 of them require only one eye!). You may have noticed that new TVs equipped with High Dynamic Range (HDR) capability can look eerily realistic depending on the footage and the lighting conditions in your room. HDR is just one way that a computational approach to display can trick the human perceptual system into perceiving a new level of realism in our media.
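As a simplified taste of what "computational display" means in practice, here is the classic Reinhard tone-mapping curve, L / (1 + L), which compresses an enormous range of scene luminances into a displayable range while preserving the relative contrasts our visual system keys on. This is a textbook operator, not how any particular TV implements HDR.

```python
# Sketch of one computational-display trick: global tone mapping that
# squeezes real-world luminance into a displayable range while keeping
# relative contrast. Uses the classic Reinhard operator, L / (1 + L).
import numpy as np

def reinhard(luminance: np.ndarray) -> np.ndarray:
    """Map linear scene luminance (arbitrary units) into [0, 1)."""
    L = luminance / luminance.mean()   # normalize to the scene average
    return L / (1.0 + L)               # highlights compress, shadows keep detail

scene = np.array([0.05, 0.5, 2.0, 40.0, 400.0])   # deep shadow to specular sun
print(np.round(reinhard(scene), 3))
```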


Interested in solving one of these problems? Reach out to us!
