Super Ventures launches first fund and incubator dedicated to augmented reality


For the past 20 years, our team has been building an ecosystem around technologies we believe are bringing superpowers to the people: augmented reality, virtual reality and wearable tech. We believe these technologies are making us better at everything we do and will overtake personal computing as the next platform. To help breed these superpowers, we are launching Super Ventures, the first incubator and fund dedicated to augmented reality.

“Establishing an investment fund and incubator furthers our influence in the AR space by allowing us to invest in passionate entrepreneurs and incubate technologies that will become the foundation for the next wave of computing,” said Super Ventures Founder and GM, Ori Inbar.

Today we are announcing our inaugural fund of $10 million along with initial investments in several AR companies including Waygo, a visual translation app specializing in Asian languages, and Fringefy, a visual search engine that helps people discover local content and services. Our fund will also invest in technologies enabling AR, and we are separately backing an unannounced company developing an intelligence system to crowdsource visual data.

“I am extremely excited to be working with Super Ventures,” said Ryan Rogowski, CEO of Waygo. “The Super Ventures team has an impressive amount of operational experience and deep connections in the AR community that are sure to help Waygo reach its next milestones!”

“We feel each and every Super Ventures team member provides substantial advantages to Fringefy from technical aspects, to the user experience, and business strategy,” said Assif Ziv, Co-founder of Fringefy. “We are excited to tap into the Super Ventures team’s immense network and experience.”

“Super Ventures is building an ecosystem of companies enabling the technology that will change how computers — and humans — see and interact with the world,” said the CEO & co-founder of the unannounced company. “We are truly excited to be part of this brain trust, and to leverage the network and expertise of the Super Ventures team.”

The Super Ventures partnership is made up of Ori Inbar, Matt Miesnieks, Tom Emrich and Professor Mark Billinghurst.

Ori Inbar has devoted the past 9 years to fostering the AR ecosystem. Prior to Super Ventures he was the co-founder and CEO of AR mobile gaming startup Ogmento (now Flyby Media — acquired by Apple). He is also the founder of AugmentedReality.org, a non-profit organization on a mission to inspire 1 billion active users of augmented reality by 2020 and the producer of the world’s largest AR, VR and wearable tech event in Silicon Valley and Asia, AWE. Ori advises dozens of startups, corporations, and funds about the AR industry.

Matt Miesnieks most recently led an AR R&D team at Samsung and previously was co-founder & CEO of Dekko, the company that invented 3D AR holograms on iOS. Prior to this, Matt led worldwide customer development for mobile AR browser Layar (acquired by Blippar), and held executive leadership roles at Openwave and other mobile companies.

Tom Emrich is the founder of We Are Wearables, the largest wearable tech community of its kind, which has become the launch pad for a number of startups. He is also the co-producer of AWE. Tom is well known for his analysis and blogging on wearable tech as one of the top influencers in this space. He has used this insight to advise VCs, startups and corporations on opportunities within the wearable tech market.

Professor Mark Billinghurst is one of the most recognized researchers in AR. His team developed the first mobile AR advertising experience, the first collaborative AR game on mobile devices, and early visual authoring tools for AR, among many other innovations. He founded the HIT Lab NZ, a leading AR research laboratory, and is the co-founder of ARToolworks (acquired by DAQRI), which produced ARToolKit, the most popular open source AR tracking library. In the past he has conducted innovative interface research for Nokia, the MIT Media Lab, Google, HP, Samsung, British Telecom and other companies.

“The next 12–18 months will be seen as the best opportunity to invest in augmented reality,” said Matt Miesnieks. “Our goal is to identify early-stage startups developing enabling platforms and provide them with necessary capital and mentorship. Our guiding light for our investment decisions is our proprietary industry roadmap we’ve developed from our combined domain expertise. The companies we select will be the early winning platforms in the next mega-wave of mobile disruption.”

“Our team will take a hands-on approach with the startups, leveraging our industry experience, research knowledge and networks to help them meet their goals while part of the incubator,” added Professor Mark Billinghurst.

“Unique to Super Ventures is the community we have fostered which we will put to work to help support our startups,” said Tom Emrich. “Our community of over 150,000 professionals not only gives us access to investment opportunities early on, it also offers a vast network of mentors, corporate partners and investors which our startups can rely on to succeed.”

Join us in bringing superpowers to the people.

Optinvent Unveils Consumer-Oriented ORA-X Smart Glass

Optinvent Unveils the Smart Glass that doesn’t make you look like a Creep


The ORA-X is a revolutionary mobile device in a disruptive form factor. Forget “Smart Glasses” that look geeky. Enter a new category: Smart Headphones – running Android, with high-quality audio and an adjustable see-through display.

 

January 05, 2015, Las Vegas, NV – Optinvent, a world leader in smart digital eyewear, today unveils, for the first time anywhere, the design of its revolutionary ORA-X smart headphones.

Targeting music lovers on the go, the ORA-X creates a brand-new category that is part smart glass, part high-end wireless audio headphones. It runs Android and will feature quality audio as well as disruptive see-through retinal projection technology.

“Smart Glasses have been plagued by what I call a ‘paradigm prison’. You can’t make a fashion accessory out of them no matter how hard you try. Consumers are just not ready to embrace looking geeky for the added functionality,” says Kayvan Mirza, CEO and Co-Founder of Optinvent. “The ORA-X is a clean break from this paradigm. Now not only can you hear music, but you can ‘see music’. And that’s just the tip of the iceberg. It’s also about hands-free mobile computing without looking like a cyborg. It’s about ‘augmenting your senses’ while still looking stylish.”

Imagine watching music videos and clips on the go, searching the web, video conferencing, taking pictures, sharing on social networks, GPS navigation, and all the other uses that smart glasses promise… without looking like a cyborg. Traditional headphones can give you high-quality audio, but their functionality hasn’t evolved much beyond that. However, they have become a fashion statement, and people can be seen flaunting these colorful head-worn accessories. The ORA-X wants to go further by adding vision. This is a new and compelling user experience, and this revolutionary device could mean the end of the “regular” headset. Why just hear when you can see?


The ORA-X is scheduled to be released in 2015. It will run standard Android apps – just like any standalone smartphone or tablet device. It is based on Optinvent’s cutting edge display technology and includes high end acoustics.

On the hardware side, specifications include:

  • Large, transparent virtual display
  • High-fidelity speakers w/ active noise cancellation
  • Microphone (for calls and voice commands)
  • Front-facing camera
  • 9-axis motion sensor
  • Wireless connectivity (Bluetooth, Wi-Fi, GPS)
  • Trackpad (mouse and swipe) for tactile interactions with the device
  • High-capacity Li-ion rechargeable battery
  • Powerful microprocessor and GPU with enough memory to support complex applications

For more information, please visit http://www.optinvent.com

About Optinvent

Optinvent is a world leader in digital eyewear and see-through retinal projection technology. Optinvent’s team has 20+ years of experience in the field of consumer electronics and is recognized in the industry for developing cutting edge patented technologies and products.

 

Press contact:

Maiwenn Regnault

maiwenn@oxygen-pr.com

(415) 609-0140

Augmented Reality at CES 2015 – What to expect?

AR.org and Ori Inbar will be hanging out at the AR Pavilion – booth #SV6 at the entrance to the Sands Expo Level 2 at CES 2015. Come see us!

What else to expect at CES 2015?

Here are new announcements:

Optinvent unveils the ORA-X

The ORA-X is a revolutionary mobile device in a disruptive form factor. Forget “Smart Glasses” that look geeky. Enter a new category: Smart Headphones – running Android, with high-quality audio and an adjustable see-through display.

Vuzix announced a 30% stake investment by Intel

A great vote of confidence in the Smart Glasses pioneer. Vuzix is also expected to show a working pre-production demo of slim see-through stereo AR glasses.

ODG to unveil sub-$1,000 consumer Smart Glasses at CES 2015 booth #SV6

Epson to showcase Application Developers

Epson to Showcase New Consumer and Enterprise Apps on Moverio BT-200 Smart Glasses

At booth #74728

Augmented Reality Apps for Live Action Gaming, Education, Manufacturing and More Demonstrate Power of Platform for Merging Digital Content with Real-World Environment

Partners demonstrating include LyteShot, PlayAR, EON Experience VR, NGRAIN, Aero Glass, Augumenta, Scope AR, Metaio, APX Labs, and Rochester Optical.

SoftKinetic Reveals 3D Vision Technology for Augmented Reality Mobile Platforms

Sony SmartEyeGlass

Sony Electronics Suite 30-111; booths N108 and 14200

Seebright Wave new product announcement at booth 75480

Atheer Showcases Gesture-Controlled AiR™ Smart Glasses

ARM Meeting Space MP25256; South Hall 2 (Ground Level)

Augumenta announces Roshambo Reloaded

Will demo new game at the Epson Sands Expo booth #74728

 

InfinityAR showcases its engine with Lumus Glasses at booth #SV6

http://www.infinityar.com/

 

XOEye Technologies and Vuzix Announce Partnership

XOEye Technologies and Vuzix Partner to Deliver End-to-End Enterprise Wearable Technology Solutions in North America and Europe

XOEye is exhibiting its solutions this week at the 2015 International CES in Las Vegas at the Eureka Park NEXT pavilion located in the Sands Convention Center, Level 2, Booth #75616.

Qualcomm to demonstrate Tablets with DepthSense Cameras and Vuforia

Qualcomm will demonstrate at their main booth a Sony Xperia tablet with an embedded DepthSense camera running a new Vuforia build.
Booth #8252

Hyundai to show an Augmented Reality Head-Up Display at CES

Also, don’t miss:

CastAR’s Jeri Ellsworth in CNET’s SuperSession panel – The Next Big Thing: New Realities. Tuesday, January 6th, in LVCC, North Hall, Room N257, from 3:30-4:30 PM.

Additional notable AR and VR presence at CES 2015

  • Kopin at the Venetian Palazzo Hospitality Suites
  • APX will also be present at the following booths: Vuzix, Epson, and Sony
  • Occipital unveils new gaming experience at CES South Hall (gaming area), also at Sands as part of Autodesk (Spark)
  • Lumus Suite 335

  • MicroVision, Inc. Suite 469, Suite 569, Suite 571

  • Sulon Technologies Inc. 26214

  • Tobii 71837

  • XOEye Technologies 75616

  • Oculus VR 26002, MP25965

  • Avegant Corporation 74547

  • eyeSight Mobile Technologies at the Venetian Palazzo Hospitality Suites 72037

  • Dassault Systèmes | 3DEXCITE 72722

  • Canon N109, 13106

  • Daqri’s new Smart Helmet

 

Predictions for 2015

Finally, the best collection of predictions for Wearable Tech in 2015 by Tom Emrich:

 

14 WEARABLE TECH PREDICTIONS FOR 2015

 

And to get the lowdown on the Smart Glasses market, key players, and adoption, get your copy of AugmentedReality.Org’s Smart Glasses Report at http://www.augmentedreality.org/smartglassesreport

 

Smart Glasses Report Predicts 1 Billion Shipments By 2020

New York, NY – November 5, 2014: A new report by AugmentedReality.Org predicts that the Smart Glasses market will soar towards 1 billion shipments near the end of the decade. The report, “Smart Glasses Market 2014”, defines the scope of the Smart Glasses (or Augmented Reality Glasses) market, predicts how fast it will ramp up, and identifies which companies are positioned to gain from it. It forecasts the adoption phases between 2014 and 2023, the drivers and challenges for adoption, and how hardware and software companies, as well as investors, should plan ahead to take part in the next big computing cycle.

Get the Report

Progress and Mind Share

With over 10 new Smart Glasses launched in 2014, this has been a banner year for Smart Glasses. AugmentedReality.Org expects shipments to reach 1 million by fall 2015 – mostly for enterprises – followed by an increase to 10 million by 2016 and 50-100 million shipments by 2018, eventually capturing the mainstream consumer space and crossing 1 billion shipments at the turn of the decade. The report predicts that as the market matures and early winners emerge, by the end of 2016 the market will experience a “shakeup” with mergers, acquisitions, and significant investments. It argues that consumer electronics giants and other players in the ecosystem have no more than a 12-month window to position their companies in the space (build, buy, partner) – or risk missing the opportunity.

Market Adoption

Enterprises Will Lead, Consumers Will Follow 

Driven by the need of Fortune 500 companies to become more competitive, the largest investments in Smart Glasses and related software in the next few years will come mostly from the enterprise space. AR Glasses targeting niches (bicycle helmets, competitive sports, entertainment) could also thrive. Once enterprise usage irons out the kinks of Smart Glasses and pushes their prices further down, the consumer market will take the lead, with the goal of shipping a pair of Smart Glasses to every consumer.

Target Audience 

  • Hardware manufacturers and suppliers

  • Hardware startups

  • Software developers

  • Investors

  • The entire Augmented Reality ecosystem

Key Questions Answered in this Report

  • How will the market evolve?

  • Who are the key players? What are their strategies?

  • What is the competitive landscape?

  • What are the needs, challenges, and solutions?

  • What’s the value chain for AR Glasses?

  • What’s the forecast for market adoption in the next decade?

  • What are the drivers for adoption?

  • What’s the right price? The right timing?

  • Will this market happen at all? How big can it get?

Companies Mentioned in this Report

Google, Epson, Vuzix, Optinvent, Lumus, Meta, Sony, Samsung, Apple, Amazon, Kopin, ODG, Atheer, Glassup, Mirama, Penny, Laster, Recon, Innovega, Elbit, Brother, Oakley, Fujitsu, Microsoft, Canon, Lenovo, Baidu, Nokia, LG, Olympus, Foxconn, Konica Minolta, Daqri, Skully Helmets, Fusar, Magic Leap, Oculus.

AugmentedReality.Org is a global not-for-profit organization with a mission to advance Augmented Reality to advance humanity. It catalyzes the transformation of the AR industry by educating the market about the real power of AR, connecting the best talent around the world, and hatching AR Startups and helping bring them to market. This report is a service to the community funded by members and sponsors.

About the Author: Ori Inbar is the Co-Founder and CEO of AugmentedReality.Org, and the founder and producer of Augmented World Expo – the world’s largest conference for AR. He dedicates his time to exploring and analyzing every aspect of the industry, trying every product, and speaking with every expert. He lives and breathes Augmented Reality. In 2009, Ori was the co-founder and CEO of Ogmento, one of the first venture-backed companies focused on augmented reality games. Ori is recognized as a passionate speaker in the AR industry, a lecturer at NYU, as well as a sought-after adviser and board member for augmented reality startups.

Watch Ori Inbar present key findings from the Smart Glasses Report at InsideAR 2014

For further details please contact info@AugmentedReality.Org or +1 (571) 293-2013

The 27-page report is available for $799 on AugmentedReality.Org’s website.

AugmentedReality.Org members may purchase the report at a discount for just $99.

Get the Report

Glass Explore and Explorer


Google I/O kicked off today without much fanfare around Glass. From a pure awareness standpoint, Glass is the best thing that has happened to Augmented Reality since the iPhone. And as a champion of the Augmented Reality industry from way back in 2007, I am an avid supporter.

But Glass Explorers* make me angry (*users of the Google Glass prototype).

I am not angry at Explorers because they love to walk down the street with Glass so that passersby stop and ask them about it (although passersby just want to take selfies with Glass).


I am not angry at Explorers because they love getting into bars just to be denied service. Nor am I angry because they drive cars with Glass just to annoy highway patrol officers.

And you know what, I am not even angry at their eagerness to pay an exorbitant amount of money to be testers in the most expensive beta program ever.

All that doesn’t bother me so much.

As Jon Stewart says: Intolerance shouldn’t be tolerated.

You know why I am angry at Glass Explorers?

Because they totally mistake the purpose of wearing Glass.

In the Daily Show’s “Glass Half Empty” segment a Glass Explorer explains: “it’s basically a cell phone on your face.”

Ugh!


“Make calls, get email, surf the internet…accessibility to everything on your cell phone” but now “right there on your eye”.

This is a bad case of skeuomorphism. Arrrgh!


Skeuomorphism: a dial phone on a touch screen!?

Can’t escape the comparison to Dumb and Dumber.

The “Glass Half Empty” Explorer argues: “With Glass you maintain in the here and now…”

So far – that’s brilliant. With Augmented Reality you Play in the Now.

But then he continues: “when I check messages I am looking in your general direction – I am not distracted.”

Just when I thought you couldn’t possibly be any more explorer. Or dumber.


My friend (and I mean it from the bottom of my heart), if you are reading a text message while talking to me – you ARE distracted. And looking in my GENERAL direction is like farting in my general direction.

Maybe these are just run-of-the-mill explorers regurgitating talking points.

So I asked a [very] senior [and very smart] member of the Google Glass team what’s the most compelling Glass app he’s seen so far. He didn’t flinch when he answered: “Texting.”

Wah-What!?

This makes me mad!!!

The second Law of Augmented Reality design clearly states “Augmented Reality must not distract from reality”.


If it does distract you, it ventures into virtual reality, which is an escape from the real world. The fundamental purpose of Augmented Reality is to make you more aware of the real world and make things around you more interactive. Because in an interactive world, everything you do is more engaging, productive, and fun.

The Simpsons’ Days of Future Future episode warns us about the consequences of not paying attention to the real world:

Epilogue

An incident that brought my anger to a head: a senior member of the Glass team who recently participated in a Glass Class at AWE 2014 didn’t agree to be videotaped or mentioned by name, while at the same time he was wearing Glass and [could have] recorded us all…

Aaarrrggggh!!

When I calm down, I’ll show what I consider good uses of Augmented Reality.

In the meantime check out over a hundred videos from AWE 2014 – the world’s largest event focused on Augmented Reality, Wearables, and the Internet of Things.

Guest Post: Presence – The Powerful Sensation Making Virtual Reality Ready for Primetime

Evolving the Human Machine Interface Part III

The concept of Presence in Virtual Reality (VR) has been gaining popularity over the past year, particularly within the gaming community. With consumer VR devices in development from Oculus, Sony, and more than likely Microsoft, Presence has become the metric by which we evaluate all VR experiences. But Presence is difficult to describe to someone who has never tried VR. “It’s like I was actually there. It made me feel like what I was seeing was actually happening to me, as though I was experiencing it for real,” is how one colleague described the experience.

Presence in VR triggers the same physical and emotional responses one would normally associate with a real-world situation; it is the wonderfully magical experience of VR. But how is Presence achieved? While many research studies have provided a variety of subjective descriptions of Presence, there seem to be 3 common variables that affect tele- or virtual Presence most:

1.  Sensory input: at minimum the ability to display spatial awareness

2.  Control: the ability to modify one’s view and interact with the environment

3.  Cognition: our individual ability to process and respond to sensory input

Because the nature of VR isolates the user from real-world visual input, if the device’s sensory input and control are inadequate or missing then the effect of Presence fails, and the results are oftentimes met with ill side effects: you feel sick to your stomach!

Sensory Input

For those who have tried VR, at some point or another you’ve felt queasy. The point at which the experience turns from wonder to whoa has been an unfortunate side effect throughout the development of VR. As Michael Abrash presented at Valve’s Steam Dev Days, the hurdles to overcoming VR sickness and achieving Presence are within reach. In the video below, Michael expertly summarizes the technical hurdles to achieving a believable sense of Presence in VR.

“What VR Could, Should, and Almost Certainly Will Be within Two Years” Steam Dev Days 2014

Michael Abrash, Valve Software

Control

To achieve a minimum level of Presence, head tracking is used to display the VR image to match the user’s own head position and orientation (a minimal sketch of this step appears at the end of this section). While this helps to create the sense of spatial awareness within VR, it still doesn’t make someone’s experience truly “Present.” To do that, we need to add a representation of ourselves, either through an avatar or our own body image. Viewing our physical presence in VR, known as body awareness, creates an instant sense of scale and helps to ground the user within the experience. In the video sample below, Untold Games is creating body awareness through avatar control.

“Loading Human, true body awareness” Untold Games 2014

Without body awareness, VR can feel more like an out-of-body experience. Everything “looks” real and the user has spatial awareness, but the user’s body and movements are not reflected; therefore the user does not feel actually present. Combining body awareness with VR’s spatial awareness creates a strong bond between the user and the experience.
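To make the head-tracking step above concrete, here is a minimal sketch of how a tracked head pose typically becomes the view matrix used to render each frame. It is an illustrative example under assumed conventions (a unit quaternion and a position from some tracker); the function names and values are assumptions, not the API of any particular VR SDK.

```python
# Minimal sketch: turning a tracked head pose into a per-frame view matrix.
# The quaternion convention (w, x, y, z) and the 1.7 m eye height are
# illustrative assumptions, not values from any specific headset SDK.
import numpy as np

def quat_to_rot(w, x, y, z):
    """Convert a unit quaternion to a 3x3 rotation matrix."""
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def view_matrix(head_quat, head_pos):
    """Build a 4x4 view matrix as the inverse of the head's world pose."""
    R = quat_to_rot(*head_quat)                 # head-to-world rotation
    V = np.eye(4)
    V[:3, :3] = R.T                             # world-to-head rotation
    V[:3, 3] = -R.T @ np.asarray(head_pos)      # world-to-head translation
    return V

# Each frame, re-render the scene with the freshest pose so the image
# stays locked to where the user is actually looking.
V = view_matrix((1.0, 0.0, 0.0, 0.0), (0.0, 1.7, 0.0))
```

Low latency matters as much as correctness here: rendering with a stale pose is one of the mismatches between sensory input and display that produces the queasiness described earlier.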

Cognition

The third variable of Presence is us. The feeling of Presence in VR is directly influenced by our personal ability to process and react to environmental changes in the real world. It’s likely that many of us will not have the same reactions to the experiences within VR. If you easily get carsick, then VR motion will give you the same sensation. If you’re afraid of heights, fire, spiders, etc., you’re going to have the same strong reactions and feelings in VR. Our individual real-life experience influences our perception of and reactions to VR. This can lead to some interesting situations, in particular with gaming. For example, one player may be relatively unaffected by a situation or challenge, while another may be strongly affected.

Obviously the conditions of Presence are perceptual only. In most cases we’re not at the same physical risk in virtual environments as we would be in real life. But our own cognition coupled with VR’s ability to create Presence is why VR is such a popular field for everything from gaming and entertainment to therapy and rehabilitation.

Once we start to overcome these technical hurdles and provide a basic level of Presence, we next need to understand what it will ultimately enable. What does Presence provide for us in an experience other than merely perceiving the experience as real-like? We’ll explore that idea in the next segment, and try to understand where Presence will have the most impact.

Guest Post: Harnessing the Power of Human Vision


By Mike Nichols, VP Content and Applications at SoftKinetic

For some time now, we’ve been in the midst of a transition away from computing on a single screen. Advances in technology, combined with the accessibility of the touch-based Human Machine Interface (HMI), have enabled mobile computing to explode. This trend will undoubtedly continue to evolve as we segue into more wearable Augmented Reality (AR) and Virtual Reality (VR) technologies. While AR and VR may provide substantially different experiences from their flat-screen contemporaries, both face similar issues of usability. Specifically, what are the most accessible ways to interact using these devices?

The history of HMI development for both AR and VR has iterated along similar paths of using physical controllers to provide user navigation. Although the use of physical controls has been a necessity in the past, if they remain the primary input, these tethered devices will only serve as shackles that prevent AR and VR from reaching their full potential as wearable devices. While physical control devices can and do add a feeling of immersion to an experience, in particular with gaming, you would no more want a smartphone that was only controllable via a special glove than you would want to control your smart glasses through a tethered controller. As the technology for AR and VR continues to evolve, it will eventually need embedded visual and audio sensors to support the HMI – in particular, visual sensors to support a full suite of human interactions that will integrate with our daily activities in a more natural and seamless way than our mobile devices do today.

In Depth

A depth sensor is the single most transformative technology for AR and VR displays because it is able to see the environment as 3-dimensional data, much like you or I do with our own eyes. It’s the key piece of technology that provides us with the building blocks needed to interact with our environments – virtual or otherwise. The depth sensor allows us to reach out and manipulate virtual objects and UI by tracking our hands and fingers.
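As a rough illustration of what “seeing the environment as 3-dimensional data” means in practice, here is a minimal sketch of back-projecting one depth-map pixel into a 3D point using a pinhole camera model. The intrinsic values and function name are made-up assumptions for the example, not SoftKinetic’s actual API; a real sensor ships with calibrated values.

```python
# Minimal sketch: mapping a depth-sensor pixel to a 3D point in camera space
# with a pinhole model. Intrinsics below are illustrative assumptions for a
# 640x480 depth map, not the calibration of any real device.
import numpy as np

FX, FY = 570.0, 570.0   # focal lengths in pixels (assumed)
CX, CY = 320.0, 240.0   # principal point (assumed)

def deproject(u, v, depth_m):
    """Back-project pixel (u, v) with depth in meters into camera space."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

# A fingertip detected at pixel (400, 220) at 0.45 m becomes a 3D position
# that can be hit-tested against virtual objects and UI elements.
fingertip = deproject(400, 220, 0.45)
```

Run over every pixel, the same computation yields a point cloud – the raw material for the hand tracking and surface detection discussed in this post.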

A scene’s depth information can be used for surface and object detection, and graphics can then be overlaid relative to any surface at the correct perspective for our head’s position and angle. Depth recognition combined with AR and VR presents a profound change from the way we receive and interact with our 2D digital sources today. To simulate this effect, the video below shows an example of how a process known as projection mapping can transform even simple white cards in astonishing ways.

“Box” Bot & Dolly

It’s not hard to imagine how AR combined with depth can be used to transform our view of the world around us – not only augmenting our world view with information, but even transforming live entertainment such as theater, concerts, and sporting events, and even photography and more.

Take a more common example like navigation. Today, when we use our smartphones or GPS devices to navigate, our brain has to translate the 2D information on the screen into the real world. Transference of information from one context to another is a learned activity and often confusing for many people. We’ve all missed a turn from time to time and blamed the GPS for confusing directions. In contrast, when navigating with depth-enabled AR glasses, the path will be displayed as if projected into the environment, not overlaid on a flat simulated screen. Displaying projected graphics mapped to our environment creates more context-aware interactions and makes it easier to parse relevant information based on distance and view angle.
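To see why this feels different from a flat map, consider how a world-anchored turn marker would reach the display: each frame, its world position is transformed by the wearer’s head pose and projected onto the screen. The sketch below assumes a pinhole display model and made-up pose numbers; it is illustrative, not a production renderer.

```python
# Minimal sketch: projecting a world-anchored navigation waypoint into
# display coordinates. Pose and intrinsics are illustrative assumptions.
import numpy as np

def project_waypoint(p_world, R_head, t_head, fx, fy, cx, cy):
    """Project a world-space point into pixel coordinates on the display."""
    p_cam = R_head.T @ (np.asarray(p_world) - np.asarray(t_head))  # world -> head space
    if p_cam[2] <= 0:
        return None                      # behind the wearer; nothing to draw
    u = fx * p_cam[0] / p_cam[2] + cx    # pinhole projection
    v = fy * p_cam[1] / p_cam[2] + cy
    return (u, v), p_cam[2]              # pixel position plus distance for scale/fade

# A turn marker 12 m ahead of a wearer standing at the origin looking down +Z:
# its screen position shifts naturally as the head moves.
marker = project_waypoint([0.0, 0.0, 12.0], np.eye(3), [0.0, 1.7, 0.0],
                          570.0, 570.0, 640.0, 360.0)
```

Because the projection is recomputed as the head moves, the marker appears fixed to the street rather than to the screen – exactly the context shift described above.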

Bridge the gap

As we look to the future of AR and VR, both will certainly require new approaches to enable an accessible HMI. But that won’t happen overnight. With commercialized VR products from the likes of Oculus, Sony and more coming soon, we’ll have to support an interactive bridge to a new HMI through existing controllers. Both Sony and Microsoft already offer depth cameras for their systems that support depth recognition and human tracking. The new Oculus development kit includes a camera for tracking head position.

We’re going to learn a lot over the next few years about which interactions work well and which do not. With the technology advances needed to make commercial AR glasses feasible as a mass-market option still a ways off, it’s even more important to learn from VR. Everything done to make VR more accessible will make AR better.

Stay tuned for our next guest post, where we’ll take a closer look at how depth will provide a deeper and more connected experience.

Guest Post: Evolving the Human Machine Interface

How the World Is Finally Ready For Virtual and Augmented Reality

By Mike Nichols, VP, Content and Applications at SoftKinetic

The year is 1979, and Richard Bolt, a researcher at MIT, demonstrates a program that enables the control of a graphical interface by combining both speech and gesture recognition. As the video below demonstrates, Richard points at a projected screen image and issues a variety of verbal commands like “put that there” to control the placement of images within a graphical interface, in what he calls a “natural user modality”.

“Put-That-There”: Voice and Gesture at the Graphics Interface
Richard A. Bolt, Architecture Machine Group
Massachusetts Institute of Technology – under contract with the Cybernetics Technology Division of the Defense Advanced Research Projects Agency, 1979.

What Bolt demonstrated in 1979 was the first natural user interface. A simple pointing gesture combined with a verbal command, while an innate task in human communication, was and still is difficult for machines to understand correctly. It would take another 30 years for a consumer product to appear that might just fulfill that vision.

A new direction

In the years following Richard’s research, technology would advance to offer another choice for improving the Human Machine Interface (HMI). By the mid-’80s the mouse, a pointing device for 2D screen navigation, had evolved to provide an accurate, cost-effective, and convenient method for navigating a graphical interface. Popularized by Apple’s Lisa and Macintosh computers, and supported by the largest software developer, Microsoft, the mouse would become the primary input for computer navigation over the next 20 years.


“The Macintosh uses an experimental pointing device called a ‘mouse’. There is no evidence that people want to use these things.”

John C. Dvorak, San Francisco Examiner

 

In 2007, technology advancements helped Apple once again popularize an equally controversial device, the iPhone. With its touch sensitive screen and gesture recognition, the touch interface in all its forms has now become the dominant form of HMI.

The rebirth of natural gesture

Although seemingly dormant throughout the ’80s and ’90s, research continued to refine a variety of methods for depth and gesture recognition. In 2003 Sony released the EyeToy for use with the PlayStation 2. The EyeToy enabled Augmented Reality (AR) experiences and could track simple body motions. Then in 2005 Nintendo premiered a new console, the Wii, which used infrared in combination with handheld controllers to detect hand motions for video games. The Wii controllers, with their improved precision over Sony’s EyeToy, proved wildly successful and set the stage for the next evolution in natural gesture.

In 2009 Microsoft announced the Kinect for Xbox 360, with its ability to read human motions to control our games and media user interface (UI), without the aid of physical controllers.

What Richard Bolt had demonstrated some 30+ years prior was finally within grasp. Since the premiere of Kinect we’ve seen more progress in the development of computer vision and recognition technologies than in the previous 35 years combined. Products like the Asus Xtion, Creative Senz3D, and Leap Motion have inspired an energetic global community of developers to create countless experiences across a broad spectrum of use cases.

The future’s so bright

To this day, Richard’s research speaks to the core of what natural gesture technology aims to achieve: that “natural user modality”. While advances in HMI have continued to iterate and improve over time, the medium for our visual interaction has remained relatively intact: the screen. Navigation of our modern UI has been forced to work within the limits of the 2D screen. With the emergence of AR and VR, our traditional forms of HMI do not provide the same accessible input as the mouse and touch interfaces of the past. Our HMI must evolve to allow users to interact with the scene and not the screen.

CES 2014, Road to VR: SoftKinetic premieres hand and finger tracking for VR.

 

Next, we’ll explore how sensors, not controllers, will provide the “natural user modality” that will propel AR and VR to become more pervasive than mobile is today. The answer, it seems, may be right in front of us…we just need to reach out and grab it.

 

4 years at ARNY: Augmented Reality meetup celebrates 1500 members, 200 demos, new startups.

I founded the Augmented Reality New York meetup (ARNY) exactly 4 years ago as a labor of love, and it has developed a life of its own: attracting nearly 1500 members, introducing 200 Augmented Reality demos, helping advance AR in NYC, creating partnerships, helping AR enthusiasts find jobs, and spurring some fantastic AR startups.

How did we celebrate last night’s ARNY?

With a fantastic collection of speakers and demos from all over the world: Israel, Canada, Colombia and New York City.

Huge shout-out to our wonderful host Mark Skwarek at NYU Poly!

1) Brendan Scully – Metaio – first-ever SLAM demo on Google Glass from the Augmented Reality Company

2) Niv Borenstein – Kazooloo – A truly fun-to-play Augmented Reality game-toy combination

3) Keiichi Matsuda – Hyper-Reality – A new vision for the future with plenty of Augmented Reality goodness. Back him on Kickstarter!

4) Dhan Balachand – Founder and CEO, Sulon Technologies – a new head mounted console that will fully immerse players into games where the real world and the virtual world become one.

5) Ori Inbar – The latest and greatest from around the world of augmented reality

AR on Sony PlayStation 4 featuring Augmented Reality on mainstream TV

MIT Tangible Interfaces: inForm – Interacting with a dynamic shape display

8Tree – fastCHECK is a revolutionary surface inspection system that is amazingly easy to use, combining a laser scanner with a projector to help inspect aircraft.

2013 DEMO Gods winner review – Pristine – Delivering the next generation of telemedicine and process control solutions at the point of care through Google Glass.

Moto X box-opening AR experience – augmenting a paper popup for storytelling

One Fat Sheep’s Hell Pizza Zombie game (New Zealand)

TWNKLS’ cool Augmented Reality SLAM demo for maintenance and repair at Europort

Re+Public new AR mural app – resurrecting murals with Augmented Reality

https://vimeo.com/77516545

Nikola Tesla app IndieGoGo launch by Brian Yetzer

https://vimeo.com/80143324

Stay tuned for a full video of the entire event.

RE+Public launching urban art Augmented Reality – reviving murals

Re+Public is (finally!) publicly launching its anticipated and innovative urban art augmented reality mobile app.

App Store: search “republic”

Google Play: search “re+public heavy”


In addition to our projects at the Bowery Wall (NYC) and Wynwood Walls (MIA), the launch of our free app coincides with the video release of our most recent project, which is a collaboration with MOMO to create an interactive digital mural in St. Louis:


https://vimeo.com/77516545

Note: if you are unable to visit these cities, you can still trigger the augmented reality experience from the mural images on the Re+Public website.

While currently only available on mobile devices, the Re+Public app is a visionary first step toward a future of digitally augmented urban spaces that individuals will view and interact with through wearables. The Re+Public app will soon expand to include projects in more US locations and cities abroad.

A sincere thanks for your support and continuing to follow Re+Public. Please feel free to forward this email and help get the word out ;)

Additional Links:

Project Videos

Facebook

Official Press Release

Re+Public

Re+Imagining Public Space

Los Angeles | New York City

info@republiclab.com