H U M A N

Bringing you the parts of the future that are ready to work today.


The Future is Not a Screen You Can Touch

Interaction design beyond the glass rectangle

Dec 18, 2015


Ryan Betts, co-founder and Chief Prototyping Officer at HUMAN, presented to students at Vancouver’s RED Academy an overview of the work we do and of the different ways we’re thinking beyond the rectangle. The images below are an excerpt from the presentation.

Every screen larger than the one on your wrist will get eaten by VR/AR in the next 10 years — HUMAN

In 2015, the dominant screens in our lives are Mark Weiser’s tabs, pads and boards: watches, smartphones, tablets, laptops and TVs. Glass rectangles of various sizes. As designers, we spend most of our time designing experiences for constellations of glass rectangles. But …

Glass rectangles aren’t much longer for this earth

At least not as the centrepieces of our lives. They’ll soon be aging accessories. Our rectangle fetishism is already starting to fade.

By 2025, the transition away from glass rectangles will be well underway. By then, we’ll be charging towards two extremes of computing accessories: wearables that calmly direct our attention to important background activity, and immersive displays that allow us to focus our whole attention on a job to be done in the foreground.

Rectangles won’t disappear. Glass won’t disappear. But the palette of shapes and materials will expand until glass rectangles are marginalized. The ergonomics of computing will mature. Suddenly, information technology will be valuable for tasks beyond just information work.

It’ll be everywhere, for everything. And that’s going to pose a lot of new challenges.

Seeing Spaces

Bret Victor — formerly of Apple — has spoken very clearly on how he imagines this future playing out. Right now, we have maker spaces to help us better construct all sorts of objects we can imagine. In the near future, we’ll need seeing spaces to help us better understand those objects and their increasingly complex inner workings.

As the world around us increases in its complexity, the tasks we are faced with daily increase in complexity as well. Seeing spaces will help us to face the challenges of complexity head on.

Watch his Seeing Spaces presentation, from May 2014.

We’re big fans of Bret Victor. Another recommended talk to watch, full of concepts to dive into, is his The Humane Representation of Thought.

Holodecks are a red herring

The future may feel like the Holodeck, but you shouldn’t need to stare into a glass rectangle like Wesley Crusher.

While this future may sound like the Holodeck, it really isn’t. Seeing spaces aren’t about invoking an imaginary landscape disconnected from the real world around it. A world disconnected from reality is no more useful than a world shoehorned into a tiny glass rectangle.

The future is about pulling content out of screens.

Capture of an email interface in Magic Leap concept video

Pulling everything out of the rectangle on your desk (even tedious things like emails)…

Microsoft HoloLens Skype chat concept video

…and putting them in context. Annotating the world around you. Showing rather than telling.

As designers, we know all too well that you have to get your hands dirty and move around in order to make real progress. At HUMAN, we invent through a design process built on heavy prototyping. We’re experts at untangling the complexity of the world around us.

We don’t want to wait for seeing spaces to arrive. We want to build and use seeing spaces that can help us right now. We want the world around us to be the interface.

That’s where human-computer interfaces need to be tomorrow. But to set things up, let’s do a quick review of how we arrived at the state-of-the-art human-computer interfaces we have today.

In the beginning, there was darkness. Early computers: punchcards, machine code and command lines. The value of integrating these into your life was not exactly obvious to the layman. Practical, but unapproachable.

In 1964, Douglas Engelbart arrived with the first mouse — an attempt to help move computers from the CLI to something more tangible. More usable. The mouse, and essentially every other major component of computers as we know them today, was demoed four years later in what became known as the “Mother of All Demos.” A fire had been lit.

Within a decade, the spill over from Engelbart’s lab crystallized in the Xerox PARC Alto computer, with the first GUI, controlled by the first practical application of the mouse.

Steve Jobs and a team from Apple visit PARC and see the GUI and the mouse. It blows their minds: they suddenly see a way to unlock utility (and unlock people’s wallets). The personal computer revolution begins in earnest.

These input methods, and the common control layer they put on top of the computer’s raw computational power, helped get us to where we are right now: immensely powerful PCs and ever more powerful mobile devices.

In 1989, Tim Berners-Lee invents the World Wide Web. HTML, HTTP and the browser soon follow: the archetype for how things and people will talk to each other. Incidentally, he does it on a NeXT computer — Steve Jobs’s other company.

All of this innovation culminated in the touchscreen smartphone. Direct manipulation of content. Communication from anywhere to anywhere. A supercomputer in the palm of your hand.

The glass rectangle perfected.

But still a glass rectangle, one of a growing sea of smaller and smaller glass rectangles, like the Apple Watch, beginning to dot our bodies in a Personal Area Network (PAN).

As the personal computer, in the form of glass rectangles, crept further and further into everyday life, a different vision of computing was taking root, tucked away in research labs across the world.

Building on Ivan Sutherland’s pioneering work, NASA and others experimented with VR, using devices like “data gloves” to interact with onscreen objects. It obviously demanded lots of cables and machine power. But the intentions were right.

By the time the Berlin Wall had fallen, VR had started to enter the mainstream. Arcades adopted the tech, so VR began to be seen as gaming-centric, and input was simplified to trigger-and-pistol-style event firing. No one should ever forget Nintendo’s entry into the fray with the Virtual Boy. VR HAD ARRIVED.

Of course, VR hadn’t arrived. VR had died. The tech just wasn’t ready.

The world fell out of love with VR. It seemed like a “flying car” technology — something that would forever be just around the corner. Then, a true hero emerged. Palmer Luckey, with his Oculus Rift.

And here we are today. Not just with the Oculus Rift and HTC Vive leading the VR charge, but with plenty of increasingly legitimate hype around the promise of augmented reality platforms like the Microsoft HoloLens and mysterious Magic Leap.

But these are still early days.

The tech still has some barriers to becoming an everyday platform. All three branches of immersive display technology — powered headset VR, phone VR, and AR — suffer from shortcomings.

1. Powered Headsets: tethered and closed off from reality

Oculus Rift, plus hand controllers and a walking rig
Room scale VR with HTC Vive
Playstation VR with hand controllers

When you are trailing a cable and can’t see the real world, it severely limits the number of practical applications for these headsets, no matter how powerful and immersive the experience may be.

2. Mixed Reality / Augmented Reality: timelines and Gorilla Arm

Magic Leap appears to be more AR / mixed reality. No one outside the company has seen it yet, or knows what kind of tethered or untethered headset is being used.

Magic Leap promises “cinematic reality” using advanced techniques that paint light directly onto the user’s retina. The result is virtual objects laid seamlessly onto the real world. This is the promised land of immersive display tech, but nobody has yet publicly seen the headset, and no shipping date or price has been committed to.

Microsoft HoloLens is all about Mixed Reality.

HoloLens is a bit less glamorous, but it will be shipping dev kits in Q1 2016 for $3,000. The HoloLens’s interaction paradigm is Gaze, Gesture, Voice (GGV). It relies heavily on Minority Report-style gestures, driven by holding your arms out in front of you.

Minority Report was a cool movie, but if you’ve ever tried using an interface like this, it’s damned annoying. There’s a name for this: “Gorilla Arm.” And it’s terrible.
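To make the “gaze” half of GGV concrete, here is a minimal sketch (our own illustration, not the HoloLens API) of gaze-based selection: the headset knows where your head is pointing, and the object closest to the centre of gaze — within a small angular tolerance — becomes the selection target.

```python
import math

def gaze_target(head_pos, gaze_dir, objects, max_angle_deg=5.0):
    """Pick the object nearest the centre of gaze.

    head_pos: (x, y, z) head position.
    gaze_dir: direction the head is facing (need not be normalized).
    objects: dict mapping object name -> (x, y, z) position.
    Returns the name of the best candidate within max_angle_deg, else None.
    """
    def normalize(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)

    gaze = normalize(gaze_dir)
    best, best_angle = None, max_angle_deg
    for name, pos in objects.items():
        # Direction from the head to this object.
        to_obj = normalize(tuple(p - h for p, h in zip(pos, head_pos)))
        # Angle between gaze and that direction (dot clamped for acos).
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(gaze, to_obj))))
        angle = math.degrees(math.acos(dot))
        if angle < best_angle:
            best, best_angle = name, angle
    return best
```

In a real GGV loop, this runs every frame: gaze picks the target, then a gesture or voice command fires the action on it — which is exactly why tired arms (Gorilla Arm) only hurt the second half of the pipeline.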

3. PhoneVR: view only

We’re interested in building on top of devices everyone already carries, and using them to power new mixed reality experiences. So we’ve focused on the smartphone as the most practical VR/AR platform available right now. But for all intents and purposes, it’s currently a view-only platform.

GearVR works with Samsung smartphones. The interface is a D-pad on the side of your face that you operate by wiggling a finger.
Google Cardboard works with any smartphone, and has either a magnet (in the older version, pictured here) or a simple switch to interact with.

What makes the smartphone great is direct interaction with objects. That is exactly what smartphone VR is currently missing: touch.

New Technology Creating New Interfaces

While the current cohort of immersive display platforms lacks the right input, some emerging technologies may remedy this.

Google’s Project Jacquard

Google’s Project Jacquard uses a variety of smart fabrics to create quite high-fidelity interactive touch interfaces.

Google’s Project Soli uses a miniature radar system to track small, precise movements of hands and fingers.

Neither is in production yet, and no ship dates have been promised, but both are incredibly promising.

But we need something that works now. And we want something that doesn’t add more cost and complexity to the hardware picture.

Palmer, HUMAN’s VR/AR Interface System

VR/AR input is more of a design problem than an engineering problem.

The challenge for VR/AR input right now is similar to the design problems faced when touchscreen devices first entered the market. The solutions became frameworks, or accepted patterns of interaction, like pinch-and-zoom for images.
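Pinch-and-zoom is a good example of how simple those accepted patterns really are underneath. A minimal sketch (our own illustration, not any platform’s actual gesture API): the zoom factor is just the ratio of the current distance between two fingers to the distance when the gesture began.

```python
import math

def pinch_zoom_scale(p1_start, p2_start, p1_now, p2_now):
    """Scale factor implied by a two-finger pinch gesture.

    Each point is an (x, y) touch coordinate. The scale is the ratio of
    the fingers' current separation to their separation at gesture start.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    start = dist(p1_start, p2_start)
    if start == 0:
        return 1.0  # degenerate gesture: both fingers started together
    return dist(p1_now, p2_now) / start
```

Fingers that start 100px apart and spread to 200px apart yield a scale of 2.0. The hard part of interaction design was never this arithmetic; it was agreeing, as an industry, that spreading two fingers should mean “zoom in” at all. VR/AR input is waiting for its equivalent conventions.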

- No silos: it can’t work with just one type of smartphone.
- Must work across both VR and AR.
- Must involve direct manipulation of content. Touch.
- Must leverage humans’ natural sense of proprioception — the “body sense” you’ve developed from living in your own skin your whole life.

Palmer Demo

Thanks to RED Academy for hosting us, and the great Q&A with the students after the presentation.

HUMAN
Bringing you the parts of the future
that are ready to work today.

We’d love to talk to you: hello@ishuman.co | @IsHumanCo



Written by H U M A N

We improve & create products through prototypes, business design, and interaction research.
