The Glimmer

Well, it’s been an interesting week. I’ve watched a device get announced that I knew was a long time coming, and I also reconnected with old colleagues and peers who found it timely to chat about these things. With this little piece, I thought I’d give you some insight into my past, in the context of the equal parts psychological bane and tech love of my life: Spatial Computing. So let’s go back to 2014…

I was living at that time in Vancouver, Canada and I’d been playing around with the Oculus DK1 and the newly-released DK2. I was experimenting with a duct-taped Leap Motion on the front for hand gesture recognition (later I got the Leap Motion mount that made it a little more elegant). This was a lot of fun, and some of this experimentation was done while at Global Mechanic (an awesome creative agency that really encouraged me to play with this new tech). But I wanted to go further - I started to become obsessed with the idea that there was a potential ‘new way’ of using your computer - taking it from sitting in your pocket, or on a desk, to looking through it at the world.

In 2015 I spun out and formed a new company - HUMAN. This was with a couple of friends who were also interested in ‘what will come next’. We called it HUMAN because we didn’t want to obsess over the tech itself - it would simply be a conduit for a very humanistic relationship between us and these ‘enablers’. Check out an early thought piece here. We ran some projects with external clients to keep the lights on, but internally we were focused on building out an iOS-powered AR headset - we called this ‘Scale’. It consisted of a heavily-modified Zeiss VR headset with the front shield/visor ripped off to allow for a mounted iPhone 6 with an attached 120-degree fisheye lens to see the world. Development was done in Unity, with Vuforia for tracking (…way before ARKit).

We built a fully functional demo (called Palmer - your hand - get it?) - you could put the headset on, look at your gloved hand (with a special marker that Vuforia would track) and see a full touchable interface to load 3D objects, interact with a basic messenger app, or draw in space. We ended up building two prototypes that could connect to each other on the same Wi-Fi network, so two people could see each other and share the same view of an object. We were invited to demo this at MIT and the Microsoft Cambridge campus. We felt like pioneers. But we also felt completely misunderstood - beyond the ‘cool demo!’ responses from everyone who tried it out, there was a lack of belief that Spatial Computing could eventually become a thing. In my head I saw beyond the current technical limitations and tried to explain how this could transform our understanding of the built environment - querying the real world, using ML to predict interactions and behaviours - but most people were just interested in trying out cool games in VR. Scale was eventually mothballed in 2016, after we failed to secure investment from the various angels we approached, and that also marked the beginning of the end of HUMAN as a company.

After a brief time running product at Archiact in Vancouver, I was approached by Meta - no, not that Meta, but the original AR pioneer Meta. I met with CEO Meron Gribetz and we quickly realized that we were (almost) entirely on the same page when it came to the future of Spatial Computing. I was offered the joint role of Senior Director of both Product and UX, reporting to Meron, and was to drive AR design for the Meta 2 headset and the forthcoming (and never released) Meta 3 headset. In the early days, Meta was truly an inspirational and exciting place - there was a raw energy (that sometimes boiled over) and a belief that we were truly the only spatial computing company that wanted to protect our users from privacy and security concerns, make our relationship with technology more symbiotic, and also make it more social - one of our core HW requirements was being able to see each other’s eyes while wearing the headset. We wrote principles of spatial interaction, and we won ‘Best of Show’ at AWE in 2017 for combining an iPhone with the headset (you could drag things out of the phone screen into physical space and back again). I was going around giving talks on Designing for Mixed Reality (which was also a book I wrote for O’Reilly in 2015). I felt like this was it. That finally, it was really happening - the move toward Spatial Computing was starting to be understood!

And then it was over. In 2019 Meta collapsed after failing to close its Series D funding.

In the days after the collapse I got a message from someone at Google. Not a recruiter, but an actual employee. I was shocked and impressed at the directness. They wanted to know if I’d be interested in joining and driving some of the stealth AR efforts. Sure - why not? I knew Google had been early with Glass, and also with Cardboard - so I also knew that they were capable. I joined in 2019 and immediately realized that I was surrounded by some of the most talented, driven, knowledgeable luminaries from decades of AR/VR work. I’m still good friends with most of that group. It was intoxicating because anything was possible. I also got to see some of the most advanced hardware prototypes I’ve seen to date. Really. So we had the tools, we had the team, and we had the talent. What could possibly go wrong?

I worked on a special project that was developing an advanced AR OS. I’d brought all my prior learnings from Scale and Meta, and we set about building functional prototypes. While I can’t go into detail, I will say that period was the most inspiring spatial computing work I’d ever done. I could *see* and *feel* the future. It was right there. But as time went on, executives got twitchy, as they tend to do - a lot of R&D dollars were being spent, and there was no ‘product’ to launch yet. It would be years in the making (fun fact - back in 2019 we set a launch date of 2023 to realistically get the tech optimized to the point where the device would last more than 30 minutes). We also actively discussed privacy issues - the perception of a ‘camera on your face’ (Google still had the ‘ghost of Glass’ wandering the building, making execs fearful of cameras in general) - but even with these challenges, the positive energy just kept all projects running full steam ahead. Sigh… you know what I’m about to say…

And then it was over. The most advanced AR project was wound down. Google had decided to go back toward a Glass-like form factor and approach. The pandemic was at full tilt, and once again the potential spatial future came crashing down in front of me. That was an emotional time for me - a lot of energy had been spent, and to see defeat snatched from the jaws of victory AGAIN… well… I needed a break!

These experiences have been quite a rollercoaster - there really is such a thing as being ‘too early’ (that’s the ‘bane’ part I mentioned earlier), and there is an equally important thing called ‘long-term commitment’. Most companies simply don’t have the stomach for Spatial Computing development - it’s extremely costly on the HW side (it really is - that’s why the Apple Vision Pro price doesn’t shock me one bit), and beyond that, it’s also not an ‘obvious win’ when it launches. This is a long-tail paradigm shift. There is no ‘killer app’ (there never really was) - I detest the basicness of that statement. The one thing I have always said (and have kept saying) is that a Spatial Computing device - at launch - needs to do almost everything your other devices do, so you don’t need to take it off. That one statement is the project-killer for nearly all other companies - You mean we have to do everything? Yes. Sorry, but yes.

Over the last decade it’s been a series of glimmers of the future. Now, with Apple entering the space, those glimmers will turn into a strong and clear vision of how we will compute going forward.