Why Real-world Data is a Gold Rush Hidden in Plain Sight.
--
In late 2018, the world of immersive tech was really picking up. ‘Spatial Computing’ was new territory for businesses, and I was obsessed. I joined an AR company in Cambridge as part of my EngD industrial research placement. They were developing AR experiences for big brands, and I was tasked with using my knowledge of Games Tech and Human-Computer Interaction to improve their AR software.
Side note: I used to tell my family that I’m doing a PhD in Cambridge 😉
What I discovered there changed my life.
At the time, Machine Learning (ML) was the latest buzzword in the commercial world, and for good reason: it could do magical things in narrow tasks like detecting objects. So, we decided to integrate ML into the educational AR app we were building for our biggest client yet.
Little did I know what this meant…
We spent nearly 6 months just training the neural network to recognise a specific object that belonged to our client (without giving away too much, it was a device used for hair styling). As a disclaimer, I wasn’t the one training the network, but I remember the nightmare period. It was like watching someone try to teach a cat how to drive a car. You could maybe teach the cat to place its paws on the steering wheel, but that’s not driving.
In many ways, training the network to recognise this oddly shaped object was like a cat learning to drive. Could we get it to detect the object? Yes. What about when we tilt it? Nope. Sorry-I-don’t-recognise-this-alien-object.
So, what did we do? Well… in the name of science, we threw more and more data at it until it could recognise the object from multiple angles when held by a human hand (the clue was context). This was a big lesson for me. Machines need to be trained in context (the situation in which they are required to perform), otherwise important details get missed.
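If you’ve never been through this, here’s roughly what that “more data” phase looks like. This is a minimal sketch, not the exact pipeline we used back then; the file names and parameters are made up for illustration. It uses off-the-shelf torchvision augmentations to fake extra viewing angles, lighting changes and partial occlusion from a single photo:

```python
# Rough illustration only (not our original pipeline): generate extra training
# views of an object photo with standard torchvision augmentations, to mimic
# the tilted angles and varied conditions the model kept failing on.
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=45),                      # simulate tilting the object
    transforms.RandomPerspective(distortion_scale=0.4, p=0.7),  # simulate off-axis camera angles
    transforms.ColorJitter(brightness=0.3, contrast=0.3),       # simulate lighting changes
    transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),        # simulate partial occlusion, e.g. a hand
])

source = Image.open("styling_device.jpg")  # hypothetical example image
for i in range(20):
    augment(source).save(f"augmented_{i:03d}.jpg")
```

Multiply that across thousands of photos, in every context the object might appear in, and you can see where the months go.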
After spending six long months developing this ML tech demo, we then started building the actual app on top of it. This was crazy to me. How could 50% of our resources go towards a single feature?
The art of recycling.
Many of you artists and designers know how important it is to recycle old material. Why create a new asset from the ground up for every project when you can recycle a little here and there and save a ton of time? In fact, this is an art in itself, and a desirable skill if done correctly.
So, what about advanced software engineering? That was the idea that really struck me. What if you had an existing framework where you didn’t have to spend months training a neural network just to recognise one object? What if there was a framework for the very challenge we were trying to solve in Cambridge?
Don’t mine. Sell pickaxes during a gold rush instead.
In other words, solve one challenge to unlock solutions to many more. What carried over from this period of research was the idea to build a software framework that could give AR apps context awareness. So, I dropped out of my EngD (yes, my parents were disappointed) and decided to start a company with my university friend, Daniel. He and I shared this vision and had the combined experience to really execute it. A couple of weeks in, we were convinced that we needed to build a proof of concept… There was just no other way.
Many part-time side hustles later, Dan and I had saved up enough to bootstrap and really test our idea. By late 2020, we had grown into a team of 5 at Phantom Tech. We held our first public demo at a research seminar at Bournemouth University, and our system was amazing! It was a tool that let developers assign behaviour to most everyday objects without writing a single line of code, which also meant that objects could be detected with zero training effort from developers. This was seriously revolutionary.
The age of context.
What followed this period blew our minds. The world fully entered what we call “the age of context”. So, what does this actually mean? Let me try to illustrate.
In simple terms, mobile apps are becoming more and more aware of our context. Take Uber or delivery platforms, for instance. They use location context to optimise logistics and significantly improve the user experience (imagine having to type in your address every time you ordered a cab; those days are over).
But this is not just about location context. Visual context also plays a huge role. Social media platforms are using computer vision to detect objects in the photos you post. Are you holding a can of Coca-Cola? Do you have a poster of vintage cars in your bedroom? What about your favourite brand? It knows you better than your government does.
Machines are starting to make sense of our physical world, but it’s still difficult for developers to leverage these technologies. And if it stays difficult, only the big companies will retain this power. So how do we democratise context awareness for app developers?
By making real-world data more accessible inside a game engine.
We knew from the very beginning that the best place for developers to build AR apps is the Unity engine. And we knew that our role would be to make real-world data more contextual for those developers.
So, here’s how we thought about our role: imagine squeezing the Earth so that real-world data pours into a funnel, which filters it into a form the Unity engine can work with.
Since that first public demo at Bournemouth University, we have gone all-in and developed a suite of tools for building context-aware AR experiences. We call this platform PhantomEngine. We realised that true context awareness requires the system to do much more than just object recognition: it needs to be aware of the user’s spatial geometry, geo-location, weather, traffic and more.
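To make that a little more concrete, here’s a toy sketch of the kind of “context snapshot” such a system might assemble at any given moment. It is purely illustrative (not the actual PhantomEngine API), and every name and field below is made up:

```python
# Purely illustrative: a toy "context snapshot" bundling the kinds of signals
# a context-aware AR system might combine. This is NOT the PhantomEngine API.
from dataclasses import dataclass, field

@dataclass
class ContextSnapshot:
    latitude: float                    # geo-location
    longitude: float
    weather: str = "unknown"           # e.g. "clear", "rain"
    traffic_level: str = "unknown"     # e.g. "light", "heavy"
    detected_objects: list = field(default_factory=list)  # labels from object recognition
    surface_planes: list = field(default_factory=list)    # simplified spatial geometry

# A hypothetical frame captured during an AR session in Bournemouth
snapshot = ContextSnapshot(
    latitude=50.72, longitude=-1.88,
    weather="rain", traffic_level="light",
    detected_objects=["chair", "coffee cup"],
    surface_planes=["floor", "table"],
)
```

The value is in having an engine keep something like this up to date for you, every frame, instead of each developer stitching those data sources together by hand.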
And here we are… just over 2 years later, PhantomEngine has entered Early Access.
What is PhantomEngine?
It’s basically the best pickaxe for obtaining real-world data and building apps with that data. Why does this matter? We believe this platform will revolutionise experiences that interface with our physical world. It will unlock greater potential for creators by saving them a lot of time and engineering headaches. Because it’s better to build apps for the real world — to improve our lives, reshape outdated industries and reconnect us with our physical world. Our mission is to make real-world data more contextual for AR, and eventually to provide this as a no-code solution.
Follow us or sign up for Early Access: