Hands on with Apple’s ARKit

Over the last few months we’ve been experimenting with Apple’s new ARKit framework within Unity3D. In this post I’m going to share the trials and tribulations of that experimentation, as well as some technical insight into how we built our VisitAR prototype.

We initially set out with the main goal of demonstrating viable commercial uses for the ARKit technology. If you’re anything like us, I’m sure you also love the idea of seeing a giant Pikachu in your living room; however, I think we’d all agree that its uses are somewhat limited. We want to show that AR can be used to subtly improve our everyday lives by delivering information in unique and exciting ways (see our news post).

The idea for VisitAR initially came about from one of our “Lab Projects”, small internal challenges we take on to inspire the team to come up with something innovative. Many of our clients would agree that our studio can be slightly difficult to locate, especially for a first-time visitor. Although Google Maps is a great way to navigate, it often only leads users to a general area or street; the real problem is understanding and spatial awareness once you get there, which is especially true in built-up areas like cities. With this problem in mind we set out to find a solution.

Within VisitAR we wanted to augment the cityscape around the user’s feet, almost as if they were a giant stood within a model village. By enabling the user to get close and explore the area, we can offer far better insight into the area and its geometry, ultimately helping them understand where our building is and how to get to it. Although our prototype is centred around getting to us, we can see how this same idea could be applied in endless ways, whether helping users navigate a shopping centre or simply find a ward in a hospital. We feel that the concept behind VisitAR has huge potential.

So, empowered and full of great ideas, we set out to develop the prototype. This process was full of learning, some of which I am going to share in this post. We started out by gathering data about the local area: using a shapefile from OpenStreetMap we were able to get the footprints of all the buildings in central Manchester. We then gathered data about the building heights from Emu Analytics. Using this information our 3D artist set about creating a fairly low-detail model.

The advantage of using the shapefile data to create the model was that it gave us accuracy when geolocating the user. If we were to cover a much larger area we would have opted to automatically extrude the buildings based on both the building height and shapefile data; however, for this prototype we wanted to add slightly more detail by hand.
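For a flavour of what that automatic approach might look like, here is a minimal C# sketch that extrudes a footprint polygon into walls. The names and structure are our own illustration rather than the code behind our model, and roof capping is omitted since it needs proper polygon triangulation.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical helper: extrudes a building footprint (an ordered loop of
// 2D points, in metres) into a simple wall mesh of the given height.
public static class BuildingExtruder
{
    public static Mesh ExtrudeWalls(IList<Vector2> footprint, float height)
    {
        var vertices = new List<Vector3>();
        var triangles = new List<int>();

        for (int i = 0; i < footprint.Count; i++)
        {
            Vector2 a = footprint[i];
            Vector2 b = footprint[(i + 1) % footprint.Count]; // wrap to close the loop

            int baseIndex = vertices.Count;
            vertices.Add(new Vector3(a.x, 0f, a.y));      // bottom of edge start
            vertices.Add(new Vector3(b.x, 0f, b.y));      // bottom of edge end
            vertices.Add(new Vector3(a.x, height, a.y));  // top of edge start
            vertices.Add(new Vector3(b.x, height, b.y));  // top of edge end

            // Two triangles per wall quad. This assumes the footprint is wound
            // clockwise when viewed from above; reverse the index order if the
            // faces come out inverted.
            triangles.AddRange(new[] { baseIndex, baseIndex + 2, baseIndex + 1 });
            triangles.AddRange(new[] { baseIndex + 1, baseIndex + 2, baseIndex + 3 });
        }

        var mesh = new Mesh();
        mesh.SetVertices(vertices);
        mesh.SetTriangles(triangles, 0);
        mesh.RecalculateNormals();
        return mesh;
    }
}
```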

The first challenge we hit was achieving the correct sense of scale. ARKit creates virtual surfaces, delivered as “anchors”, by sampling the point cloud data it has gathered. These anchors are represented as planes and can be detected on a vertical or horizontal axis. The scale and distance of these planes are approximated and, as we found, can differ slightly depending on the conditions under which the point cloud data is collected; the more data gathered, the better these approximations become.
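For reference, this is roughly how those plane anchors surface in code when using the Unity ARKit plugin. The event and field names below match our recollection of the plugin’s API, so treat them as an assumption and check them against the plugin version you’re using.

```csharp
using UnityEngine;
using UnityEngine.XR.iOS; // Unity ARKit plugin namespace (varies by plugin version)

// Minimal sketch of listening for plane anchors with the Unity ARKit plugin.
public class PlaneAnchorLogger : MonoBehaviour
{
    void OnEnable()
    {
        UnityARSessionNativeInterface.ARAnchorAddedEvent += OnAnchorAdded;
        UnityARSessionNativeInterface.ARAnchorUpdatedEvent += OnAnchorUpdated;
    }

    void OnDisable()
    {
        UnityARSessionNativeInterface.ARAnchorAddedEvent -= OnAnchorAdded;
        UnityARSessionNativeInterface.ARAnchorUpdatedEvent -= OnAnchorUpdated;
    }

    void OnAnchorAdded(ARPlaneAnchor anchor)
    {
        // A freshly detected plane; its extent is only a rough approximation.
        Debug.Log("Plane " + anchor.identifier + " detected, extent: " + anchor.extent);
    }

    void OnAnchorUpdated(ARPlaneAnchor anchor)
    {
        // The extent is refined as ARKit gathers more point cloud data.
        Debug.Log("Plane " + anchor.identifier + " refined, extent: " + anchor.extent);
    }
}
```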

Within Unity3D, 1 unit is equal to 1 metre, so a 1 metre cube appears life-size, and therefore very large, within the augmented world. Typically in a Unity scene we aim to have 1 metre as the smallest physical size, because surface lighting and shadow calculations can become unstable at smaller scales due to floating point errors. When we came to try our city scene within ARKit, we found that the model-village scale we were aiming for felt more like being stood on the city street itself.

The solution was to reproject the spatial coordinates at a larger scale. This could be achieved by transforming both the incoming camera position data from ARKit and the scene itself; by doing this, the composited result would still match the real-world camera input but appear at the desired scale.
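Here is a minimal sketch of that idea, with illustrative names of our own rather than our exact production code. Instead of shrinking the city model, the camera’s reported translation is multiplied up, which makes the scene appear at 1/scale size while everything stays modelled in comfortable metre units.

```csharp
using UnityEngine;

// Illustrative sketch of scale reprojection: the ARKit-driven camera pose is
// multiplied by a world scale factor, so a city modelled in metres appears
// at model-village scale in the composited image.
public class ScaledARCamera : MonoBehaviour
{
    public Camera arCamera;         // the camera this script drives
    public float worldScale = 50f;  // scene appears 50x smaller than life-size

    // Called each frame with the pose reported by ARKit (however you obtain it).
    public void ApplyPose(Vector3 arkitPosition, Quaternion arkitRotation)
    {
        // Scaling the camera's translation is equivalent to shrinking the
        // scene: moving 1 real metre moves the camera 50 scene metres, so
        // the scene appears 50x smaller.
        arCamera.transform.position = arkitPosition * worldScale;
        arCamera.transform.rotation = arkitRotation; // rotation is unaffected by scale
    }
}
```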

Another development challenge we experienced was the speed at which we were able to iterate. Typically when working with mobile apps we use phone simulators or Unity Remote to test and iterate quickly; however, the ARKit functionality relies on a number of the native device sensors to gather its data, so we had to test on the actual device.

Part way through development we found that Unity had released ARKit Remote, source code intended to enable testing directly within the Unity Editor. Unfortunately, the slow frame rate and the lack of touch events being passed through meant it was not a viable solution for us. It did, however, help us understand how the camera is affected by the ARKit data.

We still feel there is a lot more we can do with ARKit and ways we can push the technology further. Putting AR-capable devices in the hands of a much wider audience than ever before will present some interesting technical and user experience challenges, but it also presents some truly exciting opportunities. We think the wave of apps letting users place characters into their homes is super fun, but we also think there are loads of real-world applications for AR that will subtly enhance our daily lives.