Show HN: ARCharts – Augmented Reality Charts for iOS
github.com

Graph libraries for VR (where the system has total knowledge of the scene contents) and for more powerful AR devices are definitely needed in tons of industries.
That said, as someone who spends nearly every waking hour coding for VR and AR, I find it hard to believe ARKit graphing applications are anything but throwaway gimmicks, because ARKit's lack of semantic knowledge about the local environment means there is no way for it to attach graphs to contextually meaningful world points without significant user input.
Without contextual placement, ARKit graphs are just a way to force the user to expend metabolic energy moving their phone around to view things that can be, and have been, presented as well or better on traditional displays for decades. Sure, it's cool the first time you see one, but ARKit is not how you're going to want to view your SEO conversion data, regardless of how fun it was the first time you imagined it.
I recommend moving this library into Unity and starting to establish it as the way to do graphs on systems that can have semantic knowledge about the environment.
Are you saying ARKit cannot identify a car if I want to chart its mileage over it?
Correct. ARKit can keep the chart fixed above the car if you put it there, but it can't yet identify it as a car nor can it distinguish your car from your partner's car, and if you close the app and leave the lot and come back, you're going to need to place the chart over your car again in almost any realistic scenario. The best you get at present is "here is the floor" and "there is a wall", neither of which help the app provide contextual relevance for you. This doesn't make for terribly compelling charting applications beyond the first 15 seconds of "that's so cool it totally works" (which is a pretty cool 15 seconds).
The HoloLens will likely be able to keep the chart over your car if you leave your garage and come back, but not if you move your car and probably not if you park your car outside in the sun (which swamps the spatial projectors and prevents proper environment scanning), and you still need to have manually placed it over your car in the first place.
I would suspect CoreML would play an important role in scene identification, which could certainly be used in conjunction with ARKit, don't you think?
Microsoft is investing a lot in computer vision, so I'll be interested to see what changes they make to the next HoloLens' vision capabilities. Plane finding seems almost as good in ARKit as it was on HoloLens when I was developing on it (though environmental understanding is limited to non-existent in ARKit, which, as you mentioned, is a big part of the equation).
Fortunately, plane finding is pretty easy even with something as simple as RANSAC[0]. I'm sure there are better algorithms now, but RANSAC has been around forever and is nearly trivial to implement. It just doesn't easily generalize to "Siri, find my car".
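To make the "nearly trivial" claim concrete, here is a minimal sketch of RANSAC plane fitting on a raw point cloud, in plain NumPy. All parameter values (iteration count, inlier threshold) and the synthetic test cloud are illustrative, not tied to any particular AR framework.

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.01, rng=None):
    """Fit a plane to a 3-D point cloud with RANSAC.

    Returns (normal, d, inlier_mask) for the plane n.x + d = 0
    supported by the most inliers.
    """
    rng = rng or np.random.default_rng(0)
    best_inliers, best_plane = None, None
    for _ in range(n_iters):
        # Sample 3 distinct points and derive the plane they span.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p0)
        # Score: points within `threshold` of the plane are inliers.
        dist = np.abs(points @ normal + d)
        inliers = dist < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane[0], best_plane[1], best_inliers

# Synthetic scene: a noisy "floor" at z ~ 0 plus random clutter above it.
rng = np.random.default_rng(1)
floor = np.column_stack([rng.uniform(-1, 1, (500, 2)),
                         rng.normal(0.0, 0.003, 500)])
clutter = rng.uniform(-1, 1, (100, 3)) + [0, 0, 1]
cloud = np.vstack([floor, clutter])

normal, d, inliers = ransac_plane(cloud)
# `normal` should come out close to (0, 0, ±1), and the inlier mask
# should pick out essentially all of the floor points.
```

The fitted model only tells you "there is a dominant plane here", which is exactly the point of the thread: turning that plane into "the floor of my garage, next to my car" requires semantic understanding this kind of geometry alone can't supply.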
Have an email? Would like to connect regarding ARKit
And finally, 3D x-y-z bar charts make at least a little bit of sense when you can walk around them.
Definitely, but you have to look at them through your phone, so I'm afraid it gets tiresome quickly. It's fish out the phone + raise it + align it + walk around the graph, versus sitting down and flipping the charts around by swiping the screen. I agree with the current top poster: it's a gimmick.
It reminds me of the narrow visual field of the HoloLens I tried months ago. I spent half the time turning around and moving my head up and down looking for objects. Once I found them they were pleasantly steady, but the experience doesn't compare with the visual field we're born with.
Those ARKit charts look very steady too. I assume fixing a virtual object in space is a solved problem by now, and it's not an easy one.
I hope we will get there eventually... this, in lightweight glasses with a great field of view.