Developers Are Already Making Great AR Experiences Using Apple's ARKit
There's no way you can talk about ARKit without showing off the measuring-tape apps.
Wow - how does it do that?! Image recognition techniques alone (as far as I know) couldn't do that, right? So how does it know how far away something is?
My guess is they estimate one or several dominant scene planes from the sparse triangulated feature points and get scale by incorporating accelerometer measurements.
You only need two camera angles and information about the camera's lens distortion to make a very good estimate. Look up "3D reconstruction".
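A toy sketch of the principle the parent comments describe (this is not ARKit's actual pipeline, which fuses many frames with IMU data, and the numbers below are purely illustrative): once you know the translation between two camera positions, metric depth falls out of the pinhole relation from how far a feature shifts between the two views.

```python
# Toy illustration: depth from parallax between two camera positions.
# With a known translation (baseline) between views -- which a phone
# can estimate from accelerometer/IMU integration -- depth follows
# directly from how far a feature shifts in the image (its disparity).

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic pinhole relation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("feature must shift between the two views")
    return focal_px * baseline_m / disparity_px

# A feature that shifts 50 px between two views taken 10 cm apart,
# through a lens with a focal length of 1000 px:
z = depth_from_disparity(focal_px=1000.0, baseline_m=0.10, disparity_px=50.0)
print(f"estimated depth: {z:.2f} m")  # 1000 * 0.10 / 50 = 2.00 m
```

This is why the accelerometer matters: triangulation alone recovers geometry only up to an unknown scale factor, and the IMU-estimated baseline is what pins that scale down to real-world meters.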
That's pretty cool. I almost wonder if someone could use this type of tech for a cheap way of mapping the inside of structures/homes. Something like an ARKit -> Sketchup service. That or a good way to monitor job-sites for correctness, remotely.
It's already possible to do this in a basic sense with Canvas (uses Structure sensor) and floorplans have been done since 2014 with RoomScan app.
What Apple and these apps don't want to say is that they absolutely can't be used for verifying correctness; the calibration and accuracy are just not that good.
Shameless plug: At DotProduct (www.dotproduct3d.com) we have pioneered this. We've been selling our solution since summer 2013.
Yeah, even though consumer AR (= overlaying graphics on a picture of the world) hasn't really taken off, it is being adopted by industries that build large things, like buildings and ships.
I've seen several very creative demos, but I'm still struggling to see practical applications. It seems like massive, everyday adoption of this kind of AR is going to have to wait for dedicated glasses.
Maybe I just lack imagination.
Maybe point it at a sports game for stats and game data?
Point it down the street to have great restaurants highlighted?
Point it at the sky to get astronomical facts?
Point it at tourist landmarks to find out about them?
Point it anywhere to see where athletes/runners/cyclists have run and in what time?
Visualise all historical car accidents at a given location?
Visualise all hidden infrastructure, underground gas lines etc?
See history overlaid on the present?
Visualise public transport routes?
Show "upcoming events at this venue" as you pass theatres, cinemas, stadiums?
> Point it down the street to have great restaurants highlighted?
On Android early on (before 2.3!), Yelp called it Lens and had a monocle icon. Google Maps and Nokia apps also had it at some point, I think. And no better implementation will ever make it less useless.
It was an utterly useless gimmick. Everyone tried it exactly once.
It's actually coming back in Android O as "Google Lens". [1] Highlights include identifying trees/flowers in view, automatically connecting to Wifi by pointing at the sticker on a router, and pulling up a business listing and ratings by pointing at a building.
Apparently it'll be a part of Google Camera, but also baked into the Assistant so you can take pictures within the context of your conversation to "show" the assistant things, clarify what you're talking about, or ask for more information about something unknown.
Can also apparently integrate with your other Google services, with one example given as pointing your phone at a theater marquee, it pulling up ticket prices for you, and you saying "add this to my calendar".
[1] https://techcrunch.com/2017/05/17/google-lens-will-let-smart...
It was on one of the early versions of the Yelp iOS app as well. Don't really remember if it was there on day one, but I remember playing around with it in iOS 2 or 3.
It was called Monocle, and it's still there, under More (on the version of the app I have installed).
The ones I tried in olden times just used GPS and orientation to overlay geodata; camera input wasn't used.
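A rough sketch of how those GPS + compass overlays worked (the function names and numbers here are mine, purely illustrative): compute the bearing from the user to a point of interest, then map the difference from the device's compass heading onto a horizontal screen position. No computer vision involved.

```python
import math

# Toy sketch of early GPS-based "AR": no computer vision, just GPS +
# compass. Compute the bearing from the user to a POI and map the
# offset from the device heading onto screen x-coordinates.

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def screen_x(poi_bearing, device_heading, fov_deg=60, screen_w=1080):
    """Horizontal pixel position of the POI marker, or None if off-screen."""
    # Signed angular offset in (-180, 180]
    delta = (poi_bearing - device_heading + 180) % 360 - 180
    if abs(delta) > fov_deg / 2:
        return None
    return screen_w / 2 + delta / (fov_deg / 2) * (screen_w / 2)

# POI due east of the user, device pointing east: marker lands mid-screen.
b = bearing_deg(0.0, 0.0, 0.0, 0.001)
print(screen_x(b, device_heading=90.0))  # marker near screen center (~540 px)
```

This also explains why those overlays felt so floaty: GPS error of a few meters and compass jitter of a few degrees move the marker around constantly, whereas vision-based tracking like ARKit's locks the overlay to actual image features.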
Mask advertisements and replace them with beautiful paintings.
An ad-blocker for the physical world? Hmm..
Will employees of the city 'pop up' in your path as you walk down the street: "Hey! I see you're using an ad blocker! The city relies on advertisements to fund the street you're walking on right now, please consider turning it off!"
Your AR device will replace those employees with beautiful dancing girls, of course.
Obligatory link to Black Mirror-inspired hackathon project "Brand Killer": http://jonathandub.in/cognizance/
Can't you do all that without the pointless waving, faster?
Having said all that I'm sure it will go the way of the Apple Watch.
> Having said all that I'm sure it will go the way of the Apple Watch.
A popular and growing technology that gets more useful with each release?
In bizarro world, sure.
I'm actually expecting this tech to be the "bring contextual information V2" of things.
Example: visiting a museum. I want context around the displayed works to better grasp the key concepts.
How it works today:
- Guided tour: needs time management (when does it start?), plus paying a human being (what language?), plus keeping up with the group's pace
- Audio guide: better, but lacks visual explanation. Input (which track should I play for this work?) requires some visual encoding (title, number, QR code)
- Reading the explanation next to the actual work: lots of reading, not suited to different types of visitors
- AR: Could use the actual work as a form of encoding to deliver the contextual information. Can adapt to the amount of data and depth the visitor wants to have.
Does that make sense ?
> AR: Could use the actual work as a form of encoding to deliver the contextual information.
Scott Naismith has been doing this with ActivCanvas - https://www.instagram.com/p/BMFLhHThCy8/
Although I don't know if ARKit has "replace picture X with picture/video Y" functionality?
I could see it for some gaming niches, like LARPers (live-action roleplay). See those fireballs tossed out. Show off your new +1 chainmail you looted, etc. Probably free to play with microtransactions for new loot and effects.
One game idea we kicked around was first person AR bomberman. All you need is a sufficiently-sized field for the map (and your AR device of course).
All of this has happened before with 'AR' for television. The eventual niche was just a few election/budget politics shows rather than day-to-day broadcasting. A few things that were hard to do with 'AR' for television - getting feet to stay consistently on the ground in the one place and not float or drift with camera moves. The reality of solving this was to use a close up shot where the feet were not shown or to pan very carefully with a camera on a tripod.
It is these finer technical points that stop the technology from getting very far and prevent you from doing really cool stuff with 'AR'. This of course does not matter if it is some effect on Snapchat; however, if you really are trying to make someone appear as if they are 'walking on the moon', then sliding feet make the result look like bad chromakeying, not the 'AR' desired.
Vanity: this will allow for lots of vanity accessories that only AR can provide. Virtual pets, decorations, virtual clothes. All that is missing is good people tracking.
The emperor's clothes might just become a story about a boy's defunct glasses.
Look around a car before booking a test drive? https://twitter.com/JelmerVerhoog/status/881237798623293440/...
And here's one I made to actually have a test drive as well: https://youtu.be/LKgJk7DdyuY
I sense this is more of a roadtest for technologies that are bound for a new product line.
I have been coding on a laptop, which is kind of bad for the back. AR could provide a 360-degree display so we wouldn't have to switch and toggle our workflow hundreds of times a day.
Yes, the world just changed and you don’t realize it. AR is going to be huge, even with a phone.
X-ray vision anyone?
Some of these are genuinely brilliant - I love the video of "testing" a cushion on your own couch before you buy.
This only works to add to the environment. The problem with the AR approach for things like "what would this couch look like in my living room" is you can't get rid of the existing couch. That's still a long way away.
Is it? Maybe some 3D content aware fill + deep learning and it's not so far away?
I can foresee a great many stubbed toes in the future with this technology
Yes, the biggest problem for me and my wife has been visualizing how the new furniture we are buying will look inside our house. AR can solve a lot of those problems to some extent.
I can see benefits in the interior paint industry, and the flooring industries too. Being able to 'preview' tens of thousands of dollars worth of tiles (for instance) would be a real benefit, especially if you operated an interior design company etc.
I have the same problem with a shed; how would it look in the space I've got?
Very cool tech but the phone as viewport seems like a fairly large step backwards in my opinion.
Depends on the application. For observing especially large objects[1][2], I agree, it would be very hard to use for anything beyond a quick demo. But on the other hand, I think the phone is a perfect viewport for on-the-fly smaller applications, like an AR measuring tape[3], adding info about artwork in a museum, displaying a virtual prototype of a dish on a restaurant's menu[4], translating street signs on the fly[5], etc. The phone is perfect for applications where you don't want to plan ahead or be constantly carrying an extra piece of equipment around.
1: https://twitter.com/madewithARKit/status/880815805281300480 2: https://twitter.com/madewithARKit/status/880056901987254272 3: https://www.youtube.com/watch?v=z7DYC_zbZCM 4: https://twitter.com/madewithARKit/status/880744158423658497 5: http://newatlas.com/google-translate-update/35605/
Seems Apple solved the monocular SLAM problem before attempting VR.
I wonder if they used an FPGA or anything to speed up the algorithm? In my experience, SLAM takes too much computation power and/or generates too much heat.
Basic monocular SLAM has been solved for a while on mobile devices. ARKit, Facebook, and Magic Leap are just very careful to show only demos where the SLAM works well for that case.
Detailed monocular SLAM needs a lot of CPU, but that is changing...
This is awesome! I can't wait to start playing around with this :)
Took me a while to realize that the two guys playing basketball were actually virtual.
Has anyone played with ARKit for development yet? It would be great if you could share your experience. I have made a couple of games in Unity and want to dive into mixed-reality applications.
Looks like someone has already made a Unity ARKit plugin.
Just go through the official sample app and documentation at https://developer.apple.com/arkit/. If you have worked with Unity, then working with ARKit will be an easy transition for you.