Blue Vision, which builds collaborative AR, leaves stealth with $14.5M led by GV
techcrunch.com

I can't wait for the integration with HMDs!
It'll be so useful to see contextual advertising near all of my favorite shops with the great deals that I love -- all without having to clip coupons! It'll be great when this is paired with facial recognition, so that I can see just the ads that I need for my personal situation at that moment. Maybe they'll even give me a better deal since I'm such a frequent customer!
Two thumbs up!
I totally don't get how this could be useful without smart glasses. Most people already spend too much time staring at their phones, and with this crap it will be even worse. Such solutions already existed years ago and weren't adopted because phones were too weak and the software was unusable. Nothing has changed except that we have better phones; the rest is still crap.
I'm just waiting for someone to fall under a car while collecting gold circles...
EDIT: Don't get me wrong, AR has huge potential, but not for flying emojis or playing RPGs while running all over the city like a maniac with a huge phone.
> Most people already spend too much time staring at their phones
Compared to what? This sounds like something a closed-minded person would say. My grandmother says "I can't believe you stare at screens all day". Today, people my age don't even question it.
Phones years ago were indeed too weak, and the software was pretty unusable, but now the hardware is significantly better: you can push a 100,000-poly model in real time over 30 fps video and it's smooth. Have you used the BBC app that lets you walk around the mummy's tomb? It's smooth and gorgeous and really fun.
I'd have to say their promo video was pretty uninspiring; I'd like to have seen a spatially aware map-type demo, some higher-poly models, or a simple RPG. The floating 2D emoji sprites don't do the potential justice, but hey, they're just starting out!
> Compared to what?
For instance, compared to talking to each other. Haven't you noticed that at every bar? Snapchat or Facebook Messenger are now the main channels of communication, even for people sitting side by side. But maybe I'm closed-minded and don't see the bigger picture.
Yes, you are, because you seem to put everything under the common label of "staring at screens" :).
You look around the bar and see lots of people staring at screens. But were you to look closer, peeking over everyone's shoulder, you'd see that the group at that table is skimming the project document they met to talk about. That couple over there just took out their phones to skim the news, in the first free moment they've had all day. That girl next to the entrance is ordering a taxi. That awkward-looking guy fiddling with his phone just got an important message from his SO, and as tactless as it may be among friends, he doesn't feel comfortable not replying immediately. Etc.
The range of activities we do on phones is so big that you can't just bundle them together. Many of those activities become social objects in themselves, i.e. things we start talking about or doing together.
> Yes, you are, because you seem to put everything under the common label of "staring at screens" :).
I'm not, and I'm not exaggerating. I don't mind if someone commutes for an hour and a half to work and reads a book, skims the news, or prepares for a work meeting. That's not the point here.
It just hurts me to see people who evidently met to spend time together go 40 minutes without exchanging a word because they're scrolling through Instagram or Facebook the whole time. I witness situations like that constantly.
But this isn't the thread for that. Anyway, I think AR deserves better times and much more convenient hardware than a phone held in the hand.
I don't go to bars, I'm too busy staring at screens ;). Haha!
This is the app I was talking about: http://www.bbc.co.uk/mediacentre/latestnews/2018/civilisatio... . Give it a try, it's cool!
But you know... you don't bring your computer to drink with friends. ;-)
The BBC app is awesome. It could be way more awesome with smart glasses, but we don't have them yet. I'm waiting for that day... :)
I kind of use that “would I pull out my laptop” question as a guide to mobile phone politeness: in any social context, if it would be rude to pull out a laptop and start typing, then it's also rude to pull out your phone and use it. A rule of thumb; there may be exceptions.
I see it as spreading the conversation to more people than are around the table. Plenty of conversation appears to be happening too. But that could be a cultural thing.
"Running all over the city" sounds like a rather good thing wrt health.
And why would anyone be bothered by others seeing flying emojis if they chose to?
I'm not bothered by it; it's just that, to me, it's a silly and pointless use of great technology.
They must be seeing something I don’t, if you pardon my pun. I don’t see “AR collaboration” as even remotely viable outside of a few very specific niches, and will remain skeptical of this whole AR thing until I see a demo. The only spaces I’m aware of where AR/VR collaboration is currently viable all deal with the physical world to one extent or another: real estate sales, furniture sales, construction, that sort of thing. I could get behind “Kingsman” style teleconferencing if it works spectacularly well. Other than that, I’m drawing a blank.
Any idea how it works behind the scenes? From some of the details, I assume the mapping is GPS + sensors + something like Photosynth or this: https://www.popsci.com/gear-amp-gadgets/article/2009-09/buil...
Pretty much that.
They have gone round and taken video of every street in a certain area, unpacked it, extracted salient points, and reconstructed those points into a 3D map.
From that, given any 2D image you should be able to extract a bunch of "salient points", or known points, which, from their relationship to each other, tell you where the camera is and what direction it's pointing.
The two hard parts are 1) collecting the data and 2) searching the data in a reasonable amount of time.
You can see it here in their demo:
https://youtu.be/tXwVg2S9wuY?t=60
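To make that second step concrete, here's a rough sketch of the localization math in Python/OpenCV (my own illustration, not their code; the inputs are random placeholders). Given 2D keypoints in a query image that have already been matched to known 3D map points, a RANSAC PnP solver recovers the camera pose:

    import numpy as np
    import cv2

    # Placeholder inputs: N matched pairs (3D map point <-> 2D pixel coords)
    object_points = np.random.rand(20, 3).astype(np.float32)         # 3D points, map frame
    image_points = (np.random.rand(20, 2) * 480).astype(np.float32)  # 2D pixel coords

    # Camera intrinsics -- made-up values for fx, fy, cx, cy
    K = np.array([[525.0,   0.0, 320.0],
                  [  0.0, 525.0, 240.0],
                  [  0.0,   0.0,   1.0]], dtype=np.float32)

    # PnP inside RANSAC, so a fraction of bad matches doesn't ruin the pose
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_points, image_points, K, None)
    if ok:
        R, _ = cv2.Rodrigues(rvec)   # rotation matrix from the Rodrigues vector
        cam_pos = -R.T @ tvec        # camera position in the map frame
        print("camera at:", cam_pos.ravel(), "facing:", R[2, :])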
the "salient points" are called keypoints and their feature vectors are called descriptors
https://en.wikipedia.org/wiki/Scale-invariant_feature_transf...
you are correct that the challenge is collecting and indexing/retrieving, but there have been techniques that do this for a while:
https://web.eecs.umich.edu/~michjc/papers/p144-park.pdf
(they even tested against SIFT descriptors)
the real thing that i'm puzzled by with blue vision is how they're registering against ARKit descriptors (if they are at all) since apple doesn't expose them in the ARKit api (only the point cloud itself). ARCore used to expose them (https://stackoverflow.com/a/29012790) but i don't think it does anymore. they must be doing the registration because they only support devices that are running ARKit/ARCore (and without it they would just have built a SLAM system - albeit backed by an "arcloud" - that sits beside ARKit/ARCore and would most likely be inferior).
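for concreteness, the classic detect/describe/match pipeline mentioned above looks roughly like this in OpenCV with SIFT + a FLANN index (a generic sketch, not blue vision's actual code; the file names are made up):

    import cv2

    img_query = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)    # hypothetical files
    img_map = cv2.imread("map_frame.jpg", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp_q, des_q = sift.detectAndCompute(img_query, None)  # keypoints + 128-d descriptors
    kp_m, des_m = sift.detectAndCompute(img_map, None)

    # FLANN approximate nearest-neighbour search: the indexing/retrieval
    # part that has to scale to city-sized descriptor databases
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    matches = flann.knnMatch(des_q, des_m, k=2)

    # Lowe's ratio test keeps only distinctive matches
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]
    print(len(good), "good matches")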
> the real thing that i'm puzzled by with blue vision is how they're registering against ARKit descriptors (if they are at all) since apple doesn't expose them in the ARKit api (only the point cloud itself).
I have had a look at their API documentation, and what they do is provide you with an anchor, and that's where you attach your SCNNodes. They use the built-in ORB-SLAM to position your SCNNodes, but these are all relative to the main anchor, hence achieving stability and persistence.
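A toy numpy sketch of why that buys stability (my own illustration, not their API): each node's pose is stored relative to the anchor, so only the anchor's world pose ever needs re-estimating.

    import numpy as np

    def pose(t):
        # 4x4 homogeneous transform: identity rotation, translation t
        T = np.eye(4)
        T[:3, 3] = t
        return T

    node_local = pose([1.0, 0.0, -2.0])   # authored once, relative to the anchor; persisted
    anchor_world = pose([3.0, 0.0, 5.0])  # re-estimated each session by on-device tracking

    # World pose of a node = anchor pose composed with its local offset.
    # If relocalisation nudges anchor_world, every attached node follows.
    node_world = anchor_world @ node_local
    print(node_world[:3, 3])              # -> [4. 0. 3.]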
yea, that's clever. that way you don't have to track, just identify, and still leverage arkit/arcore to do the tracking.
Just looking around me now and imagining everything I see having an augmented reality overlay, it seems exhausting. I guess the giant blue arrows will help me focus.
I can't imagine AR being accepted without some end-user control over how content is displayed. I'd accept an AR overlay -- maybe -- if I could render every feature on my end in black-and-white serif font. Almost as if the world was a giant Medium article.
> The SDK will initially be free to use
Now there's a way to get people to use it: "initially".
/s
how else exactly do you think they'll make money? or do you think they should be giving this away for free?
Sell people's tracking data, like most "free" software.
they posted in Who's Hiring back in November last year