Meta's New Segmentation Model (ai.facebook.com)
Online demo that runs in the browser: https://segment-anything.com/demo
Initial impression: it seems pretty good at identifying objects, but when you look at the cutouts without the glowing outline you quickly realize the edges are rather rough, and thinner parts sometimes get omitted completely. That might be because the borders they draw are too thick and get excluded from the resulting cutout, though.
Example: https://i.imgur.com/S57c5Cj.png
In the kids' drawing, both extracted people had no arms.
Wow, glad they decided to provide all the model weights (not behind a request form this time!): https://github.com/facebookresearch/segment-anything#model-c...
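If you want to poke at the edge quality yourself (without the demo's glowing overlay), something like this should work with the repo linked above. Rough sketch only: the API names are from the README, but the checkpoint filename, image path, and click coordinates are placeholders you'd swap for your own.

  import numpy as np
  import cv2
  from segment_anything import sam_model_registry, SamPredictor

  # Load the ViT-H checkpoint downloaded from the link above.
  sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
  sam.to(device="cuda")
  predictor = SamPredictor(sam)

  # SAM expects an RGB uint8 array.
  image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
  predictor.set_image(image)

  # One foreground click at (x, y), like clicking in the browser demo.
  masks, scores, _ = predictor.predict(
      point_coords=np.array([[500, 375]]),
      point_labels=np.array([1]),  # 1 = foreground
      multimask_output=True,
  )

  # Keep the best-scoring mask and save a plain RGBA cutout,
  # so you can judge the raw edges without any outline drawn on top.
  mask = masks[np.argmax(scores)]
  cutout = np.dstack([image, (mask * 255).astype(np.uint8)])
  cv2.imwrite("cutout.png", cv2.cvtColor(cutout, cv2.COLOR_RGBA2BGRA))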
very impressive work by Meta AI
It's funny to me that they are able to produce cool R&D stuff like this, but they chose to go all in on the metaverse.
One is driven by their engineering talent, who are consulted on what is worth pursuing; the other is what their slightly spectral founder thinks is worth pursuing. They are not the same.
The metaverse application of image segmentation was shown in the GIF in the article.
Are you confusing the metaverse with AR/VR in general?
There would be no need for image segmentation in the metaverse, a completely generated construct. The information gained from image segmentation could be derived much more easily and embedded directly in the objects you're interacting with.
That’s a very narrow interpretation of the metaverse that Meta doesn’t share. There can absolutely be AR in the metaverse. Check out some of the marketing materials for visions of what that would look like.
Nobody said the Metaverse is VR-only, I literally even said "AR/VR" and called AR out before VR. Did you even read my comment?
Image segmentation wouldn't be used in the Metaverse side of AR. By its very definition, the Metaverse is a "layer" on top of either a VR or AR base. Every object that is superimposed in your view as part of the "metaverse" would be generated. It doesn't need to be segmented; it already is. It doesn't need to be described; its descriptions are already part of its properties.
The only use for segmentation is to identify things in the real world. That has nothing to do with the Metaverse and everything to do, as I said, "with AR/VR in general". In fact, all of the examples on the page showed segmentation used in standard AR scenarios, things like helping you execute a recipe.
You’re caught up in your interpretation, and that’s stopping you from entertaining what I’m saying. Scanning, segmenting, and projecting objects and people into a virtual representation is part of Meta’s vision for the metaverse. An example is one of the marketing materials showing a projection of a person from somewhere else playing chess with somebody. This IS Meta’s vision of the metaverse, not what you may think of (which is Horizon Worlds, if I had to guess).
And even if you are in Horizon Worlds, image segmentation is useful for room/object tracking for 6DoF, and also so you don't stub your toe on the coffee table while you're in VR.
But that’s not a feature of Horizon Worlds/the Metaverse/literally any other VR/AR experience.
How many times do I have to say the same thing in different ways? You two are describing basic features of AR and VR systems and pretending like it’s the next big innovation in AR that only the Metaverse could possibly come up with.
Did you even bother to read my comment? The other guy plainly didn’t. It’s almost like it’s easier to defend this strawman that the two of you keep bringing up, which has nothing to do with what I said, than to actually read what I am saying and properly respond to it.
You're probably looking for the model.
Here it is: https://github.com/facebookresearch/segment-anything#model-c...
Licensed under the Apache 2.0 license, for free.
Go for it.
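If you'd rather segment everything in an image at once instead of clicking point prompts, the README there also documents an automatic mask generator. Untested sketch, same caveats about the checkpoint filename and paths as above:

  import cv2
  from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

  sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
  sam.to(device="cuda")
  mask_generator = SamAutomaticMaskGenerator(sam)

  image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
  masks = mask_generator.generate(image)

  # Each entry is a dict with keys like 'segmentation' (HxW bool mask),
  # 'area', 'bbox', and 'predicted_iou'.
  print(len(masks), "masks found")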
Related ongoing thread:
Segment Anything Model (SAM) can "cut out" any object in an image - https://news.ycombinator.com/item?id=35455566 - April 2023 (33 comments)