Show HN: Unscreen – Remove Video and GIF Backgrounds

unscreen.com

193 points by EricLima 6 years ago · 42 comments

atoav 6 years ago

As a freelance VFX artist, the thing that annoys me about examples like these is that they don't honestly show you the edge cases where things start falling apart (and they usually do).

A guy with dark hair in front of a white wall? I could luma key that in 10 seconds. The book example is more interesting, but there you can already see a bit of chatter (which might have to do with compression and noise, though).

In your defense you probably aim at a different target audience than people like me.

  • ygra 6 years ago

    Well, the first video with the girl in the desert shows a case where things usually break down and they do so here too.

gitgud 6 years ago

I was thinking this looks very similar to the photo version of the tool https://remove.bg ... It's the same guys!

jtvjan 6 years ago

From the submission title I assumed this was some kind of plug-in to remove those auto-playing video backgrounds from web pages. This could be a very useful tool. It sent me a 157 MB APNG in about a second; I don't even get those kinds of speeds from local file servers.

joosters 6 years ago

Their example green-screen photo shows two chairs against a green background. I would love to see how their technology would work in an equivalent kind of setup. If you have two people sitting in chairs talking to each other, then presumably the chairs will be static and will most likely be deemed part of the background, to be cut out.

maktouch 6 years ago

If you need it for real-time video calls, check out XSplit VCam: https://www.xsplit.com/vcam

  • dillonmckay 6 years ago

    That is interesting for $40.

    I just purchased a very basic greenscreen and 3 point lighting kit for about $120 on Amazon.

ashraymalhotra 6 years ago

Are there similar tools to automate even the green screen removal process? It's super inefficient right now, especially dealing with green-screen light bleed. Tweaking parameters etc. in Premiere Pro/After Effects takes forever.

  • numpad0 6 years ago

    The Mandalorian solved it by replacing all the lighting with an LED video-wall room showing a camera-tracked cubemap texture fed in real time from Unreal.

    • dillonmckay 6 years ago

      I think the poor man’s version of this would be a fixed camera position and using a projector from the rear of the screen.

AnonC 6 years ago

The privacy policy says that the uploaded video is deleted immediately after processing, but I'd still prefer a locally installed application for this (without any Internet access for it).

emayljames 6 years ago

Firefox Android: Failed to load video file: Can not access file at 'media1.giphy.com'. Please verify the URL or try downloading the file to your device, and upload it from there.

Please try again later or contact support@unscreen.com if the problem persists.

nathancahill 6 years ago

It does really well with the classic Pulp Fiction test case: https://i.imgur.com/pISmxjH.gifv

Budabellly 6 years ago

This would make for an excellent After Effects (or any post-production software, for that matter) plugin. I would highly recommend going this route for the "Unscreen Pro" version that is yet to be released.

Separately, because I'd bet the makers are reading, are there any plans to offer the segmentation models or APIs locally? Was looking for this for the remove.bg product as well.

  • a_t48 6 years ago

    It would also make an excellent OBS plugin for streamers, if such a thing doesn't already exist.

    • tenryuu 6 years ago

      For webcams? You can use XSplit VCam. https://www.xsplit.com/ja/vcam

      It uses whatever AI systems they made to single out the foreground objects from the background objects. And then it's basically just taking the camera input, applying filters or transparency and outputting it as a new video device.
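
      Conceptually, the loop is simple: grab a camera frame, get a foreground mask from whatever model you have, blend, and hand the result to a virtual camera device. A rough sketch with OpenCV, where segment_foreground() is a placeholder for the actual model and the preview window stands in for the virtual-device step (a tool like pyvirtualcam would handle that part):

        import cv2
        import numpy as np

        def segment_foreground(frame):
            # Placeholder: a real tool runs a person-segmentation model here
            # and returns a float mask in [0, 1], 1 = foreground.
            mask = np.zeros(frame.shape[:2], dtype=np.float32)
            h, w = mask.shape
            mask[h // 4:, w // 4 : 3 * w // 4] = 1.0  # dummy centre region
            return mask

        cap = cv2.VideoCapture(0)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = segment_foreground(frame)[..., None]      # H x W x 1
            background = np.zeros_like(frame)
            background[:] = (0, 255, 0)                      # flat green backdrop
            out = (frame * mask + background * (1 - mask)).astype(np.uint8)
            cv2.imshow("virtual cam preview", out)           # a real tool pushes this to a virtual device
            if cv2.waitKey(1) == 27:                         # Esc quits
                break
        cap.release()
        cv2.destroyAllWindows()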

craftinator 6 years ago

I have been unable to upload a video or GIF from mobile. The GIF search and unscreen work correctly, but if I try to upload my own, it just hangs with the loading bar stuck permanently. These files are <5 MB, in the correct format, and I'm on Firefox for Android.

  • dlivingston 6 years ago

    I tried with an 8 second video on Safari for iOS. Worked well for a somewhat complex video (moving video with animal + human in foreground, carpet and walls in background), with only minor artifacts. Quite impressive.

    • craftinator 6 years ago

      Weird, maybe it's a Firefox issue, though I'd hate for that to be the case. Did you try any videos in a portrait aspect ratio?

thrownaway954 6 years ago

Very refreshing to see a product that demos exactly what it does (and amazingly well, I might add) within a moment of landing on their homepage. I cannot believe how cool that is. Very well done... congrats.

wildduck 6 years ago

Can't really get it working. All I see is a white bar in the middle of the screen.

https://i.imgur.com/mOWAJwg.png

pimlottc 6 years ago

This is neat, but I'm confused why the two sides of the split screen samples don't line up exactly. Why does removing the background shift the image?

  • martin-adams 6 years ago

    I think they are only shifted in time, which could be a quirk of the video compression.

    I once had a clip that I trimmed off another scene. Only after converting the video file did a frame of that scene come back.

runawayvibrator 6 years ago

So when are we expecting Snapchat to do this exact thing?

amerine 6 years ago

Nice! How are you doing it?

  • julvo 6 years ago

    My guess would be U-Net-like ConvNets, trained on images annotated with foreground/background segmentations. Probably with all kinds of tricks like multi-scale inference etc.

    However, simple frame-by-frame segmentation will probably not be enough to get temporal consistency, so for each frame's segmentation they probably take previous and following frames into account.
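
    A crude way to picture the temporal part: average each frame's mask with those of its neighbours so the matte doesn't flicker. A minimal sketch of that idea (an illustration only, not what Unscreen actually does):

      import numpy as np

      def smooth_masks(masks, radius=2):
          # masks: array of shape (T, H, W) with per-frame foreground
          # probabilities in [0, 1]; average each frame with its neighbours.
          masks = np.asarray(masks, dtype=np.float32)
          smoothed = np.empty_like(masks)
          for t in range(len(masks)):
              lo, hi = max(0, t - radius), min(len(masks), t + radius + 1)
              smoothed[t] = masks[lo:hi].mean(axis=0)
          return smoothed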

    • superasn 6 years ago

      That is incredibly insightful. For someone with no knowledge of this field, where would one start if they wanted to remove the background from images programmatically?

      • julvo 6 years ago

        Depending on the type of image, a simple solution could be using OpenCV and some clever heuristics.
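
        For example, OpenCV's GrabCut can pull a subject out of a photo if you can guess a rough bounding box around it. A minimal sketch (the file name and the box are placeholders):

          import cv2
          import numpy as np

          img = cv2.imread("photo.jpg")
          mask = np.zeros(img.shape[:2], dtype=np.uint8)
          bgd_model = np.zeros((1, 65), dtype=np.float64)
          fgd_model = np.zeros((1, 65), dtype=np.float64)

          # Rough box around the subject -- this is the "clever heuristic" part.
          rect = (50, 50, img.shape[1] - 100, img.shape[0] - 100)
          cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

          # Keep pixels labelled definite or probable foreground.
          fg = ((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)).astype(np.uint8)
          cv2.imwrite("cutout.png", img * fg[:, :, None])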

        For a deep learning approach, I would start by looking into literature on semantic segmentation. Here is a blog post I just found which gives an intro: [1]

        With state-of-the-art models (e.g. DeepLabV3) and a good dataset of foreground/background segmentations, the results could be of useful quality already.
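
        As a rough sketch of that route, torchvision ships a pretrained DeepLabV3 whose "person" class already gives a usable foreground mask (the model and class index here are just one reasonable setup, not how Unscreen works):

          import torch
          import torchvision
          from torchvision import transforms
          from PIL import Image

          model = torchvision.models.segmentation.deeplabv3_resnet101(pretrained=True).eval()
          preprocess = transforms.Compose([
              transforms.ToTensor(),
              transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                   std=[0.229, 0.224, 0.225]),
          ])

          img = Image.open("photo.jpg").convert("RGB")
          with torch.no_grad():
              out = model(preprocess(img).unsqueeze(0))["out"][0]  # (classes, H, W) logits
          mask = (out.argmax(0) == 15).float()  # class 15 is "person" in this label set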

        The next step would be to look into literature on image matting (e.g. deep image matting [2]) which instead of trying to classify each pixel as foreground/background, regresses the foreground colour and transparency.

        ___

        [1] https://divamgupta.com/image-segmentation/2019/06/06/deep-le...

        [2] https://arxiv.org/abs/1703.03872

        • superasn 6 years ago

          Thanks for the reply. This will make for a great weekend project.

          I have some knowledge of creating an OCR program using deep learning from the last online course I took, but this looks like a very different beast, so it will be great fun to learn.

OutsmartDan 6 years ago

This is pretty neat, would love to see how the underlying tech works.

_def 6 years ago

edit: okay, it was because I recorded in portrait mode.

The examples are great! I recorded a short video of myself and the processing failed horribly. Whelp.

artur_makly 6 years ago

Replace the background with a whiteboard of complex calculus algos and code functions... and voila! You've got that CTO position!

arrayjumper 6 years ago

Very nice demo. How does it work?

BubRoss 6 years ago

This is an area of research that has been going on for years now, called "natural image matting".

There are dozens of techniques of varying success that have been developed over the course of a decade and a half. My guess is that this is taking some more common implementation like 'closed form matting' and putting it on a server with ffmpeg. To guess the foreground I would use motion vectors as a starting point.

Also note that an alpha channel doesn't get you all the way there. You have to solve the full matting equation to extract both the foreground and alpha. You can see a bright edge around the hair in the example. The result they show still looks pretty good in general though.
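
To make that concrete, here is a toy one-pixel example of the matting equation I = alpha * F + (1 - alpha) * B. If you keep the observed pixel colour (which still contains the white wall) and only apply the estimated alpha, the re-composited edge comes out too bright; recovering the true foreground colour F fixes it:

  # Matting equation: I = alpha * F + (1 - alpha) * B
  F, B_white, alpha = 0.10, 1.00, 0.60       # dark hair over a white wall, 60% coverage
  I = alpha * F + (1 - alpha) * B_white      # observed edge pixel: 0.46

  B_new = 0.05                               # composite onto a new dark background
  naive  = alpha * I + (1 - alpha) * B_new   # reuse the observed colour: 0.296 (bright halo)
  proper = alpha * F + (1 - alpha) * B_new   # use the recovered foreground: 0.08
  print(naive, proper)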

  • dannyw 6 years ago

    Pretty sure it's a machine learning model for video segmentation. It doesn't guess the foreground by motion: it guesses it with millions of human-annotated masks.

    Deep learning is making decades of research obsolete by delivering better results, with better generalisation, in less time.

    • BubRoss 6 years ago

      Different techniques don't mean it isn't still natural image matting. I was guessing to give people a starting point on what to look at. Does it reference a paper somewhere? Just saying 'deep learning' doesn't really explain much.
