Protect your art against AI/machine learning theft with customizable watermarks

rgbwatermark.net

13 points by jbothma 3 years ago · 3 comments

yarg 3 years ago

People have already succeeded in generating adversarial images that confuse neural networks.

Until networks reach the point where those techniques no longer work, that's probably the better option: it lets you automatically perturb an image for a targeted deception, rather than hand-defining a large number of parameters without understanding what their impact will be.
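The targeted-perturbation idea sketches out like this: move the input a small step in the direction that most increases the model's loss (the fast-gradient-sign method). A toy illustration on a hand-picked linear classifier, not any particular attack on a real image model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Step x in the direction that increases the loss for label y_true."""
    p = sigmoid(w @ x + b)
    # Gradient of binary cross-entropy w.r.t. the input x of a linear model.
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

# Made-up weights; x = w is a point the classifier confidently labels 1.
w = np.array([1.0, -2.0, 0.5, 1.5])
b = 0.0
x = w.copy()

x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=3.0)
print(sigmoid(w @ x + b) > 0.5, sigmoid(w @ x_adv + b) > 0.5)
# prints: True False  -- a small signed step flips the prediction
```

The same mechanism, scaled to deep networks, is what "adversarial images" rely on; the catch yarg alludes to is that such perturbations tend to stop transferring once models are retrained or hardened.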

antiterra 3 years ago

But does it work?

  • kallistisoft 3 years ago

    Given everything I know about feature extraction and embedding, my armchair answer would be a resounding no... Given enough similar content, the watermark would get rounded off as what it is -- noise.

    Also, the fact that a service which would, in theory, be in high demand offers zero explanation or justification for its technique is highly suspect.

    - - - - -

    The only way a watermark would be even remotely effective as a deterrent would be if everyone agreed on a fixed mark. That mark could either be detected automatically (a silly way of doing things) or, if incorporated into the model, would create a bubble of locality due to feature alignment (poisoning the well).

    Neither of these is a practical or reasonable solution, unlike robots.txt, EXIF metadata, and digital signing, combined with regulations allowing for legal recourse.
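    For concreteness, the robots.txt route looks like this: a plain-text file at the site root naming the crawler user agents to exclude. GPTBot (OpenAI) and CCBot (Common Crawl) are real published tokens; whether a given scraper honors them is, of course, exactly the enforcement problem regulation would need to solve.

    ```txt
    # /robots.txt -- opt art pages out of known AI training crawlers
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /
    ```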

    TL;DR - DRM of publicly available data is not a "winnable" fight
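The "rounded off as noise" intuition can be illustrated numerically: if each image carries an independent watermark perturbation, statistics aggregated over many images converge to the clean signal. A toy sketch with made-up feature vectors and zero-mean noise standing in for the watermark (an assumption, not a claim about any particular model):

```python
import numpy as np

rng = np.random.default_rng(42)
clean = np.array([1.0, -0.5, 2.0, 0.0])  # hypothetical clean feature vector

def watermarked(n):
    # Each "image" gets its own independent watermark perturbation.
    return clean + rng.normal(scale=0.5, size=(n, clean.size))

few = watermarked(10).mean(axis=0)
many = watermarked(100_000).mean(axis=0)

err_few = np.linalg.norm(few - clean)
err_many = np.linalg.norm(many - clean)
print(err_many < err_few)  # more samples -> the watermark averages away
```

The per-image error shrinks roughly as 1/sqrt(n), which is why a fixed or independently randomized mark gets learned around rather than learned, absent the coordinated "poisoning" scheme described above.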
