nsfw.rest – Keep your platform safe from NSFW content, all for free
Can you consider giving a score instead of a binary decision? Or differentiate between, say, beach pics and true 18+ content.
Edit: The accuracy leaves a lot to be desired - you can paste any images from clothing stores of models in dresses or tank tops and it will flag it as NSFW.
The "AI" seems to replicate an Islamic fundamentalist - a woman in a burqa did pass as SFW ;)
Google offers a hosted SafeSearch version [1] which has a lot more nuance: https://cloud.google.com/vision/docs/reference/rpc/google.cl...
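To illustrate the extra nuance: Google's SafeSearch annotation returns a per-category likelihood (adult, spoof, medical, violence, racy) instead of one bit, so a client can treat "racy" content (beach pics, clothing-store models) differently from truly adult content. A minimal sketch of such a policy, operating on a plain dict rather than the real client library's response object:

```python
# Ordered likelihood values as used by the SafeSearch annotation.
LIKELIHOODS = ["UNKNOWN", "VERY_UNLIKELY", "UNLIKELY", "POSSIBLE", "LIKELY", "VERY_LIKELY"]

def allow(annotation: dict) -> bool:
    """Block clearly adult content, but tolerate merely 'racy' images.

    `annotation` here is a plain dict standing in for the API response;
    the thresholds are an assumption, pick your own for your platform.
    """
    adult = LIKELIHOODS.index(annotation.get("adult", "UNKNOWN"))
    racy = LIKELIHOODS.index(annotation.get("racy", "UNKNOWN"))
    return adult < LIKELIHOODS.index("LIKELY") and racy < LIKELIHOODS.index("VERY_LIKELY")

print(allow({"adult": "VERY_UNLIKELY", "racy": "LIKELY"}))     # True: racy but not adult
print(allow({"adult": "VERY_LIKELY", "racy": "VERY_LIKELY"}))  # False: clearly adult
```

This is exactly the beach-pic-vs-18+ distinction the parent comment asks for: a binary classifier collapses both cases, a multi-category score lets you draw the line yourself.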
It gives a score as well in the API; I can for sure add that to what you see on the "Try out" page. I was planning on adding a button where you could see the raw data.
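Once the raw score is exposed, clients can move past the binary flag. A sketch of what consuming it might look like; the JSON field names (`nsfw`, `confidence`) and the 0–100 scale are assumptions based on this thread, not a documented schema:

```python
import json

def classify(payload: str, threshold: int = 80) -> str:
    """Map a hypothetical 0-100 confidence score to a coarser label.

    Anything flagged NSFW below the threshold goes to human review
    instead of being auto-blocked. Threshold is an arbitrary example.
    """
    data = json.loads(payload)
    if not data["nsfw"]:
        return "sfw"
    return "nsfw" if data["confidence"] >= threshold else "review"

print(classify('{"nsfw": true, "confidence": 87}'))  # nsfw
print(classify('{"nsfw": true, "confidence": 50}'))  # review
```

A review bucket like this would also soften the false positives mentioned elsewhere in the thread (models in dresses, tank tops) without letting true 18+ content through.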
The whole thing is built to export moderation to an external service. It's a bit creepy IMO.
I totally get that. That's why I'm planning on making the service (or at least parts of it) open source. You can already get insight into the storage usage of the server through status.lngzl.nl (click scan.nsfw.rest). If you upload files, the storage size does not increase.
Also, one of the things I should make a little clearer is that it's best paired with data that will be made public anyway (for example, on a social media platform) and shouldn't be run on private images (even though it's not storing anything, it's probably not needed for those use cases).
Corporate uses of NSFW filters typically want to filter out clothing-store models and similar content: "brand safe."
It's a good idea in theory, but what if it shuts down? And what about the privacy implications of sending all images to some server across the world?
That's a really good question! I'm planning on making parts of it open source and giving mission-critical users (for example, companies like Discord) the option to host it on their own hardware. If the project fails, I'll make it completely open source so that people can continue to use it.
Oh brilliant! All power to you, then man :-)
It marked 3 of 4 SFW images as NSFW and then marked the only NSFW image I uploaded as being OK.
I am not sure this thing is very accurate at all.
You’ve got me curious; would you mind sharing the images used? The SFW images that were mislabeled, rather.
I used a picture of myself, a picture of a tan-colored bird, and a Mortal Kombat character, and it marked them all as NSFW.
I added a number from 0–100 indicating how confident it is.
What are the API limits? That is, how many images scanned is too many?
Currently, no limits. As long as it doesn't look like a DDoS attack, nothing will be blocked.
Amazing accuracy; I tried to trick it, but it got everything right.
Awesome! It's based on an open source model :)