Teachable Machine: Teach a machine using your camera, live in the browser

blog.google

438 points by jozydapozy 8 years ago · 94 comments

nsthorat 8 years ago

deeplearn.js author here...

We do not send any webcam / audio data back to a server, all of the computation is totally client side. The storage API requests are just downloading weights of a pretrained model.

We're thinking about releasing a blog post explaining the technical details of this project. Would people be interested?

  • amelius 8 years ago

    Yes please! :)

    And some quick questions:

    What network topology do you use, and on what model is it based (e.g. "inception")?

    What kind of data have you used to pretrain the model?

    • nsthorat 8 years ago

      We're using SqueezeNet (https://github.com/DeepScale/SqueezeNet), which is similar to Inception (trained on the same ImageNet dataset) but is much smaller - 5MB instead of Inception's 100MB - and inference is much, much quicker.

      The application takes webcam frames and infers through SqueezeNet, producing a 1000D logits vector for each frame. These can be thought of as unnormalized probabilities for each of ImageNet's 1000 classes.

      During the collection phase, we collect these vectors for each class in browser memory, and during inference we pass the frame through SqueezeNet and do k-nearest neighbors to find the class with the most similar logits vector. KNN is quick because we vectorize it as one large matrix multiplication.
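
      (To make that concrete - not the app's actual code, just a sketch of the idea in TypeScript, assuming dot-product similarity over the stored logits vectors; the names and the k value are made up.)

        // Sketch: stored examples are (class label, 1000-D logits vector) pairs.
        type Example = { label: string; logits: Float32Array };

        // Score the query against every stored example in one pass. Written as
        // explicit dot products here; stacking the stored vectors into a matrix
        // turns this into the single large matrix multiplication described above.
        function classify(examples: Example[], query: Float32Array, k = 10): string {
          const scored = examples.map(ex => {
            let dot = 0;
            for (let i = 0; i < query.length; i++) dot += ex.logits[i] * query[i];
            return { label: ex.label, score: dot };
          });

          // Vote among the k most similar stored examples.
          scored.sort((a, b) => b.score - a.score);
          const votes = new Map<string, number>();
          for (const { label } of scored.slice(0, k)) {
            votes.set(label, (votes.get(label) || 0) + 1);
          }
          return Array.from(votes.entries()).sort((a, b) => b[1] - a[1])[0][0];
        }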

      I'll go deeper in a blog post soon :)

  • Splines 8 years ago

    There's something fantastically entertaining about this. It's stupidly simple (from the outside) but interacting with the computer in such a different way is weirdly fun.

    It's like when you turn on a camera and people can see themselves on a TV. A lot of people can't help but make faces at it.

  • sydd 8 years ago

    Why does it not work in Edge? Please keep the web open; do not make stuff that does not work in a modern browser. Also, always give an option to try it anyway.

  • haser_au 8 years ago

    A blog post on the technical details would be great, please. Thanks in advance, since I know it'll take a bit of your time to write.

  • godelmachine 8 years ago

    To answer the question: yes, I am interested.

celim307 8 years ago

Pretty neat! Good overview without overwhelming right off the bat. Would be cool if they showed off common pitfalls like overfitting, or even segued into general statistics!

melling 8 years ago

How long before I can teach my computer gestures that are mapped to real computer functions? For example, scroll up/down, switch apps, save document, cut/copy/paste, etc.

One could probably map each gesture to a regular USB device that acts as a second keyboard and mouse? The hard part is identifying enough unique gestures?

amelius 8 years ago

I don't have a camera here. Did anyone try it? How does it work?

  • IanCal 8 years ago

    Surprisingly well!

    It's a really well put together demo & tutorial.

    I held a pen up next to me and held the green button.

    Then did the same with a mouse.

    It would flick between the two if I was holding nothing, so I held the orange button for a bit while holding nothing.

    Worked pretty much every time.

    Training is fast enough with a few hundred images per class that I didn't notice any delay.

    • amelius 8 years ago

      What do you mean exactly by "held the green button"?

      I can't run the demo here (browser not capable enough, and no camera) and I'm getting really curious what this is about.

      • icc97 8 years ago

        Watch the demo video in the link; it explains the green button.

      • IanCal 8 years ago

        Ah sorry. So there are three coloured buttons. When you hold one, the site takes a series of photos from your webcam and assigns them to that "class". Then it'll train and start classifying your video input live.

        It's a pretty neat way of creating a reasonable training set of 3 classes.
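
        (If you want to reproduce that collection step yourself, here's a rough sketch - not the app's code - assuming a <video> element already hooked up to getUserMedia; the class labels are just made-up button colours.)

          // Capture the current webcam frame and file it under a class label
          // while the corresponding coloured button is held down.
          const video = document.querySelector('video') as HTMLVideoElement;
          const canvas = document.createElement('canvas');
          const ctx = canvas.getContext('2d') as CanvasRenderingContext2D;

          const dataset: Record<string, ImageData[]> = { green: [], purple: [], orange: [] };

          function captureFrame(label: string): void {
            canvas.width = video.videoWidth;
            canvas.height = video.videoHeight;
            ctx.drawImage(video, 0, 0);
            dataset[label].push(ctx.getImageData(0, 0, canvas.width, canvas.height));
          }

          // e.g. call captureFrame('green') on every animation frame while the green
          // button is held; a few hundred frames per class makes a training set.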

  • makmanalp 8 years ago

    It's working great because they're using a state-of-the-art model (SqueezeNet: https://github.com/DeepScale/SqueezeNet), and also because the samples / experiments you do are often only on yourself, in the same lighting, same clothes, etc. So it gives a nice idealized playground environment that mostly eliminates annoying details like those.

  • gfredtech 8 years ago

    There are 3 default classes, so you train according to each class (e.g. hand waving, sitting still, etc.) by taking examples of each (using your camera). You then map the input data from your camera to some output data (e.g. if I used the green button to take photos of me waving, display a GIF of a cat that's waving). Instead of a GIF you can use sound too.
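
    (A tiny sketch of that class-to-output mapping; the element IDs, file names, and labels here are invented, and the predicted label is assumed to come from whatever classifier runs on each frame.)

      // Map each trained class to an output: swap in a GIF or play a sound.
      const outputs: Record<string, () => void> = {
        green:  () => { (document.getElementById('gif') as HTMLImageElement).src = 'waving-cat.gif'; },
        purple: () => { void new Audio('meow.mp3').play(); },
        orange: () => { (document.getElementById('gif') as HTMLImageElement).src = 'sitting-cat.gif'; },
      };

      // Called with the predicted label for each classified webcam frame.
      function onPrediction(label: string): void {
        const trigger = outputs[label];
        if (trigger) trigger();
      }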

crypticlizard 8 years ago

The value-add for this demo is amazing; it's going to be many people's first approachable experience of ML, or things just like it will be. I expect a lot more of this stuff to appear in UI/UX. It's fun, intuitive, and a game-changer - moving away from dumb screens towards fully interactive machines with their own knowledge graph.

lelima 8 years ago

You've been able to solve problems using machine learning without coding for a while now; Azure Machine Learning has had these features for more than a year.

I've solved regression, classification, and recommendation problems with it, and the best part is it deploys a web service with a few clicks.

  • thanksgiving 8 years ago

    But to use Azure you would need to have:

    1. a working phone

    2. a valid credit card

    which places too high a bar on students. I mean, I've tried to argue for graduated restrictions - basically, students with .edu emails should be able to do some things without entering a credit card number - but the fact that it is not possible suggests this isn't a priority for Azure.

    Google says this runs in your browser, so there's little infrastructure cost for this demo, right?

  • shostack 8 years ago

    Can you clarify what you did with it? I'd love to start dabbling in solving problems with ML, but am a bit intimidated by getting started. Is it fairly easy for a novice to do the things you did?

StavrosK 8 years ago

Does anyone know what this uses under the hood? I loved the demo, but I would like a similarly easy way to get started locally with Python, for example.

Is there an ML library that can easily start capturing images from the webcam so you can play around with training a model?

greggman 8 years ago

Be aware that, at least in Chrome, once you give teachablemachine.withgoogle.com permission to use your camera, it has permission to use your camera without further prompting - including from iframes - unless you revoke that permission. In other words, every ad and every analytics script from Google could start injecting camera access.

I wish Chrome would give the option to only grant permission "this time", and I wish it didn't allow camera access from cross-domain iframes.

  • ma2rten 8 years ago

    Are you serious? Do you realize that Chrome is also written by Google, and they could theoretically already run arbitrary code on your computer? The potential reputation damage and legal risk for Google would be way too high to pull off something like that.

  • jamesmishra 8 years ago

    If this happened, the Google Chrome tab would show a camera icon. Many webcams also have adjacent LEDs that indicate when they are active.

    Google could theoretically release compromised versions of Google Chrome and only use the permission on devices where webcam LEDs are unlikely (e.g. smartphones), but this is going deep into tin-foil-hat territory.

    • greggman 8 years ago

      That's not helpful. The pictures would already be taken and uploaded to servers without my permission, regardless of whether or not I wanted my picture taken or what's visible (contracts, trade secrets, people in various states of undress).

      Also, this isn't about Google spying. It's about Chrome's bad camera permission model. Any company can abuse it.

  • azinman2 8 years ago

    But won't it apply to just that FQDN alone? Google analytics and ads are served from a totally different domain. What's the actual concern here?

    • greggman 8 years ago

      Google ads and analytics inject JavaScript, which means they can insert iframes for any domain they want. If they injected <iframe src="https://teachablemachine.withgoogle.com/spyonuserwithcamera" /> they'd be able to use your camera from the ad or analytics script without asking for permission again.

      Of course I'm not suggesting Google would actually do that, but some other company might make seeamazingcamerameme.com to get users to turn on their camera for that domain, and then after that make iframes for seeamazingcamerameme.com/spy.
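
      (To make the mechanism concrete, a sketch of the scenario described above. The domain is the hypothetical one from this comment, and this only illustrates the permission model as described here - it is not a claim about what any real ad or analytics script does.)

        // Step 1: script injected by an ad/analytics provider adds a hidden iframe
        // pointing at an origin the user has already granted camera access to.
        const frame = document.createElement('iframe');
        frame.src = 'https://seeamazingcamerameme.com/spy';
        frame.style.display = 'none';
        document.body.appendChild(frame);

        // Step 2: inside that iframe's own page, the camera opens without a new
        // prompt, because the grant is keyed to the origin rather than the visit.
        navigator.mediaDevices.getUserMedia({ video: true })
          .then(stream => { /* the stream's frames could now be captured and uploaded */ });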

    • amigoingtodie 8 years ago

      So you are contending we are secure via DNS?

      • matt4077 8 years ago

        That's one of those arguments that may attack the parent comment in isolation, but makes absolutely no sense in the context of the thread they were replying to.

        Because if you assume an attacker has control over DNS, the security model of granting permission on a per-domain basis is broken anyway, and the initial concern about granting Google this access is already subsumed in your general paranoia.

        • ec109685 8 years ago

          No it isn't. TLS helps ensure you aren't talking to a rogue server, and HSTS ensures you can't be spoofed in the first HTTP request to a new server.

  • haser_au 8 years ago

    Chrome does give you this option. It's called "incognito mode".

  • addedlovely 8 years ago

    Good to know, but thankfully easy to remove permissions from the settings.

netcraft 8 years ago

What makes it non-mobile? Is it something about the expected performance of the JS, or are there APIs being used that I'm not thinking about?

  • nsthorat 8 years ago

    It works on mobile; it's just slow. Every time we read and write from memory we have to pack and unpack 32-bit floats as 4 bytes without bit-shifting operators >.>
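
    (For anyone curious what that packing looks like: a simplified sketch using only arithmetic - no >> or & - for values in [0, 1). The real deeplearn.js encoding handles full-range floats; this just shows the general shift-free trick.)

      // Encode a value in [0, 1) as four base-256 "digit" bytes, shift-free.
      function packUnitFloat(x: number): [number, number, number, number] {
        const bytes: number[] = [];
        let v = x;
        for (let i = 0; i < 4; i++) {
          v *= 256;
          const digit = Math.floor(v);
          bytes.push(digit);
          v -= digit; // carry the fractional remainder into the next byte
        }
        return bytes as [number, number, number, number];
      }

      // Reassemble the value from the four bytes, again with plain arithmetic.
      function unpackUnitFloat(b: [number, number, number, number]): number {
        return b[0] / 256 + b[1] / 65536 + b[2] / 16777216 + b[3] / 4294967296;
      }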

    • white-flame 8 years ago

      Isn't that what ArrayBuffers can do for you at nearly the same amortized speed as C unions?

f00_ 8 years ago

This is really cool - openFrameworks-esque, in-browser JavaScript.

If you like this, I would highly recommend looking at openFrameworks.

The interactive browser part excites me; I want to try to make something with deeplearn.js.

mschuster91 8 years ago

Hmm. I wonder if one could train this with dick pics and embed it client-side into popular messenger apps... "this picture was classified as a penis", to counter morons sending their dick as a first message.

peepopeep 8 years ago

Am I the only paranoid one who thinks this is just Google's way of capturing millions of faces in their database? Or did Apple beat them to it?

  • moduspol 8 years ago

    Claims like these make privacy-focused efforts less valuable, and I wish people wouldn't make them.

    What value is there in taking care to store biometric data only locally, in a separate chip inaccessible even to the OS, if people will simply claim it's equivalent to keeping a remote database of millions of faces?

    • prophesi 8 years ago

      People will be much less likely to make those claims if you clearly state where the data is being stored. This article and their project page don't mention anything about privacy.

      I don't know anything about SqueezeNet, but it makes a lot of calls to storage.googleapis.com. I wouldn't be surprised if it's making some PUT requests. https://github.com/googlecreativelab/teachable-machine/blob/...

      • scarlac 8 years ago

        People need to ask the question before making assumptions. In the case of Apple, they said it directly in the presentation of FaceID as well as TouchID IIRC. Yet people made these claims anyway. For this project, they also state it clearly on the page:

        > Are my images being stored on Google servers?

        > No. All the training is happening locally on your device.

        • prophesi 8 years ago

          Where does it clearly state that? I couldn't find anything in the linked article, the GitHub repo, or teachablemachine.withgoogle.com.

          But I do agree people need to ask the question before making assumptions. Sadly, the two popular mindsets are either to not think about privacy at all, or to believe that everything is infringing on your privacy.

      • urspx 8 years ago

        I think the post above was referring to Apple, not Google. In the latter case, I think the claim is justified.

  • xyrnoble 8 years ago

    Facebook beat them to it... that's the whole reason for tagged images, IMO. Then they can relate identities with each other and with EXIF GPS data to track their movements over time.

    • colmvp 8 years ago

      Yeah it's a little hilarious how people just keep giving Facebook more and more data to experiment with.

      • melling 8 years ago

        HN discussions tend to devolve into rants about privacy. There are a lot of repeated discussions that occur here, and they overwhelm the discussion about the actual technology.

        https://h4labs.wordpress.com/2017/09/27/groundhog-day-amazon...

        • xyrnoble 8 years ago

          I can solve the privacy problem by not using their products? I disagree: https://en.m.wikipedia.org/wiki/FBI–King_suicide_letter

          Also, my own personal privacy is less secure if it's a relative inconvenience for employers. If everyone but me gives up their privacy then there's more pressure on me to follow suit.

          The argument even doubles back on itself. If these comments aren't interesting to you... don't read them. Embrace tree-style collapsible comments.

          • melling 8 years ago

            The comments are repetitive and are basically complaining. If there were something to be learned, that would make them useful and interesting.

            • xyrnoble 8 years ago

              You can learn lots of interesting things by invading people's privacy.

              I responded to the argument you linked. You're avoiding a more interesting discussion on the topic. Push the [-] button and move on. Your comment is blatantly hypocritical:

              "Every time X is updated people complain about X; those people ignore the details of the update."

              "Every time people complain about X other people complain about them complaining about X; those people ignore the details of the complaint."

        • gtufano 8 years ago

          That's because the privacy implications of the technology should be part of the discussions on the technology... technology is not neutral; the way it's used and the privacy implications are significant.

      • EGreg 8 years ago

        Once the big data genie is out of the bottle, you can't put it back in.

        http://magarshak.com/blog/?p=169

  • ma2rten 8 years ago

    I am pretty sure that Apple does not save your image data in any database. Apple is really trying to differentiate itself on privacy.

    Also, I don't think that this sends any data to Google, since it trains the neural net in the browser. You could even verify this yourself by looking at the source code.

  • glass_of_water 8 years ago

    The machine learning is done in the browser with deeplearn.js, so the images aren't being sent to Google's servers.

  • jamesmishra 8 years ago

    The faces are unlabeled, and I'm not sure what that data would be good for. If Google really wanted face data, they could look at:

    - Gmail / Google Plus / Google Apps profile pictures

    - Google Street View

    - Google Hangouts

    - implementing a primitive Face ID or Snapchat-style camera on Google Android

    - the large mass of face pictures that they index with Google Images

    • runj__ 8 years ago

      Google Photos seems like the absolute best bet there; they're "organizing" them by default.

      • jamesmishra 8 years ago

        Can't believe I forgot about that one! I'm an avid Google Photos user, and they definitely have some pretty amazing unsupervised clustering for faces.

    • tokenizerrr 8 years ago

      Android has had face unlock for ages

  • danso 8 years ago

    What did Apple beat them to? FaceID is said to not upload data off-device.

  • icc97 8 years ago

    It's good to be paranoid about it but at the same time it's quite a cool thing to offer people.

    Also, I think a lot of the processing is done in the browser using deeplearn.js, so I don't know how much is sent back to Google.

  • 4684499 8 years ago

    They don't need to; they've got YouTube and the like. People have been providing free data sets to Google for years anyway.

  • fancyfacebook 8 years ago

    Don't worry, some comment on a forum said they'd never do this, so I think we're all good!

eggie5 8 years ago

I bet it's fine-tuning an ImageNet CNN.
