Fuyu-8B: A multimodal architecture for AI agents

adept.ai

205 points by averylamp 2 years ago · 59 comments

tasdfqwer0897 2 years ago

Hey, I work at Adept and helped make this! Happy to answer questions. The thing I think is especially neat/notable is how simple you can make the model architecture while still getting good performance. I expect we'll continue to see bits of these models get deleted in the next few years.

Note that you can get the model weights on HuggingFace here: https://huggingface.co/adept/fuyu-8b
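
A rough sketch of loading it, assuming the in-progress transformers integration (FuyuProcessor / FuyuForCausalLM); exact class names may shift until the PR lands:

    # sketch only; class names assume the pending transformers support for Fuyu
    from transformers import FuyuProcessor, FuyuForCausalLM
    from PIL import Image

    processor = FuyuProcessor.from_pretrained("adept/fuyu-8b")
    model = FuyuForCausalLM.from_pretrained("adept/fuyu-8b", device_map="auto")

    image = Image.open("screenshot.png")  # any local image
    inputs = processor(text="Generate a coco-style caption.\n", images=image,
                       return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=32)
    # decode only the newly generated tokens
    print(processor.batch_decode(out[:, inputs["input_ids"].shape[1]:],
                                 skip_special_tokens=True)[0])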

  • brianjking 2 years ago

    First off, absolutely incredible work, congrats and thank you.

    Secondly, do you anticipate Fuyu being made available for commercial access or will it remain NC?

  • JimDabell 2 years ago

    What’s the situation with the license? Your blog post says you are open sourcing it, but it’s currently only available under a non-commercial license instead. Is an open source release forthcoming?

    • coder543 2 years ago

      Yeah... in the blog post, they do explicitly mention "cc-by-nc", which I find disappointing.

      Anything that Adept is "excited to see what the community builds on top of it" would only serve Adept and no one else! What incentive does the community have to build on top of Fuyu, when the community can't benefit from its own work? If Adept wants to benefit from word-of-mouth discussion of their models and from community contributions that make those models work better, as has happened dramatically with Llama 2, then they need to give the community the opportunity to benefit too.

      Also weird: if you look at the tags on Hugging Face, you'll see it is listed as "cc". This comes from the README[0] metadata. "cc" is not really a license.

      [0]: https://huggingface.co/adept/fuyu-8b/blob/main/README.md?cod...

    • schleck8 2 years ago

      It's open source by their definition, that is, source-available (open). Everyone assumes the term "open source" is protected in some way, but the entity that established the commercial-usage criterion is the Open Source Initiative, and no one is forced to abide by their ideology.

      FOSS captures the commercial-usage requirement much better; otherwise the term FOSS would be redundant.

    • mandelken 2 years ago

      You can download the weights on Hugging Face.

      I believe the copyright status of AI model weights in the US is not fully established, but so far it has been held that a list of numbers cannot be copyrighted, so the same likely applies to model weights. Note that you don't have to enter into an agreement with Adept to use the model.

      Alternatively, download and use the weights in Japan, which explicitly does not extend copyright protection to AI models.

      • ansk 2 years ago

        > a list of numbers can not be copyrighted

        Any digital object can be represented as a list of numbers (this is precisely the origin of the term "digital"). Since there is clear precedent for copyrighted digital objects (media, software, etc.), reducing something to "a list of numbers" is not a useful distinction with regard to copyright law.

        • MattPalmer1086 2 years ago

          IANAL but as far as I remember, you can't copyright a list of objective facts, for example a phone book containing a list of phone numbers.

          Model weights are clearly not in that category. Happy to be corrected if I misremember.

          • outofpaper 2 years ago

            Model weights are akin to Markov chains and compressed data. They are direct representations of the data they were created from, in the same way that Markov chains are derived from their training data and zipped files are created from the original files.

            Zipping a file does not grant the zipped output any copyright protection beyond that of the original file.

            Moreover, the US Copyright Office has stated in the Federal Register that AI-generated artifacts are not eligible for copyright: https://www.federalregister.gov/documents/2023/03/16/2023-05....

            • startupsfail 2 years ago

              If you take some copyrighted data, a set of books for example, count the words in those books, and then plot a distribution of the top 100 word frequencies, the copyright for that new image would belong to you.

              • MattPalmer1086 2 years ago

                Copyright in the specific image, sure, but not in the graph itself. Someone else could do the same thing and make their own graph image.

                • outofpaper 2 years ago

                  Exactly. Data is not covered by American copyright, and artifacts generated by LLM and diffusion tools are not covered either, unless there was human involvement and the humans are transparent about how they participated in creating the artifacts.

                  • startupsfail 2 years ago

                    For now there is a lot of human involvement. You pretty much need a team of engineers, or an equivalent, to get anything beyond minor fine-tuning done. And there is usually human labor involved at the labeling, feedback, and evaluation stages.

                    • outofpaper 2 years ago

                      The issue circles back to their needing to be transparent about how they did the work.

                      When it comes to intellectual property, there are two methods of protecting it: you can keep it a trade secret and only use it in house (the secret-sauce approach), or you keep things out in the open and seek copyright, patent, or trademark protection. You can't have it both ways, even more so with AI co-created artifacts. If they are transparent about all the steps involved and what the humans did, then they can seek protection for the human-created parts. This also allows others to replicate those steps and create similar artifacts.

                      It sounds like they and many other "AI" teams want patent-style protection without having to register for it. These teams are trying to write their own licenses for rights they do not have.

      • schleck8 2 years ago

        I highly doubt that any of this will hold up in front of a court. For intellectual property, not just the result matters but also the creation process, and there is enough work going into the data science here.

  • zan2434 2 years ago

    Hey! Awesome work. It seems like in theory this encoding scheme should enable a model like this to generate images as well, by outputting image tokens, is that right?

  • abrichr 2 years ago

    Thank you for the release!

    What can you tell us about this:

    > Our internal models (based on Fuyu) have extra capabilities related to our product. In particular,

    > 1. They can reliably perform OCR on high-resolution images

    > 2. They can do fine-grained localization of text and UI elements within those images

    > 3. They can answer questions about images of UIs

    Is this just a matter of additional fine tuning, or are there architectural differences?

    • amks 2 years ago

      Even in experiments that just add additional fine-tuning, we've seen models gain these capabilities!

  • Q6T46nT668w6i3m 2 years ago

    Neat idea! Are the patches encoded as tokens in the input sequence? This is something I really like about the multimodal PaLM papers, since it enables the multimodal tokens to be referenced.

    • ekelsen 2 years ago

      Image patches are projected directly into an embedding that goes into the decoder Transformer. The same thing could be done for audio.
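
      A minimal sketch of that idea (not Adept's actual code; the patch size and hidden dimension below are just assumed values):

          import torch
          import torch.nn as nn

          patch, hidden = 30, 4096                     # assumed patch size / model width
          proj = nn.Linear(3 * patch * patch, hidden)  # one linear layer, no separate image encoder

          img = torch.randn(1, 3, 1080, 1920)          # B, C, H, W
          patches = img.unfold(2, patch, patch).unfold(3, patch, patch)
          patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(1, -1, 3 * patch * patch)
          img_embeds = proj(patches)                   # (B, num_patches, hidden), fed straight into the decoder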

  • saran945 2 years ago

    Hi, will it work for HTML/app UI screenshots? Has it been trained using UI screenshots? Thank you.

  • visarga 2 years ago

    Do you offer paid API access to larger models?

  • acanb 2 years ago

    Can you guys launch a web Gradio demo until the transformers PR gets approved? I'd like to play around with the model.

fpgaminer 2 years ago

The architecture is quite compelling. I would not have expected it to work as well as it does. Glancing at the benchmarks it's basically on par with other VLMs in its class, despite having no separate image encoder.

Is there an associated paper? Or more specifically, details on the training dataset? It must have been a mix of text and VLM tasks, otherwise one or the other capability would have rotted during training. But I wonder if they trained off strictly VLM corpora, or also used plain image-text datasets like CLIP. It would be interesting if only the former.

Also makes me wonder if it could be trained on something like CommonCrawl where all the images are retained and interspersed correctly throughout the text. This model could theoretically train just fine off that, and it would unlock a whole new dataset effectively.

And has there been an inspection of what the model is outputting for predicted image "tokens"? Is it correctly predicting projected image patches to any degree of accuracy? And could therefore also generate images inline with text if another de-projection layer was trained?
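
If such a de-projection were trained, one rough sketch (purely hypothetical; nothing like this ships with the release) would be a second linear layer mapping decoder hidden states back to flattened pixel patches, trained with a reconstruction loss:

    import torch.nn as nn

    patch, hidden = 30, 4096                       # assumed patch size / model width
    deproj = nn.Linear(hidden, 3 * patch * patch)  # hidden state -> flattened RGB patch

    # hidden_states: (B, num_image_positions, hidden) predicted by the decoder
    # pixels = deproj(hidden_states).reshape(B, num_image_positions, 3, patch, patch)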

joanfihu 2 years ago

I’ve done a review of its UI navigation capabilities:

https://joanfihu.wordpress.com/2023/10/19/evaluating-adepts-...

  • abunner 2 years ago

    This is a really nice review. The examples helped me better understand the model's capabilities.

abrichr 2 years ago

Thank you to the amazing team at Adept.ai for making this available!

For anyone interested in contributing to a fully open source alternative, join us at https://github.com/OpenAdaptAI/OpenAdapt

Lots of interesting work to be done, including integrating with Fuyu-8B!

  • coder543 2 years ago

    "fully open source", but there is no license?

    https://github.com/OpenAdaptAI/OpenAdapt/blob/30581e47fa9aec...

    https://github.com/OpenAdaptAI/OpenAdapt/issues/246

    And Fuyu is under a non-commercial license, so there's not much to be done with it unless someone trains a new Fuyu-architecture model from scratch.

    • abrichr 2 years ago

      Thank you for pointing this out! You are correct that we have not yet decided on a license.

      I will admit my ignorance on this topic, and I didn't want us to rush into selecting a license that is inappropriate.

      Which one should we choose?

      • webappguy 2 years ago

        If it's for the win (?), the most permissive is the one you choose. This is an extraordinarily competitive space. The sooner you make the choice and it's MIT, the sooner I personally put forth serious contribution time, and the faster you grow in this broad and competitive ecosystem. Your main options are the GNU All-permissive License, the MIT License, the BSD licenses, the Apple Public Source License, and the Apache License.

        This developer recommends you go MIT.

      • coder543 2 years ago

        > Which one should we choose?

        It depends a lot on what you want the license to do, so I don’t really want to say one way or another.

        IANAL, but my understanding is that code without a license effectively has an “all rights reserved” license in the U.S., meaning that it can’t be used for anything at all — even non-commercial work.

thatcherc 2 years ago

Really cool that the image patches are converted to tokens with just a linear projection instead of a big embedding model! I wonder if that trick will prove viable for other modalities like audio.

  • rafaelero 2 years ago

    Not using embeddings/a lookup table means they can't generate images/audio, which to me is a severe limitation. Why bother going through the process of building a multimodal transformer if it can generate nothing but text?

    • leodriesch 2 years ago

      For an AI agent that should navigate a computer (which is Adept's use case IIRC) it should work, as it only has to output commands.

    • Philpax 2 years ago

      Many applications only need input, not output.

mark_l_watson 2 years ago

This looks so cool, and from reading the Hugging Face model card it should be easy enough to run. I do almost all of my work with text, NLP, IR, etc., and I have wanted to try multi-modal models. I just bookmarked the model card page.

I am also getting even more excited by the explosion of work on open models. I still haven’t adjusted to how good mistral-7B is, and it runs on my Mac without breaking a sweat.

  • JimDabell 2 years ago

    I gave it a shot on an M1 Max with 64GB RAM yesterday and it consumed all available RAM and hit a wall. I can run other, larger models without any problems so I assume it’s not an intrinsic limitation, but I didn’t spend any time debugging it.

    Mistral-7B is incredible for its size!

yeldarb 2 years ago

This looks epic. Definitely going to explore adding it to Autodistill[1] this weekend. Any chance you'll be publicly releasing the internal OCR finetune?

[1] https://github.com/autodistill/autodistill

devinprater 2 years ago

Awesome! I can't wait to see how we can make local models for, say, describing images offline, or even taking a few screenshots of a video game and describing what's going on.

stavros 2 years ago

This looks great! Is there any software that supports these? Llama.cpp, Ollama, LM Studio, etc. are really convenient, but I don't think they have image support yet?

paulkon 2 years ago

Can this be used to click around in the browser with text prompts? Maybe after some fine-tuning on screen recordings of specific workflows in browsers.

WanderPanda 2 years ago

Why don't these benchmarks judge the likelihood of the example answer? Just taking the MAP predictions seems like a waste of information.

thefcpk 2 years ago

One thing that puzzles me is the lack of multilingual models... it is a bit sad to see everything through the English language.

  • snats 2 years ago

    Yes, but currently there is a project called Aya[1] from Cohere For AI that I think is trying to create multilingual models.

    [1] aya.for.ai

    • tellarin 2 years ago

      And the project is looking for contributors across many languages!

      Full disclosure: I'm a contributor and a big believer in the project.

  • ekelsen 2 years ago

    I would try your language of interest...

StephenAshmore 2 years ago

Fascinating! I love seeing more multimodal ML. Thanks for sharing!

famouswaffles 2 years ago

Oh wow. This seems to be the best VLM released so far. The chart/UI understanding displayed in particular is superb.

lxe 2 years ago

Comparable with LLaVA-13B in benchmarks! Great work!

ronsor 2 years ago

Before someone else does, I'm going to point out that CC-BY-NC is technically not an open source license.
