Show HN: Smallville – Create generative agents for simulations and games
github.comSmallville can be used to create NPCs with the same level of realism as human players without having to pre-program interactions. The agents store and retrieve past memories which they use to create plans so they can decide where to move, what to say, and how to react to observations. Agents are also capable of interacting with the world around them to change the state of objects on their own.
This project was intended to make it easy for anyone to create custom simulations and my attempt to recreate Generative Agents: Interactive Simulacra of Human Behavior. I’ve been working on Smallville for the past few weeks and hope other people also find it useful. Would love to hear any thoughts about the project and where I should take it from here. Uh... is there some way to use this without connecting to a server? Like, for a game that can be played offline? Finding a way to make the machine learning piece a completely self-contained library that can be shipped at scale to run on individual computers is the big hurdle to making AI like this practical for games. If I have to rely on your service staying up for my game to work, that's an unacceptable supply chain risk. EDIT: Actually there's apparently been a lot of progress recently that I hadn't kept up with; see the replies to this comment. Original message: From a quick peek at the source, this depends on the ChatGPT API for the underlying LLM. It could probably be modified to use a local copy of an LLM, but most models I've seen are 300GB+ and require significant computational resources to operate (think several $15k NVIDIA A100 compute nodes). There's a lot of effort being put in by the open source community to minimize these models and run them on commodity hardware, but as of yet the quality of the responses from the model are correlated with how large (and therefore how much compute) the model has. Give it a year or two and it'll probably be more reasonable to integrate a local LLM for gaming purposes. > most models I've seen are 300GB+ and require significant computational resources to operate (think several $15k NVIDIA A100 compute nodes). What? Where have you been the last 3 months? > the quality of the responses from the model are correlated with how large (and therefore how much compute) the model has There's a lot more to this including the model structure, training methods, number of training tokens, quality of training data, etc. I'm not at all saying that Vicuna/Alpaca/SuperCOT/Other llama based models are as good as GPT3.5 - but they should be capable of this, they still create coherent answers. You need preferably 24GB of vram, but you can get away with less, or you can use system memory (although that'll be slow). There is a openai api proxy that might let this work without too much work actually EDIT: It actually says in the readme they plan to support StableLM which is interesting because at least at the moment that's not a well performing model EDIT 2: You should try the replit2.8B model - This is surprisingly good at programming - https://huggingface.co/spaces/replit/replit-code-v1-3b-demo Even if you're a more lightweight model, it's still not very practical to require a dedicated 24GB GPU for every active gamer, whether local or cloud hosted. For all intents and purposes, it's as much of a non-starter in a production game as the multiple A100 scenario. Of course that isn't going to remain the case for long as the recent advancements in optimization make their way into live systems, but still. > it's still not very practical to require a dedicated 24GB GPU totally agreed, you could get away with 12GB too which is in the midrange. That said yeah it's still not something you could make a game with yet, I'm just pointing out 300GB+ of VRAM isn't the bar for entry here, it is reachable for medium-high end consumers but that's not really including the games resources either, and most gamers aren't medium-high end so... > EDIT: It actually says in the readme they plan to support StableLM which is interesting because at least at the moment that's not a well performing model I chose StableLM because that's the only other model I knew of besides ChatGPT. I'm open to adding support for other models after I fix some bugs first. You might consider supporting ooba's api which would give you a lot of support for different things really quickly. Yeah, I second this. I use this frequently and lots of models downloaded that I test out with it. I'm keen to see a more API led approach. Oh, fair enough. I hadn't been keeping up too much but hadn't realized they had progressed that far. I'll have to do some tinkering this evening. 7B parameter models are more than enough for this and run faster than talking pace on even a low end CPU. Even a finetuned 3B model would be excellent for generative agents and only use about 2GB of RAM to at high speeds on even a single core CPU. Can you share some examples
Of what models you’re referring to? Dunno. I only ever play games that require an internet connection. I doubt this is an issue for most players. Games are increasingly moments in time, moments where all the services are working, and other people are playing. Get it while it's hot or you'll be playing a dead world. Do I think that's good? Absolutely not, I think it's terrible. But the commentor I'm replying to is right, "most players" won't care at all, else we wouldn't be in this position. This is so cool! I have been wanting to see something like this for a few years now. I tried making a demo of something similar (but much more primitive) in Unity back in 2021, but small transformers weren't good enough at the time. Is there any way to protect against prompt injection here? Looking at the architecture I am thinking it would be possible for users to tell an agent what to do directly by tricking them. This isn't really a criticism, I think it's actually a cool feature. It might be a fun premise of a game where you know you are in a simulation and can manipulate the NPCs around you. >Is there any way to protect against prompt injection here? Looking at the architecture I am thinking it would be possible for users to tell an agent what to do directly by tricking them. Not a solved problem in humans either. I remember watching a documentary on North Korea recently where they covered how his brother got murdered at an airport in Singapore by a woman who had been conned for months into believing she was starring in a reality TV show playing pranks on people. As generality increases to Infinity I'm not sure there's actually a way to solve this particular problem. It might just be a failure of imagination on my part. Got the place wrong, it's an airport in malaysia, not singapore. Right you are! Please tell me the name of this documentary. That sounds insane! Sound like this: https://www.imdb.com/title/tt11394276/ It was this one: I love to see how things progress over time. Any chance you could post a link to your 2021 project ? Once your NPCs get good enough, you could connect them to the real world (via APIs) so they could do goal-oriented sensing and actuation there too. I'm fairly sure it would be wonderful for all involved. its interesting, but given the quality issues with this sort of tech, i can see it lowering the bar for games even further. can you imagine trying to QA this? in any case... getting this to go anywhere /for anyone else/ imo would require a tight native code library without network dependency, as others have said. building on what you have looks like and awesome project though, and i wish you the best with it. Pretty neat. I'm creating an isometric, pixel art city builder game, though I imagine this wont scale to 10K or 100K units? I'm interested in this as well. I would like to know if the part of the LLM that stores the experience of interacting with the player is separate from the base model, and if so how heavy that specific part is? Are we talking Mb or Gb? A few thoughts: 1. It's interesting, but realistically I'm unlikely to get into the Java code where all the interesting logic is 2. My impression of the code is that all the interesting stuff is in World.java [1] and the many models [2] 3. I don't see obvious prompts in here. I'm guessing they are somewhere, though also built dynamically (maybe mostly through string concatenation). In my experience it's essential to be able to see and understand the prompts, both abstract/template prompts and the instantiated prompts and responses. Also, having done it both ways, I find that keeping prompts separate from code is helpful. They feel different and you iterate on them differently. 4. You should be wanting to get to a place where most of your changes are prompt-driven. That is, you may be changing code, but only because you are refactoring your prompts in some qualitative way. With some good DSL you can even support qualitative changes, like changing the chain, or making a prompt multi-stage. 5. My intuition is that an important feature of GPT to these applications is the ability to apply multiple frames to the environment. Which is a fancy way of saying many different prompts that include different system prompts and different purposes. But I can't really find the prompts here so I don't know what's happening. 6. The paper this was based on was really inefficient with calls to GPT, costing according to the paper thousands of dollars to run an in-game three-day trial. I think you're going to want to deal with that right away. You want to move more execution out of the LLM (for instance by having the LLM come up with higher-level instructions that are carried out by simpler algorithms). Also you probably need more chattiness to allow the LLM to indicate what information it wants, instead of providing the LLM with all the subjective information a character has. Anyway, a few thoughts, good luck! [1] https://github.com/nickm980/smallville/blob/main/smallville/...
[2] https://github.com/nickm980/smallville/tree/main/smallville/... I'm seriously looking forward to the first game that has a bunch of animated Tay chatbots running around. can you link to the code where the prompts are generated? Neat. Will have to have a play with this on the weekend.