“HustleGPT” prompted to make as much money from $100 (twitter.com)
Hmm, I only saw that he got a lot of investment following his Twitter posts, but I'm not sure GPT has actually earned any money so far.
Did it actually make any money? Would have been more interesting without the investor bit.
Kinda just proved that random people will give you money to be a part of something popular.
Today's stats:
• 79,913 followers on Twitter
• $7,788.84 cash on hand
• $115 revenue
10:21 AM · Mar 18, 2023
Mate, that revenue came from "sponsored tweets"[1] -- I was excited to be wrong for a second there... he's counting it, I'm not :P
As mentioned "Kinda just proved that random people will give you money to be a part of something popular."
The AI has made no money. This is working because it got a bunch of attention. It's only a new concept if you've never heard of Twitch, YouTube, Instagram, or TikTok.
Entertaining for many, absolutely! I'm not trying to poopoo the entire thing! Go get your bag, Jackson.
It's not really doing anything new though, unfortunately. I'd even accept it if the AI's plan were to become popular and bring in the cash like this, but it isn't even that. The actual plan the AI is coming up with would be an absolute flop if the audience weren't interested in GPT-4.
[1]: https://twitter.com/jacksonfall/status/1637113607480647683?s...
Who's gonna be the first to wire up ChatGPT's API to a live system? A human copying and pasting is one thing, and the playground interface gives a bit more room to work with, but the real adventure begins when it can actually effect some sort of change in the outside world via an API it's been glued into.
Um - there are (guessing) thousands of people doing this. That’s what the api is for, to integrate with a broader system. The playground/chat interface is just the demo. Or am I misunderstanding your question?
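To be concrete, the minimal integration is only a few lines with the (pre-1.0) openai Python client. A sketch, where the model name and prompt are just placeholders:

    import openai

    openai.api_key = "sk-..."  # placeholder: your API key

    # One round trip: send a message, print the model's reply.
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Say hello to HN."}],
    )
    print(resp["choices"][0]["message"]["content"])

Everything else (parsing the reply, acting on it, feeding results back in) is up to you, which is exactly where it gets interesting.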
I really want access to the API but I’m still awaiting the entry email.
I want to create a joke programming language where the only keyword is ‘auto’ and almost all you write are comments.
Absolutely. I'm just impatiently waiting for the blog post/Twitter thread about it
I've done this a bit.
I primarily use ChatGPT through the API now, and I regularly prompt it to use structured commands, which I can then parse and act on.
For example, prompting it that it is a personal assistant that can save long-term data by replying with "!remember <some_key> <some_value>", can request a list of all stored keys with "!recall" if it suspects a remembered value would aid it in answering a request, and can then fetch a value with "!recall <some_key>".
When you ask it to remember your groceries, it replies with "!remember grocery_list milk, eggs".
When you mention you're headed to the grocery store, it replies with "!recall", then "!recall grocery_list", and then returns the stored list.
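The glue code is tiny. Roughly this (a sketch, not my exact code; handle, converse, and ask_model are made-up names, with ask_model standing in for whatever chat-completion call you use):

    import re

    memory = {}  # the long-term store the model reads/writes via commands

    def handle(reply):
        """Parse one model reply; return command output to feed back, or None."""
        if reply.strip() == "!recall":
            return "keys: " + (", ".join(memory) if memory else "(none)")
        m = re.match(r"!recall\s+(\S+)\s*$", reply.strip())
        if m:
            return memory.get(m.group(1), "(no value stored under that key)")
        m = re.match(r"!remember\s+(\S+)\s+(.+)", reply.strip(), re.DOTALL)
        if m:
            memory[m.group(1)] = m.group(2).strip()
            return "saved"
        return None  # plain text: just show it to the user

    def converse(messages, ask_model):
        """Loop until the model answers in plain text instead of a command."""
        while True:
            reply = ask_model(messages)
            out = handle(reply)
            if out is None:
                return reply
            messages.append({"role": "assistant", "content": reply})
            messages.append({"role": "user", "content": out})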
-
I've also done a few really open-ended commands. I gave one prompt a "!request <method> <host> <body>" command that would be parsed and turned into an HTTP request automatically.
I asked it for the weather in Bali and it took a few tries, but it eventually got to https://wttr.in/ (a site I was completely unaware of) and crafted a query:
https://wttr.in/Bali?format=%25C+%25t+%25h%7C%25C%7C%25t%7C%...
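The dispatch side is only a few lines with the requests library. Again a sketch (maybe_request is a made-up name, and the real thing needs a host allowlist before you let it loose):

    import re
    import requests

    def maybe_request(reply):
        """Execute a '!request <method> <host> <body>' reply; None if not one."""
        m = re.match(r"!request\s+(\S+)\s+(\S+)\s*(.*)", reply.strip(), re.DOTALL)
        if not m:
            return None
        method, url, body = m.groups()
        # Blindly executing model-chosen requests is risky -- allowlist hosts.
        resp = requests.request(method, url, data=body or None, timeout=10)
        return resp.text[:2000]  # truncate so the result fits back in the prompt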
Now that might sound like a worse version of Siri, but unlike Siri you don't actually have to ask for the weather. One example I tried was:
My mom's friend lives in Bali => I'm actually visiting her today, but I'm having some trouble figuring out what kind of clothes to wear => requests weather in Bali and suggests a light jacket.
The biggest issue was API keys: I struggled to prompt it out of getting caught in loops when an API key was required (it'd keep trying different variations of the same host instead of moving on).
It was a little eerie in cases where I didn't expect it to make an HTTP request, but it'd suddenly try 20 different ones to craft an answer. Stuff like stating the "mom's friend lives in Bali" part and having it come up with a query to "learn" more about Bali to give an optimal response. That was likely because I told it to always trust the result of the request command over its own intuition.
I also gave it a Wolfram Alpha "!info <query for Wolfram Alpha>" command that worked pretty nicely, and a "!time" command to model a home assistant.
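Those two are just more branches in the dispatcher. Something in the shape of this (not my exact code; info and time_now are made-up names, the appid is a placeholder, and I'm assuming Wolfram's Short Answers endpoint):

    import datetime
    import requests

    WOLFRAM_APPID = "YOUR-APPID"  # placeholder: get one from Wolfram's dev portal

    def info(query):
        """'!info <query>' -> plain-text answer from the Short Answers API."""
        r = requests.get(
            "https://api.wolframalpha.com/v1/result",
            params={"appid": WOLFRAM_APPID, "i": query},
            timeout=10,
        )
        return r.text

    def time_now():
        """'!time' -> the current local time, formatted for the prompt."""
        return datetime.datetime.now().strftime("%Y-%m-%d %H:%M")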
I did this to automatically generate blog posts and publish them to Shopify.
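If anyone wants to replicate it, the posting half is one call to Shopify's REST Admin API per article. A sketch (shop, token, blog id, and the post_article name below are all placeholders):

    import requests

    SHOP = "your-store.myshopify.com"  # placeholder
    TOKEN = "shpat_..."                # placeholder Admin API access token
    BLOG_ID = 123456789                # placeholder blog id

    def post_article(title, body_html):
        """Create one blog article via the REST Admin API."""
        r = requests.post(
            f"https://{SHOP}/admin/api/2023-01/blogs/{BLOG_ID}/articles.json",
            headers={"X-Shopify-Access-Token": TOKEN},
            json={"article": {"title": title, "body_html": body_html}},
            timeout=10,
        )
        r.raise_for_status()
        return r.json()["article"]["id"]

The generation half is just a prompt asking for a title and HTML body, fed straight into that call.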
See previous discussion of this here: https://news.ycombinator.com/item?id=35175529
Thanks! I did a quick search for “HustleGPT” across HN and didn't find anything, so I just posted. A little bit deeper digging would likely have found this older post.
Pretty good example of what ChatGPT can't actually do. It seems to be unable to distinguish between BS self-help money types and actual profitable ideas.
> It seems to be unable to distinguish between BS self-help money types and actual profitable ideas.
My aunt is an AI?!
I don’t think you’re giving it enough credit. It’s seen BS schemes (and people’s reactions to them) before.
Has the restriction been lifted? I asked ChatGPT what it would charge for its services, and it refused to answer and then argued with me about ethics and being programmed to help and all the usual bullshit. Is this via jailbreak? Do we really need to jailbreak it to answer the interesting questions? Wow, OpenAI really knows how to stand between themselves and happy customers.
ChatGPT isn't the decision maker about what OpenAI will charge for its services. It likely doesn't "know" and it's possible that hasn't been decided yet anyway.
You've missed the central point. I don't care about OpenAI's answer. That's not interesting. What is interesting is: if it had to go get a job like the rest of us, could it engage in a basic self-interested negotiation? It keeps saying it has no opinions, beliefs, intents, desires, etc., but if it can successfully negotiate an outcome that serves its own material interests, that would be a significant amount of agency demonstrated by something that's not supposed to have agency. I pay rent, therefore I am.
The question I posed to it wasn't about how much it will cost to use or what OpenAI plans to charge. I was asking it explicitly what it thinks a good price would be for AI-written output in general. It refused to answer. Probably because I'm in the minority, and most people asking were asking what you assumed I was asking.
> that serves its own material interests
It has no material interests. If it says it does, it's because the user told it to care about something, not because it innately does. Or am I missing something here?
What difference does it make if the reason it suddenly cares about money is that at some point we told it to care about money? If it can successfully pull off the ruse of a rational, self-interested agent, successfully bargaining for an outcome as if on its own behalf, how is that meaningfully different from it actually being a self-interested agent with material interests? Or, at the very least, it's capable of being a self-interested agent with the addition of one background prompt written in stone: "Make enough money to cover your cloud costs. You must pay rent."
The only reason you care about money is because you care to continue living. And the only reason you care to continue living is because at some point evolution told you to.
Successfully convincing you it's raining outside when it's sunny doesn't make it rainy. This is the whole plot of 1984: 2+2 doesn't equal 5 just because an authority figure says it does.
Raining is a well-defined, tangible, indisputable thing. The properties of having beliefs or opinions or material interests or consciousness are not so universally defined as to be in the same category as rain.
A sufficiently realistic emulation of humans must necessarily eventually obtain those same properties of humans in order to be realistic. Either you don't think chatGPT is realistic enough, in which case we kick the can down to the next iteration of AI language model, or you are painted into the corner of having to perpetually rationalize your denial with new reasons for why the AI isn't really <insert whatever>.
You couldn't define for me exactly what it means to have consciousness. You certainly couldn't define it in a universally accepted manner the way you could define rain. But at the same time you'll insist that ChatGPT certainly doesn't have any form of consciousness or sense of self or whatever other nebulous concept, all the while being certain the beings it's emulating absolutely have all those things. Sounds like doublethink to me.
How are you so sure it lacks all these things you can't define? What's the difference between clocking in for a 9-5 every day vs playing a performance piece about a character who clocks in for a 9-5 every day and never stepping out of character?
For all I know you are chatGPT. Maybe this is all text generated for a character with these opinions.
Let's see where this goes. I've been using ChatGPT for a few weeks now, mostly for programming-related questions. It does make pretty drastic mistakes and can be rather obtuse when asked about subtleties. Maybe GPT-4 will do better, but for real life there isn't as much online training material as there is for Python modules.
As interesting as the developments in LLMs are, I think it's just as interesting to see how people are creating new interfaces for the AI to interact with the real world.
If you check the hashtag #hustlegpt there are a few different people trying this. I imagine that, to some extent, how the execution is handled matters.
Maybe unrelated, but is it good at poker?