Llama That Thinks

colab.research.google.com

1 point by torrmal a year ago · 1 comment

torrmalOP a year ago

Hey HN,

So, I was casually trying to make LLaMA achieve consciousness (as one does on a Tuesday) when I stumbled upon something hilarious. It turns out you can make these language models "reason" with about as much code as it takes to write a "Hello World" program. No, really!

https://colab.research.google.com/drive/1jfsG0_XP8a5mME76F5a...
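
If you'd rather not open the notebook, here's a minimal sketch of the general idea, assuming it boils down to chain-of-thought prompting (the transformers pipeline and model name below are my illustrative choices, not necessarily what the notebook uses):

    from transformers import pipeline

    # Illustrative checkpoint -- swap in whatever LLaMA variant you have access to.
    generate = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")

    question = "A train leaves at 3pm doing 60 mph. How far has it gone by 5pm?"
    prompt = question + "\nLet's think step by step."  # the classic CoT trigger

    print(generate(prompt, max_new_tokens=256)[0]["generated_text"])

All the "reasoning" lives in one appended sentence; whether that counts as thinking is exactly the crisis below.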

Here's the philosophical crisis I'm having now: When you ask an LLM to code something vs. asking it to reason about something... are we basically watching the same neural spaghetti being twirled around?

The real question is: if one can make an AI model "think" with 5 lines of code, does this mean:

a) An LLM should be able to write its own reasoning code

b) We've been overthinking AI?

c) The simulation is running low on RAM

d) All of the above

Would love to hear your thoughts, preferably in the form of recursive functions or philosophical paradoxes.

P.S. No LLaMAs were harmed in the making of this experiment, though several did ask for a raise.