Show HN: An attempt to grow a mind – building software with an inner life

momentbymoment.app

15 points by shahabebrahimi 13 days ago · 19 comments

_wire_ 13 days ago

Isn't it the case that everything pours from the user's container into the remotes to make this work?

Is it also the case that the more it knows the larger the token burden to reinstate "awareness", leading to an ever growing expense of recovering state?

Isn't this entire scheme about getting behind every sort of firewall to dump users' most private details and context into the apparatus of AI companies with no limit on retention and use?

Isn't it also true that privacy is undefined and that the infrastructure and these services are directly plumbed for the same kinds of surveillance that Snowden exposed?

Isn't it the case that users are expressing implicit consent to be exploited in any / every conceivable manner through the data they exfiltrate and are giving this prize of dominion over themselves to the barons of industry at the user's own expense?

Isn't it the case that if the assistant works as advertised, users dig pits for themselves out of ever-growing dependency on others for the most personal aspects of their lives? And isn't it true that if users could effectively opt out once they get started, that option would only prove the service is a disposable gimmick?

All of these observations have applied to every aspect of personal computing since its inception, and a review of history is pretty damning: political and economic servitude was manifesting even among the elite positions of society before AI, and AI magnifies the hazards by orders of magnitude.

Dear AI, please explain how or why these observations are inappropriate, wrong-headed, or based on faulty assumptions.

  • shahabebrahimiOP 12 days ago

    You're right that the content goes to an LLM provider. That's unavoidable if the thing is to work. I don't (and won't) sell your data. But you're right that I can't control what LLM providers do with API traffic under their policies. That's a real tradeoff. I think that's a valid concern, and I don't have a great answer for it.

spaldingcactus 13 days ago

Alternative signin methods?

  • shahabebrahimiOP 13 days ago

    Unfortunately no. Google Auth was the easiest method for me to implement. Your data remains private.

    • esperent 12 days ago

      It's understandable, but I do have to say: all that initial beautiful prose on a black screen, several pages of it... and then a big white "Sign in with Google" button completely undercuts the message. I had an almost visceral reaction to it. Maybe you can present it better somehow?

      • pixel_popping 12 days ago

        I felt the exact same! I was absolutely sold until the last frame, then decided to drop it because of this. Please, OP, add a regular signup method that doesn't involve a third party.

      • shahabebrahimiOP 12 days ago

        Fair point. I'll fix it.

atemerev 13 days ago

I have built a persistent personified agentic assistant with self-awareness and neuroscience-inspired cognitive architecture: https://lethe.gg

  • kseistrup 11 days ago

    Tlon's bot also seems to have persistent memory:

    * https://tlon.io/

    The two may have vastly different implementations, though.

  • shahabebrahimiOP 12 days ago

    Looks interesting. Different goals, though. Yours is a memory layer for an assistant that serves you better. What I'm trying to build is something that has its own experience.

sliamh11 12 days ago

This is fascinating. Are the 'moments' pre-defined or generated? What's the LLM behind it and what's the macro-level architecture?

dnnddidiej 12 days ago

Jason is quiet for now, reflecting on your words.

Should be ready to talk in 23h 58m

Cute 429!

fcpguru 13 days ago

this is really great. I've thought about building something like this for a while now. well done.
