Are ChatGPT and OpenAI Stealing Ideas? Do They Have the Right to Do So? How Do We Know They Don’t?


Klaudi

The technical details were messy for quite a while. We noticed that enabling double memory slowed down token generation: with double memory active, the AI processed around 40–50 tokens per second (see video below).

Without it, the speed jumped to 80+ tokens per second, almost a 100% increase (see video below).

We’ve been working hard building HugstonOne, not just as an app, but as a testament to what happens when a team of innovators pours its energy into solving real problems. Our mission has always been clear: to make AI tools more intuitive, more powerful, and more human. But lately, we’ve found ourselves in a strange, unsettling position. We’ve created something unique. The latest feature the Hugston team came up with is the double memory option.
And then, just days later, we see it appear elsewhere, without credit and without consent. It’s not really just about the feature; it’s about the idea, and about the question that has been haunting us for months: is this normal, or rather, is this the new normal? Because it keeps happening quite often lately (more precisely, since we started using AI).

The memory toggle is a deceptively simple switch. When turned on, the AI can pull context from its own internal memory and from a persistent local storage layer on the user’s device, allowing for richer, more contextually relevant conversations. When turned off, the AI discards all past context, offering a clean slate each time. This is exactly what users are after: a more human-like AI that remembers, but only when we want it to. We knew it would be a game-changer. And we were right.
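To make the behavior concrete, here is a minimal sketch of the toggle’s semantics in TypeScript. This is an illustration only, not HugstonOne’s actual code; every name in it (LocalStore, ChatSession, and so on) is hypothetical.

```typescript
// Minimal sketch of a memory on/off toggle (hypothetical names,
// not the actual HugstonOne implementation).

interface Turn {
  role: "user" | "assistant";
  content: string;
}

// Persistent layer on the user's device (in reality a file or database).
class LocalStore {
  private turns: Turn[] = [];
  append(turn: Turn): void {
    this.turns.push(turn);
  }
  recall(): Turn[] {
    return [...this.turns];
  }
}

class ChatSession {
  private llmContext: Turn[] = []; // the LLM's memory of the current session

  constructor(
    private store: LocalStore,     // persistent layer: earlier sessions
    public memoryEnabled: boolean, // the toggle
  ) {}

  // Memory off: clean slate, only the current message goes out.
  // Memory on: earlier sessions (local layer) + this session (LLM layer).
  buildPrompt(userMessage: string): Turn[] {
    const current: Turn = { role: "user", content: userMessage };
    return this.memoryEnabled
      ? [...this.store.recall(), ...this.llmContext, current]
      : [current];
  }

  record(turn: Turn): void {
    this.llmContext.push(turn);
  }

  // Flush this session into the persistent layer so a future
  // session can recall it when the toggle is on.
  endSession(): void {
    for (const turn of this.llmContext) this.store.append(turn);
    this.llmContext = [];
  }
}
```

With the toggle off, every prompt starts from scratch; with it on, two layers of context, the persistent store and the live session, are stitched back into every prompt.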

That wasn’t a surprise; it is well known that memory processing is inherently slower. What was a surprise was the logic behind it. In the version without double memory active, tokens streamed directly to the chat bubble. The local storage layer was active, but the LLM itself was not required to remember anything; only the local agent kept context, so there was a single layer of memory. In the memory-on version, the AI had to juggle two systems: the LLM’s own memory and the local storage.
The issue, we realized, wasn’t just the memory module itself but the coding logic and pathways used to manage it. In the first configuration, tokens streamed directly to the chat bubble, even though persistent memory was active in the tabs within local storage, not in the LLM itself. This direct streaming contributed to the faster token speed. In contrast, the version with memory used a double memory system: one within the LLM and a second, persistent layer stored locally on the user’s device. This dual storage mechanism, while enhancing the app’s capabilities, inevitably slowed down token generation.
The dual-layer architecture is complex and has tradeoffs, but it gave users more control. We are proud of it, all things considered.
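And a rough sketch of why the dual path costs throughput, again with hypothetical names and under the assumption that the memory bookkeeping happens on the per-token hot path:

```typescript
// Sketch of the two streaming paths (illustrative, not the real pipeline).

type TokenSink = (token: string) => void;

// Single-layer path: tokens go straight to the chat bubble, and the
// finished turn is persisted once at the end.
function streamDirect(tokens: Iterable<string>, renderToBubble: TokenSink): string {
  let text = "";
  for (const t of tokens) {
    renderToBubble(t); // the only work on the hot path
    text += t;
  }
  return text; // the caller persists the completed turn in one write
}

// Dual-layer path: every token also updates the LLM-side memory and
// the persistent local layer, adding work to each step of generation.
function streamWithDoubleMemory(
  tokens: Iterable<string>,
  renderToBubble: TokenSink,
  updateLlmMemory: TokenSink,
  persistLocally: TokenSink,
): string {
  let text = "";
  for (const t of tokens) {
    renderToBubble(t);
    updateLlmMemory(t); // extra bookkeeping per token...
    persistLocally(t);  // ...plus an extra write per token
    text += t;
  }
  return text;
}
```

If those two extra sinks roughly double the work done per token, a drop from 80+ t/s to 40–50 t/s is about what you would expect; batching the memory writes off the hot path is the usual way to claw some of that speed back.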

Just last week, the HugstonOne team celebrated the internal success of our memory toggle feature, but the excitement was short-lived. We had a working prototype, a clear technical advantage, and a plan to roll it out. Then, two days later, a new, sleek interface appeared on the ChatGPT platform (see picture below).


It was the first time the team had noticed it: ChatGPT offering users the exact same choice, a switch to turn memory on or off. The timing was, as Klaudi put it, “just too perfect.” The UI is different, the branding is theirs, but the core concept, the very idea of giving the user an on/off switch for LLM-level memory, is ours.

We were stunned. We had never shared our code (other than with the OpenAI platform). We hadn’t even deployed the feature to paying users. How could they have replicated it so quickly?

We’ve had this feeling before. A few months ago, we developed a code editor feature embedded directly into the chat bubble. It was a bold move, and we knew it would be controversial. We used OpenAI’s platform to test and enhance the code, even using their GPT-5 model for initial optimizations. Then, days later, ChatGPT rolled out a similar feature which, as you can imagine, was of nearly identical implementation. We couldn’t explain it.

We’ve always believed in open innovation. We use ChatGPT, and we’ve even contributed to their ecosystem. But there’s a line. And we’re starting to think that line is being blurred.

Our paid plan with ChatGPT explicitly states that our data isn’t used for training, and we don’t consent to our user interactions or proprietary features being shared with third-party platforms. But what if they’re still siphoning ideas? What if their models are learning from our public documentation, our forum discussions, even our API endpoints?
In this case, we built the memory toggle with the hope that it would empower users. We saw the potential, and we believed in it. Then we saw it replicated, without credit, without consent. We’ve seen it before. We’ve seen features appear on competing platforms with timing that’s too perfect to be a coincidence.

This is a legitimate concern about the future of innovation. If ideas can be stolen without proof, if the very tools that help us build are also helping others copy us, then what’s the point of building at all? We’re asking for clarity. We’re asking for boundaries.

“We are not trying to start a war,” Klaudi emphasizes. “We are proponents of open innovation and are immensely grateful for the tools that platforms like ChatGPT provide. Our team uses them, our marketers use them; we see them as powerful allies in our work.” The problem, Klaudi argues, is one of boundaries and consent. HugstonOne operates under a paid plan with explicit terms. “We have a contract,” he says. “It’s very clear: we do not consent to our data, our user interactions, or our proprietary features being used for training or by any third-party platforms. We are a paid service; we expect our data to be treated as our private intellectual property.”
Whether through data scraping, a compromised account, or a more direct form of corporate intelligence, the result is the same: innovation born of countless hours of hard work and investment is being co-opted without credit or compensation. “It’s not fun. It’s demoralizing and counterproductive,” says a teammate on the project, who asked to remain anonymous. When small companies can’t thrive because of monopolies, the economy is affected directly. “We pour our souls into this stuff. We’re trying to push the boundaries of what’s possible, and then we see someone else get the credit for the leap we made. It undermines the entire incentive to innovate.”
We see it as a pattern of concern.
These incidents raise serious concerns about the boundaries of intellectual property in the AI era. The avalanche of lawsuits over AI copyright infringement is also well known. While HugstonOne has a paid plan with ChatGPT that explicitly states its data should not be used for training, it is challenging to prove whether ideas have been directly lifted or whether this is merely a case of parallel development. The broader implications touch on AI ethics, intellectual property rights, and the responsibilities of AI companies.
Meanwhile, the AI platforms keep applying the same rule (push by any means until you are big enough to pay for the copyright), the law keeps getting broken, and someone needs to establish a precedent while the boundaries remain unregulated.

We’re not going to stop innovating. In fact, this has only made us more determined. But we’re going to be more vigilant. We’re going to document every feature, every technical detail, every development milestone. We’re going to monitor the market more closely than ever. And we’re going to speak up because if the boundaries of innovation are being eroded by the very tools meant to help us, then we all have a stake in defining what those boundaries are.
Every new feature we develop will be logged and dated, and its technical specifications will be documented. We will be monitoring the landscape more closely than ever before. This incident forces a crucial conversation in the tech industry: how do we balance the rapid, open-sharing nature of AI with the protection of intellectual property? How can we ensure that the individuals and small teams driving innovation are not steamrolled by corporate behemoths? We want to know what can be done about it. We’re excited to build the future, but we need to know that when we build something, it’s ours. We deserve that respect.
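One lightweight way to make such a record verifiable, a sketch rather than a description of our actual tooling, is to hash each specification and store the digest with a timestamp, so the content can later be proven unchanged since that date:

```typescript
// Sketch of a dated, tamper-evident feature log (illustrative only).

import { createHash } from "node:crypto";

interface FeatureRecord {
  name: string;
  specSha256: string; // digest of the full technical specification
  recordedAt: string; // ISO-8601 timestamp
}

function recordFeature(name: string, specText: string): FeatureRecord {
  const specSha256 = createHash("sha256").update(specText, "utf8").digest("hex");
  return { name, specSha256, recordedAt: new Date().toISOString() };
}

// Example: log a spec the moment it is finalized. Anyone holding the
// original text can recompute the hash and confirm it matches.
const entry = recordFeature("double-memory-toggle", "full spec text goes here…");
console.log(JSON.stringify(entry, null, 2));
```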

We’re grateful for the AI tools that exist. We’re excited about what they enable. But we also need to know: Is this a one-time coincidence, or is this the new normal? And if it is, how do we, as an industry and as a society, stop it?
We’re just looking for clarity.
The team is determined to protect their intellectual property.
Because it’s about the hours, the sacrifices, the late nights, and the relentless drive to create something that matters.
Because in the end, innovation isn’t just about what you build but also about who builds it. It is about incentives and rewards, which are the motor of the world; it is about keeping creativity alive.

We’re not looking for a public fight. We’re looking for answers. Because when the next breakthrough is out there, we want to be sure credit is given to the authors (as it should be in a civilized society) and that the law still works, avoiding chaos and the law of the jungle.
We’re not the only ones feeling this. The AI industry is a delicate ecosystem. It thrives on collaboration, but it also needs protection. Without clear rules, innovation becomes a gamble. And that’s not fair to the people who pour their lives into this work.
This is the question haunting the team at HugstonOne. And we’re not going to stop asking it until we are satisfied.

This is our story.

And it’s just the beginning.