DeepSeek may have used Google's Gemini to train its latest model

techcrunch.com

29 points by samsmithy 7 months ago · 16 comments

philipwhiuk 7 months ago

> Distillation isn’t an uncommon practice, but OpenAI’s terms of service prohibit customers from using the company’s model outputs to build competing AI.

I have the absolute tiniest of violins for this given OpenAI's behaviour vs everyone else's terms of service.

  • sovietmudkipz 7 months ago

    “Copyright must evolve into the 21st century (…so that AI can legally steal everything produced by people)”

    And also “Don’t steal our AI!”

    • jsheard 7 months ago

      The world is not prepared for the mental gymnastics that OpenAI/Google/etc will employ to defend their copyright if their big models ever get leaked.

  • Zetaphor 7 months ago

    I'm still unclear how they can claim this, considering Gemini's raw thinking traces were never exposed to the end user, only summaries.

parineum 7 months ago

At this point, they're all using each other, because so much of the new content they're scraping for training data is itself AI-generated.

These models will converge and plateau because the datasets are only going to get worse as more of their content is incestuous.

  • jsheard 7 months ago

    The default Llama 4 system prompt even instructs it to avoid using various ChatGPT-isms, presumably because they've already scraped so much GPT-generated material that it noticeably skews their model's output.

  • sovietmudkipz 7 months ago

    I recall that AI trained on AI output over many cycles eventually becomes something akin to noise texture as the output degrades rapidly.

    Won’t most AI produced content put out into the public be human curated, thus heavily mitigating this degradation effect? If we’re going to see a full length AI generated movie it seems like humans will be heavily involved, hand holding the output and throwing out the AI’s nonsense.

    • AstroBen 7 months ago

      Some will be heavily curated, by those who care about quality. This is a lot slower to produce and requires some expertise to do right, so there will be far less of it.

      The vast majority of content will be (is) the fastest and easiest to create - AI slop

  • wkat4242 7 months ago

    Yes indeed, some studies have already been done on this.

  • zackangelo 7 months ago

    There might be a plateau coming but I’m not sure that will be the reason.

    It seems counterintuitive but there is some research suggesting that using synthetic data might actually be productive.

    • jsheard 7 months ago

      I think there's probably a distinction to be made between deliberate, careful use of synthetic data, as opposed to blindly scraping 1PB of LLM generated SEO spam and force-feeding it into a new model. Maybe the former is useful, but the latter... probably not.
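The degradation loop described in this subthread is sometimes called "model collapse" in the research literature. A minimal toy sketch of the idea (all parameters illustrative, not from any study mentioned here): fit a Gaussian to data, sample fresh "synthetic" data from the fit, refit on that, and repeat. Because each generation estimates its parameters from a finite sample, estimation error compounds and the fitted distribution tends to drift and narrow over generations.

```python
import random
import statistics

random.seed(0)
SAMPLE_SIZE = 20  # small sample per generation makes the effect visible

def fit_gaussian(samples):
    """Estimate (mean, stddev) from a sample -- the 'training' step."""
    return statistics.fmean(samples), statistics.pstdev(samples)

def sample_generation(mu, sigma, n=SAMPLE_SIZE):
    """Draw synthetic data from the current model -- the 'output' step."""
    return [random.gauss(mu, sigma) for _ in range(n)]

data = sample_generation(0.0, 1.0)  # generation 0: the "human" data
variances = []
for _ in range(100):
    mu, sigma = fit_gaussian(data)     # train on the previous generation's output
    data = sample_generation(mu, sigma)
    variances.append(sigma ** 2)

print(f"fitted variance, generation 1:   {variances[0]:.3f}")
print(f"fitted variance, generation 100: {variances[-1]:.3f}")
```

Under these assumptions the fitted variance shrinks over generations, i.e. the model loses the diversity of the original data; careful curation or mixing in fresh human data is the usual proposed mitigation, which matches the distinction drawn above.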

ksymph 7 months ago

Interesting. The tonal change has definitely been noticeable. It also seems a bit more succinct and precise with its word choice, less flowery. That does seem to be in line with Gemini's behavior.

vb-8448 7 months ago

I wonder if at this point it really matters who used whose data ...

hulitu 7 months ago

> Sam Paech, a Melbourne-based developer who creates “emotional intelligence” evaluations for AI, published what he claims is evidence that DeepSeek’s latest model was trained on outputs from Gemini. DeepSeek’s model, called R1-0528, prefers words and expressions similar to those that Google’s Gemini 2.5 Pro favors, said Paech in an X post.

And if you search for the personal information of Android users, including location, sex, and political orientation, it is all there. /s
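The article's quoted claim rests on comparing which words and expressions two models prefer. As a hedged sketch of that general idea (this is not Paech's actual methodology, and the sample strings are invented), one crude signal is the cosine similarity between bag-of-words frequency vectors of two outputs:

```python
import math
from collections import Counter

def word_freqs(text: str) -> Counter:
    """Crude tokenizer: lowercase, whitespace-split word counts."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented samples: A and B share stylistic tics, C does not.
sample_a = "let us delve into the nuanced tapestry of this topic"
sample_b = "we should delve into the nuanced landscape of this topic"
sample_c = "here is a short answer with different vocabulary entirely"

sim_ab = cosine_similarity(word_freqs(sample_a), word_freqs(sample_b))
sim_ac = cosine_similarity(word_freqs(sample_a), word_freqs(sample_c))
print(f"A vs B: {sim_ab:.2f}, A vs C: {sim_ac:.2f}")
```

Real stylometric evaluations would use far larger samples and control for topic, but the design choice is the same: shared vocabulary preferences push the similarity score up.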
