Images 15 and 16: an example of a hashtag still in use (left); on the right, another hashtag for which CSAM-related hashtags are still suggested. Screenshots recorded on 29 July.
- Posts did not appear with the same frequency in the “top” section, likely because the shorter take-down times left less room for amplification by automated commenting. However, since the key vulnerabilities themselves were not addressed, posts continued to be published in the “latest” section, and some still made it to the “top” feed.
Image 17: CSAM posts still reaching the “top” feed on 29 July, and several posts appearing in the “latest” feed.
- Amplification through automated commenting continued, albeit on a smaller scale. For example, the post in Image 18 had 174 likes, no comments, no retweets and 26 views after 1 minute. After 6 minutes, it had 59 comments, 4 retweets, 191 likes and 1.6k views. All the comments were spam.
Image 18: CSAM posts still being amplified.
It initially seemed that X was addressing the CSAM operation; the decreased take-down times and the regular wiping of hashtags suggested as much. However, once the researchers’ account was age-verified, the CSAM content became available again and the operation came back in full swing. Part of X’s countermeasures therefore appears to be linked to the new age restriction policy.
Image 19: CSAM posts temporarily restricted on the basis of age restrictions.
Indeed, X’s approach seems to have been to implement its age assurance policy to shield sensitive content from accounts belonging to users under 18. This is a departure from the platform’s original response to the operation, in which X suspended accounts, likely on the basis of violations of platform rules.
Under the age assurance policy, X verifies a user’s age via a set of automated and proactive measures.
This age verification measure seems to be a reaction to the need to comply with the new Irish Online Safety Code and the UK Online Safety Act, both of which include provisions on age assurance to keep minors away from harmful content, including pornographic and violent content. Indeed, on 24 July, the Irish media regulator clamped down on X over this matter. In this sense, the measures are not CSAM-specific, nor do they systematically tackle the underlying technical issues: the blocking of URLs and hashtags, and the ease of creating fabricated accounts.
Update after 6 August
The researchers were able to verify that age-verified accounts can still access the content.
The operation continues in much the same manner as during the initial investigation, reaching tens of thousands of users.
Image 20: CSAM content still available for age-verified profiles.
Neither the operation itself nor the underlying issues of easy account creation and inconsistent content moderation seem to have been addressed, and the content remains easily available.
Systemic risks
Similarities to Doppelganger and Operation Overload
The Russian influence operations Doppelganger and Operation Overload use one X account to post their content and then others to amplify it inauthentically. This is made possible by the ease of creating disposable X accounts: creating one manually takes only about 1.5 minutes, and doing so automatically likely requires mere seconds.
Because X requires only an email address to create an account, with no meaningful safeguards in the account creation process, threat actors can easily create a practically unlimited number of disposable accounts for their illegal activities.
X needs to put safeguards in place against the bulk creation of accounts, such as phone number verification, blacklisting email domains used by temporary email services, tracking and blocking IP addresses that create several accounts within a short period of time, and flagging other patterns of behaviour that X can detect through its logs.
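X’s internal systems are not public, so the following is only a minimal sketch, in Python, of the kind of signup-time safeguard described above: a temporary-email-domain blacklist combined with a per-IP rate limit. The domain list, thresholds, and function names are illustrative assumptions, not X’s actual rules.

```python
import time
from collections import defaultdict, deque

# Illustrative examples only; a real deployment would use a maintained
# list of disposable-email providers.
DISPOSABLE_DOMAINS = {"mailinator.com", "guerrillamail.com", "10minutemail.com"}

MAX_SIGNUPS_PER_IP = 3   # assumed threshold
WINDOW_SECONDS = 3600    # assumed one-hour window

signups_by_ip = defaultdict(deque)  # ip -> timestamps of recent signups


def allow_signup(email: str, ip: str) -> bool:
    """Return False when a signup matches a bulk-creation pattern."""
    now = time.time()

    # Reject addresses from known temporary-email services.
    domain = email.rsplit("@", 1)[-1].lower()
    if domain in DISPOSABLE_DOMAINS:
        return False

    # Discard records older than the window, then rate-limit by IP.
    recent = signups_by_ip[ip]
    while recent and now - recent[0] > WINDOW_SECONDS:
        recent.popleft()
    if len(recent) >= MAX_SIGNUPS_PER_IP:
        return False

    recent.append(now)
    return True
```

A production system would persist this state and combine far more signals (device fingerprints, behavioural patterns from logs), but even this simple check would raise the cost of creating accounts in bulk.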
Similar to the Doppelganger operation, the CSAM network also relies on redirection links to obfuscate the real URLs of its websites. Strangely, X does not block these redirect URLs when it removes the accounts, allowing the operation to continue spreading its content.
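For researchers documenting the network, and for platforms deciding what to block, such redirect chains can be resolved to their final destination with standard HTTP tooling. Below is a minimal sketch using Python’s requests library; the example URL is a hypothetical placeholder, not one of the operation’s actual links.

```python
import requests


def resolve_redirect_chain(url: str, max_hops: int = 10, timeout: float = 10.0) -> list[str]:
    """Follow a redirect chain and return every URL visited, final destination last."""
    session = requests.Session()
    session.max_redirects = max_hops
    # HEAD requests with allow_redirects=True make requests record the chain
    # in response.history without downloading page bodies.
    response = session.head(url, allow_redirects=True, timeout=timeout)
    return [r.url for r in response.history] + [response.url]


# Hypothetical shortener link of the kind posted by the operation's accounts:
# print(resolve_redirect_chain("https://example-shortener.test/abc123"))
```

Note that HEAD requests only follow HTTP-level redirects; shorteners that redirect via JavaScript or meta-refresh tags would require fetching and parsing the page body.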
It is critical to note that this does not imply that these accounts are operated by Russian influence operators. Instead, it should be interpreted as different groups exploiting the same vulnerability and platform design flaw.
X’s takedowns are not helping
Because the accounts post very explicit CSAM, the lifetime of any individual account is short. Initially, before the more intense takedown period, posts were usually taken down within hours to a day. However, new accounts are created continuously, at a pace that suggests some degree of automation, providing uninterrupted access to CSAM content. Paradoxically, this modus operandi actually helps the content spread and makes it harder to gather evidence: the continuous whack-a-mole deletion of individual accounts removes researchers’ access to evidence, while the links that new accounts continue to amplify are left unblocked by X. The central issue, therefore, is whether the exploitable vulnerabilities persist.
To further support this evidence, two initial posts were flagged to X using its DSA Article 16 illegal content flagging tool; while the posts were taken down instantly, the operation continued undisturbed for at least two to three days.
Despite X’s initial period of action, the enforcement seems to have stopped and the operation has resumed, suggesting that it is not being systematically mitigated by X.
Conclusions
In this case, the CSAM CIB network can be shown to exploit some of the same vulnerabilities that Russian influence operations are using. This type of cross-disciplinary observation provides topic-agnostic evidence of potential systemic risk, as defined by Articles 34 and 35 of the Digital Services Act (DSA).
This also demonstrates the importance of working together across disciplines to analyse and address systemic risks, as such risks are not isolated to one theme or another. More information exchange between researchers working on information integrity and DSA enforcement is key to detecting these types of vulnerabilities.
X should focus on tracking and stopping manipulative or harmful patterns of behaviour and the sharing of harmful links, and should respond better to the shared characteristics of the accounts used in these operations, instead of addressing individual accounts only. A more systematic and systemic approach is the only way to stop this network; banning easily replaceable accounts one by one, as they appear, is not.
As highlighted, the analysed CSAM network follows specific patterns: low follower/following counts, quick posting, large volumes of spam comments used as boosts, similar wording, the spread of similar links, and the flooding of a number of hashtags with the same posts. These patterns can potentially be translated into detection queries, as has been done in the past, and the matching accounts taken down.
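As an illustration of how these patterns could be expressed as a query, here is a minimal Python sketch. Every field name and threshold is an assumption made for the example; real detection would run against platform-side data with tuned thresholds.

```python
from dataclasses import dataclass


@dataclass
class AccountSnapshot:
    # Illustrative fields; a real query would run against platform-side data.
    followers: int
    following: int
    account_age_hours: float
    posts_last_hour: int
    spam_comment_ratio: float    # fraction of received comments flagged as spam
    distinct_hashtags_flooded: int
    shares_known_redirect_link: bool


def matches_operation_pattern(a: AccountSnapshot) -> bool:
    """Heuristic translation of the observed patterns into a detection rule.
    All thresholds are assumptions for illustration, not X's actual rules."""
    signals = [
        a.followers < 10 and a.following < 10,   # low follower/following counts
        a.account_age_hours < 24,                # freshly created account
        a.posts_last_hour >= 5,                  # quick, repetitive posting
        a.spam_comment_ratio > 0.8,              # boosted by spam comments
        a.distinct_hashtags_flooded >= 3,        # same post across many hashtags
        a.shares_known_redirect_link,            # links to known redirect domains
    ]
    # Require several co-occurring signals to limit false positives.
    return sum(signals) >= 4
```

Requiring several signals to co-occur matters here: any one signal (a new account, say) is common among legitimate users, but the combination described in this report is highly distinctive of the operation.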
This report was made possible through the Counter Disinformation Network.
The CDN is a collaboration and crisis response platform, knowledge valorisation resource, and expert network, bringing together 60+ organisations and over 300 practitioners from OSINT, journalism, fact-checking and academia from 25 countries. The network has been used to coordinate projects on four elections and has produced 80+ alerts since its creation in May 2024.
Alliance4Europe’s participation in the writing of this report was made possible by the Ministry of Foreign Affairs of the Republic of Poland.
This report is a public task financed by the Ministry of Foreign Affairs of the Republic of Poland within the grant competition ‘Public Diplomacy 2024–2025 – the European dimension and countering disinformation.’
The opinions expressed in this publication are those of the authors and do not reflect the official positions of the Ministry of Foreign Affairs of the Republic of Poland.

Name of the task: Information Defence Alliance
Project financed from the state budget under the competition of the Minister of Foreign Affairs of the Republic of Poland “Public Diplomacy 2024–2025 – the European dimension and counteracting disinformation”
Amount of funding: 473 900 PLN
Brief description of the task: The Information Defence Alliance project aimed to monitor and mitigate influence operations targeting France, Italy, Germany, Moldova, Romania, Slovakia, and the Belarusian diaspora.
To do this, the project had three pillars:
1. researching influence operations,
2. inviting organisations and researchers from these countries to the CDN,
3. providing training to organisations to increase their capacity and share a common language.