ACM debuted a set of AI tools on the Digital Library in December 2025 alongside our transition to Open Access. Since launch, we have adapted the tools based on thoughtful community feedback. This evolution will continue as AI improves and as we receive further feedback.
Throughout this process, our objective is to provide AI functionality that helps authors promote their research and makes the DL more productive and accessible to use. These tools are designed to complement articles, not replace them.
Please review the FAQ below and send us your feedback at [email protected]
Frequently Asked Questions
Background
What are the AI tools currently available on the Digital Library?
To help ACM content reach as many stakeholders as possible, we currently offer three AI tools in the Digital Library Premium Edition:
- Short article summaries (~100 words) as part of Table of Contents and search results pages
- Longer summaries (~400 words) on article pages
- Podcast-style interviews synthesizing all papers within a conference session
Article summaries are currently available for most ACM-published journal, proceedings, and magazine articles published in the last five years.
AI podcasts are available for most proceedings of ACM-hosted conferences published in the last three months.
We are working on making additional tools available in 2026, including an overhauled search workflow and a chat interface for asking questions about individual articles. If you have suggestions about future AI tools, please share them by clicking the "Feedback" button on the right side of this page.
Why is ACM using AI-generated content in the Digital Library?
The ACM Digital Library Board wants as many people as possible to read and use the research we publish. In addition to making all of our content Open Access, the Board established a working group to evaluate the helpfulness, accuracy and quality of a range of AI summarization tools. The resulting features launched in the Digital Library aim to broaden ACM's audience and support new ways of engaging with ACM content.
How do summaries broaden an article’s audience?
Article summaries are not a replacement for reading full articles, including their abstracts.
Summaries use a straightforward writing style that is particularly helpful for:
- Non-native English speakers
- Interdisciplinary researchers from related fields
- General public, industry practitioners and citizen scientists
Further, article summaries use a consistent structure across papers and tend to include more information on scope and limitations than abstracts. Combined, these features help all user types, including domain experts, discover new content. Specific tasks that test users said they were able to perform faster include:
- Searching, browsing, and comparing articles in sets or lists
- Finding relevant articles in a Table of Contents
- Evaluating whether to read an article in full
Policy matters
What’s the difference between a summary and an abstract?
Abstracts and summaries are complementary. Abstracts are part of an article’s content written by the authors, whereas AI summaries are a tool to help readers quickly understand if an article is of interest.
Abstracts are written by article authors and subject to peer review. Unlike summaries, abstracts constitute an important part of the article’s permanent Version of Record and are therefore citable.
Should I cite an AI summary?
No. If you found a summary helpful, you should read and cite the related article. ACM does not recommend citing AI summaries as they are not authored or reviewed by humans and they may change over time as ACM adapts to user feedback and AI tools improve.
Can I use AI summaries as part of a literature review?
Not as a source. AI summaries can be helpful for deciding which papers to include in a structured literature review, but they should not be cited and are not intended to replace reviewing the full text of the related articles.
I’m an author. Who do I contact if I find a quality issue with the summary on one of my articles?
Please send the article URL to [email protected] using your institutional email address and an ACM staff member will manually reprocess the summary based on your feedback.
I’m a reader. How do I opt out of using AI tools?
While there is no explicit opt-out setting, all AI tools on the Digital Library are clearly labeled and optional to use. Not using them will not limit your use of other functionality on the site.
- On article pages, AI summaries are in a separate tab from the abstract, with the abstract loading by default. Users should only click on an AI summary if they wish to read one
- On conference proceedings, podcasts are listed among articles in the Table of Contents when available. Users should only click on a podcast if they wish to listen to one
- On search results and Table of Contents pages, previews of short AI summaries are listed under article titles when available. Users should only click "More" if they wish to read more of the summary
I found an error or other quality issue. How do I send feedback?
Please click the blue feedback button on the right of any article page and leave a comment there. All comments are manually triaged by ACM staff.
For general feedback, please email ACM at [email protected]
About the technology
I can ask AI to summarize an article myself. What’s different about the summaries here?
Article summaries on the Digital Library are produced by a custom AI service that is calibrated for scholarly content and emphasizes information that researchers are likely to value, including an article’s results and limitations. It also includes checks to remove fake references (“hallucinations”) and the overly positive language that is characteristic of mainstream chatbots (“AI sycophancy”).
Finally, the custom AI service uses the most up-to-date, authoritative Version of Record of the summarized article. Third-party chatbots often rely on parsing preprint PDFs, or sometimes respond without ever accessing any version of an article’s full text.
Are your partners training their LLMs on my article?
No. We have explicit, legally binding guarantees from our vendors confirming that their language models will not train on ACM content.
How did ACM test to ensure quality AI output?
To date, we have evaluated 10 potential AI use cases with a shortlist of 7 vendors that met ACM’s standards for respecting author privacy and copyright. AI outputs were then scored for both accuracy and usefulness by a volunteer working group of Computer Science faculty and librarians. The group represented a diverse range of subdisciplines, and all participants scored outputs related to their stated expertise.
Only the vendors and use cases that consistently scored “Good” or “Great” for both accuracy and usefulness are shown on the Digital Library. While we would like to launch additional AI use cases soon, other use cases have not yet met this quality standard.
The vendor that ACM ultimately selected for article summaries continuously refines its prompt generation tool through A/B testing, with more than 1 million data points so far.
Are the AI tools WCAG accessible?
Yes. Article summaries are presented in high-contrast text that is keyboard navigable and available to screen readers. All podcasts are accompanied by transcripts with the same accessibility features. Accessibility of the entire Digital Library is a key priority for ACM; please click here to learn more.
How do you protect against hallucinations?
The summarization technology uses a multi-agent approach to critique and fact-check its own output, specifically protecting against fake references (“hallucinations”). However no tool is perfect, and errors may still occur. If you find one, please click the “Feedback” button to the right of the summary and let us know.
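The vendor’s multi-agent pipeline is proprietary and not published by ACM, but one simple layer of this kind of fact-checking can be illustrated: cross-referencing the citations that appear in a generated summary against the article’s own reference list. The following sketch is purely illustrative; the function name, normalization rule, and sample titles are invented for this example.

```python
import re

def flag_unverified_references(summary_citations, article_references):
    """Return summary citations whose titles do not appear in the article's
    own reference list; anything returned would be flagged for review."""
    def normalize(title):
        # Lowercase and strip punctuation so formatting differences still match.
        return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

    known = {normalize(ref) for ref in article_references}
    return [c for c in summary_citations if normalize(c) not in known]

# Example: the second citation does not exist in the article and is flagged.
article_refs = ["Attention Is All You Need",
                "Deep Residual Learning for Image Recognition"]
cited = ["Attention Is All You Need", "A Survey of Imaginary Methods"]
print(flag_unverified_references(cited, article_refs))  # ['A Survey of Imaginary Methods']
```

A production system would also verify authors, venues, and years, not just titles, and would route flagged items back to a critique agent rather than simply printing them.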
How do you protect against prompt injections?
ACM uses tools to detect hidden AI prompts in submitted manuscripts, which can be grounds for rejection and possibly author sanctions. As a fail-safe, ACM sends the AI service the article’s Version of Record, which is derived from ACM-produced XML, removing any hidden prompts.
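ACM’s actual screening tools are not public; as a hedged illustration of the general idea, a minimal check might look for invisible Unicode characters and instruction-like phrases in manuscript text. The phrase list and function name below are invented for this sketch, and real detection is considerably more sophisticated.

```python
# Hypothetical screening sketch: the phrase list and function name are
# invented for illustration; real tools are more sophisticated.
SUSPECT_PHRASES = ("ignore previous instructions",
                   "ignore all prior instructions",
                   "you are an ai")
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}  # invisible chars

def find_hidden_prompt_signals(text):
    """Return human-readable reasons a manuscript may contain a hidden AI prompt."""
    signals = []
    if any(ch in ZERO_WIDTH for ch in text):
        signals.append("zero-width characters present")
    lowered = text.lower()
    for phrase in SUSPECT_PHRASES:
        if phrase in lowered:
            signals.append(f"instruction-like phrase: {phrase!r}")
    return signals
```

Note that regenerating the text from clean, ACM-produced XML, as described above, sidesteps this entire class of attack, which is why it serves as the fail-safe.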
Are summaries and podcasts generated in real time?
No. All summaries and podcasts are cached, although we may occasionally regenerate them as tools improve. At any point in time, all visitors to the Digital Library see the same summaries and podcasts. We are working to add versioning metadata to article summaries and hope to have it live shortly.
Which AI vendors are you using?
Article summaries are produced in partnership with SciSummary. Podcasts are made with Google NotebookLM and then transcribed by Google Gemini. These vendors were selected because they explicitly agreed not to train their language models on ACM content and because their outputs scored highest in our tests for accuracy and usefulness.