On Tuesday, OpenAI launched Prism, a free AI-powered workspace for scientists. The release drew immediate skepticism from researchers, who fear it will increase the volume of low-quality papers submitted to scientific journals, and publishers are voicing growing concern about what many call AI slop in academic publishing.
Prism is a writing and formatting tool, not a research platform, although OpenAI's broader messaging sometimes blurs this distinction.
Prism integrates OpenAI's GPT-5.2 model into a LaTeX-based text editor, helping researchers draft papers, generate citations, create diagrams, sketch ideas, and collaborate with co-authors in real time. The tool is available at no cost to anyone with a ChatGPT account.
"2026 will be for AI and science what 2025 was for AI in software engineering," Kevin Weil, OpenAI's Vice President for Science, told reporters at a press briefing attended by MIT Technology Review. He said that ChatGPT receives about 8.4 million messages per week on hard science topics, which he described as evidence that AI is moving from curiosity to core workflow for scientists.
OpenAI developed Prism using technology from Crixet, a cloud-based LaTeX platform acquired in late 2025. The company intends for Prism to reduce the time spent on formatting, allowing researchers to focus on scientific work. In a demonstration, an OpenAI employee showed how the software can automatically locate and include pertinent scientific literature and format bibliographies.
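To make concrete the kind of formatting drudgery such tools automate, here is a minimal sketch that renders structured reference metadata as a BibTeX entry. The function name, field names, and example values are illustrative only and do not reflect Prism's actual internals.

```python
# Minimal sketch of automated bibliography formatting: turning structured
# reference metadata into a BibTeX @article entry. All names here are
# hypothetical, not part of any Prism API.

def to_bibtex(key: str, ref: dict) -> str:
    """Render a reference dict as a BibTeX @article entry."""
    fields = "\n".join(
        f"  {name} = {{{value}}}," for name, value in sorted(ref.items())
    )
    return f"@article{{{key},\n{fields}\n}}"

entry = to_bibtex("yin2025", {
    "author": "Yin, Yian and others",
    "title": "An illustrative placeholder title",
    "journal": "Science",
    "year": "2025",
})
print(entry)
```

Real tools layer citation-style handling (APA, IEEE, journal-specific `.bst` files) on top of exactly this kind of structured-to-text rendering.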
AI models are tools that can be misused. The concern is that by simplifying the creation of polished manuscripts, tools like Prism may overwhelm the peer-review system with papers that do not significantly advance their fields. Producing scientific text has become much easier; the capacity to evaluate it has not kept pace.
When asked about the possibility of the AI model confabulating fake citations, Weil acknowledged in the press demo that none of this absolves the scientist of the responsibility to verify that their references are correct.
Unlike traditional reference management software such as EndNote, which has formatted citations for over 30 years without fabricating them, AI models can generate plausible-sounding sources that don't exist. Weil added: "We are conscious that as AI becomes more capable, there are concerns around volume, quality, and trust in the scientific community."
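The verification burden Weil describes can be partially automated. As a hedged illustration, the sketch below flags references whose DOI is missing or syntactically malformed, a cheap first pass. It cannot catch a fabricated but well-formed DOI; that requires resolving each identifier against a registry such as Crossref, which is omitted here. The function name and data layout are assumptions for the example.

```python
import re

# First-pass sanity check on a reference list: flag entries whose DOI is
# missing or malformed. A hallucinated citation can still carry a
# well-formed DOI, so real verification needs a registry lookup (e.g.
# Crossref); this only catches the crudest fabrications cheaply.

DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def suspicious_references(refs: list[dict]) -> list[str]:
    """Return titles of references with a missing or malformed DOI."""
    flagged = []
    for ref in refs:
        doi = ref.get("doi", "")
        if not DOI_PATTERN.match(doi):
            flagged.append(ref.get("title", "<untitled>"))
    return flagged

refs = [
    {"title": "Plausible paper", "doi": "10.1126/science.abc1234"},
    {"title": "Hallucinated paper", "doi": "doi:not-a-real-identifier"},
]
print(suspicious_references(refs))  # only the malformed entry is flagged
```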
The Slop Problem
Recent findings confirm these concerns. A December 2025 study in Science found that researchers using large language models increased their output by 30 to 50 percent, depending on the field, yet their AI-assisted papers fared worse in peer review. Papers written without AI, even those with elaborate language, were more likely to be accepted, while papers likely written with AI were less likely to be accepted. Reviewers appeared to recognize that sophisticated prose sometimes masks weak science.
"It is a very widespread pattern across several fields of science," Yian Yin, an information science professor at Cornell University and one of the study's authors, told the Cornell Chronicle. "There is a big shift in our current ecosystem that warrants a very serious look, especially for those who make decisions about what science we should support and fund."
An analysis of 41 million papers published between 1980 and 2025 found that although scientists using AI receive more citations and publish more papers, the overall scope of scientific research appears to be narrowing. A socio-cultural anthropologist at Yale University told Science magazine these outcomes should set off "loud alarm bells" for the research community.
"Science is nothing if not an aggregate endeavor," she said. "There needs to be some deep reckoning with what we do with a tool that benefits individuals but destroys science."
Concerns about AI-generated scientific content are long-standing. In 2022, Meta withdrew its Galactica demo, a large language model for scientific writing, after users found it could generate plausible but inaccurate content, such as fabricated wiki entries and a fictional paper titled "The Benefits of Eating Crushed Grass." In 2024, Sakana AI in Tokyo introduced The AI Scientist, an autonomous research system. Critics on Hacker News described it as producing low-quality papers. One commentator, an academic journal editor, stated, "I would likely desk reject them; they contain very limited new knowledge."
The issue has intensified since then. In his 2026 editorial for Science, editor-in-chief H. Holden Thorp noted that the journal is less susceptible to AI-generated errors due to its scale and human editorial oversight, but cautioned that no system can catch everything. Science permits limited AI use for editing and reference gathering, requires disclosure for wider applications, and prohibits AI-generated figures.
Mandy Hill, Managing Director of Academic Publishing at Cambridge University Press and Assessment, has expressed similar concerns. In October 2025, she told Retraction Watch that the publishing ecosystem is under strain and called for radical change, explaining to Varsity that too many journal articles are being published and warning that AI will exacerbate the problem.
Is AI Accelerating Scientific Progress or Flooding the Peer-Review Process?
OpenAI is committed to supporting scientific progress and outlined its case for AI-assisted research in a recent report. The report features researchers who say AI models have accelerated their work, such as a mathematician who used GPT-5.2 to solve an open optimization problem in three evenings, and a physicist who watched the model reproduce calculations that had previously taken months.
These examples show AI's use in research beyond writing assistance, a distinction that OpenAI's marketing does not always make clear. For non-native English-speaking scientists in particular, AI writing tools can accelerate the publication of high-quality research; however, this advantage may be offset by an increase in mediocre submissions, further burdening the peer-review system.
Weil told MIT Technology Review that his objective is not a single AI-generated discovery but rather 10,000 advances in science that may not have happened, or wouldn't have happened as quickly. He described this as an incremental, compounding acceleration.
It remains to be seen whether this acceleration will yield more scientific knowledge or increase the number of published papers. Nikita Zhivotovskiy, a statistician at UC Berkeley unaffiliated with OpenAI, told MIT Technology Review that GPT-5 has already proven valuable in his work for improving text and identifying mathematical errors, making interaction with the scientific literature smoother.
By making papers appear polished and professional regardless of their scientific merit, AI writing tools may enable weaker research to pass initial editorial screening. The risk is that such workflows could obscure assumptions, reduce accountability, and add further burden to the human peer-review process.
OpenAI acknowledges this concern. Its public statements about Prism emphasize that the tool will not conduct research independently, and that human investigators remain responsible for verification.
Nevertheless, a commenter on Hacker News expressed the growing concern within technical communities: "I am scared that this type of thing is going to do to science journals what AI-generated bug reports are doing to bug bounties. We are truly living in a post-scarcity society now, except that the thing we have in abundance is garbage, and it's drowning out everything of value."
- About US
Tuesday, OpenAI launched Prism, a free AI-powered workspace for scientists. The release has elicited immediate skepticism from researchers, who fear it will increase the volume of low-quality papers submitted to scientific journals. Publishers are also expressing growing concern about what many refer to as the AI shop in academic publishing.
Prism is a writing and formatting tool, not a research platform. Although OpenAI's broader messaging sometimes blurs, this distinction.
Prism integrates OpenAI's GPT-5.2 model into a LaTeX-based text editor, helping researchers draft papers, generate citations, create diagrams, sketch ideas, and collaborate with co-authors in real time. The tool is available at no cost to anyone with a ChatGPT account.
2026 will be for AI and science what 2025 was for AI in software engineering. Kevin Veal, Vice President of OpenAI for Science, told reporters at a press briefing attended by MIT Technology Review. He said that ChatGPT receives about 8.4 million messages per week on hard science topics, which he described as evidence that AI is transitioning from curiosity to core workflow for scientists.
OpenAI developed Prism using technology from Crixet, a cloud-based LATEX platform acquired in late 2025. The company intends for Prism to reduce the time spent on formatting, allowing researchers to focus on scientific work. In a demonstration, an OpenAI employee showed how the software can automatically locate and include pertinent scientific literature and format bibliographies.
AI models are tools that can be misused. The concern is that by simplifying the creation of polished manuscripts, tools like Prism may overwhelm the peer-review system with papers that do not significantly advance their fields. At the same time, it is now easier to produce scientific text. The ability to evaluate such research has not kept pace.
When asked about the possibility of the AI model confabulating fake citations, Weil acknowledged in the press demo that none of this absolves the scientist of the responsibility to verify that their references are correct.
Unlike traditional reference management software such as EndNote, which has formatted citations for over 30 years without fabricating them, AI models can generate plausible-sounding sources that don't exist. Weil added: "We are conscious that as AI becomes more capable, there are concerns around volume, quality, and trust in the scientific community."
The Slop Problem
Recent findings confirm its concerns. A December 2025 study in science found that researchers using large language models increased their output by 30-50%, depending on the field. Still, their AI-assisted papers performed worse than their peer-reviewed papers. Papers with elaborate language written without AI were more likely to be accepted, while those likely written by AI were less likely to be accepted. Reviewers appeared to recognize that sophisticated prose sometimes masks weak science.
It is a very widespread pattern across several fields of science. Yian Yin, an information science professor at Cornell University and one of the study's authors, told the Cornell Chronicle. There is a big shift in our current ecosystem that warrants a very serious look, especially for those who make decisions about what science we should support and fund.
An analysis of 41 million papers published between 1980 and 2025 found that although scientists using AI receive more citations and publish more papers, the overall scope of scientific research appears to be narrowing. A socio-cultural anthropologist at Yale University told Science magazine these outcomes should set off "loud alarm bells" for the research community.
Science is nothing but an aggregate endeavor, she said. There needs to be some deep reckoning with what we do with a tool that benefits individuals but destroys science.
Concerns about AI-generated scientific content are long-standing. In 2022, Meta withdrew its Galactica demo, a substantial language model for scientific writing, after users found it could generate plausible but inaccurate content, such as a wiki entry or a fictional paper titled "The Benefits of Eating Crushed Grass in 2024." Sakana AI in Tokyo introduced the AI scientist, an autonomous research system, in 2022. Critics on Hacker News described it as producing low-quality papers. One commentator, an academic journal editor, stated, "I would likely desk reject them; they contain very limited new knowledge."
The issue has intensified since then. In his 2026 editorial for Science, editor-in-chief H. Holden Thorpe noted that the journal is less susceptible to AI-generated errors due to its scale and human editorial oversight, but cautioned that no system can catch everything. Science permits limited AI use for editing and reference gathering, requires disclosure for wider applications, and prohibits the use of AI-generated figures.
Mandy Hill, Managing Director of Academic Publishing at Cambridge University Press and Assessment, has expressed similar concerns. In October 2025, she told Retraction Watch that the publishing ecosystem is under strain and called for radical change. She explained to Varsity that too many journal articles are being published, causing a huge strain. She warned that AI will exacerbate the issue.
Is AI Accelerating Scientific Progress or Flooding the Peer-Review Process
OpenAI is committed to supporting scientific progress and outlined its case for AI-assisted research in a recent report. The report features researchers who state that AI models have accelerated their work, such as a mathematician who used GPT-5.2 to solve an open optimization problem in three evenings, and a physicist who observed the model replicate symmetric calculations that had previously taken months.
These examples show AI's use in research beyond writing assistance. A distinction that OpenAI's marketing does not always make clear for non-native English-speaking scientists. AI writing tools can accelerate the publication of high-quality research; however, this advantage may be offset by an increase in mediocre submissions, further burdening the peer-review system.
Steele told MIT Technology Review that his objective is not a single AI-generated discovery but rather 10,000 advances in science that may not have happened or wouldn't have happened as quickly. He described this as an incremental, compounding acceleration.
It remains to be seen whether this acceleration will yield more scientific knowledge or increase the number of published papers. Nikita Zhivotovskiy, a statistician at UC Berkeley unaffiliated with OpenAI, told MIT Technology Review that GPT-5 has already proven valuable in his work for improving text and identifying mathematical errors, making interaction with the scientific literature smoother.
By making papers appear polished and professional, regardless of their scientific merit, AI writing tools may enable weaker research to pass initial editorial screening. There is a risk that dialogic workflows could obscure assumptions, reduce accountability, and further burden into the human peer-review process.
OpenAI acknowledges this concern. Its public statements about PRISM emphasize that the tool will not conduct research independently, and that human investigators remain responsible for verification.
Nevertheless, a commenter on Hacker News expressed the growing concern within technical communities: "I am scared that this type of thing is going to do science journals what AI-generated bug reports are doing to bug bounties. We are truly living in a post-scarcity society now, except that the thing we have in abundance is garbage, and it's drowning out everything of value."
- Tech Reviews
Tuesday, OpenAI launched Prism, a free AI-powered workspace for scientists. The release has elicited immediate skepticism from researchers, who fear it will increase the volume of low-quality papers submitted to scientific journals. Publishers are also expressing growing concern about what many refer to as the AI shop in academic publishing.
Prism is a writing and formatting tool, not a research platform. Although OpenAI's broader messaging sometimes blurs, this distinction.
Prism integrates OpenAI's GPT-5.2 model into a LaTeX-based text editor, helping researchers draft papers, generate citations, create diagrams, sketch ideas, and collaborate with co-authors in real time. The tool is available at no cost to anyone with a ChatGPT account.
2026 will be for AI and science what 2025 was for AI in software engineering. Kevin Veal, Vice President of OpenAI for Science, told reporters at a press briefing attended by MIT Technology Review. He said that ChatGPT receives about 8.4 million messages per week on hard science topics, which he described as evidence that AI is transitioning from curiosity to core workflow for scientists.
OpenAI developed Prism using technology from Crixet, a cloud-based LATEX platform acquired in late 2025. The company intends for Prism to reduce the time spent on formatting, allowing researchers to focus on scientific work. In a demonstration, an OpenAI employee showed how the software can automatically locate and include pertinent scientific literature and format bibliographies.
AI models are tools that can be misused. The concern is that by simplifying the creation of polished manuscripts, tools like Prism may overwhelm the peer-review system with papers that do not significantly advance their fields. At the same time, it is now easier to produce scientific text. The ability to evaluate such research has not kept pace.
When asked about the possibility of the AI model confabulating fake citations, Weil acknowledged in the press demo that none of this absolves the scientist of the responsibility to verify that their references are correct.
Unlike traditional reference management software such as EndNote, which has formatted citations for over 30 years without fabricating them, AI models can generate plausible-sounding sources that don't exist. Weil added: "We are conscious that as AI becomes more capable, there are concerns around volume, quality, and trust in the scientific community."
The Slop Problem
Recent findings confirm its concerns. A December 2025 study in science found that researchers using large language models increased their output by 30-50%, depending on the field. Still, their AI-assisted papers performed worse than their peer-reviewed papers. Papers with elaborate language written without AI were more likely to be accepted, while those likely written by AI were less likely to be accepted. Reviewers appeared to recognize that sophisticated prose sometimes masks weak science.
It is a very widespread pattern across several fields of science. Yian Yin, an information science professor at Cornell University and one of the study's authors, told the Cornell Chronicle. There is a big shift in our current ecosystem that warrants a very serious look, especially for those who make decisions about what science we should support and fund.
An analysis of 41 million papers published between 1980 and 2025 found that although scientists using AI receive more citations and publish more papers, the overall scope of scientific research appears to be narrowing. A socio-cultural anthropologist at Yale University told Science magazine these outcomes should set off "loud alarm bells" for the research community.
Science is nothing but an aggregate endeavor, she said. There needs to be some deep reckoning with what we do with a tool that benefits individuals but destroys science.
Concerns about AI-generated scientific content are long-standing. In 2022, Meta withdrew its Galactica demo, a substantial language model for scientific writing, after users found it could generate plausible but inaccurate content, such as a wiki entry or a fictional paper titled "The Benefits of Eating Crushed Grass in 2024." Sakana AI in Tokyo introduced the AI scientist, an autonomous research system, in 2022. Critics on Hacker News described it as producing low-quality papers. One commentator, an academic journal editor, stated, "I would likely desk reject them; they contain very limited new knowledge."
The issue has intensified since then. In his 2026 editorial for Science, editor-in-chief H. Holden Thorpe noted that the journal is less susceptible to AI-generated errors due to its scale and human editorial oversight, but cautioned that no system can catch everything. Science permits limited AI use for editing and reference gathering, requires disclosure for wider applications, and prohibits the use of AI-generated figures.
Mandy Hill, Managing Director of Academic Publishing at Cambridge University Press and Assessment, has expressed similar concerns. In October 2025, she told Retraction Watch that the publishing ecosystem is under strain and called for radical change. She explained to Varsity that too many journal articles are being published, causing a huge strain. She warned that AI will exacerbate the issue.
Is AI Accelerating Scientific Progress or Flooding the Peer-Review Process
OpenAI is committed to supporting scientific progress and outlined its case for AI-assisted research in a recent report. The report features researchers who state that AI models have accelerated their work, such as a mathematician who used GPT-5.2 to solve an open optimization problem in three evenings, and a physicist who observed the model replicate symmetric calculations that had previously taken months.
These examples show AI's use in research beyond writing assistance. A distinction that OpenAI's marketing does not always make clear for non-native English-speaking scientists. AI writing tools can accelerate the publication of high-quality research; however, this advantage may be offset by an increase in mediocre submissions, further burdening the peer-review system.
Steele told MIT Technology Review that his objective is not a single AI-generated discovery but rather 10,000 advances in science that may not have happened or wouldn't have happened as quickly. He described this as an incremental, compounding acceleration.
It remains to be seen whether this acceleration will yield more scientific knowledge or increase the number of published papers. Nikita Zhivotovskiy, a statistician at UC Berkeley unaffiliated with OpenAI, told MIT Technology Review that GPT-5 has already proven valuable in his work for improving text and identifying mathematical errors, making interaction with the scientific literature smoother.
By making papers appear polished and professional, regardless of their scientific merit, AI writing tools may enable weaker research to pass initial editorial screening. There is a risk that dialogic workflows could obscure assumptions, reduce accountability, and further burden into the human peer-review process.
OpenAI acknowledges this concern. Its public statements about PRISM emphasize that the tool will not conduct research independently, and that human investigators remain responsible for verification.
Nevertheless, a commenter on Hacker News expressed the growing concern within technical communities: "I am scared that this type of thing is going to do science journals what AI-generated bug reports are doing to bug bounties. We are truly living in a post-scarcity society now, except that the thing we have in abundance is garbage, and it's drowning out everything of value."
- AI
Tuesday, OpenAI launched Prism, a free AI-powered workspace for scientists. The release has elicited immediate skepticism from researchers, who fear it will increase the volume of low-quality papers submitted to scientific journals. Publishers are also expressing growing concern about what many refer to as the AI shop in academic publishing.
Prism is a writing and formatting tool, not a research platform. Although OpenAI's broader messaging sometimes blurs, this distinction.
Prism integrates OpenAI's GPT-5.2 model into a LaTeX-based text editor, helping researchers draft papers, generate citations, create diagrams, sketch ideas, and collaborate with co-authors in real time. The tool is available at no cost to anyone with a ChatGPT account.
2026 will be for AI and science what 2025 was for AI in software engineering. Kevin Veal, Vice President of OpenAI for Science, told reporters at a press briefing attended by MIT Technology Review. He said that ChatGPT receives about 8.4 million messages per week on hard science topics, which he described as evidence that AI is transitioning from curiosity to core workflow for scientists.
OpenAI developed Prism using technology from Crixet, a cloud-based LATEX platform acquired in late 2025. The company intends for Prism to reduce the time spent on formatting, allowing researchers to focus on scientific work. In a demonstration, an OpenAI employee showed how the software can automatically locate and include pertinent scientific literature and format bibliographies.
AI models are tools that can be misused. The concern is that by simplifying the creation of polished manuscripts, tools like Prism may overwhelm the peer-review system with papers that do not significantly advance their fields. At the same time, it is now easier to produce scientific text. The ability to evaluate such research has not kept pace.
When asked about the possibility of the AI model confabulating fake citations, Weil acknowledged in the press demo that none of this absolves the scientist of the responsibility to verify that their references are correct.
Unlike traditional reference management software such as EndNote, which has formatted citations for over 30 years without fabricating them, AI models can generate plausible-sounding sources that don't exist. Weil added: "We are conscious that as AI becomes more capable, there are concerns around volume, quality, and trust in the scientific community."
The Slop Problem
Recent findings confirm its concerns. A December 2025 study in science found that researchers using large language models increased their output by 30-50%, depending on the field. Still, their AI-assisted papers performed worse than their peer-reviewed papers. Papers with elaborate language written without AI were more likely to be accepted, while those likely written by AI were less likely to be accepted. Reviewers appeared to recognize that sophisticated prose sometimes masks weak science.
It is a very widespread pattern across several fields of science. Yian Yin, an information science professor at Cornell University and one of the study's authors, told the Cornell Chronicle. There is a big shift in our current ecosystem that warrants a very serious look, especially for those who make decisions about what science we should support and fund.
An analysis of 41 million papers published between 1980 and 2025 found that although scientists using AI receive more citations and publish more papers, the overall scope of scientific research appears to be narrowing. A socio-cultural anthropologist at Yale University told Science magazine these outcomes should set off "loud alarm bells" for the research community.
Science is nothing but an aggregate endeavor, she said. There needs to be some deep reckoning with what we do with a tool that benefits individuals but destroys science.
Concerns about AI-generated scientific content are long-standing. In 2022, Meta withdrew its Galactica demo, a substantial language model for scientific writing, after users found it could generate plausible but inaccurate content, such as a wiki entry or a fictional paper titled "The Benefits of Eating Crushed Grass in 2024." Sakana AI in Tokyo introduced the AI scientist, an autonomous research system, in 2022. Critics on Hacker News described it as producing low-quality papers. One commentator, an academic journal editor, stated, "I would likely desk reject them; they contain very limited new knowledge."
The issue has intensified since then. In his 2026 editorial for Science, editor-in-chief H. Holden Thorpe noted that the journal is less susceptible to AI-generated errors due to its scale and human editorial oversight, but cautioned that no system can catch everything. Science permits limited AI use for editing and reference gathering, requires disclosure for wider applications, and prohibits the use of AI-generated figures.
Mandy Hill, Managing Director of Academic Publishing at Cambridge University Press and Assessment, has expressed similar concerns. In October 2025, she told Retraction Watch that the publishing ecosystem is under strain and called for radical change. She explained to Varsity that too many journal articles are being published, causing a huge strain. She warned that AI will exacerbate the issue.
Is AI Accelerating Scientific Progress or Flooding the Peer-Review Process
OpenAI is committed to supporting scientific progress and outlined its case for AI-assisted research in a recent report. The report features researchers who state that AI models have accelerated their work, such as a mathematician who used GPT-5.2 to solve an open optimization problem in three evenings, and a physicist who observed the model replicate symmetric calculations that had previously taken months.
These examples show AI's use in research beyond writing assistance. A distinction that OpenAI's marketing does not always make clear for non-native English-speaking scientists. AI writing tools can accelerate the publication of high-quality research; however, this advantage may be offset by an increase in mediocre submissions, further burdening the peer-review system.
Steele told MIT Technology Review that his objective is not a single AI-generated discovery but rather 10,000 advances in science that may not have happened or wouldn't have happened as quickly. He described this as an incremental, compounding acceleration.
It remains to be seen whether this acceleration will yield more scientific knowledge or increase the number of published papers. Nikita Zhivotovskiy, a statistician at UC Berkeley unaffiliated with OpenAI, told MIT Technology Review that GPT-5 has already proven valuable in his work for improving text and identifying mathematical errors, making interaction with the scientific literature smoother.
Unlike traditional reference management software such as EndNote, which has formatted citations for over 30 years without fabricating them, AI models can generate plausible-sounding sources that don't exist. Weil added: "We are conscious that as AI becomes more capable, there are concerns around volume, quality, and trust in the scientific community."
The Slop Problem
Recent findings support these concerns. A December 2025 study in Science found that researchers using large language models increased their output by 30-50%, depending on the field. Yet their AI-assisted papers fared worse in peer review: papers with elaborate language written without AI were more likely to be accepted, while those that appeared to be AI-written were less likely to be accepted. Reviewers seemed to recognize that sophisticated prose sometimes masks weak science.
"It is a very widespread pattern across several fields of science," Yian Yin, an information science professor at Cornell University and one of the study's authors, told the Cornell Chronicle. "There is a big shift in our current ecosystem that warrants a very serious look, especially for those who make decisions about what science we should support and fund."
An analysis of 41 million papers published between 1980 and 2025 found that although scientists using AI receive more citations and publish more papers, the overall scope of scientific research appears to be narrowing. A socio-cultural anthropologist at Yale University told Science magazine these outcomes should set off "loud alarm bells" for the research community.
"Science is nothing but an aggregate endeavor," she said. "There needs to be some deep reckoning with what we do with a tool that benefits individuals but destroys science."
Concerns about AI-generated scientific content are long-standing. In 2022, Meta withdrew its Galactica demo, a large language model for scientific writing, after users found it could generate plausible but inaccurate content, such as a fictional wiki entry or a paper on the benefits of eating crushed glass. In 2024, Sakana AI in Tokyo introduced The AI Scientist, an autonomous research system, which critics on Hacker News described as producing low-quality papers. One commenter, an academic journal editor, stated, "I would likely desk reject them; they contain very limited new knowledge."
The issue has intensified since then. In a 2026 editorial for Science, editor-in-chief H. Holden Thorp noted that the journal is less susceptible to AI-generated errors because of its scale and human editorial oversight, but cautioned that no system can catch everything. Science permits limited AI use for editing and reference gathering, requires disclosure for wider applications, and prohibits AI-generated figures.
Mandy Hill, Managing Director of Academic Publishing at Cambridge University Press and Assessment, has expressed similar concerns. In October 2025, she told Retraction Watch that the publishing ecosystem is under strain and called for radical change, explaining to Varsity that too many journal articles are being published and warning that AI will only exacerbate the problem.
Is AI Accelerating Scientific Progress or Flooding the Peer-Review Process?
OpenAI says it is committed to supporting scientific progress and outlined its case for AI-assisted research in a recent report. The report features researchers who say AI models have accelerated their work, including a mathematician who used GPT-5.2 to solve an open optimization problem in three evenings and a physicist who watched the model reproduce symmetry calculations that had previously taken months.
These examples show AI's use in research beyond writing assistance, a distinction that OpenAI's marketing does not always make clear. For scientists who are not native English speakers, AI writing tools can accelerate the publication of high-quality research; however, this advantage may be offset by an increase in mediocre submissions that further burden the peer-review system.
Weil told MIT Technology Review that his objective is not a single AI-generated discovery but rather 10,000 advances in science that may not have happened, or wouldn't have happened as quickly. He described this as an incremental, compounding acceleration.
It remains to be seen whether this acceleration will yield more scientific knowledge or merely more published papers. Nikita Zhivotovskiy, a statistician at UC Berkeley who is unaffiliated with OpenAI, told MIT Technology Review that GPT-5 has already proven valuable in his work for improving text and catching mathematical errors, making interaction with the scientific literature smoother.
By making papers appear polished and professional regardless of their scientific merit, AI writing tools may enable weaker research to pass initial editorial screening. There is a risk that conversational, AI-mediated workflows could obscure assumptions, reduce accountability, and place additional burden on the human peer-review process.
OpenAI acknowledges this concern. Its public statements about Prism emphasize that the tool will not conduct research independently and that human investigators remain responsible for verification.
Nevertheless, a commenter on Hacker News expressed the growing concern within technical communities: "I am scared that this type of thing is going to do to science journals what AI-generated bug reports are doing to bug bounties. We are truly living in a post-scarcity society now, except that the thing we have in abundance is garbage, and it's drowning out everything of value."
- Buying Guides
Tuesday, OpenAI launched Prism, a free AI-powered workspace for scientists. The release has elicited immediate skepticism from researchers, who fear it will increase the volume of low-quality papers submitted to scientific journals. Publishers are also expressing growing concern about what many refer to as the AI shop in academic publishing.
Prism is a writing and formatting tool, not a research platform. Although OpenAI's broader messaging sometimes blurs, this distinction.
Prism integrates OpenAI's GPT-5.2 model into a LaTeX-based text editor, helping researchers draft papers, generate citations, create diagrams, sketch ideas, and collaborate with co-authors in real time. The tool is available at no cost to anyone with a ChatGPT account.
2026 will be for AI and science what 2025 was for AI in software engineering. Kevin Veal, Vice President of OpenAI for Science, told reporters at a press briefing attended by MIT Technology Review. He said that ChatGPT receives about 8.4 million messages per week on hard science topics, which he described as evidence that AI is transitioning from curiosity to core workflow for scientists.
OpenAI developed Prism using technology from Crixet, a cloud-based LATEX platform acquired in late 2025. The company intends for Prism to reduce the time spent on formatting, allowing researchers to focus on scientific work. In a demonstration, an OpenAI employee showed how the software can automatically locate and include pertinent scientific literature and format bibliographies.
AI models are tools that can be misused. The concern is that by simplifying the creation of polished manuscripts, tools like Prism may overwhelm the peer-review system with papers that do not significantly advance their fields. At the same time, it is now easier to produce scientific text. The ability to evaluate such research has not kept pace.
When asked about the possibility of the AI model confabulating fake citations, Weil acknowledged in the press demo that none of this absolves the scientist of the responsibility to verify that their references are correct.
Unlike traditional reference management software such as EndNote, which has formatted citations for over 30 years without fabricating them, AI models can generate plausible-sounding sources that don't exist. Weil added: "We are conscious that as AI becomes more capable, there are concerns around volume, quality, and trust in the scientific community."
The Slop Problem
Recent findings confirm its concerns. A December 2025 study in science found that researchers using large language models increased their output by 30-50%, depending on the field. Still, their AI-assisted papers performed worse than their peer-reviewed papers. Papers with elaborate language written without AI were more likely to be accepted, while those likely written by AI were less likely to be accepted. Reviewers appeared to recognize that sophisticated prose sometimes masks weak science.
It is a very widespread pattern across several fields of science. Yian Yin, an information science professor at Cornell University and one of the study's authors, told the Cornell Chronicle. There is a big shift in our current ecosystem that warrants a very serious look, especially for those who make decisions about what science we should support and fund.
An analysis of 41 million papers published between 1980 and 2025 found that although scientists using AI receive more citations and publish more papers, the overall scope of scientific research appears to be narrowing. A socio-cultural anthropologist at Yale University told Science magazine these outcomes should set off "loud alarm bells" for the research community.
Science is nothing but an aggregate endeavor, she said. There needs to be some deep reckoning with what we do with a tool that benefits individuals but destroys science.
Concerns about AI-generated scientific content are long-standing. In 2022, Meta withdrew its Galactica demo, a substantial language model for scientific writing, after users found it could generate plausible but inaccurate content, such as a wiki entry or a fictional paper titled "The Benefits of Eating Crushed Grass in 2024." Sakana AI in Tokyo introduced the AI scientist, an autonomous research system, in 2022. Critics on Hacker News described it as producing low-quality papers. One commentator, an academic journal editor, stated, "I would likely desk reject them; they contain very limited new knowledge."
The issue has intensified since then. In his 2026 editorial for Science, editor-in-chief H. Holden Thorpe noted that the journal is less susceptible to AI-generated errors due to its scale and human editorial oversight, but cautioned that no system can catch everything. Science permits limited AI use for editing and reference gathering, requires disclosure for wider applications, and prohibits the use of AI-generated figures.
Mandy Hill, Managing Director of Academic Publishing at Cambridge University Press and Assessment, has expressed similar concerns. In October 2025, she told Retraction Watch that the publishing ecosystem is under strain and called for radical change. She explained to Varsity that too many journal articles are being published, causing a huge strain. She warned that AI will exacerbate the issue.
Is AI Accelerating Scientific Progress or Flooding the Peer-Review Process
OpenAI is committed to supporting scientific progress and outlined its case for AI-assisted research in a recent report. The report features researchers who state that AI models have accelerated their work, such as a mathematician who used GPT-5.2 to solve an open optimization problem in three evenings, and a physicist who observed the model replicate symmetric calculations that had previously taken months.
These examples show AI's use in research beyond writing assistance. A distinction that OpenAI's marketing does not always make clear for non-native English-speaking scientists. AI writing tools can accelerate the publication of high-quality research; however, this advantage may be offset by an increase in mediocre submissions, further burdening the peer-review system.
Steele told MIT Technology Review that his objective is not a single AI-generated discovery but rather 10,000 advances in science that may not have happened or wouldn't have happened as quickly. He described this as an incremental, compounding acceleration.
It remains to be seen whether this acceleration will yield more scientific knowledge or increase the number of published papers. Nikita Zhivotovskiy, a statistician at UC Berkeley unaffiliated with OpenAI, told MIT Technology Review that GPT-5 has already proven valuable in his work for improving text and identifying mathematical errors, making interaction with the scientific literature smoother.
By making papers appear polished and professional, regardless of their scientific merit, AI writing tools may enable weaker research to pass initial editorial screening. There is a risk that dialogic workflows could obscure assumptions, reduce accountability, and further burden into the human peer-review process.
OpenAI acknowledges this concern. Its public statements about PRISM emphasize that the tool will not conduct research independently, and that human investigators remain responsible for verification.
Nevertheless, a commenter on Hacker News expressed the growing concern within technical communities: "I am scared that this type of thing is going to do science journals what AI-generated bug reports are doing to bug bounties. We are truly living in a post-scarcity society now, except that the thing we have in abundance is garbage, and it's drowning out everything of value."
- Comparison
Tuesday, OpenAI launched Prism, a free AI-powered workspace for scientists. The release has elicited immediate skepticism from researchers, who fear it will increase the volume of low-quality papers submitted to scientific journals. Publishers are also expressing growing concern about what many refer to as the AI shop in academic publishing.
Prism is a writing and formatting tool, not a research platform. Although OpenAI's broader messaging sometimes blurs, this distinction.
Prism integrates OpenAI's GPT-5.2 model into a LaTeX-based text editor, helping researchers draft papers, generate citations, create diagrams, sketch ideas, and collaborate with co-authors in real time. The tool is available at no cost to anyone with a ChatGPT account.
2026 will be for AI and science what 2025 was for AI in software engineering. Kevin Veal, Vice President of OpenAI for Science, told reporters at a press briefing attended by MIT Technology Review. He said that ChatGPT receives about 8.4 million messages per week on hard science topics, which he described as evidence that AI is transitioning from curiosity to core workflow for scientists.
OpenAI developed Prism using technology from Crixet, a cloud-based LATEX platform acquired in late 2025. The company intends for Prism to reduce the time spent on formatting, allowing researchers to focus on scientific work. In a demonstration, an OpenAI employee showed how the software can automatically locate and include pertinent scientific literature and format bibliographies.
AI models are tools that can be misused. The concern is that by simplifying the creation of polished manuscripts, tools like Prism may overwhelm the peer-review system with papers that do not significantly advance their fields. At the same time, it is now easier to produce scientific text. The ability to evaluate such research has not kept pace.
When asked about the possibility of the AI model confabulating fake citations, Weil acknowledged in the press demo that none of this absolves the scientist of the responsibility to verify that their references are correct.
Unlike traditional reference management software such as EndNote, which has formatted citations for over 30 years without fabricating them, AI models can generate plausible-sounding sources that don't exist. Weil added: "We are conscious that as AI becomes more capable, there are concerns around volume, quality, and trust in the scientific community."
The Slop Problem
Recent findings confirm its concerns. A December 2025 study in science found that researchers using large language models increased their output by 30-50%, depending on the field. Still, their AI-assisted papers performed worse than their peer-reviewed papers. Papers with elaborate language written without AI were more likely to be accepted, while those likely written by AI were less likely to be accepted. Reviewers appeared to recognize that sophisticated prose sometimes masks weak science.
It is a very widespread pattern across several fields of science. Yian Yin, an information science professor at Cornell University and one of the study's authors, told the Cornell Chronicle. There is a big shift in our current ecosystem that warrants a very serious look, especially for those who make decisions about what science we should support and fund.
An analysis of 41 million papers published between 1980 and 2025 found that although scientists using AI receive more citations and publish more papers, the overall scope of scientific research appears to be narrowing. A socio-cultural anthropologist at Yale University told Science magazine these outcomes should set off "loud alarm bells" for the research community.
Science is nothing but an aggregate endeavor, she said. There needs to be some deep reckoning with what we do with a tool that benefits individuals but destroys science.
Concerns about AI-generated scientific content are long-standing. In 2022, Meta withdrew its Galactica demo, a substantial language model for scientific writing, after users found it could generate plausible but inaccurate content, such as a wiki entry or a fictional paper titled "The Benefits of Eating Crushed Grass in 2024." Sakana AI in Tokyo introduced the AI scientist, an autonomous research system, in 2022. Critics on Hacker News described it as producing low-quality papers. One commentator, an academic journal editor, stated, "I would likely desk reject them; they contain very limited new knowledge."
The issue has intensified since then. In his 2026 editorial for Science, editor-in-chief H. Holden Thorpe noted that the journal is less susceptible to AI-generated errors due to its scale and human editorial oversight, but cautioned that no system can catch everything. Science permits limited AI use for editing and reference gathering, requires disclosure for wider applications, and prohibits the use of AI-generated figures.
Mandy Hill, Managing Director of Academic Publishing at Cambridge University Press and Assessment, has expressed similar concerns. In October 2025, she told Retraction Watch that the publishing ecosystem is under strain and called for radical change. She explained to Varsity that too many journal articles are being published, causing a huge strain. She warned that AI will exacerbate the issue.
Is AI Accelerating Scientific Progress or Flooding the Peer-Review Process
OpenAI is committed to supporting scientific progress and outlined its case for AI-assisted research in a recent report. The report features researchers who state that AI models have accelerated their work, such as a mathematician who used GPT-5.2 to solve an open optimization problem in three evenings, and a physicist who observed the model replicate symmetric calculations that had previously taken months.
These examples show AI's use in research beyond writing assistance. A distinction that OpenAI's marketing does not always make clear for non-native English-speaking scientists. AI writing tools can accelerate the publication of high-quality research; however, this advantage may be offset by an increase in mediocre submissions, further burdening the peer-review system.
Steele told MIT Technology Review that his objective is not a single AI-generated discovery but rather 10,000 advances in science that may not have happened or wouldn't have happened as quickly. He described this as an incremental, compounding acceleration.
It remains to be seen whether this acceleration will yield more scientific knowledge or increase the number of published papers. Nikita Zhivotovskiy, a statistician at UC Berkeley unaffiliated with OpenAI, told MIT Technology Review that GPT-5 has already proven valuable in his work for improving text and identifying mathematical errors, making interaction with the scientific literature smoother.
By making papers appear polished and professional, regardless of their scientific merit, AI writing tools may enable weaker research to pass initial editorial screening. There is a risk that dialogic workflows could obscure assumptions, reduce accountability, and further burden into the human peer-review process.
OpenAI acknowledges this concern. Its public statements about PRISM emphasize that the tool will not conduct research independently, and that human investigators remain responsible for verification.
Nevertheless, a commenter on Hacker News expressed the growing concern within technical communities: "I am scared that this type of thing is going to do science journals what AI-generated bug reports are doing to bug bounties. We are truly living in a post-scarcity society now, except that the thing we have in abundance is garbage, and it's drowning out everything of value."
- News
Tuesday, OpenAI launched Prism, a free AI-powered workspace for scientists. The release has elicited immediate skepticism from researchers, who fear it will increase the volume of low-quality papers submitted to scientific journals. Publishers are also expressing growing concern about what many refer to as the AI shop in academic publishing.
Prism is a writing and formatting tool, not a research platform. Although OpenAI's broader messaging sometimes blurs, this distinction.
Prism integrates OpenAI's GPT-5.2 model into a LaTeX-based text editor, helping researchers draft papers, generate citations, create diagrams, sketch ideas, and collaborate with co-authors in real time. The tool is available at no cost to anyone with a ChatGPT account.
2026 will be for AI and science what 2025 was for AI in software engineering. Kevin Veal, Vice President of OpenAI for Science, told reporters at a press briefing attended by MIT Technology Review. He said that ChatGPT receives about 8.4 million messages per week on hard science topics, which he described as evidence that AI is transitioning from curiosity to core workflow for scientists.
OpenAI developed Prism using technology from Crixet, a cloud-based LATEX platform acquired in late 2025. The company intends for Prism to reduce the time spent on formatting, allowing researchers to focus on scientific work. In a demonstration, an OpenAI employee showed how the software can automatically locate and include pertinent scientific literature and format bibliographies.
AI models are tools that can be misused. The concern is that by simplifying the creation of polished manuscripts, tools like Prism may overwhelm the peer-review system with papers that do not significantly advance their fields. At the same time, it is now easier to produce scientific text. The ability to evaluate such research has not kept pace.
When asked about the possibility of the AI model confabulating fake citations, Weil acknowledged in the press demo that none of this absolves the scientist of the responsibility to verify that their references are correct.
Unlike traditional reference management software such as EndNote, which has formatted citations for over 30 years without fabricating them, AI models can generate plausible-sounding sources that don't exist. Weil added: "We are conscious that as AI becomes more capable, there are concerns around volume, quality, and trust in the scientific community."
The Slop Problem
Recent findings confirm its concerns. A December 2025 study in science found that researchers using large language models increased their output by 30-50%, depending on the field. Still, their AI-assisted papers performed worse than their peer-reviewed papers. Papers with elaborate language written without AI were more likely to be accepted, while those likely written by AI were less likely to be accepted. Reviewers appeared to recognize that sophisticated prose sometimes masks weak science.
It is a very widespread pattern across several fields of science. Yian Yin, an information science professor at Cornell University and one of the study's authors, told the Cornell Chronicle. There is a big shift in our current ecosystem that warrants a very serious look, especially for those who make decisions about what science we should support and fund.
An analysis of 41 million papers published between 1980 and 2025 found that although scientists using AI receive more citations and publish more papers, the overall scope of scientific research appears to be narrowing. A socio-cultural anthropologist at Yale University told Science magazine these outcomes should set off "loud alarm bells" for the research community.
Science is nothing but an aggregate endeavor, she said. There needs to be some deep reckoning with what we do with a tool that benefits individuals but destroys science.
Concerns about AI-generated scientific content are long-standing. In 2022, Meta withdrew its Galactica demo, a substantial language model for scientific writing, after users found it could generate plausible but inaccurate content, such as a wiki entry or a fictional paper titled "The Benefits of Eating Crushed Grass in 2024." Sakana AI in Tokyo introduced the AI scientist, an autonomous research system, in 2022. Critics on Hacker News described it as producing low-quality papers. One commentator, an academic journal editor, stated, "I would likely desk reject them; they contain very limited new knowledge."
The issue has intensified since then. In his 2026 editorial for Science, editor-in-chief H. Holden Thorpe noted that the journal is less susceptible to AI-generated errors due to its scale and human editorial oversight, but cautioned that no system can catch everything. Science permits limited AI use for editing and reference gathering, requires disclosure for wider applications, and prohibits the use of AI-generated figures.
Mandy Hill, Managing Director of Academic Publishing at Cambridge University Press and Assessment, has expressed similar concerns. In October 2025, she told Retraction Watch that the publishing ecosystem is under strain and called for radical change. She explained to Varsity that too many journal articles are being published, causing a huge strain. She warned that AI will exacerbate the issue.
Is AI Accelerating Scientific Progress or Flooding the Peer-Review Process
OpenAI is committed to supporting scientific progress and outlined its case for AI-assisted research in a recent report. The report features researchers who state that AI models have accelerated their work, such as a mathematician who used GPT-5.2 to solve an open optimization problem in three evenings, and a physicist who observed the model replicate symmetric calculations that had previously taken months.
These examples show AI's use in research beyond writing assistance. A distinction that OpenAI's marketing does not always make clear for non-native English-speaking scientists. AI writing tools can accelerate the publication of high-quality research; however, this advantage may be offset by an increase in mediocre submissions, further burdening the peer-review system.
Steele told MIT Technology Review that his objective is not a single AI-generated discovery but rather 10,000 advances in science that may not have happened or wouldn't have happened as quickly. He described this as an incremental, compounding acceleration.
It remains to be seen whether this acceleration will yield more scientific knowledge or increase the number of published papers. Nikita Zhivotovskiy, a statistician at UC Berkeley unaffiliated with OpenAI, told MIT Technology Review that GPT-5 has already proven valuable in his work for improving text and identifying mathematical errors, making interaction with the scientific literature smoother.
By making papers appear polished and professional, regardless of their scientific merit, AI writing tools may enable weaker research to pass initial editorial screening. There is a risk that dialogic workflows could obscure assumptions, reduce accountability, and further burden into the human peer-review process.
OpenAI acknowledges this concern. Its public statements about PRISM emphasize that the tool will not conduct research independently, and that human investigators remain responsible for verification.
Nevertheless, a commenter on Hacker News expressed the growing concern within technical communities: "I am scared that this type of thing is going to do science journals what AI-generated bug reports are doing to bug bounties. We are truly living in a post-scarcity society now, except that the thing we have in abundance is garbage, and it's drowning out everything of value."
- Contact
Tuesday, OpenAI launched Prism, a free AI-powered workspace for scientists. The release has elicited immediate skepticism from researchers, who fear it will increase the volume of low-quality papers submitted to scientific journals. Publishers are also expressing growing concern about what many refer to as the AI shop in academic publishing.
Prism is a writing and formatting tool, not a research platform. Although OpenAI's broader messaging sometimes blurs, this distinction.
Prism integrates OpenAI's GPT-5.2 model into a LaTeX-based text editor, helping researchers draft papers, generate citations, create diagrams, sketch ideas, and collaborate with co-authors in real time. The tool is available at no cost to anyone with a ChatGPT account.
2026 will be for AI and science what 2025 was for AI in software engineering. Kevin Veal, Vice President of OpenAI for Science, told reporters at a press briefing attended by MIT Technology Review. He said that ChatGPT receives about 8.4 million messages per week on hard science topics, which he described as evidence that AI is transitioning from curiosity to core workflow for scientists.
OpenAI developed Prism using technology from Crixet, a cloud-based LATEX platform acquired in late 2025. The company intends for Prism to reduce the time spent on formatting, allowing researchers to focus on scientific work. In a demonstration, an OpenAI employee showed how the software can automatically locate and include pertinent scientific literature and format bibliographies.
AI models are tools that can be misused. The concern is that by simplifying the creation of polished manuscripts, tools like Prism may overwhelm the peer-review system with papers that do not significantly advance their fields. At the same time, it is now easier to produce scientific text. The ability to evaluate such research has not kept pace.
When asked about the possibility of the AI model confabulating fake citations, Weil acknowledged in the press demo that none of this absolves the scientist of the responsibility to verify that their references are correct.
Unlike traditional reference management software such as EndNote, which has formatted citations for over 30 years without fabricating them, AI models can generate plausible-sounding sources that don't exist. Weil added: "We are conscious that as AI becomes more capable, there are concerns around volume, quality, and trust in the scientific community."
The Slop Problem
Recent findings support these concerns. A December 2025 study in Science found that researchers using large language models increased their output by 30-50%, depending on the field, but their AI-assisted papers fared worse in peer review. Papers with elaborate language written without AI were more likely to be accepted, while those likely written by AI were not. Reviewers appeared to recognize that sophisticated prose sometimes masks weak science.
"It is a very widespread pattern across several fields of science," Yian Yin, an information science professor at Cornell University and one of the study's authors, told the Cornell Chronicle. "There is a big shift in our current ecosystem that warrants a very serious look, especially for those who make decisions about what science we should support and fund."
An analysis of 41 million papers published between 1980 and 2025 found that although scientists using AI receive more citations and publish more papers, the overall scope of scientific research appears to be narrowing. A socio-cultural anthropologist at Yale University told Science magazine these outcomes should set off "loud alarm bells" for the research community.
"Science is nothing but an aggregate endeavor," she said. "There needs to be some deep reckoning with what we do with a tool that benefits individuals but destroys science."
Concerns about AI-generated scientific content are long-standing. In 2022, Meta withdrew its Galactica demo, a large language model for scientific writing, after users found it could generate plausible but inaccurate content, such as a wiki entry or a fictional paper titled "The Benefits of Eating Crushed Glass." In 2024, Sakana AI in Tokyo introduced The AI Scientist, an autonomous research system. Critics on Hacker News described it as producing low-quality papers. One commentator, an academic journal editor, stated, "I would likely desk reject them; they contain very limited new knowledge."
The issue has intensified since then. In his 2026 editorial for Science, editor-in-chief H. Holden Thorp noted that the journal is less susceptible to AI-generated errors due to its scale and human editorial oversight, but cautioned that no system can catch everything. Science permits limited AI use for editing and reference gathering, requires disclosure for wider applications, and prohibits the use of AI-generated figures.
Mandy Hill, managing director of academic publishing at Cambridge University Press and Assessment, has expressed similar concerns. In October 2025, she told Retraction Watch that the publishing ecosystem is under strain and called for radical change. She explained to Varsity that too many journal articles are being published, and warned that AI will exacerbate the problem.
Is AI Accelerating Scientific Progress or Flooding the Peer-Review Process?
OpenAI is committed to supporting scientific progress and outlined its case for AI-assisted research in a recent report. The report features researchers who state that AI models have accelerated their work, such as a mathematician who used GPT-5.2 to solve an open optimization problem in three evenings, and a physicist who observed the model replicate symmetric calculations that had previously taken months.
These examples show AI's use in research beyond writing assistance, a distinction that OpenAI's marketing does not always make clear. For non-native English-speaking scientists, AI writing tools can accelerate the publication of high-quality research; however, this advantage may be offset by an increase in mediocre submissions, further burdening the peer-review system.
Weil told MIT Technology Review that his objective is not a single AI-generated discovery but rather 10,000 advances in science that may not have happened, or wouldn't have happened as quickly. He described this as an incremental, compounding acceleration.
It remains to be seen whether this acceleration will yield more scientific knowledge or increase the number of published papers. Nikita Zhivotovskiy, a statistician at UC Berkeley unaffiliated with OpenAI, told MIT Technology Review that GPT-5 has already proven valuable in his work for improving text and identifying mathematical errors, making interaction with the scientific literature smoother.
By making papers appear polished and professional, regardless of their scientific merit, AI writing tools may enable weaker research to pass initial editorial screening. There is a risk that these workflows could obscure assumptions, reduce accountability, and place further burdens on the human peer-review process.
OpenAI acknowledges this concern. Its public statements about Prism emphasize that the tool will not conduct research independently, and that human investigators remain responsible for verification.
Nevertheless, a commenter on Hacker News expressed the growing concern within technical communities: "I am scared that this type of thing is going to do to science journals what AI-generated bug reports are doing to bug bounties. We are truly living in a post-scarcity society now, except that the thing we have in abundance is garbage, and it's drowning out everything of value."