Tag

Peer Review

All articles tagged with #peer review

LLMs Aren’t the Problem, Cash-for-Review Fails, and Vaping Studies Reveal Flaws
science · 8 minutes ago

Retraction Watch’s weekend digest notes that large language models aren’t the core problem in science publishing, reports that offering cash to spot errors doesn’t work, and spotlights vaping studies with numerous flaws but few retractions. It also outlines ongoing investigations and policy discussions around scientific integrity and publishing practices.

Nature peer‑reviews AI Scientist, signaling progress and limits in autonomous science
technology · 16 days ago

Nature published a peer‑reviewed update on Sakana AI’s AI Scientist, a system that uses LLMs to search literature, generate hypotheses, run experiments, and draft papers. The tool submitted three original AI‑generated papers to a leading ML conference, with one accepted, but the authors tempered claims of fully automating science and included an automated reviewer. They stress AI should assist human scientists, while flagging risks like originality dilution as autonomous research advances.

AI Scientist Demonstrates End-to-End Autonomous Research and Peer Review
technology · 16 days ago

Researchers present The AI Scientist, an end-to-end autonomous system that ideates, codes, runs experiments, analyzes results, writes full manuscripts, and even conducts its own peer review to generate new ML papers. It operates in template-based and template-free modes, relies on foundation models and agentic tree search, and includes The Automated Reviewer to gauge paper quality. In tests, a fully AI-generated manuscript reached workshop peer review at ICLR, though not top-tier publication, and overall quality improves with more compute and better models. The work also highlights ethical risks and the need for responsible norms as autonomous scientific systems mature.

AI-Driven Reboot of Scholarly Publishing
technology · 1 month ago

Tyler Cowen asks how AI should transform academic journals. Commenters propose AI-led triage and grading, disclosure of AI use, open-access reforms, machine-readable submissions with supporting artifacts, transparent AI pipelines, auditing, and new funding models. Skeptics caution that AI cannot fully replace human reviewers or authorship, highlighting both the opportunities and risks of rethinking scholarly publishing.

Chemist's 35 Retractions in 24 Months Spark Integrity Debate
ethics · 1 month ago

A chemistry researcher had 35 papers retracted within 24 months for a mix of issues including major errors in analyses, compromised peer review, image-related problems, and citation manipulation. Most retractions appeared in Elsevier- and Royal Society of Chemistry–published journals, placing the scientist on Retraction Watch’s leaderboard. Coauthors have defended the work as a matter of presentation rather than fraudulent data.

Pediatric Journal Reveals 138 Fictional Case Reports, Sparking Global Corrections
health · 1 month ago

A pediatric journal admitted that 138 published case reports were fictional, created under a confidentiality program to protect patients; at least 61 of these cases have been cited as fact in other journals, prompting corrections, scrutiny of potential retractions, and concern over how misinformation spread, including the controversial 'baby boy blue' opioid-in-breast milk case. Editors say future reports will clearly label cases as fictional, but copies and citations in databases like PubMed Central and Semantic Scholar have already propagated the error.

Five red flags that a research paper may be fraudulent
technology · 1 month ago

Science sleuths outline five practical checks to spot dubious papers: vet the references for relevance and possible fake or self-citing patterns; verify authors and affiliations (and ORCID IDs); examine figures and images for manipulation; evaluate the science itself for formulaic, boring, or implausible findings often produced by paper mills; and read the abstract for clarity and consistency, using community resources like PubPeer and Retraction Watch to corroborate concerns.

AI-Driven Feedback Elevates Peer Review Quality in a Large-Scale Study
technology · 1 month ago

Nature Machine Intelligence reports a large-scale randomized study showing that automated, LLM-generated feedback via the Review Feedback Agent improves peer review quality and engagement. At ICLR 2025, over 20,000 reviews were analyzed; 27% of reviewers who received AI feedback updated their reviews, incorporating more than 12,000 suggested edits. Blind evaluations found revised reviews more informative, and the intervention increased writing length (about 80 extra words for updaters) with longer author and reviewer rebuttals. The study suggests carefully designed LLM feedback can make reviews more specific and actionable while boosting reviewer–author engagement; data and open-source code are available.

AI coach sharpens peer review with clearer, more constructive feedback
technology · 1 month ago

A five-LLM AI coach, called Review Feedback Agent, was developed to help peer reviewers deliver more specific, constructive, and less toxic feedback. When tested on thousands of existing reviews, it frequently suggested actionable ways to improve comments. Whether this improves the quality or impact of the papers under review remains unclear and requires further study.

AI Slop Tests the Limits of Computer Science Publishing
technology · 1 month ago

Nature reports that a surge of AI-generated, low-quality submissions, dubbed 'AI slop', is flooding computer science journals and conferences: ICML 2026 received over 24,000 papers, and arXiv submissions are up more than 50% since ChatGPT's launch. Some papers are AI-generated or contain fabrications, prompting policy changes at arXiv and conferences, expanded reviewer pools, and debates about moving to rolling-journal models to preserve research integrity.

Critique casts doubt on claim that trees anticipate solar eclipses
science · 2 months ago

A new critique published in Trends in Plant Science questions the 2025 study that linked synchronized bioelectrical activity in spruce trees to a partial solar eclipse, arguing the small sample size, numerous variables, and lack of alternative explanations undermine the claim; some scientists label the work as pseudoscience, while the original researchers defend the preliminary results and say follow-up studies are ongoing.

Retractions, AI Slop, and the Watchful Eye of Peer Review
science · 2 months ago

Retraction Watch’s Weekend Reads roundup recaps a week of publishing scrutiny: headlines about a researcher’s alleged poisoning obfuscation, plagiarism accusations, fake references, and dozens of retractions due to compromised peer review; it also highlights AI-related issues in arXiv’s new rules (endorsements for first-time posters and English submissions) and a broad set of discussions on replication, ethics, and data use. The post notes the Hijacked Journal Checker with 400+ entries, the Retraction Watch Database surpassing 63,000 retractions, COVID-era retractions over 640, and 50 mass resignations, and invites donations to support the work.

AI Flood Threatens Trust in Scientific Publishing
artificial-intelligence · 2 months ago

An io9 (Gizmodo) piece argues that AI-generated or AI-augmented papers are flooding arXiv, undermining traditional signals of quality and risking the reliability of scientific publishing. While AI can help with language barriers, analyses show AI-authored submissions are more prolific, and standard quality indicators are becoming less reliable as publication volume rises; incidents like a Nature report about a German researcher misusing ChatGPT and AI-generated data in cancer research illustrate the potential for fraud. The article warns this could overwhelm scholarly communication unless reviewers and repositories tighten safeguards.