Davriellelouna@lemmy.world to science@lemmy.world · English · 18 days ago
Scientists reportedly hiding AI text prompts in academic papers to receive positive peer reviews (www.theguardian.com)
cross-posted to: technology@lemmy.world
DaTingGoBrrr@lemmy.ml · 18 days ago
Or maybe AI shouldn't review things? Who knows what they are hallucinating.
MysteriousSophon21@lemmy.world · 16 days ago
This is the biggest issue - peer review is supposed to be about critical analysis and domain expertise, not just following prompts blindly, and no AI today has actual scientific understanding to catch subtle methodological flaws.
Squizzy@lemmy.world · 18 days ago
Yeah, absolutely, but researchers who try to skirt the review process so they only receive positive feedback are not respecting the process.
CrypticCoffee@lemmy.ml · 15 days ago
What's to respect in an AI review where they didn't even look at the output? It's a lazy LLM review. It deserves to be gamed.
Squizzy@lemmy.world · 15 days ago
Yes, the reviewers should not be using it, and the researchers shouldn't be submitting with the intention of gaming it. AI is not all LLM chatbots; there are legitimate AI implementations used in research.