Summary: A new AI system developed by computer scientists automatically screens open-access journals to identify potentially predatory publications. These journals often charge high fees to publish without adequate peer review, undermining scientific credibility.
The AI analyzed more than 15,000 journals and flagged more than 1,000 as questionable, offering researchers a scalable way to detect risk. While the system is not perfect, it serves as a crucial first filter, with human experts making the final calls.
Key facts
Predatory publishing: Journals exploit researchers by charging fees without quality peer review.
AI detection: The system flagged more than 1,000 suspicious journals out of 15,200 analyzed.
Source: University of Colorado
A team of computer scientists led by the University of Colorado Boulder has developed a new artificial intelligence platform that automatically screens for “questionable” scientific journals.
The study, published on August 27 in the journal Science Advances, addresses an alarming trend in the world of research.
Daniel Acuña, lead author of the study and associate professor in the Department of Computer Science, gets a reminder of that trend several times a week in his email inbox: spam messages from people claiming to be editors at scientific journals, usually ones Acuña has never heard of, offering to publish his papers for a fee.
These publications are sometimes called “predatory” journals. They target scientists, convincing them to pay hundreds or even thousands of dollars to publish their research without proper vetting.
“There has been a growing effort among scientists and organizations to vet these journals,” Acuña said. “But it’s like whack-a-mole. You catch one, and then another pops up, usually from the same company. They simply create a new website and come up with a new name.”
The new AI tool automatically screens scientific journals, evaluating their websites and other online data against certain criteria: Do the journals have an editorial board of established researchers? Do their websites contain many grammatical errors?
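The study’s actual pipeline is not public. As a rough illustration of how criteria like those could be turned into machine-readable signals, here is a minimal sketch in Python; the function and field names are hypothetical, and the “writing quality” proxy is far cruder than anything a real system would use:

```python
import re

def extract_features(journal):
    """Turn scraped journal data into simple screening features.

    `journal` is a dict with hypothetical fields:
    'editorial_board' (list of researcher names) and
    'homepage_text' (raw text scraped from the journal's site).
    """
    text = journal.get("homepage_text", "")
    words = re.findall(r"[A-Za-z']+", text)
    # Crude proxy for writing quality: share of tokens outside a tiny
    # known-word list. A real system would use a proper grammar checker.
    known = {"the", "and", "journal", "research", "review", "of",
             "peer", "editor", "submission", "access", "open"}
    unknown_ratio = (
        sum(w.lower() not in known for w in words) / len(words)
        if words else 1.0
    )
    return {
        "board_size": len(journal.get("editorial_board", [])),
        "unknown_word_ratio": round(unknown_ratio, 2),
        "mentions_peer_review": "peer review" in text.lower(),
    }

example = {
    "editorial_board": ["A. Smith", "B. Jones"],
    "homepage_text": "Fast peer review. Publish in 48 hours!",
}
print(extract_features(example))
```

Features like these would then feed a downstream classifier rather than trigger a verdict on their own.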
Acuña emphasizes that the tool is not perfect. Ultimately, he believes that human experts, not machines, should make the final call on whether a journal is reputable.
But in an era in which prominent figures are questioning the legitimacy of science, stopping the spread of questionable publications has become more important than ever, he said.
“In science, you don’t start from scratch. You build on top of the research of others,” Acuña said. “So if the foundation of that tower crumbles, then everything collapses.”
The shakedown
When scientists submit a new study to a reputable publication, that study typically undergoes a practice called peer review. Outside experts read the study and evaluate it for quality, or, at least, that’s the goal.
A growing number of companies have sought to bypass this process to turn a profit. In 2009, Jeffrey Beall, a librarian at CU Denver, coined the phrase “predatory” journals to describe these publications.
Often, they target researchers outside the United States and Europe, such as in China, India and Iran, countries where scientific institutions may be young, and where the pressures and incentives for researchers to publish are high.
“They will say: ‘If you pay $500 or $1,000, we will review your paper,’” Acuña said. “In reality, they don’t provide any service. They just take the PDF and post it on their website.”
Various groups have tried to curb the practice. Among them is a nonprofit organization called the Directory of Open Access Journals (DOAJ).
Since 2003, DOAJ volunteers have flagged thousands of journals as suspicious based on six criteria. (Reputable publications, for example, tend to include a detailed description of their peer review policies on their websites.)
But keeping pace with the spread of these publications has been daunting for humans.
To speed up the process, Acuña and his colleagues turned to AI. The team trained its system using the DOAJ data, then asked the AI to examine a list of nearly 15,200 open-access journals on the internet.
Among those journals, the AI initially flagged more than 1,400 as potentially problematic.
Acuña and his colleagues asked human experts to review a subset of the suspicious journals. The AI did make mistakes, according to the humans, flagging an estimated 350 publications as questionable when they were likely legitimate. That still left more than 1,000 journals that the researchers identified as questionable.
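The general workflow described above — learn from journals already labeled by DOAJ reviewers, then score unlabeled ones — can be sketched with a toy stand-in classifier. Everything here is illustrative: the feature vectors are invented and the nearest-centroid rule is a deliberately simple substitute for the study’s far richer model:

```python
def centroid(rows):
    """Mean feature vector of a list of equal-length vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def train(labeled):
    """labeled: list of (features, label), label in {'ok', 'questionable'}."""
    return {
        lbl: centroid([f for f, l in labeled if l == lbl])
        for lbl in ("ok", "questionable")
    }

def classify(model, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda lbl: dist(model[lbl], features))

# Hypothetical features: [articles/year in hundreds, typo rate, board size in tens]
training = [
    ([1.0, 0.02, 3.0], "ok"),
    ([1.5, 0.05, 2.5], "ok"),
    ([9.0, 0.30, 0.5], "questionable"),
    ([7.0, 0.25, 0.2], "questionable"),
]
model = train(training)
print(classify(model, [8.0, 0.28, 0.3]))  # → questionable
```

As in the study, such automated labels would be treated as candidates for human review, not final judgments.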
“I think this should be used as a helper to prescreen large numbers of journals,” he said. “But human professionals should do the final analysis.”
A Firewall for Science
Acuña added that the researchers didn’t want their system to be a “black box” like some other AI platforms.
“With ChatGPT, for example, you often don’t understand why it suggests something,” Acuña said. “We tried to make ours as interpretable as possible.”
The team discovered, for example, that questionable journals published an unusually high number of articles. They also included authors with more affiliations than those in more legitimate journals, and authors who cited their own research, rather than the research of other scientists, at unusually high rates.
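One of those interpretable signals, the self-citation rate, is easy to compute directly from citation records. The sketch below assumes a hypothetical record format (the `authors`/`references` fields are not from the study):

```python
def self_citation_rate(articles):
    """Fraction of references where a citing author also appears among
    the cited work's authors. `articles` is a list of dicts with
    hypothetical fields 'authors' and 'references' (each reference
    itself listing its authors)."""
    total = cited_self = 0
    for art in articles:
        authors = set(art["authors"])
        for ref in art["references"]:
            total += 1
            if authors & set(ref["authors"]):  # any shared author
                cited_self += 1
    return cited_self / total if total else 0.0

journal = [
    {"authors": ["A. Lee"],
     "references": [{"authors": ["A. Lee"]}, {"authors": ["B. Cho"]}]},
    {"authors": ["C. Diaz"],
     "references": [{"authors": ["C. Diaz"]}, {"authors": ["C. Diaz"]}]},
]
print(self_citation_rate(journal))  # 3 of 4 references are self-citations → 0.75
```

A single high rate proves nothing by itself; it only becomes a useful flag in combination with the other signals.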
The new AI system is not publicly accessible, but the researchers hope to make it available to universities and publishing companies soon. Acuña sees the tool as a way for researchers to protect their fields from bad data, what he calls a “firewall for science.”
“As a computer scientist, I often give the example of when a new smartphone comes out,” he said.
“We know the phone’s software will have flaws, and we expect bug fixes to come in the future. We should probably do the same with science.”
About this AI and scientific research news
Author: Daniel Strain
Source: University of Colorado
Contact: Daniel Strain – University of Colorado
Image: The image is credited to Neuroscience News
Original research: Open access.
“Estimating the predictability of questionable open-access journals” by Daniel Acuña et al. Science Advances
Abstract
Estimating the predictability of questionable open-access journals
Questionable journals threaten the integrity of global research, but manual screening can be slow and inflexible.
Here, we explore the potential of artificial intelligence (AI) to systematically identify such venues by analyzing website design, content, and publication metadata.
Evaluated against extensive human-annotated datasets, our method achieves practical accuracy and uncovers previously overlooked indicators of journal legitimacy.
By adjusting the decision threshold, our method can prioritize comprehensive detection or precise, low-noise identification.
At a balanced threshold, we flag more than 1,000 suspect journals, which collectively publish hundreds of thousands of articles, receive millions of citations, acknowledge funding from major agencies, and attract authors from developing countries.
Error analysis reveals challenges involving discontinued titles, misclassified book series, and small societies with limited online presence, problems that are addressable with better data quality.
Our findings demonstrate AI’s potential for scalable integrity checks, while highlighting the need to pair automated triage with expert review.
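The decision-threshold trade-off the abstract mentions can be made concrete with a small precision/recall calculation. The scores and labels below are invented for illustration (higher score = more suspicious):

```python
# Classifier scores for eight hypothetical journals, with ground truth.
scores = [0.95, 0.90, 0.80, 0.70, 0.60, 0.40, 0.30, 0.20]
labels = [1, 1, 1, 0, 1, 0, 0, 0]  # 1 = truly questionable

def precision_recall(threshold):
    """Precision and recall when flagging every journal scoring >= threshold."""
    flagged = [(s >= threshold, l) for s, l in zip(scores, labels)]
    tp = sum(1 for f, l in flagged if f and l == 1)
    fp = sum(1 for f, l in flagged if f and l == 0)
    fn = sum(1 for f, l in flagged if not f and l == 1)
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# A strict threshold flags fewer journals but with no false alarms;
# a loose one catches everything at the cost of precision.
print(precision_recall(0.85))  # → (1.0, 0.5)
print(precision_recall(0.50))  # → (0.8, 1.0)
```

Sliding the threshold traces out this curve, which is how a screening tool can be tuned for either comprehensive detection or low-noise identification.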

















