The ethics of erroneous AI-generated scientific figures

Alexander Skulmowski & Patricia Engel-Hermann

Ethics and Information Technology · 2025 · article · https://doi.org/10.1007/s10676-025-09835-4
AJG 1 · ABDC B

Abstract

The number of AI-generated figures in scientific publications is increasing, unfortunately leading to high-profile retractions of papers featuring inaccurate visualizations. The lack of definitive guidelines for AI-generated scientific and educational visualizations results in several ethical issues and dilemmas. At the same time, we maintain that there should not be a double standard regarding the factual correctness of figures only due to AI involvement in their creation and argue in favor of measured responses. We present a framework considering the communicative purpose of a visualization, the type and function of the figure in a paper, the type of error, risks, and the appropriateness of the figure as a means to support decisions regarding the severity of issues of AI-generated images for scientific and educational aims. By outlining a more fine-grained analysis of error types and visualization characteristics, we provide orientation for the current controversy surrounding AI-generated figures. This framework can also serve as a starting point for considerations regarding AI use by students. In addition, we discuss more sophisticated ways of using AI systems to generate visualizations that avoid the pitfalls of general-purpose text-to-image tools.

7 citations


Cite this paper

https://doi.org/10.1007/s10676-025-09835-4

Or copy a formatted citation

@article{skulmowski2025,
  title        = {{The ethics of erroneous AI-generated scientific figures}},
  author       = {Skulmowski, Alexander and Engel-Hermann, Patricia},
  journal      = {Ethics and Information Technology},
  year         = {2025},
  doi          = {10.1007/s10676-025-09835-4},
}




Evidence weight

0.52

Balanced mode · F 0.40 / M 0.15 / V 0.05 / R 0.40

F · citation impact: 0.47 × 0.40 = 0.19
M · momentum: 0.68 × 0.15 = 0.10
V · venue signal: 0.50 × 0.05 = 0.03
R · text relevance †: 0.50 × 0.40 = 0.20

† Text relevance is estimated at 0.50 on the detail page — for your query’s actual relevance score, open this paper from a search result.
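The breakdown above can be reproduced with a short sketch. The component names, scores, and mode weights are taken from the page; the rounding behaviour (round each contribution to two decimals, then sum) is an inference from the displayed figures, and the function name is ours:

```python
# Hypothetical reconstruction of the "Balanced mode" evidence weight:
# a weighted sum of four component scores, with each contribution
# rounded to two decimals before summing (an assumption inferred from
# the per-row figures shown on the page).

WEIGHTS = {"F": 0.40, "M": 0.15, "V": 0.05, "R": 0.40}  # Balanced mode
SCORES  = {"F": 0.47, "M": 0.68, "V": 0.50, "R": 0.50}  # this paper

def evidence_weight(scores: dict, weights: dict) -> float:
    """Sum each component's rounded contribution (score x weight)."""
    return round(sum(round(scores[k] * weights[k], 2) for k in weights), 2)

print(evidence_weight(SCORES, WEIGHTS))  # 0.52, matching the page
```

Rounding each term first (0.19 + 0.10 + 0.03 + 0.20) is what makes the displayed rows add up exactly to 0.52; summing the raw products gives 0.515.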