Discussion
Adherence to the STROBE checklist has been previously investigated in
other research fields 16–19. Our observed proportions
of full reporting are in line with those detected in other fields. This
may suggest that the STROBE checklist is equally well accepted and thus a
suitable tool in observational allergy epidemiology. However, as in
other research fields, only a minority (45%) of the allergy papers
published in EAACI journals fully adhered to the reporting of all STROBE
items.
We applied a rigorous random procedure to select the included papers to
avoid potential selection bias. However, a limitation of our study is
that we restricted our literature search to three journals affiliated
with a strong European professional association. These journals are widely
accepted in the field and were ranked 2nd, 5th, and 8th (out of 28 journals)
in the category “Allergy” of the 2019 Journal Citation Reports©
(Clarivate Analytics). Suboptimal reporting quality in
top journals in the field, despite a lag of 10 years since introduction
of the STROBE checklist, is likely to reflect an issue generalizable to
other journals in the field, also outside of Europe. One may be inclined
to think that reporting quality may be even worse in lower ranking
journals or journals not affiliated with a strong professional society.
This would have to be explored by extending the present work to more
journals in the field; however, some previous evidence argues against this
assumption. One study of 69 papers published in 2016, a mix of experimental
and observational studies in animals and humans, did not find a strong or
statistically significant inverse correlation between reporting quality and
the journal’s impact factor 20.
Similarly, a study of 171 observational studies affiliated with the Iranian
Shiraz University of Medical Sciences showed no correlation between
STROBE-ascertained reporting quality and the journal’s impact factor 21.
The STROBE checklist was not designed to be used as a score or to rank
papers by reporting quality after their publication, but as a checklist
to be used when papers are written. We extended its use and demonstrated
how to apply the STROBE checklist in a quantitative way, proposing
conservative or liberal definitions. Using these proposed scores, we
were able to identify differences in reporting quality by study design
which could be viewed as external validation: papers based on cohort and
cross-sectional designs had comparable reporting quality while papers
based on case-control designs less often achieved high reporting
quality. This has also been found in similar research, but the
underlying reasons remain elusive 21. Of note, papers
based on a case-control design were the minority among those we included
in the present evaluation. This may reflect the additional
methodological issues associated with case-control studies 22–24,
which may lead to this study design being less often used or published.
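The conservative and liberal scoring described above could be operationalised as in the following sketch. The item codings and the two counting rules are illustrative assumptions for demonstration purposes, not the exact definitions used in this evaluation:

```python
# Hypothetical sketch of "conservative" vs "liberal" STROBE adherence scores.
# Ratings ("yes" = fully reported, "partial", "no") and the two counting
# rules below are assumptions, not the paper's actual operational definitions.

def strobe_score(ratings, definition="conservative"):
    """Fraction of STROBE items counted as adherent.

    conservative: only items rated "yes" count as reported.
    liberal:      items rated "yes" or "partial" count as reported.
    """
    if definition == "conservative":
        reported = sum(r == "yes" for r in ratings.values())
    elif definition == "liberal":
        reported = sum(r in ("yes", "partial") for r in ratings.values())
    else:
        raise ValueError(f"unknown definition: {definition}")
    return reported / len(ratings)

# Example: one paper rated on four (of the 22) STROBE items.
paper = {"item_1": "yes", "item_6": "partial", "item_14": "no", "item_17": "yes"}
print(strobe_score(paper, "conservative"))  # 0.5
print(strobe_score(paper, "liberal"))       # 0.75
```

Under this sketch, the liberal score is by construction at least as high as the conservative one, which is why the two definitions bracket a paper's reporting quality.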
The different sections of the papers achieved varying levels of full
adherence to the STROBE criteria. In particular, high proportions of
full reporting were observed for the introduction, discussion, and funding
sections.
On the contrary, the reporting quality for the methods and results
sections was lower and items in the methods and results sections with
low levels of full reporting clustered together in the correspondence
analyses. Specifically in the methods sections, we identified study
settings, participants’ features, efforts undertaken to account for bias
and confounding, and sample size justification as the least frequently
reported items. This poor reporting poses a threefold problem for the
translation of the reported evidence into public health policies and
interventions. Firstly, study settings and participants’ features must be
well defined to identify the target populations. Secondly, residual bias or
confounding limits applicability and efficacy. Thirdly, an undersized
study comes with loss of statistical power potentially leading to false
negative outcomes and thus potential discarding of valid ideas. Two
STROBE items on sample description and reporting results from the main
and secondary analyses (items 14 and 17) were particularly correlated with
the aforementioned cluster of poorly reported items. Of note, full
reporting within this cluster was less often achieved in case-control
studies than in cohort and cross-sectional studies, reinforcing the notion
of poor reporting in papers based on case-control designs. Again, reporting
of information on the target population as well as on the results,
including the secondary and sensitivity analyses that are supposed to
reveal potential sources of bias, should be at the core of every
scientific paper.
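The consequence of an undersized study can be made concrete with a small numeric sketch using a normal approximation to two-sample power. The effect size and sample sizes below are illustrative assumptions, not figures taken from the evaluated papers:

```python
# Normal-approximation power for a two-sided two-sample z-test, illustrating
# why an undersized study risks false-negative findings. The effect size
# (d = 0.3) and group sizes are made-up numbers for illustration only.
from math import sqrt
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test.

    d:            standardized effect size (difference in means / SD)
    n_per_group:  participants per group
    """
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. 1.96 for alpha = 0.05
    z_effect = d * sqrt(n_per_group / 2)           # expected test statistic
    return 1 - NormalDist().cdf(z_crit - z_effect)

# A modest effect (d = 0.3): adequately sized vs undersized study.
print(round(power_two_sample(0.3, 200), 2))  # ~0.85: likely to detect the effect
print(round(power_two_sample(0.3, 50), 2))   # ~0.32: the effect is usually missed
```

With 50 participants per group, a real but modest effect would be missed roughly two times out of three, which is exactly the false-negative risk that a reported sample size justification lets readers assess.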
To improve the quality of reporting of studies, Moher et al. made four
suggestions 25. First, they proposed publication officers at universities
and other research institutions, akin to grants officers or technology
transfer officers, who could provide guidance on manuscript preparation.
Second, core competencies for editors should be established, including
knowledge of reporting guidelines. Third, authors should be trained in
scientific writing and adherence to reporting guidelines. Fourth, peer
reviewers could receive formal training, e.g. at universities.
In addition to structural implementation of officers and training
programs at research institutions, interventions at the journal level
have been suggested. The obligatory submission of completed reporting
guideline checklists (STROBE, CONSORT, PRISMA) along with the manuscript
raised adherence to these checklists by between 13% for observational
studies and 58% for systematic reviews 26. A suggestion derived from this
study was to implement checklists in online submission systems, where they
can be ticked off by authors as well as reviewers 26. Furthermore, a
scoping review found 31 interventions to improve reporting, but only four
were tested in randomised trials 27. The mere endorsement of reporting
guidelines by a scientific journal influenced reporting quality little or
not at all 27. Improvements were achieved by active interventions targeting
editors and peer reviewers, who were required to assess adherence to the
appropriate checklist 27. Even though some of these interventions were
evaluated and proven effective, they have still not been adopted in routine
scientific practice 27.
All three evaluated EAACI journals endorse the use of reporting
guidelines including STROBE in their author guidelines. Active,
structural implementation of reporting guidelines in the submission
process, as well as during editorial and peer review evaluation, including
training for editors and peer reviewers, seems warranted but will require
larger efforts. Until this is achieved, we suggest starting with simple,
targeted interventions based on our results. For instance,
authors could be required to select their study design from a list in
the submission system. This may be used to instruct editors or peer
reviewers to evaluate the two or three most poorly reported STROBE items
for that given study design. It may also be used to append a generic
subtitle to all published manuscripts disclosing the study design. We hope
that our present work provides a basis for improving the reporting quality
of observational studies in allergy research, both through the initiation
of targeted interventions at the journal level and through increased
awareness among authors.