Replication studies are a matter of reputation
Replication studies are part of good scientific practice, yet they are rarely carried out. In their first blog post, the authors therefore made the case, on several grounds, for more replication studies, for example ones based on the same data. Their second post shows that the reputation of the original study is decisive for whether a replication is undertaken. This blog post is based on the scientific article Perceptions and Practices of Replication by Social and Behavioral Scientists: Making Replications a Mandatory Element of Curricula Would Be Useful.
Replication studies are considered a hallmark of good scientific practice (1). Yet researchers treat them as an ideal to be professed but not practised (2, 3). If science policy makers, journal editors and external research funders are to design favourable boundary conditions, they therefore need to understand what drives replication.
Using metadata from all articles published in the top-50 economics journals from 1974 to 2014, we investigated how often replication studies are published and which types of journal articles are replicated. We find that replication is a matter of impact: high-impact articles and articles by authors from leading institutions are more likely to be replicated. We found no empirical evidence for the hypothesis that the lower cost of replication associated with the availability of data and code has a significant effect on the incidence of replication. We argue that researchers behave highly rationally in terms of the academic reputation economy, as they tend to replicate high-impact research by renowned researchers and institutions, possibly because such replications are more likely to be published (4). Our results are in line with previous assumptions that relate replication to impact (3, 5–7). In this regard, private incentives are well aligned with societal interests, since high-impact publications are also the studies that are most likely to influence political and economic decisions as well as public discourse.
However, the question remains whether sufficient replications are conducted to guarantee the correctness of published findings. While we have no analytical result indicating which rate of replication is optimal for a scientific discipline, the fact that less than 0.1% of articles in the top-50 economics journals are replication studies strikes us as unreasonably low. In addition, there is no reason to believe that the share of published replication studies is significantly higher among non-top-50 articles (2). We argue that such a low incidence of replication poses no credible threat to researchers who publish erroneous results. We also note that we cannot detect any statistically robust effect of data-disclosure policies. Moreover, for 37% of the empirical articles subject to mandatory data disclosure, the data or program code was not available even though the data were not proprietary. This raises concerns about the enforcement of mandatory data-disclosure policies.
Our results suggest that replication is driven, at least in part, by the replicator's reputational considerations. The number of replication studies conducted would therefore likely increase if replication received more formal recognition, e.g. through publication in (high-impact) journals or dedicated funding. The same holds for replicated authors, who should receive formal recognition if their results are successfully replicated. This could additionally motivate authors to ensure the replicability of their published results. Moreover, given the costs of replication, a stronger commitment by publishers to the replicability of research, through establishing and enforcing data-availability policies, would lower the barrier for replicators.
Frank Mueller-Langer is Senior Research Fellow at the Max Planck Institute for Innovation and Competition and the Joint Research Centre, Seville. Benedikt Fecher is a doctoral student at the German Institute for Economic Research and the Alexander von Humboldt Institute for Internet and Society. Dietmar Harhoff is Director at the Max Planck Institute for Innovation and Competition. Gert G. Wagner is Board Member of the German Institute for Economic Research and Max Planck Fellow at the MPI for Human Development in Berlin. Correspondence about this blog should be directed to Benedikt Fecher at fecher@hiig.de.
References
(1) B. R. Jasny, G. Chin, L. Chong, S. Vignieri, Again, and Again, and Again … Science. 334, 1225 (2011).
(2) M. Duvendack, R. W. Palmer-Jones, W. R. Reed, Replications in Economics: A Progress Report. Econ Journal Watch. 12, 164–191 (2015).
(3) D. S. Hamermesh, Viewpoint: Replication in economics. Canadian Journal of Economics. 40, 715–733 (2007).
(4) B. Fecher, S. Friesike, M. Hebing, S. Linek, A. Sauermann, A Reputation Economy: Results from an Empirical Survey on Academic Data Sharing. DIW Berlin Discussion Paper. 1454 (2015) (available at http://dx.doi.org/10.2139/ssrn.2568693).
(5) D. Hamermesh, What is Replication? The Possibly Exemplary Example of Labor Economics (2017), (available at https://www.aeaweb.org/conference/2017/preliminary/2100?sessionType%5Bsession%5D=1&organization_name=&search_terms=replication&day=&time=).
(6) J. L. Furman, K. Jensen, F. Murray, Governing Knowledge in the Scientific Community: Exploring the Role of Retractions in Biomedicine. Research Policy. 41, 276–290 (2012).
(7) W. G. Dewald, J. G. Thursby, R. G. Anderson, Replication in Empirical Economics: The Journal of Money, Credit and Banking Project. The American Economic Review. 76, 587–603 (1986).
This post reflects the views of the authors and neither necessarily nor exclusively those of the institute. For more information on the content of these posts and the associated research projects, please contact info@hiig.de