10:15 - 12:00 | Interpreting replication failures |
12:00 - 13:00 | Lunch |
13:00 - 14:00 | Introduction to p-curve analysis and preregistration |
14:15 - 16:00 | P-curve analysis and preregistration exercises |
Replication Crisis and Its Solutions.pdf
Respond at PollEv.com/esapalosaari182.
Run a p-curve analysis on a topic of your choosing. You'll find the official user guide here. Before running the analysis, complete a P-Curve Disclosure Table (example). You can do the analysis with the online app (p-curve.com/app4/), SAS (code), or R (code); a minimal sketch of the test's logic follows these instructions.
Send both the Disclosure Table and the results from the analysis to esa.palosaari(at)uta.fi by 27.11.
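For intuition, here is a minimal sketch in R of p-curve's basic "right-skew" test, pooling pp-values with Stouffer's method as described in Simonsohn, Nelson, and Simmons (2014). The p-values below are made up for illustration; for the actual exercise, use the official app or the linked SAS/R code.

    # Hedged sketch of p-curve's right-skew test; the p-values are hypothetical.
    p <- c(0.012, 0.003, 0.041, 0.022, 0.008)  # significant p-values from the disclosure table
    p <- p[p < 0.05]                           # p-curve uses only results with p < .05

    # Under the null of no true effect, significant p-values are uniform on
    # (0, .05), so the "pp-values" p / .05 are uniform on (0, 1).
    pp <- p / 0.05

    # Stouffer's method: convert each pp-value to a z-score and pool.
    z <- sum(qnorm(pp)) / sqrt(length(pp))

    # A pooled z well below zero means small p-values dominate (right skew),
    # which p-curve interprets as evidential value.
    p_right_skew <- pnorm(z)
    round(c(z = z, p = p_right_skew), 4)

This reproduces only the pooled right-skew test; the official app additionally reports other diagnostics, such as the flatness test.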
Submit a preregistration to a website such as AsPredicted.org or OSF.io.
Send a link to the preregistration to esa.palosaari(at)uta.fi by 27.11.
You can run the R scripts in RStudio, which should also be installed on the classroom computers.
A massive open online course (MOOC) taught by Daniel Lakens. The basis of much of this course.
Camerer, C. F., et al. (2016). Evaluating replicability of laboratory experiments in economics. Science, aaf0918. https://doi.org/10.1126/science.aaf0918
Chan, A.-W., & Altman, D. G. (2005). Identifying outcome reporting bias in randomised trials on PubMed: Review of publications and survey of authors. BMJ, 330(7494), 753. https://doi.org/10.1136/bmj.38356.424606.8F
Decullier, E., & Chapuis, F. (2005). Fate of biomedical research protocols and publication bias in France: Retrospective cohort study. BMJ, 331, 19. https://doi.org/10.1136/bmj.38488.385995.8F
Dickersin, K., & Min, Y. (1993). NIH clinical trials and publication bias. Online Journal of Current Clinical Trials, Doc No. 50.
Gelman, A., & Loken, E. (2013). The garden of forking paths: Why multiple comparisons can be a problem even when there is no "fishing expedition" or "p-hacking" and the research hypothesis was posited ahead of time. http://www.stat.columbia.edu/~gelman/research/unpublished/p_hacking.pdf
Inzlicht, M., Gervais, W., & Berkman, E. (2015). Bias-correction techniques alone cannot determine whether ego depletion is different from zero: Commentary on Carter, Kofler, Forster, & McCullough, 2015. SSRN. https://doi.org/10.2139/ssrn.2659409
Lakens, D., Scheel, A., & Isager, P. (2017). Equivalence testing for psychological research: A tutorial. https://doi.org/10.17605/OSF.IO/V3ZKT
Maxwell, S. E., Lau, M. Y., & Howard, G. S. (2015). Is psychology suffering from a replication crisis? What does “failure to replicate” really mean? American Psychologist, 70, 487-498.
Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science, 7(6), 615-631.
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716
Perezgonzalez, J. D. (2015). Fisher, Neyman-Pearson or NHST? A tutorial for teaching data testing. Frontiers in Psychology, 6, 223.
Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2014). P-curve: A key to the file-drawer. Journal of Experimental Psychology: General, 143(2), 534-547. https://doi.org/10.1037/a0033242
Song, F., Parekh-Bhurke, S., …, Harvey, I. (2009). Extent of publication bias in different categories of research cohorts: A meta-analysis of empirical studies. BMC Medical Research Methodology, 9, 79. https://doi.org/10.1186/1471-2288-9-79