Authors:
Mitch Earleywine, Department of Psychology, University at Albany, State University of New York, NY, United States (https://orcid.org/0000-0002-6870-0623)
and
Philip Kamilar-Britt, Department of Psychology, University at Albany, State University of New York, NY, United States (https://orcid.org/0000-0001-7823-1560)

Abstract

Psychedelic compounds hold promise for alleviating human suffering. Initial trials of psychedelic-assisted treatments have established feasibility and safety, generating calls for replications. Meanwhile, social and medical sciences have drawn criticism due to perceptions of replication failures and varying public trust in empiricism. Data suggest that researchers and the public frequently misunderstand some of the statistical issues associated with replication, potentially leading to unrealistic expectations of treatment effects. Promoting discourse on what constitutes sufficient replication is especially warranted considering the ongoing progression of multi-site phase II and III clinical trials. Here, we review recent and classic work on prediction intervals and power analysis to reveal that trials of psychedelic-assisted therapy that emphasize statistical significance will likely include failures to replicate, especially if sample sizes do not increase dramatically. The field and the public should expect some failed replication attempts based on sampling variability alone. Continued emphasis on statistical significance will require markedly larger samples than those used in clinical trials to date, necessitating substantially greater resources. An alternative approach focused on prediction intervals has distinct advantages. We focus on a recent trial of MDMA-assisted therapy for PTSD to show that, based on prediction intervals, reasonable replications are well within reach. A lack of attention to these statistical issues could unnecessarily prompt widespread dismissal of these therapies before the intervention receives adequate investigation and a fair assessment. In contrast, realistic expectations and appropriate planning could help ensure that these treatments receive the opportunity to help those most in need.

Psychedelic-assisted treatments for psychiatric ailments look promising for appropriately screened samples (Illingworth et al., 2021; Luoma, Chwyl, Bathje, Davis, & Lancelotta, 2020; Romeo, Karila, Martelli, & Benyamina, 2020; Zeifman et al., 2022). Most reviews of clinical trials with psychedelics end with a clarion call for replications. Tacit hopes for these replications are numerous but often include repeating the same therapy with comparable clients to confirm ameliorative effects. If a psychedelic-assisted treatment repeatedly demonstrates greater efficacy than an alternative intervention, the relevant data would serve as a first step toward empirical support, much as evidence-based approaches have benefitted other forms of therapy and medicine (Sakaluk, Williams, Kilshaw, & Rhyner, 2019). But even within a single trial, outcomes vary. For instance, the occasional member of a treatment group ends up worse off than some members of the control group. The public purportedly trusts experts to conclude that, in the long run and on average, those who complete a treatment improve more than others who did not receive it. But traditions around replication and null hypothesis testing might lead any therapist, client, or public official to expect more than a good treatment could deliver. Expecting every trial to reveal that psychedelic-assisted treatment is superior to reasonable controls, let alone alternative treatments, asks more than simple sampling error can permit. Maintaining reasonable expectations as these lines of research continue will be essential for establishing true efficacy. Data suggest that even authors of published research do not have good intuitions about some of the difficulties inherent in replication, leading them to overestimate the probability that one study's statistically significant result will appear in a second experiment (Cumming, Williams, & Fidler, 2004).

Despite the promising reputation developing for psychedelic-assisted therapy during this new renaissance, many other publications lament a replication crisis in the social and medical sciences more broadly (see Amrhein, Trafimow, & Greenland, 2019; Ioannidis, 2005). Recent data also suggest a broader public distrust of science (Krause, Brossard, Scheufele, Xenos, & Franke, 2019). Meanwhile, many people with psychological ailments continue to suffer or find available treatments lacking (Earleywine & De Leo, 2020). Reasonable expectations for treatments and treatment outcome studies would help. A close look at available data and the natural variation in sampling suggests that a series of replications would prove informative. Nevertheless, expecting some experiments to fail to replicate the initial promising, statistically significant results is quite reasonable. Tolerating these moments of uncertainty and occasional failures to replicate might prove difficult, but a complete absence of replication failures would likely suggest larger problems associated with incomplete reporting or publication bias. In fact, a series of results completely free of replication failures might suggest that some investigators decided not to write up experiments that showed no differences in treatment outcomes, or that some editors did not accept those null findings for publication. Without infinite resources, we should expect some replication failures even when a treatment is effective, simply because of sampling error (Amrhein et al., 2019).

Part of the problem with replication might arise from the ritual surrounding P-values less than 0.05. Available randomized clinical trials of psychedelic-assisted therapy almost invariably examine the experimental treatment and a control, compare their average impacts, and report whether or not group differences reached statistical significance. Readers who are familiar with the spirited arguments about null hypothesis testing might relish this chance to repeat critiques of the practice that started several decades ago (Neyman & Pearson, 1928). But reconsidering what we do (and do not) call a replication might be in order. The question seems straightforward: either the psychedelic-assisted treatment creates benefits or it does not. But small samples, including those in the trials already published, inevitably yield widely varying estimates of the population effect size. Tremendous statistical power could help, as enormous samples provide more trustworthy answers. But resources are limited, and these trials already require considerable time, effort, and cash.

With these issues in mind, those eager to answer questions about the efficacy of psychedelic-assisted treatments are left in a quandary about what does and does not qualify as a replication. Strict adherence to null hypothesis testing could prove time-consuming and expensive. But trusting in clinical lore and subjective impressions feels like something other than science. One helpful idea concerns the prediction interval: a range of anticipated effect sizes based on the estimate from an original experiment. An initial experiment's effect and sample size can inform expectations about a subsequent replication's effect for a specified sample size. Any effect that falls within this interval could qualify as a replication, regardless of statistical significance. This approach proved helpful when psychology appeared to have a "replication crisis." Although only 36% of findings appeared to replicate based on statistical significance, a close look at prediction intervals revealed that 77% of findings were consistent with the initial publications (Patil, Peng, & Leek, 2016). A focus on identifying effects within the interval, rather than on reaching statistical significance, could have comparable advantages for psychedelic-assisted treatments now.

For example, a recent Phase III trial of MDMA-assisted treatment for post-traumatic stress disorder represents years of herculean work performed in multiple nations and reveals a promising effect size of d = 0.91 (Mitchell et al., 2021). The treatment itself makes sense; alternative treatments lead to challenging drop-out rates and limited success. MDMA, administered over multiple sessions as part of a manualized treatment lasting 18 weeks, outperformed the placebo control group. This original experiment had 46 people receive MDMA and 43 serve as controls. The effect size, Cohen's d, is a standardized difference between the means for the MDMA and control groups: literally, the difference between the means divided by the pooled standard deviation.
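In symbols, using the conventional pooled standard deviation for two independent groups:

\[
d = \frac{\bar{X}_{\mathrm{MDMA}} - \bar{X}_{\mathrm{control}}}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
\]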

Any team eager to replicate these results might initially plan to use the same sample sizes as the original experiment. Understandably, they might wonder what effect size they could anticipate if their results came from the same population as the original experiment, given expected variation due to sampling error. Note that anticipating the same statistical significance could be unrealistic. Ideally, the replicating team would shift from expecting another P value below 0.05 to identifying an effect within the prediction interval. The replication team (and consumers of their research) would essentially ask how different their d could be from the original d if the underlying effect really were identical and only the natural variation expected from sampling intervened.

The result they would anticipate depends upon the original d, the sample size used to estimate it, and the sample size proposed for the replication. Computationally, the answer turns out to be the same as calculating a confidence interval for the difference between these two d values (the original one and the replication's). Tradition might lead to a 95% prediction interval. An extension (Spence & Stanley, 2016) of a related proof (Zou, 2007) reveals that the sampling distributions of these d values follow non-central t-distributions. The prediction interval ends up asymmetric because non-central t-distributions are asymmetric, but the lower and upper limits are based on the original d as well as the lower (l_original) and upper (u_original) limits of its confidence interval. Suppose the replication's d is exactly the same as the original. The replication d would then have its own confidence interval, from l_replication to u_replication (identical to the original's when the replication uses the same N). Thus, we can compute the 95% prediction interval from two straightforward equations:
\[
\mathrm{PI}_{\mathrm{lower}} = d_{\mathrm{original}} - \sqrt{\left(d_{\mathrm{original}} - l_{\mathrm{original}}\right)^2 + \left(u_{\mathrm{replication}} - d_{\mathrm{original}}\right)^2}
\]
\[
\mathrm{PI}_{\mathrm{upper}} = d_{\mathrm{original}} + \sqrt{\left(d_{\mathrm{original}} - l_{\mathrm{replication}}\right)^2 + \left(u_{\mathrm{original}} - d_{\mathrm{original}}\right)^2}
\]

Thanks to a generous shiny app (https://replication.shinyapps.io/dvalue/) and simple calculations (Spence & Stanley, 2016), we learn that this interval includes all the values from d = 0.29 to 1.53. That is, if we assume a replication experiment with sample sizes identical to the original ones, the 95% prediction interval ranges from 0.29 to 1.53. The replication d has a 95% chance of falling within this interval given sampling error alone (note that the 5% of results falling outside this range could still stem from sampling error). This range might appear vast. Cohen's (1992) classic work on power and effect sizes suggests that 0.29 falls between "small" and "medium," while 1.53 is much larger than the 0.8 considered "large." The original sample size is fixed, but a team might imagine a bigger N for the replication. With 50 per group, the 95% prediction interval would be a little narrower, ranging from 0.31 to 1.51. But the returns for a larger sample size diminish: with 500 participants per group, the range still runs from d = 0.45 to 1.36. No research team is at fault here. These are the simple vagaries of sampling error. The variation and sample size within the original study leave the replication team with a potentially daunting task. In a sense, any replication that falls into this interval could be considered "a success" (Patil et al., 2016). That is, any result in this range is a reasonable expectation for a replication if the second sample differed from the original only because of sampling error.
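For readers who want to verify or extend these figures without the app, a minimal Python sketch of the Spence and Stanley (2016) interval follows. It uses the common normal-approximation standard error for d rather than the exact non-central t limits, so its output matches the app's values only to roughly two decimal places, and the function names are ours, purely for illustration.

```python
import math

def d_confidence_interval(d, n1, n2, z=1.96):
    # 95% CI for Cohen's d via the common normal-approximation
    # standard error: sqrt((n1 + n2)/(n1 * n2) + d^2 / (2 * (n1 + n2)))
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

def prediction_interval(d, n_orig, n_rep):
    # Spence & Stanley (2016) prediction interval, assuming the
    # replication's d equals the original d; n_orig and n_rep are
    # (n1, n2) tuples for the original study and the replication
    l_orig, u_orig = d_confidence_interval(d, *n_orig)
    l_rep, u_rep = d_confidence_interval(d, *n_rep)
    lower = d - math.sqrt((d - l_orig)**2 + (u_rep - d)**2)
    upper = d + math.sqrt((d - l_rep)**2 + (u_orig - d)**2)
    return lower, upper

# Mitchell et al. (2021): d = 0.91 with 46 MDMA and 43 control participants
print(prediction_interval(0.91, (46, 43), (46, 43)))    # ~ (0.29, 1.53)
print(prediction_interval(0.91, (46, 43), (50, 50)))    # ~ (0.31, 1.51)
print(prediction_interval(0.91, (46, 43), (500, 500)))  # ~ (0.45, 1.37);
# the app's exact non-central t computation gives 1.36 for the upper limit
```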

In contrast, demands for identical statistical significance could be excessive. Any replicating team might dream of publishing the standard statistically significant result that appeared in the previous trial. They might sit down with power tables to see what they would need to reach the sacred P < 0.05 for the same dependent variable used in the original experiment. If they would accept only a statistically significant result, even a one-tailed one (given that the therapy should do better than the control), they would want to specify power up front. Assuming a Type I error rate of 0.05, the knee-jerk response is often to set power to 0.80. But this plan literally means that 20% of the time a true effect will go undetected. The team might prefer something more definite, like power of 0.99. But the results for this range of effect sizes are humbling.

For the high end of the prediction interval (d = 1.53), power of 0.99 with an alpha of 0.05 (one-tailed) requires a mere 15 participants per group. But trusting an effect size estimate from such a small sample seems ill-advised given that the Phase III trial was almost three times as large. Should the effect exactly replicate the Phase III trial's d = 0.91, an N of 39 per group would yield power of 0.99. But the lower bound of 0.29 would require 376 per group to reach power of 0.99, more than four times the total sample of the original experiment the team wants to replicate (Faul, Erdfelder, Buchner, & Lang, 2009). Demanding statistical significance could thus require dramatically more resources than focusing on the prediction interval. The time to recruit, screen, and complete the process for 752 people (376 receiving the treatment and 376 controls) could take years. Many people suffering from PTSD could miss out on an efficacious treatment while waiting for these data to be published.
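These sample sizes come from G*Power (Faul et al., 2009), but a short sketch using Python's statsmodels package recovers essentially the same figures under the same assumptions (one-tailed alpha = 0.05, power = 0.99); minor rounding differences between implementations are possible.

```python
import math
from statsmodels.stats.power import TTestIndPower

# Required n per group for an independent-samples t-test at
# one-tailed alpha = 0.05 and power = 0.99, across the prediction interval
analysis = TTestIndPower()
for d in (1.53, 0.91, 0.29):
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.99,
                             ratio=1.0, alternative='larger')
    print(f"d = {d:.2f}: {math.ceil(n)} participants per group")
# Prints roughly 15, 39, and 376 participants per group,
# matching the G*Power figures cited above
```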

Alternatively, attempts at replication could use sample sizes comparable to those used to demonstrate efficacy for other treatments of the same or comparable problems, focusing on the prediction interval rather than statistical significance. For example, a meta-analytic review of psychological therapies for PTSD examined over 100 trials with an average sample size much like the one in this Phase III MDMA study (see Lewis, Roberts, Andrew, Starling, & Bisson, 2020). A replication using the same sample sizes as the original trial, with a hypothesis framed around the prediction interval, might well be within reach. In contrast, a demand for a minuscule P-value risks abandoning a treatment simply because random sampling error got in the way. Resources are too valuable, and the need for treatment too dire, for us to ignore prediction intervals as an alternative approach. A focus on prediction intervals rather than P-values would help establish psychedelic-assisted therapies as empirically validated treatments without risking dismissal of their efficacy because of sampling error.

References

  • Amrhein, V., Trafimow, D., & Greenland, S. (2019). Inferential statistics as descriptive statistics: There is no replication crisis if we don't expect replication. The American Statistician, 73(sup1), 262–270.
  • Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155.
  • Cumming, G., Williams, J., & Fidler, F. (2004). Replication and researchers' understanding of confidence intervals and standard error bars. Understanding Statistics, 3(4), 299–311.
  • Earleywine, M., & De Leo, J. (2020). Psychedelic-assisted psychotherapy for depression: How dire is the need? How could we do it? Journal of Psychedelic Studies, 4(2), 88–92.
  • Faul, F., Erdfelder, E., Buchner, A., & Lang, A. G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41(4), 1149–1160.
  • Illingworth, B. J., Lewis, D. J., Lambarth, A. T., Stocking, K., Duffy, J. M., Jelen, L. A., & Rucker, J. J. (2021). A comparison of MDMA-assisted psychotherapy to non-assisted psychotherapy in treatment-resistant PTSD: A systematic review and meta-analysis. Journal of Psychopharmacology, 35(5), 501–511.
  • Ioannidis, J. P. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.
  • Krause, N. M., Brossard, D., Scheufele, D. A., Xenos, M. A., & Franke, K. (2019). Trends—Americans' trust in science and scientists. Public Opinion Quarterly, 83(4), 817–836.
  • Lewis, C., Roberts, N. P., Andrew, M., Starling, E., & Bisson, J. I. (2020). Psychological therapies for post-traumatic stress disorder in adults: Systematic review and meta-analysis. European Journal of Psychotraumatology, 11(1), 1729633.
  • Luoma, J. B., Chwyl, C., Bathje, G. J., Davis, A. K., & Lancelotta, R. (2020). A meta-analysis of placebo-controlled trials of psychedelic-assisted therapy. Journal of Psychoactive Drugs, 52(4), 289–299.
  • Mitchell, J. M., Bogenschutz, M., Lilienstein, A., Harrison, C., Kleiman, S., Parker-Guilbert, K., … Doblin, R. (2021). MDMA-assisted therapy for severe PTSD: A randomized, double-blind, placebo-controlled phase 3 study. Nature Medicine, 27(6), 1025–1033.
  • Neyman, J., & Pearson, E. S. (1928). On the use and interpretation of certain test criteria for purposes of statistical inference, part I. Biometrika, 20A, 12.
  • Patil, P., Peng, R. D., & Leek, J. T. (2016). What should researchers expect when they replicate studies? A statistical view of replicability in psychological science. Perspectives on Psychological Science, 11(4), 539–544.
  • Romeo, B., Karila, L., Martelli, C., & Benyamina, A. (2020). Efficacy of psychedelic treatments on depressive symptoms: A meta-analysis. Journal of Psychopharmacology, 34(10), 1079–1085.
  • Sakaluk, J. K., Williams, A. J., Kilshaw, R. E., & Rhyner, K. T. (2019). Evaluating the evidential value of empirically supported psychological treatments (ESTs): A meta-scientific review. Journal of Abnormal Psychology, 128(6), 500.
  • Spence, J. R., & Stanley, D. J. (2016). Prediction interval: What to expect when you're expecting … a replication. PLoS One, 11(9), e0162874.
  • Zeifman, R. J., Yu, D., Singhal, N., Wang, G., Nayak, S. M., & Weissman, C. R. (2022). Decreases in suicidality following psychedelic therapy: A meta-analysis of individual patient data across clinical trials. The Journal of Clinical Psychiatry, 83(2), 39235.
  • Zou, G. Y. (2007). Toward using confidence intervals to compare correlations. Psychological Methods, 12(4), 399.
