Journal article
Methodological implications of sample size and extinction gradient on the robustness of fear conditioning across different analytic strategies
PLOS ONE, Volume: 17, Issue: 5, Start page: e0268814
© 2022 Ney et al. This is an open access article distributed under the terms of the Creative Commons Attribution License.
Published in: PLOS ONE
Public Library of Science (PLoS)
Fear conditioning paradigms are critical to understanding anxiety-related disorders, but studies use an inconsistent array of methods to quantify the same underlying learning process. We previously demonstrated that selecting trials from different stages of experimental phases and inconsistently using averaged rather than trial-by-trial analysis can deliver significantly divergent outcomes, regardless of whether the data are analysed with extinction as a single effect, as a learning process over the course of the experiment, or in relation to acquisition learning. Since small sample sizes are often cited as a source of poor replicability in psychological science, in this study we aimed to investigate whether changes in sample size influence the divergences that occur when different kinds of fear conditioning analyses are used. We analysed a large data set of fear acquisition and extinction learning (N = 379), measured via skin conductance responses (SCRs), which was resampled with replacement to create a wide range of bootstrapped databases (N = 30, 60, 120, 180, 240, 360, 480, 600, 720, 840, 960, 1080, 1200, 1500, 1750, and 2000) and tested whether the use of different analyses continued to produce divergent outcomes. We found that sample size did not significantly influence the effects of inconsistent analytic strategy when no group-level effect was included, but found strategy-dependent effects when group-level effects were simulated. These findings suggest that confounds incurred by inconsistent analyses remain stable in the face of sample size variation, but only under specific circumstances, with overall robustness hinging strongly on the relationship between experimental design and choice of analyses. This supports the view that such variations reflect a more fundamental confound in psychological science: the measurement of a single process by multiple methods.
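The resampling procedure described in the abstract — drawing participants with replacement from the original N = 379 sample to build bootstrapped data sets across the stated range of sizes — can be sketched as follows. This is a minimal illustration, not the authors' analysis code; the data array and its dimensions (participants × trials) are simulated stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for the original data set of N = 379 participants'
# skin conductance responses (rows = participants, columns = trials).
original = rng.normal(loc=0.5, scale=0.2, size=(379, 16))

def bootstrap_sample(data, n, rng):
    """Resample participants (rows) with replacement to a new sample size n."""
    idx = rng.integers(0, data.shape[0], size=n)
    return data[idx]

# The range of bootstrapped sample sizes reported in the abstract.
sizes = [30, 60, 120, 180, 240, 360, 480, 600, 720, 840,
         960, 1080, 1200, 1500, 1750, 2000]
resampled = {n: bootstrap_sample(original, n, rng) for n in sizes}
```

Each bootstrapped database keeps the trial structure of the original data but varies only in the number of (re-drawn) participants, so any divergence between analytic strategies can be attributed to sample size rather than to the data themselves.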
College of Human and Health Sciences
The author(s) received no specific funding for this work.