{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,12,4]],"date-time":"2024-12-04T18:47:14Z","timestamp":1733338034509,"version":"3.30.1"},"update-to":[{"updated":{"date-parts":[[2024,11,21]],"date-time":"2024-11-21T00:00:00Z","timestamp":1732147200000},"DOI":"10.1371\/journal.pcbi.1012537","type":"new_version","label":"New version"}],"reference-count":68,"publisher":"Public Library of Science (PLoS)","issue":"11","license":[{"start":{"date-parts":[[2024,11,11]],"date-time":"2024-11-11T00:00:00Z","timestamp":1731283200000},"content-version":"vor","delay-in-days":0,"URL":"http:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Del Monte Institute","award":["Pilot Grant"]},{"DOI":"10.13039\/100021528","name":"Advancing a Healthier Wisconsin Endowment","doi-asserted-by":"publisher","id":[{"id":"10.13039\/100021528","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100000001","name":"National Science Foundation","doi-asserted-by":"publisher","award":["1652127"],"id":[{"id":"10.13039\/100000001","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100000923","name":"Australian Research Council","doi-asserted-by":"publisher","award":["DISCOVERY award DP200102188"],"id":[{"id":"10.13039\/501100000923","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["www.ploscompbiol.org"],"crossmark-restriction":false},"short-container-title":["PLoS Comput Biol"],"abstract":"<jats:p>To transform continuous speech into words, the human brain must resolve variability across utterances in intonation, speech rate, volume, accents and so on. A promising approach to explaining this process has been to model electroencephalogram (EEG) recordings of brain responses to speech. Contemporary models typically invoke context invariant speech categories (e.g. phonemes) as an intermediary representational stage between sounds and words. 
However, such models may not capture the complete picture because they do not model the brain mechanism that categorizes sounds and consequently may overlook associated neural representations. By providing end-to-end accounts of speech-to-text transformation, new deep-learning systems could enable more complete brain models. We model EEG recordings of audiobook comprehension with the deep-learning speech recognition system Whisper. We find that (1) Whisper provides a self-contained EEG model of an intermediary representational stage that reflects elements of prelexical and lexical representation and prediction; (2) EEG modeling is more accurate when informed by 5-10s of speech context, which traditional context invariant categorical models do not encode; (3) Deep Whisper layers encoding linguistic structure were more accurate EEG models of selectively attended speech in two-speaker \u201ccocktail party\u201d listening conditions than early layers encoding acoustics. No such layer depth advantage was observed for unattended speech, consistent with a more superficial level of linguistic processing in the brain.<\/jats:p>","DOI":"10.1371\/journal.pcbi.1012537","type":"journal-article","created":{"date-parts":[[2024,11,11]],"date-time":"2024-11-11T18:42:35Z","timestamp":1731350555000},"page":"e1012537","update-policy":"https:\/\/doi.org\/10.1371\/journal.pcbi.corrections_policy","source":"Crossref","is-referenced-by-count":0,"title":["Deep-learning models reveal how context and listener attention shape electrophysiological correlates of speech-to-language transformation"],"prefix":"10.1371","volume":"20","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-0316-9787","authenticated-orcid":true,"given":"Andrew 
J.","family":"Anderson","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6387-4181","authenticated-orcid":true,"given":"Chris","family":"Davis","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2498-6631","authenticated-orcid":true,"given":"Edmund C.","family":"Lalor","sequence":"additional","affiliation":[]}],"member":"340","published-online":{"date-parts":[[2024,11,11]]},"reference":[{"key":"pcbi.1012537.ref001","doi-asserted-by":"crossref","first-page":"431","DOI":"10.1037\/h0020279","article-title":"Perception of the speech code","volume":"74","author":"AM Liberman","year":"1967","journal-title":"Psychol. Rev"},{"key":"pcbi.1012537.ref002","doi-asserted-by":"crossref","first-page":"493","DOI":"10.1007\/BF00231983","article-title":"Spatiotemporal stability and patterning of speech movement sequences","volume":"104","author":"A Smith","year":"1995","journal-title":"Experimental Brain Research"},{"key":"pcbi.1012537.ref003","doi-asserted-by":"crossref","first-page":"2457","DOI":"10.1016\/j.cub.2015.08.030","article-title":"Low-Frequency Cortical Entrainment to Speech Reflects Phoneme-Level Processing","volume":"25","author":"GM Di Liberto","year":"2015","journal-title":"Curr Biol"},{"issue":"7","key":"pcbi.1012537.ref004","article-title":"Heard or Understood? 
Neural Tracking of Language Features in a Comprehensible Story, an Incomprehensible Story and a Word List","volume":"10","author":"M Gillis","year":"2023"},{"key":"pcbi.1012537.ref005","doi-asserted-by":"crossref","first-page":"e82386","DOI":"10.7554\/eLife.82386","article-title":"A tradeoff between acoustic and linguistic feature encoding in spoken language comprehension","volume":"12","author":"F Tezcan","year":"2023","journal-title":"Elife"},{"issue":"1","key":"pcbi.1012537.ref006","doi-asserted-by":"crossref","first-page":"677","DOI":"10.1038\/s41467-024-44844-9","article-title":"Acoustic and language-specific sources for phonemic abstraction from speech","volume":"15","author":"A Mai","year":"2024","journal-title":"Nature Communications"},{"key":"pcbi.1012537.ref007","doi-asserted-by":"crossref","first-page":"1924","DOI":"10.1016\/j.cub.2019.04.067","article-title":"Simple acoustic features can explain phoneme-based predictions of cortical responses to speech","volume":"29","author":"C Daube","year":"2019","journal-title":"Current Biology"},{"key":"pcbi.1012537.ref008","article-title":"Robust speech recognition via large-scale weak supervision","author":"A Radford","year":"2022","journal-title":"arXiv preprint arXiv:2212.04356"},{"key":"pcbi.1012537.ref009","doi-asserted-by":"crossref","first-page":"803","DOI":"10.1016\/j.cub.2018.01.080","article-title":"Electrophysiological Correlates of Semantic Dissimilarity Reflect the Comprehension of Natural","volume":"28","author":"MP Broderick","year":"2018","journal-title":"Narrative Speech. 
Curr Biol"},{"key":"pcbi.1012537.ref010","doi-asserted-by":"crossref","first-page":"3976","DOI":"10.1016\/j.cub.2018.10.042","article-title":"Rapid Transformation from Auditory to Linguistic Representations of Continuous Speech","volume":"28","author":"C Brodbeck","year":"2018","journal-title":"Curr Biol"},{"key":"pcbi.1012537.ref011","doi-asserted-by":"crossref","first-page":"7564","DOI":"10.1523\/JNEUROSCI.0584-19.2019","article-title":"Semantic Context Enhances the Early Auditory Encoding of Natural Speech","volume":"39","author":"M. P. Broderick","year":"2019","journal-title":"J Neurosci"},{"key":"pcbi.1012537.ref012","doi-asserted-by":"crossref","first-page":"e2201968119","DOI":"10.1073\/pnas.2201968119","article-title":"A hierarchy of linguistic predictions during natural language comprehension","volume":"119","author":"M Heilbron","year":"2022","journal-title":"Proceedings of the National Academy of Sciences"},{"key":"pcbi.1012537.ref013","article-title":"Human-like Linguistic Biases in Neural Speech Models: Phonetic Categorization and Phonotactic Constraints in Wav2Vec2.0","author":"MH Kloots","year":"2024","journal-title":"arXiv:2407.03005"},{"key":"pcbi.1012537.ref014","article-title":"Perception of Phonological Assimilation by Neural Speech Recognition Models","author":"C Pouw","year":"2024","journal-title":"arXiv:2406.15265"},{"key":"pcbi.1012537.ref015","unstructured":"Baevski A, Zhou H, Mohamed A, Auli M. 2020. wav2vec 2.0: a framework for self-supervised learning of speech representations. In Proceedings of the 34th International Conference on Neural Information Processing Systems 2020 Dec 6(pp. 
12449\u201312460)."},{"key":"pcbi.1012537.ref016","doi-asserted-by":"crossref","first-page":"251","DOI":"10.21437\/Interspeech.2023-2359","volume-title":"INTERSPEECH","author":"K Martin","year":"2023"},{"key":"pcbi.1012537.ref017","doi-asserted-by":"crossref","first-page":"372","DOI":"10.1162\/tacl_a_00656","article-title":"What Do Self-Supervised Speech Models Know About Words?","volume":"12","author":"A Pasad","year":"2024","journal-title":"Transactions of the Association for Computational Linguistics"},{"key":"pcbi.1012537.ref018","first-page":"5998","article-title":"Attention is all you need","author":"A Vaswani","year":"2017","journal-title":"In Advances in Neural Information Processing Systems"},{"issue":"8","key":"pcbi.1012537.ref019","first-page":"9","article-title":"Language models are unsupervised multitask learners","volume":"1","author":"A Radford","year":"2019","journal-title":"OpenAI blog"},{"key":"pcbi.1012537.ref020","doi-asserted-by":"crossref","unstructured":"Jain S, Huth AG (2018) Incorporating context into language encoding models for fMRI. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems, pp 6629\u20136638. 
Montreal: Curran.","DOI":"10.1101\/327601"},{"key":"pcbi.1012537.ref021","first-page":"14928","volume-title":"Advances in Neural Information Processing Systems","author":"M Toneva","year":"2019"},{"issue":"2","key":"pcbi.1012537.ref022","doi-asserted-by":"crossref","first-page":"589","DOI":"10.1109\/TNNLS.2020.3027595","article-title":"Neural encoding and decoding with distributed sentence representations","volume":"32","author":"J Sun","year":"2020","journal-title":"IEEE Transactions on Neural Networks and Learning Systems"},{"issue":"18","key":"pcbi.1012537.ref023","doi-asserted-by":"crossref","first-page":"4100","DOI":"10.1523\/JNEUROSCI.1152-20.2021","article-title":"Deep artificial neural networks reveal a distributed cortical network encoding propositional sentence-level meaning","volume":"41","author":"AJ Anderson","year":"2021","journal-title":"Journal of Neuroscience"},{"issue":"45","key":"pcbi.1012537.ref024","doi-asserted-by":"crossref","first-page":"e2105646118","DOI":"10.1073\/pnas.2105646118","article-title":"The neural architecture of language: Integrative modeling converges on predictive processing","volume":"118","author":"M Schrimpf","year":"2021","journal-title":"Proceedings of the National Academy of Sciences"},{"key":"pcbi.1012537.ref025","article-title":"Inductive biases, pretraining and fine-tuning jointly account for brain responses to speech","author":"J. 
Millet","year":"2021","journal-title":"arXiv:2103.01032"},{"issue":"1","key":"pcbi.1012537.ref026","doi-asserted-by":"crossref","first-page":"16327","DOI":"10.1038\/s41598-022-20460-9","article-title":"Deep language algorithms predict semantic comprehension from brain activity","volume":"12","author":"C Caucheteux","year":"2022","journal-title":"Scientific Reports"},{"issue":"3","key":"pcbi.1012537.ref027","doi-asserted-by":"crossref","first-page":"430","DOI":"10.1038\/s41562-022-01516-2","article-title":"Evidence of a predictive coding hierarchy in the human brain listening to speech","volume":"7","author":"C Caucheteux","year":"2023","journal-title":"Nature Human Behaviour"},{"issue":"3","key":"pcbi.1012537.ref028","doi-asserted-by":"crossref","first-page":"369","DOI":"10.1038\/s41593-022-01026-4","article-title":"Shared computational principles for language processing in humans and deep language models","volume":"25","author":"A Goldstein","year":"2022","journal-title":"Nature neuroscience"},{"key":"pcbi.1012537.ref029","article-title":"Scaling laws for language encoding models in fMRI","volume":"36","author":"R Antonello","year":"2024","journal-title":"Advances in Neural Information Processing Systems"},{"key":"pcbi.1012537.ref030","first-page":"33428","article-title":"Toward a realistic model of speech processing in the brain with self-supervised learning","volume":"35","author":"J Millet","year":"2022","journal-title":"Advances in Neural Information Processing Systems"},{"key":"pcbi.1012537.ref031","unstructured":"Vaidya AR, Jain S, Huth AG. 2022. Self-supervised models of audio effectively explain human cortical responses to speech. 
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:21927\u201321944."},{"issue":"12","key":"pcbi.1012537.ref032","doi-asserted-by":"crossref","first-page":"2213","DOI":"10.1038\/s41593-023-01468-4","article-title":"Dissecting neural computations in the human auditory pathway using deep neural networks for speech","volume":"26","author":"Y Li","year":"2023","journal-title":"Nature Neuroscience"},{"key":"pcbi.1012537.ref033","article-title":"Deep speech-to-text models capture the neural basis of spontaneous speech in everyday conversations","author":"A Goldstein","year":"2023","journal-title":"bioRxiv"},{"issue":"7","key":"pcbi.1012537.ref034","doi-asserted-by":"crossref","first-page":"1697","DOI":"10.1093\/cercor\/bht355","article-title":"Attentional selection in a cocktail party environment can be decoded from single-trial EEG","volume":"25","author":"JA O\u2019Sullivan","year":"2015","journal-title":"Cerebral cortex"},{"issue":"7397","key":"pcbi.1012537.ref035","doi-asserted-by":"crossref","first-page":"233","DOI":"10.1038\/nature11020","article-title":"Selective cortical representation of attended speaker in multi-talker speech perception","volume":"485","author":"N Mesgarani","year":"2012","journal-title":"Nature"},{"issue":"29","key":"pcbi.1012537.ref036","doi-asserted-by":"crossref","first-page":"11854","DOI":"10.1073\/pnas.1205381109","article-title":"Emergence of neural encoding of auditory objects while listening to competing speakers","volume":"109","author":"N Ding","year":"2012","journal-title":"Proceedings of the National Academy of Sciences"},{"key":"pcbi.1012537.ref037","doi-asserted-by":"crossref","first-page":"33","DOI":"10.1016\/j.neuroimage.2018.10.057","article-title":"Late cortical tracking of ignored speech facilitates neural selectivity in acoustically challenging conditions","volume":"186","author":"L. 
Fiedler","year":"2019","journal-title":"NeuroImage"},{"key":"pcbi.1012537.ref038","doi-asserted-by":"crossref","first-page":"3451","DOI":"10.1109\/TASLP.2021.3122291","article-title":"HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units","volume":"29","author":"WN Hsu","year":"2021","journal-title":"IEEE\/ACM Transactions on Audio, Speech, and Language Processing"},{"key":"pcbi.1012537.ref039","article-title":"Layer-wise Analysis of a Self-supervised Speech Representation Model","author":"A Pasad","year":"2021","journal-title":"arXiv:2107.04734"},{"key":"pcbi.1012537.ref040","article-title":"Data from: Electrophysiological correlates of semantic dissimilarity reflect the comprehension of natural, narrative speech [Dataset]","author":"MP Broderick","year":"2019","journal-title":"Dryad"},{"volume-title":"The Old man and the sea","year":"1952","author":"E. Hemingway","key":"pcbi.1012537.ref041"},{"issue":"4427","key":"pcbi.1012537.ref042","doi-asserted-by":"crossref","first-page":"203","DOI":"10.1126\/science.7350657","article-title":"Reading senseless sentences: Brain potentials reflect semantic incongruity","volume":"207","author":"M Kutas","year":"1980","journal-title":"Science"},{"key":"pcbi.1012537.ref043","doi-asserted-by":"crossref","first-page":"621","DOI":"10.1146\/annurev.psych.093008.131123","article-title":"Thirty years and counting: finding meaning in the N400 component of the event-related brain potential (ERP)","volume":"62","author":"M Kutas","year":"2011","journal-title":"Annual review of psychology"},{"key":"pcbi.1012537.ref044","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1016\/j.bandl.2014.10.006","article-title":"The ERP response to the amount of information conveyed by words in sentences","volume":"140","author":"SL Frank","year":"2015","journal-title":"Brain and 
language"},{"issue":"27","key":"pcbi.1012537.ref045","doi-asserted-by":"crossref","first-page":"6539","DOI":"10.1523\/JNEUROSCI.3267-16.2017","article-title":"The hierarchical cortical organization of human speech processing","volume":"37","author":"WA de Heer","year":"2017","journal-title":"Journal of Neuroscience"},{"volume-title":"20,000 Leagues under the Sea","year":"1869","author":"J Verne","key":"pcbi.1012537.ref046"},{"volume-title":"Journey to the Centre of the Earth","year":"1864","author":"J Verne","key":"pcbi.1012537.ref047"},{"key":"pcbi.1012537.ref048","article-title":"Decoding speech from non-invasive brain recordings","author":"A D\u00e9fossez","year":"2022","journal-title":"arXiv preprint arXiv:2208.12266"},{"key":"pcbi.1012537.ref049","article-title":"Improved Decoding of Attentional Selection in Multi-Talker Environments with Self-Supervised Learned Speech Representation","author":"C Han","year":"2023","journal-title":"arXiv preprint arXiv:2302.05756"},{"key":"pcbi.1012537.ref050","doi-asserted-by":"crossref","unstructured":"Pennington J, Socher R, Manning CD. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). 1532\u20131543 (Association for Computational Linguistics, Doha, Qatar, 2014).","DOI":"10.3115\/v1\/D14-1162"},{"issue":"5","key":"pcbi.1012537.ref051","doi-asserted-by":"crossref","first-page":"393","DOI":"10.1038\/nrn2113","article-title":"The cortical organization of speech processing","volume":"8","author":"G Hickok","year":"2007","journal-title":"Nature reviews neuroscience"},{"key":"pcbi.1012537.ref052","doi-asserted-by":"crossref","first-page":"101","DOI":"10.1121\/1.399955","article-title":"Articulation rate and the duration of syllables and stress groups in connected speech","volume":"88","author":"TH Crystal","year":"1990","journal-title":"J. Acoust. Soc. 
Am"},{"key":"pcbi.1012537.ref053","article-title":"Llama: Open and efficient foundation language models","author":"H Touvron","year":"2023","journal-title":"arXiv preprint arXiv:2302.13971"},{"issue":"6","key":"pcbi.1012537.ref054","doi-asserted-by":"crossref","first-page":"1505","DOI":"10.1109\/JSTSP.2022.3188113","article-title":"Wavlm: Large-scale self-supervised pretraining for full stack speech processing","volume":"16","author":"S Chen","year":"2022","journal-title":"IEEE Journal of Selected Topics in Signal Processing"},{"issue":"9","key":"pcbi.1012537.ref055","doi-asserted-by":"crossref","first-page":"1497","DOI":"10.1111\/j.1460-9568.2012.08060.x","article-title":"At what time is the cocktail party? A late locus of selective attention to natural speech","volume":"35","author":"AJ Power","year":"2012","journal-title":"European Journal of Neuroscience"},{"key":"pcbi.1012537.ref056","article-title":"Hugging Face\u2019s transformers: state-of-the-art natural language processing","author":"T Wolf","year":"2019","journal-title":"arXiv 1910.03771"},{"key":"pcbi.1012537.ref057","doi-asserted-by":"crossref","first-page":"139","DOI":"10.1097\/AUD.0b013e31816453dc","article-title":"Human cortical responses to the speech envelope","volume":"29","author":"SJ Aiken","year":"2008","journal-title":"Ear Hear"},{"key":"pcbi.1012537.ref058","doi-asserted-by":"crossref","first-page":"201","DOI":"10.1016\/j.neuroimage.2018.09.006","article-title":"\u2018Comparing the potential of MEG and EEG to uncover brain tracking of speech temporal envelope\u2019","volume":"184","author":"F Destoky","year":"2019","journal-title":"Neuroimage"},{"key":"pcbi.1012537.ref059","doi-asserted-by":"crossref","first-page":"5728","DOI":"10.1523\/JNEUROSCI.5297-12.2013","article-title":"Adaptive temporal encoding leads to a background-insensitive cortical representation of speech","volume":"33","author":"N Ding","year":"2013","journal-title":"J 
Neurosci"},{"key":"pcbi.1012537.ref060","doi-asserted-by":"crossref","first-page":"5750","DOI":"10.1523\/JNEUROSCI.1828-18.2019","article-title":"Neural Speech Tracking in the Theta and in the Delta Frequency Band Differentially Encode Clarity and Comprehension of Speech in Noise","volume":"39","author":"O Etard","year":"2019","journal-title":"J Neurosci"},{"key":"pcbi.1012537.ref061","doi-asserted-by":"crossref","first-page":"189","DOI":"10.1111\/j.1460-9568.2009.07055.x","article-title":"Neural responses to uninterrupted natural speech can be extracted with precise temporal resolution","volume":"31","author":"EC Lalor","year":"2010","journal-title":"European Journal of Neuroscience"},{"key":"pcbi.1012537.ref062","doi-asserted-by":"crossref","first-page":"15564","DOI":"10.1523\/JNEUROSCI.3065-09.2009","article-title":"Temporal envelope of time-compressed speech represented in the human auditory cortex","volume":"29","author":"KV Nourski","year":"2009","journal-title":"J Neurosci"},{"key":"pcbi.1012537.ref063","doi-asserted-by":"crossref","first-page":"e1001251","DOI":"10.1371\/journal.pbio.1001251","article-title":"Reconstructing speech from human auditory cortex","volume":"10","author":"BN Pasley","year":"2012","journal-title":"PLoS Biol"},{"key":"pcbi.1012537.ref064","doi-asserted-by":"crossref","first-page":"2222","DOI":"10.1109\/TASL.2006.874669","article-title":"A Dynamic Compressive Gammachirp Auditory Filterbank","volume":"14","author":"T Irino","year":"2006","journal-title":"IEEE Trans Audio Speech Lang Process"},{"key":"pcbi.1012537.ref065","doi-asserted-by":"crossref","DOI":"10.7554\/eLife.58077","article-title":"Rapid computations of spectrotemporal prediction error support perception of degraded speech","volume":"9","author":"E Sohoglu","year":"2020","journal-title":"Elife"},{"key":"pcbi.1012537.ref066","doi-asserted-by":"crossref","first-page":"eaay6279","DOI":"10.1126\/sciadv.aay6279","article-title":"A speech envelope landmark for syllable encoding 
in human superior temporal gyrus","volume":"5","author":"Y Oganian","year":"2019","journal-title":"Sci Adv"},{"key":"pcbi.1012537.ref067","doi-asserted-by":"crossref","first-page":"604","DOI":"10.3389\/fnhum.2016.00604","article-title":"The multivariate temporal response function (mTRF) toolbox: a MATLAB toolbox for relating neural signals to continuous stimuli","volume":"10","author":"MJ Crosse","year":"2016","journal-title":"Frontiers in human neuroscience"},{"issue":"1","key":"pcbi.1012537.ref068","doi-asserted-by":"crossref","first-page":"289","DOI":"10.1111\/j.2517-6161.1995.tb02031.x","article-title":"Controlling the false discovery rate: a practical and powerful approach to multiple testing","volume":"57","author":"Y Benjamini","year":"1995","journal-title":"Journal of the Royal statistical society: series B (Methodological)"}],"updated-by":[{"updated":{"date-parts":[[2024,11,21]],"date-time":"2024-11-21T00:00:00Z","timestamp":1732147200000},"DOI":"10.1371\/journal.pcbi.1012537","type":"new_version","label":"New version"}],"container-title":["PLOS Computational Biology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dx.plos.org\/10.1371\/journal.pcbi.1012537","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,11,21]],"date-time":"2024-11-21T19:07:59Z","timestamp":1732216079000},"score":1,"resource":{"primary":{"URL":"https:\/\/dx.plos.org\/10.1371\/journal.pcbi.1012537"}},"subtitle":[],"editor":[{"given":"Daniele","family":"Marinazzo","sequence":"first","affiliation":[]}],"short-title":[],"issued":{"date-parts":[[2024,11,11]]},"references-count":68,"journal-issue":{"issue":"11","published-online":{"date-parts":[[2024,11,11]]}},"URL":"https:\/\/doi.org\/10.1371\/journal.pcbi.1012537","relation":{},"ISSN":["1553-7358"],"issn-type":[{"type":"electronic","value":"1553-7358"}],"subject":[],"published":{"date-parts":[[2024,11,11]]}}}