Yu and Li present DeepSearch, a deep learning-based method for peptide identification in mass spectrometry, offering unbiased, data-driven scoring without statistical estimation. It accurately profiles post-translational modifications in a zero-shot manner.
Vashishtha and colleagues reuse and evaluate AtomAI, a machine learning framework originally developed for analysing microscopy data, across a range of materials characterization tasks.
Previous studies have explored the integration of episodic memory into reinforcement learning and control. Inspired by hippocampal memory, Freire et al. develop a model that improves learning speed and stability by storing experiences as sequences, demonstrating resilience and efficiency under memory constraints.
Liu et al. develop a framework called ARNLE to explore the host tropism of SARS-CoV-2 and find a shift from weak to strong primate tropism. The key mutations involved in this shift can be analysed to advance research on emerging viruses.
Self-supervised learning techniques are powerful assets for enabling deep insights into complex, unlabelled single-cell genomic data. Richter et al. here benchmark the applicability of self-supervised architectures in key downstream representation learning scenarios.
Matthews et al. present a protein sequence embedding based on data from ancestral sequences that allows machine learning to be used for tasks where training data are scarce or expensive.
A kernel approximation method that enables linear-complexity attention computation via analogue in-memory computing (AIMC) to deliver superior energy efficiency is demonstrated on a multicore AIMC chip.
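A minimal sketch of why kernelizing attention yields linear complexity, using generic Performer-style positive random features; this is an illustration under our own assumptions, not the authors' analogue in-memory implementation, and all function and parameter names here are hypothetical.

```python
import numpy as np

def random_feature_map(x, projection, eps=1e-6):
    """Positive random features phi(x) whose inner products approximate the softmax kernel."""
    d = x.shape[-1]
    x = x / d**0.25                                        # scaling as in scaled dot-product attention
    proj = x @ projection                                  # (n, m) random projections
    # exp(w.x - |x|^2/2) is non-negative and approximates exp(q.k / sqrt(d)) in expectation
    return np.exp(proj - 0.5 * np.sum(x**2, axis=-1, keepdims=True)) + eps

def linear_attention(q, k, v, num_features=128, seed=0):
    """Attention in O(n*m*d) instead of O(n^2*d): the n x n attention matrix is never formed."""
    d = q.shape[-1]
    rng = np.random.default_rng(seed)
    projection = rng.normal(size=(d, num_features))        # shared random projection for q and k
    q_f = random_feature_map(q, projection)                # (n, m)
    k_f = random_feature_map(k, projection)                # (n, m)
    kv = k_f.T @ v                                         # (m, d): keys/values summarized once
    normalizer = q_f @ k_f.sum(axis=0)                     # (n,): row-wise normalization
    return (q_f @ kv) / normalizer[:, None]

# Usage: 1024 tokens, 64-dimensional head
n, d = 1024, 64
q, k, v = (np.random.randn(n, d) for _ in range(3))
out = linear_attention(q, k, v)
print(out.shape)  # (1024, 64)
```

Because keys and values are collapsed into a small (m, d) summary before being queried, the cost grows linearly with sequence length, which is what makes such kernel approximations attractive for energy-efficient hardware.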
Survival prediction models used in healthcare usually assume that training and test data share a similar distribution, an assumption that often fails in real-world settings. Cui and colleagues develop a stable Cox regression model that can identify stable variables for predicting survival outcomes under distribution shifts.
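For orientation, the sketch below shows a weighted Cox negative partial likelihood, the building block such an approach rests on; it is not the authors' stability reweighting scheme, and the uniform weights used here are a placeholder for weights that, in a stable-learning setting, would be learned to decorrelate covariates.

```python
import numpy as np

def weighted_cox_nll(beta, X, time, event, weights):
    """Weighted negative log partial likelihood (Breslow form; ties not handled carefully)."""
    order = np.argsort(-time)                              # sort by time, descending
    X, time, event, w = X[order], time[order], event[order], weights[order]
    risk = X @ beta                                        # linear risk scores
    # cumulative weighted risk set: all samples with time >= the current event time
    log_risk_set = np.log(np.cumsum(w * np.exp(risk)))
    return -np.sum(w * event * (risk - log_risk_set))

# Usage with synthetic data and uniform (placeholder) weights
rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -0.5, 0.0, 0.0, 0.3])
time = rng.exponential(scale=np.exp(-X @ beta_true))       # hazard increases with X @ beta_true
event = rng.random(n) < 0.8                                # ~80% observed events, rest censored
weights = np.ones(n)
print(weighted_cox_nll(np.zeros(p), X, time, event, weights))
```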
Approaches are needed to explore regulatory RNA motifs in plants. An interpretable RNA foundation model is developed, trained on thousands of plant transcriptomes, which achieves superior performance in plant RNA biology tasks and enables the discovery of functional RNA sequence and structure motifs across transcriptomes.
Ektefaie and colleagues introduce the spectral framework for model evaluation (SPECTRA) to measure the generalizability of machine learning models for molecular sequences.
Reconstructing and predicting spatiotemporal dynamics from sensor data is challenging, especially when sensors are sparse. Li et al. address this with self-supervised pretraining of a generative model, improving accuracy and generalization.
Interactive robots can be used to study animal social behaviour. Here, imitation learning enables a rat-like robot to learn subtle templates of social behaviour, and the robot is shown to modulate the emotional states of rats through varied interaction patterns.
Predicting nanobody–antigen interactions is crucial for advancing nanobody development in drug discovery, but it remains a challenging task. Deng et al. propose DeepNano to enhance the prediction of nanobody–antigen interactions, facilitating virtual screening of target nanobodies.
Rate- and noise-induced transitions pose key tipping risks for ecosystems and climate subsystems, yet a predictive theory for them has so far been lacking. This study introduces deep learning as an effective tool for predicting these tipping events.
Ock and colleagues explore predictive and generative language models for improving adsorption energy prediction in catalysis without relying on exact atomic positions. The method involves aligning a language model’s latent space with graph neural networks using graph-assisted pretraining.
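A minimal sketch of what aligning a language model's latent space with graph neural network embeddings can look like, using a generic CLIP-style contrastive objective; this is an illustration under our own assumptions rather than the authors' graph-assisted pretraining pipeline, and the encoder outputs and dimensions below are hypothetical.

```python
import torch
import torch.nn.functional as F

def alignment_loss(text_emb, graph_emb, temperature=0.07):
    """Symmetric InfoNCE loss that pulls matched text/graph embedding pairs together."""
    text_emb = F.normalize(text_emb, dim=-1)
    graph_emb = F.normalize(graph_emb, dim=-1)
    logits = text_emb @ graph_emb.t() / temperature        # (batch, batch) cosine similarities
    targets = torch.arange(logits.size(0))                 # matched pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Usage with dummy embeddings: a batch of 8 adsorbate-catalyst systems, 256-dim projections
text_emb = torch.randn(8, 256)    # e.g. pooled language-model states for a textual system description
graph_emb = torch.randn(8, 256)   # e.g. pooled GNN node features for the corresponding atomic graph
print(float(alignment_loss(text_emb, graph_emb)))
```

The appeal of such alignment is that, once the text encoder inherits structure-aware geometry from the graph encoder, downstream property prediction can proceed from textual descriptions without exact atomic positions.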
Why brain-like feature extraction emerges in large language models (LLMs) remains elusive. Mischler, Li and colleagues demonstrate that high-performing LLMs not only predict neural responses more accurately than other LLMs but also align more closely with the hierarchical language processing pathway in the brain, revealing parallels between these models and human cognitive mechanisms.