Sigma Nova is recruiting M2 / PhD research interns to work on foundation models for brain signals, with a focus on robust generalization under strong data shift (cross-subject, cross-session).
You’ll work primarily with EEG, with opportunities to connect to other modalities (MEG/iEEG/fMRI), and contribute to publication-oriented research at the intersection of machine learning and neuroscience.
1) Cross-subject alignment & generalization
Develop and benchmark methods that improve cross-subject transfer for brain foundation models under strong subject and session shifts. The focus is on zero-/few-shot generalization and on training-free or lightweight adaptation approaches (e.g., alignment losses, parameter-efficient tuning, or post-hoc representation alignment). Candidate directions include geometry-aware adaptation techniques and recent advances in relative representations.
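To make the post-hoc representation alignment direction concrete, here is a minimal sketch of one common baseline in cross-subject BCI transfer, Euclidean alignment of per-subject covariances: each subject's trials are whitened by the inverse square root of that subject's mean spatial covariance before data are pooled across subjects. The function name and array shapes are illustrative assumptions, not a prescribed implementation.

    import numpy as np

    def euclidean_align(trials, eps=1e-12):
        # trials: (n_trials, n_channels, n_times) EEG trials from a single subject/session
        covs = np.stack([t @ t.T / t.shape[-1] for t in trials])  # per-trial spatial covariance
        R = covs.mean(axis=0)                                     # subject-level reference covariance
        eigvals, eigvecs = np.linalg.eigh(R)
        R_inv_sqrt = eigvecs @ np.diag(np.clip(eigvals, eps, None) ** -0.5) @ eigvecs.T
        # Whiten every trial so the subject's average covariance becomes the identity
        return np.einsum("cd,ndt->nct", R_inv_sqrt, trials)

Applied independently to each subject (and session), this makes second-order statistics comparable across recordings and can be combined with learned alignment losses or parameter-efficient tuning.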
2) Self-supervised learning for neural time series
Design and evaluate self-supervised pretraining strategies tailored to brain signals. Potential directions include new training objectives suited to non-stationary, noisy neural time series, as well as studies of pretraining dataset composition and curriculum, and of how these choices affect robustness to recording variability and downstream generalization.
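As one illustration of the kind of objective in scope, below is a hedged sketch of a masked-reconstruction pretraining loss on raw EEG windows; encoder and decoder are placeholder modules, and a real objective would likely mask contiguous time spans and treat channel structure more carefully.

    import torch

    def masked_reconstruction_loss(encoder, decoder, x, mask_ratio=0.5):
        # x: (batch, channels, time) raw EEG windows
        # Randomly mask time points (shared across channels) and zero them out
        mask = torch.rand(x.shape[0], 1, x.shape[-1], device=x.device) < mask_ratio
        x_masked = x.masked_fill(mask, 0.0)
        z = encoder(x_masked)        # latent representation of the corrupted input
        x_hat = decoder(z)           # reconstruction with the same shape as x
        # Score only the masked positions, as in masked-autoencoder-style objectives
        m = mask.float().expand_as(x)
        return ((x_hat - x) ** 2 * m).sum() / m.sum().clamp(min=1.0)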
3) Diffusion / generative models for brain data
This internship focuses on developing and comparing state-of-the-art generative models (diffusion probabilistic models, VAEs, or GANs) for EEG signal synthesis, investigating the fundamental mechanisms of generative augmentation and analyzing quality-speed trade-offs across architectures. Using the resulting data augmentation, we will study the adversarial robustness and generalizability of the trained models. Moving toward foundation models for EEG, we will also analyze the impact of such synthetic data augmentation during pretraining.
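For orientation, here is a minimal sketch of the standard noise-prediction training objective of a denoising diffusion model applied to EEG windows; the denoiser network and the precomputed alphas_cumprod schedule are assumptions about the surrounding training code, not a fixed design for the internship.

    import torch
    import torch.nn.functional as F

    def diffusion_training_loss(denoiser, x0, alphas_cumprod):
        # x0: (batch, channels, time) clean EEG windows
        # alphas_cumprod: (num_steps,) cumulative product of the noise schedule
        t = torch.randint(0, alphas_cumprod.shape[0], (x0.shape[0],), device=x0.device)
        a_bar = alphas_cumprod[t].view(-1, 1, 1)
        noise = torch.randn_like(x0)
        # Forward diffusion: corrupt the clean signal at a random timestep t
        x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
        # The network is trained to recover the injected noise (epsilon-prediction)
        return F.mse_loss(denoiser(x_t, t), noise)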
4) Improving the generalizability of EEG models through mixup
Mixup, introduced by Zhang et al. (2017), performs data augmentation by creating convex combinations of training samples and labels, or by interpolating hidden representations at various network depths (Manifold Mixup). It has consistently improved generalization across diverse tasks despite an incomplete theoretical understanding. This internship will investigate mixup's fundamental mechanisms in EEG signal analysis, examining its robustness properties and, specifically, cross-subject generalization. Given mixup's computational efficiency compared to generative synthesis (which makes it particularly suited to large-scale training runs), understanding its theoretical foundations could unlock powerful, accessible augmentation strategies for neural signal processing.
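For reference, a minimal sketch of input-level mixup on a batch of EEG trials, following the convex-combination recipe described above; the num_classes argument, the alpha value, and the soft-label cross-entropy in the trailing comment are illustrative choices rather than the internship's fixed setup.

    import torch
    import torch.nn.functional as F

    def mixup_batch(x, y, num_classes, alpha=0.2):
        # x: (batch, channels, time) EEG trials; y: (batch,) integer class labels
        lam = torch.distributions.Beta(alpha, alpha).sample().item()  # mixing coefficient
        perm = torch.randperm(x.shape[0], device=x.device)            # random pairing of samples
        x_mix = lam * x + (1.0 - lam) * x[perm]                       # convex combination of inputs
        y_soft = F.one_hot(y, num_classes).float()
        y_mix = lam * y_soft + (1.0 - lam) * y_soft[perm]             # matching combination of labels
        return x_mix, y_mix

    # Training step: logits = model(x_mix)
    # loss = -(y_mix * logits.log_softmax(dim=-1)).sum(dim=-1).mean()

Manifold Mixup applies the same interpolation to hidden activations at a randomly chosen layer instead of the raw inputs.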
Strong Python skills; experience with PyTorch
Solid ML fundamentals (deep learning, representation learning); time-series experience is a plus but is not mandatory
Comfortable reading papers, running experiments, and writing clean, reproducible code
Prior neuro data experience is a plus, not required
1) Prescreen with Recruiter
2) Hiring meeting with a NeuroAI team member
3) Onsite interview (coding and research discussion)