Join Sigma Nova, a company focused on transforming research into industrial capabilities. As an ML Product Engineer, you will be responsible for building a capability library, finetuning and adapting models for clients, designing high-performance inference servers, and deploying models to production. You should have at least 4 years of experience in ML engineering or applied research, and be proficient in Python, PyTorch/Hugging Face, backend APIs, and DevOps. Bonus skills include MLOps, low-level optimisation, and experience with neurology or clinical data.
Sigma Nova’s growth depends on our ability to translate research into Capabilities: reusable technical building blocks (pipelines, frameworks, interpretability tools) that become permanent assets for the company and our clients.
Your mission: Transform research experiments into installed, documented, and monetizable capabilities that are ready for deployment, scaling, and client integration.
Building the Capability Library:
Support the development of modular, reusable components emerging from research (e.g., EEG preprocessing pipelines, fMRI interpretability tools).
Act as the guarantor of versionability, documentation, and reproducibility, ensuring that research outputs can be reliably reused and extended.
Finetuning & Client Adaptation:
Execute and optimise finetuning pipelines (PyTorch, Hugging Face) for diverse domains.
Adapt foundation models to specific client needs while maintaining performance and scalability.
Backend & Inference:
Design high-performance inference servers (FastAPI, gRPC) and SDKs to expose capabilities seamlessly.
Optimise for low latency, scalability, and ease of integration.
Deployment & Ops:
Deploy models via Docker/Kubernetes on Scaleway to ensure a frictionless “Lab to Production” transition.
Implement monitoring, logging, and maintenance for long-term reliability.
Experience: 4+ years in ML engineering, applied research, or deployment.
Technologies:
Python: Production-grade, typed, and documented code.
PyTorch/Hugging Face: Finetuning, optimisation, and deployment.
Backend/APIs: FastAPI, Flask, or gRPC for model exposure.
DevOps: Docker, CI/CD, cloud deployment (Scaleway/AWS).
Fluent English (English is the team's day-to-day working language)
Bonus skills:
MLOps (model versioning, monitoring, lifecycle management).
Low-level optimisation (CUDA, memory, hardware acceleration).
Interpretability tools (SHAP, LIME, SAEs) or client-facing experience.
Experience in one or more of the following fields: neurology, clinical data, generative methods, or multi-modality.
While technical excellence is critical, we place equal importance on how we work together. We believe the best teams are built on:
Integrity & Respect
Open Communication & Humility
Psychological Safety & Camaraderie
Recruitment process:
Prescreen with Paul (Head of People)
Technical Screen with one Research Scientist or Research Engineer
On-site (take-home exercise with an in-person presentation of your results, plus a behavioural interview)