Join our team as an MLOps Infrastructure Engineer, where you'll design and deploy a high-performance platform for distributed machine learning. You'll work with cloud and Kubernetes architecture, develop internal tools for MLOps, and implement DevOps best practices. This role requires 3-4 years of experience in cloud infrastructure, DevOps, or MLOps, as well as proficiency in Kubernetes, cloud GPU management, Python, and CI/CD. Bonus skills include low-level optimization, backend/API experience, and designing partner-facing tools.
Your mission: Design and deploy the platform that makes GPUs, clusters, and distributed training transparent, not just for internal research, but also as a foundation for monetizable capabilities (e.g., managed training services, optimised inference pipelines for partners).
Cloud & Kubernetes Architecture:
Build and maintain a high-performance, multi-tenant environment on Scaleway and GENCI, optimised for distributed ML.
Deploy and monitor a Slurm cluster for research workloads, ensuring seamless integration with Scaleway’s infrastructure.
Automate scaling, resource allocation, and cost management to avoid technical debt.
MLOps & Internal Tools:
Develop and enhance our internal orchestrator to simplify distributed training (FSDP, data pipelines) for both researchers and external users.
Create reusable frameworks for monitoring, logging, efficiency, and cost tracking.
Collaborate with research teams to industrialise workflows (e.g., model alignment, large-scale finetuning) and package them as deployable capabilities.
DevOps & Software Craftsmanship:
Implement Infrastructure-as-Code (Terraform/Pulumi) for reproducibility and scalability.
Write clean, typed, and documented Python code.
Troubleshoot at the intersection of hardware (GPUs, networking) and software (PyTorch, CUDA), ensuring robustness for both internal and external use cases.
Experience: 3–4 years in cloud infrastructure, DevOps, or MLOps (research or industry).
Technologies:
Kubernetes/Docker: Advanced orchestration and containerization.
Cloud GPU Management: Scaleway, AWS/GCP (clusters, networking, storage).
Python: Proficiency in PEP standards, typing, and testing.
MLOps: Data pipelines, distributed training (PyTorch, FSDP), monitoring.
CI/CD: Pipeline setup and maintenance.
Fluent English (the team works in English day to day).
Bonus skills:
Low-level optimisation (Triton, CUDA), HPC, or large-scale training experience.
Backend/APIs (FastAPI, gRPC) for exposing models or services.
Experience designing partner-facing tools or managed services.
While technical excellence is critical, we place equal importance on how we work together. We believe the best teams are built on:
Integrity & Respect
Open Communication & Humility
Psychological Safety & Camaraderie
Hiring process:
Prescreen with Paul (Head of People)
Technical Screen with one Research Scientist or Research Engineer
On-site (take-home exercise with a debrief presentation, OR on-site live interviews plus a behavioural interview)
Meet Paul, Head of Talent Acquisition