Sr. MLOps Engineer

Freelance
Paris, Barcelona
Fully-remote
Salary: Not specified

Lenstra




Job description

Lenstra was founded by engineers who are passionate about Computer Science and have a proven track record of delivering top-quality solutions to their customers. By combining engineering excellence with vision, we have served top-tier clients across a variety of industries, including Banking/Insurance, Luxury, and Tech.

We help our clients solve their most difficult problems in Cloud Computing & DevOps, Data Platforms, and IT Security by taking a holistic view of their environment and building solutions that are often complex but always relevant, helping them accelerate their business.

As a Senior MLOps Engineer, you will build and operate the platform and tooling that powers our client's identity-verification products. You will join a team supporting Applied Scientists and Machine Learning Engineers across multiple countries. In this role, you will help accelerate the path from ML research to production by building intuitive platform abstractions that let engineers focus on model innovation rather than infrastructure complexity.

Location: Based in France, Portugal, Spain, or the UK; fully remote, with occasional travel to one of the HQs.

Key Responsibilities:

  • Run and evolve the ML compute layer on Kubernetes/EKS (CPU/GPU) for multi-tenant workloads, and make workloads portable across regions (region-aware scheduling, cross-region data access, and artifact portability).

  • Operate Argo Workflows and Dask Gateway as reliable, self-serve services used by engineers and researchers to orchestrate data prep, training, evaluation, and large-scale batch compute (installation, upgrades, security, quotas, autoscaling); a sketch of this self-serve usage follows the list.

  • Build GitOps-native delivery for ML jobs and platform components (GitLab CI, Helm, FluxCD) with fast rollouts and safe rollbacks.

  • Design and maintain the data platform built on LakeFS to enable experiment reproducibility, data lineage tracking, and automated governance processes.

  • Own developer experience and enablement by creating clear APIs/CLIs and minimal UIs, and maintaining comprehensive templates and documentation.
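
To make the self-serve experience referenced above more concrete, here is a minimal, hypothetical sketch of what a researcher-facing Dask Gateway session could look like. The gateway URL, worker count, and computation are illustrative assumptions, not details of the client's actual platform.

    # Minimal sketch, assuming a platform-exposed Dask Gateway endpoint
    # ("https://dask-gateway.example.internal" is a placeholder) and that the
    # caller is already authenticated by the platform (e.g. via an OIDC proxy).
    import dask.array as da
    from dask_gateway import Gateway

    gateway = Gateway("https://dask-gateway.example.internal")

    # Request an ephemeral cluster; images, worker resources, and quotas would
    # normally be constrained server-side by the platform team.
    cluster = gateway.new_cluster()
    cluster.scale(4)  # request 4 workers; autoscaling policies may cap this
    client = cluster.get_client()

    # Run a small distributed computation to confirm the cluster is usable.
    x = da.random.random((10_000, 10_000), chunks=(1_000, 1_000))
    print(x.mean().compute())

    client.close()
    cluster.shutdown()

The point of the platform work described above is that this is all an engineer or researcher needs to write; installation, upgrades, security, quotas, and autoscaling stay behind the Gateway abstraction.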


Preferred experience

Skills needed for the role:

  • Experience with distributed compute frameworks such as Dask, Spark, or Ray.

  • Familiarity with NVIDIA Triton or other inference servers.

  • FinOps best practices and cost attribution for multi-tenant ML infrastructure.

  • Exposure to multi-region designs (dataset replication strategies, compute placement, and latency optimization).

  • Container Orchestration: Kubernetes (EKS)

  • Compute: Argo Workflows for orchestration and Dask for distributed computing

  • ML Experiment Tracking: Weights & Biases (a brief usage sketch follows this list)

  • Data (Lakehouse & Versioning): Apache Iceberg + AWS Athena, LakeFS, Snowflake

  • CI/CD & GitOps: GitLab CI, Helm, FluxCD

  • Infrastructure as Code: Terraform

  • Observability: Prometheus/Grafana, Loki/Promtail, Datadog, Sentry

  • Languages & Libraries: Python (Django, FastAPI, Pydantic, boto3)
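
For context on the experiment-tracking item above, here is a minimal, hypothetical Weights & Biases logging sketch; the project name, hyperparameters, and metric values are placeholder assumptions for illustration only.

    # Minimal sketch, assuming a placeholder project name and dummy metrics;
    # real projects, configs, and metrics belong to the client's teams.
    import wandb

    run = wandb.init(
        project="identity-verification-experiments",  # placeholder project name
        config={"learning_rate": 1e-3, "epochs": 3},  # placeholder hyperparameters
    )

    for epoch in range(run.config.epochs):
        # In a real training loop these values would come from the model/evaluator.
        wandb.log({"epoch": epoch, "val_accuracy": 0.90 + 0.01 * epoch})

    run.finish()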


Recruitment process

Application process:

  • An introductory call with the recruiter

  • A technical interview with one of our Sr. Engineer Consultants

  • An interview with the client
