Software Engineer (GenAI / MLOps & ModelOps) - AI-Share Team

Permanent contract
Lyon
A few days at home
Salary: Not specified

DataGalaxy

The position

Job description

Founded in Lyon, France, in 2015, DataGalaxy has become the leading data catalog in France, connecting data, people, and AI through an intuitive data governance platform. Our mission is to simplify metadata mapping, management, and knowledge sharing to enhance organizational data governance and data literacy.

With more than 200 clients worldwide and rapidly growing, we are making significant strides in the US market, with the ambition to become a top 3 player in the data catalog space. Our teams span two continents, fostering a dynamic international spirit that drives our innovation and growth.

Our mission: To lead the revolution in modern business data catalogs by empowering data professionals and business users through AI-driven data governance. As we expand rapidly in the US, we aim to set the standard for data and AI solutions, positioning ourselves as a trusted partner for navigating the complexities of a data-centric world across industries. Our vision is to drive impactful change and growth, particularly in the US market, as we shape the future of data governance on a global scale.

Our values: Be intentional. Be clear. Be bold. Be humble.

About the role

Join the AI-Share team to help build and operate the foundations that power our Generative AI features (LLM, RAG, agents) inside the DataGalaxy data governance platform. This role focuses on MLOps / ModelOps delivery: making GenAI capabilities reliable in production (deployment, monitoring, cost control, traceability), while collaborating with product engineering teams across a polyglot stack.

You don’t need to match every item below - we value curiosity, eagerness to learn, pragmatism, and steady progress!

What you’ll do (with support from senior engineers)

MLOps / ModelOps (core)

  • Contribute to the evolution of our ModelOps platform for GenAI: provider integrations, configuration, deployment automation, and operational tooling.
  • Help implement practical patterns for running GenAI workloads in production: evaluation, versioning, reproducibility, safe rollouts/rollbacks, and environment management.
  • Build and improve CI/CD workflows adapted to AI: packaging, automated checks, evaluation steps (when applicable), deployment, and rollback.
  • Improve traceability of AI assets (configs, prompts/templates when applicable, evaluation outputs, versions) to support governance and debugging.
  • Add and maintain observability for GenAI workloads: latency, availability, usage/cost signals, and quality-related indicators (dashboards/alerts).
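To give a flavor of the last bullet, here is a minimal Python sketch of capturing latency, usage, and cost signals around a GenAI provider call. All names (`CallMetrics`, `observe_call`, the price table) are hypothetical illustrations, and the per-token prices are made-up numbers; a real setup would export these signals to an observability backend for dashboards and alerts.

```python
import time
from dataclasses import dataclass

# Hypothetical per-request record of the signals mentioned above
# (latency, usage, estimated cost).
@dataclass
class CallMetrics:
    model: str
    latency_s: float
    prompt_tokens: int
    completion_tokens: int
    est_cost_usd: float

# Illustrative price table: (prompt, completion) USD per 1K tokens.
# These numbers are invented for the sketch.
PRICES = {"demo-model": (0.001, 0.002)}

def observe_call(model, prompt_tokens, completion_tokens, fn):
    """Run a GenAI call `fn` and capture latency/usage/cost signals."""
    start = time.perf_counter()
    result = fn()  # the actual provider call, stubbed here
    latency = time.perf_counter() - start
    p_price, c_price = PRICES[model]
    cost = (prompt_tokens / 1000) * p_price + (completion_tokens / 1000) * c_price
    return result, CallMetrics(model, latency, prompt_tokens,
                               completion_tokens, cost)

# Usage with a stubbed provider call:
result, metrics = observe_call("demo-model", 1200, 300, lambda: "ok")
```

In practice the metrics record would be pushed to a metrics pipeline rather than returned, and quality-related indicators (e.g. evaluation scores) would be tracked alongside these operational signals.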

GenAI feature development & platform integration (core)

  • Develop and improve GenAI features within the platform (agent, RAG pipelines, MCP server): new capabilities, prompt engineering, bug fixes, and client-facing improvements.
  • Work closely with Product / Data / Engineering to integrate GenAI capabilities into the platform in a maintainable way.
  • Participate in code reviews, documentation, and post-incident follow-ups (RCA / action items), with guidance from the team.

Tech environment (high level)

  • Python for MLOps tooling, evaluation, automation, and integrations
  • Cloud services and managed GenAI providers (e.g., Azure AI Foundry, AWS Bedrock, GCP Vertex)
  • CI/CD, containers (Docker), observability tooling
  • A polyglot product stack (e.g., backend services and front-end surfaces owned by other squads)
