MEng / MSc Deep Generative Image Translation Intern

Contract: Fixed-term / Temporary (6 months)
Location: Paris
Remote: a few days at home
Salary: Not specified
Starting date: April 14, 2026
Experience: < 6 months
Education: Master's Degree

TheraPanacea


Job description

This internship focuses on developing universal deep learning frameworks for image-to-image translation in medical imaging, addressing challenges such as cross-modality synthesis, domain adaptation, and data scarcity. The intern will explore space-constrained foundation models that enable efficient learning and inference under limited computational and memory budgets while maintaining high fidelity and clinical relevance. Emphasis will be placed on leveraging variational auto-encoders (VAEs) and hybrid generative architectures to learn robust latent representations that generalize across imaging modalities, scanners, and institutions.

The role involves implementing, training, and evaluating state-of-the-art models on real-world medical imaging datasets, with attention to stability, interpretability, and uncertainty modeling. The intern will collaborate with researchers to experiment with novel architectural constraints, latent-space regularization, and foundation-model adaptation strategies, contributing to scalable and transferable solutions for medical image translation. This internship offers hands-on experience at the intersection of generative modeling, representation learning, and medical AI, with opportunities for research publications and real-world clinical impact.


Preferred experience

  • Enrolled in or recently graduated from a degree program in Computer Science, AI, Data Science, Biomedical Engineering, or a related discipline

  • Hands-on experience building and deploying deep learning models for image processing or computer vision

  • Strong proficiency in Python and production-grade deep learning frameworks (preferably PyTorch)

  • Practical experience with generative models (e.g., VAEs, diffusion models, or foundation models) and image-to-image translation workflows

  • Familiarity with model efficiency techniques, including memory- or space-constrained architectures, model compression, or optimized training/inference

  • Experience working with large datasets, experiment tracking, and reproducible ML pipelines

  • Ability to translate research ideas into robust, scalable implementations

  • Strong communication skills and comfort collaborating with cross-functional teams (research, engineering, clinical or product stakeholders)
