42Next is our new initiative aimed at expanding our innovative educational model to meet evolving industry needs. This offering builds upon our core curriculum while integrating new technologies and methodologies to prepare students for the rapidly changing digital landscape.
As a Data Platform Engineering Intern at 42, you will:
Enhance our integration with open source tools including Dagster, dlthub, and Superset
Directly contribute to open source solutions through code contributions, bug fixes, and feature enhancements
Adapt open source tools to our particular stack and requirements
Develop and maintain data infrastructure components that support our educational ecosystem
Design and implement engineering solutions that address our unique data challenges
Take projects from ideation through development, testing, and production deployment
Write tests, build dashboards, make code production-ready, and learn to automate processes
Collaborate with stakeholders across the organization to understand data needs and provide solutions
Gain practical experience with data engineering tools and practices while contributing to meaningful projects
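To give a concrete flavor of the "write tests" and "make code production-ready" responsibilities above, here is a minimal, hypothetical sketch (the helper function and its name are illustrative, not part of 42's codebase): a small data-cleaning utility paired with the kind of unit test an intern would be expected to write.

```python
# Hypothetical data-cleaning helper plus a unit test for it,
# illustrating the testing responsibilities listed above.

def normalize_login(raw: str) -> str:
    """Lowercase and strip a student login before loading it into the warehouse."""
    return raw.strip().lower()

def test_normalize_login():
    # Leading/trailing whitespace and mixed case are normalized.
    assert normalize_login("  JDoe ") == "jdoe"
    # Already-clean input passes through unchanged.
    assert normalize_login("alice") == "alice"

if __name__ == "__main__":
    test_normalize_login()
    print("tests passed")
```

In practice such tests would live in a `tests/` directory and run under pytest in CI, as the Version Control & CI/CD requirements below describe.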
Level: Bachelor’s degree (BAC+3) or higher (or equivalent)
Fields: Computer Science, Data Science, Statistics, Software Engineering, Information Systems, Big Data, or related
Academic Projects: End-to-end data integration, dashboard development, group deliverables with milestones
Open-source contributions (e.g., Dagster, Superset, dlthub)
Hackathons, data meetups, Kaggle competitions
Technical blogging or conference/student-club presentations
SQL & Data Modeling: Advanced query writing, schema design (star/snowflake), optimization
Scripting & Programming: Python (Pandas, SQLAlchemy) or similar
ETL & Orchestration: Building and scheduling pipelines with Airflow, dbt, Dagster or equivalents
Version Control & CI/CD: Git workflows; basic Docker, automated testing (e.g. pytest), familiarity with CI/CD concepts
Linux/Unix: Command-line comfort, shell scripting
Visualization & Dashboards: Apache Superset (dashboard creation, filters, parameterization)
Open-Source Engagement: Experience contributing to OSS (bug fixes, feature PRs)
Cloud & Infra as Code: AWS/GCP/Azure fundamentals (e.g. S3, BigQuery), Terraform/CloudFormation
Container Orchestration: Kubernetes basics
Testing & Quality: Writing unit/integration tests, code reviews, documentation
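As an illustration of the SQL and data-modeling skills listed above, here is a minimal star-schema sketch using Python's built-in sqlite3 module. All table and column names are hypothetical, chosen only to suggest the educational domain; they are not 42's actual schema.

```python
import sqlite3

# Minimal star-schema sketch: one fact table joined to one dimension.
# Table/column names are illustrative, not 42's actual data model.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_student (
    student_id INTEGER PRIMARY KEY,
    campus     TEXT NOT NULL
);
CREATE TABLE fact_submission (
    submission_id INTEGER PRIMARY KEY,
    student_id    INTEGER REFERENCES dim_student(student_id),
    score         INTEGER
);
INSERT INTO dim_student VALUES (1, 'Paris'), (2, 'Lyon');
INSERT INTO fact_submission VALUES (10, 1, 80), (11, 1, 90), (12, 2, 70);
""")

# A typical analytics query over the star schema: average score per campus.
rows = cur.execute("""
    SELECT d.campus, AVG(f.score)
    FROM fact_submission AS f
    JOIN dim_student AS d USING (student_id)
    GROUP BY d.campus
    ORDER BY d.campus
""").fetchall()
print(rows)  # [('Lyon', 70.0), ('Paris', 85.0)]
conn.close()
```

The same fact/dimension split scales to real warehouses; the dimension keeps descriptive attributes narrow and deduplicated, while the fact table stays tall and append-only, which is what makes queries like the one above cheap to aggregate.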
Analytical Mindset: Translate business needs into data solutions
Communication: Explain technical concepts clearly to diverse stakeholders
Collaboration: Pair-programming, code reviews, cross-team alignment
Autonomy & Curiosity: Self-driven learning and initiative
Organization: Task prioritization, clear documentation
Languages: French (fluent) and English (professional proficiency)