M2 Internship in Robotics (M/F)

Internship
Villeurbanne
No remote work
Salary: Not specified
Experience: < 6 months

CESI



Job description

Title: Real-Time Dynamic-aware Trajectory Planning for Mobile Robots in Indoor Environments

Abstract: Service robots are increasingly deployed in dynamic indoor spaces such as offices, hospitals, or universities, where people and moving objects constantly modify the environment. For safe and efficient operation, a robot must detect and react to these changes in real time. The TIAGo-2 robot (PAL Robotics) provides an ideal research platform for this task: it integrates LiDAR, RGB-D sensors, and a differential-drive base, with full ROS 2 support, including mapping and navigation modules. However, the standard navigation pipelines, based on slam_toolbox and Navigation2, are primarily designed for static environments. When people or objects move within the robot’s workspace, its local planner often fails to anticipate or smoothly avoid dynamic obstacles. This internship aims to extend the TIAGo-2 navigation stack with a LiDAR-based dynamic obstacle detection module and a reactive navigation layer that adapts the robot’s motion in real time. The work will focus on lightweight, real-time techniques rather than long-term motion prediction or learning-based planning.

Keywords: mobile robotics, perception, object detection, LiDAR, ROS programming

1. Research Work

a. Scientific and Technical Background

Conventional SLAM and navigation systems (e.g., Cartographer, slam_toolbox) assume static maps. The Navigation2 framework provides global and local planners, obstacle avoidance, and control loops, but it treats all detected obstacles as static.

Recent studies have proposed ways to handle moving obstacles more effectively. Zhu et al. (2020) used LiDAR scan differencing and clustering to detect and track moving objects in indoor environments. DynaSLAM (Bescos et al., 2018) integrated motion detection into a visual SLAM pipeline, improving localization robustness in dynamic scenes. Weng et al. (2021) showed that combining RGB-D segmentation with LiDAR improves moving obstacle identification. Building on this literature, the internship will design a simplified perception module for LiDAR-based motion detection, integrated directly into the Navigation2 costmap. The focus will be on maintaining real-time performance and reliable obstacle avoidance.

b. Objectives and Methodology

The main objective of the internship is to develop and integrate a LiDAR-based dynamic obstacle detection and avoidance system into the TIAGo-2 robot’s existing ROS 2 navigation framework. The specific goals are to:

  1. Understand and benchmark the baseline performance of TIAGo-2’s navigation in dynamic indoor scenes (simulated in Gazebo).

  2. Implement dynamic obstacle detection using LiDAR scan differencing and clustering to identify moving entities (an illustrative sketch is given after this list).

  3. Integrate a dynamic layer into the ROS 2 local costmap, marking moving obstacles as temporary regions to avoid.

  4. Adapt the local planner’s parameters (e.g., obstacle inflation, re-planning frequency) for smoother reactive navigation.

  5. Evaluate improvements through simulation metrics: number of collisions, trajectory deviation, and computational latency.

If time allows, a basic experiment on the physical robot may be performed to validate the approach.
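
To make goal 2 above concrete, the following sketch shows one possible way to flag moving points by differencing two consecutive LaserScan messages and clustering the changed beams with DBSCAN. It is only an illustration, not part of the existing TIAGo-2 stack: the /scan topic name, the 0.2 m and 0.3 m thresholds, the use of scikit-learn, and the assumption that the robot is stationary (or that scans have been aligned in a common frame) are all working hypotheses to be revisited during the internship.

    import numpy as np
    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import LaserScan
    from sklearn.cluster import DBSCAN                      # assumed available


    class MovingObstacleDetector(Node):
        """Sketch: flags beams whose range changed between consecutive scans,
        then clusters the changed points into moving-obstacle candidates."""

        def __init__(self):
            super().__init__('moving_obstacle_detector')    # hypothetical node name
            self.prev_ranges = None
            self.sub = self.create_subscription(LaserScan, '/scan', self.on_scan, 10)

        def on_scan(self, scan: LaserScan):
            ranges = np.asarray(scan.ranges, dtype=float)
            ranges[~np.isfinite(ranges)] = scan.range_max
            if self.prev_ranges is not None and len(self.prev_ranges) == len(ranges):
                # Scan differencing: a beam that changed by more than 0.2 m is a candidate.
                moved = np.abs(ranges - self.prev_ranges) > 0.2
                angles = scan.angle_min + np.arange(len(ranges)) * scan.angle_increment
                xy = np.column_stack((ranges * np.cos(angles),
                                      ranges * np.sin(angles)))[moved]
                if len(xy) >= 3:
                    # DBSCAN groups nearby changed points; label -1 is noise.
                    labels = DBSCAN(eps=0.3, min_samples=3).fit_predict(xy)
                    for k in set(labels) - {-1}:
                        cx, cy = xy[labels == k].mean(axis=0)
                        self.get_logger().info(
                            f'moving cluster at x={cx:.2f} m, y={cy:.2f} m (laser frame)')
            self.prev_ranges = ranges


    def main():
        rclpy.init()
        rclpy.spin(MovingObstacleDetector())


    if __name__ == '__main__':
        main()

On the real robot, the same differencing would only be meaningful after transforming both scans into a fixed frame (e.g., odom), so that the robot’s own motion is not flagged as change.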

The following methodology would be adopted:

  • Use Gazebo simulation with dynamic actors (people, moving boxes).

  • Process LiDAR scans using ROS 2 nodes (e.g., rclcpp, pointcloud_to_laserscan, scan_tools).

  • Detect motion by comparing consecutive scans and clustering points using Euclidean or DBSCAN methods.

  • Update the Navigation2 costmap dynamically via a plugin interface (a lightweight interim alternative is sketched after this list).

  • Benchmark navigation with and without the dynamic layer under identical scenarios.
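
For the costmap update step, the plugin route mentioned above would normally be implemented in C++ against the nav2_costmap_2d layer interface. As a lighter interim path that can be prototyped in Python first, the sketch below republishes the detected cluster centroids as a PointCloud2; Navigation2’s obstacle layer can be configured to treat such a topic as an additional observation source. The node, topic, and frame names are illustrative assumptions.

    import rclpy
    from rclpy.node import Node
    from std_msgs.msg import Header
    from sensor_msgs.msg import PointCloud2
    from sensor_msgs_py import point_cloud2


    class DynamicObstacleBridge(Node):
        """Sketch: republishes detected moving-obstacle centroids as a PointCloud2
        that the local costmap's obstacle layer can consume as an observation source."""

        def __init__(self):
            super().__init__('dynamic_obstacle_bridge')          # hypothetical node name
            self.pub = self.create_publisher(PointCloud2, '/dynamic_obstacles', 10)

        def publish_clusters(self, centroids, frame_id='base_laser_link'):
            # centroids: iterable of (x, y) in the given frame; z is set to 0.
            header = Header()
            header.stamp = self.get_clock().now().to_msg()
            header.frame_id = frame_id                            # assumed laser frame name
            cloud = point_cloud2.create_cloud_xyz32(
                header, [(float(x), float(y), 0.0) for x, y in centroids])
            self.pub.publish(cloud)


    def main():
        rclpy.init()
        node = DynamicObstacleBridge()
        node.publish_clusters([(1.5, 0.0)])   # dummy detection 1.5 m ahead, for a quick test
        rclpy.spin_once(node, timeout_sec=0.1)


    if __name__ == '__main__':
        main()

Once validated, the same information would move into the dynamic costmap layer planned in the methodology, which offers finer control over cost values and how quickly marked regions decay.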

c. Scientific Challenges

Several challenges are expected during the internship, particularly related to the reliable and efficient operation of the proposed system in real-world dynamic environments. One major challenge will be achieving accurate differentiation between static and moving obstacles, as LiDAR sensors are often affected by measurement noise, reflections, and occlusions that can lead to false detections or misclassifications (Zhu et al., 2020). Developing robust filtering and motion segmentation techniques will therefore be essential to ensure that the robot correctly interprets its surroundings.

Another key difficulty concerns maintaining real-time performance: all perception and planning modules must operate within strict time constraints, typically under 200 milliseconds per processing cycle (Weng et al., 2021), to guarantee that the robot can react quickly enough to sudden environmental changes. Achieving this level of responsiveness will require optimizing computational efficiency and carefully balancing accuracy with processing load.

Finally, ensuring smooth and stable planner responses represents an additional challenge, since the robot must adapt its trajectory continuously when obstacles move unpredictably, without generating oscillations or abrupt maneuvers that could compromise safety or navigation efficiency. These challenges will be progressively addressed through parameter tuning, signal filtering, and the incremental integration of each developed component into the Navigation2 framework, ensuring a stable and coherent evolution toward a fully functional dynamic-aware navigation system.
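
One simple way to attack the false-detection problem described above, before any more elaborate motion segmentation, is to require temporal persistence: a cluster is only confirmed as a moving obstacle if it is re-observed near the same place for several consecutive cycles. The snippet below is a ROS-independent sketch of that idea; the 0.4 m association radius and the three-cycle threshold are assumed values to be tuned experimentally.

    import math


    class PersistenceFilter:
        """Sketch: suppresses one-off LiDAR blips by confirming a detection only after
        it has reappeared near the same position for `min_hits` consecutive cycles."""

        def __init__(self, match_radius=0.4, min_hits=3):
            self.match_radius = match_radius   # max distance (m) to associate detections
            self.min_hits = min_hits           # consecutive cycles required to confirm
            self.tracks = []                   # [{'pos': (x, y), 'hits': n}, ...]

        def update(self, detections):
            """detections: list of (x, y) cluster centroids from the current cycle.
            Returns the confirmed (x, y) positions."""
            new_tracks = []
            for det in detections:
                match = next((t for t in self.tracks
                              if math.dist(det, t['pos']) < self.match_radius), None)
                new_tracks.append({'pos': det, 'hits': (match['hits'] + 1) if match else 1})
            self.tracks = new_tracks           # tracks not re-detected this cycle are dropped
            return [t['pos'] for t in self.tracks if t['hits'] >= self.min_hits]


    # Example: a cluster seen three cycles in a row is confirmed, a one-off blip is not.
    f = PersistenceFilter()
    print(f.update([(1.00, 0.20)]))                 # []
    print(f.update([(1.05, 0.22), (3.0, -1.0)]))    # []
    print(f.update([(1.10, 0.25)]))                 # [(1.1, 0.25)]

The same update loop is also a convenient place to time each processing cycle (e.g., with time.perf_counter), so that the roughly 200 ms budget mentioned above can be monitored continuously.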

d. Expected Outcomes

At the end of the internship, it is expected that the student will have produced:

  • A LiDAR-based moving obstacle detection module for ROS 2;

  • An extended local costmap integrating dynamic obstacle information;

  • A reactive navigation behavior that improves safety in dynamic environments;

  • A simulation-based evaluation report and a short technical paper draft (a sketch of the evaluation metrics is given below).

The expected scientific contribution lies in improving the robustness and autonomy of service robots navigating in dynamic indoor environments.
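
As a concrete, and again assumed, starting point for the evaluation report, the helpers below compute two of the metrics listed in goal 5, trajectory deviation and per-cycle latency, from logged data; collision counts would come directly from the Gazebo scenarios. The data format, simple lists of (x, y) poses and per-cycle processing times in seconds, is an illustrative choice.

    import math
    import statistics


    def trajectory_deviation(executed, reference):
        """Mean distance (m) from each executed pose to the nearest reference-path point."""
        return statistics.fmean(min(math.dist(p, q) for q in reference) for p in executed)


    def latency_stats(cycle_times):
        """Mean and worst-case processing time per cycle, in milliseconds."""
        return {'mean_ms': 1e3 * statistics.fmean(cycle_times),
                'max_ms': 1e3 * max(cycle_times)}


    # Example with dummy data: a path offset by 5 cm and three processing cycles.
    reference = [(0.1 * i, 0.00) for i in range(50)]
    executed = [(0.1 * i, 0.05) for i in range(50)]
    print(trajectory_deviation(executed, reference))   # 0.05
    print(latency_stats([0.012, 0.018, 0.025]))        # {'mean_ms': ~18.3, 'max_ms': 25.0}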

e. Milestones & Tentative Timeline (6 months)

Phase 0 (Weeks 1–3)
Main tasks: Literature review; setup of ROS 2 and TIAGo-2 simulation
Deliverable: Baseline report

Phase 1 (Weeks 4–7)
Main tasks: LiDAR motion detection and clustering
Deliverable: Detection module and test results

Phase 2 (Weeks 8–11)
Main tasks: Integration with the costmap as a dynamic layer
Deliverable: Dynamic costmap plugin

Phase 3 (Weeks 12–16)
Main tasks: Reactive local planner tuning; simulation experiments
Deliverable: Navigation benchmark results

Phase 4 (Weeks 17–20)
Main tasks: Final testing and validation (simulation or real robot)
Deliverable: Video demo and performance metrics

Phase 5 (Weeks 21–24)
Main tasks: Report writing and presentation
Deliverable: Final report and code repository

2. Lab Presentation

CESI LINEACT (UR 7527) anticipates and supports technological transformations in the industrial and construction sectors. Its strong and long-standing connection with companies drives an applied research approach that combines scientific rigor with practical relevance. The laboratory promotes a human-centered perspective, emphasizing technologies that serve people, their needs, and their uses, while fostering collaboration between research, education, and industry.

Research at CESI LINEACT is structured around two interdisciplinary teams:

  • Team 1 – Learning and Innovating, which focuses on cognitive, social, and management sciences to study how technology-enhanced environments (immersive systems, prototyping platforms, etc.) influence learning, creativity, and innovation processes.

  • Team 2 – Engineering and Digital Tools, which specializes in modeling, simulation, optimization, and data analysis for cyber-physical systems, with particular emphasis on decision support, digital twins, and human–system interaction in augmented or virtual environments.

These teams collaborate across application areas such as Industry 5.0, Construction 4.0, and Sustainable and Smart Cities, supported by research platforms in Rouen and Nanterre.

LINEACT Lab is dedicated to the study of intelligent, connected, and resilient systems in support of industrial transformation and human-system cooperation. The laboratory brings together expertise in robotics, artificial intelligence, data science, and human-machine interaction, with a strong emphasis on application-oriented research and technology transfer.

This internship is proposed within Team 1 and the S2HSI research axis (Interactions Humains Systèmes et Compréhension de scènes).

3. References

  1. Macenski, S., et al. (2019). slam_toolbox. GitHub repository.

  2. PAL Robotics, TIAGo 2 Simulation and Navigation Stack, https://github.com/pal-robotics/tiago_simulation.

  3. Bescos, B., Fácil, J. M., Civera, J., & Neira, J. (2018). DynaSLAM: Tracking, Mapping, and Inpainting in Dynamic Scenes. IEEE Robotics and Automation Letters, 3(4), 4076–4083.

  4. Hess, W., Kohler, D., Rapp, H., & Andor, D. (2016). Real-time loop closure in 2D LiDAR SLAM. IEEE International Conference on Robotics and Automation (ICRA), 1271–1278.

  5. Weng, X., Kitani, K., & Narayanan, V. (2021). Dynamic Obstacle Detection and Tracking for Mobile Robots using RGB-D Data. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

  6. Zhu, M., Wang, J., & Yang, C. (2020). Real-Time Moving Object Detection and Tracking Using 2D LiDAR. Sensors, 20(15), 4237.

4. Contacts and Application

All qualified individuals are encouraged to apply by sending the following documents in a single PDF file:

· A curriculum vitae (CV) detailing academic background, technical skills, and relevant experiences 

· A motivation letter (maximum one page) expressing interest in the internship and alignment with the research topic 

· Academic transcripts (relevé de notes) from the current and previous years of study 

· Any reference letters or supporting documents that may strengthen the application 

Applications should be sent by email with the subject line: 
“Application – Research Internship on Real-Time Dynamic-aware Trajectory Planning for Mobile Robots in Indoor Environments” 

Candidate Requirements:

M2 student with extensive ROS 2 experience and programming skills in C++/Python. Familiarity with Gazebo, LiDAR/RGB-D perception, and navigation stacks. Interest in mobile robotics and navigation in dynamic environments.

Interpersonal skills: independent, showing initiative and curiosity, able to work in a team, with good communication skills.
