Don’t use LangChain anymore: Atomic Agents is the new paradigm!

Theodo France

Introduction

Since the rise of LLMs, numerous libraries have emerged to simplify their integration into applications. Among them, LangChain quickly established itself as the go-to reference. Thanks to its modular approach, it allows chaining model calls, interacting with vector databases, and structuring intelligent agents capable of reasoning and acting. For a long time, it was seen as a must-have for anyone looking to build generative AI solutions.

Over time, however, several limitations became apparent: too many abstraction layers, insufficient optimization for certain projects, hidden costs, lack of native input/output validation, and more. To keep it short and simple: customization is hard. In response to these limitations, new alternatives are emerging with a more efficient and pragmatic approach. One of the most promising is Atomic Agents, a modern framework better suited to the current needs of developers.

Atomic Agents takes a modular and flexible approach to designing autonomous agents, avoiding LangChain’s complexity while enabling better task orchestration. LangChain paved the way, but these new solutions may well surpass it by offering better answers to the real needs of today’s developers.

LangChain’s limitations as an LLM agent library

When it was first introduced, LangChain impressed developers with how incredibly easy it made building applications powered by LLMs. At Theodo Data&AI, LangChain was quickly adopted. Even today, we still use it to rapidly build Proof of Concept projects, test ideas, or validate technical feasibility. LangChain is also used at Theodo for its convenient wrappers and its seamless integration with LangFuse, which provides enhanced monitoring capabilities.

While LangChain became a de facto standard for LLM-based applications, it now comes with significant limitations, driving many developers to look for alternatives.

1. Lack of Control Over Autonomous Agents

One of the main issues is the lack of control when working with agents. When using LangChain agents, the framework makes hidden calls to LLMs, chaining requests together without giving the developer full visibility into the process. The result? Unpredictable costs and inefficient execution, sometimes leading to unnecessarily long workflows.

2. Excessive Abstraction and Rigid Structures

The excessive abstraction and rigid architecture also make optimization difficult. Tweaking a processing flow or customizing an agent becomes frustratingly complex because LangChain enforces its own internal structures. To make matters worse, the documentation is incomplete and confusing, packed with outdated examples, making the learning curve unnecessarily steep.

As a library, with its wrappers for integration with LangFuse for example, LangChain can be useful. As a framework, LangChain is too restrictive. It imposes an opaque structure that makes it harder to debug, diagnose LLM behavior, and maintain a project. Furthermore, this slows down the learning process for junior developers and can even mask gaps in their understanding. These limitations pushed us to explore other libraries and alternative paradigms for developing LLM agents, such as Atomic Agents, PydanticAI or Marvin.

Atomic Agents as the new paradigm

What if the real game-changer was Atomic Agents and its modularity?

Atomic Agents is a library designed to create and orchestrate autonomous AI agents in a modular and optimized way. Unlike LangChain, it avoids heavy abstractions and gives developers greater control over agent workflows. Its simplified approach allows for better efficiency, transparency, and scalability. Atomic Agents was launched in June 2024 by Kenny Vaneetvelde, a highly active contributor on Reddit, and its popularity has been steadily growing ever since.

[Screenshot: Atomic Agents’ growing popularity]

Atomic Agents introduces several major improvements compared to LangChain, CrewAI, and other similar frameworks:

  • Reduced Complexity: no more excessive abstractions, just simple components that can be combined and arranged however you want.

  • Control is Power: Atomic Agents is designed to give developers full control over every essential part of the agent (agent, memory, RAG, etc.). This allows for customization, fine-tuning, and optimization without having to guess what’s happening behind the scenes.

  • A Proven Approach: by adopting the IPO (Input-Process-Output) model and emphasizing atomicity, Atomic Agents promotes modularity, maintainability, and scalability.

1. IPO: Input, Process, Output — and that’s it

Atomic Agents ensures clarity and simplicity in development by following the IPO model:

  • Input: Data structure validation using Pydantic

  • Process: All operations are handled via agents and tools (memory, context providers, etc.)

  • Output: Output data structure validation using Pydantic
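The IPO flow can be sketched with plain Pydantic, the validation layer Atomic Agents builds on; the schemas, field names, and the echoing `process` step below are purely illustrative, not taken from the library:

```python
from pydantic import BaseModel, Field, ValidationError

# Input: the request is validated before anything runs
class QuestionInput(BaseModel):
    question: str = Field(..., min_length=1, description="User question")

# Output: the result is validated against a schema as well
class AnswerOutput(BaseModel):
    answer: str
    confidence: float = Field(..., ge=0.0, le=1.0)

# Process: stand-in for the agent/tool step (a real agent would call an LLM here)
def process(inp: QuestionInput) -> AnswerOutput:
    return AnswerOutput(answer=f"Echo: {inp.question}", confidence=1.0)

result = process(QuestionInput(question="What is IPO?"))
print(result.answer)  # Echo: What is IPO?
```

An empty `question`, or a `confidence` outside [0, 1], raises a `ValidationError` instead of silently flowing through the pipeline.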

2. Atomicity and the Single Responsibility Principle

The core idea behind AtomicAgents is to structure simple, specialized objects — agents, memory, context providers, etc. — where each component has a single responsibility and can be reused across different pipelines. Designed to be interconnected without rigid dependencies, these modules can be added or removed without disrupting the entire system, ensuring optimal modularity. This approach avoids the opacity of LangChain.

With simple objects, developers can then build more complex pipelines step by step.

[Screenshot: composing simple objects into a pipeline]
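This step-by-step composition can be illustrated with a minimal, dependency-free sketch (the step names are hypothetical): each object has one responsibility, and steps connect only through their typed inputs and outputs, so any of them can be swapped without touching the rest.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Query:
    text: str

@dataclass
class Documents:
    chunks: List[str]

@dataclass
class Answer:
    text: str

def retrieve(q: Query) -> Documents:
    # Stand-in for a vector-store lookup
    return Documents(chunks=[f"doc about {q.text}"])

def answer(docs: Documents) -> Answer:
    # Stand-in for an LLM call
    return Answer(text=" / ".join(docs.chunks))

def pipeline(q: Query) -> Answer:
    # Explicit composition: no hidden LLM calls, every step is visible
    return answer(retrieve(q))

result = pipeline(Query(text="scaling"))
print(result.text)  # doc about scaling
```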

Atomic Agents integrates seamlessly with Pydantic and, more importantly, with Instructor. This addresses a weakness common to libraries with few maintainers: a limited ecosystem.

Thanks to its integration with Instructor, Atomic Agents gives developers access to a wide range of LLM providers and makes it easy to migrate any existing project to Atomic Agents.

How we use it

1. Clear Inputs and Outputs

We start by creating the schemas that define our input and output. Unlike with LangChain, we get validation schemas for both inputs and outputs.

[Screenshot: input/output validation schemas with Atomic Agents]

2. Creation of a clear prompt system

We then define the system prompt as a clear, easily adjustable structure.

[Screenshot: defining the system prompt with Atomic Agents]

3. Building the Agent

[Screenshot: building the agent]

4. Example of Modularity: RAG

The ContextProvider itself is a module independent from the agent. A RAG can be shared across multiple agents — just like memory. Two agents could even share the same memory.

[Screenshot: registering a shared RAG context provider]

Conclusion

Atomic Agents is a highly promising library in the LLM ecosystem, directly challenging LangChain. It offers three key advantages over LangChain:

  • Transparency: unlike LangChain, every step the agent takes is visible and understandable.

  • Simplicity: No black box — just clear code.

  • Control: Full ability to debug, swap components, and have total control over agent behavior.

That being said, LangChain still remains a solid option and a great entry point:

  • It benefits from a large community ready to help.

  • The internet is full of tutorials and examples.

  • It’s an excellent tool for quickly experimenting with LLMs.

  • Easy handling of asynchronous execution.

LangChain also comes with its own ecosystem (LangSmith, LangServe), allowing you to build end-to-end projects easily.

On the flip side, Atomic Agents does not yet integrate smoothly with LangFuse. It also requires you to be comfortable with LLM concepts and interactions, so it might not fit newcomers to the LLM world.

However, it brings a fresh paradigm for building LLM agents and introduces a new way of thinking about LLM-based development — making it absolutely worth exploring.

Looking for GenAI experts? Feel free to contact us!

Article written by Timothé Bernard