🦄 DataDome stops cyberfraud and bots in real time, outpacing AI-driven fraud from simple to sophisticated across your sites, apps, and APIs. Named a Leader in the Forrester Wave for Bot Management, the DataDome platform is built on a multi-layered AI engine that focuses on intent, not just identity. Because it’s not about knowing who’s real; it’s about what they intend to do. With thousands of AI models that adapt to every fraudulent click, signup, and login, DataDome blocks fraud in less than 2 milliseconds, without compromising performance. DataDome is fully automated and integrates seamlessly into any tech stack. Backed by a 24/7 SOC team of advanced threat researchers, DataDome stops over 350 billion attacks annually. Experience protection that outperforms with DataDome.
Our technical stack is primarily composed of:
- A real-time, high-performance detection layer at the edge in Java
- A low-latency Stream Engine running on Apache Flink in Scala
- Elasticsearch for storage
- Apache Druid for computing aggregations
- Kafka for communication between layers
- Symfony & Angular for our dashboards
We operate at scale, handling over 12 billion events per day with response times of less than 5 milliseconds (99th percentile), resulting in more than 700 TB of data per month. We currently operate in over 30 data centers worldwide.
Our infrastructure is deployed across AWS, GCP, and Scaleway, utilizing Docker, Ansible, and Terraform, and is monitored using Grafana and Prometheus.
We are seeking a Backend Software Engineer to join our Edge processing team (currently 4 engineers). You'll thrive on technical challenges, design new detection components, and help us enhance our ultra-low-latency solutions.
👉 More specifically, you will be in charge of things like...
- Developing and maintaining edge applications that process billions of daily requests from our customers, ensuring ultra-low latency and high availability
- Building and optimizing detection modules in Java to identify and block sophisticated bot attacks in real time
- Working with streaming data pipelines using Kafka to communicate with our asynchronous processing layer
- Implementing new features for our detection engine while maintaining performance standards (sub-5ms response times)
- Collaborating on database design and optimization using Elasticsearch to efficiently store and query large-scale datasets
- Contributing to our CI/CD processes, including automated testing, deployment pipelines, and infrastructure improvements
- Participating in code reviews to maintain high code quality standards and share knowledge with the team
- Monitoring system performance and troubleshooting issues using our observability stack (Grafana, Prometheus)
- Contributing to the on-call rotation system to maintain 24/7 availability of our engine stack
- Participating in team agile ceremonies, including sprint planning, daily stand-ups, and retrospectives
- Contributing to technical documentation and sharing your knowledge with team members
- Working in collaboration with our Threat Research team and Product team to improve our detection engine
👤 It would be great if you...
- Have 4+ years of professional development experience
- Have a strong programming background in Java and understand object-oriented and data structure design principles
- Have solid experience working in Unix/Linux environments and are comfortable with command-line tools
- Care deeply about code quality, simplicity, and performance
- Understand how the internet works (HTTP, TCP/IP, DNS, etc.)
- Have experience with distributed systems and understand concepts like scalability and fault tolerance
- Are familiar with at least one of the following technologies: Apache Kafka, Elasticsearch, Apache Druid, or similar big data tools
- Have a problem-solving mindset and enjoy debugging complex technical issues
- Are a team player with good communication skills
Bonus points if you have:
- Knowledge of cybersecurity concepts
- Experience with stream processing frameworks (Apache Flink)
- Exposure to containerization and orchestration (Docker, Kubernetes)
- Familiarity with Infrastructure as Code (Terraform, Ansible)
What’s in it for you?
- Flex Life: Flexible remote, hybrid, or in-office options, including working from our Paris office, located next to the Opéra Garnier, plus a €500 stipend to help you set up your ideal workspace.
- Generous Health Benefits: Leading healthcare providers for each EU country (e.g. Alan in France).
- Professional Development: #Growth is part of our DNA, so we provide an annual stipend to invest in yourself.
- Events & Teambuilding: Feel the #TeamSpirit both virtually & onsite, with several events & workshops planned throughout the year, including two annual offsites, summer & winter parties, lunch & learns, & much more.
- Perks: We prefer to adapt to what works best for you. Some prefer lunch on us, others prefer sports with friends, so we believe BotBusters should decide what works best for them.
- Parent Care: Gift & care packages for parents.
- PTO: Depends on the country you are based in (e.g. 25 days in France).
What are the next steps?
- A Talent Acquisition Manager will contact you for a first chat
- You will then meet with Pierre, the Manager
- You will complete a take-home technical test
- You will review it with the team
- The final step will be a one-on-one meeting with Gilles, our CTO
- Welcome to DataDome!
DataDome is an equal opportunity employer, and proud to be committed to diversity and inclusion. We will consider all qualified applicants without regard to race, color, nationality, gender, gender identity or expression, sexual orientation, religion, disability or age.