Sifflet connects to many different data sources: data warehouses (Google BigQuery, Snowflake, AWS Redshift…), business intelligence and visualisation solutions (Looker, Power BI, AWS QuickSight…), and transformation/ETL tools (dbt, Fivetran, Airflow…). For each of these data sources, we need to support all Sifflet features (catalog, lineage, monitoring…).
As each integration requires deep knowledge about the API, data model, and behaviour of each data source, Sifflet has a team dedicated to building these integrations. As a member of this team, you will:
Design and implement new integrations with data products. This often requires researching how each data source behaves, then thinking hard about how to model it within the Sifflet platform.
Make the necessary changes to architecture and implementation to scale our data ingestion engine - some of our customers connect Sifflet to really large instances.
Add support for completely new integration types - which entails defining how they will be displayed and integrated within the Sifflet application.
Lead technical improvements to our codebase and architecture: integrating with many external services naturally results in many challenges regarding modularization, testability, and reliability.
Help improve the team standards and processes, both around technical decisions and product design.
Build a new model for our lineage capabilities, seamlessly merging data from various sources (such as query logs processed by our in-house SQL parser, data warehouse lineage APIs, or dbt models) into an easy-to-query model that serves both automated capabilities (such as root cause analysis) and UI elements (the lineage graph).
Optimize the queries issued by our ingestion engine to reduce the cost incurred by customers when monitoring their data sources with Sifflet.
Fetch query history from all sources, and use it as an input for automated root cause analysis.
Applications are written in (modern) Java with Spring Boot 3, to tap into the huge data ecosystem offered by this language.
Other teams at Sifflet use TypeScript + Vue.js (frontend) or Python. You may need to write small chunks of code in these languages too.
Infrastructure: Kubernetes (AWS EKS clusters), MySQL (on AWS RDS), Temporal for job orchestration.
Plus a few supporting services: Gitlab CI, Prometheus/Loki/Grafana, Sentry…
While not directly part of our stack, expect to gain a lot of knowledge on many products in the modern data ecosystem. The subtleties of BigQuery or Snowflake will soon be very familiar to you.
3 years of experience in a backend engineer role or equivalent. Data engineers with software development experience who want to move to a backend engineering position are also welcome.
General knowledge around some of these topics: data warehouses, data visualisation solutions, ETL pipelines… Of course, you don’t have to know everything upfront, you’ll pick up what you need on the job.
Willingness to learn Java and Spring Boot if you don’t already know this ecosystem.
You value ownership of your projects from design to production, and aren’t afraid of taking initiatives.
None of the people who joined Sifflet perfectly matched the described requirements for the role. If you’re interested in this position but don’t tick all the boxes above, feel free to apply anyway!
Introduction Call (30min) – A conversation with a team lead to discuss your background, the role, and what excites you about Sifflet.
Technical Interviews – Two in-depth assessments:
◦ Coding Interview (90min) – Evaluates your problem-solving and coding skills.
◦ System Design Interview (90min) – Assesses your ability to design scalable and efficient systems.
Meet the Product team (30min) – Gain insights into our vision, challenges, and ambitions.
Team Connect (30min) – Meet your future colleagues, experience our culture, and see firsthand what makes our team awesome!
Reference Call – A final step to gather feedback from previous colleagues or managers.
We offer a competitive salary along with meaningful company equity.
You’ll have the opportunity to contribute to and help build the team in India.
You’ll work alongside real experts across many domains — there’s always someone to learn from. We also run regular tech talks where the team shares cool projects and new technologies.
You’ll get deep exposure to the modern data ecosystem, quickly building strong expertise in data engineering, the modern data stack, and how data is actually used in real companies.
Our culture is strongly team-oriented, focused on shipping things that work and bringing projects all the way to production.
We’re building a genuinely great product, and just as importantly, a team people actually enjoy working with.
Meet Oriane, Software Engineer
Meet Wajdi, CTO & Co-founder