This position is no longer available.

Senior Data Engineer / Data Architect (Python, Spark, Kafka, ...)

Permanent contract
Paris
A few days at home
Salary: Not specified
Starting date: May 31, 2021
Experience: > 3 years

Square Sense

The position

Job description

Software Engineering at Square Sense

Our Software Engineering team has a combined skillset that covers data engineering, full-stack web development, site reliability engineering, and quality assurance. We are responsible for implementation, quality control, delivery and maintenance of the Square Sense software solution and data lakehouse, as well as providing ongoing support of data-related activities.

We are building a multitude of products in the domains of data collection, analysis, visualization and IoT manipulation. Our systems collect data from IoT devices and third-party data sources, process the ingested data in streaming and batch modes, organize the processed data, provide APIs and a UI to access it (analytics platform), and use third-party APIs to manipulate the physical world (automated decision-making solution).

Our main programming language for data products is Python. Our data processing back-end is built on Spark, Airflow, PostgreSQL, Docker, and Kubernetes, with bits of Hadoop and Kafka, and includes a multitude of data processing applications. Our production platforms run in the public clouds (Azure, GCP, AWS) and use managed services such as GKE and Dataproc (GCP) or AKS and HDInsight (Azure), to name a few. We deploy our software through a Continuous Delivery process. We care about software quality, and all team members take practices such as automated testing and PR reviews seriously.
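To give a concrete flavour of the stack described above, here is a minimal, purely illustrative sketch of the kind of PySpark batch job it runs. The dataset paths, column names, and aggregation logic are hypothetical, not an excerpt from our codebase.

    # Illustrative sketch only: a daily batch aggregation of IoT readings.
    # All paths and column names below are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("iot-daily-aggregation").getOrCreate()

    # Read one day of raw IoT readings from object storage (hypothetical location).
    readings = spark.read.parquet("gs://datalake/raw/iot_readings/date=2021-05-31/")

    # Aggregate readings per device and sensor type.
    daily = (
        readings.groupBy("device_id", "sensor_type")
        .agg(
            F.avg("value").alias("avg_value"),
            F.max("value").alias("max_value"),
            F.count("*").alias("reading_count"),
        )
    )

    # Write the curated result back to the data lake.
    daily.write.mode("overwrite").parquet("gs://datalake/curated/iot_daily/date=2021-05-31/")

    spark.stop()

In production, a job like this would typically be containerized and submitted to a managed cluster (Dataproc or HDInsight) rather than run locally.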

All team members participate in design and architecture, development, quality assurance, production delivery, and monitoring.

Objectives

As a Senior Data Engineer / Data Architect, you will be a member of the Software Engineering Team. The position involves close collaboration with all members of the Software Engineering Team, as well as with members of our Data Science Team, who design and implement various data processing algorithms.

Primary objectives of a Senior Data Engineer / Data Architect are:

  • Provide expertise in the design and architecture of data pipelines and the data warehouse; propose relevant technologies and solutions.
  • Support and coach data engineers, ML engineers, data scientists, and data analysts throughout the implementation process.
  • Design and develop software components for data collection (connectors for various IoT devices and third-party services, webhooks) and for ingestion of the collected data (using Kafka, for example).
  • Design and develop software components for data processing (for example, streaming and batch data pipelines with Apache Spark or Apache Kafka Streams), as well as other internal software, tools, and APIs that support the data processing.
  • Automate data processing tasks and their monitoring in production (for example, orchestration of batch processing jobs with Airflow; see the sketch after this list).
  • Design efficient data models across various storage technologies (cloud object storage, HDFS, relational and non-relational databases).
  • Collaborate with DevOps Engineers on production delivery and related objectives.
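
As a hedged illustration of the Airflow orchestration mentioned above, the following is a minimal sketch of a DAG that submits a daily Spark batch job. The DAG id, script path, and connection id are hypothetical, and SparkSubmitOperator is only one possible choice of operator.

    # Illustrative sketch only: an Airflow DAG that submits a daily Spark batch job.
    # DAG id, application path, and connection id are hypothetical.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

    default_args = {
        "owner": "data-engineering",
        "retries": 2,
        "retry_delay": timedelta(minutes=10),
    }

    with DAG(
        dag_id="iot_daily_aggregation",
        default_args=default_args,
        start_date=datetime(2021, 5, 31),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        # Submit the PySpark aggregation job sketched earlier, parameterized by the run date.
        aggregate_readings = SparkSubmitOperator(
            task_id="aggregate_iot_readings",
            application="/opt/jobs/iot_daily_aggregation.py",  # hypothetical script path
            conn_id="spark_default",
            application_args=["--date", "{{ ds }}"],
        )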

Preferred experience

Profile

We are looking for a data engineer with 3-5+ years of professional industry experience. Candidates for this position are expected to have:

  • A strongly engineering-oriented profile.
  • At least 3 to 5 years of full-time professional experience (mandatory for this position).
  • Willingness to work in a rich technical environment where data engineering meets full-stack software development and DevOps.
  • Industry experience and strong programming skills in Python. Additional experience in Scala is a plus, but not mandatory.
  • Industry experience with Apache Spark. Additional experience with other data processing frameworks is a big plus.
  • Experience with data warehouse solutions and data modelling. Experience with SQL and NoSQL stores and their APIs.
  • A high level of rigor and a taste for simple but high-quality software.
  • Industry experience with more than one product, viewed from a business standpoint.

Knowledge in all of the following areas is a plus:

  • Scala
  • Docker, Kubernetes
  • Kafka stack (Kafka, Kafka Connect, KSQL, Kafka Streams)
  • Cloud Platforms (AWS / GCP / Azure)
  • Agile (Scrum) or Lean (Kanban)

Being passionate about IT ourselves, we are looking for a similarly passionate person with good team spirit.

What we offer

  • An experienced engineering team with a strong high-quality development mentality, yet one focused on fast, agile execution to achieve business impact
  • A data-centric product, where engineers make an important contribution to making it all happen
  • Team leaders with more than 10 years of professional experience in software engineering
  • A competitive salary and eligibility for participation in the stock option plan
  • Performance-based bonuses
  • Fast-growing early-stage startup
  • A multi-cultural team that is passionate about technology, regular team outings
  • Open communication, flat hierarchy, and fast execution
  • A budget for personal education, participation in conferences, and training
  • Flexible working hours
  • Remote work: every team member can choose to work remotely, in the office, or a mix of the two; today, most of us work remotely four days a week and in the office one day a week
  • A comfortable office on boulevard du Montparnasse with a nice view over Paris

Recruitment process

  1. Phone call, about 30 minutes. The objective is to confirm mutual intent to continue the process.
  2. Technical exercise. The exercise is fully asynchronous and remote, with no deadline. It usually takes about 4 hours end to end.
  3. Technical and general interview. It includes lots of pair programming, coding, and design, some theory, and some general questions, with no whiteboard programming or trick questions. Remote or in the office. Plan for at least 2 hours.
  4. Meeting with the team: meet the co-founders and more team members.

Depending on how tightly these steps are scheduled, the whole process may take from 1 to 3 weeks.
