implicity

  • Artificial Intelligence / Machine Learning, Health, Software
  • Paris, Cambridge

Tech team

Thanks to our leading SaaS cardiac remote monitoring platform, we bring outstanding innovations to cardiologists.

Our platform is built with a high scalability ambition:

  • Scalability of the infrastructure, with AWS and Kubernetes, ensuring elastic compute power
  • Scalability of the code, with a domain-oriented microservices architecture
  • Scalability of the teams, with autonomous, empowered teams

Tech colleagues are dedicated to bringing the best of technology to medical teams so they can provide outstanding health care to their patients. We’re proud of our craft and strive to improve the product and our processes in every iteration.

We were recently ranked in the top 10% for code quality, clarity, and maintainability by independent auditors during our last fundraising process.


Employee breakdown

  • Software Engineers: 65%
  • QA: 10%
  • DevOps: 7%
  • CTO - Architect - Release - Security - IT support: 18%

Technologies and tools

Backend

  • Scala: 100%
  • Python: 100%
  • Kafka: 100%
  • Node.js: 95%
  • Nest JS: 80%
  • PostgreSQL: 70%
  • MongoDB: 50%
  • MySQL: 30%
  • MariaDB: 30%
  • Hapi JS: 20%

Frontend

  • TypeScript: 100%
  • SASS: 100%
  • Redux.js: 100%
  • PrimeNG: 100%
  • Angular: 100%

Devops

  • XRay: 100%
  • Terraform: 100%
  • Kubernetes: 100%
  • GitLab: 100%
  • Docker: 100%
  • Ansible: 100%
  • AWS: 70%

Data

  • Hadoop: 100%
  • AWS Glue: 100%
  • Apache Spark: 100%
  • Amplitude: 100%

Continuous Integration

  • Jest: 100%
  • Cypress: 100%
  • GitLab CI: 80%
  • Travis CI: 5%

Project Management

  • Slack: 100%
  • Notion.so: 100%
  • JIRA: 100%

IDE

  • WebStorm: 90%
  • Visual Studio Code: 10%

Monitoring

  • Sentry: 100%
  • Datadog: 100%

Design

  • Redschift: 100%
  • Maze: 100%
  • Figma: 100%

GitLab ⚙️

We have fully adopted GitLab, which provides an integrated solution for all our CI/CD needs, including advanced code and container scanning tools and code linters that ensure readability and best practices in our code.

Horizontal pod autoscaling ⚙️

Kubernetes HPA allows our infrastructure to scale dynamically when thresholds are crossed on various metrics, such as resource consumption, queue depth, and even application-generated metrics.

Event-driven architecture ⚙️

Migrating from a monolith to microservices is not an easy task. Adopting an event-driven architecture for internal communication helped us limit coupling between domains and increase the scalability of our solution.
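
As an illustration, here is a minimal sketch of that pattern in TypeScript with kafkajs (Kafka and Node.js are both part of our stack); the topic name, event shape, and service names are assumptions for the example, not our actual contracts.

```typescript
// Minimal sketch of event-driven communication between two domains over Kafka.
// Topic name, event shape, and client ids are illustrative assumptions.
import { Kafka } from "kafkajs";

interface TransmissionReceived {
  transmissionId: string;
  deviceId: string;
  receivedAt: string; // ISO 8601 timestamp
}

const kafka = new Kafka({ clientId: "example-app", brokers: ["kafka:9092"] });

// Producer side: the publishing domain emits a fact and knows nothing about consumers.
export async function publishTransmissionReceived(event: TransmissionReceived): Promise<void> {
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic: "ingestion.transmission-received",
    messages: [{ key: event.deviceId, value: JSON.stringify(event) }],
  });
  await producer.disconnect();
}

// Consumer side: another domain reacts to the event independently,
// so the two domains stay decoupled and can scale separately.
export async function startMonitoringConsumer(): Promise<void> {
  const consumer = kafka.consumer({ groupId: "monitoring-service" });
  await consumer.connect();
  await consumer.subscribe({ topic: "ingestion.transmission-received", fromBeginning: false });
  await consumer.run({
    eachMessage: async ({ message }) => {
      const event = JSON.parse(message.value?.toString() ?? "{}") as TransmissionReceived;
      // Domain-specific handling goes here (e.g. scheduling an analysis job).
      console.log(`New transmission ${event.transmissionId} from device ${event.deviceId}`);
    },
  });
}
```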

Organization and methodologies

Methodology & Delivery

  • Agile: 2-week sprint development + 1-week testing/validation
  • Extreme programming: pair programming on demand
  • 1 production release every 3 weeks

Tech Company Mindset:

  • Shared roadmap between « Tech » & « Product » (50/50)

Planning

  • Roadmap planning every quarter
  • Sprint planning every 2 weeks

Mentoring / Inspiration

  • Tech workshop 1h/week, with presentation and discussion
  • Training on specific challenging topics

Organization: 4 autonomous squads (Devs, QAs, Products)

Equipment: Linux and Mac friendly

And last but not least… weekly afterwork 😁

Projects and tech challenges

  • 💪🏻 Patient Mobile Solution (PMS)

The Patient Mobile Solution (PMS) is a simple single-page web app, used on the patient's mobile phone, that helps patients troubleshoot connectivity issues with their pacemakers and report their symptoms during therapeutic guidance.

A meaningful success for our teams on this project was delivering a product from scratch, in a matter of weeks, with 3 different teams each working on an isolated set of microservices with curated technical specifications.

We were able to benefit from the flexibility of our microservices by relying on our event-driven architecture, integrated API documentation, and a high-quality specification process.

Today, this project helps thousands of patients in their daily care.
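
For illustration only, here is a hypothetical TypeScript sketch of what submitting a symptom report from such a single-page app could look like; the endpoint and field names are assumptions, not the actual PMS API.

```typescript
// Hypothetical client-side submission of a PMS symptom report.
// Endpoint, field names, and payload shape are illustrative assumptions.
interface SymptomReport {
  patientId: string;
  symptoms: string[];               // e.g. ["palpitations", "dizziness"]
  pacemakerConnectivityOk: boolean; // result of the guided connectivity check
  reportedAt: string;               // ISO 8601 timestamp
}

export async function submitSymptomReport(report: SymptomReport): Promise<void> {
  const response = await fetch("/api/pms/symptom-reports", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(report),
  });
  if (!response.ok) {
    throw new Error(`Report submission failed with status ${response.status}`);
  }
}
```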

  • 💪🏻 Data Pipeline Agent (DPA)

    The DPA is one of the most critical components of our solution.

    Ingesting data from manufacturers in a fast, scalable, and secure way is complex. In Implicity's early days, we could not imagine how much information we would have to process, nor how hard it would be.

    Designing the right system to absorb a large amount of data every day, under security and traceability constraints, is one of the most challenging parts of our technical stack (see the sketch below).

    Our ingestion teams spent a challenging couple of years keeping the ingestion pipeline live while rethinking its entire design, to ensure we maintain this quality of service with 10 to 100 times more data.

    Today, thanks to the hard work of a passionate team of engineers, we are closer to the goal of a complete ingestion pipeline whose design no longer limits the quantity of data it can ingest daily.
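
To make the traceability constraint concrete, here is a hypothetical TypeScript sketch of an ingestion entry point that wraps each raw transmission with traceability metadata before asynchronous processing; the names and fields are assumptions, not the actual DPA design.

```typescript
// Hypothetical ingestion entry point attaching traceability metadata.
// Names, fields, and types are illustrative assumptions.
import { createHash, randomUUID } from "crypto";

interface RawTransmission {
  manufacturer: string; // e.g. "vendor-a"
  deviceId: string;
  payload: Buffer;      // raw report as received from the manufacturer
}

interface TracedTransmission extends RawTransmission {
  ingestionId: string;   // unique id for end-to-end traceability
  receivedAt: string;    // ISO 8601 timestamp
  payloadSha256: string; // checksum to detect corruption or tampering
}

// Wrap the raw data with traceability metadata before handing it to async processing,
// so every downstream step can be tied back to a single ingestion event.
export function tagForIngestion(raw: RawTransmission): TracedTransmission {
  return {
    ...raw,
    ingestionId: randomUUID(),
    receivedAt: new Date().toISOString(),
    payloadSha256: createHash("sha256").update(raw.payload).digest("hex"),
  };
}
```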


Recruitment process

  • 1st step: 15-20min « HR screening call », just to check that it is mutually a good idea to move forward with the process
  • 2nd step: 1h « Manager interview » with the CTO (generalist interview: hard skills, soft skills, cultural fit, etc.)
  • 3rd step: 1h15 « Tech interview » with 2 members of the team (just oral questions on your hard skills)
  • 4th step: 45min « Final interview », with anyone who would like to join for last questions (often HR & the CTO, for soft-skills questions)

The whole process usually takes 10 days, and an offer usually follows within 24h of the last interview 🤞🏻
