Visual Behavior

  • Artificial Intelligence / Machine Learning, Robotics, Specialised Engineering
  • Lyon

Tech team

We are developing an artificial vision solution for robots. Our team brings together experts ranging from AI engineers and software developers to PhDs in machine learning, and we build models that allow robots to better understand their environment.

For this we have several technical challenges:

  • Finding the fundamental blocks that allow visual autonomy
  • Deploying our applications in real time
  • Being able to rely on embedded hardware

Our techniques and tools:

  • Supervised and self-supervised learning, CNN & Transformer architectures
  • PyTorch, Python, C++
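To illustrate the kind of models this stack implies, here is a minimal PyTorch sketch of a small CNN backbone. The architecture and names are hypothetical, for illustration only, not the team's actual models:

```python
import torch
import torch.nn as nn

class TinyBackbone(nn.Module):
    """Hypothetical minimal CNN backbone for illustration."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pooling to (32, 1, 1)
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.features(x).flatten(1)  # (batch, 32)
        return self.head(feats)              # (batch, num_classes)

model = TinyBackbone()
out = model(torch.randn(2, 3, 64, 64))  # a batch of 2 RGB images
print(out.shape)  # torch.Size([2, 10])
```

In a self-supervised setting the same kind of backbone would be pre-trained without labels before fine-tuning a task head.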

Employee breakdown

  • Engineering

    25%

  • Research

    50%

  • Product

    25%

Technologies and tools

Backend

  • Python
    100%

Data

  • TensorRT
    100%
  • PyTorch
    100%

Python and PyTorch ⚙️

We rely on Python and PyTorch for the majority of our development work.

ONNX and TensorRT ⚙️

To put our models into production, we use ONNX and TensorRT.

CUDA kernels ⚙️

Occasionally, we write custom CUDA kernels to speed up the execution of our models.

Organization and methodologies

In the Visual Behavior team, we hold weekly progress meetings with the whole team to review the status of each project. We also run a technical progress meeting to dive deeper into technical topics.

We usually present our ideas and current direction to the team for debate and brainstorming. Within the teams, the aim is to encourage free exchange: all ideas are welcome, encouraged and rewarded.

Projects and tech challenges

Aloception

Aloception is a set of modules for computer vision built on top of popular deep learning libraries: PyTorch and PyTorch Lightning. The tool is open source and available on our GitHub.

It consists of three packages:

  • Aloscene extends the use of tensors with augmented tensors and spatial augmented tensors.
  • Alodataset implements ready-to-use computer vision datasets with the help of aloscene and augmented tensors to facilitate the transformation and display of your vision data.
  • Alonet integrates several promising computer vision architectures.
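The augmented-tensor idea behind aloscene can be sketched in plain PyTorch: an image tensor carries its labels along, and every geometric transform updates both together. The class and method names below are hypothetical illustrations, not aloscene's actual API:

```python
import torch

class AugmentedFrame:
    """Hypothetical sketch of an augmented tensor: an image that carries
    its bounding boxes and keeps them in sync under transformations."""

    def __init__(self, data: torch.Tensor, boxes: torch.Tensor):
        self.data = data    # image tensor, shape (C, H, W)
        self.boxes = boxes  # bounding boxes, shape (N, 4) as x1, y1, x2, y2

    def hflip(self) -> "AugmentedFrame":
        # Flip the image and mirror the box x-coordinates in one step,
        # so the data and its labels never drift apart.
        _, _, w = self.data.shape
        flipped = torch.flip(self.data, dims=[-1])
        x1, y1, x2, y2 = self.boxes.unbind(-1)
        boxes = torch.stack([w - x2, y1, w - x1, y2], dim=-1)
        return AugmentedFrame(flipped, boxes)

frame = AugmentedFrame(torch.zeros(3, 100, 100),
                       torch.tensor([[10., 20., 30., 40.]]))
flipped = frame.hflip()
print(flipped.boxes)  # tensor([[70., 20., 90., 40.]])
```

This is the convenience aloscene provides: transformations and display helpers operate on the image and its annotations as a single object.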

GitHub link

Recruitment process

If you want to take on a technical research and production challenge in the field of robotics, Visual Behavior is one of the best places in France.

The hiring steps:

  1. Interview (15-30 min)
  2. Technical test
  3. Final interview and a meeting with the team in our offices, for a permanent contract

Learn more about Visual Behavior