MLOps is the discipline that streamlines building, training, testing, deploying, and tracking the versioned results of an AI model. Drawing on our deep knowledge and expertise in digital product DevOps, we have developed several separate tracks for implementing MLOps, based on industry-standard technologies, that cover most AI projects.

Our MLOps pipelines are created with the following technology stack:

Tech stack

Deployment

At Lifely, we have two approaches to deploying AI models that cover most AI implementation scenarios. Both use the aforementioned tech stack and differ only in how the models are deployed to production:
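The two deployment models named in the workflow below (incidental invocation versus continuous invocation) can be illustrated with a minimal sketch. The `predict` function is a hypothetical stand-in for a trained model's inference step, not Lifely's actual implementation; the contrast is between a batch job run on demand and a long-lived service answering requests.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def predict(features):
    """Placeholder inference: return a score for one input row."""
    return sum(features) / len(features)


# Incidental invocation: a batch job triggered on demand
# (e.g. by cron or a pipeline step), which scores a dataset and exits.
def run_batch(rows):
    return [predict(row) for row in rows]


# Continuous invocation: a long-lived HTTP service that
# answers prediction requests until it is stopped.
class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        features = json.loads(self.rfile.read(length))
        body = json.dumps({"score": predict(features)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Incidental mode: score a small dataset once.
    print(run_batch([[1.0, 2.0], [3.0, 5.0]]))
    # Continuous mode (commented out): serve predictions indefinitely.
    # HTTPServer(("0.0.0.0", 8000), PredictHandler).serve_forever()
```

Which mode fits depends on the use case: periodic scoring of accumulated data suits incidental invocation, while user-facing features that need predictions per request suit continuous invocation.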

AI development workflow

  1. Collect data
    1. Elicit datasets from the client
      1. Database dump
      2. API access
      3. Manual CSV/Excel files
      4. RPA collection from a legacy system
    2. Manually inspect the available dataset
    3. Manually preprocess the dataset
  2. Create a proof of concept in Jupyter notebooks
  3. Decide between the incidental-invocation and continuous-invocation deployment models
  4. Create accounts and access keys for hosting AI training data and AI model data
  5. Initiate the production AI model Git repository
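The inspection and preprocessing steps (1.2 and 1.3 above) can be sketched as follows. This is a minimal illustration using only the Python standard library; the column names and the inline CSV stand in for an actual client data export.

```python
import csv
import io

# A CSV export stands in for the client's dataset (hypothetical columns).
raw_csv = io.StringIO(
    "age,income\n"
    "34,52000\n"
    ",61000\n"      # row with a missing value, to be cleaned up
    "29,48000\n"
)

# Step 1.2 - inspect: load the rows and report shape and missing values.
rows = list(csv.DictReader(raw_csv))
missing = sum(1 for r in rows if any(v == "" for v in r.values()))
print(f"{len(rows)} rows, {missing} with missing values")

# Step 1.3 - preprocess: drop incomplete rows and cast numeric columns.
clean = [
    {"age": int(r["age"]), "income": int(r["income"])}
    for r in rows
    if all(v != "" for v in r.values())
]
print(clean)
```

In practice this exploration typically happens interactively in the Jupyter notebooks mentioned in step 2, where findings about data quality feed directly into the proof of concept.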