In this series we explore the most important parts of moving AI/ML projects from lab scale to production.
One of the hardest parts of scaling any AI/ML initiative is knowing what is required to scale it!
Whilst developer operations (DevOps) has been maturing for a decade and DataOps is catching up, AI/MLOps is at the very beginning of its journey. Unfortunately it is going to take a few years for the patterns, processes, people and technology to catch up. Until then, you need to work out what is going to work for you, including consideration of use cases, architectures, tech stack, data pipelines, integration layers, monitoring, reporting, budgeting and more.
In our experience there is a lack of knowledge of what is required to scale up AI/ML projects, and many organisations start the journey based on what the largest vendor says is best (and in reality they rarely know more than you do!).
Our view is that you want an operational plan that aligns with your stage and path. You need to maximise optionality and speed, whilst minimising technical debt and investment in expert resources. You also need to consider which parts of the tech stack you should buy, build or operate yourself.
We're more than happy to talk through this topic if you are at the consideration or planning stage.