Case Study



ESP is a leader in carbon and utility management software, on a mission to embed sustainability into every business’s performance. Its clients include large organisations across the multi-site retail, commercial property, government, and industrial sectors.

Jeremy Allen, ESP’s founder and CPO, approached Arcanum to support the scaling-up of ESP’s data science and machine learning (ML) capability. ESP had already started on this journey, embedding ML into a core IoT-related service, but needed a partner to build out an industrial-scale MLOps function and support ongoing performance management.

Business Case

ESP has expert consultants who work with clients to improve energy efficiency and reduce emissions. These consultants often work in tandem with ESP’s proprietary utility measurement and tracking software, which utilises machine learning to generate alerts when a client’s utility use is outside of expected parameters.

These alerts trigger consultants to review the data and determine whether intervention or client engagement is required to ensure energy use goals are met.

  • The cost of an incorrect notification is high: consultant time is wasted reviewing it.
  • The cost of a missed notification is high: customers lose the business value of a timely intervention.
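The tradeoff above is essentially one of alert sensitivity. As a minimal sketch (not ESP’s actual model, and with all names and thresholds hypothetical), an alert can fire when a utility reading falls outside an expected band derived from historical data, where the band width `k` trades false alerts against missed ones:

```python
# Hypothetical sketch: flag a utility reading that falls outside an
# expected band derived from recent history. Not ESP's actual model.
from statistics import mean, stdev

def expected_band(history, k=3.0):
    """Return (low, high) bounds: mean +/- k standard deviations.
    A larger k raises fewer false alerts but misses more real anomalies."""
    mu, sigma = mean(history), stdev(history)
    return mu - k * sigma, mu + k * sigma

def should_alert(reading, history, k=3.0):
    """True when the reading is outside the expected band."""
    low, high = expected_band(history, k)
    return reading < low or reading > high

history = [100, 102, 98, 101, 99, 103, 97, 100]
print(should_alert(150, history))  # clear spike -> True
print(should_alert(101, history))  # within band -> False
```

Tuning `k` directly shifts the balance between the two costs listed above: a tighter band catches more genuine excursions but consumes more consultant time on false alerts.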

The ultimate objective is to reduce human intervention while still providing high-quality advice and recommendations to customers.



The Arcanum team of data scientists and MLOps engineers reviewed the existing architecture, pipeline and algorithms.



We migrated the entire ML infrastructure from the existing fixed deployment architecture to our scalable, flexible and resilient infrastructure-as-code setup.



The model’s accuracy and performance on the new setup were validated and established as a benchmark for future measurement.
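A migration check of this kind can be sketched simply: record the accuracy on the previous deployment as the benchmark, then accept the new setup only if its accuracy stays within a tolerance of that benchmark. The function names and tolerance here are illustrative assumptions, not ESP’s actual validation procedure:

```python
# Hypothetical sketch: validate a migrated model against the accuracy
# benchmark recorded on the previous deployment.
def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def within_benchmark(new_acc, benchmark_acc, tolerance=0.01):
    """Accept the migration if accuracy drops by no more than `tolerance`."""
    return new_acc >= benchmark_acc - tolerance

labels    = [1, 0, 1, 1, 0]
old_preds = [1, 0, 1, 0, 0]   # previous deployment: 4/5 correct
new_preds = [1, 0, 1, 1, 0]   # new setup: 5/5 correct
print(within_benchmark(accuracy(new_preds, labels),
                       accuracy(old_preds, labels)))  # -> True
```

The recorded benchmark then doubles as the baseline for the ongoing performance monitoring described below.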



Once live, the Arcanum monitoring capability gives ESP a single view of infrastructure and ML performance, a view stakeholders did not have before.



Running as a managed service, the Arcanum team are responsible for improving ML accuracy over time, training and deploying the models via the Accelerate Platform.