
How to deploy AI workloads at the edge using open source solutions

Gokhan Cetinkaya

on 30 September 2024

Running AI workloads at the edge with Canonical and Lenovo

AI is driving a new wave of opportunities in all kinds of edge settings—from predictive maintenance in manufacturing, to virtual assistants in healthcare, to telco router optimisation in the most remote locations. But to support these AI workloads running virtually everywhere, companies need edge infrastructure that’s fast, secure and highly scalable.

Open-source tools — such as MicroK8s for lightweight Kubernetes orchestration and Charmed Kubeflow for machine learning (ML) workflows — deliver greater levels of flexibility and security for edge AI deployments. And when paired with an accelerated computing stack, these solutions help professionals to deliver projects faster, reduce operational costs and ensure more predictable outcomes.

Today’s blog looks at why companies are turning to open infrastructure solutions for edge AI, and explores how to deploy a purpose-built, optimised stack that can deliver transformative intelligence at scale. 

Get the AI at the Edge reference design

Why an open infrastructure stack is right for edge AI

Organisations worldwide have a treasure trove of data at the edge, but what’s the best way to bring AI capabilities to these data sources in the most remote and rugged sites? Canonical, NVIDIA and Lenovo can help. 

To ensure purpose-built performance for edge AI, consider an open-source solution architecture that includes Canonical Ubuntu running on Lenovo ThinkEdge servers, MicroK8s for lightweight Kubernetes orchestration, and Charmed Kubeflow for ML workflow management. The NVIDIA EGX platform provides the foundation of the architecture, enabling powerful GPU-accelerated computing capabilities for AI workloads. 

Key advantages of using this pre-validated architecture include:

  • Faster iteration and experimentation: Data scientists can iterate on AI/ML models more quickly and shorten the experimentation cycle.
  • Scalability: The architecture is already tested with various MLOps tooling options, enabling quick scaling of AI initiatives.
  • Security: AI workloads benefit from the secure infrastructure and regular updates provided by Canonical Ubuntu, ensuring ongoing protection and reliability.
  • AI workload optimisation: The architecture is built to meet the specific needs of AI workloads—that is, it can efficiently handle large datasets on an optimised hardware and software stack.
  • End-to-end stack: The architecture leverages NVIDIA EGX offerings and Charmed Kubeflow to simplify the entire ML lifecycle.
  • Reproducibility: The solution offers a clear guide that professionals across the organisation can follow with consistent outcomes.

Canonical’s open source infrastructure stack

For computing on the edge, Canonical and Lenovo work together across the stack to get the best performance from certified hardware. Implementation choices are highly specific to each cloud deployment; however, many of them can be standardised and automated to reduce operational risk.

At the base of the pre-validated infrastructure is the Ubuntu operating system. Ubuntu is already embraced by AI/ML developers, so it adds familiarity and efficiency to the production environment. Ubuntu Pro extends the standard Ubuntu distribution with 10 years of security maintenance from Canonical—along with optional enterprise-grade support. 
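As a sketch, attaching a machine to Ubuntu Pro takes a single command (the token below is a placeholder for the one shown in your Ubuntu Pro dashboard):

```shell
# Attach this machine to an Ubuntu Pro subscription.
# YOUR_PRO_TOKEN is a placeholder; use the token from your Ubuntu Pro dashboard.
sudo pro attach YOUR_PRO_TOKEN

# Confirm which services (e.g. esm-infra, esm-apps) are now enabled.
pro status
```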

Canonical MicroK8s is a Kubernetes distribution certified by the Cloud Native Computing Foundation (CNCF). It offers a streamlined approach to managing Kubernetes clusters, which are invaluable for repeatable cloud deployments. MicroK8s also installs the NVIDIA GPU Operator, enabling efficient management and utilisation of GPU resources.
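As an illustrative sketch (the exact snap channel and addon set may differ in the validated design), installing MicroK8s and enabling GPU support looks like this:

```shell
# Install MicroK8s from a snap and wait for the cluster to come up.
sudo snap install microk8s --classic
sudo microk8s status --wait-ready

# Enable the GPU addon, which deploys the NVIDIA GPU Operator.
sudo microk8s enable gpu

# Verify the node now advertises an nvidia.com/gpu resource.
sudo microk8s kubectl describe node | grep -i nvidia.com/gpu
```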

Charmed Kubeflow is an enterprise-grade distribution of Kubeflow, a popular open-source ML toolkit built for Kubernetes environments. Developed by Canonical, Charmed Kubeflow simplifies the deployment and management of AI workflows, providing access to an entire ecosystem of tools and frameworks. 
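A minimal deployment sketch, assuming a MicroK8s cluster is already running (the MetalLB address range below is an example placeholder to adapt to your network):

```shell
# Charmed Kubeflow needs DNS, storage and a load balancer in MicroK8s.
sudo microk8s enable dns hostpath-storage metallb:10.64.140.43-10.64.140.49

# Bootstrap a Juju controller into MicroK8s, then deploy the Kubeflow bundle.
juju bootstrap microk8s
juju add-model kubeflow
juju deploy kubeflow --trust
```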

Finally, what sets Canonical infrastructure apart is the automation made possible by Juju, an open-source orchestration engine for automating the provisioning, management and maintenance of infrastructure components and applications. 
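For example (the application name below is illustrative), routine day-2 operations reduce to a handful of Juju commands:

```shell
# Watch the model converge as charms install and relate to each other.
juju status --watch 5s

# Scale an application up or down without manual reconfiguration.
juju scale-application kubeflow-dashboard 2

# Pick up the latest charm revision, including fixes and updates.
juju refresh kubeflow-dashboard
```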

Lenovo ThinkEdge servers for edge AI

Even the best open source infrastructure software cannot deliver its full potential without the right hardware. Lenovo ThinkEdge servers using the NVIDIA EGX platform enable powerful performance for AI workloads at the edge. 

In particular, ThinkEdge SE450 servers are purpose-built for tight spaces, making them ideal for deployment outside a traditional data centre. These servers are designed to virtualise traditional IT applications as well as new transformative AI systems, providing the processing power, storage, acceleration, and networking technologies required for the latest edge workloads.

Getting started with validated designs for edge AI

Canonical, Lenovo and NVIDIA are working together to ensure that data science is accessible across all industries. With a pre-validated reference architecture, developers and researchers have a rapid path to value for their AI initiatives. 

The deployment process begins with installing the Canonical software components on the ThinkEdge SE450 server. Using the Charmed Kubeflow dashboard, users can then create an AI experiment using the NVIDIA Triton Inference Server. Triton provides a dedicated environment for efficient and effective model serving. The end-to-end AI workflow is optimised for both cost and performance.
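As a rough sketch (the container tag and model repository path are placeholder assumptions), serving a model with Triton on a GPU-enabled node might look like:

```shell
# Start Triton against a model repository prepared by the training pipeline.
docker run --gpus all --rm -p 8000:8000 \
  -v /opt/models:/models nvcr.io/nvidia/tritonserver:24.08-py3 \
  tritonserver --model-repository=/models

# From another shell: check server readiness over Triton's HTTP API.
curl -s localhost:8000/v2/health/ready
```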

For a closer look at the reference architecture and a step-by-step guide for running AI at the edge, click on the button below to read the white paper from Lenovo.  

Read the AI at the Edge reference design


