Meet Canonical at KubeCon + CloudNativeCon North America 2024

We are ready to connect with the pioneers of open-source innovation! Canonical, the force behind Ubuntu, is returning as a gold sponsor at KubeCon + CloudNativeCon North America 2024. 

This premier event, hosted by the Cloud Native Computing Foundation, brings together the brightest minds in open source and cloud-native technologies. From November 13-15, 2024, we’ll be in Salt Lake City, Utah—don’t miss the chance to be part of this transformative gathering of industry leaders and visionaries!

You can book a meeting with our team here.

Canonical recently celebrated 20 years as publishers of Ubuntu, the world’s most popular Linux operating system. Canonical’s portfolio covers critical systems, from the smallest devices to the largest clouds, from the kernel to containers, from databases to AI. With customers that include top tech brands, emerging startups, governments and home users, Canonical delivers trusted open source for everyone. 

Minimal, secure containers from your favorite distro

Canonical containers are designed for modern software deployment. Our container portfolio ranges from an ecosystem of base OCI images and ultra-optimized chiseled container images to our long-term supported Docker images.

As developers build applications, they can count on Ubuntu for a seamless containerization experience—from development to production. With timely updates, robust security patches, and long-term support, Ubuntu ensures a consistent and predictable life cycle, backed by a reliable support commitment.

That’s not all: as part of the container design and build service announced on June 26, 2024, customers can engage Canonical to design a Docker image of an open source application, or a base image that includes all of the open source dependencies needed to host their proprietary app. They get hardened, distroless container images with a minimal attack surface and 12+ years of CVE maintenance. Canonical will support these custom-built images on all popular Kubernetes platforms, including RHEL, VMware Kubernetes and public clouds.

Up to 10 years of support for the most popular Docker images for databases and big data

Engineered for your most critical workloads, Canonical’s open source data solutions deliver better ROI across clouds. Our portfolio of images includes a comprehensive set of databases and big data containers that extend the same principles behind Ubuntu—reliability, security, simplicity and performance—to your data stack.

We provide containers for PostgreSQL, MySQL, MongoDB, Spark, Kafka, OpenSearch and Valkey. These images, which follow the Open Container Initiative (OCI) format, are optimized to run in any cloud, on-premises or air-gapped environment, and on any hardware. They run natively on Ubuntu, but are also compatible with any CNCF distribution of Kubernetes. Whether you need a minimal, stripped-down image with just the essential packages, or an image with all the essential plugins—we have a ready-to-use, trusted container that can seamlessly integrate into your environment.
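
As a quick illustration, one of these images can be pulled and started programmatically. The minimal sketch below uses the Docker SDK for Python against the ubuntu/postgres image on Docker Hub; the tag, port mapping and password are illustrative assumptions, not recommendations:

```python
# Minimal sketch: run Canonical's PostgreSQL container with the Docker SDK
# for Python (pip install docker). Tag, port and password are placeholders.
import docker

client = docker.from_env()

container = client.containers.run(
    "ubuntu/postgres:latest",       # Canonical's OCI image on Docker Hub
    detach=True,                    # run in the background
    environment={"POSTGRES_PASSWORD": "example-password"},  # required by the image
    ports={"5432/tcp": 5432},       # expose PostgreSQL on the host
)
print(container.short_id, container.status)
```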

Deliver AI at scale

Open source enables organizations to iterate faster and accelerates project delivery by taking away the burden of licensing and tool accessibility. However, GenAI comes with several challenges, such as the need for extensive compute resources and the associated costs. To optimize the use of their compute resources, organizations need efficient and scalable AI infrastructure, from bare metal to Kubernetes to their MLOps platforms.

Our Kubeflow distribution, Charmed Kubeflow, is designed to run on any infrastructure, enabling you to take your models to production in the environment that best suits your needs. 

You can use Canonical’s modular MLOps platform to smoothly transition from experimentation to production, ensuring a quick return on investment. 
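
To make the experimentation-to-production idea concrete, here is a minimal sketch of a pipeline built with the Kubeflow Pipelines v2 SDK, which Charmed Kubeflow bundles; the component body, names and artifact URI are illustrative assumptions:

```python
# Minimal sketch of a Kubeflow Pipelines v2 pipeline (pip install kfp).
# Component logic and names below are illustrative placeholders.
from kfp import compiler, dsl


@dsl.component(base_image="python:3.11")
def train_model(epochs: int) -> str:
    """Toy training step; a real component would train and persist a model."""
    print(f"training for {epochs} epochs")
    return "s3://example-bucket/model"  # placeholder artifact URI


@dsl.pipeline(name="demo-training-pipeline")
def demo_pipeline(epochs: int = 3):
    train_model(epochs=epochs)


if __name__ == "__main__":
    # Compile to a reusable YAML package that can be uploaded to any
    # Kubeflow Pipelines instance, including one run by Charmed Kubeflow.
    compiler.Compiler().compile(demo_pipeline, "demo_pipeline.yaml")
```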

At our booth, K10, you will be able to see the full RAG pipeline setup that can be deployed on any cloud (public or private) using Canonical products: Charmed Kubeflow, Charmed MLflow, Charmed OpenSearch, MicroK8s and the Canonical Observability Stack (COS).
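
For a flavor of the retrieval step in such a RAG pipeline, the sketch below queries OpenSearch’s k-NN vector search with the opensearch-py client. It assumes an index with a knn_vector field; the host, credentials, index and field names, and the pre-computed question embedding are all illustrative:

```python
# Minimal sketch of RAG retrieval against OpenSearch (pip install opensearch-py).
# Assumes an index with a knn_vector field "embedding" and a text field "text";
# hosts, credentials and the query vector are placeholders.
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "localhost", "port": 9200}],
    http_auth=("admin", "example-password"),
    use_ssl=True,
    verify_certs=False,  # demo only; verify certificates in production
)

question_vector = [0.1] * 384  # placeholder embedding of the user question

response = client.search(
    index="docs",
    body={
        "size": 3,
        "query": {"knn": {"embedding": {"vector": question_vector, "k": 3}}},
    },
)

# Concatenate the top passages into the context handed to the LLM prompt.
context = "\n".join(hit["_source"]["text"] for hit in response["hits"]["hits"])
print(context)
```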

Discover cloud native innovations with Canonical & our partners!

Canonical and its close partners are moving AI forward. Come down to our booth to learn from our collaborations and find out how to:

  • Run GenAI projects at scale in a secure manner using confidential computing
  • Benefit from Managed Kubeflow on any cloud to develop and deploy models and move beyond experimentation
  • Set up your ML environment with only 3 commands on Ubuntu using Data Science Stack
  • Take your LLMs to production in a secure, scalable environment with open source tooling

Accelerated GenAI on Azure with Canonical’s Managed Kubeflow

Canonical and Microsoft Azure at booth C4 on Nov 13 at 2 PM

We’ll be presenting Managed Kubeflow on Microsoft Azure to highlight our engagement with the Microsoft Azure community. The session will include limited access to sign up for the preview that Canonical and Microsoft will be running in the upcoming months.

AI on the ARM platform with Ampere

Model inference is not always straightforward: organizations need to ensure the right tooling, the security of the model, and compatibility with the hardware underneath. Join our demo to see how Ampere and Canonical enable AI on the ARM platform using MicroK8s. It will focus on a computer vision use case, which is suitable for a wide variety of industries, including manufacturing, retail and logistics.

End-to-end Generative AI workflows built for developers, ready for production 

Canonical and NVIDIA at booth K10 on Nov 14 at 11 AM

As enterprises invest more into generative AI solutions, finding a toolchain that is easy to use, scalable and backed by enterprise support becomes vital for businesses looking to take full advantage of the latest innovations.

In this demo, we will delve into the NVIDIA NGC Catalog and NVIDIA AI Enterprise, which offers a suite of AI tools and frameworks (including NVIDIA NIM microservices) that integrate into cloud native projects like Kubernetes and KServe while offering security and enterprise support.

We will demo how to take a state-of-the-art open Meta Llama 3.1 8B model, run inference using NVIDIA NIM, and scale the model dynamically using the open-source KServe project. We will then use advanced PEFT techniques and deploy LoRA adapters trained by the NVIDIA NeMo Customizer microservice, running multiple fine-tuned versions of the open model on a single Kubernetes cluster.
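
As a rough idea of what serving such a model looks like, the sketch below creates a KServe InferenceService with the kserve Python SDK pointing at a NIM-style container; the image reference, namespace, replica bounds and GPU sizing are illustrative assumptions, and the PEFT/LoRA wiring from the demo is not shown:

```python
# Minimal sketch: deploy an LLM container as a KServe InferenceService
# (pip install kserve). Image, namespace and sizing are placeholders.
from kubernetes import client as k8s
from kserve import (
    KServeClient,
    V1beta1InferenceService,
    V1beta1InferenceServiceSpec,
    V1beta1PredictorSpec,
)

isvc = V1beta1InferenceService(
    api_version="serving.kserve.io/v1beta1",
    kind="InferenceService",
    metadata=k8s.V1ObjectMeta(name="llama-3-1-8b", namespace="default"),
    spec=V1beta1InferenceServiceSpec(
        predictor=V1beta1PredictorSpec(
            min_replicas=1,   # KServe scales replicas between these bounds
            max_replicas=4,
            containers=[
                k8s.V1Container(
                    name="kserve-container",
                    # Placeholder NIM-style image reference
                    image="nvcr.io/nim/meta/llama-3.1-8b-instruct:latest",
                    resources=k8s.V1ResourceRequirements(
                        limits={"nvidia.com/gpu": "1"}
                    ),
                )
            ],
        )
    ),
)

KServeClient().create(isvc)  # submits the resource to the current cluster
```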

Everything shown during this demo can be easily reproduced using readily available resources, thanks to the strong community involvement around open model development and deployment.

Make sure to join Andreea Munteanu as she discusses engaging the KServe community on Nov 14 at 5:25 PM.

During KubeCon, together with thought leaders from NVIDIA, Bloomberg, Nutanix and Red Hat, we will talk about model inference in the open source landscape. With a focus on KServe, the panel discussion will highlight the benefits of working with the community, the challenges that come with an open source solution, and some of the most exciting enhancements that we think the future will bring.

The panel discussion will capture key considerations when integrating with a CNCF project, based on the experience of using NIM microservices with KServe. We will also delve into the main differences between experimentation and production environments, and how to scale such an environment.

Run GenAI in production on Google Cloud

Canonical and Google Cloud at booth C1 on Nov 14 at 3:30 PM

Computing power is still a concern for organizations, so running your GenAI projects on Google Cloud is a great option. Organizations can use a fully open source solution to ensure portability. During our talk, we will highlight the main benefits of using open source for GenAI use cases, key considerations when scaling such projects, and how Google Cloud is an enabler for enterprises.

Enterprise-Ready LLMs with Confidential RAG using Intel TDX

Canonical and Intel at booth G5 on Nov 13 at 4 PM, Nov 14 at 2 PM and Nov 15 at 11:30 AM

We will be presenting a talk on Enterprise-Ready LLMs with Confidential RAG at the Intel booth. It will highlight how organizations that work with highly sensitive data can move beyond experimentation with their GenAI use cases. The talk covers key considerations, scalability concerns and a blueprint for a fully open source solution.

From metal to apps, fortify your stack

Kubernetes has proven to be a vital tool for developing and running ML models. It significantly enhances experimentation and workflow management, ensures high availability and can accommodate the resource-intensive nature of AI workloads. 

At Canonical, our aim is to streamline Kubernetes cluster management by removing unnecessary manual tasks. Be it a developer workstation, a data center, the cloud or an IoT device, deploying applications on Kubernetes should not be a different experience just because the infrastructure changes. That’s why Canonical Kubernetes provides zero-ops simplicity for small clusters and intelligent automation for larger production environments that also want to benefit from the latest community innovations.

Canonical Kubernetes is one component of our comprehensive infrastructure portfolio, with which you can quickly build a private cloud to suit your needs and simplify operations with automation.

Whether you’re building new products or AI models, it’s crucial to ensure that the pace of innovation is not hindered by security vulnerabilities. That’s why Canonical’s open source solutions come with reliable security maintenance, so you can consume the open source you need at speed, securely. 

Meet our team to learn more about Ubuntu Pro, our comprehensive subscription for open source software security. With Ubuntu Pro, organizations reduce their CVE exposure window from 98 days to 1 day on average. It enables development teams to focus on building and running innovative applications with complete peace of mind.

Join us at booth K10 and book a meeting now!

If you are attending KubeCon NA in Salt Lake City between 13 and 15 November, make sure to visit booth K10. Our team of open source experts will be available throughout the day to answer all your questions.

You can book a meeting with us here!

Run Kubeflow anywhere, easily

With Charmed Kubeflow, deployment and operations of Kubeflow are easy for any scenario.

Charmed Kubeflow is a collection of Python operators that define the integration of the apps inside Kubeflow, like katib or pipelines-ui.

Use Kubeflow on-prem, desktop, edge, public cloud and multi-cloud.

Learn more about Charmed Kubeflow ›

What is Kubeflow?

Kubeflow makes deployments of Machine Learning workflows on Kubernetes simple, portable and scalable.

Kubeflow is the machine learning toolkit for Kubernetes. It extends Kubernetes’ ability to run independent and configurable steps with machine learning-specific frameworks and libraries.

Learn more about Kubeflow ›

Install Kubeflow

The Kubeflow project is dedicated to making deployments of machine learning workflows on Kubernetes simple, portable and scalable.

You can install Kubeflow on your workstation, local server or public cloud VM. It is easy to install with MicroK8s on any of these environments and can be scaled to high availability.

Install Kubeflow ›
