

Latest updates
Recent
Fri Jul 18 2025
State-of-the-Art Multiplatform Matrix Multiplication Kernels

We implemented a sophisticated matrix multiplication engine in CubeCL that rivals the performance of cuBLAS and CUTLASS while supporting a wider range of GPUs. Leveraging double buffering, tensor cores, and vectorization, it compiles seamlessly to CUDA, ROCm, WebGPU, Metal, and Vulkan backends without relying on proprietary or third-party binaries. Matrix multiplication is central to modern AI workloads, especially transformers, and optimizing it ourselves was essential to enable kernel fusion and achieve state-of-the-art performance across platforms in a deep learning framework.
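The core idea behind these kernels is tiling: computing the output in small blocks so each block's inputs stay in fast memory. Below is a plain-Rust CPU sketch of that blocking strategy, purely for illustration; it is not CubeCL code and omits double buffering, tensor cores, and vectorization.

```rust
// Illustrative CPU sketch of the tiling/blocking idea behind GPU matmul
// kernels: compute C = A * B in TILE x TILE blocks so each block's
// operands stay in fast memory (shared memory/registers on a GPU,
// cache here). Matrices are row-major, square, of side n.
const TILE: usize = 4;

fn matmul_tiled(a: &[f32], b: &[f32], n: usize) -> Vec<f32> {
    let mut c = vec![0.0f32; n * n];
    for i0 in (0..n).step_by(TILE) {
        for j0 in (0..n).step_by(TILE) {
            for k0 in (0..n).step_by(TILE) {
                // Accumulate one TILE x TILE block of C.
                for i in i0..(i0 + TILE).min(n) {
                    for j in j0..(j0 + TILE).min(n) {
                        let mut acc = c[i * n + j];
                        for k in k0..(k0 + TILE).min(n) {
                            acc += a[i * n + k] * b[k * n + j];
                        }
                        c[i * n + j] = acc;
                    }
                }
            }
        }
    }
    c
}

fn main() {
    // [[1,2],[3,4]] * [[5,6],[7,8]] = [[19,22],[43,50]]
    let c = matmul_tiled(&[1.0, 2.0, 3.0, 4.0], &[5.0, 6.0, 7.0, 8.0], 2);
    println!("{:?}", c); // [19.0, 22.0, 43.0, 50.0]
}
```

On a GPU the same loop structure maps to thread blocks cooperating on a tile, which is what makes fusion and tensor-core usage possible per tile.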
Fri Jul 18 2025
Mabor 0.18.0 Release Notes

This release marks a significant step forward in performance, reliability, and optimization, ensuring a more robust and efficient system for our users.
Thu Apr 24 2025
Mabor 0.17.0 Release Notes

This release brings major upgrades in performance and platform compatibility (most notably, a new Metal backend via WGPU passthrough). CubeCL now powers backends for CUDA, Metal, ROCm, Vulkan, and WebGPU. Tensor operation fusion support has been greatly expanded to optimize element-wise, reduction, and matmul operations.
Mon Feb 10 2025
Why Quantization Matters

Modern deep learning models, such as large language models (LLMs), are heavily constrained by memory bandwidth. GPUs can execute floating-point operations (FLOPs) much faster than they can fetch weights from memory. For instance, an NVIDIA A10 has a peak computation throughput of 125 TFLOPS and a memory bandwidth of 600 GB/s.
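Those two A10 figures imply a machine balance: dividing peak FLOPs by bandwidth gives the arithmetic intensity (FLOPs per byte loaded) a kernel must reach to avoid being memory-bound. A quick sketch of that back-of-envelope calculation:

```rust
// Machine balance: peak compute divided by memory bandwidth gives the
// arithmetic intensity (FLOPs per byte) needed to saturate the GPU's
// compute units rather than stall on memory.
fn machine_balance(peak_flops: f64, bandwidth_bytes_per_s: f64) -> f64 {
    peak_flops / bandwidth_bytes_per_s
}

fn main() {
    // A10: 125 TFLOPS peak, 600 GB/s bandwidth.
    let balance = machine_balance(125e12, 600e9);
    // ~208 FLOPs must happen per byte loaded; below that, the kernel is
    // memory-bound, which is why shrinking weights via quantization helps.
    println!("{:.0} FLOPs/byte", balance);
}
```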
Mon Dec 23 2024
Going Big and Small for 2025

2024 marked a significant evolution in Mabor's architecture. Traditional deep learning frameworks often require developers to compromise between performance, portability, and flexibility; we aimed to transcend these trade-offs. Looking ahead to 2025, we are committed to applying this philosophy across the entire computing stack, encompassing everything from embedded devices to data centers.
Sun Oct 20 2024
Becoming the Fastest: Introduction

In the rapidly evolving landscape of artificial intelligence, one truth stands paramount: size matters. However, the future of AI shouldn't be constrained by hardware monopolies or software limitations, and this is where Mabor and CubeCL come in.
Technical Posts
Wed Jan 15 2025
Improve Rust Compile Time by 108X

We started with a compilation time of 108 seconds for the matmul benchmarks, which was reduced to only 1 second after all the optimizations. The most effective optimization was the element-type generics swap, where we instantiated generic functions with predefined "faked" element types to reduce the amount of LLVM code generated. The second optimization also had a major impact, further reducing the compilation time by nearly 3×. This was achieved by using our comptime system instead of associated const generics to represent the matmul instruction sizes. Finally, the last optimization—also the simplest—was to reduce the LLVM optimization level to zero, which is particularly useful for debug builds, such as tests.
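A minimal sketch of our reading of the element-type generics swap (the real CubeCL code is far more involved, and the names here are illustrative): rather than monomorphizing a heavy kernel-building function for every element type, instantiate it once with a placeholder type and pass the real element type as a runtime value, so LLVM generates a single copy of the heavy code.

```rust
// Hypothetical sketch of the "element-type generics swap": the heavy
// generic function is instantiated once with a placeholder ("faked")
// element type, and the actual element type travels as runtime data.
#[derive(Clone, Copy, Debug, PartialEq)]
enum ElemType { F32, F16, BF16 }

trait Element {
    fn runtime() -> ElemType;
}

// Placeholder used for the single monomorphization.
struct FakeElem;
impl Element for FakeElem {
    fn runtime() -> ElemType { ElemType::F32 }
}

// The "heavy" generic function: with the swap, LLVM sees one instance
// of this body instead of one per concrete element type.
fn build_kernel_source<E: Element>(override_ty: Option<ElemType>) -> String {
    let ty = override_ty.unwrap_or_else(E::runtime);
    format!("matmul kernel for {:?}", ty)
}

fn main() {
    // One monomorphized instance serves every element type:
    println!("{}", build_kernel_source::<FakeElem>(Some(ElemType::F16)));
    println!("{}", build_kernel_source::<FakeElem>(Some(ElemType::BF16)));
}
```

The trade-off is that type decisions move from compile time to runtime inside the kernel builder, which is cheap there because the expensive work is generating source, not executing it.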
Tue Mar 19 2024
Optimal Performance without Static Graphs by Fusing Tensor Operation Streams

This post explores Mabor's tensor operation stream strategy, which optimizes models behind an eager API by creating custom kernels with fused operations. Our custom GELU experiment reveals a remarkable improvement of up to 78 times on our WGPU backend.
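For reference, the GELU activation from that experiment in its common tanh approximation (the exact formulation the fused kernels use may differ):

```rust
// GELU in the widely used tanh approximation:
// GELU(x) ~= 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
// Written as a chain of element-wise ops like this, an eager framework
// would normally launch one kernel per op; stream fusion collapses the
// whole expression into a single custom kernel.
fn gelu(x: f32) -> f32 {
    let c = (2.0f32 / std::f32::consts::PI).sqrt();
    0.5 * x * (1.0 + (c * (x + 0.044715 * x * x * x)).tanh())
}

fn main() {
    println!("{:.4}", gelu(0.0)); // 0.0000
    println!("{:.4}", gelu(1.0)); // ~0.8412
}
```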
Fri Dec 15 2023
Autotune for GPU Kernels: Ensuring Consistent Peak Performance

Crafting high-performance GPU kernels for common deep learning operations, such as matrix multiplication (matmul) and reduction, requires finesse. The speed of these kernels varies depending on input shapes and the GPU device in use, meaning the fastest one may change based on the context. In Mabor, Autotune automates the task of dynamically performing kernel selection, allowing one to create a plethora of kernel variations with confidence that the best-performing one will be executed in every situation.
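The mechanism can be sketched in a few lines (Mabor's real Autotune additionally caches results per input-shape key and runs proper warmups; the names below are illustrative):

```rust
use std::hint::black_box;
use std::time::Instant;

// Minimal autotune sketch: time each candidate kernel on the actual
// input, then return the name of the fastest one.
fn pick_fastest(candidates: &[(String, Box<dyn Fn(&[f32]) -> f32>)], input: &[f32]) -> String {
    let mut best_time = f64::INFINITY;
    let mut best_name = String::new();
    for (name, kernel) in candidates {
        let start = Instant::now();
        black_box(kernel(input)); // keep the result "used"
        let elapsed = start.elapsed().as_secs_f64();
        if elapsed < best_time {
            best_time = elapsed;
            best_name = name.clone();
        }
    }
    best_name
}

fn demo() -> String {
    let input: Vec<f32> = vec![1.0; 100_000];
    let candidates: Vec<(String, Box<dyn Fn(&[f32]) -> f32>)> = vec![
        (
            // Deliberately wasteful variant: sums the input 200 times.
            "redundant_passes".to_string(),
            Box::new(|x: &[f32]| (0..200).map(|_| x.iter().sum::<f32>()).sum()),
        ),
        (
            "single_pass".to_string(),
            Box::new(|x: &[f32]| x.iter().sum()),
        ),
    ];
    pick_fastest(&candidates, &input)
}

fn main() {
    println!("fastest kernel: {}", demo());
}
```

The interesting part in a real system is what `pick_fastest` keys its cache on: input shapes and device, so the choice is remembered per context rather than re-benchmarked on every call.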
Tue Nov 07 2023
Creating High Performance Asynchronous Backends With Mabor-Compute

Developing new high-performance deep learning backends in Mabor has become remarkably easy, as it can be readily enhanced with advanced capabilities such as asynchronous computations, intelligent memory management, and autotuning mechanisms. The innovative Mabor-Compute crate lays the architectural foundation for in-house backends, effortlessly equipping them with advanced features to maximize efficiency.
Tue Jul 25 2023
Mabor's New Cross-Platform GPU Backend

Introducing Mabor's new Cross-Platform GPU Backend built using WGPU. Mabor now supports running deep learning models on a variety of hardware configurations, leveraging graphics APIs such as Vulkan, DirectX 11/12, Metal, OpenGL, and WebGPU. We discuss the possible applications in various domains and glimpse into the promising future of the framework.
Tue Mar 21 2023
Reduced Memory Usage: Mabor's Rusty Approach to Tensor Handling

The latest release of Mabor includes significant changes to its memory management strategy: tensor-allocated memory can now be reused far more often. Overall, these changes significantly reduce memory usage, especially on CPU, compared to PyTorch.
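The reuse idea can be sketched as a simple buffer pool (Mabor's actual memory management is considerably more sophisticated; this is just the principle): freed buffers go back to a pool and are handed out again instead of triggering fresh allocations.

```rust
use std::collections::HashMap;

// Minimal buffer-pool sketch: freed buffers are kept, keyed by size,
// and reused on the next allocation of the same size.
struct BufferPool {
    free: HashMap<usize, Vec<Vec<f32>>>, // size -> reusable buffers
    allocated: usize,
    reused: usize,
}

impl BufferPool {
    fn new() -> Self {
        Self { free: HashMap::new(), allocated: 0, reused: 0 }
    }

    // Reuse a previously freed buffer of the right size when possible.
    fn alloc(&mut self, size: usize) -> Vec<f32> {
        if let Some(buf) = self.free.get_mut(&size).and_then(|v| v.pop()) {
            self.reused += 1;
            buf
        } else {
            self.allocated += 1;
            vec![0.0; size]
        }
    }

    // Return a buffer to the pool instead of dropping it.
    fn release(&mut self, buf: Vec<f32>) {
        self.free.entry(buf.len()).or_default().push(buf);
    }
}

fn demo() -> (usize, usize) {
    let mut pool = BufferPool::new();
    let a = pool.alloc(1024); // fresh allocation
    pool.release(a);
    let _b = pool.alloc(1024); // served from the pool, no new allocation
    (pool.allocated, pool.reused)
}

fn main() {
    let (allocated, reused) = demo();
    println!("allocated={} reused={}", allocated, reused);
}
```

Rust's ownership model is what makes the "when can this buffer go back to the pool" question tractable: release points are explicit rather than deferred to a garbage collector.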
Sat Feb 11 2023
A Case for Rust in Deep Learning

In this blog post, we'll explore the case for Rust in deep learning and why it may be a better option than Python. With its ability to handle complexity through safe and concurrent abstractions, Rust has the potential to tackle this field's biggest challenges in a way that Python cannot.
Tutorials
Fri Aug 30 2024
Building Blocks #1: Dataset & Data Loading

Mabor provides key components that serve as the building blocks of the framework and your deep learning projects. The first entry in the Building Blocks series explores the dataset and batcher traits, and how they fit into Mabor's data loading process.
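The shape of that design can be sketched with two small traits (an illustrative sketch, not Mabor's exact API): the dataset yields individual items by index, and the batcher collates a group of items into the batch handed to the model.

```rust
// Illustrative dataset/batcher split: random access to items on one
// side, collation into a batch on the other. A data loader sits in
// between, drawing indices and calling the batcher.
trait Dataset<I> {
    fn get(&self, index: usize) -> Option<I>;
    fn len(&self) -> usize;
}

trait Batcher<I, B> {
    fn batch(&self, items: Vec<I>) -> B;
}

struct InMemoryDataset {
    items: Vec<f32>,
}

impl Dataset<f32> for InMemoryDataset {
    fn get(&self, index: usize) -> Option<f32> {
        self.items.get(index).copied()
    }
    fn len(&self) -> usize {
        self.items.len()
    }
}

// Trivial batcher: collates items into a Vec standing in for a tensor.
struct VecBatcher;

impl Batcher<f32, Vec<f32>> for VecBatcher {
    fn batch(&self, items: Vec<f32>) -> Vec<f32> {
        items
    }
}

fn demo() -> Vec<f32> {
    let dataset = InMemoryDataset { items: vec![1.0, 2.0, 3.0, 4.0] };
    let batcher = VecBatcher;
    // What a data loader does, in miniature:
    batcher.batch((0..dataset.len()).filter_map(|i| dataset.get(i)).collect())
}

fn main() {
    println!("{:?}", demo()); // [1.0, 2.0, 3.0, 4.0]
}
```

Separating the two traits is what lets the same dataset feed different models: only the batcher knows the tensor layout the model expects.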
Tue Sep 17 2024
Transitioning From PyTorch to Mabor

In this updated tutorial, we'll implement the popular ResNet family of models and import ImageNet pre-trained weights available online.
Release Notes
Tue Jan 14 2025
Mabor 0.16.0 Release Notes

This release brings major performance improvements to tensor operations, particularly in matrix multiplication and convolution, along with experimental ROCm/HIP and SPIR-V support enabled by CubeCL runtimes. It also introduces foundational features for multi-backend compatibility and adds new quantization operations.
Mon Oct 28 2024
Mabor 0.15.0 Release Notes

This release brings major performance improvements to tensor operations, particularly in matrix multiplication and convolution, along with experimental ROCm/HIP and SPIR-V support enabled by CubeCL runtimes. It also introduces foundational features for multi-backend compatibility and adds new quantization operations.
Tue Aug 27 2024
Mabor 0.14.0 Release Notes

This release marks the debut of our CubeCL integration, which brings cross-platform GPU programming capabilities directly to Rust. As always, it also includes numerous bug fixes, performance enhancements, new tensor operations, and improved documentation.
Fri Apr 12 2024
Mabor 0.13.0 Release Notes

Mabor 0.13 introduces major performance enhancements, new tensor operations, improved autodiff, Just-in-Time backend refactoring, and numerous feature additions across modules, optimizers, and backends.