Ever felt like your computer is just holding its breath right before a big task? Maybe you are running a heavy compile, or just opening way too many browser tabs, and you can practically hear the CPU sweat. Well, the Linux kernel is about to get a major upgrade to how it handles its "short-term memory", and it is all thanks to something called sheaves.
With the release of Linux 6.19 happening basically right now, the door has swung wide open for the next big thing. Whether it is called Linux 6.20 or, more likely, Linux 7.0, we are looking at a massive shift in the memory management department. This isn't just some boring code cleanup. It is a fundamental change in how the kernel caches objects.
What on Earth are Sheaves
So here is the deal. For the longest time the Linux kernel has used "slabs" to manage memory. Think of a slab like a tray in a cafeteria: when the kernel needs a small piece of memory, it goes to the tray and grabs an object. But as computers got more cores—like those massive 128-core EPYC chips—the cafeteria got crowded. Everyone was fighting over the same trays.
Sheaves were introduced to the mainline Linux kernel last year as an opt-in, per-CPU, array-based caching layer. Instead of a shared tray, each CPU now has a "sheaf", which is basically its own personal stash of memory objects. You don't have to ask anyone else for them. You just reach into your sheaf and go.
The Evolution from 6.18 to 7.0
Sheaves support was merged back in Linux 6.18, and while it started as an opt-in caching layer, the plan was always to replace more of the per-CPU slab caches with sheaves. This is a classic Linux move: start small, test the waters, and then let it take over the world. The work queued for the upcoming Linux 7.0 cycle does exactly that, converting the remaining caches over to sheaves.
| Linux Version | Sheaves Status | Key Goal |
| --- | --- | --- |
| Linux 6.18 | Experimental / Opt-in | Initial framework for VMA and maple nodes |
| Linux 6.19 | Stabilizing | Bug fixes and preparation for wider use |
| Linux 7.0 | Mainstream / Default | Replacing per-CPU partial slabs entirely |
Why Vlastimil Babka is the Name to Know
If you are into kernel development, you have probably seen Vlastimil Babka's name a lot lately. He is a SUSE engineer and the SLAB maintainer, and he has been the driving force behind this whole sheaves revolution. He summed up the work in a recent patch series, and it is pretty exciting stuff.
He wrote: "Percpu sheaves caching was introduced as opt-in but the goal was to eventually move all caches to them. This is the next step, enabling sheaves for all caches (except the two bootstrap ones) and then removing the per cpu (partial) slabs and lots of as[sociated code]."

"Besides (hopefully) improved performance, this removes the rather complicated code related to the lockless fastpaths." - Vlastimil Babka
This is huge. In the world of kernel development, "removing complicated code" is like finding a chest of gold. It means fewer bugs, less maintenance, and a smoother ride for everyone.
The Death of the Lockless Fastpath
Okay, let's get a bit technical, but I promise I will keep it snappy. For a long time the kernel has relied on "lockless fastpaths" built on CPU compare-exchange primitives like this_cpu_try_cmpxchg128/64. This was the "fast" way to handle memory, but it was super complex, especially when you throw in things like PREEMPT_RT (real-time Linux) or kmalloc_nolock().
By moving to sheaves, we can toss out a lot of that headache. The sheaves approach uses local_trylock(), which is much simpler. It is basically like saying "hey, I am using this sheaf right now" without having to perform expensive atomic operations that slow down the whole bus.
What is staying and what is going
Going Away: Per-CPU partial caches. These were used to accelerate object allocation but they are being totally eliminated by sheaves.
Staying: The lockless slab freelist+counters update. This is still needed for freeing "remote" NUMA objects. If you have a dual-socket server and CPU 1 wants to free memory that belongs to CPU 100 on another socket you still need this.
New Stuff: The "Barn". This is a per-NUMA-node cache where sheaves go to rest when they are full or where a CPU goes when its sheaf is empty.
Show Me the Numbers (The "Hopefully" Part)
Here is the kicker. Everyone expects this to improve performance, but so far it is only "hopefully" improved performance: no numbers have been published to quantify the possible impact of this expanded sheaves use.

Wait, what? No numbers?
Yeah that is how it goes sometimes in the kernel world. Early benchmarks from Google engineers back during the 6.18 cycle showed some massive wins—we are talking +70% or even +100% in specific scalability tests on huge AMD Turin systems. But they also saw some regressions of 10-20% in other areas. It is a trade-off.
The Performance Trade-off Table
| Factor | Sheaves Benefit | The Potential Downside |
| --- | --- | --- |
| Temporal Locality | High (you get recently freed hot objects) | Objects might be from different physical slabs |
| Lock Contention | Very Low (no atomic ops on fast path) | The "Barn" spinlock could get busy under load |
| Memory Usage | Consistent | Slightly higher memory footprint due to array overhead |
Global Impact and Economics of Code
You might be wondering why a Linux blog post is talking about economic repercussions. But think about it: the modern world runs on Linux servers, from the international trade platforms that move trillions of dollars to the supply chains managed by massive databases.
If Linux 7.0 can squeeze an extra 5 or 10 percent of performance out of a server, that has a direct economic impact. Companies can do more with less hardware, which lowers the barrier to investment in tech infrastructure. In a world of geopolitical tensions, having the most efficient software stack is a strategic advantage.
Even the labor market for developers changes when the kernel gets easier to maintain. We spend less time fighting weird lockup bugs and more time building actual features.
What to Expect in February
With these patches now in slab/for-next, after being staged at first in slab/for-7.0/sheaves, the work should be submitted in February as part of the Linux 6.20~7.0 merge window.
February is going to be a wild month for kernel nerds. We will see the merge window open, and if Linus Torvalds is in a good mood, he will hit that button and Linux 7.0 will officially be born. Barring any last-minute issues, we'll find this expanded sheaves use in the next mainline kernel version.
Frequently Asked Questions
Will Linux 7.0 make my laptop faster
Maybe a little, but you will really notice it if you have a CPU with a lot of cores. The more cores you have, the more sheaves help by reducing the "crowded cafeteria" problem.
Do I need to do anything to enable sheaves
In Linux 7.0 the plan is for this to be the default for almost all caches. You won't have to flip a switch or edit a config file. It just happens.
What are the "two bootstrap caches" that don't get sheaves
Those are the very first caches the kernel creates when it is waking up. They are so basic that they can't use the fancy sheaves system because the system isn't "awake" enough to handle them yet.
Final Thoughts
The jump to Linux 7.0 is looking like it is going to be a big one. Not just because of the name but because we are finally cleaning up some of the most complex parts of the memory management system. It is a bit of a gamble—trading spatial locality for temporal locality—but the "hopefully" improved performance is a bet most of us are willing to take.
Stay tuned for more updates as we get closer to the merge window. I will be keeping a close eye on the mailing lists to see if anyone finally posts some solid benchmark numbers for the full sheaf conversion.
"Contact us via the web."
Citations: Babka, V. (2026). "Slab: replace cpu (partial) slabs with sheaves". Linux Kernel Mailing List.
Labels: Linux 7.0, Kernel, Sheaves, Memory Management, SLUB, Performance, Tech Trends


