From 35163550074015f85b43664a5b4ae464a44a6529 Mon Sep 17 00:00:00 2001
Message-Id: <35163550074015f85b43664a5b4ae464a44a6529.1592846147.git.zanussi@kernel.org>
In-Reply-To: <07cd0dbc80b976663c80755496a03f288decfe5a.1592846146.git.zanussi@kernel.org>
References: <07cd0dbc80b976663c80755496a03f288decfe5a.1592846146.git.zanussi@kernel.org>
From: Kevin Hao <haokexin@gmail.com>
Date: Mon, 4 May 2020 11:34:07 +0800
Subject: [PATCH 328/330] mm: slub: Always flush the delayed empty slubs in
flush_all()
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.19/older/patches-4.19.127-rt55.tar.xz
[ Upstream commit 23a2c31b19e99beaf5107071b0f32a596202cdae ]
After commit f0b231101c94 ("mm/SLUB: delay giving back empty slubs to
IRQ enabled regions"), when free_slab() is invoked with IRQs
disabled, the empty slubs are moved to a per-CPU list and freed
later, once IRQs are enabled again. But in the current code,
flush_all() checks whether a CPU actually has a CPU slab before
flushing that CPU's delayed empty slubs, which can leave a reference
to an already-released kmem_cache in a scenario like the one below:
   cpu 0                          cpu 1
 kmem_cache_destroy()
   flush_all()
    --->IPI                 flush_cpu_slab()
                              flush_slab()
                                deactivate_slab()
                                  discard_slab()
                                    free_slab()
                                      c->page = NULL;
   for_each_online_cpu(cpu)
     if (!has_cpu_slab(1, s))
       continue
       this skips flushing the
       delayed empty slubs
       released by cpu 1
   kmem_cache_free(kmem_cache, s)
                            kmalloc()
                              __slab_alloc()
                                free_delayed()
                                  __free_slab()
                                    reference to released kmem_cache
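
For reference, a sketch of flush_all() as it reads after this change,
reconstructed from the hunk below and from the RT patch that introduced
the delayed list (f0b231101c94). The slub_free_list structure and the
free_delayed() helper come from that series; the exact bodies in the
tree may differ slightly:

	/* Per-CPU list of empty slubs whose freeing was deferred
	 * because free_slab() ran with IRQs disabled (RT-specific).
	 */
	struct slub_free_list {
		raw_spinlock_t		lock;
		struct list_head	list;
	};
	static DEFINE_PER_CPU(struct slub_free_list, slub_free_list);

	static void flush_all(struct kmem_cache *s)
	{
		LIST_HEAD(tofree);
		int cpu;

		on_each_cpu_cond(has_cpu_slab, flush_cpu_slab, s, 1,
				 GFP_ATOMIC);
		for_each_online_cpu(cpu) {
			struct slub_free_list *f = &per_cpu(slub_free_list, cpu);

			/* Splice unconditionally: even if this CPU no longer
			 * has a CPU slab, it may still hold delayed empty
			 * slubs, e.g. ones queued by the IPI handler above.
			 */
			raw_spin_lock_irq(&f->lock);
			list_splice_init(&f->list, &tofree);
			raw_spin_unlock_irq(&f->lock);
		}
		/* Without the fix, a later kmalloc() could drain the skipped
		 * per-CPU list via free_delayed() and dereference
		 * page->slab_cache after kmem_cache_free(kmem_cache, s)
		 * has already released the cache.
		 */
		free_delayed(&tofree);
	}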
Fixes: f0b231101c94 ("mm/SLUB: delay giving back empty slubs to IRQ enabled regions")
Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: stable-rt@vger.kernel.org
Signed-off-by: Tom Zanussi <zanussi@kernel.org>
---
 mm/slub.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index d243c6ef7fc9..a9473bbb1338 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2341,9 +2341,6 @@ static void flush_all(struct kmem_cache *s)
 	for_each_online_cpu(cpu) {
 		struct slub_free_list *f;
 
-		if (!has_cpu_slab(cpu, s))
-			continue;
-
 		f = &per_cpu(slub_free_list, cpu);
 		raw_spin_lock_irq(&f->lock);
 		list_splice_init(&f->list, &tofree);
--
2.17.1