[rt] Update to 4.9-rt1 and reenable

Uwe Kleine-König 2016-12-23 21:25:24 +01:00
parent b9aaa1abc5
commit 61b71464b3
332 changed files with 1534 additions and 5695 deletions

debian/changelog

@ -9,8 +9,9 @@ linux (4.9-1~exp1) UNRELEASED; urgency=medium
[ Uwe Kleine-König ]
* enable `perf data' support; patch by Sebastian Andrzej Siewior
(Closes: #846597)
* [rt] Update to 4.9-rt1 and reenable
-- Uwe Kleine-Koenig <ukleinek@debian.org> Sun, 18 Dec 2016 17:53:39 +0100
-- Uwe Kleine-König <ukleinek@debian.org> Sun, 18 Dec 2016 17:53:39 +0100
linux (4.9~rc8-1~exp1) experimental; urgency=medium


@ -47,7 +47,7 @@ debug-info: true
signed-modules: true
[featureset-rt_base]
enabled: false
enabled: true
[description]
part-long-up: This kernel is not suitable for SMP (multi-processor,


@ -1,43 +0,0 @@
From: Thomas Gleixner <tglx@linutronix.de>
Date: Fri, 1 Mar 2013 11:17:42 +0100
Subject: futex: Ensure lock/unlock symmetry versus pi_lock and hash bucket lock
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
In exit_pi_state_list() we have the following locking construct:
spin_lock(&hb->lock);
raw_spin_lock_irq(&curr->pi_lock);
...
spin_unlock(&hb->lock);
In !RT this works, but on RT the migrate_enable() function which is
called from spin_unlock() sees atomic context due to the held pi_lock
and just decrements the migrate_disable_atomic counter of the
task. Now the next call to migrate_disable() sees the counter being
negative and issues a warning. That check should be in
migrate_enable() already.
Fix this by dropping pi_lock before unlocking hb->lock and reacquiring
pi_lock after that again. This is safe as the loop code reevaluates
head again under the pi_lock.
Reported-by: Yong Zhang <yong.zhang@windriver.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
kernel/futex.c | 2 ++
1 file changed, 2 insertions(+)
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -866,7 +866,9 @@ void exit_pi_state_list(struct task_stru
* task still owns the PI-state:
*/
if (head->next != next) {
+ raw_spin_unlock_irq(&curr->pi_lock);
spin_unlock(&hb->lock);
+ raw_spin_lock_irq(&curr->pi_lock);
continue;
}

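The locking pattern at stake, as a minimal sketch (simplified from the exit_pi_state_list() loop above; not the verbatim kernel code):

    spin_lock(&hb->lock);
    raw_spin_lock_irq(&curr->pi_lock);
    /* ... check whether the task still owns the PI state ... */
    raw_spin_unlock_irq(&curr->pi_lock);  /* drop the inner raw lock first */
    spin_unlock(&hb->lock);               /* migrate_enable() no longer runs in atomic context */
    raw_spin_lock_irq(&curr->pi_lock);    /* reacquire; the loop re-reads head under pi_lock */

With this ordering the migrate_disable_atomic counter can no longer be decremented while pi_lock is still held.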

@ -1,7 +1,7 @@
From: "Yadi.hu" <yadi.hu@windriver.com>
Date: Wed, 10 Dec 2014 10:32:09 +0800
Subject: ARM: enable irq in translation/section permission fault handlers
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Probably happens on all ARM, with
CONFIG_PREEMPT_RT_FULL


@ -1,31 +0,0 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Wed, 6 Apr 2016 17:30:28 +0200
Subject: [PATCH] ARM: imx: always use TWD on IMX6Q
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
There is no reason to limit the TWD to SMP kernels if the hardware has it
available.
On Wandboard i.MX6SOLO, running PREEMPT-RT and cyclictest I see as max
immediately after start in idle:
UP : ~90us
SMP: ~50us
UP + TWD: ~20us.
Based on these numbers I prefer the TWD over the slightly slower MXC
timer.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
arch/arm/mach-imx/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/arm/mach-imx/Kconfig
+++ b/arch/arm/mach-imx/Kconfig
@@ -526,7 +526,7 @@ config SOC_IMX6Q
bool "i.MX6 Quad/DualLite support"
select ARM_ERRATA_764369 if SMP
select HAVE_ARM_SCU if SMP
- select HAVE_ARM_TWD if SMP
+ select HAVE_ARM_TWD
select PCI_DOMAINS if PCI
select PINCTRL_IMX6Q
select SOC_IMX6


@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Thu, 21 Mar 2013 19:01:05 +0100
Subject: printk: Drop the logbuf_lock more often
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
The lock is held with irqs off. The latency drops 500us+ on my arm boxes
with a "full" buffer after executing "dmesg" on the shell.
@ -13,7 +13,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -1399,6 +1399,7 @@ static int syslog_print_all(char __user
@@ -1396,6 +1396,7 @@ static int syslog_print_all(char __user
{
char *text;
int len = 0;
@ -21,7 +21,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
text = kmalloc(LOG_LINE_MAX + PREFIX_MAX, GFP_KERNEL);
if (!text)
@@ -1410,6 +1411,14 @@ static int syslog_print_all(char __user
@@ -1407,6 +1408,14 @@ static int syslog_print_all(char __user
u64 seq;
u32 idx;
enum log_flags prev;
@ -36,7 +36,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
/*
* Find first record that fits, including all following records,
@@ -1425,6 +1434,14 @@ static int syslog_print_all(char __user
@@ -1422,6 +1431,14 @@ static int syslog_print_all(char __user
prev = msg->flags;
idx = log_next(idx);
seq++;
@ -51,7 +51,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
}
/* move first record forward until length fits into the buffer */
@@ -1438,6 +1455,14 @@ static int syslog_print_all(char __user
@@ -1435,6 +1452,14 @@ static int syslog_print_all(char __user
prev = msg->flags;
idx = log_next(idx);
seq++;
@ -66,7 +66,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
}
/* last message fitting into this dump */
@@ -1478,6 +1503,7 @@ static int syslog_print_all(char __user
@@ -1475,6 +1500,7 @@ static int syslog_print_all(char __user
clear_seq = log_next_seq;
clear_idx = log_next_idx;
}

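Sketched, the pattern these hunks add (the helper names here are hypothetical, not the real printk internals): during the long scan in syslog_print_all(), RT drops the irq-disabling lock every few records so waiters can get in, then revalidates after retaking it.

    int num_msg = 0;

    raw_spin_lock_irq(&logbuf_lock);
    while (records_left()) {              /* hypothetical loop condition */
        count_one_record();
        if (++num_msg > 5) {
            num_msg = 0;
            raw_spin_unlock_irq(&logbuf_lock);
            raw_spin_lock_irq(&logbuf_lock);
            if (clear_seq_changed())      /* hypothetical revalidation */
                goto restart;             /* rewind the scan; label elided */
        }
    }
    raw_spin_unlock_irq(&logbuf_lock);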

@ -1,7 +1,7 @@
From: Josh Cartwright <joshc@ni.com>
Date: Thu, 11 Feb 2016 11:54:01 -0600
Subject: KVM: arm/arm64: downgrade preempt_disable()d region to migrate_disable()
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
kvm_arch_vcpu_ioctl_run() disables the use of preemption when updating
the vgic and timer states to prevent the calling task from migrating to
@ -23,7 +23,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -584,7 +584,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_v
@@ -619,7 +619,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_v
* involves poking the GIC, which must be done in a
* non-preemptible context.
*/
@ -32,7 +32,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
kvm_pmu_flush_hwstate(vcpu);
kvm_timer_flush_hwstate(vcpu);
kvm_vgic_flush_hwstate(vcpu);
@@ -605,7 +605,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_v
@@ -640,7 +640,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_v
kvm_pmu_sync_hwstate(vcpu);
kvm_timer_sync_hwstate(vcpu);
kvm_vgic_sync_hwstate(vcpu);
@ -41,7 +41,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
continue;
}
@@ -661,7 +661,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_v
@@ -696,7 +696,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_v
kvm_vgic_sync_hwstate(vcpu);

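Condensed, the effect of the refreshed hunks: the preempt_disable()/preempt_enable() pair around the vgic/timer handling becomes migrate_disable()/migrate_enable(), which still pins the vcpu thread to one CPU but leaves it preemptible, so sleeping locks stay legal on RT.

    migrate_disable();                  /* was: preempt_disable() */
    kvm_pmu_flush_hwstate(vcpu);
    kvm_timer_flush_hwstate(vcpu);
    kvm_vgic_flush_hwstate(vcpu);
    /* ... guest entry/exit ... */
    kvm_pmu_sync_hwstate(vcpu);
    kvm_timer_sync_hwstate(vcpu);
    kvm_vgic_sync_hwstate(vcpu);
    migrate_enable();                   /* was: preempt_enable() */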

@ -1,7 +1,7 @@
From: Marcelo Tosatti <mtosatti@redhat.com>
Date: Wed, 8 Apr 2015 20:33:25 -0300
Subject: KVM: lapic: mark LAPIC timer handler as irqsafe
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Since the lapic timer handler only wakes up a simple waitqueue,
it can be executed from hardirq context.
@ -16,7 +16,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -1938,6 +1938,7 @@ int kvm_create_lapic(struct kvm_vcpu *vc
@@ -1939,6 +1939,7 @@ int kvm_create_lapic(struct kvm_vcpu *vc
hrtimer_init(&apic->lapic_timer.timer, CLOCK_MONOTONIC,
HRTIMER_MODE_ABS_PINNED);
apic->lapic_timer.timer.function = apic_timer_fn;


@ -5,7 +5,7 @@ Cc: Anna Schumaker <anna.schumaker@netapp.com>,
linux-nfs@vger.kernel.org, linux-kernel@vger.kernel.org,
tglx@linutronix.de
Subject: NFSv4: replace seqcount_t with a seqlock_t
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
The raw_write_seqcount_begin() in nfs4_reclaim_open_state() bugs me
because it maps to preempt_disable() in -RT which I can't have at this
@ -47,7 +47,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
put_nfs_open_context(ctx);
--- a/fs/nfs/nfs4_fs.h
+++ b/fs/nfs/nfs4_fs.h
@@ -107,7 +107,7 @@ struct nfs4_state_owner {
@@ -111,7 +111,7 @@ struct nfs4_state_owner {
unsigned long so_flags;
struct list_head so_states;
struct nfs_seqid_counter so_seqid;
@ -58,7 +58,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -2525,7 +2525,7 @@ static int _nfs4_open_and_get_state(stru
@@ -2697,7 +2697,7 @@ static int _nfs4_open_and_get_state(stru
unsigned int seq;
int ret;
@ -67,7 +67,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
ret = _nfs4_proc_open(opendata);
if (ret != 0)
@@ -2561,7 +2561,7 @@ static int _nfs4_open_and_get_state(stru
@@ -2735,7 +2735,7 @@ static int _nfs4_open_and_get_state(stru
ctx->state = state;
if (d_inode(dentry) == state->inode) {
nfs_inode_attach_open_context(ctx);
@ -87,7 +87,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
mutex_init(&sp->so_delegreturn_mutex);
return sp;
}
@@ -1459,8 +1459,12 @@ static int nfs4_reclaim_open_state(struc
@@ -1497,8 +1497,12 @@ static int nfs4_reclaim_open_state(struc
* recovering after a network partition or a reboot from a
* server that doesn't support a grace period.
*/
@ -101,7 +101,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
restart:
list_for_each_entry(state, &sp->so_states, open_states) {
if (!test_and_clear_bit(ops->state_flag_bit, &state->flags))
@@ -1528,14 +1532,20 @@ static int nfs4_reclaim_open_state(struc
@@ -1567,14 +1571,20 @@ static int nfs4_reclaim_open_state(struc
spin_lock(&sp->so_lock);
goto restart;
}

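For orientation, a sketch of the seqlock_t usage the patch switches to (abridged; the surrounding NFS logic is omitted): the writer serializes through the seqlock's internal spinlock, which is a sleeping lock on RT, instead of relying on disabled preemption as the bare seqcount_t did.

    /* writer: the state-recovery path */
    write_seqlock(&sp->so_reclaim_seqlock);
    /* ... reclaim open state ... */
    write_sequnlock(&sp->so_reclaim_seqlock);

    /* reader: the open path notices a concurrent recovery and retries */
    do {
        seq = read_seqbegin(&sp->so_reclaim_seqlock);
        /* ... open-path work that must not race with recovery ... */
    } while (read_seqretry(&sp->so_reclaim_seqlock, seq));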

@ -1,7 +1,7 @@
From: Steven Rostedt <rostedt@goodmis.org>
Date: Wed, 13 Feb 2013 09:26:05 -0500
Subject: acpi/rt: Convert acpi_gbl_hardware lock back to a raw_spinlock_t
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
We hit the following bug with 3.6-rt:
@ -143,7 +143,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
/* Delete the reader/writer lock */
--- a/include/acpi/platform/aclinux.h
+++ b/include/acpi/platform/aclinux.h
@@ -131,6 +131,7 @@
@@ -133,6 +133,7 @@
#define acpi_cache_t struct kmem_cache
#define acpi_spinlock spinlock_t *
@ -151,7 +151,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
#define acpi_cpu_flags unsigned long
/* Use native linux version of acpi_os_allocate_zeroed */
@@ -149,6 +150,20 @@
@@ -151,6 +152,20 @@
#define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_get_thread_id
#define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_create_lock

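The gist of the conversion, sketched (the acpi_raw_spinlock define is the line the hunk above adds; the usage below is illustrative): paths that program ACPI hardware run with interrupts off, so their lock must stay a truly spinning raw_spinlock_t on RT.

    #define acpi_raw_spinlock raw_spinlock_t *

    raw_spin_lock_irqsave(acpi_gbl_hardware_lock, flags);
    /* ... access ACPI hardware registers ... */
    raw_spin_unlock_irqrestore(acpi_gbl_hardware_lock, flags);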

@ -1,7 +1,7 @@
From: Anders Roxell <anders.roxell@linaro.org>
Date: Thu, 14 May 2015 17:52:17 +0200
Subject: arch/arm64: Add lazy preempt support
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
arm64 is missing support for PREEMPT_RT. The main feature which is
lacking is support for lazy preemption. The arch-specific entry code,
@ -13,14 +13,15 @@ indicate that support for full RT preemption is now available.
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
---
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/thread_info.h | 6 +++++-
arch/arm64/include/asm/thread_info.h | 7 ++++++-
arch/arm64/kernel/asm-offsets.c | 1 +
arch/arm64/kernel/entry.S | 13 ++++++++++---
4 files changed, 17 insertions(+), 4 deletions(-)
arch/arm64/kernel/entry.S | 12 +++++++++---
arch/arm64/kernel/signal.c | 2 +-
5 files changed, 18 insertions(+), 5 deletions(-)
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -90,6 +90,7 @@ config ARM64
@@ -91,6 +91,7 @@ config ARM64
select HAVE_PERF_EVENTS
select HAVE_PERF_REGS
select HAVE_PERF_USER_STACK_DUMP
@ -38,7 +39,7 @@ Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
int cpu; /* cpu */
};
@@ -109,6 +110,7 @@ static inline struct thread_info *curren
@@ -112,6 +113,7 @@ static inline struct thread_info *curren
#define TIF_NEED_RESCHED 1
#define TIF_NOTIFY_RESUME 2 /* callback before returning to user */
#define TIF_FOREIGN_FPSTATE 3 /* CPU's FP state is not current's */
@ -46,7 +47,7 @@ Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
#define TIF_NOHZ 7
#define TIF_SYSCALL_TRACE 8
#define TIF_SYSCALL_AUDIT 9
@@ -124,6 +126,7 @@ static inline struct thread_info *curren
@@ -127,6 +129,7 @@ static inline struct thread_info *curren
#define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED)
#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
#define _TIF_FOREIGN_FPSTATE (1 << TIF_FOREIGN_FPSTATE)
@ -54,19 +55,20 @@ Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
#define _TIF_NOHZ (1 << TIF_NOHZ)
#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
#define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT)
@@ -132,7 +135,8 @@ static inline struct thread_info *curren
@@ -135,7 +138,9 @@ static inline struct thread_info *curren
#define _TIF_32BIT (1 << TIF_32BIT)
#define _TIF_WORK_MASK (_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
- _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE)
+ _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE | \
+ _TIF_NEED_RESCHED_LAZY)
+#define _TIF_NEED_RESCHED_MASK (_TIF_NEED_RESCHED | _TIF_NEED_RESCHED_LAZY)
#define _TIF_SYSCALL_WORK (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
_TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -37,6 +37,7 @@ int main(void)
@@ -38,6 +38,7 @@ int main(void)
BLANK();
DEFINE(TI_FLAGS, offsetof(struct thread_info, flags));
DEFINE(TI_PREEMPT, offsetof(struct thread_info, preempt_count));
@ -76,7 +78,7 @@ Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
DEFINE(TI_CPU, offsetof(struct thread_info, cpu));
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -434,11 +434,16 @@ ENDPROC(el1_sync)
@@ -428,11 +428,16 @@ ENDPROC(el1_sync)
#ifdef CONFIG_PREEMPT
ldr w24, [tsk, #TI_PREEMPT] // get preempt count
@ -96,7 +98,7 @@ Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
#endif
#ifdef CONFIG_TRACE_IRQFLAGS
bl trace_hardirqs_on
@@ -452,6 +457,7 @@ ENDPROC(el1_irq)
@@ -446,6 +451,7 @@ ENDPROC(el1_irq)
1: bl preempt_schedule_irq // irq en/disable is done inside
ldr x0, [tsk, #TI_FLAGS] // get new tasks TI_FLAGS
tbnz x0, #TIF_NEED_RESCHED, 1b // needs rescheduling?
@ -104,11 +106,14 @@ Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
ret x24
#endif
@@ -708,6 +714,7 @@ ENDPROC(cpu_switch_to)
*/
work_pending:
tbnz x1, #TIF_NEED_RESCHED, work_resched
+ tbnz x1, #TIF_NEED_RESCHED_LAZY, work_resched
/* TIF_SIGPENDING, TIF_NOTIFY_RESUME or TIF_FOREIGN_FPSTATE case */
mov x0, sp // 'regs'
enable_irq // enable interrupts for do_notify_resume()
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -409,7 +409,7 @@ asmlinkage void do_notify_resume(struct
*/
trace_hardirqs_off();
do {
- if (thread_flags & _TIF_NEED_RESCHED) {
+ if (thread_flags & _TIF_NEED_RESCHED_MASK) {
schedule();
} else {
local_irq_enable();


@ -1,77 +0,0 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Fri, 22 Jan 2016 21:33:39 +0100
Subject: arm+arm64: lazy-preempt: add TIF_NEED_RESCHED_LAZY to _TIF_WORK_MASK
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
_TIF_WORK_MASK is used to check for TIF_NEED_RESCHED so we need to check
for TIF_NEED_RESCHED_LAZY here, too.
Reported-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
arch/arm/include/asm/thread_info.h | 7 ++++---
arch/arm/kernel/entry-common.S | 9 +++++++--
arch/arm64/include/asm/thread_info.h | 3 ++-
3 files changed, 13 insertions(+), 6 deletions(-)
--- a/arch/arm/include/asm/thread_info.h
+++ b/arch/arm/include/asm/thread_info.h
@@ -143,8 +143,8 @@ extern int vfp_restore_user_hwstate(stru
#define TIF_SYSCALL_TRACE 4 /* syscall trace active */
#define TIF_SYSCALL_AUDIT 5 /* syscall auditing active */
#define TIF_SYSCALL_TRACEPOINT 6 /* syscall tracepoint instrumentation */
-#define TIF_SECCOMP 7 /* seccomp syscall filtering active */
-#define TIF_NEED_RESCHED_LAZY 8
+#define TIF_SECCOMP 8 /* seccomp syscall filtering active */
+#define TIF_NEED_RESCHED_LAZY 7
#define TIF_NOHZ 12 /* in adaptive nohz mode */
#define TIF_USING_IWMMXT 17
@@ -170,7 +170,8 @@ extern int vfp_restore_user_hwstate(stru
* Change these and you break ASM code in entry-common.S
*/
#define _TIF_WORK_MASK (_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
- _TIF_NOTIFY_RESUME | _TIF_UPROBE)
+ _TIF_NOTIFY_RESUME | _TIF_UPROBE | \
+ _TIF_NEED_RESCHED_LAZY)
#endif /* __KERNEL__ */
#endif /* __ASM_ARM_THREAD_INFO_H */
--- a/arch/arm/kernel/entry-common.S
+++ b/arch/arm/kernel/entry-common.S
@@ -36,7 +36,9 @@
UNWIND(.cantunwind )
disable_irq_notrace @ disable interrupts
ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall tracing
- tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
+ tst r1, #((_TIF_SYSCALL_WORK | _TIF_WORK_MASK) & ~_TIF_SECCOMP)
+ bne fast_work_pending
+ tst r1, #_TIF_SECCOMP
bne fast_work_pending
/* perform architecture specific actions before user return */
@@ -62,8 +64,11 @@ ENDPROC(ret_fast_syscall)
str r0, [sp, #S_R0 + S_OFF]! @ save returned r0
disable_irq_notrace @ disable interrupts
ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall tracing
- tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
+ tst r1, #((_TIF_SYSCALL_WORK | _TIF_WORK_MASK) & ~_TIF_SECCOMP)
+ bne do_slower_path
+ tst r1, #_TIF_SECCOMP
beq no_work_pending
+do_slower_path:
UNWIND(.fnend )
ENDPROC(ret_fast_syscall)
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -135,7 +135,8 @@ static inline struct thread_info *curren
#define _TIF_32BIT (1 << TIF_32BIT)
#define _TIF_WORK_MASK (_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
- _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE)
+ _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE | \
+ _TIF_NEED_RESCHED_LAZY)
#define _TIF_SYSCALL_WORK (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
_TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \


@ -1,7 +1,7 @@
From: Benedikt Spranger <b.spranger@linutronix.de>
Date: Sat, 6 Mar 2010 17:47:10 +0100
Subject: ARM: AT91: PIT: Remove irq handler when clock event is unused
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Setup and remove the interrupt handler in clock event mode selection.
This avoids calling the (shared) interrupt handler when the device is
@ -45,7 +45,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
/* update clocksource counter */
data->cnt += data->cycle * PIT_PICNT(pit_read(data->base, AT91_PIT_PIVR));
@@ -211,15 +220,6 @@ static int __init at91sam926x_pit_common
@@ -230,15 +239,6 @@ static int __init at91sam926x_pit_dt_ini
return ret;
}


@ -1,7 +1,7 @@
From: Thomas Gleixner <tglx@linutronix.de>
Date: Sat, 1 May 2010 18:29:35 +0200
Subject: ARM: at91: tclib: Default to tclib timer for RT
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
RT is not too happy about the shared timer interrupt in AT91
devices. Default to tclib timer for RT.


@ -1,7 +1,7 @@
From: Frank Rowand <frank.rowand@am.sony.com>
Date: Mon, 19 Sep 2011 14:51:14 -0700
Subject: arm: Convert arm boot_lock to raw
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
The arm boot_lock is used by the secondary processor startup code. The locking
task is the idle thread, which has idle->sched_class == &idle_sched_class.


@ -1,7 +1,7 @@
Subject: arm: Enable highmem for rt
From: Thomas Gleixner <tglx@linutronix.de>
Date: Wed, 13 Feb 2013 11:03:11 +0100
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
fixup highmem for ARM.


@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Mon, 11 Mar 2013 21:37:27 +0100
Subject: arm/highmem: Flush tlb on unmap
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
The tlb should be flushed on unmap and thus make the mapping entry
invalid. This is only done in the non-debug case which does not look


@ -0,0 +1,25 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Thu, 22 Dec 2016 17:28:33 +0100
Subject: [PATCH] arm: include definition for cpumask_t
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
This definition gets pulled in by other files. With the (later) split of
RCU and spinlock.h it won't compile anymore.
The split is done in ("rbtree: don't include the rcu header").
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
arch/arm/include/asm/irq.h | 2 ++
1 file changed, 2 insertions(+)
--- a/arch/arm/include/asm/irq.h
+++ b/arch/arm/include/asm/irq.h
@@ -22,6 +22,8 @@
#endif
#ifndef __ASSEMBLY__
+#include <linux/cpumask.h>
+
struct irqaction;
struct pt_regs;
extern void migrate_irqs(void);


@ -2,7 +2,7 @@ From 6e2639b6d72e1ef9e264aa658db3b6171d9ba12f Mon Sep 17 00:00:00 2001
From: Yang Shi <yang.shi@linaro.org>
Date: Thu, 10 Nov 2016 16:17:55 -0800
Subject: [PATCH] arm: kprobe: replace patch_lock to raw lock
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
When running kprobes on an -rt kernel, the bug below is caught:


@ -1,32 +0,0 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Tue, 24 May 2016 12:56:38 +0200
Subject: [PATCH] arm: lazy preempt: correct resched condition
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
If we get out of preempt_schedule_irq() then we check for NEED_RESCHED
and call the former function again if set because the preemption counter
has to be zero at this point.
However, the counter for lazy-preempt might not be zero, therefore we have
to check the counter before looking at the need_resched_lazy flag.
Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
arch/arm/kernel/entry-armv.S | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -244,7 +244,11 @@ ENDPROC(__irq_svc)
bne 1b
tst r0, #_TIF_NEED_RESCHED_LAZY
reteq r8 @ go again
- b 1b
+ ldr r0, [tsk, #TI_PREEMPT_LAZY] @ get preempt lazy count
+ teq r0, #0 @ if preempt lazy count != 0
+ beq 1b
+ ret r8 @ go again
+
#endif
__und_fault:


@ -1,7 +1,7 @@
Subject: arm: Add support for lazy preemption
From: Thomas Gleixner <tglx@linutronix.de>
Date: Wed, 31 Oct 2012 12:04:11 +0100
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Implement the arm pieces for lazy preempt.


@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Fri, 20 Sep 2013 14:31:54 +0200
Subject: arm/unwind: use a raw_spin_lock
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Mostly unwind is done with irqs enabled; however, SLUB may call it with
irqs disabled while creating a new SLUB cache.


@ -1,7 +1,7 @@
Subject: arm64/xen: Make XEN depend on !RT
From: Thomas Gleixner <tglx@linutronix.de>
Date: Mon, 12 Oct 2015 11:18:40 +0200
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
It's not ready and probably never will be, unless xen folks have a
look at it.
@ -13,7 +13,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -689,7 +689,7 @@ config XEN_DOM0
@@ -694,7 +694,7 @@ config XEN_DOM0
config XEN
bool "Xen guest support on ARM64"


@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Wed, 09 Mar 2016 10:51:06 +0100
Subject: arm: at91: do not disable/enable clocks in a row
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Currently the driver will disable the clock and enable it one line later
if it is switching from periodic mode into one shot.


@ -1,7 +1,7 @@
From: Steven Rostedt <srostedt@redhat.com>
Date: Fri, 3 Jul 2009 08:44:29 -0500
Subject: ata: Do not disable interrupts in ide code for preempt-rt
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Use the local_irq_*_nort variants.


@ -1,84 +0,0 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Sat, 3 May 2014 11:00:29 +0200
Subject: blk-mq: revert raw locks, postpone notifier to POST_DEAD
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
The blk_mq_cpu_notify_lock should be raw because some CPU down levels
are called with interrupts off. The notifier itself currently calls one
function, blk_mq_hctx_notify().
That function acquires ctx->lock, which is a sleeping lock, and I would
prefer to keep it that way. That function only moves IO-requests from
the CPU that is going offline to another CPU and it is currently the
only one. Therefore I revert the list lock back to sleeping spinlocks
and let the notifier run at POST_DEAD time.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
block/blk-mq-cpu.c | 17 ++++++++++-------
block/blk-mq.c | 2 +-
2 files changed, 11 insertions(+), 8 deletions(-)
--- a/block/blk-mq-cpu.c
+++ b/block/blk-mq-cpu.c
@@ -16,7 +16,7 @@
#include "blk-mq.h"
static LIST_HEAD(blk_mq_cpu_notify_list);
-static DEFINE_RAW_SPINLOCK(blk_mq_cpu_notify_lock);
+static DEFINE_SPINLOCK(blk_mq_cpu_notify_lock);
static int blk_mq_main_cpu_notify(struct notifier_block *self,
unsigned long action, void *hcpu)
@@ -25,7 +25,10 @@ static int blk_mq_main_cpu_notify(struct
struct blk_mq_cpu_notifier *notify;
int ret = NOTIFY_OK;
- raw_spin_lock(&blk_mq_cpu_notify_lock);
+ if (action != CPU_POST_DEAD)
+ return NOTIFY_OK;
+
+ spin_lock(&blk_mq_cpu_notify_lock);
list_for_each_entry(notify, &blk_mq_cpu_notify_list, list) {
ret = notify->notify(notify->data, action, cpu);
@@ -33,7 +36,7 @@ static int blk_mq_main_cpu_notify(struct
break;
}
- raw_spin_unlock(&blk_mq_cpu_notify_lock);
+ spin_unlock(&blk_mq_cpu_notify_lock);
return ret;
}
@@ -41,16 +44,16 @@ void blk_mq_register_cpu_notifier(struct
{
BUG_ON(!notifier->notify);
- raw_spin_lock(&blk_mq_cpu_notify_lock);
+ spin_lock(&blk_mq_cpu_notify_lock);
list_add_tail(&notifier->list, &blk_mq_cpu_notify_list);
- raw_spin_unlock(&blk_mq_cpu_notify_lock);
+ spin_unlock(&blk_mq_cpu_notify_lock);
}
void blk_mq_unregister_cpu_notifier(struct blk_mq_cpu_notifier *notifier)
{
- raw_spin_lock(&blk_mq_cpu_notify_lock);
+ spin_lock(&blk_mq_cpu_notify_lock);
list_del(&notifier->list);
- raw_spin_unlock(&blk_mq_cpu_notify_lock);
+ spin_unlock(&blk_mq_cpu_notify_lock);
}
void blk_mq_init_cpu_notifier(struct blk_mq_cpu_notifier *notifier,
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1687,7 +1687,7 @@ static int blk_mq_hctx_notify(void *data
{
struct blk_mq_hw_ctx *hctx = data;
- if (action == CPU_DEAD || action == CPU_DEAD_FROZEN)
+ if (action == CPU_POST_DEAD)
return blk_mq_hctx_cpu_offline(hctx, cpu);
/*


@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Fri, 13 Feb 2015 11:01:26 +0100
Subject: block: blk-mq: Use swait
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
| BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:914
| in_atomic(): 1, irqs_disabled(): 0, pid: 255, name: kworker/u257:6
@ -75,7 +75,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
* Init percpu_ref in atomic mode so that it's faster to shutdown.
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -92,7 +92,7 @@ EXPORT_SYMBOL_GPL(blk_mq_freeze_queue_st
@@ -72,7 +72,7 @@ EXPORT_SYMBOL_GPL(blk_mq_freeze_queue_st
static void blk_mq_freeze_queue_wait(struct request_queue *q)
{
@ -84,7 +84,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
}
/*
@@ -130,7 +130,7 @@ void blk_mq_unfreeze_queue(struct reques
@@ -110,7 +110,7 @@ void blk_mq_unfreeze_queue(struct reques
WARN_ON_ONCE(freeze_depth < 0);
if (!freeze_depth) {
percpu_ref_reinit(&q->q_usage_counter);
@ -93,7 +93,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
}
}
EXPORT_SYMBOL_GPL(blk_mq_unfreeze_queue);
@@ -149,7 +149,7 @@ void blk_mq_wake_waiters(struct request_
@@ -129,7 +129,7 @@ void blk_mq_wake_waiters(struct request_
* dying, we need to ensure that processes currently waiting on
* the queue are notified as well.
*/


@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Thu, 29 Jan 2015 15:10:08 +0100
Subject: block/mq: don't complete requests via IPI
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
The IPI runs in hardirq context and there are sleeping locks. This patch
moves the completion into a workqueue.
@ -10,9 +10,9 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
block/blk-core.c | 3 +++
block/blk-mq.c | 20 ++++++++++++++++++++
include/linux/blk-mq.h | 1 +
include/linux/blk-mq.h | 2 +-
include/linux/blkdev.h | 1 +
4 files changed, 25 insertions(+)
4 files changed, 25 insertions(+), 1 deletion(-)
--- a/block/blk-core.c
+++ b/block/blk-core.c
@ -28,7 +28,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
rq->__sector = (sector_t) -1;
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -197,6 +197,9 @@ static void blk_mq_rq_ctx_init(struct re
@@ -177,6 +177,9 @@ static void blk_mq_rq_ctx_init(struct re
rq->resid_len = 0;
rq->sense = NULL;
@ -38,7 +38,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
INIT_LIST_HEAD(&rq->timeout_list);
rq->timeout = 0;
@@ -379,6 +382,17 @@ void blk_mq_end_request(struct request *
@@ -345,6 +348,17 @@ void blk_mq_end_request(struct request *
}
EXPORT_SYMBOL(blk_mq_end_request);
@ -56,7 +56,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
static void __blk_mq_complete_request_remote(void *data)
{
struct request *rq = data;
@@ -386,6 +400,8 @@ static void __blk_mq_complete_request_re
@@ -352,6 +366,8 @@ static void __blk_mq_complete_request_re
rq->q->softirq_done_fn(rq);
}
@ -65,7 +65,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
static void blk_mq_ipi_complete_request(struct request *rq)
{
struct blk_mq_ctx *ctx = rq->mq_ctx;
@@ -402,10 +418,14 @@ static void blk_mq_ipi_complete_request(
@@ -368,10 +384,14 @@ static void blk_mq_ipi_complete_request(
shared = cpus_share_cache(cpu, ctx->cpu);
if (cpu != ctx->cpu && !shared && cpu_online(ctx->cpu)) {
@ -82,14 +82,15 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
}
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -222,6 +222,7 @@ static inline u16 blk_mq_unique_tag_to_t
@@ -209,7 +209,7 @@ static inline u16 blk_mq_unique_tag_to_t
return unique_tag & BLK_MQ_UNIQUE_TAG_MASK;
}
struct blk_mq_hw_ctx *blk_mq_map_queue(struct request_queue *, const int ctx_index);
struct blk_mq_hw_ctx *blk_mq_alloc_single_hw_queue(struct blk_mq_tag_set *, unsigned int, int);
-
+void __blk_mq_complete_request_remote_work(struct work_struct *work);
int blk_mq_request_started(struct request *rq);
void blk_mq_start_request(struct request *rq);
void blk_mq_end_request(struct request *rq, int error);
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -89,6 +89,7 @@ struct request {

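Pulled together, the added pieces (the work member in struct request and the helper declared above) amount to this sketch:

    /* done once per request in blk_mq_rq_ctx_init() */
    INIT_WORK(&rq->work, __blk_mq_complete_request_remote_work);

    /* completing on another CPU: queue work there instead of an IPI */
    if (cpu != ctx->cpu && !shared && cpu_online(ctx->cpu)) {
        /*
         * An IPI handler runs in hardirq context, where RT's sleeping
         * locks are illegal; a workqueue context may take them.
         */
        schedule_work_on(ctx->cpu, &rq->work);
    }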

@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Tue, 14 Jul 2015 14:26:34 +0200
Subject: block/mq: do not invoke preempt_disable()
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
preempt_disable() and get_cpu() don't play well together with the sleeping
locks the code tries to take later.
@ -14,7 +14,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -397,7 +397,7 @@ static void blk_mq_ipi_complete_request(
@@ -363,7 +363,7 @@ static void blk_mq_ipi_complete_request(
return;
}
@ -23,7 +23,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
if (!test_bit(QUEUE_FLAG_SAME_FORCE, &rq->q->queue_flags))
shared = cpus_share_cache(cpu, ctx->cpu);
@@ -409,7 +409,7 @@ static void blk_mq_ipi_complete_request(
@@ -375,7 +375,7 @@ static void blk_mq_ipi_complete_request(
} else {
rq->q->softirq_done_fn(rq);
}
@ -32,10 +32,10 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
}
static void __blk_mq_complete_request(struct request *rq)
@@ -938,14 +938,14 @@ void blk_mq_run_hw_queue(struct blk_mq_h
@@ -917,14 +917,14 @@ void blk_mq_run_hw_queue(struct blk_mq_h
return;
if (!async) {
if (!async && !(hctx->flags & BLK_MQ_F_BLOCKING)) {
- int cpu = get_cpu();
+ int cpu = get_cpu_light();
if (cpumask_test_cpu(cpu, hctx->cpumask)) {
@ -49,4 +49,4 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+ put_cpu_light();
}
kblockd_schedule_delayed_work_on(blk_mq_hctx_next_cpu(hctx),
kblockd_schedule_work_on(blk_mq_hctx_next_cpu(hctx), &hctx->run_work);

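get_cpu_light()/put_cpu_light() are helpers provided by the RT patch set; a sketch of their semantics next to get_cpu()/put_cpu():

    int cpu = get_cpu_light();  /* ~ migrate_disable(); smp_processor_id(); */

    if (cpumask_test_cpu(cpu, hctx->cpumask))
        __blk_mq_run_hw_queue(hctx);  /* may take sleeping locks on RT */

    put_cpu_light();            /* ~ migrate_enable(); */

The task stays on one CPU for the duration, but since preemption is not disabled the section is allowed to block.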

@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Wed, 9 Apr 2014 10:37:23 +0200
Subject: block: mq: use cpu_light()
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
There is a might-sleep splat because get_cpu() disables preemption and
later we grab a lock. As a workaround for this we use get_cpu_light().
@ -13,7 +13,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -86,12 +86,12 @@ static inline struct blk_mq_ctx *__blk_m
@@ -72,12 +72,12 @@ static inline struct blk_mq_ctx *__blk_m
*/
static inline struct blk_mq_ctx *blk_mq_get_ctx(struct request_queue *q)
{


@ -1,7 +1,7 @@
Subject: block: Shorten interrupt disabled regions
From: Thomas Gleixner <tglx@linutronix.de>
Date: Wed, 22 Jun 2011 19:47:02 +0200
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Moving the blk_sched_flush_plug() call out of the interrupt/preempt
disabled region in the scheduler allows us to replace
@ -48,7 +48,7 @@ Link: http://lkml.kernel.org/r/20110622174919.025446432@linutronix.de
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -3171,7 +3171,7 @@ static void queue_unplugged(struct reque
@@ -3177,7 +3177,7 @@ static void queue_unplugged(struct reque
blk_run_queue_async(q);
else
__blk_run_queue(q);
@ -57,7 +57,7 @@ Link: http://lkml.kernel.org/r/20110622174919.025446432@linutronix.de
}
static void flush_plug_callbacks(struct blk_plug *plug, bool from_schedule)
@@ -3219,7 +3219,6 @@ EXPORT_SYMBOL(blk_check_plugged);
@@ -3225,7 +3225,6 @@ EXPORT_SYMBOL(blk_check_plugged);
void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
{
struct request_queue *q;
@ -65,7 +65,7 @@ Link: http://lkml.kernel.org/r/20110622174919.025446432@linutronix.de
struct request *rq;
LIST_HEAD(list);
unsigned int depth;
@@ -3239,11 +3238,6 @@ void blk_flush_plug_list(struct blk_plug
@@ -3245,11 +3244,6 @@ void blk_flush_plug_list(struct blk_plug
q = NULL;
depth = 0;
@ -77,7 +77,7 @@ Link: http://lkml.kernel.org/r/20110622174919.025446432@linutronix.de
while (!list_empty(&list)) {
rq = list_entry_rq(list.next);
list_del_init(&rq->queuelist);
@@ -3256,7 +3250,7 @@ void blk_flush_plug_list(struct blk_plug
@@ -3262,7 +3256,7 @@ void blk_flush_plug_list(struct blk_plug
queue_unplugged(q, depth, from_schedule);
q = rq->q;
depth = 0;
@ -86,7 +86,7 @@ Link: http://lkml.kernel.org/r/20110622174919.025446432@linutronix.de
}
/*
@@ -3283,8 +3277,6 @@ void blk_flush_plug_list(struct blk_plug
@@ -3289,8 +3283,6 @@ void blk_flush_plug_list(struct blk_plug
*/
if (q)
queue_unplugged(q, depth, from_schedule);


@ -1,7 +1,7 @@
Subject: block: Use cpu_chill() for retry loops
From: Thomas Gleixner <tglx@linutronix.de>
Date: Thu, 20 Dec 2012 18:28:26 +0100
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Retry loops on RT might loop forever when the modifying side was
preempted. Steven also observed a live lock when there was a

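The pattern, sketched with a hypothetical resource (cpu_chill() is the RT helper: a short sleep on RT, cpu_relax() otherwise):

    for (;;) {
        if (try_grab_resource())  /* hypothetical retry target */
            break;
        cpu_chill();  /* let the preempted holder run instead of spinning */
    }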

@ -0,0 +1,39 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Wed, 14 Dec 2016 14:44:18 +0100
Subject: [PATCH] btrfs: drop trace_btrfs_all_work_done() from
normal_work_helper()
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
For btrfs_scrubparity_helper() the ->func() is set to
scrub_parity_bio_endio_worker(). This function invokes
scrub_free_parity() which kfree()s the work object. All is good as
long as trace events are not enabled because we boom with a backtrace
like this:
| Workqueue: btrfs-endio btrfs_endio_helper
| RIP: 0010:[<ffffffff812f81ae>] [<ffffffff812f81ae>] trace_event_raw_event_btrfs__work__done+0x4e/0xa0
| Call Trace:
| [<ffffffff8136497d>] btrfs_scrubparity_helper+0x59d/0x780
| [<ffffffff81364c49>] btrfs_endio_helper+0x9/0x10
| [<ffffffff8108af8e>] process_one_work+0x26e/0x7b0
| [<ffffffff8108b516>] worker_thread+0x46/0x560
| [<ffffffff81091c4e>] kthread+0xee/0x110
| [<ffffffff818e166a>] ret_from_fork+0x2a/0x40
So in order to avoid this, I remove the trace point.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
fs/btrfs/async-thread.c | 2 --
1 file changed, 2 deletions(-)
--- a/fs/btrfs/async-thread.c
+++ b/fs/btrfs/async-thread.c
@@ -318,8 +318,6 @@ static void normal_work_helper(struct bt
set_bit(WORK_DONE_BIT, &work->flags);
run_ordered_work(wq);
}
- if (!need_order)
- trace_btrfs_all_work_done(work);
}
void btrfs_init_work(struct btrfs_work *work, btrfs_work_func_t uniq_func,


@ -0,0 +1,34 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Wed, 14 Dec 2016 12:28:52 +0100
Subject: [PATCH] btrfs: swap free() and trace point in run_ordered_work()
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
The previous patch removed a trace point due to a use after free problem
with tracing enabled. While looking at the backtrace it took me a while
to find the right spot. While doing so I noticed that this trace point
could be used with two clean-up functions in run_ordered_work():
- run_one_async_free()
- async_cow_free()
Both of them free the `work' item so a later use in the tracepoint is
not possible.
This patch swaps the order so we first have the trace point and then
free the struct.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
fs/btrfs/async-thread.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/fs/btrfs/async-thread.c
+++ b/fs/btrfs/async-thread.c
@@ -288,8 +288,8 @@ static void run_ordered_work(struct __bt
* we don't want to call the ordered free functions
* with the lock held though
*/
- work->ordered_free(work);
trace_btrfs_all_work_done(work);
+ work->ordered_free(work);
}
spin_unlock_irqrestore(lock, flags);
}


@ -1,7 +1,7 @@
From: Ingo Molnar <mingo@elte.hu>
Date: Fri, 3 Jul 2009 08:29:58 -0500
Subject: bug: BUG_ON/WARN_ON variants dependent on RT/!RT
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Introduce RT/NON-RT WARN/BUG statements to avoid ifdefs in the code.


@ -1,7 +1,7 @@
From: Mike Galbraith <umgwanakikbuti@gmail.com>
Date: Sat, 21 Jun 2014 10:09:48 +0200
Subject: memcontrol: Prevent scheduling while atomic in cgroup code
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
mm, memcg: make refill_stock() use get_cpu_light()
@ -43,7 +43,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1727,6 +1727,7 @@ struct memcg_stock_pcp {
@@ -1697,6 +1697,7 @@ struct memcg_stock_pcp {
#define FLUSHING_CACHED_CHARGE 0
};
static DEFINE_PER_CPU(struct memcg_stock_pcp, memcg_stock);
@ -51,7 +51,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
static DEFINE_MUTEX(percpu_charge_mutex);
/**
@@ -1749,7 +1750,7 @@ static bool consume_stock(struct mem_cgr
@@ -1719,7 +1720,7 @@ static bool consume_stock(struct mem_cgr
if (nr_pages > CHARGE_BATCH)
return ret;
@ -60,7 +60,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
stock = this_cpu_ptr(&memcg_stock);
if (memcg == stock->cached && stock->nr_pages >= nr_pages) {
@@ -1757,7 +1758,7 @@ static bool consume_stock(struct mem_cgr
@@ -1727,7 +1728,7 @@ static bool consume_stock(struct mem_cgr
ret = true;
}
@ -69,7 +69,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
return ret;
}
@@ -1784,13 +1785,13 @@ static void drain_local_stock(struct wor
@@ -1754,13 +1755,13 @@ static void drain_local_stock(struct wor
struct memcg_stock_pcp *stock;
unsigned long flags;
@ -85,7 +85,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
}
/*
@@ -1802,7 +1803,7 @@ static void refill_stock(struct mem_cgro
@@ -1772,7 +1773,7 @@ static void refill_stock(struct mem_cgro
struct memcg_stock_pcp *stock;
unsigned long flags;
@ -94,7 +94,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
stock = this_cpu_ptr(&memcg_stock);
if (stock->cached != memcg) { /* reset if necessary */
@@ -1811,7 +1812,7 @@ static void refill_stock(struct mem_cgro
@@ -1781,7 +1782,7 @@ static void refill_stock(struct mem_cgro
}
stock->nr_pages += nr_pages;

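Condensed from the hunks above, the conversion pattern (local-lock API from the RT patch set; the !RT build collapses it to plain irq disabling):

    /* a per-CPU lock: a real lock on RT, local_irq_save() on !RT */
    static DEFINE_LOCAL_IRQ_LOCK(memcg_stock_ll);

    static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
    {
        struct memcg_stock_pcp *stock;
        unsigned long flags;

        local_lock_irqsave(memcg_stock_ll, flags);      /* was: local_irq_save() */
        stock = this_cpu_ptr(&memcg_stock);
        if (stock->cached != memcg) { /* reset if necessary */
            drain_stock(stock);
            stock->cached = memcg;
        }
        stock->nr_pages += nr_pages;
        local_unlock_irqrestore(memcg_stock_ll, flags); /* was: local_irq_restore() */
    }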

@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Fri, 13 Feb 2015 15:52:24 +0100
Subject: cgroups: use simple wait in css_release()
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
To avoid:
|BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:914
@ -53,7 +53,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
/*
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -5027,10 +5027,10 @@ static void css_free_rcu_fn(struct rcu_h
@@ -5040,10 +5040,10 @@ static void css_free_rcu_fn(struct rcu_h
queue_work(cgroup_destroy_wq, &css->destroy_work);
}
@ -66,7 +66,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
struct cgroup_subsys *ss = css->ss;
struct cgroup *cgrp = css->cgroup;
@@ -5071,8 +5071,8 @@ static void css_release(struct percpu_re
@@ -5086,8 +5086,8 @@ static void css_release(struct percpu_re
struct cgroup_subsys_state *css =
container_of(ref, struct cgroup_subsys_state, refcnt);
@ -77,7 +77,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
}
static void init_and_link_css(struct cgroup_subsys_state *css,
@@ -5716,6 +5716,7 @@ static int __init cgroup_wq_init(void)
@@ -5742,6 +5742,7 @@ static int __init cgroup_wq_init(void)
*/
cgroup_destroy_wq = alloc_workqueue("cgroup_destroy", 0, 1);
BUG_ON(!cgroup_destroy_wq);


@ -1,7 +1,7 @@
From: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Date: Thu, 17 Mar 2016 21:09:43 +0100
Subject: [PATCH] clockevents/drivers/timer-atmel-pit: fix double free_irq
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
clockevents_exchange_device() changes the state from detached to shutdown
and so at that point the IRQ has not yet been requested.


@ -1,7 +1,7 @@
From: Benedikt Spranger <b.spranger@linutronix.de>
Date: Mon, 8 Mar 2010 18:57:04 +0100
Subject: clocksource: TCLIB: Allow higher clock rates for clock events
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
By default the TCLIB uses the 32KiHz base clock rate for clock events.
Add a compile-time selection to allow higher clock resolution.


@ -1,7 +1,7 @@
Subject: completion: Use simple wait queues
From: Thomas Gleixner <tglx@linutronix.de>
Date: Fri, 11 Jan 2013 11:23:51 +0100
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Completions have no long lasting callbacks and therefore do not need
the complex waitqueue variant. Use simple waitqueues which reduces the
@ -36,7 +36,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
break;
--- a/drivers/usb/gadget/function/f_fs.c
+++ b/drivers/usb/gadget/function/f_fs.c
@@ -1590,7 +1590,7 @@ static void ffs_data_put(struct ffs_data
@@ -1593,7 +1593,7 @@ static void ffs_data_put(struct ffs_data
pr_info("%s(): freeing\n", __func__);
ffs_data_clear(ffs);
BUG_ON(waitqueue_active(&ffs->ev.waitq) ||
@ -137,7 +137,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
struct mm_struct;
--- a/kernel/power/hibernate.c
+++ b/kernel/power/hibernate.c
@@ -681,6 +681,10 @@ static int load_image_and_restore(void)
@@ -683,6 +683,10 @@ static int load_image_and_restore(void)
return error;
}
@ -148,7 +148,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/**
* hibernate - Carry out system hibernation, including saving the image.
*/
@@ -694,6 +698,8 @@ int hibernate(void)
@@ -696,6 +700,8 @@ int hibernate(void)
return -EPERM;
}
@ -157,7 +157,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
lock_system_sleep();
/* The snapshot device should not be opened while we're running */
if (!atomic_add_unless(&snapshot_device_available, -1, 0)) {
@@ -771,6 +777,7 @@ int hibernate(void)
@@ -773,6 +779,7 @@ int hibernate(void)
atomic_inc(&snapshot_device_available);
Unlock:
unlock_system_sleep();
@ -167,7 +167,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/kernel/power/suspend.c
+++ b/kernel/power/suspend.c
@@ -523,6 +523,8 @@ static int enter_state(suspend_state_t s
@@ -531,6 +531,8 @@ static int enter_state(suspend_state_t s
return error;
}
@ -176,7 +176,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/**
* pm_suspend - Externally visible function for suspending the system.
* @state: System sleep state to enter.
@@ -537,6 +539,8 @@ int pm_suspend(suspend_state_t state)
@@ -545,6 +547,8 @@ int pm_suspend(suspend_state_t state)
if (state <= PM_SUSPEND_ON || state >= PM_SUSPEND_MAX)
return -EINVAL;
@ -185,7 +185,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
error = enter_state(state);
if (error) {
suspend_stats.fail++;
@@ -544,6 +548,7 @@ int pm_suspend(suspend_state_t state)
@@ -552,6 +556,7 @@ int pm_suspend(suspend_state_t state)
} else {
suspend_stats.success++;
}
@ -287,7 +287,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
EXPORT_SYMBOL(completion_done);
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3317,7 +3317,10 @@ void migrate_disable(void)
@@ -3323,7 +3323,10 @@ void migrate_disable(void)
}
#ifdef CONFIG_SCHED_DEBUG
@ -299,7 +299,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
#endif
if (p->migrate_disable) {
@@ -3344,7 +3347,10 @@ void migrate_enable(void)
@@ -3350,7 +3353,10 @@ void migrate_enable(void)
}
#ifdef CONFIG_SCHED_DEBUG

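The core of the conversion, sketched (simplified; the full patch also converts every waiter): struct completion carries a simple waitqueue whose lock is a raw spinlock and whose wakeups run no custom callbacks.

    struct completion {
        unsigned int done;
        struct swait_queue_head wait;   /* was: wait_queue_head_t */
    };

    void complete(struct completion *x)
    {
        unsigned long flags;

        raw_spin_lock_irqsave(&x->wait.lock, flags);
        x->done++;
        swake_up_locked(&x->wait);      /* was: __wake_up_locked(..., 1) */
        raw_spin_unlock_irqrestore(&x->wait.lock, flags);
    }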

@ -1,7 +1,7 @@
Subject: sched: Use the proper LOCK_OFFSET for cond_resched()
From: Thomas Gleixner <tglx@linutronix.de>
Date: Sun, 17 Jul 2011 22:51:33 +0200
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
RT does not increment preempt count when a 'sleeping' spinlock is
locked. Update PREEMPT_LOCK_OFFSET for that case.


@ -1,7 +1,7 @@
Subject: sched: Take RT softirq semantics into account in cond_resched()
From: Thomas Gleixner <tglx@linutronix.de>
Date: Thu, 14 Jul 2011 09:56:44 +0200
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
The softirq semantics work differently on -RT. There is no SOFTIRQ_MASK in
the preemption counter which leads to the BUG_ON() statement in
@ -16,7 +16,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -3258,12 +3258,16 @@ extern int __cond_resched_lock(spinlock_
@@ -3366,12 +3366,16 @@ extern int __cond_resched_lock(spinlock_
__cond_resched_lock(lock); \
})
@ -35,7 +35,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
{
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5021,6 +5021,7 @@ int __cond_resched_lock(spinlock_t *lock
@@ -5050,6 +5050,7 @@ int __cond_resched_lock(spinlock_t *lock
}
EXPORT_SYMBOL(__cond_resched_lock);
@ -43,7 +43,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
int __sched __cond_resched_softirq(void)
{
BUG_ON(!in_softirq());
@@ -5034,6 +5035,7 @@ int __sched __cond_resched_softirq(void)
@@ -5063,6 +5064,7 @@ int __sched __cond_resched_softirq(void)
return 0;
}
EXPORT_SYMBOL(__cond_resched_softirq);


@ -2,7 +2,7 @@ From: Mike Galbraith <umgwanakikbuti@gmail.com>
Date: Sun, 16 Oct 2016 05:11:54 +0200
Subject: [PATCH] connector/cn_proc: Protect send_msg() with a local lock
on RT
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
|BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:931
|in_atomic(): 1, irqs_disabled(): 0, pid: 31807, name: sleep


@ -1,7 +1,7 @@
From: Steven Rostedt <rostedt@goodmis.org>
Date: Thu, 5 Dec 2013 09:16:52 -0500
Subject: cpu hotplug: Document why PREEMPT_RT uses a spinlock
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
The patch:
@ -39,7 +39,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -187,6 +187,14 @@ struct hotplug_pcp {
@@ -260,6 +260,14 @@ struct hotplug_pcp {
int grab_lock;
struct completion synced;
#ifdef CONFIG_PREEMPT_RT_FULL


@ -1,7 +1,7 @@
Subject: cpu: Make hotplug.lock a "sleeping" spinlock on RT
From: Steven Rostedt <rostedt@goodmis.org>
Date: Fri, 02 Mar 2012 10:36:57 -0500
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Tasks can block on hotplug.lock in pin_current_cpu(), but their state
might be != RUNNING. So the mutex wakeup will set the state
@ -25,7 +25,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -137,10 +137,16 @@ static int cpu_hotplug_disabled;
@@ -210,10 +210,16 @@ static int cpu_hotplug_disabled;
static struct {
struct task_struct *active_writer;
@ -42,7 +42,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/*
* Also blocks the new readers during
* an ongoing cpu hotplug operation.
@@ -153,12 +159,24 @@ static struct {
@@ -226,12 +232,24 @@ static struct {
} cpu_hotplug = {
.active_writer = NULL,
.wq = __WAIT_QUEUE_HEAD_INITIALIZER(cpu_hotplug.wq),
@ -52,7 +52,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
.lock = __MUTEX_INITIALIZER(cpu_hotplug.lock),
+#endif
#ifdef CONFIG_DEBUG_LOCK_ALLOC
.dep_map = {.name = "cpu_hotplug.lock" },
.dep_map = STATIC_LOCKDEP_MAP_INIT("cpu_hotplug.dep_map", &cpu_hotplug.dep_map),
#endif
};
@ -67,7 +67,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/* Lockdep annotations for get/put_online_cpus() and cpu_hotplug_begin/end() */
#define cpuhp_lock_acquire_read() lock_map_acquire_read(&cpu_hotplug.dep_map)
#define cpuhp_lock_acquire_tryread() \
@@ -195,8 +213,8 @@ void pin_current_cpu(void)
@@ -268,8 +286,8 @@ void pin_current_cpu(void)
return;
}
preempt_enable();
@ -78,7 +78,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
preempt_disable();
goto retry;
}
@@ -269,9 +287,9 @@ void get_online_cpus(void)
@@ -342,9 +360,9 @@ void get_online_cpus(void)
if (cpu_hotplug.active_writer == current)
return;
cpuhp_lock_acquire_read();
@ -90,7 +90,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
}
EXPORT_SYMBOL_GPL(get_online_cpus);
@@ -324,11 +342,11 @@ void cpu_hotplug_begin(void)
@@ -397,11 +415,11 @@ void cpu_hotplug_begin(void)
cpuhp_lock_acquire();
for (;;) {
@ -104,7 +104,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
schedule();
}
finish_wait(&cpu_hotplug.wq, &wait);
@@ -337,7 +355,7 @@ void cpu_hotplug_begin(void)
@@ -410,7 +428,7 @@ void cpu_hotplug_begin(void)
void cpu_hotplug_done(void)
{
cpu_hotplug.active_writer = NULL;

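Condensed, the mechanism these hunks implement (macro names as in the full patch): one lock, two implementations, chosen at build time.

    #ifdef CONFIG_PREEMPT_RT_FULL
    # define hotplug_lock()   spin_lock(&cpu_hotplug.lock)   /* RT: sleeping spinlock */
    # define hotplug_unlock() spin_unlock(&cpu_hotplug.lock)
    #else
    # define hotplug_lock()   mutex_lock(&cpu_hotplug.lock)
    # define hotplug_unlock() mutex_unlock(&cpu_hotplug.lock)
    #endif

A task blocked on the RT variant keeps its sleeping-lock state handling, so a wakeup does not force it to TASK_RUNNING the way the mutex slowpath did.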

@ -1,7 +1,7 @@
From: Steven Rostedt <srostedt@redhat.com>
Date: Mon, 16 Jul 2012 08:07:43 +0000
Subject: cpu/rt: Rework cpu down for PREEMPT_RT
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Bringing a CPU down is a pain with the PREEMPT_RT kernel because
tasks can be preempted in many more places than in non-RT. In
@ -51,13 +51,13 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
include/linux/sched.h | 7 +
kernel/cpu.c | 238 +++++++++++++++++++++++++++++++++++++++++---------
kernel/cpu.c | 236 +++++++++++++++++++++++++++++++++++++++++---------
kernel/sched/core.c | 78 ++++++++++++++++
3 files changed, 281 insertions(+), 42 deletions(-)
3 files changed, 280 insertions(+), 41 deletions(-)
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2429,6 +2429,10 @@ extern void do_set_cpus_allowed(struct t
@@ -2473,6 +2473,10 @@ extern void do_set_cpus_allowed(struct t
extern int set_cpus_allowed_ptr(struct task_struct *p,
const struct cpumask *new_mask);
@ -68,7 +68,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
#else
static inline void do_set_cpus_allowed(struct task_struct *p,
const struct cpumask *new_mask)
@@ -2441,6 +2445,9 @@ static inline int set_cpus_allowed_ptr(s
@@ -2485,6 +2489,9 @@ static inline int set_cpus_allowed_ptr(s
return -EINVAL;
return 0;
}
@ -80,7 +80,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
#ifdef CONFIG_NO_HZ_COMMON
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -137,16 +137,10 @@ static int cpu_hotplug_disabled;
@@ -210,16 +210,10 @@ static int cpu_hotplug_disabled;
static struct {
struct task_struct *active_writer;
@ -97,19 +97,17 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/*
* Also blocks the new readers during
* an ongoing cpu hotplug operation.
@@ -158,25 +152,13 @@ static struct {
#endif
@@ -232,24 +226,12 @@ static struct {
} cpu_hotplug = {
.active_writer = NULL,
- .wq = __WAIT_QUEUE_HEAD_INITIALIZER(cpu_hotplug.wq),
.wq = __WAIT_QUEUE_HEAD_INITIALIZER(cpu_hotplug.wq),
-#ifdef CONFIG_PREEMPT_RT_FULL
- .lock = __SPIN_LOCK_UNLOCKED(cpu_hotplug.lock),
-#else
.lock = __MUTEX_INITIALIZER(cpu_hotplug.lock),
-#endif
+ .wq = __WAIT_QUEUE_HEAD_INITIALIZER(cpu_hotplug.wq),
#ifdef CONFIG_DEBUG_LOCK_ALLOC
.dep_map = {.name = "cpu_hotplug.lock" },
.dep_map = STATIC_LOCKDEP_MAP_INIT("cpu_hotplug.dep_map", &cpu_hotplug.dep_map),
#endif
};
@ -124,7 +122,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/* Lockdep annotations for get/put_online_cpus() and cpu_hotplug_begin/end() */
#define cpuhp_lock_acquire_read() lock_map_acquire_read(&cpu_hotplug.dep_map)
#define cpuhp_lock_acquire_tryread() \
@@ -184,12 +166,42 @@ static struct {
@@ -257,12 +239,42 @@ static struct {
#define cpuhp_lock_acquire() lock_map_acquire(&cpu_hotplug.dep_map)
#define cpuhp_lock_release() lock_map_release(&cpu_hotplug.dep_map)
@ -167,7 +165,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
static DEFINE_PER_CPU(struct hotplug_pcp, hotplug_pcp);
/**
@@ -203,18 +215,39 @@ static DEFINE_PER_CPU(struct hotplug_pcp
@@ -276,18 +288,39 @@ static DEFINE_PER_CPU(struct hotplug_pcp
void pin_current_cpu(void)
{
struct hotplug_pcp *hp;
@ -211,7 +209,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
preempt_disable();
goto retry;
}
@@ -235,26 +268,84 @@ void unpin_current_cpu(void)
@@ -308,26 +341,84 @@ void unpin_current_cpu(void)
wake_up_process(hp->unplug);
}
@ -303,7 +301,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/*
* Start the sync_unplug_thread on the target cpu and wait for it to
* complete.
@@ -262,23 +353,83 @@ static int sync_unplug_thread(void *data
@@ -335,23 +426,83 @@ static int sync_unplug_thread(void *data
static int cpu_unplug_begin(unsigned int cpu)
{
struct hotplug_pcp *hp = &per_cpu(hotplug_pcp, cpu);
@ -394,7 +392,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
}
void get_online_cpus(void)
@@ -287,9 +438,9 @@ void get_online_cpus(void)
@@ -360,9 +511,9 @@ void get_online_cpus(void)
if (cpu_hotplug.active_writer == current)
return;
cpuhp_lock_acquire_read();
@ -406,7 +404,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
}
EXPORT_SYMBOL_GPL(get_online_cpus);
@@ -342,11 +493,11 @@ void cpu_hotplug_begin(void)
@@ -415,11 +566,11 @@ void cpu_hotplug_begin(void)
cpuhp_lock_acquire();
for (;;) {
@ -420,7 +418,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
schedule();
}
finish_wait(&cpu_hotplug.wq, &wait);
@@ -355,7 +506,7 @@ void cpu_hotplug_begin(void)
@@ -428,7 +579,7 @@ void cpu_hotplug_begin(void)
void cpu_hotplug_done(void)
{
cpu_hotplug.active_writer = NULL;
@ -429,7 +427,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
cpuhp_lock_release();
}
@@ -828,6 +979,9 @@ static int takedown_cpu(unsigned int cpu
@@ -907,6 +1058,9 @@ static int takedown_cpu(unsigned int cpu
kthread_park(per_cpu_ptr(&cpuhp_state, cpu)->thread);
smpboot_park_threads(cpu);
@ -441,8 +439,8 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* interrupt affinities.
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1129,6 +1129,84 @@ void do_set_cpus_allowed(struct task_str
enqueue_task(rq, p, ENQUEUE_RESTORE);
@@ -1140,6 +1140,84 @@ void do_set_cpus_allowed(struct task_str
set_curr_task(rq, p);
}
+static DEFINE_PER_CPU(struct cpumask, sched_cpumasks);


@ -1,7 +1,7 @@
From: Steven Rostedt <rostedt@goodmis.org>
Date: Tue, 4 Mar 2014 12:28:32 -0500
Subject: cpu_chill: Add a UNINTERRUPTIBLE hrtimer_nanosleep
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
We hit another bug that was caused by switching cpu_chill() from
msleep() to hrtimer_nanosleep().


@ -1,7 +1,7 @@
From: Tiejun Chen <tiejun.chen@windriver.com>
Subject: cpu_down: move migrate_enable() back
Date: Thu, 7 Nov 2013 10:06:07 +0800
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Commit 08c1ab68, "hotplug-use-migrate-disable.patch", intends to
use migrate_enable()/migrate_disable() to replace that combination
@ -35,7 +35,7 @@ Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -1117,6 +1117,7 @@ static int __ref _cpu_down(unsigned int
@@ -1195,6 +1195,7 @@ static int __ref _cpu_down(unsigned int
goto restore_cpus;
}
@ -43,7 +43,7 @@ Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
cpu_hotplug_begin();
ret = cpu_unplug_begin(cpu);
if (ret) {
@@ -1164,7 +1165,6 @@ static int __ref _cpu_down(unsigned int
@@ -1242,7 +1243,6 @@ static int __ref _cpu_down(unsigned int
cpu_unplug_done(cpu);
out_cancel:
cpu_hotplug_done();


@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Thu, 9 Apr 2015 15:23:01 +0200
Subject: cpufreq: drop K8's driver from being selected
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Ralf posted a picture of a backtrace from


@ -1,7 +1,7 @@
Subject: cpumask: Disable CONFIG_CPUMASK_OFFSTACK for RT
From: Thomas Gleixner <tglx@linutronix.de>
Date: Wed, 14 Dec 2011 01:03:49 +0100
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
There are "valid" GFP_ATOMIC allocations such as
@ -47,7 +47,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -888,7 +888,7 @@ config IOMMU_HELPER
@@ -900,7 +900,7 @@ config IOMMU_HELPER
config MAXSMP
bool "Enable Maximum number of SMP Processors and NUMA Nodes"
depends on X86_64 && SMP && DEBUG_KERNEL


@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Fri, 21 Feb 2014 17:24:04 +0100
Subject: crypto: Reduce preempt disabled regions, more algos
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Don Estabrook reported
| kernel: WARNING: CPU: 2 PID: 858 at kernel/sched/core.c:2428 migrate_disable+0xed/0x100()
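Only the report's first line survives above, but the subject names the recurring fix pattern. As a generic sketch (my own illustration, not the actual glue-code hunk — fpu_begin()/fpu_end() merely stand in for kernel_fpu_begin()/kernel_fpu_end()), the non-preemptible window shrinks from the whole walk to one chunk:

/* Generic illustration of shrinking a preempt-disabled region
 * (invented stand-ins, not the real crypto glue code). */
#include <stdio.h>

static void fpu_begin(void) { /* ~ kernel_fpu_begin(): preemption off */ }
static void fpu_end(void)   { /* ~ kernel_fpu_end(): preemption on */ }

static void process_chunk(int i) { printf("chunk %d\n", i); }

int main(void)
{
    int i;

    /* Before: one long non-preemptible region across all chunks. */
    fpu_begin();
    for (i = 0; i < 4; i++)
        process_chunk(i);
    fpu_end();

    /* After: begin/end per chunk, bounding the preempt-off latency. */
    for (i = 0; i < 4; i++) {
        fpu_begin();
        process_chunk(i);
        fpu_end();
    }
    return 0;
}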


@ -1,23 +0,0 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Wed, 11 May 2016 11:56:18 +0200
Subject: crypto/ccp: remove rwlocks_types.h
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Users of rwlocks should include spinlock.h instead of including this
header file. The current users of rwlocks_types.h are internal.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
drivers/crypto/ccp/ccp-dev.c | 1 -
1 file changed, 1 deletion(-)
--- a/drivers/crypto/ccp/ccp-dev.c
+++ b/drivers/crypto/ccp/ccp-dev.c
@@ -16,7 +16,6 @@
#include <linux/sched.h>
#include <linux/interrupt.h>
#include <linux/spinlock.h>
-#include <linux/rwlock_types.h>
#include <linux/types.h>
#include <linux/mutex.h>
#include <linux/delay.h>


@ -1,7 +1,7 @@
Subject: debugobjects: Make RT aware
From: Thomas Gleixner <tglx@linutronix.de>
Date: Sun, 17 Jul 2011 21:41:35 +0200
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Avoid filling the pool / allocating memory with irqs off.


@ -1,7 +1,7 @@
Subject: dm: Make rt aware
From: Thomas Gleixner <tglx@linutronix.de>
Date: Mon, 14 Nov 2011 23:06:09 +0100
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Use the BUG_ON_NORT variant for the irq_disabled() checks. RT has
interrupts legitimately enabled here as we can't deadlock against the
@ -16,10 +16,10 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/drivers/md/dm-rq.c
+++ b/drivers/md/dm-rq.c
@@ -811,7 +811,7 @@ static void dm_old_request_fn(struct req
@@ -832,7 +832,7 @@ static void dm_old_request_fn(struct req
/* Establish tio->ti before queuing work (map_tio_request) */
tio->ti = ti;
queue_kthread_work(&md->kworker, &tio->work);
kthread_queue_work(&md->kworker, &tio->work);
- BUG_ON(!irqs_disabled());
+ BUG_ON_NONRT(!irqs_disabled());
}


@ -2,7 +2,7 @@ From: Mike Galbraith <umgwanakikbuti@gmail.com>
Date: Thu, 31 Mar 2016 04:08:28 +0200
Subject: [PATCH] drivers/block/zram: Replace bit spinlocks with rtmutex
for -rt
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
They're nondeterministic, and lead to ___might_sleep() splats in -rt.
OTOH, they're a lot less wasteful than an rtmutex per page.


@ -1,7 +1,7 @@
From: Ingo Molnar <mingo@elte.hu>
Date: Fri, 3 Jul 2009 08:29:24 -0500
Subject: drivers/net: Use disable_irq_nosync() in 8139too
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Use disable_irq_nosync() instead of disable_irq() as this might be
called in atomic context with netpoll.
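A userspace analogue of the distinction (invented names, not driver code): the synchronous stop waits for the running handler, which must never be attempted from the handler's own context — exactly the situation netpoll creates for disable_irq().

/* Analogue: stop_sync() ~ disable_irq() (flag and wait),
 * stop_nosync() ~ disable_irq_nosync() (flag only). */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static atomic_bool stop_requested;
static pthread_t handler_thread;

static void *handler(void *arg)
{
    (void)arg;
    while (!atomic_load(&stop_requested))
        usleep(1000);                   /* "handling interrupts" */
    /* Calling stop_sync() from here would self-join and fail -
     * the analogue of disable_irq() sleeping in atomic context. */
    return NULL;
}

static void stop_nosync(void)           /* safe from any context */
{
    atomic_store(&stop_requested, true);
}

static void stop_sync(void)             /* only safe outside the handler */
{
    atomic_store(&stop_requested, true);
    pthread_join(handler_thread, NULL); /* wait for the handler to finish */
}

int main(void)
{
    pthread_create(&handler_thread, NULL, handler, NULL);
    stop_nosync();                      /* what the netpoll path must do */
    stop_sync();                        /* fine from process context */
    puts("handler stopped");
    return 0;
}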


@ -1,127 +0,0 @@
From: Thomas Gleixner <tglx@linutronix.de>
Date: Sat, 20 Jun 2009 11:36:54 +0200
Subject: drivers/net: fix livelock issues
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Preempt-RT runs into a live lock issue with the NETDEV_TX_LOCKED micro
optimization. The reason is that the softirq thread is rescheduling
itself on that return value. Depending on priorities it starts to
monopolize the CPU and livelock on UP systems.
Remove it.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
drivers/net/ethernet/atheros/atl1c/atl1c_main.c | 6 +-----
drivers/net/ethernet/atheros/atl1e/atl1e_main.c | 3 +--
drivers/net/ethernet/chelsio/cxgb/sge.c | 3 +--
drivers/net/ethernet/neterion/s2io.c | 7 +------
drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c | 6 ++----
drivers/net/ethernet/tehuti/tehuti.c | 9 ++-------
drivers/net/rionet.c | 6 +-----
7 files changed, 9 insertions(+), 31 deletions(-)
--- a/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
+++ b/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
@@ -2217,11 +2217,7 @@ static netdev_tx_t atl1c_xmit_frame(stru
}
tpd_req = atl1c_cal_tpd_req(skb);
- if (!spin_trylock_irqsave(&adapter->tx_lock, flags)) {
- if (netif_msg_pktdata(adapter))
- dev_info(&adapter->pdev->dev, "tx locked\n");
- return NETDEV_TX_LOCKED;
- }
+ spin_lock_irqsave(&adapter->tx_lock, flags);
if (atl1c_tpd_avail(adapter, type) < tpd_req) {
/* no enough descriptor, just stop queue */
--- a/drivers/net/ethernet/atheros/atl1e/atl1e_main.c
+++ b/drivers/net/ethernet/atheros/atl1e/atl1e_main.c
@@ -1880,8 +1880,7 @@ static netdev_tx_t atl1e_xmit_frame(stru
return NETDEV_TX_OK;
}
tpd_req = atl1e_cal_tdp_req(skb);
- if (!spin_trylock_irqsave(&adapter->tx_lock, flags))
- return NETDEV_TX_LOCKED;
+ spin_lock_irqsave(&adapter->tx_lock, flags);
if (atl1e_tpd_avail(adapter) < tpd_req) {
/* no enough descriptor, just stop queue */
--- a/drivers/net/ethernet/chelsio/cxgb/sge.c
+++ b/drivers/net/ethernet/chelsio/cxgb/sge.c
@@ -1664,8 +1664,7 @@ static int t1_sge_tx(struct sk_buff *skb
struct cmdQ *q = &sge->cmdQ[qid];
unsigned int credits, pidx, genbit, count, use_sched_skb = 0;
- if (!spin_trylock(&q->lock))
- return NETDEV_TX_LOCKED;
+ spin_lock(&q->lock);
reclaim_completed_tx(sge, q);
--- a/drivers/net/ethernet/neterion/s2io.c
+++ b/drivers/net/ethernet/neterion/s2io.c
@@ -4084,12 +4084,7 @@ static netdev_tx_t s2io_xmit(struct sk_b
[skb->priority & (MAX_TX_FIFOS - 1)];
fifo = &mac_control->fifos[queue];
- if (do_spin_lock)
- spin_lock_irqsave(&fifo->tx_lock, flags);
- else {
- if (unlikely(!spin_trylock_irqsave(&fifo->tx_lock, flags)))
- return NETDEV_TX_LOCKED;
- }
+ spin_lock_irqsave(&fifo->tx_lock, flags);
if (sp->config.multiq) {
if (__netif_subqueue_stopped(dev, fifo->fifo_no)) {
--- a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
+++ b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
@@ -2137,10 +2137,8 @@ static int pch_gbe_xmit_frame(struct sk_
struct pch_gbe_tx_ring *tx_ring = adapter->tx_ring;
unsigned long flags;
- if (!spin_trylock_irqsave(&tx_ring->tx_lock, flags)) {
- /* Collision - tell upper layer to requeue */
- return NETDEV_TX_LOCKED;
- }
+ spin_lock_irqsave(&tx_ring->tx_lock, flags);
+
if (unlikely(!PCH_GBE_DESC_UNUSED(tx_ring))) {
netif_stop_queue(netdev);
spin_unlock_irqrestore(&tx_ring->tx_lock, flags);
--- a/drivers/net/ethernet/tehuti/tehuti.c
+++ b/drivers/net/ethernet/tehuti/tehuti.c
@@ -1629,13 +1629,8 @@ static netdev_tx_t bdx_tx_transmit(struc
unsigned long flags;
ENTER;
- local_irq_save(flags);
- if (!spin_trylock(&priv->tx_lock)) {
- local_irq_restore(flags);
- DBG("%s[%s]: TX locked, returning NETDEV_TX_LOCKED\n",
- BDX_DRV_NAME, ndev->name);
- return NETDEV_TX_LOCKED;
- }
+
+ spin_lock_irqsave(&priv->tx_lock, flags);
/* build tx descriptor */
BDX_ASSERT(f->m.wptr >= f->m.memsz); /* started with valid wptr */
--- a/drivers/net/rionet.c
+++ b/drivers/net/rionet.c
@@ -179,11 +179,7 @@ static int rionet_start_xmit(struct sk_b
unsigned long flags;
int add_num = 1;
- local_irq_save(flags);
- if (!spin_trylock(&rnet->tx_lock)) {
- local_irq_restore(flags);
- return NETDEV_TX_LOCKED;
- }
+ spin_lock_irqsave(&rnet->tx_lock, flags);
if (is_multicast_ether_addr(eth->h_dest))
add_num = nets[rnet->mport->id].nact;


@ -1,7 +1,7 @@
From: Steven Rostedt <rostedt@goodmis.org>
Date: Fri, 3 Jul 2009 08:30:00 -0500
Subject: drivers/net: vortex fix locking issues
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Argh, cut and paste wasn't enough...


@ -1,7 +1,7 @@
From: Ingo Molnar <mingo@elte.hu>
Date: Fri, 3 Jul 2009 08:29:30 -0500
Subject: drivers: random: Reduce preempt disabled region
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
No need to keep preemption disabled across the whole function.


@ -1,7 +1,7 @@
Subject: tty/serial/omap: Make the locking RT aware
From: Thomas Gleixner <tglx@linutronix.de>
Date: Thu, 28 Jul 2011 13:32:57 +0200
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
The lock is a sleeping lock and local_irq_save() is not the
optimisation we are looking for. Redo it to make it work on -RT and


@ -1,7 +1,7 @@
Subject: tty/serial/pl011: Make the locking work on RT
From: Thomas Gleixner <tglx@linutronix.de>
Date: Tue, 08 Jan 2013 21:36:51 +0100
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
The lock is a sleeping lock and local_irq_save() is not the optimisation
we are looking for. Redo it to make it work on -RT and non-RT.
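The resulting shape is the familiar console-write pattern, rendered here as a runnable userspace toy (an assumption-laden simplification of the pl011_console_write() hunks below):

/* Toy model of the console-write locking pattern (not the driver). */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t port_lock = PTHREAD_MUTEX_INITIALIZER;
static bool oops_in_progress;           /* set when the kernel is crashing */

static void console_write(const char *s)
{
    bool locked = true;

    if (oops_in_progress)               /* crashing: never block */
        locked = pthread_mutex_trylock(&port_lock) == 0;
    else
        pthread_mutex_lock(&port_lock); /* normal path: may sleep on RT */

    fputs(s, stdout);                   /* emit even if unlocked in an oops */

    if (locked)
        pthread_mutex_unlock(&port_lock);
}

int main(void)
{
    console_write("hello\n");
    oops_in_progress = true;
    console_write("panic output\n");
    return 0;
}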
@ -13,7 +13,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/drivers/tty/serial/amba-pl011.c
+++ b/drivers/tty/serial/amba-pl011.c
@@ -2167,13 +2167,19 @@ pl011_console_write(struct console *co,
@@ -2194,13 +2194,19 @@ pl011_console_write(struct console *co,
clk_enable(uap->clk);
@ -36,7 +36,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/*
* First save the CR then disable the interrupts
@@ -2197,8 +2203,7 @@ pl011_console_write(struct console *co,
@@ -2224,8 +2230,7 @@ pl011_console_write(struct console *co,
pl011_write(old_cr, uap, REG_CR);
if (locked)


@ -2,7 +2,7 @@ From: Mike Galbraith <umgwanakikbuti@gmail.com>
Date: Thu, 20 Oct 2016 11:15:22 +0200
Subject: [PATCH] drivers/zram: Don't disable preemption in
zcomp_stream_get/put()
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
In v4.7, the driver switched to percpu compression streams, disabling
preemption via get/put_cpu_ptr(). Use a per-zcomp_strm lock here. We
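A minimal userspace model of the change (my own analogue; the real code lives in drivers/block/zram/zcomp.c): each stream carries its own sleeping lock, so callers no longer rely on preemption being disabled.

/* Toy: per-stream lock instead of get_cpu_ptr()'s preempt_disable(). */
#include <pthread.h>
#include <stdio.h>

#define NR_STREAMS 4                    /* stands in for one stream per CPU */

struct stream {
    pthread_mutex_t lock;               /* the added per-zcomp_strm lock */
    char workspace[64];                 /* stand-in for compression scratch */
};

static struct stream streams[NR_STREAMS];

static struct stream *stream_get(int cpu)   /* cf. zcomp_stream_get() */
{
    struct stream *s = &streams[cpu % NR_STREAMS];

    pthread_mutex_lock(&s->lock);       /* sleeping lock: fine on RT */
    return s;
}

static void stream_put(struct stream *s)    /* cf. zcomp_stream_put() */
{
    pthread_mutex_unlock(&s->lock);
}

int main(void)
{
    for (int i = 0; i < NR_STREAMS; i++)
        pthread_mutex_init(&streams[i].lock, NULL);

    struct stream *s = stream_get(0);
    snprintf(s->workspace, sizeof(s->workspace), "compress here");
    puts(s->workspace);
    stream_put(s);
    return 0;
}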


@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Thu, 25 Apr 2013 18:12:52 +0200
Subject: drm/i915: drop trace_i915_gem_ring_dispatch on rt
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
This tracepoint is responsible for:
@ -47,7 +47,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -1302,7 +1302,9 @@ i915_gem_ringbuffer_submission(struct i9
@@ -1537,7 +1537,9 @@ execbuf_submit(struct i915_execbuffer_pa
if (ret)
return ret;


@ -1,7 +1,7 @@
Subject: drm,i915: Use local_lock/unlock_irq() in intel_pipe_update_start/end()
From: Mike Galbraith <umgwanakikbuti@gmail.com>
Date: Sat, 27 Feb 2016 09:01:42 +0100
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
[ 8.014039] BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:918
@ -62,15 +62,15 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/drivers/gpu/drm/i915/intel_sprite.c
+++ b/drivers/gpu/drm/i915/intel_sprite.c
@@ -38,6 +38,7 @@
#include "intel_drv.h"
#include <drm/i915_drm.h>
#include "i915_drv.h"
@@ -35,6 +35,7 @@
#include <drm/drm_rect.h>
#include <drm/drm_atomic.h>
#include <drm/drm_plane_helper.h>
+#include <linux/locallock.h>
static bool
format_is_yuv(uint32_t format)
@@ -64,6 +65,8 @@ int intel_usecs_to_scanlines(const struc
#include "intel_drv.h"
#include "intel_frontbuffer.h"
#include <drm/i915_drm.h>
@@ -65,6 +66,8 @@ int intel_usecs_to_scanlines(const struc
1000 * adjusted_mode->crtc_htotal);
}
@ -79,7 +79,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/**
* intel_pipe_update_start() - start update of a set of display registers
* @crtc: the crtc of which the registers are going to be updated
@@ -94,7 +97,7 @@ void intel_pipe_update_start(struct inte
@@ -95,7 +98,7 @@ void intel_pipe_update_start(struct inte
min = vblank_start - intel_usecs_to_scanlines(adjusted_mode, 100);
max = vblank_start - 1;
@ -88,7 +88,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
if (min <= 0 || max <= 0)
return;
@@ -124,11 +127,11 @@ void intel_pipe_update_start(struct inte
@@ -125,11 +128,11 @@ void intel_pipe_update_start(struct inte
break;
}
@ -102,7 +102,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
}
finish_wait(wq, &wait);
@@ -180,7 +183,7 @@ void intel_pipe_update_end(struct intel_
@@ -181,7 +184,7 @@ void intel_pipe_update_end(struct intel_
crtc->base.state->event = NULL;
}


@ -1,7 +1,7 @@
Subject: drm,radeon,i915: Use preempt_disable/enable_rt() where recommended
From: Mike Galbraith <umgwanakikbuti@gmail.com>
Date: Sat, 27 Feb 2016 08:09:11 +0100
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
DRM folks identified the spots, so use them.
@ -34,7 +34,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/drivers/gpu/drm/radeon/radeon_display.c
+++ b/drivers/gpu/drm/radeon/radeon_display.c
@@ -1869,6 +1869,7 @@ int radeon_get_crtc_scanoutpos(struct dr
@@ -1845,6 +1845,7 @@ int radeon_get_crtc_scanoutpos(struct dr
struct radeon_device *rdev = dev->dev_private;
/* preempt_disable_rt() should go right here in PREEMPT_RT patchset. */
@ -42,7 +42,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/* Get optional system timestamp before query. */
if (stime)
@@ -1961,6 +1962,7 @@ int radeon_get_crtc_scanoutpos(struct dr
@@ -1937,6 +1938,7 @@ int radeon_get_crtc_scanoutpos(struct dr
*etime = ktime_get();
/* preempt_enable_rt() should go right here in PREEMPT_RT patchset. */


@ -1,78 +0,0 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Sun, 16 Aug 2015 14:27:50 +0200
Subject: dump stack: don't disable preemption during trace
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
I see here large latencies during a stack dump on x86. The
preempt_disable() and get_cpu() should forbid moving the task to another
CPU during a stack dump and avoid two stack traces in parallel on the
same CPU. However a stack trace from a second CPU may still happen in
parallel. Also nesting is allowed so a stack trace happens in
process-context and we may have another one from IRQ context. With migrate
disable we keep this code preemptible and allow a second backtrace on
the same CPU by another task.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
arch/x86/kernel/dumpstack_32.c | 4 ++--
arch/x86/kernel/dumpstack_64.c | 8 ++++----
2 files changed, 6 insertions(+), 6 deletions(-)
--- a/arch/x86/kernel/dumpstack_32.c
+++ b/arch/x86/kernel/dumpstack_32.c
@@ -42,7 +42,7 @@ void dump_trace(struct task_struct *task
unsigned long *stack, unsigned long bp,
const struct stacktrace_ops *ops, void *data)
{
- const unsigned cpu = get_cpu();
+ const unsigned cpu = get_cpu_light();
int graph = 0;
u32 *prev_esp;
@@ -84,7 +84,7 @@ void dump_trace(struct task_struct *task
break;
touch_nmi_watchdog();
}
- put_cpu();
+ put_cpu_light();
}
EXPORT_SYMBOL(dump_trace);
--- a/arch/x86/kernel/dumpstack_64.c
+++ b/arch/x86/kernel/dumpstack_64.c
@@ -152,7 +152,7 @@ void dump_trace(struct task_struct *task
unsigned long *stack, unsigned long bp,
const struct stacktrace_ops *ops, void *data)
{
- const unsigned cpu = get_cpu();
+ const unsigned cpu = get_cpu_light();
unsigned long *irq_stack = (unsigned long *)per_cpu(irq_stack_ptr, cpu);
unsigned long dummy;
unsigned used = 0;
@@ -239,7 +239,7 @@ void dump_trace(struct task_struct *task
* This handles the process stack:
*/
bp = ops->walk_stack(task, stack, bp, ops, data, NULL, &graph);
- put_cpu();
+ put_cpu_light();
}
EXPORT_SYMBOL(dump_trace);
@@ -253,7 +253,7 @@ show_stack_log_lvl(struct task_struct *t
int cpu;
int i;
- preempt_disable();
+ migrate_disable();
cpu = smp_processor_id();
irq_stack_end = (unsigned long *)(per_cpu(irq_stack_ptr, cpu));
@@ -299,7 +299,7 @@ show_stack_log_lvl(struct task_struct *t
stack++;
touch_nmi_watchdog();
}
- preempt_enable();
+ migrate_enable();
pr_cont("\n");
show_trace_log_lvl(task, regs, sp, bp, log_lvl);


@ -1,7 +1,7 @@
Subject: fs/epoll: Do not disable preemption on RT
From: Thomas Gleixner <tglx@linutronix.de>
Date: Fri, 08 Jul 2011 16:35:35 +0200
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
ep_call_nested() takes a sleeping lock so we can't disable preemption.
The light version is enough since ep_call_nested() doesn't mind being


@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Mon, 16 Feb 2015 18:49:10 +0100
Subject: fs/aio: simple simple work
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
|BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:768
|in_atomic(): 1, irqs_disabled(): 0, pid: 26, name: rcuos/2
@ -55,7 +55,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
aio_mnt = kern_mount(&aio_fs);
if (IS_ERR(aio_mnt))
panic("Failed to create aio fs mount.");
@@ -578,9 +580,9 @@ static int kiocb_cancel(struct aio_kiocb
@@ -581,9 +583,9 @@ static int kiocb_cancel(struct aio_kiocb
return cancel(&kiocb->common);
}
@ -67,7 +67,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
pr_debug("freeing %p\n", ctx);
@@ -599,8 +601,8 @@ static void free_ioctx_reqs(struct percp
@@ -602,8 +604,8 @@ static void free_ioctx_reqs(struct percp
if (ctx->rq_wait && atomic_dec_and_test(&ctx->rq_wait->count))
complete(&ctx->rq_wait->comp);
@ -78,7 +78,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
}
/*
@@ -608,9 +610,9 @@ static void free_ioctx_reqs(struct percp
@@ -611,9 +613,9 @@ static void free_ioctx_reqs(struct percp
* and ctx->users has dropped to 0, so we know no more kiocbs can be submitted -
* now it's safe to cancel any that need to be.
*/
@ -90,7 +90,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
struct aio_kiocb *req;
spin_lock_irq(&ctx->ctx_lock);
@@ -629,6 +631,14 @@ static void free_ioctx_users(struct perc
@@ -632,6 +634,14 @@ static void free_ioctx_users(struct perc
percpu_ref_put(&ctx->reqs);
}


@ -1,7 +1,7 @@
Subject: block: Turn off warning which is bogus on RT
From: Thomas Gleixner <tglx@linutronix.de>
Date: Tue, 14 Jun 2011 17:05:09 +0200
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
On -RT the context is always with IRQs enabled. Ignore this warning on -RT.


@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Wed, 14 Sep 2016 11:55:23 +0200
Subject: fs/dcache: include wait.h
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Since commit d9171b934526 ("parallel lookups machinery, part 4 (and
last)") dcache.h uses wait.h but does not include it. It works as long


@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Wed, 14 Sep 2016 17:57:03 +0200
Subject: [PATCH] fs/dcache: init in_lookup_hashtable
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
in_lookup_hashtable was introduced in commit 94bdd655caba ("parallel
lookups machinery, part 3") and never initialized but since it is in


@ -1,7 +1,7 @@
Subject: fs: dcache: Use cpu_chill() in trylock loops
From: Thomas Gleixner <tglx@linutronix.de>
Date: Wed, 07 Mar 2012 21:00:34 +0100
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Retry loops on RT might loop forever when the modifying side was
preempted. Use cpu_chill() instead of cpu_relax() to let the system
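Sketched in userspace (illustrative only — in the patch, cpu_chill() boils down to a short hrtimer sleep, while cpu_relax() is a busy-wait hint), the loop shape is:

/* Analogue of the trylock retry loop; names mirror the kernel's. */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

static pthread_mutex_t d_lock = PTHREAD_MUTEX_INITIALIZER;

static void cpu_chill(void)     /* sleep ~1ms so a preempted holder can run */
{
    struct timespec ts = { .tv_sec = 0, .tv_nsec = 1000 * 1000 };
    nanosleep(&ts, NULL);
}

static void take_with_chill(void)
{
    /* Busy-spinning here (cpu_relax()) can loop forever on RT when the
     * lock holder was preempted by this very spinner; sleeping briefly
     * lets the holder finish and release the lock. */
    while (pthread_mutex_trylock(&d_lock) != 0)
        cpu_chill();
    /* ... critical section ... */
    pthread_mutex_unlock(&d_lock);
}

int main(void)
{
    take_with_chill();
    puts("acquired and released");
    return 0;
}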
@ -18,7 +18,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/fs/autofs4/autofs_i.h
+++ b/fs/autofs4/autofs_i.h
@@ -30,6 +30,7 @@
@@ -31,6 +31,7 @@
#include <linux/sched.h>
#include <linux/mount.h>
#include <linux/namei.h>
@ -97,7 +97,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
#include <linux/security.h>
#include <linux/idr.h>
#include <linux/init.h> /* init_rootfs */
@@ -355,7 +356,7 @@ int __mnt_want_write(struct vfsmount *m)
@@ -358,7 +359,7 @@ int __mnt_want_write(struct vfsmount *m)
smp_mb();
while (ACCESS_ONCE(mnt->mnt.mnt_flags) & MNT_WRITE_HOLD) {
preempt_enable();


@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Wed, 14 Sep 2016 14:35:49 +0200
Subject: [PATCH] fs/dcache: use swait_queue instead of waitqueue
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
__d_lookup_done() invokes wake_up_all() while holding a hlist_bl_lock()
which disables preemption. As a workaround convert it to swait.
@ -81,7 +81,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
INIT_HLIST_NODE(&dentry->d_u.d_alias);
--- a/fs/fuse/dir.c
+++ b/fs/fuse/dir.c
@@ -1174,7 +1174,7 @@ static int fuse_direntplus_link(struct f
@@ -1191,7 +1191,7 @@ static int fuse_direntplus_link(struct f
struct inode *dir = d_inode(parent);
struct fuse_conn *fc;
struct inode *inode;
@ -121,7 +121,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
struct dentry *dentry;
struct dentry *alias;
struct inode *dir = d_inode(parent);
@@ -1490,7 +1490,7 @@ int nfs_atomic_open(struct inode *dir, s
@@ -1498,7 +1498,7 @@ int nfs_atomic_open(struct inode *dir, s
struct file *file, unsigned open_flags,
umode_t mode, int *opened)
{
@ -152,7 +152,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
spin_lock(&dentry->d_lock);
--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -1819,7 +1819,7 @@ bool proc_fill_cache(struct file *file,
@@ -1834,7 +1834,7 @@ bool proc_fill_cache(struct file *file,
child = d_hash_and_lookup(dir, &qname);
if (!child) {
@ -163,7 +163,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
goto end_instantiate;
--- a/fs/proc/proc_sysctl.c
+++ b/fs/proc/proc_sysctl.c
@@ -627,7 +627,7 @@ static bool proc_sys_fill_cache(struct f
@@ -632,7 +632,7 @@ static bool proc_sys_fill_cache(struct f
child = d_lookup(dir, &qname);
if (!child) {
@ -194,7 +194,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
extern struct dentry * d_exact_alias(struct dentry *, struct inode *);
--- a/include/linux/nfs_xdr.h
+++ b/include/linux/nfs_xdr.h
@@ -1484,7 +1484,7 @@ struct nfs_unlinkdata {
@@ -1490,7 +1490,7 @@ struct nfs_unlinkdata {
struct nfs_removeargs args;
struct nfs_removeres res;
struct dentry *dentry;


@ -1,7 +1,7 @@
From: Thomas Gleixner <tglx@linutronix.de>
Date: Fri, 18 Mar 2011 10:11:25 +0100
Subject: fs: jbd/jbd2: Make state lock and journal head lock rt safe
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
bit_spin_locks break under RT.


@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Mon, 17 Feb 2014 17:30:03 +0100
Subject: fs: jbd2: pull your plug when waiting for space
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Two parallel cp processes managed to stall the ext4 fs. It seems that
journal code is either waiting for locks or sleeping waiting for


@ -1,7 +1,7 @@
From: Thomas Gleixner <tglx@linutronix.de>
Date: Sun, 19 Jul 2009 08:44:27 -0500
Subject: fs: namespace preemption fix
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
On RT we cannot loop with preemption disabled here as
mnt_make_readonly() might have been preempted. We can safely enable
@ -16,7 +16,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -353,8 +353,11 @@ int __mnt_want_write(struct vfsmount *m)
@@ -356,8 +356,11 @@ int __mnt_want_write(struct vfsmount *m)
* incremented count after it has set MNT_WRITE_HOLD.
*/
smp_mb();


@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Thu, 15 Sep 2016 10:51:27 +0200
Subject: [PATCH] fs/nfs: turn rmdir_sem into a semaphore
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
The RW semaphore had a reader side which used the _non_owner version
because it most likely took the reader lock in one thread and released it
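The ownership point is easy to demonstrate in plain C (an analogue, not kernel code): a counting semaphore has no owner, so taking it in one thread and releasing it in another is well-defined — the very pattern the rwsem needed the *_non_owner API for.

/* Analogue: cross-thread release is legal for a semaphore. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t sem;

static void *releaser(void *arg)
{
    (void)arg;
    sem_post(&sem);                     /* legal: semaphores carry no owner */
    return NULL;
}

int main(void)
{
    pthread_t t;

    sem_init(&sem, 0, 1);
    sem_wait(&sem);                     /* "locked" by the main thread */
    pthread_create(&t, NULL, releaser, NULL);
    pthread_join(t, NULL);              /* released by the other thread */
    /* Unlocking a pthread_mutex_t this way would be undefined, just as
     * an owner-tracking rwsem needs *_non_owner variants in the kernel. */
    puts("cross-thread release worked");
    sem_destroy(&sem);
    return 0;
}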
@ -22,7 +22,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/fs/nfs/dir.c
+++ b/fs/nfs/dir.c
@@ -1805,7 +1805,11 @@ int nfs_rmdir(struct inode *dir, struct
@@ -1813,7 +1813,11 @@ int nfs_rmdir(struct inode *dir, struct
trace_nfs_rmdir_enter(dir, dentry);
if (d_really_is_positive(dentry)) {
@ -34,7 +34,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
error = NFS_PROTO(dir)->rmdir(dir, &dentry->d_name);
/* Ensure the VFS deletes this inode */
switch (error) {
@@ -1815,7 +1819,11 @@ int nfs_rmdir(struct inode *dir, struct
@@ -1823,7 +1827,11 @@ int nfs_rmdir(struct inode *dir, struct
case -ENOENT:
nfs_dentry_handle_enoent(dentry);
}


@ -1,7 +1,7 @@
From: Mike Galbraith <efault@gmx.de>
Date: Fri, 3 Jul 2009 08:44:12 -0500
Subject: fs: ntfs: disable interrupt only on !RT
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
On Sat, 2007-10-27 at 11:44 +0200, Ingo Molnar wrote:
> * Nick Piggin <nickpiggin@yahoo.com.au> wrote:


@ -1,7 +1,7 @@
From: Thomas Gleixner <tglx@linutronix.de>
Date: Fri, 18 Mar 2011 09:18:52 +0100
Subject: buffer_head: Replace bh_uptodate_lock for -rt
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Wrap the bit_spin_lock calls into a separate inline and add the RT
replacements with a real spinlock.
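In miniature, the wrapping pattern looks like this (a compilable toy with invented names; the real inlines are the bh_uptodate_lock_irqsave() style helpers this patch adds to include/linux/buffer_head.h):

/* Toy: one inline, two bodies chosen at build time (invented names). */
#include <pthread.h>
#include <stdio.h>

#define TOY_RT 1                        /* 0: bit-spinlock, 1: real lock */

struct toy_bh {
    unsigned long state;                /* one bit doubles as the lock */
    pthread_mutex_t uptodate_lock;      /* RT replacement: a real lock */
};

static inline void toy_uptodate_lock(struct toy_bh *bh)
{
#if !TOY_RT
    /* Bit-spinlock: spin until the bit is ours. Needs preemption off,
     * which is exactly what breaks on RT. */
    while (__atomic_exchange_n(&bh->state, 1, __ATOMIC_ACQUIRE))
        ;
#else
    pthread_mutex_lock(&bh->uptodate_lock);   /* sleeping lock: RT-safe */
#endif
}

static inline void toy_uptodate_unlock(struct toy_bh *bh)
{
#if !TOY_RT
    __atomic_store_n(&bh->state, 0, __ATOMIC_RELEASE);
#else
    pthread_mutex_unlock(&bh->uptodate_lock);
#endif
}

int main(void)
{
    struct toy_bh bh = { .uptodate_lock = PTHREAD_MUTEX_INITIALIZER };

    toy_uptodate_lock(&bh);             /* callers never see which variant */
    toy_uptodate_unlock(&bh);
    puts("ok");
    return 0;
}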
@ -74,7 +74,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
}
EXPORT_SYMBOL(end_buffer_async_write);
@@ -3384,6 +3376,7 @@ struct buffer_head *alloc_buffer_head(gf
@@ -3383,6 +3375,7 @@ struct buffer_head *alloc_buffer_head(gf
struct buffer_head *ret = kmem_cache_zalloc(bh_cachep, gfp_flags);
if (ret) {
INIT_LIST_HEAD(&ret->b_assoc_buffers);


@ -1,7 +1,7 @@
From: Mike Galbraith <umgwanakikbuti@gmail.com>
Date: Sun, 16 Oct 2016 05:08:30 +0200
Subject: [PATCH] ftrace: Fix trace header alignment
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Line up helper arrows to the right column.


@ -1,7 +1,7 @@
From: Thomas Gleixner <tglx@linutronix.de>
Date: Sun, 17 Jul 2011 21:56:42 +0200
Subject: trace: Add migrate-disabled counter to tracing output
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---


@ -1,7 +1,7 @@
From: Thomas Gleixner <tglx@linutronix.de>
Date: Fri, 1 Mar 2013 11:17:42 +0100
Subject: futex: Ensure lock/unlock symetry versus pi_lock and hash bucket lock
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
In exit_pi_state_list() we have the following locking construct:
@ -31,7 +31,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -895,7 +895,9 @@ void exit_pi_state_list(struct task_stru
@@ -904,7 +904,9 @@ void exit_pi_state_list(struct task_stru
* task still owns the PI-state:
*/
if (head->next != next) {


@ -1,7 +1,7 @@
From: Steven Rostedt <rostedt@goodmis.org>
Date: Tue, 14 Jul 2015 14:26:34 +0200
Subject: futex: Fix bug on when a requeued RT task times out
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Requeue with timeout causes a bug with PREEMPT_RT_FULL.
@ -104,7 +104,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
RT_MUTEX_FULL_CHAINWALK);
--- a/kernel/locking/rtmutex_common.h
+++ b/kernel/locking/rtmutex_common.h
@@ -98,6 +98,7 @@ enum rtmutex_chainwalk {
@@ -99,6 +99,7 @@ enum rtmutex_chainwalk {
* PI-futex support (proxy locking functions, etc.):
*/
#define PI_WAKEUP_INPROGRESS ((struct rt_mutex_waiter *) 1)


@ -1,7 +1,7 @@
From: Ingo Molnar <mingo@elte.hu>
Date: Fri, 3 Jul 2009 08:29:57 -0500
Subject: genirq: Disable irqpoll on -rt
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Creates long latencies for no value


@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Wed, 21 Aug 2013 17:48:46 +0200
Subject: genirq: Do not invoke the affinity callback via a workqueue on RT
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Joe Korty reported that __irq_set_affinity_locked() schedules a
workqueue while holding a rawlock which results in a might_sleep()


@ -1,7 +1,7 @@
Subject: genirq: Force interrupt thread on RT
From: Thomas Gleixner <tglx@linutronix.de>
Date: Sun, 03 Apr 2011 11:57:29 +0200
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Force threaded_irqs and optimize the code (force_irqthreads) in regard
to this.
@ -14,7 +14,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -398,9 +398,13 @@ extern int irq_set_irqchip_state(unsigne
@@ -406,9 +406,13 @@ extern int irq_set_irqchip_state(unsigne
bool state);
#ifdef CONFIG_IRQ_FORCED_THREADING


@ -1,7 +1,7 @@
From: Josh Cartwright <joshc@ni.com>
Date: Thu, 11 Feb 2016 11:54:00 -0600
Subject: genirq: update irq_set_irqchip_state documentation
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
On -rt kernels, the use of migrate_disable()/migrate_enable() is
sufficient to guarantee a task isn't moved to another CPU. Update the


@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Tue, 14 Jul 2015 14:26:34 +0200
Subject: gpu: don't check for the lock owner.
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---


@ -1,7 +1,7 @@
From: Mike Galbraith <umgwanakikbuti@gmail.com>
Date: Tue, 24 Mar 2015 08:14:49 +0100
Subject: hotplug: Use set_cpus_allowed_ptr() in sync_unplug_thread()
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
do_set_cpus_allowed() is not safe vs ->sched_class change.
@ -36,7 +36,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -345,7 +345,7 @@ static int sync_unplug_thread(void *data
@@ -418,7 +418,7 @@ static int sync_unplug_thread(void *data
* we don't want any more work on this CPU.
*/
current->flags &= ~PF_NO_SETAFFINITY;


@ -1,7 +1,7 @@
Subject: hotplug: Lightweight get online cpus
From: Thomas Gleixner <tglx@linutronix.de>
Date: Wed, 15 Jun 2011 12:36:06 +0200
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
get_online_cpus() is a heavyweight function which involves a global
mutex. migrate_disable() wants a simpler construct which prevents only
@ -19,7 +19,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -192,9 +192,6 @@ static inline void cpu_notifier_register
@@ -180,9 +180,6 @@ static inline void cpu_notifier_register
#endif /* CONFIG_SMP */
extern struct bus_type cpu_subsys;
@ -29,7 +29,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
#ifdef CONFIG_HOTPLUG_CPU
/* Stop CPUs going up and down. */
@@ -204,6 +201,8 @@ extern void get_online_cpus(void);
@@ -192,6 +189,8 @@ extern void get_online_cpus(void);
extern void put_online_cpus(void);
extern void cpu_hotplug_disable(void);
extern void cpu_hotplug_enable(void);
@ -38,7 +38,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
#define hotcpu_notifier(fn, pri) cpu_notifier(fn, pri)
#define __hotcpu_notifier(fn, pri) __cpu_notifier(fn, pri)
#define register_hotcpu_notifier(nb) register_cpu_notifier(nb)
@@ -221,6 +220,8 @@ static inline void cpu_hotplug_done(void
@@ -209,6 +208,8 @@ static inline void cpu_hotplug_done(void
#define put_online_cpus() do { } while (0)
#define cpu_hotplug_disable() do { } while (0)
#define cpu_hotplug_enable() do { } while (0)
@ -49,7 +49,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/* These aren't inline functions due to a GCC bug. */
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -166,6 +166,100 @@ static struct {
@@ -239,6 +239,100 @@ static struct {
#define cpuhp_lock_acquire() lock_map_acquire(&cpu_hotplug.dep_map)
#define cpuhp_lock_release() lock_map_release(&cpu_hotplug.dep_map)
@ -150,7 +150,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
void get_online_cpus(void)
{
@@ -799,6 +893,8 @@ static int __ref _cpu_down(unsigned int
@@ -877,6 +971,8 @@ static int __ref _cpu_down(unsigned int
struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
int prev_state, ret = 0;
bool hasdied = false;
@ -159,7 +159,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
if (num_online_cpus() == 1)
return -EBUSY;
@@ -806,7 +902,27 @@ static int __ref _cpu_down(unsigned int
@@ -884,7 +980,27 @@ static int __ref _cpu_down(unsigned int
if (!cpu_present(cpu))
return -EINVAL;
@ -187,7 +187,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
cpuhp_tasks_frozen = tasks_frozen;
@@ -845,6 +961,8 @@ static int __ref _cpu_down(unsigned int
@@ -923,6 +1039,8 @@ static int __ref _cpu_down(unsigned int
hasdied = prev_state != st->state && st->state == CPUHP_OFFLINE;
out:


@ -1,7 +1,7 @@
Subject: hotplug: sync_unplug: No "\n" in task name
From: Yong Zhang <yong.zhang0@gmail.com>
Date: Sun, 16 Oct 2011 18:56:43 +0800
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Otherwise the output will look a little odd.
@ -14,7 +14,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -245,7 +245,7 @@ static int cpu_unplug_begin(unsigned int
@@ -318,7 +318,7 @@ static int cpu_unplug_begin(unsigned int
struct task_struct *tsk;
init_completion(&hp->synced);


@ -1,7 +1,7 @@
Subject: hotplug: Use migrate disable on unplug
From: Thomas Gleixner <tglx@linutronix.de>
Date: Sun, 17 Jul 2011 19:35:29 +0200
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Migration needs to be disabled across the unplug handling to make
sure that the unplug thread is off the unplugged cpu.
@ -13,7 +13,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -910,14 +910,13 @@ static int __ref _cpu_down(unsigned int
@@ -988,14 +988,13 @@ static int __ref _cpu_down(unsigned int
cpumask_andnot(cpumask, cpu_online_mask, cpumask_of(cpu));
set_cpus_allowed_ptr(current, cpumask);
free_cpumask_var(cpumask);
@ -30,7 +30,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
cpu_hotplug_begin();
ret = cpu_unplug_begin(cpu);
@@ -966,6 +965,7 @@ static int __ref _cpu_down(unsigned int
@@ -1044,6 +1043,7 @@ static int __ref _cpu_down(unsigned int
cpu_unplug_done(cpu);
out_cancel:
cpu_hotplug_done();


@ -1,7 +1,7 @@
From: Yang Shi <yang.shi@windriver.com>
Date: Mon, 16 Sep 2013 14:09:19 -0700
Subject: hrtimer: Move schedule_work call to helper thread
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
When running the ltp leapsec_timer test, the following call trace is caught:


@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Wed, 23 Dec 2015 20:57:41 +0100
Subject: hrtimer: enforce 64byte alignment
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
The patch "hrtimer: Fixup hrtimer callback changes for preempt-rt" adds
a list_head expired to struct hrtimer_clock_base and with it we run into
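As a generic illustration of the direction the title suggests (plain C11, not the kernel hunk): once a structure grows, aligning it to 64 bytes keeps neighbouring instances from sharing a cache line.

/* Illustration only: 64-byte alignment for a grown structure. */
#include <stdalign.h>
#include <stdio.h>

struct base {                           /* stand-in for hrtimer_clock_base */
    void *first_timer;
    long long offset;
    void *expired_head;                 /* the newly added list members */
    void *expired_tail;
};

struct aligned_base {
    alignas(64) struct base b;          /* one cache line per instance */
};

int main(void)
{
    printf("sizeof=%zu alignof=%zu\n",
           sizeof(struct aligned_base), alignof(struct aligned_base));
    return 0;
}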


@ -1,7 +1,7 @@
From: Thomas Gleixner <tglx@linutronix.de>
Date: Fri, 3 Jul 2009 08:44:31 -0500
Subject: hrtimer: Fixup hrtimer callback changes for preempt-rt
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
In preempt-rt we can not call the callbacks which take sleeping locks
from the timer interrupt context.
@ -321,7 +321,7 @@ Signed-off-by: Ingo Molnar <mingo@elte.hu>
/**
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -1195,6 +1195,7 @@ void tick_setup_sched_timer(void)
@@ -1198,6 +1198,7 @@ void tick_setup_sched_timer(void)
* Emulate tick processing via per-CPU hrtimers:
*/
hrtimer_init(&ts->sched_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);


@ -1,7 +1,7 @@
From: Ingo Molnar <mingo@elte.hu>
Date: Fri, 3 Jul 2009 08:29:34 -0500
Subject: hrtimers: Prepare full preemption
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/older/patches-4.9-rt1.tar.xz
Make cancellation of a running callback in softirq context safe
against preemption.


@ -1,26 +0,0 @@
From: Mike Galbraith <bitbucket@online.de>
Date: Fri, 30 Aug 2013 07:57:25 +0200
Subject: hwlat-detector: Don't ignore threshold module parameter
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
If the user specified a threshold at module load time, use it.
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Mike Galbraith <bitbucket@online.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
drivers/misc/hwlat_detector.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/misc/hwlat_detector.c
+++ b/drivers/misc/hwlat_detector.c
@@ -414,7 +414,7 @@ static int init_stats(void)
goto out;
__reset_stats();
- data.threshold = DEFAULT_LAT_THRESHOLD; /* threshold us */
+ data.threshold = threshold ?: DEFAULT_LAT_THRESHOLD; /* threshold us */
data.sample_window = DEFAULT_SAMPLE_WINDOW; /* window us */
data.sample_width = DEFAULT_SAMPLE_WIDTH; /* width us */


@ -1,126 +0,0 @@
From: Steven Rostedt <rostedt@goodmis.org>
Date: Mon, 19 Aug 2013 17:33:25 -0400
Subject: hwlat-detector: Update hwlat_detector to add outer loop detection
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
The hwlat_detector reads two timestamps in a row, then reports any
gap between those calls. The problem is, it misses everything between
the second reading of the time stamp and the first reading of the time stamp
in the next loop. That's where most of the time is spent, which means
chances are it will miss all hardware latencies. This
defeats the purpose.
By also testing the first time stamp against the second time stamp of the
previous loop (the outer loop), we are more likely to find a latency.
Setting the threshold to 1, here's what the report now looks like:
1347415723.0232202770 0 2
1347415725.0234202822 0 2
1347415727.0236202875 0 2
1347415729.0238202928 0 2
1347415731.0240202980 0 2
1347415734.0243203061 0 2
1347415736.0245203113 0 2
1347415738.0247203166 2 0
1347415740.0249203219 0 3
1347415742.0251203272 0 3
1347415743.0252203299 0 3
1347415745.0254203351 0 2
1347415747.0256203404 0 2
1347415749.0258203457 0 2
1347415751.0260203510 0 2
1347415754.0263203589 0 2
1347415756.0265203642 0 2
1347415758.0267203695 0 2
1347415760.0269203748 0 2
1347415762.0271203801 0 2
1347415764.0273203853 2 0
There's some hardware latency that takes 2 microseconds to run.
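The inner/outer sampling logic replays easily in userspace (a sketch using clock_gettime(); the driver itself samples ktime/TSC under data.lock):

/* Sketch of the inner/outer gap detection described above. */
#include <stdio.h>
#include <time.h>

static long long now_us(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000000LL + ts.tv_nsec / 1000;
}

int main(void)
{
    long long t1, t2, last_t2 = 0, inner_max = 0, outer_max = 0;
    long long start = now_us();

    do {
        t1 = now_us();
        t2 = now_us();
        if (last_t2) {
            /* Outer loop: previous t2 to this t1 - the window the
             * original code never checked. */
            long long outer = t1 - last_t2;
            if (outer > outer_max)
                outer_max = outer;
        }
        last_t2 = t2;
        if (t2 - t1 > inner_max)        /* inner loop: t1 to t2 */
            inner_max = t2 - t1;
    } while (t2 - start < 100000);      /* ~100ms sample width */

    printf("inner max %lldus, outer max %lldus\n", inner_max, outer_max);
    return 0;
}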
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
drivers/misc/hwlat_detector.c | 32 ++++++++++++++++++++++++++------
1 file changed, 26 insertions(+), 6 deletions(-)
--- a/drivers/misc/hwlat_detector.c
+++ b/drivers/misc/hwlat_detector.c
@@ -143,6 +143,7 @@ static void detector_exit(void);
struct sample {
u64 seqnum; /* unique sequence */
u64 duration; /* ktime delta */
+ u64 outer_duration; /* ktime delta (outer loop) */
struct timespec timestamp; /* wall time */
unsigned long lost;
};
@@ -219,11 +220,13 @@ static struct sample *buffer_get_sample(
*/
static int get_sample(void *unused)
{
- ktime_t start, t1, t2;
+ ktime_t start, t1, t2, last_t2;
s64 diff, total = 0;
u64 sample = 0;
+ u64 outer_sample = 0;
int ret = 1;
+ last_t2.tv64 = 0;
start = ktime_get(); /* start timestamp */
do {
@@ -231,7 +234,22 @@ static int get_sample(void *unused)
t1 = ktime_get(); /* we'll look for a discontinuity */
t2 = ktime_get();
+ if (last_t2.tv64) {
+ /* Check the delta from outer loop (t2 to next t1) */
+ diff = ktime_to_us(ktime_sub(t1, last_t2));
+ /* This shouldn't happen */
+ if (diff < 0) {
+ pr_err(BANNER "time running backwards\n");
+ goto out;
+ }
+ if (diff > outer_sample)
+ outer_sample = diff;
+ }
+ last_t2 = t2;
+
total = ktime_to_us(ktime_sub(t2, start)); /* sample width */
+
+ /* This checks the inner loop (t1 to t2) */
diff = ktime_to_us(ktime_sub(t2, t1)); /* current diff */
/* This shouldn't happen */
@@ -246,12 +264,13 @@ static int get_sample(void *unused)
} while (total <= data.sample_width);
/* If we exceed the threshold value, we have found a hardware latency */
- if (sample > data.threshold) {
+ if (sample > data.threshold || outer_sample > data.threshold) {
struct sample s;
data.count++;
s.seqnum = data.count;
s.duration = sample;
+ s.outer_duration = outer_sample;
s.timestamp = CURRENT_TIME;
__buffer_add_sample(&s);
@@ -738,10 +757,11 @@ static ssize_t debug_sample_fread(struct
}
}
- len = snprintf(buf, sizeof(buf), "%010lu.%010lu\t%llu\n",
- sample->timestamp.tv_sec,
- sample->timestamp.tv_nsec,
- sample->duration);
+ len = snprintf(buf, sizeof(buf), "%010lu.%010lu\t%llu\t%llu\n",
+ sample->timestamp.tv_sec,
+ sample->timestamp.tv_nsec,
+ sample->duration,
+ sample->outer_duration);
/* handling partial reads is more trouble than it's worth */


@ -1,184 +0,0 @@
From: Steven Rostedt <rostedt@goodmis.org>
Date: Mon, 19 Aug 2013 17:33:27 -0400
Subject: hwlat-detector: Use thread instead of stop machine
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
There's no reason to use stop machine to search for hardware latency.
Simply disabling interrupts while running the loop will do enough to
check if something comes in that wasn't disabled by interrupts being
off, which is exactly what stop machine does.
Instead of using stop machine, just have the thread disable interrupts
while it checks for hardware latency.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
drivers/misc/hwlat_detector.c | 60 ++++++++++++++++++------------------------
1 file changed, 26 insertions(+), 34 deletions(-)
--- a/drivers/misc/hwlat_detector.c
+++ b/drivers/misc/hwlat_detector.c
@@ -41,7 +41,6 @@
#include <linux/module.h>
#include <linux/init.h>
#include <linux/ring_buffer.h>
-#include <linux/stop_machine.h>
#include <linux/time.h>
#include <linux/hrtimer.h>
#include <linux/kthread.h>
@@ -107,7 +106,6 @@ struct data; /* Global state */
/* Sampling functions */
static int __buffer_add_sample(struct sample *sample);
static struct sample *buffer_get_sample(struct sample *sample);
-static int get_sample(void *unused);
/* Threading and state */
static int kthread_fn(void *unused);
@@ -149,7 +147,7 @@ struct sample {
unsigned long lost;
};
-/* keep the global state somewhere. Mostly used under stop_machine. */
+/* keep the global state somewhere. */
static struct data {
struct mutex lock; /* protect changes */
@@ -172,7 +170,7 @@ static struct data {
* @sample: The new latency sample value
*
* This receives a new latency sample and records it in a global ring buffer.
- * No additional locking is used in this case - suited for stop_machine use.
+ * No additional locking is used in this case.
*/
static int __buffer_add_sample(struct sample *sample)
{
@@ -229,18 +227,18 @@ static struct sample *buffer_get_sample(
#endif
/**
* get_sample - sample the CPU TSC and look for likely hardware latencies
- * @unused: This is not used but is a part of the stop_machine API
*
* Used to repeatedly capture the CPU TSC (or similar), looking for potential
- * hardware-induced latency. Called under stop_machine, with data.lock held.
+ * hardware-induced latency. Called with interrupts disabled and with
+ * data.lock held.
*/
-static int get_sample(void *unused)
+static int get_sample(void)
{
time_type start, t1, t2, last_t2;
s64 diff, total = 0;
u64 sample = 0;
u64 outer_sample = 0;
- int ret = 1;
+ int ret = -1;
init_time(last_t2, 0);
start = time_get(); /* start timestamp */
@@ -279,10 +277,14 @@ static int get_sample(void *unused)
} while (total <= data.sample_width);
+ ret = 0;
+
/* If we exceed the threshold value, we have found a hardware latency */
if (sample > data.threshold || outer_sample > data.threshold) {
struct sample s;
+ ret = 1;
+
data.count++;
s.seqnum = data.count;
s.duration = sample;
@@ -295,7 +297,6 @@ static int get_sample(void *unused)
data.max_sample = sample;
}
- ret = 0;
out:
return ret;
}
@@ -305,32 +306,30 @@ static int get_sample(void *unused)
* @unused: A required part of the kthread API.
*
* Used to periodically sample the CPU TSC via a call to get_sample. We
- * use stop_machine, whith does (intentionally) introduce latency since we
+ * disable interrupts, which does (intentionally) introduce latency since we
* need to ensure nothing else might be running (and thus pre-empting).
* Obviously this should never be used in production environments.
*
- * stop_machine will schedule us typically only on CPU0 which is fine for
- * almost every real-world hardware latency situation - but we might later
- * generalize this if we find there are any actualy systems with alternate
- * SMI delivery or other non CPU0 hardware latencies.
+ * Currently this runs on which ever CPU it was scheduled on, but most
+ * real-worald hardware latency situations occur across several CPUs,
+ * but we might later generalize this if we find there are any actualy
+ * systems with alternate SMI delivery or other hardware latencies.
*/
static int kthread_fn(void *unused)
{
- int err = 0;
- u64 interval = 0;
+ int ret;
+ u64 interval;
while (!kthread_should_stop()) {
mutex_lock(&data.lock);
- err = stop_machine(get_sample, unused, 0);
- if (err) {
- /* Houston, we have a problem */
- mutex_unlock(&data.lock);
- goto err_out;
- }
+ local_irq_disable();
+ ret = get_sample();
+ local_irq_enable();
- wake_up(&data.wq); /* wake up reader(s) */
+ if (ret > 0)
+ wake_up(&data.wq); /* wake up reader(s) */
interval = data.sample_window - data.sample_width;
do_div(interval, USEC_PER_MSEC); /* modifies interval value */
@@ -338,15 +337,10 @@ static int kthread_fn(void *unused)
mutex_unlock(&data.lock);
if (msleep_interruptible(interval))
- goto out;
+ break;
}
- goto out;
-err_out:
- pr_err(BANNER "could not call stop_machine, disabling\n");
- enabled = 0;
-out:
- return err;
+ return 0;
}
/**
@@ -442,8 +436,7 @@ static int init_stats(void)
* This function provides a generic read implementation for the global state
* "data" structure debugfs filesystem entries. It would be nice to use
* simple_attr_read directly, but we need to make sure that the data.lock
- * spinlock is held during the actual read (even though we likely won't ever
- * actually race here as the updater runs under a stop_machine context).
+ * is held during the actual read.
*/
static ssize_t simple_data_read(struct file *filp, char __user *ubuf,
size_t cnt, loff_t *ppos, const u64 *entry)
@@ -478,8 +471,7 @@ static ssize_t simple_data_read(struct f
* This function provides a generic write implementation for the global state
* "data" structure debugfs filesystem entries. It would be nice to use
* simple_attr_write directly, but we need to make sure that the data.lock
- * spinlock is held during the actual write (even though we likely won't ever
- * actually race here as the updater runs under a stop_machine context).
+ * is held during the actual write.
*/
static ssize_t simple_data_write(struct file *filp, const char __user *ubuf,
size_t cnt, loff_t *ppos, u64 *entry)
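
Read as a whole rather than as hunks, the patched kthread_fn() is short. The sketch below is a condensed reconstruction assembled from the diff above; it relies on the driver's data structure, get_sample() and wait queue, so it is illustrative rather than compilable on its own.

/* Condensed sketch of the patched kthread_fn() control flow:
 * sample with interrupts off, wake readers only when a latency
 * was found, then sleep away the rest of the sample window. */
static int kthread_fn(void *unused)
{
	u64 interval;
	int ret;

	while (!kthread_should_stop()) {
		mutex_lock(&data.lock);

		local_irq_disable();
		ret = get_sample();	/* 1: latency found, 0: none, <0: error */
		local_irq_enable();

		if (ret > 0)
			wake_up(&data.wq);	/* wake up reader(s) */

		interval = data.sample_window - data.sample_width;
		do_div(interval, USEC_PER_MSEC);	/* usecs -> msecs */

		mutex_unlock(&data.lock);

		if (msleep_interruptible(interval))
			break;
	}
	return 0;
}

The design choice is visible here: disabling local interrupts keeps the sampling window quiet on one CPU without freezing every other CPU the way stop_machine() did, which is what makes the detector tolerable on a PREEMPT_RT kernel.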

View File

@@ -1,93 +0,0 @@
From: Steven Rostedt <rostedt@goodmis.org>
Date: Mon, 19 Aug 2013 17:33:26 -0400
Subject: hwlat-detector: Use trace_clock_local if available
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8.14-rt9.tar.xz
As ktime_get() calls into the timing code which does a read_seq(), it
may be affected by other CPUS that touch that lock. To remove this
dependency, use the trace_clock_local() which is already exported
for module use. If CONFIG_TRACING is enabled, use that as the clock,
otherwise use ktime_get().
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
drivers/misc/hwlat_detector.c | 34 +++++++++++++++++++++++++---------
1 file changed, 25 insertions(+), 9 deletions(-)
--- a/drivers/misc/hwlat_detector.c
+++ b/drivers/misc/hwlat_detector.c
@@ -51,6 +51,7 @@
#include <linux/version.h>
#include <linux/delay.h>
#include <linux/slab.h>
+#include <linux/trace_clock.h>
#define BUF_SIZE_DEFAULT 262144UL /* 8K*(sizeof(entry)) */
#define BUF_FLAGS (RB_FL_OVERWRITE) /* no block on full */
@@ -211,6 +212,21 @@ static struct sample *buffer_get_sample(
return sample;
}
+#ifndef CONFIG_TRACING
+#define time_type ktime_t
+#define time_get() ktime_get()
+#define time_to_us(x) ktime_to_us(x)
+#define time_sub(a, b) ktime_sub(a, b)
+#define init_time(a, b) (a).tv64 = b
+#define time_u64(a) ((a).tv64)
+#else
+#define time_type u64
+#define time_get() trace_clock_local()
+#define time_to_us(x) div_u64(x, 1000)
+#define time_sub(a, b) ((a) - (b))
+#define init_time(a, b) (a = b)
+#define time_u64(a) a
+#endif
/**
* get_sample - sample the CPU TSC and look for likely hardware latencies
* @unused: This is not used but is a part of the stop_machine API
@@ -220,23 +236,23 @@ static struct sample *buffer_get_sample(
*/
static int get_sample(void *unused)
{
- ktime_t start, t1, t2, last_t2;
+ time_type start, t1, t2, last_t2;
s64 diff, total = 0;
u64 sample = 0;
u64 outer_sample = 0;
int ret = 1;
- last_t2.tv64 = 0;
- start = ktime_get(); /* start timestamp */
+ init_time(last_t2, 0);
+ start = time_get(); /* start timestamp */
do {
- t1 = ktime_get(); /* we'll look for a discontinuity */
- t2 = ktime_get();
+ t1 = time_get(); /* we'll look for a discontinuity */
+ t2 = time_get();
- if (last_t2.tv64) {
+ if (time_u64(last_t2)) {
/* Check the delta from outer loop (t2 to next t1) */
- diff = ktime_to_us(ktime_sub(t1, last_t2));
+ diff = time_to_us(time_sub(t1, last_t2));
/* This shouldn't happen */
if (diff < 0) {
pr_err(BANNER "time running backwards\n");
@@ -247,10 +263,10 @@ static int get_sample(void *unused)
}
last_t2 = t2;
- total = ktime_to_us(ktime_sub(t2, start)); /* sample width */
+ total = time_to_us(time_sub(t2, start)); /* sample width */
/* This checks the inner loop (t1 to t2) */
- diff = ktime_to_us(ktime_sub(t2, t1)); /* current diff */
+ diff = time_to_us(time_sub(t2, t1)); /* current diff */
/* This shouldn't happen */
if (diff < 0) {

Some files were not shown because too many files have changed in this diff