[rt] Update to 4.11.8-rt5
This commit is contained in:
parent 5b8fb021cd
commit 5cea93cf9c
@@ -235,6 +235,7 @@ linux (4.11.9-1) UNRELEASED; urgency=medium

  [ Salvatore Bonaccorso ]
  * Bump ABI to 2
+  * [rt] Update to 4.11.8-rt5

 -- Ben Hutchings <ben@decadent.org.uk>  Tue, 20 Jun 2017 19:18:44 +0100
@@ -1,8 +1,7 @@
-From 97181f9bd57405b879403763284537e27d46963d Mon Sep 17 00:00:00 2001
From: Thomas Gleixner <tglx@linutronix.de>
Date: Mon, 10 Apr 2017 18:03:36 +0200
Subject: [PATCH 1/4] futex: Avoid freeing an active timer
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Alexander reported a hrtimer debug_object splat:
@@ -1,7 +1,7 @@
From: Peter Zijlstra <peterz@infradead.org>
Date: Wed, 22 Mar 2017 11:35:48 +0100
Subject: [PATCH] futex: Cleanup variable names for futex_top_waiter()
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Upstream commit 499f5aca2cdd5e958b27e2655e7e7f82524f46b1
@@ -1,8 +1,7 @@
-From 048c9b954e20396e0c45ee778466994d1be2e612 Mon Sep 17 00:00:00 2001
From: Thomas Gleixner <tglx@linutronix.de>
Date: Wed, 12 Apr 2017 22:07:27 +0200
Subject: [PATCH 01/13] ia64/topology: Remove cpus_allowed manipulation
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

The CPU hotplug callback fiddles with the cpus_allowed pointer to pin the
calling thread on the plugged CPU. That's already guaranteed by the hotplug
@@ -1,8 +1,7 @@
-From 8fb12156b8db61af3d49f3e5e104568494581d1f Mon Sep 17 00:00:00 2001
From: Thomas Gleixner <tglx@linutronix.de>
Date: Tue, 16 May 2017 20:42:32 +0200
Subject: [PATCH 01/17] init: Pin init task to the boot CPU, initially
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Some of the boot code in init_kernel_freeable() which runs before SMP
bringup assumes (rightfully) that it runs on the boot CPU and therefore can
@@ -1,8 +1,7 @@
-From 2a1c6029940675abb2217b590512dbf691867ec4 Mon Sep 17 00:00:00 2001
From: Xunlei Pang <xlpang@redhat.com>
Date: Thu, 23 Mar 2017 15:56:07 +0100
Subject: [PATCH 1/9] rtmutex: Deboost before waking up the top waiter
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

We should deboost before waking the high-priority task, such that we
don't run two tasks with the same "state" (priority, deadline,
@@ -1,9 +1,8 @@
-From 45aea321678856687927c53972321ebfab77759a Mon Sep 17 00:00:00 2001
From: Peter Zijlstra <peterz@infradead.org>
Date: Wed, 24 May 2017 08:52:02 +0200
Subject: [PATCH] sched/clock: Fix early boot preempt assumption in
 __set_sched_clock_stable()
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

The more strict early boot preemption warnings found that
__set_sched_clock_stable() was incorrectly assuming we'd still be
176 debian/patches/features/all/rt/0001-tracing-Add-hist_field_name-accessor.patch vendored Normal file
@@ -0,0 +1,176 @@
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:02 -0500
Subject: [PATCH 01/32] tracing: Add hist_field_name() accessor
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

In preparation for hist_fields that won't be strictly based on
trace_event_fields, add a new hist_field_name() accessor to allow that
flexibility and update associated users.

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/trace/trace_events_hist.c | 68 ++++++++++++++++++++++++++-------------
 1 file changed, 46 insertions(+), 22 deletions(-)

--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -146,6 +146,23 @@ struct hist_trigger_data {
	struct tracing_map *map;
};

+static const char *hist_field_name(struct hist_field *field,
+				   unsigned int level)
+{
+	const char *field_name = "";
+
+	if (level > 1)
+		return field_name;
+
+	if (field->field)
+		field_name = field->field->name;
+
+	if (field_name == NULL)
+		field_name = "";
+
+	return field_name;
+}
+
static hist_field_fn_t select_value_fn(int field_size, int field_is_signed)
{
	hist_field_fn_t fn = NULL;
@@ -653,7 +670,6 @@ static int is_descending(const char *str
static int create_sort_keys(struct hist_trigger_data *hist_data)
{
	char *fields_str = hist_data->attrs->sort_key_str;
-	struct ftrace_event_field *field = NULL;
	struct tracing_map_sort_key *sort_key;
	int descending, ret = 0;
	unsigned int i, j;
@@ -670,7 +686,9 @@ static int create_sort_keys(struct hist_
	}

	for (i = 0; i < TRACING_MAP_SORT_KEYS_MAX; i++) {
+		struct hist_field *hist_field;
		char *field_str, *field_name;
+		const char *test_name;

		sort_key = &hist_data->sort_keys[i];

@@ -703,8 +721,11 @@ static int create_sort_keys(struct hist_
		}

		for (j = 1; j < hist_data->n_fields; j++) {
-			field = hist_data->fields[j]->field;
-			if (field && (strcmp(field_name, field->name) == 0)) {
+			hist_field = hist_data->fields[j];
+			test_name = hist_field_name(hist_field, 0);
+			if (test_name == NULL)
+				continue;
+			if (strcmp(field_name, test_name) == 0) {
				sort_key->field_idx = j;
				descending = is_descending(field_str);
				if (descending < 0) {
@@ -952,6 +973,7 @@ hist_trigger_entry_print(struct seq_file
	struct hist_field *key_field;
	char str[KSYM_SYMBOL_LEN];
	bool multiline = false;
+	const char *field_name;
	unsigned int i;
	u64 uval;

@@ -963,26 +985,27 @@ hist_trigger_entry_print(struct seq_file
		if (i > hist_data->n_vals)
			seq_puts(m, ", ");

+		field_name = hist_field_name(key_field, 0);
+
		if (key_field->flags & HIST_FIELD_FL_HEX) {
			uval = *(u64 *)(key + key_field->offset);
-			seq_printf(m, "%s: %llx",
-				   key_field->field->name, uval);
+			seq_printf(m, "%s: %llx", field_name, uval);
		} else if (key_field->flags & HIST_FIELD_FL_SYM) {
			uval = *(u64 *)(key + key_field->offset);
			sprint_symbol_no_offset(str, uval);
-			seq_printf(m, "%s: [%llx] %-45s",
-				   key_field->field->name, uval, str);
+			seq_printf(m, "%s: [%llx] %-45s", field_name,
+				   uval, str);
		} else if (key_field->flags & HIST_FIELD_FL_SYM_OFFSET) {
			uval = *(u64 *)(key + key_field->offset);
			sprint_symbol(str, uval);
-			seq_printf(m, "%s: [%llx] %-55s",
-				   key_field->field->name, uval, str);
+			seq_printf(m, "%s: [%llx] %-55s", field_name,
+				   uval, str);
		} else if (key_field->flags & HIST_FIELD_FL_EXECNAME) {
			char *comm = elt->private_data;

			uval = *(u64 *)(key + key_field->offset);
-			seq_printf(m, "%s: %-16s[%10llu]",
-				   key_field->field->name, comm, uval);
+			seq_printf(m, "%s: %-16s[%10llu]", field_name,
+				   comm, uval);
		} else if (key_field->flags & HIST_FIELD_FL_SYSCALL) {
			const char *syscall_name;

@@ -991,8 +1014,8 @@ hist_trigger_entry_print(struct seq_file
			if (!syscall_name)
				syscall_name = "unknown_syscall";

-			seq_printf(m, "%s: %-30s[%3llu]",
-				   key_field->field->name, syscall_name, uval);
+			seq_printf(m, "%s: %-30s[%3llu]", field_name,
+				   syscall_name, uval);
		} else if (key_field->flags & HIST_FIELD_FL_STACKTRACE) {
			seq_puts(m, "stacktrace:\n");
			hist_trigger_stacktrace_print(m,
@@ -1000,15 +1023,14 @@ hist_trigger_entry_print(struct seq_file
						      HIST_STACKTRACE_DEPTH);
			multiline = true;
		} else if (key_field->flags & HIST_FIELD_FL_LOG2) {
-			seq_printf(m, "%s: ~ 2^%-2llu", key_field->field->name,
+			seq_printf(m, "%s: ~ 2^%-2llu", field_name,
				   *(u64 *)(key + key_field->offset));
		} else if (key_field->flags & HIST_FIELD_FL_STRING) {
-			seq_printf(m, "%s: %-50s", key_field->field->name,
+			seq_printf(m, "%s: %-50s", field_name,
				   (char *)(key + key_field->offset));
		} else {
			uval = *(u64 *)(key + key_field->offset);
-			seq_printf(m, "%s: %10llu", key_field->field->name,
-				   uval);
+			seq_printf(m, "%s: %10llu", field_name, uval);
		}
	}

@@ -1021,13 +1043,13 @@ hist_trigger_entry_print(struct seq_file
		   tracing_map_read_sum(elt, HITCOUNT_IDX));

	for (i = 1; i < hist_data->n_vals; i++) {
+		field_name = hist_field_name(hist_data->fields[i], 0);
+
		if (hist_data->fields[i]->flags & HIST_FIELD_FL_HEX) {
-			seq_printf(m, "  %s: %10llx",
-				   hist_data->fields[i]->field->name,
+			seq_printf(m, "  %s: %10llx", field_name,
				   tracing_map_read_sum(elt, i));
		} else {
-			seq_printf(m, "  %s: %10llu",
-				   hist_data->fields[i]->field->name,
+			seq_printf(m, "  %s: %10llu", field_name,
				   tracing_map_read_sum(elt, i));
		}
	}
@@ -1142,7 +1164,9 @@ static const char *get_hist_field_flags(

static void hist_field_print(struct seq_file *m, struct hist_field *hist_field)
{
-	seq_printf(m, "%s", hist_field->field->name);
+	const char *field_name = hist_field_name(hist_field, 0);
+
+	seq_printf(m, "%s", field_name);
	if (hist_field->flags) {
		const char *flags_str = get_hist_field_flags(hist_field);
@@ -1,8 +1,7 @@
-From 5976a66913a8bf42465d96776fd37fb5631edc19 Mon Sep 17 00:00:00 2001
From: Thomas Gleixner <tglx@linutronix.de>
Date: Tue, 16 May 2017 20:42:33 +0200
Subject: [PATCH 02/17] arm: Adjust system_state check
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

To enable smp_processor_id() and might_sleep() debug checks earlier, it's
required to add system states between SYSTEM_BOOTING and SYSTEM_RUNNING.
@@ -1,8 +1,7 @@
-From 94ffac5d847cfd790bb37b7cef1cad803743985e Mon Sep 17 00:00:00 2001
From: Peter Zijlstra <peterz@infradead.org>
Date: Fri, 7 Apr 2017 09:04:07 +0200
Subject: [PATCH 2/4] futex: Fix small (and harmless looking) inconsistencies
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

During (post-commit) review Darren spotted a few minor things. One
(harmless AFAICT) type inconsistency and a comment that wasn't as
@@ -1,7 +1,7 @@
From: Peter Zijlstra <peterz@infradead.org>
Date: Wed, 22 Mar 2017 11:35:49 +0100
Subject: [PATCH] futex: Use smp_store_release() in mark_wake_futex()
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Upstream commit 1b367ece0d7e696cab1c8501bab282cc6a538b3f
@@ -1,8 +1,7 @@
-From e96a7705e7d3fef96aec9b590c63b2f6f7d2ba22 Mon Sep 17 00:00:00 2001
From: Xunlei Pang <xlpang@redhat.com>
Date: Thu, 23 Mar 2017 15:56:08 +0100
Subject: [PATCH 2/9] sched/rtmutex/deadline: Fix a PI crash for deadline tasks
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

A crash happened while I was playing with deadline PI rtmutex.
@@ -0,0 +1,115 @@
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:03 -0500
Subject: [PATCH 02/32] tracing: Reimplement log2
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

log2 as currently implemented applies only to u64 trace_event_field
derived fields, and assumes that anything it's applied to is a u64
field.

To prepare for synthetic fields like latencies, log2 should be
applicable to those as well, so take the opportunity now to fix the
current problems as well as expand to more general uses.

log2 should be thought of as a chaining function rather than a field
type. To enable this as well as possible future function
implementations, add a hist_field operand array into the hist_field
definition for this purpose, and make use of it to implement the log2
'function'.

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/trace/trace_events_hist.c | 31 +++++++++++++++++++++++++++----
 1 file changed, 27 insertions(+), 4 deletions(-)

--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -28,12 +28,16 @@ struct hist_field;

typedef u64 (*hist_field_fn_t) (struct hist_field *field, void *event);

+#define HIST_FIELD_OPERANDS_MAX 2
+
struct hist_field {
	struct ftrace_event_field *field;
	unsigned long flags;
	hist_field_fn_t fn;
	unsigned int size;
	unsigned int offset;
+	unsigned int is_signed;
+	struct hist_field *operands[HIST_FIELD_OPERANDS_MAX];
};

static u64 hist_field_none(struct hist_field *field, void *event)
@@ -71,7 +75,9 @@ static u64 hist_field_pstring(struct his

static u64 hist_field_log2(struct hist_field *hist_field, void *event)
{
-	u64 val = *(u64 *)(event + hist_field->field->offset);
+	struct hist_field *operand = hist_field->operands[0];
+
+	u64 val = operand->fn(operand, event);

	return (u64) ilog2(roundup_pow_of_two(val));
}
@@ -156,6 +162,8 @@ static const char *hist_field_name(struc

	if (field->field)
		field_name = field->field->name;
+	else if (field->flags & HIST_FIELD_FL_LOG2)
+		field_name = hist_field_name(field->operands[0], ++level);

	if (field_name == NULL)
		field_name = "";
@@ -357,8 +365,20 @@ static const struct tracing_map_ops hist
	.elt_init = hist_trigger_elt_comm_init,
};

-static void destroy_hist_field(struct hist_field *hist_field)
+static void destroy_hist_field(struct hist_field *hist_field,
+			       unsigned int level)
{
+	unsigned int i;
+
+	if (level > 2)
+		return;
+
+	if (!hist_field)
+		return;
+
+	for (i = 0; i < HIST_FIELD_OPERANDS_MAX; i++)
+		destroy_hist_field(hist_field->operands[i], ++level);
+
	kfree(hist_field);
}

@@ -385,7 +405,10 @@ static struct hist_field *create_hist_fi
	}

	if (flags & HIST_FIELD_FL_LOG2) {
+		unsigned long fl = flags & ~HIST_FIELD_FL_LOG2;
		hist_field->fn = hist_field_log2;
+		hist_field->operands[0] = create_hist_field(field, fl);
+		hist_field->size = hist_field->operands[0]->size;
		goto out;
	}

@@ -405,7 +428,7 @@ static struct hist_field *create_hist_fi
		hist_field->fn = select_value_fn(field->size,
						 field->is_signed);
		if (!hist_field->fn) {
-			destroy_hist_field(hist_field);
+			destroy_hist_field(hist_field, 0);
			return NULL;
		}
	}
@@ -422,7 +445,7 @@ static void destroy_hist_fields(struct h

	for (i = 0; i < TRACING_MAP_FIELDS_MAX; i++) {
		if (hist_data->fields[i]) {
-			destroy_hist_field(hist_data->fields[i]);
+			destroy_hist_field(hist_data->fields[i], 0);
			hist_data->fields[i] = NULL;
		}
	}
}
@@ -1,8 +1,7 @@
-From 0e8d6a9336b487a1dd6f1991ff376e669d4c87c6 Mon Sep 17 00:00:00 2001
From: Thomas Gleixner <tglx@linutronix.de>
Date: Wed, 12 Apr 2017 22:07:28 +0200
Subject: [PATCH 02/13] workqueue: Provide work_on_cpu_safe()
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

work_on_cpu() is not protected against CPU hotplug. For code which requires
to be either executed on an online CPU or to fail if the CPU is not
@@ -1,8 +1,7 @@
-From ef284f5ca5f102bf855e599305c0c16d6e844635 Mon Sep 17 00:00:00 2001
From: Thomas Gleixner <tglx@linutronix.de>
Date: Tue, 16 May 2017 20:42:34 +0200
Subject: [PATCH 03/17] arm64: Adjust system_state check
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

To enable smp_processor_id() and might_sleep() debug checks earlier, it's
required to add system states between SYSTEM_BOOTING and SYSTEM_RUNNING.
@@ -1,8 +1,7 @@
-From 38fcd06e9b7f6855db1f3ebac5e18b8fdb467ffd Mon Sep 17 00:00:00 2001
From: "Darren Hart (VMware)" <dvhart@infradead.org>
Date: Fri, 14 Apr 2017 15:31:38 -0700
Subject: [PATCH 3/4] futex: Clarify mark_wake_futex memory barrier usage
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Clarify the scenario described in mark_wake_futex requiring the
smp_store_release(). Update the comment to explicitly refer to the
@@ -1,7 +1,7 @@
From: Peter Zijlstra <peterz@infradead.org>
Date: Wed, 22 Mar 2017 11:35:50 +0100
Subject: [PATCH] futex: Remove rt_mutex_deadlock_account_*()
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Upstream commit fffa954fb528963c2fb7b0c0084eb77e2be7ab52
@@ -1,8 +1,7 @@
-From 67cb85fdcee7fbc61c09c00360d1a4ae37641db4 Mon Sep 17 00:00:00 2001
From: Thomas Gleixner <tglx@linutronix.de>
Date: Wed, 12 Apr 2017 22:07:29 +0200
Subject: [PATCH 03/13] ia64/salinfo: Replace racy task affinity logic
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Some of the file operations in /proc/sal require to run code on the
requested cpu. This is achieved by temporarily setting the affinity of the
115 debian/patches/features/all/rt/0003-ring-buffer-Add-interface-for-setting-absolute-time-.patch vendored Normal file
@@ -0,0 +1,115 @@
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:04 -0500
Subject: [PATCH 03/32] ring-buffer: Add interface for setting absolute time
 stamps
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Define a new function, tracing_set_time_stamp_abs(), which can be used
to enable or disable the use of absolute timestamps rather than time
deltas for a trace array.

This resets the buffer to prevent a mix of time deltas and absolute
timestamps.

Only the interface is added here; a subsequent patch will add the
underlying implementation.

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 include/linux/ring_buffer.h | 2 ++
 kernel/trace/ring_buffer.c  | 11 +++++++++++
 kernel/trace/trace.c        | 25 ++++++++++++++++++++++++-
 kernel/trace/trace.h        | 2 ++
 4 files changed, 39 insertions(+), 1 deletion(-)

--- a/include/linux/ring_buffer.h
+++ b/include/linux/ring_buffer.h
@@ -180,6 +180,8 @@ void ring_buffer_normalize_time_stamp(st
				  int cpu, u64 *ts);
void ring_buffer_set_clock(struct ring_buffer *buffer,
			   u64 (*clock)(void));
+void ring_buffer_set_time_stamp_abs(struct ring_buffer *buffer, bool abs);
+bool ring_buffer_time_stamp_abs(struct ring_buffer *buffer);

size_t ring_buffer_page_len(void *page);

--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -484,6 +484,7 @@ struct ring_buffer {
	u64 (*clock)(void);

	struct rb_irq_work irq_work;
+	bool time_stamp_abs;
};

struct ring_buffer_iter {
@@ -1378,6 +1379,16 @@ void ring_buffer_set_clock(struct ring_b
	buffer->clock = clock;
}

+void ring_buffer_set_time_stamp_abs(struct ring_buffer *buffer, bool abs)
+{
+	buffer->time_stamp_abs = abs;
+}
+
+bool ring_buffer_time_stamp_abs(struct ring_buffer *buffer)
+{
+	return buffer->time_stamp_abs;
+}
+
static void rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer);

static inline unsigned long rb_page_entries(struct buffer_page *bpage)
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2082,7 +2082,7 @@ trace_event_buffer_lock_reserve(struct r

	*current_rb = trace_file->tr->trace_buffer.buffer;

-	if ((trace_file->flags &
+	if (!ring_buffer_time_stamp_abs(*current_rb) && (trace_file->flags &
	     (EVENT_FILE_FL_SOFT_DISABLED | EVENT_FILE_FL_FILTERED)) &&
	    (entry = this_cpu_read(trace_buffered_event))) {
		/* Try to use the per cpu buffer first */
@@ -5959,6 +5959,29 @@ static int tracing_clock_open(struct ino
	return ret;
}

+int tracing_set_time_stamp_abs(struct trace_array *tr, bool abs)
+{
+	mutex_lock(&trace_types_lock);
+
+	ring_buffer_set_time_stamp_abs(tr->trace_buffer.buffer, abs);
+
+	/*
+	 * New timestamps may not be consistent with the previous setting.
+	 * Reset the buffer so that it doesn't have incomparable timestamps.
+	 */
+	tracing_reset_online_cpus(&tr->trace_buffer);
+
+#ifdef CONFIG_TRACER_MAX_TRACE
+	if (tr->flags & TRACE_ARRAY_FL_GLOBAL && tr->max_buffer.buffer)
+		ring_buffer_set_time_stamp_abs(tr->max_buffer.buffer, abs);
+	tracing_reset_online_cpus(&tr->max_buffer);
+#endif
+
+	mutex_unlock(&trace_types_lock);
+
+	return 0;
+}
+
struct ftrace_buffer_info {
	struct trace_iterator iter;
	void *spare;
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -278,6 +278,8 @@ extern struct mutex trace_types_lock;
extern int trace_array_get(struct trace_array *tr);
extern void trace_array_put(struct trace_array *tr);

+extern int tracing_set_time_stamp_abs(struct trace_array *tr, bool abs);
+
/*
 * The global tracer (top) should be the first trace array added,
 * but we check the flag anyway.
@@ -1,9 +1,8 @@
-From 85e2d4f992868ad78dc8bb2c077b652fcfb3661a Mon Sep 17 00:00:00 2001
From: Xunlei Pang <xlpang@redhat.com>
Date: Thu, 23 Mar 2017 15:56:09 +0100
Subject: [PATCH 3/9] sched/deadline/rtmutex: Dont miss the
 dl_runtime/dl_period update
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Currently dl tasks will actually return at the very beginning
of rt_mutex_adjust_prio_chain() in !detect_deadlock cases:
@@ -1,8 +1,7 @@
-From 59cd42c29618c45cd3c56da43402b14f611888dd Mon Sep 17 00:00:00 2001
From: "Darren Hart (VMware)" <dvhart@infradead.org>
Date: Fri, 14 Apr 2017 15:46:08 -0700
Subject: [PATCH 4/4] MAINTAINERS: Add FUTEX SUBSYSTEM
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Add a MAINTAINERS block for the FUTEX SUBSYSTEM which includes the core
kernel code, include headers, testing code, and Documentation. Excludes
@@ -1,7 +1,7 @@
From: Peter Zijlstra <peterz@infradead.org>
Date: Wed, 22 Mar 2017 11:35:51 +0100
Subject: [PATCH] futex,rt_mutex: Provide futex specific rt_mutex API
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Upstream commit 5293c2efda37775346885c7e924d4ef7018ea60b
@@ -1,8 +1,7 @@
-From 9feb42ac88b516e378b9782e82b651ca5bed95c4 Mon Sep 17 00:00:00 2001
From: Thomas Gleixner <tglx@linutronix.de>
Date: Thu, 6 Apr 2017 14:56:18 +0200
Subject: [PATCH 04/13] ia64/sn/hwperf: Replace racy task affinity logic
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

sn_hwperf_op_cpu() which is invoked from an ioctl requires to run code on
the requested cpu. This is achieved by temporarily setting the affinity of
331 debian/patches/features/all/rt/0004-ring-buffer-Redefine-the-unimplemented-RINGBUF_TIME_.patch vendored Normal file
@ -0,0 +1,331 @@
|
|||
From: Tom Zanussi <tom.zanussi@linux.intel.com>
|
||||
Date: Mon, 26 Jun 2017 17:49:05 -0500
|
||||
Subject: [PATCH 04/32] ring-buffer: Redefine the unimplemented
|
||||
RINGBUF_TIME_TIME_STAMP
|
||||
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz
|
||||
|
||||
RINGBUF_TYPE_TIME_STAMP is defined but not used, and from what I can
|
||||
gather was reserved for something like an absolute timestamp feature
|
||||
for the ring buffer, if not a complete replacement of the current
|
||||
time_delta scheme.
|
||||
|
||||
This code redefines RINGBUF_TYPE_TIME_STAMP to implement absolute time
|
||||
stamps. Another way to look at it is that it essentially forces
|
||||
extended time_deltas for all events.
|
||||
|
||||
The motivation for doing this is to enable time_deltas that aren't
|
||||
dependent on previous events in the ring buffer, making it feasible to
|
||||
use the ring_buffer_event timetamps in a more random-access way, for
|
||||
purposes other than serial event printing.
|
||||
|
||||
To set/reset this mode, use tracing_set_timestamp_abs() from the
|
||||
previous interface patch.
|
||||
|
||||
Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
|
||||
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
|
||||
---
|
||||
include/linux/ring_buffer.h | 12 ++--
|
||||
kernel/trace/ring_buffer.c | 107 +++++++++++++++++++++++++++++++-------------
|
||||
2 files changed, 83 insertions(+), 36 deletions(-)
|
||||
|
||||
--- a/include/linux/ring_buffer.h
|
||||
+++ b/include/linux/ring_buffer.h
|
||||
@@ -36,10 +36,12 @@ struct ring_buffer_event {
|
||||
* array[0] = time delta (28 .. 59)
|
||||
* size = 8 bytes
|
||||
*
|
||||
- * @RINGBUF_TYPE_TIME_STAMP: Sync time stamp with external clock
|
||||
- * array[0] = tv_nsec
|
||||
- * array[1..2] = tv_sec
|
||||
- * size = 16 bytes
|
||||
+ * @RINGBUF_TYPE_TIME_STAMP: Absolute timestamp
|
||||
+ * Same format as TIME_EXTEND except that the
|
||||
+ * value is an absolute timestamp, not a delta
|
||||
+ * event.time_delta contains bottom 27 bits
|
||||
+ * array[0] = top (28 .. 59) bits
|
||||
+ * size = 8 bytes
|
||||
*
|
||||
* <= @RINGBUF_TYPE_DATA_TYPE_LEN_MAX:
|
||||
* Data record
|
||||
@@ -56,12 +58,12 @@ enum ring_buffer_type {
|
||||
RINGBUF_TYPE_DATA_TYPE_LEN_MAX = 28,
|
||||
RINGBUF_TYPE_PADDING,
|
||||
RINGBUF_TYPE_TIME_EXTEND,
|
||||
- /* FIXME: RINGBUF_TYPE_TIME_STAMP not implemented */
|
||||
RINGBUF_TYPE_TIME_STAMP,
|
||||
};
|
||||
|
||||
unsigned ring_buffer_event_length(struct ring_buffer_event *event);
|
||||
void *ring_buffer_event_data(struct ring_buffer_event *event);
|
||||
+u64 ring_buffer_event_time_stamp(struct ring_buffer_event *event);
|
||||
|
||||
/*
|
||||
* ring_buffer_discard_commit will remove an event that has not
|
||||
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -42,6 +42,8 @@ int ring_buffer_print_entry_header(struc
RINGBUF_TYPE_PADDING);
trace_seq_printf(s, "\ttime_extend : type == %d\n",
RINGBUF_TYPE_TIME_EXTEND);
+ trace_seq_printf(s, "\ttime_stamp : type == %d\n",
+ RINGBUF_TYPE_TIME_STAMP);
trace_seq_printf(s, "\tdata max type_len == %d\n",
RINGBUF_TYPE_DATA_TYPE_LEN_MAX);

@@ -147,6 +149,9 @@ enum {
#define skip_time_extend(event) \
((struct ring_buffer_event *)((char *)event + RB_LEN_TIME_EXTEND))

+#define extended_time(event) \
+ (event->type_len >= RINGBUF_TYPE_TIME_EXTEND)
+
static inline int rb_null_event(struct ring_buffer_event *event)
{
return event->type_len == RINGBUF_TYPE_PADDING && !event->time_delta;
@@ -187,10 +192,8 @@ rb_event_length(struct ring_buffer_event
return event->array[0] + RB_EVNT_HDR_SIZE;

case RINGBUF_TYPE_TIME_EXTEND:
- return RB_LEN_TIME_EXTEND;
-
case RINGBUF_TYPE_TIME_STAMP:
- return RB_LEN_TIME_STAMP;
+ return RB_LEN_TIME_EXTEND;

case RINGBUF_TYPE_DATA:
return rb_event_data_length(event);
@@ -210,7 +213,7 @@ rb_event_ts_length(struct ring_buffer_ev
{
unsigned len = 0;

- if (event->type_len == RINGBUF_TYPE_TIME_EXTEND) {
+ if (extended_time(event)) {
/* time extends include the data event after it */
len = RB_LEN_TIME_EXTEND;
event = skip_time_extend(event);
@@ -232,7 +235,7 @@ unsigned ring_buffer_event_length(struct
{
unsigned length;

- if (event->type_len == RINGBUF_TYPE_TIME_EXTEND)
+ if (extended_time(event))
event = skip_time_extend(event);

length = rb_event_length(event);
@@ -249,7 +252,7 @@ EXPORT_SYMBOL_GPL(ring_buffer_event_leng
static __always_inline void *
rb_event_data(struct ring_buffer_event *event)
{
- if (event->type_len == RINGBUF_TYPE_TIME_EXTEND)
+ if (extended_time(event))
event = skip_time_extend(event);
BUG_ON(event->type_len > RINGBUF_TYPE_DATA_TYPE_LEN_MAX);
/* If length is in len field, then array[0] has the data */
@@ -276,6 +279,27 @@ EXPORT_SYMBOL_GPL(ring_buffer_event_data
#define TS_MASK ((1ULL << TS_SHIFT) - 1)
#define TS_DELTA_TEST (~TS_MASK)

+/**
+ * ring_buffer_event_time_stamp - return the event's extended timestamp
+ * @event: the event to get the timestamp of
+ *
+ * Returns the extended timestamp associated with a data event.
+ * An extended time_stamp is a 64-bit timestamp represented
+ * internally in a special way that makes the best use of space
+ * contained within a ring buffer event. This function decodes
+ * it and maps it to a straight u64 value.
+ */
+u64 ring_buffer_event_time_stamp(struct ring_buffer_event *event)
+{
+ u64 ts;
+
+ ts = event->array[0];
+ ts <<= TS_SHIFT;
+ ts += event->time_delta;
+
+ return ts;
+}
+
/* Flag when events were overwritten */
#define RB_MISSED_EVENTS (1 << 31)
/* Missed count stored at end */
@@ -2219,13 +2243,16 @@ rb_move_tail(struct ring_buffer_per_cpu
}

/* Slow path, do not inline */
-static noinline struct ring_buffer_event *
-rb_add_time_stamp(struct ring_buffer_event *event, u64 delta)
+static noinline struct ring_buffer_event *
+rb_add_time_stamp(struct ring_buffer_event *event, u64 delta, bool abs)
{
- event->type_len = RINGBUF_TYPE_TIME_EXTEND;
+ if (abs)
+ event->type_len = RINGBUF_TYPE_TIME_STAMP;
+ else
+ event->type_len = RINGBUF_TYPE_TIME_EXTEND;

- /* Not the first event on the page? */
- if (rb_event_index(event)) {
+ /* Not the first event on the page, or not delta? */
+ if (abs || rb_event_index(event)) {
event->time_delta = delta & TS_MASK;
event->array[0] = delta >> TS_SHIFT;
} else {
@@ -2268,7 +2295,9 @@ rb_update_event(struct ring_buffer_per_c
* add it to the start of the resevered space.
*/
if (unlikely(info->add_timestamp)) {
- event = rb_add_time_stamp(event, delta);
+ bool abs = ring_buffer_time_stamp_abs(cpu_buffer->buffer);
+
+ event = rb_add_time_stamp(event, info->delta, abs);
length -= RB_LEN_TIME_EXTEND;
delta = 0;
}
@@ -2456,7 +2485,7 @@ static __always_inline void rb_end_commi

static inline void rb_event_discard(struct ring_buffer_event *event)
{
- if (event->type_len == RINGBUF_TYPE_TIME_EXTEND)
+ if (extended_time(event))
event = skip_time_extend(event);

/* array[0] holds the actual length for the discarded event */
@@ -2487,6 +2516,10 @@ rb_update_write_stamp(struct ring_buffer
{
u64 delta;

+ /* In TIME_STAMP mode, write_stamp is unused, nothing to do */
+ if (event->type_len == RINGBUF_TYPE_TIME_STAMP)
+ return;
+
/*
* The event first in the commit queue updates the
* time stamp.
@@ -2500,9 +2533,7 @@ rb_update_write_stamp(struct ring_buffer
cpu_buffer->write_stamp =
cpu_buffer->commit_page->page->time_stamp;
else if (event->type_len == RINGBUF_TYPE_TIME_EXTEND) {
- delta = event->array[0];
- delta <<= TS_SHIFT;
- delta += event->time_delta;
+ delta = ring_buffer_event_time_stamp(event);
cpu_buffer->write_stamp += delta;
} else
cpu_buffer->write_stamp += event->time_delta;
@@ -2686,7 +2717,7 @@ static struct ring_buffer_event *
* If this is the first commit on the page, then it has the same
* timestamp as the page itself.
*/
- if (!tail)
+ if (!tail && !ring_buffer_time_stamp_abs(cpu_buffer->buffer))
info->delta = 0;

/* See if we shot pass the end of this buffer page */
@@ -2764,8 +2795,11 @@ rb_reserve_next_event(struct ring_buffer
/* make sure this diff is calculated here */
barrier();

- /* Did the write stamp get updated already? */
- if (likely(info.ts >= cpu_buffer->write_stamp)) {
+ if (ring_buffer_time_stamp_abs(buffer)) {
+ info.delta = info.ts;
+ rb_handle_timestamp(cpu_buffer, &info);
+ } else /* Did the write stamp get updated already? */
+ if (likely(info.ts >= cpu_buffer->write_stamp)) {
info.delta = diff;
if (unlikely(test_time_stamp(info.delta)))
rb_handle_timestamp(cpu_buffer, &info);
@@ -3447,14 +3481,12 @@ rb_update_read_stamp(struct ring_buffer_
return;

case RINGBUF_TYPE_TIME_EXTEND:
- delta = event->array[0];
- delta <<= TS_SHIFT;
- delta += event->time_delta;
+ delta = ring_buffer_event_time_stamp(event);
cpu_buffer->read_stamp += delta;
return;

case RINGBUF_TYPE_TIME_STAMP:
- /* FIXME: not implemented */
+ /* In TIME_STAMP mode, write_stamp is unused, nothing to do */
return;

case RINGBUF_TYPE_DATA:
@@ -3478,14 +3510,12 @@ rb_update_iter_read_stamp(struct ring_bu
return;

case RINGBUF_TYPE_TIME_EXTEND:
- delta = event->array[0];
- delta <<= TS_SHIFT;
- delta += event->time_delta;
+ delta = ring_buffer_event_time_stamp(event);
iter->read_stamp += delta;
return;

case RINGBUF_TYPE_TIME_STAMP:
- /* FIXME: not implemented */
+ /* In TIME_STAMP mode, write_stamp is unused, nothing to do */
return;

case RINGBUF_TYPE_DATA:
@@ -3709,6 +3739,8 @@ rb_buffer_peek(struct ring_buffer_per_cp
struct buffer_page *reader;
int nr_loops = 0;

+ if (ts)
+ *ts = 0;
again:
/*
* We repeat when a time extend is encountered.
@@ -3745,12 +3777,17 @@ rb_buffer_peek(struct ring_buffer_per_cp
goto again;

case RINGBUF_TYPE_TIME_STAMP:
- /* FIXME: not implemented */
+ if (ts) {
+ *ts = ring_buffer_event_time_stamp(event);
+ ring_buffer_normalize_time_stamp(cpu_buffer->buffer,
+ cpu_buffer->cpu, ts);
+ }
+ /* Internal data, OK to advance */
rb_advance_reader(cpu_buffer);
goto again;

case RINGBUF_TYPE_DATA:
- if (ts) {
+ if (ts && !(*ts)) {
*ts = cpu_buffer->read_stamp + event->time_delta;
ring_buffer_normalize_time_stamp(cpu_buffer->buffer,
cpu_buffer->cpu, ts);
@@ -3775,6 +3812,9 @@ rb_iter_peek(struct ring_buffer_iter *it
struct ring_buffer_event *event;
int nr_loops = 0;

+ if (ts)
+ *ts = 0;
+
cpu_buffer = iter->cpu_buffer;
buffer = cpu_buffer->buffer;

@@ -3827,12 +3867,17 @@ rb_iter_peek(struct ring_buffer_iter *it
goto again;

case RINGBUF_TYPE_TIME_STAMP:
- /* FIXME: not implemented */
+ if (ts) {
+ *ts = ring_buffer_event_time_stamp(event);
+ ring_buffer_normalize_time_stamp(cpu_buffer->buffer,
+ cpu_buffer->cpu, ts);
+ }
+ /* Internal data, OK to advance */
rb_advance_iter(iter);
goto again;

case RINGBUF_TYPE_DATA:
- if (ts) {
+ if (ts && !(*ts)) {
*ts = iter->read_stamp + event->time_delta;
ring_buffer_normalize_time_stamp(buffer,
cpu_buffer->cpu, ts);
@@ -1,8 +1,7 @@
-From aa2bfe55366552cb7e93e8709d66e698d79ccc47 Mon Sep 17 00:00:00 2001
From: Peter Zijlstra <peterz@infradead.org>
Date: Thu, 23 Mar 2017 15:56:10 +0100
Subject: [PATCH 4/9] rtmutex: Clean up
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Previous patches changed the meaning of the return value of
rt_mutex_slowunlock(); update comments and code to reflect this.
@@ -1,8 +1,7 @@
-From 719b3680d1f789c1e3054e3fcb26bfff07c3c623 Mon Sep 17 00:00:00 2001
From: Thomas Gleixner <tglx@linutronix.de>
Date: Tue, 16 May 2017 20:42:35 +0200
Subject: [PATCH 04/17] x86/smp: Adjust system_state check
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

To enable smp_processor_id() and might_sleep() debug checks earlier, it's
required to add system states between SYSTEM_BOOTING and SYSTEM_RUNNING.
@@ -1,7 +1,7 @@
From: Peter Zijlstra <peterz@infradead.org>
Date: Wed, 22 Mar 2017 11:35:52 +0100
Subject: [PATCH] futex: Change locking rules
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Upstream commit 734009e96d1983ad739e5b656e03430b3660c913
@@ -1,8 +1,7 @@
-From dcd2e4734b428709984e2fa35ebbd6cccc246d47 Mon Sep 17 00:00:00 2001
From: Thomas Gleixner <tglx@linutronix.de>
Date: Tue, 16 May 2017 20:42:36 +0200
Subject: [PATCH 05/17] metag: Adjust system_state check
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

To enable smp_processor_id() and might_sleep() debug checks earlier, it's
required to add system states between SYSTEM_BOOTING and SYSTEM_RUNNING.
@@ -1,8 +1,7 @@
-From 6d11b87d55eb75007a3721c2de5938f5bbf607fb Mon Sep 17 00:00:00 2001
From: Thomas Gleixner <tglx@linutronix.de>
Date: Wed, 12 Apr 2017 22:07:31 +0200
Subject: [PATCH 05/13] powerpc/smp: Replace open coded task affinity logic
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Init task invokes smp_ops->setup_cpu() from smp_cpus_done(). Init task can
run on any online CPU at this point, but the setup_cpu() callback requires
@@ -1,8 +1,7 @@
-From acd58620e415aee4a43a808d7d2fd87259ee0001 Mon Sep 17 00:00:00 2001
From: Peter Zijlstra <peterz@infradead.org>
Date: Thu, 23 Mar 2017 15:56:11 +0100
Subject: [PATCH 5/9] sched/rtmutex: Refactor rt_mutex_setprio()
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

With the introduction of SCHED_DEADLINE the whole notion that priority
is a single number is gone, therefore the @prio argument to
299
debian/patches/features/all/rt/0005-tracing-Give-event-triggers-access-to-ring_buffer_ev.patch
vendored
Normal file
@@ -0,0 +1,299 @@
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:06 -0500
Subject: [PATCH 05/32] tracing: Give event triggers access to
ring_buffer_event
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

The ring_buffer event can provide a timestamp that may be useful to
various triggers - pass it into the handlers for that purpose.

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
include/linux/trace_events.h | 14 ++++++----
kernel/trace/trace.h | 9 +++---
kernel/trace/trace_events_hist.c | 11 +++++---
kernel/trace/trace_events_trigger.c | 47 ++++++++++++++++++++++--------------
4 files changed, 49 insertions(+), 32 deletions(-)

--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -400,11 +400,13 @@ enum event_trigger_type {

extern int filter_match_preds(struct event_filter *filter, void *rec);

-extern enum event_trigger_type event_triggers_call(struct trace_event_file *file,
- void *rec);
-extern void event_triggers_post_call(struct trace_event_file *file,
- enum event_trigger_type tt,
- void *rec);
+extern enum event_trigger_type
+event_triggers_call(struct trace_event_file *file, void *rec,
+ struct ring_buffer_event *event);
+extern void
+event_triggers_post_call(struct trace_event_file *file,
+ enum event_trigger_type tt,
+ void *rec, struct ring_buffer_event *event);

bool trace_event_ignore_this_pid(struct trace_event_file *trace_file);

@@ -424,7 +426,7 @@ trace_trigger_soft_disabled(struct trace

if (!(eflags & EVENT_FILE_FL_TRIGGER_COND)) {
if (eflags & EVENT_FILE_FL_TRIGGER_MODE)
- event_triggers_call(file, NULL);
+ event_triggers_call(file, NULL, NULL);
if (eflags & EVENT_FILE_FL_SOFT_DISABLED)
return true;
if (eflags & EVENT_FILE_FL_PID_FILTER)
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -1189,7 +1189,7 @@ static inline bool
unsigned long eflags = file->flags;

if (eflags & EVENT_FILE_FL_TRIGGER_COND)
- *tt = event_triggers_call(file, entry);
+ *tt = event_triggers_call(file, entry, event);

if (test_bit(EVENT_FILE_FL_SOFT_DISABLED_BIT, &file->flags) ||
(unlikely(file->flags & EVENT_FILE_FL_FILTERED) &&
@@ -1226,7 +1226,7 @@ event_trigger_unlock_commit(struct trace
trace_buffer_unlock_commit(file->tr, buffer, event, irq_flags, pc);

if (tt)
- event_triggers_post_call(file, tt, entry);
+ event_triggers_post_call(file, tt, entry, event);
}

/**
@@ -1259,7 +1259,7 @@ event_trigger_unlock_commit_regs(struct
irq_flags, pc, regs);

if (tt)
- event_triggers_post_call(file, tt, entry);
+ event_triggers_post_call(file, tt, entry, event);
}

#define FILTER_PRED_INVALID ((unsigned short)-1)
@@ -1482,7 +1482,8 @@ extern int register_trigger_hist_enable_
*/
struct event_trigger_ops {
void (*func)(struct event_trigger_data *data,
- void *rec);
+ void *rec,
+ struct ring_buffer_event *rbe);
int (*init)(struct event_trigger_ops *ops,
struct event_trigger_data *data);
void (*free)(struct event_trigger_ops *ops,
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -921,7 +921,8 @@ static inline void add_to_key(char *comp
memcpy(compound_key + key_field->offset, key, size);
}

-static void event_hist_trigger(struct event_trigger_data *data, void *rec)
+static void event_hist_trigger(struct event_trigger_data *data, void *rec,
+ struct ring_buffer_event *event)
{
struct hist_trigger_data *hist_data = data->private_data;
bool use_compound_key = (hist_data->n_keys > 1);
@@ -1672,7 +1673,8 @@ static struct event_command trigger_hist
}

static void
-hist_enable_trigger(struct event_trigger_data *data, void *rec)
+hist_enable_trigger(struct event_trigger_data *data, void *rec,
+ struct ring_buffer_event *event)
{
struct enable_trigger_data *enable_data = data->private_data;
struct event_trigger_data *test;
@@ -1688,7 +1690,8 @@ hist_enable_trigger(struct event_trigger
}

static void
-hist_enable_count_trigger(struct event_trigger_data *data, void *rec)
+hist_enable_count_trigger(struct event_trigger_data *data, void *rec,
+ struct ring_buffer_event *event)
{
if (!data->count)
return;
@@ -1696,7 +1699,7 @@ hist_enable_count_trigger(struct event_t
if (data->count != -1)
(data->count)--;

- hist_enable_trigger(data, rec);
+ hist_enable_trigger(data, rec, event);
}

static struct event_trigger_ops hist_enable_trigger_ops = {
--- a/kernel/trace/trace_events_trigger.c
+++ b/kernel/trace/trace_events_trigger.c
@@ -63,7 +63,8 @@ void trigger_data_free(struct event_trig
* any trigger that should be deferred, ETT_NONE if nothing to defer.
*/
enum event_trigger_type
-event_triggers_call(struct trace_event_file *file, void *rec)
+event_triggers_call(struct trace_event_file *file, void *rec,
+ struct ring_buffer_event *event)
{
struct event_trigger_data *data;
enum event_trigger_type tt = ETT_NONE;
@@ -76,7 +77,7 @@ event_triggers_call(struct trace_event_f
if (data->paused)
continue;
if (!rec) {
- data->ops->func(data, rec);
+ data->ops->func(data, rec, event);
continue;
}
filter = rcu_dereference_sched(data->filter);
@@ -86,7 +87,7 @@ event_triggers_call(struct trace_event_f
tt |= data->cmd_ops->trigger_type;
continue;
}
- data->ops->func(data, rec);
+ data->ops->func(data, rec, event);
}
return tt;
}
@@ -108,7 +109,7 @@ EXPORT_SYMBOL_GPL(event_triggers_call);
void
event_triggers_post_call(struct trace_event_file *file,
enum event_trigger_type tt,
- void *rec)
+ void *rec, struct ring_buffer_event *event)
{
struct event_trigger_data *data;

@@ -116,7 +117,7 @@ event_triggers_post_call(struct trace_ev
if (data->paused)
continue;
if (data->cmd_ops->trigger_type & tt)
- data->ops->func(data, rec);
+ data->ops->func(data, rec, event);
}
}
EXPORT_SYMBOL_GPL(event_triggers_post_call);
@@ -909,7 +910,8 @@ void set_named_trigger_data(struct event
}

static void
-traceon_trigger(struct event_trigger_data *data, void *rec)
+traceon_trigger(struct event_trigger_data *data, void *rec,
+ struct ring_buffer_event *event)
{
if (tracing_is_on())
return;
@@ -918,7 +920,8 @@ traceon_trigger(struct event_trigger_dat
}

static void
-traceon_count_trigger(struct event_trigger_data *data, void *rec)
+traceon_count_trigger(struct event_trigger_data *data, void *rec,
+ struct ring_buffer_event *event)
{
if (tracing_is_on())
return;
@@ -933,7 +936,8 @@ traceon_count_trigger(struct event_trigg
}

static void
-traceoff_trigger(struct event_trigger_data *data, void *rec)
+traceoff_trigger(struct event_trigger_data *data, void *rec,
+ struct ring_buffer_event *event)
{
if (!tracing_is_on())
return;
@@ -942,7 +946,8 @@ traceoff_trigger(struct event_trigger_da
}

static void
-traceoff_count_trigger(struct event_trigger_data *data, void *rec)
+traceoff_count_trigger(struct event_trigger_data *data, void *rec,
+ struct ring_buffer_event *event)
{
if (!tracing_is_on())
return;
@@ -1039,13 +1044,15 @@ static struct event_command trigger_trac

#ifdef CONFIG_TRACER_SNAPSHOT
static void
-snapshot_trigger(struct event_trigger_data *data, void *rec)
+snapshot_trigger(struct event_trigger_data *data, void *rec,
+ struct ring_buffer_event *event)
{
tracing_snapshot();
}

static void
-snapshot_count_trigger(struct event_trigger_data *data, void *rec)
+snapshot_count_trigger(struct event_trigger_data *data, void *rec,
+ struct ring_buffer_event *event)
{
if (!data->count)
return;
@@ -1053,7 +1060,7 @@ snapshot_count_trigger(struct event_trig
if (data->count != -1)
(data->count)--;

- snapshot_trigger(data, rec);
+ snapshot_trigger(data, rec, event);
}

static int
@@ -1132,13 +1139,15 @@ static __init int register_trigger_snaps
#define STACK_SKIP 3

static void
-stacktrace_trigger(struct event_trigger_data *data, void *rec)
+stacktrace_trigger(struct event_trigger_data *data, void *rec,
+ struct ring_buffer_event *event)
{
trace_dump_stack(STACK_SKIP);
}

static void
-stacktrace_count_trigger(struct event_trigger_data *data, void *rec)
+stacktrace_count_trigger(struct event_trigger_data *data, void *rec,
+ struct ring_buffer_event *event)
{
if (!data->count)
return;
@@ -1146,7 +1155,7 @@ stacktrace_count_trigger(struct event_tr
if (data->count != -1)
(data->count)--;

- stacktrace_trigger(data, rec);
+ stacktrace_trigger(data, rec, event);
}

static int
@@ -1208,7 +1217,8 @@ static __init void unregister_trigger_tr
}

static void
-event_enable_trigger(struct event_trigger_data *data, void *rec)
+event_enable_trigger(struct event_trigger_data *data, void *rec,
+ struct ring_buffer_event *event)
{
struct enable_trigger_data *enable_data = data->private_data;

@@ -1219,7 +1229,8 @@ event_enable_trigger(struct event_trigge
}

static void
-event_enable_count_trigger(struct event_trigger_data *data, void *rec)
+event_enable_count_trigger(struct event_trigger_data *data, void *rec,
+ struct ring_buffer_event *event)
{
struct enable_trigger_data *enable_data = data->private_data;

@@ -1233,7 +1244,7 @@ event_enable_count_trigger(struct event_
if (data->count != -1)
(data->count)--;

- event_enable_trigger(data, rec);
+ event_enable_trigger(data, rec, event);
}

int event_enable_trigger_print(struct seq_file *m,
@@ -1,7 +1,7 @@
From: Peter Zijlstra <peterz@infradead.org>
Date: Wed, 22 Mar 2017 11:35:53 +0100
Subject: [PATCH] futex: Cleanup refcounting
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Upstream commit bf92cf3a5100f5a0d5f9834787b130159397cb22
@@ -1,8 +1,7 @@
-From a8fcfc1917681ba1ccc23a429543a67aad8bfd00 Mon Sep 17 00:00:00 2001
From: Thomas Gleixner <tglx@linutronix.de>
Date: Tue, 16 May 2017 20:42:37 +0200
Subject: [PATCH 06/17] powerpc: Adjust system_state check
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

To enable smp_processor_id() and might_sleep() debug checks earlier, it's
required to add system states between SYSTEM_BOOTING and SYSTEM_RUNNING.
@@ -1,8 +1,7 @@
-From b91473ff6e979c0028f02f90e40c844959c736d8 Mon Sep 17 00:00:00 2001
From: Peter Zijlstra <peterz@infradead.org>
Date: Thu, 23 Mar 2017 15:56:12 +0100
Subject: [PATCH 6/9] sched,tracing: Update trace_sched_pi_setprio()
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Pass the PI donor task, instead of a numerical priority.
@@ -1,8 +1,7 @@
-From ea875ec94eafb858990f3fe9528501f983105653 Mon Sep 17 00:00:00 2001
From: Thomas Gleixner <tglx@linutronix.de>
Date: Thu, 13 Apr 2017 10:17:07 +0200
Subject: [PATCH 06/13] sparc/sysfs: Replace racy task affinity logic
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

The mmustat_enable sysfs file accessor functions must run code on the
target CPU. This is achieved by temporarily setting the affinity of the
140
debian/patches/features/all/rt/0006-tracing-Add-ring-buffer-event-param-to-hist-field-fu.patch
vendored
Normal file
@@ -0,0 +1,140 @@
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:07 -0500
Subject: [PATCH 06/32] tracing: Add ring buffer event param to hist field
functions
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Some events such as timestamps require access to a ring_buffer_event
struct; add a param so that hist field functions can access that.

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
kernel/trace/trace_events_hist.c | 39 ++++++++++++++++++++++---------------
1 file changed, 24 insertions(+), 15 deletions(-)

--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -26,7 +26,8 @@

struct hist_field;

-typedef u64 (*hist_field_fn_t) (struct hist_field *field, void *event);
+typedef u64 (*hist_field_fn_t) (struct hist_field *field, void *event,
+ struct ring_buffer_event *rbe);

#define HIST_FIELD_OPERANDS_MAX 2

@@ -40,24 +41,28 @@ struct hist_field {
struct hist_field *operands[HIST_FIELD_OPERANDS_MAX];
};

-static u64 hist_field_none(struct hist_field *field, void *event)
+static u64 hist_field_none(struct hist_field *field, void *event,
+ struct ring_buffer_event *rbe)
{
return 0;
}

-static u64 hist_field_counter(struct hist_field *field, void *event)
+static u64 hist_field_counter(struct hist_field *field, void *event,
+ struct ring_buffer_event *rbe)
{
return 1;
}

-static u64 hist_field_string(struct hist_field *hist_field, void *event)
+static u64 hist_field_string(struct hist_field *hist_field, void *event,
+ struct ring_buffer_event *rbe)
{
char *addr = (char *)(event + hist_field->field->offset);

return (u64)(unsigned long)addr;
}

-static u64 hist_field_dynstring(struct hist_field *hist_field, void *event)
+static u64 hist_field_dynstring(struct hist_field *hist_field, void *event,
+ struct ring_buffer_event *rbe)
{
u32 str_item = *(u32 *)(event + hist_field->field->offset);
int str_loc = str_item & 0xffff;
@@ -66,24 +71,28 @@ static u64 hist_field_dynstring(struct h
return (u64)(unsigned long)addr;
}

-static u64 hist_field_pstring(struct hist_field *hist_field, void *event)
+static u64 hist_field_pstring(struct hist_field *hist_field, void *event,
+ struct ring_buffer_event *rbe)
{
char **addr = (char **)(event + hist_field->field->offset);

return (u64)(unsigned long)*addr;
}

-static u64 hist_field_log2(struct hist_field *hist_field, void *event)
+static u64 hist_field_log2(struct hist_field *hist_field, void *event,
+ struct ring_buffer_event *rbe)
{
struct hist_field *operand = hist_field->operands[0];

- u64 val = operand->fn(operand, event);
+ u64 val = operand->fn(operand, event, rbe);

return (u64) ilog2(roundup_pow_of_two(val));
}

#define DEFINE_HIST_FIELD_FN(type) \
-static u64 hist_field_##type(struct hist_field *hist_field, void *event)\
+ static u64 hist_field_##type(struct hist_field *hist_field, \
+ void *event, \
+ struct ring_buffer_event *rbe) \
{ \
type *addr = (type *)(event + hist_field->field->offset); \
\
@@ -883,8 +892,8 @@ create_hist_data(unsigned int map_bits,
}

static void hist_trigger_elt_update(struct hist_trigger_data *hist_data,
- struct tracing_map_elt *elt,
- void *rec)
+ struct tracing_map_elt *elt, void *rec,
+ struct ring_buffer_event *rbe)
{
struct hist_field *hist_field;
unsigned int i;
@@ -892,7 +901,7 @@ static void hist_trigger_elt_update(stru

for_each_hist_val_field(i, hist_data) {
hist_field = hist_data->fields[i];
- hist_val = hist_field->fn(hist_field, rec);
+ hist_val = hist_field->fn(hist_field, rec, rbe);
tracing_map_update_sum(elt, i, hist_val);
}
}
@@ -922,7 +931,7 @@ static inline void add_to_key(char *comp
}

static void event_hist_trigger(struct event_trigger_data *data, void *rec,
- struct ring_buffer_event *event)
+ struct ring_buffer_event *rbe)
{
struct hist_trigger_data *hist_data = data->private_data;
bool use_compound_key = (hist_data->n_keys > 1);
@@ -951,7 +960,7 @@ static void event_hist_trigger(struct ev

key = entries;
} else {
- field_contents = key_field->fn(key_field, rec);
+ field_contents = key_field->fn(key_field, rec, rbe);
if (key_field->flags & HIST_FIELD_FL_STRING) {
key = (void *)(unsigned long)field_contents;
use_compound_key = true;
@@ -968,7 +977,7 @@ static void event_hist_trigger(struct ev

elt = tracing_map_insert(hist_data->map, key);
if (elt)
- hist_trigger_elt_update(hist_data, elt, rec, rbe);
+ hist_trigger_elt_update(hist_data, elt, rec, rbe);
@@ -1,8 +1,7 @@
-From 9762b33dc31c67e34b36ba4e787e64084b3136ff Mon Sep 17 00:00:00 2001
From: Thomas Gleixner <tglx@linutronix.de>
Date: Tue, 16 May 2017 20:42:38 +0200
Subject: [PATCH 07/17] ACPI: Adjust system_state check
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

To enable smp_processor_id() and might_sleep() debug checks earlier, it's
required to add system states between SYSTEM_BOOTING and SYSTEM_RUNNING.
@@ -1,9 +1,8 @@
-From a5cbdf693a60d5b86d4d21dfedd90f17754eb273 Mon Sep 17 00:00:00 2001
 From: Thomas Gleixner <tglx@linutronix.de>
 Date: Wed, 12 Apr 2017 22:07:33 +0200
 Subject: [PATCH 07/13] ACPI/processor: Fix error handling in
  __acpi_processor_start()
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz
 
 When acpi_install_notify_handler() fails the cooling device stays
 registered and the sysfs files created via acpi_pss_perf_init() are
@@ -1,7 +1,7 @@
 From: Peter Zijlstra <peterz@infradead.org>
 Date: Wed, 22 Mar 2017 11:35:54 +0100
 Subject: [PATCH] futex: Rework inconsistent rt_mutex/futex_q state
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz
 
 Upstream commit 73d786bd043ebc855f349c81ea805f6b11cbf2aa
 
@@ -1,8 +1,7 @@
-From e0aad5b44ff5d28ac1d6ae70cdf84ca228e889dc Mon Sep 17 00:00:00 2001
 From: Peter Zijlstra <peterz@infradead.org>
 Date: Thu, 23 Mar 2017 15:56:13 +0100
 Subject: [PATCH 7/9] rtmutex: Fix PI chain order integrity
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz
 
 rt_mutex_waiter::prio is a copy of task_struct::prio which is updated
 during the PI chain walk, such that the PI chain order isn't messed up
25
debian/patches/features/all/rt/0007-tracing-Increase-tracing-map-KEYS_MAX-size.patch
vendored
Normal file
@@ -0,0 +1,25 @@
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:08 -0500
Subject: [PATCH 07/32] tracing: Increase tracing map KEYS_MAX size
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

The current default for the number of subkeys in a compound key is 2,
which is too restrictive. Increase it to a more realistic value of 3.

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/trace/tracing_map.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/kernel/trace/tracing_map.h
+++ b/kernel/trace/tracing_map.h
@@ -5,7 +5,7 @@
 #define TRACING_MAP_BITS_MAX		17
 #define TRACING_MAP_BITS_MIN		7
 
-#define TRACING_MAP_KEYS_MAX		2
+#define TRACING_MAP_KEYS_MAX		3
 #define TRACING_MAP_VALS_MAX		3
 #define TRACING_MAP_FIELDS_MAX		(TRACING_MAP_KEYS_MAX + \
 					 TRACING_MAP_VALS_MAX)
@@ -1,8 +1,7 @@
-From 8153f9ac43897f9f4786b30badc134fcc1a4fb11 Mon Sep 17 00:00:00 2001
 From: Thomas Gleixner <tglx@linutronix.de>
 Date: Wed, 12 Apr 2017 22:07:34 +0200
 Subject: [PATCH 08/13] ACPI/processor: Replace racy task affinity logic
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz
 
 acpi_processor_get_throttling() requires to invoke the getter function on
 the target CPU. This is achieved by temporarily setting the affinity of the
@@ -1,7 +1,7 @@
 From: Peter Zijlstra <peterz@infradead.org>
 Date: Wed, 22 Mar 2017 11:35:55 +0100
 Subject: [PATCH] futex: Pull rt_mutex_futex_unlock() out from under hb->lock
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz
 
 Upstream commit 16ffa12d742534d4ff73e8b3a4e81c1de39196f0
 
@@ -1,8 +1,7 @@
-From 8cdde385c7a33afbe13fd71351da0968540fa566 Mon Sep 17 00:00:00 2001
 From: Thomas Gleixner <tglx@linutronix.de>
 Date: Tue, 16 May 2017 20:42:39 +0200
 Subject: [PATCH 08/17] mm: Adjust system_state check
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz
 
 To enable smp_processor_id() and might_sleep() debug checks earlier, it's
 required to add system states between SYSTEM_BOOTING and SYSTEM_RUNNING.
@@ -1,8 +1,7 @@
-From 19830e55247cddb3f46f1bf60b8e245593491bea Mon Sep 17 00:00:00 2001
 From: Peter Zijlstra <peterz@infradead.org>
 Date: Thu, 23 Mar 2017 15:56:14 +0100
 Subject: [PATCH 8/9] rtmutex: Fix more prio comparisons
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz
 
 There was a pure ->prio comparison left in try_to_wake_rt_mutex(),
 convert it to use rt_mutex_waiter_less(), noting that greater-or-equal
92
debian/patches/features/all/rt/0008-tracing-Break-out-hist-trigger-assignment-parsing.patch
vendored
Normal file
@@ -0,0 +1,92 @@
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:09 -0500
Subject: [PATCH 08/32] tracing: Break out hist trigger assignment parsing
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

This will make it easier to add variables, and makes the parsing code
cleaner regardless.

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/trace/trace_events_hist.c | 56 ++++++++++++++++++++++++---------------
 1 file changed, 35 insertions(+), 21 deletions(-)

--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -251,6 +251,35 @@ static void destroy_hist_trigger_attrs(s
 	kfree(attrs);
 }
 
+static int parse_assignment(char *str, struct hist_trigger_attrs *attrs)
+{
+	int ret = 0;
+
+	if ((strncmp(str, "key=", strlen("key=")) == 0) ||
+	    (strncmp(str, "keys=", strlen("keys=")) == 0))
+		attrs->keys_str = kstrdup(str, GFP_KERNEL);
+	else if ((strncmp(str, "val=", strlen("val=")) == 0) ||
+		 (strncmp(str, "vals=", strlen("vals=")) == 0) ||
+		 (strncmp(str, "values=", strlen("values=")) == 0))
+		attrs->vals_str = kstrdup(str, GFP_KERNEL);
+	else if (strncmp(str, "sort=", strlen("sort=")) == 0)
+		attrs->sort_key_str = kstrdup(str, GFP_KERNEL);
+	else if (strncmp(str, "name=", strlen("name=")) == 0)
+		attrs->name = kstrdup(str, GFP_KERNEL);
+	else if (strncmp(str, "size=", strlen("size=")) == 0) {
+		int map_bits = parse_map_size(str);
+
+		if (map_bits < 0) {
+			ret = map_bits;
+			goto out;
+		}
+		attrs->map_bits = map_bits;
+	} else
+		ret = -EINVAL;
+ out:
+	return ret;
+}
+
 static struct hist_trigger_attrs *parse_hist_trigger_attrs(char *trigger_str)
 {
 	struct hist_trigger_attrs *attrs;
@@ -263,33 +292,18 @@ static struct hist_trigger_attrs *parse_
 	while (trigger_str) {
 		char *str = strsep(&trigger_str, ":");
 
-		if ((strncmp(str, "key=", strlen("key=")) == 0) ||
-		    (strncmp(str, "keys=", strlen("keys=")) == 0))
-			attrs->keys_str = kstrdup(str, GFP_KERNEL);
-		else if ((strncmp(str, "val=", strlen("val=")) == 0) ||
-			 (strncmp(str, "vals=", strlen("vals=")) == 0) ||
-			 (strncmp(str, "values=", strlen("values=")) == 0))
-			attrs->vals_str = kstrdup(str, GFP_KERNEL);
-		else if (strncmp(str, "sort=", strlen("sort=")) == 0)
-			attrs->sort_key_str = kstrdup(str, GFP_KERNEL);
-		else if (strncmp(str, "name=", strlen("name=")) == 0)
-			attrs->name = kstrdup(str, GFP_KERNEL);
-		else if (strcmp(str, "pause") == 0)
+		if (strchr(str, '=')) {
+			ret = parse_assignment(str, attrs);
+			if (ret)
+				goto free;
+		} else if (strcmp(str, "pause") == 0)
 			attrs->pause = true;
 		else if ((strcmp(str, "cont") == 0) ||
 			 (strcmp(str, "continue") == 0))
 			attrs->cont = true;
 		else if (strcmp(str, "clear") == 0)
 			attrs->clear = true;
-		else if (strncmp(str, "size=", strlen("size=")) == 0) {
-			int map_bits = parse_map_size(str);
-
-			if (map_bits < 0) {
-				ret = map_bits;
-				goto free;
-			}
-			attrs->map_bits = map_bits;
-		} else {
+		else {
 			ret = -EINVAL;
 			goto free;
 		}
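The refactored parsing flow above (a generic `parse_assignment()` helper called for every `name=value` token, with the flag keywords handled separately) can be sketched in userspace Python. This is an illustrative analogue, not the kernel API: `attrs` is a dict, `-EINVAL` is modeled as `-22`, and `parse_map_size` is a simplified stand-in for the kernel helper.

```python
EINVAL = -22  # stand-in for the kernel's -EINVAL

def parse_map_size(tok):
    """'size=N' -> number of map bits (smallest b with 2**b >= N), or <0 on error."""
    try:
        size = int(tok.split("=", 1)[1])
    except ValueError:
        return EINVAL
    if size <= 0:
        return EINVAL
    return (size - 1).bit_length()

def parse_assignment(tok, attrs):
    """Handle one 'name=value' token, mirroring the split-out parse_assignment()."""
    if tok.startswith(("key=", "keys=")):
        attrs["keys_str"] = tok
    elif tok.startswith(("val=", "vals=", "values=")):
        attrs["vals_str"] = tok
    elif tok.startswith("sort="):
        attrs["sort_key_str"] = tok
    elif tok.startswith("name="):
        attrs["name"] = tok
    elif tok.startswith("size="):
        bits = parse_map_size(tok)
        if bits < 0:
            return bits
        attrs["map_bits"] = bits
    else:
        return EINVAL
    return 0

def parse_hist_trigger_attrs(trigger_str):
    """Split a trigger string on ':'; assignments go to parse_assignment()."""
    attrs = {}
    for tok in trigger_str.split(":"):
        if "=" in tok:
            if parse_assignment(tok, attrs) != 0:
                raise ValueError(tok)
        elif tok == "pause":
            attrs["pause"] = True
        elif tok in ("cont", "continue"):
            attrs["cont"] = True
        elif tok == "clear":
            attrs["clear"] = True
        else:
            raise ValueError(tok)
    return attrs
```

The point of the split is visible in the sketch: the main loop only decides "assignment or keyword", so adding a new `name=value` attribute later touches a single function.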
@@ -1,8 +1,7 @@
-From 38f05ed04beb276f780fcd2b5c0b78c76d0b3c0c Mon Sep 17 00:00:00 2001
 From: Thomas Gleixner <tglx@linutronix.de>
 Date: Wed, 12 Apr 2017 22:55:03 +0200
 Subject: [PATCH 09/13] cpufreq/ia64: Replace racy task affinity logic
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz
 
 The get() and target() callbacks must run on the affected cpu. This is
 achieved by temporarily setting the affinity of the calling thread to the
@@ -1,8 +1,7 @@
-From d04e31a23c3c828456cb5613f391ce4ac4e5765f Mon Sep 17 00:00:00 2001
 From: Thomas Gleixner <tglx@linutronix.de>
 Date: Tue, 16 May 2017 20:42:40 +0200
 Subject: [PATCH 09/17] cpufreq/pasemi: Adjust system_state check
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz
 
 To enable smp_processor_id() and might_sleep() debug checks earlier, it's
 required to add system states between SYSTEM_BOOTING and SYSTEM_RUNNING.
@@ -1,7 +1,7 @@
 From: Peter Zijlstra <peterz@infradead.org>
 Date: Wed, 22 Mar 2017 11:35:56 +0100
 Subject: [PATCH] futex,rt_mutex: Introduce rt_mutex_init_waiter()
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz
 
 Upstream commit 50809358dd7199aa7ce232f6877dd09ec30ef374
 
@@ -1,9 +1,8 @@
-From def34eaae5ce04b324e48e1bfac873091d945213 Mon Sep 17 00:00:00 2001
 From: Mike Galbraith <efault@gmx.de>
 Date: Wed, 5 Apr 2017 10:08:27 +0200
 Subject: [PATCH 9/9] rtmutex: Plug preempt count leak in
  rt_mutex_futex_unlock()
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz
 
 mark_wakeup_next_waiter() already disables preemption, doing so again
 leaves us with an unpaired preempt_disable().
318
debian/patches/features/all/rt/0009-tracing-Make-traceprobe-parsing-code-reusable.patch
vendored
Normal file
@@ -0,0 +1,318 @@
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:10 -0500
Subject: [PATCH 09/32] tracing: Make traceprobe parsing code reusable
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

traceprobe_probes_write() and traceprobe_command() actually contain
nothing that ties them to kprobes - the code is generically useful for
similar types of parsing elsewhere, so separate it out and move it to
trace.c/trace.h.

Other than moving it, the only change is in naming:
traceprobe_probes_write() becomes trace_parse_run_command() and
traceprobe_command() becomes trace_run_command().

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/trace/trace.c        | 86 ++++++++++++++++++++++++++++++++++++++++++++
 kernel/trace/trace.h        |  7 +++
 kernel/trace/trace_kprobe.c | 18 ++++-----
 kernel/trace/trace_probe.c  | 86 --------------------------------------------
 kernel/trace/trace_probe.h  |  7 ---
 kernel/trace/trace_uprobe.c |  2 -
 6 files changed, 103 insertions(+), 103 deletions(-)

--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -7907,6 +7907,92 @@ void ftrace_dump(enum ftrace_dump_mode o
 }
 EXPORT_SYMBOL_GPL(ftrace_dump);
 
+int trace_run_command(const char *buf, int (*createfn)(int, char **))
+{
+	char **argv;
+	int argc, ret;
+
+	argc = 0;
+	ret = 0;
+	argv = argv_split(GFP_KERNEL, buf, &argc);
+	if (!argv)
+		return -ENOMEM;
+
+	if (argc)
+		ret = createfn(argc, argv);
+
+	argv_free(argv);
+
+	return ret;
+}
+
+#define WRITE_BUFSIZE  4096
+
+ssize_t trace_parse_run_command(struct file *file, const char __user *buffer,
+				size_t count, loff_t *ppos,
+				int (*createfn)(int, char **))
+{
+	char *kbuf, *buf, *tmp;
+	int ret = 0;
+	size_t done = 0;
+	size_t size;
+
+	kbuf = kmalloc(WRITE_BUFSIZE, GFP_KERNEL);
+	if (!kbuf)
+		return -ENOMEM;
+
+	while (done < count) {
+		size = count - done;
+
+		if (size >= WRITE_BUFSIZE)
+			size = WRITE_BUFSIZE - 1;
+
+		if (copy_from_user(kbuf, buffer + done, size)) {
+			ret = -EFAULT;
+			goto out;
+		}
+		kbuf[size] = '\0';
+		buf = kbuf;
+		do {
+			tmp = strchr(buf, '\n');
+			if (tmp) {
+				*tmp = '\0';
+				size = tmp - buf + 1;
+			} else {
+				size = strlen(buf);
+				if (done + size < count) {
+					if (buf != kbuf)
+						break;
+					/* This can accept WRITE_BUFSIZE - 2 ('\n' + '\0') */
+					pr_warn("Line length is too long: Should be less than %d\n",
+						WRITE_BUFSIZE - 2);
+					ret = -EINVAL;
+					goto out;
+				}
+			}
+			done += size;
+
+			/* Remove comments */
+			tmp = strchr(buf, '#');
+
+			if (tmp)
+				*tmp = '\0';
+
+			ret = trace_run_command(buf, createfn);
+			if (ret)
+				goto out;
+			buf += size;
+
+		} while (done < count);
+	}
+	ret = done;
+
+out:
+	kfree(kbuf);
+
+	return ret;
+}
+
 __init static int tracer_alloc_buffers(void)
 {
 	int ring_buf_size;
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -1650,6 +1650,13 @@ void trace_printk_start_comm(void);
 int trace_keep_overwrite(struct tracer *tracer, u32 mask, int set);
 int set_tracer_flag(struct trace_array *tr, unsigned int mask, int enabled);
 
+#define MAX_EVENT_NAME_LEN	64
+
+extern int trace_run_command(const char *buf, int (*createfn)(int, char**));
+extern ssize_t trace_parse_run_command(struct file *file,
+		const char __user *buffer, size_t count, loff_t *ppos,
+		int (*createfn)(int, char**));
+
 /*
  * Normal trace_printk() and friends allocates special buffers
  * to do the manipulation, as well as saves the print formats
--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -878,8 +878,8 @@ static int probes_open(struct inode *ino
 static ssize_t probes_write(struct file *file, const char __user *buffer,
 			    size_t count, loff_t *ppos)
 {
-	return traceprobe_probes_write(file, buffer, count, ppos,
-			create_trace_kprobe);
+	return trace_parse_run_command(file, buffer, count, ppos,
+				       create_trace_kprobe);
 }
 
 static const struct file_operations kprobe_events_ops = {
@@ -1404,9 +1404,9 @@ static __init int kprobe_trace_self_test
 
 	pr_info("Testing kprobe tracing: ");
 
-	ret = traceprobe_command("p:testprobe kprobe_trace_selftest_target "
-				 "$stack $stack0 +0($stack)",
-				 create_trace_kprobe);
+	ret = trace_run_command("p:testprobe kprobe_trace_selftest_target "
+				"$stack $stack0 +0($stack)",
+				create_trace_kprobe);
 	if (WARN_ON_ONCE(ret)) {
 		pr_warn("error on probing function entry.\n");
 		warn++;
@@ -1426,8 +1426,8 @@ static __init int kprobe_trace_self_test
 		}
 	}
 
-	ret = traceprobe_command("r:testprobe2 kprobe_trace_selftest_target "
-				 "$retval", create_trace_kprobe);
+	ret = trace_run_command("r:testprobe2 kprobe_trace_selftest_target "
+				"$retval", create_trace_kprobe);
 	if (WARN_ON_ONCE(ret)) {
 		pr_warn("error on probing function return.\n");
 		warn++;
@@ -1497,13 +1497,13 @@ static __init int kprobe_trace_self_test
 		disable_trace_kprobe(tk, file);
 	}
 
-	ret = traceprobe_command("-:testprobe", create_trace_kprobe);
+	ret = trace_run_command("-:testprobe", create_trace_kprobe);
 	if (WARN_ON_ONCE(ret)) {
 		pr_warn("error on deleting a probe.\n");
 		warn++;
 	}
 
-	ret = traceprobe_command("-:testprobe2", create_trace_kprobe);
+	ret = trace_run_command("-:testprobe2", create_trace_kprobe);
 	if (WARN_ON_ONCE(ret)) {
 		pr_warn("error on deleting a probe.\n");
 		warn++;
--- a/kernel/trace/trace_probe.c
+++ b/kernel/trace/trace_probe.c
@@ -623,92 +623,6 @@ void traceprobe_free_probe_arg(struct pr
 	kfree(arg->comm);
 }
 
-int traceprobe_command(const char *buf, int (*createfn)(int, char **))
-{
-	char **argv;
-	int argc, ret;
-
-	argc = 0;
-	ret = 0;
-	argv = argv_split(GFP_KERNEL, buf, &argc);
-	if (!argv)
-		return -ENOMEM;
-
-	if (argc)
-		ret = createfn(argc, argv);
-
-	argv_free(argv);
-
-	return ret;
-}
-
-#define WRITE_BUFSIZE  4096
-
-ssize_t traceprobe_probes_write(struct file *file, const char __user *buffer,
-				size_t count, loff_t *ppos,
-				int (*createfn)(int, char **))
-{
-	char *kbuf, *buf, *tmp;
-	int ret = 0;
-	size_t done = 0;
-	size_t size;
-
-	kbuf = kmalloc(WRITE_BUFSIZE, GFP_KERNEL);
-	if (!kbuf)
-		return -ENOMEM;
-
-	while (done < count) {
-		size = count - done;
-
-		if (size >= WRITE_BUFSIZE)
-			size = WRITE_BUFSIZE - 1;
-
-		if (copy_from_user(kbuf, buffer + done, size)) {
-			ret = -EFAULT;
-			goto out;
-		}
-		kbuf[size] = '\0';
-		buf = kbuf;
-		do {
-			tmp = strchr(buf, '\n');
-			if (tmp) {
-				*tmp = '\0';
-				size = tmp - buf + 1;
-			} else {
-				size = strlen(buf);
-				if (done + size < count) {
-					if (buf != kbuf)
-						break;
-					/* This can accept WRITE_BUFSIZE - 2 ('\n' + '\0') */
-					pr_warn("Line length is too long: Should be less than %d\n",
-						WRITE_BUFSIZE - 2);
-					ret = -EINVAL;
-					goto out;
-				}
-			}
-			done += size;
-
-			/* Remove comments */
-			tmp = strchr(buf, '#');
-
-			if (tmp)
-				*tmp = '\0';
-
-			ret = traceprobe_command(buf, createfn);
-			if (ret)
-				goto out;
-			buf += size;
-
-		} while (done < count);
-	}
-	ret = done;
-
-out:
-	kfree(kbuf);
-
-	return ret;
-}
-
 static int __set_print_fmt(struct trace_probe *tp, char *buf, int len,
 			   bool is_return)
 {
--- a/kernel/trace/trace_probe.h
+++ b/kernel/trace/trace_probe.h
@@ -42,7 +42,6 @@
 
 #define MAX_TRACE_ARGS		128
 #define MAX_ARGSTR_LEN		63
-#define MAX_EVENT_NAME_LEN	64
 #define MAX_STRING_SIZE		PATH_MAX
 
 /* Reserved field names */
@@ -356,12 +355,6 @@ extern void traceprobe_free_probe_arg(st
 
 extern int traceprobe_split_symbol_offset(char *symbol, unsigned long *offset);
 
-extern ssize_t traceprobe_probes_write(struct file *file,
-		const char __user *buffer, size_t count, loff_t *ppos,
-		int (*createfn)(int, char**));
-
-extern int traceprobe_command(const char *buf, int (*createfn)(int, char**));
-
 /* Sum up total data length for dynamic arraies (strings) */
 static nokprobe_inline int
 __get_data_size(struct trace_probe *tp, struct pt_regs *regs)
--- a/kernel/trace/trace_uprobe.c
+++ b/kernel/trace/trace_uprobe.c
@@ -651,7 +651,7 @@ static int probes_open(struct inode *ino
 static ssize_t probes_write(struct file *file, const char __user *buffer,
 			    size_t count, loff_t *ppos)
 {
-	return traceprobe_probes_write(file, buffer, count, ppos, create_trace_uprobe);
+	return trace_parse_run_command(file, buffer, count, ppos, create_trace_uprobe);
 }
 
 static const struct file_operations uprobe_events_ops = {
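The behaviour of the moved trace_parse_run_command()/trace_run_command() pair (split input into newline-terminated commands, strip '#' comments, tokenize each line, and hand the tokens to a creation callback) can be approximated in a userspace Python sketch. This is a simplified illustration, not the kernel code: there is no fixed-size copy buffer, so the WRITE_BUFSIZE length check and user-space copy are omitted.

```python
def run_command(buf, createfn):
    """Tokenize one command line and dispatch it (analogue of trace_run_command()).

    An empty or comment-only line yields no tokens and is silently accepted.
    """
    argv = buf.split()          # stand-in for the kernel's argv_split()
    return createfn(argv) if argv else 0

def parse_run_command(data, createfn):
    """Feed newline-separated commands to createfn, stripping '#' comments
    (analogue of trace_parse_run_command()). Returns the number of bytes
    consumed on success, or the first nonzero error from createfn."""
    for line in data.splitlines():
        line = line.split("#", 1)[0]   # "Remove comments"
        ret = run_command(line, createfn)
        if ret:
            return ret
    return len(data)
```

A hypothetical `createfn` receives the token list, mirroring how create_trace_kprobe() receives (argc, argv):

```python
seen = []
parse_run_command("p:myprobe do_sys_open\n# a comment\n",
                  lambda argv: (seen.append(argv), 0)[1])
```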
@@ -1,8 +1,7 @@
-From 205dcc1ecbc566cbc20acf246e68de3b080b3ecf Mon Sep 17 00:00:00 2001
 From: Thomas Gleixner <tglx@linutronix.de>
 Date: Wed, 12 Apr 2017 22:07:36 +0200
 Subject: [PATCH 10/13] cpufreq/sh: Replace racy task affinity logic
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz
 
 The target() callback must run on the affected cpu. This is achieved by
 temporarily setting the affinity of the calling thread to the requested CPU
@@ -1,7 +1,7 @@
 From: Peter Zijlstra <peterz@infradead.org>
 Date: Wed, 22 Mar 2017 11:35:57 +0100
 Subject: [PATCH] futex,rt_mutex: Restructure rt_mutex_finish_proxy_lock()
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz
 
 Upstream commit 38d589f2fd08f1296aea3ce62bebd185125c6d81
 
@@ -1,8 +1,7 @@
-From b608fe356fe8328665445a26ec75dfac918c8c5d Mon Sep 17 00:00:00 2001
 From: Thomas Gleixner <tglx@linutronix.de>
 Date: Tue, 16 May 2017 20:42:41 +0200
 Subject: [PATCH 10/17] iommu/vt-d: Adjust system_state checks
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz
 
 To enable smp_processor_id() and might_sleep() debug checks earlier, it's
 required to add system states between SYSTEM_BOOTING and SYSTEM_RUNNING.
106
debian/patches/features/all/rt/0010-tracing-Add-NO_DISCARD-event-file-flag.patch
vendored
Normal file
@@ -0,0 +1,106 @@
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:11 -0500
Subject: [PATCH 10/32] tracing: Add NO_DISCARD event file flag
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Whenever an event_command has a post-trigger that needs access to the
event record, the event record can't be discarded, or the post-trigger
will eventually see bogus data.

In order to allow the discard check to treat this case separately, add
an EVENT_FILE_FL_NO_DISCARD flag to the event file flags, along with
code in the discard check that checks the flag and avoids the discard
when the flag is set.

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 include/linux/trace_events.h        |  3 +++
 kernel/trace/trace.h                | 13 ++++++++++---
 kernel/trace/trace_events_trigger.c | 16 +++++++++++++---
 3 files changed, 26 insertions(+), 6 deletions(-)

--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -306,6 +306,7 @@ enum {
 	EVENT_FILE_FL_TRIGGER_MODE_BIT,
 	EVENT_FILE_FL_TRIGGER_COND_BIT,
 	EVENT_FILE_FL_PID_FILTER_BIT,
+	EVENT_FILE_FL_NO_DISCARD_BIT,
 };
 
 /*
@@ -320,6 +321,7 @@ enum {
  *  TRIGGER_MODE  - When set, invoke the triggers associated with the event
  *  TRIGGER_COND  - When set, one or more triggers has an associated filter
  *  PID_FILTER    - When set, the event is filtered based on pid
+ *  NO_DISCARD    - When set, do not discard events, something needs them later
  */
 enum {
 	EVENT_FILE_FL_ENABLED = (1 << EVENT_FILE_FL_ENABLED_BIT),
@@ -331,6 +333,7 @@ enum {
 	EVENT_FILE_FL_TRIGGER_MODE = (1 << EVENT_FILE_FL_TRIGGER_MODE_BIT),
 	EVENT_FILE_FL_TRIGGER_COND = (1 << EVENT_FILE_FL_TRIGGER_COND_BIT),
 	EVENT_FILE_FL_PID_FILTER = (1 << EVENT_FILE_FL_PID_FILTER_BIT),
+	EVENT_FILE_FL_NO_DISCARD = (1 << EVENT_FILE_FL_NO_DISCARD_BIT),
 };
 
 struct trace_event_file {
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -1191,9 +1191,16 @@ static inline bool
 	if (eflags & EVENT_FILE_FL_TRIGGER_COND)
 		*tt = event_triggers_call(file, entry, event);
 
-	if (test_bit(EVENT_FILE_FL_SOFT_DISABLED_BIT, &file->flags) ||
-	    (unlikely(file->flags & EVENT_FILE_FL_FILTERED) &&
-	     !filter_match_preds(file->filter, entry))) {
+	if (unlikely(file->flags & EVENT_FILE_FL_FILTERED) &&
+	    !filter_match_preds(file->filter, entry)) {
+		__trace_event_discard_commit(buffer, event);
+		return true;
+	}
+
+	if (test_bit(EVENT_FILE_FL_NO_DISCARD_BIT, &file->flags))
+		return false;
+
+	if (test_bit(EVENT_FILE_FL_SOFT_DISABLED_BIT, &file->flags)) {
 		__trace_event_discard_commit(buffer, event);
 		return true;
 	}
--- a/kernel/trace/trace_events_trigger.c
+++ b/kernel/trace/trace_events_trigger.c
@@ -505,20 +505,30 @@ clear_event_triggers(struct trace_array
 void update_cond_flag(struct trace_event_file *file)
 {
 	struct event_trigger_data *data;
-	bool set_cond = false;
+	bool set_cond = false, set_no_discard = false;
 
 	list_for_each_entry_rcu(data, &file->triggers, list) {
 		if (data->filter || event_command_post_trigger(data->cmd_ops) ||
-		    event_command_needs_rec(data->cmd_ops)) {
+		    event_command_needs_rec(data->cmd_ops))
 			set_cond = true;
+
+		if (event_command_post_trigger(data->cmd_ops) &&
+		    event_command_needs_rec(data->cmd_ops))
+			set_no_discard = true;
+
+		if (set_cond && set_no_discard)
 			break;
-		}
 	}
 
 	if (set_cond)
 		set_bit(EVENT_FILE_FL_TRIGGER_COND_BIT, &file->flags);
 	else
 		clear_bit(EVENT_FILE_FL_TRIGGER_COND_BIT, &file->flags);
+
+	if (set_no_discard)
+		set_bit(EVENT_FILE_FL_NO_DISCARD_BIT, &file->flags);
+	else
+		clear_bit(EVENT_FILE_FL_NO_DISCARD_BIT, &file->flags);
 }
 
 /**
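The two-flag recomputation that update_cond_flag() performs above can be sketched in Python. This is a userspace analogue for illustration only, not kernel code: each trigger is modeled as a dict of capability flags, and the function returns the (TRIGGER_COND, NO_DISCARD) pair rather than setting bits on a file.

```python
def update_cond_flag(triggers):
    """Recompute the per-file TRIGGER_COND and NO_DISCARD states.

    TRIGGER_COND: some trigger has a filter, or needs the record,
    or runs as a post-trigger, so the conditional path must be taken.
    NO_DISCARD: some post-trigger needs the record, so the event
    record must not be discarded before the post-trigger runs.
    """
    set_cond = set_no_discard = False
    for t in triggers:
        if t.get("filter") or t.get("post_trigger") or t.get("needs_rec"):
            set_cond = True
        # a post-trigger that needs the record forbids discarding it
        if t.get("post_trigger") and t.get("needs_rec"):
            set_no_discard = True
        if set_cond and set_no_discard:
            break  # both flags decided, no need to scan further
    return set_cond, set_no_discard
```

Note why the patch moves the `break`: in the old code the loop stopped as soon as TRIGGER_COND was decided, but now the scan must continue until both flags are known, or the NO_DISCARD state of a later trigger would be missed.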
@@ -1,8 +1,7 @@
-From 9fe24c4e92d3963d92d7d383e28ed098bd5689d8 Mon Sep 17 00:00:00 2001
 From: Thomas Gleixner <tglx@linutronix.de>
 Date: Wed, 12 Apr 2017 22:07:37 +0200
 Subject: [PATCH 11/13] cpufreq/sparc-us3: Replace racy task affinity logic
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz
 
 The access to the safari config register in the CPU frequency functions
 must be executed on the target CPU. This is achieved by temporarily setting
@@ -1,7 +1,7 @@
 From: Peter Zijlstra <peterz@infradead.org>
 Date: Wed, 22 Mar 2017 11:35:58 +0100
 Subject: [PATCH] futex: Rework futex_lock_pi() to use rt_mutex_*_proxy_lock()
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz
 
 Upstream commit cfafcd117da0216520568c195cb2f6cd1980c4bb
 
29
debian/patches/features/all/rt/0011-tracing-Add-post-trigger-flag-to-hist-trigger-comman.patch
vendored
Normal file
@@ -0,0 +1,29 @@
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:12 -0500
Subject: [PATCH 11/32] tracing: Add post-trigger flag to hist trigger command
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Add EVENT_CMD_FL_POST_TRIGGER to the hist trigger cmd - it doesn't
affect the hist trigger results, and allows further events such as
synthetic events to be generated from a hist trigger.

Without this change, generating an event from a hist trigger will
cause the generated event to fail a ring buffer trace_recursive_lock()
check and return without actually logging the event.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/trace/trace_events_hist.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -1676,7 +1676,7 @@ static int event_hist_trigger_func(struc
 static struct event_command trigger_hist_cmd = {
 	.name			= "hist",
 	.trigger_type		= ETT_EVENT_HIST,
-	.flags			= EVENT_CMD_FL_NEEDS_REC,
+	.flags			= EVENT_CMD_FL_NEEDS_REC | EVENT_CMD_FL_POST_TRIGGER,
 	.func			= event_hist_trigger_func,
 	.reg			= hist_register_trigger,
 	.unreg			= hist_unregister_trigger,
@@ -1,8 +1,7 @@
-From b4def42724594cd399cfee365221f5b38639711d Mon Sep 17 00:00:00 2001
 From: Thomas Gleixner <tglx@linutronix.de>
 Date: Tue, 16 May 2017 20:42:43 +0200
 Subject: [PATCH 12/17] async: Adjust system_state checks
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz
 
 To enable smp_processor_id() and might_sleep() debug checks earlier, it's
 required to add system states between SYSTEM_BOOTING and SYSTEM_RUNNING.
@@ -1,8 +1,7 @@
-From 12699ac53a2e5fbd1fd7c164b11685d55c8aa28b Mon Sep 17 00:00:00 2001
 From: Thomas Gleixner <tglx@linutronix.de>
 Date: Thu, 13 Apr 2017 10:22:43 +0200
 Subject: [PATCH 12/13] cpufreq/sparc-us2e: Replace racy task affinity logic
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz
 
 The access to the HBIRD_ESTAR_MODE register in the cpu frequency control
 functions must happen on the target CPU. This is achieved by temporarily
@@ -1,7 +1,7 @@
 From: Peter Zijlstra <peterz@infradead.org>
 Date: Wed, 22 Mar 2017 11:35:59 +0100
 Subject: [PATCH] futex: Futex_unlock_pi() determinism
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz
 
 Upstream commit bebe5b514345f09be2c15e414d076b02ecb9cce8
debian/patches/features/all/rt/0012-tracing-Add-hist-trigger-timestamp-support.patch (new file, 232 lines, vendored)
@@ -0,0 +1,232 @@
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:13 -0500
Subject: [PATCH 12/32] tracing: Add hist trigger timestamp support
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Add support for a timestamp event field. This is actually a 'pseudo-'
event field in that it behaves like it's part of the event record, but
is really part of the corresponding ring buffer event.

To make use of the timestamp field, users can specify
"$common_timestamp" as a field name for any histogram. Note that this
doesn't make much sense on its own either as a key or value,
but needs to be supported even so, since follow-on patches will add
support for making use of this field in time deltas. The '$' is used
as a prefix on the variable name to indicate that it's not a bonafide
event field - so you won't find it in the event description - but
rather a synthetic field that can be used like a real field.

Note that the use of this field requires the ring buffer be put into
TIME_EXTEND_ABS mode, which saves the complete timestamp for each
event rather than an offset. This mode will be enabled if and only if
a histogram makes use of the "$common_timestamp" field.

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/trace/trace_events_hist.c | 90 ++++++++++++++++++++++++++++-----------
 1 file changed, 66 insertions(+), 24 deletions(-)

--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -89,6 +89,12 @@ static u64 hist_field_log2(struct hist_f
 	return (u64) ilog2(roundup_pow_of_two(val));
 }
 
+static u64 hist_field_timestamp(struct hist_field *hist_field, void *event,
+				struct ring_buffer_event *rbe)
+{
+	return ring_buffer_event_time_stamp(rbe);
+}
+
 #define DEFINE_HIST_FIELD_FN(type) \
 static u64 hist_field_##type(struct hist_field *hist_field, \
 			     void *event, \
@@ -135,6 +141,7 @@ enum hist_field_flags {
 	HIST_FIELD_FL_SYSCALL = 128,
 	HIST_FIELD_FL_STACKTRACE = 256,
 	HIST_FIELD_FL_LOG2 = 512,
+	HIST_FIELD_FL_TIMESTAMP = 1024,
 };
 
 struct hist_trigger_attrs {
@@ -159,6 +166,7 @@ struct hist_trigger_data {
 	struct trace_event_file *event_file;
 	struct hist_trigger_attrs *attrs;
 	struct tracing_map *map;
+	bool enable_timestamps;
 };
 
 static const char *hist_field_name(struct hist_field *field,
@@ -173,6 +181,8 @@ static const char *hist_field_name(struc
 		field_name = field->field->name;
 	else if (field->flags & HIST_FIELD_FL_LOG2)
 		field_name = hist_field_name(field->operands[0], ++level);
+	else if (field->flags & HIST_FIELD_FL_TIMESTAMP)
+		field_name = "$common_timestamp";
 
 	if (field_name == NULL)
 		field_name = "";
@@ -435,6 +445,12 @@ static struct hist_field *create_hist_fi
 		goto out;
 	}
 
+	if (flags & HIST_FIELD_FL_TIMESTAMP) {
+		hist_field->fn = hist_field_timestamp;
+		hist_field->size = sizeof(u64);
+		goto out;
+	}
+
 	if (WARN_ON_ONCE(!field))
 		goto out;
 
@@ -512,10 +528,15 @@ static int create_val_field(struct hist_
 		}
 	}
 
-	field = trace_find_event_field(file->event_call, field_name);
-	if (!field) {
-		ret = -EINVAL;
-		goto out;
+	if (strcmp(field_name, "$common_timestamp") == 0) {
+		flags |= HIST_FIELD_FL_TIMESTAMP;
+		hist_data->enable_timestamps = true;
+	} else {
+		field = trace_find_event_field(file->event_call, field_name);
+		if (!field) {
+			ret = -EINVAL;
+			goto out;
+		}
 	}
 
 	hist_data->fields[val_idx] = create_hist_field(field, flags);
@@ -610,16 +631,22 @@ static int create_key_field(struct hist_
 		}
 	}
 
-	field = trace_find_event_field(file->event_call, field_name);
-	if (!field) {
-		ret = -EINVAL;
-		goto out;
-	}
+	if (strcmp(field_name, "$common_timestamp") == 0) {
+		flags |= HIST_FIELD_FL_TIMESTAMP;
+		hist_data->enable_timestamps = true;
+		key_size = sizeof(u64);
+	} else {
+		field = trace_find_event_field(file->event_call, field_name);
+		if (!field) {
+			ret = -EINVAL;
+			goto out;
+		}
 
-	if (is_string_field(field))
-		key_size = MAX_FILTER_STR_VAL;
-	else
-		key_size = field->size;
+		if (is_string_field(field))
+			key_size = MAX_FILTER_STR_VAL;
+		else
+			key_size = field->size;
+	}
 	}
 
 	hist_data->fields[key_idx] = create_hist_field(field, flags);
@@ -756,7 +783,7 @@ static int create_sort_keys(struct hist_
 			break;
 		}
 
-		if (strcmp(field_name, "hitcount") == 0) {
+		if ((strcmp(field_name, "hitcount") == 0)) {
 			descending = is_descending(field_str);
 			if (descending < 0) {
 				ret = descending;
@@ -816,6 +843,9 @@ static int create_tracing_map_fields(str
 
 		if (hist_field->flags & HIST_FIELD_FL_STACKTRACE)
 			cmp_fn = tracing_map_cmp_none;
+		else if (!field)
+			cmp_fn = tracing_map_cmp_num(hist_field->size,
+						     hist_field->is_signed);
 		else if (is_string_field(field))
 			cmp_fn = tracing_map_cmp_string;
 		else
@@ -1213,7 +1243,11 @@ static void hist_field_print(struct seq_
 {
 	const char *field_name = hist_field_name(hist_field, 0);
 
-	seq_printf(m, "%s", field_name);
+	if (hist_field->flags & HIST_FIELD_FL_TIMESTAMP)
+		seq_puts(m, "$common_timestamp");
+	else if (field_name)
+		seq_printf(m, "%s", field_name);
+
 	if (hist_field->flags) {
 		const char *flags_str = get_hist_field_flags(hist_field);
 
@@ -1264,27 +1298,25 @@ static int event_hist_trigger_print(stru
 
 	for (i = 0; i < hist_data->n_sort_keys; i++) {
 		struct tracing_map_sort_key *sort_key;
+		unsigned int idx;
 
 		sort_key = &hist_data->sort_keys[i];
+		idx = sort_key->field_idx;
+
+		if (WARN_ON(idx >= TRACING_MAP_FIELDS_MAX))
+			return -EINVAL;
 
 		if (i > 0)
 			seq_puts(m, ",");
 
-		if (sort_key->field_idx == HITCOUNT_IDX)
+		if (idx == HITCOUNT_IDX)
 			seq_puts(m, "hitcount");
-		else {
-			unsigned int idx = sort_key->field_idx;
-
-			if (WARN_ON(idx >= TRACING_MAP_FIELDS_MAX))
-				return -EINVAL;
-
+		else
 			hist_field_print(m, hist_data->fields[idx]);
-		}
 
 		if (sort_key->descending)
 			seq_puts(m, ".descending");
 	}
-
 	seq_printf(m, ":size=%u", (1 << hist_data->map->map_bits));
 
 	if (data->filter_str)
@@ -1452,6 +1484,10 @@ static bool hist_trigger_match(struct ev
 			return false;
 		if (key_field->offset != key_field_test->offset)
 			return false;
+		if (key_field->size != key_field_test->size)
+			return false;
+		if (key_field->is_signed != key_field_test->is_signed)
+			return false;
 	}
 
 	for (i = 0; i < hist_data->n_sort_keys; i++) {
@@ -1534,6 +1570,9 @@ static int hist_register_trigger(char *g
 
 	update_cond_flag(file);
 
+	if (hist_data->enable_timestamps)
+		tracing_set_time_stamp_abs(file->tr, true);
+
 	if (trace_event_trigger_enable_disable(file, 1) < 0) {
 		list_del_rcu(&data->list);
 		update_cond_flag(file);
@@ -1568,6 +1607,9 @@ static void hist_unregister_trigger(char
 
 	if (unregistered && test->ops->free)
 		test->ops->free(test->ops, test);
+
+	if (hist_data->enable_timestamps)
+		tracing_set_time_stamp_abs(file->tr, false);
 }
 
 static void hist_unreg_all(struct trace_event_file *file)
@@ -1,8 +1,7 @@
-From 73810a069120aa831debb4d967310ab900f628ad Mon Sep 17 00:00:00 2001
 From: Thomas Gleixner <tglx@linutronix.de>
 Date: Thu, 13 Apr 2017 10:20:23 +0200
 Subject: [PATCH 13/13] crypto: N2 - Replace racy task affinity logic
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz
 
 spu_queue_register() needs to invoke setup functions on a particular
 CPU. This is achieved by temporarily setting the affinity of the
@@ -1,8 +1,7 @@
-From 0594729c24d846889408a07057b5cc9e8d931419 Mon Sep 17 00:00:00 2001
 From: Thomas Gleixner <tglx@linutronix.de>
 Date: Tue, 16 May 2017 20:42:44 +0200
 Subject: [PATCH 13/17] extable: Adjust system_state checks
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz
 
 To enable smp_processor_id() and might_sleep() debug checks earlier, it's
 required to add system states between SYSTEM_BOOTING and SYSTEM_RUNNING.
@@ -1,7 +1,7 @@
 From: Peter Zijlstra <peterz@infradead.org>
 Date: Wed, 22 Mar 2017 11:36:00 +0100
 Subject: [PATCH] futex: Drop hb->lock before enqueueing on the rtmutex
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz
 
 Upstream commit 56222b212e8edb1cf51f5dd73ff645809b082b40
debian/patches/features/all/rt/0013-tracing-Add-per-element-variable-support-to-tracing_.patch (new file, 233 lines, vendored)
@@ -0,0 +1,233 @@
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:14 -0500
Subject: [PATCH 13/32] tracing: Add per-element variable support to
 tracing_map
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

In order to allow information to be passed between trace events, add
support for per-element variables to tracing_map. This provides a
means for histograms to associate a value or values with an entry when
it's saved or updated, and retrieved by subsequent event occurrences.

Variables can be set using tracing_map_set_var() and read using
tracing_map_read_var(). tracing_map_var_set() returns true or false
depending on whether or not the variable has been set, which is
important for event-matching applications.

tracing_map_read_var_once() reads the variable and resets it to the
'unset' state, implementing read-once variables, which are also
important for event-matching uses.

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/trace/tracing_map.c | 113 +++++++++++++++++++++++++++++++++++++++++++++
 kernel/trace/tracing_map.h |  11 ++++
 2 files changed, 124 insertions(+)

--- a/kernel/trace/tracing_map.c
+++ b/kernel/trace/tracing_map.c
@@ -66,6 +66,73 @@ u64 tracing_map_read_sum(struct tracing_
 	return (u64)atomic64_read(&elt->fields[i].sum);
 }
 
+/**
+ * tracing_map_set_var - Assign a tracing_map_elt's variable field
+ * @elt: The tracing_map_elt
+ * @i: The index of the given variable associated with the tracing_map_elt
+ * @n: The value to assign
+ *
+ * Assign n to variable i associated with the specified tracing_map_elt
+ * instance. The index i is the index returned by the call to
+ * tracing_map_add_var() when the tracing map was set up.
+ */
+void tracing_map_set_var(struct tracing_map_elt *elt, unsigned int i, u64 n)
+{
+	atomic64_set(&elt->vars[i], n);
+	elt->var_set[i] = true;
+}
+
+/**
+ * tracing_map_var_set - Return whether or not a variable has been set
+ * @elt: The tracing_map_elt
+ * @i: The index of the given variable associated with the tracing_map_elt
+ *
+ * Return true if the variable has been set, false otherwise. The
+ * index i is the index returned by the call to tracing_map_add_var()
+ * when the tracing map was set up.
+ */
+bool tracing_map_var_set(struct tracing_map_elt *elt, unsigned int i)
+{
+	return elt->var_set[i];
+}
+
+/**
+ * tracing_map_read_var - Return the value of a tracing_map_elt's variable field
+ * @elt: The tracing_map_elt
+ * @i: The index of the given variable associated with the tracing_map_elt
+ *
+ * Retrieve the value of the variable i associated with the specified
+ * tracing_map_elt instance. The index i is the index returned by the
+ * call to tracing_map_add_var() when the tracing map was set
+ * up.
+ *
+ * Return: The variable value associated with field i for elt.
+ */
+u64 tracing_map_read_var(struct tracing_map_elt *elt, unsigned int i)
+{
+	return (u64)atomic64_read(&elt->vars[i]);
+}
+
+/**
+ * tracing_map_read_var_once - Return and reset a tracing_map_elt's variable field
+ * @elt: The tracing_map_elt
+ * @i: The index of the given variable associated with the tracing_map_elt
+ *
+ * Retrieve the value of the variable i associated with the specified
+ * tracing_map_elt instance, and reset the variable to the 'not set'
+ * state. The index i is the index returned by the call to
+ * tracing_map_add_var() when the tracing map was set up. The reset
+ * essentially makes the variable a read-once variable if it's only
+ * accessed using this function.
+ *
+ * Return: The variable value associated with field i for elt.
+ */
+u64 tracing_map_read_var_once(struct tracing_map_elt *elt, unsigned int i)
+{
+	elt->var_set[i] = false;
+	return (u64)atomic64_read(&elt->vars[i]);
+}
+
 int tracing_map_cmp_string(void *val_a, void *val_b)
 {
 	char *a = val_a;
@@ -171,6 +238,28 @@ int tracing_map_add_sum_field(struct tra
 }
 
 /**
+ * tracing_map_add_var - Add a field describing a tracing_map var
+ * @map: The tracing_map
+ *
+ * Add a var to the map and return the index identifying it in the map
+ * and associated tracing_map_elts. This is the index used for
+ * instance to update a var for a particular tracing_map_elt using
+ * tracing_map_update_var() or reading it via tracing_map_read_var().
+ *
+ * Return: The index identifying the var in the map and associated
+ * tracing_map_elts, or -EINVAL on error.
+ */
+int tracing_map_add_var(struct tracing_map *map)
+{
+	int ret = -EINVAL;
+
+	if (map->n_vars < TRACING_MAP_VARS_MAX)
+		ret = map->n_vars++;
+
+	return ret;
+}
+
+/**
  * tracing_map_add_key_field - Add a field describing a tracing_map key
  * @map: The tracing_map
  * @offset: The offset within the key
@@ -277,6 +366,11 @@ static void tracing_map_elt_clear(struct
 		if (elt->fields[i].cmp_fn == tracing_map_cmp_atomic64)
 			atomic64_set(&elt->fields[i].sum, 0);
 
+	for (i = 0; i < elt->map->n_vars; i++) {
+		atomic64_set(&elt->vars[i], 0);
+		elt->var_set[i] = false;
+	}
+
 	if (elt->map->ops && elt->map->ops->elt_clear)
 		elt->map->ops->elt_clear(elt);
 }
@@ -303,6 +397,8 @@ static void tracing_map_elt_free(struct
 	if (elt->map->ops && elt->map->ops->elt_free)
 		elt->map->ops->elt_free(elt);
 	kfree(elt->fields);
+	kfree(elt->vars);
+	kfree(elt->var_set);
 	kfree(elt->key);
 	kfree(elt);
 }
@@ -330,6 +426,18 @@ static struct tracing_map_elt *tracing_m
 		goto free;
 	}
 
+	elt->vars = kcalloc(map->n_vars, sizeof(*elt->vars), GFP_KERNEL);
+	if (!elt->vars) {
+		err = -ENOMEM;
+		goto free;
+	}
+
+	elt->var_set = kcalloc(map->n_vars, sizeof(*elt->var_set), GFP_KERNEL);
+	if (!elt->var_set) {
+		err = -ENOMEM;
+		goto free;
+	}
+
 	tracing_map_elt_init_fields(elt);
 
 	if (map->ops && map->ops->elt_alloc) {
@@ -833,6 +941,11 @@ static struct tracing_map_elt *copy_elt(
 		dup_elt->fields[i].cmp_fn = elt->fields[i].cmp_fn;
 	}
 
+	for (i = 0; i < elt->map->n_vars; i++) {
+		atomic64_set(&dup_elt->vars[i], atomic64_read(&elt->vars[i]));
+		dup_elt->var_set[i] = elt->var_set[i];
+	}
+
 	return dup_elt;
 }
 
--- a/kernel/trace/tracing_map.h
+++ b/kernel/trace/tracing_map.h
@@ -9,6 +9,7 @@
 #define TRACING_MAP_VALS_MAX 3
 #define TRACING_MAP_FIELDS_MAX (TRACING_MAP_KEYS_MAX + \
				 TRACING_MAP_VALS_MAX)
+#define TRACING_MAP_VARS_MAX 16
 #define TRACING_MAP_SORT_KEYS_MAX 2
 
 typedef int (*tracing_map_cmp_fn_t) (void *val_a, void *val_b);
@@ -136,6 +137,8 @@ struct tracing_map_field {
 struct tracing_map_elt {
 	struct tracing_map *map;
 	struct tracing_map_field *fields;
+	atomic64_t *vars;
+	bool *var_set;
 	void *key;
 	void *private_data;
 };
@@ -191,6 +194,7 @@ struct tracing_map {
 	int key_idx[TRACING_MAP_KEYS_MAX];
 	unsigned int n_keys;
 	struct tracing_map_sort_key sort_key;
+	unsigned int n_vars;
 	atomic64_t hits;
 	atomic64_t drops;
 };
@@ -247,6 +251,7 @@ tracing_map_create(unsigned int map_bits
 extern int tracing_map_init(struct tracing_map *map);
 
 extern int tracing_map_add_sum_field(struct tracing_map *map);
+extern int tracing_map_add_var(struct tracing_map *map);
 extern int tracing_map_add_key_field(struct tracing_map *map,
				     unsigned int offset,
				     tracing_map_cmp_fn_t cmp_fn);
@@ -266,7 +271,13 @@ extern int tracing_map_cmp_none(void *va
 
 extern void tracing_map_update_sum(struct tracing_map_elt *elt,
				   unsigned int i, u64 n);
+extern void tracing_map_set_var(struct tracing_map_elt *elt,
+				unsigned int i, u64 n);
+extern bool tracing_map_var_set(struct tracing_map_elt *elt, unsigned int i);
 extern u64 tracing_map_read_sum(struct tracing_map_elt *elt, unsigned int i);
+extern u64 tracing_map_read_var(struct tracing_map_elt *elt, unsigned int i);
+extern u64 tracing_map_read_var_once(struct tracing_map_elt *elt, unsigned int i);
+
 extern void tracing_map_set_field_descr(struct tracing_map *map,
					unsigned int i,
					unsigned int key_offset,
@@ -1,8 +1,7 @@
-From ff48cd26fc4889b9deb5f9333d3c61746e450b7f Mon Sep 17 00:00:00 2001
 From: Thomas Gleixner <tglx@linutronix.de>
 Date: Tue, 16 May 2017 20:42:45 +0200
 Subject: [PATCH 14/17] printk: Adjust system_state checks
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz
 
 To enable smp_processor_id() and might_sleep() debug checks earlier, it's
 required to add system states between SYSTEM_BOOTING and SYSTEM_RUNNING.
debian/patches/features/all/rt/0014-tracing-Add-hist_data-member-to-hist_field.patch (new file, 79 lines, vendored)
@@ -0,0 +1,79 @@
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:15 -0500
Subject: [PATCH 14/32] tracing: Add hist_data member to hist_field
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Allow hist_data access via hist_field. Some users of hist_fields
require or will require more access to the associated hist_data.

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/trace/trace_events_hist.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -39,6 +39,7 @@ struct hist_field {
 	unsigned int offset;
 	unsigned int is_signed;
 	struct hist_field *operands[HIST_FIELD_OPERANDS_MAX];
+	struct hist_trigger_data *hist_data;
 };
 
 static u64 hist_field_none(struct hist_field *field, void *event,
@@ -415,7 +416,8 @@ static void destroy_hist_field(struct hi
 	kfree(hist_field);
 }
 
-static struct hist_field *create_hist_field(struct ftrace_event_field *field,
+static struct hist_field *create_hist_field(struct hist_trigger_data *hist_data,
+					    struct ftrace_event_field *field,
					    unsigned long flags)
 {
 	struct hist_field *hist_field;
@@ -427,6 +429,8 @@ static struct hist_field *create_hist_fi
 	if (!hist_field)
 		return NULL;
 
+	hist_field->hist_data = hist_data;
+
 	if (flags & HIST_FIELD_FL_HITCOUNT) {
 		hist_field->fn = hist_field_counter;
 		goto out;
@@ -440,7 +444,7 @@ static struct hist_field *create_hist_fi
 	if (flags & HIST_FIELD_FL_LOG2) {
 		unsigned long fl = flags & ~HIST_FIELD_FL_LOG2;
 		hist_field->fn = hist_field_log2;
-		hist_field->operands[0] = create_hist_field(field, fl);
+		hist_field->operands[0] = create_hist_field(hist_data, field, fl);
 		hist_field->size = hist_field->operands[0]->size;
 		goto out;
 	}
@@ -493,7 +497,7 @@ static void destroy_hist_fields(struct h
 static int create_hitcount_val(struct hist_trigger_data *hist_data)
 {
 	hist_data->fields[HITCOUNT_IDX] =
-		create_hist_field(NULL, HIST_FIELD_FL_HITCOUNT);
+		create_hist_field(hist_data, NULL, HIST_FIELD_FL_HITCOUNT);
 	if (!hist_data->fields[HITCOUNT_IDX])
 		return -ENOMEM;
 
@@ -539,7 +543,7 @@ static int create_val_field(struct hist_
 		}
 	}
 
-	hist_data->fields[val_idx] = create_hist_field(field, flags);
+	hist_data->fields[val_idx] = create_hist_field(hist_data, field, flags);
 	if (!hist_data->fields[val_idx]) {
 		ret = -ENOMEM;
 		goto out;
@@ -649,7 +653,7 @@ static int create_key_field(struct hist_
 		}
 	}
 
-	hist_data->fields[key_idx] = create_hist_field(field, flags);
+	hist_data->fields[key_idx] = create_hist_field(hist_data, field, flags);
 	if (!hist_data->fields[key_idx]) {
 		ret = -ENOMEM;
 		goto out;
@@ -1,8 +1,7 @@
-From c6202adf3a0969514299cf10ff07376a84ad09bb Mon Sep 17 00:00:00 2001
 From: Thomas Gleixner <tglx@linutronix.de>
 Date: Tue, 16 May 2017 20:42:46 +0200
 Subject: [PATCH 15/17] mm/vmscan: Adjust system_state checks
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz
 
 To enable smp_processor_id() and might_sleep() debug checks earlier, it's
 required to add system states between SYSTEM_BOOTING and SYSTEM_RUNNING.
debian/patches/features/all/rt/0015-tracing-Add-usecs-modifier-for-hist-trigger-timestam.patch (new file, 131 lines, vendored)
@@ -0,0 +1,131 @@
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:16 -0500
Subject: [PATCH 15/32] tracing: Add usecs modifier for hist trigger timestamps
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Appending .usecs onto a common_timestamp field will cause the
timestamp value to be in microseconds instead of the default
nanoseconds. A typical latency histogram using usecs would look like
this:

    # echo 'hist:keys=pid,prio:ts0=$common_timestamp.usecs ...
    # echo 'hist:keys=next_pid:wakeup_lat=$common_timestamp.usecs-$ts0 ...

This also adds an external trace_clock_in_ns() to trace.c for the
timestamp conversion.

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/trace/trace.c             |  8 ++++++++
 kernel/trace/trace.h             |  2 ++
 kernel/trace/trace_events_hist.c | 28 ++++++++++++++++++++------
 3 files changed, 32 insertions(+), 6 deletions(-)

--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -1164,6 +1164,14 @@ static struct {
 	ARCH_TRACE_CLOCKS
 };
 
+bool trace_clock_in_ns(struct trace_array *tr)
+{
+	if (trace_clocks[tr->clock_id].in_ns)
+		return true;
+
+	return false;
+}
+
 /*
  * trace_parser_get_init - gets the buffer for trace parser
  */
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -280,6 +280,8 @@ extern void trace_array_put(struct trace
 
 extern int tracing_set_time_stamp_abs(struct trace_array *tr, bool abs);
 
+extern bool trace_clock_in_ns(struct trace_array *tr);
+
 /*
  * The global tracer (top) should be the first trace array added,
  * but we check the flag anyway.
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -90,12 +90,6 @@ static u64 hist_field_log2(struct hist_f
 	return (u64) ilog2(roundup_pow_of_two(val));
 }
 
-static u64 hist_field_timestamp(struct hist_field *hist_field, void *event,
-				struct ring_buffer_event *rbe)
-{
-	return ring_buffer_event_time_stamp(rbe);
-}
-
 #define DEFINE_HIST_FIELD_FN(type) \
 static u64 hist_field_##type(struct hist_field *hist_field, \
 			     void *event, \
@@ -143,6 +137,7 @@ enum hist_field_flags {
 	HIST_FIELD_FL_STACKTRACE = 256,
 	HIST_FIELD_FL_LOG2 = 512,
 	HIST_FIELD_FL_TIMESTAMP = 1024,
+	HIST_FIELD_FL_TIMESTAMP_USECS = 2048,
 };
 
 struct hist_trigger_attrs {
@@ -153,6 +148,7 @@ struct hist_trigger_attrs {
 	bool pause;
 	bool cont;
 	bool clear;
+	bool ts_in_usecs;
 	unsigned int map_bits;
 };
 
@@ -170,6 +166,20 @@ struct hist_trigger_data {
 	bool enable_timestamps;
 };
 
+static u64 hist_field_timestamp(struct hist_field *hist_field, void *event,
+				struct ring_buffer_event *rbe)
+{
+	struct hist_trigger_data *hist_data = hist_field->hist_data;
+	struct trace_array *tr = hist_data->event_file->tr;
+
+	u64 ts = ring_buffer_event_time_stamp(rbe);
+
+	if (hist_data->attrs->ts_in_usecs && trace_clock_in_ns(tr))
+		ts = ns2usecs(ts);
+
+	return ts;
+}
+
 static const char *hist_field_name(struct hist_field *field,
				   unsigned int level)
 {
@@ -629,6 +639,8 @@ static int create_key_field(struct hist_
			flags |= HIST_FIELD_FL_SYSCALL;
		else if (strcmp(field_str, "log2") == 0)
			flags |= HIST_FIELD_FL_LOG2;
+		else if (strcmp(field_str, "usecs") == 0)
+			flags |= HIST_FIELD_FL_TIMESTAMP_USECS;
		else {
			ret = -EINVAL;
			goto out;
@@ -638,6 +650,8 @@ static int create_key_field(struct hist_
		if (strcmp(field_name, "$common_timestamp") == 0) {
			flags |= HIST_FIELD_FL_TIMESTAMP;
			hist_data->enable_timestamps = true;
+			if (flags & HIST_FIELD_FL_TIMESTAMP_USECS)
+				hist_data->attrs->ts_in_usecs = true;
			key_size = sizeof(u64);
		} else {
			field = trace_find_event_field(file->event_call, field_name);
@@ -1239,6 +1253,8 @@ static const char *get_hist_field_flags(
		flags_str = "syscall";
	else if (hist_field->flags & HIST_FIELD_FL_LOG2)
		flags_str = "log2";
+	else if (hist_field->flags & HIST_FIELD_FL_TIMESTAMP_USECS)
+		flags_str = "usecs";
 
	return flags_str;
 }
@ -1,8 +1,7 @@
|
|||
From 69a78ff226fe0241ab6cb9dd961667be477e3cf7 Mon Sep 17 00:00:00 2001
|
||||
From: Thomas Gleixner <tglx@linutronix.de>
|
||||
Date: Tue, 16 May 2017 20:42:47 +0200
|
||||
Subject: [PATCH 16/17] init: Introduce SYSTEM_SCHEDULING state
|
||||
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
|
||||
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz
|
||||
|
||||
might_sleep() debugging and smp_processor_id() debugging should be active
|
||||
right after the scheduler starts working. The init task can invoke
|
||||
|
692 debian/patches/features/all/rt/0016-tracing-Add-variable-support-to-hist-triggers.patch vendored Normal file
@@ -0,0 +1,692 @@
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:17 -0500
Subject: [PATCH 16/32] tracing: Add variable support to hist triggers
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Add support for saving the value of a current event's event field by
assigning it to a variable that can be read by a subsequent event.

The basic syntax for saving a variable is to simply prefix a unique
variable name not corresponding to any keyword along with an '=' sign
to any event field.

Both keys and values can be saved and retrieved in this way:

    # echo 'hist:keys=next_pid:vals=ts0=common_timestamp ...
    # echo 'hist:key=timer_pid=common_pid ...'

If a variable isn't a key variable or prefixed with 'vals=', the
associated event field will be saved in a variable but won't be summed
as a value:

    # echo 'hist:keys=next_pid:ts1=common_timestamp:...

Multiple variables can be assigned at the same time:

    # echo 'hist:keys=pid:vals=ts0=common_timestamp,b=field1,field2 ...

Multiple (or single) variables can also be assigned at the same time
using separate assignments:

    # echo 'hist:keys=pid:vals=ts0=common_timestamp:b=field1:c=field2 ...

Variables set as above can be used by being referenced from another
event, as described in a subsequent patch.

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/trace/trace_events_hist.c | 299 ++++++++++++++++++++++++++++++++++-----
 1 file changed, 264 insertions(+), 35 deletions(-)

--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -30,6 +30,13 @@ typedef u64 (*hist_field_fn_t) (struct h
struct ring_buffer_event *rbe);
#define HIST_FIELD_OPERANDS_MAX 2
+#define HIST_FIELDS_MAX (TRACING_MAP_FIELDS_MAX + TRACING_MAP_VARS_MAX)
+
+struct hist_var {
+ char *name;
+ struct hist_trigger_data *hist_data;
+ unsigned int idx;
+};

struct hist_field {
struct ftrace_event_field *field;
@@ -40,6 +47,7 @@ struct hist_field {
unsigned int is_signed;
struct hist_field *operands[HIST_FIELD_OPERANDS_MAX];
struct hist_trigger_data *hist_data;
+ struct hist_var var;
};

static u64 hist_field_none(struct hist_field *field, void *event,
@@ -138,6 +146,8 @@ enum hist_field_flags {
HIST_FIELD_FL_LOG2 = 512,
HIST_FIELD_FL_TIMESTAMP = 1024,
HIST_FIELD_FL_TIMESTAMP_USECS = 2048,
+ HIST_FIELD_FL_VAR = 4096,
+ HIST_FIELD_FL_VAR_ONLY = 8192,
};

struct hist_trigger_attrs {
@@ -150,13 +160,18 @@ struct hist_trigger_attrs {
bool clear;
bool ts_in_usecs;
unsigned int map_bits;
+
+ char *assignment_str[TRACING_MAP_VARS_MAX];
+ unsigned int n_assignments;
};

struct hist_trigger_data {
- struct hist_field *fields[TRACING_MAP_FIELDS_MAX];
+ struct hist_field *fields[HIST_FIELDS_MAX];
unsigned int n_vals;
unsigned int n_keys;
unsigned int n_fields;
+ unsigned int n_vars;
+ unsigned int n_var_only;
unsigned int key_size;
struct tracing_map_sort_key sort_keys[TRACING_MAP_SORT_KEYS_MAX];
unsigned int n_sort_keys;
@@ -164,6 +179,7 @@ struct hist_trigger_data {
struct hist_trigger_attrs *attrs;
struct tracing_map *map;
bool enable_timestamps;
+ bool remove;
};

static u64 hist_field_timestamp(struct hist_field *hist_field, void *event,
@@ -262,9 +278,14 @@ static int parse_map_size(char *str)

static void destroy_hist_trigger_attrs(struct hist_trigger_attrs *attrs)
{
+ unsigned int i;
+
if (!attrs)
return;

+ for (i = 0; i < attrs->n_assignments; i++)
+ kfree(attrs->assignment_str[i]);
+
kfree(attrs->name);
kfree(attrs->sort_key_str);
kfree(attrs->keys_str);
@@ -295,8 +316,22 @@ static int parse_assignment(char *str, s
goto out;
}
attrs->map_bits = map_bits;
- } else
- ret = -EINVAL;
+ } else {
+ char *assignment;
+
+ if (attrs->n_assignments == TRACING_MAP_VARS_MAX) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ assignment = kstrdup(str, GFP_KERNEL);
+ if (!assignment) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ attrs->assignment_str[attrs->n_assignments++] = assignment;
+ }
out:
return ret;
}
@@ -423,12 +458,15 @@ static void destroy_hist_field(struct hi
for (i = 0; i < HIST_FIELD_OPERANDS_MAX; i++)
destroy_hist_field(hist_field->operands[i], ++level);

+ kfree(hist_field->var.name);
+
kfree(hist_field);
}

static struct hist_field *create_hist_field(struct hist_trigger_data *hist_data,
struct ftrace_event_field *field,
- unsigned long flags)
+ unsigned long flags,
+ char *var_name)
{
struct hist_field *hist_field;

@@ -454,7 +492,7 @@ static struct hist_field *create_hist_fi
if (flags & HIST_FIELD_FL_LOG2) {
unsigned long fl = flags & ~HIST_FIELD_FL_LOG2;
hist_field->fn = hist_field_log2;
- hist_field->operands[0] = create_hist_field(hist_data, field, fl);
+ hist_field->operands[0] = create_hist_field(hist_data, field, fl, NULL);
hist_field->size = hist_field->operands[0]->size;
goto out;
}
@@ -489,14 +527,23 @@ static struct hist_field *create_hist_fi
hist_field->field = field;
hist_field->flags = flags;

+ if (var_name) {
+ hist_field->var.name = kstrdup(var_name, GFP_KERNEL);
+ if (!hist_field->var.name)
+ goto free;
+ }
+
return hist_field;
+ free:
+ destroy_hist_field(hist_field, 0);
+ return NULL;
}

static void destroy_hist_fields(struct hist_trigger_data *hist_data)
{
unsigned int i;
- for (i = 0; i < TRACING_MAP_FIELDS_MAX; i++) {
+ for (i = 0; i < HIST_FIELDS_MAX; i++) {
if (hist_data->fields[i]) {
destroy_hist_field(hist_data->fields[i], 0);
hist_data->fields[i] = NULL;
@@ -507,11 +554,12 @@ static void destroy_hist_fields(struct h
static int create_hitcount_val(struct hist_trigger_data *hist_data)
{
hist_data->fields[HITCOUNT_IDX] =
- create_hist_field(hist_data, NULL, HIST_FIELD_FL_HITCOUNT);
+ create_hist_field(hist_data, NULL, HIST_FIELD_FL_HITCOUNT, NULL);
if (!hist_data->fields[HITCOUNT_IDX])
return -ENOMEM;

hist_data->n_vals++;
+ hist_data->n_fields++;

if (WARN_ON(hist_data->n_vals > TRACING_MAP_VALS_MAX))
return -EINVAL;
@@ -519,19 +567,81 @@ static int create_hitcount_val(struct hi
return 0;
}

+static struct hist_field *find_var_field(struct hist_trigger_data *hist_data,
+ const char *var_name)
+{
+ struct hist_field *hist_field, *found = NULL;
+ int i;
+
+ for_each_hist_field(i, hist_data) {
+ hist_field = hist_data->fields[i];
+ if (hist_field && hist_field->flags & HIST_FIELD_FL_VAR &&
+ strcmp(hist_field->var.name, var_name) == 0) {
+ found = hist_field;
+ break;
+ }
+ }
+
+ return found;
+}
+
+static struct hist_field *find_var(struct trace_event_file *file,
+ const char *var_name)
+{
+ struct hist_trigger_data *hist_data;
+ struct event_trigger_data *test;
+ struct hist_field *hist_field;
+
+ list_for_each_entry_rcu(test, &file->triggers, list) {
+ if (test->cmd_ops->trigger_type == ETT_EVENT_HIST) {
+ hist_data = test->private_data;
+ hist_field = find_var_field(hist_data, var_name);
+ if (hist_field)
+ return hist_field;
+ }
+ }
+
+ return NULL;
+}
+
static int create_val_field(struct hist_trigger_data *hist_data,
unsigned int val_idx,
struct trace_event_file *file,
- char *field_str)
+ char *field_str, bool var_only)
{
struct ftrace_event_field *field = NULL;
+ char *field_name, *var_name;
unsigned long flags = 0;
- char *field_name;
int ret = 0;

- if (WARN_ON(val_idx >= TRACING_MAP_VALS_MAX))
+ if (WARN_ON(!var_only && val_idx >= TRACING_MAP_VALS_MAX))
return -EINVAL;
+ var_name = strsep(&field_str, "=");
+ if (field_str && var_name) {
+ if (find_var(file, var_name) &&
+ !hist_data->remove) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ flags |= HIST_FIELD_FL_VAR;
+ hist_data->n_vars++;
+ if (hist_data->n_vars > TRACING_MAP_VARS_MAX) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ if (var_only)
+ flags |= HIST_FIELD_FL_VAR_ONLY;
+ } else if (!var_only && var_name != NULL && field_str == NULL) {
+ field_str = var_name;
+ var_name = NULL;
+ } else {
+ ret = -EINVAL;
+ goto out;
+ }
+
field_name = strsep(&field_str, ".");
if (field_str) {
if (strcmp(field_str, "hex") == 0)
@@ -553,15 +663,19 @@ static int create_val_field(struct hist_
}
}

- hist_data->fields[val_idx] = create_hist_field(hist_data, field, flags);
+ hist_data->fields[val_idx] = create_hist_field(hist_data, field, flags, var_name);
if (!hist_data->fields[val_idx]) {
ret = -ENOMEM;
goto out;
}

++hist_data->n_vals;
+ ++hist_data->n_fields;

- if (WARN_ON(hist_data->n_vals > TRACING_MAP_VALS_MAX))
+ if (hist_data->fields[val_idx]->flags & HIST_FIELD_FL_VAR_ONLY)
+ hist_data->n_var_only++;
+
+ if (WARN_ON(hist_data->n_vals > TRACING_MAP_VALS_MAX + TRACING_MAP_VARS_MAX))
ret = -EINVAL;
out:
return ret;
@@ -571,7 +685,7 @@ static int create_val_fields(struct hist
struct trace_event_file *file)
{
char *fields_str, *field_str;
- unsigned int i, j;
+ unsigned int i, j = 1;
int ret;

ret = create_hitcount_val(hist_data);
@@ -591,12 +705,15 @@ static int create_val_fields(struct hist
field_str = strsep(&fields_str, ",");
if (!field_str)
break;
+
if (strcmp(field_str, "hitcount") == 0)
continue;
- ret = create_val_field(hist_data, j++, file, field_str);
+
+ ret = create_val_field(hist_data, j++, file, field_str, false);
if (ret)
goto out;
}
+
if (fields_str && (strcmp(fields_str, "hitcount") != 0))
ret = -EINVAL;
out:
@@ -610,18 +727,32 @@ static int create_key_field(struct hist_
char *field_str)
{
struct ftrace_event_field *field = NULL;
+ struct hist_field *hist_field = NULL;
unsigned long flags = 0;
unsigned int key_size;
+ char *var_name;
int ret = 0;

- if (WARN_ON(key_idx >= TRACING_MAP_FIELDS_MAX))
+ if (WARN_ON(key_idx >= HIST_FIELDS_MAX))
return -EINVAL;

flags |= HIST_FIELD_FL_KEY;

+ var_name = strsep(&field_str, "=");
+ if (field_str) {
+ if (find_var(file, var_name) &&
+ !hist_data->remove)
+ return -EINVAL;
+ flags |= HIST_FIELD_FL_VAR;
+ } else {
+ field_str = var_name;
+ var_name = NULL;
+ }
+
if (strcmp(field_str, "stacktrace") == 0) {
flags |= HIST_FIELD_FL_STACKTRACE;
key_size = sizeof(unsigned long) * HIST_STACKTRACE_DEPTH;
+ hist_field = create_hist_field(hist_data, NULL, flags, var_name);
} else {
char *field_name = strsep(&field_str, ".");

@@ -667,7 +798,7 @@ static int create_key_field(struct hist_
}
}

- hist_data->fields[key_idx] = create_hist_field(hist_data, field, flags);
+ hist_data->fields[key_idx] = create_hist_field(hist_data, field, flags, var_name);
if (!hist_data->fields[key_idx]) {
ret = -ENOMEM;
goto out;
@@ -683,6 +814,7 @@ static int create_key_field(struct hist_
}

hist_data->n_keys++;
+ hist_data->n_fields++;

if (WARN_ON(hist_data->n_keys > TRACING_MAP_KEYS_MAX))
return -EINVAL;
@@ -726,6 +858,29 @@ static int create_key_fields(struct hist
return ret;
}

+static int create_var_fields(struct hist_trigger_data *hist_data,
+ struct trace_event_file *file)
+{
+ unsigned int i, j, k = hist_data->n_vals;
+ char *str, *field_str;
+ int ret = 0;
+
+ for (i = 0; i < hist_data->attrs->n_assignments; i++) {
+ str = hist_data->attrs->assignment_str[i];
+
+ for (j = 0; j < TRACING_MAP_VARS_MAX; j++) {
+ field_str = strsep(&str, ",");
+ if (!field_str)
+ break;
+ ret = create_val_field(hist_data, k++, file, field_str, true);
+ if (ret)
+ goto out;
+ }
+ }
+ out:
+ return ret;
+}
+
static int create_hist_fields(struct hist_trigger_data *hist_data,
struct trace_event_file *file)
{
@@ -735,11 +890,13 @@ static int create_hist_fields(struct his
if (ret)
goto out;

- ret = create_key_fields(hist_data, file);
+ ret = create_var_fields(hist_data, file);
if (ret)
goto out;

- hist_data->n_fields = hist_data->n_vals + hist_data->n_keys;
+ ret = create_key_fields(hist_data, file);
+ if (ret)
+ goto out;
out:
return ret;
}
@@ -763,7 +920,7 @@ static int create_sort_keys(struct hist_
char *fields_str = hist_data->attrs->sort_key_str;
struct tracing_map_sort_key *sort_key;
int descending, ret = 0;
- unsigned int i, j;
+ unsigned int i, j, k;

hist_data->n_sort_keys = 1; /* we always have at least one, hitcount */

@@ -811,13 +968,21 @@ static int create_sort_keys(struct hist_
continue;
}

- for (j = 1; j < hist_data->n_fields; j++) {
+ for (j = 1, k = 1; j < hist_data->n_fields; j++) {
+ unsigned idx;
+
hist_field = hist_data->fields[j];
+ if (hist_field->flags & HIST_FIELD_FL_VAR_ONLY)
+ continue;
+
+ idx = k++;
+
test_name = hist_field_name(hist_field, 0);
+
if (test_name == NULL)
continue;
if (strcmp(field_name, test_name) == 0) {
- sort_key->field_idx = j;
+ sort_key->field_idx = idx;
descending = is_descending(field_str);
if (descending < 0) {
ret = descending;
@@ -832,6 +997,7 @@ static int create_sort_keys(struct hist_
break;
}
}
+
hist_data->n_sort_keys = i;
out:
return ret;
@@ -872,12 +1038,19 @@ static int create_tracing_map_fields(str
idx = tracing_map_add_key_field(map,
hist_field->offset,
cmp_fn);
-
- } else
+ } else if (!(hist_field->flags & HIST_FIELD_FL_VAR))
idx = tracing_map_add_sum_field(map);

if (idx < 0)
return idx;
+
+ if (hist_field->flags & HIST_FIELD_FL_VAR) {
+ idx = tracing_map_add_var(map);
+ if (idx < 0)
+ return idx;
+ hist_field->var.idx = idx;
+ hist_field->var.hist_data = hist_data;
+ }
}

return 0;
@@ -901,7 +1074,8 @@ static bool need_tracing_map_ops(struct
static struct hist_trigger_data *
create_hist_data(unsigned int map_bits,
struct hist_trigger_attrs *attrs,
- struct trace_event_file *file)
+ struct trace_event_file *file,
+ bool remove)
{
const struct tracing_map_ops *map_ops = NULL;
struct hist_trigger_data *hist_data;
@@ -912,6 +1086,7 @@ create_hist_data(unsigned int map_bits,
return ERR_PTR(-ENOMEM);

hist_data->attrs = attrs;
+ hist_data->remove = remove;

ret = create_hist_fields(hist_data, file);
if (ret)
@@ -958,14 +1133,29 @@ static void hist_trigger_elt_update(stru
struct ring_buffer_event *rbe)
{
struct hist_field *hist_field;
- unsigned int i;
+ unsigned int i, var_idx;
u64 hist_val;

for_each_hist_val_field(i, hist_data) {
hist_field = hist_data->fields[i];
- hist_val = hist_field->fn(hist_field, rec, rbe);
+ hist_val = hist_field->fn(hist_field, rbe, rec);
+ if (hist_field->flags & HIST_FIELD_FL_VAR) {
+ var_idx = hist_field->var.idx;
+ tracing_map_set_var(elt, var_idx, hist_val);
+ if (hist_field->flags & HIST_FIELD_FL_VAR_ONLY)
+ continue;
+ }
tracing_map_update_sum(elt, i, hist_val);
}
+
+ for_each_hist_key_field(i, hist_data) {
+ hist_field = hist_data->fields[i];
+ if (hist_field->flags & HIST_FIELD_FL_VAR) {
+ hist_val = hist_field->fn(hist_field, rbe, rec);
+ var_idx = hist_field->var.idx;
+ tracing_map_set_var(elt, var_idx, hist_val);
+ }
+ }
}

static inline void add_to_key(char *compound_key, void *key,
@@ -1140,6 +1330,9 @@ hist_trigger_entry_print(struct seq_file
for (i = 1; i < hist_data->n_vals; i++) {
field_name = hist_field_name(hist_data->fields[i], 0);

+ if (hist_data->fields[i]->flags & HIST_FIELD_FL_VAR)
+ continue;
+
if (hist_data->fields[i]->flags & HIST_FIELD_FL_HEX) {
seq_printf(m, " %s: %10llx", field_name,
tracing_map_read_sum(elt, i));
@@ -1263,6 +1456,9 @@ static void hist_field_print(struct seq_
{
const char *field_name = hist_field_name(hist_field, 0);

+ if (hist_field->var.name)
+ seq_printf(m, "%s=", hist_field->var.name);
+
if (hist_field->flags & HIST_FIELD_FL_TIMESTAMP)
seq_puts(m, "$common_timestamp");
else if (field_name)
@@ -1281,7 +1477,8 @@ static int event_hist_trigger_print(stru
struct event_trigger_data *data)
{
struct hist_trigger_data *hist_data = data->private_data;
- struct hist_field *key_field;
+ bool have_var_only = false;
+ struct hist_field *field;
unsigned int i;

seq_puts(m, "hist:");
@@ -1292,25 +1489,47 @@ static int event_hist_trigger_print(stru
seq_puts(m, "keys=");

for_each_hist_key_field(i, hist_data) {
- key_field = hist_data->fields[i];
+ field = hist_data->fields[i];

if (i > hist_data->n_vals)
seq_puts(m, ",");

- if (key_field->flags & HIST_FIELD_FL_STACKTRACE)
+ if (field->flags & HIST_FIELD_FL_STACKTRACE)
seq_puts(m, "stacktrace");
else
- hist_field_print(m, key_field);
+ hist_field_print(m, field);
}

seq_puts(m, ":vals=");

for_each_hist_val_field(i, hist_data) {
+ field = hist_data->fields[i];
+ if (field->flags & HIST_FIELD_FL_VAR_ONLY) {
+ have_var_only = true;
+ continue;
+ }
+
if (i == HITCOUNT_IDX)
seq_puts(m, "hitcount");
else {
seq_puts(m, ",");
- hist_field_print(m, hist_data->fields[i]);
+ hist_field_print(m, field);
+ }
+ }
+
+ if (have_var_only) {
+ unsigned int n = 0;
+
+ seq_puts(m, ":");
+
+ for_each_hist_val_field(i, hist_data) {
+ field = hist_data->fields[i];
+
+ if (field->flags & HIST_FIELD_FL_VAR_ONLY) {
+ if (n++)
+ seq_puts(m, ",");
+ hist_field_print(m, field);
+ }
}
}

@@ -1318,7 +1537,10 @@ static int event_hist_trigger_print(stru

for (i = 0; i < hist_data->n_sort_keys; i++) {
struct tracing_map_sort_key *sort_key;
- unsigned int idx;
+ unsigned int idx, first_key_idx;
+
+ /* skip VAR_ONLY vals */
+ first_key_idx = hist_data->n_vals - hist_data->n_var_only;

sort_key = &hist_data->sort_keys[i];
idx = sort_key->field_idx;
@@ -1331,8 +1553,11 @@ static int event_hist_trigger_print(stru

if (idx == HITCOUNT_IDX)
seq_puts(m, "hitcount");
- else
+ else {
+ if (idx >= first_key_idx)
+ idx += hist_data->n_var_only;
hist_field_print(m, hist_data->fields[idx]);
+ }

if (sort_key->descending)
seq_puts(m, ".descending");
@@ -1656,12 +1881,16 @@ static int event_hist_trigger_func(struc
struct hist_trigger_attrs *attrs;
struct event_trigger_ops *trigger_ops;
struct hist_trigger_data *hist_data;
+ bool remove = false;
char *trigger;
int ret = 0;

if (!param)
return -EINVAL;

+ if (glob[0] == '!')
+ remove = true;
+
/* separate the trigger from the filter (k:v [if filter]) */
trigger = strsep(&param, " \t");
if (!trigger)
@@ -1674,7 +1903,7 @@ static int event_hist_trigger_func(struc
if (attrs->map_bits)
hist_trigger_bits = attrs->map_bits;

- hist_data = create_hist_data(hist_trigger_bits, attrs, file);
+ hist_data = create_hist_data(hist_trigger_bits, attrs, file, remove);
if (IS_ERR(hist_data)) {
destroy_hist_trigger_attrs(attrs);
return PTR_ERR(hist_data);
@@ -1703,7 +1932,7 @@ static int event_hist_trigger_func(struc
goto out_free;
}

- if (glob[0] == '!') {
+ if (remove) {
cmd_ops->unreg(glob+1, trigger_ops, trigger_data, file);
ret = 0;
goto out_free;
@@ -1,9 +1,8 @@
From 1c3c5eab171590f86edd8d31389d61dd1efe3037 Mon Sep 17 00:00:00 2001
From: Thomas Gleixner <tglx@linutronix.de>
Date: Tue, 16 May 2017 20:42:48 +0200
Subject: [PATCH 17/17] sched/core: Enable might_sleep() and smp_processor_id()
 checks early
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

might_sleep() and smp_processor_id() checks are enabled after the boot
process is done. That hides bugs in the SMP bringup and driver
43 debian/patches/features/all/rt/0017-tracing-Account-for-variables-in-named-trigger-compa.patch vendored Normal file
@@ -0,0 +1,43 @@
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:18 -0500
Subject: [PATCH 17/32] tracing: Account for variables in named trigger
 compatibility
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Named triggers must also have the same set of variables in order to be
considered compatible - update the trigger match test to account for
that.

The reason for this requirement is that named triggers with variables
are meant to allow one or more events to set the same variable.

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/trace/trace_events_hist.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -1545,7 +1545,7 @@ static int event_hist_trigger_print(stru
sort_key = &hist_data->sort_keys[i];
idx = sort_key->field_idx;

- if (WARN_ON(idx >= TRACING_MAP_FIELDS_MAX))
+ if (WARN_ON(idx >= HIST_FIELDS_MAX))
return -EINVAL;

if (i > 0)
@@ -1733,6 +1733,12 @@ static bool hist_trigger_match(struct ev
return false;
if (key_field->is_signed != key_field_test->is_signed)
return false;
+ if ((key_field->var.name && !key_field_test->var.name) ||
+ (!key_field->var.name && key_field_test->var.name))
+ return false;
+ if ((key_field->var.name && key_field_test->var.name) &&
+ strcmp(key_field->var.name, key_field_test->var.name) != 0)
+ return false;
}

for (i = 0; i < hist_data->n_sort_keys; i++) {
603 debian/patches/features/all/rt/0018-tracing-Add-simple-expression-support-to-hist-trigge.patch vendored Normal file
@@ -0,0 +1,603 @@
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:19 -0500
Subject: [PATCH 18/32] tracing: Add simple expression support to hist triggers
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Add support for simple addition, subtraction, and unary expressions
(-(expr) and expr, where expr = b-a, a+b, a+b+c) to hist triggers, in
order to support a minimal set of useful inter-event calculations.

These operations are needed for calculating latencies between events
(timestamp1-timestamp0) and for combined latencies (latencies over 3
or more events).

In the process, factor out some common code from key and value
parsing.

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/trace/trace_events_hist.c | 457 +++++++++++++++++++++++++++++++++------
 1 file changed, 390 insertions(+), 67 deletions(-)

--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -32,6 +32,13 @@ typedef u64 (*hist_field_fn_t) (struct h
#define HIST_FIELD_OPERANDS_MAX 2
#define HIST_FIELDS_MAX (TRACING_MAP_FIELDS_MAX + TRACING_MAP_VARS_MAX)

+enum field_op_id {
+ FIELD_OP_NONE,
+ FIELD_OP_PLUS,
+ FIELD_OP_MINUS,
+ FIELD_OP_UNARY_MINUS,
+};
+
struct hist_var {
char *name;
struct hist_trigger_data *hist_data;
@@ -48,6 +55,8 @@ struct hist_field {
struct hist_field *operands[HIST_FIELD_OPERANDS_MAX];
struct hist_trigger_data *hist_data;
struct hist_var var;
+ enum field_op_id operator;
+ char *name;
};

static u64 hist_field_none(struct hist_field *field, void *event,
@@ -98,6 +107,41 @@ static u64 hist_field_log2(struct hist_f
return (u64) ilog2(roundup_pow_of_two(val));
}

+static u64 hist_field_plus(struct hist_field *hist_field, void *event,
+ struct ring_buffer_event *rbe)
+{
+ struct hist_field *operand1 = hist_field->operands[0];
+ struct hist_field *operand2 = hist_field->operands[1];
+
+ u64 val1 = operand1->fn(operand1, event, rbe);
+ u64 val2 = operand2->fn(operand2, event, rbe);
+
+ return val1 + val2;
+}
+
+static u64 hist_field_minus(struct hist_field *hist_field, void *event,
+ struct ring_buffer_event *rbe)
+{
+ struct hist_field *operand1 = hist_field->operands[0];
+ struct hist_field *operand2 = hist_field->operands[1];
+
+ u64 val1 = operand1->fn(operand1, event, rbe);
+ u64 val2 = operand2->fn(operand2, event, rbe);
+
+ return val1 - val2;
+}
+
+static u64 hist_field_unary_minus(struct hist_field *hist_field, void *event,
+ struct ring_buffer_event *rbe)
+{
+ struct hist_field *operand = hist_field->operands[0];
+
+ s64 sval = (s64)operand->fn(operand, event, rbe);
+ u64 val = (u64)-sval;
+
+ return val;
+}
+
#define DEFINE_HIST_FIELD_FN(type) \
static u64 hist_field_##type(struct hist_field *hist_field, \
void *event, \
@@ -148,6 +192,7 @@ enum hist_field_flags {
HIST_FIELD_FL_TIMESTAMP_USECS = 2048,
HIST_FIELD_FL_VAR = 4096,
HIST_FIELD_FL_VAR_ONLY = 8192,
+ HIST_FIELD_FL_EXPR = 16384,
};

struct hist_trigger_attrs {
@@ -210,6 +255,8 @@ static const char *hist_field_name(struc
field_name = hist_field_name(field->operands[0], ++level);
else if (field->flags & HIST_FIELD_FL_TIMESTAMP)
field_name = "$common_timestamp";
+ else if (field->flags & HIST_FIELD_FL_EXPR)
+ field_name = field->name;

if (field_name == NULL)
field_name = "";
@@ -444,6 +491,73 @@ static const struct tracing_map_ops hist
.elt_init = hist_trigger_elt_comm_init,
};

+static char *expr_str(struct hist_field *field, unsigned int level)
+{
+ char *expr = kzalloc(MAX_FILTER_STR_VAL, GFP_KERNEL);
+
+ if (!expr || level > 1)
+ return NULL;
+
+ if (field->operator == FIELD_OP_UNARY_MINUS) {
+ char *subexpr;
+
+ strcat(expr, "-(");
+ subexpr = expr_str(field->operands[0], ++level);
+ if (!subexpr) {
+ kfree(expr);
+ return NULL;
+ }
+ strcat(expr, subexpr);
+ strcat(expr, ")");
+
+ return expr;
+ }
+
+ strcat(expr, hist_field_name(field->operands[0], 0));
+
+ switch (field->operator) {
+ case FIELD_OP_MINUS:
+ strcat(expr, "-");
+ break;
+ case FIELD_OP_PLUS:
+ strcat(expr, "+");
+ break;
+ default:
+ kfree(expr);
+ return NULL;
+ }
+
+ strcat(expr, hist_field_name(field->operands[1], 0));
+
+ return expr;
+}
+
+static int contains_operator(char *str)
+{
+ enum field_op_id field_op = FIELD_OP_NONE;
+ char *op;
+
+ op = strpbrk(str, "+-");
+ if (!op)
+ return FIELD_OP_NONE;
+
+ switch (*op) {
+ case '-':
+ if (*str == '-')
+ field_op = FIELD_OP_UNARY_MINUS;
+ else
+ field_op = FIELD_OP_MINUS;
+ break;
+ case '+':
+ field_op = FIELD_OP_PLUS;
+ break;
+ default:
+ break;
+ }
+
+ return field_op;
+}
+
static void destroy_hist_field(struct hist_field *hist_field,
unsigned int level)
{
@@ -459,6 +573,7 @@ static void destroy_hist_field(struct hi
destroy_hist_field(hist_field->operands[i], ++level);

kfree(hist_field->var.name);
+ kfree(hist_field->name);

kfree(hist_field);
}
@@ -479,6 +594,9 @@ static struct hist_field *create_hist_fi

hist_field->hist_data = hist_data;

+ if (flags & HIST_FIELD_FL_EXPR)
+ goto out; /* caller will populate */
+
if (flags & HIST_FIELD_FL_HITCOUNT) {
hist_field->fn = hist_field_counter;
goto out;
@@ -551,6 +669,247 @@ static void destroy_hist_fields(struct h
}
}

+static struct ftrace_event_field *
+parse_field(struct hist_trigger_data *hist_data, struct trace_event_file *file,
+ char *field_str, unsigned long *flags)
+{
+ struct ftrace_event_field *field = NULL;
+ char *field_name;
+
+ field_name = strsep(&field_str, ".");
+ if (field_str) {
+ if (strcmp(field_str, "hex") == 0)
+ *flags |= HIST_FIELD_FL_HEX;
+ else if (strcmp(field_str, "sym") == 0)
+ *flags |= HIST_FIELD_FL_SYM;
+ else if (strcmp(field_str, "sym-offset") == 0)
+ *flags |= HIST_FIELD_FL_SYM_OFFSET;
+ else if ((strcmp(field_str, "execname") == 0) &&
+ (strcmp(field_name, "common_pid") == 0))
+ *flags |= HIST_FIELD_FL_EXECNAME;
+ else if (strcmp(field_str, "syscall") == 0)
+ *flags |= HIST_FIELD_FL_SYSCALL;
+ else if (strcmp(field_str, "log2") == 0)
+ *flags |= HIST_FIELD_FL_LOG2;
+ else if (strcmp(field_str, "usecs") == 0)
|
||||
+ *flags |= HIST_FIELD_FL_TIMESTAMP_USECS;
|
||||
+ else
|
||||
+ return ERR_PTR(-EINVAL);
|
||||
+ }
|
||||
+
|
||||
+ if (strcmp(field_name, "$common_timestamp") == 0) {
|
||||
+ *flags |= HIST_FIELD_FL_TIMESTAMP;
|
||||
+ hist_data->enable_timestamps = true;
|
||||
+ if (*flags & HIST_FIELD_FL_TIMESTAMP_USECS)
|
||||
+ hist_data->attrs->ts_in_usecs = true;
|
||||
+ } else {
|
||||
+ field = trace_find_event_field(file->event_call, field_name);
|
||||
+ if (!field)
|
||||
+ return ERR_PTR(-EINVAL);
|
||||
+ }
|
||||
+
|
||||
+ return field;
|
||||
+}
|
||||
+
|
||||
+struct hist_field *parse_atom(struct hist_trigger_data *hist_data,
|
||||
+ struct trace_event_file *file, char *str,
|
||||
+ unsigned long *flags, char *var_name)
|
||||
+{
|
||||
+ struct ftrace_event_field *field = NULL;
|
||||
+ struct hist_field *hist_field = NULL;
|
||||
+ int ret = 0;
|
||||
+
|
||||
+ field = parse_field(hist_data, file, str, flags);
|
||||
+ if (IS_ERR(field)) {
|
||||
+ ret = PTR_ERR(field);
|
||||
+ goto out;
|
||||
+ }
|
||||
+
|
||||
+ hist_field = create_hist_field(hist_data, field, *flags, var_name);
|
||||
+ if (!hist_field) {
|
||||
+ ret = -ENOMEM;
|
||||
+ goto out;
|
||||
+ }
|
||||
+
|
||||
+ return hist_field;
|
||||
+ out:
|
||||
+ return ERR_PTR(ret);
|
||||
+}
|
||||
+
|
||||
+static struct hist_field *parse_expr(struct hist_trigger_data *hist_data,
|
||||
+ struct trace_event_file *file,
|
||||
+ char *str, unsigned long flags,
|
||||
+ char *var_name, unsigned int level);
|
||||
+
|
||||
+static struct hist_field *parse_unary(struct hist_trigger_data *hist_data,
|
||||
+ struct trace_event_file *file,
|
||||
+ char *str, unsigned long flags,
|
||||
+ char *var_name, unsigned int level)
|
||||
+{
|
||||
+ struct hist_field *operand1, *expr = NULL;
|
||||
+ unsigned long operand_flags;
|
||||
+ char *operand1_str;
|
||||
+ int ret = 0;
|
||||
+ char *s;
|
||||
+
|
||||
+ // we support only -(xxx) i.e. explicit parens required
|
||||
+
|
||||
+ if (level > 2) {
|
||||
+ ret = -EINVAL;
|
||||
+ goto free;
|
||||
+ }
|
||||
+
|
||||
+ str++; // skip leading '-'
|
||||
+
|
||||
+ s = strchr(str, '(');
|
||||
+ if (s)
|
||||
+ str++;
|
||||
+ else {
|
||||
+ ret = -EINVAL;
|
||||
+ goto free;
|
||||
+ }
|
||||
+
|
||||
+ s = strchr(str, ')');
|
||||
+ if (s)
|
||||
+ *s = '\0';
|
||||
+ else {
|
||||
+ ret = -EINVAL; // no closing ')'
|
||||
+ goto free;
|
||||
+ }
|
||||
+
|
||||
+ operand1_str = strsep(&str, "(");
|
||||
+ if (!operand1_str)
|
||||
+ goto free;
|
||||
+
|
||||
+ flags |= HIST_FIELD_FL_EXPR;
|
||||
+ expr = create_hist_field(hist_data, NULL, flags, var_name);
|
||||
+ if (!expr) {
|
||||
+ ret = -ENOMEM;
|
||||
+ goto free;
|
||||
+ }
|
||||
+
|
||||
+ operand_flags = 0;
|
||||
+ operand1 = parse_expr(hist_data, file, str, operand_flags, NULL, ++level);
|
||||
+ if (IS_ERR(operand1)) {
|
||||
+ ret = PTR_ERR(operand1);
|
||||
+ goto free;
|
||||
+ }
|
||||
+
|
||||
+ if (operand1 == NULL) {
|
||||
+ operand_flags = 0;
|
||||
+ operand1 = parse_atom(hist_data, file, operand1_str,
|
||||
+ &operand_flags, NULL);
|
||||
+ if (IS_ERR(operand1)) {
|
||||
+ ret = PTR_ERR(operand1);
|
||||
+ goto free;
|
||||
+ }
|
||||
+ }
|
||||
+
|
||||
+ expr->fn = hist_field_unary_minus;
|
||||
+ expr->operands[0] = operand1;
|
||||
+ expr->operator = FIELD_OP_UNARY_MINUS;
|
||||
+ expr->name = expr_str(expr, 0);
|
||||
+
|
||||
+ return expr;
|
||||
+ free:
|
||||
+ return ERR_PTR(ret);
|
||||
+}
|
||||
+
|
||||
+static struct hist_field *parse_expr(struct hist_trigger_data *hist_data,
|
||||
+ struct trace_event_file *file,
|
||||
+ char *str, unsigned long flags,
|
||||
+ char *var_name, unsigned int level)
|
||||
+{
|
||||
+ struct hist_field *operand1 = NULL, *operand2 = NULL, *expr = NULL;
|
||||
+ unsigned long operand_flags;
|
||||
+ int field_op, ret = -EINVAL;
|
||||
+ char *sep, *operand1_str;
|
||||
+
|
||||
+ if (level > 2)
|
||||
+ return NULL;
|
||||
+
|
||||
+ field_op = contains_operator(str);
|
||||
+ if (field_op == FIELD_OP_NONE)
|
||||
+ return NULL;
|
||||
+
|
||||
+ if (field_op == FIELD_OP_UNARY_MINUS)
|
||||
+ return parse_unary(hist_data, file, str, flags, var_name, ++level);
|
||||
+
|
||||
+ switch (field_op) {
|
||||
+ case FIELD_OP_MINUS:
|
||||
+ sep = "-";
|
||||
+ break;
|
||||
+ case FIELD_OP_PLUS:
|
||||
+ sep = "+";
|
||||
+ break;
|
||||
+ default:
|
||||
+ goto free;
|
||||
+ }
|
||||
+
|
||||
+ operand1_str = strsep(&str, sep);
|
||||
+ if (!operand1_str || !str)
|
||||
+ goto free;
|
||||
+
|
||||
+ operand_flags = 0;
|
||||
+ operand1 = parse_atom(hist_data, file, operand1_str,
|
||||
+ &operand_flags, NULL);
|
||||
+ if (IS_ERR(operand1)) {
|
||||
+ ret = PTR_ERR(operand1);
|
||||
+ operand1 = NULL;
|
||||
+ goto free;
|
||||
+ }
|
||||
+
|
||||
+ // rest of string could be another expression e.g. b+c in a+b+c
|
||||
+ operand_flags = 0;
|
||||
+ operand2 = parse_expr(hist_data, file, str, operand_flags, NULL, ++level);
|
||||
+ if (IS_ERR(operand2)) {
|
||||
+ ret = PTR_ERR(operand2);
|
||||
+ operand2 = NULL;
|
||||
+ goto free;
|
||||
+ }
|
||||
+ if (!operand2) {
|
||||
+ operand_flags = 0;
|
||||
+ operand2 = parse_atom(hist_data, file, str,
|
||||
+ &operand_flags, NULL);
|
||||
+ if (IS_ERR(operand2)) {
|
||||
+ ret = PTR_ERR(operand2);
|
||||
+ operand2 = NULL;
|
||||
+ goto free;
|
||||
+ }
|
||||
+ }
|
||||
+
|
||||
+ flags |= HIST_FIELD_FL_EXPR;
|
||||
+ expr = create_hist_field(hist_data, NULL, flags, var_name);
|
||||
+ if (!expr) {
|
||||
+ ret = -ENOMEM;
|
||||
+ goto free;
|
||||
+ }
|
||||
+
|
||||
+ expr->operands[0] = operand1;
|
||||
+ expr->operands[1] = operand2;
|
||||
+ expr->operator = field_op;
|
||||
+ expr->name = expr_str(expr, 0);
|
||||
+
|
||||
+ switch (field_op) {
|
||||
+ case FIELD_OP_MINUS:
|
||||
+ expr->fn = hist_field_minus;
|
||||
+ break;
|
||||
+ case FIELD_OP_PLUS:
|
||||
+ expr->fn = hist_field_plus;
|
||||
+ break;
|
||||
+ default:
|
||||
+ goto free;
|
||||
+ }
|
||||
+
|
||||
+ return expr;
|
||||
+ free:
|
||||
+ destroy_hist_field(operand1, 0);
|
||||
+ destroy_hist_field(operand2, 0);
|
||||
+ destroy_hist_field(expr, 0);
|
||||
+
|
||||
+ return ERR_PTR(ret);
|
||||
+}
|
||||
+
|
||||
static int create_hitcount_val(struct hist_trigger_data *hist_data)
|
||||
{
|
||||
hist_data->fields[HITCOUNT_IDX] =
|
||||
@@ -609,9 +968,9 @@ static int create_val_field(struct hist_
|
||||
struct trace_event_file *file,
|
||||
char *field_str, bool var_only)
|
||||
{
|
||||
- struct ftrace_event_field *field = NULL;
|
||||
- char *field_name, *var_name;
|
||||
+ struct hist_field *hist_field;
|
||||
unsigned long flags = 0;
|
||||
+ char *var_name;
|
||||
int ret = 0;
|
||||
|
||||
if (WARN_ON(!var_only && val_idx >= TRACING_MAP_VALS_MAX))
|
||||
@@ -642,37 +1001,27 @@ static int create_val_field(struct hist_
|
||||
goto out;
|
||||
}
|
||||
|
||||
- field_name = strsep(&field_str, ".");
|
||||
- if (field_str) {
|
||||
- if (strcmp(field_str, "hex") == 0)
|
||||
- flags |= HIST_FIELD_FL_HEX;
|
||||
- else {
|
||||
- ret = -EINVAL;
|
||||
- goto out;
|
||||
- }
|
||||
+ hist_field = parse_expr(hist_data, file, field_str, flags, var_name, 0);
|
||||
+ if (IS_ERR(hist_field)) {
|
||||
+ ret = PTR_ERR(hist_field);
|
||||
+ goto out;
|
||||
}
|
||||
|
||||
- if (strcmp(field_name, "$common_timestamp") == 0) {
|
||||
- flags |= HIST_FIELD_FL_TIMESTAMP;
|
||||
- hist_data->enable_timestamps = true;
|
||||
- } else {
|
||||
- field = trace_find_event_field(file->event_call, field_name);
|
||||
- if (!field) {
|
||||
- ret = -EINVAL;
|
||||
+ if (!hist_field) {
|
||||
+ hist_field = parse_atom(hist_data, file, field_str,
|
||||
+ &flags, var_name);
|
||||
+ if (IS_ERR(hist_field)) {
|
||||
+ ret = PTR_ERR(hist_field);
|
||||
goto out;
|
||||
}
|
||||
}
|
||||
|
||||
- hist_data->fields[val_idx] = create_hist_field(hist_data, field, flags, var_name);
|
||||
- if (!hist_data->fields[val_idx]) {
|
||||
- ret = -ENOMEM;
|
||||
- goto out;
|
||||
- }
|
||||
+ hist_data->fields[val_idx] = hist_field;
|
||||
|
||||
++hist_data->n_vals;
|
||||
++hist_data->n_fields;
|
||||
|
||||
- if (hist_data->fields[val_idx]->flags & HIST_FIELD_FL_VAR_ONLY)
|
||||
+ if (hist_field->flags & HIST_FIELD_FL_VAR_ONLY)
|
||||
hist_data->n_var_only++;
|
||||
|
||||
if (WARN_ON(hist_data->n_vals > TRACING_MAP_VALS_MAX + TRACING_MAP_VARS_MAX))
|
||||
@@ -726,8 +1075,8 @@ static int create_key_field(struct hist_
|
||||
struct trace_event_file *file,
|
||||
char *field_str)
|
||||
{
|
||||
- struct ftrace_event_field *field = NULL;
|
||||
struct hist_field *hist_field = NULL;
|
||||
+
|
||||
unsigned long flags = 0;
|
||||
unsigned int key_size;
|
||||
char *var_name;
|
||||
@@ -754,60 +1103,33 @@ static int create_key_field(struct hist_
|
||||
key_size = sizeof(unsigned long) * HIST_STACKTRACE_DEPTH;
|
||||
hist_field = create_hist_field(hist_data, NULL, flags, var_name);
|
||||
} else {
|
||||
- char *field_name = strsep(&field_str, ".");
|
||||
-
|
||||
- if (field_str) {
|
||||
- if (strcmp(field_str, "hex") == 0)
|
||||
- flags |= HIST_FIELD_FL_HEX;
|
||||
- else if (strcmp(field_str, "sym") == 0)
|
||||
- flags |= HIST_FIELD_FL_SYM;
|
||||
- else if (strcmp(field_str, "sym-offset") == 0)
|
||||
- flags |= HIST_FIELD_FL_SYM_OFFSET;
|
||||
- else if ((strcmp(field_str, "execname") == 0) &&
|
||||
- (strcmp(field_name, "common_pid") == 0))
|
||||
- flags |= HIST_FIELD_FL_EXECNAME;
|
||||
- else if (strcmp(field_str, "syscall") == 0)
|
||||
- flags |= HIST_FIELD_FL_SYSCALL;
|
||||
- else if (strcmp(field_str, "log2") == 0)
|
||||
- flags |= HIST_FIELD_FL_LOG2;
|
||||
- else if (strcmp(field_str, "usecs") == 0)
|
||||
- flags |= HIST_FIELD_FL_TIMESTAMP_USECS;
|
||||
- else {
|
||||
- ret = -EINVAL;
|
||||
- goto out;
|
||||
- }
|
||||
+ hist_field = parse_expr(hist_data, file, field_str, flags,
|
||||
+ var_name, 0);
|
||||
+ if (IS_ERR(hist_field)) {
|
||||
+ ret = PTR_ERR(hist_field);
|
||||
+ goto out;
|
||||
}
|
||||
|
||||
- if (strcmp(field_name, "$common_timestamp") == 0) {
|
||||
- flags |= HIST_FIELD_FL_TIMESTAMP;
|
||||
- hist_data->enable_timestamps = true;
|
||||
- if (flags & HIST_FIELD_FL_TIMESTAMP_USECS)
|
||||
- hist_data->attrs->ts_in_usecs = true;
|
||||
- key_size = sizeof(u64);
|
||||
- } else {
|
||||
- field = trace_find_event_field(file->event_call, field_name);
|
||||
- if (!field) {
|
||||
- ret = -EINVAL;
|
||||
+ if (!hist_field) {
|
||||
+ hist_field = parse_atom(hist_data, file, field_str,
|
||||
+ &flags, var_name);
|
||||
+ if (IS_ERR(hist_field)) {
|
||||
+ ret = PTR_ERR(hist_field);
|
||||
goto out;
|
||||
}
|
||||
-
|
||||
- if (is_string_field(field))
|
||||
- key_size = MAX_FILTER_STR_VAL;
|
||||
- else
|
||||
- key_size = field->size;
|
||||
}
|
||||
- }
|
||||
|
||||
- hist_data->fields[key_idx] = create_hist_field(hist_data, field, flags, var_name);
|
||||
- if (!hist_data->fields[key_idx]) {
|
||||
- ret = -ENOMEM;
|
||||
- goto out;
|
||||
+ key_size = hist_field->size;
|
||||
}
|
||||
|
||||
+ hist_data->fields[key_idx] = hist_field;
|
||||
+
|
||||
key_size = ALIGN(key_size, sizeof(u64));
|
||||
hist_data->fields[key_idx]->size = key_size;
|
||||
hist_data->fields[key_idx]->offset = key_offset;
|
||||
+
|
||||
hist_data->key_size += key_size;
|
||||
+
|
||||
if (hist_data->key_size > HIST_KEY_SIZE_MAX) {
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
@@ -1330,7 +1652,8 @@ hist_trigger_entry_print(struct seq_file
|
||||
for (i = 1; i < hist_data->n_vals; i++) {
|
||||
field_name = hist_field_name(hist_data->fields[i], 0);
|
||||
|
||||
- if (hist_data->fields[i]->flags & HIST_FIELD_FL_VAR)
|
||||
+ if (hist_data->fields[i]->flags & HIST_FIELD_FL_VAR ||
|
||||
+ hist_data->fields[i]->flags & HIST_FIELD_FL_EXPR)
|
||||
continue;
|
||||
|
||||
if (hist_data->fields[i]->flags & HIST_FIELD_FL_HEX) {
1123 debian/patches/features/all/rt/0019-tracing-Add-variable-reference-handling-to-hist-trig.patch vendored Normal file
File diff suppressed because it is too large

196 debian/patches/features/all/rt/0020-tracing-Add-support-for-dynamic-tracepoints.patch vendored Normal file
@@ -0,0 +1,196 @@
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:21 -0500
Subject: [PATCH 20/32] tracing: Add support for dynamic tracepoints
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

The tracepoint infrastructure assumes statically-defined tracepoints
and uses static_keys for tracepoint enablement. In order to define
tracepoints on the fly, we need to have a dynamic counterpart.

Add a dynamic_tracepoint_probe_register() and a dynamic param onto
tracepoint_probe_unregister() for this purpose.

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 include/linux/tracepoint.h  | 11 +++++++----
 kernel/trace/trace_events.c |  4 ++--
 kernel/tracepoint.c         | 42 ++++++++++++++++++++++++++++++------------
 3 files changed, 39 insertions(+), 18 deletions(-)

--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -37,9 +37,12 @@ extern int
 tracepoint_probe_register(struct tracepoint *tp, void *probe, void *data);
 extern int
 tracepoint_probe_register_prio(struct tracepoint *tp, void *probe, void *data,
-			       int prio);
+			       int prio, bool dynamic);
+extern int dynamic_tracepoint_probe_register(struct tracepoint *tp,
+					     void *probe, void *data);
 extern int
-tracepoint_probe_unregister(struct tracepoint *tp, void *probe, void *data);
+tracepoint_probe_unregister(struct tracepoint *tp, void *probe, void *data,
+			    bool dynamic);
 extern void
 for_each_kernel_tracepoint(void (*fct)(struct tracepoint *tp, void *priv),
 			   void *priv);
@@ -206,13 +209,13 @@ extern void syscall_unregfunc(void);
 				   int prio)				\
 	{								\
 		return tracepoint_probe_register_prio(&__tracepoint_##name, \
-					      (void *)probe, data, prio); \
+				      (void *)probe, data, prio, false); \
 	}								\
 	static inline int						\
 	unregister_trace_##name(void (*probe)(data_proto), void *data)	\
 	{								\
 		return tracepoint_probe_unregister(&__tracepoint_##name,\
-						(void *)probe, data);	\
+					(void *)probe, data, false);	\
 	}								\
 	static inline void						\
 	check_trace_callback_type_##name(void (*cb)(data_proto))	\
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -297,7 +297,7 @@ int trace_event_reg(struct trace_event_c
 	case TRACE_REG_UNREGISTER:
 		tracepoint_probe_unregister(call->tp,
 					    call->class->probe,
-					    file);
+					    file, false);
 		return 0;

 #ifdef CONFIG_PERF_EVENTS
@@ -308,7 +308,7 @@ int trace_event_reg(struct trace_event_c
 	case TRACE_REG_PERF_UNREGISTER:
 		tracepoint_probe_unregister(call->tp,
 					    call->class->perf_probe,
-					    call);
+					    call, false);
 		return 0;
 	case TRACE_REG_PERF_OPEN:
 	case TRACE_REG_PERF_CLOSE:
--- a/kernel/tracepoint.c
+++ b/kernel/tracepoint.c
@@ -192,12 +192,15 @@ static void *func_remove(struct tracepoi
  * Add the probe function to a tracepoint.
  */
 static int tracepoint_add_func(struct tracepoint *tp,
-			       struct tracepoint_func *func, int prio)
+			       struct tracepoint_func *func, int prio,
+			       bool dynamic)
 {
 	struct tracepoint_func *old, *tp_funcs;
 	int ret;

-	if (tp->regfunc && !static_key_enabled(&tp->key)) {
+	if (tp->regfunc &&
+	    ((dynamic && !(atomic_read(&tp->key.enabled) > 0)) ||
+	     !static_key_enabled(&tp->key))) {
 		ret = tp->regfunc();
 		if (ret < 0)
 			return ret;
@@ -219,7 +222,9 @@ static int tracepoint_add_func(struct tr
 	 * is used.
 	 */
 	rcu_assign_pointer(tp->funcs, tp_funcs);
-	if (!static_key_enabled(&tp->key))
+	if (dynamic && !(atomic_read(&tp->key.enabled) > 0))
+		atomic_inc(&tp->key.enabled);
+	else if (!dynamic && !static_key_enabled(&tp->key))
 		static_key_slow_inc(&tp->key);
 	release_probes(old);
 	return 0;
@@ -232,7 +237,7 @@ static int tracepoint_add_func(struct tr
 * by preempt_disable around the call site.
 */
 static int tracepoint_remove_func(struct tracepoint *tp,
-		struct tracepoint_func *func)
+		struct tracepoint_func *func, bool dynamic)
 {
 	struct tracepoint_func *old, *tp_funcs;

@@ -246,10 +251,14 @@ static int tracepoint_remove_func(struct

 	if (!tp_funcs) {
 		/* Removed last function */
-		if (tp->unregfunc && static_key_enabled(&tp->key))
+		if (tp->unregfunc &&
+		    ((dynamic && (atomic_read(&tp->key.enabled) > 0)) ||
+		     static_key_enabled(&tp->key)))
 			tp->unregfunc();

-		if (static_key_enabled(&tp->key))
+		if (dynamic && (atomic_read(&tp->key.enabled) > 0))
+			atomic_dec(&tp->key.enabled);
+		else if (!dynamic && static_key_enabled(&tp->key))
 			static_key_slow_dec(&tp->key);
 	}
 	rcu_assign_pointer(tp->funcs, tp_funcs);
@@ -258,7 +267,7 @@ static int tracepoint_remove_func(struct
 }

 /**
- * tracepoint_probe_register - Connect a probe to a tracepoint
+ * tracepoint_probe_register_prio - Connect a probe to a tracepoint
 * @tp: tracepoint
 * @probe: probe handler
 * @data: tracepoint data
@@ -271,7 +280,7 @@ static int tracepoint_remove_func(struct
 * within module exit functions.
 */
 int tracepoint_probe_register_prio(struct tracepoint *tp, void *probe,
-				   void *data, int prio)
+				   void *data, int prio, bool dynamic)
 {
 	struct tracepoint_func tp_func;
 	int ret;
@@ -280,7 +289,7 @@ int tracepoint_probe_register_prio(struc
 	tp_func.func = probe;
 	tp_func.data = data;
 	tp_func.prio = prio;
-	ret = tracepoint_add_func(tp, &tp_func, prio);
+	ret = tracepoint_add_func(tp, &tp_func, prio, dynamic);
 	mutex_unlock(&tracepoints_mutex);
 	return ret;
 }
@@ -301,10 +310,18 @@ EXPORT_SYMBOL_GPL(tracepoint_probe_regis
 */
 int tracepoint_probe_register(struct tracepoint *tp, void *probe, void *data)
 {
-	return tracepoint_probe_register_prio(tp, probe, data, TRACEPOINT_DEFAULT_PRIO);
+	return tracepoint_probe_register_prio(tp, probe, data, TRACEPOINT_DEFAULT_PRIO, false);
 }
 EXPORT_SYMBOL_GPL(tracepoint_probe_register);

+int dynamic_tracepoint_probe_register(struct tracepoint *tp, void *probe,
+				      void *data)
+{
+	return tracepoint_probe_register_prio(tp, probe, data,
+					      TRACEPOINT_DEFAULT_PRIO, true);
+}
+EXPORT_SYMBOL_GPL(dynamic_tracepoint_probe_register);
+
 /**
 * tracepoint_probe_unregister - Disconnect a probe from a tracepoint
 * @tp: tracepoint
@@ -313,7 +330,8 @@ EXPORT_SYMBOL_GPL(tracepoint_probe_regis
 *
 * Returns 0 if ok, error value on error.
 */
-int tracepoint_probe_unregister(struct tracepoint *tp, void *probe, void *data)
+int tracepoint_probe_unregister(struct tracepoint *tp, void *probe, void *data,
+				bool dynamic)
 {
 	struct tracepoint_func tp_func;
 	int ret;
@@ -321,7 +339,7 @@ int tracepoint_probe_unregister(struct t
 	mutex_lock(&tracepoints_mutex);
 	tp_func.func = probe;
 	tp_func.data = data;
-	ret = tracepoint_remove_func(tp, &tp_func);
+	ret = tracepoint_remove_func(tp, &tp_func, dynamic);
 	mutex_unlock(&tracepoints_mutex);
 	return ret;
 }
228 debian/patches/features/all/rt/0021-tracing-Add-hist-trigger-action-hook.patch vendored Normal file
@@ -0,0 +1,228 @@
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:22 -0500
Subject: [PATCH 21/32] tracing: Add hist trigger action hook
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Add a hook for executing extra actions whenever a histogram entry is
added or updated.

The default 'action' when a hist entry is added to a histogram is to
update the set of values associated with it.  Some applications may
want to perform additional actions at that point, such as generate
another event, or compare and save a maximum.

Add a simple framework for doing that; specific actions will be
implemented on top of it in later patches.

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/trace/trace_events_hist.c | 114 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 111 insertions(+), 3 deletions(-)

--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -33,6 +33,7 @@ typedef u64 (*hist_field_fn_t) (struct h

 #define HIST_FIELD_OPERANDS_MAX	2
 #define HIST_FIELDS_MAX		(TRACING_MAP_FIELDS_MAX + TRACING_MAP_VARS_MAX)
+#define HIST_ACTIONS_MAX	8

 enum field_op_id {
 	FIELD_OP_NONE,
@@ -233,6 +234,9 @@ struct hist_trigger_attrs {

 	char		*assignment_str[TRACING_MAP_VARS_MAX];
 	unsigned int	n_assignments;
+
+	char		*action_str[HIST_ACTIONS_MAX];
+	unsigned int	n_actions;
 };

 struct hist_trigger_data {
@@ -252,6 +256,21 @@ struct hist_trigger_data {
 	bool			remove;
 	struct hist_field	*var_refs[TRACING_MAP_VARS_MAX];
 	unsigned int		n_var_refs;
+
+	struct action_data	*actions[HIST_ACTIONS_MAX];
+	unsigned int		n_actions;
+};
+
+struct action_data;
+
+typedef void (*action_fn_t) (struct hist_trigger_data *hist_data,
+			     struct tracing_map_elt *elt, void *rec,
+			     struct ring_buffer_event *rbe,
+			     struct action_data *data, u64 *var_ref_vals);
+
+struct action_data {
+	action_fn_t	fn;
+	unsigned int	var_ref_idx;
 };

 static u64 hist_field_timestamp(struct hist_field *hist_field,
@@ -681,6 +700,9 @@ static void destroy_hist_trigger_attrs(s
 	for (i = 0; i < attrs->n_assignments; i++)
 		kfree(attrs->assignment_str[i]);

+	for (i = 0; i < attrs->n_actions; i++)
+		kfree(attrs->action_str[i]);
+
 	kfree(attrs->name);
 	kfree(attrs->sort_key_str);
 	kfree(attrs->keys_str);
@@ -688,6 +710,16 @@ static void destroy_hist_trigger_attrs(s
 	kfree(attrs);
 }

+static int parse_action(char *str, struct hist_trigger_attrs *attrs)
+{
+	int ret = 0;
+
+	if (attrs->n_actions >= HIST_ACTIONS_MAX)
+		return ret;
+
+	return ret;
+}
+
 static int parse_assignment(char *str, struct hist_trigger_attrs *attrs)
 {
 	int ret = 0;
@@ -755,8 +787,9 @@ static struct hist_trigger_attrs *parse_
 		else if (strcmp(str, "clear") == 0)
 			attrs->clear = true;
 		else {
-			ret = -EINVAL;
-			goto free;
+			ret = parse_action(str, attrs);
+			if (ret)
+				goto free;
 		}
 	}

@@ -1722,11 +1755,63 @@ static int create_sort_keys(struct hist_
 	return ret;
 }

+static void destroy_actions(struct hist_trigger_data *hist_data)
+{
+	unsigned int i;
+
+	for (i = 0; i < hist_data->n_actions; i++) {
+		struct action_data *data = hist_data->actions[i];
+
+		kfree(data);
+	}
+}
+
+static int create_actions(struct hist_trigger_data *hist_data,
+			  struct trace_event_file *file)
+{
+	unsigned int i;
+	int ret = 0;
+	char *str;
+
+	for (i = 0; i < hist_data->attrs->n_actions; i++) {
+		str = hist_data->attrs->action_str[i];
+	}
+
+	return ret;
+}
+
+static void print_actions(struct seq_file *m,
+			  struct hist_trigger_data *hist_data,
+			  struct tracing_map_elt *elt)
+{
+	unsigned int i;
+
+	for (i = 0; i < hist_data->n_actions; i++) {
+		struct action_data *data = hist_data->actions[i];
+	}
+}
+
+static void print_actions_spec(struct seq_file *m,
+			       struct hist_trigger_data *hist_data)
+{
+	unsigned int i;
+
+	for (i = 0; i < hist_data->n_actions; i++) {
+		struct action_data *data = hist_data->actions[i];
+	}
+}
+
 static void destroy_hist_data(struct hist_trigger_data *hist_data)
 {
+	if (!hist_data)
+		return;
+
 	destroy_hist_trigger_attrs(hist_data->attrs);
 	destroy_hist_fields(hist_data);
 	tracing_map_destroy(hist_data->map);
+
+	destroy_actions(hist_data);
+
 	kfree(hist_data);
 }

@@ -1886,6 +1971,20 @@ static inline void add_to_key(char *comp
 	memcpy(compound_key + key_field->offset, key, size);
 }

+static void
+hist_trigger_actions(struct hist_trigger_data *hist_data,
+		     struct tracing_map_elt *elt, void *rec,
+		     struct ring_buffer_event *rbe, u64 *var_ref_vals)
+{
+	struct action_data *data;
+	unsigned int i;
+
+	for (i = 0; i < hist_data->n_actions; i++) {
+		data = hist_data->actions[i];
+		data->fn(hist_data, elt, rec, rbe, data, var_ref_vals);
+	}
+}
+
 static void event_hist_trigger(struct event_trigger_data *data, void *rec,
 			       struct ring_buffer_event *rbe)
 {
@@ -1941,6 +2040,9 @@ static void event_hist_trigger(struct ev
 		return;

 	hist_trigger_elt_update(hist_data, elt, rec, rbe, var_ref_vals);
+
+	if (resolve_var_refs(hist_data, key, var_ref_vals, true))
+		hist_trigger_actions(hist_data, elt, rec, rbe, var_ref_vals);
 }

 static void hist_trigger_stacktrace_print(struct seq_file *m,
@@ -2278,6 +2380,8 @@ static int event_hist_trigger_print(stru
 	}
 	seq_printf(m, ":size=%u", (1 << hist_data->map->map_bits));

+	print_actions_spec(m, hist_data);
+
 	if (data->filter_str)
 		seq_printf(m, " if %s", data->filter_str);

@@ -2740,6 +2844,10 @@ static int event_hist_trigger_func(struc
 	if (has_hist_vars(hist_data))
 		save_hist_vars(hist_data);

+	ret = create_actions(hist_data, file);
+	if (ret)
+		goto out_unreg;
+
 	ret = tracing_map_init(hist_data->map);
 	if (ret)
 		goto out_unreg;
@@ -2761,8 +2869,8 @@ static int event_hist_trigger_func(struc
 	remove_hist_vars(hist_data);

 	kfree(trigger_data);
-
 	destroy_hist_data(hist_data);
+
 	goto out;
 }
822 debian/patches/features/all/rt/0022-tracing-Add-support-for-synthetic-events.patch vendored Normal file
@@ -0,0 +1,822 @@
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:23 -0500
Subject: [PATCH 22/32] tracing: Add support for 'synthetic' events
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Synthetic events are user-defined events generated from hist trigger
variables saved from one or more other events.

To define a synthetic event, the user writes a simple specification
consisting of the name of the new event along with one or more
variables and their type(s), to the tracing/synthetic_events file.

For instance, the following creates a new event named 'wakeup_latency'
with 3 fields: lat, pid, and prio:

  # echo 'wakeup_latency u64 lat; pid_t pid; int prio' >> \
    /sys/kernel/debug/tracing/synthetic_events

Reading the tracing/synthetic_events file lists all the
currently-defined synthetic events, in this case the event we defined
above:

  # cat /sys/kernel/debug/tracing/synthetic_events
  wakeup_latency u64 lat; pid_t pid; int prio

At this point, the synthetic event is ready to use, and a histogram
can be defined using it:

  # echo 'hist:keys=pid,prio,lat.log2:sort=pid,lat' >> \
    /sys/kernel/debug/tracing/events/synthetic/wakeup_latency/trigger

The new event is created under the tracing/events/synthetic/ directory
and looks and behaves just like any other event:

  # ls /sys/kernel/debug/tracing/events/synthetic/wakeup_latency
  enable filter format hist id trigger

Although a histogram can be defined for it, nothing will happen until
an action tracing that event via the trace_synth() function occurs.
The trace_synth() function is very similar to all the other trace_*
invocations spread throughout the kernel, except in this case the
trace_ function and its corresponding tracepoint isn't statically
generated but defined by the user at run-time.

How this can be automatically hooked up via a hist trigger 'action' is
discussed in a subsequent patch.

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/trace/trace_events_hist.c | 738 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 738 insertions(+)

--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -20,10 +20,14 @@
 #include <linux/slab.h>
 #include <linux/stacktrace.h>
 #include <linux/rculist.h>
+#include <linux/tracefs.h>

 #include "tracing_map.h"
 #include "trace.h"

+#define SYNTH_SYSTEM		"synthetic"
+#define SYNTH_FIELDS_MAX	16
+
 struct hist_field;

 typedef u64 (*hist_field_fn_t) (struct hist_field *field,
@@ -261,6 +265,23 @@ struct hist_trigger_data {
 	unsigned int			n_actions;
 };

+struct synth_field {
+	char		*type;
+	char		*name;
+	unsigned int	size;
+	bool		is_signed;
+};
+
+struct synth_event {
+	struct list_head		list;
+	char				*name;
+	struct synth_field		**fields;
+	unsigned int			n_fields;
+	struct trace_event_class	class;
+	struct trace_event_call		call;
+	struct tracepoint		*tp;
+};
+
 struct action_data;

 typedef void (*action_fn_t) (struct hist_trigger_data *hist_data,
@@ -273,6 +294,688 @@ struct action_data {
 	unsigned int var_ref_idx;
 };

+static LIST_HEAD(synth_event_list);
+static DEFINE_MUTEX(synth_event_mutex);
+
+struct synth_trace_event {
+	struct trace_entry	ent;
+	int			n_fields;
+	u64			fields[];
+};
+
+static int synth_event_define_fields(struct trace_event_call *call)
|
||||
+{
|
||||
+ struct synth_trace_event trace;
|
||||
+ int offset = offsetof(typeof(trace), fields);
|
||||
+ struct synth_event *event = call->data;
|
||||
+ unsigned int i, size;
|
||||
+ char *name, *type;
|
||||
+ bool is_signed;
|
||||
+ int ret = 0;
|
||||
+
|
||||
+ for (i = 0; i < event->n_fields; i++) {
|
||||
+ size = event->fields[i]->size;
|
||||
+ is_signed = event->fields[i]->is_signed;
|
||||
+ type = event->fields[i]->type;
|
||||
+ name = event->fields[i]->name;
|
||||
+ ret = trace_define_field(call, type, name, offset, size,
|
||||
+ is_signed, FILTER_OTHER);
|
||||
+ offset += sizeof(u64);
|
||||
+ }
|
||||
+
|
||||
+ return ret;
|
||||
+}
|
||||
+
|
||||
+static enum print_line_t print_synth_event(struct trace_iterator *iter,
|
||||
+ int flags,
|
||||
+ struct trace_event *event)
|
||||
+{
|
||||
+ struct trace_array *tr = iter->tr;
|
||||
+ struct trace_seq *s = &iter->seq;
|
||||
+ struct synth_trace_event *entry;
|
||||
+ struct synth_event *se;
|
||||
+ unsigned int i;
|
||||
+
|
||||
+ entry = (struct synth_trace_event *)iter->ent;
|
||||
+ se = container_of(event, struct synth_event, call.event);
|
||||
+
|
||||
+ trace_seq_printf(s, "%s: ", se->name);
|
||||
+
|
||||
+ for (i = 0; i < entry->n_fields; i++) {
|
||||
+ if (trace_seq_has_overflowed(s))
|
||||
+ goto end;
|
||||
+
|
||||
+ /* parameter types */
|
||||
+ if (tr->trace_flags & TRACE_ITER_VERBOSE)
|
||||
+ trace_seq_printf(s, "%s ", "u64");
|
||||
+
|
||||
+ /* parameter values */
|
||||
+ trace_seq_printf(s, "%s=%llu%s", se->fields[i]->name,
|
||||
+ entry->fields[i],
|
||||
+ i == entry->n_fields - 1 ? "" : ", ");
|
||||
+ }
|
||||
+end:
|
||||
+ trace_seq_putc(s, '\n');
|
||||
+
|
||||
+ return trace_handle_return(s);
|
||||
+}
|
||||
+
|
||||
+static struct trace_event_functions synth_event_funcs = {
|
||||
+ .trace = print_synth_event
|
||||
+};
|
||||
+
|
||||
+static notrace void trace_event_raw_event_synth(void *__data,
|
||||
+ u64 *var_ref_vals,
|
||||
+ unsigned int var_ref_idx)
|
||||
+{
|
||||
+ struct trace_event_file *trace_file = __data;
|
||||
+ struct synth_trace_event *entry;
|
||||
+ struct trace_event_buffer fbuffer;
|
||||
+ int fields_size;
|
||||
+ unsigned int i;
|
||||
+
|
||||
+ struct synth_event *event;
|
||||
+
|
||||
+ event = trace_file->event_call->data;
|
||||
+
|
||||
+ if (trace_trigger_soft_disabled(trace_file))
|
||||
+ return;
|
||||
+
|
||||
+ fields_size = event->n_fields * sizeof(u64);
|
||||
+
|
||||
+ entry = trace_event_buffer_reserve(&fbuffer, trace_file,
|
||||
+ sizeof(*entry) + fields_size);
|
||||
+ if (!entry)
|
||||
+ return;
|
||||
+
|
||||
+ entry->n_fields = event->n_fields;
|
||||
+
|
||||
+ for (i = 0; i < event->n_fields; i++)
|
||||
+ entry->fields[i] = var_ref_vals[var_ref_idx + i];
|
||||
+
|
||||
+ trace_event_buffer_commit(&fbuffer);
|
||||
+}
|
||||
+
|
||||
+static void free_synth_event_print_fmt(struct trace_event_call *call)
|
||||
+{
|
||||
+ if (call)
|
||||
+ kfree(call->print_fmt);
|
||||
+}
|
||||
+
|
||||
+static int __set_synth_event_print_fmt(struct synth_event *event,
|
||||
+ char *buf, int len)
|
||||
+{
|
||||
+ int pos = 0;
|
||||
+ int i;
|
||||
+
|
||||
+ /* When len=0, we just calculate the needed length */
|
||||
+#define LEN_OR_ZERO (len ? len - pos : 0)
|
||||
+
|
||||
+ pos += snprintf(buf + pos, LEN_OR_ZERO, "\"");
|
||||
+ for (i = 0; i < event->n_fields; i++) {
|
||||
+ pos += snprintf(buf + pos, LEN_OR_ZERO, "%s: 0x%%0%zulx%s",
|
||||
+ event->fields[i]->name, sizeof(u64),
|
||||
+ i == event->n_fields - 1 ? "" : ", ");
|
||||
+ }
|
||||
+ pos += snprintf(buf + pos, LEN_OR_ZERO, "\"");
|
||||
+
|
||||
+ for (i = 0; i < event->n_fields; i++) {
|
||||
+ pos += snprintf(buf + pos, LEN_OR_ZERO,
|
||||
+ ", ((u64)(REC->%s))", event->fields[i]->name);
|
||||
+ }
|
||||
+
|
||||
+#undef LEN_OR_ZERO
|
||||
+
|
||||
+ /* return the length of print_fmt */
|
||||
+ return pos;
|
||||
+}
|
||||
+
|
||||
+static int set_synth_event_print_fmt(struct trace_event_call *call)
|
||||
+{
|
||||
+ struct synth_event *event = call->data;
|
||||
+ char *print_fmt;
|
||||
+ int len;
|
||||
+
|
||||
+ /* First: called with 0 length to calculate the needed length */
|
||||
+ len = __set_synth_event_print_fmt(event, NULL, 0);
|
||||
+
|
||||
+ print_fmt = kmalloc(len + 1, GFP_KERNEL);
|
||||
+ if (!print_fmt)
|
||||
+ return -ENOMEM;
|
||||
+
|
||||
+ /* Second: actually write the @print_fmt */
|
||||
+ __set_synth_event_print_fmt(event, print_fmt, len + 1);
|
||||
+ call->print_fmt = print_fmt;
|
||||
+
|
||||
+ return 0;
|
||||
+}
|
||||
+
|
||||
+int dynamic_trace_event_reg(struct trace_event_call *call,
|
||||
+ enum trace_reg type, void *data)
|
||||
+{
|
||||
+ struct trace_event_file *file = data;
|
||||
+
|
||||
+ WARN_ON(!(call->flags & TRACE_EVENT_FL_TRACEPOINT));
|
||||
+ switch (type) {
|
||||
+ case TRACE_REG_REGISTER:
|
||||
+ return dynamic_tracepoint_probe_register(call->tp,
|
||||
+ call->class->probe,
|
||||
+ file);
|
||||
+ case TRACE_REG_UNREGISTER:
|
||||
+ tracepoint_probe_unregister(call->tp,
|
||||
+ call->class->probe,
|
||||
+ file, true);
|
||||
+ return 0;
|
||||
+
|
||||
+#ifdef CONFIG_PERF_EVENTS
|
||||
+ case TRACE_REG_PERF_REGISTER:
|
||||
+ return dynamic_tracepoint_probe_register(call->tp,
|
||||
+ call->class->perf_probe,
|
||||
+ call);
|
||||
+ case TRACE_REG_PERF_UNREGISTER:
|
||||
+ tracepoint_probe_unregister(call->tp,
|
||||
+ call->class->perf_probe,
|
||||
+ call, true);
|
||||
+ return 0;
|
||||
+ case TRACE_REG_PERF_OPEN:
|
||||
+ case TRACE_REG_PERF_CLOSE:
|
||||
+ case TRACE_REG_PERF_ADD:
|
||||
+ case TRACE_REG_PERF_DEL:
|
||||
+ return 0;
|
||||
+#endif
|
||||
+ }
|
||||
+ return 0;
|
||||
+}
|
||||
+
|
||||
+static void free_synth_field(struct synth_field *field)
|
||||
+{
|
||||
+ kfree(field->type);
|
||||
+ kfree(field->name);
|
||||
+ kfree(field);
|
||||
+}
|
||||
+
|
||||
+static bool synth_field_signed(char *type)
|
||||
+{
|
||||
+ if (strncmp(type, "u", 1) == 0)
|
||||
+ return false;
|
||||
+
|
||||
+ return true;
|
||||
+}
|
||||
+
|
||||
+static unsigned int synth_field_size(char *type)
|
||||
+{
|
||||
+ unsigned int size = 0;
|
||||
+
|
||||
+ if (strcmp(type, "s64") == 0)
|
||||
+ size = sizeof(s64);
|
||||
+ else if (strcmp(type, "u64") == 0)
|
||||
+ size = sizeof(u64);
|
||||
+ else if (strcmp(type, "s32") == 0)
|
||||
+ size = sizeof(s32);
|
||||
+ else if (strcmp(type, "u32") == 0)
|
||||
+ size = sizeof(u32);
|
||||
+ else if (strcmp(type, "s16") == 0)
|
||||
+ size = sizeof(s16);
|
||||
+ else if (strcmp(type, "u16") == 0)
|
||||
+ size = sizeof(u16);
|
||||
+ else if (strcmp(type, "s8") == 0)
|
||||
+ size = sizeof(s8);
|
||||
+ else if (strcmp(type, "u8") == 0)
|
||||
+ size = sizeof(u8);
|
||||
+ else if (strcmp(type, "char") == 0)
|
||||
+ size = sizeof(char);
|
||||
+ else if (strcmp(type, "unsigned char") == 0)
|
||||
+ size = sizeof(unsigned char);
|
||||
+ else if (strcmp(type, "int") == 0)
|
||||
+ size = sizeof(int);
|
||||
+ else if (strcmp(type, "unsigned int") == 0)
|
||||
+ size = sizeof(unsigned int);
|
||||
+ else if (strcmp(type, "long") == 0)
|
||||
+ size = sizeof(long);
|
||||
+ else if (strcmp(type, "unsigned long") == 0)
|
||||
+ size = sizeof(unsigned long);
|
||||
+ else if (strcmp(type, "pid_t") == 0)
|
||||
+ size = sizeof(pid_t);
|
||||
+ else if (strstr(type, "[") == 0)
|
||||
+ size = sizeof(u64);
|
||||
+
|
||||
+ return size;
|
||||
+}
|
||||
+
|
||||
+static struct synth_field *parse_synth_field(char *field_type,
|
||||
+ char *field_name)
|
||||
+{
|
||||
+ struct synth_field *field;
|
||||
+ int len, ret = 0;
|
||||
+ char *array;
|
||||
+
|
||||
+ if (field_type[0] == ';')
|
||||
+ field_type++;
|
||||
+
|
||||
+ len = strlen(field_name);
|
||||
+ if (field_name[len - 1] == ';')
|
||||
+ field_name[len - 1] = '\0';
|
||||
+
|
||||
+ field = kzalloc(sizeof(*field), GFP_KERNEL);
|
||||
+ if (!field)
|
||||
+ return ERR_PTR(-ENOMEM);
|
||||
+
|
||||
+ len = strlen(field_type) + 1;
|
||||
+ array = strchr(field_name, '[');
|
||||
+ if (array)
|
||||
+ len += strlen(array);
|
||||
+ field->type = kzalloc(len, GFP_KERNEL);
|
||||
+ if (!field->type) {
|
||||
+ ret = -ENOMEM;
|
||||
+ goto free;
|
||||
+ }
|
||||
+ strcat(field->type, field_type);
|
||||
+ if (array)
|
||||
+ strcat(field->type, array);
|
||||
+
|
||||
+ field->size = synth_field_size(field->type);
|
||||
+ if (!field->size) {
|
||||
+ ret = -EINVAL;
|
||||
+ goto free;
|
||||
+ }
|
||||
+
|
||||
+ field->is_signed = synth_field_signed(field->type);
|
||||
+
|
||||
+ field->name = kstrdup(field_name, GFP_KERNEL);
|
||||
+ if (!field->name) {
|
||||
+ ret = -ENOMEM;
|
||||
+ goto free;
|
||||
+ }
|
||||
+ out:
|
||||
+ return field;
|
||||
+ free:
|
||||
+ free_synth_field(field);
|
||||
+ field = ERR_PTR(ret);
|
||||
+ goto out;
|
||||
+}
|
||||
+
|
||||
+static void free_synth_tracepoint(struct tracepoint *tp)
|
||||
+{
|
||||
+ if (!tp)
|
||||
+ return;
|
||||
+
|
||||
+ kfree(tp->name);
|
||||
+ kfree(tp);
|
||||
+}
|
||||
+
|
||||
+static struct tracepoint *alloc_synth_tracepoint(char *name)
|
||||
+{
|
||||
+ struct tracepoint *tp;
|
||||
+ int ret = 0;
|
||||
+
|
||||
+ tp = kzalloc(sizeof(*tp), GFP_KERNEL);
|
||||
+ if (!tp) {
|
||||
+ ret = -ENOMEM;
|
||||
+ goto free;
|
||||
+ }
|
||||
+
|
||||
+ tp->name = kstrdup(name, GFP_KERNEL);
|
||||
+ if (!tp->name) {
|
||||
+ ret = -ENOMEM;
|
||||
+ goto free;
|
||||
+ }
|
||||
+
|
||||
+ return tp;
|
||||
+ free:
|
||||
+ free_synth_tracepoint(tp);
|
||||
+
|
||||
+ return ERR_PTR(ret);
|
||||
+}
|
||||
+
|
||||
+static inline void trace_synth(struct synth_event *event, u64 *var_ref_vals,
|
||||
+ unsigned int var_ref_idx)
|
||||
+{
|
||||
+ struct tracepoint *tp = event->tp;
|
||||
+
|
||||
+ if (unlikely(atomic_read(&tp->key.enabled) > 0)) {
|
||||
+ struct tracepoint_func *it_func_ptr;
|
||||
+ void *it_func;
|
||||
+ void *__data;
|
||||
+
|
||||
+ if (!(cpu_online(raw_smp_processor_id())))
|
||||
+ return;
|
||||
+
|
||||
+ it_func_ptr = rcu_dereference_sched((tp)->funcs);
|
||||
+ if (it_func_ptr) {
|
||||
+ do {
|
||||
+ it_func = (it_func_ptr)->func;
|
||||
+ __data = (it_func_ptr)->data;
|
||||
+ ((void(*)(void *__data, u64 *var_ref_vals, unsigned int var_ref_idx))(it_func))(__data, var_ref_vals, var_ref_idx);
|
||||
+ } while ((++it_func_ptr)->func);
|
||||
+ }
|
||||
+ }
|
||||
+}
|
||||
+
|
||||
+static struct synth_event *find_synth_event(const char *name)
|
||||
+{
|
||||
+ struct synth_event *event;
|
||||
+
|
||||
+ list_for_each_entry(event, &synth_event_list, list) {
|
||||
+ if (strcmp(event->name, name) == 0)
|
||||
+ return event;
|
||||
+ }
|
||||
+
|
||||
+ return NULL;
|
||||
+}
|
||||
+
|
||||
+static int register_synth_event(struct synth_event *event)
|
||||
+{
|
||||
+ struct trace_event_call *call = &event->call;
|
||||
+ int ret = 0;
|
||||
+
|
||||
+ event->call.class = &event->class;
|
||||
+ event->class.system = kstrdup(SYNTH_SYSTEM, GFP_KERNEL);
|
||||
+ if (!event->class.system) {
|
||||
+ ret = -ENOMEM;
|
||||
+ goto out;
|
||||
+ }
|
||||
+
|
||||
+ event->tp = alloc_synth_tracepoint(event->name);
|
||||
+ if (IS_ERR(event->tp)) {
|
||||
+ ret = PTR_ERR(event->tp);
|
||||
+ event->tp = NULL;
|
||||
+ goto out;
|
||||
+ }
|
||||
+
|
||||
+ INIT_LIST_HEAD(&call->class->fields);
|
||||
+ call->event.funcs = &synth_event_funcs;
|
||||
+ call->class->define_fields = synth_event_define_fields;
|
||||
+
|
||||
+ ret = register_trace_event(&call->event);
|
||||
+ if (!ret) {
|
||||
+ ret = -ENODEV;
|
||||
+ goto out;
|
||||
+ }
|
||||
+ call->flags = TRACE_EVENT_FL_TRACEPOINT;
|
||||
+ call->class->reg = dynamic_trace_event_reg;
|
||||
+ call->class->probe = trace_event_raw_event_synth;
|
||||
+ call->data = event;
|
||||
+ call->tp = event->tp;
|
||||
+ ret = trace_add_event_call(call);
|
||||
+ if (ret) {
|
||||
+ pr_warn("Failed to register synthetic event: %s\n",
|
||||
+ trace_event_name(call));
|
||||
+ goto err;
|
||||
+ }
|
||||
+
|
||||
+ ret = set_synth_event_print_fmt(call);
|
||||
+ if (ret < 0) {
|
||||
+ trace_remove_event_call(call);
|
||||
+ goto err;
|
||||
+ }
|
||||
+ out:
|
||||
+ return ret;
|
||||
+ err:
|
||||
+ unregister_trace_event(&call->event);
|
||||
+ goto out;
|
||||
+}
|
||||
+
|
||||
+static int unregister_synth_event(struct synth_event *event)
|
||||
+{
|
||||
+ struct trace_event_call *call = &event->call;
|
||||
+ int ret;
|
||||
+
|
||||
+ ret = trace_remove_event_call(call);
|
||||
+ if (ret) {
|
||||
+ pr_warn("Failed to remove synthetic event: %s\n",
|
||||
+ trace_event_name(call));
|
||||
+ free_synth_event_print_fmt(call);
|
||||
+ unregister_trace_event(&call->event);
|
||||
+ }
|
||||
+
|
||||
+ return ret;
|
||||
+}
|
||||
+
|
||||
+static void remove_synth_event(struct synth_event *event)
|
||||
+{
|
||||
+ unregister_synth_event(event);
|
||||
+ list_del(&event->list);
|
||||
+}
|
||||
+
|
||||
+static int add_synth_event(struct synth_event *event)
|
||||
+{
|
||||
+ int ret;
|
||||
+
|
||||
+ ret = register_synth_event(event);
|
||||
+ if (ret)
|
||||
+ return ret;
|
||||
+
|
||||
+ list_add(&event->list, &synth_event_list);
|
||||
+
|
||||
+ return 0;
|
||||
+}
|
||||
+
|
||||
+static void free_synth_event(struct synth_event *event)
|
||||
+{
|
||||
+ unsigned int i;
|
||||
+
|
||||
+ if (!event)
|
||||
+ return;
|
||||
+
|
||||
+ for (i = 0; i < event->n_fields; i++)
|
||||
+ free_synth_field(event->fields[i]);
|
||||
+
|
||||
+ kfree(event->fields);
|
||||
+ kfree(event->name);
|
||||
+ kfree(event->class.system);
|
||||
+ free_synth_tracepoint(event->tp);
|
||||
+ free_synth_event_print_fmt(&event->call);
|
||||
+ kfree(event);
|
||||
+}
|
||||
+
|
||||
+static struct synth_event *alloc_synth_event(char *event_name, int n_fields,
|
||||
+ struct synth_field **fields)
|
||||
+{
|
||||
+ struct synth_event *event;
|
||||
+ unsigned int i;
|
||||
+
|
||||
+ event = kzalloc(sizeof(*event), GFP_KERNEL);
|
||||
+ if (!event) {
|
||||
+ event = ERR_PTR(-ENOMEM);
|
||||
+ goto out;
|
||||
+ }
|
||||
+
|
||||
+ event->name = kstrdup(event_name, GFP_KERNEL);
|
||||
+ if (!event->name) {
|
||||
+ kfree(event);
|
||||
+ event = ERR_PTR(-ENOMEM);
|
||||
+ goto out;
|
||||
+ }
|
||||
+
|
||||
+ event->fields = kcalloc(n_fields, sizeof(event->fields), GFP_KERNEL);
|
||||
+ if (!event->fields) {
|
||||
+ free_synth_event(event);
|
||||
+ event = ERR_PTR(-ENOMEM);
|
||||
+ goto out;
|
||||
+ }
|
||||
+
|
||||
+ for (i = 0; i < n_fields; i++)
|
||||
+ event->fields[i] = fields[i];
|
||||
+
|
||||
+ event->n_fields = n_fields;
|
||||
+ out:
|
||||
+ return event;
|
||||
+}
|
||||
+
|
||||
+static int create_synth_event(int argc, char **argv)
|
||||
+{
|
||||
+ struct synth_field *fields[SYNTH_FIELDS_MAX];
|
||||
+ struct synth_event *event = NULL;
|
||||
+ bool delete_event = false;
|
||||
+ int i, n_fields = 0, ret = 0;
|
||||
+ char *name;
|
||||
+
|
||||
+ mutex_lock(&synth_event_mutex);
|
||||
+
|
||||
+ /*
|
||||
+ * Argument syntax:
|
||||
+ * - Add synthetic event: <event_name> field[;field] ...
|
||||
+ * - Remove synthetic event: !<event_name> field[;field] ...
|
||||
+ * where 'field' = type field_name
|
||||
+ */
|
||||
+ if (argc < 1) {
|
||||
+ ret = -EINVAL;
|
||||
+ goto err;
|
||||
+ }
|
||||
+
|
||||
+ name = argv[0];
|
||||
+ if (name[0] == '!') {
|
||||
+ delete_event = true;
|
||||
+ name++;
|
||||
+ }
|
||||
+
|
||||
+ event = find_synth_event(name);
|
||||
+ if (event) {
|
||||
+ if (delete_event) {
|
||||
+ remove_synth_event(event);
|
||||
+ goto err;
|
||||
+ } else
|
||||
+ ret = -EEXIST;
|
||||
+ goto out;
|
||||
+ } else if (delete_event) {
|
||||
+ ret = -EINVAL;
|
||||
+ goto out;
|
||||
+ }
|
||||
+
|
||||
+ if (argc < 2) {
|
||||
+ ret = -EINVAL;
|
||||
+ goto err;
|
||||
+ }
|
||||
+
|
||||
+ for (i = 1; i < argc - 1; i++) {
|
||||
+ if (strcmp(argv[i], ";") == 0)
|
||||
+ continue;
|
||||
+ if (n_fields == SYNTH_FIELDS_MAX) {
|
||||
+ ret = -EINVAL;
|
||||
+ goto out;
|
||||
+ }
|
||||
+ fields[n_fields] = parse_synth_field(argv[i], argv[i + 1]);
|
||||
+ if (!fields[n_fields])
|
||||
+ goto err;
|
||||
+ i++; n_fields++;
|
||||
+ }
|
||||
+ if (i < argc) {
|
||||
+ ret = -EINVAL;
|
||||
+ goto out;
|
||||
+ }
|
||||
+
|
||||
+ event = alloc_synth_event(name, n_fields, fields);
|
||||
+ if (IS_ERR(event)) {
|
||||
+ ret = PTR_ERR(event);
|
||||
+ event = NULL;
|
||||
+ goto err;
|
||||
+ }
|
||||
+
|
||||
+ add_synth_event(event);
|
||||
+ out:
|
||||
+ mutex_unlock(&synth_event_mutex);
|
||||
+
|
||||
+ return ret;
|
||||
+ err:
|
||||
+ for (i = 0; i < n_fields; i++)
|
||||
+ free_synth_field(fields[i]);
|
||||
+ free_synth_event(event);
|
||||
+
|
||||
+ goto out;
|
||||
+}
|
||||
+
|
||||
+static int release_all_synth_events(void)
|
||||
+{
|
||||
+ struct synth_event *event, *e;
|
||||
+ int ret = 0;
|
||||
+
|
||||
+ mutex_lock(&synth_event_mutex);
|
||||
+
|
||||
+ list_for_each_entry_safe(event, e, &synth_event_list, list) {
|
||||
+ remove_synth_event(event);
|
||||
+ free_synth_event(event);
|
||||
+ }
|
||||
+
|
||||
+ mutex_unlock(&synth_event_mutex);
|
||||
+
|
||||
+ return ret;
|
||||
+}
|
||||
+
|
||||
+
|
||||
+static void *synth_events_seq_start(struct seq_file *m, loff_t *pos)
|
||||
+{
|
||||
+ mutex_lock(&synth_event_mutex);
|
||||
+
|
||||
+ return seq_list_start(&synth_event_list, *pos);
|
||||
+}
|
||||
+
|
||||
+static void *synth_events_seq_next(struct seq_file *m, void *v, loff_t *pos)
|
||||
+{
|
||||
+ return seq_list_next(v, &synth_event_list, pos);
|
||||
+}
|
||||
+
|
||||
+static void synth_events_seq_stop(struct seq_file *m, void *v)
|
||||
+{
|
||||
+ mutex_unlock(&synth_event_mutex);
|
||||
+}
|
||||
+
|
||||
+static int synth_events_seq_show(struct seq_file *m, void *v)
|
||||
+{
|
||||
+ struct synth_field *field;
|
||||
+ struct synth_event *event = v;
|
||||
+ unsigned int i;
|
||||
+
|
||||
+ seq_printf(m, "%s\t", event->name);
|
||||
+
|
||||
+ for (i = 0; i < event->n_fields; i++) {
|
||||
+ field = event->fields[i];
|
||||
+
|
||||
+ /* parameter values */
|
||||
+ seq_printf(m, "%s %s%s", field->type, field->name,
|
||||
+ i == event->n_fields - 1 ? "" : "; ");
|
||||
+ }
|
||||
+
|
||||
+ seq_putc(m, '\n');
|
||||
+
|
||||
+ return 0;
|
||||
+}
|
||||
+
|
||||
+static const struct seq_operations synth_events_seq_op = {
|
||||
+ .start = synth_events_seq_start,
|
||||
+ .next = synth_events_seq_next,
|
||||
+ .stop = synth_events_seq_stop,
|
||||
+ .show = synth_events_seq_show
|
||||
+};
|
||||
+
|
||||
+static int synth_events_open(struct inode *inode, struct file *file)
|
||||
+{
|
||||
+ int ret;
|
||||
+
|
||||
+ if ((file->f_mode & FMODE_WRITE) && (file->f_flags & O_TRUNC)) {
|
||||
+ ret = release_all_synth_events();
|
||||
+ if (ret < 0)
|
||||
+ return ret;
|
||||
+ }
|
||||
+
|
||||
+ return seq_open(file, &synth_events_seq_op);
|
||||
+}
|
||||
+
|
||||
+static ssize_t synth_events_write(struct file *file,
|
||||
+ const char __user *buffer,
|
||||
+ size_t count, loff_t *ppos)
|
||||
+{
|
||||
+ return trace_parse_run_command(file, buffer, count, ppos,
|
||||
+ create_synth_event);
|
||||
+}
|
||||
+
|
||||
+static const struct file_operations synth_events_fops = {
|
||||
+ .open = synth_events_open,
|
||||
+ .write = synth_events_write,
|
||||
+ .read = seq_read,
|
||||
+ .llseek = seq_lseek,
|
||||
+ .release = seq_release,
|
||||
+};
|
||||
+
|
||||
static u64 hist_field_timestamp(struct hist_field *hist_field,
|
||||
struct tracing_map_elt *elt,
|
||||
struct ring_buffer_event *rbe,
|
||||
@@ -3028,3 +3731,38 @@ static __init void unregister_trigger_hi
|
||||
|
||||
return ret;
|
||||
}
|
||||
+
|
||||
+static __init int trace_events_hist_init(void)
|
||||
+{
|
||||
+ struct dentry *entry = NULL;
|
||||
+ struct trace_array *tr;
|
||||
+ struct dentry *d_tracer;
|
||||
+ int err = 0;
|
||||
+
|
||||
+ tr = top_trace_array();
|
||||
+ if (!tr) {
|
||||
+ err = -ENODEV;
|
||||
+ goto err;
|
||||
+ }
|
||||
+
|
||||
+ d_tracer = tracing_init_dentry();
|
||||
+ if (IS_ERR(d_tracer)) {
|
||||
+ err = PTR_ERR(d_tracer);
|
||||
+ goto err;
|
||||
+ }
|
||||
+
|
||||
+ entry = tracefs_create_file("synthetic_events", 0644, d_tracer,
|
||||
+ tr, &synth_events_fops);
|
||||
+ if (!entry) {
|
||||
+ err = -ENODEV;
|
||||
+ goto err;
|
||||
+ }
|
||||
+
|
||||
+ return err;
|
||||
+ err:
|
||||
+ pr_warn("Could not create tracefs 'synthetic_events' entry\n");
|
||||
+
|
||||
+ return err;
|
||||
+}
|
||||
+
|
||||
+fs_initcall(trace_events_hist_init);
debian/patches/features/all/rt/0023-tracing-Add-onmatch-hist-trigger-action-support.patch (vendored, new file, 1269 lines; file diff suppressed because it is too large)

debian/patches/features/all/rt/0024-tracing-Add-onmax-hist-trigger-action-support.patch (vendored, new file, 456 lines):
@@ -0,0 +1,456 @@
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:25 -0500
Subject: [PATCH 24/32] tracing: Add 'onmax' hist trigger action support
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Add an 'onmax(var).save(field,...)' hist trigger action which is
invoked whenever an event exceeds the current maximum.

The end result is that the trace event fields or variables specified
as the onmax.save() params will be saved if 'var' exceeds the current
maximum for that hist trigger entry. This allows context from the
event that exhibited the new maximum to be saved for later reference.
When the histogram is displayed, additional fields displaying the
saved values will be printed.

As an example the below defines a couple of hist triggers, one for
sched_wakeup and another for sched_switch, keyed on pid. Whenever a
sched_wakeup occurs, the timestamp is saved in the entry corresponding
to the current pid, and when the scheduler switches back to that pid,
the timestamp difference is calculated. If the resulting latency
exceeds the current maximum latency, the specified save() values are
saved:

# echo 'hist:keys=pid:ts0=common_timestamp.usecs \
if comm=="cyclictest"' >> \
/sys/kernel/debug/tracing/events/sched/sched_wakeup/trigger

# echo 'hist:keys=next_pid:\
wakeup_lat=common_timestamp.usecs-$ts0:\
onmax($wakeup_lat).save(next_comm,prev_pid,prev_prio,prev_comm) \
if next_comm=="cyclictest"' >> \
/sys/kernel/debug/tracing/events/sched/sched_switch/trigger

When the histogram is displayed, the max value and the saved values
corresponding to the max are displayed following the rest of the
fields:

# cat /sys/kernel/debug/tracing/events/sched/sched_switch/hist
{ next_pid: 2255 } hitcount: 239 \
common_timestamp-$ts0: 0
max: 27 next_comm: cyclictest \
prev_pid: 0 prev_prio: 120 prev_comm: swapper/1 \
{ next_pid: 2256 } hitcount: 2355 common_timestamp-$ts0: 0 \
max: 49 next_comm: cyclictest \
prev_pid: 0 prev_prio: 120 prev_comm: swapper/0

Totals:
Hits: 12970
Entries: 2
Dropped: 0

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
kernel/trace/trace_events_hist.c | 310 ++++++++++++++++++++++++++++++++++-----
1 file changed, 276 insertions(+), 34 deletions(-)

--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -282,6 +282,10 @@ struct hist_trigger_data {
unsigned int n_field_var_str;
struct field_var_hist *field_var_hists[SYNTH_FIELDS_MAX];
unsigned int n_field_var_hists;
+
+ struct field_var *max_vars[SYNTH_FIELDS_MAX];
+ unsigned int n_max_vars;
+ unsigned int n_max_var_str;
};

struct synth_field {
@@ -318,6 +322,12 @@ struct action_data {
char *match_event_system;
char *synth_event_name;
struct synth_event *synth_event;
+
+ char *onmax_var_str;
+ char *onmax_fn_name;
+ unsigned int max_var_ref_idx;
+ struct hist_field *max_var;
+ struct hist_field *onmax_var;
};

static LIST_HEAD(synth_event_list);
@@ -1493,7 +1503,8 @@ static int parse_action(char *str, struc
if (attrs->n_actions >= HIST_ACTIONS_MAX)
return ret;

- if ((strncmp(str, "onmatch(", strlen("onmatch(")) == 0)) {
+ if ((strncmp(str, "onmatch(", strlen("onmatch(")) == 0) ||
+ (strncmp(str, "onmax(", strlen("onmax(")) == 0)) {
attrs->action_str[attrs->n_actions] = kstrdup(str, GFP_KERNEL);
if (!attrs->action_str[attrs->n_actions]) {
ret = -ENOMEM;
@@ -1612,7 +1623,7 @@ static void hist_trigger_elt_data_free(s
struct hist_elt_data *private_data = elt->private_data;
unsigned int i, n_str;

- n_str = hist_data->n_field_var_str;
+ n_str = hist_data->n_field_var_str + hist_data->n_max_var_str;

for (i = 0; i < n_str; i++)
kfree(private_data->field_var_str[i]);
@@ -1647,7 +1658,7 @@ static int hist_trigger_elt_data_alloc(s
}
}

- n_str = hist_data->n_field_var_str;
+ n_str = hist_data->n_field_var_str + hist_data->n_max_var_str;

for (i = 0; i < n_str; i++) {
elt_data->field_var_str[i] = kzalloc(size, GFP_KERNEL);
@@ -2504,6 +2515,15 @@ static void update_field_vars(struct his
hist_data->n_field_vars, 0);
}

+static void update_max_vars(struct hist_trigger_data *hist_data,
+ struct tracing_map_elt *elt,
+ struct ring_buffer_event *rbe,
+ void *rec)
+{
+ __update_field_vars(elt, rbe, rec, hist_data->max_vars,
+ hist_data->n_max_vars, hist_data->n_field_var_str);
+}
+
static struct hist_field *create_var(struct hist_trigger_data *hist_data,
struct trace_event_file *file,
char *name, int size, const char *type)
@@ -2613,6 +2633,222 @@ create_target_field_var(struct hist_trig
return create_field_var(hist_data, file, var_name);
}

+static void onmax_print(struct seq_file *m,
+ struct hist_trigger_data *hist_data,
+ struct tracing_map_elt *elt,
+ struct action_data *data)
+{
+ unsigned int i, save_var_idx, max_idx = data->max_var->var.idx;
+
+ seq_printf(m, "\n\tmax: %10llu", tracing_map_read_var(elt, max_idx));
+
+ for (i = 0; i < hist_data->n_max_vars; i++) {
+ struct hist_field *save_val = hist_data->max_vars[i]->val;
+ struct hist_field *save_var = hist_data->max_vars[i]->var;
+ u64 val;
+
+ save_var_idx = save_var->var.idx;
+
+ val = tracing_map_read_var(elt, save_var_idx);
+
+ if (save_val->flags & HIST_FIELD_FL_STRING) {
+ seq_printf(m, " %s: %-50s", save_var->var.name,
+ (char *)(uintptr_t)(val));
+ } else
+ seq_printf(m, " %s: %10llu", save_var->var.name, val);
+ }
+}
+
+static void onmax_save(struct hist_trigger_data *hist_data,
+ struct tracing_map_elt *elt, void *rec,
+ struct ring_buffer_event *rbe,
+ struct action_data *data, u64 *var_ref_vals)
+{
+ unsigned int max_idx = data->max_var->var.idx;
+ unsigned int max_var_ref_idx = data->max_var_ref_idx;
+
+ u64 var_val, max_val;
+
+ var_val = var_ref_vals[max_var_ref_idx];
+ max_val = tracing_map_read_var(elt, max_idx);
+
+ if (var_val <= max_val)
+ return;
+
+ tracing_map_set_var(elt, max_idx, var_val);
+
+ update_max_vars(hist_data, elt, rbe, rec);
+}
+
+static void onmax_destroy(struct action_data *data)
+{
+ unsigned int i;
+
+ destroy_hist_field(data->max_var, 0);
+ destroy_hist_field(data->onmax_var, 0);
+
+ kfree(data->onmax_var_str);
+ kfree(data->onmax_fn_name);
+
+ for (i = 0; i < data->n_params; i++)
+ kfree(data->params[i]);
+
+ kfree(data);
+}
+
+static int onmax_create(struct hist_trigger_data *hist_data,
+ struct action_data *data)
+{
+ struct trace_event_call *call = hist_data->event_file->event_call;
+ struct trace_event_file *file = hist_data->event_file;
+ struct hist_field *var_field, *ref_field, *max_var;
+ unsigned int var_ref_idx = hist_data->n_var_refs;
+ struct field_var *field_var;
+ char *onmax_var_str, *param;
+ const char *event_name;
+ unsigned long flags;
+ unsigned int i;
+ int ret = 0;
+
+ onmax_var_str = data->onmax_var_str;
+ if (onmax_var_str[0] != '$')
+ return -EINVAL;
+ onmax_var_str++;
+
+ event_name = trace_event_name(call);
+ var_field = find_target_event_var(hist_data, NULL, NULL, onmax_var_str);
+ if (!var_field)
+ return -EINVAL;
+
+ flags = HIST_FIELD_FL_VAR_REF;
+ ref_field = create_hist_field(hist_data, NULL, flags, NULL);
+ if (!ref_field)
+ return -ENOMEM;
+
+ ref_field->var.idx = var_field->var.idx;
+ ref_field->var.hist_data = hist_data;
+ ref_field->name = kstrdup(var_field->var.name, GFP_KERNEL);
+ ref_field->type = kstrdup(var_field->type, GFP_KERNEL);
+ if (!ref_field->name || !ref_field->type) {
+ destroy_hist_field(ref_field, 0);
+ ret = -ENOMEM;
+ goto out;
+ }
+ hist_data->var_refs[hist_data->n_var_refs] = ref_field;
+ ref_field->var_ref_idx = hist_data->n_var_refs++;
+ data->onmax_var = ref_field;
+
+ data->fn = onmax_save;
+ data->max_var_ref_idx = var_ref_idx;
+ max_var = create_var(hist_data, file, "max", sizeof(u64), "u64");
+ if (IS_ERR(max_var)) {
+ ret = PTR_ERR(max_var);
+ goto out;
+ }
+ data->max_var = max_var;
+
+ for (i = 0; i < data->n_params; i++) {
+ param = kstrdup(data->params[i], GFP_KERNEL);
+ if (!param)
+ goto out;
+
+ field_var = create_target_field_var(hist_data, NULL, NULL, param);
+ if (IS_ERR(field_var)) {
+ ret = PTR_ERR(field_var);
+ kfree(param);
+ goto out;
+ }
+
+ hist_data->max_vars[hist_data->n_max_vars++] = field_var;
+ if (field_var->val->flags & HIST_FIELD_FL_STRING)
+ hist_data->n_max_var_str++;
+
+ kfree(param);
+ }
+
+ hist_data->actions[hist_data->n_actions++] = data;
+ out:
+ return ret;
+}
+
+static int parse_action_params(char *params, struct action_data *data)
+{
+ char *param, *saved_param;
+ int ret = 0;
+
+ while (params) {
+ if (data->n_params >= SYNTH_FIELDS_MAX)
+ goto out;
+
+ param = strsep(&params, ",");
+ if (!param)
+ goto out;
+
+ param = strstrip(param);
+ if (strlen(param) < 2) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ saved_param = kstrdup(param, GFP_KERNEL);
+ if (!saved_param) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ data->params[data->n_params++] = saved_param;
+ }
+ out:
+ return ret;
+}
+
+static struct action_data *onmax_parse(char *str)
||||
+{
|
||||
+ char *onmax_fn_name, *onmax_var_str;
|
||||
+ struct action_data *data;
|
||||
+ int ret = -EINVAL;
|
||||
+
|
||||
+ data = kzalloc(sizeof(*data), GFP_KERNEL);
|
||||
+ if (!data)
|
||||
+ return ERR_PTR(-ENOMEM);
|
||||
+
|
||||
+ onmax_var_str = strsep(&str, ")");
|
||||
+ if (!onmax_var_str || !str)
|
||||
+ return ERR_PTR(-EINVAL);
|
||||
+ data->onmax_var_str = kstrdup(onmax_var_str, GFP_KERNEL);
|
||||
+
|
||||
+ strsep(&str, ".");
|
||||
+ if (!str)
|
||||
+ goto free;
|
||||
+
|
||||
+ onmax_fn_name = strsep(&str, "(");
|
||||
+ if (!onmax_fn_name || !str)
|
||||
+ goto free;
|
||||
+
|
||||
+ if (strncmp(onmax_fn_name, "save", strlen("save")) == 0) {
|
||||
+ char *params = strsep(&str, ")");
|
||||
+
|
||||
+ if (!params)
|
||||
+ goto free;
|
||||
+
|
||||
+ ret = parse_action_params(params, data);
|
||||
+ if (ret)
|
||||
+ goto free;
|
||||
+ }
|
||||
+ data->onmax_fn_name = kstrdup(onmax_fn_name, GFP_KERNEL);
|
||||
+
|
||||
+ if (!data->onmax_var_str || !data->onmax_fn_name) {
|
||||
+ ret = -ENOMEM;
|
||||
+ goto free;
|
||||
+ }
|
||||
+ out:
|
||||
+ return data;
|
||||
+ free:
|
||||
+ onmax_destroy(data);
|
||||
+ data = ERR_PTR(ret);
|
||||
+ goto out;
|
||||
+}
|
||||
+
|
||||
static void onmatch_destroy(struct action_data *data)
|
||||
{
|
||||
unsigned int i;
|
||||
@@ -2689,37 +2925,6 @@ static int check_synth_field(struct synt
|
||||
return 0;
|
||||
}
|
||||
|
||||
-static int parse_action_params(char *params, struct action_data *data)
|
||||
-{
|
||||
- char *param, *saved_param;
|
||||
- int ret = 0;
|
||||
-
|
||||
- while (params) {
|
||||
- if (data->n_params >= SYNTH_FIELDS_MAX)
|
||||
- goto out;
|
||||
-
|
||||
- param = strsep(¶ms, ",");
|
||||
- if (!param)
|
||||
- goto out;
|
||||
-
|
||||
- param = strstrip(param);
|
||||
- if (strlen(param) < 2) {
|
||||
- ret = -EINVAL;
|
||||
- goto out;
|
||||
- }
|
||||
-
|
||||
- saved_param = kstrdup(param, GFP_KERNEL);
|
||||
- if (!saved_param) {
|
||||
- ret = -ENOMEM;
|
||||
- goto out;
|
||||
- }
|
||||
-
|
||||
- data->params[data->n_params++] = saved_param;
|
||||
- }
|
||||
- out:
|
||||
- return ret;
|
||||
-}
|
||||
-
|
||||
static struct hist_field *
|
||||
onmatch_find_var(struct hist_trigger_data *hist_data, struct action_data *data,
|
||||
char *system, char *event, char *var)
|
||||
@@ -3313,6 +3518,8 @@ static void destroy_actions(struct hist_
|
||||
|
||||
if (data->fn == action_trace)
|
||||
onmatch_destroy(data);
|
||||
+ else if (data->fn == onmax_save)
|
||||
+ onmax_destroy(data);
|
||||
else
|
||||
kfree(data);
|
||||
}
|
||||
@@ -3341,6 +3548,18 @@ static int create_actions(struct hist_tr
|
||||
onmatch_destroy(data);
|
||||
return ret;
|
||||
}
|
||||
+ } else if (strncmp(str, "onmax(", strlen("onmax(")) == 0) {
|
||||
+ char *action_str = str + strlen("onmax(");
|
||||
+
|
||||
+ data = onmax_parse(action_str);
|
||||
+ if (IS_ERR(data))
|
||||
+ return PTR_ERR(data);
|
||||
+
|
||||
+ ret = onmax_create(hist_data, data);
|
||||
+ if (ret) {
|
||||
+ onmax_destroy(data);
|
||||
+ return ret;
|
||||
+ }
|
||||
}
|
||||
}
|
||||
|
||||
@@ -3355,9 +3574,30 @@ static void print_actions(struct seq_fil
|
||||
|
||||
for (i = 0; i < hist_data->n_actions; i++) {
|
||||
struct action_data *data = hist_data->actions[i];
|
||||
+
|
||||
+ if (data->fn == onmax_save)
|
||||
+ onmax_print(m, hist_data, elt, data);
|
||||
}
|
||||
}
|
||||
|
||||
+static void print_onmax_spec(struct seq_file *m,
|
||||
+ struct hist_trigger_data *hist_data,
|
||||
+ struct action_data *data)
|
||||
+{
|
||||
+ unsigned int i;
|
||||
+
|
||||
+ seq_puts(m, ":onmax(");
|
||||
+ seq_printf(m, "%s", data->onmax_var_str);
|
||||
+ seq_printf(m, ").%s(", data->onmax_fn_name);
|
||||
+
|
||||
+ for (i = 0; i < hist_data->n_max_vars; i++) {
|
||||
+ seq_printf(m, "%s", hist_data->max_vars[i]->var->var.name);
|
||||
+ if (i < hist_data->n_max_vars - 1)
|
||||
+ seq_puts(m, ",");
|
||||
+ }
|
||||
+ seq_puts(m, ")");
|
||||
+}
|
||||
+
|
||||
static void print_onmatch_spec(struct seq_file *m,
|
||||
struct hist_trigger_data *hist_data,
|
||||
struct action_data *data)
|
||||
@@ -3388,6 +3628,8 @@ static void print_actions_spec(struct se
|
||||
|
||||
if (data->fn == action_trace)
|
||||
print_onmatch_spec(m, hist_data, data);
|
||||
+ else if (data->fn == onmax_save)
|
||||
+ print_onmax_spec(m, hist_data, data);
|
||||
}
|
||||
}
|
||||
|
debian/patches/features/all/rt/0025-tracing-Allow-whitespace-to-surround-hist-trigger-fi.patch (new file, 58 lines)
@@ -0,0 +1,58 @@
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:26 -0500
Subject: [PATCH 25/32] tracing: Allow whitespace to surround hist trigger
 filter
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

The existing code only allows for one space before and after the 'if'
specifying the filter for a hist trigger.  Add code to make that more
permissive as far as whitespace goes.

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/trace/trace_events_hist.c | 19 +++++++++++++++----
 1 file changed, 15 insertions(+), 4 deletions(-)

--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -4632,7 +4632,7 @@ static int event_hist_trigger_func(struc
 	struct event_trigger_ops *trigger_ops;
 	struct hist_trigger_data *hist_data;
 	bool remove = false;
-	char *trigger;
+	char *trigger, *p;
 	int ret = 0;
 
 	if (!param)
@@ -4642,9 +4642,19 @@ static int event_hist_trigger_func(struc
 		remove = true;
 
 	/* separate the trigger from the filter (k:v [if filter]) */
-	trigger = strsep(&param, " \t");
-	if (!trigger)
-		return -EINVAL;
+	trigger = param;
+	p = strstr(param, " if");
+	if (!p)
+		p = strstr(param, "\tif");
+	if (p) {
+		if (p == trigger)
+			return -EINVAL;
+		param = p + 1;
+		param = strstrip(param);
+		*p = '\0';
+		trigger = strstrip(trigger);
+	} else
+		param = NULL;
 
 	attrs = parse_hist_trigger_attrs(trigger);
 	if (IS_ERR(attrs))
@@ -4694,6 +4704,7 @@ static int event_hist_trigger_func(struc
 	}
 
 	ret = cmd_ops->reg(glob, trigger_ops, trigger_data, file);
+
 	/*
 	 * The above returns on success the # of triggers registered,
 	 * but if it didn't register any it returns zero.  Consider no
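The `" if"` / `"\tif"` search-and-split that this patch introduces can be exercised in plain userspace C. `split_trigger()` and `trim()` below are hypothetical stand-ins for the kernel logic, not kernel code; note that, as in the kernel, the filter half deliberately keeps its leading `if` for the generic trigger code to consume:

```c
#include <ctype.h>
#include <string.h>

/* Userspace stand-in for the kernel's strstrip(): trim both ends in place. */
static char *trim(char *s)
{
    char *end;

    while (isspace((unsigned char)*s))
        s++;
    end = s + strlen(s);
    while (end > s && isspace((unsigned char)end[-1]))
        *--end = '\0';
    return s;
}

/*
 * Split "trigger [if filter]" the way the patch does: find " if" or
 * "\tif", cut the buffer there, and trim both halves.  *filter is set
 * to NULL when no filter clause is present.  Returns -1 (standing in
 * for -EINVAL) when the trigger part would be empty.
 */
static int split_trigger(char *param, char **trigger, char **filter)
{
    char *p = strstr(param, " if");

    if (!p)
        p = strstr(param, "\tif");

    *trigger = param;
    if (p) {
        if (p == param)
            return -1;
        *filter = trim(p + 1);  /* keeps the leading "if", like the kernel */
        *p = '\0';
        *trigger = trim(param);
    } else
        *filter = NULL;

    return 0;
}
```

Compared to the old single `strsep(&param, " \t")`, this tolerates any run of spaces or tabs around the trigger and the filter clause.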
debian/patches/features/all/rt/0026-tracing-Make-duplicate-count-from-tracing_map-availa.patch (new file, 125 lines)
@@ -0,0 +1,125 @@
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:27 -0500
Subject: [PATCH 26/32] tracing: Make duplicate count from tracing_map
 available
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Though extremely rare, there can be duplicate entries in the tracing
map.  This isn't normally a problem, as the sorting code makes this
transparent by merging them during the sort.

It's useful to know however, as a check on that assumption - if a
non-zero duplicate count is seen more than rarely, it might indicate
an unexpected change to the algorithm, or a pathological data set.

Add an extra param to tracing_map_sort_entries() and use it to display
the value in the hist trigger output.

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/trace/trace_events_hist.c | 14 ++++++++------
 kernel/trace/tracing_map.c       | 12 +++++++++---
 kernel/trace/tracing_map.h       |  3 ++-
 3 files changed, 19 insertions(+), 10 deletions(-)

--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -4011,7 +4011,8 @@ hist_trigger_entry_print(struct seq_file
 }
 
 static int print_entries(struct seq_file *m,
-			 struct hist_trigger_data *hist_data)
+			 struct hist_trigger_data *hist_data,
+			 unsigned int *n_dups)
 {
 	struct tracing_map_sort_entry **sort_entries = NULL;
 	struct tracing_map *map = hist_data->map;
@@ -4019,7 +4020,7 @@ static int print_entries(struct seq_file
 
 	n_entries = tracing_map_sort_entries(map, hist_data->sort_keys,
 					     hist_data->n_sort_keys,
-					     &sort_entries);
+					     &sort_entries, n_dups);
 	if (n_entries < 0)
 		return n_entries;
 
@@ -4038,6 +4039,7 @@ static void hist_trigger_show(struct seq
 {
 	struct hist_trigger_data *hist_data;
 	int n_entries, ret = 0;
+	unsigned int n_dups;
 
 	if (n > 0)
 		seq_puts(m, "\n\n");
@@ -4047,15 +4049,15 @@ static void hist_trigger_show(struct seq
 	seq_puts(m, "#\n\n");
 
 	hist_data = data->private_data;
-	n_entries = print_entries(m, hist_data);
+	n_entries = print_entries(m, hist_data, &n_dups);
 	if (n_entries < 0) {
 		ret = n_entries;
 		n_entries = 0;
 	}
 
-	seq_printf(m, "\nTotals:\n    Hits: %llu\n    Entries: %u\n    Dropped: %llu\n",
-		   (u64)atomic64_read(&hist_data->map->hits),
-		   n_entries, (u64)atomic64_read(&hist_data->map->drops));
+	seq_printf(m, "\nTotals:\n    Hits: %llu\n    Entries: %u\n    Dropped: %llu\n    Duplicates: %u\n",
+		   (u64)atomic64_read(&hist_data->map->hits), n_entries,
+		   (u64)atomic64_read(&hist_data->map->drops), n_dups);
 }
 
 static int hist_show(struct seq_file *m, void *v)
--- a/kernel/trace/tracing_map.c
+++ b/kernel/trace/tracing_map.c
@@ -1084,6 +1084,7 @@ static void sort_secondary(struct tracin
  * @map: The tracing_map
  * @sort_key: The sort key to use for sorting
  * @sort_entries: outval: pointer to allocated and sorted array of entries
+ * @n_dups: outval: pointer to variable receiving a count of duplicates found
  *
  * tracing_map_sort_entries() sorts the current set of entries in the
  * map and returns the list of tracing_map_sort_entries containing
@@ -1100,13 +1101,16 @@ static void sort_secondary(struct tracin
  * The client should not hold on to the returned array but should use
  * it and call tracing_map_destroy_sort_entries() when done.
  *
- * Return: the number of sort_entries in the struct tracing_map_sort_entry
- * array, negative on error
+ * Return: the number of sort_entries in the struct
+ * tracing_map_sort_entry array, negative on error.  If n_dups is
+ * non-NULL, it will receive the number of duplicate entries found
+ * (and merged) during the sort.
 */
 int tracing_map_sort_entries(struct tracing_map *map,
 			     struct tracing_map_sort_key *sort_keys,
 			     unsigned int n_sort_keys,
-			     struct tracing_map_sort_entry ***sort_entries)
+			     struct tracing_map_sort_entry ***sort_entries,
+			     unsigned int *n_dups)
 {
 	int (*cmp_entries_fn)(const struct tracing_map_sort_entry **,
 			      const struct tracing_map_sort_entry **);
@@ -1147,6 +1151,8 @@ int tracing_map_sort_entries(struct trac
 	if (ret < 0)
 		goto free;
 	n_entries -= ret;
+	if (n_dups)
+		*n_dups = ret;
 
 	if (is_key(map, sort_keys[0].field_idx))
 		cmp_entries_fn = cmp_entries_key;
--- a/kernel/trace/tracing_map.h
+++ b/kernel/trace/tracing_map.h
@@ -286,7 +286,8 @@ extern int
 tracing_map_sort_entries(struct tracing_map *map,
 			 struct tracing_map_sort_key *sort_keys,
 			 unsigned int n_sort_keys,
-			 struct tracing_map_sort_entry ***sort_entries);
+			 struct tracing_map_sort_entry ***sort_entries,
+			 unsigned int *n_dups);
 
 extern void
 tracing_map_destroy_sort_entries(struct tracing_map_sort_entry **entries,
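The `n_dups` out-parameter convention the patch adds - an optional pointer that is filled in only when non-NULL, so existing callers can pass NULL - is easy to demonstrate with an ordinary sort-and-merge. `sort_unique()` below is an illustrative stand-in, not the tracing_map code:

```c
#include <stdlib.h>

static int cmp_ints(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;

    return (x > y) - (x < y);
}

/*
 * Sort and de-duplicate vals[] in place; returns the number of unique
 * entries and, if n_dups is non-NULL, reports how many duplicates were
 * merged -- the same out-parameter convention the patch adds to
 * tracing_map_sort_entries().
 */
static int sort_unique(int *vals, int n, unsigned int *n_dups)
{
    int i, out = 0;

    if (n == 0) {
        if (n_dups)
            *n_dups = 0;
        return 0;
    }

    qsort(vals, n, sizeof(int), cmp_ints);
    for (i = 1; i < n; i++)
        if (vals[i] != vals[out])
            vals[++out] = vals[i];
    out++;              /* index of last unique -> count of uniques */

    if (n_dups)
        *n_dups = n - out;
    return out;
}
```

A caller that doesn't care about the count simply passes NULL, exactly as the unchanged hist-trigger callers could before `hist_trigger_show()` started printing the `Duplicates:` total.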
debian/patches/features/all/rt/0027-tracing-Add-cpu-field-for-hist-triggers.patch (new file, 133 lines)
@@ -0,0 +1,133 @@
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:28 -0500
Subject: [PATCH 27/32] tracing: Add cpu field for hist triggers
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

A common key to use in a histogram is the cpuid - add a new cpu
'synthetic' field for that purpose.  This field is named cpu rather
than $cpu or $common_cpu because 'cpu' already exists as a special
filter field and it makes more sense to match that rather than add
another name for the same thing.

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 Documentation/trace/events.txt   | 18 ++++++++++++++++++
 kernel/trace/trace_events_hist.c | 30 +++++++++++++++++++++++++++---
 2 files changed, 45 insertions(+), 3 deletions(-)

--- a/Documentation/trace/events.txt
+++ b/Documentation/trace/events.txt
@@ -668,6 +668,24 @@ triggers (you have to use '!' for each o
 The examples below provide a more concrete illustration of the
 concepts and typical usage patterns discussed above.
 
+  'synthetic' event fields
+  ------------------------
+
+  There are a number of 'synthetic fields' available for use as keys
+  or values in a hist trigger.  These look like and behave as if they
+  were event fields, but aren't actually part of the event's field
+  definition or format file.  They are however available for any
+  event, and can be used anywhere an actual event field could be.
+  'Synthetic' field names are always prefixed with a '$' character to
+  indicate that they're not normal fields (with the exception of
+  'cpu', for compatibility with existing filter usage):
+
+    $common_timestamp    u64 - timestamp (from ring buffer) associated
+                               with the event, in nanoseconds.  May be
+                               modified by .usecs to have timestamps
+                               interpreted as microseconds.
+    cpu                  int - the cpu on which the event occurred.
+
 
 6.2 'hist' trigger examples
 ---------------------------
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -224,6 +224,7 @@ enum hist_field_flags {
 	HIST_FIELD_FL_VAR_ONLY		= 8192,
 	HIST_FIELD_FL_EXPR		= 16384,
 	HIST_FIELD_FL_VAR_REF		= 32768,
+	HIST_FIELD_FL_CPU		= 65536,
 };
 
 struct hist_trigger_attrs {
@@ -1081,6 +1082,16 @@ static u64 hist_field_timestamp(struct h
 	return ts;
 }
 
+static u64 hist_field_cpu(struct hist_field *hist_field,
+			  struct tracing_map_elt *elt,
+			  struct ring_buffer_event *rbe,
+			  void *event)
+{
+	int cpu = raw_smp_processor_id();
+
+	return cpu;
+}
+
 static struct hist_field *check_var_ref(struct hist_field *hist_field,
 					struct hist_trigger_data *var_data,
 					unsigned int var_idx)
@@ -1407,6 +1418,8 @@ static const char *hist_field_name(struc
 		field_name = hist_field_name(field->operands[0], ++level);
 	else if (field->flags & HIST_FIELD_FL_TIMESTAMP)
 		field_name = "$common_timestamp";
+	else if (field->flags & HIST_FIELD_FL_CPU)
+		field_name = "cpu";
 	else if (field->flags & HIST_FIELD_FL_EXPR ||
 		 field->flags & HIST_FIELD_FL_VAR_REF)
 		field_name = field->name;
@@ -1848,6 +1861,15 @@ static struct hist_field *create_hist_fi
 		goto out;
 	}
 
+	if (flags & HIST_FIELD_FL_CPU) {
+		hist_field->fn = hist_field_cpu;
+		hist_field->size = sizeof(int);
+		hist_field->type = kstrdup("int", GFP_KERNEL);
+		if (!hist_field->type)
+			goto free;
+		goto out;
+	}
+
 	if (WARN_ON_ONCE(!field))
 		goto out;
 
@@ -1980,7 +2002,9 @@ parse_field(struct hist_trigger_data *hi
 			hist_data->enable_timestamps = true;
 		if (*flags & HIST_FIELD_FL_TIMESTAMP_USECS)
 			hist_data->attrs->ts_in_usecs = true;
-	} else {
+	} else if (strcmp(field_name, "cpu") == 0)
+		*flags |= HIST_FIELD_FL_CPU;
+	else {
 		field = trace_find_event_field(file->event_call, field_name);
 		if (!field)
 			return ERR_PTR(-EINVAL);
@@ -3019,7 +3043,6 @@ static int onmatch_create(struct hist_tr
 			goto out;
 		}
 	}
-
 	if (param[0] == '$')
 		hist_field = onmatch_find_var(hist_data, data, system,
 					      event_name, param);
@@ -3034,7 +3057,6 @@ static int onmatch_create(struct hist_tr
 		ret = -EINVAL;
 		goto out;
 	}
-
 	if (check_synth_field(event, hist_field, field_pos) == 0) {
 		var_ref = create_var_ref(hist_field);
 		if (!var_ref) {
@@ -4128,6 +4150,8 @@ static void hist_field_print(struct seq_
 
 	if (hist_field->flags & HIST_FIELD_FL_TIMESTAMP)
 		seq_puts(m, "$common_timestamp");
+	else if (hist_field->flags & HIST_FIELD_FL_CPU)
+		seq_puts(m, "cpu");
 	else if (field_name)
 		seq_printf(m, "%s", field_name);
 
debian/patches/features/all/rt/0028-tracing-Add-hist-trigger-support-for-variable-refere.patch
vendored
Normal file
106
debian/patches/features/all/rt/0028-tracing-Add-hist-trigger-support-for-variable-refere.patch
vendored
Normal file
|
@ -0,0 +1,106 @@
|
|||
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:29 -0500
Subject: [PATCH 28/32] tracing: Add hist trigger support for variable
 reference aliases
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Add support for alias=$somevar where alias can be used as
onmatch($alias).

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/trace/trace_events_hist.c | 46 ++++++++++++++++++++++++++++++++++++---
 1 file changed, 43 insertions(+), 3 deletions(-)

--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -225,6 +225,7 @@ enum hist_field_flags {
 	HIST_FIELD_FL_EXPR		= 16384,
 	HIST_FIELD_FL_VAR_REF		= 32768,
 	HIST_FIELD_FL_CPU		= 65536,
+	HIST_FIELD_FL_ALIAS		= 131072,
 };
 
 struct hist_trigger_attrs {
@@ -1414,7 +1415,8 @@ static const char *hist_field_name(struc
 
 	if (field->field)
 		field_name = field->field->name;
-	else if (field->flags & HIST_FIELD_FL_LOG2)
+	else if (field->flags & HIST_FIELD_FL_LOG2 ||
+		 field->flags & HIST_FIELD_FL_ALIAS)
 		field_name = hist_field_name(field->operands[0], ++level);
 	else if (field->flags & HIST_FIELD_FL_TIMESTAMP)
 		field_name = "$common_timestamp";
@@ -1819,7 +1821,7 @@ static struct hist_field *create_hist_fi
 
 	hist_field->hist_data = hist_data;
 
-	if (flags & HIST_FIELD_FL_EXPR)
+	if (flags & HIST_FIELD_FL_EXPR || flags & HIST_FIELD_FL_ALIAS)
 		goto out; /* caller will populate */
 
 	if (flags & HIST_FIELD_FL_VAR_REF) {
@@ -2013,6 +2015,34 @@ parse_field(struct hist_trigger_data *hi
 	return field;
 }
 
+static struct hist_field *create_alias(struct hist_trigger_data *hist_data,
+				       struct hist_field *var_ref,
+				       char *var_name)
+{
+	struct hist_field *alias = NULL;
+	unsigned long flags = HIST_FIELD_FL_ALIAS | HIST_FIELD_FL_VAR |
+		HIST_FIELD_FL_VAR_ONLY;
+
+	alias = create_hist_field(hist_data, NULL, flags, var_name);
+	if (!alias)
+		return NULL;
+
+	alias->fn = var_ref->fn;
+	alias->operands[0] = var_ref;
+	alias->var.idx = var_ref->var.idx;
+	alias->var.hist_data = var_ref->hist_data;
+	alias->size = var_ref->size;
+	alias->is_signed = var_ref->is_signed;
+	alias->type = kstrdup(var_ref->type, GFP_KERNEL);
+	if (!alias->type) {
+		kfree(alias->type);
+		destroy_hist_field(alias, 0);
+		return NULL;
+	}
+
+	return alias;
+}
+
 struct hist_field *parse_atom(struct hist_trigger_data *hist_data,
 			      struct trace_event_file *file, char *str,
 			      unsigned long *flags, char *var_name)
@@ -2036,6 +2066,13 @@ struct hist_field *parse_atom(struct his
 		if (hist_field) {
 			hist_data->var_refs[hist_data->n_var_refs] = hist_field;
 			hist_field->var_ref_idx = hist_data->n_var_refs++;
+			if (var_name) {
+				hist_field = create_alias(hist_data, hist_field, var_name);
+				if (!hist_field) {
+					ret = -ENOMEM;
+					goto out;
+				}
+			}
 			return hist_field;
 		}
 
@@ -4152,8 +4189,11 @@ static void hist_field_print(struct seq_
 		seq_puts(m, "$common_timestamp");
 	else if (hist_field->flags & HIST_FIELD_FL_CPU)
 		seq_puts(m, "cpu");
-	else if (field_name)
+	else if (field_name) {
+		if (hist_field->flags & HIST_FIELD_FL_ALIAS)
+			seq_putc(m, '$');
 		seq_printf(m, "%s", field_name);
+	}
 
 	if (hist_field->flags) {
 		const char *flags_str = get_hist_field_flags(hist_field);
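`create_alias()` builds a new hist_field that borrows the referenced field's `fn` and resolves its name through `operands[0]`, which is also how `hist_field_name()` recurses for the `HIST_FIELD_FL_ALIAS` case above. The toy structs below (`field`, `make_alias` - hypothetical names, not kernel API) sketch that delegation pattern:

```c
#include <stdlib.h>

/* Minimal stand-in for the hist_field alias wiring the patch adds. */
struct field {
    const char *name;
    struct field *operands[1];      /* alias points back at the real ref */
    long (*fn)(struct field *f);
    long value;
};

static long ref_fn(struct field *f)
{
    return f->value;
}

/*
 * Like create_alias(): allocate a new node under the alias name that
 * borrows the referenced field's fn and records the referent in
 * operands[0], so name resolution can chase the link back.
 */
static struct field *make_alias(struct field *ref, const char *alias_name)
{
    struct field *alias = calloc(1, sizeof(*alias));

    if (!alias)
        return NULL;
    alias->name = alias_name;
    alias->fn = ref->fn;
    alias->operands[0] = ref;
    alias->value = ref->value;
    return alias;
}
```

Evaluating the alias goes through the borrowed `fn`, while `operands[0]` still names the underlying variable, which is what lets `onmatch($alias)` find the original.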
debian/patches/features/all/rt/0029-tracing-Add-last-error-error-facility-for-hist-trigg.patch (new file, 500 lines)
@@ -0,0 +1,500 @@
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:30 -0500
Subject: [PATCH 29/32] tracing: Add 'last error' error facility for hist
 triggers
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

With the addition of variables and actions, it's become necessary to
provide more detailed error information to users about syntax errors.

Add a 'last error' facility accessible via the erroring event's 'hist'
file.  Reading the hist file after an error will display more detailed
information about what went wrong, if information is available.  This
extended error information will be available until the next hist
trigger command for that event.

  # echo xxx > /sys/kernel/debug/tracing/events/sched/sched_wakeup/trigger
  echo: write error: Invalid argument

  # cat /sys/kernel/debug/tracing/events/sched/sched_wakeup/hist

  ERROR: Couldn't yyy: zzz
    Last command: xxx

Also add specific error messages for variable and action errors.

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 Documentation/trace/events.txt   |  19 ++++
 kernel/trace/trace_events_hist.c | 181 ++++++++++++++++++++++++++++++++++++---
 2 files changed, 188 insertions(+), 12 deletions(-)

--- a/Documentation/trace/events.txt
+++ b/Documentation/trace/events.txt
@@ -686,6 +686,25 @@ triggers (you have to use '!' for each o
                                interpreted as microseconds.
     cpu                  int - the cpu on which the event occurred.
 
+  Extended error information
+  --------------------------
+
+  For some error conditions encountered when invoking a hist trigger
+  command, extended error information is available via the
+  corresponding event's 'hist' file.  Reading the hist file after an
+  error will display more detailed information about what went wrong,
+  if information is available.  This extended error information will
+  be available until the next hist trigger command for that event.
+
+  If available for a given error condition, the extended error
+  information and usage takes the following form:
+
+    # echo xxx > /sys/kernel/debug/tracing/events/sched/sched_wakeup/trigger
+    echo: write error: Invalid argument
+
+    # cat /sys/kernel/debug/tracing/events/sched/sched_wakeup/hist
+      ERROR: Couldn't yyy: zzz
+        Last command: xxx
 
 6.2 'hist' trigger examples
 ---------------------------
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -288,6 +288,7 @@ struct hist_trigger_data {
 	struct field_var *max_vars[SYNTH_FIELDS_MAX];
 	unsigned int n_max_vars;
 	unsigned int n_max_var_str;
+	char *last_err;
 };
 
 struct synth_field {
@@ -332,6 +333,83 @@ struct action_data {
 	struct hist_field *onmax_var;
 };
 
+
+static char *hist_err_str;
+static char *last_hist_cmd;
+
+static int hist_err_alloc(void)
+{
+	int ret = 0;
+
+	last_hist_cmd = kzalloc(MAX_FILTER_STR_VAL, GFP_KERNEL);
+	hist_err_str = kzalloc(MAX_FILTER_STR_VAL, GFP_KERNEL);
+	if (!last_hist_cmd || !hist_err_str)
+		ret = -ENOMEM;
+
+	return ret;
+}
+
+static void last_cmd_set(char *str)
+{
+	if (!last_hist_cmd || !str)
+		return;
+
+	if (strlen(last_hist_cmd) > MAX_FILTER_STR_VAL - 1)
+		return;
+
+	strcpy(last_hist_cmd, str);
+}
+
+static void hist_err(char *str, char *var)
+{
+	int maxlen = MAX_FILTER_STR_VAL - 1;
+
+	if (strlen(hist_err_str))
+		return;
+
+	if (!hist_err_str || !str)
+		return;
+
+	if (!var)
+		var = "";
+
+	if (strlen(hist_err_str) + strlen(str) + strlen(var) > maxlen)
+		return;
+
+	strcat(hist_err_str, str);
+	strcat(hist_err_str, var);
+}
+
+static void hist_err_event(char *str, char *system, char *event, char *var)
+{
+	char err[MAX_FILTER_STR_VAL];
+
+	if (system && var)
+		sprintf(err, "%s.%s.%s", system, event, var);
+	else if (system)
+		sprintf(err, "%s.%s", system, event);
+	else
+		strcpy(err, var);
+
+	hist_err(str, err);
+}
+
+static void hist_err_clear(void)
+{
+	if (!hist_err_str)
+		return;
+
+	hist_err_str[0] = '\0';
+}
+
+static bool have_hist_err(void)
+{
+	if (hist_err_str && strlen(hist_err_str))
+		return true;
+
+	return false;
+}
+
 static LIST_HEAD(synth_event_list);
 static DEFINE_MUTEX(synth_event_mutex);
 
@@ -1954,12 +2032,21 @@ static struct hist_field *create_var_ref
 	return ref_field;
 }
 
+static bool is_common_field(char *var_name)
+{
+	if (strncmp(var_name, "$common_timestamp", strlen("$common_timestamp")) == 0)
+		return true;
+
+	return false;
+}
+
 static struct hist_field *parse_var_ref(char *system, char *event_name,
 					char *var_name)
 {
 	struct hist_field *var_field = NULL, *ref_field = NULL;
 
-	if (!var_name || strlen(var_name) < 2 || var_name[0] != '$')
+	if (!var_name || strlen(var_name) < 2 || var_name[0] != '$' ||
+	    is_common_field(var_name))
 		return NULL;
 
 	var_name++;
@@ -1968,6 +2055,10 @@ static struct hist_field *parse_var_ref(
 	if (var_field)
 		ref_field = create_var_ref(var_field);
 
+	if (!ref_field)
+		hist_err_event("Couldn't find variable: $",
+			       system, event_name, var_name);
+
 	return ref_field;
 }
 
@@ -2426,8 +2517,11 @@ create_field_var_hist(struct hist_trigge
 	char *cmd;
 	int ret;
 
-	if (target_hist_data->n_field_var_hists >= SYNTH_FIELDS_MAX)
+	if (target_hist_data->n_field_var_hists >= SYNTH_FIELDS_MAX) {
+		hist_err_event("onmatch: Too many field variables defined: ",
+			       system, event_name, field_name);
 		return ERR_PTR(-EINVAL);
+	}
 
 	tr = top_trace_array();
 	if (!tr)
@@ -2435,13 +2529,18 @@ create_field_var_hist(struct hist_trigge
 
 	file = event_file(system, event_name);
 	if (IS_ERR(file)) {
+		hist_err_event("onmatch: Event file not found: ",
+			       system, event_name, field_name);
 		ret = PTR_ERR(file);
 		return ERR_PTR(ret);
 	}
 
 	hist_data = find_compatible_hist(target_hist_data, file);
-	if (!hist_data)
+	if (!hist_data) {
+		hist_err_event("onmatch: Matching event histogram not found: ",
+			       system, event_name, field_name);
 		return ERR_PTR(-EINVAL);
+	}
 
 	var_hist = kzalloc(sizeof(*var_hist), GFP_KERNEL);
 	if (!var_hist)
@@ -2489,6 +2588,8 @@ create_field_var_hist(struct hist_trigge
 		kfree(cmd);
 		kfree(var_hist->cmd);
 		kfree(var_hist);
+		hist_err_event("onmatch: Couldn't create histogram for field: ",
+			       system, event_name, field_name);
 		return ERR_PTR(ret);
 	}
 
@@ -2500,6 +2601,8 @@ create_field_var_hist(struct hist_trigge
 		kfree(cmd);
 		kfree(var_hist->cmd);
 		kfree(var_hist);
+		hist_err_event("onmatch: Couldn't find synthetic variable: ",
+			       system, event_name, field_name);
 		return ERR_PTR(-EINVAL);
 	}
 
@@ -2636,18 +2739,21 @@ static struct field_var *create_field_va
 	int ret = 0;
 
 	if (hist_data->n_field_vars >= SYNTH_FIELDS_MAX) {
+		hist_err("Too many field variables defined: ", field_name);
 		ret = -EINVAL;
 		goto err;
 	}
 
 	val = parse_atom(hist_data, file, field_name, &flags, NULL);
 	if (IS_ERR(val)) {
+		hist_err("Couldn't parse field variable: ", field_name);
 		ret = PTR_ERR(val);
 		goto err;
 	}
 
 	var = create_var(hist_data, file, field_name, val->size, val->type);
 	if (IS_ERR(var)) {
+		hist_err("Couldn't create or find variable: ", field_name);
 		kfree(val);
 		ret = PTR_ERR(var);
 		goto err;
@@ -2772,14 +2878,18 @@ static int onmax_create(struct hist_trig
 	int ret = 0;
 
 	onmax_var_str = data->onmax_var_str;
-	if (onmax_var_str[0] != '$')
+	if (onmax_var_str[0] != '$') {
+		hist_err("onmax: For onmax(x), x must be a variable: ", onmax_var_str);
 		return -EINVAL;
+	}
 	onmax_var_str++;
 
 	event_name = trace_event_name(call);
 	var_field = find_target_event_var(hist_data, NULL, NULL, onmax_var_str);
-	if (!var_field)
+	if (!var_field) {
+		hist_err("onmax: Couldn't find onmax variable: ", onmax_var_str);
 		return -EINVAL;
+	}
 
 	flags = HIST_FIELD_FL_VAR_REF;
 	ref_field = create_hist_field(hist_data, NULL, flags, NULL);
|
||||
@@ -2803,6 +2913,7 @@ static int onmax_create(struct hist_trig
|
||||
data->max_var_ref_idx = var_ref_idx;
|
||||
max_var = create_var(hist_data, file, "max", sizeof(u64), "u64");
|
||||
if (IS_ERR(max_var)) {
|
||||
+ hist_err("onmax: Couldn't create onmax variable: ", "max");
|
||||
ret = PTR_ERR(max_var);
|
||||
goto out;
|
||||
}
|
||||
@@ -2815,6 +2926,7 @@ static int onmax_create(struct hist_trig
|
||||
|
||||
field_var = create_target_field_var(hist_data, NULL, NULL, param);
|
||||
if (IS_ERR(field_var)) {
|
||||
+ hist_err("onmax: Couldn't create field variable: ", param);
|
||||
ret = PTR_ERR(field_var);
|
||||
kfree(param);
|
||||
goto out;
|
||||
@@ -2847,6 +2959,7 @@ static int parse_action_params(char *par
|
||||
|
||||
param = strstrip(param);
|
||||
if (strlen(param) < 2) {
|
||||
+ hist_err("Invalid action param: ", param);
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
@@ -3004,6 +3117,9 @@ onmatch_find_var(struct hist_trigger_dat
|
||||
hist_field = find_event_var(system, event, var);
|
||||
}
|
||||
|
||||
+ if (!hist_field)
|
||||
+ hist_err_event("onmatch: Couldn't find onmatch param: $", system, event, var);
|
||||
+
|
||||
return hist_field;
|
||||
}
|
||||
|
||||
@@ -3055,6 +3171,7 @@ static int onmatch_create(struct hist_tr
|
||||
|
||||
event = find_synth_event(data->synth_event_name);
|
||||
if (!event) {
|
||||
+ hist_err("onmatch: Couldn't find synthetic event: ", data->synth_event_name);
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
@@ -3094,6 +3211,7 @@ static int onmatch_create(struct hist_tr
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
+
|
||||
if (check_synth_field(event, hist_field, field_pos) == 0) {
|
||||
var_ref = create_var_ref(hist_field);
|
||||
if (!var_ref) {
|
||||
@@ -3108,12 +3226,15 @@ static int onmatch_create(struct hist_tr
|
||||
continue;
|
||||
}
|
||||
|
||||
+ hist_err_event("onmatch: Param type doesn't match synthetic event field type: ",
|
||||
+ system, event_name, param);
|
||||
kfree(p);
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (field_pos != event->n_fields) {
|
||||
+ hist_err("onmatch: Param count doesn't match synthetic event field count: ", event->name);
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
@@ -3141,31 +3262,44 @@ static struct action_data *onmatch_parse
|
||||
return ERR_PTR(-ENOMEM);
|
||||
|
||||
match_event = strsep(&str, ")");
|
||||
- if (!match_event || !str)
|
||||
+ if (!match_event || !str) {
|
||||
+ hist_err("onmatch: Missing closing paren: ", match_event);
|
||||
goto free;
|
||||
+ }
|
||||
|
||||
match_event_system = strsep(&match_event, ".");
|
||||
- if (!match_event)
|
||||
+ if (!match_event) {
|
||||
+ hist_err("onmatch: Missing subsystem for match event: ", match_event_system);
|
||||
goto free;
|
||||
+ }
|
||||
|
||||
- if (IS_ERR(event_file(match_event_system, match_event)))
|
||||
+ if (IS_ERR(event_file(match_event_system, match_event))) {
|
||||
+ hist_err_event("onmatch: Invalid subsystem or event name: ",
|
||||
+ match_event_system, match_event, NULL);
|
||||
goto free;
|
||||
+ }
|
||||
|
||||
data->match_event = kstrdup(match_event, GFP_KERNEL);
|
||||
data->match_event_system = kstrdup(match_event_system, GFP_KERNEL);
|
||||
|
||||
strsep(&str, ".");
|
||||
- if (!str)
|
||||
+ if (!str) {
|
||||
+ hist_err("onmatch: Missing . after onmatch(): ", str);
|
||||
goto free;
|
||||
+ }
|
||||
|
||||
synth_event_name = strsep(&str, "(");
|
||||
- if (!synth_event_name || !str)
|
||||
+ if (!synth_event_name || !str) {
|
||||
+ hist_err("onmatch: Missing opening paramlist paren: ", synth_event_name);
|
||||
goto free;
|
||||
+ }
|
||||
data->synth_event_name = kstrdup(synth_event_name, GFP_KERNEL);
|
||||
|
||||
params = strsep(&str, ")");
|
||||
- if (!params || !str || (str && strlen(str)))
|
||||
+ if (!params || !str || (str && strlen(str))) {
|
||||
+ hist_err("onmatch: Missing closing paramlist paren: ", params);
|
||||
goto free;
|
||||
+ }
|
||||
|
||||
ret = parse_action_params(params, data);
|
||||
if (ret)
|
||||
@@ -3217,6 +3351,7 @@ static int create_val_field(struct hist_
|
||||
if (field_str && var_name) {
|
||||
if (find_var(file, var_name) &&
|
||||
!hist_data->remove) {
|
||||
+ hist_err("Variable already defined: ", var_name);
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
@@ -3224,6 +3359,7 @@ static int create_val_field(struct hist_
|
||||
flags |= HIST_FIELD_FL_VAR;
|
||||
hist_data->n_vars++;
|
||||
if (hist_data->n_vars > TRACING_MAP_VARS_MAX) {
|
||||
+ hist_err("Too many variables defined: ", var_name);
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
@@ -3234,6 +3370,7 @@ static int create_val_field(struct hist_
|
||||
field_str = var_name;
|
||||
var_name = NULL;
|
||||
} else {
|
||||
+ hist_err("Malformed assignment: ", var_name);
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
@@ -3248,6 +3385,7 @@ static int create_val_field(struct hist_
|
||||
hist_field = parse_atom(hist_data, file, field_str,
|
||||
&flags, var_name);
|
||||
if (IS_ERR(hist_field)) {
|
||||
+ hist_err("Unable to parse atom: ", field_str);
|
||||
ret = PTR_ERR(hist_field);
|
||||
goto out;
|
||||
}
|
||||
@@ -4138,6 +4276,11 @@ static int hist_show(struct seq_file *m,
|
||||
hist_trigger_show(m, data, n++);
|
||||
}
|
||||
|
||||
+ if (have_hist_err()) {
|
||||
+ seq_printf(m, "\nERROR: %s\n", hist_err_str);
|
||||
+ seq_printf(m, " Last command: %s\n", last_hist_cmd);
|
||||
+ }
|
||||
+
|
||||
out_unlock:
|
||||
mutex_unlock(&event_mutex);
|
||||
|
||||
@@ -4509,6 +4652,7 @@ static int hist_register_trigger(char *g
|
||||
if (named_data) {
|
||||
if (!hist_trigger_match(data, named_data, named_data,
|
||||
true)) {
|
||||
+ hist_err("Named hist trigger doesn't match existing named trigger (includes variables): ", hist_data->attrs->name);
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
@@ -4528,13 +4672,16 @@ static int hist_register_trigger(char *g
|
||||
test->paused = false;
|
||||
else if (hist_data->attrs->clear)
|
||||
hist_clear(test);
|
||||
- else
|
||||
+ else {
|
||||
+ hist_err("Hist trigger already exists", NULL);
|
||||
ret = -EEXIST;
|
||||
+ }
|
||||
goto out;
|
||||
}
|
||||
}
|
||||
new:
|
||||
if (hist_data->attrs->cont || hist_data->attrs->clear) {
|
||||
+ hist_err("Can't clear or continue a nonexistent hist trigger", NULL);
|
||||
ret = -ENOENT;
|
||||
goto out;
|
||||
}
|
||||
@@ -4701,6 +4848,11 @@ static int event_hist_trigger_func(struc
|
||||
char *trigger, *p;
|
||||
int ret = 0;
|
||||
|
||||
+ if (glob && strlen(glob)) {
|
||||
+ last_cmd_set(param);
|
||||
+ hist_err_clear();
|
||||
+ }
|
||||
+
|
||||
if (!param)
|
||||
return -EINVAL;
|
||||
|
||||
@@ -4804,6 +4956,9 @@ static int event_hist_trigger_func(struc
|
||||
/* Just return zero, not the number of registered triggers */
|
||||
ret = 0;
|
||||
out:
|
||||
+ if (ret == 0)
|
||||
+ hist_err_clear();
|
||||
+
|
||||
return ret;
|
||||
out_unreg:
|
||||
cmd_ops->unreg(glob+1, trigger_ops, trigger_data, file);
|
||||
@@ -5002,6 +5157,8 @@ static __init int trace_events_hist_init
|
||||
goto err;
|
||||
}
|
||||
|
||||
+ hist_err_alloc();
|
||||
+
|
||||
return err;
|
||||
err:
|
||||
pr_warn("Could not create tracefs 'synthetic_events' entry\n");
|
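The hist_err()/hist_err_event() calls added throughout this patch all feed a single "last error" buffer that hist_show() appends when the event's 'hist' file is read. As a rough illustration of the report format produced by the two seq_printf() calls above — the error text and command below are made-up examples, not output from a real trace session — in plain shell:

```shell
# Illustrative values only: in the kernel these come from hist_err_str
# and last_hist_cmd, filled in by the hist_err*() helpers above.
hist_err_str='onmatch: Couldn'\''t find synthetic event: wakeup_latency'
last_hist_cmd='hist:keys=pid:onmatch(sched.sched_wakeup).wakeup_latency($lat)'

# Mirror the two seq_printf() calls in hist_show():
report=$(printf 'ERROR: %s\n  Last command: %s\n' "$hist_err_str" "$last_hist_cmd")
echo "$report"
```

The point of the buffer is that a failed `echo ... >> trigger` can be diagnosed after the fact by reading the hist file, rather than from the bare -EINVAL the write returns.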
403
debian/patches/features/all/rt/0030-tracing-Add-inter-event-hist-trigger-Documentation.patch
vendored
Normal file
@@ -0,0 +1,403 @@
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:31 -0500
Subject: [PATCH 30/32] tracing: Add inter-event hist trigger Documentation
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Add background and details on inter-event hist triggers, including
hist variables, synthetic events, and actions.

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 Documentation/trace/events.txt | 376 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 376 insertions(+)

--- a/Documentation/trace/events.txt
+++ b/Documentation/trace/events.txt
@@ -571,6 +571,7 @@ triggers (you have to use '!' for each o
 	    .sym-offset display an address as a symbol and offset
 	    .syscall    display a syscall id as a system call name
 	    .execname   display a common_pid as a program name
+	    .usecs      display a $common_timestamp in microseconds
 
 Note that in general the semantics of a given field aren't
 interpreted when applying a modifier to it, but there are some
@@ -2101,3 +2102,378 @@ triggers (you have to use '!' for each o
   Hits: 489
   Entries: 7
   Dropped: 0
+
+6.3 Inter-event hist triggers
+-----------------------------
+
+Inter-event hist triggers are hist triggers that combine values from
+one or more other events and create a histogram using that data. Data
+from an inter-event histogram can in turn become the source for
+further combined histograms, thus providing a chain of related
+histograms, which is important for some applications.
+
+The most important example of an inter-event quantity that can be used
+in this manner is latency, which is simply a difference in timestamps
+between two events (although trace events don't have an externally
+visible timestamp field, the inter-event hist trigger support adds a
+pseudo-field to all events named '$common_timestamp' which can be used
+as if it were an actual event field). Although latency is the most
+important inter-event quantity, note that because the support is
+completely general across the trace event subsystem, any event field
+can be used in an inter-event quantity.
+
+An example of a histogram that combines data from other histograms
+into a useful chain would be a 'wakeupswitch latency' histogram that
+combines a 'wakeup latency' histogram and a 'switch latency'
+histogram.
+
+Normally, a hist trigger specification consists of a (possibly
+compound) key along with one or more numeric values, which are
+continually updated sums associated with that key. A histogram
+specification in this case consists of individual key and value
+specifications that refer to trace event fields associated with a
+single event type.
+
+The inter-event hist trigger extension allows fields from multiple
+events to be referenced and combined into a multi-event histogram
+specification. In support of this overall goal, a few enabling
+features have been added to the hist trigger support:
+
+  - In order to compute an inter-event quantity, a value from one
+    event needs to be saved and then referenced from another event.
+    This requires the introduction of support for histogram
+    'variables'.
+
+  - The computation of inter-event quantities and their combination
+    require some minimal amount of support for applying simple
+    expressions to variables (+ and -).
+
+  - A histogram consisting of inter-event quantities isn't logically a
+    histogram on either event (so having the 'hist' file for either
+    event host the histogram output doesn't really make sense). To
+    address the idea that the histogram is associated with a
+    combination of events, support is added allowing the creation of
+    'synthetic' events that are events derived from other events.
+    These synthetic events are full-fledged events just like any other
+    and can be used as such, as for instance to create the
+    'combination' histograms mentioned previously.
+
+  - A set of 'actions' can be associated with histogram entries -
+    these can be used to generate the previously mentioned synthetic
+    events, but can also be used for other purposes, such as for
+    example saving context when a 'max' latency has been hit.
+
+  - Trace events don't have a 'timestamp' associated with them, but
+    there is an implicit timestamp saved along with an event in the
+    underlying ftrace ring buffer. This timestamp is now exposed as a
+    synthetic field named '$common_timestamp' which can be used in
+    histograms as if it were any other event field. Note that it has
+    a '$' prefixed to it - this is meant to indicate that it isn't an
+    actual field in the trace format but rather is a synthesized value
+    that nonetheless can be used as if it were an actual field. By
+    default it is in units of nanoseconds; appending '.usecs' to a
+    common_timestamp field changes the units to microseconds.
+
+These features are described in more detail in the following sections.
+
+6.3.1 Histogram Variables
+-------------------------
+
+Variables are simply named locations used for saving and retrieving
+values between matching events. A 'matching' event is defined as an
+event that has a matching key - if a variable is saved for a histogram
+entry corresponding to that key, any subsequent event with a matching
+key can access that variable.
+
+A variable's value is normally available to any subsequent event until
+it is set to something else by a subsequent event. The one exception
+to that rule is that any variable used in an expression is essentially
+'read-once' - once it's used by an expression in a subsequent event,
+it's reset to its 'unset' state, which means it can't be used again
+unless it's set again. This ensures not only that an event doesn't
+use an uninitialized variable in a calculation, but that that variable
+is used only once and not for any unrelated subsequent match.
+
+The basic syntax for saving a variable is to simply prefix a unique
+variable name not corresponding to any keyword along with an '=' sign
+to any event field.
+
+Either keys or values can be saved and retrieved in this way. This
+creates a variable named 'ts0' for a histogram entry with the key
+'next_pid':
+
+  # echo 'hist:keys=next_pid:vals=ts0=$common_timestamp ...' >> event/trigger
+
+The ts0 variable can be accessed by any subsequent event having the
+same pid as 'next_pid'.
+
+Variable references are formed by prepending the variable name with
+the '$' sign. Thus for example, the ts0 variable above would be
+referenced as '$ts0' in subsequent expressions.
+
+Because 'vals=' is used, the $common_timestamp variable value above
+will also be summed as a normal histogram value would (though for a
+timestamp it makes little sense).
+
+The below shows that a key value can also be saved in the same way:
+
+  # echo 'hist:key=timer_pid=common_pid ...' >> event/trigger
+
+If a variable isn't a key variable or prefixed with 'vals=', the
+associated event field will be saved in a variable but won't be summed
+as a value:
+
+  # echo 'hist:keys=next_pid:ts1=$common_timestamp ...' >> event/trigger
+
+Multiple variables can be assigned at the same time. The below would
+result in both ts0 and b being created as variables, with both
+common_timestamp and field1 additionally being summed as values:
+
+  # echo 'hist:keys=pid:vals=ts0=$common_timestamp,b=field1 ...' >> event/trigger
+
+Any number of variables not bound to a 'vals=' prefix can also be
+assigned by simply separating them with colons. Below is the same
+thing but without the values being summed in the histogram:
+
+  # echo 'hist:keys=pid:ts0=$common_timestamp:b=field1 ...' >> event/trigger
+
+Variables set as above can be referenced and used in expressions on
+another event.
+
+For example, here's how a latency can be calculated:
+
+  # echo 'hist:keys=pid,prio:ts0=$common_timestamp ...' >> event1/trigger
+  # echo 'hist:keys=next_pid:wakeup_lat=$common_timestamp-$ts0 ...' >> event2/trigger
+
+In the first line above, the event's timestamp is saved into the
+variable ts0. In the next line, ts0 is subtracted from the second
+event's timestamp to produce the latency, which is then assigned into
+yet another variable, 'wakeup_lat'. The hist trigger below in turn
+makes use of the wakeup_lat variable to compute a combined latency
+using the same key and variable from yet another event:
+
+  # echo 'hist:key=pid:wakeupswitch_lat=$wakeup_lat+$switchtime_lat ...' >> event3/trigger
+
+6.3.2 Synthetic Events
+----------------------
+
+Synthetic events are user-defined events generated from hist trigger
+variables or fields associated with one or more other events. Their
+purpose is to provide a mechanism for displaying data spanning
+multiple events consistent with the existing and already familiar
+usage for normal events.
+
+To define a synthetic event, the user writes a simple specification
+consisting of the name of the new event along with one or more
+variables and their types, which can be any valid field type,
+separated by semicolons, to the tracing/synthetic_events file.
+
+For instance, the following creates a new event named 'wakeup_latency'
+with 3 fields: lat, pid, and prio. Each of those fields is simply a
+variable reference to a variable on another event:
+
+  # echo 'wakeup_latency \
+          u64 lat; \
+          pid_t pid; \
+          int prio' >> \
+          /sys/kernel/debug/tracing/synthetic_events
+
+Reading the tracing/synthetic_events file lists all the currently
+defined synthetic events, in this case the event defined above:
+
+  # cat /sys/kernel/debug/tracing/synthetic_events
+    wakeup_latency u64 lat; pid_t pid; int prio
+
+An existing synthetic event definition can be removed by prepending
+the command that defined it with a '!':
+
+  # echo '!wakeup_latency u64 lat pid_t pid int prio' >> \
+    /sys/kernel/debug/tracing/synthetic_events
+
+At this point, there isn't yet an actual 'wakeup_latency' event
+instantiated in the event subsystem - for this to happen, a 'hist
+trigger action' needs to be instantiated and bound to actual fields
+and variables defined on other events (see Section 6.3.3 below).
+
+Once that is done, an event instance is created, and a histogram can
+be defined using it:
+
+  # echo 'hist:keys=pid,prio,lat.log2:sort=pid,lat' >> \
+        /sys/kernel/debug/tracing/events/synthetic/wakeup_latency/trigger
+
+The new event is created under the tracing/events/synthetic/ directory
+and looks and behaves just like any other event:
+
+  # ls /sys/kernel/debug/tracing/events/synthetic/wakeup_latency
+        enable  filter  format  hist  id  trigger
+
+Like any other event, once a histogram is enabled for the event, the
+output can be displayed by reading the event's 'hist' file.
+
+6.3.3 Hist trigger 'actions'
+----------------------------
+
+A hist trigger 'action' is a function that's executed whenever a
+histogram entry is added or updated.
+
+The default 'action' if no special function is explicitly specified is
+as it always has been, to simply update the set of values associated
+with an entry. Some applications, however, may want to perform
+additional actions at that point, such as generate another event, or
+compare and save a maximum.
+
+The following additional actions are available. To specify an action
+for a given event, simply specify the action between colons in the
+hist trigger specification.
+
+  - onmatch(matching.event).<synthetic_event_name>(param list)
+
+    The 'onmatch(matching.event).<synthetic_event_name>(params)' hist
+    trigger action is invoked whenever an event matches and the
+    histogram entry would be added or updated. It causes the named
+    synthetic event to be generated with the values given in the
+    'param list'. The result is the generation of a synthetic event
+    that consists of the values contained in those variables at the
+    time the invoking event was hit.
+
+    The 'param list' consists of one or more parameters which may be
+    either variables or fields defined on either the 'matching.event'
+    or the target event. The variables or fields specified in the
+    param list may be either fully-qualified or unqualified. If a
+    variable is specified as unqualified, it must be unique between
+    the two events. A field name used as a param can be unqualified
+    if it refers to the target event, but must be fully qualified if
+    it refers to the matching event. A fully-qualified name is of the
+    form 'system.event_name.$var_name' or 'system.event_name.field'.
+
+    The 'matching.event' specification is simply the fully qualified
+    event name of the event that matches the target event for the
+    onmatch() functionality, in the form 'system.event_name'.
+
+    Finally, the number and type of variables/fields in the 'param
+    list' must match the number and types of the fields in the
+    synthetic event being generated.
+
+    As an example the below defines a simple synthetic event and uses
+    a variable defined on the sched_wakeup_new event as a parameter
+    when invoking the synthetic event. Here we define the synthetic
+    event:
+
+    # echo 'wakeup_new_test pid_t pid' >> \
+           /sys/kernel/debug/tracing/synthetic_events
+
+    # cat /sys/kernel/debug/tracing/synthetic_events
+        wakeup_new_test pid_t pid
+
+    The following hist trigger both defines the missing testpid
+    variable and specifies an onmatch() action that generates a
+    wakeup_new_test synthetic event whenever a sched_wakeup_new event
+    occurs, which because of the 'if comm == "cyclictest"' filter only
+    happens when the executable is cyclictest:
+
+    # echo 'hist:keys=testpid=pid:onmatch(sched.sched_wakeup_new).\
+            wakeup_new_test($testpid) if comm=="cyclictest"' >> \
+            /sys/kernel/debug/tracing/events/sched/sched_wakeup_new/trigger
+
+    Creating and displaying a histogram based on those events is now
+    just a matter of using the fields and new synthetic event in the
+    tracing/events/synthetic directory, as usual:
+
+    # echo 'hist:keys=pid:sort=pid' >> \
+           /sys/kernel/debug/tracing/events/synthetic/wakeup_new_test/trigger
+
+    Running 'cyclictest' should cause wakeup_new events to generate
+    wakeup_new_test synthetic events which should result in histogram
+    output in the wakeup_new_test event's hist file:
+
+    # cat /sys/kernel/debug/tracing/events/synthetic/wakeup_new_test/hist
+
+    A more typical usage would be to use two events to calculate a
+    latency. The following example uses a set of hist triggers to
+    produce a 'wakeup_latency' histogram:
+
+    First, we define a 'wakeup_latency' synthetic event:
+
+    # echo 'wakeup_latency u64 lat; pid_t pid; int prio' >> \
+           /sys/kernel/debug/tracing/synthetic_events
+
+    Next, we specify that whenever we see a sched_wakeup event for a
+    cyclictest thread, save the timestamp in a 'ts0' variable:
+
+    # echo 'hist:keys=saved_pid=pid:ts0=$common_timestamp.usecs \
+            if comm=="cyclictest"' >> \
+            /sys/kernel/debug/tracing/events/sched/sched_wakeup/trigger
+
+    Then, when the corresponding thread is actually scheduled onto the
+    CPU by a sched_switch event, calculate the latency and use that
+    along with another variable and an event field to generate a
+    wakeup_latency synthetic event:
+
+    # echo 'hist:keys=next_pid:wakeup_lat=$common_timestamp.usecs-$ts0:\
+            onmatch(sched.sched_wakeup).wakeup_latency($wakeup_lat,\
+            $saved_pid,next_prio) if next_comm=="cyclictest"' >> \
+            /sys/kernel/debug/tracing/events/sched/sched_switch/trigger
+
+    We also need to create a histogram on the wakeup_latency synthetic
+    event in order to aggregate the generated synthetic event data:
+
+    # echo 'hist:keys=pid,prio,lat:sort=pid,lat' >> \
+           /sys/kernel/debug/tracing/events/synthetic/wakeup_latency/trigger
+
+    Finally, once we've run cyclictest to actually generate some
+    events, we can see the output by looking at the wakeup_latency
+    synthetic event's hist file:
+
+    # cat /sys/kernel/debug/tracing/events/synthetic/wakeup_latency/hist
+
+  - onmax(var).save(field,...)
+
+    The 'onmax(var).save(field,...)' hist trigger action is invoked
+    whenever the value of 'var' associated with a histogram entry
+    exceeds the current maximum contained in that variable.
+
+    The end result is that the trace event fields specified as the
+    onmax.save() params will be saved if 'var' exceeds the current
+    maximum for that hist trigger entry. This allows context from the
+    event that exhibited the new maximum to be saved for later
+    reference. When the histogram is displayed, additional fields
+    displaying the saved values will be printed.
+
+    As an example the below defines a couple of hist triggers, one for
+    sched_wakeup and another for sched_switch, keyed on pid. Whenever
+    a sched_wakeup occurs, the timestamp is saved in the entry
+    corresponding to the current pid, and when the scheduler switches
+    back to that pid, the timestamp difference is calculated. If the
+    resulting latency, stored in wakeup_lat, exceeds the current
+    maximum latency, the values specified in the save() fields are
+    recorded:
+
+    # echo 'hist:keys=pid:ts0=$common_timestamp.usecs \
+            if comm=="cyclictest"' >> \
+            /sys/kernel/debug/tracing/events/sched/sched_wakeup/trigger
+
+    # echo 'hist:keys=next_pid:\
+            wakeup_lat=$common_timestamp.usecs-$ts0:\
+            onmax($wakeup_lat).save(next_comm,prev_pid,prev_prio,prev_comm) \
+            if next_comm=="cyclictest"' >> \
+            /sys/kernel/debug/tracing/events/sched/sched_switch/trigger
+
+    When the histogram is displayed, the max value and the saved
+    values corresponding to the max are displayed following the rest
+    of the fields:
+
+    # cat /sys/kernel/debug/tracing/events/sched/sched_switch/hist
+      { next_pid: 2255 } hitcount: 239
+        common_timestamp-ts0: 0
+        max: 27
+        next_comm: cyclictest
+        prev_pid: 0  prev_prio: 120  prev_comm: swapper/1
+
+      { next_pid: 2256 } hitcount: 2355
+        common_timestamp-ts0: 0
+        max: 49  next_comm: cyclictest
+        prev_pid: 0  prev_prio: 120  prev_comm: swapper/0
+
+    Totals:
+        Hits: 12970
+        Entries: 2
+        Dropped: 0
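The wakeup_lat calculation in the documentation above is just a subtraction of two saved timestamps, with '.usecs' scaling nanoseconds down to microseconds before the difference is taken. The arithmetic the trigger performs can be sketched in plain shell, using made-up timestamp values:

```shell
# Hypothetical ring-buffer timestamps, in nanoseconds
ts0_ns=1000000000       # saved by the sched_wakeup trigger into ts0
switch_ns=1000027000    # observed later by the sched_switch trigger

# '.usecs' converts each timestamp to microseconds before the subtraction
wakeup_lat=$(( switch_ns / 1000 - ts0_ns / 1000 ))
echo "$wakeup_lat"      # 27, i.e. a 27-usec wakeup latency
```

In the kernel this arithmetic runs per matching histogram entry, keyed here on the pid, so each woken thread gets its own latency sample.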
40
debian/patches/features/all/rt/0031-tracing-Make-tracing_set_clock-non-static.patch
vendored
Normal file
@@ -0,0 +1,40 @@
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:32 -0500
Subject: [PATCH 31/32] tracing: Make tracing_set_clock() non-static
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

Allow tracing code outside of trace.c to access tracing_set_clock().

Some applications may require a particular clock in order to function
properly, such as latency calculations.

Also, add an accessor returning the current clock string.

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/trace/trace.c | 2 +-
 kernel/trace/trace.h | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)

--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -5887,7 +5887,7 @@ static int tracing_clock_show(struct seq
 	return 0;
 }
 
-static int tracing_set_clock(struct trace_array *tr, const char *clockstr)
+int tracing_set_clock(struct trace_array *tr, const char *clockstr)
 {
 	int i;
 
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -279,6 +279,7 @@ extern int trace_array_get(struct trace_
 extern void trace_array_put(struct trace_array *tr);
 
 extern int tracing_set_time_stamp_abs(struct trace_array *tr, bool abs);
+extern int tracing_set_clock(struct trace_array *tr, const char *clockstr);
 
 extern bool trace_clock_in_ns(struct trace_array *tr);
 
116  debian/patches/features/all/rt/0032-tracing-Add-a-clock-attribute-for-hist-triggers.patch vendored Normal file
@@ -0,0 +1,116 @@
From: Tom Zanussi <tom.zanussi@linux.intel.com>
Date: Mon, 26 Jun 2017 17:49:33 -0500
Subject: [PATCH 32/32] tracing: Add a clock attribute for hist triggers
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

The default clock if timestamps are used in a histogram is "global".
If timestamps aren't used, the clock is irrelevant.

Use the "clock=" param only if you want to override the default
"global" clock for a histogram with timestamps.

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 Documentation/trace/events.txt   |  9 +++++++++
 kernel/trace/trace_events_hist.c | 34 +++++++++++++++++++++++++++++++---
 2 files changed, 40 insertions(+), 3 deletions(-)

--- a/Documentation/trace/events.txt
+++ b/Documentation/trace/events.txt
@@ -2173,6 +2173,15 @@ specification. In support of this overa
 default it is in units of nanoseconds; appending '.usecs' to a
 common_timestamp field changes the units to microseconds.

+A note on inter-event timestamps: If $common_timestamp is used in a
+histogram, the trace buffer is automatically switched over to using
+absolute timestamps and the "global" trace clock, in order to avoid
+bogus timestamp differences with other clocks that aren't coherent
+across CPUs.  This can be overriden by specifying one of the other
+trace clocks instead, using the "clock=XXX" hist trigger attribute,
+where XXX is any of the clocks listed in the tracing/trace_clock
+pseudo-file.
+
 These features are decribed in more detail in the following sections.

 6.3.1 Histogram Variables
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -233,6 +233,7 @@ struct hist_trigger_attrs {
 	char *vals_str;
 	char *sort_key_str;
 	char *name;
+	char *clock;
 	bool pause;
 	bool cont;
 	bool clear;
@@ -1586,6 +1587,7 @@ static void destroy_hist_trigger_attrs(s
 	kfree(attrs->sort_key_str);
 	kfree(attrs->keys_str);
 	kfree(attrs->vals_str);
+	kfree(attrs->clock);
 	kfree(attrs);
 }

@@ -1625,7 +1627,16 @@ static int parse_assignment(char *str, s
 		attrs->sort_key_str = kstrdup(str, GFP_KERNEL);
 	else if (strncmp(str, "name=", strlen("name=")) == 0)
 		attrs->name = kstrdup(str, GFP_KERNEL);
-	else if (strncmp(str, "size=", strlen("size=")) == 0) {
+	else if (strncmp(str, "clock=", strlen("clock=")) == 0) {
+		strsep(&str, "=");
+		if (!str) {
+			ret = -EINVAL;
+			goto out;
+		}
+
+		str = strstrip(str);
+		attrs->clock = kstrdup(str, GFP_KERNEL);
+	} else if (strncmp(str, "size=", strlen("size=")) == 0) {
 		int map_bits = parse_map_size(str);

 		if (map_bits < 0) {
@@ -1688,6 +1699,12 @@ static struct hist_trigger_attrs *parse_
 			goto free;
 	}

+	if (!attrs->clock) {
+		attrs->clock = kstrdup("global", GFP_KERNEL);
+		if (!attrs->clock)
+			goto free;
+	}
+
 	return attrs;
 free:
 	destroy_hist_trigger_attrs(attrs);
@@ -4437,6 +4454,8 @@ static int event_hist_trigger_print(stru
 			seq_puts(m, ".descending");
 	}
 	seq_printf(m, ":size=%u", (1 << hist_data->map->map_bits));
+	if (hist_data->enable_timestamps)
+		seq_printf(m, ":clock=%s", hist_data->attrs->clock);

 	print_actions_spec(m, hist_data);

@@ -4702,10 +4721,19 @@ static int hist_register_trigger(char *g
 		goto out;
 	}

-	ret++;
+	if (hist_data->enable_timestamps) {
+		char *clock = hist_data->attrs->clock;
+
+		ret = tracing_set_clock(file->tr, hist_data->attrs->clock);
+		if (ret) {
+			hist_err("Couldn't set trace_clock: ", clock);
+			goto out;
+		}

-	if (hist_data->enable_timestamps)
 		tracing_set_time_stamp_abs(file->tr, true);
+	}
+
+	ret++;
 out:
 	return ret;
 }
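Taken together, patches 31 and 32 let a hist trigger pick its trace clock at attach time. A minimal usage sketch (the event, variable, and clock names below are illustrative; attaching the trigger assumes root on a kernel with these patches and tracefs mounted at /sys/kernel/debug/tracing):

```shell
# Build a trigger string: key on pid, record a timestamp, and override the
# default "global" clock with "x86-tsc" via the new clock= attribute.
TRIGGER='hist:keys=pid:ts0=$common_timestamp.usecs:clock=x86-tsc'
echo "$TRIGGER"

# On a patched kernel this would attach it (root + tracefs required):
#   echo "$TRIGGER" > /sys/kernel/debug/tracing/events/sched/sched_switch/trigger
# Reading the trigger back then shows the chosen clock, since
# event_hist_trigger_print() now emits ":clock=%s" when timestamps are on:
#   cat /sys/kernel/debug/tracing/events/sched/sched_switch/trigger
```

If clock= is omitted, parse_hist_trigger_attrs() falls back to kstrdup("global", ...), matching the documented default.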
@@ -1,7 +1,7 @@
 From: "Yadi.hu" <yadi.hu@windriver.com>
 Date: Wed, 10 Dec 2014 10:32:09 +0800
 Subject: ARM: enable irq in translation/section permission fault handlers
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

 Probably happens on all ARM, with
 CONFIG_PREEMPT_RT_FULL
@@ -1,8 +1,7 @@
-From 5ffb5cace8448c787c9f44e16a7b12f8c2866848 Mon Sep 17 00:00:00 2001
 From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
 Date: Tue, 4 Apr 2017 17:43:55 +0200
 Subject: [PATCH] CPUFREQ: Loongson2: drop set_cpus_allowed_ptr()
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

 It is pure mystery to me why we need to be on a specific CPU while
 looking up a value in an array.
@@ -1,7 +1,7 @@
 From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
 Date: Thu, 21 Mar 2013 19:01:05 +0100
 Subject: printk: Drop the logbuf_lock more often
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

 The lock is hold with irgs off. The latency drops 500us+ on my arm bugs
 with a "full" buffer after executing "dmesg" on the shell.
@@ -1,7 +1,7 @@
 From: Josh Cartwright <joshc@ni.com>
 Date: Thu, 11 Feb 2016 11:54:01 -0600
 Subject: KVM: arm/arm64: downgrade preempt_disable()d region to migrate_disable()
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

 kvm_arch_vcpu_ioctl_run() disables the use of preemption when updating
 the vgic and timer states to prevent the calling task from migrating to
@@ -1,7 +1,7 @@
 From: Marcelo Tosatti <mtosatti@redhat.com>
 Date: Wed, 8 Apr 2015 20:33:25 -0300
 Subject: KVM: lapic: mark LAPIC timer handler as irqsafe
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

 Since lapic timer handler only wakes up a simple waitqueue,
 it can be executed from hardirq context.
@@ -5,7 +5,7 @@ Cc: Anna Schumaker <anna.schumaker@netapp.com>,
     linux-nfs@vger.kernel.org, linux-kernel@vger.kernel.org,
     tglx@linutronix.de
 Subject: NFSv4: replace seqcount_t with a seqlock_t
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

 The raw_write_seqcount_begin() in nfs4_reclaim_open_state() bugs me
 because it maps to preempt_disable() in -RT which I can't have at this
@@ -1,162 +0,0 @@
From 8adeebf2a94f4625c39c25ec461d0d2ab623b3ad Mon Sep 17 00:00:00 2001
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Wed, 14 Jun 2017 21:29:16 +0200
Subject: [PATCH] Revert "random: invalidate batched entropy after crng init"
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz

This reverts commit 86f95e53ed76fec2579e00351c6050ab398a7730.

In -RT lockdep complains with
| -> #1 (primary_crng.lock){+.+...}:
|   lock_acquire+0xb5/0x2b0
|   rt_spin_lock+0x46/0x50
|   _extract_crng+0x39/0xa0
|   extract_crng+0x3a/0x40
|   get_random_u64+0x17a/0x200
|   cache_random_seq_create+0x51/0x100
|   init_cache_random_seq+0x35/0x90
|   __kmem_cache_create+0xd3/0x560
|   create_boot_cache+0x8c/0xb2
|   create_kmalloc_cache+0x54/0x9f
|   create_kmalloc_caches+0xe3/0xfd
|   kmem_cache_init+0x14f/0x1f0
|   start_kernel+0x1e7/0x3b3
|   x86_64_start_reservations+0x2a/0x2c
|   x86_64_start_kernel+0x13d/0x14c
|   verify_cpu+0x0/0xfc
|
| -> #0 (batched_entropy_reset_lock){+.+...}:
|   __lock_acquire+0x11b4/0x1320
|   lock_acquire+0xb5/0x2b0
|   rt_write_lock+0x26/0x40
|   rt_write_lock_irqsave+0x9/0x10
|   invalidate_batched_entropy+0x28/0xb0
|   crng_fast_load+0xb5/0xe0
|   add_interrupt_randomness+0x16c/0x1a0
|   irq_thread+0x15c/0x1e0
|   kthread+0x112/0x150
|   ret_from_fork+0x31/0x40

so revert this for now and check later with upstream.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/char/random.c | 37 -------------------------------------
 1 file changed, 37 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1,9 +1,6 @@
 /*
  * random.c -- A strong random number generator
  *
- * Copyright (C) 2017 Jason A. Donenfeld <Jason@zx2c4.com>. All
- * Rights Reserved.
- *
  * Copyright Matt Mackall <mpm@selenic.com>, 2003, 2004, 2005
  *
  * Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999. All
@@ -765,8 +762,6 @@ static DECLARE_WAIT_QUEUE_HEAD(crng_init
 static struct crng_state **crng_node_pool __read_mostly;
 #endif

-static void invalidate_batched_entropy(void);
-
 static void crng_initialize(struct crng_state *crng)
 {
 	int i;
@@ -804,7 +799,6 @@ static int crng_fast_load(const char *cp
 		cp++; crng_init_cnt++; len--;
 	}
 	if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
-		invalidate_batched_entropy();
 		crng_init = 1;
 		wake_up_interruptible(&crng_init_wait);
 		pr_notice("random: fast init done\n");
@@ -842,7 +836,6 @@ static void crng_reseed(struct crng_stat
 	memzero_explicit(&buf, sizeof(buf));
 	crng->init_time = jiffies;
 	if (crng == &primary_crng && crng_init < 2) {
-		invalidate_batched_entropy();
 		crng_init = 2;
 		process_random_ready_list();
 		wake_up_interruptible(&crng_init_wait);
@@ -2023,7 +2016,6 @@ struct batched_entropy {
 	};
 	unsigned int position;
 };
-static rwlock_t batched_entropy_reset_lock = __RW_LOCK_UNLOCKED(batched_entropy_reset_lock);

 /*
  * Get a random word for internal kernel use only. The quality of the random
@@ -2034,8 +2026,6 @@ static DEFINE_PER_CPU(struct batched_ent
 u64 get_random_u64(void)
 {
 	u64 ret;
-	bool use_lock = crng_init < 2;
-	unsigned long flags;
 	struct batched_entropy *batch;

 #if BITS_PER_LONG == 64
@@ -2048,15 +2038,11 @@ u64 get_random_u64(void)
 #endif

 	batch = &get_cpu_var(batched_entropy_u64);
-	if (use_lock)
-		read_lock_irqsave(&batched_entropy_reset_lock, flags);
 	if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0) {
 		extract_crng((u8 *)batch->entropy_u64);
 		batch->position = 0;
 	}
 	ret = batch->entropy_u64[batch->position++];
-	if (use_lock)
-		read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
 	put_cpu_var(batched_entropy_u64);
 	return ret;
 }
@@ -2066,45 +2052,22 @@ static DEFINE_PER_CPU(struct batched_ent
 u32 get_random_u32(void)
 {
 	u32 ret;
-	bool use_lock = crng_init < 2;
-	unsigned long flags;
 	struct batched_entropy *batch;

 	if (arch_get_random_int(&ret))
 		return ret;

 	batch = &get_cpu_var(batched_entropy_u32);
-	if (use_lock)
-		read_lock_irqsave(&batched_entropy_reset_lock, flags);
 	if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0) {
 		extract_crng((u8 *)batch->entropy_u32);
 		batch->position = 0;
 	}
 	ret = batch->entropy_u32[batch->position++];
-	if (use_lock)
-		read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
 	put_cpu_var(batched_entropy_u32);
 	return ret;
 }
 EXPORT_SYMBOL(get_random_u32);

-/* It's important to invalidate all potential batched entropy that might
- * be stored before the crng is initialized, which we can do lazily by
- * simply resetting the counter to zero so that it's re-extracted on the
- * next usage. */
-static void invalidate_batched_entropy(void)
-{
-	int cpu;
-	unsigned long flags;
-
-	write_lock_irqsave(&batched_entropy_reset_lock, flags);
-	for_each_possible_cpu (cpu) {
-		per_cpu_ptr(&batched_entropy_u32, cpu)->position = 0;
-		per_cpu_ptr(&batched_entropy_u64, cpu)->position = 0;
-	}
-	write_unlock_irqrestore(&batched_entropy_reset_lock, flags);
-}
-
 /**
  * randomize_page - Generate a random, page aligned address
  * @start: The smallest acceptable address the caller will take.
@@ -1,7 +1,7 @@
 From: Steven Rostedt <rostedt@goodmis.org>
 Date: Wed, 13 Feb 2013 09:26:05 -0500
 Subject: acpi/rt: Convert acpi_gbl_hardware lock back to a raw_spinlock_t
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

 We hit the following bug with 3.6-rt:
@@ -1,7 +1,7 @@
 From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
 Date: Sat, 27 May 2017 19:02:06 +0200
 Subject: kernel/sched/core: add migrate_disable()
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

 ---
 include/linux/preempt.h | 23 ++++++++
@@ -1,7 +1,7 @@
 From: Anders Roxell <anders.roxell@linaro.org>
 Date: Thu, 14 May 2015 17:52:17 +0200
 Subject: arch/arm64: Add lazy preempt support
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

 arm64 is missing support for PREEMPT_RT. The main feature which is
 lacking is support for lazy preemption. The arch-specific entry code,
@@ -1,7 +1,7 @@
 From: Benedikt Spranger <b.spranger@linutronix.de>
 Date: Sat, 6 Mar 2010 17:47:10 +0100
 Subject: ARM: AT91: PIT: Remove irq handler when clock event is unused
-Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.5-rt1.tar.xz
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.11/older/patches-4.11.8-rt5.tar.xz

 Setup and remove the interrupt handler in clock event mode selection.
 This avoids calling the (shared) interrupt handler when the device is
Some files were not shown because too many files have changed in this diff.