linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/7] Add new tracepoints required for EAS testing
@ 2019-05-05 11:57 Qais Yousef
  2019-05-05 11:57 ` [PATCH 1/7] sched: autogroup: Make autogroup_path() always available Qais Yousef
                   ` (6 more replies)
  0 siblings, 7 replies; 20+ messages in thread
From: Qais Yousef @ 2019-05-05 11:57 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Steven Rostedt
  Cc: linux-kernel, Pavankumar Kondeti, Sebastian Andrzej Siewior,
	Uwe Kleine-Konig, Qais Yousef

The following patches add the bare minimum tracepoints required to perform EAS
testing in Lisa[1].

The new tracepoints are bare in the sense that they don't export any info in
tracefs, hence they shouldn't introduce any new ABI. The intended way to use
them is to load a module that probes the tracepoints and extracts the info
required for userspace testing.

It is done in this way because, AFAIU, adding new TRACE_EVENT()s is no longer
accepted.
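
As a rough sketch of the intended usage (illustrative only; this is not the
module in [2], and the module/probe names below are made up), an out of tree
module can register a probe on one of the exported tracepoints like this:

	#include <linux/module.h>
	#include <trace/events/sched.h>

	/* Probes receive a void *data cookie followed by the TP_PROTO args. */
	static void probe_overutilized(void *data, int overutilized)
	{
		pr_info("overutilized=%d\n", overutilized);
	}

	static int __init eas_tp_init(void)
	{
		return register_trace_sched_overutilized(probe_overutilized, NULL);
	}

	static void __exit eas_tp_exit(void)
	{
		unregister_trace_sched_overutilized(probe_overutilized, NULL);
		tracepoint_synchronize_unregister();
	}

	module_init(eas_tp_init);
	module_exit(eas_tp_exit);
	MODULE_LICENSE("GPL");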

The tracepoints are focused on tracking PELT signals, which is what EAS uses
to make its decisions. Knowing the value of PELT as it changes allows
verifying that EAS is doing the right thing in synthetic tests that simulate
different scenarios.

Besides EAS, the new tracepoints can help investigate the CFS load balancer
and CFS taskgroup handling, as both are based on PELT signals too.

The first 2 patches do a bit of code shuffling to expose some required
functions.

Patch 3 adds a new cfs helper function.

Patches 4-6 add the new tracepoints.

Patch 7 exports the tracepoints so that out-of-tree modules can probe them
with the least amount of effort, which extends their usefulness since
creating a module to probe them is the only way to access them.

An example module that uses these tracepoints is available in [2].

[1] https://github.com/ARM-software/lisa
[2] https://github.com/qais-yousef/tracepoints-helpers/blob/master/lisa_tp/lisa_tp.c

Qais Yousef (7):
  sched: autogroup: Make autogroup_path() always available
  sched: fair: move helper functions into fair.h
  sched: fair.h: add a new cfs_rq_tg_path()
  sched: Add sched_load_rq tracepoint
  sched: Add sched_load_se tracepoint
  sched: Add sched_overutilized tracepoint
  sched: export the newly added tracepoints

 include/trace/events/sched.h     |  17 +++
 kernel/sched/autogroup.c         |   2 -
 kernel/sched/core.c              |   8 ++
 kernel/sched/fair.c              | 212 ++----------------------------
 kernel/sched/fair.h              | 215 +++++++++++++++++++++++++++++++
 kernel/sched/pelt.c              |   6 +
 kernel/sched/sched.h             |   1 +
 kernel/sched/sched_tracepoints.h |  52 ++++++++
 8 files changed, 313 insertions(+), 200 deletions(-)
 create mode 100644 kernel/sched/fair.h
 create mode 100644 kernel/sched/sched_tracepoints.h

-- 
2.17.1



* [PATCH 1/7] sched: autogroup: Make autogroup_path() always available
  2019-05-05 11:57 [PATCH 0/7] Add new tracepoints required for EAS testing Qais Yousef
@ 2019-05-05 11:57 ` Qais Yousef
  2019-05-05 11:57 ` [PATCH 2/7] sched: fair: move helper functions into fair.h Qais Yousef
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 20+ messages in thread
From: Qais Yousef @ 2019-05-05 11:57 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Steven Rostedt
  Cc: linux-kernel, Pavankumar Kondeti, Sebastian Andrzej Siewior,
	Uwe Kleine-Konig, Qais Yousef

Remove the #ifdef CONFIG_SCHED_DEBUG that guards autogroup_path().

Some of the tracepoints to be introduced in later patches need to access
this function. Hence make it always available, since the tracepoints are
not protected by CONFIG_SCHED_DEBUG.

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
---
 kernel/sched/autogroup.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/kernel/sched/autogroup.c b/kernel/sched/autogroup.c
index 2d4ff5353ded..2067080bb235 100644
--- a/kernel/sched/autogroup.c
+++ b/kernel/sched/autogroup.c
@@ -259,7 +259,6 @@ void proc_sched_autogroup_show_task(struct task_struct *p, struct seq_file *m)
 }
 #endif /* CONFIG_PROC_FS */
 
-#ifdef CONFIG_SCHED_DEBUG
 int autogroup_path(struct task_group *tg, char *buf, int buflen)
 {
 	if (!task_group_is_autogroup(tg))
@@ -267,4 +266,3 @@ int autogroup_path(struct task_group *tg, char *buf, int buflen)
 
 	return snprintf(buf, buflen, "%s-%ld", "/autogroup", tg->autogroup->id);
 }
-#endif
-- 
2.17.1



* [PATCH 2/7] sched: fair: move helper functions into fair.h
  2019-05-05 11:57 [PATCH 0/7] Add new tracepoints required for EAS testing Qais Yousef
  2019-05-05 11:57 ` [PATCH 1/7] sched: autogroup: Make autogroup_path() always available Qais Yousef
@ 2019-05-05 11:57 ` Qais Yousef
  2019-05-05 11:57 ` [PATCH 3/7] sched: fair.h: add a new cfs_rq_tg_path() Qais Yousef
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 20+ messages in thread
From: Qais Yousef @ 2019-05-05 11:57 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Steven Rostedt
  Cc: linux-kernel, Pavankumar Kondeti, Sebastian Andrzej Siewior,
	Uwe Kleine-Konig, Qais Yousef

Move the small inlined cfs_rq helper functions from fair.c into a new
fair.h header.

In later patches we need a couple of these functions, and it made more
sense to move the majority of them into their own header rather than only
the two that are needed. This keeps the functions grouped together in the
same file.

Always include the new header in sched.h to make them accessible to all
sched subsystem files, like autogroup.h.

find_matching_se() was excluded because it isn't inlined.

The two required functions are:

	- cfs_rq_of()
	- group_cfs_rq()

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
---
 kernel/sched/fair.c  | 195 ------------------------------------------
 kernel/sched/fair.h  | 197 +++++++++++++++++++++++++++++++++++++++++++
 kernel/sched/sched.h |   1 +
 3 files changed, 198 insertions(+), 195 deletions(-)
 create mode 100644 kernel/sched/fair.h

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 35f3ea375084..2b4963bbeab4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -243,151 +243,7 @@ static u64 __calc_delta(u64 delta_exec, unsigned long weight, struct load_weight
 
 const struct sched_class fair_sched_class;
 
-/**************************************************************
- * CFS operations on generic schedulable entities:
- */
-
 #ifdef CONFIG_FAIR_GROUP_SCHED
-static inline struct task_struct *task_of(struct sched_entity *se)
-{
-	SCHED_WARN_ON(!entity_is_task(se));
-	return container_of(se, struct task_struct, se);
-}
-
-/* Walk up scheduling entities hierarchy */
-#define for_each_sched_entity(se) \
-		for (; se; se = se->parent)
-
-static inline struct cfs_rq *task_cfs_rq(struct task_struct *p)
-{
-	return p->se.cfs_rq;
-}
-
-/* runqueue on which this entity is (to be) queued */
-static inline struct cfs_rq *cfs_rq_of(struct sched_entity *se)
-{
-	return se->cfs_rq;
-}
-
-/* runqueue "owned" by this group */
-static inline struct cfs_rq *group_cfs_rq(struct sched_entity *grp)
-{
-	return grp->my_q;
-}
-
-static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
-{
-	struct rq *rq = rq_of(cfs_rq);
-	int cpu = cpu_of(rq);
-
-	if (cfs_rq->on_list)
-		return rq->tmp_alone_branch == &rq->leaf_cfs_rq_list;
-
-	cfs_rq->on_list = 1;
-
-	/*
-	 * Ensure we either appear before our parent (if already
-	 * enqueued) or force our parent to appear after us when it is
-	 * enqueued. The fact that we always enqueue bottom-up
-	 * reduces this to two cases and a special case for the root
-	 * cfs_rq. Furthermore, it also means that we will always reset
-	 * tmp_alone_branch either when the branch is connected
-	 * to a tree or when we reach the top of the tree
-	 */
-	if (cfs_rq->tg->parent &&
-	    cfs_rq->tg->parent->cfs_rq[cpu]->on_list) {
-		/*
-		 * If parent is already on the list, we add the child
-		 * just before. Thanks to circular linked property of
-		 * the list, this means to put the child at the tail
-		 * of the list that starts by parent.
-		 */
-		list_add_tail_rcu(&cfs_rq->leaf_cfs_rq_list,
-			&(cfs_rq->tg->parent->cfs_rq[cpu]->leaf_cfs_rq_list));
-		/*
-		 * The branch is now connected to its tree so we can
-		 * reset tmp_alone_branch to the beginning of the
-		 * list.
-		 */
-		rq->tmp_alone_branch = &rq->leaf_cfs_rq_list;
-		return true;
-	}
-
-	if (!cfs_rq->tg->parent) {
-		/*
-		 * cfs rq without parent should be put
-		 * at the tail of the list.
-		 */
-		list_add_tail_rcu(&cfs_rq->leaf_cfs_rq_list,
-			&rq->leaf_cfs_rq_list);
-		/*
-		 * We have reach the top of a tree so we can reset
-		 * tmp_alone_branch to the beginning of the list.
-		 */
-		rq->tmp_alone_branch = &rq->leaf_cfs_rq_list;
-		return true;
-	}
-
-	/*
-	 * The parent has not already been added so we want to
-	 * make sure that it will be put after us.
-	 * tmp_alone_branch points to the begin of the branch
-	 * where we will add parent.
-	 */
-	list_add_rcu(&cfs_rq->leaf_cfs_rq_list, rq->tmp_alone_branch);
-	/*
-	 * update tmp_alone_branch to points to the new begin
-	 * of the branch
-	 */
-	rq->tmp_alone_branch = &cfs_rq->leaf_cfs_rq_list;
-	return false;
-}
-
-static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
-{
-	if (cfs_rq->on_list) {
-		struct rq *rq = rq_of(cfs_rq);
-
-		/*
-		 * With cfs_rq being unthrottled/throttled during an enqueue,
-		 * it can happen the tmp_alone_branch points the a leaf that
-		 * we finally want to del. In this case, tmp_alone_branch moves
-		 * to the prev element but it will point to rq->leaf_cfs_rq_list
-		 * at the end of the enqueue.
-		 */
-		if (rq->tmp_alone_branch == &cfs_rq->leaf_cfs_rq_list)
-			rq->tmp_alone_branch = cfs_rq->leaf_cfs_rq_list.prev;
-
-		list_del_rcu(&cfs_rq->leaf_cfs_rq_list);
-		cfs_rq->on_list = 0;
-	}
-}
-
-static inline void assert_list_leaf_cfs_rq(struct rq *rq)
-{
-	SCHED_WARN_ON(rq->tmp_alone_branch != &rq->leaf_cfs_rq_list);
-}
-
-/* Iterate thr' all leaf cfs_rq's on a runqueue */
-#define for_each_leaf_cfs_rq_safe(rq, cfs_rq, pos)			\
-	list_for_each_entry_safe(cfs_rq, pos, &rq->leaf_cfs_rq_list,	\
-				 leaf_cfs_rq_list)
-
-/* Do the two (enqueued) entities belong to the same group ? */
-static inline struct cfs_rq *
-is_same_group(struct sched_entity *se, struct sched_entity *pse)
-{
-	if (se->cfs_rq == pse->cfs_rq)
-		return se->cfs_rq;
-
-	return NULL;
-}
-
-static inline struct sched_entity *parent_entity(struct sched_entity *se)
-{
-	return se->parent;
-}
-
 static void
 find_matching_se(struct sched_entity **se, struct sched_entity **pse)
 {
@@ -419,62 +275,11 @@ find_matching_se(struct sched_entity **se, struct sched_entity **pse)
 		*pse = parent_entity(*pse);
 	}
 }
-
 #else	/* !CONFIG_FAIR_GROUP_SCHED */
-
-static inline struct task_struct *task_of(struct sched_entity *se)
-{
-	return container_of(se, struct task_struct, se);
-}
-
-#define for_each_sched_entity(se) \
-		for (; se; se = NULL)
-
-static inline struct cfs_rq *task_cfs_rq(struct task_struct *p)
-{
-	return &task_rq(p)->cfs;
-}
-
-static inline struct cfs_rq *cfs_rq_of(struct sched_entity *se)
-{
-	struct task_struct *p = task_of(se);
-	struct rq *rq = task_rq(p);
-
-	return &rq->cfs;
-}
-
-/* runqueue "owned" by this group */
-static inline struct cfs_rq *group_cfs_rq(struct sched_entity *grp)
-{
-	return NULL;
-}
-
-static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
-{
-	return true;
-}
-
-static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
-{
-}
-
-static inline void assert_list_leaf_cfs_rq(struct rq *rq)
-{
-}
-
-#define for_each_leaf_cfs_rq_safe(rq, cfs_rq, pos)	\
-		for (cfs_rq = &rq->cfs, pos = NULL; cfs_rq; cfs_rq = pos)
-
-static inline struct sched_entity *parent_entity(struct sched_entity *se)
-{
-	return NULL;
-}
-
 static inline void
 find_matching_se(struct sched_entity **se, struct sched_entity **pse)
 {
 }
-
 #endif	/* CONFIG_FAIR_GROUP_SCHED */
 
 static __always_inline
diff --git a/kernel/sched/fair.h b/kernel/sched/fair.h
new file mode 100644
index 000000000000..04c5c8c0e477
--- /dev/null
+++ b/kernel/sched/fair.h
@@ -0,0 +1,197 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * CFS operations on generic schedulable entities:
+ */
+
+#ifdef CONFIG_FAIR_GROUP_SCHED
+static inline struct task_struct *task_of(struct sched_entity *se)
+{
+	SCHED_WARN_ON(!entity_is_task(se));
+	return container_of(se, struct task_struct, se);
+}
+
+/* Walk up scheduling entities hierarchy */
+#define for_each_sched_entity(se) \
+		for (; se; se = se->parent)
+
+static inline struct cfs_rq *task_cfs_rq(struct task_struct *p)
+{
+	return p->se.cfs_rq;
+}
+
+/* runqueue on which this entity is (to be) queued */
+static inline struct cfs_rq *cfs_rq_of(struct sched_entity *se)
+{
+	return se->cfs_rq;
+}
+
+/* runqueue "owned" by this group */
+static inline struct cfs_rq *group_cfs_rq(struct sched_entity *grp)
+{
+	return grp->my_q;
+}
+
+static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
+{
+	struct rq *rq = rq_of(cfs_rq);
+	int cpu = cpu_of(rq);
+
+	if (cfs_rq->on_list)
+		return rq->tmp_alone_branch == &rq->leaf_cfs_rq_list;
+
+	cfs_rq->on_list = 1;
+
+	/*
+	 * Ensure we either appear before our parent (if already
+	 * enqueued) or force our parent to appear after us when it is
+	 * enqueued. The fact that we always enqueue bottom-up
+	 * reduces this to two cases and a special case for the root
+	 * cfs_rq. Furthermore, it also means that we will always reset
+	 * tmp_alone_branch either when the branch is connected
+	 * to a tree or when we reach the top of the tree
+	 */
+	if (cfs_rq->tg->parent &&
+	    cfs_rq->tg->parent->cfs_rq[cpu]->on_list) {
+		/*
+		 * If parent is already on the list, we add the child
+		 * just before. Thanks to circular linked property of
+		 * the list, this means to put the child at the tail
+		 * of the list that starts by parent.
+		 */
+		list_add_tail_rcu(&cfs_rq->leaf_cfs_rq_list,
+			&(cfs_rq->tg->parent->cfs_rq[cpu]->leaf_cfs_rq_list));
+		/*
+		 * The branch is now connected to its tree so we can
+		 * reset tmp_alone_branch to the beginning of the
+		 * list.
+		 */
+		rq->tmp_alone_branch = &rq->leaf_cfs_rq_list;
+		return true;
+	}
+
+	if (!cfs_rq->tg->parent) {
+		/*
+		 * cfs rq without parent should be put
+		 * at the tail of the list.
+		 */
+		list_add_tail_rcu(&cfs_rq->leaf_cfs_rq_list,
+			&rq->leaf_cfs_rq_list);
+		/*
+		 * We have reach the top of a tree so we can reset
+		 * tmp_alone_branch to the beginning of the list.
+		 */
+		rq->tmp_alone_branch = &rq->leaf_cfs_rq_list;
+		return true;
+	}
+
+	/*
+	 * The parent has not already been added so we want to
+	 * make sure that it will be put after us.
+	 * tmp_alone_branch points to the begin of the branch
+	 * where we will add parent.
+	 */
+	list_add_rcu(&cfs_rq->leaf_cfs_rq_list, rq->tmp_alone_branch);
+	/*
+	 * update tmp_alone_branch to points to the new begin
+	 * of the branch
+	 */
+	rq->tmp_alone_branch = &cfs_rq->leaf_cfs_rq_list;
+	return false;
+}
+
+static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
+{
+	if (cfs_rq->on_list) {
+		struct rq *rq = rq_of(cfs_rq);
+
+		/*
+		 * With cfs_rq being unthrottled/throttled during an enqueue,
+		 * it can happen the tmp_alone_branch points the a leaf that
+		 * we finally want to del. In this case, tmp_alone_branch moves
+		 * to the prev element but it will point to rq->leaf_cfs_rq_list
+		 * at the end of the enqueue.
+		 */
+		if (rq->tmp_alone_branch == &cfs_rq->leaf_cfs_rq_list)
+			rq->tmp_alone_branch = cfs_rq->leaf_cfs_rq_list.prev;
+
+		list_del_rcu(&cfs_rq->leaf_cfs_rq_list);
+		cfs_rq->on_list = 0;
+	}
+}
+
+static inline void assert_list_leaf_cfs_rq(struct rq *rq)
+{
+	SCHED_WARN_ON(rq->tmp_alone_branch != &rq->leaf_cfs_rq_list);
+}
+
+/* Iterate thr' all leaf cfs_rq's on a runqueue */
+#define for_each_leaf_cfs_rq_safe(rq, cfs_rq, pos)			\
+	list_for_each_entry_safe(cfs_rq, pos, &rq->leaf_cfs_rq_list,	\
+				 leaf_cfs_rq_list)
+
+/* Do the two (enqueued) entities belong to the same group ? */
+static inline struct cfs_rq *
+is_same_group(struct sched_entity *se, struct sched_entity *pse)
+{
+	if (se->cfs_rq == pse->cfs_rq)
+		return se->cfs_rq;
+
+	return NULL;
+}
+
+static inline struct sched_entity *parent_entity(struct sched_entity *se)
+{
+	return se->parent;
+}
+
+#else	/* !CONFIG_FAIR_GROUP_SCHED */
+
+static inline struct task_struct *task_of(struct sched_entity *se)
+{
+	return container_of(se, struct task_struct, se);
+}
+
+#define for_each_sched_entity(se) \
+		for (; se; se = NULL)
+
+static inline struct cfs_rq *task_cfs_rq(struct task_struct *p)
+{
+	return &task_rq(p)->cfs;
+}
+
+static inline struct cfs_rq *cfs_rq_of(struct sched_entity *se)
+{
+	struct task_struct *p = task_of(se);
+	struct rq *rq = task_rq(p);
+
+	return &rq->cfs;
+}
+
+/* runqueue "owned" by this group */
+static inline struct cfs_rq *group_cfs_rq(struct sched_entity *grp)
+{
+	return NULL;
+}
+
+static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
+{
+	return true;
+}
+
+static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
+{
+}
+
+static inline void assert_list_leaf_cfs_rq(struct rq *rq)
+{
+}
+
+#define for_each_leaf_cfs_rq_safe(rq, cfs_rq, pos)	\
+		for (cfs_rq = &rq->cfs, pos = NULL; cfs_rq; cfs_rq = pos)
+
+static inline struct sched_entity *parent_entity(struct sched_entity *se)
+{
+	return NULL;
+}
+
+#endif	/* CONFIG_FAIR_GROUP_SCHED */
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index efa686eeff26..509c1dba77fc 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1418,6 +1418,7 @@ static inline void sched_ttwu_pending(void) { }
 
 #include "stats.h"
 #include "autogroup.h"
+#include "fair.h"
 
 #ifdef CONFIG_CGROUP_SCHED
 
-- 
2.17.1



* [PATCH 3/7] sched: fair.h: add a new cfs_rq_tg_path()
  2019-05-05 11:57 [PATCH 0/7] Add new tracepoints required for EAS testing Qais Yousef
  2019-05-05 11:57 ` [PATCH 1/7] sched: autogroup: Make autogroup_path() always available Qais Yousef
  2019-05-05 11:57 ` [PATCH 2/7] sched: fair: move helper functions into fair.h Qais Yousef
@ 2019-05-05 11:57 ` Qais Yousef
  2019-05-05 11:57 ` [PATCH 4/7] sched: Add sched_load_rq tracepoint Qais Yousef
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 20+ messages in thread
From: Qais Yousef @ 2019-05-05 11:57 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Steven Rostedt
  Cc: linux-kernel, Pavankumar Kondeti, Sebastian Andrzej Siewior,
	Uwe Kleine-Konig, Qais Yousef

The new function will be used in later patches when introducing the new
PELT tracepoints.
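
To illustrate the intended usage (a hypothetical snippet; the real callers
appear in the later tracepoint patches):

	char path[64];

	/*
	 * Fills path with the autogroup path (e.g. "/autogroup-123") or the
	 * cgroup path of the cfs_rq's task group, or "(null)" if neither is
	 * available.
	 */
	cfs_rq_tg_path(cfs_rq, path, sizeof(path));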

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
---
 kernel/sched/fair.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/kernel/sched/fair.h b/kernel/sched/fair.h
index 04c5c8c0e477..aa57e3cb2eaa 100644
--- a/kernel/sched/fair.h
+++ b/kernel/sched/fair.h
@@ -31,6 +31,18 @@ static inline struct cfs_rq *group_cfs_rq(struct sched_entity *grp)
 	return grp->my_q;
 }
 
+static inline void cfs_rq_tg_path(struct cfs_rq *cfs_rq, char *path, int len)
+{
+	int l = path ? len : 0;
+
+	if (cfs_rq && task_group_is_autogroup(cfs_rq->tg))
+		autogroup_path(cfs_rq->tg, path, l);
+	else if (cfs_rq && cfs_rq->tg->css.cgroup)
+		cgroup_path(cfs_rq->tg->css.cgroup, path, l);
+	else if (path)
+		strcpy(path, "(null)");
+}
+
 static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 {
 	struct rq *rq = rq_of(cfs_rq);
@@ -173,6 +185,12 @@ static inline struct cfs_rq *group_cfs_rq(struct sched_entity *grp)
 	return NULL;
 }
 
+static inline void cfs_rq_tg_path(struct cfs_rq *cfs_rq, char *path, int len)
+{
+	if (path)
+		strcpy(path, "(null)");
+}
+
 static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 {
 	return true;
-- 
2.17.1



* [PATCH 4/7] sched: Add sched_load_rq tracepoint
  2019-05-05 11:57 [PATCH 0/7] Add new tracepoints required for EAS testing Qais Yousef
                   ` (2 preceding siblings ...)
  2019-05-05 11:57 ` [PATCH 3/7] sched: fair.h: add a new cfs_rq_tg_path() Qais Yousef
@ 2019-05-05 11:57 ` Qais Yousef
  2019-05-06  9:08   ` Peter Zijlstra
  2019-05-10  8:51   ` Dietmar Eggemann
  2019-05-05 11:57 ` [PATCH 5/7] sched: Add sched_load_se tracepoint Qais Yousef
                   ` (2 subsequent siblings)
  6 siblings, 2 replies; 20+ messages in thread
From: Qais Yousef @ 2019-05-05 11:57 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Steven Rostedt
  Cc: linux-kernel, Pavankumar Kondeti, Sebastian Andrzej Siewior,
	Uwe Kleine-Konig, Qais Yousef

The new tracepoint allows tracking PELT signals at rq level for all
scheduling classes.
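
For reference, a probe attached to this tracepoint receives a void *data
cookie followed by the TP_PROTO arguments. A minimal (hypothetical) probe
could look like:

	static void probe_sched_load_rq(void *data, int cpu, const char *path,
					struct sched_avg *avg)
	{
		/* path is NULL for the rt/dl runqueue signals, see below */
		pr_info("cpu=%d path=%s util_avg=%lu\n", cpu,
			path ? path : "(null)", avg->util_avg);
	}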

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
---
 include/trace/events/sched.h     |  9 ++++++++
 kernel/sched/fair.c              |  9 ++++++--
 kernel/sched/pelt.c              |  4 ++++
 kernel/sched/sched_tracepoints.h | 39 ++++++++++++++++++++++++++++++++
 4 files changed, 59 insertions(+), 2 deletions(-)
 create mode 100644 kernel/sched/sched_tracepoints.h

diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index 9a4bdfadab07..2be4c471c6e9 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -587,6 +587,15 @@ TRACE_EVENT(sched_wake_idle_without_ipi,
 
 	TP_printk("cpu=%d", __entry->cpu)
 );
+
+/*
+ * Following tracepoints are not exported in tracefs and provide hooking
+ * mechanisms only for testing and debugging purposes.
+ */
+DECLARE_TRACE(sched_load_rq,
+	TP_PROTO(int cpu, const char *path, struct sched_avg *avg),
+	TP_ARGS(cpu, path, avg));
+
 #endif /* _TRACE_SCHED_H */
 
 /* This part must be outside protection */
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2b4963bbeab4..e1e0cc7db7f6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -21,8 +21,7 @@
  *  Copyright (C) 2007 Red Hat, Inc., Peter Zijlstra
  */
 #include "sched.h"
-
-#include <trace/events/sched.h>
+#include "sched_tracepoints.h"
 
 /*
  * Targeted preemption latency for CPU-bound tasks:
@@ -3139,6 +3138,8 @@ static inline int propagate_entity_load_avg(struct sched_entity *se)
 	update_tg_cfs_util(cfs_rq, se, gcfs_rq);
 	update_tg_cfs_runnable(cfs_rq, se, gcfs_rq);
 
+	sched_tp_load_cfs_rq(cfs_rq);
+
 	return 1;
 }
 
@@ -3291,6 +3292,8 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
 	add_tg_cfs_propagate(cfs_rq, se->avg.load_sum);
 
 	cfs_rq_util_change(cfs_rq, flags);
+
+	sched_tp_load_cfs_rq(cfs_rq);
 }
 
 /**
@@ -3310,6 +3313,8 @@ static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
 	add_tg_cfs_propagate(cfs_rq, -se->avg.load_sum);
 
 	cfs_rq_util_change(cfs_rq, 0);
+
+	sched_tp_load_cfs_rq(cfs_rq);
 }
 
 /*
diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index befce29bd882..302affb14302 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -26,6 +26,7 @@
 
 #include <linux/sched.h>
 #include "sched.h"
+#include "sched_tracepoints.h"
 #include "pelt.h"
 
 /*
@@ -292,6 +293,7 @@ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq)
 				cfs_rq->curr != NULL)) {
 
 		___update_load_avg(&cfs_rq->avg, 1, 1);
+		sched_tp_load_cfs_rq(cfs_rq);
 		return 1;
 	}
 
@@ -317,6 +319,7 @@ int update_rt_rq_load_avg(u64 now, struct rq *rq, int running)
 				running)) {
 
 		___update_load_avg(&rq->avg_rt, 1, 1);
+		sched_tp_load_rt_rq(rq);
 		return 1;
 	}
 
@@ -340,6 +343,7 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
 				running)) {
 
 		___update_load_avg(&rq->avg_dl, 1, 1);
+		sched_tp_load_dl_rq(rq);
 		return 1;
 	}
 
diff --git a/kernel/sched/sched_tracepoints.h b/kernel/sched/sched_tracepoints.h
new file mode 100644
index 000000000000..f4ded705118e
--- /dev/null
+++ b/kernel/sched/sched_tracepoints.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Scheduler tracepoints that are probe-able only and aren't exported ABI in
+ * tracefs.
+ */
+
+#include <trace/events/sched.h>
+
+#define SCHED_TP_PATH_LEN		64
+
+
+static __always_inline void sched_tp_load_cfs_rq(struct cfs_rq *cfs_rq)
+{
+	if (trace_sched_load_rq_enabled()) {
+		int cpu = cpu_of(rq_of(cfs_rq));
+		char path[SCHED_TP_PATH_LEN];
+
+		cfs_rq_tg_path(cfs_rq, path, SCHED_TP_PATH_LEN);
+		trace_sched_load_rq(cpu, path, &cfs_rq->avg);
+	}
+}
+
+static __always_inline void sched_tp_load_rt_rq(struct rq *rq)
+{
+	if (trace_sched_load_rq_enabled()) {
+		int cpu = cpu_of(rq);
+
+		trace_sched_load_rq(cpu, NULL, &rq->avg_rt);
+	}
+}
+
+static __always_inline void sched_tp_load_dl_rq(struct rq *rq)
+{
+	if (trace_sched_load_rq_enabled()) {
+		int cpu = cpu_of(rq);
+
+		trace_sched_load_rq(cpu, NULL, &rq->avg_dl);
+	}
+}
-- 
2.17.1



* [PATCH 5/7] sched: Add sched_load_se tracepoint
  2019-05-05 11:57 [PATCH 0/7] Add new tracepoints required for EAS testing Qais Yousef
                   ` (3 preceding siblings ...)
  2019-05-05 11:57 ` [PATCH 4/7] sched: Add sched_load_rq tracepoint Qais Yousef
@ 2019-05-05 11:57 ` Qais Yousef
  2019-05-05 11:57 ` [PATCH 6/7] sched: Add sched_overutilized tracepoint Qais Yousef
  2019-05-05 11:57 ` [PATCH 7/7] sched: export the newly added tracepoints Qais Yousef
  6 siblings, 0 replies; 20+ messages in thread
From: Qais Yousef @ 2019-05-05 11:57 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Steven Rostedt
  Cc: linux-kernel, Pavankumar Kondeti, Sebastian Andrzej Siewior,
	Uwe Kleine-Konig, Qais Yousef

The new tracepoint allows tracking PELT signals at sched_entity level,
which is supported for CFS tasks and taskgroups only.
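
As with the rq level tracepoint, a probe gets the TP_PROTO arguments after a
void *data cookie. A minimal (hypothetical) example:

	static void probe_sched_load_se(void *data, int cpu, const char *path,
					struct sched_entity *se)
	{
		pr_info("cpu=%d path=%s util_avg=%lu\n",
			cpu, path, se->avg.util_avg);
	}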

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
---
 include/trace/events/sched.h     |  4 ++++
 kernel/sched/fair.c              |  1 +
 kernel/sched/pelt.c              |  2 ++
 kernel/sched/sched_tracepoints.h | 13 +++++++++++++
 4 files changed, 20 insertions(+)

diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index 2be4c471c6e9..0933c08cfc7e 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -596,6 +596,10 @@ DECLARE_TRACE(sched_load_rq,
 	TP_PROTO(int cpu, const char *path, struct sched_avg *avg),
 	TP_ARGS(cpu, path, avg));
 
+DECLARE_TRACE(sched_load_se,
+	TP_PROTO(int cpu, const char *path, struct sched_entity *se),
+	TP_ARGS(cpu, path, se));
+
 #endif /* _TRACE_SCHED_H */
 
 /* This part must be outside protection */
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e1e0cc7db7f6..3fd306079b57 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3139,6 +3139,7 @@ static inline int propagate_entity_load_avg(struct sched_entity *se)
 	update_tg_cfs_runnable(cfs_rq, se, gcfs_rq);
 
 	sched_tp_load_cfs_rq(cfs_rq);
+	sched_tp_load_se(se);
 
 	return 1;
 }
diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index 302affb14302..74e7bd121324 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -266,6 +266,7 @@ int __update_load_avg_blocked_se(u64 now, struct sched_entity *se)
 {
 	if (___update_load_sum(now, &se->avg, 0, 0, 0)) {
 		___update_load_avg(&se->avg, se_weight(se), se_runnable(se));
+		sched_tp_load_se(se);
 		return 1;
 	}
 
@@ -279,6 +280,7 @@ int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se
 
 		___update_load_avg(&se->avg, se_weight(se), se_runnable(se));
 		cfs_se_util_change(&se->avg);
+		sched_tp_load_se(se);
 		return 1;
 	}
 
diff --git a/kernel/sched/sched_tracepoints.h b/kernel/sched/sched_tracepoints.h
index f4ded705118e..4a53578c9a69 100644
--- a/kernel/sched/sched_tracepoints.h
+++ b/kernel/sched/sched_tracepoints.h
@@ -37,3 +37,16 @@ static __always_inline void sched_tp_load_dl_rq(struct rq *rq)
 		trace_sched_load_rq(cpu, NULL, &rq->avg_dl);
 	}
 }
+
+static __always_inline void sched_tp_load_se(struct sched_entity *se)
+{
+	if (trace_sched_load_se_enabled()) {
+		struct cfs_rq *gcfs_rq = group_cfs_rq(se);
+		struct cfs_rq *cfs_rq = cfs_rq_of(se);
+		char path[SCHED_TP_PATH_LEN];
+		int cpu = cpu_of(rq_of(cfs_rq));
+
+		cfs_rq_tg_path(gcfs_rq, path, SCHED_TP_PATH_LEN);
+		trace_sched_load_se(cpu, path, se);
+	}
+}
-- 
2.17.1



* [PATCH 6/7] sched: Add sched_overutilized tracepoint
  2019-05-05 11:57 [PATCH 0/7] Add new tracepoints required for EAS testing Qais Yousef
                   ` (4 preceding siblings ...)
  2019-05-05 11:57 ` [PATCH 5/7] sched: Add sched_load_se tracepoint Qais Yousef
@ 2019-05-05 11:57 ` Qais Yousef
  2019-05-05 11:57 ` [PATCH 7/7] sched: export the newly added tracepoints Qais Yousef
  6 siblings, 0 replies; 20+ messages in thread
From: Qais Yousef @ 2019-05-05 11:57 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Steven Rostedt
  Cc: linux-kernel, Pavankumar Kondeti, Sebastian Andrzej Siewior,
	Uwe Kleine-Konig, Qais Yousef

The new tracepoint allows us to track changes in the overutilized status.

The overutilized status is associated with EAS. It indicates that the
system is in a high performance state. EAS is disabled while the system is
in this state, since there is little energy to be saved when high
performance tasks are pushing the system to its limit and it's better to
default to the spreading behavior of the scheduler.

This tracepoint helps in understanding and debugging the conditions under
which this happens.

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
---
 include/trace/events/sched.h | 4 ++++
 kernel/sched/fair.c          | 7 ++++++-
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index 0933c08cfc7e..d27733d9aed6 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -600,6 +600,10 @@ DECLARE_TRACE(sched_load_se,
 	TP_PROTO(int cpu, const char *path, struct sched_entity *se),
 	TP_ARGS(cpu, path, se));
 
+DECLARE_TRACE(sched_overutilized,
+	TP_PROTO(int overutilized),
+	TP_ARGS(overutilized));
+
 #endif /* _TRACE_SCHED_H */
 
 /* This part must be outside protection */
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3fd306079b57..75403918e158 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4965,8 +4965,10 @@ static inline bool cpu_overutilized(int cpu)
 
 static inline void update_overutilized_status(struct rq *rq)
 {
-	if (!READ_ONCE(rq->rd->overutilized) && cpu_overutilized(rq->cpu))
+	if (!READ_ONCE(rq->rd->overutilized) && cpu_overutilized(rq->cpu)) {
 		WRITE_ONCE(rq->rd->overutilized, SG_OVERUTILIZED);
+		trace_sched_overutilized(1);
+	}
 }
 #else
 static inline void update_overutilized_status(struct rq *rq) { }
@@ -8330,8 +8332,11 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 
 		/* Update over-utilization (tipping point, U >= 0) indicator */
 		WRITE_ONCE(rd->overutilized, sg_status & SG_OVERUTILIZED);
+
+		trace_sched_overutilized(!!(sg_status & SG_OVERUTILIZED));
 	} else if (sg_status & SG_OVERUTILIZED) {
 		WRITE_ONCE(env->dst_rq->rd->overutilized, SG_OVERUTILIZED);
+		trace_sched_overutilized(1);
 	}
 }
 
-- 
2.17.1



* [PATCH 7/7] sched: export the newly added tracepoints
  2019-05-05 11:57 [PATCH 0/7] Add new tracepoints required for EAS testing Qais Yousef
                   ` (5 preceding siblings ...)
  2019-05-05 11:57 ` [PATCH 6/7] sched: Add sched_overutilized tracepoint Qais Yousef
@ 2019-05-05 11:57 ` Qais Yousef
  6 siblings, 0 replies; 20+ messages in thread
From: Qais Yousef @ 2019-05-05 11:57 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Steven Rostedt
  Cc: linux-kernel, Pavankumar Kondeti, Sebastian Andrzej Siewior,
	Uwe Kleine-Konig, Qais Yousef

So that external modules can hook into them and extract the info they
need. Since these new tracepoints have no events associated with them,
exporting them is what makes them usable by external modules for testing
and debugging; there is no other way to access them.

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
---
 kernel/sched/core.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 4778c48a7fda..1841a4e9918e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -22,6 +22,14 @@
 #define CREATE_TRACE_POINTS
 #include <trace/events/sched.h>
 
+/*
+ * Export tracepoints that act as a bare tracehook (ie: have no trace event
+ * associated with them) to allow external modules to probe them.
+ */
+EXPORT_TRACEPOINT_SYMBOL_GPL(sched_load_rq);
+EXPORT_TRACEPOINT_SYMBOL_GPL(sched_load_se);
+EXPORT_TRACEPOINT_SYMBOL_GPL(sched_overutilized);
+
 DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
 
 #if defined(CONFIG_SCHED_DEBUG) && defined(CONFIG_JUMP_LABEL)
-- 
2.17.1



* Re: [PATCH 4/7] sched: Add sched_load_rq tracepoint
  2019-05-05 11:57 ` [PATCH 4/7] sched: Add sched_load_rq tracepoint Qais Yousef
@ 2019-05-06  9:08   ` Peter Zijlstra
  2019-05-06  9:18     ` Peter Zijlstra
                       ` (2 more replies)
  2019-05-10  8:51   ` Dietmar Eggemann
  1 sibling, 3 replies; 20+ messages in thread
From: Peter Zijlstra @ 2019-05-06  9:08 UTC (permalink / raw)
  To: Qais Yousef
  Cc: Ingo Molnar, Steven Rostedt, linux-kernel, Pavankumar Kondeti,
	Sebastian Andrzej Siewior, Uwe Kleine-Konig

On Sun, May 05, 2019 at 12:57:29PM +0100, Qais Yousef wrote:

> +/*
> + * Following tracepoints are not exported in tracefs and provide hooking
> + * mechanisms only for testing and debugging purposes.
> + */
> +DECLARE_TRACE(sched_load_rq,
> +	TP_PROTO(int cpu, const char *path, struct sched_avg *avg),
> +	TP_ARGS(cpu, path, avg));
> +

> +DECLARE_TRACE(sched_load_se,
> +       TP_PROTO(int cpu, const char *path, struct sched_entity *se),
> +       TP_ARGS(cpu, path, se));
> +

> +DECLARE_TRACE(sched_overutilized,
> +       TP_PROTO(int overutilized),
> +       TP_ARGS(overutilized));

This doesn't generate any actual userspace because of the lack of
DEFINE_EVENT() ?

> diff --git a/kernel/sched/sched_tracepoints.h b/kernel/sched/sched_tracepoints.h
> new file mode 100644
> index 000000000000..f4ded705118e
> --- /dev/null
> +++ b/kernel/sched/sched_tracepoints.h
> @@ -0,0 +1,39 @@

Like with the other newly introduced header files, this one is lacking
the normal include guard.

> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Scheduler tracepoints that are probe-able only and aren't exported ABI in
> + * tracefs.
> + */
> +
> +#include <trace/events/sched.h>
> +
> +#define SCHED_TP_PATH_LEN		64
> +
> +
> +static __always_inline void sched_tp_load_cfs_rq(struct cfs_rq *cfs_rq)
> +{
> +	if (trace_sched_load_rq_enabled()) {
> +		int cpu = cpu_of(rq_of(cfs_rq));
> +		char path[SCHED_TP_PATH_LEN];
> +
> +		cfs_rq_tg_path(cfs_rq, path, SCHED_TP_PATH_LEN);
> +		trace_sched_load_rq(cpu, path, &cfs_rq->avg);
> +	}
> +}
> +
> +static __always_inline void sched_tp_load_rt_rq(struct rq *rq)
> +{
> +	if (trace_sched_load_rq_enabled()) {
> +		int cpu = cpu_of(rq);
> +
> +		trace_sched_load_rq(cpu, NULL, &rq->avg_rt);
> +	}
> +}
> +
> +static __always_inline void sched_tp_load_dl_rq(struct rq *rq)
> +{
> +	if (trace_sched_load_rq_enabled()) {
> +		int cpu = cpu_of(rq);
> +
> +		trace_sched_load_rq(cpu, NULL, &rq->avg_dl);
> +	}
> +}

> +static __always_inline void sched_tp_load_se(struct sched_entity *se)
> +{
> +       if (trace_sched_load_se_enabled()) {
> +               struct cfs_rq *gcfs_rq = group_cfs_rq(se);
> +               struct cfs_rq *cfs_rq = cfs_rq_of(se);
> +               char path[SCHED_TP_PATH_LEN];
> +               int cpu = cpu_of(rq_of(cfs_rq));
> +
> +               cfs_rq_tg_path(gcfs_rq, path, SCHED_TP_PATH_LEN);
> +               trace_sched_load_se(cpu, path, se);
> +       }
> +}

These functions really should be called trace_*()

Also; I _really_ hate how fat they are. Why can't we do simple straight
forward things like:

	trace_pelt_cfq(cfq);
	trace_pelt_rq(rq);
	trace_pelt_se(se);

And then have the thing attached to the event do the fat bits like
extract the path and whatnot.


* Re: [PATCH 4/7] sched: Add sched_load_rq tracepoint
  2019-05-06  9:08   ` Peter Zijlstra
@ 2019-05-06  9:18     ` Peter Zijlstra
  2019-05-08 13:38       ` Qais Yousef
  2019-05-06 13:52     ` Steven Rostedt
  2019-05-06 14:38     ` Qais Yousef
  2 siblings, 1 reply; 20+ messages in thread
From: Peter Zijlstra @ 2019-05-06  9:18 UTC (permalink / raw)
  To: Qais Yousef
  Cc: Ingo Molnar, Steven Rostedt, linux-kernel, Pavankumar Kondeti,
	Sebastian Andrzej Siewior, Uwe Kleine-Konig

On Mon, May 06, 2019 at 11:08:59AM +0200, Peter Zijlstra wrote:
> Also; I _really_ hate how fat they are. Why can't we do simple straight
> forward things like:
> 
> 	trace_pelt_cfq(cfq);
> 	trace_pelt_rq(rq);
> 	trace_pelt_se(se);
> 
> And then have the thing attached to the event do the fat bits like
> extract the path and whatnot.

ARGH, because we don't export any of those data structures (for good
reason).. bah I hate all this.


* Re: [PATCH 4/7] sched: Add sched_load_rq tracepoint
  2019-05-06  9:08   ` Peter Zijlstra
  2019-05-06  9:18     ` Peter Zijlstra
@ 2019-05-06 13:52     ` Steven Rostedt
  2019-05-06 14:42       ` Qais Yousef
  2019-05-06 14:38     ` Qais Yousef
  2 siblings, 1 reply; 20+ messages in thread
From: Steven Rostedt @ 2019-05-06 13:52 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Qais Yousef, Ingo Molnar, linux-kernel, Pavankumar Kondeti,
	Sebastian Andrzej Siewior, Uwe Kleine-Konig

On Mon, 6 May 2019 11:08:59 +0200
Peter Zijlstra <peterz@infradead.org> wrote:

> These functions really should be called trace_*()
> 
> Also; I _really_ hate how fat they are. Why can't we do simple straight
> forward things like:
> 
> 	trace_pelt_cfq(cfq);
> 	trace_pelt_rq(rq);
> 	trace_pelt_se(se);
> 
> And then have the thing attached to the event do the fat bits like
> extract the path and whatnot.

I'd like to avoid functions called "trace_*" that are not trace events.
It's getting confusing when I see a "trace_*()" function and then go
look for the corresponding TRACE_EVENT() just to find out that one does
not exist.

 sched_trace_*()  maybe?

-- Steve


* Re: [PATCH 4/7] sched: Add sched_load_rq tracepoint
  2019-05-06  9:08   ` Peter Zijlstra
  2019-05-06  9:18     ` Peter Zijlstra
  2019-05-06 13:52     ` Steven Rostedt
@ 2019-05-06 14:38     ` Qais Yousef
  2 siblings, 0 replies; 20+ messages in thread
From: Qais Yousef @ 2019-05-06 14:38 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, Steven Rostedt, linux-kernel, Pavankumar Kondeti,
	Sebastian Andrzej Siewior, Uwe Kleine-Konig

On 05/06/19 11:08, Peter Zijlstra wrote:
> On Sun, May 05, 2019 at 12:57:29PM +0100, Qais Yousef wrote:
> 
> > +/*
> > + * Following tracepoints are not exported in tracefs and provide hooking
> > + * mechanisms only for testing and debugging purposes.
> > + */
> > +DECLARE_TRACE(sched_load_rq,
> > +	TP_PROTO(int cpu, const char *path, struct sched_avg *avg),
> > +	TP_ARGS(cpu, path, avg));
> > +
> 
> > +DECLARE_TRACE(sched_load_se,
> > +       TP_PROTO(int cpu, const char *path, struct sched_entity *se),
> > +       TP_ARGS(cpu, path, se));
> > +
> 
> > +DECLARE_TRACE(sched_overutilized,
> > +       TP_PROTO(int overutilized),
> > +       TP_ARGS(overutilized));
> 
> This doesn't generate any actual userspace because of the lack of
> DEFINE_EVENT() ?

Documentation/trace/tracepoints.rst suggests using DEFINE_TRACE(). But using
that causes compilation errors because of some magic that is being redefined.
Not using DEFINE_TRACE() gave the intended effect described in the document,
so I assumed the document is outdated.


kernel/sched/core.c:27:1: note: in expansion of macro ‘DEFINE_TRACE’
 DEFINE_TRACE(sched_overutilized);
 ^~~~~~~~~~~~
./include/linux/tracepoint.h:287:20: note: previous definition of ‘__tracepoint_sched_overutilized’ was here
  struct tracepoint __tracepoint_##name     \
                    ^
./include/linux/tracepoint.h:293:2: note: in expansion of macro ‘DEFINE_TRACE_FN’
  DEFINE_TRACE_FN(name, NULL, NULL);
  ^~~~~~~~~~~~~~~
./include/trace/define_trace.h:67:2: note: in expansion of macro ‘DEFINE_TRACE’
  DEFINE_TRACE(name)
  ^~~~~~~~~~~~
./include/trace/events/sched.h:603:1: note: in expansion of macro ‘DECLARE_TRACE’
 DECLARE_TRACE(sched_overutilized,
 ^~~~~~~~~~~~~


DEFINE_EVENT() is only used with TRACE_EVENT() so certainly we don't want it
here.

> 
> > diff --git a/kernel/sched/sched_tracepoints.h b/kernel/sched/sched_tracepoints.h
> > new file mode 100644
> > index 000000000000..f4ded705118e
> > --- /dev/null
> > +++ b/kernel/sched/sched_tracepoints.h
> > @@ -0,0 +1,39 @@
> 
> Like with the other newly introduced header files, this one is lacking
> the normal include guard.

I was going to add them, but when I looked in sched.h and autogroup.h they
had none. So I thought the convention was not to use guards here.

I will add it.

> 
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * Scheduler tracepoints that are probe-able only and aren't exported ABI in
> > + * tracefs.
> > + */
> > +
> > +#include <trace/events/sched.h>
> > +
> > +#define SCHED_TP_PATH_LEN		64
> > +
> > +
> > +static __always_inline void sched_tp_load_cfs_rq(struct cfs_rq *cfs_rq)
> > +{
> > +	if (trace_sched_load_rq_enabled()) {
> > +		int cpu = cpu_of(rq_of(cfs_rq));
> > +		char path[SCHED_TP_PATH_LEN];
> > +
> > +		cfs_rq_tg_path(cfs_rq, path, SCHED_TP_PATH_LEN);
> > +		trace_sched_load_rq(cpu, path, &cfs_rq->avg);
> > +	}
> > +}
> > +
> > +static __always_inline void sched_tp_load_rt_rq(struct rq *rq)
> > +{
> > +	if (trace_sched_load_rq_enabled()) {
> > +		int cpu = cpu_of(rq);
> > +
> > +		trace_sched_load_rq(cpu, NULL, &rq->avg_rt);
> > +	}
> > +}
> > +
> > +static __always_inline void sched_tp_load_dl_rq(struct rq *rq)
> > +{
> > +	if (trace_sched_load_rq_enabled()) {
> > +		int cpu = cpu_of(rq);
> > +
> > +		trace_sched_load_rq(cpu, NULL, &rq->avg_dl);
> > +	}
> > +}
> 
> > +static __always_inline void sched_tp_load_se(struct sched_entity *se)
> > +{
> > +       if (trace_sched_load_se_enabled()) {
> > +               struct cfs_rq *gcfs_rq = group_cfs_rq(se);
> > +               struct cfs_rq *cfs_rq = cfs_rq_of(se);
> > +               char path[SCHED_TP_PATH_LEN];
> > +               int cpu = cpu_of(rq_of(cfs_rq));
> > +
> > +               cfs_rq_tg_path(gcfs_rq, path, SCHED_TP_PATH_LEN);
> > +               trace_sched_load_se(cpu, path, se);
> > +       }
> > +}
> 
> These functions really should be called trace_*()

I can rename the wrappers to trace_pelt_load_rq() or sched_trace_pelt_load_rq()
as Steve was suggesting.

I assume you're okay with the name of the tracepoints and your comment was
about the wrapper above only? ie: sched_load_rq vs pelt_rq.

> 
> Also; I _really_ hate how fat they are. Why can't we do simple straight

We can create a percpu variable instead of pushing the path on the stack. But
this might fail if the tracepoint is called in a preempt-enabled path. Also,
having the percpu variable always hanging around when these are mostly
disabled is ugly.

Maybe there's a better way to handle extracting this path info without copying
it here. Let me see if I can improve on this.

> forward things like:
> 
> 	trace_pelt_cfq(cfq);
> 	trace_pelt_rq(rq);
> 	trace_pelt_se(se);
> 
> And then have the thing attached to the event do the fat bits like
> extract the path and whatnot.

Thanks

--
Qais Yousef


* Re: [PATCH 4/7] sched: Add sched_load_rq tracepoint
  2019-05-06 13:52     ` Steven Rostedt
@ 2019-05-06 14:42       ` Qais Yousef
  2019-05-06 14:46         ` Steven Rostedt
  0 siblings, 1 reply; 20+ messages in thread
From: Qais Yousef @ 2019-05-06 14:42 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Peter Zijlstra, Ingo Molnar, linux-kernel, Pavankumar Kondeti,
	Sebastian Andrzej Siewior, Uwe Kleine-Konig

On 05/06/19 09:52, Steven Rostedt wrote:
> On Mon, 6 May 2019 11:08:59 +0200
> Peter Zijlstra <peterz@infradead.org> wrote:
> 
> > These functions really should be called trace_*()
> > 
> > Also; I _really_ hate how fat they are. Why can't we do simple straight
> > forward things like:
> > 
> > 	trace_pelt_cfq(cfq);
> > 	trace_pelt_rq(rq);
> > 	trace_pelt_se(se);
> > 
> > And then have the thing attached to the event do the fat bits like
> > extract the path and whatnot.
> 
> I'd like to avoid functions called "trace_*" that are not trace events.
> It's getting confusing when I see a "trace_*()" function and then go
> look for the corresponding TRACE_EVENT() just to find out that one does
> not exist.
> 
>  sched_trace_*()  maybe?

I can control that for the wrappers I'm introducing. But the actual tracepoints
get the 'trace_' part prepended automatically by the macros.

i.e. DECLARE_TRACE(pelt_rq, ...) will automatically generate a function called
trace_pelt_rq(...)

Or am I missing something?

Thanks

--
Qais Yousef


* Re: [PATCH 4/7] sched: Add sched_load_rq tracepoint
  2019-05-06 14:42       ` Qais Yousef
@ 2019-05-06 14:46         ` Steven Rostedt
  2019-05-06 15:33           ` Qais Yousef
  0 siblings, 1 reply; 20+ messages in thread
From: Steven Rostedt @ 2019-05-06 14:46 UTC (permalink / raw)
  To: Qais Yousef
  Cc: Peter Zijlstra, Ingo Molnar, linux-kernel, Pavankumar Kondeti,
	Sebastian Andrzej Siewior, Uwe Kleine-Konig

On Mon, 6 May 2019 15:42:00 +0100
Qais Yousef <qais.yousef@arm.com> wrote:

> I can control that for the wrappers I'm introducing. But the actual tracepoints
> get the 'trace_' part prepended automatically by the macros.
> 
> i.e. DECLARE_TRACE(pelt_rq, ...) will automatically generate a function called
> trace_pelt_rq(...)
> 
> Or am I missing something?

No trace comes from the trace points.

So basically, we are going back to having tracepoints without
associated trace events, which is just saying "we want trace
events here, but don't want an API". Of course, we can create a module
that can attach to them and create the trace events as well.

I'm not a big fan of this, but I'll let Peter decide.

-- Steve


* Re: [PATCH 4/7] sched: Add sched_load_rq tracepoint
  2019-05-06 14:46         ` Steven Rostedt
@ 2019-05-06 15:33           ` Qais Yousef
  2019-05-06 16:01             ` Steven Rostedt
  0 siblings, 1 reply; 20+ messages in thread
From: Qais Yousef @ 2019-05-06 15:33 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Peter Zijlstra, Ingo Molnar, linux-kernel, Pavankumar Kondeti,
	Sebastian Andrzej Siewior, Uwe Kleine-Konig

On 05/06/19 10:46, Steven Rostedt wrote:
> On Mon, 6 May 2019 15:42:00 +0100
> Qais Yousef <qais.yousef@arm.com> wrote:
> 
> > I can control that for the wrappers I'm introducing. But the actual tracepoints
> > get the 'trace_' part prepended automatically by the macros.
> > 
> > i.e. DECLARE_TRACE(pelt_rq, ...) will automatically generate a function called
> > trace_pelt_rq(...)
> > 
> > Or am I missing something?
> 
> No trace comes from the trace points.

If you want I can do something like below to help create a distinction. It is
non-enforcing though.

diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
index 9c3186578ce0..f654ced20045 100644
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -232,6 +232,7 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
  */
 #define __DECLARE_TRACE(name, proto, args, cond, data_proto, data_args) \
        extern struct tracepoint __tracepoint_##name;                   \
+       static inline void tp_##name(proto) __alias(trace_##name);      \
        static inline void trace_##name(proto)                          \
        {                                                               \
                if (static_key_false(&__tracepoint_##name.key))         \


Another option is to extend DECLARE_TRACE() to take a new argument IS_TP and
based on that select the function name. This will be enforcing but I will have
to go fixup many places.

Of course 'TP' can be replaced with anything more appealing.

--
Qais Yousef


* Re: [PATCH 4/7] sched: Add sched_load_rq tracepoint
  2019-05-06 15:33           ` Qais Yousef
@ 2019-05-06 16:01             ` Steven Rostedt
  2019-05-06 17:23               ` Qais Yousef
  0 siblings, 1 reply; 20+ messages in thread
From: Steven Rostedt @ 2019-05-06 16:01 UTC (permalink / raw)
  To: Qais Yousef
  Cc: Peter Zijlstra, Ingo Molnar, linux-kernel, Pavankumar Kondeti,
	Sebastian Andrzej Siewior, Uwe Kleine-Konig

On Mon, 6 May 2019 16:33:17 +0100
Qais Yousef <qais.yousef@arm.com> wrote:

> On 05/06/19 10:46, Steven Rostedt wrote:
> > On Mon, 6 May 2019 15:42:00 +0100
> > Qais Yousef <qais.yousef@arm.com> wrote:
> >   
> > > I can control that for the wrappers I'm introducing. But the actual tracepoints
> > > get the 'trace_' part prepended automatically by the macros.
> > > 
> > > i.e. DECLARE_TRACE(pelt_rq, ...) will automatically generate a function called
> > > trace_pelt_rq(...)
> > > 
> > > Or am I missing something?  
> > 
> > No trace comes from the trace points.  

Re-reading that line, I see I totally didn't express what I meant :-p

> 
> If you want I can do something like below to help create a distinction. It is
> non-enforcing though.
> 
> diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
> index 9c3186578ce0..f654ced20045 100644
> --- a/include/linux/tracepoint.h
> +++ b/include/linux/tracepoint.h
> @@ -232,6 +232,7 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
>   */
>  #define __DECLARE_TRACE(name, proto, args, cond, data_proto, data_args) \
>         extern struct tracepoint __tracepoint_##name;                   \
> +       static inline void tp_##name(proto) __alias(trace_##name);      \
>         static inline void trace_##name(proto)                          \
>         {                                                               \
>                 if (static_key_false(&__tracepoint_##name.key))         \
> 
> 
> Another option is to extend DECLARE_TRACE() to take a new argument IS_TP and
> based on that select the function name. This will be enforcing but I will have
> to go fixup many places.
> 
> Of course 'TP' can be replaced with anything more appealing.

No no no, I meant to say...

 "No that's OK. The "trace_" *is* from the trace points, and trace
 events build on top of them."

-- Steve



* Re: [PATCH 4/7] sched: Add sched_load_rq tracepoint
  2019-05-06 16:01             ` Steven Rostedt
@ 2019-05-06 17:23               ` Qais Yousef
  0 siblings, 0 replies; 20+ messages in thread
From: Qais Yousef @ 2019-05-06 17:23 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Peter Zijlstra, Ingo Molnar, linux-kernel, Pavankumar Kondeti,
	Sebastian Andrzej Siewior, Uwe Kleine-Konig

On 05/06/19 12:01, Steven Rostedt wrote:
> On Mon, 6 May 2019 16:33:17 +0100
> Qais Yousef <qais.yousef@arm.com> wrote:
> 
> > On 05/06/19 10:46, Steven Rostedt wrote:
> > > On Mon, 6 May 2019 15:42:00 +0100
> > > Qais Yousef <qais.yousef@arm.com> wrote:
> > >   
> > > > I can control that for the wrappers I'm introducing. But the actual tracepoints
> > > > get the 'trace_' part prepended automatically by the macros.
> > > > 
> > > > i.e. DECLARE_TRACE(pelt_rq, ...) will automatically generate a function called
> > > > trace_pelt_rq(...)
> > > > 
> > > > Or am I missing something?  
> > > 
> > > No trace comes from the trace points.  
> 
> Re-reading that line, I see I totally didn't express what I meant :-p
> 
> > 
> > If you want I can do something like below to help create a distinction. It is
> > none enforcing though.
> > 
> > diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
> > index 9c3186578ce0..f654ced20045 100644
> > --- a/include/linux/tracepoint.h
> > +++ b/include/linux/tracepoint.h
> > @@ -232,6 +232,7 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
> >   */
> >  #define __DECLARE_TRACE(name, proto, args, cond, data_proto, data_args) \
> >         extern struct tracepoint __tracepoint_##name;                   \
> > +       static inline void tp_##name(proto) __alias(trace_##name);      \
> >         static inline void trace_##name(proto)                          \
> >         {                                                               \
> >                 if (static_key_false(&__tracepoint_##name.key))         \
> > 
> > 
> > Another option is to extend DECLARE_TRACE() to take a new argument IS_TP and
> > based on that select the function name. This will be enforcing but I will have
> > to go fixup many places.
> > 
> > Of course 'TP' can be replaced with anything more appealing.
> 
> No no no, I meant to say...
> 
>  "No that's OK. The "trace_" *is* from the trace points, and trace
>  events build on top of them."

I did have to stare at the original statement for a bit :-)
This makes more sense now. Thanks for the clarification.

--
Qais Yousef


* Re: [PATCH 4/7] sched: Add sched_load_rq tracepoint
  2019-05-06  9:18     ` Peter Zijlstra
@ 2019-05-08 13:38       ` Qais Yousef
  0 siblings, 0 replies; 20+ messages in thread
From: Qais Yousef @ 2019-05-08 13:38 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, Steven Rostedt, linux-kernel, Pavankumar Kondeti,
	Sebastian Andrzej Siewior, Uwe Kleine-Konig

On 05/06/19 11:18, Peter Zijlstra wrote:
> On Mon, May 06, 2019 at 11:08:59AM +0200, Peter Zijlstra wrote:
> > Also; I _really_ hate how fat they are. Why can't we do simple straight
> > forward things like:
> > 
> > 	trace_pelt_cfq(cfq);
> > 	trace_pelt_rq(rq);
> > 	trace_pelt_se(se);
> > 
> > And then have the thing attached to the event do the fat bits like
> > extract the path and whatnot.
> 
> ARGH, because we don't export any of those data structures (for good
> reason).. bah I hate all this.

I am not a big fan either..

FWIW struct sched_entity and struct sched_avg are exported but only used in
kernel/sched/*. Are the reasons behind not exporting struct cfs_rq and struct
rq really different from the other 2?

Anyways. I have v2 almost ready but thought I'd ask before posting if we want
to handle this in a different way.

Thanks

--
Qais Yousef


* Re: [PATCH 4/7] sched: Add sched_load_rq tracepoint
  2019-05-05 11:57 ` [PATCH 4/7] sched: Add sched_load_rq tracepoint Qais Yousef
  2019-05-06  9:08   ` Peter Zijlstra
@ 2019-05-10  8:51   ` Dietmar Eggemann
  2019-05-10  9:14     ` Qais Yousef
  1 sibling, 1 reply; 20+ messages in thread
From: Dietmar Eggemann @ 2019-05-10  8:51 UTC (permalink / raw)
  To: Qais Yousef, Peter Zijlstra, Ingo Molnar, Steven Rostedt
  Cc: linux-kernel, Pavankumar Kondeti, Sebastian Andrzej Siewior,
	Uwe Kleine-Konig

Hi Qais,

On 5/5/19 1:57 PM, Qais Yousef wrote:

[...]

> diff --git a/kernel/sched/sched_tracepoints.h b/kernel/sched/sched_tracepoints.h
> new file mode 100644
> index 000000000000..f4ded705118e
> --- /dev/null
> +++ b/kernel/sched/sched_tracepoints.h
> @@ -0,0 +1,39 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Scheduler tracepoints that are probe-able only and aren't exported ABI in
> + * tracefs.
> + */
> +
> +#include <trace/events/sched.h>
> +
> +#define SCHED_TP_PATH_LEN		64
> +
> +
> +static __always_inline void sched_tp_load_cfs_rq(struct cfs_rq *cfs_rq)
> +{
> +	if (trace_sched_load_rq_enabled()) {
> +		int cpu = cpu_of(rq_of(cfs_rq));
> +		char path[SCHED_TP_PATH_LEN];
> +
> +		cfs_rq_tg_path(cfs_rq, path, SCHED_TP_PATH_LEN);
> +		trace_sched_load_rq(cpu, path, &cfs_rq->avg);

This will let a !CONFIG_SMP build fail.

> +	}
> +}
> +
> +static __always_inline void sched_tp_load_rt_rq(struct rq *rq)
> +{
> +	if (trace_sched_load_rq_enabled()) {
> +		int cpu = cpu_of(rq);
> +
> +		trace_sched_load_rq(cpu, NULL, &rq->avg_rt);

Same here.

> +	}
> +}
> +
> +static __always_inline void sched_tp_load_dl_rq(struct rq *rq)
> +{
> +	if (trace_sched_load_rq_enabled()) {
> +		int cpu = cpu_of(rq);
> +
> +		trace_sched_load_rq(cpu, NULL, &rq->avg_dl);

and here.


* Re: [PATCH 4/7] sched: Add sched_load_rq tracepoint
  2019-05-10  8:51   ` Dietmar Eggemann
@ 2019-05-10  9:14     ` Qais Yousef
  0 siblings, 0 replies; 20+ messages in thread
From: Qais Yousef @ 2019-05-10  9:14 UTC (permalink / raw)
  To: Dietmar Eggemann
  Cc: Peter Zijlstra, Ingo Molnar, Steven Rostedt, linux-kernel,
	Pavankumar Kondeti, Sebastian Andrzej Siewior, Uwe Kleine-Konig

On 05/10/19 10:51, Dietmar Eggemann wrote:
> Hi Qais,
> 
> On 5/5/19 1:57 PM, Qais Yousef wrote:
> 
> [...]
> 
> > diff --git a/kernel/sched/sched_tracepoints.h b/kernel/sched/sched_tracepoints.h
> > new file mode 100644
> > index 000000000000..f4ded705118e
> > --- /dev/null
> > +++ b/kernel/sched/sched_tracepoints.h
> > @@ -0,0 +1,39 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * Scheduler tracepoints that are probe-able only and aren't exported ABI in
> > + * tracefs.
> > + */
> > +
> > +#include <trace/events/sched.h>
> > +
> > +#define SCHED_TP_PATH_LEN		64
> > +
> > +
> > +static __always_inline void sched_tp_load_cfs_rq(struct cfs_rq *cfs_rq)
> > +{
> > +	if (trace_sched_load_rq_enabled()) {
> > +		int cpu = cpu_of(rq_of(cfs_rq));
> > +		char path[SCHED_TP_PATH_LEN];
> > +
> > +		cfs_rq_tg_path(cfs_rq, path, SCHED_TP_PATH_LEN);
> > +		trace_sched_load_rq(cpu, path, &cfs_rq->avg);
> 
> This will let a !CONFIG_SMP build fail.

You're right. sched_avg is only defined if CONFIG_SMP. Fixed all three
functions.

Thanks!

--
Qais Yousef

