linux-kernel.vger.kernel.org archive mirror
* [PATCH RFC V2 00/10] perf top optimization
@ 2017-09-11  2:23 kan.liang
  2017-09-11  2:23 ` [PATCH RFC V2 01/10] perf tools: hashtable for machine threads kan.liang
                   ` (11 more replies)
  0 siblings, 12 replies; 37+ messages in thread
From: kan.liang @ 2017-09-11  2:23 UTC (permalink / raw)
  To: acme, peterz, mingo, linux-kernel
  Cc: jolsa, namhyung, adrian.hunter, lukasz.odzioba, ak, Kan Liang

From: Kan Liang <kan.liang@intel.com>

This patch series fixes a severe performance issue with perf top on
Knights Landing/Knights Mill when monitoring a heavily loaded system.
perf top takes a few minutes to show its first result, which is
unacceptable.
With the patch series applied, the latency drops to a few seconds.

machine__synthesize_threads and perf_top__mmap_read account for most of
the perf top time (> 99%).
Patches 1-9 optimize machine__synthesize_threads.
Patch 10 optimizes perf_top__mmap_read.

Optimization for machine__synthesize_threads
  - Multithread the whole synthesize process.
  - The number of threads defaults to the number of online CPUs.
    The user can change it through a new option (see the call sketch
    just below this list).
  - Introduce a hashtable for machine threads to reduce the lock
    contention.
  - The optimization could also benefit other platforms and other
    perf tools, like perf record, but this patch series only converts
    perf top. The other tools can be done later separately.
  - With this optimization applied, there is a 1.56x speedup on
    Knights Mill with a heavy workload.
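
For reference, the perf top call site after this series (patches 8 and 9
below) ends up as:

	machine__synthesize_threads(&top->session->machines.host, &opts->target,
				    top->evlist->threads, false,
				    opts->proc_map_timeout,
				    top->nr_threads_synthesize);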

Optimization for perf_top__mmap_read
  - Switch back to overwrite mode.
    In non-overwrite mode, perf top tries to read everything in the ring
    buffer and does not check for overwritten ("messed up") entries. When
    lots of samples are delivered in a short time, the processing time can
    be very long.
    Considering the real-time requirement of perf top, it should switch
    back to overwrite mode.
  - With this optimization applied, there is a huge 53.8x speedup on
    Knights Mill with a heavy workload.
  - With this optimization applied, the latency of perf_top__mmap_read is
    below the default perf top refresh interval (2s) on Knights Mill with
    a heavy workload.

Here are the perf top latency test results on Knights Mill and a Skylake server.

The heavy workload is a Linux kernel build, as below:
"sudo nice make -j$(grep -c '^processor' /proc/cpuinfo)"
Then, "sudo perf top"

The latency is measured from launching perf top until the first
profiling result is shown.

- Latency on Knights Mill (272 CPUs)

Original(s)	With patch(s)	Speedup
272.68		16.48		16.54x

- Latency on Skylake server (192 CPUs)

Original(s)	With patch(s)	Speedup
12.28		2.96		4.15x

Changes since V1:
 - Patch 1: machine threads and hashtable related renaming (Arnaldo)
 - Patch 6: use a smaller locked section for comm_str__put
   add a locked wrapper for comm_str__findnew              (Arnaldo)

Kan Liang (10):
  perf tools: hashtable for machine threads
  perf tools: using scandir to replace readdir
  perf tools: using comm_str to replace comm in hist_entry
  perf tools: introduce a new function to set namespaces id
  perf tools: lock to protect thread list
  perf tools: lock to protect comm_str rb tree
  perf tools: change machine comm_exec type to atomic
  perf top: implement multithreading for perf_event__synthesize_threads
  perf top: add option to set the number of thread for event synthesize
  perf top: switch back to overwrite mode

 tools/perf/builtin-kvm.c              |   3 +-
 tools/perf/builtin-record.c           |   2 +-
 tools/perf/builtin-top.c              |   9 +-
 tools/perf/builtin-trace.c            |  21 +++--
 tools/perf/tests/mmap-thread-lookup.c |   2 +-
 tools/perf/ui/browsers/hists.c        |   2 +-
 tools/perf/util/comm.c                |  18 +++-
 tools/perf/util/event.c               | 149 +++++++++++++++++++++++++-------
 tools/perf/util/event.h               |  14 ++-
 tools/perf/util/evlist.c              |   5 +-
 tools/perf/util/hist.c                |  11 +--
 tools/perf/util/machine.c             | 158 +++++++++++++++++++++-------------
 tools/perf/util/machine.h             |  34 ++++++--
 tools/perf/util/rb_resort.h           |   5 +-
 tools/perf/util/sort.c                |   8 +-
 tools/perf/util/sort.h                |   2 +-
 tools/perf/util/thread.c              |  68 ++++++++++++---
 tools/perf/util/thread.h              |   6 +-
 tools/perf/util/top.h                 |   1 +
 19 files changed, 376 insertions(+), 142 deletions(-)

-- 
2.5.5

^ permalink raw reply	[flat|nested] 37+ messages in thread

* [PATCH RFC V2 01/10] perf tools: hashtable for machine threads
  2017-09-11  2:23 [PATCH RFC V2 00/10] perf top optimization kan.liang
@ 2017-09-11  2:23 ` kan.liang
  2017-09-13 13:29   ` Arnaldo Carvalho de Melo
  2017-09-22 16:42   ` [tip:perf/core] perf machine: Use " tip-bot for Kan Liang
  2017-09-11  2:23 ` [PATCH RFC V2 02/10] perf tools: using scandir to replace readdir kan.liang
                   ` (10 subsequent siblings)
  11 siblings, 2 replies; 37+ messages in thread
From: kan.liang @ 2017-09-11  2:23 UTC (permalink / raw)
  To: acme, peterz, mingo, linux-kernel
  Cc: jolsa, namhyung, adrian.hunter, lukasz.odzioba, ak, Kan Liang

From: Kan Liang <kan.liang@intel.com>

To process any event, perf first needs to find the corresponding thread
in the machine. The machine maintains an rb tree to store all threads,
protected by a rw lock.
That is not a problem for the current perf, which processes events
serially. However, it becomes a scalability bottleneck when events are
processed in parallel, especially on a heavily loaded system with many
threads.

Introduce a hashtable that splits the one big rb tree into many small rb
trees, one per bucket. The index is thread id % hashtable size. This
reduces the lock contention.
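
In short, each tid now hashes to one of THREADS__TABLE_SIZE buckets, and
only that bucket's rwlock is taken, so lookups for tids that land in
different buckets no longer serialize on one machine-wide lock. A minimal
sketch of the locking pattern (the names follow the patch below):

	struct threads *bucket = machine__thread(machine, tid);
	struct thread *th;

	/* machine__thread() is &machine->threads[tid % THREADS__TABLE_SIZE] */
	pthread_rwlock_wrlock(&bucket->lock);
	th = __machine__findnew_thread(machine, pid, tid);
	pthread_rwlock_unlock(&bucket->lock);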

Signed-off-by: Kan Liang <kan.liang@intel.com>
---
 tools/perf/builtin-trace.c  |  19 +++---
 tools/perf/util/machine.c   | 139 ++++++++++++++++++++++++++++----------------
 tools/perf/util/machine.h   |  23 ++++++--
 tools/perf/util/rb_resort.h |   5 +-
 4 files changed, 120 insertions(+), 66 deletions(-)

diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
index 771ddab..ee8c6e8 100644
--- a/tools/perf/builtin-trace.c
+++ b/tools/perf/builtin-trace.c
@@ -2730,20 +2730,23 @@ DEFINE_RESORT_RB(threads, (thread__nr_events(a->thread->priv) < thread__nr_event
 
 static size_t trace__fprintf_thread_summary(struct trace *trace, FILE *fp)
 {
-	DECLARE_RESORT_RB_MACHINE_THREADS(threads, trace->host);
 	size_t printed = trace__fprintf_threads_header(fp);
 	struct rb_node *nd;
+	int i;
 
-	if (threads == NULL) {
-		fprintf(fp, "%s", "Error sorting output by nr_events!\n");
-		return 0;
-	}
+	for (i = 0; i < THREADS__TABLE_SIZE; i++) {
+		DECLARE_RESORT_RB_MACHINE_THREADS(threads, trace->host, i);
 
-	resort_rb__for_each_entry(nd, threads)
-		printed += trace__fprintf_thread(fp, threads_entry->thread, trace);
+		if (threads == NULL) {
+			fprintf(fp, "%s", "Error sorting output by nr_events!\n");
+			return 0;
+		}
 
-	resort_rb__delete(threads);
+		resort_rb__for_each_entry(nd, threads)
+			printed += trace__fprintf_thread(fp, threads_entry->thread, trace);
 
+		resort_rb__delete(threads);
+	}
 	return printed;
 }
 
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index df70936..2f1ad29 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -33,6 +33,21 @@ static void dsos__init(struct dsos *dsos)
 	pthread_rwlock_init(&dsos->lock, NULL);
 }
 
+static void machine__thread_init(struct machine *machine)
+{
+	struct threads *thread;
+	int i;
+
+	for (i = 0; i < THREADS__TABLE_SIZE; i++) {
+		thread = &machine->threads[i];
+		thread->entries = RB_ROOT;
+		pthread_rwlock_init(&thread->lock, NULL);
+		thread->nr = 0;
+		INIT_LIST_HEAD(&thread->dead);
+		thread->last_match = NULL;
+	}
+}
+
 int machine__init(struct machine *machine, const char *root_dir, pid_t pid)
 {
 	memset(machine, 0, sizeof(*machine));
@@ -40,11 +55,7 @@ int machine__init(struct machine *machine, const char *root_dir, pid_t pid)
 	RB_CLEAR_NODE(&machine->rb_node);
 	dsos__init(&machine->dsos);
 
-	machine->threads = RB_ROOT;
-	pthread_rwlock_init(&machine->threads_lock, NULL);
-	machine->nr_threads = 0;
-	INIT_LIST_HEAD(&machine->dead_threads);
-	machine->last_match = NULL;
+	machine__thread_init(machine);
 
 	machine->vdso_info = NULL;
 	machine->env = NULL;
@@ -140,28 +151,39 @@ static void dsos__exit(struct dsos *dsos)
 
 void machine__delete_threads(struct machine *machine)
 {
+	struct threads *thread;
 	struct rb_node *nd;
+	int i;
 
-	pthread_rwlock_wrlock(&machine->threads_lock);
-	nd = rb_first(&machine->threads);
-	while (nd) {
-		struct thread *t = rb_entry(nd, struct thread, rb_node);
+	for (i = 0; i < THREADS__TABLE_SIZE; i++) {
+		thread = &machine->threads[i];
+		pthread_rwlock_wrlock(&thread->lock);
+		nd = rb_first(&thread->entries);
+		while (nd) {
+			struct thread *t = rb_entry(nd, struct thread, rb_node);
 
-		nd = rb_next(nd);
-		__machine__remove_thread(machine, t, false);
+			nd = rb_next(nd);
+			__machine__remove_thread(machine, t, false);
+		}
+		pthread_rwlock_unlock(&thread->lock);
 	}
-	pthread_rwlock_unlock(&machine->threads_lock);
 }
 
 void machine__exit(struct machine *machine)
 {
+	struct threads *thread;
+	int i;
+
 	machine__destroy_kernel_maps(machine);
 	map_groups__exit(&machine->kmaps);
 	dsos__exit(&machine->dsos);
 	machine__exit_vdso(machine);
 	zfree(&machine->root_dir);
 	zfree(&machine->current_tid);
-	pthread_rwlock_destroy(&machine->threads_lock);
+	for (i = 0; i < THREADS__TABLE_SIZE; i++) {
+		thread = &machine->threads[i];
+		pthread_rwlock_destroy(&thread->lock);
+	}
 }
 
 void machine__delete(struct machine *machine)
@@ -382,7 +404,8 @@ static struct thread *____machine__findnew_thread(struct machine *machine,
 						  pid_t pid, pid_t tid,
 						  bool create)
 {
-	struct rb_node **p = &machine->threads.rb_node;
+	struct threads *thread = machine__thread(machine, tid);
+	struct rb_node **p = &thread->entries.rb_node;
 	struct rb_node *parent = NULL;
 	struct thread *th;
 
@@ -391,14 +414,14 @@ static struct thread *____machine__findnew_thread(struct machine *machine,
 	 * so most of the time we dont have to look up
 	 * the full rbtree:
 	 */
-	th = machine->last_match;
+	th = thread->last_match;
 	if (th != NULL) {
 		if (th->tid == tid) {
 			machine__update_thread_pid(machine, th, pid);
 			return thread__get(th);
 		}
 
-		machine->last_match = NULL;
+		thread->last_match = NULL;
 	}
 
 	while (*p != NULL) {
@@ -406,7 +429,7 @@ static struct thread *____machine__findnew_thread(struct machine *machine,
 		th = rb_entry(parent, struct thread, rb_node);
 
 		if (th->tid == tid) {
-			machine->last_match = th;
+			thread->last_match = th;
 			machine__update_thread_pid(machine, th, pid);
 			return thread__get(th);
 		}
@@ -423,7 +446,7 @@ static struct thread *____machine__findnew_thread(struct machine *machine,
 	th = thread__new(pid, tid);
 	if (th != NULL) {
 		rb_link_node(&th->rb_node, parent, p);
-		rb_insert_color(&th->rb_node, &machine->threads);
+		rb_insert_color(&th->rb_node, &thread->entries);
 
 		/*
 		 * We have to initialize map_groups separately
@@ -434,7 +457,7 @@ static struct thread *____machine__findnew_thread(struct machine *machine,
 		 * leader and that would screwed the rb tree.
 		 */
 		if (thread__init_map_groups(th, machine)) {
-			rb_erase_init(&th->rb_node, &machine->threads);
+			rb_erase_init(&th->rb_node, &thread->entries);
 			RB_CLEAR_NODE(&th->rb_node);
 			thread__put(th);
 			return NULL;
@@ -443,8 +466,8 @@ static struct thread *____machine__findnew_thread(struct machine *machine,
 		 * It is now in the rbtree, get a ref
 		 */
 		thread__get(th);
-		machine->last_match = th;
-		++machine->nr_threads;
+		thread->last_match = th;
+		++thread->nr;
 	}
 
 	return th;
@@ -458,21 +481,24 @@ struct thread *__machine__findnew_thread(struct machine *machine, pid_t pid, pid
 struct thread *machine__findnew_thread(struct machine *machine, pid_t pid,
 				       pid_t tid)
 {
+	struct threads *thread = machine__thread(machine, tid);
 	struct thread *th;
 
-	pthread_rwlock_wrlock(&machine->threads_lock);
+	pthread_rwlock_wrlock(&thread->lock);
 	th = __machine__findnew_thread(machine, pid, tid);
-	pthread_rwlock_unlock(&machine->threads_lock);
+	pthread_rwlock_unlock(&thread->lock);
 	return th;
 }
 
 struct thread *machine__find_thread(struct machine *machine, pid_t pid,
 				    pid_t tid)
 {
+	struct threads *thread = machine__thread(machine, tid);
 	struct thread *th;
-	pthread_rwlock_rdlock(&machine->threads_lock);
+
+	pthread_rwlock_rdlock(&thread->lock);
 	th =  ____machine__findnew_thread(machine, pid, tid, false);
-	pthread_rwlock_unlock(&machine->threads_lock);
+	pthread_rwlock_unlock(&thread->lock);
 	return th;
 }
 
@@ -719,21 +745,25 @@ size_t machine__fprintf_vmlinux_path(struct machine *machine, FILE *fp)
 
 size_t machine__fprintf(struct machine *machine, FILE *fp)
 {
-	size_t ret;
+	struct threads *thread;
 	struct rb_node *nd;
+	size_t ret;
+	int i;
 
-	pthread_rwlock_rdlock(&machine->threads_lock);
-
-	ret = fprintf(fp, "Threads: %u\n", machine->nr_threads);
+	for (i = 0; i < THREADS__TABLE_SIZE; i++) {
+		thread = &machine->threads[i];
+		pthread_rwlock_rdlock(&thread->lock);
 
-	for (nd = rb_first(&machine->threads); nd; nd = rb_next(nd)) {
-		struct thread *pos = rb_entry(nd, struct thread, rb_node);
+		ret = fprintf(fp, "Threads: %u\n", thread->nr);
 
-		ret += thread__fprintf(pos, fp);
-	}
+		for (nd = rb_first(&thread->entries); nd; nd = rb_next(nd)) {
+			struct thread *pos = rb_entry(nd, struct thread, rb_node);
 
-	pthread_rwlock_unlock(&machine->threads_lock);
+			ret += thread__fprintf(pos, fp);
+		}
 
+		pthread_rwlock_unlock(&thread->lock);
+	}
 	return ret;
 }
 
@@ -1479,23 +1509,25 @@ int machine__process_mmap_event(struct machine *machine, union perf_event *event
 
 static void __machine__remove_thread(struct machine *machine, struct thread *th, bool lock)
 {
-	if (machine->last_match == th)
-		machine->last_match = NULL;
+	struct threads *thread = machine__thread(machine, th->tid);
+
+	if (thread->last_match == th)
+		thread->last_match = NULL;
 
 	BUG_ON(refcount_read(&th->refcnt) == 0);
 	if (lock)
-		pthread_rwlock_wrlock(&machine->threads_lock);
-	rb_erase_init(&th->rb_node, &machine->threads);
+		pthread_rwlock_wrlock(&thread->lock);
+	rb_erase_init(&th->rb_node, &thread->entries);
 	RB_CLEAR_NODE(&th->rb_node);
-	--machine->nr_threads;
+	--thread->nr;
 	/*
 	 * Move it first to the dead_threads list, then drop the reference,
 	 * if this is the last reference, then the thread__delete destructor
 	 * will be called and we will remove it from the dead_threads list.
 	 */
-	list_add_tail(&th->node, &machine->dead_threads);
+	list_add_tail(&th->node, &thread->dead);
 	if (lock)
-		pthread_rwlock_unlock(&machine->threads_lock);
+		pthread_rwlock_unlock(&thread->lock);
 	thread__put(th);
 }
 
@@ -2140,21 +2172,26 @@ int machine__for_each_thread(struct machine *machine,
 			     int (*fn)(struct thread *thread, void *p),
 			     void *priv)
 {
+	struct threads *threads;
 	struct rb_node *nd;
 	struct thread *thread;
 	int rc = 0;
+	int i;
 
-	for (nd = rb_first(&machine->threads); nd; nd = rb_next(nd)) {
-		thread = rb_entry(nd, struct thread, rb_node);
-		rc = fn(thread, priv);
-		if (rc != 0)
-			return rc;
-	}
+	for (i = 0; i < THREADS__TABLE_SIZE; i++) {
+		threads = &machine->threads[i];
+		for (nd = rb_first(&threads->entries); nd; nd = rb_next(nd)) {
+			thread = rb_entry(nd, struct thread, rb_node);
+			rc = fn(thread, priv);
+			if (rc != 0)
+				return rc;
+		}
 
-	list_for_each_entry(thread, &machine->dead_threads, node) {
-		rc = fn(thread, priv);
-		if (rc != 0)
-			return rc;
+		list_for_each_entry(thread, &threads->dead, node) {
+			rc = fn(thread, priv);
+			if (rc != 0)
+				return rc;
+		}
 	}
 	return rc;
 }
diff --git a/tools/perf/util/machine.h b/tools/perf/util/machine.h
index 3cdb134..64663eb 100644
--- a/tools/perf/util/machine.h
+++ b/tools/perf/util/machine.h
@@ -23,6 +23,17 @@ extern const char *ref_reloc_sym_names[];
 
 struct vdso_info;
 
+#define THREADS__TABLE_BITS	8
+#define THREADS__TABLE_SIZE	(1 << THREADS__TABLE_BITS)
+
+struct threads {
+	struct rb_root	  entries;
+	pthread_rwlock_t  lock;
+	unsigned int	  nr;
+	struct list_head  dead;
+	struct thread	  *last_match;
+};
+
 struct machine {
 	struct rb_node	  rb_node;
 	pid_t		  pid;
@@ -30,11 +41,7 @@ struct machine {
 	bool		  comm_exec;
 	bool		  kptr_restrict_warned;
 	char		  *root_dir;
-	struct rb_root	  threads;
-	pthread_rwlock_t  threads_lock;
-	unsigned int	  nr_threads;
-	struct list_head  dead_threads;
-	struct thread	  *last_match;
+	struct threads    threads[THREADS__TABLE_SIZE];
 	struct vdso_info  *vdso_info;
 	struct perf_env   *env;
 	struct dsos	  dsos;
@@ -49,6 +56,12 @@ struct machine {
 };
 
 static inline
+struct threads *machine__thread(struct machine *machine, pid_t tid)
+{
+	return &machine->threads[tid % THREADS__TABLE_SIZE];
+}
+
+static inline
 struct map *__machine__kernel_map(struct machine *machine, enum map_type type)
 {
 	return machine->vmlinux_maps[type];
diff --git a/tools/perf/util/rb_resort.h b/tools/perf/util/rb_resort.h
index 808cc45..b88fcf1 100644
--- a/tools/perf/util/rb_resort.h
+++ b/tools/perf/util/rb_resort.h
@@ -143,7 +143,8 @@ struct __name##_sorted *__name = __name##_sorted__new
 				  __ilist->rblist.nr_entries)
 
 /* For 'struct machine->threads' */
-#define DECLARE_RESORT_RB_MACHINE_THREADS(__name, __machine)			\
-	DECLARE_RESORT_RB(__name)(&__machine->threads, __machine->nr_threads)
+#define DECLARE_RESORT_RB_MACHINE_THREADS(__name, __machine, tid)		\
+	DECLARE_RESORT_RB(__name)(&__machine->threads[tid].entries,		\
+				  __machine->threads[tid].nr)
 
 #endif /* _PERF_RESORT_RB_H_ */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH RFC V2 02/10] perf tools: using scandir to replace readdir
  2017-09-11  2:23 [PATCH RFC V2 00/10] perf top optimization kan.liang
  2017-09-11  2:23 ` [PATCH RFC V2 01/10] perf tools: hashtable for machine threads kan.liang
@ 2017-09-11  2:23 ` kan.liang
  2017-09-13 15:19   ` Arnaldo Carvalho de Melo
  2017-09-11  2:23 ` [PATCH RFC V2 03/10] perf tools: using comm_str to replace comm in hist_entry kan.liang
                   ` (9 subsequent siblings)
  11 siblings, 1 reply; 37+ messages in thread
From: kan.liang @ 2017-09-11  2:23 UTC (permalink / raw)
  To: acme, peterz, mingo, linux-kernel
  Cc: jolsa, namhyung, adrian.hunter, lukasz.odzioba, ak, Kan Liang

From: Kan Liang <kan.liang@intel.com>

In perf_event__synthesize_threads, perf walks all the /proc entries
serially with readdir.
scandir takes a snapshot of /proc instead, which is multithreading
friendly.

It's possible that threads created while events are being synthesized
will be missed, but the number of missed threads should be small and
should not impact the final analysis.
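
For illustration, here is a standalone sketch of the scandir() pattern
used below (this is not code from the patch; it just shows the
snapshot-then-walk idea on /proc):

	#include <dirent.h>
	#include <ctype.h>
	#include <stdio.h>
	#include <stdlib.h>

	int main(void)
	{
		struct dirent **namelist;
		int n, i;

		/* one snapshot of /proc, sorted alphabetically */
		n = scandir("/proc", &namelist, NULL, alphasort);
		if (n < 0)
			return 1;

		for (i = 0; i < n; i++) {
			/* only numeric entries are processes */
			if (isdigit(namelist[i]->d_name[0]))
				printf("pid %s\n", namelist[i]->d_name);
			free(namelist[i]);
		}
		free(namelist);
		return 0;
	}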

Signed-off-by: Kan Liang <kan.liang@intel.com>
---
 tools/perf/util/event.c | 46 ++++++++++++++++++++++++++--------------------
 1 file changed, 26 insertions(+), 20 deletions(-)

diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
index 1c905ba..c31f678 100644
--- a/tools/perf/util/event.c
+++ b/tools/perf/util/event.c
@@ -683,12 +683,14 @@ int perf_event__synthesize_threads(struct perf_tool *tool,
 				   bool mmap_data,
 				   unsigned int proc_map_timeout)
 {
-	DIR *proc;
-	char proc_path[PATH_MAX];
-	struct dirent *dirent;
 	union perf_event *comm_event, *mmap_event, *fork_event;
 	union perf_event *namespaces_event;
+	char proc_path[PATH_MAX];
+	struct dirent **dirent;
 	int err = -1;
+	char *end;
+	pid_t pid;
+	int n, i;
 
 	if (machine__is_default_guest(machine))
 		return 0;
@@ -712,29 +714,33 @@ int perf_event__synthesize_threads(struct perf_tool *tool,
 		goto out_free_fork;
 
 	snprintf(proc_path, sizeof(proc_path), "%s/proc", machine->root_dir);
-	proc = opendir(proc_path);
+	n = scandir(proc_path, &dirent, 0, alphasort);
 
-	if (proc == NULL)
+	if (n < 0)
 		goto out_free_namespaces;
 
-	while ((dirent = readdir(proc)) != NULL) {
-		char *end;
-		pid_t pid = strtol(dirent->d_name, &end, 10);
-
-		if (*end) /* only interested in proper numerical dirents */
+	for (i = 0; i < n; i++) {
+		if (!isdigit(dirent[i]->d_name[0]))
 			continue;
-		/*
- 		 * We may race with exiting thread, so don't stop just because
- 		 * one thread couldn't be synthesized.
- 		 */
-		__event__synthesize_thread(comm_event, mmap_event, fork_event,
-					   namespaces_event, pid, 1, process,
-					   tool, machine, mmap_data,
-					   proc_map_timeout);
-	}
 
+		pid = (pid_t)strtol(dirent[i]->d_name, &end, 10);
+		/* only interested in proper numerical dirents */
+		if (!*end) {
+			/*
+			 * We may race with exiting thread, so don't stop
+			 * just because one thread couldn't be synthesized.
+			 */
+			__event__synthesize_thread(comm_event, mmap_event,
+						   fork_event, namespaces_event,
+						   pid, 1, process, tool,
+						   machine, mmap_data,
+						   proc_map_timeout);
+		}
+		free(dirent[i]);
+	}
+	free(dirent);
 	err = 0;
-	closedir(proc);
+
 out_free_namespaces:
 	free(namespaces_event);
 out_free_fork:
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH RFC V2 03/10] perf tools: using comm_str to replace comm in hist_entry
  2017-09-11  2:23 [PATCH RFC V2 00/10] perf top optimization kan.liang
  2017-09-11  2:23 ` [PATCH RFC V2 01/10] perf tools: hashtable for machine threads kan.liang
  2017-09-11  2:23 ` [PATCH RFC V2 02/10] perf tools: using scandir to replace readdir kan.liang
@ 2017-09-11  2:23 ` kan.liang
  2017-09-13 15:21   ` Arnaldo Carvalho de Melo
  2017-09-11  2:23 ` [PATCH RFC V2 04/10] perf tools: introduce a new function to set namespaces id kan.liang
                   ` (8 subsequent siblings)
  11 siblings, 1 reply; 37+ messages in thread
From: kan.liang @ 2017-09-11  2:23 UTC (permalink / raw)
  To: acme, peterz, mingo, linux-kernel
  Cc: jolsa, namhyung, adrian.hunter, lukasz.odzioba, ak, Kan Liang

From: Kan Liang <kan.liang@intel.com>

A hist_entry only needs the comm string for comparisons, so store
comm_str instead of a struct comm pointer.

Signed-off-by: Kan Liang <kan.liang@intel.com>
---
 tools/perf/util/hist.c | 4 ++--
 tools/perf/util/sort.c | 8 ++++----
 tools/perf/util/sort.h | 2 +-
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
index e60d8d8..0f00dd9 100644
--- a/tools/perf/util/hist.c
+++ b/tools/perf/util/hist.c
@@ -587,7 +587,7 @@ __hists__add_entry(struct hists *hists,
 	struct namespaces *ns = thread__namespaces(al->thread);
 	struct hist_entry entry = {
 		.thread	= al->thread,
-		.comm = thread__comm(al->thread),
+		.comm_str = thread__comm_str(al->thread),
 		.cgroup_id = {
 			.dev = ns ? ns->link_info[CGROUP_NS_INDEX].dev : 0,
 			.ino = ns ? ns->link_info[CGROUP_NS_INDEX].ino : 0,
@@ -944,7 +944,7 @@ iter_add_next_cumulative_entry(struct hist_entry_iter *iter,
 		.hists = evsel__hists(evsel),
 		.cpu = al->cpu,
 		.thread = al->thread,
-		.comm = thread__comm(al->thread),
+		.comm_str = thread__comm_str(al->thread),
 		.ip = al->addr,
 		.ms = {
 			.map = al->map,
diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
index eb3ab90..99ab411 100644
--- a/tools/perf/util/sort.c
+++ b/tools/perf/util/sort.c
@@ -114,26 +114,26 @@ static int64_t
 sort__comm_cmp(struct hist_entry *left, struct hist_entry *right)
 {
 	/* Compare the addr that should be unique among comm */
-	return strcmp(comm__str(right->comm), comm__str(left->comm));
+	return strcmp(right->comm_str, left->comm_str);
 }
 
 static int64_t
 sort__comm_collapse(struct hist_entry *left, struct hist_entry *right)
 {
 	/* Compare the addr that should be unique among comm */
-	return strcmp(comm__str(right->comm), comm__str(left->comm));
+	return strcmp(right->comm_str, left->comm_str);
 }
 
 static int64_t
 sort__comm_sort(struct hist_entry *left, struct hist_entry *right)
 {
-	return strcmp(comm__str(right->comm), comm__str(left->comm));
+	return strcmp(right->comm_str, left->comm_str);
 }
 
 static int hist_entry__comm_snprintf(struct hist_entry *he, char *bf,
 				     size_t size, unsigned int width)
 {
-	return repsep_snprintf(bf, size, "%-*.*s", width, width, comm__str(he->comm));
+	return repsep_snprintf(bf, size, "%-*.*s", width, width, he->comm_str);
 }
 
 struct sort_entry sort_comm = {
diff --git a/tools/perf/util/sort.h b/tools/perf/util/sort.h
index f36dc49..861cbe7 100644
--- a/tools/perf/util/sort.h
+++ b/tools/perf/util/sort.h
@@ -96,7 +96,7 @@ struct hist_entry {
 	struct he_stat		*stat_acc;
 	struct map_symbol	ms;
 	struct thread		*thread;
-	struct comm		*comm;
+	const char		*comm_str;
 	struct namespace_id	cgroup_id;
 	u64			ip;
 	u64			transaction;
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH RFC V2 04/10] perf tools: introduce a new function to set namespaces id
  2017-09-11  2:23 [PATCH RFC V2 00/10] perf top optimization kan.liang
                   ` (2 preceding siblings ...)
  2017-09-11  2:23 ` [PATCH RFC V2 03/10] perf tools: using comm_str to replace comm in hist_entry kan.liang
@ 2017-09-11  2:23 ` kan.liang
  2017-09-11  2:23 ` [PATCH RFC V2 05/10] perf tools: lock to protect thread list kan.liang
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 37+ messages in thread
From: kan.liang @ 2017-09-11  2:23 UTC (permalink / raw)
  To: acme, peterz, mingo, linux-kernel
  Cc: jolsa, namhyung, adrian.hunter, lukasz.odzioba, ak, Kan Liang

From: Kan Liang <kan.liang@intel.com>

Do the namespaces id extraction in thread.c, via a new helper
thread__namespaces_id().

Signed-off-by: Kan Liang <kan.liang@intel.com>
---
 tools/perf/util/hist.c   |  7 ++-----
 tools/perf/util/thread.c | 10 ++++++++++
 tools/perf/util/thread.h |  2 ++
 3 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
index 0f00dd9..e332c4f 100644
--- a/tools/perf/util/hist.c
+++ b/tools/perf/util/hist.c
@@ -584,14 +584,9 @@ __hists__add_entry(struct hists *hists,
 		   bool sample_self,
 		   struct hist_entry_ops *ops)
 {
-	struct namespaces *ns = thread__namespaces(al->thread);
 	struct hist_entry entry = {
 		.thread	= al->thread,
 		.comm_str = thread__comm_str(al->thread),
-		.cgroup_id = {
-			.dev = ns ? ns->link_info[CGROUP_NS_INDEX].dev : 0,
-			.ino = ns ? ns->link_info[CGROUP_NS_INDEX].ino : 0,
-		},
 		.ms = {
 			.map	= al->map,
 			.sym	= al->sym,
@@ -617,6 +612,8 @@ __hists__add_entry(struct hists *hists,
 		.ops = ops,
 	};
 
+	thread__namespaces_id(al->thread, &entry.cgroup_id.dev,
+			      &entry.cgroup_id.ino);
 	return hists__findnew_entry(hists, &entry, al, sample_self);
 }
 
diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c
index aee9a42..b7957da 100644
--- a/tools/perf/util/thread.c
+++ b/tools/perf/util/thread.c
@@ -149,6 +149,16 @@ int thread__set_namespaces(struct thread *thread, u64 timestamp,
 	return 0;
 }
 
+void thread__namespaces_id(const struct thread *thread,
+			   u64 *dev, u64 *ino)
+{
+	struct namespaces *ns;
+
+	ns = thread__namespaces(thread);
+	*dev = ns ? ns->link_info[CGROUP_NS_INDEX].dev : 0;
+	*ino = ns ? ns->link_info[CGROUP_NS_INDEX].ino : 0;
+}
+
 struct comm *thread__comm(const struct thread *thread)
 {
 	if (list_empty(&thread->comm_list))
diff --git a/tools/perf/util/thread.h b/tools/perf/util/thread.h
index cb1a5dd..fd7bd41 100644
--- a/tools/perf/util/thread.h
+++ b/tools/perf/util/thread.h
@@ -68,6 +68,8 @@ static inline void thread__exited(struct thread *thread)
 struct namespaces *thread__namespaces(const struct thread *thread);
 int thread__set_namespaces(struct thread *thread, u64 timestamp,
 			   struct namespaces_event *event);
+void thread__namespaces_id(const struct thread *thread,
+			   u64 *dev, u64 *ino);
 
 int __thread__set_comm(struct thread *thread, const char *comm, u64 timestamp,
 		       bool exec);
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH RFC V2 05/10] perf tools: lock to protect thread list
  2017-09-11  2:23 [PATCH RFC V2 00/10] perf top optimization kan.liang
                   ` (3 preceding siblings ...)
  2017-09-11  2:23 ` [PATCH RFC V2 04/10] perf tools: introduce a new function to set namespaces id kan.liang
@ 2017-09-11  2:23 ` kan.liang
  2017-09-18  8:50   ` Jiri Olsa
  2017-09-11  2:23 ` [PATCH RFC V2 06/10] perf tools: lock to protect comm_str rb tree kan.liang
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 37+ messages in thread
From: kan.liang @ 2017-09-11  2:23 UTC (permalink / raw)
  To: acme, peterz, mingo, linux-kernel
  Cc: jolsa, namhyung, adrian.hunter, lukasz.odzioba, ak, Kan Liang

From: Kan Liang <kan.liang@intel.com>

Add two locks to protect namespaces_list and comm_list.

The comm used in db-export is not protected, because the multithreaded
code does not touch it. Protection can be added later if required.
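
The pattern, sketched with the names used in the patch below: take the
per-thread mutex around any access to the list and grab the result while
holding the lock, so no caller ever walks the list unprotected.

	const struct comm *comm;
	const char *str;

	pthread_mutex_lock(&thread->comm_lock);
	comm = thread__comm(thread);		/* walks thread->comm_list */
	str = comm ? comm__str(comm) : NULL;
	pthread_mutex_unlock(&thread->comm_lock);
	return str;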

Signed-off-by: Kan Liang <kan.liang@intel.com>
---
 tools/perf/ui/browsers/hists.c |  2 +-
 tools/perf/util/thread.c       | 60 +++++++++++++++++++++++++++++++++---------
 tools/perf/util/thread.h       |  6 +++--
 3 files changed, 52 insertions(+), 16 deletions(-)

diff --git a/tools/perf/ui/browsers/hists.c b/tools/perf/ui/browsers/hists.c
index 13dfb0a..429e1dd 100644
--- a/tools/perf/ui/browsers/hists.c
+++ b/tools/perf/ui/browsers/hists.c
@@ -2370,7 +2370,7 @@ static int perf_evsel_browser_title(struct hist_browser *browser,
 	char unit;
 	int printed;
 	const struct dso *dso = hists->dso_filter;
-	const struct thread *thread = hists->thread_filter;
+	struct thread *thread = hists->thread_filter;
 	int socket_id = hists->socket_filter;
 	unsigned long nr_samples = hists->stats.nr_events[PERF_RECORD_SAMPLE];
 	u64 nr_events = hists->stats.total_period;
diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c
index b7957da..25830b2 100644
--- a/tools/perf/util/thread.c
+++ b/tools/perf/util/thread.c
@@ -45,6 +45,8 @@ struct thread *thread__new(pid_t pid, pid_t tid)
 		thread->cpu = -1;
 		INIT_LIST_HEAD(&thread->namespaces_list);
 		INIT_LIST_HEAD(&thread->comm_list);
+		pthread_mutex_init(&thread->namespaces_lock, NULL);
+		pthread_mutex_init(&thread->comm_lock, NULL);
 
 		comm_str = malloc(32);
 		if (!comm_str)
@@ -83,15 +85,21 @@ void thread__delete(struct thread *thread)
 		map_groups__put(thread->mg);
 		thread->mg = NULL;
 	}
+	pthread_mutex_lock(&thread->namespaces_lock);
 	list_for_each_entry_safe(namespaces, tmp_namespaces,
 				 &thread->namespaces_list, list) {
 		list_del(&namespaces->list);
 		namespaces__free(namespaces);
 	}
+	pthread_mutex_unlock(&thread->namespaces_lock);
+
+	pthread_mutex_lock(&thread->comm_lock);
 	list_for_each_entry_safe(comm, tmp_comm, &thread->comm_list, list) {
 		list_del(&comm->list);
 		comm__free(comm);
 	}
+	pthread_mutex_unlock(&thread->comm_lock);
+
 	unwind__finish_access(thread);
 	nsinfo__zput(thread->nsinfo);
 
@@ -128,11 +136,17 @@ struct namespaces *thread__namespaces(const struct thread *thread)
 int thread__set_namespaces(struct thread *thread, u64 timestamp,
 			   struct namespaces_event *event)
 {
-	struct namespaces *new, *curr = thread__namespaces(thread);
+	struct namespaces *new, *curr;
+
+	pthread_mutex_lock(&thread->namespaces_lock);
+
+	curr = thread__namespaces(thread);
 
 	new = namespaces__new(event);
-	if (!new)
+	if (!new) {
+		pthread_mutex_unlock(&thread->namespaces_lock);
 		return -ENOMEM;
+	}
 
 	list_add(&new->list, &thread->namespaces_list);
 
@@ -146,17 +160,21 @@ int thread__set_namespaces(struct thread *thread, u64 timestamp,
 		curr->end_time = timestamp;
 	}
 
+	pthread_mutex_unlock(&thread->namespaces_lock);
+
 	return 0;
 }
 
-void thread__namespaces_id(const struct thread *thread,
+void thread__namespaces_id(struct thread *thread,
 			   u64 *dev, u64 *ino)
 {
 	struct namespaces *ns;
 
+	pthread_mutex_lock(&thread->namespaces_lock);
 	ns = thread__namespaces(thread);
 	*dev = ns ? ns->link_info[CGROUP_NS_INDEX].dev : 0;
 	*ino = ns ? ns->link_info[CGROUP_NS_INDEX].ino : 0;
+	pthread_mutex_unlock(&thread->namespaces_lock);
 }
 
 struct comm *thread__comm(const struct thread *thread)
@@ -183,17 +201,23 @@ struct comm *thread__exec_comm(const struct thread *thread)
 int __thread__set_comm(struct thread *thread, const char *str, u64 timestamp,
 		       bool exec)
 {
-	struct comm *new, *curr = thread__comm(thread);
+	struct comm *new, *curr;
+	int err = 0;
+
+	pthread_mutex_lock(&thread->comm_lock);
+	curr = thread__comm(thread);
 
 	/* Override the default :tid entry */
 	if (!thread->comm_set) {
-		int err = comm__override(curr, str, timestamp, exec);
+		err = comm__override(curr, str, timestamp, exec);
 		if (err)
-			return err;
+			goto unlock;
 	} else {
 		new = comm__new(str, timestamp, exec);
-		if (!new)
-			return -ENOMEM;
+		if (!new) {
+			err = -ENOMEM;
+			goto unlock;
+		}
 		list_add(&new->list, &thread->comm_list);
 
 		if (exec)
@@ -202,7 +226,9 @@ int __thread__set_comm(struct thread *thread, const char *str, u64 timestamp,
 
 	thread->comm_set = true;
 
-	return 0;
+unlock:
+	pthread_mutex_unlock(&thread->comm_lock);
+	return err;
 }
 
 int thread__set_comm_from_proc(struct thread *thread)
@@ -222,14 +248,22 @@ int thread__set_comm_from_proc(struct thread *thread)
 	return err;
 }
 
-const char *thread__comm_str(const struct thread *thread)
+const char *thread__comm_str(struct thread *thread)
 {
-	const struct comm *comm = thread__comm(thread);
+	const struct comm *comm;
+	const char *str;
+
+	pthread_mutex_lock(&thread->comm_lock);
+	comm = thread__comm(thread);
 
-	if (!comm)
+	if (!comm) {
+		pthread_mutex_unlock(&thread->comm_lock);
 		return NULL;
+	}
+	str = comm__str(comm);
 
-	return comm__str(comm);
+	pthread_mutex_unlock(&thread->comm_lock);
+	return str;
 }
 
 /* CHECKME: it should probably better return the max comm len from its comm list */
diff --git a/tools/perf/util/thread.h b/tools/perf/util/thread.h
index fd7bd41..1d3062a 100644
--- a/tools/perf/util/thread.h
+++ b/tools/perf/util/thread.h
@@ -29,7 +29,9 @@ struct thread {
 	int			comm_len;
 	bool			dead; /* if set thread has exited */
 	struct list_head	namespaces_list;
+	pthread_mutex_t		namespaces_lock;
 	struct list_head	comm_list;
+	pthread_mutex_t		comm_lock;
 	u64			db_id;
 
 	void			*priv;
@@ -68,7 +70,7 @@ static inline void thread__exited(struct thread *thread)
 struct namespaces *thread__namespaces(const struct thread *thread);
 int thread__set_namespaces(struct thread *thread, u64 timestamp,
 			   struct namespaces_event *event);
-void thread__namespaces_id(const struct thread *thread,
+void thread__namespaces_id(struct thread *thread,
 			   u64 *dev, u64 *ino);
 
 int __thread__set_comm(struct thread *thread, const char *comm, u64 timestamp,
@@ -84,7 +86,7 @@ int thread__set_comm_from_proc(struct thread *thread);
 int thread__comm_len(struct thread *thread);
 struct comm *thread__comm(const struct thread *thread);
 struct comm *thread__exec_comm(const struct thread *thread);
-const char *thread__comm_str(const struct thread *thread);
+const char *thread__comm_str(struct thread *thread);
 int thread__insert_map(struct thread *thread, struct map *map);
 int thread__fork(struct thread *thread, struct thread *parent, u64 timestamp);
 size_t thread__fprintf(struct thread *thread, FILE *fp);
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH RFC V2 06/10] perf tools: lock to protect comm_str rb tree
  2017-09-11  2:23 [PATCH RFC V2 00/10] perf top optimization kan.liang
                   ` (4 preceding siblings ...)
  2017-09-11  2:23 ` [PATCH RFC V2 05/10] perf tools: lock to protect thread list kan.liang
@ 2017-09-11  2:23 ` kan.liang
  2017-09-11  2:23 ` [PATCH RFC V2 07/10] perf tools: change machine comm_exec type to atomic kan.liang
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 37+ messages in thread
From: kan.liang @ 2017-09-11  2:23 UTC (permalink / raw)
  To: acme, peterz, mingo, linux-kernel
  Cc: jolsa, namhyung, adrian.hunter, lukasz.odzioba, ak, Kan Liang

From: Kan Liang <kan.liang@intel.com>

Add comm_str_lock to protect comm_str rb tree.

Signed-off-by: Kan Liang <kan.liang@intel.com>
---
 tools/perf/util/comm.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/tools/perf/util/comm.c b/tools/perf/util/comm.c
index 7bc981b..cb7f98c 100644
--- a/tools/perf/util/comm.c
+++ b/tools/perf/util/comm.c
@@ -5,6 +5,7 @@
 #include <stdio.h>
 #include <string.h>
 #include <linux/refcount.h>
+#include <pthread.h>
 
 struct comm_str {
 	char *str;
@@ -14,6 +15,7 @@ struct comm_str {
 
 /* Should perhaps be moved to struct machine */
 static struct rb_root comm_str_root;
+static pthread_mutex_t comm_str_lock = PTHREAD_MUTEX_INITIALIZER;
 
 static struct comm_str *comm_str__get(struct comm_str *cs)
 {
@@ -25,7 +27,9 @@ static struct comm_str *comm_str__get(struct comm_str *cs)
 static void comm_str__put(struct comm_str *cs)
 {
 	if (cs && refcount_dec_and_test(&cs->refcnt)) {
+		pthread_mutex_lock(&comm_str_lock);
 		rb_erase(&cs->rb_node, &comm_str_root);
+		pthread_mutex_unlock(&comm_str_lock);
 		zfree(&cs->str);
 		free(cs);
 	}
@@ -50,7 +54,8 @@ static struct comm_str *comm_str__alloc(const char *str)
 	return cs;
 }
 
-static struct comm_str *comm_str__findnew(const char *str, struct rb_root *root)
+static
+struct comm_str *__comm_str__findnew(const char *str, struct rb_root *root)
 {
 	struct rb_node **p = &root->rb_node;
 	struct rb_node *parent = NULL;
@@ -81,6 +86,17 @@ static struct comm_str *comm_str__findnew(const char *str, struct rb_root *root)
 	return new;
 }
 
+static struct comm_str *comm_str__findnew(const char *str, struct rb_root *root)
+{
+	struct comm_str *cs;
+
+	pthread_mutex_lock(&comm_str_lock);
+	cs = __comm_str__findnew(str, root);
+	pthread_mutex_unlock(&comm_str_lock);
+
+	return cs;
+}
+
 struct comm *comm__new(const char *str, u64 timestamp, bool exec)
 {
 	struct comm *comm = zalloc(sizeof(*comm));
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH RFC V2 07/10] perf tools: change machine comm_exec type to atomic
  2017-09-11  2:23 [PATCH RFC V2 00/10] perf top optimization kan.liang
                   ` (5 preceding siblings ...)
  2017-09-11  2:23 ` [PATCH RFC V2 06/10] perf tools: lock to protect comm_str rb tree kan.liang
@ 2017-09-11  2:23 ` kan.liang
  2017-09-13 15:24   ` Arnaldo Carvalho de Melo
  2017-09-11  2:23 ` [PATCH RFC V2 08/10] perf top: implement multithreading for perf_event__synthesize_threads kan.liang
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 37+ messages in thread
From: kan.liang @ 2017-09-11  2:23 UTC (permalink / raw)
  To: acme, peterz, mingo, linux-kernel
  Cc: jolsa, namhyung, adrian.hunter, lukasz.odzioba, ak, Kan Liang

From: Kan Liang <kan.liang@intel.com>

Two or more threads may want to change it at the same time, so make it
an atomic type.

Signed-off-by: Kan Liang <kan.liang@intel.com>
---
 tools/perf/util/machine.c | 11 ++++++-----
 tools/perf/util/machine.h |  2 +-
 2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index 2f1ad29..bbfb9e0 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -64,8 +64,8 @@ int machine__init(struct machine *machine, const char *root_dir, pid_t pid)
 
 	machine->id_hdr_size = 0;
 	machine->kptr_restrict_warned = false;
-	machine->comm_exec = false;
 	machine->kernel_start = 0;
+	atomic_set(&machine->comm_exec, 0);
 
 	memset(machine->vmlinux_maps, 0, sizeof(machine->vmlinux_maps));
 
@@ -238,14 +238,15 @@ struct machine *machines__add(struct machines *machines, pid_t pid,
 
 void machines__set_comm_exec(struct machines *machines, bool comm_exec)
 {
+	int exec = comm_exec ? 1 : 0;
 	struct rb_node *nd;
 
-	machines->host.comm_exec = comm_exec;
+	atomic_set(&machines->host.comm_exec, exec);
 
 	for (nd = rb_first(&machines->guests); nd; nd = rb_next(nd)) {
 		struct machine *machine = rb_entry(nd, struct machine, rb_node);
 
-		machine->comm_exec = comm_exec;
+		atomic_set(&machine->comm_exec, exec);
 	}
 }
 
@@ -505,7 +506,7 @@ struct thread *machine__find_thread(struct machine *machine, pid_t pid,
 struct comm *machine__thread_exec_comm(struct machine *machine,
 				       struct thread *thread)
 {
-	if (machine->comm_exec)
+	if (atomic_read(&machine->comm_exec))
 		return thread__exec_comm(thread);
 	else
 		return thread__comm(thread);
@@ -521,7 +522,7 @@ int machine__process_comm_event(struct machine *machine, union perf_event *event
 	int err = 0;
 
 	if (exec)
-		machine->comm_exec = true;
+		atomic_set(&machine->comm_exec, 1);
 
 	if (dump_trace)
 		perf_event__fprintf_comm(event, stdout);
diff --git a/tools/perf/util/machine.h b/tools/perf/util/machine.h
index 64663eb..fb3c2a2 100644
--- a/tools/perf/util/machine.h
+++ b/tools/perf/util/machine.h
@@ -38,7 +38,7 @@ struct machine {
 	struct rb_node	  rb_node;
 	pid_t		  pid;
 	u16		  id_hdr_size;
-	bool		  comm_exec;
+	atomic_t	  comm_exec;
 	bool		  kptr_restrict_warned;
 	char		  *root_dir;
 	struct threads    threads[THREADS__TABLE_SIZE];
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH RFC V2 08/10] perf top: implement multithreading for perf_event__synthesize_threads
  2017-09-11  2:23 [PATCH RFC V2 00/10] perf top optimization kan.liang
                   ` (6 preceding siblings ...)
  2017-09-11  2:23 ` [PATCH RFC V2 07/10] perf tools: change machine comm_exec type to atomic kan.liang
@ 2017-09-11  2:23 ` kan.liang
  2017-09-18 11:24   ` Jiri Olsa
  2017-09-11  2:23 ` [PATCH RFC V2 09/10] perf top: add option to set the number of thread for event synthesize kan.liang
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 37+ messages in thread
From: kan.liang @ 2017-09-11  2:23 UTC (permalink / raw)
  To: acme, peterz, mingo, linux-kernel
  Cc: jolsa, namhyung, adrian.hunter, lukasz.odzioba, ak, Kan Liang

From: Kan Liang <kan.liang@intel.com>

The /proc entries, which scandir returns sorted in alphabetical order,
are evenly assigned to several synthesize threads and processed in
parallel.

For perf top, the number of threads is hard coded to the number of
online CPUs. The following patch will introduce an option to set it.
For the other perf tools, the thread number is 1, because their process
functions, e.g. process_synthesized_event, are not ready for
multithreading.
This patch series only supports multithreaded event synthesis for perf
top. The other tools can be converted separately later.

With multithreading applied, the total processing time gets up to a
1.56x speedup on Knights Mill for perf top.

The processing time for a single event could increase because of lock
contention, so proc_map_timeout may need to be increased; otherwise some
proc maps will be truncated.
Based on my tests, increasing proc_map_timeout has a small impact on the
total processing time: it still gets a 1.49x speedup on Knights Mill.
The patch itself doesn't increase proc_map_timeout.

There is no need to multithread perf_event__synthesize_thread_map; it
does not have a performance issue.
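
A worked example of the partitioning, to make the arithmetic concrete
(illustration only, not part of the patch): with n = 10 /proc entries
and thread_nr = 4 synthesize threads,

	num_per_thread = n / thread_nr = 2
	m              = n % thread_nr = 2	/* first m threads take one extra */

	thread 0: start = 0, num = 3
	thread 1: start = 3, num = 3
	thread 2: start = 6, num = 2
	thread 3: start = 8, num = 2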

Signed-off-by: Kan Liang <kan.liang@intel.com>
---
 tools/perf/builtin-kvm.c              |   3 +-
 tools/perf/builtin-record.c           |   2 +-
 tools/perf/builtin-top.c              |   4 +-
 tools/perf/builtin-trace.c            |   2 +-
 tools/perf/tests/mmap-thread-lookup.c |   2 +-
 tools/perf/util/event.c               | 146 ++++++++++++++++++++++++++--------
 tools/perf/util/event.h               |  14 +++-
 tools/perf/util/machine.c             |   8 +-
 tools/perf/util/machine.h             |   9 ++-
 9 files changed, 146 insertions(+), 44 deletions(-)

diff --git a/tools/perf/builtin-kvm.c b/tools/perf/builtin-kvm.c
index f309c37..edd14cb 100644
--- a/tools/perf/builtin-kvm.c
+++ b/tools/perf/builtin-kvm.c
@@ -1442,7 +1442,8 @@ static int kvm_events_live(struct perf_kvm_stat *kvm,
 	perf_session__set_id_hdr_size(kvm->session);
 	ordered_events__set_copy_on_queue(&kvm->session->ordered_events, true);
 	machine__synthesize_threads(&kvm->session->machines.host, &kvm->opts.target,
-				    kvm->evlist->threads, false, kvm->opts.proc_map_timeout);
+				    kvm->evlist->threads, false,
+				    kvm->opts.proc_map_timeout, 1);
 	err = kvm_live_open_events(kvm);
 	if (err)
 		goto out;
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 56f8142..f8a6c89 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -863,7 +863,7 @@ static int record__synthesize(struct record *rec, bool tail)
 
 	err = __machine__synthesize_threads(machine, tool, &opts->target, rec->evlist->threads,
 					    process_synthesized_event, opts->sample_address,
-					    opts->proc_map_timeout);
+					    opts->proc_map_timeout, 1);
 out:
 	return err;
 }
diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
index ee954bd..4b8fdc1 100644
--- a/tools/perf/builtin-top.c
+++ b/tools/perf/builtin-top.c
@@ -959,7 +959,9 @@ static int __cmd_top(struct perf_top *top)
 		goto out_delete;
 
 	machine__synthesize_threads(&top->session->machines.host, &opts->target,
-				    top->evlist->threads, false, opts->proc_map_timeout);
+				    top->evlist->threads, false,
+				    opts->proc_map_timeout,
+				    (unsigned int)sysconf(_SC_NPROCESSORS_ONLN));
 
 	if (perf_hpp_list.socket) {
 		ret = perf_env__read_cpu_topology_map(&perf_env);
diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
index ee8c6e8..c6acd91 100644
--- a/tools/perf/builtin-trace.c
+++ b/tools/perf/builtin-trace.c
@@ -1131,7 +1131,7 @@ static int trace__symbols_init(struct trace *trace, struct perf_evlist *evlist)
 
 	err = __machine__synthesize_threads(trace->host, &trace->tool, &trace->opts.target,
 					    evlist->threads, trace__tool_process, false,
-					    trace->opts.proc_map_timeout);
+					    trace->opts.proc_map_timeout, 1);
 	if (err)
 		symbol__exit();
 
diff --git a/tools/perf/tests/mmap-thread-lookup.c b/tools/perf/tests/mmap-thread-lookup.c
index f94a419..2a0068a 100644
--- a/tools/perf/tests/mmap-thread-lookup.c
+++ b/tools/perf/tests/mmap-thread-lookup.c
@@ -131,7 +131,7 @@ static int synth_all(struct machine *machine)
 {
 	return perf_event__synthesize_threads(NULL,
 					      perf_event__process,
-					      machine, 0, 500);
+					      machine, 0, 500, 1);
 }
 
 static int synth_process(struct machine *machine)
diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
index c31f678..8c4e072 100644
--- a/tools/perf/util/event.c
+++ b/tools/perf/util/event.c
@@ -677,23 +677,21 @@ int perf_event__synthesize_thread_map(struct perf_tool *tool,
 	return err;
 }
 
-int perf_event__synthesize_threads(struct perf_tool *tool,
-				   perf_event__handler_t process,
-				   struct machine *machine,
-				   bool mmap_data,
-				   unsigned int proc_map_timeout)
+static int __perf_event__synthesize_threads(struct perf_tool *tool,
+					    perf_event__handler_t process,
+					    struct machine *machine,
+					    bool mmap_data,
+					    unsigned int proc_map_timeout,
+					    struct dirent **dirent,
+					    int start,
+					    int num)
 {
 	union perf_event *comm_event, *mmap_event, *fork_event;
 	union perf_event *namespaces_event;
-	char proc_path[PATH_MAX];
-	struct dirent **dirent;
 	int err = -1;
 	char *end;
 	pid_t pid;
-	int n, i;
-
-	if (machine__is_default_guest(machine))
-		return 0;
+	int i;
 
 	comm_event = malloc(sizeof(comm_event->comm) + machine->id_hdr_size);
 	if (comm_event == NULL)
@@ -713,35 +711,25 @@ int perf_event__synthesize_threads(struct perf_tool *tool,
 	if (namespaces_event == NULL)
 		goto out_free_fork;
 
-	snprintf(proc_path, sizeof(proc_path), "%s/proc", machine->root_dir);
-	n = scandir(proc_path, &dirent, 0, alphasort);
-
-	if (n < 0)
-		goto out_free_namespaces;
-
-	for (i = 0; i < n; i++) {
+	for (i = start; i < start + num; i++) {
 		if (!isdigit(dirent[i]->d_name[0]))
 			continue;
 
 		pid = (pid_t)strtol(dirent[i]->d_name, &end, 10);
 		/* only interested in proper numerical dirents */
-		if (!*end) {
-			/*
-			 * We may race with exiting thread, so don't stop
-			 * just because one thread couldn't be synthesized.
-			 */
-			__event__synthesize_thread(comm_event, mmap_event,
-						   fork_event, namespaces_event,
-						   pid, 1, process, tool,
-						   machine, mmap_data,
-						   proc_map_timeout);
-		}
-		free(dirent[i]);
+		if (*end)
+			continue;
+		/*
+		 * We may race with exiting thread, so don't stop
+		 * just because one thread couldn't be synthesized.
+		 */
+		__event__synthesize_thread(comm_event, mmap_event,
+					   fork_event, namespaces_event,
+					   pid, 1, process, tool,
+					   machine, mmap_data,
+					   proc_map_timeout);
 	}
-	free(dirent);
 	err = 0;
-
-out_free_namespaces:
 	free(namespaces_event);
 out_free_fork:
 	free(fork_event);
@@ -753,6 +741,98 @@ int perf_event__synthesize_threads(struct perf_tool *tool,
 	return err;
 }
 
+static void *synthesize_threads_worker(void *arg)
+{
+	struct synthesize_threads_arg *args = arg;
+
+	__perf_event__synthesize_threads(args->tool, args->process,
+					 args->machine, args->mmap_data,
+					 args->proc_map_timeout, args->dirent,
+					 args->start, args->num);
+	return NULL;
+}
+
+int perf_event__synthesize_threads(struct perf_tool *tool,
+				   perf_event__handler_t process,
+				   struct machine *machine,
+				   bool mmap_data,
+				   unsigned int proc_map_timeout,
+				   unsigned int nr_threads_synthesize)
+{
+	struct synthesize_threads_arg *args = NULL;
+	pthread_t *synthesize_threads = NULL;
+	char proc_path[PATH_MAX];
+	struct dirent **dirent;
+	int num_per_thread;
+	int m, n, i, j;
+	int thread_nr;
+	int base = 0;
+	int err = -1;
+
+
+	if (machine__is_default_guest(machine))
+		return 0;
+
+	snprintf(proc_path, sizeof(proc_path), "%s/proc", machine->root_dir);
+	n = scandir(proc_path, &dirent, 0, alphasort);
+	if (n < 0)
+		return err;
+
+	thread_nr = nr_threads_synthesize;
+	if (thread_nr <= 0)
+		thread_nr = 1;
+	if (thread_nr > n)
+		thread_nr = n;
+
+	synthesize_threads = calloc(sizeof(pthread_t), thread_nr);
+	if (synthesize_threads == NULL)
+		goto free_dirent;
+
+	args = calloc(sizeof(*args), thread_nr);
+	if (args == NULL)
+		goto free_threads;
+
+	num_per_thread = n / thread_nr;
+	m = n % thread_nr;
+	for (i = 0; i < thread_nr; i++) {
+		args[i].tool = tool;
+		args[i].process = process;
+		args[i].machine = machine;
+		args[i].mmap_data = mmap_data;
+		args[i].proc_map_timeout = proc_map_timeout;
+		args[i].dirent = dirent;
+	}
+	for (i = 0; i < m; i++) {
+		args[i].num = num_per_thread + 1;
+		args[i].start = i * args[i].num;
+	}
+	if (i != 0)
+		base = args[i-1].start + args[i-1].num;
+	for (j = i; j < thread_nr; j++) {
+		args[j].num = num_per_thread;
+		args[j].start = base + (j - i) * args[i].num;
+	}
+
+	for (i = 0; i < thread_nr; i++) {
+		if (pthread_create(&synthesize_threads[i], NULL,
+				   synthesize_threads_worker, &args[i]))
+			goto out_join;
+	}
+	err = 0;
+out_join:
+	for (i = 0; i < thread_nr; i++)
+		pthread_join(synthesize_threads[i], NULL);
+	free(args);
+free_threads:
+	free(synthesize_threads);
+free_dirent:
+	for (i = 0; i < n; i++)
+		free(dirent[i]);
+	free(dirent);
+
+	return err;
+}
+
 struct process_symbol_args {
 	const char *name;
 	u64	   start;
diff --git a/tools/perf/util/event.h b/tools/perf/util/event.h
index ee7bcc8..7b987c8 100644
--- a/tools/perf/util/event.h
+++ b/tools/perf/util/event.h
@@ -664,6 +664,17 @@ typedef int (*perf_event__handler_t)(struct perf_tool *tool,
 				     struct perf_sample *sample,
 				     struct machine *machine);
 
+struct synthesize_threads_arg {
+	struct perf_tool *tool;
+	perf_event__handler_t process;
+	struct machine *machine;
+	bool mmap_data;
+	unsigned int proc_map_timeout;
+	struct dirent **dirent;
+	int num;
+	int start;
+};
+
 int perf_event__synthesize_thread_map(struct perf_tool *tool,
 				      struct thread_map *threads,
 				      perf_event__handler_t process,
@@ -680,7 +691,8 @@ int perf_event__synthesize_cpu_map(struct perf_tool *tool,
 int perf_event__synthesize_threads(struct perf_tool *tool,
 				   perf_event__handler_t process,
 				   struct machine *machine, bool mmap_data,
-				   unsigned int proc_map_timeout);
+				   unsigned int proc_map_timeout,
+				   unsigned int nr_threads_synthesize);
 int perf_event__synthesize_kernel_mmap(struct perf_tool *tool,
 				       perf_event__handler_t process,
 				       struct machine *machine);
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index bbfb9e0..ad575d7 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -2221,12 +2221,16 @@ int machines__for_each_thread(struct machines *machines,
 int __machine__synthesize_threads(struct machine *machine, struct perf_tool *tool,
 				  struct target *target, struct thread_map *threads,
 				  perf_event__handler_t process, bool data_mmap,
-				  unsigned int proc_map_timeout)
+				  unsigned int proc_map_timeout,
+				  unsigned int nr_threads_synthesize)
 {
 	if (target__has_task(target))
 		return perf_event__synthesize_thread_map(tool, threads, process, machine, data_mmap, proc_map_timeout);
 	else if (target__has_cpu(target))
-		return perf_event__synthesize_threads(tool, process, machine, data_mmap, proc_map_timeout);
+		return perf_event__synthesize_threads(tool, process,
+						      machine, data_mmap,
+						      proc_map_timeout,
+						      nr_threads_synthesize);
 	/* command specified */
 	return 0;
 }
diff --git a/tools/perf/util/machine.h b/tools/perf/util/machine.h
index fb3c2a2..6bdd1d0 100644
--- a/tools/perf/util/machine.h
+++ b/tools/perf/util/machine.h
@@ -256,15 +256,18 @@ int machines__for_each_thread(struct machines *machines,
 int __machine__synthesize_threads(struct machine *machine, struct perf_tool *tool,
 				  struct target *target, struct thread_map *threads,
 				  perf_event__handler_t process, bool data_mmap,
-				  unsigned int proc_map_timeout);
+				  unsigned int proc_map_timeout,
+				  unsigned int nr_threads_synthesize);
 static inline
 int machine__synthesize_threads(struct machine *machine, struct target *target,
 				struct thread_map *threads, bool data_mmap,
-				unsigned int proc_map_timeout)
+				unsigned int proc_map_timeout,
+				unsigned int nr_threads_synthesize)
 {
 	return __machine__synthesize_threads(machine, NULL, target, threads,
 					     perf_event__process, data_mmap,
-					     proc_map_timeout);
+					     proc_map_timeout,
+					     nr_threads_synthesize);
 }
 
 pid_t machine__get_current_tid(struct machine *machine, int cpu);
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH RFC V2 09/10] perf top: add option to set the number of thread for event synthesize
  2017-09-11  2:23 [PATCH RFC V2 00/10] perf top optimization kan.liang
                   ` (7 preceding siblings ...)
  2017-09-11  2:23 ` [PATCH RFC V2 08/10] perf top: implement multithreading for perf_event__synthesize_threads kan.liang
@ 2017-09-11  2:23 ` kan.liang
  2017-09-11  2:23 ` [PATCH RFC V2 10/10] perf top: switch back to overwrite mode kan.liang
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 37+ messages in thread
From: kan.liang @ 2017-09-11  2:23 UTC (permalink / raw)
  To: acme, peterz, mingo, linux-kernel
  Cc: jolsa, namhyung, adrian.hunter, lukasz.odzioba, ak, Kan Liang

From: Kan Liang <kan.liang@intel.com>

Use UINT_MAX to indicate the default number of threads, which is the
number of online CPUs.
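
Usage example (the option name comes from the patch below); without the
option, nr_threads_synthesize stays at UINT_MAX and the code falls back
to sysconf(_SC_NPROCESSORS_ONLN):

	# synthesize the existing threads with 16 worker threads
	sudo perf top --num-thread-synthesize 16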

Signed-off-by: Kan Liang <kan.liang@intel.com>
---
 tools/perf/builtin-top.c | 5 ++++-
 tools/perf/util/event.c  | 5 ++++-
 tools/perf/util/top.h    | 1 +
 3 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
index 4b8fdc1..5f59aa7 100644
--- a/tools/perf/builtin-top.c
+++ b/tools/perf/builtin-top.c
@@ -961,7 +961,7 @@ static int __cmd_top(struct perf_top *top)
 	machine__synthesize_threads(&top->session->machines.host, &opts->target,
 				    top->evlist->threads, false,
 				    opts->proc_map_timeout,
-				    (unsigned int)sysconf(_SC_NPROCESSORS_ONLN));
+				    top->nr_threads_synthesize);
 
 	if (perf_hpp_list.socket) {
 		ret = perf_env__read_cpu_topology_map(&perf_env);
@@ -1114,6 +1114,7 @@ int cmd_top(int argc, const char **argv)
 		},
 		.max_stack	     = sysctl_perf_event_max_stack,
 		.sym_pcnt_filter     = 5,
+		.nr_threads_synthesize = UINT_MAX,
 	};
 	struct record_opts *opts = &top.record_opts;
 	struct target *target = &opts->target;
@@ -1223,6 +1224,8 @@ int cmd_top(int argc, const char **argv)
 	OPT_BOOLEAN(0, "hierarchy", &symbol_conf.report_hierarchy,
 		    "Show entries in a hierarchy"),
 	OPT_BOOLEAN(0, "force", &symbol_conf.force, "don't complain, do it"),
+	OPT_UINTEGER(0, "num-thread-synthesize", &top.nr_threads_synthesize,
+			"number of thread to run event synthesize"),
 	OPT_END()
 	};
 	const char * const top_usage[] = {
diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
index 8c4e072..ecef279 100644
--- a/tools/perf/util/event.c
+++ b/tools/perf/util/event.c
@@ -778,7 +778,10 @@ int perf_event__synthesize_threads(struct perf_tool *tool,
 	if (n < 0)
 		return err;
 
-	thread_nr = nr_threads_synthesize;
+	if (nr_threads_synthesize == UINT_MAX)
+		thread_nr = sysconf(_SC_NPROCESSORS_ONLN);
+	else
+		thread_nr = nr_threads_synthesize;
 	if (thread_nr <= 0)
 		thread_nr = 1;
 	if (thread_nr > n)
diff --git a/tools/perf/util/top.h b/tools/perf/util/top.h
index 9bdfb78..f4296e1 100644
--- a/tools/perf/util/top.h
+++ b/tools/perf/util/top.h
@@ -37,6 +37,7 @@ struct perf_top {
 	int		   sym_pcnt_filter;
 	const char	   *sym_filter;
 	float		   min_percent;
+	unsigned int	   nr_threads_synthesize;
 };
 
 #define CONSOLE_CLEAR "^[[H^[[2J"
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH RFC V2 10/10] perf top: switch back to overwrite mode
  2017-09-11  2:23 [PATCH RFC V2 00/10] perf top optimization kan.liang
                   ` (8 preceding siblings ...)
  2017-09-11  2:23 ` [PATCH RFC V2 09/10] perf top: add option to set the number of thread for event synthesize kan.liang
@ 2017-09-11  2:23 ` kan.liang
  2017-09-13 15:25 ` [PATCH RFC V2 00/10] perf top optimization Arnaldo Carvalho de Melo
  2017-09-18  8:57 ` Jiri Olsa
  11 siblings, 0 replies; 37+ messages in thread
From: kan.liang @ 2017-09-11  2:23 UTC (permalink / raw)
  To: acme, peterz, mingo, linux-kernel
  Cc: jolsa, namhyung, adrian.hunter, lukasz.odzioba, ak, Kan Liang

From: Kan Liang <kan.liang@intel.com>

perf_top__mmap_read has a severe performance issue on Knights
Landing/Mill when monitoring a heavily loaded system. It can take
several minutes to finish, which is unacceptable.

perf top used to run in overwrite mode, but it was switched to
non-overwrite mode by commit 93fc64f14472 ("perf top: Switch to non
overwrite mode").
In non-overwrite mode, perf top tries to read everything in the ring
buffer and does not check for messed-up entries. When lots of samples
are delivered in a short time, the processing time can become very
long.
Knights Landing/Mill is a manycore processor with a large number of
small cores. Because of the huge core count, it generates lots of
samples on a heavily loaded system. With that many samples, the mmap
writer is also likely to bite the tail and mess up the samples.

Switching to overwrite mode, which drops the unsure mmap entries,
significantly speeds up the whole process.
Considering the real-time requirement of perf top, it should switch
back to overwrite mode.

Warn only once when a messup is detected, and provide some hints to
the user.
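
(Editorial aside, not part of the original posting: a minimal sketch of
the messup check touched by the evlist.c hunk below. Field and helper
names mirror the perf_mmap code of that time, but the surrounding read
loop is omitted, so treat it as an illustration only.)

	u64 head = perf_mmap__read_head(md);	/* current writer position  */
	u64 old  = md->prev;			/* where the last read ended */
	s64 diff = head - old;

	if (diff > md->mask / 2 || diff < 0) {
		/*
		 * The writer has likely lapped the reader, so everything
		 * between 'old' and 'head' is unsure.  Drop those entries
		 * and restart from the known good 'head' rather than
		 * parsing possibly overwritten records.
		 */
		old = head;
	}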

Signed-off-by: Kan Liang <kan.liang@intel.com>
---
 tools/perf/builtin-top.c | 2 +-
 tools/perf/util/evlist.c | 5 ++++-
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
index 5f59aa7..8124b8f 100644
--- a/tools/perf/builtin-top.c
+++ b/tools/perf/builtin-top.c
@@ -902,7 +902,7 @@ static int perf_top__start_counters(struct perf_top *top)
 		}
 	}
 
-	if (perf_evlist__mmap(evlist, opts->mmap_pages, false) < 0) {
+	if (perf_evlist__mmap(evlist, opts->mmap_pages, true) < 0) {
 		ui__error("Failed to mmap with %d (%s)\n",
 			    errno, str_error_r(errno, msg, sizeof(msg)));
 		goto out_err;
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index 6a0d7ff..ef0b6b1 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -723,7 +723,10 @@ perf_mmap__read(struct perf_mmap *md, bool check_messup, u64 start,
 		 * In either case, truncate and restart at 'end'.
 		 */
 		if (diff > md->mask / 2 || diff < 0) {
-			fprintf(stderr, "WARNING: failed to keep up with mmap data.\n");
+			WARN_ONCE(1, "WARNING: failed to keep up with mmap data.\n"
+				     "Please try increasing the period (-c) or\n"
+				     "decreasing the freq (-F) or\n"
+				     "limiting the number of CPUs");
 
 			/*
 			 * 'end' points to a known good entry, start there.
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* Re: [PATCH RFC V2 01/10] perf tools: hashtable for machine threads
  2017-09-11  2:23 ` [PATCH RFC V2 01/10] perf tools: hashtable for machine threads kan.liang
@ 2017-09-13 13:29   ` Arnaldo Carvalho de Melo
  2017-09-22 16:42   ` [tip:perf/core] perf machine: Use " tip-bot for Kan Liang
  1 sibling, 0 replies; 37+ messages in thread
From: Arnaldo Carvalho de Melo @ 2017-09-13 13:29 UTC (permalink / raw)
  To: kan.liang
  Cc: peterz, mingo, linux-kernel, jolsa, namhyung, adrian.hunter,
	lukasz.odzioba, ak

Em Sun, Sep 10, 2017 at 07:23:14PM -0700, kan.liang@intel.com escreveu:
> From: Kan Liang <kan.liang@intel.com>
> 
> To process any event, perf needs to find the thread in the machine first.
> The machine maintains an rb tree to store all threads. The rb tree is
> protected by a rw lock.
> That is not a problem for the current perf, which processes events
> serially. However, it becomes a scalability bottleneck when processing
> events in parallel, especially on a heavily loaded system with many
> threads.
> 
> Introduce a hashtable to divide the big rb tree into many small rb
> trees for threads. The index is thread id % hashtable size. This
> reduces the lock contention.

Ok, I renamed variables of the type 'struct threads' from 'thread' to
'threads', to reduce confusion when looking at a specific variable as to
its type, so if you see a variable named 'threads', its of type 'struct
threads', if it is 'thread', ditto.

Interdiff from your patch to mine is below, now I'm testing this.

No need to resend anything now, I'll cope with whatever fallout due to
this comes up, if any.

- Arnaldo

diff -u b/tools/perf/util/machine.c b/tools/perf/util/machine.c
--- b/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -33,18 +33,17 @@
 	pthread_rwlock_init(&dsos->lock, NULL);
 }
 
-static void machine__thread_init(struct machine *machine)
+static void machine__threads_init(struct machine *machine)
 {
-	struct threads *thread;
 	int i;
 
 	for (i = 0; i < THREADS__TABLE_SIZE; i++) {
-		thread = &machine->threads[i];
-		thread->entries = RB_ROOT;
-		pthread_rwlock_init(&thread->lock, NULL);
-		thread->nr = 0;
-		INIT_LIST_HEAD(&thread->dead);
-		thread->last_match = NULL;
+		struct threads *threads = &machine->threads[i];
+		threads->entries = RB_ROOT;
+		pthread_rwlock_init(&threads->lock, NULL);
+		threads->nr = 0;
+		INIT_LIST_HEAD(&threads->dead);
+		threads->last_match = NULL;
 	}
 }
 
@@ -55,7 +54,7 @@
 	RB_CLEAR_NODE(&machine->rb_node);
 	dsos__init(&machine->dsos);
 
-	machine__thread_init(machine);
+	machine__threads_init(machine);
 
 	machine->vdso_info = NULL;
 	machine->env = NULL;
@@ -151,27 +150,25 @@
 
 void machine__delete_threads(struct machine *machine)
 {
-	struct threads *thread;
 	struct rb_node *nd;
 	int i;
 
 	for (i = 0; i < THREADS__TABLE_SIZE; i++) {
-		thread = &machine->threads[i];
-		pthread_rwlock_wrlock(&thread->lock);
-		nd = rb_first(&thread->entries);
+		struct threads *threads = &machine->threads[i];
+		pthread_rwlock_wrlock(&threads->lock);
+		nd = rb_first(&threads->entries);
 		while (nd) {
 			struct thread *t = rb_entry(nd, struct thread, rb_node);
 
 			nd = rb_next(nd);
 			__machine__remove_thread(machine, t, false);
 		}
-		pthread_rwlock_unlock(&thread->lock);
+		pthread_rwlock_unlock(&threads->lock);
 	}
 }
 
 void machine__exit(struct machine *machine)
 {
-	struct threads *thread;
 	int i;
 
 	machine__destroy_kernel_maps(machine);
@@ -180,9 +177,10 @@
 	machine__exit_vdso(machine);
 	zfree(&machine->root_dir);
 	zfree(&machine->current_tid);
+
 	for (i = 0; i < THREADS__TABLE_SIZE; i++) {
-		thread = &machine->threads[i];
-		pthread_rwlock_destroy(&thread->lock);
+		struct threads *threads = &machine->threads[i];
+		pthread_rwlock_destroy(&threads->lock);
 	}
 }
 
@@ -404,8 +402,8 @@
 						  pid_t pid, pid_t tid,
 						  bool create)
 {
-	struct threads *thread = machine__thread(machine, tid);
-	struct rb_node **p = &thread->entries.rb_node;
+	struct threads *threads = machine__thread(machine, tid);
+	struct rb_node **p = &threads->entries.rb_node;
 	struct rb_node *parent = NULL;
 	struct thread *th;
 
@@ -414,14 +412,14 @@
 	 * so most of the time we dont have to look up
 	 * the full rbtree:
 	 */
-	th = thread->last_match;
+	th = threads->last_match;
 	if (th != NULL) {
 		if (th->tid == tid) {
 			machine__update_thread_pid(machine, th, pid);
 			return thread__get(th);
 		}
 
-		thread->last_match = NULL;
+		threads->last_match = NULL;
 	}
 
 	while (*p != NULL) {
@@ -429,7 +427,7 @@
 		th = rb_entry(parent, struct thread, rb_node);
 
 		if (th->tid == tid) {
-			thread->last_match = th;
+			threads->last_match = th;
 			machine__update_thread_pid(machine, th, pid);
 			return thread__get(th);
 		}
@@ -446,7 +444,7 @@
 	th = thread__new(pid, tid);
 	if (th != NULL) {
 		rb_link_node(&th->rb_node, parent, p);
-		rb_insert_color(&th->rb_node, &thread->entries);
+		rb_insert_color(&th->rb_node, &threads->entries);
 
 		/*
 		 * We have to initialize map_groups separately
@@ -457,7 +455,7 @@
 		 * leader and that would screwed the rb tree.
 		 */
 		if (thread__init_map_groups(th, machine)) {
-			rb_erase_init(&th->rb_node, &thread->entries);
+			rb_erase_init(&th->rb_node, &threads->entries);
 			RB_CLEAR_NODE(&th->rb_node);
 			thread__put(th);
 			return NULL;
@@ -466,8 +464,8 @@
 		 * It is now in the rbtree, get a ref
 		 */
 		thread__get(th);
-		thread->last_match = th;
-		++thread->nr;
+		threads->last_match = th;
+		++threads->nr;
 	}
 
 	return th;
@@ -481,24 +479,24 @@
 struct thread *machine__findnew_thread(struct machine *machine, pid_t pid,
 				       pid_t tid)
 {
-	struct threads *thread = machine__thread(machine, tid);
+	struct threads *threads = machine__thread(machine, tid);
 	struct thread *th;
 
-	pthread_rwlock_wrlock(&thread->lock);
+	pthread_rwlock_wrlock(&threads->lock);
 	th = __machine__findnew_thread(machine, pid, tid);
-	pthread_rwlock_unlock(&thread->lock);
+	pthread_rwlock_unlock(&threads->lock);
 	return th;
 }
 
 struct thread *machine__find_thread(struct machine *machine, pid_t pid,
 				    pid_t tid)
 {
-	struct threads *thread = machine__thread(machine, tid);
+	struct threads *threads = machine__thread(machine, tid);
 	struct thread *th;
 
-	pthread_rwlock_rdlock(&thread->lock);
+	pthread_rwlock_rdlock(&threads->lock);
 	th =  ____machine__findnew_thread(machine, pid, tid, false);
-	pthread_rwlock_unlock(&thread->lock);
+	pthread_rwlock_unlock(&threads->lock);
 	return th;
 }
 
@@ -745,24 +743,23 @@
 
 size_t machine__fprintf(struct machine *machine, FILE *fp)
 {
-	struct threads *thread;
 	struct rb_node *nd;
 	size_t ret;
 	int i;
 
 	for (i = 0; i < THREADS__TABLE_SIZE; i++) {
-		thread = &machine->threads[i];
-		pthread_rwlock_rdlock(&thread->lock);
+		struct threads *threads = &machine->threads[i];
+		pthread_rwlock_rdlock(&threads->lock);
 
-		ret = fprintf(fp, "Threads: %u\n", thread->nr);
+		ret = fprintf(fp, "Threads: %u\n", threads->nr);
 
-		for (nd = rb_first(&thread->entries); nd; nd = rb_next(nd)) {
+		for (nd = rb_first(&threads->entries); nd; nd = rb_next(nd)) {
 			struct thread *pos = rb_entry(nd, struct thread, rb_node);
 
 			ret += thread__fprintf(pos, fp);
 		}
 
-		pthread_rwlock_unlock(&thread->lock);
+		pthread_rwlock_unlock(&threads->lock);
 	}
 	return ret;
 }
@@ -1509,25 +1506,25 @@
 
 static void __machine__remove_thread(struct machine *machine, struct thread *th, bool lock)
 {
-	struct threads *thread = machine__thread(machine, th->tid);
+	struct threads *threads = machine__thread(machine, th->tid);
 
-	if (thread->last_match == th)
-		thread->last_match = NULL;
+	if (threads->last_match == th)
+		threads->last_match = NULL;
 
 	BUG_ON(refcount_read(&th->refcnt) == 0);
 	if (lock)
-		pthread_rwlock_wrlock(&thread->lock);
-	rb_erase_init(&th->rb_node, &thread->entries);
+		pthread_rwlock_wrlock(&threads->lock);
+	rb_erase_init(&th->rb_node, &threads->entries);
 	RB_CLEAR_NODE(&th->rb_node);
-	--thread->nr;
+	--threads->nr;
 	/*
 	 * Move it first to the dead_threads list, then drop the reference,
 	 * if this is the last reference, then the thread__delete destructor
 	 * will be called and we will remove it from the dead_threads list.
 	 */
-	list_add_tail(&th->node, &thread->dead);
+	list_add_tail(&th->node, &threads->dead);
 	if (lock)
-		pthread_rwlock_unlock(&thread->lock);
+		pthread_rwlock_unlock(&threads->lock);
 	thread__put(th);
 }
 

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH RFC V2 02/10] perf tools: using scandir to replace readdir
  2017-09-11  2:23 ` [PATCH RFC V2 02/10] perf tools: using scandir to replace readdir kan.liang
@ 2017-09-13 15:19   ` Arnaldo Carvalho de Melo
  0 siblings, 0 replies; 37+ messages in thread
From: Arnaldo Carvalho de Melo @ 2017-09-13 15:19 UTC (permalink / raw)
  To: kan.liang
  Cc: peterz, mingo, linux-kernel, jolsa, namhyung, adrian.hunter,
	lukasz.odzioba, ak

Em Sun, Sep 10, 2017 at 07:23:15PM -0700, kan.liang@intel.com escreveu:
> From: Kan Liang <kan.liang@intel.com>
> 
> For perf_event__synthesize_threads, perf goes through all proc files
> serially with readdir.
> scandir takes a snapshot of proc, which is multithreading friendly.
> 
> It's possible that some threads added during event synthesis will be
> missed, but the number of lost threads should be small and they should
> not impact the final analysis.

This one I had already merged into my perf/core branch,

- Arnaldo
 
> Signed-off-by: Kan Liang <kan.liang@intel.com>
> ---
>  tools/perf/util/event.c | 46 ++++++++++++++++++++++++++--------------------
>  1 file changed, 26 insertions(+), 20 deletions(-)
> 
> diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
> index 1c905ba..c31f678 100644
> --- a/tools/perf/util/event.c
> +++ b/tools/perf/util/event.c
> @@ -683,12 +683,14 @@ int perf_event__synthesize_threads(struct perf_tool *tool,
>  				   bool mmap_data,
>  				   unsigned int proc_map_timeout)
>  {
> -	DIR *proc;
> -	char proc_path[PATH_MAX];
> -	struct dirent *dirent;
>  	union perf_event *comm_event, *mmap_event, *fork_event;
>  	union perf_event *namespaces_event;
> +	char proc_path[PATH_MAX];
> +	struct dirent **dirent;
>  	int err = -1;
> +	char *end;
> +	pid_t pid;
> +	int n, i;
>  
>  	if (machine__is_default_guest(machine))
>  		return 0;
> @@ -712,29 +714,33 @@ int perf_event__synthesize_threads(struct perf_tool *tool,
>  		goto out_free_fork;
>  
>  	snprintf(proc_path, sizeof(proc_path), "%s/proc", machine->root_dir);
> -	proc = opendir(proc_path);
> +	n = scandir(proc_path, &dirent, 0, alphasort);
>  
> -	if (proc == NULL)
> +	if (n < 0)
>  		goto out_free_namespaces;
>  
> -	while ((dirent = readdir(proc)) != NULL) {
> -		char *end;
> -		pid_t pid = strtol(dirent->d_name, &end, 10);
> -
> -		if (*end) /* only interested in proper numerical dirents */
> +	for (i = 0; i < n; i++) {
> +		if (!isdigit(dirent[i]->d_name[0]))
>  			continue;
> -		/*
> - 		 * We may race with exiting thread, so don't stop just because
> - 		 * one thread couldn't be synthesized.
> - 		 */
> -		__event__synthesize_thread(comm_event, mmap_event, fork_event,
> -					   namespaces_event, pid, 1, process,
> -					   tool, machine, mmap_data,
> -					   proc_map_timeout);
> -	}
>  
> +		pid = (pid_t)strtol(dirent[i]->d_name, &end, 10);
> +		/* only interested in proper numerical dirents */
> +		if (!*end) {
> +			/*
> +			 * We may race with exiting thread, so don't stop
> +			 * just because one thread couldn't be synthesized.
> +			 */
> +			__event__synthesize_thread(comm_event, mmap_event,
> +						   fork_event, namespaces_event,
> +						   pid, 1, process, tool,
> +						   machine, mmap_data,
> +						   proc_map_timeout);
> +		}
> +		free(dirent[i]);
> +	}
> +	free(dirent);
>  	err = 0;
> -	closedir(proc);
> +
>  out_free_namespaces:
>  	free(namespaces_event);
>  out_free_fork:
> -- 
> 2.5.5

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH RFC V2 03/10] petf tools: using comm_str to replace comm in hist_entry
  2017-09-11  2:23 ` [PATCH RFC V2 03/10] petf tools: using comm_str to replace comm in hist_entry kan.liang
@ 2017-09-13 15:21   ` Arnaldo Carvalho de Melo
  2017-09-18  8:38     ` Jiri Olsa
  0 siblings, 1 reply; 37+ messages in thread
From: Arnaldo Carvalho de Melo @ 2017-09-13 15:21 UTC (permalink / raw)
  To: Frédéric Weisbecker, kan.liang
  Cc: peterz, mingo, linux-kernel, jolsa, namhyung, adrian.hunter,
	lukasz.odzioba, ak

Em Sun, Sep 10, 2017 at 07:23:16PM -0700, kan.liang@intel.com escreveu:
> From: Kan Liang <kan.liang@intel.com>
> 
> For hist_entry, it only needs comm_str for cmp.

And thinking a bit more, isn't this even a bug fix, as at the time of
that sample that was the comm string associated with that thread?

Frédéric, am I nutz?

- Arnaldo
 
> Signed-off-by: Kan Liang <kan.liang@intel.com>
> ---
>  tools/perf/util/hist.c | 4 ++--
>  tools/perf/util/sort.c | 8 ++++----
>  tools/perf/util/sort.h | 2 +-
>  3 files changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
> index e60d8d8..0f00dd9 100644
> --- a/tools/perf/util/hist.c
> +++ b/tools/perf/util/hist.c
> @@ -587,7 +587,7 @@ __hists__add_entry(struct hists *hists,
>  	struct namespaces *ns = thread__namespaces(al->thread);
>  	struct hist_entry entry = {
>  		.thread	= al->thread,
> -		.comm = thread__comm(al->thread),
> +		.comm_str = thread__comm_str(al->thread),
>  		.cgroup_id = {
>  			.dev = ns ? ns->link_info[CGROUP_NS_INDEX].dev : 0,
>  			.ino = ns ? ns->link_info[CGROUP_NS_INDEX].ino : 0,
> @@ -944,7 +944,7 @@ iter_add_next_cumulative_entry(struct hist_entry_iter *iter,
>  		.hists = evsel__hists(evsel),
>  		.cpu = al->cpu,
>  		.thread = al->thread,
> -		.comm = thread__comm(al->thread),
> +		.comm_str = thread__comm_str(al->thread),
>  		.ip = al->addr,
>  		.ms = {
>  			.map = al->map,
> diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
> index eb3ab90..99ab411 100644
> --- a/tools/perf/util/sort.c
> +++ b/tools/perf/util/sort.c
> @@ -114,26 +114,26 @@ static int64_t
>  sort__comm_cmp(struct hist_entry *left, struct hist_entry *right)
>  {
>  	/* Compare the addr that should be unique among comm */
> -	return strcmp(comm__str(right->comm), comm__str(left->comm));
> +	return strcmp(right->comm_str, left->comm_str);
>  }
>  
>  static int64_t
>  sort__comm_collapse(struct hist_entry *left, struct hist_entry *right)
>  {
>  	/* Compare the addr that should be unique among comm */
> -	return strcmp(comm__str(right->comm), comm__str(left->comm));
> +	return strcmp(right->comm_str, left->comm_str);
>  }
>  
>  static int64_t
>  sort__comm_sort(struct hist_entry *left, struct hist_entry *right)
>  {
> -	return strcmp(comm__str(right->comm), comm__str(left->comm));
> +	return strcmp(right->comm_str, left->comm_str);
>  }
>  
>  static int hist_entry__comm_snprintf(struct hist_entry *he, char *bf,
>  				     size_t size, unsigned int width)
>  {
> -	return repsep_snprintf(bf, size, "%-*.*s", width, width, comm__str(he->comm));
> +	return repsep_snprintf(bf, size, "%-*.*s", width, width, he->comm_str);
>  }
>  
>  struct sort_entry sort_comm = {
> diff --git a/tools/perf/util/sort.h b/tools/perf/util/sort.h
> index f36dc49..861cbe7 100644
> --- a/tools/perf/util/sort.h
> +++ b/tools/perf/util/sort.h
> @@ -96,7 +96,7 @@ struct hist_entry {
>  	struct he_stat		*stat_acc;
>  	struct map_symbol	ms;
>  	struct thread		*thread;
> -	struct comm		*comm;
> +	const char		*comm_str;
>  	struct namespace_id	cgroup_id;
>  	u64			ip;
>  	u64			transaction;
> -- 
> 2.5.5

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH RFC V2 07/10] perf tools: change machine comm_exec type to atomic
  2017-09-11  2:23 ` [PATCH RFC V2 07/10] perf tools: change machine comm_exec type to atomic kan.liang
@ 2017-09-13 15:24   ` Arnaldo Carvalho de Melo
  2017-09-15 20:05     ` Liang, Kan
  0 siblings, 1 reply; 37+ messages in thread
From: Arnaldo Carvalho de Melo @ 2017-09-13 15:24 UTC (permalink / raw)
  To: kan.liang
  Cc: peterz, mingo, linux-kernel, jolsa, namhyung, adrian.hunter,
	lukasz.odzioba, ak

Em Sun, Sep 10, 2017 at 07:23:20PM -0700, kan.liang@intel.com escreveu:
> From: Kan Liang <kan.liang@intel.com>
> 
> In case two or more threads want to change it.

Please describe the scenario that made you want to have this in place,
to help in reviewing and to have it documented in this patch commit log.

- Arnaldo
 
> Signed-off-by: Kan Liang <kan.liang@intel.com>
> ---
>  tools/perf/util/machine.c | 11 ++++++-----
>  tools/perf/util/machine.h |  2 +-
>  2 files changed, 7 insertions(+), 6 deletions(-)
> 
> diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
> index 2f1ad29..bbfb9e0 100644
> --- a/tools/perf/util/machine.c
> +++ b/tools/perf/util/machine.c
> @@ -64,8 +64,8 @@ int machine__init(struct machine *machine, const char *root_dir, pid_t pid)
>  
>  	machine->id_hdr_size = 0;
>  	machine->kptr_restrict_warned = false;
> -	machine->comm_exec = false;
>  	machine->kernel_start = 0;
> +	atomic_set(&machine->comm_exec, 0);
>  
>  	memset(machine->vmlinux_maps, 0, sizeof(machine->vmlinux_maps));
>  
> @@ -238,14 +238,15 @@ struct machine *machines__add(struct machines *machines, pid_t pid,
>  
>  void machines__set_comm_exec(struct machines *machines, bool comm_exec)
>  {
> +	int exec = comm_exec ? 1 : 0;
>  	struct rb_node *nd;
>  
> -	machines->host.comm_exec = comm_exec;
> +	atomic_set(&machines->host.comm_exec, exec);
>  
>  	for (nd = rb_first(&machines->guests); nd; nd = rb_next(nd)) {
>  		struct machine *machine = rb_entry(nd, struct machine, rb_node);
>  
> -		machine->comm_exec = comm_exec;
> +		atomic_set(&machine->comm_exec, exec);
>  	}
>  }
>  
> @@ -505,7 +506,7 @@ struct thread *machine__find_thread(struct machine *machine, pid_t pid,
>  struct comm *machine__thread_exec_comm(struct machine *machine,
>  				       struct thread *thread)
>  {
> -	if (machine->comm_exec)
> +	if (atomic_read(&machine->comm_exec))
>  		return thread__exec_comm(thread);
>  	else
>  		return thread__comm(thread);
> @@ -521,7 +522,7 @@ int machine__process_comm_event(struct machine *machine, union perf_event *event
>  	int err = 0;
>  
>  	if (exec)
> -		machine->comm_exec = true;
> +		atomic_set(&machine->comm_exec, 1);
>  
>  	if (dump_trace)
>  		perf_event__fprintf_comm(event, stdout);
> diff --git a/tools/perf/util/machine.h b/tools/perf/util/machine.h
> index 64663eb..fb3c2a2 100644
> --- a/tools/perf/util/machine.h
> +++ b/tools/perf/util/machine.h
> @@ -38,7 +38,7 @@ struct machine {
>  	struct rb_node	  rb_node;
>  	pid_t		  pid;
>  	u16		  id_hdr_size;
> -	bool		  comm_exec;
> +	atomic_t	  comm_exec;
>  	bool		  kptr_restrict_warned;
>  	char		  *root_dir;
>  	struct threads    threads[THREADS__TABLE_SIZE];
> -- 
> 2.5.5

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH RFC V2 00/10] perf top optimization
  2017-09-11  2:23 [PATCH RFC V2 00/10] perf top optimization kan.liang
                   ` (9 preceding siblings ...)
  2017-09-11  2:23 ` [PATCH RFC V2 10/10] perf top: switch back to overwrite mode kan.liang
@ 2017-09-13 15:25 ` Arnaldo Carvalho de Melo
  2017-09-13 15:29   ` Liang, Kan
  2017-09-18  8:57 ` Jiri Olsa
  11 siblings, 1 reply; 37+ messages in thread
From: Arnaldo Carvalho de Melo @ 2017-09-13 15:25 UTC (permalink / raw)
  To: kan.liang
  Cc: peterz, mingo, linux-kernel, jolsa, namhyung, adrian.hunter,
	lukasz.odzioba, ak

Em Sun, Sep 10, 2017 at 07:23:13PM -0700, kan.liang@intel.com escreveu:

So I got the first two patches already merged, and made some comments
about the other patches, please check those,

Thanks,

- Arnaldo
 
> Changes since V1:
>  - Patch 1: machine threads and hashtable related renaming (Arnaldo)
>  - Patch 6: use a smaller locked section for comm_str__put
>    add a locked wrapper for comm_str__findnew              (Arnaldo)
> 
> Kan Liang (10):
>   perf tools: hashtable for machine threads
>   perf tools: using scandir to replace readdir
>   petf tools: using comm_str to replace comm in hist_entry
>   petf tools: introduce a new function to set namespaces id
>   perf tools: lock to protect thread list
>   perf tools: lock to protect comm_str rb tree
>   perf tools: change machine comm_exec type to atomic
>   perf top: implement multithreading for perf_event__synthesize_threads
>   perf top: add option to set the number of thread for event synthesize
>   perf top: switch back to overwrite mode
> 
>  tools/perf/builtin-kvm.c              |   3 +-
>  tools/perf/builtin-record.c           |   2 +-
>  tools/perf/builtin-top.c              |   9 +-
>  tools/perf/builtin-trace.c            |  21 +++--
>  tools/perf/tests/mmap-thread-lookup.c |   2 +-
>  tools/perf/ui/browsers/hists.c        |   2 +-
>  tools/perf/util/comm.c                |  18 +++-
>  tools/perf/util/event.c               | 149 +++++++++++++++++++++++++-------
>  tools/perf/util/event.h               |  14 ++-
>  tools/perf/util/evlist.c              |   5 +-
>  tools/perf/util/hist.c                |  11 +--
>  tools/perf/util/machine.c             | 158 +++++++++++++++++++++-------------
>  tools/perf/util/machine.h             |  34 ++++++--
>  tools/perf/util/rb_resort.h           |   5 +-
>  tools/perf/util/sort.c                |   8 +-
>  tools/perf/util/sort.h                |   2 +-
>  tools/perf/util/thread.c              |  68 ++++++++++++---
>  tools/perf/util/thread.h              |   6 +-
>  tools/perf/util/top.h                 |   1 +
>  19 files changed, 376 insertions(+), 142 deletions(-)
> 
> -- 
> 2.5.5

^ permalink raw reply	[flat|nested] 37+ messages in thread

* RE: [PATCH RFC V2 00/10] perf top optimization
  2017-09-13 15:25 ` [PATCH RFC V2 00/10] perf top optimization Arnaldo Carvalho de Melo
@ 2017-09-13 15:29   ` Liang, Kan
  2017-09-13 15:38     ` Arnaldo Carvalho de Melo
  0 siblings, 1 reply; 37+ messages in thread
From: Liang, Kan @ 2017-09-13 15:29 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: peterz, mingo, linux-kernel, jolsa, namhyung, Hunter, Adrian,
	Odzioba, Lukasz, ak

> 
> Em Sun, Sep 10, 2017 at 07:23:13PM -0700, kan.liang@intel.com escreveu:
> 
> So I got the first two patches already merged, and made some comments
> about the other patches, please check those,
> 

Thanks for the review Arnaldo.

I will take a close look for the comments. 
For the next version, I only need to include patch 3-10, correct?


Thanks,
Kan

> Thanks,
> 
> - Arnaldo
> 
> > Changes since V1:
> >  - Patch 1: machine threads and hashtable related renaming (Arnaldo)
> >  - Patch 6: use a smaller locked section for comm_str__put
> >    add a locked wrapper for comm_str__findnew              (Arnaldo)
> >
> > Kan Liang (10):
> >   perf tools: hashtable for machine threads
> >   perf tools: using scandir to replace readdir
> >   petf tools: using comm_str to replace comm in hist_entry
> >   petf tools: introduce a new function to set namespaces id
> >   perf tools: lock to protect thread list
> >   perf tools: lock to protect comm_str rb tree
> >   perf tools: change machine comm_exec type to atomic
> >   perf top: implement multithreading for perf_event__synthesize_threads
> >   perf top: add option to set the number of thread for event synthesize
> >   perf top: switch back to overwrite mode
> >
> >  tools/perf/builtin-kvm.c              |   3 +-
> >  tools/perf/builtin-record.c           |   2 +-
> >  tools/perf/builtin-top.c              |   9 +-
> >  tools/perf/builtin-trace.c            |  21 +++--
> >  tools/perf/tests/mmap-thread-lookup.c |   2 +-
> >  tools/perf/ui/browsers/hists.c        |   2 +-
> >  tools/perf/util/comm.c                |  18 +++-
> >  tools/perf/util/event.c               | 149 +++++++++++++++++++++++++-------
> >  tools/perf/util/event.h               |  14 ++-
> >  tools/perf/util/evlist.c              |   5 +-
> >  tools/perf/util/hist.c                |  11 +--
> >  tools/perf/util/machine.c             | 158 +++++++++++++++++++++-------------
> >  tools/perf/util/machine.h             |  34 ++++++--
> >  tools/perf/util/rb_resort.h           |   5 +-
> >  tools/perf/util/sort.c                |   8 +-
> >  tools/perf/util/sort.h                |   2 +-
> >  tools/perf/util/thread.c              |  68 ++++++++++++---
> >  tools/perf/util/thread.h              |   6 +-
> >  tools/perf/util/top.h                 |   1 +
> >  19 files changed, 376 insertions(+), 142 deletions(-)
> >
> > --
> > 2.5.5

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH RFC V2 00/10] perf top optimization
  2017-09-13 15:29   ` Liang, Kan
@ 2017-09-13 15:38     ` Arnaldo Carvalho de Melo
  2017-09-14 21:19       ` Arnaldo Carvalho de Melo
  0 siblings, 1 reply; 37+ messages in thread
From: Arnaldo Carvalho de Melo @ 2017-09-13 15:38 UTC (permalink / raw)
  To: Liang, Kan
  Cc: peterz, mingo, linux-kernel, jolsa, namhyung, Hunter, Adrian,
	Odzioba, Lukasz, ak

Em Wed, Sep 13, 2017 at 03:29:44PM +0000, Liang, Kan escreveu:
> > 
> > Em Sun, Sep 10, 2017 at 07:23:13PM -0700, kan.liang@intel.com escreveu:
> > 
> > So I got the first two patches already merged, and made some comments
> > about the other patches, please check those,
> > 
> 
> Thanks for the review Arnaldo.
> 
> I will take a close look for the comments. 
> For the next version, I only need to include patch 3-10, correct?

Right, and go from my perf/core branch. The hashtable patch is still not
there as I am running tests before pushing out, but it should be there
later today.

Thanks!

- Arnaldo
 
> 
> Thanks,
> Kan
> 
> > Thanks,
> > 
> > - Arnaldo
> > 
> > > Changes since V1:
> > >  - Patch 1: machine threads and hashtable related renaming (Arnaldo)
> > >  - Patch 6: use a smaller locked section for comm_str__put
> > >    add a locked wrapper for comm_str__findnew              (Arnaldo)
> > >
> > > Kan Liang (10):
> > >   perf tools: hashtable for machine threads
> > >   perf tools: using scandir to replace readdir
> > >   petf tools: using comm_str to replace comm in hist_entry
> > >   petf tools: introduce a new function to set namespaces id
> > >   perf tools: lock to protect thread list
> > >   perf tools: lock to protect comm_str rb tree
> > >   perf tools: change machine comm_exec type to atomic
> > >   perf top: implement multithreading for perf_event__synthesize_threads
> > >   perf top: add option to set the number of thread for event synthesize
> > >   perf top: switch back to overwrite mode
> > >
> > >  tools/perf/builtin-kvm.c              |   3 +-
> > >  tools/perf/builtin-record.c           |   2 +-
> > >  tools/perf/builtin-top.c              |   9 +-
> > >  tools/perf/builtin-trace.c            |  21 +++--
> > >  tools/perf/tests/mmap-thread-lookup.c |   2 +-
> > >  tools/perf/ui/browsers/hists.c        |   2 +-
> > >  tools/perf/util/comm.c                |  18 +++-
> > >  tools/perf/util/event.c               | 149 +++++++++++++++++++++++++-------
> > >  tools/perf/util/event.h               |  14 ++-
> > >  tools/perf/util/evlist.c              |   5 +-
> > >  tools/perf/util/hist.c                |  11 +--
> > >  tools/perf/util/machine.c             | 158 +++++++++++++++++++++-------------
> > >  tools/perf/util/machine.h             |  34 ++++++--
> > >  tools/perf/util/rb_resort.h           |   5 +-
> > >  tools/perf/util/sort.c                |   8 +-
> > >  tools/perf/util/sort.h                |   2 +-
> > >  tools/perf/util/thread.c              |  68 ++++++++++++---
> > >  tools/perf/util/thread.h              |   6 +-
> > >  tools/perf/util/top.h                 |   1 +
> > >  19 files changed, 376 insertions(+), 142 deletions(-)
> > >
> > > --
> > > 2.5.5

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH RFC V2 00/10] perf top optimization
  2017-09-13 15:38     ` Arnaldo Carvalho de Melo
@ 2017-09-14 21:19       ` Arnaldo Carvalho de Melo
  2017-09-15 15:11         ` Liang, Kan
  0 siblings, 1 reply; 37+ messages in thread
From: Arnaldo Carvalho de Melo @ 2017-09-14 21:19 UTC (permalink / raw)
  To: Liang, Kan
  Cc: peterz, mingo, linux-kernel, jolsa, namhyung, Hunter, Adrian,
	Odzioba, Lukasz, ak

Em Wed, Sep 13, 2017 at 12:38:19PM -0300, Arnaldo Carvalho de Melo escreveu:
> Em Wed, Sep 13, 2017 at 03:29:44PM +0000, Liang, Kan escreveu:
> > > 
> > > Em Sun, Sep 10, 2017 at 07:23:13PM -0700, kan.liang@intel.com escreveu:
> > > 
> > > So I got the first two patches already merged, and made some comments
> > > about the other patches, please check those,
> > > 
> > 
> > Thanks for the review Arnaldo.
> > 
> > I will take a close look for the comments. 
> > For the next version, I only need to include patch 3-10, correct?
> 
> Right, and go from my perf/core branch. The hashtable patch is still not
> there as I am running tests before pushing out, but it should be there
> later today.

So, its at my repo, branch tmp.perf/threads_hashtable

But 'perf trace' is broken, please take a look below:

[root@jouet ~]# gdb -c core
GNU gdb (GDB) Fedora 8.0-20.fc26
<SNIP>
Core was generated by `perf trace -e block:block_bio_queue'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x000000000051089a in ?? ()
(gdb) file perf
Reading symbols from perf...done.
(gdb) bt
#0  0x000000000051089a in ____machine__findnew_thread (machine=0x3dfcab0, threads=0x3dfca78, pid=-1, tid=-1, create=false) at util/machine.c:429
#1  0x0000000000510b49 in machine__find_thread (machine=0x3dfcab0, pid=-1, tid=-1) at util/machine.c:498
#2  0x0000000000483dd0 in trace__set_filter_loop_pids (trace=0x7ffc0d23f880) at builtin-trace.c:2247
#3  0x000000000048438b in trace__run (trace=0x7ffc0d23f880, argc=0, argv=0x7ffc0d242a00) at builtin-trace.c:2385
#4  0x0000000000487219 in cmd_trace (argc=0, argv=0x7ffc0d242a00) at builtin-trace.c:3121
#5  0x00000000004c0b4a in run_builtin (p=0xa75d00 <commands+576>, argc=3, argv=0x7ffc0d242a00) at perf.c:296
#6  0x00000000004c0db7 in handle_internal_command (argc=3, argv=0x7ffc0d242a00) at perf.c:348
#7  0x00000000004c0f09 in run_argv (argcp=0x7ffc0d24285c, argv=0x7ffc0d242850) at perf.c:392
#8  0x00000000004c12d7 in main (argc=3, argv=0x7ffc0d242a00) at perf.c:536
(gdb)

^ permalink raw reply	[flat|nested] 37+ messages in thread

* RE: [PATCH RFC V2 00/10] perf top optimization
  2017-09-14 21:19       ` Arnaldo Carvalho de Melo
@ 2017-09-15 15:11         ` Liang, Kan
  2017-09-15 17:26           ` Arnaldo Carvalho de Melo
  0 siblings, 1 reply; 37+ messages in thread
From: Liang, Kan @ 2017-09-15 15:11 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: peterz, mingo, linux-kernel, jolsa, namhyung, Hunter, Adrian,
	Odzioba, Lukasz, ak

> Em Wed, Sep 13, 2017 at 12:38:19PM -0300, Arnaldo Carvalho de Melo
> escreveu:
> > Em Wed, Sep 13, 2017 at 03:29:44PM +0000, Liang, Kan escreveu:
> > > >
> > > > Em Sun, Sep 10, 2017 at 07:23:13PM -0700, kan.liang@intel.com
> escreveu:
> > > >
> > > > So I got the first two patches already merged, and made some
> > > > comments about the other patches, please check those,
> > > >
> > >
> > > Thanks for the review Arnaldo.
> > >
> > > I will take a close look for the comments.
> > > For the next version, I only need to include patch 3-10, correct?
> >
> > Right, and go from my perf/core branch. The hashtable patch is still
> > not there as I am running tests before pushing out, but it should be
> > there later today.
> 
> So, its at my repo, branch tmp.perf/threads_hashtable
> 
> But 'perf trace' is broken, please take a look below:
> 
> [root@jouet ~]# gdb -c core
> GNU gdb (GDB) Fedora 8.0-20.fc26
> <SNIP>
> Core was generated by `perf trace -e block:block_bio_queue'.
> Program terminated with signal SIGSEGV, Segmentation fault.
> #0  0x000000000051089a in ?? ()
> (gdb) file perf
> Reading symbols from perf...done.
> (gdb) bt
> #0  0x000000000051089a in ____machine__findnew_thread
> (machine=0x3dfcab0, threads=0x3dfca78, pid=-1, tid=-1, create=false) at
> util/machine.c:429

I think the root cause is tid == -1, so the hashtable index becomes -1.
The patch below should fix it (illustrated after the hunk).

diff --git a/tools/perf/util/machine.h b/tools/perf/util/machine.h
index e6d5381..3c564b8 100644
--- a/tools/perf/util/machine.h
+++ b/tools/perf/util/machine.h
@@ -57,7 +57,7 @@ struct machine {
 
 static inline struct threads *machine__threads(struct machine *machine, pid_t tid)
 {
-	return &machine->threads[tid % THREADS__TABLE_SIZE];
+	return &machine->threads[(unsigned int)tid % THREADS__TABLE_SIZE];
 }
 
 static inline
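
(Editorial aside, not part of the original mail: in C the result of %
takes the sign of the dividend, so a tid of -1 yields a negative,
out-of-bounds index, while the cast to unsigned int makes it wrap into
a valid bucket. A standalone illustration, using 256 buckets purely as
an example value:)

	#include <stdio.h>
	#include <sys/types.h>

	#define TABLE_SIZE 256	/* stand-in for THREADS__TABLE_SIZE */

	int main(void)
	{
		pid_t tid = -1;

		/* signed modulo: prints -1, an invalid array index */
		printf("%d\n", (int)(tid % TABLE_SIZE));
		/* after the unsigned cast: prints 255, a valid bucket */
		printf("%u\n", (unsigned int)tid % TABLE_SIZE);
		return 0;
	}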


There is another issue, introduced by commit
33013b9a5607 ("perf machine: Optimize a bit the machine__findnew_thread() methods"):
it should use tid, not pid, to pick the threads table bucket.

diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index 90ae9c7..ddeea05 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -473,7 +473,7 @@ static struct thread *____machine__findnew_thread(struct machine *machine,
 
 struct thread *__machine__findnew_thread(struct machine *machine, pid_t pid, pid_t tid)
 {
-	return ____machine__findnew_thread(machine, machine__threads(machine, pid), pid, tid, true);
+	return ____machine__findnew_thread(machine, machine__threads(machine, tid), pid, tid, true);
 }

They are small fixes. I think it's better to merge them with the old patches.
Should I include the modified hashtable patches in V3?

Thanks,
Kan

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* Re: [PATCH RFC V2 00/10] perf top optimization
  2017-09-15 15:11         ` Liang, Kan
@ 2017-09-15 17:26           ` Arnaldo Carvalho de Melo
  2017-09-15 17:29             ` Liang, Kan
  0 siblings, 1 reply; 37+ messages in thread
From: Arnaldo Carvalho de Melo @ 2017-09-15 17:26 UTC (permalink / raw)
  To: Liang, Kan
  Cc: peterz, mingo, linux-kernel, jolsa, namhyung, Hunter, Adrian,
	Odzioba, Lukasz, ak

Em Fri, Sep 15, 2017 at 03:11:51PM +0000, Liang, Kan escreveu:
> > Em Wed, Sep 13, 2017 at 12:38:19PM -0300, Arnaldo Carvalho de Melo
> > escreveu:
> > > Em Wed, Sep 13, 2017 at 03:29:44PM +0000, Liang, Kan escreveu:
> > > > >
> > > > > Em Sun, Sep 10, 2017 at 07:23:13PM -0700, kan.liang@intel.com
> > escreveu:
> > > > >
> > > > > So I got the first two patches already merged, and made some
> > > > > comments about the other patches, please check those,
> > > > >
> > > >
> > > > Thanks for the review Arnaldo.
> > > >
> > > > I will take a close look for the comments.
> > > > For the next version, I only need to include patch 3-10, correct?
> > >
> > > Right, and go from my perf/core branch. The hashtable patch is still
> > > not there as I am running tests before pushing out, but it should be
> > > there later today.
> > 
> > So, its at my repo, branch tmp.perf/threads_hashtable
> > 
> > But 'perf trace' is broken, please take a look below:
> > 
> > [root@jouet ~]# gdb -c core
> > GNU gdb (GDB) Fedora 8.0-20.fc26
> > <SNIP>
> > Core was generated by `perf trace -e block:block_bio_queue'.
> > Program terminated with signal SIGSEGV, Segmentation fault.
> > #0  0x000000000051089a in ?? ()
> > (gdb) file perf
> > Reading symbols from perf...done.
> > (gdb) bt
> > #0  0x000000000051089a in ____machine__findnew_thread
> > (machine=0x3dfcab0, threads=0x3dfca78, pid=-1, tid=-1, create=false) at
> > util/machine.c:429
> 
> I think the root cause is tid==-1. So the index of hashtable will be -1.
> The patch as below should fix it.
> 
> diff --git a/tools/perf/util/machine.h b/tools/perf/util/machine.h
> index e6d5381..3c564b8 100644
> --- a/tools/perf/util/machine.h
> +++ b/tools/perf/util/machine.h
> @@ -57,7 +57,7 @@ struct machine {
>  
>  static inline struct threads *machine__threads(struct machine *machine, pid_t tid)
>  {
> -	return &machine->threads[tid % THREADS__TABLE_SIZE];
> +	return &machine->threads[(unsigned int)tid % THREADS__TABLE_SIZE];
>  }
>  
>  static inline
> 
> 
> There should be another issue which was introduced by  
> 33013b9a5607 ("perf machine: Optimize a bit the machine__findnew_thread() methods")
> It should use tid not pid to get the threads.
> 
> diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
> index 90ae9c7..ddeea05 100644
> --- a/tools/perf/util/machine.c
> +++ b/tools/perf/util/machine.c
> @@ -473,7 +473,7 @@ static struct thread *____machine__findnew_thread(struct machine *machine,
>  
>  struct thread *__machine__findnew_thread(struct machine *machine, pid_t pid, pid_t tid)
>  {
> -	return ____machine__findnew_thread(machine, machine__threads(machine, pid), pid, tid, true);
> +	return ____machine__findnew_thread(machine, machine__threads(machine, tid), pid, tid, true);
>  }
> 
> They are small fixes. I think it's better to merge them with the old patches.
> Should I include the modified hashtable patches in V3?

I'll add these now and test, then push another branch, ok?

- Arnaldo

^ permalink raw reply	[flat|nested] 37+ messages in thread

* RE: [PATCH RFC V2 00/10] perf top optimization
  2017-09-15 17:26           ` Arnaldo Carvalho de Melo
@ 2017-09-15 17:29             ` Liang, Kan
  2017-09-15 18:24               ` Arnaldo Carvalho de Melo
  0 siblings, 1 reply; 37+ messages in thread
From: Liang, Kan @ 2017-09-15 17:29 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: peterz, mingo, linux-kernel, jolsa, namhyung, Hunter, Adrian,
	Odzioba, Lukasz, ak

> Em Fri, Sep 15, 2017 at 03:11:51PM +0000, Liang, Kan escreveu:
> > > Em Wed, Sep 13, 2017 at 12:38:19PM -0300, Arnaldo Carvalho de Melo
> > > escreveu:
> > > > Em Wed, Sep 13, 2017 at 03:29:44PM +0000, Liang, Kan escreveu:
> > > > > >
> > > > > > Em Sun, Sep 10, 2017 at 07:23:13PM -0700, kan.liang@intel.com
> > > escreveu:
> > > > > >
> > > > > > So I got the first two patches already merged, and made some
> > > > > > comments about the other patches, please check those,
> > > > > >
> > > > >
> > > > > Thanks for the review Arnaldo.
> > > > >
> > > > > I will take a close look for the comments.
> > > > > For the next version, I only need to include patch 3-10, correct?
> > > >
> > > > Right, and go from my perf/core branch. The hashtable patch is
> > > > still not there as I am running tests before pushing out, but it
> > > > should be there later today.
> > >
> > > So, its at my repo, branch tmp.perf/threads_hashtable
> > >
> > > But 'perf trace' is broken, please take a look below:
> > >
> > > [root@jouet ~]# gdb -c core
> > > GNU gdb (GDB) Fedora 8.0-20.fc26
> > > <SNIP>
> > > Core was generated by `perf trace -e block:block_bio_queue'.
> > > Program terminated with signal SIGSEGV, Segmentation fault.
> > > #0  0x000000000051089a in ?? ()
> > > (gdb) file perf
> > > Reading symbols from perf...done.
> > > (gdb) bt
> > > #0  0x000000000051089a in ____machine__findnew_thread
> > > (machine=0x3dfcab0, threads=0x3dfca78, pid=-1, tid=-1, create=false)
> > > at
> > > util/machine.c:429
> >
> > I think the root cause is tid==-1. So the index of hashtable will be -1.
> > The patch as below should fix it.
> >
> > diff --git a/tools/perf/util/machine.h b/tools/perf/util/machine.h
> > index e6d5381..3c564b8 100644
> > --- a/tools/perf/util/machine.h
> > +++ b/tools/perf/util/machine.h
> > @@ -57,7 +57,7 @@ struct machine {
> >
> >  static inline struct threads *machine__threads(struct machine
> > *machine, pid_t tid)  {
> > -	return &machine->threads[tid % THREADS__TABLE_SIZE];
> > +	return &machine->threads[(unsigned int)tid %
> THREADS__TABLE_SIZE];
> >  }
> >
> >  static inline
> >
> >
> > There should be another issue which was introduced by
> > 33013b9a5607 ("perf machine: Optimize a bit the
> > machine__findnew_thread() methods") It should use tid not pid to get the
> threads.
> >
> > diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
> > index 90ae9c7..ddeea05 100644
> > --- a/tools/perf/util/machine.c
> > +++ b/tools/perf/util/machine.c
> > @@ -473,7 +473,7 @@ static struct thread
> > *____machine__findnew_thread(struct machine *machine,
> >
> >  struct thread *__machine__findnew_thread(struct machine *machine,
> > pid_t pid, pid_t tid)  {
> > -	return ____machine__findnew_thread(machine,
> machine__threads(machine, pid), pid, tid, true);
> > +	return ____machine__findnew_thread(machine,
> > +machine__threads(machine, tid), pid, tid, true);
> >  }
> >
> > They are small fixes. I think it's better to merge them with the old patches.
> > Should I include the modified hashtable patches in V3?
> 
> I'll add these now and test, then push another branch, ok?
>

Sure. Thanks.
I will prepare the V3 for the new branch then. 

Thanks,
Kan

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH RFC V2 00/10] perf top optimization
  2017-09-15 17:29             ` Liang, Kan
@ 2017-09-15 18:24               ` Arnaldo Carvalho de Melo
  2017-09-15 18:26                 ` Liang, Kan
  0 siblings, 1 reply; 37+ messages in thread
From: Arnaldo Carvalho de Melo @ 2017-09-15 18:24 UTC (permalink / raw)
  To: Liang, Kan
  Cc: peterz, mingo, linux-kernel, jolsa, namhyung, Hunter, Adrian,
	Odzioba, Lukasz, ak

Em Fri, Sep 15, 2017 at 05:29:13PM +0000, Liang, Kan escreveu:
> > Em Fri, Sep 15, 2017 at 03:11:51PM +0000, Liang, Kan escreveu:
> > > They are small fixes. I think it's better to merge them with the old patches.
> > > Should I include the modified hashtable patches in V3?

> > I'll add these now and test, then push another branch, ok?
 
> Sure. Thanks.
> I will prepare the V3 for the new branch then. 

Ok, passes all the tests I trew at them, including 'perf test' and
building in several distro containers.

I just pushed perf/core, please continue from there, ok?

- Arnaldo

^ permalink raw reply	[flat|nested] 37+ messages in thread

* RE: [PATCH RFC V2 00/10] perf top optimization
  2017-09-15 18:24               ` Arnaldo Carvalho de Melo
@ 2017-09-15 18:26                 ` Liang, Kan
  0 siblings, 0 replies; 37+ messages in thread
From: Liang, Kan @ 2017-09-15 18:26 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: peterz, mingo, linux-kernel, jolsa, namhyung, Hunter, Adrian,
	Odzioba, Lukasz, ak

> Em Fri, Sep 15, 2017 at 05:29:13PM +0000, Liang, Kan escreveu:
> > > Em Fri, Sep 15, 2017 at 03:11:51PM +0000, Liang, Kan escreveu:
> > > > They are small fixes. I think it's better to merge them with the old
> patches.
> > > > Should I include the modified hashtable patches in V3?
> 
> > > I'll add these now and test, then push another branch, ok?
> 
> > Sure. Thanks.
> > I will prepare the V3 for the new branch then.
> 
> Ok, passes all the tests I trew at them, including 'perf test' and building in
> several distro containers.
> 
> I just pushed perf/core, please continue from there, ok?
> 

Sure.

Thanks,
Kan

^ permalink raw reply	[flat|nested] 37+ messages in thread

* RE: [PATCH RFC V2 07/10] perf tools: change machine comm_exec type to atomic
  2017-09-13 15:24   ` Arnaldo Carvalho de Melo
@ 2017-09-15 20:05     ` Liang, Kan
  2017-09-18 11:30       ` Jiri Olsa
  0 siblings, 1 reply; 37+ messages in thread
From: Liang, Kan @ 2017-09-15 20:05 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: peterz, mingo, linux-kernel, jolsa, namhyung, Hunter, Adrian,
	Odzioba, Lukasz, ak

> Em Sun, Sep 10, 2017 at 07:23:20PM -0700, kan.liang@intel.com escreveu:
> > From: Kan Liang <kan.liang@intel.com>
> >
> > In case two or more threads want to change it.
> 
> Please describe the scenario that made you want to have this in place, to
> help in reviewing and to have it documented in this patch commit log.
> 

Thinking more about it: for event synthesis we don't need to change
comm_exec, because PERF_RECORD_MISC_COMM_EXEC is not set at this stage.
But we should consider it if we want to multithread the event
processing stage.

Since it's not an issue for now, I will drop this patch and add some
comments in the code for V3.

Thanks,
Kan

> - Arnaldo
> 
> > Signed-off-by: Kan Liang <kan.liang@intel.com>
> > ---
> >  tools/perf/util/machine.c | 11 ++++++-----  tools/perf/util/machine.h
> > |  2 +-
> >  2 files changed, 7 insertions(+), 6 deletions(-)
> >
> > diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
> > index 2f1ad29..bbfb9e0 100644
> > --- a/tools/perf/util/machine.c
> > +++ b/tools/perf/util/machine.c
> > @@ -64,8 +64,8 @@ int machine__init(struct machine *machine, const
> > char *root_dir, pid_t pid)
> >
> >  	machine->id_hdr_size = 0;
> >  	machine->kptr_restrict_warned = false;
> > -	machine->comm_exec = false;
> >  	machine->kernel_start = 0;
> > +	atomic_set(&machine->comm_exec, 0);
> >
> >  	memset(machine->vmlinux_maps, 0, sizeof(machine-
> >vmlinux_maps));
> >
> > @@ -238,14 +238,15 @@ struct machine *machines__add(struct machines
> > *machines, pid_t pid,
> >
> >  void machines__set_comm_exec(struct machines *machines, bool
> > comm_exec)  {
> > +	int exec = comm_exec ? 1 : 0;
> >  	struct rb_node *nd;
> >
> > -	machines->host.comm_exec = comm_exec;
> > +	atomic_set(&machines->host.comm_exec, exec);
> >
> >  	for (nd = rb_first(&machines->guests); nd; nd = rb_next(nd)) {
> >  		struct machine *machine = rb_entry(nd, struct machine,
> rb_node);
> >
> > -		machine->comm_exec = comm_exec;
> > +		atomic_set(&machine->comm_exec, exec);
> >  	}
> >  }
> >
> > @@ -505,7 +506,7 @@ struct thread *machine__find_thread(struct
> machine
> > *machine, pid_t pid,  struct comm *machine__thread_exec_comm(struct
> machine *machine,
> >  				       struct thread *thread)
> >  {
> > -	if (machine->comm_exec)
> > +	if (atomic_read(&machine->comm_exec))
> >  		return thread__exec_comm(thread);
> >  	else
> >  		return thread__comm(thread);
> > @@ -521,7 +522,7 @@ int machine__process_comm_event(struct machine
> *machine, union perf_event *event
> >  	int err = 0;
> >
> >  	if (exec)
> > -		machine->comm_exec = true;
> > +		atomic_set(&machine->comm_exec, 1);
> >
> >  	if (dump_trace)
> >  		perf_event__fprintf_comm(event, stdout); diff --git
> > a/tools/perf/util/machine.h b/tools/perf/util/machine.h index
> > 64663eb..fb3c2a2 100644
> > --- a/tools/perf/util/machine.h
> > +++ b/tools/perf/util/machine.h
> > @@ -38,7 +38,7 @@ struct machine {
> >  	struct rb_node	  rb_node;
> >  	pid_t		  pid;
> >  	u16		  id_hdr_size;
> > -	bool		  comm_exec;
> > +	atomic_t	  comm_exec;
> >  	bool		  kptr_restrict_warned;
> >  	char		  *root_dir;
> >  	struct threads    threads[THREADS__TABLE_SIZE];
> > --
> > 2.5.5

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH RFC V2 03/10] petf tools: using comm_str to replace comm in hist_entry
  2017-09-13 15:21   ` Arnaldo Carvalho de Melo
@ 2017-09-18  8:38     ` Jiri Olsa
  0 siblings, 0 replies; 37+ messages in thread
From: Jiri Olsa @ 2017-09-18  8:38 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: Frédéric Weisbecker, kan.liang, peterz, mingo,
	linux-kernel, jolsa, namhyung, adrian.hunter, lukasz.odzioba, ak

On Wed, Sep 13, 2017 at 12:21:56PM -0300, Arnaldo Carvalho de Melo wrote:
> Em Sun, Sep 10, 2017 at 07:23:16PM -0700, kan.liang@intel.com escreveu:
> > From: Kan Liang <kan.liang@intel.com>
> > 
> > For hist_entry, it only needs comm_str for cmp.
> 
> And thinking a bit more, isn't this even a bug fix, as at the time of
> that sample that was the comm string associated with that thread?
> 
> Frédéric, am I nutz?

I think it's ok, because struct comm holds just one comm string
which is 'correct/accurate' at the time the hist entry is created

this patch just seems to switch comm pointer with comm string pointer,
I'm guessing it's useful in follow up patches

jirka

> 
> - Arnaldo
>  
> > Signed-off-by: Kan Liang <kan.liang@intel.com>
> > ---
> >  tools/perf/util/hist.c | 4 ++--
> >  tools/perf/util/sort.c | 8 ++++----
> >  tools/perf/util/sort.h | 2 +-
> >  3 files changed, 7 insertions(+), 7 deletions(-)
> > 
> > diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
> > index e60d8d8..0f00dd9 100644
> > --- a/tools/perf/util/hist.c
> > +++ b/tools/perf/util/hist.c
> > @@ -587,7 +587,7 @@ __hists__add_entry(struct hists *hists,
> >  	struct namespaces *ns = thread__namespaces(al->thread);
> >  	struct hist_entry entry = {
> >  		.thread	= al->thread,
> > -		.comm = thread__comm(al->thread),
> > +		.comm_str = thread__comm_str(al->thread),
> >  		.cgroup_id = {
> >  			.dev = ns ? ns->link_info[CGROUP_NS_INDEX].dev : 0,
> >  			.ino = ns ? ns->link_info[CGROUP_NS_INDEX].ino : 0,
> > @@ -944,7 +944,7 @@ iter_add_next_cumulative_entry(struct hist_entry_iter *iter,
> >  		.hists = evsel__hists(evsel),
> >  		.cpu = al->cpu,
> >  		.thread = al->thread,
> > -		.comm = thread__comm(al->thread),
> > +		.comm_str = thread__comm_str(al->thread),
> >  		.ip = al->addr,
> >  		.ms = {
> >  			.map = al->map,
> > diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
> > index eb3ab90..99ab411 100644
> > --- a/tools/perf/util/sort.c
> > +++ b/tools/perf/util/sort.c
> > @@ -114,26 +114,26 @@ static int64_t
> >  sort__comm_cmp(struct hist_entry *left, struct hist_entry *right)
> >  {
> >  	/* Compare the addr that should be unique among comm */
> > -	return strcmp(comm__str(right->comm), comm__str(left->comm));
> > +	return strcmp(right->comm_str, left->comm_str);
> >  }
> >  
> >  static int64_t
> >  sort__comm_collapse(struct hist_entry *left, struct hist_entry *right)
> >  {
> >  	/* Compare the addr that should be unique among comm */
> > -	return strcmp(comm__str(right->comm), comm__str(left->comm));
> > +	return strcmp(right->comm_str, left->comm_str);
> >  }
> >  
> >  static int64_t
> >  sort__comm_sort(struct hist_entry *left, struct hist_entry *right)
> >  {
> > -	return strcmp(comm__str(right->comm), comm__str(left->comm));
> > +	return strcmp(right->comm_str, left->comm_str);
> >  }
> >  
> >  static int hist_entry__comm_snprintf(struct hist_entry *he, char *bf,
> >  				     size_t size, unsigned int width)
> >  {
> > -	return repsep_snprintf(bf, size, "%-*.*s", width, width, comm__str(he->comm));
> > +	return repsep_snprintf(bf, size, "%-*.*s", width, width, he->comm_str);
> >  }
> >  
> >  struct sort_entry sort_comm = {
> > diff --git a/tools/perf/util/sort.h b/tools/perf/util/sort.h
> > index f36dc49..861cbe7 100644
> > --- a/tools/perf/util/sort.h
> > +++ b/tools/perf/util/sort.h
> > @@ -96,7 +96,7 @@ struct hist_entry {
> >  	struct he_stat		*stat_acc;
> >  	struct map_symbol	ms;
> >  	struct thread		*thread;
> > -	struct comm		*comm;
> > +	const char		*comm_str;
> >  	struct namespace_id	cgroup_id;
> >  	u64			ip;
> >  	u64			transaction;
> > -- 
> > 2.5.5

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH RFC V2 05/10] perf tools: lock to protect thread list
  2017-09-11  2:23 ` [PATCH RFC V2 05/10] perf tools: lock to protect thread list kan.liang
@ 2017-09-18  8:50   ` Jiri Olsa
  2017-09-18 16:18     ` Liang, Kan
  0 siblings, 1 reply; 37+ messages in thread
From: Jiri Olsa @ 2017-09-18  8:50 UTC (permalink / raw)
  To: kan.liang
  Cc: acme, peterz, mingo, linux-kernel, jolsa, namhyung,
	adrian.hunter, lukasz.odzioba, ak

On Sun, Sep 10, 2017 at 07:23:18PM -0700, kan.liang@intel.com wrote:

SNIP

> +	pthread_mutex_unlock(&thread->namespaces_lock);
> +
>  	return 0;
>  }
>  
> -void thread__namespaces_id(const struct thread *thread,
> +void thread__namespaces_id(struct thread *thread,
>  			   u64 *dev, u64 *ino)
>  {
>  	struct namespaces *ns;
>  
> +	pthread_mutex_lock(&thread->namespaces_lock);
>  	ns = thread__namespaces(thread);

isn't it just thread__namespaces that needs this lock?

if that's the case we don't need the change for __hists__add_entry
in previous patch

jirka

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH RFC V2 00/10] perf top optimization
  2017-09-11  2:23 [PATCH RFC V2 00/10] perf top optimization kan.liang
                   ` (10 preceding siblings ...)
  2017-09-13 15:25 ` [PATCH RFC V2 00/10] perf top optimization Arnaldo Carvalho de Melo
@ 2017-09-18  8:57 ` Jiri Olsa
  2017-09-18 13:01   ` Arnaldo Carvalho de Melo
  11 siblings, 1 reply; 37+ messages in thread
From: Jiri Olsa @ 2017-09-18  8:57 UTC (permalink / raw)
  To: kan.liang
  Cc: acme, peterz, mingo, linux-kernel, jolsa, namhyung,
	adrian.hunter, lukasz.odzioba, ak

On Sun, Sep 10, 2017 at 07:23:13PM -0700, kan.liang@intel.com wrote:
> From: Kan Liang <kan.liang@intel.com>
> 
> The patch series intends to fix the severe performance issue in
> Knights Landing/Mill, when monitoring in heavy load system.
> perf top costs a few minutes to show the result, which is
> unacceptable.
> With the patch series applied, the latency will reduces to
> several seconds.
> 
> machine__synthesize_threads and perf_top__mmap_read costs most of
> the perf top time (> 99%).

looks like this patchset adds locking into code paths
used by other single threaded tools and that might
be bad for them as noted by Andi in here:

  https://marc.info/?l=linux-kernel&m=149031672928989&w=2

he proposed a solution and it was changed & posted by Arnaldo in here:

  https://marc.info/?l=linux-kernel&m=149132267410294&w=2

but it looks like it never got merged

could you please add this or similar code before you add the
locking code/overhead in?
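
the idea there was roughly the following (my rough sketch, not the actual
patch from those links; the flag and helper names here are made up):

extern bool perf_singlethreaded;	/* tools start out single threaded */

static inline void perf_lock(pthread_mutex_t *lock)
{
	if (!perf_singlethreaded)
		pthread_mutex_lock(lock);
}

static inline void perf_unlock(pthread_mutex_t *lock)
{
	if (!perf_singlethreaded)
		pthread_mutex_unlock(lock);
}

single threaded tools then never pay for the locking, and perf top would
just flip the flag around its multithreaded part.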

thanks,
jirka

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH RFC V2 08/10] perf top: implement multithreading for perf_event__synthesize_threads
  2017-09-11  2:23 ` [PATCH RFC V2 08/10] perf top: implement multithreading for perf_event__synthesize_threads kan.liang
@ 2017-09-18 11:24   ` Jiri Olsa
  0 siblings, 0 replies; 37+ messages in thread
From: Jiri Olsa @ 2017-09-18 11:24 UTC (permalink / raw)
  To: kan.liang
  Cc: acme, peterz, mingo, linux-kernel, jolsa, namhyung,
	adrian.hunter, lukasz.odzioba, ak

On Sun, Sep 10, 2017 at 07:23:21PM -0700, kan.liang@intel.com wrote:

SNIP

> diff --git a/tools/perf/util/event.h b/tools/perf/util/event.h
> index ee7bcc8..7b987c8 100644
> --- a/tools/perf/util/event.h
> +++ b/tools/perf/util/event.h
> @@ -664,6 +664,17 @@ typedef int (*perf_event__handler_t)(struct perf_tool *tool,
>  				     struct perf_sample *sample,
>  				     struct machine *machine);
>  
> +struct synthesize_threads_arg {
> +	struct perf_tool *tool;
> +	perf_event__handler_t process;
> +	struct machine *machine;
> +	bool mmap_data;
> +	unsigned int proc_map_timeout;
> +	struct dirent **dirent;
> +	int num;
> +	int start;
> +};

does not need to be global, can be defined in event.c
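
i.e. something like this at the top of event.c instead, assuming nothing
outside of event.c needs the type:

/* tools/perf/util/event.c */
struct synthesize_threads_arg {
	struct perf_tool *tool;
	perf_event__handler_t process;
	struct machine *machine;
	bool mmap_data;
	unsigned int proc_map_timeout;
	struct dirent **dirent;
	int num;
	int start;
};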

jirka

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH RFC V2 07/10] perf tools: change machine comm_exec type to atomic
  2017-09-15 20:05     ` Liang, Kan
@ 2017-09-18 11:30       ` Jiri Olsa
  0 siblings, 0 replies; 37+ messages in thread
From: Jiri Olsa @ 2017-09-18 11:30 UTC (permalink / raw)
  To: Liang, Kan
  Cc: Arnaldo Carvalho de Melo, peterz, mingo, linux-kernel, jolsa,
	namhyung, Hunter, Adrian, Odzioba, Lukasz, ak

On Fri, Sep 15, 2017 at 08:05:11PM +0000, Liang, Kan wrote:
> > Em Sun, Sep 10, 2017 at 07:23:20PM -0700, kan.liang@intel.com escreveu:
> > > From: Kan Liang <kan.liang@intel.com>
> > >
> > > In case there are two or more threads that want to change it.
> > 
> > Please describe the scenario that made you want to have this in place, to
> > help in reviewing and to have it documented in this patch commit log.
> > 
> 
> Thinking more about it, for synthesizing events we don't need to change
> comm_exec. PERF_RECORD_MISC_COMM_EXEC is not set at this stage.
> But we should consider it if we want to multithread the event processing stage.
> 
> Since it's not an issue for now, I will drop this patch and add some comments
> in the code for V3.

any chance you could push v3 into a git branch somewhere?

thanks,
jirka

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH RFC V2 00/10] perf top optimization
  2017-09-18  8:57 ` Jiri Olsa
@ 2017-09-18 13:01   ` Arnaldo Carvalho de Melo
  2017-09-18 16:21     ` Liang, Kan
  2017-09-19  8:19     ` Jiri Olsa
  0 siblings, 2 replies; 37+ messages in thread
From: Arnaldo Carvalho de Melo @ 2017-09-18 13:01 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: kan.liang, peterz, mingo, linux-kernel, jolsa, namhyung,
	adrian.hunter, lukasz.odzioba, ak

Em Mon, Sep 18, 2017 at 10:57:08AM +0200, Jiri Olsa escreveu:
> On Sun, Sep 10, 2017 at 07:23:13PM -0700, kan.liang@intel.com wrote:
> > From: Kan Liang <kan.liang@intel.com>
> > 
> > The patch series intends to fix the severe performance issue in
> > Knights Landing/Mill, when monitoring in heavy load system.
> > perf top costs a few minutes to show the result, which is
> > unacceptable.
> > With the patch series applied, the latency will reduces to
> > several seconds.
> > 
> > machine__synthesize_threads and perf_top__mmap_read costs most of
> > the perf top time (> 99%).
> 
> looks like this patchset adds locking into code paths
> used by other single threaded tools and that might
> be bad for them as noted by Andi in here:
> 
>   https://marc.info/?l=linux-kernel&m=149031672928989&w=2
> 
> he proposed solution and it was changed&posted by Arnaldo in here:
> 
>   https://marc.info/?l=linux-kernel&m=149132267410294&w=2
> 
> but looks like it never got merged
> 
> could you please add this or similar code before you add the
> locking code/overhead in?

I'm rehashing that patch and adding it on top of what is in my perf/core
branch and will push it soon; for now you can take a look at tmp.perf/core.

- Arnaldo

^ permalink raw reply	[flat|nested] 37+ messages in thread

* RE: [PATCH RFC V2 05/10] perf tools: lock to protect thread list
  2017-09-18  8:50   ` Jiri Olsa
@ 2017-09-18 16:18     ` Liang, Kan
  0 siblings, 0 replies; 37+ messages in thread
From: Liang, Kan @ 2017-09-18 16:18 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: acme, peterz, mingo, linux-kernel, jolsa, namhyung, Hunter,
	Adrian, Odzioba, Lukasz, ak

> 
> SNIP
> 
> > +	pthread_mutex_unlock(&thread->namespaces_lock);
> > +
> >  	return 0;
> >  }
> >
> > -void thread__namespaces_id(const struct thread *thread,
> > +void thread__namespaces_id(struct thread *thread,
> >  			   u64 *dev, u64 *ino)
> >  {
> >  	struct namespaces *ns;
> >
> > +	pthread_mutex_lock(&thread->namespaces_lock);
> >  	ns = thread__namespaces(thread);
> 
> isn't it just thread__namespaces that needs this lock?

I also wanted to protect
*dev = ns ? ns->link_info[CGROUP_NS_INDEX].dev : 0;
*ino = ns ? ns->link_info[CGROUP_NS_INDEX].ino : 0;
because I was not sure that ns would still be accurate by the time we
use it later.
But for our case (perf top event synthesizing), it looks like I worried too much.
The namespaces event isn't processed at all.
So yes, we don't need patch 4 for the optimization.

For the same reason, I used comm_str in patch 3.
It doesn't help the optimization either, but it should be useful in the future.

Anyway, I think I will drop patches 3 & 4 for V3.

Thanks,
Kan

> 
> if that's the case we don't need the change for __hists__add_entry in
> previous patch
> 
> jirka

^ permalink raw reply	[flat|nested] 37+ messages in thread

* RE: [PATCH RFC V2 00/10] perf top optimization
  2017-09-18 13:01   ` Arnaldo Carvalho de Melo
@ 2017-09-18 16:21     ` Liang, Kan
  2017-09-19  8:19     ` Jiri Olsa
  1 sibling, 0 replies; 37+ messages in thread
From: Liang, Kan @ 2017-09-18 16:21 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo, Jiri Olsa
  Cc: peterz, mingo, linux-kernel, jolsa, namhyung, Hunter, Adrian,
	Odzioba, Lukasz, ak



> Em Mon, Sep 18, 2017 at 10:57:08AM +0200, Jiri Olsa escreveu:
> > On Sun, Sep 10, 2017 at 07:23:13PM -0700, kan.liang@intel.com wrote:
> > > From: Kan Liang <kan.liang@intel.com>
> > >
> > > The patch series intends to fix the severe performance issue in
> > > Knights Landing/Mill, when monitoring in heavy load system.
> > > perf top costs a few minutes to show the result, which is
> > > unacceptable.
> > > With the patch series applied, the latency will reduces to several
> > > seconds.
> > >
> > > machine__synthesize_threads and perf_top__mmap_read costs most of
> > > the perf top time (> 99%).
> >
> > looks like this patchset adds locking into code paths used by other
> > single threaded tools and that might be bad for them as noted by Andi
> > in here:
> >
> >   https://marc.info/?l=linux-kernel&m=149031672928989&w=2
> >
> > he proposed solution and it was changed&posted by Arnaldo in here:
> >
> >   https://marc.info/?l=linux-kernel&m=149132267410294&w=2
> >
> > but looks like it never got merged
> >
> > could you please add this or similar code before you add the locking
> > code/overhead in?
> 
> I'm rehashing that patch and adding it on top of what is in my perf/core
> branch, will push soon, for now you can take a look at tmp.perf/core.

Thanks.
I will make V3 based on tmp.perf/core.

Kan

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH RFC V2 00/10] perf top optimization
  2017-09-18 13:01   ` Arnaldo Carvalho de Melo
  2017-09-18 16:21     ` Liang, Kan
@ 2017-09-19  8:19     ` Jiri Olsa
  2017-09-19 12:39       ` Liang, Kan
  1 sibling, 1 reply; 37+ messages in thread
From: Jiri Olsa @ 2017-09-19  8:19 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: kan.liang, peterz, mingo, linux-kernel, jolsa, namhyung,
	adrian.hunter, lukasz.odzioba, ak

On Mon, Sep 18, 2017 at 10:01:00AM -0300, Arnaldo Carvalho de Melo wrote:
> Em Mon, Sep 18, 2017 at 10:57:08AM +0200, Jiri Olsa escreveu:
> > On Sun, Sep 10, 2017 at 07:23:13PM -0700, kan.liang@intel.com wrote:
> > > From: Kan Liang <kan.liang@intel.com>
> > > 
> > > The patch series intends to fix the severe performance issue in
> > > Knights Landing/Mill, when monitoring in heavy load system.
> > > perf top costs a few minutes to show the result, which is
> > > unacceptable.
> > > With the patch series applied, the latency will reduces to
> > > several seconds.
> > > 
> > > machine__synthesize_threads and perf_top__mmap_read costs most of
> > > the perf top time (> 99%).
> > 
> > looks like this patchset adds locking into code paths
> > used by other single threaded tools and that might
> > be bad for them as noted by Andi in here:
> > 
> >   https://marc.info/?l=linux-kernel&m=149031672928989&w=2
> > 
> > he proposed solution and it was changed&posted by Arnaldo in here:
> > 
> >   https://marc.info/?l=linux-kernel&m=149132267410294&w=2
> > 
> > but looks like it never got merged
> > 
> > could you please add this or similar code before you add the
> > locking code/overhead in?
> 
> I'm rehashing that patch and adding it on top of what is in my perf/core
> branch, will push soon, for now you can take a look at tmp.perf/core.

checked the code.. one nit, could we have single threaded by default?
only one command is multithreaded atm, so it could call perf_set_multithreaded
instead of all the currently affected commands calling perf_set_singlethreaded

other than that it looks ok

thanks,
jirka

^ permalink raw reply	[flat|nested] 37+ messages in thread

* RE: [PATCH RFC V2 00/10] perf top optimization
  2017-09-19  8:19     ` Jiri Olsa
@ 2017-09-19 12:39       ` Liang, Kan
  2017-09-19 14:24         ` Arnaldo Carvalho de Melo
  0 siblings, 1 reply; 37+ messages in thread
From: Liang, Kan @ 2017-09-19 12:39 UTC (permalink / raw)
  To: Jiri Olsa, Arnaldo Carvalho de Melo
  Cc: peterz, mingo, linux-kernel, jolsa, namhyung, Hunter, Adrian,
	Odzioba, Lukasz, ak

> On Mon, Sep 18, 2017 at 10:01:00AM -0300, Arnaldo Carvalho de Melo wrote:
> > Em Mon, Sep 18, 2017 at 10:57:08AM +0200, Jiri Olsa escreveu:
> > > On Sun, Sep 10, 2017 at 07:23:13PM -0700, kan.liang@intel.com wrote:
> > > > From: Kan Liang <kan.liang@intel.com>
> > > >
> > > > The patch series intends to fix the severe performance issue in
> > > > Knights Landing/Mill, when monitoring in heavy load system.
> > > > perf top costs a few minutes to show the result, which is
> > > > unacceptable.
> > > > With the patch series applied, the latency will reduces to several
> > > > seconds.
> > > >
> > > > machine__synthesize_threads and perf_top__mmap_read costs most
> of
> > > > the perf top time (> 99%).
> > >
> > > looks like this patchset adds locking into code paths used by other
> > > single threaded tools and that might be bad for them as noted by
> > > Andi in here:
> > >
> > >   https://marc.info/?l=linux-kernel&m=149031672928989&w=2
> > >
> > > he proposed solution and it was changed&posted by Arnaldo in here:
> > >
> > >   https://marc.info/?l=linux-kernel&m=149132267410294&w=2
> > >
> > > but looks like it never got merged
> > >
> > > could you please add this or similar code before you add the locking
> > > code/overhead in?
> >
> > I'm rehashing that patch and adding it on top of what is in my
> > perf/core branch, will push soon, for now you can take a look at
> tmp.perf/core.
> 
> checked the code.. one nit, could we have single threaded by default?
> only one command is multithreaded atm, so it could call perf_set_multithreaded
> instead of all the currently affected commands calling perf_set_singlethreaded

I agree with single threaded as the default setting. I also think we need both
functions, perf_set_multithreaded and perf_set_singlethreaded.
A perf tool will probably be partly single threaded and partly multithreaded,
e.g. the perf top optimization: only the event synthesizing code is
multithreaded. So we have to set multithreaded first, then change it back
to single threaded.
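
Roughly like the sketch below. (The helper name is made up and the
machine__synthesize_threads() argument list is only approximated; the
nr_threads_synthesize parameter is the new knob from patches 08-09.)

static int synthesize_with_threads(struct machine *machine, struct target *target,
				   struct thread_map *threads,
				   unsigned int proc_map_timeout,
				   unsigned int nr_threads_synthesize)
{
	int err;

	/* only the event synthesizing phase runs with multiple threads */
	perf_set_multithreaded();
	err = machine__synthesize_threads(machine, target, threads, false,
					  proc_map_timeout,
					  nr_threads_synthesize);
	/* back to the single threaded default for the rest of perf top */
	perf_set_singlethreaded();

	return err;
}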

Thanks,
Kan

> 
> other than that it looks ok
> 
> thanks,
> jirka

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH RFC V2 00/10] perf top optimization
  2017-09-19 12:39       ` Liang, Kan
@ 2017-09-19 14:24         ` Arnaldo Carvalho de Melo
  0 siblings, 0 replies; 37+ messages in thread
From: Arnaldo Carvalho de Melo @ 2017-09-19 14:24 UTC (permalink / raw)
  To: Liang, Kan
  Cc: Jiri Olsa, peterz, mingo, linux-kernel, jolsa, namhyung, Hunter,
	Adrian, Odzioba, Lukasz, ak

Em Tue, Sep 19, 2017 at 12:39:47PM +0000, Liang, Kan escreveu:
> > On Mon, Sep 18, 2017 at 10:01:00AM -0300, Arnaldo Carvalho de Melo wrote:
> > > Em Mon, Sep 18, 2017 at 10:57:08AM +0200, Jiri Olsa escreveu:
> > > > he proposed solution and it was changed&posted by Arnaldo in here:

> > > >   https://marc.info/?l=linux-kernel&m=149132267410294&w=2

> > > > but looks like it never got merged

> > > > could you please add this or similar code before you add the locking
> > > > code/overhead in?

> > > I'm rehashing that patch and adding it on top of what is in my
> > > perf/core branch, will push soon, for now you can take a look at
> > tmp.perf/core.

> > checked the code.. one nit, could we have single threaded by default?
> > only one command is multithreaded atm, so it could call perf_set_multithreaded
> > instead of all the currently affected commands calling perf_set_singlethreaded

> I agree with single threaded as the default setting. I also think we need both
> functions, perf_set_multithreaded and perf_set_singlethreaded.
> A perf tool will probably be partly single threaded and partly multithreaded,
> e.g. the perf top optimization: only the event synthesizing code is
> multithreaded. So we have to set multithreaded first, then change it back
> to single threaded.

Ok, agreed with both of you, i.e. I'll make it single threaded by
default and provide both functions. This way we get a default that is
what most tools use, plus a way to select multithreaded mode when it
is needed and then go back to single threaded.
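
A minimal sketch of that pair, assuming a plain global flag flipped only
from the main thread is enough:

bool perf_singlethreaded = true;	/* the default for all tools */

void perf_set_singlethreaded(void)
{
	perf_singlethreaded = true;
}

void perf_set_multithreaded(void)
{
	perf_singlethreaded = false;
}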

- Arnaldo

^ permalink raw reply	[flat|nested] 37+ messages in thread

* [tip:perf/core] perf machine: Use hashtable for machine threads
  2017-09-11  2:23 ` [PATCH RFC V2 01/10] perf tools: hashtable for machine threads kan.liang
  2017-09-13 13:29   ` Arnaldo Carvalho de Melo
@ 2017-09-22 16:42   ` tip-bot for Kan Liang
  1 sibling, 0 replies; 37+ messages in thread
From: tip-bot for Kan Liang @ 2017-09-22 16:42 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: mingo, ak, lukasz.odzioba, kan.liang, peterz, jolsa,
	linux-kernel, hpa, namhyung, adrian.hunter, acme, tglx

Commit-ID:  91e467bc568f15da2eac688e131010601e889184
Gitweb:     http://git.kernel.org/tip/91e467bc568f15da2eac688e131010601e889184
Author:     Kan Liang <kan.liang@intel.com>
AuthorDate: Sun, 10 Sep 2017 19:23:14 -0700
Committer:  Arnaldo Carvalho de Melo <acme@redhat.com>
CommitDate: Mon, 18 Sep 2017 09:40:19 -0300

perf machine: Use hashtable for machine threads

To process any event, perf needs to find the thread in the machine first.
The machine maintains an rb tree to store all threads. The rb tree is
protected by a rw lock.

That is not a problem for the current perf, which processes events serially.
However, it becomes a scalability bottleneck when events are processed in
parallel, especially on a heavily loaded system with many threads.

Introduce a hashtable to divide the big rb tree into many small rb trees
for threads. The index is thread id % hashtable size. This reduces the
lock contention.
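
For illustration, the bucket lookup is just the machine__threads() helper
added to machine.h below; the (unsigned int) cast makes tid == -1 map to
the last bucket ((unsigned int)-1 % 256 == 255):

#define THREADS__TABLE_BITS	8
#define THREADS__TABLE_SIZE	(1 << THREADS__TABLE_BITS)

static inline struct threads *machine__threads(struct machine *machine, pid_t tid)
{
	/* Cast it to handle tid == -1 */
	return &machine->threads[(unsigned int)tid % THREADS__TABLE_SIZE];
}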

Committer notes:

Renamed some variables and function names to reduce semantic confusion:

  'struct threads' pointers: thread -> threads
  threads hashtable index: tid -> hash_bucket
  struct threads *machine__thread() -> machine__threads()
  Cast tid to (unsigned int) to handle -1 in machine__threads() (Kan Liang)

Signed-off-by: Kan Liang <kan.liang@intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Lukasz Odzioba <lukasz.odzioba@intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1505096603-215017-2-git-send-email-kan.liang@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
 tools/perf/builtin-trace.c  |  19 ++++---
 tools/perf/util/machine.c   | 136 +++++++++++++++++++++++++++-----------------
 tools/perf/util/machine.h   |  23 ++++++--
 tools/perf/util/rb_resort.h |   5 +-
 4 files changed, 117 insertions(+), 66 deletions(-)

diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
index 771ddab..ee8c6e8 100644
--- a/tools/perf/builtin-trace.c
+++ b/tools/perf/builtin-trace.c
@@ -2730,20 +2730,23 @@ DEFINE_RESORT_RB(threads, (thread__nr_events(a->thread->priv) < thread__nr_event
 
 static size_t trace__fprintf_thread_summary(struct trace *trace, FILE *fp)
 {
-	DECLARE_RESORT_RB_MACHINE_THREADS(threads, trace->host);
 	size_t printed = trace__fprintf_threads_header(fp);
 	struct rb_node *nd;
+	int i;
 
-	if (threads == NULL) {
-		fprintf(fp, "%s", "Error sorting output by nr_events!\n");
-		return 0;
-	}
+	for (i = 0; i < THREADS__TABLE_SIZE; i++) {
+		DECLARE_RESORT_RB_MACHINE_THREADS(threads, trace->host, i);
 
-	resort_rb__for_each_entry(nd, threads)
-		printed += trace__fprintf_thread(fp, threads_entry->thread, trace);
+		if (threads == NULL) {
+			fprintf(fp, "%s", "Error sorting output by nr_events!\n");
+			return 0;
+		}
 
-	resort_rb__delete(threads);
+		resort_rb__for_each_entry(nd, threads)
+			printed += trace__fprintf_thread(fp, threads_entry->thread, trace);
 
+		resort_rb__delete(threads);
+	}
 	return printed;
 }
 
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index df70936..f4f9267 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -33,6 +33,20 @@ static void dsos__init(struct dsos *dsos)
 	pthread_rwlock_init(&dsos->lock, NULL);
 }
 
+static void machine__threads_init(struct machine *machine)
+{
+	int i;
+
+	for (i = 0; i < THREADS__TABLE_SIZE; i++) {
+		struct threads *threads = &machine->threads[i];
+		threads->entries = RB_ROOT;
+		pthread_rwlock_init(&threads->lock, NULL);
+		threads->nr = 0;
+		INIT_LIST_HEAD(&threads->dead);
+		threads->last_match = NULL;
+	}
+}
+
 int machine__init(struct machine *machine, const char *root_dir, pid_t pid)
 {
 	memset(machine, 0, sizeof(*machine));
@@ -40,11 +54,7 @@ int machine__init(struct machine *machine, const char *root_dir, pid_t pid)
 	RB_CLEAR_NODE(&machine->rb_node);
 	dsos__init(&machine->dsos);
 
-	machine->threads = RB_ROOT;
-	pthread_rwlock_init(&machine->threads_lock, NULL);
-	machine->nr_threads = 0;
-	INIT_LIST_HEAD(&machine->dead_threads);
-	machine->last_match = NULL;
+	machine__threads_init(machine);
 
 	machine->vdso_info = NULL;
 	machine->env = NULL;
@@ -141,27 +151,37 @@ static void dsos__exit(struct dsos *dsos)
 void machine__delete_threads(struct machine *machine)
 {
 	struct rb_node *nd;
+	int i;
 
-	pthread_rwlock_wrlock(&machine->threads_lock);
-	nd = rb_first(&machine->threads);
-	while (nd) {
-		struct thread *t = rb_entry(nd, struct thread, rb_node);
+	for (i = 0; i < THREADS__TABLE_SIZE; i++) {
+		struct threads *threads = &machine->threads[i];
+		pthread_rwlock_wrlock(&threads->lock);
+		nd = rb_first(&threads->entries);
+		while (nd) {
+			struct thread *t = rb_entry(nd, struct thread, rb_node);
 
-		nd = rb_next(nd);
-		__machine__remove_thread(machine, t, false);
+			nd = rb_next(nd);
+			__machine__remove_thread(machine, t, false);
+		}
+		pthread_rwlock_unlock(&threads->lock);
 	}
-	pthread_rwlock_unlock(&machine->threads_lock);
 }
 
 void machine__exit(struct machine *machine)
 {
+	int i;
+
 	machine__destroy_kernel_maps(machine);
 	map_groups__exit(&machine->kmaps);
 	dsos__exit(&machine->dsos);
 	machine__exit_vdso(machine);
 	zfree(&machine->root_dir);
 	zfree(&machine->current_tid);
-	pthread_rwlock_destroy(&machine->threads_lock);
+
+	for (i = 0; i < THREADS__TABLE_SIZE; i++) {
+		struct threads *threads = &machine->threads[i];
+		pthread_rwlock_destroy(&threads->lock);
+	}
 }
 
 void machine__delete(struct machine *machine)
@@ -382,7 +402,8 @@ static struct thread *____machine__findnew_thread(struct machine *machine,
 						  pid_t pid, pid_t tid,
 						  bool create)
 {
-	struct rb_node **p = &machine->threads.rb_node;
+	struct threads *threads = machine__threads(machine, tid);
+	struct rb_node **p = &threads->entries.rb_node;
 	struct rb_node *parent = NULL;
 	struct thread *th;
 
@@ -391,14 +412,14 @@ static struct thread *____machine__findnew_thread(struct machine *machine,
 	 * so most of the time we dont have to look up
 	 * the full rbtree:
 	 */
-	th = machine->last_match;
+	th = threads->last_match;
 	if (th != NULL) {
 		if (th->tid == tid) {
 			machine__update_thread_pid(machine, th, pid);
 			return thread__get(th);
 		}
 
-		machine->last_match = NULL;
+		threads->last_match = NULL;
 	}
 
 	while (*p != NULL) {
@@ -406,7 +427,7 @@ static struct thread *____machine__findnew_thread(struct machine *machine,
 		th = rb_entry(parent, struct thread, rb_node);
 
 		if (th->tid == tid) {
-			machine->last_match = th;
+			threads->last_match = th;
 			machine__update_thread_pid(machine, th, pid);
 			return thread__get(th);
 		}
@@ -423,7 +444,7 @@ static struct thread *____machine__findnew_thread(struct machine *machine,
 	th = thread__new(pid, tid);
 	if (th != NULL) {
 		rb_link_node(&th->rb_node, parent, p);
-		rb_insert_color(&th->rb_node, &machine->threads);
+		rb_insert_color(&th->rb_node, &threads->entries);
 
 		/*
 		 * We have to initialize map_groups separately
@@ -434,7 +455,7 @@ static struct thread *____machine__findnew_thread(struct machine *machine,
 		 * leader and that would screwed the rb tree.
 		 */
 		if (thread__init_map_groups(th, machine)) {
-			rb_erase_init(&th->rb_node, &machine->threads);
+			rb_erase_init(&th->rb_node, &threads->entries);
 			RB_CLEAR_NODE(&th->rb_node);
 			thread__put(th);
 			return NULL;
@@ -443,8 +464,8 @@ static struct thread *____machine__findnew_thread(struct machine *machine,
 		 * It is now in the rbtree, get a ref
 		 */
 		thread__get(th);
-		machine->last_match = th;
-		++machine->nr_threads;
+		threads->last_match = th;
+		++threads->nr;
 	}
 
 	return th;
@@ -458,21 +479,24 @@ struct thread *__machine__findnew_thread(struct machine *machine, pid_t pid, pid
 struct thread *machine__findnew_thread(struct machine *machine, pid_t pid,
 				       pid_t tid)
 {
+	struct threads *threads = machine__threads(machine, tid);
 	struct thread *th;
 
-	pthread_rwlock_wrlock(&machine->threads_lock);
+	pthread_rwlock_wrlock(&threads->lock);
 	th = __machine__findnew_thread(machine, pid, tid);
-	pthread_rwlock_unlock(&machine->threads_lock);
+	pthread_rwlock_unlock(&threads->lock);
 	return th;
 }
 
 struct thread *machine__find_thread(struct machine *machine, pid_t pid,
 				    pid_t tid)
 {
+	struct threads *threads = machine__threads(machine, tid);
 	struct thread *th;
-	pthread_rwlock_rdlock(&machine->threads_lock);
+
+	pthread_rwlock_rdlock(&threads->lock);
 	th =  ____machine__findnew_thread(machine, pid, tid, false);
-	pthread_rwlock_unlock(&machine->threads_lock);
+	pthread_rwlock_unlock(&threads->lock);
 	return th;
 }
 
@@ -719,21 +743,24 @@ size_t machine__fprintf_vmlinux_path(struct machine *machine, FILE *fp)
 
 size_t machine__fprintf(struct machine *machine, FILE *fp)
 {
-	size_t ret;
 	struct rb_node *nd;
+	size_t ret;
+	int i;
 
-	pthread_rwlock_rdlock(&machine->threads_lock);
-
-	ret = fprintf(fp, "Threads: %u\n", machine->nr_threads);
+	for (i = 0; i < THREADS__TABLE_SIZE; i++) {
+		struct threads *threads = &machine->threads[i];
+		pthread_rwlock_rdlock(&threads->lock);
 
-	for (nd = rb_first(&machine->threads); nd; nd = rb_next(nd)) {
-		struct thread *pos = rb_entry(nd, struct thread, rb_node);
+		ret = fprintf(fp, "Threads: %u\n", threads->nr);
 
-		ret += thread__fprintf(pos, fp);
-	}
+		for (nd = rb_first(&threads->entries); nd; nd = rb_next(nd)) {
+			struct thread *pos = rb_entry(nd, struct thread, rb_node);
 
-	pthread_rwlock_unlock(&machine->threads_lock);
+			ret += thread__fprintf(pos, fp);
+		}
 
+		pthread_rwlock_unlock(&threads->lock);
+	}
 	return ret;
 }
 
@@ -1479,23 +1506,25 @@ out_problem:
 
 static void __machine__remove_thread(struct machine *machine, struct thread *th, bool lock)
 {
-	if (machine->last_match == th)
-		machine->last_match = NULL;
+	struct threads *threads = machine__threads(machine, th->tid);
+
+	if (threads->last_match == th)
+		threads->last_match = NULL;
 
 	BUG_ON(refcount_read(&th->refcnt) == 0);
 	if (lock)
-		pthread_rwlock_wrlock(&machine->threads_lock);
-	rb_erase_init(&th->rb_node, &machine->threads);
+		pthread_rwlock_wrlock(&threads->lock);
+	rb_erase_init(&th->rb_node, &threads->entries);
 	RB_CLEAR_NODE(&th->rb_node);
-	--machine->nr_threads;
+	--threads->nr;
 	/*
 	 * Move it first to the dead_threads list, then drop the reference,
 	 * if this is the last reference, then the thread__delete destructor
 	 * will be called and we will remove it from the dead_threads list.
 	 */
-	list_add_tail(&th->node, &machine->dead_threads);
+	list_add_tail(&th->node, &threads->dead);
 	if (lock)
-		pthread_rwlock_unlock(&machine->threads_lock);
+		pthread_rwlock_unlock(&threads->lock);
 	thread__put(th);
 }
 
@@ -2140,21 +2169,26 @@ int machine__for_each_thread(struct machine *machine,
 			     int (*fn)(struct thread *thread, void *p),
 			     void *priv)
 {
+	struct threads *threads;
 	struct rb_node *nd;
 	struct thread *thread;
 	int rc = 0;
+	int i;
 
-	for (nd = rb_first(&machine->threads); nd; nd = rb_next(nd)) {
-		thread = rb_entry(nd, struct thread, rb_node);
-		rc = fn(thread, priv);
-		if (rc != 0)
-			return rc;
-	}
+	for (i = 0; i < THREADS__TABLE_SIZE; i++) {
+		threads = &machine->threads[i];
+		for (nd = rb_first(&threads->entries); nd; nd = rb_next(nd)) {
+			thread = rb_entry(nd, struct thread, rb_node);
+			rc = fn(thread, priv);
+			if (rc != 0)
+				return rc;
+		}
 
-	list_for_each_entry(thread, &machine->dead_threads, node) {
-		rc = fn(thread, priv);
-		if (rc != 0)
-			return rc;
+		list_for_each_entry(thread, &threads->dead, node) {
+			rc = fn(thread, priv);
+			if (rc != 0)
+				return rc;
+		}
 	}
 	return rc;
 }
diff --git a/tools/perf/util/machine.h b/tools/perf/util/machine.h
index 3cdb134..fe2f058 100644
--- a/tools/perf/util/machine.h
+++ b/tools/perf/util/machine.h
@@ -23,6 +23,17 @@ extern const char *ref_reloc_sym_names[];
 
 struct vdso_info;
 
+#define THREADS__TABLE_BITS	8
+#define THREADS__TABLE_SIZE	(1 << THREADS__TABLE_BITS)
+
+struct threads {
+	struct rb_root	  entries;
+	pthread_rwlock_t  lock;
+	unsigned int	  nr;
+	struct list_head  dead;
+	struct thread	  *last_match;
+};
+
 struct machine {
 	struct rb_node	  rb_node;
 	pid_t		  pid;
@@ -30,11 +41,7 @@ struct machine {
 	bool		  comm_exec;
 	bool		  kptr_restrict_warned;
 	char		  *root_dir;
-	struct rb_root	  threads;
-	pthread_rwlock_t  threads_lock;
-	unsigned int	  nr_threads;
-	struct list_head  dead_threads;
-	struct thread	  *last_match;
+	struct threads    threads[THREADS__TABLE_SIZE];
 	struct vdso_info  *vdso_info;
 	struct perf_env   *env;
 	struct dsos	  dsos;
@@ -48,6 +55,12 @@ struct machine {
 	};
 };
 
+static inline struct threads *machine__threads(struct machine *machine, pid_t tid)
+{
+	/* Cast it to handle tid == -1 */
+	return &machine->threads[(unsigned int)tid % THREADS__TABLE_SIZE];
+}
+
 static inline
 struct map *__machine__kernel_map(struct machine *machine, enum map_type type)
 {
diff --git a/tools/perf/util/rb_resort.h b/tools/perf/util/rb_resort.h
index 808cc45..b30746f 100644
--- a/tools/perf/util/rb_resort.h
+++ b/tools/perf/util/rb_resort.h
@@ -143,7 +143,8 @@ struct __name##_sorted *__name = __name##_sorted__new
 				  __ilist->rblist.nr_entries)
 
 /* For 'struct machine->threads' */
-#define DECLARE_RESORT_RB_MACHINE_THREADS(__name, __machine)			\
-	DECLARE_RESORT_RB(__name)(&__machine->threads, __machine->nr_threads)
+#define DECLARE_RESORT_RB_MACHINE_THREADS(__name, __machine, hash_bucket)	\
+	DECLARE_RESORT_RB(__name)(&__machine->threads[hash_bucket].entries,	\
+				  __machine->threads[hash_bucket].nr)
 
 #endif /* _PERF_RESORT_RB_H_ */

^ permalink raw reply related	[flat|nested] 37+ messages in thread

end of thread, other threads:[~2017-09-22 16:46 UTC | newest]

Thread overview: 37+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-09-11  2:23 [PATCH RFC V2 00/10] perf top optimization kan.liang
2017-09-11  2:23 ` [PATCH RFC V2 01/10] perf tools: hashtable for machine threads kan.liang
2017-09-13 13:29   ` Arnaldo Carvalho de Melo
2017-09-22 16:42   ` [tip:perf/core] perf machine: Use " tip-bot for Kan Liang
2017-09-11  2:23 ` [PATCH RFC V2 02/10] perf tools: using scandir to replace readdir kan.liang
2017-09-13 15:19   ` Arnaldo Carvalho de Melo
2017-09-11  2:23 ` [PATCH RFC V2 03/10] petf tools: using comm_str to replace comm in hist_entry kan.liang
2017-09-13 15:21   ` Arnaldo Carvalho de Melo
2017-09-18  8:38     ` Jiri Olsa
2017-09-11  2:23 ` [PATCH RFC V2 04/10] petf tools: introduce a new function to set namespaces id kan.liang
2017-09-11  2:23 ` [PATCH RFC V2 05/10] perf tools: lock to protect thread list kan.liang
2017-09-18  8:50   ` Jiri Olsa
2017-09-18 16:18     ` Liang, Kan
2017-09-11  2:23 ` [PATCH RFC V2 06/10] perf tools: lock to protect comm_str rb tree kan.liang
2017-09-11  2:23 ` [PATCH RFC V2 07/10] perf tools: change machine comm_exec type to atomic kan.liang
2017-09-13 15:24   ` Arnaldo Carvalho de Melo
2017-09-15 20:05     ` Liang, Kan
2017-09-18 11:30       ` Jiri Olsa
2017-09-11  2:23 ` [PATCH RFC V2 08/10] perf top: implement multithreading for perf_event__synthesize_threads kan.liang
2017-09-18 11:24   ` Jiri Olsa
2017-09-11  2:23 ` [PATCH RFC V2 09/10] perf top: add option to set the number of thread for event synthesize kan.liang
2017-09-11  2:23 ` [PATCH RFC V2 10/10] perf top: switch back to overwrite mode kan.liang
2017-09-13 15:25 ` [PATCH RFC V2 00/10] perf top optimization Arnaldo Carvalho de Melo
2017-09-13 15:29   ` Liang, Kan
2017-09-13 15:38     ` Arnaldo Carvalho de Melo
2017-09-14 21:19       ` Arnaldo Carvalho de Melo
2017-09-15 15:11         ` Liang, Kan
2017-09-15 17:26           ` Arnaldo Carvalho de Melo
2017-09-15 17:29             ` Liang, Kan
2017-09-15 18:24               ` Arnaldo Carvalho de Melo
2017-09-15 18:26                 ` Liang, Kan
2017-09-18  8:57 ` Jiri Olsa
2017-09-18 13:01   ` Arnaldo Carvalho de Melo
2017-09-18 16:21     ` Liang, Kan
2017-09-19  8:19     ` Jiri Olsa
2017-09-19 12:39       ` Liang, Kan
2017-09-19 14:24         ` Arnaldo Carvalho de Melo
