* [PATCH 0/2 v2] tracing: Have trace_pid_list be a sparse array
@ 2021-09-24 10:29 Steven Rostedt
  2021-09-24 10:29 ` [PATCH 1/2 v2] tracing: Place trace_pid_list logic into abstract functions Steven Rostedt
  2021-09-24 10:29 ` [PATCH 2/2 v2] tracing: Create a sparse bitmask for pid filtering Steven Rostedt
From: Steven Rostedt @ 2021-09-24 10:29 UTC
  To: linux-kernel
  Cc: Ingo Molnar, Andrew Morton, Masami Hiramatsu, Mathieu Desnoyers,
	linux-trace-devel

When the trace_pid_list was created, the default pid_max was 32768.
Creating a bitmask that can hold one bit for all 32768 pids took up 4096
bytes (one page). Having a one page bitmask was not much of a problem, and
that was used for mapping pids. But today, systems are bigger and can run
more tasks, and the default pid_max is now usually set to 4194304, which
means holding one bit for each of that many pids requires 524288 bytes.
Worse yet, pid_max can be set to 2^30 (1073741824, or 1G), which would
take 134217728 bytes (128M) of memory to store this array.
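
For reference, the byte counts above are simply pid_max / 8 (one bit per
pid); a quick standalone check of the arithmetic:

#include <stdio.h>

/* One bit per pid: a bitmask covering pid_max pids needs pid_max / 8 bytes. */
int main(void)
{
	unsigned long pid_max[] = { 32768, 4194304, 1UL << 30 };
	int i;

	for (i = 0; i < 3; i++)
		printf("pid_max %10lu -> %9lu bytes\n",
		       pid_max[i], pid_max[i] / 8);
	return 0;
}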

Since the pid_list array is very sparsely populated, it is a huge waste of
memory to store a bit for every possible pid when most will never be set.

Instead, use a page-table-like scheme to store the bits, which can handle
pids of up to 30 bits.

The pid_list will start out with a 256-entry top-level array indexed by
the 8 most significant bits of the pid. This costs 1K on 32-bit
architectures and 2K on 64-bit. Each entry points to another 256-entry
array (another 1 or 2K) indexed by the next 8 bits, whose entries in turn
point to a 2K-byte bitmask covering the 14 least significant bits (16384
pids), as sketched below.
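
Roughly, that layout looks like the following. This is a minimal sketch
only; the macro and struct member names are made up for illustration and
are not necessarily what the patch itself uses:

/* Illustrative 8/8/14 split of a 30-bit pid; names invented for this sketch. */
#define PID_UPPER1(pid)	(((pid) >> 22) & 0xff)	/* top 8 bits: index into upper[] */
#define PID_UPPER2(pid)	(((pid) >> 14) & 0xff)	/* next 8 bits: index into data[] */
#define PID_LOWER(pid)	((pid) & 0x3fff)	/* low 14 bits: bit in the bitmask */

struct lower_chunk {
	struct lower_chunk	*next;		/* free-list linkage */
	unsigned long		data[2048 / sizeof(long)];	/* 16384 bits = 2K bytes */
};

struct upper_chunk {
	struct upper_chunk	*next;		/* free-list linkage */
	struct lower_chunk	*data[256];	/* indexed by the next 8 bits */
};

struct trace_pid_list {
	struct upper_chunk	*upper[256];	/* indexed by the top 8 bits */
};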
 
When the trace_pid_list is allocated, it will have the 1K/2K top-level
array allocated, and it will also allocate a cache of upper chunks and
lower chunks (default 6 of each). Then when a bit is "set", these chunks
will be pulled from the free list and added to the array. If the free list
gets down to a low watermark (default 2), it will trigger an irq_work
that refills the cache, as sketched below.
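
In pseudo-C, the set-side chunk handling could look like this. A sketch
only: it assumes the trace_pid_list also carries a free-list head, a free
count, and a struct irq_work, and the helper name and CHUNK_REFILL_LEVEL
watermark are hypothetical:

/* Sketch: pull an upper chunk from the free list; kick a refill when low. */
static struct upper_chunk *get_upper_chunk(struct trace_pid_list *pid_list)
{
	struct upper_chunk *chunk = pid_list->upper_free;

	if (!chunk)
		return NULL;		/* cache exhausted; caller must cope */

	pid_list->upper_free = chunk->next;
	pid_list->free_upper_count--;

	/* Below the low watermark (default 2)? Refill the cache via irq_work. */
	if (pid_list->free_upper_count < CHUNK_REFILL_LEVEL)
		irq_work_queue(&pid_list->refill_irqwork);

	return chunk;
}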

On clearing a bit, if the clear causes a chunk's bitmask to become zero,
that chunk will be placed back into the free cache for later use, keeping
the need to allocate new chunks to a minimum.
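
The clear side then just chains the emptied chunk back onto the free list
(again with hypothetical names):

/* Sketch: a lower chunk whose bitmask just went all-zero is cached for reuse. */
static void put_lower_chunk(struct trace_pid_list *pid_list,
			    struct lower_chunk *chunk)
{
	/* The caller only gets here once the chunk's bitmask is all zero. */
	chunk->next = pid_list->lower_free;
	pid_list->lower_free = chunk;
	pid_list->free_lower_count++;
}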

Changes since v1: https://lore.kernel.org/all/20210924033547.939554938@goodmis.org/

 - Changed the bit split from 10,10,12 to 8,8,14 and only check the
   lower 30 bits of the pid, as according to linux/threads.h a pid may
   be at most 30 bits.

 - Added a WARN_ON_ONCE(pid_max > (1 << 30)) to be sure.

Steven Rostedt (VMware) (2):
      tracing: Place trace_pid_list logic into abstract functions
      tracing: Create a sparse bitmask for pid filtering

----
 kernel/trace/Makefile       |   1 +
 kernel/trace/ftrace.c       |   6 +-
 kernel/trace/pid_list.c     | 557 ++++++++++++++++++++++++++++++++++++++++++++
 kernel/trace/trace.c        |  78 +++----
 kernel/trace/trace.h        |  14 +-
 kernel/trace/trace_events.c |   6 +-
 6 files changed, 601 insertions(+), 61 deletions(-)
 create mode 100644 kernel/trace/pid_list.c
