* perf synthesized mmap timeouts
@ 2018-10-28  4:41 David Miller
  2018-10-29 13:46 ` Liang, Kan
From: David Miller @ 2018-10-28  4:41 UTC (permalink / raw)
  To: acme; +Cc: linux-kernel, kan.liang


If I understand the commit message for:

commit 8cc42de736b617827a4e7664fb8d7a325bc125bc
Author: Kan Liang <kan.liang@intel.com>
Date:   Thu Jan 18 13:26:32 2018 -0800

    perf top: Check the latency of perf_top__mmap_read()

properly, the problem is that a malicious or out of control
app can be doing endless mmaps causing perf to loop forever
processing the /proc/$PID/maps file.

But that is not what this commit is handling at all.

It is instead applying a large hammer which quits if it is taking a
long time to process the maps, not if the process's mmap list is
growing endlessly while we process it.

This triggers any time I run perf top on a fully loaded system, making
perf less useful than it should be.

And it triggers simply because the perf synthesize threads have to
share the cpu with the workload already running.

So it takes more than half a second to process emacs's 527 maps when
the number of running processes is ~NCPUS?  Big deal.  We should let
it finish....

The tradeoff chosen here is really bad.

Guess what happens if you don't have maps for a given process?

What happens is that for every single sample we get within that range,
we get a completely unique histogram entry.

This means potentially millions and millions of histogram entries
where there should only be a few hundred.

This makes the histogram rbtree huge, and slow to process.

So not only is top unable to provide correct histogram output, it is
also running sluggishly.
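
To illustrate (a rough sketch only, not perf's actual hist code; the
names are made up): with a map, many sample IPs collapse onto one
(dso, symbol) entry, while unresolved samples each become their own
entry keyed by raw address, so the rbtree grows with the sample count
instead of the symbol count.

#include <string.h>

/* Hypothetical histogram key: dso/sym are set when the map was
 * resolved, otherwise only the raw address is available. */
struct hist_key {
	const char *dso;
	const char *sym;
	unsigned long addr;
};

static int hist_key_cmp(const struct hist_key *a, const struct hist_key *b)
{
	if (a->dso && b->dso) {
		/* Resolved: aggregate per (dso, symbol). */
		int c = strcmp(a->dso, b->dso);

		return c ? c : strcmp(a->sym, b->sym);
	}
	/* Unresolved: every unique address is a separate entry. */
	return (a->addr > b->addr) - (a->addr < b->addr);
}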

A way to mitigate the actual problem would be to snapshot the maps
file into a large buffer, if possible.  We can get the full contents
faster than the process in question can make more maps.  At most we
will do one additional read at the end if they were able to sneak in
one new mmap during the initial read.

No timeout necessary.  We have the complete maps file, our processing
time is therefore bounded.
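
A minimal sketch of that snapshot idea (hypothetical code, not a
patch; the names and buffer sizes are made up): slurp /proc/<pid>/maps
into one buffer with plain read(2), then parse from memory, so the
processing time is bounded by the size of the snapshot no matter what
the target process does in the meantime.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

/* Read the whole maps file into a single heap buffer; returns the
 * buffer (not NUL-terminated) and its length, or NULL on error. */
static char *snapshot_maps(pid_t pid, size_t *lenp)
{
	size_t cap = 64 * 1024, len = 0;
	char *buf = malloc(cap);
	char path[64];
	ssize_t n;
	int fd;

	if (!buf)
		return NULL;

	snprintf(path, sizeof(path), "/proc/%d/maps", (int)pid);
	fd = open(path, O_RDONLY);
	if (fd < 0) {
		free(buf);
		return NULL;
	}

	/* Keep reading until EOF, growing the buffer as needed. */
	while ((n = read(fd, buf + len, cap - len)) > 0) {
		len += n;
		if (len == cap) {
			char *tmp = realloc(buf, cap *= 2);

			if (!tmp) {
				free(buf);
				close(fd);
				return NULL;
			}
			buf = tmp;
		}
	}
	close(fd);

	*lenp = len;
	return buf;	/* caller parses lines out of this snapshot */
}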

Thanks.


* Re: perf synthesized mmap timeouts
  2018-10-28  4:41 perf synthesized mmap timeouts David Miller
@ 2018-10-29 13:46 ` Liang, Kan
From: Liang, Kan @ 2018-10-29 13:46 UTC (permalink / raw)
  To: David Miller, acme; +Cc: linux-kernel, kan.liang



On 10/28/2018 12:41 AM, David Miller wrote:
> 
> If I understand the commit message for:
> 
> commit 8cc42de736b617827a4e7664fb8d7a325bc125bc
> Author: Kan Liang <kan.liang@intel.com>
> Date:   Thu Jan 18 13:26:32 2018 -0800
> 
>      perf top: Check the latency of perf_top__mmap_read()
> 
> properly, the problem is that a malicious or out of control
> app can be doing endless mmaps causing perf to loop forever
> processing the /proc/$PID/maps file.
>

No, perf_top__mmap_read() is used to read and process all available
samples from each ring buffer. It is called repeatedly until perf top
exits.

On a fully loaded system, the processing time of that function can be
very long, even much longer than the refresh interval of the display
thread. When that happens, only stale data can be shown, or, even
worse, nothing at all.
For example, on the Knights Landing/Knights Mill platforms the
processing time can reach tens of minutes under heavy load (e.g. a
kernel build), because those platforms have a large number of CPUs
(> 200) but weak per-core computational capability (Atom cores).
Nothing is shown on the screen during that period.
That's why we added a warning here, to give the user a hint.
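
As a rough illustration of the check being described (a hedged sketch,
not the actual perf top code; the names are made up): time one full
pass over the ring buffers and warn when it exceeds the display
refresh interval, because in that case the screen can only show stale
data.

#include <stdio.h>
#include <time.h>

#define NSEC_PER_SEC 1000000000ULL

static unsigned long long now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (unsigned long long)ts.tv_sec * NSEC_PER_SEC + ts.tv_nsec;
}

/* delay_secs stands in for perf top's refresh interval (-d);
 * read_all() stands in for one perf_top__mmap_read() pass. */
static void mmap_read_with_latency_check(void (*read_all)(void *arg),
					 void *arg, int delay_secs)
{
	unsigned long long start = now_ns(), took;

	read_all(arg);		/* drain every ring buffer once */

	took = now_ns() - start;
	if (took > (unsigned long long)delay_secs * NSEC_PER_SEC)
		fprintf(stderr,
			"reading the ring buffers took %llu ms, longer than "
			"the %d s refresh interval; the display can only "
			"show stale data\n",
			took / 1000000ULL, delay_secs);
}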

If it is annoying in your case, I think you hit a similar problem. You
may want to follow the hint and refresh with the latest data. That
said, I agree that we need a better way to deliver the hint.

Thanks,
Kan

> But that is not what this commit is handling at all.
> 
> It is instead applying a large hammer which quits if it is taking a
> long time to process the maps, not if the process's mmap list is
> growing endlessly while we process it.
> 
> This triggers any time I run perf top on a fully loaded system, making
> perf less useful than it should be.
> 
> And it triggers simply because the perf synthesize threads have to
> share the cpu with the workload already running.
> 
> So it takes more than half a second to process emacs's 527 maps when
> the number of running processes is ~NCPUS?  Big deal.  We should let
> it finish....
> 
> The tradeoff chosen here is really bad.
> 
> Guess what happens if you don't have maps for a given process?
> 
> What happens is that for every single sample we get within that range,
> we get a completely unique histogram entry.
> 
> This means potentially millions and millions of histogram entries
> where there should only be a few hundred.
> 
> This makes the histogram rbtree huge, and slow to process.
> 
> So not only is top unable to provide correct histogram output, it is
> also running sluggishly.
> 
> A way to mitigate the actual problem would be to snapshot the maps
> file into a large buffer, if possible.  We can get the full contents
> faster than the process in question can make more maps.  At most we
> will do one additional read at the end if they were able to sneak in
> one new mmap during the initial read.
> 
> No timeout necessary.  We have the complete maps file, our processing
> time is therefore bounded.
> 
> Thanks.
> 

