* Investigate busy ceph-msgr worker thread
@ 2020-12-03 11:01 Stefan Kooman
  2020-12-03 16:46 ` Jeff Layton
  0 siblings, 1 reply; 4+ messages in thread
From: Stefan Kooman @ 2020-12-03 11:01 UTC (permalink / raw)
  To: Ceph Development

Hi,

We have an rsync workload on a CephFS Linux kernel client
(5.4.0-53-generic) that appears to be limited by a single ceph-msgr
worker thread (running at close to 100% CPU). We would like to
investigate what this thread is so busy with. What would be the easiest
way to do that? On a related note: what would be the best way to scale
CephFS client performance for a single process (if that is possible at all)?

Thanks for any pointers.

Gr. Stefan


* Re: Investigate busy ceph-msgr worker thread
  2020-12-03 11:01 Investigate busy ceph-msgr worker thread Stefan Kooman
@ 2020-12-03 16:46 ` Jeff Layton
  2020-12-04 19:49   ` Stefan Kooman
  0 siblings, 1 reply; 4+ messages in thread
From: Jeff Layton @ 2020-12-03 16:46 UTC (permalink / raw)
  To: Stefan Kooman, Ceph Development

On Thu, 2020-12-03 at 12:01 +0100, Stefan Kooman wrote:
> Hi,
> 
> We have an rsync workload on a CephFS Linux kernel client
> (5.4.0-53-generic) that appears to be limited by a single ceph-msgr
> worker thread (running at close to 100% CPU). We would like to
> investigate what this thread is so busy with. What would be the easiest
> way to do that? On a related note: what would be the best way to scale
> CephFS client performance for a single process (if that is possible at all)?
> 
> Thanks for any pointers.
> 

Usually kernel profiling (à la perf) is the way to go about this. You
may also want to consider trying more recent kernels and see if they
fare any better. With a new enough MDS and kernel, you can try enabling
async creates as well, and see whether that helps performance.
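
A rough sketch of the profiling step (the pid below is a placeholder;
identify the busy kworker thread first, e.g. with "top -H"):

  # sample the busy kworker for ~30 seconds and inspect the call graph
  perf record -g -p <pid-of-busy-kworker> -- sleep 30
  perf report --stdio

  # or watch it live
  perf top -g -p <pid-of-busy-kworker>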

As far as optimizing for a single process, there's not a lot you can do,
really.
-- 
Jeff Layton <jlayton@kernel.org>



* Re: Investigate busy ceph-msgr worker thread
  2020-12-03 16:46 ` Jeff Layton
@ 2020-12-04 19:49   ` Stefan Kooman
  2020-12-04 21:13     ` Jeff Layton
  0 siblings, 1 reply; 4+ messages in thread
From: Stefan Kooman @ 2020-12-04 19:49 UTC (permalink / raw)
  To: Jeff Layton, Ceph Development

On 12/3/20 5:46 PM, Jeff Layton wrote:
> On Thu, 2020-12-03 at 12:01 +0100, Stefan Kooman wrote:
>> Hi,
>>
>> We have an rsync workload on a CephFS Linux kernel client
>> (5.4.0-53-generic) that appears to be limited by a single ceph-msgr
>> worker thread (running at close to 100% CPU). We would like to
>> investigate what this thread is so busy with. What would be the easiest
>> way to do that? On a related note: what would be the best way to scale
>> CephFS client performance for a single process (if that is possible at all)?
>>
>> Thanks for any pointers.
>>
> 
> Usually kernel profiling (à la perf) is the way to go about this. You
> may also want to consider trying more recent kernels and see if they
> fare any better. With a new enough MDS and kernel, you can try enabling
> async creates as well, and see whether that helps performance.

The thread is mostly busy with "build_snap_context":


+   94.39%    94.23%  kworker/4:1-cep  [kernel.kallsyms]  [k] build_snap_context

Do I understand correctly that this code checks for any potential
snapshots? Grepping through the Linux CephFS code gives a hit in snap.c.
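
For reference, the hit can be reproduced with a plain grep in a kernel
source tree (assuming the usual fs/ceph/ layout):

  grep -rn build_snap_context fs/ceph/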

Our CephFS filesystem was created on Luminous and upgraded through
Mimic to Nautilus. We have never enabled snapshot support (ceph fs set
cephfs allow_new_snaps true), but the filesystem does seem to support it
(.snap dirs are present). The data rsync is processing contains a lot of
directories, which might explain the amount of time spent in this code path.
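
A quick way to check this (hypothetical mount point, assuming the
filesystem is named "cephfs" and mounted at /mnt/cephfs):

  # filesystem-wide settings/flags, including snapshot-related ones
  ceph fs get cephfs

  # list any snapshots covering a given directory (empty means none)
  ls /mnt/cephfs/some/dir/.snap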

Would this be a plausible explanation?

Thanks,

Stefan


* Re: Investigate busy ceph-msgr worker thread
  2020-12-04 19:49   ` Stefan Kooman
@ 2020-12-04 21:13     ` Jeff Layton
  0 siblings, 0 replies; 4+ messages in thread
From: Jeff Layton @ 2020-12-04 21:13 UTC (permalink / raw)
  To: Stefan Kooman, Ceph Development

On Fri, 2020-12-04 at 20:49 +0100, Stefan Kooman wrote:
> On 12/3/20 5:46 PM, Jeff Layton wrote:
> > On Thu, 2020-12-03 at 12:01 +0100, Stefan Kooman wrote:
> > > Hi,
> > > 
> > > We have an rsync workload on a CephFS Linux kernel client
> > > (5.4.0-53-generic) that appears to be limited by a single ceph-msgr
> > > worker thread (running at close to 100% CPU). We would like to
> > > investigate what this thread is so busy with. What would be the easiest
> > > way to do that? On a related note: what would be the best way to scale
> > > CephFS client performance for a single process (if that is possible at all)?
> > > 
> > > Thanks for any pointers.
> > > 
> > 
> > Usually kernel profiling (à la perf) is the way to go about this. You
> > may also want to consider trying more recent kernels and see if they
> > fare any better. With a new enough MDS and kernel, you can try enabling
> > async creates as well, and see whether that helps performance.
> 
> The thread is mostly busy with "build_snap_context":
> 
> 
> +   94.39%    94.23%  kworker/4:1-cep  [kernel.kallsyms]  [k] build_snap_context
> 
> Do I understand correctly that this code checks for any potential
> snapshots? Grepping through the Linux CephFS code gives a hit in snap.c.
> 
> Our CephFS filesystem was created on Luminous and upgraded through
> Mimic to Nautilus. We have never enabled snapshot support (ceph fs set
> cephfs allow_new_snaps true), but the filesystem does seem to support it
> (.snap dirs are present). The data rsync is processing contains a lot of
> directories, which might explain the amount of time spent in this code path.
> 
> Would this be a plausible explanation?
> 
> Thanks,
> 
> Stefan

Yes, that sounds plausible. You probably want to stop rsync from
recursing down into .snap/ directories altogether if you have it doing
that.
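
A minimal sketch of what that could look like (source and destination
paths here are just placeholders):

  # keep rsync out of the virtual .snap directories
  rsync -a --exclude='.snap/' /mnt/cephfs/src/ /path/to/dest/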
-- 
Jeff Layton <jlayton@kernel.org>


