From: Dominique Martinet <asmadeus@codewreck.org>
To: Christian Schoenebeck <linux_oss@crudebyte.com>
Cc: Kent Overstreet <kent.overstreet@gmail.com>,
	linux-kernel@vger.kernel.org,
	v9fs-developer@lists.sourceforge.net,
	Eric Van Hensbergen <ericvh@gmail.com>,
	Latchesar Ionkov <lucho@ionkov.net>
Subject: Re: [PATCH 3/3] 9p: Add mempools for RPCs
Date: Sat, 9 Jul 2022 16:43:47 +0900
Message-ID: <Yskxs4uQ4v8l7Zb9@codewreck.org>
In-Reply-To: <72042449.h6Bkk5LDil@silver>

I've taken the mempool patches to 9p-next

Christian Schoenebeck wrote on Mon, Jul 04, 2022 at 03:56:55PM +0200:
>> (I appreciate the need for testing, but this feels much less risky than
>> the iovec series we've had recently... Famous last words?)
> 
> Got it, consider my famous last words dropped. ;-)

Ok, so I think you won this one...

Well, when testing normally it obviously works fine, and performance is
roughly the same (as expected, since it tries to allocate from the slab
first, and in the normal case that succeeds).
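
For illustration, a rough sketch of that pattern in kernel C (the cache/pool
names and sizes below are made up, not what the patch actually uses):
mempool_alloc() tries the backing slab allocator first and only falls back to
the reserved elements, or sleeps, when that fails.

    #include <linux/init.h>
    #include <linux/mempool.h>
    #include <linux/slab.h>

    /* everything below is hypothetical, for illustration only */
    #define FCALL_DEMO_RESERVE 4
    #define FCALL_DEMO_SIZE 8192

    static struct kmem_cache *fcall_demo_cache;
    static mempool_t *fcall_demo_pool;

    static int __init fcall_demo_init(void)
    {
            fcall_demo_cache = kmem_cache_create("fcall_demo",
                                                 FCALL_DEMO_SIZE, 0, 0, NULL);
            if (!fcall_demo_cache)
                    return -ENOMEM;

            /* keep FCALL_DEMO_RESERVE buffers around for when the slab fails */
            fcall_demo_pool = mempool_create_slab_pool(FCALL_DEMO_RESERVE,
                                                       fcall_demo_cache);
            if (!fcall_demo_pool) {
                    kmem_cache_destroy(fcall_demo_cache);
                    return -ENOMEM;
            }
            return 0;
    }

    static void *fcall_demo_get(void)
    {
            /* tries kmem_cache_alloc() first, then the reserve, then sleeps */
            return mempool_alloc(fcall_demo_pool, GFP_NOFS);
    }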

When I tried pushing it with very low memory, though, I thought it worked
well at first, but I managed to get a bunch of processes stuck in
mempool_alloc with no obvious tid waiting for a reply.
I had the bright idea of using fio with io_uring, and interestingly the
io_uring workers don't show up in ps or /proc/<pid>, but with qemu's gdb
and lx-ps I could find a bunch of iou-wrk-<pid> threads that all have
similar stacks:
    [<0>] mempool_alloc+0x136/0x180
    [<0>] p9_fcall_init+0x63/0x80 [9pnet]
    [<0>] p9_client_prepare_req+0xa9/0x290 [9pnet]
    [<0>] p9_client_rpc+0x64/0x610 [9pnet]
    [<0>] p9_client_write+0xcb/0x210 [9pnet]
    [<0>] v9fs_file_write_iter+0x4d/0xc0 [9p]
    [<0>] io_write+0x129/0x2c0
    [<0>] io_issue_sqe+0xa1/0x25b0
    [<0>] io_wq_submit_work+0x90/0x190
    [<0>] io_worker_handle_work+0x211/0x550
    [<0>] io_wqe_worker+0x2c5/0x340
    [<0>] ret_from_fork+0x1f/0x30

or, and that's the interesting part:
    [<0>] mempool_alloc+0x136/0x180
    [<0>] p9_fcall_init+0x63/0x80 [9pnet]
    [<0>] p9_client_prepare_req+0xa9/0x290 [9pnet]
    [<0>] p9_client_rpc+0x64/0x610 [9pnet]
    [<0>] p9_client_flush+0x81/0xc0 [9pnet]
    [<0>] p9_client_rpc+0x591/0x610 [9pnet]
    [<0>] p9_client_write+0xcb/0x210 [9pnet]
    [<0>] v9fs_file_write_iter+0x4d/0xc0 [9p]
    [<0>] io_write+0x129/0x2c0
    [<0>] io_issue_sqe+0xa1/0x25b0
    [<0>] io_wq_submit_work+0x90/0x190
    [<0>] io_worker_handle_work+0x211/0x550
    [<0>] io_wqe_worker+0x2c5/0x340
    [<0>] ret_from_fork+0x1f/0x30

The problem is these flushes: the same task is already holding a buffer for
the original RPC and tries to get a new one, so it waits for someone else to
free a buffer, and obviously there isn't anyone (I counted 11 flushes pending,
so more than the minimum number of buffers we'd expect from the mempool, and
I don't think we missed any free).
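
To spell the deadlock out, a hypothetical sketch (reusing fcall_demo_pool
from the sketch above; this is not the actual 9p code path): every writer
takes one element for its RPC and then, while still holding it, needs a
second one for the flush. Once all reserved elements are held that way,
every mempool_alloc() sleeps waiting for a free that never comes.

    /*
     * Hypothetical writer, not the real 9p code path.  Assume the slab is
     * exhausted and all reserved buffers are held by writers like this one.
     */
    static void demo_writer(void)
    {
            void *rpc_buf, *flush_buf;

            /* one buffer for the original TWRITE */
            rpc_buf = mempool_alloc(fcall_demo_pool, GFP_NOFS);

            /*
             * The RPC gets interrupted, so we send a TFLUSH, which needs a
             * second buffer while the first one is still held.  With every
             * writer in the same state, nobody ever calls mempool_free(),
             * so all of these allocations sleep forever.
             */
            flush_buf = mempool_alloc(fcall_demo_pool, GFP_NOFS);

            /* never reached under memory pressure */
            mempool_free(flush_buf, fcall_demo_pool);
            mempool_free(rpc_buf, fcall_demo_pool);
    }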


Now I'm not sure what's best here.
The best thing to do would probably be to tell the client it can't use the
mempools for flushes: flushes are rare and will use small buffers with your
smaller-allocations patch, so I bet I wouldn't be able to reproduce this
anymore, but we should still forbid the mempool there just in case.
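
Roughly something like this, as an untested sketch (same includes as the
sketches above; the struct, flag and function names are made up for
illustration and don't match the real fcall init):

    /* illustration only: stand-in types and names, not net/9p's structs */
    struct demo_fcall {
            size_t capacity;
            u8 *sdata;
            bool from_mempool;
    };

    static int demo_fcall_init(struct demo_fcall *fc, size_t alloc_msize,
                               mempool_t *pool, bool may_use_mempool)
    {
            fc->capacity = alloc_msize;
            fc->from_mempool = may_use_mempool;

            if (may_use_mempool)
                    /* normal RPCs: slab first, guaranteed reserve fallback */
                    fc->sdata = mempool_alloc(pool, GFP_NOFS);
            else
                    /* TFLUSH: plain allocation, may fail but cannot deadlock */
                    fc->sdata = kmalloc(alloc_msize, GFP_NOFS);

            return fc->sdata ? 0 : -ENOMEM;
    }

    static void demo_fcall_fini(struct demo_fcall *fc, mempool_t *pool)
    {
            if (fc->from_mempool)
                    mempool_free(fc->sdata, pool);
            else
                    kfree(fc->sdata);
    }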


Anyway, I'm not comfortable with this patch right now; a hang is worse
than an allocation failure warning.


> > > How about I address the already discussed issues and post a v5 of those
> > > patches this week and then we can continue from there?
> > 
> > I would have been happy to rebase your patches 9..12 on top of Kent's
> > this weekend but if you want to refresh them this week we can continue
> > from there, sure.
> 
> I'll rebase them on master and address what we discussed so far. Then we'll 
> see.

FWIW, and regarding the other thread about virtio queue sizes: I was only
considering the later patches with small RPCs for this merge window.

Shall we try to focus on that first, and then revisit the virtio and
mempool patches once that's done?

-- 
Dominique

Thread overview: 31+ messages
2022-07-04  1:09 [merged mm-stable] tools-add-memcg_shrinkerpy.patch removed from -mm tree Andrew Morton
2022-07-04  1:42 ` [PATCH 1/3] 9p: Drop kref usage Kent Overstreet
2022-07-04  1:42   ` [PATCH 2/3] 9p: Add client parameter to p9_req_put() Kent Overstreet
2022-07-04  1:42   ` [PATCH 3/3] 9p: Add mempools for RPCs Kent Overstreet
2022-07-04  2:22     ` Dominique Martinet
2022-07-04  3:05       ` Kent Overstreet
2022-07-04  3:38         ` Dominique Martinet
2022-07-04  3:52           ` Kent Overstreet
2022-07-04 11:12           ` Christian Schoenebeck
2022-07-04 13:06             ` Dominique Martinet
2022-07-04 13:56               ` Christian Schoenebeck
2022-07-09  7:43                 ` Dominique Martinet [this message]
2022-07-09 14:21                   ` Christian Schoenebeck
2022-07-09 14:42                     ` Dominique Martinet
2022-07-09 18:08                       ` Christian Schoenebeck
2022-07-09 20:50                         ` Dominique Martinet
2022-07-10 12:57                           ` Christian Schoenebeck
2022-07-10 13:19                             ` Dominique Martinet
2022-07-10 15:16                               ` Christian Schoenebeck
2022-07-13  4:17                                 ` [RFC PATCH] 9p: forbid use of mempool for TFLUSH Dominique Martinet
2022-07-13  6:39                                   ` Kent Overstreet
2022-07-13  7:12                                     ` Dominique Martinet
2022-07-13  7:40                                       ` Kent Overstreet
2022-07-13  8:18                                         ` Dominique Martinet
2022-07-14 19:16                                   ` Christian Schoenebeck
2022-07-14 22:31                                     ` Dominique Martinet
2022-07-15 10:23                                       ` Christian Schoenebeck
2022-07-04 13:06             ` [PATCH 3/3] 9p: Add mempools for RPCs Kent Overstreet
2022-07-04 13:39               ` Christian Schoenebeck
2022-07-04 14:19                 ` Kent Overstreet
2022-07-05  9:59                   ` Christian Schoenebeck
