From: Haris Iqbal <haris.iqbal@cloud.ionos.com>
To: Jason Gunthorpe <jgg@ziepe.ca>, Leon Romanovsky <leon@kernel.org>
Cc: linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
	Danil Kipnis <danil.kipnis@cloud.ionos.com>,
	Jinpu Wang <jinpu.wang@cloud.ionos.com>,
	dledford@redhat.com, kernel test robot <rong.a.chen@intel.com>
Subject: Re: [PATCH] Delay the initialization of rnbd_server module to late_initcall level
Date: Tue, 23 Jun 2020 15:20:27 +0530	[thread overview]
Message-ID: <CAJpMwygqz20=H7ovSL0nSWLbVpMv-KLOgYO=nRCLv==OC8sgHw@mail.gmail.com> (raw)
In-Reply-To: <CAJpMwyjJSu4exkTAoFLhY-ubzNQLp6nWqq83k6vWn1Uw3eaK_Q@mail.gmail.com>

Hi Jason and Leon,

Did you get a chance to look into my previous email?

On Thu, Jun 18, 2020 at 2:44 PM Haris Iqbal <haris.iqbal@cloud.ionos.com> wrote:
>
> It seems that "rdma_bind_addr()" is called by the nvme-rdma
> module, but only during the following events:
> 1) When a "discover" happens from the client side. The call trace for that looks like this:
> [ 1098.409398] nvmf_dev_write
> [ 1098.409403] nvmf_create_ctrl
> [ 1098.414568] nvme_rdma_create_ctrl
> [ 1098.415009] nvme_rdma_setup_ctrl
> [ 1098.415010] nvme_rdma_configure_admin_queue
> [ 1098.415010] nvme_rdma_alloc_queue
>                              -->(rdma_create_id)
> [ 1098.415032] rdma_resolve_addr
> [ 1098.415032] cma_bind_addr
> [ 1098.415033] rdma_bind_addr
>
> 2) When a "connect" happens from the client side. The call trace is the
> same as above, plus "nvme_rdma_alloc_queue()" is called n times,
> n being the number of IO queues being created.
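>
> For context, a trimmed sketch of "nvme_rdma_alloc_queue()" from my
> reading of drivers/nvme/host/rdma.c (details may differ across kernel
> versions):
>
>     static int nvme_rdma_alloc_queue(struct nvme_rdma_ctrl *ctrl,
>                     int idx, size_t queue_size)
>     {
>             struct nvme_rdma_queue *queue = &ctrl->queues[idx];
>             ...
>             /* the CM ID is created first ... */
>             queue->cm_id = rdma_create_id(&init_net, nvme_rdma_cm_handler,
>                             queue, RDMA_PS_TCP, IB_QPT_RC);
>             ...
>             /* ... then the address is resolved, which is what ends up
>              * in rdma_bind_addr() via cma_bind_addr() */
>             ret = rdma_resolve_addr(queue->cm_id, src_addr,
>                             (struct sockaddr *)&ctrl->addr,
>                             NVME_RDMA_CONNECT_TIMEOUT_MS);
>             ...
>     }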
>
>
> On the server side, when an nvmf port is enabled, that also triggers a
> call to "rdma_bind_addr()", but that is not from the nvme-rdma module;
> maybe it comes from the nvme target rdma module? (not sure).
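>
> For completeness, here is my understanding of why an early
> "rdma_bind_addr()" crashes (based on my reading of
> drivers/infiniband/core/cma.c; I may be off on the details). The bind
> path looks up per-net-namespace state that only exists after the
> rdma_cm module's init has registered the pernet subsystem, roughly:
>
>     /* reached from rdma_bind_addr() via cma_get_port() */
>     static struct cma_pernet *cma_pernet(struct net *net)
>     {
>             /* only valid once cma_init() below has run */
>             return net_generic(net, cma_pernet_id);
>     }
>
>     static int __init cma_init(void)
>     {
>             ...
>             ret = register_pernet_subsys(&cma_pernet_operations);
>             ...
>     }
>     module_init(cma_init);
>
> So if rnbd_server's init runs first, the lookup dereferences state
> that was never set up, hence the null ptr dereference during bind.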
>
> On Thu, Jun 18, 2020 at 7:15 AM Haris Iqbal <haris.iqbal@cloud.ionos.com> wrote:
> >
> > (Apologies for multiple emails. I was having trouble with an extension,
> > because of which emails did not get delivered to the mailing list.
> > Resolved now.)
> >
> > > Somehow nvme-rdma works:
> >
> > I think that's because the callchain during the nvme_rdma_init_module
> > initialization stops at "nvmf_register_transport()". Here only the
> > "struct nvmf_transport_ops nvme_rdma_transport" is registered, which
> > contains the function "nvme_rdma_create_ctrl()". I tested this in my
> > local setup, and during kernel boot that is the extent of the
> > callchain.
> > The ".create_ctrl", which now points to "nvme_rdma_create_ctrl()", is
> > called later from "nvmf_dev_write()". I am not sure when this is
> > called; probably when the "discover" happens from the client side or
> > during the server config. I am trying to test this to confirm and
> > will send more details once I am done.
> > Am I missing something here?
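> >
> > For what it's worth, "nvmf_register_transport()" itself only adds the
> > ops to a list and does not touch rdma-cm at all. A rough sketch from
> > my reading of drivers/nvme/host/fabrics.c (may vary by version):
> >
> >     int nvmf_register_transport(struct nvmf_transport_ops *ops)
> >     {
> >             if (!ops->create_ctrl)
> >                     return -EINVAL;
> >
> >             down_write(&nvmf_transports_rwsem);
> >             list_add_tail(&ops->entry, &nvmf_transports);
> >             up_write(&nvmf_transports_rwsem);
> >
> >             return 0;
> >     }
> >
> > So nothing on the module_init path of nvme-rdma reaches
> > "rdma_create_id()" during boot.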
> >
> >
> > > If the rdma_create_id() is not on a callchain from module_init then you don't have a problem.
> >
> > I am a little confused. I thought the problem occurs from a call to
> > either "rdma_resolve_addr()", which calls "rdma_bind_addr()",
> > or a direct call to "rdma_bind_addr()" as in the rtrs case.
> > In both cases, a call to "rdma_create_id()" is needed beforehand.
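> >
> > For the rtrs case, the sequence I mean looks roughly like this
> > (trimmed from my reading of drivers/infiniband/ulp/rtrs/rtrs-srv.c;
> > details may differ):
> >
> >     static struct rdma_cm_id *rtrs_srv_cm_init(struct rtrs_srv_ctx *ctx,
> >                                                struct sockaddr *addr,
> >                                                enum rdma_ucm_port_space ps)
> >     {
> >             struct rdma_cm_id *cm_id;
> >             ...
> >             cm_id = rdma_create_id(&init_net, rtrs_srv_rdma_cm_handler,
> >                                    ctx, ps, IB_QPT_RC);
> >             ...
> >             ret = rdma_bind_addr(cm_id, addr);
> >             ...
> >     }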
> >
> >
> > > Similarly they are supposed to be created from the client attachment.
> > I am a beginner with the concepts here. Did you mean when a client
> > tries to establish the first connection to an RDMA server?
> >
> >
> > On Thu, Jun 18, 2020 at 12:56 AM Jason Gunthorpe <jgg@ziepe.ca> wrote:
> > >
> > > On Wed, Jun 17, 2020 at 10:07:56PM +0300, Leon Romanovsky wrote:
> > > > On Wed, Jun 17, 2020 at 03:20:46PM -0300, Jason Gunthorpe wrote:
> > > > > On Wed, Jun 17, 2020 at 02:28:11PM +0300, Leon Romanovsky wrote:
> > > > > > On Wed, Jun 17, 2020 at 04:07:32PM +0530, haris.iqbal@cloud.ionos.com wrote:
> > > > > > > From: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
> > > > > > >
> > > > > > > Fixes: 2de6c8de192b ("block/rnbd: server: main functionality")
> > > > > > > Reported-by: kernel test robot <rong.a.chen@intel.com>
> > > > > > > Signed-off-by: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
> > > > > > >
> > > > > > > The rnbd_server module's communication manager initialization depends on the
> > > > > > > registration of the "network namespace subsystem" of the RDMA CM agent module.
> > > > > > > As such, when the kernel is configured to load both the rnbd_server and the
> > > > > > > RDMA cma module during initialization, and the rnbd_server module is
> > > > > > > initialized before the RDMA cma module, a NULL pointer dereference occurs
> > > > > > > during the RDMA bind operation.
> > > > > > > This patch delays the initialization of the rnbd_server module to the
> > > > > > > late_initcall level, since the RDMA cma module uses module_init, which puts
> > > > > > > it at the device_initcall level.
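> > > > > > >
> > > > > > > For reference, a rough sketch of the initcall levels involved (based
> > > > > > > on include/linux/init.h and include/linux/module.h; exact definitions
> > > > > > > may vary by kernel version):
> > > > > > >
> > > > > > >     #define device_initcall(fn)  __define_initcall(fn, 6)
> > > > > > >     #define late_initcall(fn)    __define_initcall(fn, 7)
> > > > > > >
> > > > > > >     /* For built-in code, module_init() expands to device_initcall()
> > > > > > >      * (level 6), so a late_initcall (level 7) is guaranteed to run
> > > > > > >      * after every module_init, including cma_init(). */
> > > > > > >     #define module_init(x)  __initcall(x)  /* __initcall == device_initcall */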
> > > > > > >  drivers/block/rnbd/rnbd-srv.c | 2 +-
> > > > > > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > > > > >
> > > > > > > diff --git a/drivers/block/rnbd/rnbd-srv.c b/drivers/block/rnbd/rnbd-srv.c
> > > > > > > index 86e61523907b..213df05e5994 100644
> > > > > > > --- a/drivers/block/rnbd/rnbd-srv.c
> > > > > > > +++ b/drivers/block/rnbd/rnbd-srv.c
> > > > > > > @@ -840,5 +840,5 @@ static void __exit rnbd_srv_cleanup_module(void)
> > > > > > >         rnbd_srv_destroy_sysfs_files();
> > > > > > >  }
> > > > > > >
> > > > > > > -module_init(rnbd_srv_init_module);
> > > > > > > +late_initcall(rnbd_srv_init_module);
> > > > > >
> > > > > > I don't think that this is the correct change. Somehow nvme-rdma works:
> > > > > > module_init(nvme_rdma_init_module);
> > > > > > -> nvme_rdma_init_module
> > > > > >  -> nvmf_register_transport(&nvme_rdma_transport);
> > > > > >   -> nvme_rdma_create_ctrl
> > > > > >    -> nvme_rdma_setup_ctrl
> > > > > >     -> nvme_rdma_configure_admin_queue
> > > > > >      -> nvme_rdma_alloc_queue
> > > > > >       -> rdma_create_id
> > > > >
> > > > > If it does work, it is by luck.
> > > >
> > > > I didn't check every ULP, but it seems that all ULPs use the same
> > > > module_init.
> > > >
> > > > >
> > > > > Keep in mind all this only matters for kernels without modules.
> > > >
> > > > Can it be related to the fact that other ULPs call ib_register_client()
> > > > before calling into rdma-cm? RNBD does not have such a call.
> > >
> > > If the rdma_create_id() is not on a callchain from module_init then
> > > you don't have a problem.
> > >
> > > nvme has a bug here, IIRC. It is not OK to create RDMA CM IDs outside
> > > a client - CM IDs are supposed to be cleaned up when the client is
> > > removed.
> > >
> > > Similarly they are supposed to be created from the client attachment.
> > >
> > > Though listening CM IDs unbound to any device may change that
> > > slightly, I think it is probably best practice to create the listening
> > > ID only if a client is bound.
> > >
> > > Most probably that is the best way to fix rnbd.
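> > >
> > > (For reference, the client model here is the ib_client interface
> > > from include/rdma/ib_verbs.h; roughly, and modulo version
> > > differences in the callback signatures:
> > >
> > >     struct ib_client {
> > >             const char *name;
> > >             int  (*add)(struct ib_device *ibdev);
> > >             void (*remove)(struct ib_device *ibdev, void *client_data);
> > >             ...
> > >     };
> > >
> > >     /* hypothetical rnbd client instance */
> > >     ib_register_client(&rnbd_ib_client);
> > >
> > > CM IDs would then be created from the ->add() callback and torn
> > > down from ->remove().)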
> > >
> > > > I'm not proposing this, but just loudly wondering: do we really need rdma-cm
> > > > as a separate module? Can we make it part of ib_core?
> > >
> > > No idea. It doesn't help this situation, at least.
> > >
> > > Jason
> >
> >
> >
> > --
> >
> > Regards
> > -Haris
>
>
>
> --
>
> Regards
> -Haris



-- 

Regards
-Haris
