From: Haris Iqbal <haris.iqbal@cloud.ionos.com>
To: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Leon Romanovsky <leon@kernel.org>,
linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
Danil Kipnis <danil.kipnis@cloud.ionos.com>,
Jinpu Wang <jinpu.wang@cloud.ionos.com>,
dledford@redhat.com, kernel test robot <rong.a.chen@intel.com>
Subject: Re: [PATCH] Delay the initialization of rnbd_server module to late_initcall level
Date: Tue, 23 Jun 2020 19:15:03 +0530 [thread overview]
Message-ID: <CAJpMwyj_Fa6AhYXcGh4kS79Vd2Dy3N7B5-9XhKHn4qWDo-HVjw@mail.gmail.com> (raw)
In-Reply-To: <20200623121721.GZ6578@ziepe.ca>
On Tue, Jun 23, 2020 at 5:47 PM Jason Gunthorpe <jgg@ziepe.ca> wrote:
>
> On Tue, Jun 23, 2020 at 03:20:27PM +0530, Haris Iqbal wrote:
> > Hi Jason and Leon,
> >
> > Did you get a chance to look into my previous email?
>
> Was there a question?
Multiple actually :)
>
> Jason
In response to your emails,
> Somehow nvme-rdma works:
I think that's because the callchain during nvme_rdma_init_module()
initialization stops at "nvmf_register_transport()". At that point, only
the "struct nvmf_transport_ops nvme_rdma_transport" is registered, which
contains a pointer to the function "nvme_rdma_create_ctrl()". I tested
this in my local setup, and during kernel boot that is the extent of the
callchain.
The ".create_ctrl" callback, which points to "nvme_rdma_create_ctrl()",
is called later from "nvmf_dev_write()". I am not sure exactly when this
is called; probably when the "discover" happens from the client side, or
during the server config.
It seems that "rdma_bind_addr()" is called by the nvme-rdma module, but
only during the following events:
1) When a discover happens from the client side. The call trace for that
looks like:
[ 1098.409398] nvmf_dev_write
[ 1098.409403] nvmf_create_ctrl
[ 1098.414568] nvme_rdma_create_ctrl
[ 1098.415009] nvme_rdma_setup_ctrl
[ 1098.415010] nvme_rdma_configure_admin_queue
[ 1098.415010] nvme_rdma_alloc_queue
-->(rdma_create_id)
[ 1098.415032] rdma_resolve_addr
[ 1098.415032] cma_bind_addr
[ 1098.415033] rdma_bind_addr
2) When a connect happens from the client side. The call trace is the
same as above, except that "nvme_rdma_alloc_queue()" is called n times,
n being the number of IO queues being created.
On the server side, when an nvmf port is enabled, that also triggers a
call to "rdma_bind_addr()", but that is not from the nvme-rdma module;
maybe from the nvme target rdma module? (not sure).
Does this make sense or am I missing something here?
> If the rdma_create_id() is not on a callchain from module_init then you don't have a problem.
I am a little confused. I thought the problem occurs from a call to
either "rdma_resolve_addr()", which calls "rdma_bind_addr()", or from a
direct call to "rdma_bind_addr()", as in the rtrs case.
In both cases, a call to "rdma_create_id()" is needed beforehand.
> Similarly they are supposed to be created from the client attachment.
I am a beginner with the concepts here. Did you mean when a client
tries to establish the first connection to an RDMA server?
--
Regards
-Haris