From: Sagi Grimberg <sagi@grimberg.me>
To: linux-nvme@lists.infradead.org
Cc: Keith Busch <keith.busch@intel.com>,
Hannes Reinecke <hare@suse.de>, Christoph Hellwig <hch@lst.de>,
James Smart <james.smart@broadcom.com>
Subject: [PATCH v5 0/4] Support discovery log change events
Date: Fri, 6 Sep 2019 11:12:30 -0700 [thread overview]
Message-ID: <20190906181235.20365-1-sagi@grimberg.me> (raw)
We want to be able to support discovery log change events automatically,
without user intervention.

Discovery log change events are defined for "persistent", long-lived
controllers, so we first need discovery controllers that stay around for a
long time and accept a KATO value.

When we then receive a discovery log change event on the persistent
discovery controller, we simply fire a udev event to user space so it can
re-query the discovery log page and connect to any new subsystems in the
fabric.
This works with the latest nvme-cli master together with the nvme-cli patch
included in this series.
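For illustration, a udev rule reacting to such an event could look roughly
like the sketch below. The environment variable names (NVME_EVENT and the
use of %k for the controller device) are assumptions here for illustration;
the authoritative rule is the one carried in the nvme-cli patch in this
series.

```
# Hypothetical sketch: match a discovery log change uevent on an nvme
# controller device and re-run discovery against it. Variable names are
# assumptions, not the exact rule shipped with nvme-cli.
ACTION=="change", SUBSYSTEM=="nvme", ENV{NVME_EVENT}=="discovery", \
  RUN+="/usr/sbin/nvme connect-all --device=%k"
```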
Changes from v4:
- fixed comma at end-of-line
- fixed lines >80 characters
- removed redundant conditions on ctrl->opts
- fixed dev argument name
- collected review tags
Changes from v3:
- Add nvme_class uevent callout for controller specific environment variables
- send discovery events to userspace just like any other AEN
- merged discovery aen enable + send uevents to userspace into a single patch
as they are now trivially adding support for the feature
- Added nvme-cli modifications to handle the new information from the event
Changes from v2:
- added patch to always enable aen, regardless of the number of I/O queues
- fixed lines over 80 characters
Changes from v1:
- rebase to nvme-5.3
- pass none if trsvcid is uninitialized
- pass NVME_CTRL_NAME instead of NVME_CTRL_INSTANCE
Sagi Grimberg (4):
nvme-fabrics: allow discovery subsystems accept a kato
nvme: enable aen regardless of the presence of I/O queues
nvme: add uevent variables for controller devices
nvme: send discovery log page change events to userspace
drivers/nvme/host/core.c | 40 ++++++++++++++++++++++++++++++++++---
drivers/nvme/host/fabrics.c | 12 ++---------
2 files changed, 39 insertions(+), 13 deletions(-)
--
2.17.1