* [RFC PATCH] nvme: allow to handle AER events by kernel drivers @ 2018-04-13 11:42 Javier González 2018-04-13 11:43 ` [PATCH] nvme: allow lightnvm to have visibility over AER events Javier González 0 siblings, 1 reply; 10+ messages in thread From: Javier González @ 2018-04-13 11:42 UTC (permalink / raw) In the current AER path, the kernel just picks up the event and re-sends the async event command to allow the device to report a new event. Sending the get log page to manage the event itself and clear the device flag is left to user space, as far as I can see. In OCSSD we require some of these events to be handled by LightNVM targets, thus we need to pull the AER event up the stack in a meaningful way. The main challenge as I see it is moving the event in a clean way to the right driver. This patch is a sketch of how to do this, very much work in progress. If we could add a bit to the result carried into nvme_result to signal which events are handled by the kernel, this would be much cleaner. From here on, drivers could register for specific events and then handle them accordingly. Feedback at this early stage is very much appreciated. Thanks! Javier Javier González (1): nvme: allow lightnvm to have visibility over AER events drivers/nvme/host/core.c | 37 +++++++++++++++++++++---- drivers/nvme/host/lightnvm.c | 65 ++++++++++++++++++++++++++++++++++++++++++++ drivers/nvme/host/nvme.h | 3 ++ include/linux/lightnvm.h | 24 ++++++++++++++++ include/linux/nvme.h | 1 + 5 files changed, 125 insertions(+), 5 deletions(-) -- 2.7.4 ^ permalink raw reply [flat|nested] 10+ messages in thread
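The registration scheme the cover letter hints at ("drivers could register for specific events") could be modeled roughly as below. This is a self-contained userspace sketch, not kernel code: `struct nvme_ctrl` is reduced to a placeholder, and `nvme_register_aer_handler` / `nvme_dispatch_aer` are hypothetical names, not existing NVMe driver interfaces.

```c
#include <assert.h>
#include <stdint.h>

/* Placeholder for the real struct nvme_ctrl. */
struct nvme_ctrl { int id; };

/* Hypothetical per-event handler: returns non-zero if the event was
 * consumed by a kernel driver. */
typedef int (*aer_handler_fn)(struct nvme_ctrl *ctrl, uint32_t result);

#define MAX_AER_HANDLERS 8

static struct {
	uint32_t mask;		/* bits identifying the claimed event */
	aer_handler_fn handler;
} aer_handlers[MAX_AER_HANDLERS];
static int nr_aer_handlers;

/* A driver (e.g. lightnvm) claims an AEN result value at init time. */
int nvme_register_aer_handler(uint32_t mask, aer_handler_fn fn)
{
	if (nr_aer_handlers >= MAX_AER_HANDLERS)
		return -1;
	aer_handlers[nr_aer_handlers].mask = mask;
	aer_handlers[nr_aer_handlers].handler = fn;
	nr_aer_handlers++;
	return 0;
}

/* Returns 1 if a registered kernel driver consumed the event, 0 if it
 * should be forwarded to user space (uevent) as today. */
int nvme_dispatch_aer(struct nvme_ctrl *ctrl, uint32_t result)
{
	for (int i = 0; i < nr_aer_handlers; i++) {
		if ((result & aer_handlers[i].mask) == aer_handlers[i].mask)
			return aer_handlers[i].handler(ctrl, result);
	}
	return 0;
}
```

Unclaimed events keep the current behavior (uevent to user space); claimed ones are routed to the registered in-kernel handler.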
* [PATCH] nvme: allow lightnvm to have visibility over AER events 2018-04-13 11:42 [RFC PATCH] nvme: allow to handle AER events by kernel drivers Javier González @ 2018-04-13 11:43 ` Javier González 2018-04-13 15:27 ` Scott Bauer ` (2 more replies) 0 siblings, 3 replies; 10+ messages in thread From: Javier González @ 2018-04-13 11:43 UTC (permalink / raw) --- drivers/nvme/host/core.c | 37 +++++++++++++++++++++---- drivers/nvme/host/lightnvm.c | 65 ++++++++++++++++++++++++++++++++++++++++++++ drivers/nvme/host/nvme.h | 3 ++ include/linux/lightnvm.h | 24 ++++++++++++++++ include/linux/nvme.h | 1 + 5 files changed, 125 insertions(+), 5 deletions(-) diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index f81e3b323366..e08cd59e6049 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -3263,15 +3263,31 @@ static void nvme_aen_uevent(struct nvme_ctrl *ctrl) kfree(envp[0]); } -static void nvme_async_event_work(struct work_struct *work) +static void nvme_async_event(struct nvme_ctrl *ctrl) { - struct nvme_ctrl *ctrl = - container_of(work, struct nvme_ctrl, async_event_work); - nvme_aen_uevent(ctrl); ctrl->ops->submit_async_event(ctrl); } +static void nvme_async_event_work(struct work_struct *work) +{ + struct nvme_ctrl *ctrl = + container_of(work, struct nvme_ctrl, async_event_work); + + nvme_async_event(ctrl); +} + +static void nvme_aer_handle_work(struct work_struct *work) +{ + struct nvme_ctrl *ctrl = + container_of(work, struct nvme_ctrl, aer_handle_work); + + if (ctrl->aen_result & NVME_AER_NOTICE_LNVM_CHUNK) + nvme_nvm_aer_handle(ctrl); + + nvme_async_event(ctrl); +} + static bool nvme_ctrl_pp_status(struct nvme_ctrl *ctrl) { @@ -3362,7 +3378,16 @@ void nvme_complete_async_event(struct nvme_ctrl *ctrl, __le16 status, default: dev_warn(ctrl->device, "async event result %08x\n", result); } - queue_work(nvme_wq, &ctrl->async_event_work); + + /* TODO: Create a flag signaling the events that require kernel handling + * to make this 
treatment generic + */ + if (result & NVME_AER_NOTICE_LNVM_CHUNK) { + ctrl->aen_result = result; + queue_work(nvme_wq, &ctrl->aer_handle_work); + } else { + queue_work(nvme_wq, &ctrl->async_event_work); + } } EXPORT_SYMBOL_GPL(nvme_complete_async_event); @@ -3370,6 +3395,7 @@ void nvme_stop_ctrl(struct nvme_ctrl *ctrl) { nvme_stop_keep_alive(ctrl); flush_work(&ctrl->async_event_work); + flush_work(&ctrl->aer_handle_work); flush_work(&ctrl->scan_work); cancel_work_sync(&ctrl->fw_act_work); if (ctrl->ops->stop_ctrl) @@ -3437,6 +3463,7 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev, ctrl->quirks = quirks; INIT_WORK(&ctrl->scan_work, nvme_scan_work); INIT_WORK(&ctrl->async_event_work, nvme_async_event_work); + INIT_WORK(&ctrl->aer_handle_work, nvme_aer_handle_work); INIT_WORK(&ctrl->fw_act_work, nvme_fw_act_work); INIT_WORK(&ctrl->delete_work, nvme_delete_ctrl_work); diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c index 41279da799ed..f51200447503 100644 --- a/drivers/nvme/host/lightnvm.c +++ b/drivers/nvme/host/lightnvm.c @@ -250,6 +250,17 @@ struct nvme_nvm_chk_meta { __le64 wp; }; +struct nvme_ocssd_log { + __le64 notification_count; + __le64 lba; + __le32 nsid; + __le16 state :5; + __le16 rsvd_state :11; + __u8 lba_mask :3; + __u8 lba_state :5; + __u8 rsvd_23_63[41]; +}; + /* * Check we didn't inadvertently grow the command struct */ @@ -950,6 +961,60 @@ static int nvme_nvm_user_vcmd(struct nvme_ns *ns, int admin, return ret; } +void nvm_print_log_page(struct nvme_ocssd_log *log) +{ + printk(KERN_CRIT "cnt(%llu), ppa(0x%llx), nsid(%u), st(0x%05x), msk(0x%03x)\n", + log->notification_count, log->lba, log->nsid, log->state, + log->lba_mask); +} + +void nvme_nvm_aer_handle(struct nvme_ctrl *ctrl) +{ + struct nvme_ns *iter, *ns = NULL; + struct nvm_dev *nvmdev; + struct nvme_command c = {}; + struct nvme_ocssd_log log; + struct nvm_log_page log_page; + int ret; + + c.common.opcode = nvme_admin_get_log_page; + 
c.common.command_id = NVME_AQ_BLK_MQ_DEPTH; + c.common.cdw10[0] = cpu_to_le32( + (((sizeof(struct nvme_ocssd_log) / 4) - 1) << 16) | + 0xd0); + + memset(&log, 0, sizeof(struct nvme_ocssd_log)); + + ret = nvme_submit_sync_cmd(ctrl->admin_q, &c, &log, + sizeof(struct nvme_ocssd_log)); + if (ret) + return; + + log_page.ppa.ppa = log.lba; + log_page.scope = log.lba_mask & NVM_LOGPAGE_STATE_MASK; + log_page.severity = log.state & NVM_LOGPAGE_SEVERITY_MASK; + + down_read(&ctrl->namespaces_rwsem); + list_for_each_entry(iter, &ctrl->namespaces, list) { + if (iter->head->ns_id == log.nsid) { + ns = iter; + break; + } + } + up_read(&ctrl->namespaces_rwsem); + + if (!ns) + return; + + nvmdev = ns->ndev; + + /* TODO: TOGO */ + nvm_print_log_page(&log); + + /* nvm_notify_log_page(nvmdev, log_page); */ + +} + int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg) { switch (cmd) { diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h index aa10842a6709..7d760f54b175 100644 --- a/drivers/nvme/host/nvme.h +++ b/drivers/nvme/host/nvme.h @@ -183,6 +183,7 @@ struct nvme_ctrl { struct nvme_effects_log *effects; struct work_struct scan_work; struct work_struct async_event_work; + struct work_struct aer_handle_work; struct delayed_work ka_work; struct nvme_command ka_cmd; struct work_struct fw_act_work; @@ -506,6 +507,7 @@ void nvme_nvm_unregister(struct nvme_ns *ns); int nvme_nvm_register_sysfs(struct nvme_ns *ns); void nvme_nvm_unregister_sysfs(struct nvme_ns *ns); int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg); +void nvme_nvm_aer_handle(struct nvme_ctrl *ctrl); #else static inline void nvme_nvm_update_nvm_info(struct nvme_ns *ns) {}; static inline int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, @@ -525,6 +527,7 @@ static inline int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, { return -ENOTTY; } +static inline void nvme_nvm_aer_handle(struct nvme_ctrl *ctrl) {}; #endif /* CONFIG_NVM */ static inline struct 
nvme_ns *nvme_get_ns_from_dev(struct device *dev) diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h index 6e0859b9d4d2..001c64690125 100644 --- a/include/linux/lightnvm.h +++ b/include/linux/lightnvm.h @@ -233,6 +233,30 @@ struct nvm_addrf { }; enum { + /* valid bit mask */ + NVM_LOGPAGE_STATE_MASK = 0x3, + NVM_LOGPAGE_SEVERITY_MASK = 0x5, + + /* scope */ + NVM_LOGPAGE_SCOPE_SECTOR = 1, + NVM_LOGPAGE_SCOPE_CHUNK = 2, + NVM_LOGPAGE_SCOPE_LUN = 4, + + /* severity */ + NVM_LOGPAGE_SEVERITY_LOW = 1, + NVM_LOGPAGE_SEVERITY_MID = 2, + NVM_LOGPAGE_SEVERITY_HIGH = 4, + NVM_LOGPAGE_SEVERITY_UNREC = 8, + NVM_LOGPAGE_SEVERITY_DEV = 16, +}; + +struct nvm_log_page { + struct ppa_addr ppa; + u16 scope; + u8 severity; +}; + +enum { /* Chunk states */ NVM_CHK_ST_FREE = 1 << 0, NVM_CHK_ST_CLOSED = 1 << 1, diff --git a/include/linux/nvme.h b/include/linux/nvme.h index 4112e2bd747f..f85053601c5b 100644 --- a/include/linux/nvme.h +++ b/include/linux/nvme.h @@ -440,6 +440,7 @@ enum { NVME_AER_VS = 7, NVME_AER_NOTICE_NS_CHANGED = 0x0002, NVME_AER_NOTICE_FW_ACT_STARTING = 0x0102, + NVME_AER_NOTICE_LNVM_CHUNK = 0xd00007, }; struct nvme_lba_range_type { -- 2.7.4 ^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH] nvme: allow lightnvm to have visibility over AER events 2018-04-13 11:43 ` [PATCH] nvme: allow lightnvm to have visibility over AER events Javier González @ 2018-04-13 15:27 ` Scott Bauer 2018-04-13 18:01 ` Javier Gonzalez 2018-04-13 16:58 ` Keith Busch 2018-04-13 17:11 ` Christoph Hellwig 2 siblings, 1 reply; 10+ messages in thread From: Scott Bauer @ 2018-04-13 15:27 UTC (permalink / raw) On Fri, Apr 13, 2018@01:43:00PM +0200, Javier Gonz?lez wrote: > --- > drivers/nvme/host/core.c | 37 +++++++++++++++++++++---- > drivers/nvme/host/lightnvm.c | 65 ++++++++++++++++++++++++++++++++++++++++++++ > drivers/nvme/host/nvme.h | 3 ++ > include/linux/lightnvm.h | 24 ++++++++++++++++ > include/linux/nvme.h | 1 + > 5 files changed, 125 insertions(+), 5 deletions(-) > > diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c > index f81e3b323366..e08cd59e6049 100644 > --- a/drivers/nvme/host/core.c > +++ b/drivers/nvme/host/core.c > @@ -3263,15 +3263,31 @@ static void nvme_aen_uevent(struct nvme_ctrl *ctrl) > kfree(envp[0]); > } > > -static void nvme_async_event_work(struct work_struct *work) > +static void nvme_async_event(struct nvme_ctrl *ctrl) > { > - struct nvme_ctrl *ctrl = > - container_of(work, struct nvme_ctrl, async_event_work); > - > nvme_aen_uevent(ctrl); > ctrl->ops->submit_async_event(ctrl); > } > > +static void nvme_async_event_work(struct work_struct *work) > +{ > + struct nvme_ctrl *ctrl = > + container_of(work, struct nvme_ctrl, async_event_work); > + > + nvme_async_event(ctrl); > +} > + > +static void nvme_aer_handle_work(struct work_struct *work) > +{ > + struct nvme_ctrl *ctrl = > + container_of(work, struct nvme_ctrl, aer_handle_work); > + > + if (ctrl->aen_result & NVME_AER_NOTICE_LNVM_CHUNK) > + nvme_nvm_aer_handle(ctrl); Just throwing ideas out: Does it make sense to have the lnvm core register an "aer callback" into the core driver? 
Then instead of calling the nvm_aer_handle directly we'd do like: int res = 0; if (child_driver->handle_aer) res = child_driver->handle_aer(); if (!res) /* normal nvme driver handle */ If we do it this way if Fabrics, or what ever comes in the future could register aer handlers as well and we wouldn't have to constantly update this function. ^ permalink raw reply [flat|nested] 10+ messages in thread
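Scott's callback idea can be sketched as below: a stand-alone model where `child_driver_ops` and `handle_aer` are illustrative names from his pseudocode, not existing kernel structures, and the default path is a stub standing in for the uevent-plus-resubmit behavior.

```c
#include <assert.h>
#include <stddef.h>

struct nvme_ctrl;	/* opaque here; only passed through */

/* Hypothetical child-driver ops: handle_aer returns non-zero if the
 * event was consumed, zero to fall back to the normal nvme path. */
struct child_driver_ops {
	int (*handle_aer)(struct nvme_ctrl *ctrl);
};

static int default_handled;	/* counts fallbacks to the normal path */

static void nvme_default_aer(struct nvme_ctrl *ctrl)
{
	(void)ctrl;
	default_handled++;	/* stands in for uevent + resubmit AER */
}

void nvme_handle_aer(struct nvme_ctrl *ctrl, struct child_driver_ops *child)
{
	int res = 0;

	if (child && child->handle_aer)
		res = child->handle_aer(ctrl);
	if (!res)
		nvme_default_aer(ctrl);	/* normal nvme driver handling */
}
```

The core never needs to know who registered: lightnvm, fabrics, or a future consumer plugs in the same ops structure.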
* [PATCH] nvme: allow lightnvm to have visibility over AER events 2018-04-13 15:27 ` Scott Bauer @ 2018-04-13 18:01 ` Javier Gonzalez 0 siblings, 0 replies; 10+ messages in thread From: Javier Gonzalez @ 2018-04-13 18:01 UTC (permalink / raw) > On 13 Apr 2018,@17.27, Scott Bauer <scott.bauer@intel.com> wrote: > > On Fri, Apr 13, 2018@01:43:00PM +0200, Javier Gonz?lez wrote: >> --- >> drivers/nvme/host/core.c | 37 +++++++++++++++++++++---- >> drivers/nvme/host/lightnvm.c | 65 ++++++++++++++++++++++++++++++++++++++++++++ >> drivers/nvme/host/nvme.h | 3 ++ >> include/linux/lightnvm.h | 24 ++++++++++++++++ >> include/linux/nvme.h | 1 + >> 5 files changed, 125 insertions(+), 5 deletions(-) >> >> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c >> index f81e3b323366..e08cd59e6049 100644 >> --- a/drivers/nvme/host/core.c >> +++ b/drivers/nvme/host/core.c >> @@ -3263,15 +3263,31 @@ static void nvme_aen_uevent(struct nvme_ctrl *ctrl) >> kfree(envp[0]); >> } >> >> -static void nvme_async_event_work(struct work_struct *work) >> +static void nvme_async_event(struct nvme_ctrl *ctrl) >> { >> - struct nvme_ctrl *ctrl = >> - container_of(work, struct nvme_ctrl, async_event_work); >> - >> nvme_aen_uevent(ctrl); >> ctrl->ops->submit_async_event(ctrl); >> } >> >> +static void nvme_async_event_work(struct work_struct *work) >> +{ >> + struct nvme_ctrl *ctrl = >> + container_of(work, struct nvme_ctrl, async_event_work); >> + >> + nvme_async_event(ctrl); >> +} >> + >> +static void nvme_aer_handle_work(struct work_struct *work) >> +{ >> + struct nvme_ctrl *ctrl = >> + container_of(work, struct nvme_ctrl, aer_handle_work); >> + >> + if (ctrl->aen_result & NVME_AER_NOTICE_LNVM_CHUNK) >> + nvme_nvm_aer_handle(ctrl); > > Just throwing ideas out: > Does it make sense to have the lnvm core register an "aer callback" into the > core driver? 
Then instead of calling the nvm_aer_handle directly we'd do like: > > int res = 0; > if (child_driver->handle_aer) > res = child_driver->handle_aer(); > > if (!res) > /* normal nvme driver handle */ > > > If we do it this way if Fabrics, or what ever comes > in the future could register aer handlers as well and we wouldn't have to constantly > update this function. If we can agree that having a similar logic to this workqueue is a good way of allowing the nvme driver to enable in-kernel AER handlers, then the structure you mention would be a good way to handle the same event by different registered drivers. At this point though, there is no other in-kernel users (that I can see), who are interested in managing AER events. Maybe they start popping out soon. Javier -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: <http://lists.infradead.org/pipermail/linux-nvme/attachments/20180413/25dbc185/attachment-0001.sig> ^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH] nvme: allow lightnvm to have visibility over AER events 2018-04-13 11:43 ` [PATCH] nvme: allow lightnvm to have visibility over AER events Javier González 2018-04-13 15:27 ` Scott Bauer @ 2018-04-13 16:58 ` Keith Busch 2018-04-13 17:55 ` Javier González 2018-04-13 17:11 ` Christoph Hellwig 2 siblings, 1 reply; 10+ messages in thread From: Keith Busch @ 2018-04-13 16:58 UTC (permalink / raw) Hi Javier, We currently do have some logic in the core to say which events are sent to user space for handling and others that are handled internally. There's currently just two events with kernel specifc handling: namespace inventory change, and firmware activation. The example in your patch looks like one of the scenarios envisioned to be handled by a udev rule. Is that not going to work for you here? If you do have an event that needs special handling, I think you'd want to filter that out of the user space notification and invoke the kernel callback specific to it your event. On Fri, Apr 13, 2018@01:43:00PM +0200, Javier Gonz?lez wrote: > +static void nvme_aer_handle_work(struct work_struct *work) > +{ > + struct nvme_ctrl *ctrl = > + container_of(work, struct nvme_ctrl, aer_handle_work); > + > + if (ctrl->aen_result & NVME_AER_NOTICE_LNVM_CHUNK) > + nvme_nvm_aer_handle(ctrl); This 'if' will evaluate to true for _any_ VS AEN result, so this doesn't look like the right criteria for an LNVM specific. 
> + nvme_async_event(ctrl); > +} > + > static bool nvme_ctrl_pp_status(struct nvme_ctrl *ctrl) > { > > @@ -3362,7 +3378,16 @@ void nvme_complete_async_event(struct nvme_ctrl *ctrl, __le16 status, > default: > dev_warn(ctrl->device, "async event result %08x\n", result); > } > - queue_work(nvme_wq, &ctrl->async_event_work); > + > + /* TODO: Create a flag signaling the events that require kernel handling > + * to make this treatment generic > + */ > + if (result & NVME_AER_NOTICE_LNVM_CHUNK) { > + ctrl->aen_result = result; > + queue_work(nvme_wq, &ctrl->aer_handle_work); If this event needs special kernel handling, you should filter it out in the switch statement above this code and directly invoke the special callback. I'm a bit concerned about special handling on a vendor specific code, though. That could easily collide with another vendor's that has an entirely different meaning. ^ permalink raw reply [flat|nested] 10+ messages in thread
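The filtering Keith describes — classifying the full AEN result in the completion path's switch rather than bit-testing in the work function — might look like the minimal model below. The event codes are taken from the patch (`NVME_AER_NOTICE_LNVM_CHUNK` is the vendor-specific, non-ratified value); the `aer_target` enum and dispatch function are illustrative, not part of the driver.

```c
#include <assert.h>
#include <stdint.h>

enum {
	NVME_AER_NOTICE_NS_CHANGED	= 0x0002,
	NVME_AER_NOTICE_FW_ACT_STARTING	= 0x0102,
	NVME_AER_NOTICE_LNVM_CHUNK	= 0xd00007,	/* VS, not ratified */
};

enum aer_target { AER_TO_USERSPACE, AER_TO_SCAN, AER_TO_FW_ACT, AER_TO_LNVM };

/* Match the whole result value up front.  A bitwise test such as
 * (result & NVME_AER_NOTICE_LNVM_CHUNK) would also fire for other
 * vendor-specific results that happen to share those bits. */
enum aer_target nvme_classify_aer(uint32_t result)
{
	switch (result) {
	case NVME_AER_NOTICE_NS_CHANGED:
		return AER_TO_SCAN;
	case NVME_AER_NOTICE_FW_ACT_STARTING:
		return AER_TO_FW_ACT;
	case NVME_AER_NOTICE_LNVM_CHUNK:
		return AER_TO_LNVM;
	default:
		return AER_TO_USERSPACE;
	}
}
```

Exact-match classification avoids the false positive Keith points out: a different VS result that shares bits with the LNVM code no longer reaches the LNVM handler.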
* [PATCH] nvme: allow lightnvm to have visibility over AER events 2018-04-13 16:58 ` Keith Busch @ 2018-04-13 17:55 ` Javier González 2018-04-16 9:16 ` Sagi Grimberg 0 siblings, 1 reply; 10+ messages in thread From: Javier González @ 2018-04-13 17:55 UTC (permalink / raw) > On 13 Apr 2018,@18.58, Keith Busch <keith.busch@intel.com> wrote: > > Hi Javier, > > We currently do have some logic in the core to say which events are > sent to user space for handling and others that are handled internally. > There's currently just two events with kernel specifc handling: > namespace inventory change, and firmware activation. > > The example in your patch looks like one of the scenarios envisioned > to be handled by a udev rule. Is that not going to work for you here? > > If you do have an event that needs special handling, I think you'd want > to filter that out of the user space notification and invoke the kernel > callback specific to it your event. Can I plug a udev rule to an in-kernel driver? A concrete example of the events we will handle is pblk having to remap data from a chunk that is becoming unreliable. > > On Fri, Apr 13, 2018@01:43:00PM +0200, Javier Gonz?lez wrote: >> +static void nvme_aer_handle_work(struct work_struct *work) >> +{ >> + struct nvme_ctrl *ctrl = >> + container_of(work, struct nvme_ctrl, aer_handle_work); >> + >> + if (ctrl->aen_result & NVME_AER_NOTICE_LNVM_CHUNK) >> + nvme_nvm_aer_handle(ctrl); > > This 'if' will evaluate to true for _any_ VS AEN result, so this doesn't > look like the right criteria for an LNVM specific. See the next comment. 
> >> + nvme_async_event(ctrl); >> +} >> + >> static bool nvme_ctrl_pp_status(struct nvme_ctrl *ctrl) >> { >> >> @@ -3362,7 +3378,16 @@ void nvme_complete_async_event(struct nvme_ctrl *ctrl, __le16 status, >> default: >> dev_warn(ctrl->device, "async event result %08x\n", result); >> } >> - queue_work(nvme_wq, &ctrl->async_event_work); >> + >> + /* TODO: Create a flag signaling the events that require kernel handling >> + * to make this treatment generic >> + */ >> + if (result & NVME_AER_NOTICE_LNVM_CHUNK) { >> + ctrl->aen_result = result; >> + queue_work(nvme_wq, &ctrl->aer_handle_work); > > If this event needs special kernel handling, you should filter it out > in the switch statement above this code and directly invoke the special > callback. My thought with this is that a driver having to manage the event will need to send the get log page command, thus it will need a workque or similar; dealing with such logic in the driver makes it easier to maintain I think. Then at the nvme_aer_handle_work() level, we would filter out the events to the specific drivers. > > I'm a bit concerned about special handling on a vendor specific code, > though. That could easily collide with another vendor's that has an > entirely different meaning. Of course. None of the specific bits will be upstreamed until they become standarized. As I answer to Christoph, at this point I want to start building the foundation. Thanks! Javier -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: <http://lists.infradead.org/pipermail/linux-nvme/attachments/20180413/5a5013aa/attachment.sig> ^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH] nvme: allow lightnvm to have visibility over AER events 2018-04-13 17:55 ` Javier González @ 2018-04-16 9:16 ` Sagi Grimberg 2018-04-16 9:21 ` Javier González 0 siblings, 1 reply; 10+ messages in thread From: Sagi Grimberg @ 2018-04-16 9:16 UTC (permalink / raw) Javier, >> We currently do have some logic in the core to say which events are >> sent to user space for handling and others that are handled internally. >> There's currently just two events with kernel specifc handling: >> namespace inventory change, and firmware activation. >> >> The example in your patch looks like one of the scenarios envisioned >> to be handled by a udev rule. Is that not going to work for you here? >> >> If you do have an event that needs special handling, I think you'd want >> to filter that out of the user space notification and invoke the kernel >> callback specific to it your event. > > Can I plug a udev rule to an in-kernel driver? A concrete example of the > events we will handle is pblk having to remap data from a chunk that is > becoming unreliable. You can plug user-space action and kernel interface for it. It looks pretty stateless.. Does pblk expose a char device? I think that user-space can issue the log page and then pass the information to the kernel.. Given that I do see also that nvmet would be a potential consumer for AER, maybe user-space would be a good place to handle all this... ^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH] nvme: allow lightnvm to have visibility over AER events 2018-04-16 9:16 ` Sagi Grimberg @ 2018-04-16 9:21 ` Javier González 0 siblings, 0 replies; 10+ messages in thread From: Javier González @ 2018-04-16 9:21 UTC (permalink / raw) > On 16 Apr 2018,@11.16, Sagi Grimberg <sagi@grimberg.me> wrote: > > Javier, > >>> We currently do have some logic in the core to say which events are >>> sent to user space for handling and others that are handled internally. >>> There's currently just two events with kernel specifc handling: >>> namespace inventory change, and firmware activation. >>> >>> The example in your patch looks like one of the scenarios envisioned >>> to be handled by a udev rule. Is that not going to work for you here? >>> >>> If you do have an event that needs special handling, I think you'd want >>> to filter that out of the user space notification and invoke the kernel >>> callback specific to it your event. >> Can I plug a udev rule to an in-kernel driver? A concrete example of the >> events we will handle is pblk having to remap data from a chunk that is >> becoming unreliable. > > You can plug user-space action and kernel interface for it. It looks > pretty stateless.. > Ok. I'll look into it. Thanks. > Does pblk expose a char device? I think that user-space can issue > the log page and then pass the information to the kernel.. > Not yet. But we can have a generic handle in user-space to send the get log page, then we can easily expose one. > Given that I do see also that nvmet would be a potential consumer for > AER, maybe user-space would be a good place to handle all this... Great. Javier -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: <http://lists.infradead.org/pipermail/linux-nvme/attachments/20180416/1c052cdd/attachment.sig> ^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH] nvme: allow lightnvm to have visibility over AER events 2018-04-13 11:43 ` [PATCH] nvme: allow lightnvm to have visibility over AER events Javier González 2018-04-13 15:27 ` Scott Bauer 2018-04-13 16:58 ` Keith Busch @ 2018-04-13 17:11 ` Christoph Hellwig 2018-04-13 17:20 ` Javier Gonzalez 2 siblings, 1 reply; 10+ messages in thread From: Christoph Hellwig @ 2018-04-13 17:11 UTC (permalink / raw) > enum { > + /* valid bit mask */ > + NVM_LOGPAGE_STATE_MASK = 0x3, > + NVM_LOGPAGE_SEVERITY_MASK = 0x5, > + > + /* scope */ > + NVM_LOGPAGE_SCOPE_SECTOR = 1, > + NVM_LOGPAGE_SCOPE_CHUNK = 2, > + NVM_LOGPAGE_SCOPE_LUN = 4, > + > + /* severity */ > + NVM_LOGPAGE_SEVERITY_LOW = 1, > + NVM_LOGPAGE_SEVERITY_MID = 2, > + NVM_LOGPAGE_SEVERITY_HIGH = 4, > + NVM_LOGPAGE_SEVERITY_UNREC = 8, > + NVM_LOGPAGE_SEVERITY_DEV = 16, > +}; > + > +struct nvm_log_page { > + struct ppa_addr ppa; > + u16 scope; > + u8 severity; > +}; > + > +enum { > /* Chunk states */ > NVM_CHK_ST_FREE = 1 << 0, > NVM_CHK_ST_CLOSED = 1 << 1, > diff --git a/include/linux/nvme.h b/include/linux/nvme.h > index 4112e2bd747f..f85053601c5b 100644 > --- a/include/linux/nvme.h > +++ b/include/linux/nvme.h > @@ -440,6 +440,7 @@ enum { > NVME_AER_VS = 7, > NVME_AER_NOTICE_NS_CHANGED = 0x0002, > NVME_AER_NOTICE_FW_ACT_STARTING = 0x0102, > + NVME_AER_NOTICE_LNVM_CHUNK = 0xd00007, > }; Non of these is in the NVMe spec or a ratified TP, so NAK. ^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH] nvme: allow lightnvm to have visibility over AER events 2018-04-13 17:11 ` Christoph Hellwig @ 2018-04-13 17:20 ` Javier Gonzalez 0 siblings, 0 replies; 10+ messages in thread From: Javier Gonzalez @ 2018-04-13 17:20 UTC (permalink / raw) > On 13 Apr 2018,@19.11, Christoph Hellwig <hch@infradead.org> wrote: > > >> enum { >> + /* valid bit mask */ >> + NVM_LOGPAGE_STATE_MASK = 0x3, >> + NVM_LOGPAGE_SEVERITY_MASK = 0x5, >> + >> + /* scope */ >> + NVM_LOGPAGE_SCOPE_SECTOR = 1, >> + NVM_LOGPAGE_SCOPE_CHUNK = 2, >> + NVM_LOGPAGE_SCOPE_LUN = 4, >> + >> + /* severity */ >> + NVM_LOGPAGE_SEVERITY_LOW = 1, >> + NVM_LOGPAGE_SEVERITY_MID = 2, >> + NVM_LOGPAGE_SEVERITY_HIGH = 4, >> + NVM_LOGPAGE_SEVERITY_UNREC = 8, >> + NVM_LOGPAGE_SEVERITY_DEV = 16, >> +}; >> + >> +struct nvm_log_page { >> + struct ppa_addr ppa; >> + u16 scope; >> + u8 severity; >> +}; >> + >> +enum { >> /* Chunk states */ >> NVM_CHK_ST_FREE = 1 << 0, >> NVM_CHK_ST_CLOSED = 1 << 1, >> diff --git a/include/linux/nvme.h b/include/linux/nvme.h >> index 4112e2bd747f..f85053601c5b 100644 >> --- a/include/linux/nvme.h >> +++ b/include/linux/nvme.h >> @@ -440,6 +440,7 @@ enum { >> NVME_AER_VS = 7, >> NVME_AER_NOTICE_NS_CHANGED = 0x0002, >> NVME_AER_NOTICE_FW_ACT_STARTING = 0x0102, >> + NVME_AER_NOTICE_LNVM_CHUNK = 0xd00007, >> }; > > Non of these is in the NVMe spec or a ratified TP, so NAK. I only included these to show an example. At this point I only intend to build the foundation, not upstream non-standard, as you point out. I should have made this more clear in the RFC message. ^ permalink raw reply [flat|nested] 10+ messages in thread