From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <mail_weber_wang@163.com>
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S933697AbcATE7p (ORCPT );
	Tue, 19 Jan 2016 23:59:45 -0500
Received: from m50-132.163.com ([123.125.50.132]:53543 "EHLO m50-132.163.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S933127AbcATE7h (ORCPT );
	Tue, 19 Jan 2016 23:59:37 -0500
From: Wenbo Wang <mail_weber_wang@163.com>
To: keith.busch@intel.com, axboe@fb.com
Cc: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org,
	wenwei.tao@memblaze.com, Wenbo Wang, Wenbo Wang
Subject: [PATCH] NVMe: init nvme queue before enabling irq
Date: Tue, 19 Jan 2016 23:57:40 -0500
Message-Id: <1453265860-31080-1-git-send-email-mail_weber_wang@163.com>
X-Mailer: git-send-email 1.8.3.1
X-CM-TRANSID: DNGowEDZhFHQE59W4fdbAA--.5322S2
X-Coremail-Antispam: 1Uf129KBjvJXoW7Wryrtr4xJry5uFWxCr48JFb_yoW8WFyUpF
	W8Z3WkCr97CayUK3y3Aw4DZr13uwnaqr9rAryxKa43Ar1Utry5Zr1aka4YvF4rGrWvqa1r
	XrW8Jw47ury5ZrJanT9S1TB71UUUUUDqnTZGkaVYY2UrUUUUjbIjqfuFe4nvWSU5nxnvy2
	9KBjDUYxBIdaVFxhVjvjDU0xZFpf9x07UJ8nrUUUUU=
X-Originating-IP: [111.204.49.2]
X-CM-SenderInfo: xpdlzspzhev2xbzd0wi6rwjhhfrp/1tbiVBH1IFUL65sNogAAsR
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

During the reset process, the ioremapped nvme_dev->bar may change, so
nvmeq->q_db must also be updated by nvme_init_queue(). Currently the
nvmeq irq is enabled before queue init, so a spurious interrupt that
triggers nvme_process_cq may access nvmeq->q_db just before it is
updated, which could cause a kernel panic.

Signed-off-by: Wenbo Wang <mail_weber_wang@163.com>
Reviewed-by: Wenwei Tao <wenwei.tao@memblaze.com>
---
 drivers/nvme/host/pci.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index f5c0e26..df55f28 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1590,11 +1590,17 @@ static int nvme_create_queue(struct nvme_queue *nvmeq, int qid)
 	if (result < 0)
 		goto release_cq;
 
+	/*
+	 * Initialize the queue doorbell ioremap address before enabling the
+	 * irq; otherwise a spurious interrupt that triggers nvme_process_cq
+	 * may access an invalid address.
+	 */
+	nvme_init_queue(nvmeq, qid);
+
 	result = queue_request_irq(dev, nvmeq, nvmeq->irqname);
 	if (result < 0)
 		goto release_sq;
 
-	nvme_init_queue(nvmeq, qid);
 	return result;
 
 release_sq:
@@ -1789,6 +1795,8 @@ static int nvme_configure_admin_queue(struct nvme_dev *dev)
 	if (result)
 		goto free_nvmeq;
 
+	nvme_init_queue(nvmeq, 0);
+
 	nvmeq->cq_vector = 0;
 	result = queue_request_irq(dev, nvmeq, nvmeq->irqname);
 	if (result) {
@@ -3164,7 +3172,6 @@ static void nvme_probe_work(struct work_struct *work)
 		goto disable;
 	}
 
-	nvme_init_queue(dev->queues[0], 0);
 	result = nvme_alloc_admin_tags(dev);
 	if (result)
 		goto disable;
-- 
1.8.3.1
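
The race the patch closes can be illustrated with a small standalone
sketch. The names below (struct queue, init_queue, enable_irq,
process_cq) are hypothetical stand-ins for struct nvme_queue,
nvme_init_queue(), queue_request_irq() and nvme_process_cq(), not the
driver's API; the point is only that the doorbell pointer must be
re-derived from the remapped BAR before the interrupt handler is
allowed to run.

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical, simplified queue state -- not the real struct nvme_queue. */
struct queue {
	uint32_t *q_db;   /* doorbell pointer derived from the BAR mapping */
	int irq_enabled;  /* stands in for queue_request_irq() having run */
};

/* Re-derive the doorbell pointer from the (possibly remapped) BAR. */
static void init_queue(struct queue *q, uint32_t *bar, int qid, int stride)
{
	/* 4 KiB doorbell offset, expressed in 32-bit words */
	q->q_db = bar + 1024 + qid * 2 * stride;
}

/* Stand-in for queue_request_irq(): after this the handler may fire. */
static void enable_irq(struct queue *q)
{
	q->irq_enabled = 1;
}

/* Stand-in for nvme_process_cq() running from a (spurious) interrupt. */
static void process_cq(struct queue *q, uint32_t head)
{
	*q->q_db = head; /* faults if q_db still points into the old ioremap */
}

int main(void)
{
	static uint32_t new_bar[2048];   /* the BAR mapping after reset */
	struct queue q = { .q_db = NULL, .irq_enabled = 0 };

	/*
	 * Patch ordering: derive q_db from the new BAR before the irq can
	 * deliver, so a spurious interrupt never sees a stale pointer.
	 */
	init_queue(&q, new_bar, 1, 1);
	enable_irq(&q);
	process_cq(&q, 0);

	assert(q.q_db >= new_bar && q.q_db < new_bar + 2048);
	return 0;
}

Ordering init_queue() before enable_irq() mirrors the patch: once the
irq is requested, the handler may fire at any time (including
spuriously), so every pointer it dereferences must already be valid.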