From: Christoph Hellwig
To: Matias Bjørling
Cc: Jens Axboe, Christoph Hellwig, Mark Brown, Keith Busch, linux-next@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org
Subject: Re: linux-next: build failure after merge of the block tree
Date: Thu, 3 Dec 2015 10:06:38 +0100
Message-ID: <20151203090638.GA14329@lst.de>
In-Reply-To: <565FFFA5.6000003@bjorling.me>

On Thu, Dec 03, 2015 at 09:39:01AM +0100, Matias Bjørling wrote:
> A little crazy, yes. The reason is that the NVMe admin queues and the
> NVMe user queues are driven by different request queues. Previously
> this was patched up by having two queues in the lightnvm core, one
> for admin and another for user, but they were later merged into a
> single queue.

Why? If you look at the current structure, we have the admin queue,
which is always allocated by the low-level driver, although it could
and should move to the core eventually. And then we have
command-set-specific request_queues for the I/O queues: one per NS
for NVM currently, either one per NS or one globally for LightNVM,
and in Fabrics I currently have another magic one :) Thanks to the
tagset pointer in struct nvme_ctrl, that's really easy to handle.
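
For reference, a rough sketch of the layout I mean; the field names
here are illustrative and don't exactly match the in-tree code (it
assumes the usual <linux/blkdev.h> and <linux/blk-mq.h> types):

	/*
	 * The controller owns the admin queue (set up by the
	 * low-level driver for now) and one shared I/O tagset.
	 */
	struct nvme_ctrl {
		struct request_queue	*admin_q;	/* admin commands */
		struct blk_mq_tag_set	*tagset;	/* shared by all I/O queues */
	};

	/*
	 * Each command set then builds its request_queue(s) on top of
	 * the shared tagset, e.g. one per namespace for NVM:
	 *
	 *	ns->queue = blk_mq_init_queue(ctrl->tagset);
	 *
	 * LightNVM can do the same per namespace or keep one global
	 * queue, and Fabrics adds its own queue on the same tagset.
	 */
	struct nvme_ns {
		struct nvme_ctrl	*ctrl;
		struct request_queue	*queue;
	};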