Date: Mon, 21 Aug 2017 14:07:37 +0200
From: Christoph Hellwig
To: Marc Zyngier
Cc: YASUAKI ISHIMATSU, tglx@linutronix.de, axboe@kernel.dk,
	mpe@ellerman.id.au, keith.busch@intel.com, peterz@infradead.org,
	LKML, Christoph Hellwig
Subject: Re: system hung up when offlining CPUs
Message-ID: <20170821120737.GA11622@lst.de>
References: <20170809124213.0d9518bb@why.wild-wind.fr.eu.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.5.17 (2007-11-01)
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Marc,

in general the driver should know not to use the queue / irq, as blk-mq
will never schedule I/O to queues that have no online cpus.

The real bug seems to be that we're using affinity for a device that
only has one real queue (as the config queue should not have affinity).

Let me dig into what's going on here with virtio.
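
Roughly what I have in mind, as a sketch only (this is not the actual
virtio_pci code; foo_setup_irqs() and struct foo_dev are made-up names
for illustration): mark the config vector as a pre_vector so it never
gets managed affinity, and only ask for affinity spreading at all when
the device has more than one real I/O queue:

#include <linux/pci.h>
#include <linux/interrupt.h>

struct foo_dev {
	struct pci_dev *pdev;
	unsigned int nr_io_queues;	/* real I/O queues, config queue excluded */
};

static int foo_setup_irqs(struct foo_dev *foo)
{
	/* vector 0 carries the config interrupt and must not be spread */
	struct irq_affinity affd = {
		.pre_vectors = 1,
	};
	unsigned int nvecs = foo->nr_io_queues + 1;

	/*
	 * Managed affinity only makes sense with more than one real queue;
	 * otherwise the single I/O vector can end up with an affinity mask
	 * whose CPUs all go offline.
	 */
	if (foo->nr_io_queues > 1)
		return pci_alloc_irq_vectors_affinity(foo->pdev, nvecs, nvecs,
				PCI_IRQ_MSIX | PCI_IRQ_AFFINITY, &affd);

	/* single real queue: plain MSI-X vectors, no managed affinity */
	return pci_alloc_irq_vectors(foo->pdev, nvecs, nvecs, PCI_IRQ_MSIX);
}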