Subject: Re: [LSF/MM TOPIC] Handling of managed IRQs when hotplugging CPUs
From: Hannes Reinecke
To: Ming Lei
Cc: "lsf-pc@lists.linux-foundation.org", SCSI Mailing List, linux-block,
 Thomas Gleixner, Christoph Hellwig
Date: Mon, 25 Feb 2019 18:22:23 +0100
Message-ID: <7b9adff2-4e57-6441-017b-19d164661cf5@suse.de>
References: <99dc311f-16b6-9b0d-e309-198dcc9dcde7@suse.de>

On 2/19/19 3:19 AM, Ming Lei wrote:
> On Tue, Feb 5, 2019 at 11:30 PM Hannes Reinecke wrote:
>>
>> Hi all,
>>
>> this came up during discussion on the mailing list (cf. thread "Question
>> on handling managed IRQs when hotplugging CPUs").
>> The problem is that with managed IRQs and block-mq, I/O will be routed
>> to individual CPUs, and the response will be sent to the IRQ assigned
>> to that CPU.
>>
>> If a CPU hotplug event now occurs while I/O is still in flight, the IRQ
>> will _still_ be assigned to that CPU, causing any pending interrupt to
>> be lost.
>> Hence the driver will never notice that an interrupt has happened, and
>> an I/O timeout occurs.
>
> Lots of drivers' timeout handlers only return BLK_EH_RESET_TIMER, so
> this situation can't be covered by the I/O timeout for these devices.
>
> For example, we have seen I/O hang issues on HPSA and megaraid_sas
> before when the wrong MSI vector was set on an I/O command. Even one
> such issue on aacraid isn't fixed yet.
>
Precisely.

>>
>> One proposal was to quiesce the device when a CPU hotplug event occurs,
>> and only allow for CPU hotplugging once it's fully quiesced.
>
> That is the original solution, but the big problem is that queue
> dependencies exist, such as loop/DM's queue depending on the underlying
> queue, and NVMe I/O queues depending on the admin queue.
>
Well, obviously we would have to wait for _all_ queues to be quiesced.
And for stacked devices we will need to take the I/O stack into account,
true.

>>
>> While this would work, it would introduce quite a system stall, and it
>> actually has a rather big impact on the system.
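To make the quiesce idea a bit more concrete, here is a rough and
completely untested sketch of the shape such a hook could take; the
'example_*' names are just stand-ins, and a real implementation would
have to cover every queue in the I/O stack rather than a single one:

#include <linux/cpuhotplug.h>
#include <linux/blkdev.h>
#include <linux/blk-mq.h>

/* Stand-in for whatever queue(s) the driver actually owns. */
static struct request_queue *example_queue;
static enum cpuhp_state example_hp_state;

static int example_cpu_offline(unsigned int cpu)
{
	/*
	 * Freezing waits for all in-flight requests to complete, so no
	 * completion can be lost when the CPU (and the managed vector
	 * mapped to it) goes away.  This wait is also where the system
	 * stall mentioned above comes from.
	 */
	blk_mq_freeze_queue(example_queue);
	blk_mq_unfreeze_queue(example_queue);
	return 0;
}

static int __init example_init(void)
{
	int ret;

	/*
	 * Dynamic hotplug state; the teardown callback runs early in
	 * CPU offlining, before the CPU is actually taken down.
	 */
	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "block/example:online",
				NULL, example_cpu_offline);
	if (ret < 0)
		return ret;
	example_hp_state = ret;
	return 0;
}

Of course this only drains what is in flight when the callback runs;
anything submitted afterwards, but before the vector is actually torn
down, can still be lost, which is part of why this alone is not a
complete answer.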
>> Another possibility would be to have the driver abort the requests
>> itself, but this requires specific callbacks into the driver, and, of
>> course, the driver having the ability to actually do so.
>>
>> I would like to discuss at LSF/MM how these issues can best be
>> addressed.
>
> One related topic is that the current static queue mapping, with no CPU
> hotplug handler involved, may waste lots of IRQ vectors[1]; how do we
> deal with that problem?
>
> [1] http://lists.infradead.org/pipermail/linux-nvme/2019-January/021961.html
>
Good point. Let's do it.

Cheers,

Hannes