Subject: Re: [PATCH v5 1/2] blk-mq: add tagset quiesce interface
From: Sagi Grimberg
To: paulmck@kernel.org
Cc: Jens Axboe, linux-nvme@lists.infradead.org, Ming Lei, linux-block@vger.kernel.org, Chao Leng, Keith Busch, Ming Lin, Christoph Hellwig
Date: Tue, 28 Jul 2020 16:46:23 -0700
In-Reply-To: <20200728135436.GP9247@paulmck-ThinkPad-P72>
References: <20200727231022.307602-1-sagi@grimberg.me> <20200727231022.307602-2-sagi@grimberg.me> <20200728071859.GA21629@lst.de> <20200728091633.GB1326626@T590> <20200728135436.GP9247@paulmck-ThinkPad-P72>

Hey Paul,

> Indeed you cannot. And if you build with CONFIG_DEBUG_OBJECTS_RCU_HEAD=y
> it will yell at you when you try.
>
> You -can- pass on-stack rcu_head structures to call_srcu(), though,
> if that helps. You of course must have some way of waiting for the
> callback to be invoked before exiting that function. This should be
> easy for me to package into an API, maybe using one of the existing
> reference-counting APIs.
>
> So, do you have a separate stack frame for each of the desired call_srcu()
> invocations?
> If not, do you know at build time how many rcu_head
> structures you need? If the answer to both of these is "no", then
> it is likely that there needs to be an rcu_head in each of the relevant
> data structures, as was noted earlier in this thread.
>
> Yeah, I should go read the code. But I would need to know where it is
> and it is still early in the morning over here! ;-)
>
> I probably should also have read the remainder of the thread before
> replying, as well. But what is the fun in that?

The use-case is quiescing submissions to queues. This comes up in flows
where we want to tear down stuff, and we can potentially have thousands
of queues, each of which we need to quiesce. Each queue (hctx) uses
either RCU or SRCU, depending on whether it may sleep during submission.

The goal is for the overall quiesce to be fast, so we want to wait for
the grace periods of all of these queues to elapse roughly once, in
parallel, instead of synchronizing each one serially as is done today.

The guys here are resisting adding an rcu_synchronize to each and every
hctx, because it would take 32 bytes, more or less, from thousands of
hctxs. Dynamically allocating each one is possible but not very
scalable.

The question is whether there is some way we can do this with an
on-stack or a single on-heap rcu_head, or equivalent, that achieves the
same effect.