From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1752818AbcD1P6c (ORCPT ); Thu, 28 Apr 2016 11:58:32 -0400
Received: from mx1.redhat.com ([209.132.183.28]:45647 "EHLO mx1.redhat.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751722AbcD1P6a (ORCPT ); Thu, 28 Apr 2016 11:58:30 -0400
Date: Thu, 28 Apr 2016 11:58:28 -0400 (EDT)
From: Mikulas Patocka
X-X-Sender: mpatocka@file01.intranet.prod.int.rdu2.redhat.com
To: Ming Lei
cc: Jens Axboe, Linux Kernel Mailing List, linux-block@vger.kernel.org, Christoph Hellwig, Btrfs BTRFS, Shaun Tancheff, Alan Cox, Neil Brown, Liu Bo, Jens Axboe
Subject: Re: [PATCH v3 3/3] block: avoid to call .bi_end_io() recursively
In-Reply-To:
Message-ID:
References: <1461805789-3632-1-git-send-email-ming.lei@canonical.com> <1461805789-3632-3-git-send-email-ming.lei@canonical.com>
User-Agent: Alpine 2.02 (LRH 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-owner@vger.kernel.org
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, 28 Apr 2016, Ming Lei wrote:

> Hi Mikulas,
>
> On Thu, Apr 28, 2016 at 11:29 PM, Mikulas Patocka wrote:
> >
> >
> > On Thu, 28 Apr 2016, Ming Lei wrote:
> >
> >> There were reports about heavy stack use caused by recursive
> >> calls to .bi_end_io() ([1][2][3]). For example, more than 16K of
> >> stack is consumed in a single bio completion path [3], and in [2]
> >> a stack overflow can be triggered if 20 nested dm-crypt layers
> >> are used.
> >>
> >> Patches [1][2][3] were posted to address the issue, but were
> >> never merged. The idea in these patches is basically the same:
> >> they all serialize the recursive calling of .bi_end_io() via a
> >> percpu list.
> >>
> >> This patch takes the same idea, but uses a bio_list to implement
> >> it, which turns out simpler and makes the code more readable.
> >>
> >> One corner case which wasn't covered before is that
> >> .bi_end_io() may be scheduled to run in process context (such
> >> as in btrfs). This patch simply bypasses the optimization in
> >> that case, because a new context should have enough stack
> >> space, and this approach cannot cover it anyway since there is
> >> no easy way to get a per-task linked list head.
> >
> > Hi
> >
> > You could use preempt_disable() and then you could use a per-cpu
> > list even in process context.
>
> Consider why .bi_end_io() is scheduled to process context in the
> first place: it may sleep, so the only workable/simple way I could
> think of is to use a per-task list.

The .bi_end_io callback should not sleep, even if it is called from
process context.

> Given that a new context should have enough stack, and only btrfs
> has this kind of usage as far as I can see, I don't think it is
> worth optimizing.
>
> Thanks,
> Ming

Mikulas