From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <9f91936a-7dd7-2ee6-3293-f199ada85210@suse.de>
Date: Thu, 7 Apr 2022 16:03:09 +0800
Subject: Re: [PATCH 22/27] block: refactor discard bio size limiting
From: Coly Li
To: Christoph Hellwig
Cc: dm-devel@redhat.com, linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-um@lists.infradead.org, linux-block@vger.kernel.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, linux-raid@vger.kernel.org,
 linux-mmc@vger.kernel.org, linux-mtd@lists.infradead.org,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, target-devel@vger.kernel.org,
 linux-btrfs@vger.kernel.org, linux-ext4@vger.kernel.org,
 linux-f2fs-devel@lists.sourceforge.net, cluster-devel@redhat.com,
 jfs-discussion@lists.sourceforge.net, linux-nilfs@vger.kernel.org,
 ntfs3@lists.linux.dev, ocfs2-devel@oss.oracle.com, linux-mm@kvack.org,
 Jens Axboe
In-Reply-To: <20220406060516.409838-23-hch@lst.de>
References: <20220406060516.409838-1-hch@lst.de> <20220406060516.409838-23-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 4/6/22 2:05 PM, Christoph Hellwig wrote:
> Move all the logic to limit the discard bio size into a common helper
> so that it is better documented.
>
> Signed-off-by: Christoph Hellwig

Acked-by: Coly Li

Thanks for the change.
Coly Li

> ---
>  block/blk-lib.c | 59 ++++++++++++++++++++++++-------------------------
>  block/blk.h     | 14 ------------
>  2 files changed, 29 insertions(+), 44 deletions(-)
>
> diff --git a/block/blk-lib.c b/block/blk-lib.c
> index 237d60d8b5857..2ae32a722851c 100644
> --- a/block/blk-lib.c
> +++ b/block/blk-lib.c
> @@ -10,6 +10,32 @@
>  
>  #include "blk.h"
>  
> +static sector_t bio_discard_limit(struct block_device *bdev, sector_t sector)
> +{
> +	unsigned int discard_granularity =
> +		bdev_get_queue(bdev)->limits.discard_granularity;
> +	sector_t granularity_aligned_sector;
> +
> +	if (bdev_is_partition(bdev))
> +		sector += bdev->bd_start_sect;
> +
> +	granularity_aligned_sector =
> +		round_up(sector, discard_granularity >> SECTOR_SHIFT);
> +
> +	/*
> +	 * Make sure subsequent bios start aligned to the discard granularity if
> +	 * it needs to be split.
> +	 */
> +	if (granularity_aligned_sector != sector)
> +		return granularity_aligned_sector - sector;
> +
> +	/*
> +	 * Align the bio size to the discard granularity to make splitting the bio
> +	 * at discard granularity boundaries easier in the driver if needed.
> +	 */
> +	return round_down(UINT_MAX, discard_granularity) >> SECTOR_SHIFT;
> +}
> +
>  int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
>  		sector_t nr_sects, gfp_t gfp_mask, int flags,
>  		struct bio **biop)
> @@ -17,7 +43,7 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
>  	struct request_queue *q = bdev_get_queue(bdev);
>  	struct bio *bio = *biop;
>  	unsigned int op;
> -	sector_t bs_mask, part_offset = 0;
> +	sector_t bs_mask;
>  
>  	if (bdev_read_only(bdev))
>  		return -EPERM;
> @@ -48,36 +74,9 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
>  	if (!nr_sects)
>  		return -EINVAL;
>  
> -	/* In case the discard request is in a partition */
> -	if (bdev_is_partition(bdev))
> -		part_offset = bdev->bd_start_sect;
> -
>  	while (nr_sects) {
> -		sector_t granularity_aligned_lba, req_sects;
> -		sector_t sector_mapped = sector + part_offset;
> -
> -		granularity_aligned_lba = round_up(sector_mapped,
> -			q->limits.discard_granularity >> SECTOR_SHIFT);
> -
> -		/*
> -		 * Check whether the discard bio starts at a discard_granularity
> -		 * aligned LBA,
> -		 * - If no: set (granularity_aligned_lba - sector_mapped) to
> -		 *   bi_size of the first split bio, then the second bio will
> -		 *   start at a discard_granularity aligned LBA on the device.
> -		 * - If yes: use bio_aligned_discard_max_sectors() as the max
> -		 *   possible bi_size of the first split bio. Then when this bio
> -		 *   is split in device drive, the split ones are very probably
> -		 *   to be aligned to discard_granularity of the device's queue.
> -		 */
> -		if (granularity_aligned_lba == sector_mapped)
> -			req_sects = min_t(sector_t, nr_sects,
> -					  bio_aligned_discard_max_sectors(q));
> -		else
> -			req_sects = min_t(sector_t, nr_sects,
> -					  granularity_aligned_lba - sector_mapped);
> -
> -		WARN_ON_ONCE((req_sects << 9) > UINT_MAX);
> +		sector_t req_sects =
> +			min(nr_sects, bio_discard_limit(bdev, sector));
>  
>  		bio = blk_next_bio(bio, bdev, 0, op, gfp_mask);
>  		bio->bi_iter.bi_sector = sector;
> diff --git a/block/blk.h b/block/blk.h
> index 8ccbc6e076369..1fdc1d28e6d60 100644
> --- a/block/blk.h
> +++ b/block/blk.h
> @@ -346,20 +346,6 @@ static inline unsigned int bio_allowed_max_sectors(struct request_queue *q)
>  	return round_down(UINT_MAX, queue_logical_block_size(q)) >> 9;
>  }
>  
> -/*
> - * The max bio size which is aligned to q->limits.discard_granularity. This
> - * is a hint to split large discard bio in generic block layer, then if device
> - * driver needs to split the discard bio into smaller ones, their bi_size can
> - * be very probably and easily aligned to discard_granularity of the device's
> - * queue.
> - */
> -static inline unsigned int bio_aligned_discard_max_sectors(
> -	struct request_queue *q)
> -{
> -	return round_down(UINT_MAX, q->limits.discard_granularity) >>
> -			SECTOR_SHIFT;
> -}
> -
>  /*
>   * Internal io_context interface
>   */
> + */ > + return round_down(UINT_MAX, discard_granularity) >> SECTOR_SHIFT; > +} > + > int __blkdev_issue_discard(struct block_device *bdev, sector_t sector, > sector_t nr_sects, gfp_t gfp_mask, int flags, > struct bio **biop) > @@ -17,7 +43,7 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector, > struct request_queue *q = bdev_get_queue(bdev); > struct bio *bio = *biop; > unsigned int op; > - sector_t bs_mask, part_offset = 0; > + sector_t bs_mask; > > if (bdev_read_only(bdev)) > return -EPERM; > @@ -48,36 +74,9 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector, > if (!nr_sects) > return -EINVAL; > > - /* In case the discard request is in a partition */ > - if (bdev_is_partition(bdev)) > - part_offset = bdev->bd_start_sect; > - > while (nr_sects) { > - sector_t granularity_aligned_lba, req_sects; > - sector_t sector_mapped = sector + part_offset; > - > - granularity_aligned_lba = round_up(sector_mapped, > - q->limits.discard_granularity >> SECTOR_SHIFT); > - > - /* > - * Check whether the discard bio starts at a discard_granularity > - * aligned LBA, > - * - If no: set (granularity_aligned_lba - sector_mapped) to > - * bi_size of the first split bio, then the second bio will > - * start at a discard_granularity aligned LBA on the device. > - * - If yes: use bio_aligned_discard_max_sectors() as the max > - * possible bi_size of the first split bio. Then when this bio > - * is split in device drive, the split ones are very probably > - * to be aligned to discard_granularity of the device's queue. 
> - */ > - if (granularity_aligned_lba == sector_mapped) > - req_sects = min_t(sector_t, nr_sects, > - bio_aligned_discard_max_sectors(q)); > - else > - req_sects = min_t(sector_t, nr_sects, > - granularity_aligned_lba - sector_mapped); > - > - WARN_ON_ONCE((req_sects << 9) > UINT_MAX); > + sector_t req_sects = > + min(nr_sects, bio_discard_limit(bdev, sector)); > > bio = blk_next_bio(bio, bdev, 0, op, gfp_mask); > bio->bi_iter.bi_sector = sector; > diff --git a/block/blk.h b/block/blk.h > index 8ccbc6e076369..1fdc1d28e6d60 100644 > --- a/block/blk.h > +++ b/block/blk.h > @@ -346,20 +346,6 @@ static inline unsigned int bio_allowed_max_sectors(struct request_queue *q) > return round_down(UINT_MAX, queue_logical_block_size(q)) >> 9; > } > > -/* > - * The max bio size which is aligned to q->limits.discard_granularity. This > - * is a hint to split large discard bio in generic block layer, then if device > - * driver needs to split the discard bio into smaller ones, their bi_size can > - * be very probably and easily aligned to discard_granularity of the device's > - * queue. 
> - */ > -static inline unsigned int bio_aligned_discard_max_sectors( > - struct request_queue *q) > -{ > - return round_down(UINT_MAX, q->limits.discard_granularity) >> > - SECTOR_SHIFT; > -} > - > /* > * Internal io_context interface > */ ______________________________________________________ Linux MTD discussion mailing list http://lists.infradead.org/mailman/listinfo/linux-mtd/ From mboxrd@z Thu Jan 1 00:00:00 1970 From: Coly Li Date: Thu, 7 Apr 2022 16:03:09 +0800 Subject: [Cluster-devel] [PATCH 22/27] block: refactor discard bio size limiting In-Reply-To: <20220406060516.409838-23-hch@lst.de> References: <20220406060516.409838-1-hch@lst.de> <20220406060516.409838-23-hch@lst.de> Message-ID: <9f91936a-7dd7-2ee6-3293-f199ada85210@suse.de> List-Id: To: cluster-devel.redhat.com MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit On 4/6/22 2:05 PM, Christoph Hellwig wrote: > Move all the logic to limit the discard bio size into a common helper > so that it is better documented. > > Signed-off-by: Christoph Hellwig Acked-by: Coly Li Thanks for the change. Coly Li > --- > block/blk-lib.c | 59 ++++++++++++++++++++++++------------------------- > block/blk.h | 14 ------------ > 2 files changed, 29 insertions(+), 44 deletions(-) > > diff --git a/block/blk-lib.c b/block/blk-lib.c > index 237d60d8b5857..2ae32a722851c 100644 > --- a/block/blk-lib.c > +++ b/block/blk-lib.c > @@ -10,6 +10,32 @@ > > #include "blk.h" > > +static sector_t bio_discard_limit(struct block_device *bdev, sector_t sector) > +{ > + unsigned int discard_granularity = > + bdev_get_queue(bdev)->limits.discard_granularity; > + sector_t granularity_aligned_sector; > + > + if (bdev_is_partition(bdev)) > + sector += bdev->bd_start_sect; > + > + granularity_aligned_sector = > + round_up(sector, discard_granularity >> SECTOR_SHIFT); > + > + /* > + * Make sure subsequent bios start aligned to the discard granularity if > + * it needs to be split. 
> + */ > + if (granularity_aligned_sector != sector) > + return granularity_aligned_sector - sector; > + > + /* > + * Align the bio size to the discard granularity to make splitting the bio > + * at discard granularity boundaries easier in the driver if needed. > + */ > + return round_down(UINT_MAX, discard_granularity) >> SECTOR_SHIFT; > +} > + > int __blkdev_issue_discard(struct block_device *bdev, sector_t sector, > sector_t nr_sects, gfp_t gfp_mask, int flags, > struct bio **biop) > @@ -17,7 +43,7 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector, > struct request_queue *q = bdev_get_queue(bdev); > struct bio *bio = *biop; > unsigned int op; > - sector_t bs_mask, part_offset = 0; > + sector_t bs_mask; > > if (bdev_read_only(bdev)) > return -EPERM; > @@ -48,36 +74,9 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector, > if (!nr_sects) > return -EINVAL; > > - /* In case the discard request is in a partition */ > - if (bdev_is_partition(bdev)) > - part_offset = bdev->bd_start_sect; > - > while (nr_sects) { > - sector_t granularity_aligned_lba, req_sects; > - sector_t sector_mapped = sector + part_offset; > - > - granularity_aligned_lba = round_up(sector_mapped, > - q->limits.discard_granularity >> SECTOR_SHIFT); > - > - /* > - * Check whether the discard bio starts at a discard_granularity > - * aligned LBA, > - * - If no: set (granularity_aligned_lba - sector_mapped) to > - * bi_size of the first split bio, then the second bio will > - * start at a discard_granularity aligned LBA on the device. > - * - If yes: use bio_aligned_discard_max_sectors() as the max > - * possible bi_size of the first split bio. Then when this bio > - * is split in device drive, the split ones are very probably > - * to be aligned to discard_granularity of the device's queue. 
> - */ > - if (granularity_aligned_lba == sector_mapped) > - req_sects = min_t(sector_t, nr_sects, > - bio_aligned_discard_max_sectors(q)); > - else > - req_sects = min_t(sector_t, nr_sects, > - granularity_aligned_lba - sector_mapped); > - > - WARN_ON_ONCE((req_sects << 9) > UINT_MAX); > + sector_t req_sects = > + min(nr_sects, bio_discard_limit(bdev, sector)); > > bio = blk_next_bio(bio, bdev, 0, op, gfp_mask); > bio->bi_iter.bi_sector = sector; > diff --git a/block/blk.h b/block/blk.h > index 8ccbc6e076369..1fdc1d28e6d60 100644 > --- a/block/blk.h > +++ b/block/blk.h > @@ -346,20 +346,6 @@ static inline unsigned int bio_allowed_max_sectors(struct request_queue *q) > return round_down(UINT_MAX, queue_logical_block_size(q)) >> 9; > } > > -/* > - * The max bio size which is aligned to q->limits.discard_granularity. This > - * is a hint to split large discard bio in generic block layer, then if device > - * driver needs to split the discard bio into smaller ones, their bi_size can > - * be very probably and easily aligned to discard_granularity of the device's > - * queue. 
> - */ > -static inline unsigned int bio_aligned_discard_max_sectors( > - struct request_queue *q) > -{ > - return round_down(UINT_MAX, q->limits.discard_granularity) >> > - SECTOR_SHIFT; > -} > - > /* > * Internal io_context interface > */ From mboxrd@z Thu Jan 1 00:00:00 1970 From: Coly Li Subject: Re: [PATCH 22/27] block: refactor discard bio size limiting Date: Thu, 7 Apr 2022 16:03:09 +0800 Message-ID: <9f91936a-7dd7-2ee6-3293-f199ada85210@suse.de> References: <20220406060516.409838-1-hch@lst.de> <20220406060516.409838-23-hch@lst.de> Mime-Version: 1.0 Content-Transfer-Encoding: 7bit Return-path: DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa; t=1649318596; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=7k0z5t+RkWgzil40AtKwhJmpL0iBjAFtHXia6nIwjc8=; b=BFj1weRWNO0gJwf1a0jytpVaNV2SIOQyml/DNHaVaAYv+GWtC9RRp3Nkr1QhUY/V7QdLG9 baoJsXmF2FufEolsvsTPwbwZ3Or5/MyPwA8jPnqPoYDtXH5U9TjRT7F8cNWLR3Nqx6fvJv Ws2fOIHqbo98FqfWwnQjig+YJFloGxM= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_ed25519; t=1649318596; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=7k0z5t+RkWgzil40AtKwhJmpL0iBjAFtHXia6nIwjc8=; b=DjSOFiN41nmAx2XAVq/tnPaDKbkIhCw6FaIeWu9KyiZMi8ecy7DPyjacZPoBAIl6E7Jspy 2pyi9oy0n8T7WCDA== Content-Language: en-US In-Reply-To: <20220406060516.409838-23-hch-jcswGhMUV9g@public.gmane.org> List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: drbd-dev-bounces-cunTk1MwBs8qoQakbn7OcQ@public.gmane.org Errors-To: drbd-dev-bounces-cunTk1MwBs8qoQakbn7OcQ@public.gmane.org Content-Type: text/plain; charset="us-ascii"; format="flowed" To: Christoph 
Hellwig Cc: jfs-discussion-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org, linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r@public.gmane.org, virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org, linux-mm-Bw31MaZKKs3YtjvyW6yDsg@public.gmane.org, dm-devel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org, target-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, linux-mtd-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r@public.gmane.org, drbd-dev-cunTk1MwBs8qoQakbn7OcQ@public.gmane.org, linux-s390-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, linux-nilfs-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, linux-scsi-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, cluster-devel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org, xen-devel-GuqFBffKawtpuQazS67q72D2FQJk+8+b@public.gmane.org, linux-ext4-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, linux-um-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r@public.gmane.org, nbd-2H2hN8V1XRtuHlm7Suoebg@public.gmane.org, linux-block-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, linux-bcache-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, Jens Axboe , linux-raid-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, linux-mmc-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, linux-f2fs-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org, linux-xfs-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, ocfs2-devel-N0ozoZBvEnrZJqsBc5GL+g@public.gmane.org, linux-fsdevel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, ntfs3-cunTk1MwBs/YUNznpcFYbw@public.gmane.org, linux-btrfs-u79uwXL29TY76Z2rM5mHXA@public.gmane.org On 4/6/22 2:05 PM, Christoph Hellwig wrote: > Move all the logic to limit the discard bio size into a common helper > so that it is better documented. > > Signed-off-by: Christoph Hellwig Acked-by: Coly Li Thanks for the change. 
Coly Li > --- > block/blk-lib.c | 59 ++++++++++++++++++++++++------------------------- > block/blk.h | 14 ------------ > 2 files changed, 29 insertions(+), 44 deletions(-) > > diff --git a/block/blk-lib.c b/block/blk-lib.c > index 237d60d8b5857..2ae32a722851c 100644 > --- a/block/blk-lib.c > +++ b/block/blk-lib.c > @@ -10,6 +10,32 @@ > > #include "blk.h" > > +static sector_t bio_discard_limit(struct block_device *bdev, sector_t sector) > +{ > + unsigned int discard_granularity = > + bdev_get_queue(bdev)->limits.discard_granularity; > + sector_t granularity_aligned_sector; > + > + if (bdev_is_partition(bdev)) > + sector += bdev->bd_start_sect; > + > + granularity_aligned_sector = > + round_up(sector, discard_granularity >> SECTOR_SHIFT); > + > + /* > + * Make sure subsequent bios start aligned to the discard granularity if > + * it needs to be split. > + */ > + if (granularity_aligned_sector != sector) > + return granularity_aligned_sector - sector; > + > + /* > + * Align the bio size to the discard granularity to make splitting the bio > + * at discard granularity boundaries easier in the driver if needed. 
> + */ > + return round_down(UINT_MAX, discard_granularity) >> SECTOR_SHIFT; > +} > + > int __blkdev_issue_discard(struct block_device *bdev, sector_t sector, > sector_t nr_sects, gfp_t gfp_mask, int flags, > struct bio **biop) > @@ -17,7 +43,7 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector, > struct request_queue *q = bdev_get_queue(bdev); > struct bio *bio = *biop; > unsigned int op; > - sector_t bs_mask, part_offset = 0; > + sector_t bs_mask; > > if (bdev_read_only(bdev)) > return -EPERM; > @@ -48,36 +74,9 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector, > if (!nr_sects) > return -EINVAL; > > - /* In case the discard request is in a partition */ > - if (bdev_is_partition(bdev)) > - part_offset = bdev->bd_start_sect; > - > while (nr_sects) { > - sector_t granularity_aligned_lba, req_sects; > - sector_t sector_mapped = sector + part_offset; > - > - granularity_aligned_lba = round_up(sector_mapped, > - q->limits.discard_granularity >> SECTOR_SHIFT); > - > - /* > - * Check whether the discard bio starts at a discard_granularity > - * aligned LBA, > - * - If no: set (granularity_aligned_lba - sector_mapped) to > - * bi_size of the first split bio, then the second bio will > - * start at a discard_granularity aligned LBA on the device. > - * - If yes: use bio_aligned_discard_max_sectors() as the max > - * possible bi_size of the first split bio. Then when this bio > - * is split in device drive, the split ones are very probably > - * to be aligned to discard_granularity of the device's queue. 
> - */ > - if (granularity_aligned_lba == sector_mapped) > - req_sects = min_t(sector_t, nr_sects, > - bio_aligned_discard_max_sectors(q)); > - else > - req_sects = min_t(sector_t, nr_sects, > - granularity_aligned_lba - sector_mapped); > - > - WARN_ON_ONCE((req_sects << 9) > UINT_MAX); > + sector_t req_sects = > + min(nr_sects, bio_discard_limit(bdev, sector)); > > bio = blk_next_bio(bio, bdev, 0, op, gfp_mask); > bio->bi_iter.bi_sector = sector; > diff --git a/block/blk.h b/block/blk.h > index 8ccbc6e076369..1fdc1d28e6d60 100644 > --- a/block/blk.h > +++ b/block/blk.h > @@ -346,20 +346,6 @@ static inline unsigned int bio_allowed_max_sectors(struct request_queue *q) > return round_down(UINT_MAX, queue_logical_block_size(q)) >> 9; > } > > -/* > - * The max bio size which is aligned to q->limits.discard_granularity. This > - * is a hint to split large discard bio in generic block layer, then if device > - * driver needs to split the discard bio into smaller ones, their bi_size can > - * be very probably and easily aligned to discard_granularity of the device's > - * queue. > - */ > -static inline unsigned int bio_aligned_discard_max_sectors( > - struct request_queue *q) > -{ > - return round_down(UINT_MAX, q->limits.discard_granularity) >> > - SECTOR_SHIFT; > -} > - > /* > * Internal io_context interface > */