From mboxrd@z Thu Jan 1 00:00:00 1970
From: Keith Busch
Cc: Kernel Team, Keith Busch
Subject: [PATCHv3 4/6] block/merge: count bytes instead of sectors
Date: Mon, 23 May 2022 14:01:17 -0700
Message-ID: <20220523210119.2500150-5-kbusch@fb.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220523210119.2500150-1-kbusch@fb.com>
References: <20220523210119.2500150-1-kbusch@fb.com>
X-Mailing-List: linux-block@vger.kernel.org

From: Keith Busch

Individual bv_len's may not be a sector size: accumulating bv_len >> 9
per bvec silently drops each vector's sub-sector remainder. Count the
split limit in bytes instead, align the byte total down to the queue's
logical block size, and convert back to sectors only at the final
bio_split() call.

Signed-off-by: Keith Busch
---
 block/blk-merge.c | 35 ++++++++++++++++++-----------------
 1 file changed, 18 insertions(+), 17 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 7771dacc99cb..46cbbc10d68d 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -201,11 +201,11 @@ static inline unsigned get_max_segment_size(const struct request_queue *q,
  * @nsegs: [in,out] Number of segments in the bio being built. Incremented
  *	by the number of segments from @bv that may be appended to that
  *	bio without exceeding @max_segs
- * @sectors: [in,out] Number of sectors in the bio being built. Incremented
- *	by the number of sectors from @bv that may be appended to that
- *	bio without exceeding @max_sectors
+ * @bytes: [in,out] Number of bytes in the bio being built. Incremented
+ *	by the number of bytes from @bv that may be appended to that
+ *	bio without exceeding @max_bytes
  * @max_segs: [in] upper bound for *@nsegs
- * @max_sectors: [in] upper bound for *@sectors
+ * @max_bytes: [in] upper bound for *@bytes
  *
  * When splitting a bio, it can happen that a bvec is encountered that is too
  * big to fit in a single segment and hence that it has to be split in the
@@ -216,10 +216,10 @@ static inline unsigned get_max_segment_size(const struct request_queue *q,
  */
 static bool bvec_split_segs(const struct request_queue *q,
 			    const struct bio_vec *bv, unsigned *nsegs,
-			    unsigned *sectors, unsigned max_segs,
-			    unsigned max_sectors)
+			    unsigned *bytes, unsigned max_segs,
+			    unsigned max_bytes)
 {
-	unsigned max_len = (min(max_sectors, UINT_MAX >> 9) - *sectors) << 9;
+	unsigned max_len = min(max_bytes, UINT_MAX) - *bytes;
 	unsigned len = min(bv->bv_len, max_len);
 	unsigned total_len = 0;
 	unsigned seg_size = 0;
@@ -237,7 +237,7 @@ static bool bvec_split_segs(const struct request_queue *q,
 		break;
 	}
 
-	*sectors += total_len >> 9;
+	*bytes += total_len;
 
 	/* tell the caller to split the bvec if it is too big to fit */
 	return len > 0 || bv->bv_len > max_len;
@@ -269,8 +269,8 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 {
 	struct bio_vec bv, bvprv, *bvprvp = NULL;
 	struct bvec_iter iter;
-	unsigned nsegs = 0, sectors = 0;
-	const unsigned max_sectors = get_max_io_size(q, bio);
+	unsigned nsegs = 0, bytes = 0;
+	const unsigned max_bytes = get_max_io_size(q, bio) << 9;
 	const unsigned max_segs = queue_max_segments(q);
 
 	bio_for_each_bvec(bv, bio, iter) {
@@ -282,12 +282,12 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 			goto split;
 
 		if (nsegs < max_segs &&
-		    sectors + (bv.bv_len >> 9) <= max_sectors &&
+		    bytes + bv.bv_len <= max_bytes &&
 		    bv.bv_offset + bv.bv_len <= PAGE_SIZE) {
 			nsegs++;
-			sectors += bv.bv_len >> 9;
-		} else if (bvec_split_segs(q, &bv, &nsegs, &sectors, max_segs,
-					   max_sectors)) {
+			bytes += bv.bv_len;
+		} else if (bvec_split_segs(q, &bv, &nsegs, &bytes, max_segs,
+					   max_bytes)) {
 			goto split;
 		}
 
@@ -299,6 +299,7 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 	return NULL;
 split:
 	*segs = nsegs;
+	bytes = ALIGN_DOWN(bytes, queue_logical_block_size(q));
 
 	/*
 	 * Bio splitting may cause subtle trouble such as hang when doing sync
@@ -306,7 +307,7 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 	 * big IO can be trival, disable iopoll when split needed.
 	 */
 	bio_clear_polled(bio);
-	return bio_split(bio, sectors, GFP_NOIO, bs);
+	return bio_split(bio, bytes >> 9, GFP_NOIO, bs);
 }
 
 /**
@@ -375,7 +376,7 @@ EXPORT_SYMBOL(blk_queue_split);
 unsigned int blk_recalc_rq_segments(struct request *rq)
 {
 	unsigned int nr_phys_segs = 0;
-	unsigned int nr_sectors = 0;
+	unsigned int bytes = 0;
 	struct req_iterator iter;
 	struct bio_vec bv;
 
@@ -398,7 +399,7 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
 	}
 
 	rq_for_each_bvec(bv, rq, iter)
-		bvec_split_segs(rq->q, &bv, &nr_phys_segs, &nr_sectors,
+		bvec_split_segs(rq->q, &bv, &nr_phys_segs, &bytes,
 				UINT_MAX, UINT_MAX);
 	return nr_phys_segs;
 }
-- 
2.30.2