From: Dan Streetman
Date: Wed, 1 Feb 2017 10:05:35 -0500
Subject: Re: [PATCH/RESEND v3 3/5] z3fold: extend compaction function
To: Vitaly Wool
Cc: Linux-MM, linux-kernel, Andrew Morton
In-Reply-To: <20170131214334.c4f3eac9a477af0fa9a22c46@gmail.com>
References: <20170131213829.3d86c07ffd1358019354c937@gmail.com> <20170131214334.c4f3eac9a477af0fa9a22c46@gmail.com>

On Tue, Jan 31, 2017 at 3:43 PM, Vitaly Wool wrote:
> z3fold_compact_page() currently only handles the situation when
> there is a single middle chunk within the z3fold page. However, it
> may be worth moving the middle chunk closer to either the first or
> the last chunk, whichever is there, if the gap between them is big
> enough.
>
> This patch adds the relevant code, using the BIG_CHUNK_GAP define as
> a threshold for the middle chunk to be worth moving.
>
> Signed-off-by: Vitaly Wool

Reviewed-by: Dan Streetman

> ---
>  mm/z3fold.c | 26 +++++++++++++++++++++++++-
>  1 file changed, 25 insertions(+), 1 deletion(-)
>
> diff --git a/mm/z3fold.c b/mm/z3fold.c
> index 98ab01f..be8b56e 100644
> --- a/mm/z3fold.c
> +++ b/mm/z3fold.c
> @@ -268,6 +268,7 @@ static inline void *mchunk_memmove(struct z3fold_header *zhdr,
>  			zhdr->middle_chunks << CHUNK_SHIFT);
>  }
>
> +#define BIG_CHUNK_GAP	3
>  /* Has to be called with lock held */
>  static int z3fold_compact_page(struct z3fold_header *zhdr)
>  {
> @@ -286,8 +287,31 @@ static int z3fold_compact_page(struct z3fold_header *zhdr)
>  		zhdr->middle_chunks = 0;
>  		zhdr->start_middle = 0;
>  		zhdr->first_num++;
> +		return 1;
>  	}
> -	return 1;
> +
> +	/*
> +	 * moving data is expensive, so let's only do that if
> +	 * there's substantial gain (at least BIG_CHUNK_GAP chunks)
> +	 */
> +	if (zhdr->first_chunks != 0 && zhdr->last_chunks == 0 &&
> +	    zhdr->start_middle - (zhdr->first_chunks + ZHDR_CHUNKS) >=
> +			BIG_CHUNK_GAP) {
> +		mchunk_memmove(zhdr, zhdr->first_chunks + ZHDR_CHUNKS);
> +		zhdr->start_middle = zhdr->first_chunks + ZHDR_CHUNKS;
> +		return 1;
> +	} else if (zhdr->last_chunks != 0 && zhdr->first_chunks == 0 &&
> +		   TOTAL_CHUNKS - (zhdr->last_chunks + zhdr->start_middle
> +					+ zhdr->middle_chunks) >=
> +			BIG_CHUNK_GAP) {
> +		unsigned short new_start = TOTAL_CHUNKS - zhdr->last_chunks -
> +			zhdr->middle_chunks;
> +		mchunk_memmove(zhdr, new_start);
> +		zhdr->start_middle = new_start;
> +		return 1;
> +	}
> +
> +	return 0;
>  }
>
>  /**
> --
> 2.4.2