From mboxrd@z Thu Jan  1 00:00:00 1970
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Tang Junhui, Michael Lyle, Jens Axboe, Sasha Levin
Subject: [PATCH 4.9 258/310] bcache: segregate flash only volume write streams
Date: Wed, 11 Apr 2018 20:36:37 +0200
Message-Id: <20180411183633.621411606@linuxfoundation.org>
X-Mailer: git-send-email 2.17.0
In-Reply-To: <20180411183622.305902791@linuxfoundation.org>
References: <20180411183622.305902791@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: stable-owner@vger.kernel.org
X-Mailing-List: stable@vger.kernel.org

4.9-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Tang Junhui

[ Upstream commit 4eca1cb28d8b0574ca4f1f48e9331c5f852d43b9 ]

In a scenario where there are some flash only volumes and some cached
devices, when many tasks write to these devices in writeback mode, the
write IOs may fall into the same bucket, as below:
	| cached data | flash data | cached data | cached data | flash data |
Then, after writeback of these cached devices, the bucket would look
like this:
	| free | flash data | free | free | flash data |
There is now a lot of free space in this bucket, but since data from the
flash only volumes still exists, the bucket cannot be reclaimed, which
wastes bucket space.

This patch segregates flash only volume write streams from cached
devices, so data from flash only volumes and cached devices can be
stored in different buckets.
Compared to the v1 patch, this patch does not add an additional open
bucket list; it makes a best effort to segregate flash only volume write
streams from cached devices. Sectors of flash only volumes may still be
mixed with dirty sectors of a cached device, but the number is very
small.

[mlyle: fixed commit log formatting, permissions, line endings]

Signed-off-by: Tang Junhui
Reviewed-by: Michael Lyle
Signed-off-by: Michael Lyle
Signed-off-by: Jens Axboe
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
---
 drivers/md/bcache/alloc.c |   19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

--- a/drivers/md/bcache/alloc.c
+++ b/drivers/md/bcache/alloc.c
@@ -512,15 +512,21 @@ struct open_bucket {
 
 /*
  * We keep multiple buckets open for writes, and try to segregate different
- * write streams for better cache utilization: first we look for a bucket where
- * the last write to it was sequential with the current write, and failing that
- * we look for a bucket that was last used by the same task.
+ * write streams for better cache utilization: first we try to segregate flash
+ * only volume write streams from cached devices, secondly we look for a bucket
+ * where the last write to it was sequential with the current write, and
+ * failing that we look for a bucket that was last used by the same task.
  *
  * The ideas is if you've got multiple tasks pulling data into the cache at the
  * same time, you'll get better cache utilization if you try to segregate their
  * data and preserve locality.
  *
- * For example, say you've starting Firefox at the same time you're copying a
+ * For example, dirty sectors of flash only volume is not reclaimable, if their
+ * dirty sectors mixed with dirty sectors of cached device, such buckets will
+ * be marked as dirty and won't be reclaimed, though the dirty data of cached
+ * device have been written back to backend device.
+ *
+ * And say you've starting Firefox at the same time you're copying a
  * bunch of files. Firefox will likely end up being fairly hot and stay in the
  * cache awhile, but the data you copied might not be; if you wrote all that
  * data to the same buckets it'd get invalidated at the same time.
@@ -537,7 +543,10 @@ static struct open_bucket *pick_data_buc
 	struct open_bucket *ret, *ret_task = NULL;
 
 	list_for_each_entry_reverse(ret, &c->data_buckets, list)
-		if (!bkey_cmp(&ret->key, search))
+		if (UUID_FLASH_ONLY(&c->uuids[KEY_INODE(&ret->key)]) !=
+		    UUID_FLASH_ONLY(&c->uuids[KEY_INODE(search)]))
+			continue;
+		else if (!bkey_cmp(&ret->key, search))
 			goto found;
 		else if (ret->last_write_point == write_point)
 			ret_task = ret;