From: Qu Wenruo <quwenruo@cn.fujitsu.com>
To: Nicholas D Steeves <nsteeves@gmail.com>,
	Btrfs BTRFS <linux-btrfs@vger.kernel.org>
Subject: Re: [PATCH v8 00/27][For 4.7] Btrfs: Add inband (write time) de-duplication framework
Date: Mon, 25 Apr 2016 09:25:22 +0800	[thread overview]
Message-ID: <1071b71a-95b7-0083-6fcb-2e551e64fa46@cn.fujitsu.com> (raw)
In-Reply-To: <CAD=QJKgJ9JAgZAOSivJTL-bcLbdkP6UqGb0i6g=fS9j6XKtcLA@mail.gmail.com>



Nicholas D Steeves wrote on 2016/04/22 18:14 -0400:
> Hi Qu,
>
> On 6 April 2016 at 01:22, Qu Wenruo <quwenruo@cn.fujitsu.com> wrote:
>>
>>
>> Nicholas D Steeves wrote on 2016/04/05 23:47 -0400:
>>>
>>> It is unlikely that I will use dedupe, but I imagine your work will
>>> apply to the following wishlist:
>>>
>>> 1. Allow disabling of the memory-backend hash via a kernel argument,
>>> sysctl, or mount option for those of us who have ECC RAM.
>>>      * page_cache never gets pushed to swap, so this should be safe, no?
>>
>> And why is it related to ECC RAM? To avoid memory corruption which will
>> finally lead to file corruption?
>> If so, it makes sense.
>
> Yes, my assumption is that a system with ECC will either correct the
> error, or that an uncorrectable event will trigger the same error
> handling procedure as if the software checksum failed.
>
>> Also I didn't get your point when you mentioned page_cache.
>> For the hash pool, we don't use the page cache. We just use kmalloc,
>> which won't be swapped out.
>> The file page cache is not affected at all.
>
> My apologies, I'm still very new to this, and my "point" only
> demonstrates my lack of understanding.  Thank you for directing me to
> the kmalloc-related sections.
>
>>> 2. Implementing an intelligent cache so that it's possible to offset
>>> the cost of hashing the most actively read data.  I'm guessing there's
>>> already some sort of weighted cache eviction algorithm in place, but I
>>> don't yet know how to look into it, let alone enough to leverage it...
>>
>>
>> I'm not quite a fan of such an intelligent but complicated cache design.
>> The main problem is that we would be putting policy into kernel space.
>>
>> Currently, you either use the last-recent-use in-memory backend, or the
>> all-in on-disk backend.
>> For users who want more precise control over which files/dirs shouldn't
>> go through dedupe, there is the btrfs prop to set a per-file flag to
>> avoid dedupe.
>
> I'm looking into a project for some (hopefully) safe,
> low-hanging-fruit read optimisations, and read that
>
> Qu Wenruo wrote on 2016/04/05 11:08 +0800:
>> In-memory backend is much like an experimental field for new ideas,
>> as it won't affect the on-disk format at all.
>
> Do you think that the last-recent-use in-memory backend could be used in
> this way?  Specifically, I'm wondering whether the even|odd PID method of
> choosing which disk to read from could be replaced with the following
> method for rotational disks:
>
> The last-recent-use in-memory backend stores the value of the last
> allocation group (and/or transaction ID, or something else), with an
> attached value of which disk did the IO.  I imagine it's possible to
> minimize seeks by choosing the disk with the smallest absolute
> difference between requested_location and that disk's
> last-recent-use_location, with a simple static_cast.

For allocation group, did you mean chunk or block group?

>
> Would the addition of that value pair (recent-use_location, disk) keep
> things simple and maybe prove to be useful, or is last-recent-use
> in-memory the wrong place for it?

Maybe I missed something, but this doesn't seem to have anything to do 
with inband dedupe.
It looks more like a RAID read optimization.

And I'm not familiar with btrfs RAID, but it seems that btrfs 
doesn't have anything smart for balancing bio requests.
So it may make sense.

But you also mentioned "each disk". If you are going to do it on a 
per-disk basis, then it may not make any sense, as we already have the 
block level scheduler, which will do bio merging/re-ordering to improve 
performance.

It would be better if you could provide a clearer view of what you are 
going to do.
For example, at the RAID level or at the block device level.

Thanks,
Qu

>
> Thank you for taking the time to reply,
> Nicholas






