* Re: [RFC] ubihealthd
       [not found] <9b03f4d6e2004f07b30a13cfa4cfcc96@SIWEX5A.sing.micron.com>
@ 2017-06-27 14:51 ` Richard Weinberger
  0 siblings, 0 replies; 6+ messages in thread
From: Richard Weinberger @ 2017-06-27 14:51 UTC (permalink / raw)
  To: Bean Huo (beanhuo); +Cc: linux-mtd

Bean,

Am 27.06.2017 um 16:25 schrieb Bean Huo (beanhuo):
> I mean these two links:
> http://lists.infradead.org/pipermail/linux-mtd/2015-November/063122.html
> http://lists.infradead.org/pipermail/linux-mtd/2015-March/058519.html
> So you prefer to implement the latest approach in user space via ubihealthd.
> But so far you have no plan to merge it into the mainline kernel and mtd-utils. Right?

I have plans but no time.

> Do you have rough estimates of the CPU and memory usage of ubihealthd?

42.

Really, read the source. :-)

Thanks,
//richard


* Re: [RFC] ubihealthd
       [not found] <f9de52c909c44f5daa7bcb154ec27e2e@SIWEX5A.sing.micron.com>
@ 2017-06-27 13:42 ` Richard Weinberger
  0 siblings, 0 replies; 6+ messages in thread
From: Richard Weinberger @ 2017-06-27 13:42 UTC (permalink / raw)
  To: Bean Huo (beanhuo); +Cc: linux-mtd

Bean,

Am 27.06.2017 um 15:35 schrieb Bean Huo (beanhuo):
> Hi Richard,
> I see the ubihealthd tool has not been merged into mainline mtd-utils, nor has
> the kernel "UBI statistics and bitrot interface":
> http://lists.infradead.org/pipermail/linux-mtd/2015-November/063122.html
> I am a little bit confused. How does it differ from your patch series "UBI: Bitrot checking"?
> http://lists.infradead.org/pipermail/linux-mtd/2015-November/063122.html

This is the same link twice.
I sent multiple approaches; the most recent series is the latest one.

> ubihealthd is a user space tool and can trigger re-reads and scrubbing.
> I see "UBI: Bitrot checking" also does this. Are they separate, or do they work together?
> Can you please provide more information about both?

The plan was for all the logic to live in userspace, hence ubihealthd.

Thanks,
//richard


* Re: [RFC] ubihealthd
  2016-04-15  6:26 ` Sascha Hauer
  2016-04-15  9:02   ` Boris Brezillon
@ 2016-07-05 17:27   ` Daniel Walter
  1 sibling, 0 replies; 6+ messages in thread
From: Daniel Walter @ 2016-07-05 17:27 UTC (permalink / raw)
  To: Sascha Hauer, Richard Weinberger; +Cc: linux-mtd, boris.brezillon, alex

On 04/15/2016 08:26 AM, Sascha Hauer wrote:
> Hi Richard, Daniel,
> 
> On Thu, Nov 05, 2015 at 11:59:59PM +0100, Richard Weinberger wrote:
>> ubihealthd is a tiny C program which takes care of your NAND.
>> It triggers re-reads and scrubbing so that read disturb and data
>> retention issues are addressed before data is lost. Currently the
>> policy is rather trivial: it re-reads every PEB within a given time
>> frame, does the same for scrubbing, and also triggers a re-read if a
>> PEB's read counter exceeds a given threshold.
>>
>> At ELCE some people asked why this is done in userspace. The reason
>> is that this is a classical example of the kernel offering the
>> mechanism and userspace setting the policy. Also, ubihealthd is not
>> mandatory. Depending on your NAND it can help increase its lifetime,
>> but you won't lose data immediately if it does not run for a while.
>> It is to UBI what smartd is to hard disks. I also implemented this
>> in kernel space, and it was messy.
> 
> I gave ubihealthd a try and it basically works as expected. I let it run
> on a UBI device with a ton of (artificial) bitflips and the daemon crawls
> over them, moving the data away.
> 
> Do you have plans to further work on this and to integrate it into the
> kernel and mtd-utils?
> 
> One thing I noticed is that ubihealthd always scrubs blocks, even when
> there are no bitflips in that block. Why is that done? I would assume
> that rewriting a block when there are more bitflips than we can accept
> is enough, no?
> 
> Sascha
> 

Hi Sascha,

sorry for the late reply.

I've picked up working on ubihealthd again, and after your comments and
the comments from Brian I came to the conclusion that we can indeed
skip the scrubbing, since the kernel will do it anyway as soon as a
read request produces bitflips.

I expect to finish the next version of ubihealthd within the next few
days and will send an updated RFC to the list.

daniel


* Re: [RFC] ubihealthd
  2016-04-15  6:26 ` Sascha Hauer
@ 2016-04-15  9:02   ` Boris Brezillon
  2016-07-05 17:27   ` Daniel Walter
  1 sibling, 0 replies; 6+ messages in thread
From: Boris Brezillon @ 2016-04-15  9:02 UTC (permalink / raw)
  To: Sascha Hauer; +Cc: Richard Weinberger, linux-mtd, alex, Daniel Walter

On Fri, 15 Apr 2016 08:26:04 +0200
Sascha Hauer <s.hauer@pengutronix.de> wrote:

> Hi Richard, Daniel,
> 
> On Thu, Nov 05, 2015 at 11:59:59PM +0100, Richard Weinberger wrote:
> > ubihealthd is a tiny C program which takes care of your NAND.
> > It triggers re-reads and scrubbing so that read disturb and data
> > retention issues are addressed before data is lost. Currently the
> > policy is rather trivial: it re-reads every PEB within a given time
> > frame, does the same for scrubbing, and also triggers a re-read if a
> > PEB's read counter exceeds a given threshold.
> > 
> > At ELCE some people asked why this is done in userspace. The reason
> > is that this is a classical example of the kernel offering the
> > mechanism and userspace setting the policy. Also, ubihealthd is not
> > mandatory. Depending on your NAND it can help increase its lifetime,
> > but you won't lose data immediately if it does not run for a while.
> > It is to UBI what smartd is to hard disks. I also implemented this
> > in kernel space, and it was messy.
> 
> I gave ubihealthd a try and it basically works as expected. I let it run
> on a UBI device with a ton of (artificial) bitflips and the daemon crawls
> over them, moving the data away.
> 
> Do you have plans to further work on this and to integrate it into the
> kernel and mtd-utils?
> 
> One thing I noticed is that ubihealthd always scrubs blocks, even when
> there are no bitflips in that block. Why is that done? I would assume
> that rewriting a block when there are more bitflips than we can accept
> is enough, no?

Yep, that's my opinion too: we should not scrub a block if we're
below the bitflip_threshold. If one wants to be conservative and
scrub as soon as there is a single bitflip, one can always manually
set bitflip_threshold to something really low.


-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com


* Re: [RFC] ubihealthd
  2015-11-05 22:59 Richard Weinberger
@ 2016-04-15  6:26 ` Sascha Hauer
  2016-04-15  9:02   ` Boris Brezillon
  2016-07-05 17:27   ` Daniel Walter
  0 siblings, 2 replies; 6+ messages in thread
From: Sascha Hauer @ 2016-04-15  6:26 UTC (permalink / raw)
  To: Richard Weinberger; +Cc: linux-mtd, boris.brezillon, alex, Daniel Walter

Hi Richard, Daniel,

On Thu, Nov 05, 2015 at 11:59:59PM +0100, Richard Weinberger wrote:
> ubihealthd is a tiny C program which takes care of your NAND.
> It triggers re-reads and scrubbing so that read disturb and data
> retention issues are addressed before data is lost. Currently the
> policy is rather trivial: it re-reads every PEB within a given time
> frame, does the same for scrubbing, and also triggers a re-read if a
> PEB's read counter exceeds a given threshold.
> 
> At ELCE some people asked why this is done in userspace. The reason
> is that this is a classical example of the kernel offering the
> mechanism and userspace setting the policy. Also, ubihealthd is not
> mandatory. Depending on your NAND it can help increase its lifetime,
> but you won't lose data immediately if it does not run for a while.
> It is to UBI what smartd is to hard disks. I also implemented this
> in kernel space, and it was messy.

I gave ubihealthd a try and it basically works as expected. I let it run
on a UBI device with a ton of (artificial) bitflips and the daemon crawls
over them, moving the data away.

Do you have plans to further work on this and to integrate it into the
kernel and mtd-utils?

One thing I noticed is that ubihealthd always scrubs blocks, even when
there are no bitflips in that block. Why is that done? I would assume
that rewriting a block when there are more bitflips than we can accept
is enough, no?

Sascha

-- 
Pengutronix e.K.                           |                             |
Industrial Linux Solutions                 | http://www.pengutronix.de/  |
Peiner Str. 6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-0    |
Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555 |


* [RFC] ubihealthd
@ 2015-11-05 22:59 Richard Weinberger
  2016-04-15  6:26 ` Sascha Hauer
  0 siblings, 1 reply; 6+ messages in thread
From: Richard Weinberger @ 2015-11-05 22:59 UTC (permalink / raw)
  To: linux-mtd; +Cc: boris.brezillon, alex

ubihealthd is a tiny C program which takes care of your NAND.
It triggers re-reads and scrubbing so that read disturb and data
retention issues are addressed before data is lost. Currently the
policy is rather trivial: it re-reads every PEB within a given time
frame, does the same for scrubbing, and also triggers a re-read if a
PEB's read counter exceeds a given threshold.

At ELCE some people asked why this is done in userspace. The reason
is that this is a classical example of the kernel offering the
mechanism and userspace setting the policy. Also, ubihealthd is not
mandatory. Depending on your NAND it can help increase its lifetime,
but you won't lose data immediately if it does not run for a while.
It is to UBI what smartd is to hard disks. I also implemented this
in kernel space, and it was messy.

[PATCH 1/4] Add kernel style linked lists
[PATCH 2/4] Include new ioctls and struct in ubi-user.h
[PATCH 3/4] Initial implementation for ubihealthd.
[PATCH 4/4] Documentation for ubihealthd


end of thread, other threads:[~2017-06-27 14:51 UTC | newest]

Thread overview: 6+ messages
-- links below jump to the message on this page --
     [not found] <9b03f4d6e2004f07b30a13cfa4cfcc96@SIWEX5A.sing.micron.com>
2017-06-27 14:51 ` [RFC] ubihealthd Richard Weinberger
     [not found] <f9de52c909c44f5daa7bcb154ec27e2e@SIWEX5A.sing.micron.com>
2017-06-27 13:42 ` Richard Weinberger
2015-11-05 22:59 Richard Weinberger
2016-04-15  6:26 ` Sascha Hauer
2016-04-15  9:02   ` Boris Brezillon
2016-07-05 17:27   ` Daniel Walter
