linux-bcache.vger.kernel.org archive mirror
* Dirty data loss after cache disk error recovery
@ 2021-04-20  3:17 吴本卿(云桌面 福州)
  2021-04-28 18:30 ` Kai Krakow
  0 siblings, 1 reply; 7+ messages in thread
From: 吴本卿(云桌面 福州) @ 2021-04-20  3:17 UTC (permalink / raw)
  To: linux-bcache

Hi, I recently ran into a problem while using bcache. My cache disk went offline for some reason. When the cache disk came back online, I found the backend in the detached state. I tried to attach the backend to the bcache again and found that the dirty data had been lost. The md5 sum of the same file on the backend's filesystem differs because of the lost dirty data.

I checked the log and found these messages:
[12228.642630] bcache: conditional_stop_bcache_device() stop_when_cache_set_failed of bcache0 is "auto" and cache is dirty, stop it to avoid potential data corruption.
[12228.644072] bcache: cached_dev_detach_finish() Caching disabled for sdb
[12228.644352] bcache: cache_set_free() Cache set 55b9112d-d52b-4e15-aa93-e7d5ccfcac37 unregistered

I checked the bcache code and found that a cache disk I/O error triggers __cache_set_unregister(), which causes the backend to be detached and, with it, the dirty data to be lost: after the backend is reattached, the newly allocated bcache_device->id is incremented, while the bkeys that point to the dirty data still store the old id.

Is there a way to avoid this problem, such as giving users an option to run the stop process instead of detaching when a cache disk error occurs?
I tried to increase cache_set->io_error_limit in order to buy time to stop the cache_set:
echo 4294967295 > /sys/fs/bcache/55b9112d-d52b-4e15-aa93-e7d5ccfcac37/io_error_limit

It did not work in this case, because in addition to bch_count_io_errors(), which calls bch_cache_set_error() once the error limit is exceeded, there are other code paths that call bch_cache_set_error() directly. For example, when an I/O error occurs in the journal:
Apr 19 05:50:18 localhost.localdomain kernel: bcache: bch_cache_set_error() bcache: error on 55b9112d-d52b-4e15-aa93-e7d5ccfcac37: 
Apr 19 05:50:18 localhost.localdomain kernel: journal io error
Apr 19 05:50:18 localhost.localdomain kernel: bcache: bch_cache_set_error() , disabling caching
Apr 19 05:50:18 localhost.localdomain kernel: bcache: conditional_stop_bcache_device() stop_when_cache_set_failed of bcache0 is "auto" and cache is dirty, stop it to avoid potential data corruption.

When an error occurs on the cache device, why is the design to unregister the cache_set? What is the original intention? The unregister operation deletes all backend relationships, which results in the loss of dirty data.
Is it possible to give users the choice to stop the cache_set instead of unregistering it?


* Re: Dirty data loss after cache disk error recovery
  2021-04-20  3:17 Dirty data loss after cache disk error recovery 吴本卿(云桌面 福州)
@ 2021-04-28 18:30 ` Kai Krakow
  2021-04-28 18:39   ` Kai Krakow
  0 siblings, 1 reply; 7+ messages in thread
From: Kai Krakow @ 2021-04-28 18:30 UTC (permalink / raw)
  To: 吴本卿(云桌面 福州)
  Cc: linux-bcache

Hello!

On Tue, 20 Apr 2021 at 05:24, 吴本卿(云桌面 福州)
<wubenqing@ruijie.com.cn> wrote:
>
> Hi, Recently I found a problem in the process of using bcache. My cache disk was offline for some reasons. When the cache disk was back online, I found that the backend in the detached state. I tried to attach the backend to the bcache again, and found that the dirty data was lost. The md5 value of the same file on backend's filesystem is different because dirty data loss.
>
> I checked the log and found that logs:
> [12228.642630] bcache: conditional_stop_bcache_device() stop_when_cache_set_failed of bcache0 is "auto" and cache is dirty, stop it to avoid potential data corruption.

"stop it to avoid potential data corruption" is not what it actually
does: neither it stops it, nor it prevents corruption because dirty
data becomes thrown away.

> [12228.644072] bcache: cached_dev_detach_finish() Caching disabled for sdb
> [12228.644352] bcache: cache_set_free() Cache set 55b9112d-d52b-4e15-aa93-e7d5ccfcac37 unregistered
>
> I checked the code of bcache and found that a cache disk IO error will trigger __cache_set_unregister, which will cause the backend to be datach, which also causes the loss of dirty data. Because after the backend is reattached, the allocated bcache_device->id is incremented, and the bkey that points to the dirty data stores the old id.
>
> Is there a way to avoid this problem, such as providing users with options, if a cache disk error occurs, execute the stop process instead of detach.
> I tried to increase cache_set->io_error_limit, in order to win the time to execute stop cache_set.
> echo 4294967295 > /sys/fs/bcache/55b9112d-d52b-4e15-aa93-e7d5ccfcac37/io_error_limit
>
> It did not work at that time, because in addition to bch_count_io_errors, which calls bch_cache_set_error, there are other code paths that also call bch_cache_set_error. For example, an io error occurs in the journal:
> Apr 19 05:50:18 localhost.localdomain kernel: bcache: bch_cache_set_error() bcache: error on 55b9112d-d52b-4e15-aa93-e7d5ccfcac37:
> Apr 19 05:50:18 localhost.localdomain kernel: journal io error
> Apr 19 05:50:18 localhost.localdomain kernel: bcache: bch_cache_set_error() , disabling caching
> Apr 19 05:50:18 localhost.localdomain kernel: bcache: conditional_stop_bcache_device() stop_when_cache_set_failed of bcache0 is "auto" and cache is dirty, stop it to avoid potential data corruption.
>
> When an error occurs in the cache device, why is it designed to unregister the cache_set? What is the original intention? The unregister operation means that all backend relationships are deleted, which will result in the loss of dirty data.
> Is it possible to provide users with a choice to stop the cache_set instead of unregistering it.

I think the same problem hit me, too, last night.

My kernel choked because of a GPU error, and that somehow disconnected
the cache. I can only guess that there was some sort of timeout due to
blocked queues, and that introduced an IO error which detached the
caches.

Sadly, I only realized this after I had already reformatted and started
the restore from backup: during the restore I watched the bcache status
and found that the devices were not attached.

I don't know if I could have re-attached the devices instead of
formatting. But I think the dirty data would have been discarded
anyway due to the incremented bcache_device->id.

This really needs a better solution; detaching is one of the worst
options. On btrfs especially, it has catastrophic consequences because
data is not updated in place but via copy-on-write, which requires
updating a lot of pointers. Usually a CoW filesystem would be robust
against this kind of data loss, but the vast amount of dirty data that
is thrown away puts the tree generations too far behind what btrfs
expects, making it essentially broken beyond repair. If some trees in
the FS are just a few generations behind, btrfs can repair itself by
using a backup tree root, but when the bcache cache is lost, generation
numbers usually lag several hundred generations behind. Detaching would
be fine if there were no dirty data - otherwise the device should
probably stop and refuse any more IO.
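
To illustrate the generation lag (a hedged example only - /dev/bcache0 is
just an illustrative device path, and this assumes btrfs-progs is
installed), one can compare the current superblock generation with the
recorded backup roots after such an event:

btrfs inspect-internal dump-super -f /dev/bcache0 | grep -E '^generation'
btrfs inspect-internal dump-super -f /dev/bcache0 | grep -A2 'backup 0:'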

@Coly If I patched the source to stop instead of detach, would it have
made things any better? Would there be any side effects? Is it possible
to atomically check for dirty data in that case and take one action or
the other?

Thanks,
Kai


* Re: Dirty data loss after cache disk error recovery
  2021-04-28 18:30 ` Kai Krakow
@ 2021-04-28 18:39   ` Kai Krakow
  2021-04-28 18:51     ` Kai Krakow
  2021-05-07 12:13     ` Coly Li
  0 siblings, 2 replies; 7+ messages in thread
From: Kai Krakow @ 2021-04-28 18:39 UTC (permalink / raw)
  To: 吴本卿(云桌面 福州)
  Cc: linux-bcache

Hi Coly!

On Wed, 28 Apr 2021 at 20:30, Kai Krakow <kai@kaishome.de> wrote:
>
> Hello!
>
> On Tue, 20 Apr 2021 at 05:24, 吴本卿(云桌面 福州)
> <wubenqing@ruijie.com.cn> wrote:
> >
> > Hi, Recently I found a problem in the process of using bcache. My cache disk was offline for some reasons. When the cache disk was back online, I found that the backend in the detached state. I tried to attach the backend to the bcache again, and found that the dirty data was lost. The md5 value of the same file on backend's filesystem is different because dirty data loss.
> >
> > I checked the log and found that logs:
> > [12228.642630] bcache: conditional_stop_bcache_device() stop_when_cache_set_failed of bcache0 is "auto" and cache is dirty, stop it to avoid potential data corruption.
>
> "stop it to avoid potential data corruption" is not what it actually
> does: neither it stops it, nor it prevents corruption because dirty
> data becomes thrown away.
>
> > [12228.644072] bcache: cached_dev_detach_finish() Caching disabled for sdb
> > [12228.644352] bcache: cache_set_free() Cache set 55b9112d-d52b-4e15-aa93-e7d5ccfcac37 unregistered
> >
> > I checked the code of bcache and found that a cache disk IO error will trigger __cache_set_unregister, which will cause the backend to be datach, which also causes the loss of dirty data. Because after the backend is reattached, the allocated bcache_device->id is incremented, and the bkey that points to the dirty data stores the old id.
> >
> > Is there a way to avoid this problem, such as providing users with options, if a cache disk error occurs, execute the stop process instead of detach.
> > I tried to increase cache_set->io_error_limit, in order to win the time to execute stop cache_set.
> > echo 4294967295 > /sys/fs/bcache/55b9112d-d52b-4e15-aa93-e7d5ccfcac37/io_error_limit
> >
> > It did not work at that time, because in addition to bch_count_io_errors, which calls bch_cache_set_error, there are other code paths that also call bch_cache_set_error. For example, an io error occurs in the journal:
> > Apr 19 05:50:18 localhost.localdomain kernel: bcache: bch_cache_set_error() bcache: error on 55b9112d-d52b-4e15-aa93-e7d5ccfcac37:
> > Apr 19 05:50:18 localhost.localdomain kernel: journal io error
> > Apr 19 05:50:18 localhost.localdomain kernel: bcache: bch_cache_set_error() , disabling caching
> > Apr 19 05:50:18 localhost.localdomain kernel: bcache: conditional_stop_bcache_device() stop_when_cache_set_failed of bcache0 is "auto" and cache is dirty, stop it to avoid potential data corruption.
> >
> > When an error occurs in the cache device, why is it designed to unregister the cache_set? What is the original intention? The unregister operation means that all backend relationships are deleted, which will result in the loss of dirty data.
> > Is it possible to provide users with a choice to stop the cache_set instead of unregistering it.
>
> I think the same problem hit me, too, last night.
>
> My kernel choked because of a GPU error, and that somehow disconnected
> the cache. I can only guess that there was some sort of timeout due to
> blocked queues, and that introduced an IO error which detached the
> caches.
>
> Sadly, I only realized this after I already reformatted and started
> restore from backup: During the restore I watched the bcache status
> and found that the devices are not attached.
>
> I don't know if I could have re-attached the devices instead of
> formatting. But I think the dirty data would have been discarded
> anyways due to incrementing bcache_device->id.
>
> This really needs a better solution, detaching is one of the worst,
> especially on btrfs this has catastrophic consequences because data is
> not updated inline but via copy on write. This requires updating a lot
> of pointers. Usually, cow filesystem would be robust to this kind of
> data-loss but the vast amount of dirty data that is lost puts the tree
> generations too far behind of what btrfs is expecting, making it
> essentially broken beyond repair. If some trees in the FS are just a
> few generations behind, btrfs can repair itself by using a backup tree
> root, but when the bcache is lost, generation numbers usually lag
> behind several hundred generations. Detaching would be fine if there'd
> be no dirty data - otherwise the device should probably stop and
> refuse any more IO.
>
> @Coly If I patched the source to stop instead of detach, would it have
> made anything better? Would there be any side-effects? Is it possible
> to atomically check for dirty data in that case and take either the
> one or the other action?

I think this behavior was introduced by https://lwn.net/Articles/748226/

So above is my late review. ;-)

(around commit 7e027ca4b534b6b99a7c0471e13ba075ffa3f482 if you cannot
access LWN for reasons[tm])

Thanks,
Kai


* Re: Dirty data loss after cache disk error recovery
  2021-04-28 18:39   ` Kai Krakow
@ 2021-04-28 18:51     ` Kai Krakow
  2021-05-07 12:11       ` Coly Li
  2021-05-07 12:13     ` Coly Li
  1 sibling, 1 reply; 7+ messages in thread
From: Kai Krakow @ 2021-04-28 18:51 UTC (permalink / raw)
  To: 吴本卿(云桌面 福州)
  Cc: linux-bcache

> I think this behavior was introduced by https://lwn.net/Articles/748226/
>
> So above is my late review. ;-)
>
> (around commit 7e027ca4b534b6b99a7c0471e13ba075ffa3f482 if you cannot
> access LWN for reasons[tm])

The problem may actually come from a different code path which retires
the cache on metadata error:

commit 804f3c6981f5e4a506a8f14dc284cb218d0659ae
"bcache: fix cached_dev->count usage for bch_cache_set_error()"

It probably should consider whether there's any dirty data. As a first
step, it may be sufficient to run a BUG_ON(there_is_dirty_data) (this
would kill the bcache thread, which may not be a good idea), or even
freeze the system with an unrecoverable error, or at least stop the
device to prevent any IO against possibly stale data (because retiring
throws the dirty data away). A good solution would be if the "with
dirty data" error path could somehow force the attached file system
into read-only mode, maybe by just reporting IO errors when this bdev
is accessed through bcache.

Thanks,
Kai


* Re: Dirty data loss after cache disk error recovery
  2021-04-28 18:51     ` Kai Krakow
@ 2021-05-07 12:11       ` Coly Li
  2021-05-07 14:56         ` Kai Krakow
  0 siblings, 1 reply; 7+ messages in thread
From: Coly Li @ 2021-05-07 12:11 UTC (permalink / raw)
  To: Kai Krakow
  Cc: linux-bcache,
	吴本卿(云桌面
	福州)

On 4/29/21 2:51 AM, Kai Krakow wrote:
>> I think this behavior was introduced by https://lwn.net/Articles/748226/
>>
>> So above is my late review. ;-)
>>
>> (around commit 7e027ca4b534b6b99a7c0471e13ba075ffa3f482 if you cannot
>> access LWN for reasons[tm])
> 
> The problem may actually come from a different code path which retires
> the cache on metadata error:
> 
> commit 804f3c6981f5e4a506a8f14dc284cb218d0659ae
> "bcache: fix cached_dev->count usage for bch_cache_set_error()"
> 
> It probably should consider if there's any dirty data. As a first
> step, it may be sufficient to run a BUG_ON(there_is_dirty_data) (this
> would kill the bcache thread, may not be a good idea) or even freeze
> the system with an unrecoverable error, or at least stop the device to
> prevent any IO with possibly stale data (because retiring throws away
> dirty data). A good solution would be if the "with dirty data" error
> path could somehow force the attached file system into read-only mode,
> maybe by just reporting IO errors when this bdev is accessed through
> bcache.


There is an option to panic the system when the cache device fails. It
is the "errors" file, with "unregister" and "panic" as the available
options. This option defaults to "unregister"; if you set it to
"panic", then panic() will be called.
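
For example (a minimal illustration - the UUID is the one from the log
above, and the paths assume sysfs is mounted at /sys):

cat /sys/fs/bcache/55b9112d-d52b-4e15-aa93-e7d5ccfcac37/errors
# reading lists the available actions, with the current one in brackets
echo panic > /sys/fs/bcache/55b9112d-d52b-4e15-aa93-e7d5ccfcac37/errors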

If the cache set is attached, read-only the bcache device does not
prevent the meta data I/O on cache device (when try to cache the reading
data), if the cache device is really disconnected that will be
problematic too.

The "auto" and "always" options are for "unregister" error action. When
I enhance the device failure handling, I don't add new error action, all
my work was to make the "unregister" action work better.
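
(For reference, that per-backing-device knob looks roughly like this -
bcache0 is just an example device name:)

cat /sys/block/bcache0/bcache/stop_when_cache_set_failed
# one of "default", "auto" or "always"; the active choice is bracketed
echo always > /sys/block/bcache0/bcache/stop_when_cache_set_failed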

Adding a new "stop" error action IMHO doesn't make things better. When
the cache device is disconnected, there is always a risk that some
cached data or metadata was not written to the cache device. Permitting
the cache device to be re-attached to the backing device may introduce
"silent data loss", which might be worse....  That was the reason why I
didn't add a new error action in the device failure handling patch set.

Thanks.

Coly Li


* Re: Dirty data loss after cache disk error recovery
  2021-04-28 18:39   ` Kai Krakow
  2021-04-28 18:51     ` Kai Krakow
@ 2021-05-07 12:13     ` Coly Li
  1 sibling, 0 replies; 7+ messages in thread
From: Coly Li @ 2021-05-07 12:13 UTC (permalink / raw)
  To: Kai Krakow,
	吴本卿(云桌面
	福州)
  Cc: linux-bcache

On 4/29/21 2:39 AM, Kai Krakow wrote:
> Hi Coly!
> 
> On Wed, 28 Apr 2021 at 20:30, Kai Krakow <kai@kaishome.de> wrote:
>>
>> Hello!
>>
>> On Tue, 20 Apr 2021 at 05:24, 吴本卿(云桌面 福州)
>> <wubenqing@ruijie.com.cn> wrote:
>>>
>>> Hi, Recently I found a problem in the process of using bcache. My cache disk was offline for some reasons. When the cache disk was back online, I found that the backend in the detached state. I tried to attach the backend to the bcache again, and found that the dirty data was lost. The md5 value of the same file on backend's filesystem is different because dirty data loss.
>>>
>>> I checked the log and found that logs:
>>> [12228.642630] bcache: conditional_stop_bcache_device() stop_when_cache_set_failed of bcache0 is "auto" and cache is dirty, stop it to avoid potential data corruption.
>>
>> "stop it to avoid potential data corruption" is not what it actually
>> does: neither it stops it, nor it prevents corruption because dirty
>> data becomes thrown away.
>>
>>> [12228.644072] bcache: cached_dev_detach_finish() Caching disabled for sdb
>>> [12228.644352] bcache: cache_set_free() Cache set 55b9112d-d52b-4e15-aa93-e7d5ccfcac37 unregistered
>>>
>>> I checked the code of bcache and found that a cache disk IO error will trigger __cache_set_unregister, which will cause the backend to be datach, which also causes the loss of dirty data. Because after the backend is reattached, the allocated bcache_device->id is incremented, and the bkey that points to the dirty data stores the old id.
>>>
>>> Is there a way to avoid this problem, such as providing users with options, if a cache disk error occurs, execute the stop process instead of detach.
>>> I tried to increase cache_set->io_error_limit, in order to win the time to execute stop cache_set.
>>> echo 4294967295 > /sys/fs/bcache/55b9112d-d52b-4e15-aa93-e7d5ccfcac37/io_error_limit
>>>
>>> It did not work at that time, because in addition to bch_count_io_errors, which calls bch_cache_set_error, there are other code paths that also call bch_cache_set_error. For example, an io error occurs in the journal:
>>> Apr 19 05:50:18 localhost.localdomain kernel: bcache: bch_cache_set_error() bcache: error on 55b9112d-d52b-4e15-aa93-e7d5ccfcac37:
>>> Apr 19 05:50:18 localhost.localdomain kernel: journal io error
>>> Apr 19 05:50:18 localhost.localdomain kernel: bcache: bch_cache_set_error() , disabling caching
>>> Apr 19 05:50:18 localhost.localdomain kernel: bcache: conditional_stop_bcache_device() stop_when_cache_set_failed of bcache0 is "auto" and cache is dirty, stop it to avoid potential data corruption.
>>>
>>> When an error occurs in the cache device, why is it designed to unregister the cache_set? What is the original intention? The unregister operation means that all backend relationships are deleted, which will result in the loss of dirty data.
>>> Is it possible to provide users with a choice to stop the cache_set instead of unregistering it.
>>
>> I think the same problem hit me, too, last night.
>>
>> My kernel choked because of a GPU error, and that somehow disconnected
>> the cache. I can only guess that there was some sort of timeout due to
>> blocked queues, and that introduced an IO error which detached the
>> caches.
>>
>> Sadly, I only realized this after I already reformatted and started
>> restore from backup: During the restore I watched the bcache status
>> and found that the devices are not attached.
>>
>> I don't know if I could have re-attached the devices instead of
>> formatting. But I think the dirty data would have been discarded
>> anyways due to incrementing bcache_device->id.
>>
>> This really needs a better solution, detaching is one of the worst,
>> especially on btrfs this has catastrophic consequences because data is
>> not updated inline but via copy on write. This requires updating a lot
>> of pointers. Usually, cow filesystem would be robust to this kind of
>> data-loss but the vast amount of dirty data that is lost puts the tree
>> generations too far behind of what btrfs is expecting, making it
>> essentially broken beyond repair. If some trees in the FS are just a
>> few generations behind, btrfs can repair itself by using a backup tree
>> root, but when the bcache is lost, generation numbers usually lag
>> behind several hundred generations. Detaching would be fine if there'd
>> be no dirty data - otherwise the device should probably stop and
>> refuse any more IO.
>>
>> @Coly If I patched the source to stop instead of detach, would it have
>> made anything better? Would there be any side-effects? Is it possible
>> to atomically check for dirty data in that case and take either the
>> one or the other action?
> 
> I think this behavior was introduced by https://lwn.net/Articles/748226/
> 
> So above is my late review. ;-)
> 
> (around commit 7e027ca4b534b6b99a7c0471e13ba075ffa3f482 if you cannot
> access LWN for reasons[tm])
> 

Hi Kai,

Sorry, I just found this thread in my INBOX. Hope it is not too late. I
replied to your latest message in this thread.

Thanks.

Coly Li


* Re: Dirty data loss after cache disk error recovery
  2021-05-07 12:11       ` Coly Li
@ 2021-05-07 14:56         ` Kai Krakow
  0 siblings, 0 replies; 7+ messages in thread
From: Kai Krakow @ 2021-05-07 14:56 UTC (permalink / raw)
  To: Coly Li
  Cc: linux-bcache,
	吴本卿(云桌面
	福州)

Hi!

> There is an option to panic the system when cache device failed. It is
> in errors file with available options as "unregister" and "panic". This
> option is default set to "unregister", if you set it to "panic" then
> panic() will be called.

Hmm, okay, I didn't find "panic" documented anywhere. I'll take a
look at it again. If it's missing, I'll create a patch to improve the
documentation.

> If the cache set is attached, read-only the bcache device does not
> prevent the meta data I/O on cache device (when try to cache the reading
> data), if the cache device is really disconnected that will be
> problematic too.

I didn't completely understand that sentence; it seems to be missing a
word. But whatever it means, it's probably true. ;-)

> The "auto" and "always" options are for "unregister" error action. When
> I enhance the device failure handling, I don't add new error action, all
> my work was to make the "unregister" action work better.

But isn't the failure case here that it hits both code paths: The one
that unregisters the device, and the one that then retires the cache?

> Adding a new "stop" error action IMHO doesn't make things better. When
> the cache device is disconnected, it is always risky that some caching
> data or meta data is not updated onto cache device. Permit the cache
> device to be re-attached to the backing device may introduce "silent
> data loss" which might be worse....  It was the reason why I didn't add
> new error action for the device failure handling patch set.

But we are actually seeing silent data loss now: the system f'ed up
somehow, needed a hard reset, and after reboot the bcache device was
accessible in cache mode "none" (because it had been unregistered
before, and because udev simply detected it again - you can use bcache
without an attached cache in "none" mode). That completely hides the
fact that we lost dirty write-back data; it's not even obvious that
/dev/bcache0 is now detached, in cache mode "none", yet accessible
nevertheless. To me, this is quite clearly "silent data loss",
especially since the unregister action threw the dirty data away.
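
(A rough post-boot sanity check might have caught it - bcache0 is an
example name, and these attributes only exist while the backing device
is registered:)

cat /sys/block/bcache0/bcache/state        # "no cache" means running detached
cat /sys/block/bcache0/bcache/cache_mode   # the active mode is bracketed
cat /sys/block/bcache0/bcache/dirty_data   # meaningful only with a cache attached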

So this:

> Permit the cache
> device to be re-attached to the backing device may introduce "silent
> data loss" which might be worse....

is actually the situation we are facing now: the device has been
unregistered; after reboot, udev detects a clean backing device without
a cache association, using cache mode "none", and it is readable and
writable just fine. It essentially permitted access to the stale
backing device (though it didn't re-attach as you outlined, that's
more or less the same situation).

Maybe devices that become disassociated from a cache due to IO errors
but still have dirty data should go into a caching mode "stale", and
bcache should refuse to access such devices or to throw away their
dirty data until I decide to force them back online into the cache set
or to force-discard the dirty data. Then at least I would discover that
something went badly wrong. Otherwise, I may not detect that dirty data
wasn't written. In the best case, that makes my FS unmountable; in the
worst case, some file data is simply lost (aka silent data loss) - and
either way it is a worst-case scenario.

The whole situation probably comes from udev auto-registering bcache
backing devices again, and bcache has no record of why the device was
unregistered - it looks clean after such a situation.

> Sorry I just find this thread from my INBOX. Hope it is not too late.

No worries. ;-)

It was already too late when the dirty cache was discarded, but I have
daily backups. My system is up and running again, but it's probably
not a question of IF it happens again but WHEN. So I'd like to discuss
how we can get a cleaner failure mode, because currently it's simply
unclean: all status is lost after reboot, the devices look clean, and
the caching mode is just "none", which looks completely fine to the
boot process.

Thanks,
Kai

