* [PATCH] dm writecache: skip writecache_wait for pmem mode
@ 2019-09-02 10:04 Huaisheng Ye
2019-09-04 8:48 ` Mikulas Patocka
From: Huaisheng Ye @ 2019-09-02 10:04 UTC (permalink / raw)
To: mpatocka, snitzer, agk; +Cc: prarit, tyu1, dm-devel, linux-kernel, Huaisheng Ye
From: Huaisheng Ye <yehs1@lenovo.com>
The array bio_in_progress[2] is only ever incremented and decremented in
SSD mode; in pmem mode it is not touched at all. So skip the
writecache_wait_for_ios call in writecache_flush for pmem mode.
Suggested-by: Doris Yu <tyu1@lenovo.com>
Signed-off-by: Huaisheng Ye <yehs1@lenovo.com>
---
drivers/md/dm-writecache.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c
index c481947..d06b8aa 100644
--- a/drivers/md/dm-writecache.c
+++ b/drivers/md/dm-writecache.c
@@ -726,7 +726,8 @@ static void writecache_flush(struct dm_writecache *wc)
}
writecache_commit_flushed(wc);
- writecache_wait_for_ios(wc, WRITE);
+ if (!WC_MODE_PMEM(wc))
+ writecache_wait_for_ios(wc, WRITE);
wc->seq_count++;
pmem_assign(sb(wc)->seq_count, cpu_to_le64(wc->seq_count));
--
1.8.3.1
* Re: [PATCH] dm writecache: skip writecache_wait for pmem mode
2019-09-02 10:04 [PATCH] dm writecache: skip writecache_wait for pmem mode Huaisheng Ye
@ 2019-09-04 8:48 ` Mikulas Patocka
2019-09-04 15:18 ` [External] " Huaisheng HS1 Ye
From: Mikulas Patocka @ 2019-09-04 8:48 UTC (permalink / raw)
To: Huaisheng Ye
Cc: snitzer, agk, prarit, tyu1, dm-devel, linux-kernel, Huaisheng Ye
On Mon, 2 Sep 2019, Huaisheng Ye wrote:
> From: Huaisheng Ye <yehs1@lenovo.com>
>
> The array bio_in_progress[2] is only ever incremented and decremented in
> SSD mode; in pmem mode it is not touched at all. So skip the
> writecache_wait_for_ios call in writecache_flush for pmem mode.
>
> Suggested-by: Doris Yu <tyu1@lenovo.com>
> Signed-off-by: Huaisheng Ye <yehs1@lenovo.com>
> ---
> drivers/md/dm-writecache.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c
> index c481947..d06b8aa 100644
> --- a/drivers/md/dm-writecache.c
> +++ b/drivers/md/dm-writecache.c
> @@ -726,7 +726,8 @@ static void writecache_flush(struct dm_writecache *wc)
> }
> writecache_commit_flushed(wc);
>
> - writecache_wait_for_ios(wc, WRITE);
> + if (!WC_MODE_PMEM(wc))
> + writecache_wait_for_ios(wc, WRITE);
>
> wc->seq_count++;
> pmem_assign(sb(wc)->seq_count, cpu_to_le64(wc->seq_count));
> --
> 1.8.3.1
I think this is not needed - wait_event in writecache_wait_for_ios exits
immediately if the condition is true.
This code path is not so hot that we would need microoptimizations like
this to avoid function calls.
Mikulas
* RE: [External] Re: [PATCH] dm writecache: skip writecache_wait for pmem mode
2019-09-04 8:48 ` Mikulas Patocka
@ 2019-09-04 15:18 ` Huaisheng HS1 Ye
2019-09-04 15:36 ` Mikulas Patocka
From: Huaisheng HS1 Ye @ 2019-09-04 15:18 UTC (permalink / raw)
To: Mikulas Patocka, snitzer
Cc: agk, prarit, Tzu ting Yu1, dm-devel, linux-kernel, Huaisheng Ye
> -----Original Message-----
> From: Mikulas Patocka <mpatocka@redhat.com>
> Sent: Wednesday, September 4, 2019 4:49 PM
> On Mon, 2 Sep 2019, Huaisheng Ye wrote:
>
> > From: Huaisheng Ye <yehs1@lenovo.com>
> >
> > The array bio_in_progress[2] is only ever incremented and decremented in
> > SSD mode; in pmem mode it is not touched at all. So skip the
> > writecache_wait_for_ios call in writecache_flush for pmem mode.
> >
> > Suggested-by: Doris Yu <tyu1@lenovo.com>
> > Signed-off-by: Huaisheng Ye <yehs1@lenovo.com>
> > ---
> > drivers/md/dm-writecache.c | 3 ++-
> > 1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c
> > index c481947..d06b8aa 100644
> > --- a/drivers/md/dm-writecache.c
> > +++ b/drivers/md/dm-writecache.c
> > @@ -726,7 +726,8 @@ static void writecache_flush(struct dm_writecache *wc)
> > }
> > writecache_commit_flushed(wc);
> >
> > - writecache_wait_for_ios(wc, WRITE);
> > + if (!WC_MODE_PMEM(wc))
> > + writecache_wait_for_ios(wc, WRITE);
> >
> > wc->seq_count++;
> > pmem_assign(sb(wc)->seq_count, cpu_to_le64(wc->seq_count));
> > --
> > 1.8.3.1
>
> I think this is not needed - wait_event in writecache_wait_for_ios exits
> immediately if the condition is true.
>
> This code path is not so hot that we would need microoptimizations like this to
> avoid function calls.
Hi Mikulas,
Thanks for your reply. I see what you mean, but I can't agree with you.
In pmem mode this code path (writecache_flush) is much hotter than in SSD
mode, because AUTOCOMMIT_BLOCKS_PMEM is defined as 64: once more than 64
blocks have been inserted into the cache device (i.e. are uncommitted),
writecache_flush is called. Otherwise a timer callback fires every 1000
milliseconds.
#define AUTOCOMMIT_BLOCKS_SSD 65536
#define AUTOCOMMIT_BLOCKS_PMEM 64
#define AUTOCOMMIT_MSEC 1000
So while dm-writecache is handling a continuous stream of WRITE operations
mapped through writecache_map, writecache_flush is called much more often
than in SSD mode.
Cheers,
Huaisheng Ye
* RE: [External] Re: [PATCH] dm writecache: skip writecache_wait for pmem mode
2019-09-04 15:18 ` [External] " Huaisheng HS1 Ye
@ 2019-09-04 15:36 ` Mikulas Patocka
From: Mikulas Patocka @ 2019-09-04 15:36 UTC (permalink / raw)
To: Huaisheng HS1 Ye
Cc: snitzer, agk, prarit, Tzu ting Yu1, dm-devel, linux-kernel, Huaisheng Ye
On Wed, 4 Sep 2019, Huaisheng HS1 Ye wrote:
> > -----Original Message-----
> > From: Mikulas Patocka <mpatocka@redhat.com>
> > Sent: Wednesday, September 4, 2019 4:49 PM
> > On Mon, 2 Sep 2019, Huaisheng Ye wrote:
> >
> > > From: Huaisheng Ye <yehs1@lenovo.com>
> > >
> > > The array bio_in_progress[2] is only ever incremented and decremented in
> > > SSD mode; in pmem mode it is not touched at all. So skip the
> > > writecache_wait_for_ios call in writecache_flush for pmem mode.
> > >
> > > Suggested-by: Doris Yu <tyu1@lenovo.com>
> > > Signed-off-by: Huaisheng Ye <yehs1@lenovo.com>
> > > ---
> > > drivers/md/dm-writecache.c | 3 ++-
> > > 1 file changed, 2 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c
> > > index c481947..d06b8aa 100644
> > > --- a/drivers/md/dm-writecache.c
> > > +++ b/drivers/md/dm-writecache.c
> > > @@ -726,7 +726,8 @@ static void writecache_flush(struct dm_writecache *wc)
> > > }
> > > writecache_commit_flushed(wc);
> > >
> > > - writecache_wait_for_ios(wc, WRITE);
> > > + if (!WC_MODE_PMEM(wc))
> > > + writecache_wait_for_ios(wc, WRITE);
> > >
> > > wc->seq_count++;
> > > pmem_assign(sb(wc)->seq_count, cpu_to_le64(wc->seq_count));
> > > --
> > > 1.8.3.1
> >
> > I think this is not needed - wait_event in writecache_wait_for_ios exits
> > immediately if the condition is true.
> >
> > This code path is not so hot that we would need microoptimizations like this to
> > avoid function calls.
>
> Hi Mikulas,
>
> Thanks for your reply. I see what you mean, but I can't agree with you.
>
> In pmem mode this code path (writecache_flush) is much hotter than in
> SSD mode, because AUTOCOMMIT_BLOCKS_PMEM is defined as 64: once more
> than 64 blocks have been inserted into the cache device (i.e. are
> uncommitted), writecache_flush is called. Otherwise a timer callback
> fires every 1000 milliseconds.
>
> #define AUTOCOMMIT_BLOCKS_SSD 65536
> #define AUTOCOMMIT_BLOCKS_PMEM 64
> #define AUTOCOMMIT_MSEC 1000
>
> So while dm-writecache is handling a continuous stream of WRITE
> operations mapped through writecache_map, writecache_flush is called
> much more often than in SSD mode.
>
> Cheers,
> Huaisheng Ye
So, you save one instruction cache line for every 64*4096 bytes written to
persistent memory.
If you insist on it, I can acknowledge it, but I think it is really an
over-optimization.
Acked-By: Mikulas Patocka <mpatocka@redhat.com>
Mikulas
* RE: [External] Re: [PATCH] dm writecache: skip writecache_wait for pmem mode
2019-09-18 10:23 ` Mikulas Patocka
@ 2019-09-18 12:48 ` Huaisheng HS1 Ye
From: Huaisheng HS1 Ye @ 2019-09-18 12:48 UTC (permalink / raw)
To: Mikulas Patocka
Cc: snitzer, agk, prarit, Tzu ting Yu1, dm-devel, linux-kernel, Huaisheng Ye
> -----Original Message-----
> From: Mikulas Patocka <mpatocka@redhat.com>
> Sent: Wednesday, September 18, 2019 6:24 PM
> The dm-writecache target is supposed to optimize writes, not reads.
> Normally, there won't be any reads following a write, because the data would be
> stored in the cache in RAM.
>
> Mikulas
Got it, thanks for pointing that out.
Cheers,
Huaisheng Ye | 叶怀胜
Linux kernel | Lenovo DCG
Thread overview: 5+ messages
2019-09-02 10:04 [PATCH] dm writecache: skip writecache_wait for pmem mode Huaisheng Ye
2019-09-04 8:48 ` Mikulas Patocka
2019-09-04 15:18 ` [External] " Huaisheng HS1 Ye
2019-09-04 15:36 ` Mikulas Patocka
2019-09-05 5:59 Huaisheng HS1 Ye
2019-09-18 10:23 ` Mikulas Patocka
2019-09-18 12:48 ` [External] " Huaisheng HS1 Ye