* [RFC PATCH 08/15] fs: proc: use PAGES_PER_SECTION for page offline checking period.
[not found] <20210805190253.2795604-1-zi.yan@sent.com>
@ 2021-08-05 19:02 ` Zi Yan
2021-08-07 10:32 ` Mike Rapoport
0 siblings, 1 reply; 3+ messages in thread
From: Zi Yan @ 2021-08-05 19:02 UTC (permalink / raw)
To: David Hildenbrand, linux-mm
Cc: Matthew Wilcox, Vlastimil Babka, Kirill A . Shutemov,
Mike Kravetz, Michal Hocko, John Hubbard, linux-kernel, Zi Yan,
Mike Rapoport, Oscar Salvador, Ying Chen, Feng Zhou,
linux-fsdevel
From: Zi Yan <ziy@nvidia.com>
Use PAGES_PER_SECTION instead of MAX_ORDER_NR_PAGES as the page offline
checking period, to keep the existing behavior after MAX_ORDER is
increased beyond a section size.
Signed-off-by: Zi Yan <ziy@nvidia.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: David Hildenbrand <david@redhat.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Ying Chen <chenying.kernel@bytedance.com>
Cc: Feng Zhou <zhoufeng.zf@bytedance.com>
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
fs/proc/kcore.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
index 3f148759a5fd..77b7ba48fb44 100644
--- a/fs/proc/kcore.c
+++ b/fs/proc/kcore.c
@@ -486,7 +486,7 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
}
}
- if (page_offline_frozen++ % MAX_ORDER_NR_PAGES == 0) {
+ if (page_offline_frozen++ % PAGES_PER_SECTION == 0) {
page_offline_thaw();
cond_resched();
page_offline_freeze();
--
2.30.2
* Re: [RFC PATCH 08/15] fs: proc: use PAGES_PER_SECTION for page offline checking period.
2021-08-05 19:02 ` [RFC PATCH 08/15] fs: proc: use PAGES_PER_SECTION for page offline checking period Zi Yan
@ 2021-08-07 10:32 ` Mike Rapoport
2021-08-09 15:45 ` [RFC PATCH 08/15] " Zi Yan
0 siblings, 1 reply; 3+ messages in thread
From: Mike Rapoport @ 2021-08-07 10:32 UTC (permalink / raw)
To: Zi Yan
Cc: David Hildenbrand, linux-mm, Matthew Wilcox, Vlastimil Babka,
Kirill A . Shutemov, Mike Kravetz, Michal Hocko, John Hubbard,
linux-kernel, Oscar Salvador, Ying Chen, Feng Zhou,
linux-fsdevel
On Thu, Aug 05, 2021 at 03:02:46PM -0400, Zi Yan wrote:
> From: Zi Yan <ziy@nvidia.com>
>
> Use PAGES_PER_SECTION instead of MAX_ORDER_NR_PAGES as the page offline
> checking period, to keep the existing behavior after MAX_ORDER is
> increased beyond a section size.
>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> Cc: Mike Rapoport <rppt@kernel.org>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Ying Chen <chenying.kernel@bytedance.com>
> Cc: Feng Zhou <zhoufeng.zf@bytedance.com>
> Cc: linux-fsdevel@vger.kernel.org
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> ---
> fs/proc/kcore.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
> index 3f148759a5fd..77b7ba48fb44 100644
> --- a/fs/proc/kcore.c
> +++ b/fs/proc/kcore.c
> @@ -486,7 +486,7 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
> }
> }
>
> - if (page_offline_frozen++ % MAX_ORDER_NR_PAGES == 0) {
> + if (page_offline_frozen++ % PAGES_PER_SECTION == 0) {
The behavior changes here. E.g., with the default configuration on x86,
instead of cond_resched() every 2M we get cond_resched() every 128M.
I'm not saying it's wrong, but it at least deserves an explanation why.
> page_offline_thaw();
> cond_resched();
> page_offline_freeze();
> --
> 2.30.2
>
--
Sincerely yours,
Mike.
* Re: [RFC PATCH 08/15] proc: use PAGES_PER_SECTION for page offline checking period.
2021-08-07 10:32 ` Mike Rapoport
@ 2021-08-09 15:45 ` Zi Yan
0 siblings, 0 replies; 3+ messages in thread
From: Zi Yan @ 2021-08-09 15:45 UTC (permalink / raw)
To: Mike Rapoport
Cc: David Hildenbrand, linux-mm, Matthew Wilcox, Vlastimil Babka,
Kirill A . Shutemov, Mike Kravetz, Michal Hocko, John Hubbard,
linux-kernel, Oscar Salvador, Ying Chen, Feng Zhou,
linux-fsdevel
On 7 Aug 2021, at 6:32, Mike Rapoport wrote:
> On Thu, Aug 05, 2021 at 03:02:46PM -0400, Zi Yan wrote:
>> From: Zi Yan <ziy@nvidia.com>
>>
>> Use PAGES_PER_SECTION instead of MAX_ORDER_NR_PAGES as the page offline
>> checking period, to keep the existing behavior after MAX_ORDER is
>> increased beyond a section size.
>>
>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>> Cc: Mike Rapoport <rppt@kernel.org>
>> Cc: David Hildenbrand <david@redhat.com>
>> Cc: Oscar Salvador <osalvador@suse.de>
>> Cc: Ying Chen <chenying.kernel@bytedance.com>
>> Cc: Feng Zhou <zhoufeng.zf@bytedance.com>
>> Cc: linux-fsdevel@vger.kernel.org
>> Cc: linux-mm@kvack.org
>> Cc: linux-kernel@vger.kernel.org
>> ---
>> fs/proc/kcore.c | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
>> index 3f148759a5fd..77b7ba48fb44 100644
>> --- a/fs/proc/kcore.c
>> +++ b/fs/proc/kcore.c
>> @@ -486,7 +486,7 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
>> }
>> }
>>
>> - if (page_offline_frozen++ % MAX_ORDER_NR_PAGES == 0) {
>> + if (page_offline_frozen++ % PAGES_PER_SECTION == 0) {
>
> The behavior changes here. E.g. with default configuration on x86 instead
> of cond_resched() every 2M we get cond_resched() every 128M.
>
> I'm not saying it's wrong but at least it deserves an explanation why.
Sure. I will also think about whether I should use PAGES_PER_SECTION or pageblock_nr_pages
to replace MAX_ORDER_NR_PAGES in this and other patches. pageblock_nr_pages will be unchanged,
so at least on x86_64, using pageblock_nr_pages would not change the code's behavior.
--
Best Regards,
Yan, Zi