From: Ni zhan Chen <nizhan.chen@gmail.com>
To: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: x86@kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-acpi@vger.kernel.org,
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
	linux-ia64@vger.kernel.org, cmetcalf@tilera.com,
	sparclinux@vger.kernel.org, rientjes@google.com,
	liuj97@gmail.com, len.brown@intel.com, benh@kernel.crashing.org,
	paulus@samba.org, cl@linux.com, minchan.kim@gmail.com,
	akpm@linux-foundation.org, kosaki.motohiro@jp.fujitsu.com,
	Wen Congyang <wency@cn.fujitsu.com>
Subject: Re: [RFC v9 PATCH 16/21] memory-hotplug: free memmap of sparse-vmemmap
Date: Sat, 06 Oct 2012 14:18:18 +0000	[thread overview]
Message-ID: <50703DAA.20407@gmail.com> (raw)
In-Reply-To: <506D2C1C.5060706@jp.fujitsu.com>

On 10/04/2012 02:26 PM, Yasuaki Ishimatsu wrote:
> Hi Chen,
>
> Sorry for late reply.
>
> 2012/10/02 13:21, Ni zhan Chen wrote:
>> On 09/05/2012 05:25 PM, wency@cn.fujitsu.com wrote:
>>> From: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
>>>
>>> Not all pages of the virtual mapping in removed memory can be freed,
>>> since some pages used as PGD/PUD include not only the removed memory
>>> but also other memory. So the patch checks whether a page can be freed
>>> or not.
>>>
>>> How to check whether page can be freed or not?
>>>   1. When removing memory, the page structs of the removed memory are
>>>      filled with 0xFD.
>>>   2. If all page structs on a PT/PMD page are filled with 0xFD, the
>>>      PT/PMD entry can be cleared. In this case, the page used as PT/PMD
>>>      can be freed.
>>>
>>> With this patch applied, the two variants of __remove_section() are
>>> integrated into one, so the CONFIG_SPARSEMEM_VMEMMAP version of
>>> __remove_section() is deleted.
>>>
>>> Note: vmemmap_kfree() and vmemmap_free_bootmem() are not implemented
>>> for ia64, ppc, s390, and sparc.
>>>
>>> CC: David Rientjes <rientjes@google.com>
>>> CC: Jiang Liu <liuj97@gmail.com>
>>> CC: Len Brown <len.brown@intel.com>
>>> CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
>>> CC: Paul Mackerras <paulus@samba.org>
>>> CC: Christoph Lameter <cl@linux.com>
>>> Cc: Minchan Kim <minchan.kim@gmail.com>
>>> CC: Andrew Morton <akpm@linux-foundation.org>
>>> CC: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
>>> CC: Wen Congyang <wency@cn.fujitsu.com>
>>> Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
>>> ---
>>>   arch/ia64/mm/discontig.c  |    8 +++
>>>   arch/powerpc/mm/init_64.c |    8 +++
>>>   arch/s390/mm/vmem.c       |    8 +++
>>>   arch/sparc/mm/init_64.c   |    8 +++
>>>   arch/x86/mm/init_64.c     |  119 
>>> +++++++++++++++++++++++++++++++++++++++++++++
>>>   include/linux/mm.h        |    2 +
>>>   mm/memory_hotplug.c       |   17 +------
>>>   mm/sparse.c               |    5 +-
>>>   8 files changed, 158 insertions(+), 17 deletions(-)
>>>
>>> diff --git a/arch/ia64/mm/discontig.c b/arch/ia64/mm/discontig.c
>>> index 33943db..0d23b69 100644
>>> --- a/arch/ia64/mm/discontig.c
>>> +++ b/arch/ia64/mm/discontig.c
>>> @@ -823,6 +823,14 @@ int __meminit vmemmap_populate(struct page 
>>> *start_page,
>>>       return vmemmap_populate_basepages(start_page, size, node);
>>>   }
>>> +void vmemmap_kfree(struct page *memmap, unsigned long nr_pages)
>>> +{
>>> +}
>>> +
>>> +void vmemmap_free_bootmem(struct page *memmap, unsigned long nr_pages)
>>> +{
>>> +}
>>> +
>>>   void register_page_bootmem_memmap(unsigned long section_nr,
>>>                     struct page *start_page, unsigned long size)
>>>   {
>>> diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
>>> index 3690c44..835a2b3 100644
>>> --- a/arch/powerpc/mm/init_64.c
>>> +++ b/arch/powerpc/mm/init_64.c
>>> @@ -299,6 +299,14 @@ int __meminit vmemmap_populate(struct page 
>>> *start_page,
>>>       return 0;
>>>   }
>>> +void vmemmap_kfree(struct page *memmap, unsigned long nr_pages)
>>> +{
>>> +}
>>> +
>>> +void vmemmap_free_bootmem(struct page *memmap, unsigned long nr_pages)
>>> +{
>>> +}
>>> +
>>>   void register_page_bootmem_memmap(unsigned long section_nr,
>>>                     struct page *start_page, unsigned long size)
>>>   {
>>> diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
>>> index eda55cd..4b42b0b 100644
>>> --- a/arch/s390/mm/vmem.c
>>> +++ b/arch/s390/mm/vmem.c
>>> @@ -227,6 +227,14 @@ out:
>>>       return ret;
>>>   }
>>> +void vmemmap_kfree(struct page *memmap, unsigned long nr_pages)
>>> +{
>>> +}
>>> +
>>> +void vmemmap_free_bootmem(struct page *memmap, unsigned long nr_pages)
>>> +{
>>> +}
>>> +
>>>   void register_page_bootmem_memmap(unsigned long section_nr,
>>>                     struct page *start_page, unsigned long size)
>>>   {
>>> diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
>>> index add1cc7..1384826 100644
>>> --- a/arch/sparc/mm/init_64.c
>>> +++ b/arch/sparc/mm/init_64.c
>>> @@ -2078,6 +2078,14 @@ void __meminit vmemmap_populate_print_last(void)
>>>       }
>>>   }
>>> +void vmemmap_kfree(struct page *memmap, unsigned long nr_pages)
>>> +{
>>> +}
>>> +
>>> +void vmemmap_free_bootmem(struct page *memmap, unsigned long nr_pages)
>>> +{
>>> +}
>>> +
>>>   void register_page_bootmem_memmap(unsigned long section_nr,
>>>                     struct page *start_page, unsigned long size)
>>>   {
>>> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
>>> index 0075592..4e8f8a4 100644
>>> --- a/arch/x86/mm/init_64.c
>>> +++ b/arch/x86/mm/init_64.c
>>> @@ -1138,6 +1138,125 @@ vmemmap_populate(struct page *start_page, 
>>> unsigned long size, int node)
>>>       return 0;
>>>   }
>>> +#define PAGE_INUSE 0xFD
>>> +
>>> +unsigned long find_and_clear_pte_page(unsigned long addr, unsigned 
>>> long end,
>>> +                struct page **pp, int *page_size)
>>> +{
>>> +    pgd_t *pgd;
>>> +    pud_t *pud;
>>> +    pmd_t *pmd;
>>> +    pte_t *pte;
>>> +    void *page_addr;
>>> +    unsigned long next;
>>> +
>>> +    *pp = NULL;
>>> +
>>> +    pgd = pgd_offset_k(addr);
>>> +    if (pgd_none(*pgd))
>>> +        return pgd_addr_end(addr, end);
>>> +
>>> +    pud = pud_offset(pgd, addr);
>>> +    if (pud_none(*pud))
>>> +        return pud_addr_end(addr, end);
>>> +
>>> +    if (!cpu_has_pse) {
>>> +        next = (addr + PAGE_SIZE) & PAGE_MASK;
>>> +        pmd = pmd_offset(pud, addr);
>>> +        if (pmd_none(*pmd))
>>> +            return next;
>>> +
>>> +        pte = pte_offset_kernel(pmd, addr);
>>> +        if (pte_none(*pte))
>>> +            return next;
>>> +
>>> +        *page_size = PAGE_SIZE;
>>> +        *pp = pte_page(*pte);
>>> +    } else {
>>> +        next = pmd_addr_end(addr, end);
>>> +
>>> +        pmd = pmd_offset(pud, addr);
>>> +        if (pmd_none(*pmd))
>>> +            return next;
>>> +
>>> +        *page_size = PMD_SIZE;
>>> +        *pp = pmd_page(*pmd);
>>> +    }
>>> +
>>> +    /*
>>> +     * Removed page structs are filled with 0xFD.
>>> +     */
>>> +    memset((void *)addr, PAGE_INUSE, next - addr);
>>> +
>>> +    page_addr = page_address(*pp);
>>> +
>>> +    /*
>>> +     * Check the page is filled with 0xFD or not.
>>> +     * memchr_inv() returns the address. In this case, we cannot
>>> +     * clear PTE/PUD entry, since the page is used by other.
>>> +     * So we cannot also free the page.
>>> +     *
>>> +     * memchr_inv() returns NULL. In this case, we can clear
>>> +     * PTE/PUD entry, since the page is not used by other.
>>> +     * So we can also free the page.
>>> +     */
>>> +    if (memchr_inv(page_addr, PAGE_INUSE, *page_size)) {
>>> +        *pp = NULL;
>>> +        return next;
>>> +    }
>>> +
>>
>> Hi Yasuaki,
>>
>> why call the memchr_inv() check after memset()? At that point the page
>> should always be filled with 0xFD.
>
> The page is not always filled with 0xFD. find_and_clear_pte_page()
> is called for each section, so the function fills only
> section_size/sizeof(struct page) bytes of the page with 0xFD at a time.
> Thus, if the section size is small, the page is not completely filled
> with 0xFD.

Hi Yasuaki,

But when will the section size be small?
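
As a rough back-of-the-envelope check (the constants below are my assumed
x86-64 defaults, not taken from the patch):

/*
 * Rough illustration only -- assumed constants, not from the patch.
 */
#include <stdio.h>

int main(void)
{
	unsigned long section_size = 128UL << 20;  /* SECTION_SIZE_BITS = 27 */
	unsigned long page_size    = 4096UL;       /* base page size */
	unsigned long struct_page  = 64UL;         /* assumed sizeof(struct page) */
	unsigned long pmd_size     = 2UL << 20;    /* PMD-mapped vmemmap page */

	/* memmap bytes memset by one find_and_clear_pte_page() pass */
	unsigned long memmap_per_section = section_size / page_size * struct_page;

	printf("memmap per section       : %lu KB\n", memmap_per_section >> 10);
	printf("sections per vmemmap page: %lu\n", pmd_size / memmap_per_section);
	return 0;
}

With these numbers one section's memmap already covers a whole 2MB vmemmap
page, so the memchr_inv() check would only matter with a smaller section
size (or a smaller struct page).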

Regards,
Chen

>
> Thanks,
> Yasuaki Ishimatsu
>
>
>>> +    if (!cpu_has_pse)
>>> +        pte_clear(&init_mm, addr, pte);
>>> +    else
>>> +        pmd_clear(pmd);
>>> +
>>> +    return next;
>>> +}
>>> +
>>> +void vmemmap_kfree(struct page *memmap, unsigned long nr_pages)
>>> +{
>>> +    unsigned long addr = (unsigned long)memmap;
>>> +    unsigned long end = (unsigned long)(memmap + nr_pages);
>>> +    unsigned long next;
>>> +    struct page *page;
>>> +    int page_size;
>>> +
>>> +    for (; addr < end; addr = next) {
>>> +        page = NULL;
>>> +        page_size = 0;
>>> +        next = find_and_clear_pte_page(addr, end, &page, &page_size);
>>> +        if (!page)
>>> +            continue;
>>> +
>>> +        free_pages((unsigned long)page_address(page),
>>> +                get_order(page_size));
>>> +        __flush_tlb_one(addr);
>>> +    }
>>> +}
>>> +
>>> +void vmemmap_free_bootmem(struct page *memmap, unsigned long nr_pages)
>>> +{
>>> +    unsigned long addr = (unsigned long)memmap;
>>> +    unsigned long end = (unsigned long)(memmap + nr_pages);
>>> +    unsigned long next;
>>> +    struct page *page;
>>> +    int page_size;
>>> +    unsigned long magic;
>>> +
>>> +    for (; addr < end; addr = next) {
>>> +        page = NULL;
>>> +        page_size = 0;
>>> +        next = find_and_clear_pte_page(addr, end, &page, &page_size);
>>> +        if (!page)
>>> +            continue;
>>> +
>>> +        magic = (unsigned long) page->lru.next;
>>> +        if (magic == SECTION_INFO)
>>> +            put_page_bootmem(page);
>>> +        flush_tlb_kernel_range(addr, end);
>>> +    }
>>> +}
>>> +
>>>   void register_page_bootmem_memmap(unsigned long section_nr,
>>>                     struct page *start_page, unsigned long size)
>>>   {
>>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>>> index c607913..fb0d1fc 100644
>>> --- a/include/linux/mm.h
>>> +++ b/include/linux/mm.h
>>> @@ -1620,6 +1620,8 @@ int vmemmap_populate(struct page *start_page, 
>>> unsigned long pages, int node);
>>>   void vmemmap_populate_print_last(void);
>>>   void register_page_bootmem_memmap(unsigned long section_nr, struct 
>>> page *map,
>>>                     unsigned long size);
>>> +void vmemmap_kfree(struct page *memmap, unsigned long nr_pages);
>>> +void vmemmap_free_bootmem(struct page *memmap, unsigned long nr_pages);
>>>   enum mf_flags {
>>>       MF_COUNT_INCREASED = 1 << 0,
>>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>>> index 647a7f2..c54922c 100644
>>> --- a/mm/memory_hotplug.c
>>> +++ b/mm/memory_hotplug.c
>>> @@ -308,19 +308,6 @@ static int __meminit __add_section(int nid, 
>>> struct zone *zone,
>>>       return register_new_memory(nid, 
>>> __pfn_to_section(phys_start_pfn));
>>>   }
>>> -#ifdef CONFIG_SPARSEMEM_VMEMMAP
>>> -static int __remove_section(struct zone *zone, struct mem_section *ms)
>>> -{
>>> -    int ret = -EINVAL;
>>> -
>>> -    if (!valid_section(ms))
>>> -        return ret;
>>> -
>>> -    ret = unregister_memory_section(ms);
>>> -
>>> -    return ret;
>>> -}
>>> -#else
>>>   static int __remove_section(struct zone *zone, struct mem_section 
>>> *ms)
>>>   {
>>>       unsigned long flags;
>>> @@ -337,9 +324,9 @@ static int __remove_section(struct zone *zone, 
>>> struct mem_section *ms)
>>>       pgdat_resize_lock(pgdat, &flags);
>>>       sparse_remove_one_section(zone, ms);
>>>       pgdat_resize_unlock(pgdat, &flags);
>>> -    return 0;
>>> +
>>> +    return ret;
>>>   }
>>> -#endif
>>>   /*
>>>    * Reasonably generic function for adding memory.  It is
>>> diff --git a/mm/sparse.c b/mm/sparse.c
>>> index fac95f2..ab9d755 100644
>>> --- a/mm/sparse.c
>>> +++ b/mm/sparse.c
>>> @@ -613,12 +613,13 @@ static inline struct page 
>>> *kmalloc_section_memmap(unsigned long pnum, int nid,
>>>       /* This will make the necessary allocations eventually. */
>>>       return sparse_mem_map_populate(pnum, nid);
>>>   }
>>> -static void __kfree_section_memmap(struct page *memmap, unsigned 
>>> long nr_pages)
>>> +static void __kfree_section_memmap(struct page *page, unsigned long 
>>> nr_pages)
>>>   {
>>> -    return; /* XXX: Not implemented yet */
>>> +    vmemmap_kfree(page, nr_pages);
>>>   }
>>>   static void free_map_bootmem(struct page *page, unsigned long 
>>> nr_pages)
>>>   {
>>> +    vmemmap_free_bootmem(page, nr_pages);
>>>   }
>>>   #else
>>>   static struct page *__kmalloc_section_memmap(unsigned long nr_pages)
>>
>
>
>

