* [PATCH 1/2] ARM: mmu: fix the hang when we steal a section unaligned size memory
@ 2013-06-13  8:57 Huang Shijie
  2013-06-13  8:57 ` [PATCH 2/2] ARM: mm: dump the memblock info in the later place Huang Shijie
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Huang Shijie @ 2013-06-13  8:57 UTC (permalink / raw)
  To: linux-arm-kernel

If we steal 128K of memory in the machine_desc->reserve() hook, the
kernel hangs immediately.

The hang happens like this:

  [1] Stealing 128K leaves the remaining memory unaligned to SECTION_SIZE.

  [2] When map_lowmem() maps the lowmem memory banks, it calls
      memblock_alloc() (via early_alloc_aligned()) to allocate a page for
      the PTEs. That PTE page lies in the unaligned region, which is not
      mapped yet.

  [3] The memset() in early_alloc_aligned() then touches the unmapped
      page and hangs.

  [4] The hang only occurs in map_lowmem(). After map_lowmem() the PTE
      mappings are set up, so later callers such as
      dma_contiguous_remap() never hang.

This patch adds a global variable, in_map_lowmem, recording whether we
are inside map_lowmem(). If we are, and the region being mapped is not
SECTION_SIZE aligned, we use memblock_alloc_base() to allocate the PTE
page. The @max_addr passed to memblock_alloc_base() is the last mapped
address.

Signed-off-by: Huang Shijie <b32955@freescale.com>
---
 arch/arm/mm/mmu.c |   34 ++++++++++++++++++++++++++++++----
 1 files changed, 30 insertions(+), 4 deletions(-)

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index faa36d7..56d1a22 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -113,6 +113,8 @@ static struct cachepolicy cache_policies[] __initdata = {
 	}
 };
 
+static bool in_map_lowmem __initdata;
+
 #ifdef CONFIG_CPU_CP15
 /*
  * These are useful for identifying cache coherency
@@ -595,10 +597,32 @@ static void __init *early_alloc(unsigned long sz)
 	return early_alloc_aligned(sz, sz);
 }
 
-static pte_t * __init early_pte_alloc(pmd_t *pmd, unsigned long addr, unsigned long prot)
+static void __init *early_alloc_max_addr(unsigned long sz, phys_addr_t maddr)
+{
+	void *ptr;
+
+	if (maddr == MEMBLOCK_ALLOC_ACCESSIBLE)
+		return early_alloc_aligned(sz, sz);
+
+	ptr = __va(memblock_alloc_base(sz, sz, maddr));
+	memset(ptr, 0, sz);
+	return ptr;
+}
+
+static pte_t * __init early_pte_alloc(pmd_t *pmd, unsigned long addr,
+				unsigned long end, unsigned long prot)
 {
 	if (pmd_none(*pmd)) {
-		pte_t *pte = early_alloc(PTE_HWTABLE_OFF + PTE_HWTABLE_SIZE);
+		pte_t *pte;
+		phys_addr_t maddr = MEMBLOCK_ALLOC_ACCESSIBLE;
+
+		if (in_map_lowmem && (end & SECTION_MASK)) {
+			end &= PGDIR_MASK;
+			BUG_ON(!end);
+			maddr = __virt_to_phys(end);
+		}
+		pte = early_alloc_max_addr(PTE_HWTABLE_OFF + PTE_HWTABLE_SIZE,
+					maddr);
 		__pmd_populate(pmd, __pa(pte), prot);
 	}
 	BUG_ON(pmd_bad(*pmd));
@@ -609,7 +633,7 @@ static void __init alloc_init_pte(pmd_t *pmd, unsigned long addr,
 				  unsigned long end, unsigned long pfn,
 				  const struct mem_type *type)
 {
-	pte_t *pte = early_pte_alloc(pmd, addr, type->prot_l1);
+	pte_t *pte = early_pte_alloc(pmd, addr, end, type->prot_l1);
 	do {
 		set_pte_ext(pte, pfn_pte(pfn, __pgprot(type->prot_pte)), 0);
 		pfn++;
@@ -1253,7 +1277,7 @@ static void __init kmap_init(void)
 {
 #ifdef CONFIG_HIGHMEM
 	pkmap_page_table = early_pte_alloc(pmd_off_k(PKMAP_BASE),
-		PKMAP_BASE, _PAGE_KERNEL_TABLE);
+		PKMAP_BASE, 0, _PAGE_KERNEL_TABLE);
 #endif
 }
 
@@ -1261,6 +1285,7 @@ static void __init map_lowmem(void)
 {
 	struct memblock_region *reg;
 
+	in_map_lowmem = 1;
 	/* Map all the lowmem memory banks. */
 	for_each_memblock(memory, reg) {
 		phys_addr_t start = reg->base;
@@ -1279,6 +1304,7 @@ static void __init map_lowmem(void)
 
 		create_mapping(&map);
 	}
+	in_map_lowmem = 0;
 }
 
 /*
-- 
1.7.1


* [PATCH 2/2] ARM: mm: dump the memblock info in the later place
  2013-06-13  8:57 [PATCH 1/2] ARM: mmu: fix the hang when we steal a section unaligned size memory Huang Shijie
@ 2013-06-13  8:57 ` Huang Shijie
  2013-06-17  2:44 ` [PATCH 1/2] ARM: mmu: fix the hang when we steal a section unaligned size memory Huang Shijie
  2013-06-18 15:29 ` Will Deacon
  2 siblings, 0 replies; 8+ messages in thread
From: Huang Shijie @ 2013-06-13  8:57 UTC (permalink / raw)
  To: linux-arm-kernel

Current code calls memblock_dump_all() in arm_memblock_init(), but
memblock is still modified after that point, so the dump does not
reflect the final state.

So move it to paging_init(), after which we no longer allocate memory
from memblock.

Signed-off-by: Huang Shijie <b32955@freescale.com>
---
 arch/arm/mm/init.c |    1 -
 arch/arm/mm/mmu.c  |    2 ++
 2 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 59c18b8..c99b543 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -384,7 +384,6 @@ void __init arm_memblock_init(struct meminfo *mi, struct machine_desc *mdesc)
 
 	arm_memblock_steal_permitted = false;
 	memblock_allow_resize();
-	memblock_dump_all();
 }
 
 void __init bootmem_init(void)
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 56d1a22..70ad181 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1332,6 +1332,8 @@ void __init paging_init(struct machine_desc *mdesc)
 
 	bootmem_init();
 
+	memblock_dump_all();
+
 	empty_zero_page = virt_to_page(zero_page);
 	__flush_dcache_page(NULL, empty_zero_page);
 }
-- 
1.7.1


* [PATCH 1/2] ARM: mmu: fix the hang when we steal a section unaligned size memory
  2013-06-13  8:57 [PATCH 1/2] ARM: mmu: fix the hang when we steal a section unaligned size memory Huang Shijie
  2013-06-13  8:57 ` [PATCH 2/2] ARM: mm: dump the memblock info in the later place Huang Shijie
@ 2013-06-17  2:44 ` Huang Shijie
  2013-06-18 15:29 ` Will Deacon
  2 siblings, 0 replies; 8+ messages in thread
From: Huang Shijie @ 2013-06-17  2:44 UTC (permalink / raw)
  To: linux-arm-kernel

On 2013/06/13 16:57, Huang Shijie wrote:
> If we steal 128K of memory in the machine_desc->reserve() hook, the
> kernel hangs immediately.
>
> [...]

just a ping.


* [PATCH 1/2] ARM: mmu: fix the hang when we steal a section unaligned size memory
  2013-06-13  8:57 [PATCH 1/2] ARM: mmu: fix the hang when we steal a section unaligned size memory Huang Shijie
  2013-06-13  8:57 ` [PATCH 2/2] ARM: mm: dump the memblock info in the later place Huang Shijie
  2013-06-17  2:44 ` [PATCH 1/2] ARM: mmu: fix the hang when we steal a section unaligned size memory Huang Shijie
@ 2013-06-18 15:29 ` Will Deacon
  2013-06-18 16:52   ` Russell King - ARM Linux
  2 siblings, 1 reply; 8+ messages in thread
From: Will Deacon @ 2013-06-18 15:29 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Jun 13, 2013 at 09:57:05AM +0100, Huang Shijie wrote:
> If we steal 128K of memory in the machine_desc->reserve() hook, the
> kernel hangs immediately.
> 
> The hang happens like this:
> 
>   [1] Stealing 128K leaves the remaining memory unaligned to SECTION_SIZE.
> 
>   [2] When map_lowmem() maps the lowmem memory banks, it calls
>       memblock_alloc() (via early_alloc_aligned()) to allocate a page for
>       the PTEs. That PTE page lies in the unaligned region, which is not
>       mapped yet.
> 
>   [3] The memset() in early_alloc_aligned() then touches the unmapped
>       page and hangs.
> 
>   [4] The hang only occurs in map_lowmem(). After map_lowmem() the PTE
>       mappings are set up, so later callers such as
>       dma_contiguous_remap() never hang.
> 
> This patch adds a global variable, in_map_lowmem, recording whether we
> are inside map_lowmem(). If we are, and the region being mapped is not
> SECTION_SIZE aligned, we use memblock_alloc_base() to allocate the PTE
> page. The @max_addr passed to memblock_alloc_base() is the last mapped
> address.

Wouldn't this be better achieved with a parameter, rather than a global
state variable? That said, I don't completely follow why memblock_alloc is
giving you back an unmapped physical address. It sounds like we're freeing
too much as part of the stealing (or simply that stealing has to be section
aligned), but memblock only deals with physical addresses.

Could you elaborate please?

Will


* [PATCH 1/2] ARM: mmu: fix the hang when we steal a section unaligned size memory
  2013-06-18 15:29 ` Will Deacon
@ 2013-06-18 16:52   ` Russell King - ARM Linux
  2013-06-19  2:36     ` Huang Shijie
  0 siblings, 1 reply; 8+ messages in thread
From: Russell King - ARM Linux @ 2013-06-18 16:52 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Jun 18, 2013 at 04:29:05PM +0100, Will Deacon wrote:
> Wouldn't this be better achieved with a parameter, rather than a global
> state variable? That said, I don't completely follow why memblock_alloc is
> giving you back an unmapped physical address. It sounds like we're freeing
> too much as part of the stealing (or simply that stealing has to be section
> aligned), but memblock only deals with physical addresses.
> 
> Could you elaborate please?

It's a catch-22 situation.  memblock allocates from the top of usable
memory.

While setting up the page tables for the second time, we insert section
mappings.  If the last mapping is not section sized, we will try to set
it up using page mappings.  For this, we need to allocate L2 page
tables from memblock.

memblock returns a 4K page in the last non-section sized mapping - which
we're trying to setup, and hence is not yet mapped.

This is why I've always said - if you steal memory from memblock, it
_must_ be aligned to 1MB (the section size) to avoid this.  Not only
that, but we didn't _use_ to allow page-sized mappings for MT_MEMORY -
that got added for OMAP's SRAM support.


* [PATCH 1/2] ARM: mmu: fix the hang when we steal a section unaligned size memory
  2013-06-18 16:52   ` Russell King - ARM Linux
@ 2013-06-19  2:36     ` Huang Shijie
  2013-06-19  8:28       ` Russell King - ARM Linux
  0 siblings, 1 reply; 8+ messages in thread
From: Huang Shijie @ 2013-06-19  2:36 UTC (permalink / raw)
  To: linux-arm-kernel

On 2013/06/19 00:52, Russell King - ARM Linux wrote:
> This is why I've always said - if you steal memory from memblock, it
> _must_  be aligned to 1MB (the section size) to avoid this.  Not only

Firstly, it's a bit wasteful to have to steal a full 1M on boards with
less memory, such as ones with only 512M;

Secondly, if we do not support section-unaligned mappings, we should
remove the alloc_init_pte() call from alloc_init_pmd() and add a
comment saying that section-unaligned mappings are not supported.

thanks
Huang Shijie


* [PATCH 1/2] ARM: mmu: fix the hang when we steal a section unaligned size memory
  2013-06-19  2:36     ` Huang Shijie
@ 2013-06-19  8:28       ` Russell King - ARM Linux
  2013-06-19  8:47         ` Huang Shijie
  0 siblings, 1 reply; 8+ messages in thread
From: Russell King - ARM Linux @ 2013-06-19  8:28 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jun 19, 2013 at 10:36:06AM +0800, Huang Shijie wrote:
> On 2013/06/19 00:52, Russell King - ARM Linux wrote:
>> This is why I've always said - if you steal memory from memblock, it
>> _must_  be aligned to 1MB (the section size) to avoid this.  Not only
>
> Firstly, it's a bit wasteful to have to steal a full 1M on boards with
> less memory, such as ones with only 512M;

You're complaining about 512M memory?  Christ.  Some of my machines
which I run here have as little as 32M of memory!  1M is nothing in
512M.

> Secondly, if we do not support section-unaligned mappings, we should
> remove the alloc_init_pte() call from alloc_init_pmd() and add a
> comment saying that section-unaligned mappings are not supported.

No, because that breaks a whole load of other non-memory mappings.


* [PATCH 1/2] ARM: mmu: fix the hang when we steal a section unaligned size memory
  2013-06-19  8:28       ` Russell King - ARM Linux
@ 2013-06-19  8:47         ` Huang Shijie
  0 siblings, 0 replies; 8+ messages in thread
From: Huang Shijie @ 2013-06-19  8:47 UTC (permalink / raw)
  To: linux-arm-kernel

On 2013/06/19 16:28, Russell King - ARM Linux wrote:
> On Wed, Jun 19, 2013 at 10:36:06AM +0800, Huang Shijie wrote:
>> On 2013/06/19 00:52, Russell King - ARM Linux wrote:
>>> This is why I've always said - if you steal memory from memblock, it
>>> _must_  be aligned to 1MB (the section size) to avoid this.  Not only
>> Firstly, it's a bit wasteful to have to steal a full 1M on boards with
>> less memory, such as ones with only 512M;
> You're complaining about 512M memory?  Christ.  Some of my machines
> which I run here have as little as 32M of memory!  1M is nothing in
> 512M.
>
My point was: in some cases we only need 128K, so why waste the other
896K? IMHO, we should not waste memory.

If you think there is no need to fix this issue, that is fine with me too.

>> Secondly, if we do not support section-unaligned mappings, we should
>> remove the alloc_init_pte() call from alloc_init_pmd() and add a
>> comment saying that section-unaligned mappings are not supported.
> No, because that breaks a whole load of other non-memory mappings.
>
Yes, you are right. I forgot about the other mappings besides map_lowmem().

thanks
Huang Shijie

