* [PATCH] mm: fix some spelling mistakes in comments
@ 2020-11-27  1:17 Haitao Shi
  2020-11-27  8:06 ` Mike Rapoport
  2020-11-27 19:29   ` Souptick Joarder
  0 siblings, 2 replies; 4+ messages in thread
From: Haitao Shi @ 2020-11-27  1:17 UTC (permalink / raw)
  To: akpm, rppt, linux-mm, linux-kernel; +Cc: shihaitao1, wangle6

Fix some spelling mistakes in comments:
	udpate ==> update
	succesful ==> successful
	exmaple ==> example
	unneccessary ==> unnecessary
	stoping ==> stopping
	uknown ==> unknown

Signed-off-by: Haitao Shi <shihaitao1@huawei.com>
---
 mm/filemap.c     | 2 +-
 mm/huge_memory.c | 2 +-
 mm/khugepaged.c  | 2 +-
 mm/memblock.c    | 2 +-
 mm/migrate.c     | 2 +-
 mm/page_ext.c    | 2 +-
 6 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 3ebbe64..8826c48 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1359,7 +1359,7 @@ static int __wait_on_page_locked_async(struct page *page,
 	else
 		ret = PageLocked(page);
 	/*
-	 * If we were succesful now, we know we're still on the
+	 * If we were successful now, we know we're still on the
 	 * waitqueue as we're still under the lock. This means it's
 	 * safe to remove and return success, we know the callback
 	 * isn't going to trigger.
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ec2bb93..0fea0c2 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2356,7 +2356,7 @@ static void __split_huge_page_tail(struct page *head, int tail,
 	 * Clone page flags before unfreezing refcount.
 	 *
 	 * After successful get_page_unless_zero() might follow flags change,
-	 * for exmaple lock_page() which set PG_waiters.
+	 * for example lock_page() which set PG_waiters.
 	 */
 	page_tail->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
 	page_tail->flags |= (head->flags &
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 4e3dff1..d6f7ede 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1273,7 +1273,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 			 * PTEs are armed with uffd write protection.
 			 * Here we can also mark the new huge pmd as
 			 * write protected if any of the small ones is
-			 * marked but that could bring uknown
+			 * marked but that could bring unknown
 			 * userfault messages that falls outside of
 			 * the registered range.  So, just be simple.
 			 */
diff --git a/mm/memblock.c b/mm/memblock.c
index b68ee86..086662a 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -871,7 +871,7 @@ int __init_memblock memblock_physmem_add(phys_addr_t base, phys_addr_t size)
  * @base: base address of the region
  * @size: size of the region
  * @set: set or clear the flag
- * @flag: the flag to udpate
+ * @flag: the flag to update
  *
  * This function isolates region [@base, @base + @size), and sets/clears flag
  *
diff --git a/mm/migrate.c b/mm/migrate.c
index 5795cb8..8a3580c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2548,7 +2548,7 @@ static bool migrate_vma_check_page(struct page *page)
 		 * will bump the page reference count. Sadly there is no way to
 		 * differentiate a regular pin from migration wait. Hence to
 		 * avoid 2 racing thread trying to migrate back to CPU to enter
-		 * infinite loop (one stoping migration because the other is
+		 * infinite loop (one stopping migration because the other is
 		 * waiting on pte migration entry). We always return true here.
 		 *
 		 * FIXME proper solution is to rework migration_entry_wait() so
diff --git a/mm/page_ext.c b/mm/page_ext.c
index a3616f7..cf931eb 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -34,7 +34,7 @@
  *
  * The need callback is used to decide whether extended memory allocation is
  * needed or not. Sometimes users want to deactivate some features in this
- * boot and extra memory would be unneccessary. In this case, to avoid
+ * boot and extra memory would be unnecessary. In this case, to avoid
  * allocating huge chunk of memory, each clients represent their need of
  * extra memory through the need callback. If one of the need callbacks
  * returns true, it means that someone needs extra memory so that
-- 
2.10.1



* Re: [PATCH] mm: fix some spelling mistakes in comments
  2020-11-27  1:17 [PATCH] mm: fix some spelling mistakes in comments Haitao Shi
@ 2020-11-27  8:06 ` Mike Rapoport
  2020-11-27 19:29   ` Souptick Joarder
  1 sibling, 0 replies; 4+ messages in thread
From: Mike Rapoport @ 2020-11-27  8:06 UTC (permalink / raw)
  To: Haitao Shi; +Cc: akpm, linux-mm, linux-kernel, wangle6

On Fri, Nov 27, 2020 at 09:17:47AM +0800, Haitao Shi wrote:
> Fix some spelling mistakes in comments:
> 	udpate ==> update
> 	succesful ==> successful
> 	exmaple ==> example
> 	unneccessary ==> unnecessary
> 	stoping ==> stopping
> 	uknown ==> unknown
> 
> Signed-off-by: Haitao Shi <shihaitao1@huawei.com>

Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>

> ---
>  mm/filemap.c     | 2 +-
>  mm/huge_memory.c | 2 +-
>  mm/khugepaged.c  | 2 +-
>  mm/memblock.c    | 2 +-
>  mm/migrate.c     | 2 +-
>  mm/page_ext.c    | 2 +-
>  6 files changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 3ebbe64..8826c48 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -1359,7 +1359,7 @@ static int __wait_on_page_locked_async(struct page *page,
>  	else
>  		ret = PageLocked(page);
>  	/*
> -	 * If we were succesful now, we know we're still on the
> +	 * If we were successful now, we know we're still on the
>  	 * waitqueue as we're still under the lock. This means it's
>  	 * safe to remove and return success, we know the callback
>  	 * isn't going to trigger.
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index ec2bb93..0fea0c2 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2356,7 +2356,7 @@ static void __split_huge_page_tail(struct page *head, int tail,
>  	 * Clone page flags before unfreezing refcount.
>  	 *
>  	 * After successful get_page_unless_zero() might follow flags change,
> -	 * for exmaple lock_page() which set PG_waiters.
> +	 * for example lock_page() which set PG_waiters.
>  	 */
>  	page_tail->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
>  	page_tail->flags |= (head->flags &
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 4e3dff1..d6f7ede 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1273,7 +1273,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
>  			 * PTEs are armed with uffd write protection.
>  			 * Here we can also mark the new huge pmd as
>  			 * write protected if any of the small ones is
> -			 * marked but that could bring uknown
> +			 * marked but that could bring unknown
>  			 * userfault messages that falls outside of
>  			 * the registered range.  So, just be simple.
>  			 */
> diff --git a/mm/memblock.c b/mm/memblock.c
> index b68ee86..086662a 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -871,7 +871,7 @@ int __init_memblock memblock_physmem_add(phys_addr_t base, phys_addr_t size)
>   * @base: base address of the region
>   * @size: size of the region
>   * @set: set or clear the flag
> - * @flag: the flag to udpate
> + * @flag: the flag to update
>   *
>   * This function isolates region [@base, @base + @size), and sets/clears flag
>   *
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 5795cb8..8a3580c 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2548,7 +2548,7 @@ static bool migrate_vma_check_page(struct page *page)
>  		 * will bump the page reference count. Sadly there is no way to
>  		 * differentiate a regular pin from migration wait. Hence to
>  		 * avoid 2 racing thread trying to migrate back to CPU to enter
> -		 * infinite loop (one stoping migration because the other is
> +		 * infinite loop (one stopping migration because the other is
>  		 * waiting on pte migration entry). We always return true here.
>  		 *
>  		 * FIXME proper solution is to rework migration_entry_wait() so
> diff --git a/mm/page_ext.c b/mm/page_ext.c
> index a3616f7..cf931eb 100644
> --- a/mm/page_ext.c
> +++ b/mm/page_ext.c
> @@ -34,7 +34,7 @@
>   *
>   * The need callback is used to decide whether extended memory allocation is
>   * needed or not. Sometimes users want to deactivate some features in this
> - * boot and extra memory would be unneccessary. In this case, to avoid
> + * boot and extra memory would be unnecessary. In this case, to avoid
>   * allocating huge chunk of memory, each clients represent their need of
>   * extra memory through the need callback. If one of the need callbacks
>   * returns true, it means that someone needs extra memory so that
> -- 
> 2.10.1
> 

-- 
Sincerely yours,
Mike.


* Re: [PATCH] mm: fix some spelling mistakes in comments
  2020-11-27  1:17 [PATCH] mm: fix some spelling mistakes in comments Haitao Shi
@ 2020-11-27 19:29   ` Souptick Joarder
  1 sibling, 0 replies; 4+ messages in thread
From: Souptick Joarder @ 2020-11-27 19:29 UTC (permalink / raw)
  To: Haitao Shi; +Cc: Andrew Morton, rppt, Linux-MM, linux-kernel, wangle6

On Fri, Nov 27, 2020 at 6:50 AM Haitao Shi <shihaitao1@huawei.com> wrote:
>
> Fix some spelling mistakes in comments:
>         udpate ==> update
>         succesful ==> successful
>         exmaple ==> example
>         unneccessary ==> unnecessary
>         stoping ==> stopping
>         uknown ==> unknown
>
> Signed-off-by: Haitao Shi <shihaitao1@huawei.com>

Reviewed-by: Souptick Joarder <jrdr.linux@gmail.com>

> ---
>  mm/filemap.c     | 2 +-
>  mm/huge_memory.c | 2 +-
>  mm/khugepaged.c  | 2 +-
>  mm/memblock.c    | 2 +-
>  mm/migrate.c     | 2 +-
>  mm/page_ext.c    | 2 +-
>  6 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 3ebbe64..8826c48 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -1359,7 +1359,7 @@ static int __wait_on_page_locked_async(struct page *page,
>         else
>                 ret = PageLocked(page);
>         /*
> -        * If we were succesful now, we know we're still on the
> +        * If we were successful now, we know we're still on the
>          * waitqueue as we're still under the lock. This means it's
>          * safe to remove and return success, we know the callback
>          * isn't going to trigger.
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index ec2bb93..0fea0c2 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2356,7 +2356,7 @@ static void __split_huge_page_tail(struct page *head, int tail,
>          * Clone page flags before unfreezing refcount.
>          *
>          * After successful get_page_unless_zero() might follow flags change,
> -        * for exmaple lock_page() which set PG_waiters.
> +        * for example lock_page() which set PG_waiters.
>          */
>         page_tail->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
>         page_tail->flags |= (head->flags &
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 4e3dff1..d6f7ede 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1273,7 +1273,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
>                          * PTEs are armed with uffd write protection.
>                          * Here we can also mark the new huge pmd as
>                          * write protected if any of the small ones is
> -                        * marked but that could bring uknown
> +                        * marked but that could bring unknown
>                          * userfault messages that falls outside of
>                          * the registered range.  So, just be simple.
>                          */
> diff --git a/mm/memblock.c b/mm/memblock.c
> index b68ee86..086662a 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -871,7 +871,7 @@ int __init_memblock memblock_physmem_add(phys_addr_t base, phys_addr_t size)
>   * @base: base address of the region
>   * @size: size of the region
>   * @set: set or clear the flag
> - * @flag: the flag to udpate
> + * @flag: the flag to update
>   *
>   * This function isolates region [@base, @base + @size), and sets/clears flag
>   *
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 5795cb8..8a3580c 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2548,7 +2548,7 @@ static bool migrate_vma_check_page(struct page *page)
>                  * will bump the page reference count. Sadly there is no way to
>                  * differentiate a regular pin from migration wait. Hence to
>                  * avoid 2 racing thread trying to migrate back to CPU to enter
> -                * infinite loop (one stoping migration because the other is
> +                * infinite loop (one stopping migration because the other is
>                  * waiting on pte migration entry). We always return true here.
>                  *
>                  * FIXME proper solution is to rework migration_entry_wait() so
> diff --git a/mm/page_ext.c b/mm/page_ext.c
> index a3616f7..cf931eb 100644
> --- a/mm/page_ext.c
> +++ b/mm/page_ext.c
> @@ -34,7 +34,7 @@
>   *
>   * The need callback is used to decide whether extended memory allocation is
>   * needed or not. Sometimes users want to deactivate some features in this
> - * boot and extra memory would be unneccessary. In this case, to avoid
> + * boot and extra memory would be unnecessary. In this case, to avoid
>   * allocating huge chunk of memory, each clients represent their need of
>   * extra memory through the need callback. If one of the need callbacks
>   * returns true, it means that someone needs extra memory so that
> --
> 2.10.1
>
>


end of thread, other threads:[~2020-11-28  0:30 UTC | newest]

Thread overview: 4+ messages
2020-11-27  1:17 [PATCH] mm: fix some spelling mistakes in comments Haitao Shi
2020-11-27  8:06 ` Mike Rapoport
2020-11-27 19:29 ` Souptick Joarder
