From: Dave Hansen <haveblue@us.ibm.com>
To: Ray Bryant <raybry@sgi.com>
Cc: Hirokazu Takahashi <taka@valinux.co.jp>,
	Marcelo Tosatti <marcelo.tosatti@cyclades.com>,
	Andi Kleen <ak@suse.de>, Christoph Hellwig <hch@infradead.org>,
	Ray Bryant <raybry@austin.rr.com>, linux-mm <linux-mm@kvack.org>,
	lhms <lhms-devel@lists.sourceforge.net>,
	Paul Jackson <pj@sgi.com>, Nathan Scott <nathans@sgi.com>
Subject: Re: [PATCH 2.6.12-rc5 4/10] mm: manual page migration-rc3 -- add-sys_migrate_pages-rc3.patch
Date: Wed, 22 Jun 2005 10:23:33 -0700
Message-ID: <1119461013.18457.61.camel@localhost>
In-Reply-To: <20050622163934.25515.22804.81297@tomahawk.engr.sgi.com>

On Wed, 2005-06-22 at 09:39 -0700, Ray Bryant wrote:
> +asmlinkage long
> +sys_migrate_pages(pid_t pid, __u32 count, __u32 *old_nodes, __u32 *new_nodes)
> +{

Should the buffers be marked __user?
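
Something like this is what I have in mind (just sketching the
annotation, untested):

	asmlinkage long
	sys_migrate_pages(pid_t pid, __u32 count, __u32 __user *old_nodes,
			  __u32 __user *new_nodes);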

> +       if ((count < 1) || (count > MAX_NUMNODES))
> +               return -EINVAL;

Since you have an out_einval:, it's probably best to use it
consistently.  There is another place or two like this.
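
For instance, the check above could just be:

	if ((count < 1) || (count > MAX_NUMNODES))
		goto out_einval;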

> +       for (i = 0; i < count; i++) {
> +               int n;
> +
> +               n = tmp_old_nodes[i];
> +               if ((n < 0) || (n >= MAX_NUMNODES))
> +                       goto out_einval;
> +               node_set(n, old_node_mask);
> +
> +               n = tmp_new_nodes[i];
> +               if ((n < 0) || (n >= MAX_NUMNODES) || !node_online(n))
> +                       goto out_einval;
> +               node_set(n, new_node_mask);
> +
> +       }

I know it's a simple operation, but I think I'd probably break out the
array validation into its own function.

Then, replace the above loop with this:

if (!migrate_masks_valid(tmp_old_nodes, count) ||
    !migrate_masks_valid(tmp_new_nodes, count))
	goto out_einval;

for (i = 0; i < count; i++) {
	node_set(tmp_old_nodes[i], old_node_mask);
	node_set(tmp_new_nodes[i], new_node_mask);
}
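
A rough cut at the helper, assuming the tmp_*_nodes arrays are plain
__u32 buffers (the node_online() test on the new nodes would still need
to go somewhere, either via a flag here or back in the loop):

	static int migrate_masks_valid(__u32 *nodes, __u32 count)
	{
		int i;

		for (i = 0; i < count; i++) {
			/* same range check as the original loop */
			int n = nodes[i];

			if ((n < 0) || (n >= MAX_NUMNODES))
				return 0;
		}
		return 1;
	}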

> +static int
> +migrate_vma(struct task_struct *task, struct mm_struct *mm,
> +       struct vm_area_struct *vma, int *node_map)
...
> +       spin_lock(&mm->page_table_lock);
> +       for (vaddr = vma->vm_start; vaddr < vma->vm_end; vaddr += PAGE_SIZE) {
> +               page = follow_page(mm, vaddr, 0);
> +               /*
> +                * follow_page has been known to return pages with zero mapcount
> +                * and NULL mapping.  Skip those pages as well
> +                */
> +               if (page && page_mapcount(page)) {
> +                       if (node_map[page_to_nid(page)] >= 0) {
> +                               if (steal_page_from_lru(page_zone(page), page,
> +                                       &page_list))
> +                                               count++;
> +                               else
> +                                       BUG();
> +                       }
> +               }
> +       }
> +       spin_unlock(&mm->page_table_lock);

Personally, I dislike having so many embedded ifs, especially in a for
loop like that.  I think it's a lot more logical to code it up as a
series of continues, mostly because it's easy to read a continue as,
"skip this page."  You can't always see that as easily with an if().  It
also makes it so that you don't have to wrap the steal_page_from_lru()
call across two lines, which is super-ugly. :)

for (vaddr = vma->vm_start; vaddr < vma->vm_end; vaddr += PAGE_SIZE) {
	page = follow_page(mm, vaddr, 0);
	if (!page || !page_mapcount(page))
		continue;

	if (node_map[page_to_nid(page)] < 0)
		continue;

	if (steal_page_from_lru(page_zone(page), page, &page_list))
		count++;
	else
		BUG();
}

The same kind of thing goes for this if: 

> +       /* call the page migration code to move the pages */
> +       if (count) {
> +               nr_busy = try_to_migrate_pages(&page_list, node_map);
> +
> +               if (nr_busy < 0)
> +                       return nr_busy;
> +
> +               if (nr_busy == 0)
> +                       return count;
> +
> +               /* return the unmigrated pages to the LRU lists */
> +               list_for_each_entry_safe(page, page2, &page_list, lru) {
> +                       list_del(&page->lru);
> +                       putback_page_to_lru(page_zone(page), page);
> +               }
> +               return -EAGAIN;
> +       }
> +
> +       return 0;

It looks a lot cleaner if you just do:

	if (!count)
		return count;

	... contents of the if(){} block go here
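
Roughly (same logic as your version, just without the extra nesting):

	if (!count)
		return count;

	nr_busy = try_to_migrate_pages(&page_list, node_map);
	if (nr_busy < 0)
		return nr_busy;
	if (nr_busy == 0)
		return count;

	/* return the unmigrated pages to the LRU lists */
	list_for_each_entry_safe(page, page2, &page_list, lru) {
		list_del(&page->lru);
		putback_page_to_lru(page_zone(page), page);
	}
	return -EAGAIN;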

-- Dave

