From: Andrea Arcangeli <aarcange@redhat.com>
To: John Stoffel <john@stoffel.org>
Cc: Dan Magenheimer <dan.magenheimer@oracle.com>,
Johannes Weiner <jweiner@redhat.com>,
Pekka Enberg <penberg@kernel.org>,
Cyclonus J <cyclonusj@gmail.com>,
Sasha Levin <levinsasha928@gmail.com>,
Christoph Hellwig <hch@infradead.org>,
David Rientjes <rientjes@google.com>,
Linus Torvalds <torvalds@linux-foundation.org>,
linux-mm@kvack.org, LKML <linux-kernel@vger.kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
Konrad Wilk <konrad.wilk@oracle.com>,
Jeremy Fitzhardinge <jeremy@goop.org>,
Seth Jennings <sjenning@linux.vnet.ibm.com>,
ngupta@vflare.org, Chris Mason <chris.mason@oracle.com>,
JBeulich@novell.com, Dave Hansen <dave@linux.vnet.ibm.com>,
Jonathan Corbet <corbet@lwn.net>
Subject: Re: [GIT PULL] mm: frontswap (for 3.2 window)
Date: Mon, 31 Oct 2011 19:44:43 +0100
Message-ID: <20111031184443.GH3466@redhat.com>
In-Reply-To: <20138.62532.493295.522948@quad.stoffel.home>
On Fri, Oct 28, 2011 at 02:28:20PM -0400, John Stoffel wrote:
> and service. How would TM benefit me? I don't use Xen, don't want to
> play with it honestly because I'm busy enough as it is, and I just
> don't see the hard benefits.
If you used Xen, tmem would be more or less the equivalent of KVM's
cache=writethrough/writeback. In short, for us tmem is the Linux host
pagecache running on bare metal. But when we vmexit for a read, we read
128-512k of it in one go (depending on if=virtio or other device models,
and on the guest kernel's readahead decisions), not just a fixed,
absolute-worst-case 4k unit like tmem does...
Without tmem, Xen can only work like KVM with cache=off.

If it at least saved us a copy, fine, but no, it still does the bounce
buffer, so I'd rather bounce in the host kernel function
file_read_actor than in some superfluous (as far as KVM is concerned)
tmem code. Plus we normally read orders of magnitude more than 4k per
vmexit, so our default cache=writeback/writethrough may already be more
efficient than using tmem for that.

We could only consider it for swap compression, but even for swap
compression I've no idea why we still need to do a copy, instead of
just compressing straight from the userland page, zerocopy (worst case
using any mechanism introduced to provide stable pages).
And when the host Linux pagecache goes hugepage, we'll get a >4k copy
in one go, while the tmem bounce will still be stuck at 4k...