* [GIT PULL] mm/vfs/fs:cleancache for 2.6.37 merge window
@ 2010-10-21 15:21 Dan Magenheimer
From: Dan Magenheimer @ 2010-10-21 15:21 UTC (permalink / raw)
To: torvalds; +Cc: linux-kernel
Hi Linus --
Please pull:
git://git.kernel.org/pub/scm/linux/kernel/git/djm/tmem.git for-linus
since git commit cb655d0f3d57c23db51b981648e452988c0223f9:
Linus Torvalds (1):
Linux 2.6.36-rc7
This cleancache patchset crosses multiple subsystem boundaries and
at the recent filesystem/storage/mm summit, people suggested that
I should just submit it to linux-next (done), and directly to you
at the next merge window. Previous lkml postings received a great
deal of review and comment from a wide variety of maintainers, as
documented in the commit logs.
In addition, the cleancache shim to Xen Transcendent Memory makes
use of and is dependent on the cleancache patchset. Jeremy Fitzhardinge
asked that I just include the shim in my tree to be pulled at the
same time, to avoid any merge ordering issues. (Another cleancache
user, zcache, developed by Nitin Gupta has been submitted for the
drivers/staging tree and will follow at some point, though possibly
not until the next merge window.)
The patches apply cleanly against linux-2.6.36-rc7. In
linux-next, sfr had to manually resolve a couple of very minor merge
conflicts in mm/Kconfig and include/linux/fs.h due to minor changes
in other trees.
If you have any questions or concerns, please let me know!
Thanks,
Dan
P.S. This is my first direct submission to you and today
happens to be my 50th birthday, so please be kind :-)
Dan Magenheimer (9):
mm/fs: cleancache documentation
fs: add field to superblock to support cleancache
mm: cleancache core ops functions and config
mm/fs: add hooks to support cleancache
ext3: add cleancache support
btrfs: add cleancache support
ext4: add cleancache support
ocfs2: add cleancache support
xen: cleancache shim to Xen Transcendent Memory
.../ABI/testing/sysfs-kernel-mm-cleancache | 11 +
Documentation/vm/cleancache.txt | 267 ++++++++++++++++++++
arch/x86/include/asm/xen/hypercall.h | 7 +
drivers/xen/Makefile | 1 +
drivers/xen/tmem.c | 264 +++++++++++++++++++
fs/btrfs/extent_io.c | 9 +
fs/btrfs/super.c | 2 +
fs/buffer.c | 5 +
fs/ext3/super.c | 2 +
fs/ext4/super.c | 2 +
fs/mpage.c | 7 +
fs/ocfs2/super.c | 2 +
fs/super.c | 3 +
include/linux/cleancache.h | 118 +++++++++
include/linux/fs.h | 5 +
include/xen/interface/xen.h | 22 ++
mm/Kconfig | 22 ++
mm/Makefile | 1 +
mm/cleancache.c | 245 ++++++++++++++++++
mm/filemap.c | 11 +
mm/truncate.c | 10 +
21 files changed, 1016 insertions(+), 0 deletions(-)
* Re: [GIT PULL] mm/vfs/fs:cleancache for 2.6.37 merge window
From: Lin Ming @ 2010-10-21 15:36 UTC (permalink / raw)
To: Dan Magenheimer; +Cc: torvalds, linux-kernel
On Thu, Oct 21, 2010 at 11:21 PM, Dan Magenheimer
<dan.magenheimer@oracle.com> wrote:
> P.S. This is my first direct submission to you and today
> happens to be my 50th birthday, so please be kind :-)
Happy Birthday :)
Lin Ming
* Re: [GIT PULL] mm/vfs/fs:cleancache for 2.6.37 merge window
From: gene heskett @ 2010-10-21 15:57 UTC (permalink / raw)
To: Lin Ming; +Cc: Dan Magenheimer, torvalds, linux-kernel
On Thursday, October 21, 2010, Lin Ming wrote:
>On Thu, Oct 21, 2010 at 11:21 PM, Dan Magenheimer
>
><dan.magenheimer@oracle.com> wrote:
>> P.S. This is my first direct submission to you and today
>> happens to be my 50th birthday, so please be kind :-)
>
>Happy Birthday :)
I'll second that wish but caution that 50 is a spring chicken compared to
some of us old farts. I turned 76 back on the 4th, but it wasn't worth a
public notice to me as I was busy getting stitches in my face -- too much
sun for too many years. ;-( Better than the alternative.
>
>Lin Ming
--
Cheers, Gene
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
An adequate bootstrap is a contradiction in terms.
* Ping? RE: [GIT PULL] mm/vfs/fs:cleancache for 2.6.37 merge window
From: Dan Magenheimer @ 2010-10-27 18:37 UTC (permalink / raw)
To: torvalds; +Cc: linux-kernel
> From: Dan Magenheimer
> To: torvalds@linux-foundation.org
> Cc: linux-kernel@vger.kernel.org
> Subject: [GIT PULL] mm/vfs/fs:cleancache for 2.6.37 merge window
>
> Hi Linus --
>
> Please pull:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/djm/tmem.git for-linus
>
> since git commit cb655d0f3d57c23db51b981648e452988c0223f9:
> Linus Torvalds (1):
>
> Linux 2.6.36-rc7
>
> This cleancache patchset crosses multiple subsystem boundaries and
> at the recent filesystem/storage/mm summit, people suggested that
> I should just submit it to linux-next (done), and directly to you
> at the next merge window. Previous lkml postings received a great
> deal of review and comment from a wide variety of maintainers, as
> documented in the commit logs.
<snip>
Ping? I hope you are still considering this. If not or if
there are any questions I can answer, please let me know.
Thanks,
Dan
* Re: Ping? RE: [GIT PULL] mm/vfs/fs:cleancache for 2.6.37 merge window
From: Andrew Morton @ 2010-10-30 19:06 UTC (permalink / raw)
To: Dan Magenheimer
Cc: torvalds, linux-kernel, Christoph Hellwig, Jeremy Fitzhardinge
On Wed, 27 Oct 2010 11:37:47 -0700 (PDT) Dan Magenheimer <dan.magenheimer@oracle.com> wrote:
> Ping? I hope you are still considering this. If not or if
> there are any questions I can answer, please let me know.
What's happened here is that the patchset has gone through its
iterations and a few people have commented and then after a while,
nobody had anything to say about the code so nobody said anything more.
But silence doesn't mean acceptance - it just means that nobody had
anything to say.
I think I looked at the earlier iterations, tried to understand the
point behind it all, made a few code suggestions and eventually tuned
out. At that time (and hence at this time) I just cannot explain to
myself why we would want to merge this code.
All new code is a cost/benefit decision. The costs are pretty well
known: larger codebase, more code for us and our "customers" to
maintain and support, etc. That the code pokes around in vfs and
various filesystems does increase those costs a little.
But the extent of the benefits to our users aren't obvious to me. The
code is still xen-specific, I believe? If so, that immediately reduces
the benefit side by a large amount simply because of the reduced
audience.
We did spend some time trying to get this wired up to zram so that the
feature would be potentially useful to *all* users, thereby setting the
usefulness multiplier back to 1.0. But I don't recall that anything
came of this?
I also don't know how useful the code is to its intended
micro-audience: xen users!
So can we please revisit all this from the top level? Jeremy, your
input would be valuable. Christoph, I recall that you had technical
objections - can you please repeat those?
It's the best I can do to kick this along, sorry.
* Re: Ping? RE: [GIT PULL] mm/vfs/fs:cleancache for 2.6.37 merge window
From: Jeremy Fitzhardinge @ 2010-10-30 20:49 UTC (permalink / raw)
To: Andrew Morton
Cc: Dan Magenheimer, torvalds, linux-kernel, Christoph Hellwig,
Chris Mason, Nitin Gupta
On 10/30/2010 12:06 PM, Andrew Morton wrote:
> On Wed, 27 Oct 2010 11:37:47 -0700 (PDT) Dan Magenheimer <dan.magenheimer@oracle.com> wrote:
>
>> Ping? I hope you are still considering this. If not or if
>> there are any questions I can answer, please let me know.
> What's happened here is that the patchset has gone through its
> iterations and a few people have commented and then after a while,
> nobody had anything to say about the code so nobody said anything more.
>
> But silence doesn't mean acceptance - it just means that nobody had
> anything to say.
>
> I think I looked at the earlier iterations, tried to understand the
> point behind it all, made a few code suggestions and eventually tuned
> out. At that time (and hence at this time) I just cannot explain to
> myself why we would want to merge this code.
>
> All new code is a cost/benefit decision. The costs are pretty well
> known: larger codebase, more code for us and our "customers" to
> maintain and support, etc. That the code pokes around in vfs and
> various filesystems does increase those costs a little.
>
> But the extent of the benefits to our users aren't obvious to me. The
> code is still xen-specific, I believe? If so, that immediately reduces
> the benefit side by a large amount simply because of the reduced
> audience.
>
> We did spend some time trying to get this wired up to zram so that the
> feature would be potentially useful to *all* users, thereby setting the
> usefulness multiplier back to 1.0. But I don't recall that anything
> came of this?
Nitin was definitely working on this and made some constructive comments
as a result, but I don't know if there's any completed/usable code or not.
> I also don't know how useful the code is to its intended
> micro-audience: xen users!
The benefit is that it allows memory to be much more fluidly assigned
between domains as needed, rather than having to statically allocate big
chunks of memory. The result is that it's possible to provision domains
with much smaller amounts of memory while still reducing the number of
real pagefault IOs. Dan's numbers are certainly very interesting (Dan,
perhaps you can repost those results).
However, I don't think it has been widely deployed yet, since most users
are using upstream/distro kernels.
> So can we please revisit all this from the top level? Jeremy, your
> input would be valuable.
OK, I'll need to get myself up to speed on the issues again. Will you
be about in Boston next week?
> Christoph, I recall that you had technical
> objections - can you please repeat those?
I think (and I don't want to misrepresent or minimize Christoph's
concerns here) the most acute one was the need for a one-line addition
to each filesystem which wants to participate.
> It's the best I can do to kick this along, sorry.
Thanks,
J
* RE: Ping? RE: [GIT PULL] mm/vfs/fs:cleancache for 2.6.37 merge window
From: Dan Magenheimer @ 2010-10-31 2:19 UTC (permalink / raw)
To: Andrew Morton
Cc: torvalds, linux-kernel, Christoph Hellwig, Jeremy Fitzhardinge,
Nitin Gupta, Chris Mason
> From: Andrew Morton [mailto:akpm@linux-foundation.org]
> So can we please revisit all this from the top level?
> Jeremy, your input would be valuable.
Hi Andrew --
Thanks for your reply! Between preparing for LPC and some
upcoming personal time off, I may not be able to reply in
a timely way to some future discussion on this thread, so
I will try to respond now but still encourage others to
respond. I would also be happy to talk f2f at LPC. I see as
I type this that Jeremy has already replied and will try
to incorporate information re his comments.
Andrew, I think you raise four interesting questions...
I hope it's OK if I paraphrase rather than quote directly?
1) Is something Xen-specific [though no, it's not] worth
the cost of this addition to the code base?
2) Even if (1) is yes, is this going to be used by a
significant percentage of Xen users?
3) Is cleancache beneficial to NON-Xen users?
4) Ignoring the "user-base" questions, are there technical
objections to and issues with cleancache that would
stop it from being merged?
So, I hope you have the time to read my long-winded reply:
1) By using the term "micro-audience" to refer to the xen
user base, I think you are grossly minimizing their
number. Since this is a technical list, we can leave
it up to industry analysts to argue that point, but I
think it is at least fair to point out that there are
literally dozens of merges accepted in just this window
that had a much larger code impact than cleancache
but have a much smaller user base than Xen.
While it is reasonable to argue that most of those other
merges don't touch core mm/vfs code, please realize that
the cleancache impact to mm/vfs totals less than 50 lines
(written by Chris Mason, not me), and these patched lines
have been essentially static since about 2.6.18 and
across all of the versions I've posted. Most of the
cleancache version churn came from designing a clean layer
so that those hooks are more broadly useful, plus
my inexperience in Linux coding style and the process
for posting patches to lkml. (The layering, plus
some sysfs info and documentation, contributes nearly
all of the additional lines of the patch.)
2) Will a lot of Xen users use this? That's sort of a
chicken and egg thing. I have talked to many real
Xen users who would love to use cleancache (actually
Transcendent Memory of which cleancache is the
key part), but they get scared when I tell them it
requires patches that are not upstream yet. Distros
(including Oracle's) are similarly skittish.
At Linux and Xen conferences[1], I've shown a nice performance
improvement on some workloads and negligible worst case
loss. The on-list controversies over cleancache have rarely
involved any performance questions/issues, but I can
revisit the data if it makes a difference on the
decision to merge.
3) GregKH was ready to apply Nitin's zram (actually called
zcache) patch until I pointed out that it was dependent
on cleancache, which wasn't yet merged. See:
http://lkml.org/lkml/2010/7/22/491. Due to an internal
API change at v5 (to use exportfs to support fs's where
an inode_t doesn't uniquely represent a file -- with
input and guidance from Andreas Dilger),
zcache needs a few changes and Nitin appears otherwise
occupied right now. If Nitin doesn't get 'round to it
and doesn't object, and this is the only barrier to merging
cleancache, I'll be happy to make those changes myself.
I'm separately working on some similar in-kernel compression
ideas, plus the "page-accessible memory" ideas I proposed
for LSF10/MM where, ahem, certain future idiosyncratic fast
solid-state-ish memory technologies are a good match for
cleancache. The core hooks are highly similar to what was
used for Geiger (google Geiger ASPLOS 2006) and I've heard
from several university students who are interested in
researching other ideas built on top of cleancache.
Oh, and at LinuxCon, Anthony Liguori told me he thought
there were at least parts of it that KVM can use.
So, no, this isn't a xen-only thing, nor a one-time
thing. Cleancache is a generic mechanism for grabbing
data from clean pages when they are reclaimed, caching
the data in non-kernel-directly-addressable memory, and
avoiding kernel disk reads into file cache when the pages
can be found in cleancache. I just happen to get paid to
work on Xen, so that's where this story started.
4) The on-list lkml patch review process was very helpful
in cleaning up the cleancache patchset. The
biggest technical hole -- filesystems for which an
inode_t can't uniquely identify a file -- is fixed.
Other technical questions/feedback are summarized
in the commit comments and in the FAQ included with
the patch, including, I believe, everything both
on-list and f2f from hch.
A LOT of people have provided review, useful feedback
and questions, and I've tried to be very diligent in
replying to all reviewers. If I've missed any that
would lead anyone to disagree with merging cleancache,
I hope they will re-raise them prior to the next
merge window.
Hope that helps... and I hope I am not sounding defensive.
Thanks again for offering to revisit it.
Thanks,
Dan
[1] For a quick performance summary, see slides 37-39 of http://oss.oracle.com/projects/tmem/dist/documentation/presentations/TranscendentMemoryXenSummit2010.pdf
* RE: Ping? RE: [GIT PULL] mm/vfs/fs:cleancache for 2.6.37 merge window
From: Dan Magenheimer @ 2011-01-19 16:42 UTC (permalink / raw)
To: Andrew Morton
Cc: torvalds, linux-kernel, Christoph Hellwig, Jeremy Fitzhardinge
> On Wed, 27 Oct 2010 11:37:47 -0700 (PDT) Dan Magenheimer
> <dan.magenheimer@oracle.com> wrote:
>
> > Ping? I hope you are still considering this. If not or if
> > there are any questions I can answer, please let me know.
>
> What's happened here is that the patchset has gone through its
> iterations and a few people have commented and then after a while,
> nobody had anything to say about the code so nobody said anything more.
>
> But silence doesn't mean acceptance - it just means that nobody had
> anything to say.
>
> I think I looked at the earlier iterations, tried to understand the
> point behind it all, made a few code suggestions and eventually tuned
> out. At that time (and hence at this time) I just cannot explain to
> myself why we would want to merge this code.
>
> All new code is a cost/benefit decision. The costs are pretty well
> known: larger codebase, more code for us and our "customers" to
> maintain and support, etc. That the code pokes around in vfs and
> various filesystems does increase those costs a little.
>
> But the extent of the benefits to our users aren't obvious to me. The
> code is still xen-specific, I believe? If so, that immediately reduces
> the benefit side by a large amount simply because of the reduced
> audience.
>
> We did spend some time trying to get this wired up to zram so that the
> feature would be potentially useful to *all* users, thereby setting the
> usefulness multiplier back to 1.0. But I don't recall that anything
> came of this?
>
> I also don't know how useful the code is to its intended
> micro-audience: xen users!
>
> So can we please revisit all this from the top level? Jeremy, your
> input would be valuable. Christoph, I recall that you had technical
> objections - can you please repeat those?
>
> It's the best I can do to kick this along, sorry.
Hi Andrew (and Linus) --
Time to re-open this conversation (for 2.6.39 merge window)?
Assuming GregKH approves kztmem as a staging driver, it should
now set "the usefulness multiplier back to 1.0". Kztmem
is a superset of Nitin's zcache and zram but more dynamic
and is completely independent of Xen and virtualization.
See kztmem overview: https://lkml.org/lkml/2011/1/18/170
And I believe Christoph's technical objections have all been
resolved. See longer version of previous reply here:
https://lkml.org/lkml/2010/10/30/226
So please reconsider cleancache!
Thanks,
Dan
P.S. Christoph, apologies, I see I didn't have you on the dist list
for the kztmem patch.
* PING? cleancache for 2.6.39 window?
From: Dan Magenheimer @ 2011-02-17 20:14 UTC (permalink / raw)
To: Andrew Morton, torvalds; +Cc: linux-kernel
> From: Dan Magenheimer
> Sent: Wednesday, January 19, 2011 9:42 AM
> To: Andrew Morton
> Cc: torvalds@linux-foundation.org; linux-kernel@vger.kernel.org;
> Christoph Hellwig; Jeremy Fitzhardinge
> Subject: RE: Ping? RE: [GIT PULL] mm/vfs/fs:cleancache for 2.6.37 merge window
>
> > On Wed, 27 Oct 2010 11:37:47 -0700 (PDT) Dan Magenheimer
> > <dan.magenheimer@oracle.com> wrote:
> >
> > > Ping? I hope you are still considering this. If not or if
> > > there are any questions I can answer, please let me know.
> >
> > What's happened here is that the patchset has gone through its
> > iterations and a few people have commented and then after a while,
> > nobody had anything to say about the code so nobody said anything more.
> >
> > But silence doesn't mean acceptance - it just means that nobody had
> > anything to say.
> >
> > I think I looked at the earlier iterations, tried to understand the
> > point behind it all, made a few code suggestions and eventually tuned
> > out. At that time (and hence at this time) I just cannot explain to
> > myself why we would want to merge this code.
> >
> > All new code is a cost/benefit decision. The costs are pretty well
> > known: larger codebase, more code for us and our "customers" to
> > maintain and support, etc. That the code pokes around in vfs and
> > various filesystems does increase those costs a little.
> >
> > But the extent of the benefits to our users aren't obvious to me. The
> > code is still xen-specific, I believe? If so, that immediately reduces
> > the benefit side by a large amount simply because of the reduced
> > audience.
> >
> > We did spend some time trying to get this wired up to zram so that the
> > feature would be potentially useful to *all* users, thereby setting the
> > usefulness multiplier back to 1.0. But I don't recall that anything
> > came of this?
> >
> > I also don't know how useful the code is to its intended
> > micro-audience: xen users!
> >
> > So can we please revisit all this from the top level? Jeremy, your
> > input would be valuable. Christoph, I recall that you had technical
> > objections - can you please repeat those?
> >
> > It's the best I can do to kick this along, sorry.
>
> Hi Andrew (and Linus) --
>
> Time to re-open this conversation (for 2.6.39 merge window)?
>
> Assuming GregKH approves kztmem as a staging driver, it should
> now set "the usefulness multiplier back to 1.0". Kztmem
> is a superset of Nitin's zcache and zram but more dynamic
> and is completely independent of Xen and virtualization.
>
> See kztmem overview: https://lkml.org/lkml/2011/1/18/170
>
> And I believe Christoph's technical objections have all been
> resolved. See longer version of previous reply here:
> https://lkml.org/lkml/2010/10/30/226
>
> So please reconsider cleancache!
Hi Andrew (and Linus) --
As you may have seen, gregkh has taken zcache into drivers/staging
for merging when the 2.6.39 window opens. And zcache (which
works entirely in-kernel, no virtualization required) depends on
cleancache.
Cleancache and zcache are both in linux-next and, via linux-next,
in mmotm. Last October, Linus said that he preferred cleancache
to merge via you (Andrew), not pulled from my git tree. So:
1) Is cleancache finally now acceptable for merging in the
upcoming 2.6.39 window? and
2) If so, is there anything else I need to do to ensure the
merging of cleancache happens during the open window or will
it all happen automagically (from my point of view) through
your (Andrew's) normal open-window processes?
Sorry to be excessively persistent, but I don't want to miss
the 2.6.39 window due to my newbie-ness. And I plan to be on
vacation for some time during March and would also like to ensure
I don't miss something *I* need to do before or during the
window.
Thanks,
Dan Magenheimer