From mboxrd@z Thu Jan 1 00:00:00 1970
From: "J. R. Okajima"
Subject: Re: Q. cache in squashfs?
Date: Thu, 08 Jul 2010 15:08:47 +0900
Message-ID: <6356.1278569327@jrobl>
References: <19486.1277347066@jrobl> <4C354CBE.70401@lougher.demon.co.uk>
Cc: linux-fsdevel@vger.kernel.org
To: Phillip Lougher
Return-path:
Received: from mtoichi13.ns.itscom.net ([219.110.2.183]:56271 "EHLO mtoichi13.ns.itscom.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751970Ab0GHGJM (ORCPT ); Thu, 8 Jul 2010 02:09:12 -0400
In-Reply-To: <4C354CBE.70401@lougher.demon.co.uk>
Sender: linux-fsdevel-owner@vger.kernel.org
List-ID:

Phillip Lougher:
> What I think you're seeing here is the negative effect of fragment
> blocks (tail-end packing) in the native squashfs example and the
> positive effect of vfs/loop block caching in the ext3 on squashfs example.

Thank you very much for your explanation.
I think the number of cached decompressed fragment blocks is related too.
I thought it was much larger, but I found it is 3 by default. I will try
a larger value, with and without the -no-fragments option you pointed out.

Also I am afraid the nested loopback mount will cause double (or more)
caching: once by the ext3 loopback and once by the native squashfs
loopback, and some people will not want this. But if a user has plenty
of memory and does not care about the nested caching (because it will be
reclaimed when necessary), then I expect the nested loopback mount will
be a good option. For instance:
- CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE = 1
- inner single ext2 image
- mksquashfs without -no-fragments
- RAM 1GB
- squashfs image size 250MB

Do you think it will be better for a very random access pattern?

J. R. Okajima
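
P.S. For concreteness, the nested setup above could be prepared roughly
as follows. This is only a sketch: all paths, mount points, and sizes
are illustrative, it needs root, and it assumes squashfs-tools and loop
device support are available.

```shell
# 1. Create and populate the inner ext2 image (~250MB as in the example).
dd if=/dev/zero of=inner-ext2.img bs=1M count=250
mkfs.ext2 -F inner-ext2.img
mkdir -p /mnt/inner
mount -o loop inner-ext2.img /mnt/inner
cp -a /path/to/data/. /mnt/inner/
umount /mnt/inner

# 2. Pack the ext2 image into a squashfs.
#    Fragments stay enabled (no -no-fragments), since the ext2 layer
#    does the caching, not the squashfs fragment cache.
mkdir -p sq-root
mv inner-ext2.img sq-root/
mksquashfs sq-root outer.squashfs

# 3. Nested loopback mount: squashfs first, then the ext2 image inside it.
mkdir -p /mnt/sq /mnt/data
mount -o loop -t squashfs outer.squashfs /mnt/sq
mount -o loop -t ext2 /mnt/sq/inner-ext2.img /mnt/data
```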