From: Pavel Butsykin
Date: Tue, 6 Sep 2016 15:36:16 +0300
Message-ID: <57CEB840.4000809@virtuozzo.com>
In-Reply-To: <20160901141718.GB6355@noname.redhat.com>
References: <20160829171021.4902-1-pbutsykin@virtuozzo.com> <20160901141718.GB6355@noname.redhat.com>
Subject: Re: [Qemu-devel] [PATCH RFC v2 00/22] I/O prefetch cache
To: Kevin Wolf
Cc: qemu-block@nongnu.org, qemu-devel@nongnu.org, mreitz@redhat.com, stefanha@redhat.com, den@openvz.org, jsnow@redhat.com, eblake@redhat.com, famz@redhat.com

On 01.09.2016 17:17, Kevin Wolf wrote:
> Am 29.08.2016 um 19:09 hat Pavel Butsykin geschrieben:
>> The prefetch cache aims to improve the performance of sequential reads.
>> Of most interest here are small sequential read requests; such requests
>> can be optimized by extending them and moving the data into the
>> prefetch cache.
>> [...]
>
> Before I start actually looking into your code, I read both this cover
> letter and your KVM Forum slides, and as far as I can say, the
> fundamental idea and your design look sound to me. It was a good read,
> too, so thanks for writing all the explanations!
>

Thank you!

> One thing that came to mind is that we have more caches elsewhere, most
> notably the qcow2 metadata cache, and I still have that private branch
> that adds a qcow2 data cache, too (for merging small allocating writes,
> if you remember my talk from last year). However, the existing
> Qcow2Cache has a few problems like being tied to the cluster size.
>
> So I wondered how hard you think it would be to split pcache into a
> reusable cache core that just manages the contents based on calls like
> "allocate/drop/get cached memory for bytes x...y", and the actual
> pcache code that implements the read-ahead policy. Then qcow2 could
> reuse the core and use its own policy about what metadata to cache etc.
>

I think it's a good idea, and at first glance it seems achievable. To
take it further, though, I need a little more information about the
interfaces that would be used with your private cache. We will probably
need interfaces for every operation that increases or decreases the
reference count on a cached memory object; those should be sufficient to
implement the read-ahead policy on top. In general, separating the
memory management from the various policies applied to the cache memory
seems like the right decision.

> Of course, this can be done incrementally on top and should by no means
> block the inclusion of your code, but if it's possible, it might be an
> interesting thought to keep in mind.
>

I like the idea, and I'm ready to work on a more general solution. As a
first step, it would be good to define the interfaces of the cache core;
I could then build the next version on top of those anticipated
interfaces. I could also propose such interfaces myself in the next
version of pcache, and your requirements for them would help me with
that.

> Kevin
>
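
To make the discussion a bit more concrete, below is a very rough sketch
of the kind of core interface I have in mind. All names here (CacheCore,
cache_core_get() and so on) are purely illustrative; this is not an
existing QEMU API, only a starting point:

/* Hypothetical cache-core interface: the core owns the cached memory
 * and the reference counting, while the policy (pcache read-ahead,
 * qcow2 metadata, ...) decides what to cache and when. */

#include <stdint.h>

typedef struct CacheCore CacheCore;
typedef struct CacheNode CacheNode; /* refcounted cached byte range */

/* Create a core that caches at most max_size bytes in total. */
CacheCore *cache_core_new(uint64_t max_size);
void cache_core_free(CacheCore *core);

/* Allocate cached memory for bytes [offset, offset + bytes).
 * Returns a node with refcount 1, or NULL if the range overlaps
 * an already cached node. */
CacheNode *cache_core_alloc(CacheCore *core, uint64_t offset,
                            uint64_t bytes);

/* Get the cached node covering the range, taking a reference;
 * returns NULL on a cache miss. */
CacheNode *cache_core_get(CacheCore *core, uint64_t offset,
                          uint64_t bytes);

/* Release a reference. A node whose refcount drops to zero becomes
 * a candidate for eviction once max_size is exceeded. */
void cache_core_unref(CacheCore *core, CacheNode *node);

/* Drop a byte range explicitly, e.g. when a write invalidates it. */
void cache_core_drop(CacheCore *core, uint64_t offset, uint64_t bytes);

The read-ahead policy would then live entirely outside the core: pcache
would decide when to allocate a larger range than the guest actually
requested, and qcow2 could apply its own policy for metadata on top of
the same calls.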