From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Li, Liang Z"
To: "Hansen, Dave", "mst@redhat.com"
CC: "pbonzini@redhat.com", "amit.shah@redhat.com", "quintela@redhat.com",
	"dgilbert@redhat.com", "qemu-devel@nongnu.org", "kvm@vger.kernel.org",
	"virtio-dev@lists.oasis-open.org", "linux-mm@kvack.org",
	"linux-kernel@vger.kernel.org",
	"virtualization@lists.linux-foundation.org",
	"mgorman@techsingularity.net", "cornelia.huck@de.ibm.com"
Subject: RE: [PATCH kernel v4 7/7] virtio-balloon: tell host vm's unused page info
Date: Mon, 7 Nov 2016 03:37:58 +0000
References: <1478067447-24654-1-git-send-email-liang.z.li@intel.com>
	<1478067447-24654-8-git-send-email-liang.z.li@intel.com>

> Please squish this and patch 5 together.  It makes no sense to separate
> them.

OK.

> > +static void send_unused_pages_info(struct virtio_balloon *vb,
> > +				unsigned long req_id)
> > +{
> > +	struct scatterlist sg_in;
> > +	unsigned long pfn = 0, bmap_len, pfn_limit, last_pfn, nr_pfn;
> > +	struct virtqueue *vq = vb->req_vq;
> > +	struct virtio_balloon_resp_hdr *hdr = vb->resp_hdr;
> > +	int ret = 1, used_nr_bmap = 0, i;
> > +
> > +	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_PAGE_BITMAP) &&
> > +	    vb->nr_page_bmap == 1)
> > +		extend_page_bitmap(vb);
> > +
> > +	pfn_limit = PFNS_PER_BMAP * vb->nr_page_bmap;
> > +	mutex_lock(&vb->balloon_lock);
> > +	last_pfn = get_max_pfn();
> > +
> > +	while (ret) {
> > +		clear_page_bitmap(vb);
> > +		ret = get_unused_pages(pfn, pfn + pfn_limit, vb->page_bitmap,
> > +			PFNS_PER_BMAP, vb->nr_page_bmap);
>
> This changed the underlying data structure without changing the way that
> the structure is populated.
>
> This algorithm picks a "PFNS_PER_BMAP * vb->nr_page_bmap"-sized set of
> pfns, allocates a bitmap for them, then loops through all zones looking
> for pages in any free list that are in that range.
>
> Unpacking all the indirection, it looks like this:
>
> for (pfn = 0; pfn < get_max_pfn(); pfn += BITMAP_SIZE_IN_PFNS)
> 	for_each_populated_zone(zone)
> 		for_each_migratetype_order(order, t)
> 			list_for_each(..., &zone->free_area[order])...
>
> Let's say we do a 32k bitmap that can hold ~1M pages.  That's 4GB of RAM.
> On a 1TB system, that's 256 passes through the top-level loop.
> The bottom-level lists have tens of thousands of pages in them, even on
> my laptop.  Only 1/256 of these pages will get consumed in a given pass.

Your description is not exactly right. A single 32k bitmap is used only
when there is little free memory left in the system and
extend_page_bitmap() has failed to allocate more memory for the bitmap.
Otherwise, dozens of 32k split bitmaps will be used; this version limits
the bitmap count to 32, which means we can use at most 32 * 32kB for the
bitmaps, enough to cover 128GB of RAM. We can increase the bitmap count
limit to a larger value if 32 is not big enough.

> That's an awfully inefficient way of doing it.  This patch essentially
> changed the data structure without changing the algorithm to populate it.
>
> Please change the *algorithm* to use the new data structure efficiently.
> Such a change would only do a single pass through each freelist, and
> would choose whether to use the extent-based (pfn -> range) or
> bitmap-based approach based on the contents of the free lists.

Saving the free page info to a raw bitmap first, and then processing the
raw bitmap to produce the proper 'extent-based' and 'bitmap-based'
encodings, is the most efficient way I can come up with to cut down the
virtio data transmission. Do you have a better idea?
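To make sure I understand the suggestion, here is a rough userspace sketch
of such a single pass. The free list is faked as a plain array, and
report_extent(), set_bits() and the order-4 threshold are all made up for
illustration; none of it is taken from the patch:

#include <stdio.h>

/* 1M bits per chunk: with 4kB pages, one chunk covers 4GB of pfns. */
#define BITMAP_PFNS	(1UL << 20)
#define BITS_PER_LONG_	(8 * sizeof(unsigned long))

struct free_block {		/* stand-in for a buddy free_area entry */
	unsigned long start_pfn;
	unsigned int order;	/* 2^order contiguous free pages */
};

static unsigned long bitmap[BITMAP_PFNS / (8 * sizeof(unsigned long))];

static void report_extent(unsigned long pfn, unsigned long nr)
{
	printf("extent: pfn %lu, %lu pages\n", pfn, nr);
}

static void set_bits(unsigned long pfn, unsigned long nr)
{
	while (nr--) {
		bitmap[pfn / BITS_PER_LONG_] |= 1UL << (pfn % BITS_PER_LONG_);
		pfn++;
	}
}

int main(void)
{
	/* Fake freelist: one big block plus scattered order-0 pages. */
	struct free_block list[] = {
		{ .start_pfn = 4096, .order = 10 },	/* 4MB contiguous */
		{ .start_pfn = 100,  .order = 0 },
		{ .start_pfn = 102,  .order = 0 },
	};
	unsigned int i;

	for (i = 0; i < sizeof(list) / sizeof(list[0]); i++) {
		unsigned long nr = 1UL << list[i].order;

		/*
		 * One decision per block, made while walking the list
		 * once: large blocks go out as (start, len) extents,
		 * scattered small pages as bits in the chunk's bitmap.
		 */
		if (list[i].order >= 4)
			report_extent(list[i].start_pfn, nr);
		else
			set_bits(list[i].start_pfn, nr);
	}
	return 0;
}

So each free block would be visited exactly once, and the representation
chosen per block, instead of re-walking all the freelists for every
bitmap-sized chunk of pfns.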
In QEMU, no matter how we encode the bitmap, the raw format bitmap will be
used in the end. But what I did in this version is:

	kernel: get the raw bitmap --> encode the bitmap
	QEMU:   decode the bitmap  --> get the raw bitmap

Is it worth doing this kind of job here? We can save some virtio data
transmission, but at the same time we do extra work. It seems the benefit
we get from this feature is not as big as that from fast balloon
inflating/deflating.

> You should not be using get_max_pfn().  Any patch set that continues to
> use it is not likely to be using a proper algorithm.

Do you have any suggestion about how to avoid it?

Thanks!
Liang