From: Wei Wang <wei.w.wang@intel.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: virtio-dev@lists.oasis-open.org, linux-kernel@vger.kernel.org,
	qemu-devel@nongnu.org, virtualization@lists.linux-foundation.org,
	kvm@vger.kernel.org, linux-mm@kvack.org, mhocko@kernel.org,
	akpm@linux-foundation.org, mawilcox@microsoft.com,
	david@redhat.com, penguin-kernel@I-love.SAKURA.ne.jp,
	cornelia.huck@de.ibm.com, mgorman@techsingularity.net,
	aarcange@redhat.com, amit.shah@redhat.com, pbonzini@redhat.com,
	willy@infradead.org, liliang.opensource@gmail.com,
	yang.zhang.wz@gmail.com, quan.xu@aliyun.com, nilal@redhat.com,
	riel@redhat.com
Subject: Re: [PATCH v18 07/10] virtio-balloon: VIRTIO_BALLOON_F_SG
Date: Mon, 04 Dec 2017 11:46:46 +0800	[thread overview]
Message-ID: <5A24C526.2060400@intel.com> (raw)
In-Reply-To: <20171201171746-mutt-send-email-mst@kernel.org>

On 12/01/2017 11:38 PM, Michael S. Tsirkin wrote:
> On Wed, Nov 29, 2017 at 09:55:23PM +0800, Wei Wang wrote:
>> +static void send_one_desc(struct virtio_balloon *vb,
>> +			  struct virtqueue *vq,
>> +			  uint64_t addr,
>> +			  uint32_t len,
>> +			  bool inbuf,
>> +			  bool batch)
>> +{
>> +	int err;
>> +	unsigned int size;
>> +
>> +	/* Detach all the used buffers from the vq */
>> +	while (virtqueue_get_buf(vq, &size))
>> +		;
>> +
>> +	err = virtqueue_add_one_desc(vq, addr, len, inbuf, vq);
>> +	/*
>> +	 * This is expected to never fail: there is always at least 1 entry
>> +	 * available on the vq, because when the vq is full the worker thread
>> +	 * that adds the desc will be put into sleep until at least 1 entry is
>> +	 * available to use.
>> +	 */
>> +	BUG_ON(err);
>> +
>> +	/* If batching is requested, we batch till the vq is full */
>> +	if (!batch || !vq->num_free)
>> +		kick_and_wait(vq, vb->acked);
>> +}
>> +
> This internal kick complicates callers. I suggest that instead,
> you move this to callers, just return a "kick required" boolean.
> This way callers do not need to play with num_free at all.

Then in what situation would the function return true for "kick required"?

I think this wouldn't make a difference fundamentally. For example, suppose we
have 257 sgs (batching size = 256) to send to the host:

while (i < 257) {
	kick_required = send_sgs();
	if (kick_required)
		kick();	/* fires after the 256th sg has been added */
}

Do we still need a kick for the 257th sg, as before? Only the caller knows
whether the last added sgs need a kick (when send_sgs() receives one sg, it
doesn't know if more are coming).

There is another approach to checking whether the last added sgs have not yet
been synced to the host: expose "vring_virtqueue->num_added" to the caller
via a virtio_ring API:

unsigned int virtqueue_num_added(struct virtqueue *_vq)
{
	struct vring_virtqueue *vq = to_vvq(_vq);

	return vq->num_added;
}
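
With that, a caller that batches sgs could handle the final sync itself,
e.g. (a rough sketch; virtqueue_num_added() is the proposed API above,
kick_and_wait() is from this patch, and send_sgs() is the hypothetical
helper from the example earlier):

while (i < 257)
	send_sgs();	/* kicks internally only when the vq fills up */

/* Sync the tail: descs that were added after the last kick */
if (virtqueue_num_added(vq))
	kick_and_wait(vq, vb->acked);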



>> +/*
>> + * Send balloon pages in sgs to host. The balloon pages are recorded in the
>> + * page xbitmap. Each bit in the bitmap corresponds to a page of PAGE_SIZE.
>> + * The page xbitmap is searched for continuous "1" bits, which correspond
>> + * to continuous pages, to chunk into sgs.
>> + *
>> + * @page_xb_start and @page_xb_end form the range of bits in the xbitmap that
>> + * need to be searched.
>> + */
>> +static void tell_host_sgs(struct virtio_balloon *vb,
>> +			  struct virtqueue *vq,
>> +			  unsigned long page_xb_start,
>> +			  unsigned long page_xb_end)
>> +{
>> +	unsigned long pfn_start, pfn_end;
>> +	uint64_t addr;
>> +	uint32_t len, max_len = round_down(UINT_MAX, PAGE_SIZE);
>> +
>> +	pfn_start = page_xb_start;
>> +	while (pfn_start < page_xb_end) {
>> +		pfn_start = xb_find_next_set_bit(&vb->page_xb, pfn_start,
>> +						 page_xb_end);
>> +		if (pfn_start == page_xb_end + 1)
>> +			break;
>> +		pfn_end = xb_find_next_zero_bit(&vb->page_xb,
>> +						pfn_start + 1,
>> +						page_xb_end);
>> +		addr = pfn_start << PAGE_SHIFT;
>> +		len = (pfn_end - pfn_start) << PAGE_SHIFT;
> This assignment can overflow. Next line compares with UINT_MAX but by
> that time it is too late.  I think you should do all math in 64 bit to
> avoid surprises, then truncate to max_len and then it's safe to assign
> to sg.

Sounds reasonable, thanks.
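
Something like this, I suppose (just a sketch of the idea, untested;
len64 is a new local):

	uint64_t len64, max_len = round_down((uint64_t)UINT_MAX, PAGE_SIZE);

	addr = (uint64_t)pfn_start << PAGE_SHIFT;
	/* do the math in 64 bit, truncate to max_len, then assign */
	len64 = (uint64_t)(pfn_end - pfn_start) << PAGE_SHIFT;
	len = min_t(uint64_t, len64, max_len);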


>> +
>> +	xb_clear_bit_range(&vb->page_xb, page_xb_start, page_xb_end);
>> +}
>> +
>> +static inline int xb_set_page(struct virtio_balloon *vb,
>> +			       struct page *page,
>> +			       unsigned long *pfn_min,
>> +			       unsigned long *pfn_max)
>> +{
>> +	unsigned long pfn = page_to_pfn(page);
>> +	int ret;
>> +
>> +	*pfn_min = min(pfn, *pfn_min);
>> +	*pfn_max = max(pfn, *pfn_max);
>> +
>> +	do {
>> +		ret = xb_preload_and_set_bit(&vb->page_xb, pfn,
>> +					     GFP_NOWAIT | __GFP_NOWARN);
>> +	} while (unlikely(ret == -EAGAIN));
> what exactly does this loop do? Does this wait
> forever until there is some free memory? why GFP_NOWAIT?

Basically, "-EAGAIN" is returned from xb_set_bit() when the pre-allocated
per-cpu ida_bitmap is NULL. In that case, the caller re-invokes
xb_preload_and_set_bit(), which calls xb_preload() again to allocate the
ida_bitmap. So "-EAGAIN" does not actually indicate a memory allocation
failure; "-ENOMEM" is what reports an allocation failure, and the loop does
not retry on "-ENOMEM".

GFP_NOWAIT is used to avoid triggering memory reclaim, which could cause the
deadlock issue we discussed before.
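
To make the intended semantics explicit (illustrative only, with the error
handling spelled out):

	do {
		/* -EAGAIN: per-cpu ida_bitmap was NULL; preload and retry */
		ret = xb_preload_and_set_bit(&vb->page_xb, pfn,
					     GFP_NOWAIT | __GFP_NOWARN);
	} while (unlikely(ret == -EAGAIN));

	/* -ENOMEM: a real allocation failure; bail out instead of retrying */
	if (ret)
		return ret;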




>   	return num_freed_pages;
>   }
>   
> +/*
> + * The regular leak_balloon() with VIRTIO_BALLOON_F_SG needs memory allocation
> + * for xbitmap, which is not suitable for the oom case. This function does not
> + * use xbitmap to chunk pages, so it can be used by oom notifier to deflate
> + * pages when VIRTIO_BALLOON_F_SG is negotiated.
> + */
> I guess we can live with this for now.

Agreed, the patchset has grown big. We can get the basic implementation in
first and leave the following as future work. I can note that in the
commit log.

> Two things to consider
> - adding support for pre-allocating indirect buffers
> - sorting the internal page queue (how?)


Best,
Wei
