From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: Wei Liu <wei.liu2@citrix.com>
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	ian.campbell@citrix.com, linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org
Subject: Re: [PATCH net-next v2 1/9] xen-netback: Introduce TX grant map definitions
Date: Fri, 13 Dec 2013 18:22:38 +0000
Message-ID: <52AB506E.3040509__1997.87052483789$1386959058$gmane$org@citrix.com>
In-Reply-To: <20131213153138.GL21900@zion.uk.xensource.com>

On 13/12/13 15:31, Wei Liu wrote:
> On Thu, Dec 12, 2013 at 11:48:09PM +0000, Zoltan Kiss wrote:
>> This patch contains the new definitions necessary for grant mapping.
>>
>> v2:
>> - move unmapping to a separate thread. The NAPI instance has to be scheduled
>>    even from thread context, which can cause huge delays
>> - unfortunately that makes struct xenvif bigger
>> - store grant handle after checking validity
>>
>
> If the size of xenvif really becomes a problem, you can try to make
> scratch space like struct gnttab_copy per-cpu. The downside is that
> this approach requires more coding and careful guarding against race
> conditions. You would need to consider cost vs. benefit.

I mentioned this because for the first series I had comments that I 
should be more vigilant about this. At that time there was a problem 
with struct xenvif allocation, which has since been solved. My quick 
calculation showed this patch will increase the size by ~15 KB.
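For reference, a minimal sketch of the per-cpu scratch idea you describe 
could look like the following; tx_copy_scratch and example_fill_copy_ops 
are made-up names here, just to illustrate the pattern, not something 
from this series:

#include <linux/percpu.h>
#include "common.h"	/* struct xenvif, MAX_PENDING_REQS */

/* One scratch array per CPU instead of one per vif (illustration only). */
static DEFINE_PER_CPU(struct gnttab_copy [MAX_PENDING_REQS], tx_copy_scratch);

static void example_fill_copy_ops(struct xenvif *vif)
{
	/* get_cpu_var() disables preemption, so nothing else on this CPU
	 * can reuse the scratch area until put_cpu_var(); the tricky part
	 * is making sure we never sleep or recurse while holding it.
	 */
	struct gnttab_copy *copy_ops = get_cpu_var(tx_copy_scratch);

	/* ... fill copy_ops[] and issue the GNTTABOP_copy batch ... */

	put_cpu_var(tx_copy_scratch);
}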

>
>> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
>>
>> ---
> [...]
>>   #define XENVIF_QUEUE_LENGTH 32
>>   #define XENVIF_NAPI_WEIGHT  64
>> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
>> index c1b7a42..3ddc474 100644
>> --- a/drivers/net/xen-netback/netback.c
>> +++ b/drivers/net/xen-netback/netback.c
>> @@ -772,6 +772,20 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
>>   	return page;
>>   }
>>
>> +static inline void xenvif_tx_create_gop(struct xenvif *vif, u16 pending_idx,
>> +	       struct xen_netif_tx_request *txp,
>> +	       struct gnttab_map_grant_ref *gop)
>> +{
>> +	vif->pages_to_map[gop-vif->tx_map_ops] = vif->mmap_pages[pending_idx];
>> +	gnttab_set_map_op(gop, idx_to_kaddr(vif, pending_idx),
>> +			  GNTMAP_host_map | GNTMAP_readonly,
>> +			  txp->gref, vif->domid);
>> +
>> +	memcpy(&vif->pending_tx_info[pending_idx].req, txp,
>> +	       sizeof(*txp));
>> +
>> +}
>> +
>
> This helper function is not used until next patch. Probably you can move
> it to the second patch.
>
> The same applies to other helper functions as well. Move them to the
> patch they are used. It would be easier for people to review.
I just moved them here because the second patch is already huge, and I 
couldn't come up with a way to split it up while keeping it bisectable 
and logically consistent. As I mentioned, I welcome ideas about that.

>
>>   static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>>   					       struct sk_buff *skb,
>>   					       struct xen_netif_tx_request *txp,
>> @@ -1593,6 +1607,106 @@ static int xenvif_tx_submit(struct xenvif *vif)
>>   	return work_done;
>>   }
>>
>> +void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success)
>> +{
>
> Do we care about zerocopy_success? I don't see it used in this function.
It will be used in the 5th patch. In any case, it's part of the zerocopy 
callback's signature.
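A rough sketch of what that will look like (the counter names below are 
only placeholders; the actual fields come with the 5th patch):

	/* Count completed vs. failed (copied) zerocopy skbs. */
	if (zerocopy_success)
		vif->tx_zerocopy_success++;
	else
		vif->tx_zerocopy_fail++;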

>
>> +	unsigned long flags;
>> +	pending_ring_idx_t index;
>> +	u16 pending_idx = ubuf->desc;
>> +	struct pending_tx_info *temp =
>> +		container_of(ubuf, struct pending_tx_info, callback_struct);
>> +	struct xenvif *vif =
>> +		container_of(temp - pending_idx, struct xenvif,
>> +			pending_tx_info[0]);
>> +
>
> The third parameter to container_of should be the name of the member
> within the struct.
Here ubuf->desc gives us pending_idx, so the first container_of() gives 
a pointer to the enclosing struct pending_tx_info; stepping back 
pending_idx elements (temp - pending_idx) gives the start of the 
pending_tx_info array, and from that we get to the struct xenvif. It's a 
bit tricky and not straightforward, I admit :)
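Written out step by step (same names as in the patch; "first" is only 
introduced here for clarity):

	struct pending_tx_info *temp =
		container_of(ubuf, struct pending_tx_info, callback_struct);
	/* temp == &vif->pending_tx_info[pending_idx], so stepping back
	 * pending_idx elements yields &vif->pending_tx_info[0] ...
	 */
	struct pending_tx_info *first = temp - pending_idx;
	/* ... and container_of() on element 0 recovers the enclosing vif. */
	struct xenvif *vif =
		container_of(first, struct xenvif, pending_tx_info[0]);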

>
>> +	spin_lock_irqsave(&vif->dealloc_lock, flags);
>> +	do {
>> +		pending_idx = ubuf->desc;
>> +		ubuf = (struct ubuf_info *) ubuf->ctx;
>> +		index = pending_index(vif->dealloc_prod);
>> +		vif->dealloc_ring[index] = pending_idx;
>> +		/* Sync with xenvif_tx_action_dealloc:
>> +		 * insert idx then incr producer.
>> +		 */
>> +		smp_wmb();
>> +		vif->dealloc_prod++;
>> +	} while (ubuf);
>> +	wake_up(&vif->dealloc_wq);
>> +	spin_unlock_irqrestore(&vif->dealloc_lock, flags);
>> +}
>> +
>> +static inline void xenvif_tx_dealloc_action(struct xenvif *vif)
>> +{
>> +	struct gnttab_unmap_grant_ref *gop;
>> +	pending_ring_idx_t dc, dp;
>> +	u16 pending_idx, pending_idx_release[MAX_PENDING_REQS];
>> +	unsigned int i = 0;
>> +
>> +	dc = vif->dealloc_cons;
>> +	gop = vif->tx_unmap_ops;
>> +
>> +	/* Free up any grants we have finished using */
>> +	do {
>> +		dp = vif->dealloc_prod;
>> +
>> +		/* Ensure we see all indices enqueued by netif_idx_release(). */
>
> There is no netif_idx_release in netback code. :-)
Oh yes, that comment comes from the classic code; it should say 
xenvif_zerocopy_callback. I will fix it.

>
>> +		smp_rmb();
>> +
>> +		while (dc != dp) {
>> +			pending_idx =
>> +				vif->dealloc_ring[pending_index(dc++)];
>> +
>> +			/* Already unmapped? */
>> +			if (vif->grant_tx_handle[pending_idx] ==
>> +				NETBACK_INVALID_HANDLE) {
>> +				netdev_err(vif->dev,
>> +					"Trying to unmap invalid handle! "
>> +					"pending_idx: %x\n", pending_idx);
>> +				continue;
>> +			}
>
> Should this be BUG_ON? AIUI this kthread should be the only one doing
> unmap, right?
The NAPI instance can do it as well if it is a small packet that fits 
into PKT_PROT_LEN. Still, this scenario shouldn't really happen; I just 
wasn't sure we have to crash immediately. Maybe handle it as a fatal 
error and destroy the vif?
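Something along these lines, assuming xenvif_fatal_tx_err() is safe to 
call from the dealloc kthread (just a sketch, not what the patch 
currently does):

		/* Treat an already-invalid handle as a fatal error
		 * instead of silently skipping the slot.
		 */
		if (vif->grant_tx_handle[pending_idx] ==
			NETBACK_INVALID_HANDLE) {
			netdev_err(vif->dev,
				   "Trying to unmap invalid handle! pending_idx: %x\n",
				   pending_idx);
			xenvif_fatal_tx_err(vif);
			break;
		}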
