linux-kernel.vger.kernel.org archive mirror
* Re: Re: 3.7-rc2 regression : file copied to CIFS-mounted directory corrupted
@ 2012-10-23  8:17 Jongman Heo
  2012-10-23  9:05 ` Eric Dumazet
  0 siblings, 1 reply; 12+ messages in thread
From: Jongman Heo @ 2012-10-23  8:17 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: 허종만, linux-kernel, netdev, edumazet



Hi,

------- Original Message -------
Sender : Eric Dumazet<eric.dumazet@gmail.com>
Date : 2012-10-23 15:08 (GMT+09:00)
Title : Re: 3.7-rc2 regression : file copied to CIFS-mounted directory corrupted

On Tue, 2012-10-23 at 05:38 +0000, Jongman Heo wrote:
> Hmm,
> 
> I've just met the issue, with the commit 5640f768 reverted.
> It seems that the issue does not always happen. So, my bisection may not be correct.
> 
> At this moment, I don't have enough time to do bisection again..
> 
> Regards.

What happens, if instead of reverting you try the following ?

If this solves the problem, then we shall find the driver that assumes
frags are order-0 pages only.

diff --git a/net/core/sock.c b/net/core/sock.c
index 8a146cf..a743e7c 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1726,7 +1726,7 @@ struct sk_buff *sock_alloc_send_skb(struct sock *sk, unsigned long size,
EXPORT_SYMBOL(sock_alloc_send_skb);

/* On 32bit arches, an skb frag is limited to 2^15 */
-#define SKB_FRAG_PAGE_ORDER get_order(32768)
+#define SKB_FRAG_PAGE_ORDER 0

bool sk_page_frag_refill(struct sock *sk, struct page_frag *pfrag)
{

------------------------------------------------------------------------------------

With the above patch, the issue has not been reproduced so far.

 - Current mainline                         : issue reproduced on 1st run
 - Reverting commit 5640f768 (tested again) : not reproduced in 300 runs
 - Applying the above patch                 : not reproduced in 300 runs

To be sure, more testing may be needed...

FYI, vmxnet3 driver is used for ethernet.

When the issue happens, the following error messages were emitted:

[   84.445735] CIFS VFS: default security mechanism requested.  The default security mechanism will be upgraded from ntlm to ntlmv2 in kernel release 3.3
[   87.135291] net eth0: eth0: tq[0] error 0x80000000
[   87.135298] net eth0: eth0: tq[1] error 0x80000000
[   87.135402] eth0: resetting
[   87.146071] eth0: intr type 3, mode 0, 5 vectors allocated
[   87.146695] eth0: NIC Link is Up 10000 Mbps
[   88.925044] CIFS VFS: Error -104 sending data on socket to server
[   98.934656] CIFS VFS: No writable handles for inode
    [... the same "CIFS VFS: No writable handles for inode" message repeats ~200 more times, timestamps 98.938317 through 99.419719 ...]
[  158.477345] net eth0: eth0: tq[0] error 0x80000000
[  158.477353] net eth0: eth0: tq[1] error 0x80000000
[  158.477448] eth0: resetting
[  158.486170] eth0: intr type 3, mode 0, 5 vectors allocated
[  158.486718] eth0: NIC Link is Up 10000 Mbps
    [... the tq[0]/tq[1] error, reset, and link-up sequence repeats 24 more times, timestamps 160.223384 through 172.903226 ...]

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: Re: 3.7-rc2 regression : file copied to CIFS-mounted directory corrupted
  2012-10-23  8:17 Re: 3.7-rc2 regression : file copied to CIFS-mounted directory corrupted Jongman Heo
@ 2012-10-23  9:05 ` Eric Dumazet
  2012-10-23  9:20   ` Shreyas Bhatewara
  0 siblings, 1 reply; 12+ messages in thread
From: Eric Dumazet @ 2012-10-23  9:05 UTC (permalink / raw)
  To: jongman.heo
  Cc: linux-kernel, netdev, edumazet, Shreyas Bhatewara, VMware, Inc.

On Tue, 2012-10-23 at 08:17 +0000, Jongman Heo wrote:

> 
> FYI, vmxnet3 driver is used for ethernet.

Yes, this driver needs some changes:

#define VMXNET3_MAX_TX_BUF_SIZE  (1 << 14)

That's 16KB.

As we can now provide up to 32KB fragments, we broke something.

vmxnet3_tq_xmit() needs to split large frags into 2 parts.
(And without resorting to skb_linearize(), of course!)

Any volunteer?

Thanks!



^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: 3.7-rc2 regression : file copied to CIFS-mounted directory corrupted
  2012-10-23  9:05 ` Eric Dumazet
@ 2012-10-23  9:20   ` Shreyas Bhatewara
  2012-10-23 10:02     ` [Pv-drivers] " Shreyas Bhatewara
  0 siblings, 1 reply; 12+ messages in thread
From: Shreyas Bhatewara @ 2012-10-23  9:20 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: linux-kernel, netdev, edumazet, VMware, Inc., jongman heo

Eric, thanks for the note. I will submit a patch to do it.

Shreyas

----- Original Message -----
> On Tue, 2012-10-23 at 08:17 +0000, Jongman Heo wrote:
> 
> > 
> > FYI, vmxnet3 driver is used for ethernet.
> 
> Yes, this driver needs some changes
> 
> #define VMXNET3_MAX_TX_BUF_SIZE  (1 << 14)
> 
> Thats 16KB
> 
> As we can now provide up to 32KB fragments we broke something.
> 
> vmxnet3_tq_xmit() needs to split large frags into 2 parts.
> (And without going to skb_linearize() of course !)
> 
> Any volunteer ?
> 
> Thanks !
> 
> 
> 

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [Pv-drivers] 3.7-rc2 regression : file copied to CIFS-mounted directory corrupted
  2012-10-23  9:20   ` Shreyas Bhatewara
@ 2012-10-23 10:02     ` Shreyas Bhatewara
  2012-10-23 13:50       ` Eric Dumazet
  0 siblings, 1 reply; 12+ messages in thread
From: Shreyas Bhatewara @ 2012-10-23 10:02 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: VMware, Inc., netdev, edumazet, linux-kernel, jongman heo

Well, actually the driver does split large frags into frags of VMXNET3_MAX_TX_BUF_SIZE bytes each.

vmxnet3_drv.c
 711         while (len) {
 712                 u32 buf_size;
 713
 714                 if (len < VMXNET3_MAX_TX_BUF_SIZE) {
 715                         buf_size = len;
 716                         dw2 |= len;
 717                 } else {
 718                         buf_size = VMXNET3_MAX_TX_BUF_SIZE;
 719                         /* spec says that for TxDesc.len, 0 == 2^14 */
 720                 }
 721
....
 743
 744                 len -= buf_size;
 745                 buf_offset += buf_size;
 746         }


----- Original Message -----
> Eric, thanks for the note. I will submit a patch to do it.
> 
> Shreyas
> 
> ----- Original Message -----
> > On Tue, 2012-10-23 at 08:17 +0000, Jongman Heo wrote:
> > 
> > > 
> > > FYI, vmxnet3 driver is used for ethernet.
> > 
> > Yes, this driver needs some changes
> > 
> > #define VMXNET3_MAX_TX_BUF_SIZE  (1 << 14)
> > 
> > Thats 16KB
> > 
> > As we can now provide up to 32KB fragments we broke something.
> > 
> > vmxnet3_tq_xmit() needs to split large frags into 2 parts.
> > (And without going to skb_linearize() of course !)
> > 
> > Any volunteer ?
> > 
> > Thanks !
> > 
> > 
> > 
> _______________________________________________
> Pv-drivers mailing list
> Pv-drivers@vmware.com
> http://mailman2.vmware.com/mailman/listinfo/pv-drivers
> 

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [Pv-drivers] 3.7-rc2 regression : file copied to CIFS-mounted directory corrupted
  2012-10-23 10:02     ` [Pv-drivers] " Shreyas Bhatewara
@ 2012-10-23 13:50       ` Eric Dumazet
  2012-10-23 19:39         ` Eric Dumazet
  0 siblings, 1 reply; 12+ messages in thread
From: Eric Dumazet @ 2012-10-23 13:50 UTC (permalink / raw)
  To: Shreyas Bhatewara
  Cc: VMware, Inc., netdev, edumazet, linux-kernel, jongman heo

On Tue, 2012-10-23 at 03:02 -0700, Shreyas Bhatewara wrote:

Please don't top-post on netdev or lkml.

> Well, actually the driver does split large frags into frags of VMXNET3_MAX_TX_BUF_SIZE bytes each.
> 
> vmxnet3_drv.c
>  711         while (len) {
>  712                 u32 buf_size;
>  713
>  714                 if (len < VMXNET3_MAX_TX_BUF_SIZE) {
>  715                         buf_size = len;
>  716                         dw2 |= len;
>  717                 } else {
>  718                         buf_size = VMXNET3_MAX_TX_BUF_SIZE;
>  719                         /* spec says that for TxDesc.len, 0 == 2^14 */
>  720                 }
>  721
> ....
>  743
>  744                 len -= buf_size;
>  745                 buf_offset += buf_size;
>  746         }

Only the skb head is handled in the code you copy/pasted.

You need to generalize that to the frag-handling code around line ~754.


Then, the estimated number of descriptors is wrong:

/* conservatively estimate # of descriptors to use */
count = VMXNET3_TXD_NEEDED(skb_headlen(skb)) +
	skb_shinfo(skb)->nr_frags + 1;


Yes, you need a more precise estimation, and vmxnet3_map_pkt() should
split too-big frags where needed.




^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [Pv-drivers] 3.7-rc2 regression : file copied to CIFS-mounted directory corrupted
  2012-10-23 13:50       ` Eric Dumazet
@ 2012-10-23 19:39         ` Eric Dumazet
  2012-10-29 17:30           ` [PATCH] vmxnet3: must split too big fragments Eric Dumazet
  0 siblings, 1 reply; 12+ messages in thread
From: Eric Dumazet @ 2012-10-23 19:39 UTC (permalink / raw)
  To: Shreyas Bhatewara
  Cc: VMware, Inc., netdev, edumazet, linux-kernel, jongman heo

On Tue, 2012-10-23 at 15:50 +0200, Eric Dumazet wrote:

> Only the skb head is handled in the code you copy/pasted.
> 
> You need to generalize that to code in lines ~754
> 
> 
> Then, the number of estimated descriptors is bad :
> 
> /* conservatively estimate # of descriptors to use */
> count = VMXNET3_TXD_NEEDED(skb_headlen(skb)) +
> 	skb_shinfo(skb)->nr_frags + 1;
> 
> 
> Yes, you need a more precise estimation and vmxnet3_map_pkt() should
> eventually split too big frags.

raw patch would be :

diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c
index ce9d4f2..0ae1bcc 100644
--- a/drivers/net/vmxnet3/vmxnet3_drv.c
+++ b/drivers/net/vmxnet3/vmxnet3_drv.c
@@ -744,28 +744,43 @@ vmxnet3_map_pkt(struct sk_buff *skb, struct vmxnet3_tx_ctx *ctx,
 
 	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
 		const struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[i];
+		u32 buf_size;
 
-		tbi = tq->buf_info + tq->tx_ring.next2fill;
-		tbi->map_type = VMXNET3_MAP_PAGE;
-		tbi->dma_addr = skb_frag_dma_map(&adapter->pdev->dev, frag,
-						 0, skb_frag_size(frag),
-						 DMA_TO_DEVICE);
+		buf_offset = 0;
+		len = skb_frag_size(frag);
+		while (len) {
+			tbi = tq->buf_info + tq->tx_ring.next2fill;
+			if (len < VMXNET3_MAX_TX_BUF_SIZE) {
+				buf_size = len;
+				dw2 |= len;
+			} else {
+				buf_size = VMXNET3_MAX_TX_BUF_SIZE;
+				/* spec says that for TxDesc.len, 0 == 2^14 */
+			}
+			tbi->map_type = VMXNET3_MAP_PAGE;
+			tbi->dma_addr = skb_frag_dma_map(&adapter->pdev->dev, frag,
+							 buf_offset, buf_size,
+							 DMA_TO_DEVICE);
 
-		tbi->len = skb_frag_size(frag);
+			tbi->len = buf_size;
 
-		gdesc = tq->tx_ring.base + tq->tx_ring.next2fill;
-		BUG_ON(gdesc->txd.gen == tq->tx_ring.gen);
+			gdesc = tq->tx_ring.base + tq->tx_ring.next2fill;
+			BUG_ON(gdesc->txd.gen == tq->tx_ring.gen);
 
-		gdesc->txd.addr = cpu_to_le64(tbi->dma_addr);
-		gdesc->dword[2] = cpu_to_le32(dw2 | skb_frag_size(frag));
-		gdesc->dword[3] = 0;
+			gdesc->txd.addr = cpu_to_le64(tbi->dma_addr);
+			gdesc->dword[2] = cpu_to_le32(dw2);
+			gdesc->dword[3] = 0;
 
-		dev_dbg(&adapter->netdev->dev,
-			"txd[%u]: 0x%llu %u %u\n",
-			tq->tx_ring.next2fill, le64_to_cpu(gdesc->txd.addr),
-			le32_to_cpu(gdesc->dword[2]), gdesc->dword[3]);
-		vmxnet3_cmd_ring_adv_next2fill(&tq->tx_ring);
-		dw2 = tq->tx_ring.gen << VMXNET3_TXD_GEN_SHIFT;
+			dev_dbg(&adapter->netdev->dev,
+				"txd[%u]: 0x%llu %u %u\n",
+				tq->tx_ring.next2fill, le64_to_cpu(gdesc->txd.addr),
+				le32_to_cpu(gdesc->dword[2]), gdesc->dword[3]);
+			vmxnet3_cmd_ring_adv_next2fill(&tq->tx_ring);
+			dw2 = tq->tx_ring.gen << VMXNET3_TXD_GEN_SHIFT;
+
+			len -= buf_size;
+			buf_offset += buf_size;
+		}
 	}
 
 	ctx->eop_txd = gdesc;
@@ -886,6 +901,18 @@ vmxnet3_prepare_tso(struct sk_buff *skb,
 	}
 }
 
+static int txd_estimate(const struct sk_buff *skb)
+{
+	int count = VMXNET3_TXD_NEEDED(skb_headlen(skb)) + 1;
+	int i;
+
+	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+		const struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[i];
+
+		count += VMXNET3_TXD_NEEDED(skb_frag_size(frag));
+	}
+	return count;
+}
 
 /*
  * Transmits a pkt thru a given tq
@@ -914,9 +941,7 @@ vmxnet3_tq_xmit(struct sk_buff *skb, struct vmxnet3_tx_queue *tq,
 	union Vmxnet3_GenericDesc tempTxDesc;
 #endif
 
-	/* conservatively estimate # of descriptors to use */
-	count = VMXNET3_TXD_NEEDED(skb_headlen(skb)) +
-		skb_shinfo(skb)->nr_frags + 1;
+	count = txd_estimate(skb);
 
 	ctx.ipv4 = (vlan_get_protocol(skb) == cpu_to_be16(ETH_P_IP));
 



^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH] vmxnet3: must split too big fragments
  2012-10-23 19:39         ` Eric Dumazet
@ 2012-10-29 17:30           ` Eric Dumazet
  2012-10-29 17:52             ` [Pv-drivers] " Bhavesh Davda
                               ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Eric Dumazet @ 2012-10-29 17:30 UTC (permalink / raw)
  To: Shreyas Bhatewara, David Miller
  Cc: VMware, Inc., netdev, linux-kernel, jongman heo

From: Eric Dumazet <edumazet@google.com>

vmxnet3 has a 16-kbyte limit per tx descriptor, which happened to work
as long as we provided PAGE_SIZE fragments.

Our stack can now build larger fragments, so we need to split them at
the 16-kbyte boundary.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: jongman heo <jongman.heo@samsung.com>
Tested-by: jongman heo <jongman.heo@samsung.com>
Cc: Shreyas Bhatewara <sbhatewara@vmware.com>
---
 drivers/net/vmxnet3/vmxnet3_drv.c |   65 +++++++++++++++++++---------
 1 file changed, 45 insertions(+), 20 deletions(-)

diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c
index ce9d4f2..0ae1bcc 100644
--- a/drivers/net/vmxnet3/vmxnet3_drv.c
+++ b/drivers/net/vmxnet3/vmxnet3_drv.c
@@ -744,28 +744,43 @@ vmxnet3_map_pkt(struct sk_buff *skb, struct vmxnet3_tx_ctx *ctx,
 
 	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
 		const struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[i];
+		u32 buf_size;
 
-		tbi = tq->buf_info + tq->tx_ring.next2fill;
-		tbi->map_type = VMXNET3_MAP_PAGE;
-		tbi->dma_addr = skb_frag_dma_map(&adapter->pdev->dev, frag,
-						 0, skb_frag_size(frag),
-						 DMA_TO_DEVICE);
+		buf_offset = 0;
+		len = skb_frag_size(frag);
+		while (len) {
+			tbi = tq->buf_info + tq->tx_ring.next2fill;
+			if (len < VMXNET3_MAX_TX_BUF_SIZE) {
+				buf_size = len;
+				dw2 |= len;
+			} else {
+				buf_size = VMXNET3_MAX_TX_BUF_SIZE;
+				/* spec says that for TxDesc.len, 0 == 2^14 */
+			}
+			tbi->map_type = VMXNET3_MAP_PAGE;
+			tbi->dma_addr = skb_frag_dma_map(&adapter->pdev->dev, frag,
+							 buf_offset, buf_size,
+							 DMA_TO_DEVICE);
 
-		tbi->len = skb_frag_size(frag);
+			tbi->len = buf_size;
 
-		gdesc = tq->tx_ring.base + tq->tx_ring.next2fill;
-		BUG_ON(gdesc->txd.gen == tq->tx_ring.gen);
+			gdesc = tq->tx_ring.base + tq->tx_ring.next2fill;
+			BUG_ON(gdesc->txd.gen == tq->tx_ring.gen);
 
-		gdesc->txd.addr = cpu_to_le64(tbi->dma_addr);
-		gdesc->dword[2] = cpu_to_le32(dw2 | skb_frag_size(frag));
-		gdesc->dword[3] = 0;
+			gdesc->txd.addr = cpu_to_le64(tbi->dma_addr);
+			gdesc->dword[2] = cpu_to_le32(dw2);
+			gdesc->dword[3] = 0;
 
-		dev_dbg(&adapter->netdev->dev,
-			"txd[%u]: 0x%llu %u %u\n",
-			tq->tx_ring.next2fill, le64_to_cpu(gdesc->txd.addr),
-			le32_to_cpu(gdesc->dword[2]), gdesc->dword[3]);
-		vmxnet3_cmd_ring_adv_next2fill(&tq->tx_ring);
-		dw2 = tq->tx_ring.gen << VMXNET3_TXD_GEN_SHIFT;
+			dev_dbg(&adapter->netdev->dev,
+				"txd[%u]: 0x%llu %u %u\n",
+				tq->tx_ring.next2fill, le64_to_cpu(gdesc->txd.addr),
+				le32_to_cpu(gdesc->dword[2]), gdesc->dword[3]);
+			vmxnet3_cmd_ring_adv_next2fill(&tq->tx_ring);
+			dw2 = tq->tx_ring.gen << VMXNET3_TXD_GEN_SHIFT;
+
+			len -= buf_size;
+			buf_offset += buf_size;
+		}
 	}
 
 	ctx->eop_txd = gdesc;
@@ -886,6 +901,18 @@ vmxnet3_prepare_tso(struct sk_buff *skb,
 	}
 }
 
+static int txd_estimate(const struct sk_buff *skb)
+{
+	int count = VMXNET3_TXD_NEEDED(skb_headlen(skb)) + 1;
+	int i;
+
+	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+		const struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[i];
+
+		count += VMXNET3_TXD_NEEDED(skb_frag_size(frag));
+	}
+	return count;
+}
 
 /*
  * Transmits a pkt thru a given tq
@@ -914,9 +941,7 @@ vmxnet3_tq_xmit(struct sk_buff *skb, struct vmxnet3_tx_queue *tq,
 	union Vmxnet3_GenericDesc tempTxDesc;
 #endif
 
-	/* conservatively estimate # of descriptors to use */
-	count = VMXNET3_TXD_NEEDED(skb_headlen(skb)) +
-		skb_shinfo(skb)->nr_frags + 1;
+	count = txd_estimate(skb);
 
 	ctx.ipv4 = (vlan_get_protocol(skb) == cpu_to_be16(ETH_P_IP));
 



^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [Pv-drivers] [PATCH] vmxnet3: must split too big fragments
  2012-10-29 17:30           ` [PATCH] vmxnet3: must split too big fragments Eric Dumazet
@ 2012-10-29 17:52             ` Bhavesh Davda
  2012-10-29 18:13               ` Eric Dumazet
  2012-10-29 18:17             ` Shreyas Bhatewara
  2012-11-03  1:58             ` David Miller
  2 siblings, 1 reply; 12+ messages in thread
From: Bhavesh Davda @ 2012-10-29 17:52 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: VMware, Inc.,
	netdev, linux-kernel, jongman heo, Shreyas Bhatewara,
	David Miller

LGTM. Thanks for doing this! Did you do any performance testing with this patch?

Reviewed-by: Bhavesh Davda <bhavesh@vmware.com>

--
Bhavesh Davda

----- Original Message -----
> From: "Eric Dumazet" <eric.dumazet@gmail.com>
> To: "Shreyas Bhatewara" <sbhatewara@vmware.com>, "David Miller" <davem@davemloft.net>
> Cc: "VMware, Inc." <pv-drivers@vmware.com>, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, "jongman heo"
> <jongman.heo@samsung.com>
> Sent: Monday, October 29, 2012 10:30:49 AM
> Subject: [Pv-drivers] [PATCH] vmxnet3: must split too big fragments
> 
> From: Eric Dumazet <edumazet@google.com>
> 
> vmxnet3 has a 16Kbytes limit per tx descriptor, that happened to work
> as long as we provided PAGE_SIZE fragments.
> 
> Our stack can now build larger fragments, so we need to split them to
> the 16kbytes boundary.
> 
> Signed-off-by: Eric Dumazet <edumazet@google.com>
> Reported-by: jongman heo <jongman.heo@samsung.com>
> Tested-by: jongman heo <jongman.heo@samsung.com>
> Cc: Shreyas Bhatewara <sbhatewara@vmware.com>
> ---
> 
> [patch quoted verbatim above; snipped here, along with the Pv-drivers
> list footer]
> 


* Re: [Pv-drivers] [PATCH] vmxnet3: must split too big fragments
  2012-10-29 17:52             ` [Pv-drivers] " Bhavesh Davda
@ 2012-10-29 18:13               ` Eric Dumazet
  0 siblings, 0 replies; 12+ messages in thread
From: Eric Dumazet @ 2012-10-29 18:13 UTC (permalink / raw)
  To: Bhavesh Davda
  Cc: VMware, Inc.,
	netdev, linux-kernel, jongman heo, Shreyas Bhatewara,
	David Miller

On Mon, 2012-10-29 at 10:52 -0700, Bhavesh Davda wrote:
> LGTM. Thanks for doing this! Did you do any performance testing with this patch?
> 
> Reviewed-by: Bhavesh Davda <bhavesh@vmware.com>

Just to be clear: I coded the patch and compiled it, but didn't test it.

Jongman did the tests ;)

Thanks !




* Re: [PATCH] vmxnet3: must split too big fragments
  2012-10-29 17:30           ` [PATCH] vmxnet3: must split too big fragments Eric Dumazet
  2012-10-29 17:52             ` [Pv-drivers] " Bhavesh Davda
@ 2012-10-29 18:17             ` Shreyas Bhatewara
  2012-10-29 18:19               ` Shreyas Bhatewara
  2012-11-03  1:58             ` David Miller
  2 siblings, 1 reply; 12+ messages in thread
From: Shreyas Bhatewara @ 2012-10-29 18:17 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: VMware, Inc., netdev, linux-kernel, jongman heo, David Miller



----- Original Message -----
> From: Eric Dumazet <edumazet@google.com>
> 
> vmxnet3 has a 16Kbytes limit per tx descriptor, that happened to work
> as long as we provided PAGE_SIZE fragments.
> 
> Our stack can now build larger fragments, so we need to split them to
> the 16kbytes boundary.
> 
> Signed-off-by: Eric Dumazet <edumazet@google.com>
> Reported-by: jongman heo <jongman.heo@samsung.com>
> Tested-by: jongman heo <jongman.heo@samsung.com>
> Cc: Shreyas Bhatewara <sbhatewara@vmware.com>
> ---
>  drivers/net/vmxnet3/vmxnet3_drv.c |   65
>  +++++++++++++++++++---------
>  1 file changed, 45 insertions(+), 20 deletions(-)
> 

Thanks for the patch Eric.

Signed-of-by: Shreyas Bhatewara <sbhatewara@vmware.com>


* Re: [PATCH] vmxnet3: must split too big fragments
  2012-10-29 18:17             ` Shreyas Bhatewara
@ 2012-10-29 18:19               ` Shreyas Bhatewara
  0 siblings, 0 replies; 12+ messages in thread
From: Shreyas Bhatewara @ 2012-10-29 18:19 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: VMware, Inc., netdev, linux-kernel, jongman heo, David Miller

> 
> Signed-of-by: Shreyas Bhatewara <sbhatewara@vmware.com>

Pardon the typo.
And also, thanks to Jongman for testing.

Signed-off-by: Shreyas Bhatewara <sbhatewara@vmware.com>


* Re: [PATCH] vmxnet3: must split too big fragments
  2012-10-29 17:30           ` [PATCH] vmxnet3: must split too big fragments Eric Dumazet
  2012-10-29 17:52             ` [Pv-drivers] " Bhavesh Davda
  2012-10-29 18:17             ` Shreyas Bhatewara
@ 2012-11-03  1:58             ` David Miller
  2 siblings, 0 replies; 12+ messages in thread
From: David Miller @ 2012-11-03  1:58 UTC (permalink / raw)
  To: eric.dumazet; +Cc: sbhatewara, pv-drivers, netdev, linux-kernel, jongman.heo

From: Eric Dumazet <eric.dumazet@gmail.com>
Date: Mon, 29 Oct 2012 18:30:49 +0100

> From: Eric Dumazet <edumazet@google.com>
> 
> vmxnet3 has a 16Kbytes limit per tx descriptor, that happened to work
> as long as we provided PAGE_SIZE fragments.
> 
> Our stack can now build larger fragments, so we need to split them to
> the 16kbytes boundary.
> 
> Signed-off-by: Eric Dumazet <edumazet@google.com>
> Reported-by: jongman heo <jongman.heo@samsung.com>
> Tested-by: jongman heo <jongman.heo@samsung.com>
> Cc: Shreyas Bhatewara <sbhatewara@vmware.com>

Applied, thanks everyone.


Thread overview: 12+ messages
2012-10-23  8:17 Re: 3.7-rc2 regression : file copied to CIFS-mounted directory corrupted Jongman Heo
2012-10-23  9:05 ` Eric Dumazet
2012-10-23  9:20   ` Shreyas Bhatewara
2012-10-23 10:02     ` [Pv-drivers] " Shreyas Bhatewara
2012-10-23 13:50       ` Eric Dumazet
2012-10-23 19:39         ` Eric Dumazet
2012-10-29 17:30           ` [PATCH] vmxnet3: must split too big fragments Eric Dumazet
2012-10-29 17:52             ` [Pv-drivers] " Bhavesh Davda
2012-10-29 18:13               ` Eric Dumazet
2012-10-29 18:17             ` Shreyas Bhatewara
2012-10-29 18:19               ` Shreyas Bhatewara
2012-11-03  1:58             ` David Miller
