From: Trond Myklebust <trond.myklebust@primarydata.com>
To: Anna Schumaker <Anna.Schumaker@netapp.com>
Cc: Christoph Hellwig <hch@infradead.org>, "linux-nfs@vger.kernel.org" <linux-nfs@vger.kernel.org>, Marc Eshel <eshel@us.ibm.com>, xfs@oss.sgi.com, "J. Bruce Fields" <bfields@fieldses.org>, linux-nfs-owner@vger.kernel.org
Subject: Re: [PATCH v3 3/3] NFSD: Add support for encoding multiple segments
Date: Fri, 27 Mar 2015 16:22:50 -0400	[thread overview]
Message-ID: <CAHQdGtRaUYxU2JAAWErH7FT=Gy5JLzAKi-RtqqBSkNFZrhaB9Q@mail.gmail.com> (raw)
In-Reply-To: <5515A9C8.6090400@Netapp.com>

On Fri, Mar 27, 2015 at 3:04 PM, Anna Schumaker <Anna.Schumaker@netapp.com> wrote:
> I did two separate dd tests with the same 5G file from yesterday, still using the same virtual machines. First, I ran dd using direct I/O for reads:
>
>     dd if=/nfs/file iflag=direct of=/dev/null bs=128K
>
> Mixed file performance was awful, so I reran without direct I/O enabled for comparison:
>
>     dd if=/nfs/file iflag=nocache of=/dev/null oflag=nocache bs=128K
>
> bs=128K sets the block size used by dd to the NFS rsize; without it, dd reads only 512 bytes at a time and takes forever to complete.
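The two invocations above can be exercised against a local scratch file when no NFS mount is at hand; a minimal sketch (the file name and 1 MiB size are stand-ins for the 5G /nfs/file, absolute times are meaningless at this scale, and iflag=direct may fail on filesystems without O_DIRECT support, e.g. tmpfs):

```shell
#!/bin/sh
set -e

# Small local stand-in for the 5G test file: 8 x 128K = 1 MiB.
dd if=/dev/zero of=scratch.bin bs=128K count=8 2>/dev/null

# Buffered read at the same 128K block size used to match the NFS rsize.
dd if=scratch.bin of=/dev/null bs=128K

# Direct I/O read: bypasses the page cache entirely.
dd if=scratch.bin iflag=direct of=/dev/null bs=128K ||
        echo "O_DIRECT not supported on this filesystem"
```

On an NFS client the direct-I/O variant forces every block onto the wire, which is why it isolates protocol behavior (and why the cached runs below are so much faster).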
>
>
> ##########################
> #                        #
> #   Without READ_PLUS    #
> #                        #
> ##########################
>
>
> NFS v4.1, iflag=direct:
>                                     Trial
> |---------|---------|---------|---------|---------|---------|---------|
> |         |    1    |    2    |    3    |    4    |    5    | Average |
> |---------|---------|---------|---------|---------|---------|---------|
> | Data    | 11.704s | 11.055s | 11.329s | 11.453s | 10.741s | 11.256s |
> | Hole    |  9.839s |  9.326s |  9.381s |  9.430s |  8.875s |  9.370s |
> | Mixed   | 19.150s | 19.468s | 18.650s | 18.537s | 19.312s | 19.023s |
> |---------|---------|---------|---------|---------|---------|---------|
>
>
> NFS v4.2, iflag=direct:
>                                     Trial
> |---------|---------|---------|---------|---------|---------|---------|
> |         |    1    |    2    |    3    |    4    |    5    | Average |
> |---------|---------|---------|---------|---------|---------|---------|
> | Data    | 10.927s | 10.885s | 11.114s | 11.283s | 10.371s | 10.916s |
> | Hole    |  9.515s |  9.039s |  9.116s |  8.867s |  8.905s |  9.088s |
> | Mixed   | 19.149s | 18.656s | 19.400s | 18.834s | 20.041s | 19.216s |
> |---------|---------|---------|---------|---------|---------|---------|
>
>
> NFS v4.1, iflag=nocache oflag=nocache:
>                                     Trial
> |---------|---------|---------|---------|---------|---------|---------|
> |         |    1    |    2    |    3    |    4    |    5    | Average |
> |---------|---------|---------|---------|---------|---------|---------|
> | Data    |  6.808s |  6.698s |  7.482s |  6.761s |  7.235s |  6.995s |
> | Hole    |  5.350s |  5.148s |  5.161s |  5.070s |  5.089s |  5.164s |
> | Mixed   |  9.316s |  8.731s |  9.072s |  9.145s |  8.627s |  8.978s |
> |---------|---------|---------|---------|---------|---------|---------|
>
>
> NFS v4.2, iflag=nocache oflag=nocache:
>                                     Trial
> |---------|---------|---------|---------|---------|---------|---------|
> |         |    1    |    2    |    3    |    4    |    5    | Average |
> |---------|---------|---------|---------|---------|---------|---------|
> | Data    |  6.686s |  6.848s |  6.876s |  6.799s |  7.815s |  7.004s |
> | Hole    |  5.092s |  5.330s |  5.050s |  5.280s |  5.030s |  5.156s |
> | Mixed   |  8.142s |  7.897s |  8.040s |  7.960s |  8.050s |  8.018s |
> |---------|---------|---------|---------|---------|---------|---------|
>
>
> #######################
> #                     #
> #   With READ_PLUS    #
> #                     #
> #######################
>
>
> NFS v4.1, iflag=direct:
>                                     Trial
> |---------|---------|---------|---------|---------|---------|---------|
> |         |    1    |    2    |    3    |    4    |    5    | Average |
> |---------|---------|---------|---------|---------|---------|---------|
> | Data    |  9.464s | 10.181s | 10.048s |  9.452s | 10.795s |  9.988s |
> | Hole    |  7.954s |  8.486s |  7.762s |  7.969s |  8.299s |  8.094s |
> | Mixed   | 19.037s | 18.323s | 18.965s | 18.156s | 19.185s | 18.733s |
> |---------|---------|---------|---------|---------|---------|---------|
>
>
> NFS v4.2, iflag=direct:
>                                     Trial
> |---------|---------|---------|---------|---------|---------|---------|
> |         |    1    |    2    |    3    |    4    |    5    | Average |
> |---------|---------|---------|---------|---------|---------|---------|
> | Data    | 11.923s | 10.026s | 10.222s | 12.387s | 11.431s | 11.198s |
> | Hole    |  3.247s |  3.155s |  3.191s |  3.243s |  3.202s |  3.208s |
> | Mixed   | 54.677s | 54.697s | 52.978s | 53.704s | 54.054s | 54.022s |

That's a bit nasty. Any idea what is going on with the Mixed case here?

> |---------|---------|---------|---------|---------|---------|---------|
>
>
> NFS v4.1, iflag=nocache oflag=nocache:
>                                     Trial
> |---------|---------|---------|---------|---------|---------|---------|
> |         |    1    |    2    |    3    |    4    |    5    | Average |
> |---------|---------|---------|---------|---------|---------|---------|
> | Data    |  6.788s |  6.802s |  6.750s |  6.756s |  6.852s |  6.790s |
> | Hole    |  5.143s |  5.165s |  5.104s |  5.154s |  5.116s |  5.136s |
> | Mixed   |  7.902s |  7.693s |  9.169s |  8.186s |  9.157s |  8.421s |
> |---------|---------|---------|---------|---------|---------|---------|
>
>
> NFS v4.2, iflag=nocache oflag=nocache:
>                                     Trial
> |---------|---------|---------|---------|---------|---------|---------|
> |         |    1    |    2    |    3    |    4    |    5    | Average |
> |---------|---------|---------|---------|---------|---------|---------|
> | Data    |  6.897s |  6.862s |  7.054s |  6.961s |  7.081s |  6.971s |
> | Hole    |  1.690s |  1.673s |  1.553s |  1.554s |  1.490s |  1.592s |
> | Mixed   |  9.009s |  7.840s |  7.661s |  8.945s |  7.649s |  8.221s |
> |---------|---------|---------|---------|---------|---------|---------|
>
>
> On 03/26/2015 12:13 PM, Trond Myklebust wrote:
>> On Thu, Mar 26, 2015 at 12:11 PM, Anna Schumaker
>> <Anna.Schumaker@netapp.com> wrote:
>>> On 03/26/2015 12:06 PM, Trond Myklebust wrote:
>>>> On Thu, Mar 26, 2015 at 11:47 AM, Anna Schumaker
>>>> <Anna.Schumaker@netapp.com> wrote:
>>>>> On 03/26/2015 11:38 AM, J. Bruce Fields wrote:
>>>>>> On Thu, Mar 26, 2015 at 11:32:25AM -0400, Trond Myklebust wrote:
>>>>>>> On Thu, Mar 26, 2015 at 11:21 AM, Anna Schumaker
>>>>>>> <Anna.Schumaker@netapp.com> wrote:
>>>>>>>> Here are my updated numbers! I tested with files 5G in size: one 100% data, one 100% hole, and one alternating between hole and data every 4K.
>>>>>>>> I collected data for both v4.1 and v4.2 with and without the READ_PLUS patches:
>>>>>>>>
>>>>>>>> ##########################
>>>>>>>> #                        #
>>>>>>>> #   Without READ_PLUS    #
>>>>>>>> #                        #
>>>>>>>> ##########################
>>>>>>>>
>>>>>>>>
>>>>>>>> NFS v4.1:
>>>>>>>>                                     Trial
>>>>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>>>>> |         |    1    |    2    |    3    |    4    |    5    | Average |
>>>>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>>>>> | Data    |  8.723s |  7.243s |  8.252s |  6.997s |  6.980s |  7.639s |
>>>>>>>> | Hole    |  5.271s |  5.224s |  5.060s |  4.897s |  5.321s |  5.155s |
>>>>>>>> | Mixed   |  8.050s | 10.057s |  7.919s |  8.060s |  9.557s |  8.729s |
>>>>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>>>>>
>>>>>>>>
>>>>>>>> NFS v4.2:
>>>>>>>>                                     Trial
>>>>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>>>>> |         |    1    |    2    |    3    |    4    |    5    | Average |
>>>>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>>>>> | Data    |  6.707s |  7.070s |  6.722s |  6.761s |  6.810s |  6.814s |
>>>>>>>> | Hole    |  5.152s |  5.149s |  5.213s |  5.206s |  5.312s |  5.206s |
>>>>>>>> | Mixed   |  7.979s |  7.985s |  8.177s |  7.772s |  8.280s |  8.039s |
>>>>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>>>>>
>>>>>>>>
>>>>>>>> #######################
>>>>>>>> #                     #
>>>>>>>> #   With READ_PLUS    #
>>>>>>>> #                     #
>>>>>>>> #######################
>>>>>>>>
>>>>>>>>
>>>>>>>> NFS v4.1:
>>>>>>>>                                     Trial
>>>>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>>>>> |         |    1    |    2    |    3    |    4    |    5    | Average |
>>>>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>>>>> | Data    |  9.082s |  7.008s |  7.116s |  6.771s |  7.902s |  7.576s |
>>>>>>>> | Hole    |  5.333s |  5.358s |  5.380s |  5.161s |  5.282s |  5.303s |
>>>>>>>> | Mixed   |  8.189s |  8.308s |  9.540s |  7.937s |  8.420s |  8.479s |
>>>>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>>>>>
>>>>>>>>
>>>>>>>> NFS v4.2:
>>>>>>>>                                     Trial
>>>>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>>>>> |         |    1    |    2    |    3    |    4    |    5    | Average |
>>>>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>>>>> | Data    |  7.033s |  6.829s |  7.025s |  6.873s |  7.134s |  6.979s |
>>>>>>>> | Hole    |  1.794s |  1.800s |  1.905s |  1.811s |  1.725s |  1.807s |
>>>>>>>> | Mixed   |  7.590s |  8.777s |  9.423s | 10.366s |  8.024s |  8.836s |
>>>>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>>>>>
>>>>>>>
>>>>>>> So there is a clear win in the 100% hole case here, but otherwise the
>>>>>>> statistical fluctuations are dominating the numbers. Can you get us a
>>>>>>> little more stats and then perhaps run the results through nfsometer?
>>>>>>
>>>>>> Also, could you describe the setup (are these still kvm's), and how
>>>>>> you're clearing the cache between runs?
>>>>>
>>>>> These are still KVMs and my server is exporting an xfs filesystem. I clear caches by running "echo 3 > /proc/sys/vm/drop_caches" on the server before every read, and I remount my client after reading each set of three files once.
>>>>
>>>> I agree that you have to use the 'drop_caches' interface on the
>>>> server, but why not just use O_DIRECT on the clients?
>>>
>>> I've been reading by using cat from my test shell script: `time cat /nfs/file > /dev/null`. I can write something to read files with O_DIRECT if that would be more useful!
>>>
>>
>> 'dd' can do that for you if the appropriate incantations are performed.
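The three test files described above (100% data, 100% hole, and alternating data/hole every 4K) can be generated with standard tools. A scaled-down sketch, with 1 MiB standing in for the 5G originals and placeholder file names:

```shell
#!/bin/sh
set -e

SIZE=$((1024 * 1024))   # 1 MiB stand-in for the 5G test files

# 100% data: every 4K block is written out.
dd if=/dev/urandom of=data.bin bs=4K count=$((SIZE / 4096)) 2>/dev/null

# 100% hole: set the length without writing anything; no blocks allocated.
truncate -s "$SIZE" hole.bin

# Alternating data/hole every 4K: write every second 4K block and
# seek over the blocks that should remain holes.
i=0
while [ $((i * 4096)) -lt "$SIZE" ]; do
        dd if=/dev/urandom of=mixed.bin bs=4K count=1 \
           seek="$i" conv=notrunc 2>/dev/null
        i=$((i + 2))
done
truncate -s "$SIZE" mixed.bin   # pad to full length if the file ends in a hole
```

Whether the holes survive as holes depends on the filesystem; on the xfs export used in these tests, seek'd-over ranges stay unallocated, which is exactly the layout that gives READ_PLUS something to report.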
--
Trond Myklebust
Linux NFS client maintainer, PrimaryData
trond.myklebust@primarydata.com
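Whether a server-side file actually contains holes for READ_PLUS to encode can be checked from the shell by comparing apparent size against allocated blocks; a small sketch using GNU coreutils (file name is a placeholder):

```shell
#!/bin/sh
set -e

# A file with logical size but no written data: pure hole.
truncate -s 1M sparse.bin

wc -c < sparse.bin           # apparent (logical) size in bytes
du -k sparse.bin | cut -f1   # on-disk usage in KiB: near 0 for a fresh hole

# stat reports the allocated 512-byte blocks directly.
stat -c 'blocks=%b size=%s' sparse.bin
```

A large gap between the two numbers is what makes the Hole rows above so much cheaper under READ_PLUS: the server sends a hole segment instead of zero-filled data.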