From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <51771EFB.5080201@linux.vnet.ibm.com>
Date: Tue, 23 Apr 2013 19:53:31 -0400
From: "Michael R. Hines"
MIME-Version: 1.0
References: <1366682139-22122-1-git-send-email-mrhines@linux.vnet.ibm.com> <1366682139-22122-12-git-send-email-mrhines@linux.vnet.ibm.com> <5176F647.6010302@redhat.com>
In-Reply-To: <5176F647.6010302@redhat.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Subject: Re: [Qemu-devel] [PATCH v5 11/12] rdma: core logic
To: Paolo Bonzini
Cc: aliguori@us.ibm.com, quintela@redhat.com, qemu-devel@nongnu.org, owasserm@redhat.com, abali@us.ibm.com, mrhines@us.ibm.com, gokul@us.ibm.com

On 04/23/2013 04:59 PM, Paolo Bonzini wrote:
> On 23/04/2013 03:55, mrhines@linux.vnet.ibm.com wrote:
>> +static size_t qemu_rdma_get_max_size(QEMUFile *f, void *opaque,
>> +                                     uint64_t transferred_bytes,
>> +                                     uint64_t time_spent,
>> +                                     uint64_t max_downtime)
>> +{
>> +    static uint64_t largest = 1;
>> +    uint64_t max_size = ((double) (transferred_bytes / time_spent))
>> +                            * max_downtime / 1000000;
>> +
>> +    if (max_size > largest) {
>> +        largest = max_size;
>> +    }
>> +
>> +    DPRINTF("MBPS: %f, max_size: %" PRIu64 " largest: %" PRIu64 "\n",
>> +            qemu_get_mbps(), max_size, largest);
>> +
>> +    return largest;
>> +}
> Can you point me to the discussion of this algorithmic change and
> qemu_get_max_size? It seems to me that it assumes that the IB link is
> basically dedicated to migration.
>
> I think it is a big assumption and it may be hiding a bug elsewhere. At
> the very least, it should be moved to a separate commit and described in
> the commit message, but actually I'd prefer to not include it in the
> first submission.
>
> Paolo
>

Until recently, I had stopped using our 40G hardware and was testing only on 10G hardware. But when I switched back to the 40G hardware, the throughput was being artificially limited to less than 10G.
So I started investigating the problem, and I noticed that whenever I disabled the max_size limit, throughput went back to normal (a peak of 26 Gbps).

So, rather than change the default max_size calculation for TCP, which would improperly impact existing users of TCP migration, I introduced a new QEMUFileOps change to solve the problem.

What do you think?

- Michael