From: Vinod Koul <vinod.koul@intel.com>
To: hongbo.zhang@freescale.com
Cc: dan.j.williams@intel.com, dmaengine@vger.kernel.org,
	scottwood@freescale.com, LeoLi@freescale.com,
	linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] DMA: Freescale: change BWC from 256 bytes to 1024 bytes
Date: Mon, 20 Jan 2014 13:13:49 +0530	[thread overview]
Message-ID: <20140120074349.GH26823@intel.com> (raw)
In-Reply-To: <1389852653-8806-1-git-send-email-hongbo.zhang@freescale.com>

On Thu, Jan 16, 2014 at 02:10:53PM +0800, hongbo.zhang@freescale.com wrote:
> From: Hongbo Zhang <hongbo.zhang@freescale.com>
> 
> Freescale DMA has a BandWidth Control (BWC) feature, currently set to
> 256 bytes; it should be changed to 1024 bytes for best DMA throughput.
> Changing BWC from 256 to 1024 bytes improves DMA performance
> significantly, whether a single channel or multiple channels run
> simultaneously, and whether large or small buffers are copied.  The
> change does not noticeably impact memory access performance: lmbench
> tests show memory performance decreases very slightly in some cases
> and even improves in others.
> Tested on T4240.

Applied, thanks

--
~Vinod


  reply	other threads:[~2014-01-20  8:45 UTC|newest]

Thread overview: 4+ messages
2014-01-16  6:10 [PATCH] DMA: Freescale: change BWC from 256 bytes to 1024 bytes hongbo.zhang
2014-01-20  7:43 ` Vinod Koul [this message]

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20140120074349.GH26823@intel.com \
    --to=vinod.koul@intel.com \
    --cc=LeoLi@freescale.com \
    --cc=dan.j.williams@intel.com \
    --cc=dmaengine@vger.kernel.org \
    --cc=hongbo.zhang@freescale.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linuxppc-dev@lists.ozlabs.org \
    --cc=scottwood@freescale.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.