dri-devel.lists.freedesktop.org archive mirror
From: Stanislaw Gruszka <stanislaw.gruszka@linux.intel.com>
To: Jeffrey Hugo <quic_jhugo@quicinc.com>
Cc: linux-arm-msm@vger.kernel.org, ogabbay@kernel.org,
	dri-devel@lists.freedesktop.org, quic_ajitpals@quicinc.com,
	quic_pkanojiy@quicinc.com, quic_carlv@quicinc.com
Subject: Re: [PATCH v2] accel/qaic: Enable 1 MSI fallback mode
Date: Sun, 22 Oct 2023 12:49:05 +0200
Message-ID: <20231022104905.GA704032@linux.intel.com>
In-Reply-To: <20231016170036.5409-1-quic_jhugo@quicinc.com>

On Mon, Oct 16, 2023 at 11:00:36AM -0600, Jeffrey Hugo wrote:
> From: Carl Vanderlip <quic_carlv@quicinc.com>
> 
> Several virtualization use cases either don't support 32 MultiMSIs
> (Xen/VMware) or have significant drawbacks to their use (KVM's vIOMMU,
> which is required to support 32 MSIs, needs to allocate an alternate
> system memory space for each device using vIOMMU; e.g. an 8GB VM with
> 2 cards => 8 + 2 * 8 = 24GB of host memory required). Support these
> cases by enabling a 1 MSI fallback mode.
> 
> Whenever the requested 32 MSIs are not all available, a second request
> for a single MSI is made. If that request succeeds, the driver enters
> single MSI mode. In this mode all interrupts generated by the device
> are directed to the 0th MSI (firmware >=v1.10 does this in response to
> the PCIe MSI capability configuration). Likewise, all interrupt
> handlers for the device are registered to the 0th MSI.
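
For illustration (not taken from the patch itself), a minimal sketch of
the allocate-32-or-fall-back-to-1 flow described above, using the
generic pci_alloc_irq_vectors() helper; the function name and the
single_msi flag are hypothetical, not the driver's actual symbols:

#include <linux/pci.h>

/* Hypothetical sketch: try the full 32 MSI vectors first, then retry
 * with a single vector if the platform cannot provide all of them. */
static int msi_alloc_sketch(struct pci_dev *pdev, bool *single_msi)
{
	int nvec;

	nvec = pci_alloc_irq_vectors(pdev, 32, 32, PCI_IRQ_MSI);
	if (nvec < 0) {
		/* Fall back to 1 MSI; the device firmware then steers
		 * every interrupt source to MSI 0. */
		nvec = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI);
		if (nvec < 0)
			return nvec;
		*single_msi = true;
	}

	return 0;
}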
> 
> Since the DBC interrupt handler checks whether the DBC is in use and
> whether there are any pending changes, the 'spurious' interrupts are
> disregarded. If there is work to be done, the standard threaded IRQ
> handler is dispatched.
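
For illustration (again not the patch's actual code), a rough sketch of
how such a shared hard IRQ handler can cheaply disregard wakeups meant
for other sources; the structure and field names are made up for the
example:

#include <linux/interrupt.h>

/* Hypothetical per-DBC context for the example. */
struct sketch_dbc {
	bool in_use;
	bool pending_changes;
};

/* With every source routed to MSI 0, each DBC's hard handler checks
 * whether that particular DBC has anything to do before waking its
 * threaded handler; otherwise the interrupt is treated as spurious. */
static irqreturn_t dbc_hard_irq_sketch(int irq, void *data)
{
	struct sketch_dbc *dbc = data;

	if (!dbc->in_use || !dbc->pending_changes)
		return IRQ_NONE;

	return IRQ_WAKE_THREAD;	/* dispatch the threaded handler */
}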
> 
> On every interrupt, the MHI handler wakes up its threaded interrupt
> handler, and attempts to wake any waiters for MHI state events.
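
Purely as a sketch of that pattern (the context structure and wait
queue below are assumptions, not the driver's real data): in single MSI
mode the MHI hard handler can defer to its threaded handler on every
interrupt, with the threaded side also waking anyone waiting on an MHI
state event.

#include <linux/interrupt.h>
#include <linux/wait.h>

/* Hypothetical MHI context for the example. */
struct sketch_mhi_ctx {
	wait_queue_head_t state_event_wq;
};

static irqreturn_t mhi_hard_irq_sketch(int irq, void *data)
{
	/* Every MSI 0 wakeup defers to the threaded handler. */
	return IRQ_WAKE_THREAD;
}

static irqreturn_t mhi_threaded_irq_sketch(int irq, void *data)
{
	struct sketch_mhi_ctx *ctx = data;

	/* Process MHI events here, then wake any waiters blocked on an
	 * MHI state change. */
	wake_up_all(&ctx->state_event_wq);

	return IRQ_HANDLED;
}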
> 
> Performance is within +-0.6% for test cases that typify real-world
> use. Larger differences ([-4,+132]%, avg +47%) exist for very simple
> tasks (e.g. addition) compiled for single NSPs. It is assumed that the
> small units of work and the many interrupts typically cause contention
> (e.g. 16 NSPs vs 4 CPUs), as evidenced by the standard deviation
> between runs also decreasing (r=-0.48 between delta(Performance_test)
> and delta(StdDev_test/Avg_test)).
> 
> Signed-off-by: Carl Vanderlip <quic_carlv@quicinc.com>
> Reviewed-by: Pranjal Ramajor Asha Kanojiya <quic_pkanojiy@quicinc.com>
> Reviewed-by: Jeffrey Hugo <quic_jhugo@quicinc.com>
> Signed-off-by: Jeffrey Hugo <quic_jhugo@quicinc.com>

Reviewed-by: Stanislaw Gruszka <stanislaw.gruszka@linux.intel.com>

Thread overview: 2+ messages
2023-10-16 17:00 [PATCH v2] accel/qaic: Enable 1 MSI fallback mode Jeffrey Hugo
2023-10-22 10:49 ` Stanislaw Gruszka [this message]