From: Stephen Rothwell <sfr@canb.auug.org.au>
To: Daniel Vetter <daniel.vetter@ffwll.ch>, Intel Graphics <intel-gfx@lists.freedesktop.org>, DRI <dri-devel@lists.freedesktop.org>
Cc: Linux-Next Mailing List <linux-next@vger.kernel.org>, Linux Kernel Mailing List <linux-kernel@vger.kernel.org>, Min He <min.he@intel.com>, Zhi Wang <zhi.a.wang@intel.com>, Zhenyu Wang <zhenyuw@linux.intel.com>
Subject: linux-next: manual merge of the drm-intel tree with Linus' tree
Date: Thu, 22 Mar 2018 13:21:29 +1100
Message-ID: <20180322132129.6b953166@canb.auug.org.au>

[-- Attachment #1: Type: text/plain, Size: 3594 bytes --]

Hi all,

Today's linux-next merge of the drm-intel tree got a conflict in:

  drivers/gpu/drm/i915/gvt/scheduler.c

between commit:

  fa3dd623e559 ("drm/i915/gvt: keep oa config in shadow ctx")

from Linus' tree and commit:

  b20c0d5ce104 ("drm/i915/gvt: Update PDPs after a vGPU mm object is pinned.")

from the drm-intel tree.

I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging.  You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.

-- 
Cheers,
Stephen Rothwell

diff --cc drivers/gpu/drm/i915/gvt/scheduler.c
index 068126404151,a55b4975c154..000000000000
--- a/drivers/gpu/drm/i915/gvt/scheduler.c
+++ b/drivers/gpu/drm/i915/gvt/scheduler.c
@@@ -52,54 -52,29 +52,77 @@@ static void set_context_pdp_root_pointe
  		pdp_pair[i].val = pdp[7 - i];
  }
  
 +/*
 + * When populating shadow ctx from guest, we should not override OA related
 + * registers, so that they will not be overwritten by guest OA configs. This
 + * makes it possible to capture OA data from the host for both host and guests.
 + */
 +static void sr_oa_regs(struct intel_vgpu_workload *workload,
 +		u32 *reg_state, bool save)
 +{
 +	struct drm_i915_private *dev_priv = workload->vgpu->gvt->dev_priv;
 +	u32 ctx_oactxctrl = dev_priv->perf.oa.ctx_oactxctrl_offset;
 +	u32 ctx_flexeu0 = dev_priv->perf.oa.ctx_flexeu0_offset;
 +	int i = 0;
 +	u32 flex_mmio[] = {
 +		i915_mmio_reg_offset(EU_PERF_CNTL0),
 +		i915_mmio_reg_offset(EU_PERF_CNTL1),
 +		i915_mmio_reg_offset(EU_PERF_CNTL2),
 +		i915_mmio_reg_offset(EU_PERF_CNTL3),
 +		i915_mmio_reg_offset(EU_PERF_CNTL4),
 +		i915_mmio_reg_offset(EU_PERF_CNTL5),
 +		i915_mmio_reg_offset(EU_PERF_CNTL6),
 +	};
 +
 +	if (!workload || !reg_state || workload->ring_id != RCS)
 +		return;
 +
 +	if (save) {
 +		workload->oactxctrl = reg_state[ctx_oactxctrl + 1];
 +
 +		for (i = 0; i < ARRAY_SIZE(workload->flex_mmio); i++) {
 +			u32 state_offset = ctx_flexeu0 + i * 2;
 +
 +			workload->flex_mmio[i] = reg_state[state_offset + 1];
 +		}
 +	} else {
 +		reg_state[ctx_oactxctrl] =
 +			i915_mmio_reg_offset(GEN8_OACTXCONTROL);
 +		reg_state[ctx_oactxctrl + 1] = workload->oactxctrl;
 +
 +		for (i = 0; i < ARRAY_SIZE(workload->flex_mmio); i++) {
 +			u32 state_offset = ctx_flexeu0 + i * 2;
 +			u32 mmio = flex_mmio[i];
 +
 +			reg_state[state_offset] = mmio;
 +			reg_state[state_offset + 1] = workload->flex_mmio[i];
 +		}
 +	}
 +}
 +
+ static void update_shadow_pdps(struct intel_vgpu_workload *workload)
+ {
+ 	struct intel_vgpu *vgpu = workload->vgpu;
+ 	int ring_id = workload->ring_id;
+ 	struct i915_gem_context *shadow_ctx = vgpu->submission.shadow_ctx;
+ 	struct drm_i915_gem_object *ctx_obj =
+ 		shadow_ctx->engine[ring_id].state->obj;
+ 	struct execlist_ring_context *shadow_ring_context;
+ 	struct page *page;
+ 
+ 	if (WARN_ON(!workload->shadow_mm))
+ 		return;
+ 
+ 	if (WARN_ON(!atomic_read(&workload->shadow_mm->pincount)))
+ 		return;
+ 
+ 	page = i915_gem_object_get_page(ctx_obj, LRC_STATE_PN);
+ 	shadow_ring_context = kmap(page);
+ 	set_context_pdp_root_pointer(shadow_ring_context,
+ 			(void *)workload->shadow_mm->ppgtt_mm.shadow_pdps);
+ 	kunmap(page);
+ }
+ 
  static int populate_shadow_context(struct intel_vgpu_workload *workload)
  {
  	struct intel_vgpu *vgpu = workload->vgpu;

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 488 bytes --]
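
For context on the resolution above: sr_oa_regs() relies on the logical
ring context image storing registers as (offset, value) pairs, i.e.
reg_state[off] holds a register's MMIO offset and reg_state[off + 1] its
value, which is why every index in its loops advances by two. Below is a
minimal standalone sketch of that save/restore pattern; the names
(sr_pairs, saved_regs, N_FLEX), the MMIO offsets, and the buffer sizes
are all hypothetical, chosen only to illustrate the pairing scheme, and
are not taken from the kernel sources.

	/*
	 * Standalone sketch (not kernel code) of the (offset, value) pair
	 * save/restore pattern used by sr_oa_regs().  All names and sizes
	 * here are hypothetical, for illustration only.
	 */
	#include <stdio.h>

	#define N_FLEX 7	/* mirrors the seven EU_PERF_CNTLn registers */

	struct saved_regs {
		unsigned int flex[N_FLEX];
	};

	/*
	 * The context image stores registers as pairs: reg_state[off] is a
	 * register's MMIO offset, reg_state[off + 1] its value.  On save we
	 * stash only the values; on restore we rewrite both halves of each
	 * pair so the image again names the register and carries our value.
	 */
	static void sr_pairs(unsigned int *reg_state, unsigned int base,
			     const unsigned int *offsets, struct saved_regs *s,
			     int save)
	{
		for (int i = 0; i < N_FLEX; i++) {
			unsigned int off = base + i * 2; /* one pair per register */

			if (save) {
				s->flex[i] = reg_state[off + 1];
			} else {
				reg_state[off] = offsets[i];	 /* offset half */
				reg_state[off + 1] = s->flex[i]; /* value half  */
			}
		}
	}

	int main(void)
	{
		/* hypothetical MMIO offsets standing in for EU_PERF_CNTL0..6 */
		const unsigned int offsets[N_FLEX] = {
			0xe458, 0xe558, 0xe658, 0xe758, 0xe45c, 0xe55c, 0xe65c
		};
		unsigned int ctx[64] = { 0 };	/* toy context image */
		const unsigned int base = 8;	/* arbitrary start of the pairs */
		struct saved_regs s;

		ctx[base + 1] = 0x1234;			/* host-programmed value   */
		sr_pairs(ctx, base, offsets, &s, 1);	/* save host values        */
		ctx[base + 1] = 0xdead;			/* guest context overwrite */
		sr_pairs(ctx, base, offsets, &s, 0);	/* restore host values     */

		printf("restored value: 0x%x\n", ctx[base + 1]);	/* 0x1234 */
		return 0;
	}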