* [curro:for-edward 17/18] drivers/gpu/drm/i915/gt/intel_lrc.c:2680:25: error: ISO C90 forbids mixed declarations and code
From: kernel test robot @ 2020-10-22  0:34 UTC
  To: kbuild-all


Hi Francisco,

FYI, the error/warning still remains.

tree:   https://github.com/curro/linux for-edward
head:   a2819d0696b5fdd0668c55ec6ae9b1d23e563caa
commit: 9ba143107276ffb71f116efd181424c070805213 [17/18] DEBUG
config: i386-randconfig-a012-20201021 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
reproduce (this is a W=1 build):
        # https://github.com/curro/linux/commit/9ba143107276ffb71f116efd181424c070805213
        git remote add curro https://github.com/curro/linux
        git fetch --no-tags curro for-edward
        git checkout 9ba143107276ffb71f116efd181424c070805213
        # save the attached .config to the linux build tree
        make W=1 ARCH=i386 
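        # optional shortcut (not part of the official reproduce steps
        # above): with the .config in place, kbuild can rebuild just the
        # offending object to re-trigger the diagnostic
        make W=1 ARCH=i386 drivers/gpu/drm/i915/gt/intel_lrc.o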

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   drivers/gpu/drm/i915/gt/intel_lrc.c: In function 'process_csb':
>> drivers/gpu/drm/i915/gt/intel_lrc.c:2680:25: error: ISO C90 forbids mixed declarations and code [-Werror=declaration-after-statement]
    2680 |                         bool trace = false;
         |                         ^~~~
   drivers/gpu/drm/i915/gt/intel_lrc.c:2712:25: error: ISO C90 forbids mixed declarations and code [-Werror=declaration-after-statement]
    2712 |                         bool trace = false;
         |                         ^~~~
   cc1: all warnings being treated as errors
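
The kernel is built with -Wdeclaration-after-statement, and this
configuration promotes warnings to errors, so any declaration placed
after a statement in the same block fails the build. A minimal
illustration of the rule (f(), g() and do_work() are placeholders, not
taken from the tree):

        /* rejected: the declaration follows a statement */
        void f(void)
        {
                do_work();
                bool trace = false;     /* -Wdeclaration-after-statement */
        }

        /* accepted: all declarations precede the first statement */
        void g(void)
        {
                bool trace = false;

                do_work();
        }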

vim +2680 drivers/gpu/drm/i915/gt/intel_lrc.c

  2559	
  2560	static void process_csb(struct intel_engine_cs *engine)
  2561	{
  2562		struct intel_engine_execlists * const execlists = &engine->execlists;
  2563		const u32 * const buf = execlists->csb_status;
  2564		const u8 num_entries = execlists->csb_size;
  2565		u8 head, tail;
  2566	
  2567		/*
  2568		 * As we modify our execlists state tracking we require exclusive
  2569		 * access. Either we are inside the tasklet, or the tasklet is disabled
  2570		 * and we assume that happens only inside the reset paths and so serialised.
  2571		 */
  2572		GEM_BUG_ON(!tasklet_is_locked(&execlists->tasklet) &&
  2573			   !reset_in_progress(execlists));
  2574		GEM_BUG_ON(!intel_engine_in_execlists_submission_mode(engine));
  2575	
  2576		/*
  2577		 * Note that csb_write, csb_status may be either in HWSP or mmio.
  2578		 * When reading from the csb_write mmio register, we have to be
  2579		 * careful to only use the GEN8_CSB_WRITE_PTR portion, which is
  2580		 * the low 4bits. As it happens we know the next 4bits are always
  2581		 * zero and so we can simply mask off the low u8 of the register
  2582		 * and treat it identically to reading from the HWSP (without having
  2583		 * to use explicit shifting and masking, and probably bifurcating
  2584		 * the code to handle the legacy mmio read).
  2585		 */
  2586		head = execlists->csb_head;
  2587		tail = READ_ONCE(*execlists->csb_write);
  2588		if (unlikely(head == tail))
  2589			return;
  2590	
  2591		/*
  2592		 * We will consume all events from HW, or at least pretend to.
  2593		 *
  2594		 * The sequence of events from the HW is deterministic, and derived
  2595		 * from our writes to the ELSP, with a smidgen of variability for
  2596		 * the arrival of the asynchronous requests wrt the inflight
  2597		 * execution. If the HW sends an event that does not correspond with
  2598		 * the one we are expecting, we have to abandon all hope as we lose
  2599		 * all tracking of what the engine is actually executing. We will
  2600		 * only detect we are out of sequence with the HW when we get an
  2601		 * 'impossible' event because we have already drained our own
  2602		 * preemption/promotion queue. If this occurs, we know that we likely
  2603		 * lost track of execution earlier and must unwind and restart; the
  2604		 * simplest way is to stop processing the event queue and force the
  2605		 * engine to reset.
  2606		 */
  2607		execlists->csb_head = tail;
  2608		ENGINE_TRACE(engine, "cs-irq head=%d, tail=%d\n", head, tail);
  2609	
  2610		/*
  2611		 * Hopefully paired with a wmb() in HW!
  2612		 *
  2613		 * We must complete the read of the write pointer before any reads
  2614		 * from the CSB, so that we do not see stale values. Without an rmb
  2615		 * (lfence) the HW may speculatively perform the CSB[] reads *before*
  2616		 * we perform the READ_ONCE(*csb_write).
  2617		 */
  2618		rmb();
  2619		do {
  2620			bool promote;
  2621	
  2622			if (++head == num_entries)
  2623				head = 0;
  2624	
  2625			/*
  2626			 * We are flying near dragons again.
  2627			 *
  2628			 * We hold a reference to the request in execlist_port[]
  2629			 * but no more than that. We are operating in softirq
  2630			 * context and so cannot hold any mutex or sleep. That
  2631			 * prevents us stopping the requests we are processing
  2632			 * in port[] from being retired simultaneously (the
  2633			 * breadcrumb will be complete before we see the
  2634			 * context-switch). As we only hold the reference to the
  2635			 * request, any pointer chasing underneath the request
  2636			 * is subject to a potential use-after-free. Thus we
  2637			 * store all of the bookkeeping within port[] as
  2638			 * required, and avoid using unguarded pointers beneath
  2639			 * request itself. The same applies to the atomic
  2640			 * status notifier.
  2641			 */
  2642	
  2643			ENGINE_TRACE(engine, "csb[%d]: status=0x%08x:0x%08x\n",
  2644				     head, buf[2 * head + 0], buf[2 * head + 1]);
  2645	
  2646			if (INTEL_GEN(engine->i915) >= 12)
  2647				promote = gen12_csb_parse(execlists, buf + 2 * head);
  2648			else
  2649				promote = gen8_csb_parse(execlists, buf + 2 * head);
  2650			if (promote) {
  2651				struct i915_request * const *old = execlists->active;
  2652	
  2653				if (GEM_WARN_ON(!*execlists->pending)) {
  2654					execlists->error_interrupt |= ERROR_CSB;
  2655					break;
  2656				}
  2657	
  2658				ring_set_paused(engine, 0);
  2659	
  2660				/* Point active to the new ELSP; prevent overwriting */
  2661				WRITE_ONCE(execlists->active, execlists->pending);
  2662				smp_wmb(); /* notify execlists_active() */
  2663	
  2664				/* cancel old inflight, prepare for switch */
  2665				trace_ports(execlists, "preempted", old);
  2666				while (*old)
  2667					execlists_schedule_out(*old++);
  2668	
  2669				/* switch pending to inflight */
  2670				GEM_BUG_ON(!assert_pending_valid(execlists, "promote"));
  2671				memcpy(execlists->inflight,
  2672				       execlists->pending,
  2673				       execlists_num_ports(execlists) *
  2674				       sizeof(*execlists->pending));
  2675				smp_wmb(); /* complete the seqlock */
  2676				WRITE_ONCE(execlists->active, execlists->inflight);
  2677	
  2678				WRITE_ONCE(execlists->pending[0], NULL);
  2679	
> 2680	                        bool trace = false;
  2681	                        if (!atomic_xchg(&execlists->busy, 1)) {
  2682	                                if ((engine->gt->qos.debug & 1))
  2683	                                        intel_qos_overload_begin(&engine->gt->qos);
  2684	                                trace = true;
  2685	                        }
  2686	
  2687	                        if (execlists->inflight[1]) {
  2688	                                if (!atomic_xchg(&execlists->overload, 1)) {
  2689	                                        if (!(engine->gt->qos.debug & 1))
  2690	                                                intel_qos_overload_begin(&engine->gt->qos);
  2691	                                        trace = true;
  2692	                                }
  2693	                        } else {
  2694	                                if (atomic_xchg(&execlists->overload, 0)) {
  2695	                                        if (!(engine->gt->qos.debug & 1))
  2696	                                                intel_qos_overload_end(&engine->gt->qos);
  2697	                                        trace = true;
  2698	                                }
  2699	                        }
  2700	
  2701	                        if (trace)
  2702	                                trace_status(engine);
  2703			} else {
  2704				if (GEM_WARN_ON(!*execlists->active)) {
  2705					execlists->error_interrupt |= ERROR_CSB;
  2706					break;
  2707				}
  2708	
  2709				/* port0 completed, advanced to port1 */
  2710				trace_ports(execlists, "completed", execlists->active);
  2711	
  2712	                        bool trace = false;
  2713	                        if (atomic_xchg(&execlists->overload, 0)) {
  2714	                                if (!(engine->gt->qos.debug & 1))
  2715	                                        intel_qos_overload_end(&engine->gt->qos);
  2716	                                trace = true;
  2717	                        }
  2718	
  2719				/*
  2720				 * We rely on the hardware being strongly
  2721				 * ordered, that the breadcrumb write is
  2722				 * coherent (visible from the CPU) before the
  2723				 * user interrupt is processed. One might assume
  2724				 * that the breadcrumb write being before the
  2725				 * user interrupt and the CS event for the context
  2726				 * switch would therefore be before the CS event
  2727				 * itself...
  2728				 */
  2729				if (GEM_SHOW_DEBUG() &&
  2730				    !i915_request_completed(*execlists->active)) {
  2731					struct i915_request *rq = *execlists->active;
  2732					const u32 *regs __maybe_unused =
  2733						rq->context->lrc_reg_state;
  2734	
  2735					ENGINE_TRACE(engine,
  2736						     "context completed before request!\n");
  2737					ENGINE_TRACE(engine,
  2738						     "ring:{start:0x%08x, head:%04x, tail:%04x, ctl:%08x, mode:%08x}\n",
  2739						     ENGINE_READ(engine, RING_START),
  2740						     ENGINE_READ(engine, RING_HEAD) & HEAD_ADDR,
  2741						     ENGINE_READ(engine, RING_TAIL) & TAIL_ADDR,
  2742						     ENGINE_READ(engine, RING_CTL),
  2743						     ENGINE_READ(engine, RING_MI_MODE));
  2744					ENGINE_TRACE(engine,
  2745						     "rq:{start:%08x, head:%04x, tail:%04x, seqno:%llx:%d, hwsp:%d}, ",
  2746						     i915_ggtt_offset(rq->ring->vma),
  2747						     rq->head, rq->tail,
  2748						     rq->fence.context,
  2749						     lower_32_bits(rq->fence.seqno),
  2750						     hwsp_seqno(rq));
  2751					ENGINE_TRACE(engine,
  2752						     "ctx:{start:%08x, head:%04x, tail:%04x}, ",
  2753						     regs[CTX_RING_START],
  2754						     regs[CTX_RING_HEAD],
  2755						     regs[CTX_RING_TAIL]);
  2756				}
  2757	
  2758				execlists_schedule_out(*execlists->active++);
  2759	
  2760	                        if (!*execlists->active && atomic_xchg(&execlists->busy, 0)) {
  2761	                                if ((engine->gt->qos.debug & 1))
  2762	                                        intel_qos_overload_end(&engine->gt->qos);
  2763	                                trace = true;
  2764	                        }
  2765	
  2766	                        if (trace)
  2767	                                trace_status(engine);
  2768	
  2769				GEM_BUG_ON(execlists->active - execlists->inflight >
  2770					   execlists_num_ports(execlists));
  2771			}
  2772		} while (head != tail);
  2773	
  2774		set_timeslice(engine);
  2775	
  2776		/*
  2777		 * Gen11 has proven to fail wrt the global observation point
  2778		 * between entry and tail update, failing on the ordering, and
  2779		 * thus we see an old entry in the context status buffer.
  2780		 *
  2781		 * Forcibly evict out entries for the next gpu csb update,
  2782		 * to increase the odds that we get fresh entries with
  2783		 * non-working hardware. The cost of doing so mostly comes
  2784		 * out in the wash, as hardware, working or not, will need
  2785		 * to do the invalidation beforehand.
  2786		 */
  2787		invalidate_csb_entries(&buf[0], &buf[num_entries - 1]);
  2788	}
  2789	
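
A sketch of one possible fix for both sites, hoisting each 'bool trace'
declaration above the first statement of its enclosing block (illustrative
only, untested against this tree):

        if (promote) {
                struct i915_request * const *old = execlists->active;
                bool trace = false;     /* was at 2680, after statements */

                if (GEM_WARN_ON(!*execlists->pending)) {
                        execlists->error_interrupt |= ERROR_CSB;
                        break;
                }
                /* ... rest of the promote branch unchanged ... */
        } else {
                bool trace = false;     /* was at 2712, after statements */

                if (GEM_WARN_ON(!*execlists->active)) {
                        execlists->error_interrupt |= ERROR_CSB;
                        break;
                }
                /* ... rest of the completion branch unchanged ... */
        }

Alternatively, since both branches use the same flag, it could be declared
once next to 'bool promote;' at the top of the do {} loop, turning the two
declarations into plain assignments.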

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
