linux-media.vger.kernel.org archive mirror
From: "Mani, Rajmohan" <rajmohan.mani@intel.com>
To: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: Tomasz Figa <tfiga@chromium.org>,
	Jacopo Mondi <jacopo@jmondi.org>,
	"Zhi, Yong" <yong.zhi@intel.com>,
	"Qiu, Tian Shu" <tian.shu.qiu@intel.com>,
	"Cao, Bingbu" <bingbu.cao@intel.com>,
	Linux Media Mailing List <linux-media@vger.kernel.org>,
	Sakari Ailus <sakari.ailus@linux.intel.com>,
	Mauro Carvalho Chehab <mchehab@kernel.org>,
	Hans Verkuil <hans.verkuil@cisco.com>,
	"Hu, Jerry W" <jerry.w.hu@intel.com>,
	"Toivonen, Tuukka" <tuukka.toivonen@intel.com>
Subject: RE: [PATCH v7 00/16] Intel IPU3 ImgU patchset
Date: Tue, 22 Jan 2019 16:21:42 +0000	[thread overview]
Message-ID: <6F87890CF0F5204F892DEA1EF0D77A599B323F0D@fmsmsx122.amr.corp.intel.com> (raw)
In-Reply-To: <20190121080711.GA4420@pendragon.ideasonboard.com>

Hi Tomasz, Laurent,

> Subject: Re: [PATCH v7 00/16] Intel IPU3 ImgU patchset
> 
> Hello Raj,
> 
> On Mon, Jan 21, 2019 at 02:41:03PM +0900, Tomasz Figa wrote:
> >  On Wed, Jan 16, 2019 at 11:16 AM Mani, Rajmohan
> > <rajmohan.mani@intel.com> wrote:
> > >> Subject: Re: [PATCH v7 00/16] Intel IPU3 ImgU patchset
> > >>
> > >> On Saturday, 12 January 2019 04:30:49 EET Mani, Rajmohan wrote:
> > >>
> > >> [snip]
> > >>
> > >>> I finally managed to reproduce the issue with 4.20-rc6, with KASAN
> > >>> enabled and with CONFIG_SLUB_DEBUG_ON with SLAB_STORE_USER.
> > >>
> > >> Nice! Thank you for your work.
> > >>
> > >>> The following line indicates the crash happens when yavta PID
> > >>> 10289 tries to free the memory.
> > >>>
> > >>> [  452.437844] BUG: KASAN: use-after-free in ipu3_dmamap_free+0x50/0x9c [ipu3_imgu]
> > >>> [  452.446123] Read of size 8 at addr ffff8881503481a0 by task yavta/10289
> > >>>
> > >>> The above looks to be normal, since it's the same task that
> > >>> allocated this memory.
> > >>> [  452.685731] Allocated by task 10289:
> > >>>
> > >>> Before the above happened, yavta/10187 came in and freed this
> > >>> memory per KASAN.
> > >>> [  452.787656] Freed by task 10187:
> > >>>
> > >>> Is this (one instance of yavta freeing the memory allocated by
> > >>> another instance of yavta) expected? Or does it indicate that mmap
> > >>> is giving the same address across these 2 instances of yavta? I
> > >>> need to debug / confirm the latter case.
> > >>
> > >> KASAN prints the task name (and process ID) to help you debug
> > >> the problem, but this doesn't mean that yavta is freeing the
> > >> memory. yavta exercises the V4L2 API exposed by the driver, and
> > >> internally, down the call stack, ipu3_dmamap_free() is called by
> > >> the driver. According to the backtraces you posted, this is in
> > >> response to a VIDIOC_STREAMOFF call from yavta. I would expect
> > >> VIDIOC_STREAMOFF to free DMA mappings created for the buffers on
> > >> the corresponding video nodes, and thus allocated by the same task.
> > >
> > > Ack.
> > >
> > >> The fact
> > >> that memory is allocated in one task and freed in another seems
> > >> weird to me in this case.
> > >>
> > >
> > > I have instrumented the code around the ipu3 dma map code, with a
> > > change to skip dma free operations if the current->pid is not the
> > > same as the pid that originally did the dma alloc.
> > >
> > > There are no crashes in this case, as expected.
> > >
> > > I also confirmed that STREAM_ON/OFF is what results in this crash.
> > > I need to spend more time on the alloc / free operations done by the
> > > yavta instances to see where the problem could be.
> > >
> > > The line below doesn't make sense, as the free call for pid 12986
> > > occurs first, before the alloc calls. The yavta application logs
> > > indicate that the dma alloc has been done for pid 12986, although I
> > > don't see corresponding dma alloc calls from pid 12986.
> >
> >  I wonder if that doesn't mean that for some reason some V4L2 ioctls
> > done from a context other than the owner (the one that first allocated
> >  vb2 buffers) end up triggering some buffer freeing/re-allocation. For
> >  VB2 buffers that's normally prevented by the core, but possibly we do
> > some internal buffer management in non-buffer related V4L2 ioctls in
> > the driver?
> 
> I had a quick look at the driver, and found the following code in the
> VIDIOC_STREAMOFF handler ipu3_vb2_stop_streaming():
> 
>         /* Was this the first node with streaming disabled? */
>         if (imgu->streaming && ipu3_all_nodes_streaming(imgu, node)) {
>                 /* Yes, really stop streaming now */
>                 dev_dbg(dev, "IMGU streaming is ready to stop");
>                 r = imgu_s_stream(imgu, false);
>                 if (!r)
>                         imgu->streaming = false;
>         }
> 
> The queue is initialized in ipu3_v4l2_node_setup() with
> 
>         vbq->lock = &node->lock;
> 
> which means that concurrent VIDIOC_STREAMOFF operations on different
> nodes can race each other. Could you enable dynamic debugging to get the
> "IMGU streaming is ready to stop" message printed to the kernel log, and see if
> this could explain the double-free problem?
> 
> In any case this race condition should be handled by proper locking.
> Both the imgu->streaming and the ipu3_all_nodes_streaming() tests are very
> racy, and can lead to many different problems (failure at processing start also
> comes to mind).
> 

Thanks for your inputs.

I have confirmed so far that ipu3_vb2_stop_streaming() is called on exit for all 4
instances of yavta, while ipu3_vb2_start_streaming() is called for just one instance.
This causes 4 free operations for a single alloc, resulting in this failure.

ipu3_vb2_stop_streaming() should run just once, just like ipu3_vb2_start_streaming()
does. It should also be improved to handle the case of multiple applications driving
the nodes.

Laurent's findings point to the same.

Let me get back soon with more details.

[snip]


Thread overview: 123+ messages
2018-10-29 22:22 [PATCH v7 00/16] Intel IPU3 ImgU patchset Yong Zhi
2018-10-29 22:22 ` [PATCH v7 01/16] v4l: Add Intel IPU3 meta buffer formats Yong Zhi
2018-11-02 12:59   ` Mauro Carvalho Chehab
2018-11-02 13:05     ` Mauro Carvalho Chehab
2018-11-29 19:16   ` Laurent Pinchart
2018-11-29 23:12     ` Zhi, Yong
2018-10-29 22:22 ` [PATCH v7 02/16] doc-rst: Add Intel IPU3 documentation Yong Zhi
2018-11-29 22:50   ` Laurent Pinchart
2018-12-13  9:38     ` Sakari Ailus
2018-12-13 10:41       ` Laurent Pinchart
2018-12-13 10:50         ` Sakari Ailus
2018-12-13  9:38     ` [PATCH 1/1] staging/ipu3-imgu: Address documentation comments Sakari Ailus
2018-10-29 22:22 ` [PATCH v7 03/16] v4l: Add Intel IPU3 meta data uAPI Yong Zhi
2018-11-02 13:02   ` Sakari Ailus
2018-11-16 22:37     ` Zhi, Yong
     [not found]       ` <20181129224548.qwbkau6suipt2veq@kekkonen.localdomain>
2018-11-29 23:06         ` Zhi, Yong
2018-11-29 23:06           ` Zhi, Yong
2018-12-01 20:57           ` Sakari Ailus
2018-12-01 20:57             ` Sakari Ailus
2018-11-02 13:49   ` Mauro Carvalho Chehab
2018-11-02 14:04     ` Tomasz Figa
2018-11-06 23:27       ` Mani, Rajmohan
2018-11-15 10:52         ` Hans Verkuil
2018-11-29  0:41           ` Mani, Rajmohan
2018-11-06 18:25     ` Zhi, Yong
2018-11-15 12:51   ` Hans Verkuil
2018-11-21 18:45     ` Zhi, Yong
2018-10-29 22:22 ` [PATCH v7 04/16] intel-ipu3: abi: Add register definitions and enum Yong Zhi
2018-10-29 22:22 ` [PATCH v7 05/16] intel-ipu3: abi: Add structs Yong Zhi
2018-11-05  8:27   ` Sakari Ailus
2018-11-05 19:05     ` Mani, Rajmohan
2018-11-06  8:04       ` Sakari Ailus
2018-11-06 23:31         ` Mani, Rajmohan
2018-10-29 22:23 ` [PATCH v7 06/16] intel-ipu3: mmu: Implement driver Yong Zhi
2018-11-05 11:55   ` Sakari Ailus
2018-11-06  5:50     ` Zhi, Yong
2018-11-06  5:56       ` Tomasz Figa
2018-10-29 22:23 ` [PATCH v7 07/16] intel-ipu3: Implement DMA mapping functions Yong Zhi
2018-10-29 22:23 ` [PATCH v7 08/16] intel-ipu3: css: Add dma buff pool utility functions Yong Zhi
2018-11-08 15:36   ` Sakari Ailus
2018-11-09 23:16     ` Zhi, Yong
2018-11-12  9:21       ` Sakari Ailus
2018-10-29 22:23 ` [PATCH v7 09/16] intel-ipu3: css: Add support for firmware management Yong Zhi
2018-11-28 22:22   ` Sakari Ailus
2018-10-29 22:23 ` [PATCH v7 11/16] intel-ipu3: css: Compute and program ccs Yong Zhi
2018-10-29 22:23 ` [PATCH v7 12/16] intel-ipu3: css: Initialize css hardware Yong Zhi
2018-11-09 12:06   ` Sakari Ailus
2018-10-29 22:23 ` [PATCH v7 13/16] intel-ipu3: Add css pipeline programming Yong Zhi
2018-10-29 22:23 ` [PATCH v7 14/16] intel-ipu3: Add v4l2 driver based on media framework Yong Zhi
2018-11-09 12:36   ` Sakari Ailus
2018-11-09 23:26     ` Zhi, Yong
2018-11-15 12:51   ` Hans Verkuil
2018-11-15 16:09     ` Zhi, Yong
2018-10-29 22:23 ` [PATCH v7 15/16] intel-ipu3: Add imgu top level pci device driver Yong Zhi
2018-11-09 12:54   ` Sakari Ailus
2018-11-12 22:16     ` Zhi, Yong
2018-10-29 22:23 ` [PATCH v7 16/16] intel-ipu3: Add dual pipe support Yong Zhi
2018-11-01 12:03 ` [PATCH v7 00/16] Intel IPU3 ImgU patchset Sakari Ailus
2018-11-07  4:16   ` Bing Bu Cao
2018-11-09  1:28     ` Zhi, Yong
2018-11-09 11:28       ` Sakari Ailus
2018-11-09 10:09     ` Sakari Ailus
2018-11-12  4:31       ` Bing Bu Cao
2018-11-13 10:31         ` Sakari Ailus
2018-11-13 11:04           ` Bing Bu Cao
2018-11-13 21:58             ` Sakari Ailus
2018-11-14  7:02               ` Bing Bu Cao
2018-11-29 23:09       ` Laurent Pinchart
2018-11-30 13:37         ` Sakari Ailus
2018-11-29 23:07     ` Laurent Pinchart
2018-12-03  9:51       ` Sakari Ailus
2018-12-03 12:34         ` Laurent Pinchart
2018-11-14  0:25 ` jacopo mondi
2018-11-14  7:40   ` Sakari Ailus
2018-11-18  0:12     ` jacopo mondi
2018-11-29 14:43 ` Laurent Pinchart
2018-11-29 19:51   ` Tomasz Figa
2018-11-29 22:54     ` Laurent Pinchart
2018-11-29 22:58       ` Mani, Rajmohan
2018-12-04 16:07       ` Mani, Rajmohan
2018-12-04 16:42         ` Laurent Pinchart
2018-12-04 16:53           ` Mani, Rajmohan
2018-12-05  0:30           ` Mani, Rajmohan
2018-12-11 13:34             ` Laurent Pinchart
2018-12-11 13:43               ` Laurent Pinchart
2018-12-11 14:20                 ` Laurent Pinchart
2018-12-16  7:26                   ` Laurent Pinchart
2018-12-20 22:25                     ` Laurent Pinchart
2018-12-21  3:04                       ` Tomasz Figa
2019-01-08  6:54                         ` Tomasz Figa
2019-01-09 16:40                           ` Jacopo Mondi
2019-01-09 17:00                             ` Mani, Rajmohan
2019-01-09 17:25                               ` Jacopo Mondi
2019-01-09 18:01                                 ` Mani, Rajmohan
2019-01-09 18:20                                   ` Jacopo Mondi
2019-01-09 18:36                                     ` Mani, Rajmohan
2019-01-10  8:19                                       ` Jacopo Mondi
2019-01-12  2:06                                         ` Mani, Rajmohan
2019-01-12  2:30                                     ` Mani, Rajmohan
2019-01-12 15:10                                       ` Laurent Pinchart
     [not found]                                         ` <6F87890CF0F5204F892DEA1EF0D77A599B323499@fmsmsx122.amr.corp.intel.com>
2019-01-21  5:41                                           ` Tomasz Figa
2019-01-21  8:07                                             ` Laurent Pinchart
2019-01-22 16:21                                               ` Mani, Rajmohan [this message]
     [not found]                   ` <6F87890CF0F5204F892DEA1EF0D77A599B31FAF4@fmsmsx122.amr.corp.intel.com>
2019-01-08 23:34                     ` Laurent Pinchart
2018-12-12  4:55                 ` Bingbu Cao
2018-12-13 22:24                   ` Laurent Pinchart
2018-12-14  2:53                     ` Bingbu Cao
2018-12-17  3:14                     ` Bingbu Cao
2018-12-26 11:03                       ` Laurent Pinchart
2019-01-02  2:38                         ` Bingbu Cao
2019-01-02  8:20                           ` Laurent Pinchart
2019-01-02 20:26                             ` Sakari Ailus
2019-01-28 10:09                               ` Jacopo Mondi
2019-01-29  8:56                                 ` Tomasz Figa
2019-02-01 10:04                                   ` Jacopo Mondi
2019-02-05  6:01                                     ` Tomasz Figa
2019-03-23 13:02                           ` Jacopo Mondi
2019-03-25  3:45                             ` Bingbu Cao
2019-03-25  4:06                               ` Laurent Pinchart
2019-03-25  8:11                                 ` Jacopo Mondi
2019-03-25 10:07                                   ` Bingbu Cao
2019-03-26 11:16                                     ` Jacopo Mondi
2019-04-08  6:35                                       ` Bingbu Cao
