LinuxPPC-Dev Archive on lore.kernel.org
From: rananta@codeaurora.org
To: Greg KH <gregkh@linuxfoundation.org>
Cc: andrew@daynix.com, linuxppc-dev@lists.ozlabs.org,
	linux-kernel@vger.kernel.org, jslaby@suse.com
Subject: Re: [PATCH] tty: hvc: Fix data abort due to race in hvc_open
Date: Mon, 11 May 2020 00:34:44 -0700
Message-ID: <a033c31f8d8bf121e2cfdabbca138c1a@codeaurora.org>
In-Reply-To: <77d889be4e0cb0e6e30f96199e2d843d@codeaurora.org>

On 2020-05-11 00:23, rananta@codeaurora.org wrote:
> On 2020-05-09 23:48, Greg KH wrote:
>> On Sat, May 09, 2020 at 06:30:56PM -0700, rananta@codeaurora.org wrote:
>>> On 2020-05-06 02:48, Greg KH wrote:
>>> > On Mon, Apr 27, 2020 at 08:26:01PM -0700, Raghavendra Rao Ananta wrote:
>>> > > Potentially, hvc_open() can be called in parallel when two tasks call
>>> > > open() on /dev/hvcX. In such a scenario, if the hp->ops->notifier_add()
>>> > > callback in the function fails, it sets tty->driver_data to NULL, and
>>> > > the parallel hvc_open() can see this NULL and cause a memory abort.
>>> > > Hence, serialize hvc_open() and check if tty->driver_data is NULL
>>> > > before proceeding.
>>> > >
>>> > > The issue can be easily reproduced by launching two tasks
>>> > > simultaneously that do nothing but open() and close() on /dev/hvcX.
>>> > > For example:
>>> > > $ ./simple_open_close /dev/hvc0 & ./simple_open_close /dev/hvc0 &
>>> > >
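The reproducer itself is not included in the patch, so the following is
only a guess at what such a simple_open_close program might look like,
based purely on the description above:

        /* simple_open_close.c: hypothetical reproducer sketch --
         * repeatedly open() and close() the device node passed as
         * argv[1], e.g. /dev/hvc0. */
        #include <fcntl.h>
        #include <unistd.h>

        int main(int argc, char *argv[])
        {
                int i;

                if (argc < 2)
                        return 1;
                for (i = 0; i < 100000; i++) {
                        int fd = open(argv[1], O_RDWR);
                        if (fd >= 0)
                                close(fd);
                }
                return 0;
        }

Two instances run in parallel so that one task's open() can overlap the
other task's failed-open cleanup.
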
>>> > > Signed-off-by: Raghavendra Rao Ananta <rananta@codeaurora.org>
>>> > > ---
>>> > >  drivers/tty/hvc/hvc_console.c | 16 ++++++++++++++--
>>> > >  1 file changed, 14 insertions(+), 2 deletions(-)
>>> > >
>>> > > diff --git a/drivers/tty/hvc/hvc_console.c b/drivers/tty/hvc/hvc_console.c
>>> > > index 436cc51c92c3..ebe26fe5ac09 100644
>>> > > --- a/drivers/tty/hvc/hvc_console.c
>>> > > +++ b/drivers/tty/hvc/hvc_console.c
>>> > > @@ -75,6 +75,8 @@ static LIST_HEAD(hvc_structs);
>>> > >   */
>>> > >  static DEFINE_MUTEX(hvc_structs_mutex);
>>> > >
>>> > > +/* Mutex to serialize hvc_open */
>>> > > +static DEFINE_MUTEX(hvc_open_mutex);
>>> > >  /*
>>> > >   * This value is used to assign a tty->index value to a hvc_struct based
>>> > >   * upon order of exposure via hvc_probe(), when we can not match it to
>>> > > @@ -346,16 +348,24 @@ static int hvc_install(struct tty_driver *driver, struct tty_struct *tty)
>>> > >   */
>>> > >  static int hvc_open(struct tty_struct *tty, struct file * filp)
>>> > >  {
>>> > > -	struct hvc_struct *hp = tty->driver_data;
>>> > > +	struct hvc_struct *hp;
>>> > >  	unsigned long flags;
>>> > >  	int rc = 0;
>>> > >
>>> > > +	mutex_lock(&hvc_open_mutex);
>>> > > +
>>> > > +	hp = tty->driver_data;
>>> > > +	if (!hp) {
>>> > > +		rc = -EIO;
>>> > > +		goto out;
>>> > > +	}
>>> > > +
>>> > >  	spin_lock_irqsave(&hp->port.lock, flags);
>>> > >  	/* Check and then increment for fast path open. */
>>> > >  	if (hp->port.count++ > 0) {
>>> > >  		spin_unlock_irqrestore(&hp->port.lock, flags);
>>> > >  		hvc_kick();
>>> > > -		return 0;
>>> > > +		goto out;
>>> > >  	} /* else count == 0 */
>>> > >  	spin_unlock_irqrestore(&hp->port.lock, flags);
>>> >
>>> > Wait, why isn't this driver just calling tty_port_open() instead of
>>> > trying to open-code all of this?
>>> >
>>> > Keeping a single mutex for open will not protect it from close, it will
>>> > just slow things down a bit.  There should already be a tty lock held by
>>> > the tty core for open() to keep it from racing things, right?
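For reference, the tty_port_open() pattern being suggested typically looks
something like the following in drivers that use it. This is an
illustrative sketch only (a made-up "foo" driver whose foo_struct embeds a
struct tty_port named port), not actual hvc code:

        static int foo_activate(struct tty_port *port, struct tty_struct *tty)
        {
                /* One-time setup for the first open.  tty_port_open()
                 * invokes this under port->mutex, so it cannot race
                 * with a second open of the same port. */
                return 0;
        }

        static const struct tty_port_operations foo_port_ops = {
                .activate = foo_activate,
        };

        static int foo_open(struct tty_struct *tty, struct file *filp)
        {
                struct foo_struct *fp = tty->driver_data;

                /* tty_port_open() does the port->count bookkeeping
                 * itself, so none of it is open-coded in the driver. */
                return tty_port_open(&fp->port, tty, filp);
        }

tty_port_open() takes port->mutex around the activate callback, which is
what makes the one-time setup safe against a concurrent open.
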
>>> The tty lock should have been held, but likely not across the
>>> ->install() and ->open() callbacks, thus resulting in a race between
>>> hvc_install() and hvc_open(),
>> 
>> How?  The tty lock is held in install, and should not conflict with
>> open(); otherwise, we would be seeing this happen in all tty drivers,
>> right?
>> 
> Well, I was expecting the same, but IIRC, I saw open() being called
> in parallel for the same device node.
> 
> Is it expected that the tty core would allow only one thread to
> access the dev node, while blocking the other, or is it the client
> driver's responsibility to handle the exclusion?
Or is there an optimization going on where the second call doesn't go
through install(), but calls open() directly because the file was already
opened by the first thread?
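
To make that question concrete, my reading of the core open path (a
simplified sketch of drivers/tty/tty_io.c; the exact details vary by
kernel version) is roughly:

        /* Simplified sketch of tty_open() -> tty_open_by_driver(): */
        tty = tty_driver_lookup_tty(driver, filp, idx);
        if (tty) {
                /* The tty already exists (device open elsewhere):
                 * reopen it; ->install() is NOT called again. */
                retval = tty_reopen(tty);
        } else {
                /* First open: tty_init_dev() calls ->install(). */
                tty = tty_init_dev(driver, idx);
        }
        /* Either way, ->open() then runs with the tty lock held: */
        retval = tty->ops->open(tty, filp);
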
>>> where hvc_install() sets tty->driver_data and hvc_open() clears it.
>>> hvc_open() doesn't check whether the data was set to NULL and proceeds
>>> anyway.
>> 
>> What data is being set that hvc_open is checking?
> hvc_install sets tty->driver_data to hp, while hvc_open sets it to
> NULL (in one of its error paths).
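
The path in question, roughly paraphrased from hvc_open() rather than
quoted verbatim:

        if (hp->ops->notifier_add)
                rc = hp->ops->notifier_add(hp, hp->data);

        if (rc) {
                /* Failed open: undo the setup.  A parallel hvc_open()
                 * can observe tty->driver_data == NULL here and crash
                 * on the NULL dereference. */
                tty->driver_data = NULL;
                tty_port_put(&hp->port);
        }
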
>> 
>> And you are not grabbing a lock in your install callback; you are only
>> serializing your open call here. I don't see how this is fixing
>> anything other than perhaps slowing down your codepaths.
> Basically, my intention was to add a NULL check before accessing *hp in
> open(). The intention of the lock was to protect that check against a
> concurrent update. If the tty layer already takes care of this, then
> perhaps there is no need to check for NULL.
>> 
>> As an argument for why this isn't correct, can you answer why this same
>> type of change wouldn't be required for all tty drivers in the tree?
>> 
> I agree that if it's already taken care of by the tty core, we don't
> need it here. Correct me if I'm wrong, but it looks like the tty layer
> is allowing parallel accesses to open().
>> thanks,
>> 
>> greg k-h

