linux-scsi.vger.kernel.org archive mirror
From: Hannes Reinecke <hare@suse.de>
To: James Smart <james.smart@broadcom.com>
Cc: Dick Kennedy <dick.kennedy@broadcom.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	Christoph Hellwig <hch@lst.de>,
	James Bottomley <james.bottomley@hansenpartnership.com>,
	linux-scsi@vger.kernel.org, Hannes Reinecke <hare@suse.de>
Subject: [RFC PATCH 0/3] lpfc: nodelist pointer cleanup
Date: Fri, 18 Oct 2019 09:50:07 +0200	[thread overview]
Message-ID: <20191018075010.55653-1-hare@suse.de> (raw)

Hi James,

while trying to figure out this annoying lpfc_set_rrq_active() bug
I found the nodelist pointer handling in the lpfc I/O buffers
a bit strange: there's an 'ndlp' pointer, but for SCSI the nodelist
is primarily accessed via the 'rdata' pointer (although not everywhere).
For NVMe it's apparently primarily the 'ndlp' pointer, but the
usage is quite confusing.
So here's a patchset to straighten things out: it primarily turns
the anonymous protocol-specific structure in the I/O buffer into a named
one, and always accesses the nodelist through the protocol-specific
rport data structure.

This also has the nice side effect that the protocol-specific areas are
now aligned, so clearing the 'rdata' pointer on the SCSI side is
equivalent to clearing the 'rport' pointer on the NVMe side.
And it reduces the size of the I/O buffer.

Let me know what you think.

Hannes Reinecke (3):
  lpfc: use named structure for combined I/O buffer
  lpfc: access nodelist through scsi-specific rdata pointer
  lpfc: access nvme nodelist through nvme rport structure

 drivers/scsi/lpfc/lpfc_init.c |   2 +-
 drivers/scsi/lpfc/lpfc_nvme.c |  56 ++++++------
 drivers/scsi/lpfc/lpfc_scsi.c | 196 +++++++++++++++++++++---------------------
 drivers/scsi/lpfc/lpfc_sli.c  |  26 +++---
 drivers/scsi/lpfc/lpfc_sli.h  |   6 +-
 5 files changed, 143 insertions(+), 143 deletions(-)

-- 
2.16.4



Thread overview: 6+ messages
2019-10-18  7:50 Hannes Reinecke [this message]
2019-10-18  7:50 ` [PATCH 1/3] lpfc: use named structure for combined I/O buffer Hannes Reinecke
2019-10-18  7:50 ` [PATCH 2/3] lpfc: access nodelist through scsi-specific rdata pointer Hannes Reinecke
2019-10-18  7:50 ` [PATCH 3/3] lpfc: access nvme nodelist through nvme rport structure Hannes Reinecke
2019-10-18 21:45 ` [RFC PATCH 0/3] lpfc: nodelist pointer cleanup James Smart
2019-10-19 15:55   ` Hannes Reinecke
