From: Shiraz Saleem <shiraz.saleem@intel.com>
To: "Kalderon, Michal" <Michal.Kalderon@cavium.com>,
	"Nicholas A. Bellinger" <nab@linux-iscsi.org>
Cc: "Amrani, Ram" <Ram.Amrani@cavium.com>,
	Sagi Grimberg <sagi@grimberg.me>,
	linux-rdma <linux-rdma@vger.kernel.org>,
	"Elior, Ariel" <Ariel.Elior@cavium.com>,
	target-devel <target-devel@vger.kernel.org>,
	Potnuri Bharat Teja <bharat@chelsio.com>
Subject: Re: SQ overflow seen running isert traffic with high block sizes
Date: Mon, 15 Jan 2018 09:22:36 -0600	[thread overview]
Message-ID: <20180115152236.GA15484@ssaleem-MOBL4.amr.corp.intel.com> (raw)
In-Reply-To: <CY1PR0701MB2012E53C69D1CE3E16BA320B88EB0-UpKza+2NMNLHMJvQ0dyT705OhdzP3rhOnBOFsp37pqbUKgpGm//BTAC/G2K4zDHf@public.gmane.org>

On Mon, Jan 15, 2018 at 03:12:36AM -0700, Kalderon, Michal wrote:
> > From: linux-rdma-owner@vger.kernel.org [mailto:linux-rdma-
> > owner@vger.kernel.org] On Behalf Of Nicholas A. Bellinger
> > Sent: Monday, January 15, 2018 6:57 AM
> > To: Shiraz Saleem <shiraz.saleem@intel.com>
> > Cc: Amrani, Ram <Ram.Amrani@cavium.com>; Sagi Grimberg
> > <sagi@grimberg.me>; linux-rdma@vger.kernel.org; Elior, Ariel
> > <Ariel.Elior@cavium.com>; target-devel <target-devel@vger.kernel.org>;
> > Potnuri Bharat Teja <bharat@chelsio.com>
> > Subject: Re: SQ overflow seen running isert traffic with high block sizes
> > 
> > Hi Shiraz, Ram, Ariel, & Potnuri,
> > 
> > Following up on this old thread, as it relates to Potnuri's recent fix for an
> > iser-target queue-full memory leak:
> > 
> > https://www.spinics.net/lists/target-devel/msg16282.html
> > 
> > Just curious how frequently this happens in practice with sustained large block
> > workloads, as it appears to affect at least three different iWARP RNICs (i40iw,
> > qedr and iw_cxgb4)..?
> > 
> > Is there anything else from an iser-target consumer level that should be
> > changed for iWARP to avoid repeated ib_post_send() failures..?
> > 
> Would like to mention that although we are an iWARP RNIC as well, we've hit this
> issue when running RoCE, so it's not iWARP related.
> This is easily reproduced within seconds with an IO size of 512K,
> using 5 targets with 2 RAM disks each and 5 targets with FileIO disks each.
> 
> IO Command used:
> maim -b512k -T32 -t2 -Q8 -M0 -o -u -n -m17 -ftargets.dat -d1
> 
> thanks,
> Michal

It's seen with block sizes >= 2M on a single-target, single-RAM-disk config. And similar to Michal's
report, it reproduces rather quickly, in a matter of seconds.

fio --rw=read --bs=2048k --numjobs=1 --iodepth=128 --runtime=30 --size=20g --loops=1 --ioengine=libaio 
--direct=1 --invalidate=1 --fsync_on_close=1 --norandommap --exitall --filename=/dev/sdb --name=sdb 
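
For a rough sketch of the arithmetic behind this (assuming page-sized SG
entries and the DIV_ROUND_UP fragmentation done by rdma_rw_init_map_wrs()
in drivers/infiniband/core/rw.c; the max_sge value below is only an example
of a small hw SGE limit, not a measured one):

/* Back-of-the-envelope estimate of how many RDMA WRs the rdma-rw API
 * queues for a single IO of io_bytes, given max_sge SG entries per WR.
 */
#include <stdio.h>

#define PAGE_SIZE 4096u
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

static unsigned int rw_ctx_wrs(unsigned int io_bytes, unsigned int max_sge)
{
	unsigned int sg_cnt = DIV_ROUND_UP(io_bytes, PAGE_SIZE);

	return DIV_ROUND_UP(sg_cnt, max_sge);
}

int main(void)
{
	/* A 2 MiB read maps to 512 pages; with max_sge = 3 that is 171
	 * RDMA WRITE WRs for a single command, so a few in-flight
	 * commands can exceed a modest send queue depth. */
	printf("WRs per 2M IO: %u\n", rw_ctx_wrs(2048u * 1024u, 3));
	return 0;
}

At iodepth=128 that is potentially thousands of WRs contending for the SQ,
which matches how quickly the overflow reproduces.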

Shiraz

> 
> > On Fri, 2017-10-06 at 17:40 -0500, Shiraz Saleem wrote:
> > > On Mon, Jul 17, 2017 at 03:26:04AM -0600, Amrani, Ram wrote:
> > > > Hi Nicholas,
> > > >
> > > > > Just to confirm, the following four patches were required to get
> > > > > Potnuri up and running on iser-target + iw_cxgb4 with a similarly
> > > > > small number of hw SGEs:
> > > > >
> > > > > 7a56dc8 iser-target: avoid posting a recv buffer twice
> > > > > 555a65f iser-target: Fix queue-full response handling
> > > > > a446701 iscsi-target: Propigate queue_data_in + queue_status errors
> > > > > fa7e25c target: Fix unknown fabric callback queue-full errors
> > > > >
> > > > > So did you test with QLogic/Cavium with RoCE using these four
> > > > > patches, or just with commit a4467018..?
> > > > >
> > > > > Note these have not been CC'ed to stable yet, as I was reluctant
> > > > > since they didn't have much mileage on them at the time..
> > > > >
> > > > > Now however, they should be OK to consider for stable, especially
> > > > > if they get you unblocked as well.
> > > >
> > > > The issue is still seen with these four patches.
> > > >
> > > > Thanks,
> > > > Ram
> > >
> > > Hi,
> > >
> > > On X722 iWARP NICs (i40iw) too, we are seeing a similar issue of SQ
> > > overflow being hit on isert for larger block sizes, on a 4.14-rc2 kernel.
> > >
> > > Eventually there is a timeout/conn-error on the iser initiator and the
> > > connection is torn down.
> > >
> > > The aforementioned patches don't seem to alleviate the SQ overflow
> > > issue.
> > >
> > > Initiator
> > > ------------
> > >
> > > [17007.465524] scsi host11: iSCSI Initiator over iSER
> > > [17007.466295] iscsi: invalid can_queue of 55. can_queue must be a power of 2.
> > > [17007.466924] iscsi: Rounding can_queue to 32.
> > > [17007.471535] scsi 11:0:0:0: Direct-Access     LIO-ORG  ramdisk1_40G     4.0 PQ: 0 ANSI: 5
> > > [17007.471652] scsi 11:0:0:0: alua: supports implicit and explicit TPGS
> > > [17007.471656] scsi 11:0:0:0: alua: device naa.6001405ab790db5e8e94b0998ab4bf0b port group 0 rel port 1
> > > [17007.471782] sd 11:0:0:0: Attached scsi generic sg2 type 0
> > > [17007.472373] sd 11:0:0:0: [sdb] 83886080 512-byte logical blocks: (42.9 GB/40.0 GiB)
> > > [17007.472405] sd 11:0:0:0: [sdb] Write Protect is off
> > > [17007.472406] sd 11:0:0:0: [sdb] Mode Sense: 43 00 00 08
> > > [17007.472462] sd 11:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
> > > [17007.473412] sd 11:0:0:0: [sdb] Attached SCSI disk
> > > [17007.478184] sd 11:0:0:0: alua: transition timeout set to 60 seconds
> > > [17007.478186] sd 11:0:0:0: alua: port group 00 state A non-preferred supports TOlUSNA
> > > [17031.269821]  sdb:
> > > [17033.359789] EXT4-fs (sdb1): mounted filesystem with ordered data mode. Opts: (null)
> > > [17049.056155]  connection2:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4311705998, last ping 4311711232, now 4311716352
> > > [17049.057499]  connection2:0: detected conn error (1022)
> > > [17049.057558] modifyQP to CLOSING qp 3 next_iw_state 3
> > > [..]
> > >
> > >
> > > Target
> > > ----------
> > > [....]
> > > [17066.397179] i40iw_post_send: qp 3 wr_opcode 0 ret_err -12
> > > [17066.397180] isert: isert_rdma_rw_ctx_post: Cmd: ffff8817fb8ec020 failed to post RDMA res
> > > [17066.397183] i40iw_post_send: qp 3 wr_opcode 0 ret_err -12
> > > [17066.397183] isert: isert_rdma_rw_ctx_post: Cmd: ffff8817fb8ea1f8 failed to post RDMA res
> > > [17066.397184] i40iw_post_send: qp 3 wr_opcode 0 ret_err -12
> > > [17066.397184] isert: isert_rdma_rw_ctx_post: Cmd: ffff8817fb8e9bf0 failed to post RDMA res
> > > [17066.397187] i40iw_post_send: qp 3 wr_opcode 0 ret_err -12
> > > [17066.397188] isert: isert_rdma_rw_ctx_post: Cmd: ffff8817fb8ecc30 failed to post RDMA res
> > > [17066.397192] i40iw_post_send: qp 3 wr_opcode 0 ret_err -12
> > > [17066.397192] isert: isert_rdma_rw_ctx_post: Cmd: ffff8817fb8f20a0 failed to post RDMA res
> > > [17066.397195] i40iw_post_send: qp 3 wr_opcode 0 ret_err -12
> > > [17066.397196] isert: isert_rdma_rw_ctx_post: Cmd: ffff8817fb8ea800 failed to post RDMA res
> > > [17066.397196] i40iw_post_send: qp 3 wr_opcode 0 ret_err -12
> > > [17066.397197] isert: isert_rdma_rw_ctx_post: Cmd: ffff8817fb8ede48 failed to post RDMA res
> > > [17066.397200] i40iw_post_send: qp 3 wr_opcode 0 ret_err -12
> > > [17066.397200] isert: isert_rdma_rw_ctx_post: Cmd: ffff8817fb8ec020 failed to post RDMA res
> > > [17066.397204] i40iw_post_send: qp 3 wr_opcode 0 ret_err -12
> > > [17066.397204] isert: isert_rdma_rw_ctx_post: Cmd: ffff8817fb8ea1f8 failed to post RDMA res
> > > [17066.397206] i40iw i40iw_process_aeq ae_id = 0x503 bool qp=1 qp_id = 3
> > > [17066.397207] i40iw_post_send: qp 3 wr_opcode 0 ret_err -12
> > > [17066.397207] isert: isert_rdma_rw_ctx_post: Cmd: ffff8817fb8e9bf0 failed to post RDMA res
> > > [17066.397211] i40iw_post_send: qp 3 wr_opcode 0 ret_err -12
> > > [17066.397211] isert: isert_rdma_rw_ctx_post: Cmd: ffff8817fb8ecc30 failed to post RDMA res
> > > [17066.397215] i40iw_post_send: qp 3 wr_opcode 0 ret_err -12
> > > [17066.397215] isert: isert_rdma_rw_ctx_post: Cmd: ffff8817fb8f20a0 failed to post RDMA res
> > > [17066.397218] i40iw_post_send: qp 3 wr_opcode 0 ret_err -12
> > > [17066.397219] isert: isert_rdma_rw_ctx_post: Cmd: ffff8817fb8ea800 failed to post RDMA res
> > > [17066.397219] i40iw_post_send: qp 3 wr_opcode 0 ret_err -12
> > > [17066.397220] isert: isert_rdma_rw_ctx_post: Cmd: ffff8817fb8ede48 failed to post RDMA res
> > > [17066.397232] i40iw_post_send: qp 3 wr_opcode 0 ret_err -12
> > > [17066.397233] isert: isert_rdma_rw_ctx_post: Cmd: ffff8817fb8ec020 failed to post RDMA res
> > > [17066.397237] i40iw_post_send: qp 3 wr_opcode 0 ret_err -12
> > > [17066.397237] isert: isert_rdma_rw_ctx_post: Cmd: ffff8817fb8ea1f8 failed to post RDMA res
> > > [17066.397238] i40iw_post_send: qp 3 wr_opcode 0 ret_err -12
> > > [17066.397238] isert: isert_rdma_rw_ctx_post: Cmd: ffff8817fb8e9bf0 failed to post RDMA res
> > > [17066.397242] i40iw_post_send: qp 3 wr_opcode 0 ret_err -12
> > > [17066.397242] isert: isert_rdma_rw_ctx_post: Cmd: ffff8817fb8ecc30 failed to post RDMA res
> > > [17066.397245] i40iw_post_send: qp 3 wr_opcode 0 ret_err -12
> > > [17066.397247] i40iw i40iw_process_aeq ae_id = 0x501 bool qp=1 qp_id = 3
> > > [17066.397247] isert: isert_rdma_rw_ctx_post: Cmd: ffff8817fb8f20a0 failed to post RDMA res
> > > [17066.397251] QP 3 flush_issued
> > > [17066.397252] i40iw_post_send: qp 3 wr_opcode 0 ret_err -22
> > > [17066.397252] isert: isert_rdma_rw_ctx_post: Cmd: ffff8817fb8ea800 failed to post RDMA res
> > > [17066.397253] Got unknown fabric queue status: -22
> > > [17066.397254] QP 3 flush_issued
> > > [17066.397254] i40iw_post_send: qp 3 wr_opcode 0 ret_err -22
> > > [17066.397254] isert: isert_rdma_rw_ctx_post: Cmd: ffff8817fb8ede48 failed to post RDMA res
> > > [17066.397255] Got unknown fabric queue status: -22
> > > [17066.397258] QP 3 flush_issued
> > > [17066.397258] i40iw_post_send: qp 3 wr_opcode 0 ret_err -22
> > > [17066.397259] isert: isert_rdma_rw_ctx_post: Cmd: ffff8817fb8ec020 failed to post RDMA res
> > > [17066.397259] Got unknown fabric queue status: -22
> > > [17066.397267] QP 3 flush_issued
> > > [17066.397267] i40iw_post_send: qp 3 wr_opcode 0 ret_err -22
> > > [17066.397268] isert: isert_rdma_rw_ctx_post: Cmd: ffff8817fb8ea1f8 failed to post RDMA res
> > > [17066.397268] Got unknown fabric queue status: -22
> > > [17066.397287] QP 3 flush_issued
> > > [17066.397287] i40iw_post_send: qp 3 wr_opcode 0 ret_err -22
> > > [17066.397288] isert: isert_rdma_rw_ctx_post: Cmd: ffff8817fb8e9bf0 failed to post RDMA res
> > > [17066.397288] Got unknown fabric queue status: -22
> > > [17066.397291] QP 3 flush_issued
> > > [17066.397292] i40iw_post_send: qp 3 wr_opcode 0 ret_err -22
> > > [17066.397292] isert: isert_rdma_rw_ctx_post: Cmd: ffff8817fb8ecc30 failed to post RDMA res
> > > [17066.397292] Got unknown fabric queue status: -22
> > > [17066.397295] QP 3 flush_issued
> > > [17066.397296] i40iw_post_send: qp 3 wr_opcode 0 ret_err -22
> > > [17066.397296] isert: isert_rdma_rw_ctx_post: Cmd: ffff8817fb8f20a0 failed to post RDMA res
> > > [17066.397297] Got unknown fabric queue status: -22
> > > [17066.397307] QP 3 flush_issued
> > > [17066.397307] i40iw_post_send: qp 3 wr_opcode 8 ret_err -22
> > > [17066.397308] isert: isert_post_response: ib_post_send failed with -22
> > > [17066.397309] i40iw i40iw_qp_disconnect Call close API
> > > [....]
> > >
> > > Shiraz
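
As a footnote to the four patches referenced above, here is a minimal sketch
of the queue-full policy they implement inside target core (illustrative
only, not the actual patch code; transport_handle_queue_full() and
transport_generic_request_failure() live in
drivers/target/target_core_transport.c and are internal to target core):

#include <target/target_core_base.h>
#include <target/target_core_fabric.h>

/* Sketch: -ENOMEM/-EAGAIN from a fabric callback (e.g. isert failing
 * an ib_post_send()) is treated as transient "queue full" and the
 * command is parked for a later retry; any other error fails the
 * command outright.
 */
static void sketch_fabric_post_ret(struct se_cmd *cmd, int ret)
{
	switch (ret) {
	case -ENOMEM:
	case -EAGAIN:
		/* SQ temporarily full: requeue and retry later */
		transport_handle_queue_full(cmd, cmd->se_dev, ret, false);
		break;
	default:
		/* Hard, non-retryable failure */
		transport_generic_request_failure(cmd,
				TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE);
		break;
	}
}

With this in place, the -ENOMEM (-12) post failures in the target log above
become backpressure and are retried, while the -22 errors seen after the QP
is flushed remain hard failures.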

Thread overview: 44+ messages
2017-06-28  9:25 SQ overflow seen running isert traffic with high block sizes Amrani, Ram
2017-06-28 10:35 ` Potnuri Bharat Teja
2017-06-28 11:29   ` Amrani, Ram
2017-06-28 10:39 ` Sagi Grimberg
2017-06-28 11:32   ` Amrani, Ram
2017-07-13 18:29     ` Nicholas A. Bellinger
2017-07-17  9:26       ` Amrani, Ram
2017-10-06 22:40         ` Shiraz Saleem
2018-01-15  4:56           ` Nicholas A. Bellinger
2018-01-15 10:12             ` Kalderon, Michal
2018-01-15 15:22               ` Shiraz Saleem [this message]
2018-01-18  9:58                 ` Nicholas A. Bellinger
2018-01-18 17:53                   ` Potnuri Bharat Teja
2018-01-24  7:25                     ` Nicholas A. Bellinger
2018-01-24 12:21                       ` Potnuri Bharat Teja
2018-01-24 12:33                         ` Potnuri Bharat Teja
2018-01-24 16:03                       ` Steve Wise
2018-01-19 19:33                   ` Kalderon, Michal
2018-01-24  7:55                     ` Nicholas A. Bellinger
2018-01-24  8:09                       ` Kalderon, Michal
2018-01-29 19:20                         ` Sagi Grimberg
2018-01-29 19:17                       ` Sagi Grimberg
2018-01-30 16:30                         ` Shiraz Saleem
2018-01-22 17:49                   ` Saleem, Shiraz
2018-01-24  8:01                     ` Nicholas A. Bellinger
2018-01-26 18:52                       ` Shiraz Saleem
2018-01-29 19:36                       ` Sagi Grimberg
