linux-block.vger.kernel.org archive mirror
From: Rob Townley <rob.townley@gmail.com>
To: Stephen Rust <srust@blockbridge.com>
Cc: Ming Lei <ming.lei@redhat.com>, Christoph Hellwig <hch@lst.de>,
	Jens Axboe <axboe@kernel.dk>,
	linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
	linux-scsi@vger.kernel.org, martin.petersen@oracle.com,
	target-devel@vger.kernel.org
Subject: Re: Data corruption in kernel 5.1+ with iSER attached ramdisk
Date: Wed, 27 Nov 2019 23:51:56 -0600	[thread overview]
Message-ID: <CA+VdTb-HTXidfFVmzV2ynJgOGdmb-Ps=j9vQfZ-iZBPFm4RWJg@mail.gmail.com> (raw)
In-Reply-To: <CAAFE1bdGCx96tLKgSkNf7=MDZEZMnC==PJghpsRctvZpPLaX5w@mail.gmail.com>

Interesting case to follow, as there are many kinds of RAM disk. The
common tmpfs kind will use its RAM allocation and then all free
hard-drive space.

The ramdisk in CentOS 7 backed by LIO will overflow its size in RAM
and fill up all remaining free space on spinning platters. For
example, if the RAM disk is 4GB on a lightly used machine with 192GB
of RAM, and free filesystem space is 16GB, writes to the 4GB RAM disk
will only error out at 21GB, when there is no space left on the
filesystem.

  dd if=/dev/zero of=/dev/iscsiRamDisk

will keep writing well past 4GB and will not stop until the hard
drive is full, which is completely different from how normal disks
behave.
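By contrast, a brd ("block ram disk") ramdisk is a fixed-size block device and does enforce its size. A rough sketch of checking this (requires root; the device name and sizes are illustrative):

```shell
# Load brd with one 4 GiB ramdisk (rd_size is given in KiB)
modprobe brd rd_nr=1 rd_size=4194304

# Confirm the advertised size: should print 4294967296 (4 GiB)
blockdev --getsize64 /dev/ram0

# Writing past the end fails with "No space left on device" at the
# 4 GiB boundary, unlike a file on tmpfs, which can keep growing
# until the backing space is exhausted
dd if=/dev/zero of=/dev/ram0 bs=1M
```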

I wonder what exact kind of RAM disk is in use in that kernel?

On Wed, Nov 27, 2019 at 10:26 PM Stephen Rust <srust@blockbridge.com> wrote:
>
> [Apologies for dup, re-sending without text formatting to lists]
>
> Hi,
>
> Thanks for your reply.
>
> I agree it does seem surprising that the git bisect pointed to this
> particular commit when tracking down this issue.
>
> > Stephen, could you share us how you setup the ramdisk in your test?
>
> The ramdisk we export in LIO is a standard "brd" module ramdisk (ie:
> /dev/ram*). We configure it as a "block" backstore in LIO, not using
> the built-in LIO ramdisk.
>
> LIO configuration is as follows:
>
>   o- backstores .......................................................... [...]
>   | o- block .............................................. [Storage Objects: 1]
>   | | o- Blockbridge-952f0334-2535-5fae-9581-6c6524165067
> [/dev/ram-bb.952f0334-2535-5fae-9581-6c6524165067.cm2 (16.0MiB)
> write-thru activated]
>   | |   o- alua ............................................... [ALUA Groups: 1]
>   | |     o- default_tg_pt_gp ................... [ALUA state: Active/optimized]
>   | o- fileio ............................................. [Storage Objects: 0]
>   | o- pscsi .............................................. [Storage Objects: 0]
>   | o- ramdisk ............................................ [Storage Objects: 0]
>   o- iscsi ........................................................ [Targets: 1]
>   | o- iqn.2009-12.com.blockbridge:rda:1:952f0334-2535-5fae-9581-6c6524165067:rda
>  [TPGs: 1]
>   |   o- tpg1 ...................................... [no-gen-acls, auth per-acl]
>   |     o- acls ...................................................... [ACLs: 1]
>   |     | o- iqn.1994-05.com.redhat:115ecc56a5c .. [mutual auth, Mapped LUNs: 1]
>   |     |   o- mapped_lun0  [lun0
> block/Blockbridge-952f0334-2535-5fae-9581-6c6524165067 (rw)]
>   |     o- luns ...................................................... [LUNs: 1]
>   |     | o- lun0
> [block/Blockbridge-952f0334-2535-5fae-9581-6c6524165067
> (/dev/ram-bb.952f0334-2535-5fae-9581-6c6524165067.cm2)
> (default_tg_pt_gp)]
>   |     o- portals ................................................ [Portals: 1]
>   |       o- 0.0.0.0:3260 ............................................... [iser]
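For anyone trying to reproduce a setup along these lines, a rough targetcli sketch follows (requires root; the IQN, backstore name, and device name are illustrative, not the actual ones from the listing above):

```shell
# Create a 16 MiB brd ramdisk to act as the backing block device
modprobe brd rd_nr=1 rd_size=16384

# Export it through LIO as a "block" backstore (not the built-in
# LIO ramdisk backstore), matching the configuration described above
targetcli /backstores/block create name=ram0 dev=/dev/ram0

# Create an iSCSI target and map the backstore as LUN 0
targetcli /iscsi create iqn.2019-11.example.com:ramdisk
targetcli /iscsi/iqn.2019-11.example.com:ramdisk/tpg1/luns \
    create /backstores/block/ram0

# Persist the configuration
targetcli saveconfig
```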
>
> > > > Could you explain a bit what is iSCSI attached with iSER / RDMA? Is the
> > > > actual transport TCP over RDMA? What is related target driver involved?
>
> iSER is the iSCSI extension for RDMA, and it is important to note that
> we have _only_ reproduced this when the writes occur over RDMA, with
> the target portal in LIO having enabled "iser". The iscsi client
> (using iscsiadm) connects to the target directly over iSER. We use the
> Mellanox ConnectX-5 Ethernet NICs (mlx5* module) for this purpose,
> which utilize RoCE (RDMA over Converged Ethernet) instead of TCP.
>
> The identical ramdisk configuration using TCP/IP target in LIO has
> _not_ reproduced this issue for us.
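For completeness, switching an open-iscsi node record to the iSER transport before login looks roughly like this (the target IQN and portal address are illustrative):

```shell
# Discover targets at the portal
iscsiadm -m discovery -t sendtargets -p 192.168.1.10

# Change the node record's transport from the default (tcp) to iser
iscsiadm -m node -T iqn.2019-11.example.com:ramdisk -p 192.168.1.10 \
    -o update -n iface.transport_name -v iser

# Log in over RDMA
iscsiadm -m node -T iqn.2019-11.example.com:ramdisk -p 192.168.1.10 --login
```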
>
> > > > /usr/share/bcc/tools/stackcount -K rd_execute_rw
>
> I installed bcc and used the stackcount tool to trace rd_execute_rw,
> but I suspect that, because we are not using the built-in LIO
> ramdisk, it did not catch anything. Are there other function traces
> we can provide for you?
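Since the backstore here is brd rather than LIO's built-in ramdisk (whose entry point is rd_execute_rw), tracing brd's own request entry point may catch the I/O instead. The function name depends on kernel version; a sketch assuming a 5.1-era kernel, where it was brd_make_request:

```shell
# Count kernel stacks leading into brd's bio entry point
/usr/share/bcc/tools/stackcount -K brd_make_request

# If that symbol does not exist on the running kernel (it was renamed
# in later releases), list the brd symbols actually present:
grep brd /proc/kallsyms
```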
>
> Thanks,
> Steve


Thread overview: 30+ messages
     [not found] <CAAFE1bd9wuuobpe4VK7Ty175j7mWT+kRmHCNhVD+6R8MWEAqmw@mail.gmail.com>
2019-11-28  1:57 ` Data corruption in kernel 5.1+ with iSER attached ramdisk Ming Lei
     [not found]   ` <CA+VdTb_-CGaPjKUQteKVFSGqDz-5o-tuRRkJYqt8B9iOQypiwQ@mail.gmail.com>
2019-11-28  2:58     ` Ming Lei
     [not found]       ` <CAAFE1bfsXsKGyw7SU_z4NanT+wmtuJT=XejBYbHHMCDQwm73sw@mail.gmail.com>
2019-11-28  4:25         ` Stephen Rust
2019-11-28  5:51           ` Rob Townley [this message]
2019-11-28  9:12         ` Ming Lei
2019-12-02 18:42           ` Stephen Rust
2019-12-03  0:58             ` Ming Lei
2019-12-03  3:04               ` Stephen Rust
2019-12-03  3:14                 ` Ming Lei
2019-12-03  3:26                   ` Stephen Rust
2019-12-03  3:50                     ` Stephen Rust
2019-12-03 12:45                       ` Ming Lei
2019-12-03 19:56                         ` Stephen Rust
2019-12-04  1:05                           ` Ming Lei
2019-12-04 17:23                             ` Stephen Rust
2019-12-04 23:02                               ` Ming Lei
2019-12-05  0:16                                 ` Bart Van Assche
2019-12-05 14:44                                   ` Stephen Rust
2019-12-05  2:28                                 ` Stephen Rust
2019-12-05  3:05                                   ` Ming Lei
2019-12-05  9:17                                 ` Sagi Grimberg
2019-12-05 14:36                                   ` Stephen Rust
     [not found]                                   ` <CAAFE1beqFBQS_zVYEXFTD2qu8PAF9hBSW4j1k9ZD6MhU_gWg5Q@mail.gmail.com>
2020-03-25  0:15                                     ` Sagi Grimberg
2020-03-30 17:08                                       ` Stephen Rust
2020-03-31  1:07                                         ` Sagi Grimberg
2020-04-01  0:38                                         ` Sagi Grimberg
2020-04-02 20:03                                           ` Stephen Rust
2020-04-02 22:16                                             ` Sagi Grimberg
2019-12-04  2:39                           ` Ming Lei
2019-12-03  4:15                     ` Ming Lei
