Date: Fri, 1 May 2015 17:17:03 -0400
From: Mike Snitzer
To: Abelardo Ricart III
Cc: dm-devel@redhat.com, mpatocka@redhat.com, linux-kernel@vger.kernel.org
Subject: Re: Regression: Disk corruption with dm-crypt and kernels >= 4.0
Message-ID: <20150501211703.GA15030@redhat.com>
In-Reply-To: <1430455027.7012.32.camel@memnix.com>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, May 01 2015 at 12:37am -0400,
Abelardo Ricart III wrote:

> I made sure to run a completely vanilla kernel when testing why I was suddenly
> seeing some nasty libata errors with all kernels >= v4.0.
> Here's a snippet:
>
> -------------------->8--------------------
> [ 165.592136] ata5.00: exception Emask 0x60 SAct 0x7000 SErr 0x800 action 0x6 frozen
> [ 165.592140] ata5.00: irq_stat 0x20000000, host bus error
> [ 165.592143] ata5: SError: { HostInt }
> [ 165.592145] ata5.00: failed command: READ FPDMA QUEUED
> [ 165.592149] ata5.00: cmd 60/08:60:a0:0d:89/00:00:07:00:00/40 tag 12 ncq 4096 in
>                        res 40/00:74:40:58:5d/00:00:00:00:00/40 Emask 0x60 (host bus error)
> [ 165.592151] ata5.00: status: { DRDY }
> -------------------->8--------------------
>
> After a few dozen of these errors, I'd suddenly find my system in read-only
> mode with corrupted files throughout my encrypted filesystems (it seemed like
> either a read or a write would corrupt a file, though I could be mistaken). I
> decided to do a git bisect with a random read-write-sync test to narrow down
> the culprit, which turned out to be this commit (part of a series):
>
> # first bad commit: [cf2f1abfbd0dba701f7f16ef619e4d2485de3366] dm crypt: don't
> allocate pages for a partial request
>
> Just to be sure, I created a patch to revert the entire nine-patch series that
> commit belonged to... and the bad behavior disappeared. I've now been running
> kernel 4.0 for a few days without issue, and went so far as to stress test my
> poor SSD for a few hours to be 100% positive.
>
> Here's some more info on my setup.
>
> -------------------->8--------------------
> $ lsblk -f
> NAME           FSTYPE      LABEL MOUNTPOINT
> sda
> ├─sda1         vfat              /boot/EFI
> ├─sda2         ext4              /boot
> └─sda3         LVM2_member
>   ├─SSD-root   crypto_LUKS
>   │ └─root     f2fs              /
>   └─SSD-home   crypto_LUKS
>     └─home     f2fs              /home
>
> $ cat /proc/cmdline
> BOOT_IMAGE=/vmlinuz-linux-memnix cryptdevice=/dev/SSD/root:root:allow-discards
> root=/dev/mapper/root acpi_osi=Linux security=tomoyo
> TOMOYO_trigger=/usr/lib/systemd/systemd intel_iommu=on
> modprobe.blacklist=nouveau rw quiet
>
> $ cat /etc/lvm/lvm.conf | grep "issue_discards"
> issue_discards = 1
> -------------------->8--------------------
>
> If there's anything else I can do to help diagnose the underlying problem, I'm
> more than willing.

The patchset in question was tested quite heavily, so this is a surprising
report.

I'm noticing you are opting in to dm-crypt discard support. Have you
tested without discards enabled?
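To test without discards, both opt-ins shown in the report would need to be
undone. A sketch, assuming the Arch-style `cryptdevice=` boot parameter and
the lvm.conf setting quoted above:

```
# Kernel command line: drop ":allow-discards" from the cryptdevice option
cryptdevice=/dev/SSD/root:root

# /etc/lvm/lvm.conf: stop LVM from passing discards down the stack
issue_discards = 0

# After rebooting, the crypt target's table line should no longer carry
# the allow_discards optional parameter (requires root):
#   dmsetup table root
```

If the corruption disappears with this configuration, that would point at the
discard path of the new dm-crypt request handling rather than the common
read/write path.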
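For anyone trying to reproduce this, the "random read-write-sync test" used
for the bisect could look something like the script below. This is a
hypothetical sketch, not the reporter's actual test; the directory argument
is an assumption and should point at a filesystem on the dm-crypt device
under test.

```shell
#!/bin/sh
# Write a random pattern, sync it to the device, drop the page cache so
# the read-back comes from disk, then compare. Exit status follows the
# "git bisect run" convention: 0 = good, nonzero = bad.
set -eu

DIR="${1:-/tmp}"              # assumed mount point on the crypt device
PATTERN="$DIR/pattern.bin"
COPY="$DIR/copy.bin"

dd if=/dev/urandom of="$PATTERN" bs=1M count=16 2>/dev/null
cp "$PATTERN" "$COPY"
sync
# Dropping caches needs root; skip silently when not permitted.
{ echo 3 > /proc/sys/vm/drop_caches; } 2>/dev/null || true

if cmp -s "$PATTERN" "$COPY"; then
    echo "data intact"
else
    echo "corruption detected"
    exit 1
fi
```

Driven from git bisect, this would be something like `git bisect start;
git bisect bad v4.0; git bisect good v3.19; git bisect run ./io-check.sh /mnt`,
since `git bisect run` marks a commit bad on any nonzero exit status
(except 125, which means "skip").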