Date: Wed, 4 Jan 2017 14:30:15 -0800
From: Shaohua Li <shli@kernel.org>
To: MasterPrenium
Cc: linux-kernel@vger.kernel.org, xen-users@lists.xen.org, linux-raid@vger.kernel.org, "MasterPrenium@gmail.com", xen-devel@lists.xenproject.org
Subject: Re: PROBLEM: Kernel BUG with raid5 soft + Xen + DRBD - invalid opcode
Message-ID: <20170104223015.cr6vtyhxuwxrg76g@kernel.org>
References: <585D6C34.2020908@gmail.com>
In-Reply-To: <585D6C34.2020908@gmail.com>

On Fri, Dec 23, 2016 at 07:25:56PM +0100, MasterPrenium wrote:
> Hello Guys,
>
> I'm having some trouble on a new system I'm setting up. I'm getting a
> kernel BUG message; it seems to be related to the use of Xen (when I
> boot the system _without_ Xen, I don't get any crash).
> Here is the configuration:
> - 3x hard drives in a RAID 5 software array created by mdadm
> - On top of it, DRBD for replication to another node (active/passive cluster)
> - On top of it, a BTRFS filesystem with a few subvolumes
> - On top of it, Xen VMs running.
>
> The BUG happens when I'm doing "huge" I/O (20 MB/s with an rsync, for
> example) on the RAID5 stack.
> I have to reset the system to make it work again.

What do you mean by "huge" I/O (20 MB/s)? Is it possible for you to
reproduce the issue on a raw raid5 array? It would be even better if you
could give me a fio job file that triggers the issue, so I can debug it
easily.

Also, please check whether the upstream patch e8d7c33 ("md/raid5: limit
request size according to implementation limits") helps.

Thanks,
Shaohua
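
P.S. For reference, a fio job file along these lines might approximate the
workload described above: sequential buffered writes rate-limited to roughly
20 MB/s against the md device. The /dev/md0 path, block size, run time, and
rate cap are assumptions for illustration, not details taken from the report;
adjust them to the actual array.

  # hypothetical fio job sketch: buffered sequential writes, ~20 MB/s
  [global]
  ioengine=libaio
  direct=0           # buffered I/O through the page cache, like rsync
  bs=1M
  rate=20m           # throttle to roughly the reported 20 MB/s
  runtime=300
  time_based=1

  [seq-write]
  rw=write
  filename=/dev/md0  # assumed raw md raid5 device
  size=10G

Running such a job directly against the md device, with the DRBD and btrfs
layers removed, would help confirm whether raid5 alone reproduces the BUG.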