From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from mx0b-00082601.pphosted.com ([67.231.153.30]:51548 "EHLO mx0a-00082601.pphosted.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1754072AbdCFVzZ (ORCPT ); Mon, 6 Mar 2017 16:55:25 -0500
Subject: Re: [PATCH] zram: set physical queue limits to avoid array out of bounds accesses
To: Andrew Morton
References: <20170306102335.9180-1-jthumshirn@suse.de> <96ed9003-6299-b303-a901-d040a8cfe03f@fb.com> <20170306121840.aaa0525d3dbdeb82aad3c284@linux-foundation.org>
CC: Johannes Thumshirn , Minchan Kim , Nitin Gupta , Christoph Hellwig , "Sergey Senozhatsky" , Hannes Reinecke , , Linux Block Layer Mailinglist , Linux Kernel Mailinglist
From: Jens Axboe
Message-ID: 
Date: Mon, 6 Mar 2017 13:19:56 -0700
MIME-Version: 1.0
In-Reply-To: <20170306121840.aaa0525d3dbdeb82aad3c284@linux-foundation.org>
Content-Type: text/plain; charset="windows-1252"
Sender: linux-block-owner@vger.kernel.org
List-Id: linux-block@vger.kernel.org

On 03/06/2017 01:18 PM, Andrew Morton wrote:
> On Mon, 6 Mar 2017 08:21:11 -0700 Jens Axboe wrote:
>
>> On 03/06/2017 03:23 AM, Johannes Thumshirn wrote:
>>> zram can handle at most SECTORS_PER_PAGE sectors in a bio's bvec. When using
>>> the NVMe over Fabrics loopback target which potentially sends a huge bulk of
>>> pages attached to the bio's bvec this results in a kernel panic because of
>>> array out of bounds accesses in zram_decompress_page().
>>
>> Applied, thanks.
>
> With an added cc:stable, hopefully?

I didn't. But I can email it to stable@ when it lands in Linus's tree.

-- 
Jens Axboe