From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S936946AbdAKJAz (ORCPT );
	Wed, 11 Jan 2017 04:00:55 -0500
Received: from mail-wm0-f66.google.com ([74.125.82.66]:35236 "EHLO
	mail-wm0-f66.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1762573AbdAKJAs (ORCPT );
	Wed, 11 Jan 2017 04:00:48 -0500
From: Tvrtko Ursulin 
X-Google-Original-From: Tvrtko Ursulin 
To: Intel-gfx@lists.freedesktop.org
Cc: tursulin@ursulin.net, Tvrtko Ursulin ,
	Masahiro Yamada ,
	linux-kernel@vger.kernel.org
Subject: [PATCH 2/4] lib/scatterlist: Avoid potential scatterlist entry overflow
Date: Wed, 11 Jan 2017 09:00:36 +0000
Message-Id: <1484125238-2539-2-git-send-email-tvrtko.ursulin@linux.intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1484125238-2539-1-git-send-email-tvrtko.ursulin@linux.intel.com>
References: <1484125238-2539-1-git-send-email-tvrtko.ursulin@linux.intel.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

From: Tvrtko Ursulin 

Since the scatterlist length field is an unsigned int, make sure that
sg_alloc_table_from_pages does not overflow it while coalescing pages
to a single entry.

v2: Drop reference to future use. Use UINT_MAX.
v3: max_segment must be page aligned.

Signed-off-by: Tvrtko Ursulin 
Cc: Masahiro Yamada 
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Chris Wilson  (v2)
---
 lib/scatterlist.c | 25 +++++++++++++++++++------
 1 file changed, 19 insertions(+), 6 deletions(-)

diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index e05e7fc98892..4fc54801cd29 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -394,7 +394,8 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
 	unsigned int offset, unsigned long size,
 	gfp_t gfp_mask)
 {
-	unsigned int chunks;
+	const unsigned int max_segment = rounddown(UINT_MAX, PAGE_SIZE);
+	unsigned int seg_len, chunks;
 	unsigned int i;
 	unsigned int cur_page;
 	int ret;
@@ -402,9 +403,16 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
 
 	/* compute number of contiguous chunks */
 	chunks = 1;
-	for (i = 1; i < n_pages; ++i)
-		if (page_to_pfn(pages[i]) != page_to_pfn(pages[i - 1]) + 1)
+	seg_len = PAGE_SIZE;
+	for (i = 1; i < n_pages; ++i) {
+		if (seg_len >= max_segment ||
+		    page_to_pfn(pages[i]) != page_to_pfn(pages[i - 1]) + 1) {
 			++chunks;
+			seg_len = PAGE_SIZE;
+		} else {
+			seg_len += PAGE_SIZE;
+		}
+	}
 
 	ret = sg_alloc_table(sgt, chunks, gfp_mask);
 	if (unlikely(ret))
@@ -413,17 +421,22 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
 	/* merging chunks and putting them into the scatterlist */
 	cur_page = 0;
 	for_each_sg(sgt->sgl, s, sgt->orig_nents, i) {
-		unsigned long chunk_size;
+		unsigned int chunk_size;
 		unsigned int j;
 
 		/* look for the end of the current chunk */
+		seg_len = PAGE_SIZE;
 		for (j = cur_page + 1; j < n_pages; ++j)
-			if (page_to_pfn(pages[j]) !=
+			if (seg_len >= max_segment ||
+			    page_to_pfn(pages[j]) !=
 			    page_to_pfn(pages[j - 1]) + 1)
 				break;
+			else
+				seg_len += PAGE_SIZE;
 
 		chunk_size = ((j - cur_page) << PAGE_SHIFT) - offset;
-		sg_set_page(s, pages[cur_page], min(size, chunk_size), offset);
+		sg_set_page(s, pages[cur_page],
+			    min_t(unsigned long, size, chunk_size), offset);
 		size -= chunk_size;
 		offset = 0;
 		cur_page = j;
-- 
2.7.4
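
To make the overflow concrete, below is a minimal userspace sketch (not
lib/scatterlist.c itself; PAGE_SIZE and the page count are stand-ins) of
why coalescing more than UINT_MAX bytes of contiguous pages would wrap a
32-bit length, and how capping each segment at rounddown(UINT_MAX,
PAGE_SIZE) keeps every entry representable:

/*
 * Illustration only -- a userspace approximation of the counting loop,
 * not the kernel function. PAGE_SIZE and n_pages are stand-in values.
 */
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL

int main(void)
{
	/* Largest page-aligned length that still fits in an unsigned int,
	 * mirroring rounddown(UINT_MAX, PAGE_SIZE) from the patch. */
	const unsigned int max_segment = UINT_MAX - (UINT_MAX % PAGE_SIZE);

	/* Pretend we were handed 8 GiB of physically contiguous pages. */
	uint64_t n_pages = 2 * 1024 * 1024;	/* 2M x 4K = 8 GiB */
	unsigned int chunks = 1;
	unsigned int seg_len = PAGE_SIZE;
	uint64_t i;

	/* Same shape as the patched loop; every page here is contiguous,
	 * so only the max_segment check opens a new entry. */
	for (i = 1; i < n_pages; i++) {
		if (seg_len >= max_segment) {
			chunks++;
			seg_len = PAGE_SIZE;
		} else {
			seg_len += PAGE_SIZE;
		}
	}

	printf("%llu bytes -> %u entries of at most %u bytes each\n",
	       (unsigned long long)(n_pages * PAGE_SIZE), chunks, max_segment);
	printf("a single unsigned int length would wrap to %u\n",
	       (unsigned int)(n_pages * PAGE_SIZE));
	return 0;
}

With the cap in place, no single call to sg_set_page() is ever asked to
record a length larger than an unsigned int can hold; the run is simply
split across several scatterlist entries instead.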