From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Eric Auger, Peter Xu,
 Cornelia Huck, Alex Williamson, Guenter Roeck
Subject: [PATCH 4.4 109/266] vfio/type1: Limit DMA mappings per container
Date: Wed, 15 May 2019 12:53:36 +0200
Message-Id: <20190515090726.143970178@linuxfoundation.org>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190515090722.696531131@linuxfoundation.org>
References: <20190515090722.696531131@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Alex Williamson

commit 492855939bdb59c6f947b0b5b44af9ad82b7e38c upstream.

Memory backed DMA mappings are accounted against a user's locked
memory limit, including multiple mappings of the same memory.  This
accounting bounds the number of such mappings that a user can create.
However, DMA mappings that are not backed by memory, such as DMA
mappings of device MMIO via mmaps, do not make use of page pinning
and therefore do not count against the user's locked memory limit.
These mappings still consume memory, but the memory is not well
associated to the process for the purpose of oom killing a task.
To add bounding on this use case, we introduce a limit to the total
number of concurrent DMA mappings that a user is allowed to create.
This limit is exposed as a tunable module option where the default
value of 64K is expected to be well in excess of any reasonable use
case (a large virtual machine configuration would typically only make
use of tens of concurrent mappings).

This fixes CVE-2019-3882.

Reviewed-by: Eric Auger
Tested-by: Eric Auger
Reviewed-by: Peter Xu
Reviewed-by: Cornelia Huck
Signed-off-by: Alex Williamson
[groeck: Adjust for missing upstream commit]
Signed-off-by: Guenter Roeck
Signed-off-by: Greg Kroah-Hartman
---
 drivers/vfio/vfio_iommu_type1.c |   14 ++++++++++++++
 1 file changed, 14 insertions(+)

--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -53,10 +53,16 @@ module_param_named(disable_hugepages,
 MODULE_PARM_DESC(disable_hugepages,
                  "Disable VFIO IOMMU support for IOMMU hugepages.");
 
+static unsigned int dma_entry_limit __read_mostly = U16_MAX;
+module_param_named(dma_entry_limit, dma_entry_limit, uint, 0644);
+MODULE_PARM_DESC(dma_entry_limit,
+                 "Maximum number of user DMA mappings per container (65535).");
+
 struct vfio_iommu {
         struct list_head domain_list;
         struct mutex lock;
         struct rb_root dma_list;
+        unsigned int dma_avail;
         bool v2;
         bool nesting;
 };
@@ -382,6 +388,7 @@ static void vfio_remove_dma(struct vfio_
         vfio_unmap_unpin(iommu, dma);
         vfio_unlink_dma(iommu, dma);
         kfree(dma);
+        iommu->dma_avail++;
 }
 
 static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
@@ -582,12 +589,18 @@ static int vfio_dma_do_map(struct vfio_i
                 return -EEXIST;
         }
 
+        if (!iommu->dma_avail) {
+                mutex_unlock(&iommu->lock);
+                return -ENOSPC;
+        }
+
         dma = kzalloc(sizeof(*dma), GFP_KERNEL);
         if (!dma) {
                 mutex_unlock(&iommu->lock);
                 return -ENOMEM;
         }
 
+        iommu->dma_avail--;
         dma->iova = iova;
         dma->vaddr = vaddr;
         dma->prot = prot;
@@ -903,6 +916,7 @@ static void *vfio_iommu_type1_open(unsig
 
         INIT_LIST_HEAD(&iommu->domain_list);
         iommu->dma_list = RB_ROOT;
+        iommu->dma_avail = dma_entry_limit;
         mutex_init(&iommu->lock);
 
         return iommu;
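
For illustration only (not part of the patch): a minimal userspace sketch
of how a type1 container user would observe the new limit.  It assumes
"container" is an already-configured VFIO container fd (group attached,
VFIO_TYPE1_IOMMU selected), with that setup omitted, and map_one_page()
is a hypothetical helper rather than existing kernel or QEMU code.

#include <errno.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/types.h>
#include <linux/vfio.h>

static int map_one_page(int container, __u64 iova)
{
        struct vfio_iommu_type1_dma_map map;
        void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (buf == MAP_FAILED)
                return -errno;

        memset(&map, 0, sizeof(map));
        map.argsz = sizeof(map);
        map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
        map.vaddr = (__u64)(unsigned long)buf;
        map.iova  = iova;
        map.size  = 4096;

        /*
         * With this patch applied, VFIO_IOMMU_MAP_DMA fails with ENOSPC
         * once the container already holds dma_entry_limit mappings,
         * rather than allowing an unbounded number of vfio_dma entries.
         */
        if (ioctl(container, VFIO_IOMMU_MAP_DMA, &map)) {
                int err = -errno;

                munmap(buf, 4096);
                return err;
        }

        return 0;
}

Since dma_entry_limit is declared with mode 0644, it can also be adjusted
at runtime via /sys/module/vfio_iommu_type1/parameters/dma_entry_limit;
the new value only affects containers opened afterwards, because
dma_avail is latched in vfio_iommu_type1_open().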