From mboxrd@z Thu Jan  1 00:00:00 1970
From: Bhaskar Chowdhury
To: cl@linux.com, penberg@kernel.org, rientjes@google.com,
	iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
	vbabka@suse.cz, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: rdunlap@infradead.org, Bhaskar Chowdhury
Subject: [PATCH] mm/slub.c: Trivial typo fixes
Date: Wed, 24 Mar 2021 18:36:19 +0530
Message-Id: <20210324130619.16872-1-unixbhaskar@gmail.com>
X-Mailer: git-send-email 2.30.1
MIME-Version: 1.0

s/operatios/operations/
s/Mininum/Minimum/
s/mininum/minimum/ ...in two different places.

Signed-off-by: Bhaskar Chowdhury
---
 mm/slub.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 3021ce9bf1b3..cd3c7be33f69 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3,7 +3,7 @@
  * SLUB: A slab allocator that limits cache line use instead of queuing
  * objects in per cpu and per node lists.
  *
- * The allocator synchronizes using per slab locks or atomic operatios
+ * The allocator synchronizes using per slab locks or atomic operations
  * and only uses a centralized lock to manage a pool of partial slabs.
  *
  * (C) 2007 SGI, Christoph Lameter
@@ -160,7 +160,7 @@ static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
 #undef SLUB_DEBUG_CMPXCHG

 /*
- * Mininum number of partial slabs. These will be left on the partial
+ * Minimum number of partial slabs. These will be left on the partial
  * lists even if they are empty. kmem_cache_shrink may reclaim them.
  */
 #define MIN_PARTIAL 5
@@ -832,7 +832,7 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
  *
  *	A. Free pointer (if we cannot overwrite object on free)
  *	B. Tracking data for SLAB_STORE_USER
- *	C. Padding to reach required alignment boundary or at mininum
+ *	C. Padding to reach required alignment boundary or at minimum
  *	one word if debugging is on to be able to detect writes
  *	before the word boundary.
  *
@@ -3421,7 +3421,7 @@ static unsigned int slub_min_objects;
  *
  * Higher order allocations also allow the placement of more objects in a
  * slab and thereby reduce object handling overhead. If the user has
- * requested a higher mininum order then we start with that one instead of
+ * requested a higher minimum order then we start with that one instead of
  * the smallest order which will fit the object.
  */
 static inline unsigned int slab_order(unsigned int size,
--
2.30.1