From patchwork Mon May 16 19:36:40 2022
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 573747
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Adrian-Ken Rueegsegger,
 Thomas Gleixner, Oscar Salvador, David Hildenbrand
Subject: [PATCH 5.15 066/102] x86/mm: Fix marking of unused sub-pmd ranges
Date: Mon, 16 May 2022 21:36:40 +0200
Message-Id: <20220516193625.892146402@linuxfoundation.org>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220516193623.989270214@linuxfoundation.org>
References: <20220516193623.989270214@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: Adrian-Ken Rueegsegger

commit 280abe14b6e0a38de9cc86fe6a019523aadd8f70 upstream.

The unused part precedes the new range spanned by the start, end
parameters of vmemmap_use_new_sub_pmd(). This means it actually goes from
ALIGN_DOWN(start, PMD_SIZE) up to start.

Use the correct address when applying the mark using memset.
Fixes: 8d400913c231 ("x86/vmemmap: handle unpopulated sub-pmd ranges")
Signed-off-by: Adrian-Ken Rueegsegger
Signed-off-by: Thomas Gleixner
Reviewed-by: Oscar Salvador
Reviewed-by: David Hildenbrand
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20220509090637.24152-2-ken@codelabs.ch
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/mm/init_64.c |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -902,6 +902,8 @@ static void __meminit vmemmap_use_sub_pm
 
 static void __meminit vmemmap_use_new_sub_pmd(unsigned long start, unsigned long end)
 {
+	const unsigned long page = ALIGN_DOWN(start, PMD_SIZE);
+
 	vmemmap_flush_unused_pmd();
 
 	/*
@@ -914,8 +916,7 @@ static void __meminit vmemmap_use_new_su
 	 * Mark with PAGE_UNUSED the unused parts of the new memmap range
 	 */
 	if (!IS_ALIGNED(start, PMD_SIZE))
-		memset((void *)start, PAGE_UNUSED,
-		       start - ALIGN_DOWN(start, PMD_SIZE));
+		memset((void *)page, PAGE_UNUSED, start - page);
 
 	/*
 	 * We want to avoid memset(PAGE_UNUSED) when populating the vmemmap of
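
For readers following the arithmetic of the fix: the unused part of a
partially used PMD-sized memmap block is the gap between the PMD-aligned
base and the first used address, i.e. [ALIGN_DOWN(start, PMD_SIZE), start).
Below is a minimal standalone sketch of that range calculation, not kernel
code; the ALIGN_DOWN macro and the 2 MiB PMD_SIZE value are simplified
stand-ins for the kernel definitions, and the start value is arbitrary.

#include <stdio.h>

/* Simplified stand-ins for the kernel macros used in init_64.c. */
#define PMD_SIZE	(2UL * 1024 * 1024)	/* 2 MiB on x86-64 */
#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))

int main(void)
{
	unsigned long start = 0x40123000UL;	/* arbitrary, not PMD-aligned */
	unsigned long page  = ALIGN_DOWN(start, PMD_SIZE);

	/*
	 * The unused sub-pmd range precedes 'start': it spans [page, start).
	 * Before the fix, the PAGE_UNUSED mark was written starting at
	 * 'start' itself, i.e. past the range; the fix passes 'page' as the
	 * memset() destination instead, with 'start - page' as the length.
	 */
	printf("unused range: [%#lx, %#lx), length %lu bytes\n",
	       page, start, start - page);
	return 0;
}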