From patchwork Wed May 8 09:52:35 2013
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 16782
From: Steve Capper <steve.capper@linaro.org>
To: linux-mm@kvack.org, x86@kernel.org, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Cc: Michal Hocko, Ken Chen, Mel Gorman, Catalin Marinas, Will Deacon,
	patches@linaro.org, Steve Capper
Subject: [RFC PATCH v2 03/11] mm: hugetlb: Copy general hugetlb code from
	x86 to mm.
Date: Wed, 8 May 2013 10:52:35 +0100
Message-Id: <1368006763-30774-4-git-send-email-steve.capper@linaro.org>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1368006763-30774-1-git-send-email-steve.capper@linaro.org>
References: <1368006763-30774-1-git-send-email-steve.capper@linaro.org>

The huge_pte_alloc, huge_pte_offset and follow_huge_p[mu]d functions in
x86/mm/hugetlbpage.c do not rely on any architecture-specific knowledge
other than the fact that pmds and puds can be treated as huge ptes.

To allow other architectures to use this code (and reduce the need for
code duplication), this patch copies these functions into mm and
provides a config flag to activate them:
CONFIG_ARCH_WANT_GENERAL_HUGETLB

If CONFIG_ARCH_WANT_HUGE_PMD_SHARE is also active, then the
huge_pmd_share code will be called by huge_pte_alloc (otherwise we call
pmd_alloc and skip the sharing code).
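To make the size dispatch concrete: with 4KiB base pages on x86-64, a
pmd entry maps 2MiB and a pud entry maps 1GiB, so huge_pte_alloc only
has to decide at which page table level to stop walking. A standalone
sketch of that dispatch (illustration only, not part of the patch; the
PMD_SIZE/PUD_SIZE values below are local stand-ins for the kernel
macros, assuming the x86-64 4KiB-page layout):

	#include <stdio.h>

	/* Assumed x86-64 values with 4KiB base pages (stand-ins for
	 * the kernel's PMD_SIZE/PUD_SIZE macros). */
	#define PMD_SIZE	(1UL << 21)	/* 2MiB */
	#define PUD_SIZE	(1UL << 30)	/* 1GiB */

	/* Which level huge_pte_alloc() treats as the huge pte. */
	static const char *huge_pte_level(unsigned long sz)
	{
		if (sz == PUD_SIZE)
			return "pud";	/* pud entry itself is the huge pte */
		if (sz == PMD_SIZE)
			return "pmd";	/* a pmd is allocated (or shared) */
		return "invalid";	/* the real code BUG_ON()s here */
	}

	int main(void)
	{
		printf("2M -> %s, 1G -> %s\n",
		       huge_pte_level(PMD_SIZE),
		       huge_pte_level(PUD_SIZE));
		return 0;
	}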
Signed-off-by: Steve Capper <steve.capper@linaro.org>
---
 mm/hugetlb.c | 97 ++++++++++++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 88 insertions(+), 9 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 41179b0..e1dc5ae 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2915,15 +2915,6 @@ out_mutex:
 	return ret;
 }
 
-/* Can be overriden by architectures */
-__attribute__((weak)) struct page *
-follow_huge_pud(struct mm_struct *mm, unsigned long address,
-	       pud_t *pud, int write)
-{
-	BUG();
-	return NULL;
-}
-
 long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 			 struct page **pages, struct vm_area_struct **vmas,
 			 unsigned long *position, unsigned long *nr_pages,
@@ -3262,8 +3253,96 @@ int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep)
 	*addr = ALIGN(*addr, HPAGE_SIZE * PTRS_PER_PTE) - HPAGE_SIZE;
 	return 1;
 }
+#define want_pmd_share()	(1)
+#else /* !CONFIG_ARCH_WANT_HUGE_PMD_SHARE */
+pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
+{
+	return NULL;
+}
+#define want_pmd_share()	(0)
 #endif /* CONFIG_ARCH_WANT_HUGE_PMD_SHARE */
 
+#ifdef CONFIG_ARCH_WANT_GENERAL_HUGETLB
+pte_t *huge_pte_alloc(struct mm_struct *mm,
+			unsigned long addr, unsigned long sz)
+{
+	pgd_t *pgd;
+	pud_t *pud;
+	pte_t *pte = NULL;
+
+	pgd = pgd_offset(mm, addr);
+	pud = pud_alloc(mm, pgd, addr);
+	if (pud) {
+		if (sz == PUD_SIZE) {
+			pte = (pte_t *)pud;
+		} else {
+			BUG_ON(sz != PMD_SIZE);
+			if (want_pmd_share() && pud_none(*pud))
+				pte = huge_pmd_share(mm, addr, pud);
+			else
+				pte = (pte_t *)pmd_alloc(mm, pud, addr);
+		}
+	}
+	BUG_ON(pte && !pte_none(*pte) && !pte_huge(*pte));
+
+	return pte;
+}
+
+pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
+{
+	pgd_t *pgd;
+	pud_t *pud;
+	pmd_t *pmd = NULL;
+
+	pgd = pgd_offset(mm, addr);
+	if (pgd_present(*pgd)) {
+		pud = pud_offset(pgd, addr);
+		if (pud_present(*pud)) {
+			if (pud_large(*pud))
+				return (pte_t *)pud;
+			pmd = pmd_offset(pud, addr);
+		}
+	}
+	return (pte_t *) pmd;
+}
+
+struct page *
+follow_huge_pmd(struct mm_struct *mm, unsigned long address,
+		pmd_t *pmd, int write)
+{
+	struct page *page;
+
+	page = pte_page(*(pte_t *)pmd);
+	if (page)
+		page += ((address & ~PMD_MASK) >> PAGE_SHIFT);
+	return page;
+}
+
+struct page *
+follow_huge_pud(struct mm_struct *mm, unsigned long address,
+		pud_t *pud, int write)
+{
+	struct page *page;
+
+	page = pte_page(*(pte_t *)pud);
+	if (page)
+		page += ((address & ~PUD_MASK) >> PAGE_SHIFT);
+	return page;
+}
+
+#else /* !CONFIG_ARCH_WANT_GENERAL_HUGETLB */
+
+/* Can be overriden by architectures */
+__attribute__((weak)) struct page *
+follow_huge_pud(struct mm_struct *mm, unsigned long address,
+	       pud_t *pud, int write)
+{
+	BUG();
+	return NULL;
+}
+
+#endif /* CONFIG_ARCH_WANT_GENERAL_HUGETLB */
+
 #ifdef CONFIG_MEMORY_FAILURE
 
 /* Should be called in hugetlb_lock */
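For reference, the pointer arithmetic in follow_huge_p[mu]d above
selects the struct page of the exact base page within the huge mapping:
(address & ~PMD_MASK) >> PAGE_SHIFT is the index of the 4KiB subpage
inside the 2MiB region, which is added to the huge page's head page. A
standalone sketch of that arithmetic (illustration only, not kernel
code; PAGE_SHIFT and PMD_SHIFT below assume the x86-64 4KiB-page
layout):

	#include <stdio.h>

	/* Assumed x86-64 constants with 4KiB base pages. */
	#define PAGE_SHIFT	12UL
	#define PMD_SHIFT	21UL
	#define PMD_SIZE	(1UL << PMD_SHIFT)
	#define PMD_MASK	(~(PMD_SIZE - 1))

	int main(void)
	{
		unsigned long address = 0x7f2a40212345UL;

		/* Byte offset of the address within its 2MiB huge page. */
		unsigned long offset = address & ~PMD_MASK;

		/* Number of 4KiB subpages into the huge page; this is
		 * the value follow_huge_pmd() adds to the head
		 * struct page pointer. */
		unsigned long subpage = offset >> PAGE_SHIFT;

		printf("offset = 0x%lx, subpage index = %lu\n",
		       offset, subpage);
		return 0;
	}

The pud variant is identical with PUD_MASK in place of PMD_MASK, giving
the subpage index within a 1GiB region.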