From patchwork Mon Oct 14 06:48:53 2019
X-Patchwork-Submitter: Zhangfei Gao
X-Patchwork-Id: 176117
Delivered-To: patch@linaro.org
From: Zhangfei Gao
To: Greg Kroah-Hartman, Arnd Bergmann, jonathan.cameron@huawei.com,
    grant.likely@arm.com, jean-philippe, ilias.apalodimas@linaro.org,
    francois.ozog@linaro.org, kenneth-lee-2012@foxmail.com, Wangzhou
Cc: linux-accelerators@lists.ozlabs.org, linux-kernel@vger.kernel.org,
    Kenneth Lee, Zaibo Xu, Zhangfei Gao
Subject: [PATCH v5 1/3] uacce: Add documents for uacce
Date: Mon, 14 Oct 2019 14:48:53 +0800
Message-Id: <1571035735-31882-2-git-send-email-zhangfei.gao@linaro.org>
In-Reply-To: <1571035735-31882-1-git-send-email-zhangfei.gao@linaro.org>
References: <1571035735-31882-1-git-send-email-zhangfei.gao@linaro.org>

From: Kenneth Lee

Uacce (Unified/User-space-access-intended Accelerator Framework) is a
kernel module that provides Shared Virtual Addressing (SVA) between an
accelerator and a process. This patch adds a document explaining how it
works.
Signed-off-by: Kenneth Lee
Signed-off-by: Zaibo Xu
Signed-off-by: Zhou Wang
Signed-off-by: Zhangfei Gao
---
 Documentation/misc-devices/uacce.rst | 297 +++++++++++++++++++++++++++++++++++
 1 file changed, 297 insertions(+)
 create mode 100644 Documentation/misc-devices/uacce.rst

-- 
2.7.4

diff --git a/Documentation/misc-devices/uacce.rst b/Documentation/misc-devices/uacce.rst
new file mode 100644
index 0000000..1ddf4ff
--- /dev/null
+++ b/Documentation/misc-devices/uacce.rst
@@ -0,0 +1,297 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+Introduction of Uacce
+=========================
+
+Uacce (Unified/User-space-access-intended Accelerator Framework) aims to
+provide Shared Virtual Addressing (SVA) between accelerators and processes,
+so that an accelerator can access any data structure of the main CPU.
+This differs from conventional data sharing between the CPU and an I/O
+device, which shares data content rather than addresses.
+Because the address space is unified, the hardware and the user space of a
+process can use the same virtual addresses when they communicate.
+Uacce treats the hardware accelerator as a heterogeneous processor: the
+IOMMU shares the same CPU page tables, and therefore the same translation
+from virtual address (va) to physical address (pa).
+
+         __________________________       __________________________
+        |                          |     |                          |
+        |  User application (CPU)  |     |   Hardware Accelerator   |
+        |__________________________|     |__________________________|
+
+                     |                                 |
+                     | va                              | va
+                     V                                 V
+                 __________                        __________
+                |          |                      |          |
+                |   MMU    |                      |  IOMMU   |
+                |__________|                      |__________|
+                     |                                 |
+                     |                                 |
+                     V pa                              V pa
+                 _______________________________________
+                |                                       |
+                |                Memory                 |
+                |_______________________________________|
+
+
+
+Architecture
+------------
+
+Uacce is the kernel module in charge of the IOMMU and of address sharing.
+The user-space drivers and libraries are called WarpDrive.
+
+A virtual concept, the queue, is used for communication. It provides a
+FIFO-like interface and maintains a unified address space between the
+application and all involved hardware.
+
+     ___________________                 ________________
+    |                   |   user API    |                |
+    | WarpDrive library | ------------> |  user driver   |
+    |___________________|               |________________|
+             |                                   |
+             |                                   |
+             | queue fd                          |
+             |                                   |
+             |                                   |
+             v                                   |
+     ___________________       _________         |
+    |                   |     |         |        | mmap memory
+    |  Other framework  |     |  uacce  |        | r/w interface
+    | crypto/nic/others |     |_________|        |
+    |___________________|          |             |
+             |                     |             |
+             | register            | register    |
+             |                     |             |
+             |                     |             |
+             |    _________________     __________      |
+             |   |                 |   |          |     |
+              -- |  Device Driver  |   |  IOMMU   |     |
+                 |_________________|   |__________|     |
+                          |                             |
+                          |                             V
+                          |                  ___________________
+                          |                 |                   |
+                           ---------------- | Device(Hardware)  |
+                                            |___________________|
+
+
+How does it work
+================
+
+Uacce uses mmap and the IOMMU to do the trick.
+
+Uacce creates a chrdev for every device registered to it. A new queue is
+created when a user application opens the chrdev, and the file descriptor
+is used as the user handle of the queue.
+The accelerator device presents itself as a Uacce object, which is
+exported as a chrdev to user space. The user application communicates
+with the hardware by ioctl (the control path) or shared memory (the data
+path).
+
+The control path to the hardware is via file operations, while the data
+path is via the mmap space of the queue fd.
+
+The queue file address space: ::
+
+    enum uacce_qfrt {
+        UACCE_QFRT_MMIO = 0,  /* device mmio region */
+        UACCE_QFRT_DKO = 1,   /* device kernel-only region */
+        UACCE_QFRT_DUS = 2,   /* device user share region */
+        UACCE_QFRT_SS = 3,    /* static shared memory (for non-sva devices) */
+        UACCE_QFRT_MAX = 16,
+    };
+
+All regions are optional and differ from device type to device type. The
+communication protocol is wrapped by the user driver.
+
+The device mmio region is mapped to the hardware mmio space. It is
+generally used for doorbells or other notifications to the hardware, and
+is not fast enough to serve as a data channel.
+
+The device kernel-only region is necessary only if the device IOMMU has
+no PASID support or cannot issue kernel-only address requests. In this
+case, if the kernel needs to share memory with the device, it has to
+share the iova address space with the user process via mmap, to prevent
+iova conflicts.
+
+The device user share region is used to share data buffers between the
+user process and the device. It can be merged into other regions, but a
+separate region helps with device state management. For example, the
+device can be started when this region is mapped.
+
+The static shared virtual memory region is used to share data buffers
+with the device and can be shared among queues / devices.
+Its size is set according to the application's requirements.
+
+
+The user API
+------------
+
+We adopt a polling-style interface in the user space: ::
+
+    int wd_request_queue(struct wd_queue *q);
+    void wd_release_queue(struct wd_queue *q);
+    int wd_send(struct wd_queue *q, void *req);
+    int wd_recv(struct wd_queue *q, void **req);
+    int wd_recv_sync(struct wd_queue *q, void **req);
+    void wd_flush(struct wd_queue *q);
+
+wd_recv_sync() is a wrapper around its non-sync version. It traps into
+the kernel and waits until the queue becomes available.
+
+If the queue does not support SVA/SVM, the following helper functions
+can be used to create static shared virtual memory: ::
+
+    void *wd_reserve_memory(struct wd_queue *q, size_t size);
+    int wd_share_reserved_memory(struct wd_queue *q,
+                                 struct wd_queue *target_q);
+
+The user API is not mandatory. It is simply a suggestion and a hint of
+what the kernel interface is supposed to be.
+
+
+The user driver
+---------------
+
+The queue file mmap space needs a user driver to wrap the communication
+protocol. Uacce provides some attributes in sysfs for the user driver to
+match the right accelerator accordingly.
+See Documentation/ABI/testing/sysfs-driver-uacce for more details.
+
+
+The Uacce register API
+-----------------------
+The register API is defined in uacce.h. ::
+
+    struct uacce_interface {
+        char name[32];
+        unsigned int flags;
+        struct uacce_ops *ops;
+    };
+
+According to the IOMMU capability, uacce_interface flags can be:
+
+UACCE_DEV_SVA (0x1)
+    Supports shared virtual addressing.
+
+UACCE_DEV_SHARE_DOMAIN (0)
+    Used for devices that do not support pasid.
+
+::
+
+    struct uacce_device *uacce_register(struct device *parent,
+                                        struct uacce_interface *interface);
+    void uacce_unregister(struct uacce_device *uacce);
+
+uacce_register() results can be:
+
+a. If the uacce module is not compiled, ERR_PTR(-ENODEV).
+b. Success with the desired flags.
+c. Success with negotiated flags, for example
+   uacce_interface.flags = UACCE_DEV_SVA but uacce->flags = ~UACCE_DEV_SVA.
+
+So the user driver needs to check the return value as well as the
+negotiated uacce->flags.
+
+
+The Memory Sharing Model
+------------------------
+The perfect form of a Uacce device is one that supports SVM/SVA. We built
+this upon Jean-Philippe Brucker's SVA patches. [1]
+
+If the hardware supports UACCE_DEV_SVA, the user process's page table is
+shared with the opened queue, so the device can access any address in the
+process address space and can raise a page fault if the physical page is
+not available yet.
+It can also access addresses in the kernel space, which are referred to
+by another page table particular to the kernel. Most IOMMU
+implementations can handle this with a tag on the address request of the
+device. For example, the ARM SMMU uses the SSV bit to indicate whether an
+address request is for kernel or user space.
+
+Queue file regions that can be used:
+
+UACCE_QFRT_MMIO: device mmio region (map to user)
+UACCE_QFRT_DUS:  device user share (map to dev and user)
+
+If the device does not support UACCE_DEV_SVA, Uacce allows only one
+process at a time. The DMA API cannot be used either, since Uacce will
+create an unmanaged iommu_domain for the device.
+Queue file regions that can be used:
+
+UACCE_QFRT_MMIO: device mmio region (map to user)
+UACCE_QFRT_DKO:  device kernel-only (map to dev, no user)
+UACCE_QFRT_DUS:  device user share (map to dev and user)
+UACCE_QFRT_SS:   static share memory (map to devs and user)
+
+
+The Fork Scenario
+=================
+For a process with allocated queues and shared memory, what happens when
+it forks a child?
+
+The fd of the queue is duplicated on fork, so the child can send requests
+to the same queue as its parent. But requests sent from any process other
+than the one that opened the queue will be blocked.
+
+It is recommended to add O_CLOEXEC to the queue file.
+
+The queue mmap space has VM_DONTCOPY set in its VMA, so the child loses
+all those VMAs.
+
+This is one reason why Uacce does not adopt the model used by VFIO and
+InfiniBand. Both solutions can set any user pointer for hardware sharing,
+but they cannot support fork while DMA is in progress; otherwise the
+copy-on-write procedure would make the parent process lose its physical
+pages.
+
+
+Difference from the VFIO and IB frameworks
+------------------------------------------
+The essential function of Uacce is to let the device access user
+addresses directly. Many device drivers in the kernel do the same, and
+both VFIO and IB can provide a similar function at the framework level.
+
+But Uacce has a different goal: "sharing the address space". It does not
+treat the request to the accelerator as an enclosed data structure; it
+treats the accelerator as another thread of the same process, so the
+accelerator can refer to any address used by the process.
+
+Both VFIO and IB treat this as "memory sharing", not "address sharing".
+They care more about sharing a block of memory. But if an address stored
+in that block refers to another memory region, the address may not be
+valid.
+
+By adding more constraints to the VFIO and IB frameworks we might, in
+some sense, achieve a similar goal.
+But we gave this up in the end. Both VFIO and IB make extra assumptions
+that are unnecessary for Uacce, and they would hurt each other if we
+tried to merge them.
+
+VFIO manages the resources of a piece of hardware as a "virtual device".
+If a device needs to serve a separate application, it must isolate the
+resources as a separate virtual device, and the life cycles of the
+application and the virtual device are unnecessarily decoupled. Most of
+the concepts needed to make it a "device", such as bus, driver and
+probe, are unnecessary as well. And the logic added to VFIO for address
+sharing does not help with "creating a virtual device".
+
+IB creates a "verbs" standard for sharing memory regions with another
+remote entity. Most of these verbs exist to synchronize memory regions
+between entities. This is not what an accelerator needs. An accelerator
+is in the same memory system as the CPU and refers to the same memory
+system as the CPU and the other devices, so the local memory terms/verbs
+are good enough for it; extra "verbs" are unnecessary. And its queue
+(like the queue pair in IB) is a communication channel direct to the
+accelerator hardware; it has nothing to do with memory itself.
+
+Further, both VFIO and IB use the "pin" (get_user_page) approach to lock
+local memory in place. This is flexible, but it can cause other
+problems. For example, if the user process forks a child, the COW
+procedure may make the parent process lose the pages it is sharing with
+the device. This may be fixed in the future, but it is not going to be
+easy. (There is a discussion about this from Linux Plumbers Conference
+2018 [2].)
+
+So we chose to build the solution directly on top of the IOMMU
+interface. The IOMMU is the essential way for a device and a process to
+share their page mapping from the hardware perspective, so it is safe to
+build a software solution on this assumption. Uacce manages the IOMMU
+interface for the accelerator device, so the device driver can export
+some of its resources to the user
+space. Uacce then makes sure the device and the process have the same
+address space.
+
+
+References
+==========
+.. [1] http://jpbrucker.net/sva/
+.. [2] https://lwn.net/Articles/774411/

From patchwork Mon Oct 14 06:48:54 2019
X-Patchwork-Submitter: Zhangfei Gao
X-Patchwork-Id: 176118
Delivered-To: patch@linaro.org
From: Zhangfei Gao
To: Greg Kroah-Hartman, Arnd Bergmann, jonathan.cameron@huawei.com,
    grant.likely@arm.com, jean-philippe, ilias.apalodimas@linaro.org,
    francois.ozog@linaro.org, kenneth-lee-2012@foxmail.com, Wangzhou
Cc: linux-accelerators@lists.ozlabs.org, linux-kernel@vger.kernel.org,
    Kenneth Lee, Zaibo Xu, Zhangfei Gao
Subject: [PATCH v5 2/3] uacce: add uacce driver
Date: Mon, 14 Oct 2019 14:48:54 +0800
Message-Id: <1571035735-31882-3-git-send-email-zhangfei.gao@linaro.org>
In-Reply-To: <1571035735-31882-1-git-send-email-zhangfei.gao@linaro.org>
References: <1571035735-31882-1-git-send-email-zhangfei.gao@linaro.org>

From: Kenneth Lee

Uacce (Unified/User-space-access-intended Accelerator Framework) aims to
provide Shared Virtual Addressing (SVA) between accelerators and
processes, so that an accelerator can access any data structure of the
main CPU.
This differs from conventional data sharing between the CPU and an I/O
device, which shares data content rather than addresses. Because the
address space is unified, the hardware and the user space of a process
can use the same virtual addresses when they communicate.

Uacce creates a chrdev for every registration; a queue is allocated to
the process when the chrdev is opened. The process can then access the
hardware resource by interacting with the queue file. By mmapping the
queue file space into user space, the process can put requests directly
to the hardware without a syscall into kernel space.

Signed-off-by: Kenneth Lee
Signed-off-by: Zaibo Xu
Signed-off-by: Zhou Wang
Signed-off-by: Zhangfei Gao
---
 Documentation/ABI/testing/sysfs-driver-uacce |  47 ++
 drivers/misc/Kconfig                         |   1 +
 drivers/misc/Makefile                        |   1 +
 drivers/misc/uacce/Kconfig                   |  13 +
 drivers/misc/uacce/Makefile                  |   2 +
 drivers/misc/uacce/uacce.c                   | 974 +++++++++++++++++++++++++++
 include/linux/uacce.h                        | 167 +++++
 include/uapi/misc/uacce/uacce.h              |  34 +
 8 files changed, 1239 insertions(+)
 create mode 100644 Documentation/ABI/testing/sysfs-driver-uacce
 create mode 100644 drivers/misc/uacce/Kconfig
 create mode 100644 drivers/misc/uacce/Makefile
 create mode 100644 drivers/misc/uacce/uacce.c
 create mode 100644 include/linux/uacce.h
 create mode 100644 include/uapi/misc/uacce/uacce.h

-- 
2.7.4

diff --git a/Documentation/ABI/testing/sysfs-driver-uacce b/Documentation/ABI/testing/sysfs-driver-uacce
new file mode 100644
index 0000000..b1a2c60
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-driver-uacce
@@ -0,0 +1,47 @@
+What:           /sys/class/uacce/hisi_zip-/id
+Date:           Oct 2019
+KernelVersion:  5.5
+Contact:        linux-accelerators@lists.ozlabs.org
+Description:    Id of the device.
+
+What:           /sys/class/uacce/hisi_zip-/api
+Date:           Oct 2019
+KernelVersion:  5.5
+Contact:        linux-accelerators@lists.ozlabs.org
+Description:    Api of the device, used by the application to match the
+                correct driver
+
+What:           /sys/class/uacce/hisi_zip-/flags
+Date:           Oct 2019
+KernelVersion:  5.5
+Contact:        linux-accelerators@lists.ozlabs.org
+Description:    Attributes of the device, see the UACCE_DEV_xxx flags
+                defined in uacce.h
+
+What:           /sys/class/uacce/hisi_zip-/available_instances
+Date:           Oct 2019
+KernelVersion:  5.5
+Contact:        linux-accelerators@lists.ozlabs.org
+Description:    Available instances left on the device
+
+What:           /sys/class/uacce/hisi_zip-/algorithms
+Date:           Oct 2019
+KernelVersion:  5.5
+Contact:        linux-accelerators@lists.ozlabs.org
+Description:    Algorithms supported by this accelerator
+
+What:           /sys/class/uacce/hisi_zip-/qfrs_size
+Date:           Oct 2019
+KernelVersion:  5.5
+Contact:        linux-accelerators@lists.ozlabs.org
+Description:    Page size of each queue file region
+
+What:           /sys/class/uacce/hisi_zip-/numa_distance
+Date:           Oct 2019
+KernelVersion:  5.5
+Contact:        linux-accelerators@lists.ozlabs.org
+Description:    Distance from the device node to the cpu node
+
+What:           /sys/class/uacce/hisi_zip-/node_id
+Date:           Oct 2019
+KernelVersion:  5.5
+Contact:        linux-accelerators@lists.ozlabs.org
+Description:    Id of the numa node
diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index c55b637..929feb0 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -481,4 +481,5 @@ source "drivers/misc/cxl/Kconfig"
 source "drivers/misc/ocxl/Kconfig"
 source "drivers/misc/cardreader/Kconfig"
 source "drivers/misc/habanalabs/Kconfig"
+source "drivers/misc/uacce/Kconfig"
 endmenu
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index c1860d3..9abf292 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -56,4 +56,5 @@ obj-$(CONFIG_OCXL) += ocxl/
 obj-y += cardreader/
 obj-$(CONFIG_PVPANIC) += pvpanic.o
 obj-$(CONFIG_HABANA_AI) += habanalabs/
+obj-$(CONFIG_UACCE) += uacce/
obj-$(CONFIG_XILINX_SDFEC) += xilinx_sdfec.o diff --git a/drivers/misc/uacce/Kconfig b/drivers/misc/uacce/Kconfig new file mode 100644 index 0000000..e854354 --- /dev/null +++ b/drivers/misc/uacce/Kconfig @@ -0,0 +1,13 @@ +config UACCE + tristate "Accelerator Framework for User Land" + depends on IOMMU_API + help + UACCE provides interface for the user process to access the hardware + without interaction with the kernel space in data path. + + The user-space interface is described in + include/uapi/misc/uacce.h + + See Documentation/misc-devices/uacce.rst for more details. + + If you don't know what to do here, say N. diff --git a/drivers/misc/uacce/Makefile b/drivers/misc/uacce/Makefile new file mode 100644 index 0000000..5b4374e --- /dev/null +++ b/drivers/misc/uacce/Makefile @@ -0,0 +1,2 @@ +# SPDX-License-Identifier: GPL-2.0-or-later +obj-$(CONFIG_UACCE) += uacce.o diff --git a/drivers/misc/uacce/uacce.c b/drivers/misc/uacce/uacce.c new file mode 100644 index 0000000..785e77a --- /dev/null +++ b/drivers/misc/uacce/uacce.c @@ -0,0 +1,974 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +#include +#include +#include +#include +#include +#include +#include +#include + +static struct class *uacce_class; +static DEFINE_IDR(uacce_idr); +static dev_t uacce_devt; +static DEFINE_MUTEX(uacce_mutex); +static const struct file_operations uacce_fops; + +static int uacce_queue_map_qfr(struct uacce_queue *q, + struct uacce_qfile_region *qfr) +{ + struct device *dev = q->uacce->pdev; + struct iommu_domain *domain = iommu_get_domain_for_dev(dev); + int i, j, ret; + + if (!(qfr->flags & UACCE_QFRF_MAP) || (qfr->flags & UACCE_QFRF_DMA)) + return 0; + + if (!domain) + return -ENODEV; + + for (i = 0; i < qfr->nr_pages; i++) { + ret = iommu_map(domain, qfr->iova + i * PAGE_SIZE, + page_to_phys(qfr->pages[i]), + PAGE_SIZE, qfr->prot | q->uacce->prot); + if (ret) + goto err_with_map_pages; + + get_page(qfr->pages[i]); + } + + return 0; + +err_with_map_pages: + for (j = i - 1; j >= 0; 
j--) { + iommu_unmap(domain, qfr->iova + j * PAGE_SIZE, PAGE_SIZE); + put_page(qfr->pages[j]); + } + return ret; +} + +static void uacce_queue_unmap_qfr(struct uacce_queue *q, + struct uacce_qfile_region *qfr) +{ + struct device *dev = q->uacce->pdev; + struct iommu_domain *domain = iommu_get_domain_for_dev(dev); + int i; + + if (!domain || !qfr) + return; + + if (!(qfr->flags & UACCE_QFRF_MAP) || (qfr->flags & UACCE_QFRF_DMA)) + return; + + for (i = qfr->nr_pages - 1; i >= 0; i--) { + iommu_unmap(domain, qfr->iova + i * PAGE_SIZE, PAGE_SIZE); + put_page(qfr->pages[i]); + } +} + +static int uacce_qfr_alloc_pages(struct uacce_qfile_region *qfr) +{ + int i, j; + + qfr->pages = kcalloc(qfr->nr_pages, sizeof(*qfr->pages), GFP_ATOMIC); + if (!qfr->pages) + return -ENOMEM; + + for (i = 0; i < qfr->nr_pages; i++) { + qfr->pages[i] = alloc_page(GFP_ATOMIC | __GFP_ZERO); + if (!qfr->pages[i]) + goto err_with_pages; + } + + return 0; + +err_with_pages: + for (j = i - 1; j >= 0; j--) + put_page(qfr->pages[j]); + + kfree(qfr->pages); + return -ENOMEM; +} + +static void uacce_qfr_free_pages(struct uacce_qfile_region *qfr) +{ + int i; + + for (i = 0; i < qfr->nr_pages; i++) + put_page(qfr->pages[i]); + + kfree(qfr->pages); +} + +static inline int uacce_queue_mmap_qfr(struct uacce_queue *q, + struct uacce_qfile_region *qfr, + struct vm_area_struct *vma) +{ + int i, ret; + + for (i = 0; i < qfr->nr_pages; i++) { + ret = remap_pfn_range(vma, vma->vm_start + (i << PAGE_SHIFT), + page_to_pfn(qfr->pages[i]), PAGE_SIZE, + vma->vm_page_prot); + if (ret) + return ret; + } + + return 0; +} + +static struct uacce_qfile_region * +uacce_create_region(struct uacce_queue *q, struct vm_area_struct *vma, + enum uacce_qfrt type, unsigned int flags) +{ + struct uacce_qfile_region *qfr; + struct uacce_device *uacce = q->uacce; + unsigned long vm_pgoff; + int ret = -ENOMEM; + + qfr = kzalloc(sizeof(*qfr), GFP_ATOMIC); + if (!qfr) + return ERR_PTR(-ENOMEM); + + qfr->type = type; + qfr->flags = flags; 
+ qfr->iova = vma->vm_start; + qfr->nr_pages = vma_pages(vma); + + if (vma->vm_flags & VM_READ) + qfr->prot |= IOMMU_READ; + + if (vma->vm_flags & VM_WRITE) + qfr->prot |= IOMMU_WRITE; + + if (flags & UACCE_QFRF_SELFMT) { + if (!uacce->ops->mmap) { + ret = -EINVAL; + goto err_with_qfr; + } + + ret = uacce->ops->mmap(q, vma, qfr); + if (ret) + goto err_with_qfr; + return qfr; + } + + /* allocate memory */ + if (flags & UACCE_QFRF_DMA) { + qfr->kaddr = dma_alloc_coherent(uacce->pdev, + qfr->nr_pages << PAGE_SHIFT, + &qfr->dma, GFP_KERNEL); + if (!qfr->kaddr) { + ret = -ENOMEM; + goto err_with_qfr; + } + } else { + ret = uacce_qfr_alloc_pages(qfr); + if (ret) + goto err_with_qfr; + } + + /* map to device */ + ret = uacce_queue_map_qfr(q, qfr); + if (ret) + goto err_with_pages; + + /* mmap to user space */ + if (flags & UACCE_QFRF_MMAP) { + if (flags & UACCE_QFRF_DMA) { + /* dma_mmap_coherent() requires vm_pgoff as 0 + * restore vm_pfoff to initial value for mmap() + */ + vm_pgoff = vma->vm_pgoff; + vma->vm_pgoff = 0; + ret = dma_mmap_coherent(uacce->pdev, vma, qfr->kaddr, + qfr->dma, + qfr->nr_pages << PAGE_SHIFT); + vma->vm_pgoff = vm_pgoff; + } else { + ret = uacce_queue_mmap_qfr(q, qfr, vma); + } + + if (ret) + goto err_with_mapped_qfr; + } + + return qfr; + +err_with_mapped_qfr: + uacce_queue_unmap_qfr(q, qfr); +err_with_pages: + if (flags & UACCE_QFRF_DMA) + dma_free_coherent(uacce->pdev, qfr->nr_pages << PAGE_SHIFT, + qfr->kaddr, qfr->dma); + else + uacce_qfr_free_pages(qfr); +err_with_qfr: + kfree(qfr); + + return ERR_PTR(ret); +} + +static void uacce_destroy_region(struct uacce_queue *q, + struct uacce_qfile_region *qfr) +{ + struct uacce_device *uacce = q->uacce; + + if (qfr->flags & UACCE_QFRF_DMA) { + dma_free_coherent(uacce->pdev, qfr->nr_pages << PAGE_SHIFT, + qfr->kaddr, qfr->dma); + } else if (qfr->pages) { + if (qfr->flags & UACCE_QFRF_KMAP && qfr->kaddr) { + vunmap(qfr->kaddr); + qfr->kaddr = NULL; + } + + uacce_qfr_free_pages(qfr); + } + kfree(qfr); 
+}
+
+static long uacce_cmd_share_qfr(struct uacce_queue *tgt, int fd)
+{
+	struct file *filep;
+	struct uacce_queue *src;
+	int ret = -EINVAL;
+
+	mutex_lock(&uacce_mutex);
+
+	if (tgt->state != UACCE_Q_STARTED)
+		goto out_with_lock;
+
+	filep = fget(fd);
+	if (!filep)
+		goto out_with_lock;
+
+	if (filep->f_op != &uacce_fops)
+		goto out_with_fd;
+
+	src = filep->private_data;
+	if (!src)
+		goto out_with_fd;
+
+	if (tgt->uacce->flags & UACCE_DEV_SVA)
+		goto out_with_fd;
+
+	if (!src->qfrs[UACCE_QFRT_SS] || tgt->qfrs[UACCE_QFRT_SS])
+		goto out_with_fd;
+
+	ret = uacce_queue_map_qfr(tgt, src->qfrs[UACCE_QFRT_SS]);
+	if (ret)
+		goto out_with_fd;
+
+	tgt->qfrs[UACCE_QFRT_SS] = src->qfrs[UACCE_QFRT_SS];
+	list_add(&tgt->list, &src->qfrs[UACCE_QFRT_SS]->qs);
+
+out_with_fd:
+	fput(filep);
+out_with_lock:
+	mutex_unlock(&uacce_mutex);
+	return ret;
+}
+
+static int uacce_start_queue(struct uacce_queue *q)
+{
+	struct uacce_qfile_region *qfr;
+	int ret = -EINVAL;
+	int i, j;
+
+	mutex_lock(&uacce_mutex);
+
+	if (q->state != UACCE_Q_INIT)
+		goto out_with_lock;
+
+	/*
+	 * map KMAP qfr to kernel
+	 * vmap should be done in non-spinlocked context!
+	 */
+	for (i = 0; i < UACCE_QFRT_MAX; i++) {
+		qfr = q->qfrs[i];
+		if (qfr && (qfr->flags & UACCE_QFRF_KMAP) && !qfr->kaddr) {
+			qfr->kaddr = vmap(qfr->pages, qfr->nr_pages, VM_MAP,
+					  PAGE_KERNEL);
+			if (!qfr->kaddr) {
+				ret = -ENOMEM;
+				goto err_with_vmap;
+			}
+		}
+	}
+
+	if (q->uacce->ops->start_queue) {
+		ret = q->uacce->ops->start_queue(q);
+		if (ret < 0)
+			goto err_with_vmap;
+	}
+
+	q->state = UACCE_Q_STARTED;
+	mutex_unlock(&uacce_mutex);
+
+	return 0;
+
+err_with_vmap:
+	for (j = i; j >= 0; j--) {
+		qfr = q->qfrs[j];
+		if (qfr && qfr->kaddr) {
+			vunmap(qfr->kaddr);
+			qfr->kaddr = NULL;
+		}
+	}
+out_with_lock:
+	mutex_unlock(&uacce_mutex);
+	return ret;
+}
+
+/*
+ * When user space releases a queue, everything attached to the queue
+ * should be released immediately.
+ */
+static long uacce_put_queue(struct uacce_queue *q)
+{
+	struct uacce_device *uacce = q->uacce;
+
+	mutex_lock(&uacce_mutex);
+
+	if ((q->state == UACCE_Q_STARTED) && uacce->ops->stop_queue)
+		uacce->ops->stop_queue(q);
+
+	if ((q->state == UACCE_Q_INIT || q->state == UACCE_Q_STARTED) &&
+	     uacce->ops->put_queue)
+		uacce->ops->put_queue(q);
+
+	q->state = UACCE_Q_ZOMBIE;
+	mutex_unlock(&uacce_mutex);
+
+	return 0;
+}
+
+static long uacce_fops_unl_ioctl(struct file *filep,
+				 unsigned int cmd, unsigned long arg)
+{
+	struct uacce_queue *q = filep->private_data;
+	struct uacce_device *uacce = q->uacce;
+
+	switch (cmd) {
+	case UACCE_CMD_SHARE_SVAS:
+		return uacce_cmd_share_qfr(q, arg);
+
+	case UACCE_CMD_START:
+		return uacce_start_queue(q);
+
+	case UACCE_CMD_PUT_Q:
+		return uacce_put_queue(q);
+
+	default:
+		if (!uacce->ops->ioctl)
+			return -EINVAL;
+
+		return uacce->ops->ioctl(q, cmd, arg);
+	}
+}
+
+#ifdef CONFIG_COMPAT
+static long uacce_fops_compat_ioctl(struct file *filep,
+				    unsigned int cmd, unsigned long arg)
+{
+	arg = (unsigned long)compat_ptr(arg);
+
+	return uacce_fops_unl_ioctl(filep, cmd, arg);
+}
+#endif
+
+static int uacce_dev_open_check(struct uacce_device *uacce)
+{
+	if (uacce->flags & UACCE_DEV_SVA)
+		return 0;
+
+	/*
+	 * The device can only be opened once if it does not support pasid
+	 */
+	if (kref_read(&uacce->cdev->kobj.kref) > 2)
+		return -EBUSY;
+
+	return 0;
+}
+
+static int uacce_fops_open(struct inode *inode, struct file *filep)
+{
+	struct uacce_queue *q;
+	struct iommu_sva *handle = NULL;
+	struct uacce_device *uacce;
+	int ret;
+	int pasid = 0;
+
+	uacce = idr_find(&uacce_idr, iminor(inode));
+	if (!uacce)
+		return -ENODEV;
+
+	if (!try_module_get(uacce->pdev->driver->owner))
+		return -ENODEV;
+
+	ret = uacce_dev_open_check(uacce);
+	if (ret)
+		goto out_with_module;
+
+	if (uacce->flags & UACCE_DEV_SVA) {
+		handle = iommu_sva_bind_device(uacce->pdev, current->mm, NULL);
+		if (IS_ERR(handle)) {
+			ret = PTR_ERR(handle);
+			goto out_with_module;
+		}
+		pasid =
iommu_sva_get_pasid(handle); + } + + q = kzalloc(sizeof(struct uacce_queue), GFP_KERNEL); + if (!q) { + ret = -ENOMEM; + goto out_with_module; + } + + if (uacce->ops->get_queue) { + ret = uacce->ops->get_queue(uacce, pasid, q); + if (ret < 0) + goto out_with_mem; + } + + q->pasid = pasid; + q->handle = handle; + q->uacce = uacce; + q->mm = current->mm; + memset(q->qfrs, 0, sizeof(q->qfrs)); + INIT_LIST_HEAD(&q->list); + init_waitqueue_head(&q->wait); + filep->private_data = q; + q->state = UACCE_Q_INIT; + + return 0; + +out_with_mem: + kfree(q); +out_with_module: + module_put(uacce->pdev->driver->owner); + return ret; +} + +static int uacce_fops_release(struct inode *inode, struct file *filep) +{ + struct uacce_queue *q = filep->private_data; + struct uacce_qfile_region *qfr; + struct uacce_device *uacce = q->uacce; + bool is_to_free_region; + int free_pages = 0; + int i; + + mutex_lock(&uacce_mutex); + + if ((q->state == UACCE_Q_STARTED) && uacce->ops->stop_queue) + uacce->ops->stop_queue(q); + + for (i = 0; i < UACCE_QFRT_MAX; i++) { + qfr = q->qfrs[i]; + if (!qfr) + continue; + + is_to_free_region = false; + uacce_queue_unmap_qfr(q, qfr); + if (i == UACCE_QFRT_SS) { + list_del(&q->list); + if (list_empty(&qfr->qs)) + is_to_free_region = true; + } else + is_to_free_region = true; + + if (is_to_free_region) { + free_pages += qfr->nr_pages; + uacce_destroy_region(q, qfr); + } + + qfr = NULL; + } + + if (current->mm == q->mm) { + down_write(&q->mm->mmap_sem); + q->mm->data_vm -= free_pages; + up_write(&q->mm->mmap_sem); + } + + if (uacce->flags & UACCE_DEV_SVA) + iommu_sva_unbind_device(q->handle); + + if ((q->state == UACCE_Q_INIT || q->state == UACCE_Q_STARTED) && + uacce->ops->put_queue) + uacce->ops->put_queue(q); + + kfree(q); + mutex_unlock(&uacce_mutex); + + module_put(uacce->pdev->driver->owner); + + return 0; +} + +static int uacce_fops_mmap(struct file *filep, struct vm_area_struct *vma) +{ + struct uacce_queue *q = filep->private_data; + struct 
uacce_device *uacce = q->uacce; + struct uacce_qfile_region *qfr; + enum uacce_qfrt type = 0; + unsigned int flags = 0; + int ret; + + if (vma->vm_pgoff < UACCE_QFRT_MAX) + type = vma->vm_pgoff; + + vma->vm_flags |= VM_DONTCOPY | VM_DONTEXPAND; + + mutex_lock(&uacce_mutex); + + /* fixme: if the region need no pages, we don't need to check it */ + if (q->mm->data_vm + vma_pages(vma) > + rlimit(RLIMIT_DATA) >> PAGE_SHIFT) { + ret = -ENOMEM; + goto out_with_lock; + } + + if (q->qfrs[type]) { + ret = -EBUSY; + goto out_with_lock; + } + + switch (type) { + case UACCE_QFRT_MMIO: + flags = UACCE_QFRF_SELFMT; + break; + + case UACCE_QFRT_SS: + if (q->state != UACCE_Q_STARTED) { + ret = -EINVAL; + goto out_with_lock; + } + + if (uacce->flags & UACCE_DEV_SVA) { + ret = -EINVAL; + goto out_with_lock; + } + + flags = UACCE_QFRF_MAP | UACCE_QFRF_MMAP; + + break; + + case UACCE_QFRT_DKO: + if (uacce->flags & UACCE_DEV_SVA) { + ret = -EINVAL; + goto out_with_lock; + } + + flags = UACCE_QFRF_MAP | UACCE_QFRF_KMAP; + + break; + + case UACCE_QFRT_DUS: + if (uacce->flags & UACCE_DEV_SVA) { + flags = UACCE_QFRF_SELFMT; + break; + } + + flags = UACCE_QFRF_MAP | UACCE_QFRF_MMAP; + break; + + default: + WARN_ON(&uacce->dev); + break; + } + + qfr = uacce_create_region(q, vma, type, flags); + if (IS_ERR(qfr)) { + ret = PTR_ERR(qfr); + goto out_with_lock; + } + q->qfrs[type] = qfr; + + if (type == UACCE_QFRT_SS) { + INIT_LIST_HEAD(&qfr->qs); + list_add(&q->list, &q->qfrs[type]->qs); + } + + mutex_unlock(&uacce_mutex); + + if (qfr->pages) + q->mm->data_vm += qfr->nr_pages; + + return 0; + +out_with_lock: + mutex_unlock(&uacce_mutex); + return ret; +} + +static __poll_t uacce_fops_poll(struct file *file, poll_table *wait) +{ + struct uacce_queue *q = file->private_data; + struct uacce_device *uacce = q->uacce; + + poll_wait(file, &q->wait, wait); + if (uacce->ops->is_q_updated && uacce->ops->is_q_updated(q)) + return EPOLLIN | EPOLLRDNORM; + + return 0; +} + +static const struct 
file_operations uacce_fops = { + .owner = THIS_MODULE, + .open = uacce_fops_open, + .release = uacce_fops_release, + .unlocked_ioctl = uacce_fops_unl_ioctl, +#ifdef CONFIG_COMPAT + .compat_ioctl = uacce_fops_compat_ioctl, +#endif + .mmap = uacce_fops_mmap, + .poll = uacce_fops_poll, +}; + +#define to_uacce_device(dev) container_of(dev, struct uacce_device, dev) + +static ssize_t id_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct uacce_device *uacce = to_uacce_device(dev); + + return sprintf(buf, "%d\n", uacce->dev_id); +} + +static ssize_t api_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct uacce_device *uacce = to_uacce_device(dev); + + return sprintf(buf, "%s\n", uacce->api_ver); +} + +static ssize_t numa_distance_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct uacce_device *uacce = to_uacce_device(dev); + int distance; + + distance = node_distance(smp_processor_id(), uacce->pdev->numa_node); + + return sprintf(buf, "%d\n", abs(distance)); +} + +static ssize_t node_id_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct uacce_device *uacce = to_uacce_device(dev); + int node_id; + + node_id = dev_to_node(uacce->pdev); + + return sprintf(buf, "%d\n", node_id); +} + +static ssize_t flags_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct uacce_device *uacce = to_uacce_device(dev); + + return sprintf(buf, "%u\n", uacce->flags); +} + +static ssize_t available_instances_show(struct device *dev, + struct device_attribute *attr, + char *buf) +{ + struct uacce_device *uacce = to_uacce_device(dev); + int val = 0; + + if (uacce->ops->get_available_instances) + val = uacce->ops->get_available_instances(uacce); + + return sprintf(buf, "%d\n", val); +} + +static ssize_t algorithms_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct uacce_device *uacce = to_uacce_device(dev); + + return 
sprintf(buf, "%s", uacce->algs); +} + +static ssize_t qfrs_size_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct uacce_device *uacce = to_uacce_device(dev); + unsigned long size; + int i, ret; + + for (i = 0, ret = 0; i < UACCE_QFRT_MAX; i++) { + size = uacce->qf_pg_size[i] << PAGE_SHIFT; + if (i == UACCE_QFRT_SS) + break; + ret += sprintf(buf + ret, "%lu\t", size); + } + ret += sprintf(buf + ret, "%lu\n", size); + + return ret; +} + +static DEVICE_ATTR_RO(id); +static DEVICE_ATTR_RO(api); +static DEVICE_ATTR_RO(numa_distance); +static DEVICE_ATTR_RO(node_id); +static DEVICE_ATTR_RO(flags); +static DEVICE_ATTR_RO(available_instances); +static DEVICE_ATTR_RO(algorithms); +static DEVICE_ATTR_RO(qfrs_size); + +static struct attribute *uacce_dev_attrs[] = { + &dev_attr_id.attr, + &dev_attr_api.attr, + &dev_attr_node_id.attr, + &dev_attr_numa_distance.attr, + &dev_attr_flags.attr, + &dev_attr_available_instances.attr, + &dev_attr_algorithms.attr, + &dev_attr_qfrs_size.attr, + NULL, +}; +ATTRIBUTE_GROUPS(uacce_dev); + +static void uacce_release(struct device *dev) +{ + struct uacce_device *uacce = to_uacce_device(dev); + + kfree(uacce); +} + +/* Borrowed from VFIO to fix msi translation */ +static bool uacce_iommu_has_sw_msi(struct iommu_group *group, + phys_addr_t *base) +{ + struct list_head group_resv_regions; + struct iommu_resv_region *region, *next; + bool ret = false; + + INIT_LIST_HEAD(&group_resv_regions); + iommu_get_group_resv_regions(group, &group_resv_regions); + list_for_each_entry(region, &group_resv_regions, list) { + /* + * The presence of any 'real' MSI regions should take + * precedence over the software-managed one if the + * IOMMU driver happens to advertise both types. 
+ */
+		if (region->type == IOMMU_RESV_MSI) {
+			ret = false;
+			break;
+		}
+
+		if (region->type == IOMMU_RESV_SW_MSI) {
+			*base = region->start;
+			ret = true;
+		}
+	}
+
+	list_for_each_entry_safe(region, next, &group_resv_regions, list)
+		kfree(region);
+
+	return ret;
+}
+
+static int uacce_set_iommu_domain(struct uacce_device *uacce)
+{
+	struct iommu_domain *domain;
+	struct iommu_group *group;
+	struct device *dev = uacce->pdev;
+	bool resv_msi;
+	phys_addr_t resv_msi_base = 0;
+	int ret;
+
+	if (uacce->flags & UACCE_DEV_SVA)
+		return 0;
+
+	/* allocate and attach an unmanaged domain */
+	domain = iommu_domain_alloc(uacce->pdev->bus);
+	if (!domain) {
+		dev_err(&uacce->dev, "cannot get domain for iommu\n");
+		return -ENODEV;
+	}
+
+	ret = iommu_attach_device(domain, uacce->pdev);
+	if (ret)
+		goto err_with_domain;
+
+	if (iommu_capable(dev->bus, IOMMU_CAP_CACHE_COHERENCY))
+		uacce->prot |= IOMMU_CACHE;
+
+	group = iommu_group_get(dev);
+	if (!group) {
+		ret = -EINVAL;
+		goto err_with_domain;
+	}
+
+	resv_msi = uacce_iommu_has_sw_msi(group, &resv_msi_base);
+	iommu_group_put(group);
+
+	if (resv_msi) {
+		if (!irq_domain_check_msi_remap() &&
+		    !iommu_capable(dev->bus, IOMMU_CAP_INTR_REMAP)) {
+			dev_warn(dev, "No interrupt remapping support!\n");
+			ret = -EPERM;
+			goto err_with_domain;
+		}
+
+		ret = iommu_get_msi_cookie(domain, resv_msi_base);
+		if (ret)
+			goto err_with_domain;
+	}
+
+	return 0;
+
+err_with_domain:
+	iommu_domain_free(domain);
+	return ret;
+}
+
+static void uacce_unset_iommu_domain(struct uacce_device *uacce)
+{
+	struct iommu_domain *domain;
+
+	if (uacce->flags & UACCE_DEV_SVA)
+		return;
+
+	domain = iommu_get_domain_for_dev(uacce->pdev);
+	if (!domain) {
+		dev_err(&uacce->dev, "bug: no domain attached to device\n");
+		return;
+	}
+
+	iommu_detach_device(domain, uacce->pdev);
+	iommu_domain_free(domain);
+}
+
+/**
+ * uacce_register - register an accelerator
+ * @parent: the parent device of the accelerator
+ * @interface: the uacce_interface describing the accelerator
+ */
+struct uacce_device *uacce_register(struct device
*parent,
+				    struct uacce_interface *interface)
+{
+	int ret;
+	struct uacce_device *uacce;
+	unsigned int flags = interface->flags;
+
+	uacce = kzalloc(sizeof(struct uacce_device), GFP_KERNEL);
+	if (!uacce)
+		return ERR_PTR(-ENOMEM);
+
+	if (flags & UACCE_DEV_SVA) {
+		ret = iommu_dev_enable_feature(parent, IOMMU_DEV_FEAT_SVA);
+		if (ret)
+			flags &= ~UACCE_DEV_SVA;
+	}
+
+	uacce->pdev = parent;
+	uacce->flags = flags;
+	uacce->ops = interface->ops;
+
+	ret = uacce_set_iommu_domain(uacce);
+	if (ret)
+		goto err_free;
+
+	mutex_lock(&uacce_mutex);
+
+	ret = idr_alloc(&uacce_idr, uacce, 0, 0, GFP_KERNEL);
+	if (ret < 0)
+		goto err_with_lock;
+
+	uacce->cdev = cdev_alloc();
+	if (!uacce->cdev) {
+		ret = -ENOMEM;
+		goto err_with_idr;
+	}
+	uacce->cdev->ops = &uacce_fops;
+	uacce->dev_id = ret;
+	uacce->cdev->owner = THIS_MODULE;
+	device_initialize(&uacce->dev);
+	uacce->dev.devt = MKDEV(MAJOR(uacce_devt), uacce->dev_id);
+	uacce->dev.class = uacce_class;
+	uacce->dev.groups = uacce_dev_groups;
+	uacce->dev.parent = uacce->pdev;
+	uacce->dev.release = uacce_release;
+	dev_set_name(&uacce->dev, "%s-%d", interface->name, uacce->dev_id);
+	ret = cdev_device_add(uacce->cdev, &uacce->dev);
+	if (ret)
+		goto err_with_idr;
+
+	mutex_unlock(&uacce_mutex);
+
+	return uacce;
+
+err_with_idr:
+	idr_remove(&uacce_idr, uacce->dev_id);
+err_with_lock:
+	mutex_unlock(&uacce_mutex);
+	uacce_unset_iommu_domain(uacce);
+err_free:
+	if (flags & UACCE_DEV_SVA)
+		iommu_dev_disable_feature(uacce->pdev, IOMMU_DEV_FEAT_SVA);
+	kfree(uacce);
+	return ERR_PTR(ret);
+}
+EXPORT_SYMBOL_GPL(uacce_register);
+
+/**
+ * uacce_unregister - unregisters a uacce
+ * @uacce: the accelerator to unregister
+ *
+ * Unregister an accelerator that was previously successfully registered with
+ * uacce_register().
+ */ +void uacce_unregister(struct uacce_device *uacce) +{ + mutex_lock(&uacce_mutex); + + if (uacce->flags & UACCE_DEV_SVA) + iommu_dev_disable_feature(uacce->pdev, IOMMU_DEV_FEAT_SVA); + + uacce_unset_iommu_domain(uacce); + cdev_device_del(uacce->cdev, &uacce->dev); + idr_remove(&uacce_idr, uacce->dev_id); + put_device(&uacce->dev); + + mutex_unlock(&uacce_mutex); +} +EXPORT_SYMBOL_GPL(uacce_unregister); + +static int __init uacce_init(void) +{ + int ret; + + uacce_class = class_create(THIS_MODULE, UACCE_NAME); + if (IS_ERR(uacce_class)) { + ret = PTR_ERR(uacce_class); + goto err; + } + + ret = alloc_chrdev_region(&uacce_devt, 0, MINORMASK, UACCE_NAME); + if (ret) + goto err_with_class; + + return 0; + +err_with_class: + class_destroy(uacce_class); +err: + return ret; +} + +static __exit void uacce_exit(void) +{ + unregister_chrdev_region(uacce_devt, MINORMASK); + class_destroy(uacce_class); + idr_destroy(&uacce_idr); +} + +subsys_initcall(uacce_init); +module_exit(uacce_exit); + +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Hisilicon Tech. 
Co., Ltd."); +MODULE_DESCRIPTION("Accelerator interface for Userland applications"); diff --git a/include/linux/uacce.h b/include/linux/uacce.h new file mode 100644 index 0000000..9137f3d --- /dev/null +++ b/include/linux/uacce.h @@ -0,0 +1,167 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +#ifndef _LINUX_UACCE_H +#define _LINUX_UACCE_H + +#include +#include + +#define UACCE_NAME "uacce" + +struct uacce_queue; +struct uacce_device; + +/* uacce queue file flag, requires different operation */ +#define UACCE_QFRF_MAP BIT(0) /* map to current queue */ +#define UACCE_QFRF_MMAP BIT(1) /* map to user space */ +#define UACCE_QFRF_KMAP BIT(2) /* map to kernel space */ +#define UACCE_QFRF_DMA BIT(3) /* use dma api for the region */ +#define UACCE_QFRF_SELFMT BIT(4) /* self maintained qfr */ + +/** + * struct uacce_qfile_region - structure of queue file region + * @type: type of the qfr + * @iova: iova share between user and device space + * @pages: pages pointer of the qfr memory + * @nr_pages: page numbers of the qfr memory + * @prot: qfr protection flag + * @flags: flags of qfr + * @qs: list sharing the same region, for ss region + * @kaddr: kernel addr of the qfr + * @dma: dma address, if created by dma api + */ +struct uacce_qfile_region { + enum uacce_qfrt type; + unsigned long iova; + struct page **pages; + u32 nr_pages; + u32 prot; + u32 flags; + struct list_head qs; + void *kaddr; + dma_addr_t dma; +}; + +/** + * struct uacce_ops - uacce device operations + * @get_available_instances: get available instances left of the device + * @get_queue: get a queue from the device + * @put_queue: free a queue to the device + * @start_queue: make the queue start work after get_queue + * @stop_queue: make the queue stop work before put_queue + * @is_q_updated: check whether the task is finished + * @mask_notify: mask the task irq of queue + * @mmap: mmap addresses of queue to user space + * @reset: reset the uacce device + * @reset_queue: reset the queue + * @ioctl: ioctl 
for user space users of the queue + */ +struct uacce_ops { + int (*get_available_instances)(struct uacce_device *uacce); + int (*get_queue)(struct uacce_device *uacce, unsigned long arg, + struct uacce_queue *q); + void (*put_queue)(struct uacce_queue *q); + int (*start_queue)(struct uacce_queue *q); + void (*stop_queue)(struct uacce_queue *q); + int (*is_q_updated)(struct uacce_queue *q); + void (*mask_notify)(struct uacce_queue *q, int event_mask); + int (*mmap)(struct uacce_queue *q, struct vm_area_struct *vma, + struct uacce_qfile_region *qfr); + int (*reset)(struct uacce_device *uacce); + int (*reset_queue)(struct uacce_queue *q); + long (*ioctl)(struct uacce_queue *q, unsigned int cmd, + unsigned long arg); +}; + +/** + * struct uacce_interface + * @name: the uacce device name. Will show up in sysfs + * @flags: uacce device attributes + * @ops: pointer to the struct uacce_ops + * + * This structure is used for the uacce_register() + */ +struct uacce_interface { + char name[32]; + unsigned int flags; + struct uacce_ops *ops; +}; + +enum uacce_q_state { + UACCE_Q_INIT, + UACCE_Q_STARTED, + UACCE_Q_ZOMBIE, +}; + +/** + * struct uacce_queue + * @uacce: pointer to uacce + * @priv: private pointer + * @wait: wait queue head + * @pasid: pasid of the queue + * @handle: iommu_sva handle return from iommu_sva_bind_device + * @list: share list for qfr->qs + * @mm: current->mm + * @qfrs: pointer of qfr regions + */ +struct uacce_queue { + struct uacce_device *uacce; + void *priv; + wait_queue_head_t wait; + int pasid; + struct iommu_sva *handle; + struct list_head list; + struct mm_struct *mm; + struct uacce_qfile_region *qfrs[UACCE_QFRT_MAX]; + enum uacce_q_state state; +}; + +/** + * struct uacce_device + * @algs: supported algorithms + * @api_ver: api version + * @qf_pg_size: page size of the queue file regions + * @ops: pointer to the struct uacce_ops + * @pdev: pointer to the parent device + * @is_vf: whether virtual function + * @flags: uacce attributes + * 
@dev_id: id of the uacce device + * @prot: uacce protection flag + * @cdev: cdev of the uacce + * @dev: dev of the uacce + * @priv: private pointer of the uacce + */ +struct uacce_device { + const char *algs; + const char *api_ver; + unsigned long qf_pg_size[UACCE_QFRT_MAX]; + struct uacce_ops *ops; + struct device *pdev; + bool is_vf; + u32 flags; + u32 dev_id; + u32 prot; + struct cdev *cdev; + struct device dev; + void *priv; +}; + +#if IS_ENABLED(CONFIG_UACCE) + +struct uacce_device *uacce_register(struct device *parent, + struct uacce_interface *interface); +void uacce_unregister(struct uacce_device *uacce); + +#else /* CONFIG_UACCE */ + +static inline +struct uacce_device *uacce_register(struct device *parent, + struct uacce_interface *interface) +{ + return ERR_PTR(-ENODEV); +} + +static inline void uacce_unregister(struct uacce_device *uacce) {} + +#endif /* CONFIG_UACCE */ + +#endif /* _LINUX_UACCE_H */ diff --git a/include/uapi/misc/uacce/uacce.h b/include/uapi/misc/uacce/uacce.h new file mode 100644 index 0000000..5c64780 --- /dev/null +++ b/include/uapi/misc/uacce/uacce.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +#ifndef _UAPIUUACCE_H +#define _UAPIUUACCE_H + +#include +#include + +#define UACCE_CMD_SHARE_SVAS _IO('W', 0) +#define UACCE_CMD_START _IO('W', 1) +#define UACCE_CMD_PUT_Q _IO('W', 2) + +/** + * UACCE Device flags: + * + * SHARE_DOMAIN: no PASID, can share sva only for one process and the kernel + * SVA: Shared Virtual Addresses + * Support PASID + * Support device page fault (pcie device) or smmu stall (platform device) + */ + +enum { + UACCE_DEV_SHARE_DOMAIN = 0x0, + UACCE_DEV_SVA = 0x1, +}; + +enum uacce_qfrt { + UACCE_QFRT_MMIO = 0, /* device mmio region */ + UACCE_QFRT_DKO = 1, /* device kernel-only */ + UACCE_QFRT_DUS = 2, /* device user share */ + UACCE_QFRT_SS = 3, /* static share memory */ + UACCE_QFRT_MAX = 16, +}; + +#endif From patchwork Mon Oct 14 06:48:55 2019 Content-Type: text/plain; charset="utf-8" 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Zhangfei Gao
X-Patchwork-Id: 176119
Delivered-To: patch@linaro.org
From: Zhangfei Gao
To: Greg Kroah-Hartman, Arnd Bergmann, jonathan.cameron@huawei.com,
 grant.likely@arm.com, jean-philippe, ilias.apalodimas@linaro.org,
 francois.ozog@linaro.org, kenneth-lee-2012@foxmail.com, Wangzhou
Cc: linux-accelerators@lists.ozlabs.org, linux-kernel@vger.kernel.org,
 Zhangfei Gao
Subject: [PATCH v5 3/3] crypto: hisilicon - register zip engine to uacce
Date: Mon, 14 Oct 2019 14:48:55 +0800
Message-Id: <1571035735-31882-4-git-send-email-zhangfei.gao@linaro.org>
In-Reply-To: <1571035735-31882-1-git-send-email-zhangfei.gao@linaro.org>
References: <1571035735-31882-1-git-send-email-zhangfei.gao@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

qm uses uacce as an example here; this will be resubmitted after uacce is
merged.
Signed-off-by: Zhangfei Gao Signed-off-by: Zhou Wang --- drivers/crypto/hisilicon/qm.c | 254 ++++++++++++++++++++++++++++++-- drivers/crypto/hisilicon/qm.h | 13 +- drivers/crypto/hisilicon/zip/zip_main.c | 39 ++--- include/uapi/misc/uacce/qm.h | 15 ++ 4 files changed, 285 insertions(+), 36 deletions(-) create mode 100644 include/uapi/misc/uacce/qm.h -- 2.7.4 diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c index f975c39..60067d8 100644 --- a/drivers/crypto/hisilicon/qm.c +++ b/drivers/crypto/hisilicon/qm.c @@ -9,6 +9,9 @@ #include #include #include +#include +#include +#include #include "qm.h" /* eq/aeq irq enable */ @@ -459,17 +462,22 @@ static void qm_cq_head_update(struct hisi_qp *qp) static void qm_poll_qp(struct hisi_qp *qp, struct hisi_qm *qm) { - struct qm_cqe *cqe = qp->cqe + qp->qp_status.cq_head; - - if (qp->req_cb) { - while (QM_CQE_PHASE(cqe) == qp->qp_status.cqc_phase) { - dma_rmb(); - qp->req_cb(qp, qp->sqe + qm->sqe_size * cqe->sq_head); - qm_cq_head_update(qp); - cqe = qp->cqe + qp->qp_status.cq_head; - qm_db(qm, qp->qp_id, QM_DOORBELL_CMD_CQ, - qp->qp_status.cq_head, 0); - atomic_dec(&qp->qp_status.used); + struct qm_cqe *cqe; + + if (qp->event_cb) { + qp->event_cb(qp); + } else { + cqe = qp->cqe + qp->qp_status.cq_head; + + if (qp->req_cb) { + while (QM_CQE_PHASE(cqe) == qp->qp_status.cqc_phase) { + dma_rmb(); + qp->req_cb(qp, qp->sqe + qm->sqe_size * + cqe->sq_head); + qm_cq_head_update(qp); + cqe = qp->cqe + qp->qp_status.cq_head; + atomic_dec(&qp->qp_status.used); + } } /* set c_flag */ @@ -1391,6 +1399,221 @@ static void hisi_qm_cache_wb(struct hisi_qm *qm) } } +static void qm_qp_event_notifier(struct hisi_qp *qp) +{ + wake_up_interruptible(&qp->uacce_q->wait); +} + +static int hisi_qm_get_available_instances(struct uacce_device *uacce) +{ + int i, ret; + struct hisi_qm *qm = uacce->priv; + + read_lock(&qm->qps_lock); + for (i = 0, ret = 0; i < qm->qp_num; i++) + if (!qm->qp_array[i]) + ret++; + 
read_unlock(&qm->qps_lock); + + return ret; +} + +static int hisi_qm_uacce_get_queue(struct uacce_device *uacce, + unsigned long arg, + struct uacce_queue *q) +{ + struct hisi_qm *qm = uacce->priv; + struct hisi_qp *qp; + u8 alg_type = 0; + + qp = hisi_qm_create_qp(qm, alg_type); + if (IS_ERR(qp)) + return PTR_ERR(qp); + + q->priv = qp; + q->uacce = uacce; + qp->uacce_q = q; + qp->event_cb = qm_qp_event_notifier; + qp->pasid = arg; + + return 0; +} + +static void hisi_qm_uacce_put_queue(struct uacce_queue *q) +{ + struct hisi_qp *qp = q->priv; + + /* + * As put_queue is only called in uacce_mode=1, and only one queue can + * be used in this mode. we flush all sqc cache back in put queue. + */ + hisi_qm_cache_wb(qp->qm); + + /* need to stop hardware, but can not support in v1 */ + hisi_qm_release_qp(qp); +} + +/* map sq/cq/doorbell to user space */ +static int hisi_qm_uacce_mmap(struct uacce_queue *q, + struct vm_area_struct *vma, + struct uacce_qfile_region *qfr) +{ + struct hisi_qp *qp = q->priv; + struct hisi_qm *qm = qp->qm; + size_t sz = vma->vm_end - vma->vm_start; + struct pci_dev *pdev = qm->pdev; + struct device *dev = &pdev->dev; + unsigned long vm_pgoff; + int ret; + + switch (qfr->type) { + case UACCE_QFRT_MMIO: + if (qm->ver == QM_HW_V2) { + if (sz > PAGE_SIZE * (QM_DOORBELL_PAGE_NR + + QM_DOORBELL_SQ_CQ_BASE_V2 / PAGE_SIZE)) + return -EINVAL; + } else { + if (sz > PAGE_SIZE * QM_DOORBELL_PAGE_NR) + return -EINVAL; + } + + vma->vm_flags |= VM_IO; + + return remap_pfn_range(vma, vma->vm_start, + qm->phys_base >> PAGE_SHIFT, + sz, pgprot_noncached(vma->vm_page_prot)); + case UACCE_QFRT_DUS: + if (sz != qp->qdma.size) + return -EINVAL; + + /* dma_mmap_coherent() requires vm_pgoff as 0 + * restore vm_pfoff to initial value for mmap() + */ + vm_pgoff = vma->vm_pgoff; + vma->vm_pgoff = 0; + ret = dma_mmap_coherent(dev, vma, qp->qdma.va, + qp->qdma.dma, sz); + vma->vm_pgoff = vm_pgoff; + return ret; + + default: + return -EINVAL; + } +} + +static int 
hisi_qm_uacce_start_queue(struct uacce_queue *q)
+{
+	struct hisi_qp *qp = q->priv;
+
+	return hisi_qm_start_qp(qp, qp->pasid);
+}
+
+static void hisi_qm_uacce_stop_queue(struct uacce_queue *q)
+{
+	struct hisi_qp *qp = q->priv;
+
+	hisi_qm_stop_qp(qp);
+}
+
+static int qm_set_sqctype(struct uacce_queue *q, u16 type)
+{
+	struct hisi_qm *qm = q->uacce->priv;
+	struct hisi_qp *qp = q->priv;
+
+	write_lock(&qm->qps_lock);
+	qp->alg_type = type;
+	write_unlock(&qm->qps_lock);
+
+	return 0;
+}
+
+static long hisi_qm_uacce_ioctl(struct uacce_queue *q, unsigned int cmd,
+				unsigned long arg)
+{
+	struct hisi_qp *qp = q->priv;
+	struct hisi_qp_ctx qp_ctx;
+
+	if (cmd == UACCE_CMD_QM_SET_QP_CTX) {
+		if (copy_from_user(&qp_ctx, (void __user *)arg,
+				   sizeof(struct hisi_qp_ctx)))
+			return -EFAULT;
+
+		if (qp_ctx.qc_type != 0 && qp_ctx.qc_type != 1)
+			return -EINVAL;
+
+		qm_set_sqctype(q, qp_ctx.qc_type);
+		qp_ctx.id = qp->qp_id;
+
+		if (copy_to_user((void __user *)arg, &qp_ctx,
+				 sizeof(struct hisi_qp_ctx)))
+			return -EFAULT;
+	} else {
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static struct uacce_ops uacce_qm_ops = {
+	.get_available_instances = hisi_qm_get_available_instances,
+	.get_queue = hisi_qm_uacce_get_queue,
+	.put_queue = hisi_qm_uacce_put_queue,
+	.start_queue = hisi_qm_uacce_start_queue,
+	.stop_queue = hisi_qm_uacce_stop_queue,
+	.mmap = hisi_qm_uacce_mmap,
+	.ioctl = hisi_qm_uacce_ioctl,
+};
+
+static int qm_register_uacce(struct hisi_qm *qm)
+{
+	struct pci_dev *pdev = qm->pdev;
+	struct uacce_device *uacce;
+	unsigned long mmio_page_nr;
+	unsigned long dus_page_nr;
+	struct uacce_interface interface = {
+		.flags = UACCE_DEV_SVA,
+		.ops = &uacce_qm_ops,
+	};
+
+	strncpy(interface.name, pdev->driver->name, sizeof(interface.name));
+
+	uacce = uacce_register(&pdev->dev, &interface);
+	if (IS_ERR(uacce))
+		return PTR_ERR(uacce);
+
+	if (uacce->flags & UACCE_DEV_SVA) {
+		qm->use_sva = true;
+	} else {
+		/* only consider sva case */
+		uacce_unregister(qm->uacce);
+		return -EINVAL;
+	}
+
+	uacce->is_vf = pdev->is_virtfn;
+	uacce->priv = qm;
+	uacce->algs = qm->algs;
+
+	if (qm->ver == QM_HW_V1) {
+		mmio_page_nr = QM_DOORBELL_PAGE_NR;
+		uacce->api_ver = HISI_QM_API_VER_BASE;
+	} else {
+		mmio_page_nr = QM_DOORBELL_PAGE_NR +
+			QM_DOORBELL_SQ_CQ_BASE_V2 / PAGE_SIZE;
+		uacce->api_ver = HISI_QM_API_VER2_BASE;
+	}
+
+	dus_page_nr = (PAGE_SIZE - 1 + qm->sqe_size * QM_Q_DEPTH +
+		       sizeof(struct qm_cqe) * QM_Q_DEPTH) >> PAGE_SHIFT;
+
+	uacce->qf_pg_size[UACCE_QFRT_MMIO] = mmio_page_nr;
+	uacce->qf_pg_size[UACCE_QFRT_DUS] = dus_page_nr;
+	uacce->qf_pg_size[UACCE_QFRT_SS] = 0;
+
+	qm->uacce = uacce;
+
+	return 0;
+}
+
 /**
  * hisi_qm_init() - Initialize configures about qm.
  * @qm: The qm needing init.
@@ -1415,6 +1638,10 @@ int hisi_qm_init(struct hisi_qm *qm)
 		return -EINVAL;
 	}
 
+	ret = qm_register_uacce(qm);
+	if (ret < 0)
+		dev_warn(&pdev->dev, "fail to register uacce (%d)\n", ret);
+
 	ret = pci_enable_device_mem(pdev);
 	if (ret < 0) {
 		dev_err(&pdev->dev, "Failed to enable device mem!\n");
@@ -1427,6 +1654,8 @@ int hisi_qm_init(struct hisi_qm *qm)
 		goto err_disable_pcidev;
 	}
 
+	qm->phys_base = pci_resource_start(pdev, PCI_BAR_2);
+	qm->size = pci_resource_len(qm->pdev, PCI_BAR_2);
 	qm->io_base = ioremap(pci_resource_start(pdev, PCI_BAR_2),
 			      pci_resource_len(qm->pdev, PCI_BAR_2));
 	if (!qm->io_base) {
@@ -1498,6 +1727,9 @@ void hisi_qm_uninit(struct hisi_qm *qm)
 	iounmap(qm->io_base);
 	pci_release_mem_regions(pdev);
 	pci_disable_device(pdev);
+
+	if (qm->uacce)
+		uacce_unregister(qm->uacce);
 }
 EXPORT_SYMBOL_GPL(hisi_qm_uninit);
 
diff --git a/drivers/crypto/hisilicon/qm.h b/drivers/crypto/hisilicon/qm.h
index 70e672ae..58af252 100644
--- a/drivers/crypto/hisilicon/qm.h
+++ b/drivers/crypto/hisilicon/qm.h
@@ -75,6 +75,10 @@
 
 #define QM_Q_DEPTH			1024
 
+/* page number for queue file region */
+#define QM_DOORBELL_PAGE_NR		1
+
+
 enum qp_state {
 	QP_STOP,
 };
@@ -159,7 +163,12 @@ struct hisi_qm {
 	u32 error_mask;
 	u32 msi_mask;
 
+	const char *algs;
 	bool use_dma_api;
+	bool use_sva;
+	resource_size_t phys_base;
+	resource_size_t size;
+	struct uacce_device *uacce;
 };
 
 struct hisi_qp_status {
@@ -189,10 +198,12 @@ struct hisi_qp {
 	struct hisi_qp_ops *hw_ops;
 	void *qp_ctx;
 	void (*req_cb)(struct hisi_qp *qp, void *data);
+	void (*event_cb)(struct hisi_qp *qp);
 
 	struct work_struct work;
 	struct workqueue_struct *wq;
 
-	struct hisi_qm *qm;
+	u16 pasid;
+	struct uacce_queue *uacce_q;
 };
 
 int hisi_qm_init(struct hisi_qm *qm);
diff --git a/drivers/crypto/hisilicon/zip/zip_main.c b/drivers/crypto/hisilicon/zip/zip_main.c
index 1b2ee96..48860d2 100644
--- a/drivers/crypto/hisilicon/zip/zip_main.c
+++ b/drivers/crypto/hisilicon/zip/zip_main.c
@@ -316,8 +316,14 @@ static void hisi_zip_set_user_domain_and_cache(struct hisi_zip *hisi_zip)
 	writel(AXUSER_BASE, base + HZIP_BD_RUSER_32_63);
 	writel(AXUSER_BASE, base + HZIP_SGL_RUSER_32_63);
 	writel(AXUSER_BASE, base + HZIP_BD_WUSER_32_63);
-	writel(AXUSER_BASE, base + HZIP_DATA_RUSER_32_63);
-	writel(AXUSER_BASE, base + HZIP_DATA_WUSER_32_63);
+
+	if (hisi_zip->qm.use_sva) {
+		writel(AXUSER_BASE | AXUSER_SSV, base + HZIP_DATA_RUSER_32_63);
+		writel(AXUSER_BASE | AXUSER_SSV, base + HZIP_DATA_WUSER_32_63);
+	} else {
+		writel(AXUSER_BASE, base + HZIP_DATA_RUSER_32_63);
+		writel(AXUSER_BASE, base + HZIP_DATA_WUSER_32_63);
+	}
 
 	/* let's open all compression/decompression cores */
 	writel(DECOMP_CHECK_ENABLE | ALL_COMP_DECOMP_EN,
@@ -671,24 +677,12 @@ static int hisi_zip_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	qm = &hisi_zip->qm;
 	qm->pdev = pdev;
 	qm->ver = rev_id;
-
+	qm->use_dma_api = true;
+	qm->algs = "zlib\ngzip\n";
 	qm->sqe_size = HZIP_SQE_SIZE;
 	qm->dev_name = hisi_zip_name;
 	qm->fun_type = (pdev->device == PCI_DEVICE_ID_ZIP_PF) ?
 			QM_HW_PF : QM_HW_VF;
-	switch (uacce_mode) {
-	case 0:
-		qm->use_dma_api = true;
-		break;
-	case 1:
-		qm->use_dma_api = false;
-		break;
-	case 2:
-		qm->use_dma_api = true;
-		break;
-	default:
-		return -EINVAL;
-	}
 
 	ret = hisi_qm_init(qm);
 	if (ret) {
@@ -976,12 +970,10 @@ static int __init hisi_zip_init(void)
 		goto err_pci;
 	}
 
-	if (uacce_mode == 0 || uacce_mode == 2) {
-		ret = hisi_zip_register_to_crypto();
-		if (ret < 0) {
-			pr_err("Failed to register driver to crypto.\n");
-			goto err_crypto;
-		}
+	ret = hisi_zip_register_to_crypto();
+	if (ret < 0) {
+		pr_err("Failed to register driver to crypto.\n");
+		goto err_crypto;
 	}
 
 	return 0;
@@ -996,8 +988,7 @@ static int __init hisi_zip_init(void)
 
 static void __exit hisi_zip_exit(void)
 {
-	if (uacce_mode == 0 || uacce_mode == 2)
-		hisi_zip_unregister_from_crypto();
+	hisi_zip_unregister_from_crypto();
 	pci_unregister_driver(&hisi_zip_pci_driver);
 	hisi_zip_unregister_debugfs();
 }
diff --git a/include/uapi/misc/uacce/qm.h b/include/uapi/misc/uacce/qm.h
new file mode 100644
index 0000000..b4ad3ae
--- /dev/null
+++ b/include/uapi/misc/uacce/qm.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+#ifndef HISI_QM_USR_IF_H
+#define HISI_QM_USR_IF_H
+
+struct hisi_qp_ctx {
+	__u16 id;
+	__u16 qc_type;
+};
+
+#define HISI_QM_API_VER_BASE		"hisi_qm_v1"
+#define HISI_QM_API_VER2_BASE		"hisi_qm_v2"
+
+#define UACCE_CMD_QM_SET_QP_CTX	_IOWR('H', 10, struct hisi_qp_ctx)
+
+#endif