From patchwork Tue Sep 7 07:59:53 2021
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 507513
From: Hemant Agrawal
To: dev@dpdk.org, gakhil@marvell.com
Cc: konstantin.ananyev@intel.com, roy.fan.zhang@intel.com, Gagandeep Singh
Date: Tue, 7 Sep 2021 13:29:53 +0530
Message-Id: <20210907075957.28848-12-hemant.agrawal@nxp.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210907075957.28848-1-hemant.agrawal@nxp.com>
References: <20210825071510.7913-1-hemant.agrawal@nxp.com> <20210907075957.28848-1-hemant.agrawal@nxp.com>
MIME-Version: 1.0
Subject: [dpdk-dev] [PATCH v2 11/15] crypto/dpaa_sec: support raw datapath APIs
List-Id: DPDK patches and discussions
Sender: "dev" <dev-bounces@dpdk.org>

From: Gagandeep Singh

This patch adds the raw vector API framework for the dpaa_sec driver.

Signed-off-by: Gagandeep Singh
---
 doc/guides/rel_notes/release_21_11.rst    |   4 +
 drivers/crypto/dpaa_sec/dpaa_sec.c        |  23 +-
 drivers/crypto/dpaa_sec/dpaa_sec.h        |  39 +-
 drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c | 485 ++++++++++++++++++++++
 drivers/crypto/dpaa_sec/meson.build       |   4 +-
 5 files changed, 541 insertions(+), 14 deletions(-)
 create mode 100644 drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c

-- 
2.17.1

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 9cbe960dbe..0afd21812f 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -76,6 +76,10 @@ New Features

   * Added raw vector datapath API support

+* **Updated NXP dpaa_sec crypto PMD.**
+
+  * Added raw vector datapath API support
+
 Removed Items
 -------------

diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 19d4684e24..7534f80195 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -45,10 +45,7 @@
 #include
 #include

-static uint8_t cryptodev_driver_id;
-
-static int
-dpaa_sec_attach_sess_q(struct dpaa_sec_qp *qp, dpaa_sec_session *sess);
+uint8_t dpaa_cryptodev_driver_id;

 static inline void
 dpaa_sec_op_ending(struct dpaa_sec_op_ctx *ctx)
@@ -1745,8 +1742,8 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
 		case RTE_CRYPTO_OP_WITH_SESSION:
 			ses = (dpaa_sec_session *)
 				get_sym_session_private_data(
-						op->sym->session,
-						cryptodev_driver_id);
+					op->sym->session,
+					dpaa_cryptodev_driver_id);
 			break;
 #ifdef RTE_LIB_SECURITY
 		case RTE_CRYPTO_OP_SECURITY_SESSION:
@@ -2307,7 +2304,7 @@ dpaa_sec_detach_rxq(struct dpaa_sec_dev_private *qi, struct qman_fq *fq)
 	return -1;
 }

-static int
+int
 dpaa_sec_attach_sess_q(struct dpaa_sec_qp *qp, dpaa_sec_session *sess)
 {
 	int ret;
@@ -3115,7 +3112,7 @@ dpaa_sec_dev_infos_get(struct rte_cryptodev *dev,
 		info->feature_flags = dev->feature_flags;
 		info->capabilities = dpaa_sec_capabilities;
 		info->sym.max_nb_sessions = internals->max_nb_sessions;
-		info->driver_id = cryptodev_driver_id;
+		info->driver_id = dpaa_cryptodev_driver_id;
 	}
 }

@@ -3311,7 +3308,10 @@ static struct rte_cryptodev_ops crypto_ops = {
 	.queue_pair_release   = dpaa_sec_queue_pair_release,
 	.sym_session_get_size = dpaa_sec_sym_session_get_size,
 	.sym_session_configure = dpaa_sec_sym_session_configure,
-	.sym_session_clear        = dpaa_sec_sym_session_clear
+	.sym_session_clear        = dpaa_sec_sym_session_clear,
+	/* Raw data-path API related operations */
+	.sym_get_raw_dp_ctx_size = dpaa_sec_get_dp_ctx_size,
+	.sym_configure_raw_dp_ctx = dpaa_sec_configure_raw_dp_ctx,
 };

 #ifdef RTE_LIB_SECURITY
@@ -3362,7 +3362,7 @@ dpaa_sec_dev_init(struct rte_cryptodev *cryptodev)

 	PMD_INIT_FUNC_TRACE();

-	cryptodev->driver_id = cryptodev_driver_id;
+	cryptodev->driver_id = dpaa_cryptodev_driver_id;
 	cryptodev->dev_ops = &crypto_ops;

 	cryptodev->enqueue_burst = dpaa_sec_enqueue_burst;
@@ -3371,6 +3371,7 @@ dpaa_sec_dev_init(struct rte_cryptodev *cryptodev)
 			RTE_CRYPTODEV_FF_HW_ACCELERATED |
 			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
 			RTE_CRYPTODEV_FF_SECURITY |
+			RTE_CRYPTODEV_FF_SYM_RAW_DP |
 			RTE_CRYPTODEV_FF_IN_PLACE_SGL |
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
@@ -3536,5 +3537,5 @@ static struct cryptodev_driver dpaa_sec_crypto_drv;
 RTE_PMD_REGISTER_DPAA(CRYPTODEV_NAME_DPAA_SEC_PMD, rte_dpaa_sec_driver);
 RTE_PMD_REGISTER_CRYPTO_DRIVER(dpaa_sec_crypto_drv, rte_dpaa_sec_driver.driver,
-		cryptodev_driver_id);
+		dpaa_cryptodev_driver_id);
 RTE_LOG_REGISTER(dpaa_logtype_sec, pmd.crypto.dpaa, NOTICE);
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.h b/drivers/crypto/dpaa_sec/dpaa_sec.h
index 368699678b..f6e83d46e7 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.h
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.h
@@ -19,6 +19,8 @@
 #define AES_CTR_IV_LEN	16
 #define AES_GCM_IV_LEN	12

+extern uint8_t dpaa_cryptodev_driver_id;
+
 #define DPAA_IPv6_DEFAULT_VTC_FLOW	0x60000000

 /* Minimum job descriptor consists of a oneword job descriptor HEADER and
@@ -117,6 +119,24 @@ struct sec_pdcp_ctxt {
 	uint32_t hfn_threshold;	/*!< HFN Threashold for key renegotiation */
 };
 #endif
+
+typedef int (*dpaa_sec_build_fd_t)(
+	void *qp, uint8_t *drv_ctx, struct rte_crypto_vec *data_vec,
+	uint16_t n_data_vecs, union rte_crypto_sym_ofs ofs,
+	struct rte_crypto_va_iova_ptr *iv,
+	struct rte_crypto_va_iova_ptr *digest,
+	struct rte_crypto_va_iova_ptr *aad_or_auth_iv,
+	void *user_data);
+
+typedef struct dpaa_sec_job* (*dpaa_sec_build_raw_dp_fd_t)(uint8_t *drv_ctx,
+			struct rte_crypto_sgl *sgl,
+			struct rte_crypto_sgl *dest_sgl,
+			struct rte_crypto_va_iova_ptr *iv,
+			struct rte_crypto_va_iova_ptr *digest,
+			struct rte_crypto_va_iova_ptr *auth_iv,
+			union rte_crypto_sym_ofs ofs,
+			void *userdata);
+
 typedef struct dpaa_sec_session_entry {
 	struct sec_cdb cdb;	/**< cmd block associated with qp */
 	struct dpaa_sec_qp *qp[MAX_DPAA_CORES];
@@ -129,6 +149,8 @@ typedef struct dpaa_sec_session_entry {
 #ifdef RTE_LIB_SECURITY
 	enum rte_security_session_protocol proto_alg; /*!< Security Algorithm*/
 #endif
+	dpaa_sec_build_fd_t build_fd;
+	dpaa_sec_build_raw_dp_fd_t build_raw_dp_fd;
 	union {
 		struct {
 			uint8_t *data;	/**< pointer to key data */
@@ -211,7 +233,10 @@ struct dpaa_sec_job {
 #define DPAA_MAX_NB_MAX_DIGEST	32
 struct dpaa_sec_op_ctx {
 	struct dpaa_sec_job job;
-	struct rte_crypto_op *op;
+	union {
+		struct rte_crypto_op *op;
+		void *userdata;
+	};
 	struct rte_mempool *ctx_pool; /* mempool pointer for dpaa_sec_op_ctx */
 	uint32_t fd_status;
 	int64_t vtop_offset;
@@ -803,4 +828,16 @@ calc_chksum(void *buffer, int len)
 	return  result;
 }

+int
+dpaa_sec_configure_raw_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
+	struct rte_crypto_raw_dp_ctx *raw_dp_ctx,
+	enum rte_crypto_op_sess_type sess_type,
+	union rte_cryptodev_session_ctx session_ctx, uint8_t is_update);
+
+int
+dpaa_sec_get_dp_ctx_size(struct rte_cryptodev *dev);
+
+int
+dpaa_sec_attach_sess_q(struct dpaa_sec_qp *qp, dpaa_sec_session *sess);
+
 #endif /* _DPAA_SEC_H_ */
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c b/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c
new file mode 100644
index 0000000000..ee0ca2e0d5
--- /dev/null
+++ b/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c
@@ -0,0 +1,485 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include
+#include
+#include
+#include
+#include
+#ifdef RTE_LIB_SECURITY
+#include
+#endif
+
+/* RTA header files */
+#include
+
+#include
+#include
+#include
+
+struct dpaa_sec_raw_dp_ctx {
+	dpaa_sec_session *session;
+	uint32_t tail;
+	uint32_t head;
+	uint16_t cached_enqueue;
+	uint16_t cached_dequeue;
+};
+
+static __rte_always_inline int
+dpaa_sec_raw_enqueue_done(void *qp_data, uint8_t *drv_ctx, uint32_t n)
+{
+	RTE_SET_USED(qp_data);
+	RTE_SET_USED(drv_ctx);
+	RTE_SET_USED(n);
+
+	return 0;
+}
+
+static __rte_always_inline int
+dpaa_sec_raw_dequeue_done(void *qp_data, uint8_t *drv_ctx, uint32_t n)
+{
+	RTE_SET_USED(qp_data);
+	RTE_SET_USED(drv_ctx);
+	RTE_SET_USED(n);
+
+	return 0;
+}
+
+static inline struct dpaa_sec_op_ctx *
+dpaa_sec_alloc_raw_ctx(dpaa_sec_session *ses, int sg_count)
+{
+	struct dpaa_sec_op_ctx *ctx;
+	int i, retval;
+
+	retval = rte_mempool_get(
+			ses->qp[rte_lcore_id() % MAX_DPAA_CORES]->ctx_pool,
+			(void **)(&ctx));
+	if (!ctx || retval) {
+		DPAA_SEC_DP_WARN("Alloc sec descriptor failed!");
+		return NULL;
+	}
+	/*
+	 * Clear SG memory. There are 16 SG entries of 16 Bytes each.
+	 * one call to dcbz_64() clear 64 bytes, hence calling it 4 times
+	 * to clear all the SG entries. dpaa_sec_alloc_ctx() is called for
+	 * each packet, memset is costlier than dcbz_64().
+	 */
+	for (i = 0; i < sg_count && i < MAX_JOB_SG_ENTRIES; i += 4)
+		dcbz_64(&ctx->job.sg[i]);
+
+	ctx->ctx_pool = ses->qp[rte_lcore_id() % MAX_DPAA_CORES]->ctx_pool;
+	ctx->vtop_offset = (size_t) ctx - rte_mempool_virt2iova(ctx);
+
+	return ctx;
+}
+
+static struct dpaa_sec_job *
+build_dpaa_raw_dp_auth_fd(uint8_t *drv_ctx,
+			struct rte_crypto_sgl *sgl,
+			struct rte_crypto_sgl *dest_sgl,
+			struct rte_crypto_va_iova_ptr *iv,
+			struct rte_crypto_va_iova_ptr *digest,
+			struct rte_crypto_va_iova_ptr *auth_iv,
+			union rte_crypto_sym_ofs ofs,
+			void *userdata)
+{
+	RTE_SET_USED(drv_ctx);
+	RTE_SET_USED(sgl);
+	RTE_SET_USED(dest_sgl);
+	RTE_SET_USED(iv);
+	RTE_SET_USED(digest);
+	RTE_SET_USED(auth_iv);
+	RTE_SET_USED(ofs);
+	RTE_SET_USED(userdata);
+
+	return NULL;
+}
+
+static struct dpaa_sec_job *
+build_dpaa_raw_dp_cipher_fd(uint8_t *drv_ctx,
+			struct rte_crypto_sgl *sgl,
+			struct rte_crypto_sgl *dest_sgl,
+			struct rte_crypto_va_iova_ptr *iv,
+			struct rte_crypto_va_iova_ptr *digest,
+			struct rte_crypto_va_iova_ptr *auth_iv,
+			union rte_crypto_sym_ofs ofs,
+			void *userdata)
+{
+	RTE_SET_USED(digest);
+	RTE_SET_USED(auth_iv);
+	dpaa_sec_session *ses =
+		((struct dpaa_sec_raw_dp_ctx *)drv_ctx)->session;
+	struct dpaa_sec_job *cf;
+	struct dpaa_sec_op_ctx *ctx;
+	struct qm_sg_entry *sg, *out_sg, *in_sg;
+	unsigned int i;
+	uint8_t *IV_ptr = iv->va;
+	int data_len, total_len = 0, data_offset;
+
+	for (i = 0; i < sgl->num; i++)
+		total_len += sgl->vec[i].len;
+
+	data_len = total_len - ofs.ofs.cipher.head - ofs.ofs.cipher.tail;
+	data_offset = ofs.ofs.cipher.head;
+
+	/* Support lengths in bits only for SNOW3G and ZUC */
+	if (sgl->num > MAX_SG_ENTRIES) {
+		DPAA_SEC_DP_ERR("Cipher: Max sec segs supported is %d",
+				MAX_SG_ENTRIES);
+		return NULL;
+	}
+
+	ctx = dpaa_sec_alloc_raw_ctx(ses, sgl->num * 2 + 3);
+	if (!ctx)
+		return NULL;
+
+	cf = &ctx->job;
+	ctx->userdata = (void *)userdata;
+
+	/* output */
+	out_sg = &cf->sg[0];
+	out_sg->extension = 1;
+	out_sg->length = data_len;
+	qm_sg_entry_set64(out_sg, rte_dpaa_mem_vtop(&cf->sg[2]));
+	cpu_to_hw_sg(out_sg);
+
+	if (dest_sgl) {
+		/* 1st seg */
+		sg = &cf->sg[2];
+		qm_sg_entry_set64(sg, dest_sgl->vec[0].iova);
+		sg->length = dest_sgl->vec[0].len - data_offset;
+		sg->offset = data_offset;
+
+		/* Successive segs */
+		for (i = 1; i < dest_sgl->num; i++) {
+			cpu_to_hw_sg(sg);
+			sg++;
+			qm_sg_entry_set64(sg, dest_sgl->vec[i].iova);
+			sg->length = dest_sgl->vec[i].len;
+		}
+	} else {
+		/* 1st seg */
+		sg = &cf->sg[2];
+		qm_sg_entry_set64(sg, sgl->vec[0].iova);
+		sg->length = sgl->vec[0].len - data_offset;
+		sg->offset = data_offset;
+
+		/* Successive segs */
+		for (i = 1; i < sgl->num; i++) {
+			cpu_to_hw_sg(sg);
+			sg++;
+			qm_sg_entry_set64(sg, sgl->vec[i].iova);
+			sg->length = sgl->vec[i].len;
+		}
+
+	}
+	sg->final = 1;
+	cpu_to_hw_sg(sg);
+
+	/* input */
+	in_sg = &cf->sg[1];
+	in_sg->extension = 1;
+	in_sg->final = 1;
+	in_sg->length = data_len + ses->iv.length;
+
+	sg++;
+	qm_sg_entry_set64(in_sg, rte_dpaa_mem_vtop(sg));
+	cpu_to_hw_sg(in_sg);
+
+	/* IV */
+	qm_sg_entry_set64(sg, rte_dpaa_mem_vtop(IV_ptr));
+	sg->length = ses->iv.length;
+	cpu_to_hw_sg(sg);
+
+	/* 1st seg */
+	sg++;
+	qm_sg_entry_set64(sg, sgl->vec[0].iova);
+	sg->length = sgl->vec[0].len - data_offset;
+	sg->offset = data_offset;
+
+	/* Successive segs */
+	for (i = 1; i < sgl->num; i++) {
+		cpu_to_hw_sg(sg);
+		sg++;
+		qm_sg_entry_set64(sg, sgl->vec[i].iova);
+		sg->length = sgl->vec[i].len;
+	}
+
+	sg->final = 1;
+	cpu_to_hw_sg(sg);
+
+	return cf;
+}
+
+static uint32_t
+dpaa_sec_raw_enqueue_burst(void *qp_data, uint8_t *drv_ctx,
+	struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+	void *user_data[], int *status)
+{
+	/* Function to transmit the frames to given device and queuepair */
+	uint32_t loop;
+	struct dpaa_sec_qp *dpaa_qp = (struct dpaa_sec_qp *)qp_data;
+	uint16_t num_tx = 0;
+	struct qm_fd fds[DPAA_SEC_BURST], *fd;
+	uint32_t frames_to_send;
+	struct dpaa_sec_job *cf;
+	dpaa_sec_session *ses =
+			((struct dpaa_sec_raw_dp_ctx *)drv_ctx)->session;
+	uint32_t flags[DPAA_SEC_BURST] = {0};
+	struct qman_fq *inq[DPAA_SEC_BURST];
+
+	if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
+		if (rte_dpaa_portal_init((void *)0)) {
+			DPAA_SEC_ERR("Failure in affining portal");
+			return 0;
+		}
+	}
+
+	while (vec->num) {
+		frames_to_send = (vec->num > DPAA_SEC_BURST) ?
+				DPAA_SEC_BURST : vec->num;
+		for (loop = 0; loop < frames_to_send; loop++) {
+			if (unlikely(!ses->qp[rte_lcore_id() % MAX_DPAA_CORES])) {
+				if (dpaa_sec_attach_sess_q(dpaa_qp, ses)) {
+					frames_to_send = loop;
+					goto send_pkts;
+				}
+			} else if (unlikely(ses->qp[rte_lcore_id() %
+						MAX_DPAA_CORES] != dpaa_qp)) {
+				DPAA_SEC_DP_ERR("Old:sess->qp = %p"
+					" New qp = %p\n",
+					ses->qp[rte_lcore_id() %
+					MAX_DPAA_CORES], dpaa_qp);
+				frames_to_send = loop;
+				goto send_pkts;
+			}
+
+			/*Clear the unused FD fields before sending*/
+			fd = &fds[loop];
+			memset(fd, 0, sizeof(struct qm_fd));
+			cf = ses->build_raw_dp_fd(drv_ctx,
+						&vec->src_sgl[loop],
+						&vec->dest_sgl[loop],
+						&vec->iv[loop],
+						&vec->digest[loop],
+						&vec->auth_iv[loop],
+						ofs,
+						user_data[loop]);
+			if (!cf) {
+				DPAA_SEC_ERR("error: Improper packet contents"
+					" for crypto operation");
+				goto skip_tx;
+			}
+			inq[loop] = ses->inq[rte_lcore_id() % MAX_DPAA_CORES];
+			fd->opaque_addr = 0;
+			fd->cmd = 0;
+			qm_fd_addr_set64(fd, rte_dpaa_mem_vtop(cf->sg));
+			fd->_format1 = qm_fd_compound;
+			fd->length29 = 2 * sizeof(struct qm_sg_entry);
+
+			status[loop] = 1;
+		}
+send_pkts:
+		loop = 0;
+		while (loop < frames_to_send) {
+			loop += qman_enqueue_multi_fq(&inq[loop], &fds[loop],
+					&flags[loop], frames_to_send - loop);
+		}
+		vec->num -= frames_to_send;
+		num_tx += frames_to_send;
+	}
+
+skip_tx:
+	dpaa_qp->tx_pkts += num_tx;
+	dpaa_qp->tx_errs += vec->num - num_tx;
+
+	return num_tx;
+}
+
+static int
+dpaa_sec_deq_raw(struct dpaa_sec_qp *qp, void **out_user_data,
+		uint8_t is_user_data_array,
+		rte_cryptodev_raw_post_dequeue_t post_dequeue,
+		int nb_ops)
+{
+	struct qman_fq *fq;
+	unsigned int pkts = 0;
+	int num_rx_bufs, ret;
+	struct qm_dqrr_entry *dq;
+	uint32_t vdqcr_flags = 0;
+	uint8_t is_success = 0;
+
+	fq = &qp->outq;
+	/*
+	 * Until request for four buffers, we provide exact number of buffers.
+	 * Otherwise we do not set the QM_VDQCR_EXACT flag.
+	 * Not setting QM_VDQCR_EXACT flag can provide two more buffers than
+	 * requested, so we request two less in this case.
+	 */
+	if (nb_ops < 4) {
+		vdqcr_flags = QM_VDQCR_EXACT;
+		num_rx_bufs = nb_ops;
+	} else {
+		num_rx_bufs = nb_ops > DPAA_MAX_DEQUEUE_NUM_FRAMES ?
+			(DPAA_MAX_DEQUEUE_NUM_FRAMES - 2) : (nb_ops - 2);
+	}
+	ret = qman_set_vdq(fq, num_rx_bufs, vdqcr_flags);
+	if (ret)
+		return 0;
+
+	do {
+		const struct qm_fd *fd;
+		struct dpaa_sec_job *job;
+		struct dpaa_sec_op_ctx *ctx;
+
+		dq = qman_dequeue(fq);
+		if (!dq)
+			continue;
+
+		fd = &dq->fd;
+		/* sg is embedded in an op ctx,
+		 * sg[0] is for output
+		 * sg[1] for input
+		 */
+		job = rte_dpaa_mem_ptov(qm_fd_addr_get64(fd));
+
+		ctx = container_of(job, struct dpaa_sec_op_ctx, job);
+		ctx->fd_status = fd->status;
+		if (is_user_data_array)
+			out_user_data[pkts] = ctx->userdata;
+		else
+			out_user_data[0] = ctx->userdata;
+
+		if (!ctx->fd_status) {
+			is_success = true;
+		} else {
+			is_success = false;
+			DPAA_SEC_DP_WARN("SEC return err:0x%x", ctx->fd_status);
+		}
+		post_dequeue(ctx->op, pkts, is_success);
+		pkts++;
+
+		/* report op status to sym->op and then free the ctx memory */
+		rte_mempool_put(ctx->ctx_pool, (void *)ctx);
+
+		qman_dqrr_consume(fq, dq);
+	} while (fq->flags & QMAN_FQ_STATE_VDQCR);
+
+	return pkts;
+}
+
+
+static __rte_always_inline uint32_t
+dpaa_sec_raw_dequeue_burst(void *qp_data, uint8_t *drv_ctx,
+	rte_cryptodev_raw_get_dequeue_count_t get_dequeue_count,
+	uint32_t max_nb_to_dequeue,
+	rte_cryptodev_raw_post_dequeue_t post_dequeue,
+	void **out_user_data, uint8_t is_user_data_array,
+	uint32_t *n_success, int *dequeue_status)
+{
+	RTE_SET_USED(drv_ctx);
+	RTE_SET_USED(get_dequeue_count);
+	uint16_t num_rx;
+	struct dpaa_sec_qp *dpaa_qp = (struct dpaa_sec_qp *)qp_data;
+	uint32_t nb_ops = max_nb_to_dequeue;
+
+	if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
+		if (rte_dpaa_portal_init((void *)0)) {
+			DPAA_SEC_ERR("Failure in affining portal");
+			return 0;
+		}
+	}
+
+	num_rx = dpaa_sec_deq_raw(dpaa_qp, out_user_data,
+			is_user_data_array, post_dequeue, nb_ops);
+
+	dpaa_qp->rx_pkts += num_rx;
+	*dequeue_status = 1;
+	*n_success = num_rx;
+
+	DPAA_SEC_DP_DEBUG("SEC Received %d Packets\n", num_rx);
+
+	return num_rx;
+}
+
+static __rte_always_inline int
+dpaa_sec_raw_enqueue(void *qp_data, uint8_t *drv_ctx,
+	struct rte_crypto_vec *data_vec,
+	uint16_t n_data_vecs, union rte_crypto_sym_ofs ofs,
+	struct rte_crypto_va_iova_ptr *iv,
+	struct rte_crypto_va_iova_ptr *digest,
+	struct rte_crypto_va_iova_ptr *aad_or_auth_iv,
+	void *user_data)
+{
+	RTE_SET_USED(qp_data);
+	RTE_SET_USED(drv_ctx);
+	RTE_SET_USED(data_vec);
+	RTE_SET_USED(n_data_vecs);
+	RTE_SET_USED(ofs);
+	RTE_SET_USED(iv);
+	RTE_SET_USED(digest);
+	RTE_SET_USED(aad_or_auth_iv);
+	RTE_SET_USED(user_data);
+
+	return 0;
+}
+
+static __rte_always_inline void *
+dpaa_sec_raw_dequeue(void *qp_data, uint8_t *drv_ctx, int *dequeue_status,
+	enum rte_crypto_op_status *op_status)
+{
+	RTE_SET_USED(qp_data);
+	RTE_SET_USED(drv_ctx);
+	RTE_SET_USED(dequeue_status);
+	RTE_SET_USED(op_status);
+
+	return NULL;
+}
+
+int
+dpaa_sec_configure_raw_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
+	struct rte_crypto_raw_dp_ctx *raw_dp_ctx,
+	enum rte_crypto_op_sess_type sess_type,
+	union rte_cryptodev_session_ctx session_ctx, uint8_t is_update)
+{
+	dpaa_sec_session *sess;
+	struct dpaa_sec_raw_dp_ctx *dp_ctx;
+	RTE_SET_USED(qp_id);
+
+	if (!is_update) {
+		memset(raw_dp_ctx, 0, sizeof(*raw_dp_ctx));
+		raw_dp_ctx->qp_data = dev->data->queue_pairs[qp_id];
+	}
+
+	if (sess_type == RTE_CRYPTO_OP_SECURITY_SESSION)
+		sess = (dpaa_sec_session *)get_sec_session_private_data(
+				session_ctx.sec_sess);
+	else if (sess_type == RTE_CRYPTO_OP_WITH_SESSION)
+		sess = (dpaa_sec_session *)get_sym_session_private_data(
+				session_ctx.crypto_sess, dpaa_cryptodev_driver_id);
+	else
+		return -ENOTSUP;
+	raw_dp_ctx->dequeue_burst = dpaa_sec_raw_dequeue_burst;
+	raw_dp_ctx->dequeue = dpaa_sec_raw_dequeue;
+	raw_dp_ctx->dequeue_done = dpaa_sec_raw_dequeue_done;
+	raw_dp_ctx->enqueue_burst = dpaa_sec_raw_enqueue_burst;
+	raw_dp_ctx->enqueue = dpaa_sec_raw_enqueue;
+	raw_dp_ctx->enqueue_done = dpaa_sec_raw_enqueue_done;
+
+	if (sess->ctxt == DPAA_SEC_CIPHER)
+		sess->build_raw_dp_fd = build_dpaa_raw_dp_cipher_fd;
+	else if (sess->ctxt == DPAA_SEC_AUTH)
+		sess->build_raw_dp_fd = build_dpaa_raw_dp_auth_fd;
+	else
+		return -ENOTSUP;
+	dp_ctx = (struct dpaa_sec_raw_dp_ctx *)raw_dp_ctx->drv_ctx_data;
+	dp_ctx->session = sess;
+
+	return 0;
+}
+
+int
+dpaa_sec_get_dp_ctx_size(__rte_unused struct rte_cryptodev *dev)
+{
+	return sizeof(struct dpaa_sec_raw_dp_ctx);
+}
diff --git a/drivers/crypto/dpaa_sec/meson.build b/drivers/crypto/dpaa_sec/meson.build
index 44fd60e5ae..f87ad6c7e7 100644
--- a/drivers/crypto/dpaa_sec/meson.build
+++ b/drivers/crypto/dpaa_sec/meson.build
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright 2018 NXP
+# Copyright 2018-2021 NXP
 
 if not is_linux
     build = false
@@ -7,7 +7,7 @@ if not is_linux
 endif
 
 deps += ['bus_dpaa', 'mempool_dpaa', 'security']
-sources = files('dpaa_sec.c')
+sources = files('dpaa_sec.c', 'dpaa_sec_raw_dp.c')
 
 includes += include_directories('../../bus/dpaa/include')
 includes += include_directories('../../common/dpaax')