
crypto: x86/aes-xts - simplify loop in xts_crypt_slowpath()

Message ID 20240420055455.25179-1-ebiggers@kernel.org
State New
Series crypto: x86/aes-xts - simplify loop in xts_crypt_slowpath()

Commit Message

Eric Biggers April 20, 2024, 5:54 a.m. UTC
From: Eric Biggers <ebiggers@google.com>

Since the total length processed by the loop in xts_crypt_slowpath() is
a multiple of AES_BLOCK_SIZE, just round the length down to
AES_BLOCK_SIZE even on the last step.  This doesn't change behavior, as
the last step will process a multiple of AES_BLOCK_SIZE regardless.

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 arch/x86/crypto/aesni-intel_glue.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)


base-commit: 543ea178fbfadeaf79e15766ac989f3351349f02

Comments

Herbert Xu April 26, 2024, 9:35 a.m. UTC | #1
Eric Biggers <ebiggers@kernel.org> wrote:
> From: Eric Biggers <ebiggers@google.com>
> 
> Since the total length processed by the loop in xts_crypt_slowpath() is
> a multiple of AES_BLOCK_SIZE, just round the length down to
> AES_BLOCK_SIZE even on the last step.  This doesn't change behavior, as
> the last step will process a multiple of AES_BLOCK_SIZE regardless.
> 
> Signed-off-by: Eric Biggers <ebiggers@google.com>
> ---
> arch/x86/crypto/aesni-intel_glue.c | 13 +++++--------
> 1 file changed, 5 insertions(+), 8 deletions(-)

Patch applied.  Thanks.

Patch

diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 110b3282a1f2..02a4c0c276df 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -933,20 +933,17 @@  xts_crypt_slowpath(struct skcipher_request *req, xts_crypt_func crypt_func)
 	}
 
 	err = skcipher_walk_virt(&walk, req, false);
 
 	while (walk.nbytes) {
-		unsigned int nbytes = walk.nbytes;
-
-		if (nbytes < walk.total)
-			nbytes = round_down(nbytes, AES_BLOCK_SIZE);
-
 		kernel_fpu_begin();
-		(*crypt_func)(&ctx->crypt_ctx, walk.src.virt.addr,
-			      walk.dst.virt.addr, nbytes, req->iv);
+		(*crypt_func)(&ctx->crypt_ctx,
+			      walk.src.virt.addr, walk.dst.virt.addr,
+			      walk.nbytes & ~(AES_BLOCK_SIZE - 1), req->iv);
 		kernel_fpu_end();
-		err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
+		err = skcipher_walk_done(&walk,
+					 walk.nbytes & (AES_BLOCK_SIZE - 1));
 	}
 
 	if (err || !tail)
 		return err;