x86-64: Optimize memrchr with AVX2

System Internals / glibc - H.J. Lu [gmail.com] - 9 June 2017 08:44 EDT

Optimize memrchr with AVX2 to search 32 bytes with a single vector compare instruction. It is as fast as the SSE2 memrchr for small data sizes and up to 1X faster for large data sizes on Haswell. The AVX2 memrchr is selected on AVX2 machines where vzeroupper is preferred and AVX unaligned loads are fast.
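The core idea can be sketched in C with AVX2 intrinsics (the actual glibc routine is hand-written assembly in memrchr-avx2.S; this is only an illustrative scalar-tail approximation): broadcast the target byte, compare 32 bytes at a time starting from the end of the buffer, and use the compare mask to find the last match.

```c
#include <immintrin.h>
#include <stddef.h>

/* Illustrative sketch, not the glibc implementation: scan backward in
   32-byte blocks using one vector compare per block.  */
__attribute__((target("avx2")))
void *memrchr_avx2_sketch(const void *s, int c, size_t n)
{
    const unsigned char *p = (const unsigned char *)s;
    const __m256i vc = _mm256_set1_epi8((char)c);

    /* Full 32-byte blocks, last block first.  */
    while (n >= 32) {
        n -= 32;
        /* Unaligned load; the AVX2 variant is only selected on CPUs
           where such loads are fast.  */
        __m256i v = _mm256_loadu_si256((const __m256i *)(p + n));
        unsigned int mask = (unsigned int)
            _mm256_movemask_epi8(_mm256_cmpeq_epi8(v, vc));
        if (mask != 0)
            /* Bit i of the mask corresponds to byte i of the block;
               the highest set bit is the last match.  */
            return (void *)(p + n + 31 - __builtin_clz(mask));
    }

    /* Scalar tail for the leading 0..31 bytes.  */
    while (n--)
        if (p[n] == (unsigned char)c)
            return (void *)(p + n);
    return NULL;
}
```

The assembly version additionally handles page-boundary and alignment cases and issues vzeroupper before returning, which is why the selector also checks that vzeroupper is preferred on the target CPU.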

- sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add memrchr-sse2 and memrchr-avx2.
- sysdeps/x86_64/multiarch/ifunc-impl-list.c (__libc_ifunc_impl_list): Add tests for __memrchr_avx2 and __memrchr_sse2.
- sysdeps/x86_64/multiarch/memrchr-avx2.S: New file.
- sysdeps/x86_64/multiarch/memrchr-sse2.S: Likewise.
- sysdeps/x86_64/multiarch/memrchr.c: Likewise.
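The new memrchr.c is the ifunc selector that picks between __memrchr_sse2 and __memrchr_avx2 at load time. A minimal sketch of that dispatch pattern, using GCC's ifunc attribute and hypothetical stand-in implementations (glibc's own selector uses its internal CPU-feature macros and also checks the fast-unaligned-load and prefer-vzeroupper bits, not just the AVX2 ISA flag):

```c
#include <stddef.h>

/* Hypothetical stand-ins for the two real variants.  */
static void *my_memrchr_generic(const void *s, int c, size_t n)
{
    const unsigned char *p = (const unsigned char *)s;
    while (n--)
        if (p[n] == (unsigned char)c)
            return (void *)(p + n);
    return NULL;
}

static void *my_memrchr_avx2(const void *s, int c, size_t n)
{
    /* Stand-in: a real build would point at the AVX2 routine.  */
    return my_memrchr_generic(s, c, n);
}

/* The resolver runs once, when the dynamic linker binds the symbol,
   and returns the implementation to use for all subsequent calls.  */
static void *(*my_memrchr_resolver(void))(const void *, int, size_t)
{
    __builtin_cpu_init();
    if (__builtin_cpu_supports("avx2"))
        return my_memrchr_avx2;
    return my_memrchr_generic;
}

void *my_memrchr(const void *s, int c, size_t n)
    __attribute__((ifunc("my_memrchr_resolver")));
```

Callers simply call my_memrchr; the indirection is resolved once rather than on every call, which is the same mechanism __libc_ifunc_impl_list exposes for testing each variant individually.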

5ac7aa1 x86-64: Optimize memrchr with AVX2
ChangeLog | 11 +
sysdeps/x86_64/multiarch/Makefile | 1 +
sysdeps/x86_64/multiarch/ifunc-impl-list.c | 7 +
sysdeps/x86_64/multiarch/memrchr-avx2.S | 359 +++++++++++++++++++++++++++++
sysdeps/x86_64/multiarch/memrchr-sse2.S | 26 +++
sysdeps/x86_64/multiarch/memrchr.c | 31 +++
6 files changed, 435 insertions(+)

Upstream: sourceware.org

