This implementation is based on __memset_power8 and integrates a lot of suggestions from Anton Blanchard.
The biggest difference is that it makes extensive use of stxvl in the alignment and tail code to avoid branches and small stores. It has three main execution paths:
a) "Short lengths" for lengths up to 64 bytes, avoiding as many branches as possible.
b) "General case" for larger lengths: an alignment section using stxvl to avoid branches, a 128-byte main loop, and then tail code that again uses stxvl with few branches.
c) "Zeroing cache blocks" for lengths from 256 bytes upwards when the set value is zero. It is mostly the __memset_power8 code, but the alignment phase was simplified because, at this point, the address is already 16-byte aligned; it was also changed to use vector stores. The tail code was simplified to reuse the general-case tail.
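As a hypothetical sketch, the three paths above can be modeled in C. The thresholds (64, 128, and 256 bytes) come from the description; the structure of each path is modeled with plain byte stores, standing in for the stxvl and block-zeroing instructions the real assembly uses, and the function name is illustrative:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch of the dispatch between the three execution paths.  */
static void *memset_sketch(void *dst, int c, size_t n)
{
    uint8_t *p = dst;
    uint8_t v = (uint8_t)c;

    if (n <= 64) {
        /* a) Short lengths: one pass, few branches (the real code
           covers 0..64 bytes with a handful of stxvl stores).  */
        for (size_t i = 0; i < n; i++)
            p[i] = v;
        return dst;
    }

    /* Alignment section: advance to a 16-byte boundary (done
       branchlessly with stxvl in the real code).  */
    while ((uintptr_t)p & 15) {
        *p++ = v;
        n--;
    }

    if (v == 0 && n >= 256) {
        /* c) Zeroing path: address is already 16-byte aligned here;
           clear whole blocks (vector/cache-block stores in the
           assembly, modeled with a byte loop).  */
        while (n >= 128) {
            for (size_t i = 0; i < 128; i++) p[i] = 0;
            p += 128; n -= 128;
        }
    } else {
        /* b) General case: 128-byte main loop.  */
        while (n >= 128) {
            for (size_t i = 0; i < 128; i++) p[i] = v;
            p += 128; n -= 128;
        }
    }

    /* Shared tail: 0..127 remaining bytes (stxvl-based in the
       assembly, so it avoids byte-granularity branching).  */
    for (size_t i = 0; i < n; i++)
        p[i] = v;
    return dst;
}
```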
All unaligned stores use stxvl, which does not generate alignment interrupts on POWER10, making the implementation safe to use on caching-inhibited memory.
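A rough C model of stxvl's effect (not its encoding) shows why it suits the alignment and tail phases: one instruction stores any residual length from 0 to 16 bytes at an arbitrary, possibly unaligned address. On the real instruction the length travels in the top byte of the GPR operand (len << 56), a detail elided here; the function name is illustrative:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Model of stxvl: store the first min(len, 16) bytes of a 16-byte
   vector to an arbitrary, possibly unaligned address.  */
static void stxvl_model(const uint8_t vec[16], void *addr, uint64_t len)
{
    if (len > 16)
        len = 16;
    memcpy(addr, vec, (size_t)len);
}
```

Because a single stxvl covers any remaining length up to a full vector, the tail needs one store where a chain of byte/halfword/word stores would otherwise branch on the residual count.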
On average, this implementation provides roughly a 30% improvement over __memset_power8.
23fdf8178c powerpc64le: Optimize memset for POWER10
sysdeps/powerpc/powerpc64/le/power10/memset.S | 256 +++++++++++++++++++++
sysdeps/powerpc/powerpc64/multiarch/Makefile | 2 +-
sysdeps/powerpc/powerpc64/multiarch/bzero.c | 8 +
.../powerpc/powerpc64/multiarch/ifunc-impl-list.c | 14 ++
.../powerpc/powerpc64/multiarch/memset-power10.S | 27 +++
sysdeps/powerpc/powerpc64/multiarch/memset.c | 8 +
6 files changed, 314 insertions(+), 1 deletion(-)