This patch replaces the previous hashed-lock implementation of the bitops with assembly optimized ones taken from Linux v3.10-rc4.
The Linux-derived assembly only supports 8 byte aligned bitmaps (which under Linux are unsigned long * rather than our void *). We do actually have uses of 4 byte alignment (i.e. the bitmaps in struct xmem_pool) which trigger alignment faults.
Therefore adjust the assembly to work in 4 byte increments, which involved:
- bit offset now bits 4:0 => mask #31 not #63
- use wN register not xN for load/modify/store loop.
There is no need to adjust the shift used to calculate the word offset; the difference is already accounted for in the #63->#31 change.
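For illustration, a minimal sketch of what the adjusted bitop macro ends up looking like (exact register allocation, labels and the ENTRY/ENDPROC wrappers may differ from the code actually added to bitops.S):

    /*
     * x0: bits 4:0 = bit offset within the 32-bit word,
     *     remaining bits = word index; x1 = bitmap base address.
     */
    .macro  bitop, name, instr
    ENTRY(  \name   )
            and     w3, w0, #31             // bit offset within word (was #63)
            eor     w0, w0, w3              // clear low bits, leaving word index * 32
            mov     w2, #1
            add     x1, x1, x0, lsr #3      // byte offset of the word; shift unchanged
            lsl     w3, w2, w3              // build single-bit mask
    1:      ldxr    w2, [x1]                // 32-bit (wN) load/modify/store loop
            \instr  w2, w2, w3
            stxr    w0, w2, [x1]
            cbnz    w0, 1b                  // retry if the exclusive store failed
            ret
    ENDPROC(\name   )
    .endm

The individual operations are then instantiated from this macro, e.g. set_bit as "bitop set_bit, orr" and clear_bit as "bitop clear_bit, bic", as in the Linux original.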
NB: Xen's build system cannot cope with the change from a .c to a .S file; remove xen/arch/arm/arm64/lib/.bitops.o.d or clean your build tree.
7947483 xen/arm64: Assembly optimized bitops from Linux
xen/arch/arm/arm64/lib/bitops.S | 68 ++++++++++++
xen/arch/arm/arm64/lib/bitops.c | 22 ----
xen/include/asm-arm/arm64/bitops.h | 203 ++----------------------------------
3 files changed, 76 insertions(+), 217 deletions(-)
Upstream: xenbits.xen.org