Module core::arch::aarch64

🔬 This is a nightly-only experimental API. (stdsimd #27731)
This is supported on AArch64 only.

Platform-specific intrinsics for the aarch64 platform.

See the module documentation for more details.

Structs

APSRExperimentalAArch64

Application Program Status Register

ISHExperimentalAArch64

Inner Shareable is the required shareability domain, reads and writes are the required access types

ISHSTExperimentalAArch64

Inner Shareable is the required shareability domain, writes are the required access type

NSHExperimentalAArch64

Non-shareable is the required shareability domain, reads and writes are the required access types

NSHSTExperimentalAArch64

Non-shareable is the required shareability domain, writes are the required access type

OSHExperimentalAArch64

Outer Shareable is the required shareability domain, reads and writes are the required access types

OSHSTExperimentalAArch64

Outer Shareable is the required shareability domain, writes are the required access type

STExperimentalAArch64

Full system is the required shareability domain, writes are the required access type

SYExperimentalAArch64

Full system is the required shareability domain, reads and writes are the required access types

float32x2_tExperimentalAArch64

ARM-specific 64-bit wide vector of two packed f32.

float32x4_tExperimentalAArch64

ARM-specific 128-bit wide vector of four packed f32.

float64x1_tExperimentalAArch64

ARM-specific 64-bit wide vector of one packed f64.

float64x2_tExperimentalAArch64

ARM-specific 128-bit wide vector of two packed f64.

int8x4_tExperimentalAArch64

ARM-specific 32-bit wide vector of four packed i8.

int8x8_tExperimentalAArch64

ARM-specific 64-bit wide vector of eight packed i8.

int8x8x2_tExperimentalAArch64

ARM-specific type containing two int8x8_t vectors.

int8x8x3_tExperimentalAArch64

ARM-specific type containing three int8x8_t vectors.

int8x8x4_tExperimentalAArch64

ARM-specific type containing four int8x8_t vectors.

int8x16_tExperimentalAArch64

ARM-specific 128-bit wide vector of sixteen packed i8.

int8x16x2_tExperimentalAArch64

ARM-specific type containing two int8x16_t vectors.

int8x16x3_tExperimentalAArch64

ARM-specific type containing three int8x16_t vectors.

int8x16x4_tExperimentalAArch64

ARM-specific type containing four int8x16_t vectors.

int16x2_tExperimentalAArch64

ARM-specific 32-bit wide vector of two packed i16.

int16x4_tExperimentalAArch64

ARM-specific 64-bit wide vector of four packed i16.

int16x8_tExperimentalAArch64

ARM-specific 128-bit wide vector of eight packed i16.

int32x2_tExperimentalAArch64

ARM-specific 64-bit wide vector of two packed i32.

int32x4_tExperimentalAArch64

ARM-specific 128-bit wide vector of four packed i32.

int64x1_tExperimentalAArch64

ARM-specific 64-bit wide vector of one packed i64.

int64x2_tExperimentalAArch64

ARM-specific 128-bit wide vector of two packed i64.

poly8x8_tExperimentalAArch64

ARM-specific 64-bit wide polynomial vector of eight packed u8.

poly8x8x2_tExperimentalAArch64

ARM-specific type containing two poly8x8_t vectors.

poly8x8x3_tExperimentalAArch64

ARM-specific type containing three poly8x8_t vectors.

poly8x8x4_tExperimentalAArch64

ARM-specific type containing four poly8x8_t vectors.

poly8x16_tExperimentalAArch64

ARM-specific 128-bit wide polynomial vector of sixteen packed u8.

poly8x16x2_tExperimentalAArch64

ARM-specific type containing two poly8x16_t vectors.

poly8x16x3_tExperimentalAArch64

ARM-specific type containing three poly8x16_t vectors.

poly8x16x4_tExperimentalAArch64

ARM-specific type containing four poly8x16_t vectors.

poly16x4_tExperimentalAArch64

ARM-specific 64-bit wide polynomial vector of four packed u16.

poly16x8_tExperimentalAArch64

ARM-specific 128-bit wide polynomial vector of eight packed u16.

poly64_tExperimentalAArch64

ARM-specific 64-bit wide vector of one packed p64.

poly64x1_tExperimentalAArch64

ARM-specific 64-bit wide vector of one packed p64.

poly64x2_tExperimentalAArch64

ARM-specific 128-bit wide vector of two packed p64.

poly128_tExperimentalAArch64

ARM-specific 128-bit wide vector of one packed p128.

uint8x4_tExperimentalAArch64

ARM-specific 32-bit wide vector of four packed u8.

uint8x8_tExperimentalAArch64

ARM-specific 64-bit wide vector of eight packed u8.

uint8x8x2_tExperimentalAArch64

ARM-specific type containing two uint8x8_t vectors.

uint8x8x3_tExperimentalAArch64

ARM-specific type containing three uint8x8_t vectors.

uint8x8x4_tExperimentalAArch64

ARM-specific type containing four uint8x8_t vectors.

uint8x16_tExperimentalAArch64

ARM-specific 128-bit wide vector of sixteen packed u8.

uint8x16x2_tExperimentalAArch64

ARM-specific type containing two uint8x16_t vectors.

uint8x16x3_tExperimentalAArch64

ARM-specific type containing three uint8x16_t vectors.

uint8x16x4_tExperimentalAArch64

ARM-specific type containing four uint8x16_t vectors.

uint16x2_tExperimentalAArch64

ARM-specific 32-bit wide vector of two packed u16.

uint16x4_tExperimentalAArch64

ARM-specific 64-bit wide vector of four packed u16.

uint16x8_tExperimentalAArch64

ARM-specific 128-bit wide vector of eight packed u16.

uint32x2_tExperimentalAArch64

ARM-specific 64-bit wide vector of two packed u32.

uint32x4_tExperimentalAArch64

ARM-specific 128-bit wide vector of four packed u32.

uint64x1_tExperimentalAArch64

ARM-specific 64-bit wide vector of one packed u64.

uint64x2_tExperimentalAArch64

ARM-specific 128-bit wide vector of two packed u64.

Constants

_TMFAILURE_CNCLExperimentalAArch64

Transaction executed a TCANCEL instruction

_TMFAILURE_DBGExperimentalAArch64

Transaction aborted due to a debug trap.

_TMFAILURE_ERRExperimentalAArch64

Transaction aborted because a non-permissible operation was attempted

_TMFAILURE_IMPExperimentalAArch64

Fallback error type for any other reason

_TMFAILURE_INTExperimentalAArch64

Transaction aborted because of an interrupt.

_TMFAILURE_MEMExperimentalAArch64

Transaction aborted because a conflict occurred

_TMFAILURE_NESTExperimentalAArch64

Transaction aborted because the transactional nesting level was exceeded.

_TMFAILURE_REASONExperimentalAArch64

Extraction mask for failure reason

_TMFAILURE_RTRYExperimentalAArch64

Transaction retry is possible.

_TMFAILURE_SIZEExperimentalAArch64

Transaction aborted because the read or write set limit was exceeded.

_TMFAILURE_TRIVIALExperimentalAArch64

Indicates a TRIVIAL version of TM is available

_TMSTART_SUCCESSExperimentalAArch64

Transaction successfully started.

Functions

__breakpointExperimentalAArch64

Inserts a breakpoint instruction.

__clrexExperimentalAArch64

Removes the exclusive lock created by LDREX

__crc32bExperimentalcrc and v8 and AArch64

CRC32 single round checksum for bytes (8 bits).

__crc32cbExperimentalcrc and v8 and AArch64

CRC32-C single round checksum for bytes (8 bits).

__crc32cdExperimentalAArch64 and crc

CRC32-C single round checksum for quad words (64 bits).

__crc32chExperimentalcrc and v8 and AArch64

CRC32-C single round checksum for half words (16 bits).

__crc32cwExperimentalcrc and v8 and AArch64

CRC32-C single round checksum for words (32 bits).

__crc32dExperimentalAArch64 and crc

CRC32 single round checksum for quad words (64 bits).

__crc32hExperimentalcrc and v8 and AArch64

CRC32 single round checksum for half words (16 bits).

__crc32wExperimentalcrc and v8 and AArch64

CRC32 single round checksum for words (32 bits).
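
Each CRC32 intrinsic folds one input of the given width into a running 32-bit checksum, so wider inputs need fewer rounds. A minimal sketch of checksumming a byte slice with __crc32b, assuming the nightly stdsimd feature and the crc target feature (crc32_bytes is a hypothetical helper name; seed and final-XOR conventions depend on the protocol):

```rust
#![feature(stdsimd)]
use core::arch::aarch64::__crc32b;

// Fold each byte into the running checksum, one CRC round per byte.
// The caller supplies the initial value and applies any final XOR.
#[target_feature(enable = "crc")]
unsafe fn crc32_bytes(mut crc: u32, data: &[u8]) -> u32 {
    for &byte in data {
        crc = __crc32b(crc, byte);
    }
    crc
}
```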

__dbgExperimentalAArch64

Generates a DBG instruction.

__dmbExperimentalAArch64

Generates a DMB (data memory barrier) instruction or equivalent CP15 instruction.

__dsbExperimentalAArch64

Generates a DSB (data synchronization barrier) instruction or equivalent CP15 instruction.

__isbExperimentalAArch64

Generates an ISB (instruction synchronization barrier) instruction or equivalent CP15 instruction.
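
The barrier intrinsics take one of the shareability-domain marker types listed under Structs. A hedged sketch of a store-store publish using a full-system DMB, assuming `__dmb(SY)` is the calling convention (real code would normally use core::sync::atomic instead):

```rust
#![feature(stdsimd)]
use core::arch::aarch64::{__dmb, SY};

// Make the store to `data` visible to all observers before the store
// to `ready` (DMB SY orders both reads and writes, full system).
unsafe fn publish(data: *mut u32, ready: *mut bool) {
    data.write(42);
    __dmb(SY);
    ready.write(true);
}
```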

__ldrexExperimentalAArch64

Executes an exclusive LDR instruction for a 32-bit value.

__ldrexbExperimentalAArch64

Executes an exclusive LDR instruction for an 8-bit value.

__ldrexhExperimentalAArch64

Executes an exclusive LDR instruction for a 16-bit value.

__nopExperimentalAArch64

Generates an unspecified no-op instruction.

__qaddExperimentalAArch64

Signed saturating addition
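
For example, __qadd clamps at the type bounds instead of wrapping on overflow. A small sketch, assuming the nightly stdsimd feature and the `__qadd(a: i32, b: i32) -> i32` signature:

```rust
#![feature(stdsimd)]
use core::arch::aarch64::__qadd;

fn main() {
    unsafe {
        // Saturates to i32::MAX rather than wrapping to i32::MIN.
        assert_eq!(__qadd(i32::MAX, 1), i32::MAX);
    }
}
```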

__qadd8ExperimentalAArch64

Saturating four 8-bit integer additions

__qadd16ExperimentalAArch64

Saturating two 16-bit integer additions

__qasxExperimentalAArch64

Signed saturating add and subtract with exchange.

__qdblExperimentalAArch64

Insert a QADD instruction

__qsaxExperimentalAArch64

Signed saturating subtract and add with exchange.

__qsubExperimentalAArch64

Signed saturating subtraction

__qsub8ExperimentalAArch64

Saturating four 8-bit integer subtractions

__qsub16ExperimentalAArch64

Saturating two 16-bit integer subtractions

__rsrExperimentalAArch64

Reads a 32-bit system register

__rsrpExperimentalAArch64

Reads a system register containing an address

__sadd8ExperimentalAArch64

Performs four parallel 8-bit signed additions.

__sadd16ExperimentalAArch64

Performs two parallel 16-bit signed additions.

__sasxExperimentalAArch64

Signed add and subtract with exchange.

__selExperimentalAArch64

Select bytes from each operand according to APSR GE flags

__sevExperimentalAArch64

Generates a SEV (send a global event) hint instruction.

__shadd8ExperimentalAArch64

Signed halving parallel byte-wise addition.

__shadd16ExperimentalAArch64

Signed halving parallel halfword-wise addition.

__shsub8ExperimentalAArch64

Signed halving parallel byte-wise subtraction.

__shsub16ExperimentalAArch64

Signed halving parallel halfword-wise subtraction.

__smlabbExperimentalAArch64

Insert a SMLABB instruction

__smlabtExperimentalAArch64

Insert a SMLABT instruction

__smladExperimentalAArch64

Dual 16-bit Signed Multiply with Addition of products and 32-bit accumulation.

__smlatbExperimentalAArch64

Insert a SMLATB instruction

__smlattExperimentalAArch64

Insert a SMLATT instruction

__smlawbExperimentalAArch64

Insert a SMLAWB instruction

__smlawtExperimentalAArch64

Insert a SMLAWT instruction

__smlsdExperimentalAArch64

Dual 16-bit Signed Multiply with Subtraction of products and 32-bit accumulation and overflow detection.

__smuadExperimentalAArch64

Signed Dual Multiply Add.

__smuadxExperimentalAArch64

Signed Dual Multiply Add Reversed.

__smulbbExperimentalAArch64

Insert a SMULBB instruction

__smulbtExperimentalAArch64

Insert a SMULBT instruction

__smultbExperimentalAArch64

Insert a SMULTB instruction

__smulttExperimentalAArch64

Insert a SMULTT instruction

__smulwbExperimentalAArch64

Insert a SMULWB instruction

__smulwtExperimentalAArch64

Insert a SMULWT instruction

__smusdExperimentalAArch64

Signed Dual Multiply Subtract.

__smusdxExperimentalAArch64

Signed Dual Multiply Subtract Reversed.

__ssub8ExperimentalAArch64

Inserts a SSUB8 instruction.

__strexExperimentalAArch64

Executes an exclusive STR instruction for 32-bit values

__strexbExperimentalAArch64

Executes an exclusive STR instruction for 8-bit values

__strexhExperimentalAArch64

Executes an exclusive STR instruction for 16-bit values
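
Together with the __ldrex* loads above, these form the classic load-exclusive/store-exclusive retry loop. A sketch, assuming `__strex(value, addr)` returns 0 on success (prefer core::sync::atomic in real code; fetch_add is a hypothetical helper name):

```rust
#![feature(stdsimd)]
use core::arch::aarch64::{__ldrex, __strex};

// Atomically increment a 32-bit counter; returns the old value.
unsafe fn fetch_add(counter: *mut u32) -> u32 {
    loop {
        let old = __ldrex(counter);
        // A non-zero result means the exclusive reservation was lost;
        // reload and retry.
        if __strex(old.wrapping_add(1), counter) == 0 {
            return old;
        }
    }
}
```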

__tcancelExperimentalAArch64 and tme

Cancels the current transaction and discards all state modifications that were performed transactionally.

__tcommitExperimentalAArch64 and tme

Commits the current transaction. For a nested transaction, the only effect is that the transactional nesting depth is decreased. For an outer transaction, the state modifications performed transactionally are committed to the architectural state.

__tstartExperimentalAArch64 and tme

Starts a new transaction. When the transaction starts successfully the return value is 0. If the transaction fails, all state modifications are discarded and a cause of the failure is encoded in the return value.

__ttestExperimentalAArch64 and tme

Tests if executing inside a transaction. If no transaction is currently executing, the return value is 0. Otherwise, this intrinsic returns the depth of the transaction.
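
Putting the TME pieces together: __tstart returns _TMSTART_SUCCESS (0) when the transaction begins, and otherwise a failure word that can be tested against the _TMFAILURE_* masks listed under Constants. A hedged sketch of a retry loop (nightly-only; exact signatures may differ between toolchain versions, and `transactional` is a hypothetical helper name):

```rust
#![feature(stdsimd)]
use core::arch::aarch64::{__tstart, __tcommit, _TMFAILURE_RTRY, _TMSTART_SUCCESS};

// Run `work` inside a transaction, retrying while the failure word
// says a retry may succeed; returns false on a permanent failure.
// Requires the `tme` target feature at compile time.
unsafe fn transactional(work: fn()) -> bool {
    loop {
        let status = __tstart();
        if status == _TMSTART_SUCCESS {
            work();          // runs transactionally
            __tcommit();     // commit the outer transaction
            return true;
        }
        if status & _TMFAILURE_RTRY == 0 {
            return false;    // cause is not retryable
        }
    }
}
```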

__usad8ExperimentalAArch64

Sum of 8-bit absolute differences.

__usada8ExperimentalAArch64

Sum of 8-bit absolute differences and constant.

__usub8ExperimentalAArch64

Inserts a USUB8 instruction.

__wfeExperimentalAArch64

Generates a WFE (wait for event) hint instruction, or nothing.

__wfiExperimentalAArch64

Generates a WFI (wait for interrupt) hint instruction, or nothing.

__wsrExperimentalAArch64

Writes a 32-bit system register

__wsrpExperimentalAArch64

Writes a system register containing an address

__yieldExperimentalAArch64

Generates a YIELD hint instruction.

_cls_u32ExperimentalAArch64

Counts the leading most significant bits that are set.

_cls_u64ExperimentalAArch64

Counts the leading most significant bits that are set.

_clz_u8ExperimentalAArch64 and v7

Count Leading Zeros.

_clz_u16ExperimentalAArch64 and v7

Count Leading Zeros.

_clz_u32ExperimentalAArch64 and v7

Count Leading Zeros.

_clz_u64ExperimentalAArch64

Count Leading Zeros.

_rbit_u32ExperimentalAArch64 and v7

Reverse the bit order.

_rbit_u64ExperimentalAArch64

Reverse the bit order.

_rev_u16ExperimentalAArch64

Reverse the order of the bytes.

_rev_u32ExperimentalAArch64

Reverse the order of the bytes.

_rev_u64ExperimentalAArch64

Reverse the order of the bytes.
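
A quick sketch of what the leading-zero, bit-reversal, and byte-reversal intrinsics compute (nightly-only; the `u32 -> u32` signatures shown are assumptions matching the underlying CLZ/RBIT/REV instructions):

```rust
#![feature(stdsimd)]
use core::arch::aarch64::{_clz_u32, _rbit_u32, _rev_u32};

fn main() {
    unsafe {
        assert_eq!(_clz_u32(1), 31);                    // 31 leading zero bits
        assert_eq!(_rbit_u32(1), 0x8000_0000);          // bit order reversed
        assert_eq!(_rev_u32(0x1122_3344), 0x4433_2211); // byte order reversed
    }
}
```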

brkExperimentalAArch64

Generates the trap instruction BRK 1

udfExperimentalAArch64

Generates the trap instruction UDF

vabs_s8Experimentalneon and v7 and AArch64

Absolute value (wrapping).

vabs_s16Experimentalneon and v7 and AArch64

Absolute value (wrapping).

vabs_s32Experimentalneon and v7 and AArch64

Absolute value (wrapping).

vabs_s64ExperimentalAArch64 and neon

Absolute Value (wrapping).

vabsd_s64ExperimentalAArch64 and neon

Absolute Value (wrapping).

vabsq_s8Experimentalneon and v7 and AArch64

Absolute value (wrapping).

vabsq_s16Experimentalneon and v7 and AArch64

Absolute value (wrapping).

vabsq_s32Experimentalneon and v7 and AArch64

Absolute value (wrapping).

vabsq_s64ExperimentalAArch64 and neon

Absolute Value (wrapping).

vadd_f32Experimentalneon and v7 and AArch64

Vector add.

vadd_f64ExperimentalAArch64 and neon

Vector add.

vadd_s8Experimentalneon and v7 and AArch64

Vector add.

vadd_s16Experimentalneon and v7 and AArch64

Vector add.

vadd_s32Experimentalneon and v7 and AArch64

Vector add.

vadd_u8Experimentalneon and v7 and AArch64

Vector add.

vadd_u16Experimentalneon and v7 and AArch64

Vector add.

vadd_u32Experimentalneon and v7 and AArch64

Vector add.

vaddd_s64ExperimentalAArch64 and neon

Vector add.

vaddd_u64ExperimentalAArch64 and neon

Vector add.

vaddl_s8Experimentalneon and v7 and AArch64

Vector long add.

vaddl_s16Experimentalneon and v7 and AArch64

Vector long add.

vaddl_s32Experimentalneon and v7 and AArch64

Vector long add.

vaddl_u8Experimentalneon and v7 and AArch64

Vector long add.

vaddl_u16Experimentalneon and v7 and AArch64

Vector long add.

vaddl_u32Experimentalneon and v7 and AArch64

Vector long add.

vaddq_f32Experimentalneon and v7 and AArch64

Vector add.

vaddq_f64ExperimentalAArch64 and neon

Vector add.

vaddq_s8Experimentalneon and v7 and AArch64

Vector add.

vaddq_s16Experimentalneon and v7 and AArch64

Vector add.

vaddq_s32Experimentalneon and v7 and AArch64

Vector add.

vaddq_s64Experimentalneon and v7 and AArch64

Vector add.

vaddq_u8Experimentalneon and v7 and AArch64

Vector add.

vaddq_u16Experimentalneon and v7 and AArch64

Vector add.

vaddq_u32Experimentalneon and v7 and AArch64

Vector add.

vaddq_u64Experimentalneon and v7 and AArch64

Vector add.
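
A minimal sketch of a lane-wise vaddq_u8, building the inputs with vdupq_n_u8 (listed further down; nightly-only, `add_lanes` is a hypothetical helper name):

```rust
#![feature(stdsimd)]
use core::arch::aarch64::{uint8x16_t, vaddq_u8, vdupq_n_u8};

// Adds the sixteen u8 lanes pairwise: here every lane is 7 + 3 = 10.
#[target_feature(enable = "neon")]
unsafe fn add_lanes() -> uint8x16_t {
    vaddq_u8(vdupq_n_u8(7), vdupq_n_u8(3))
}
```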

vaesdq_u8ExperimentalAArch64 and crypto

AES single round decryption.

vaeseq_u8ExperimentalAArch64 and crypto

AES single round encryption.

vaesimcq_u8ExperimentalAArch64 and crypto

AES inverse mix columns.

vaesmcq_u8ExperimentalAArch64 and crypto

AES mix columns.
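
One full AES encryption round chains the two intrinsics: AESE performs AddRoundKey, SubBytes, and ShiftRows, and AESMC applies MixColumns. A sketch (nightly-only, crypto target feature assumed):

```rust
#![feature(stdsimd)]
use core::arch::aarch64::{uint8x16_t, vaeseq_u8, vaesmcq_u8};

// state = MixColumns(ShiftRows(SubBytes(state ^ round_key)))
#[target_feature(enable = "crypto")]
unsafe fn aes_enc_round(state: uint8x16_t, round_key: uint8x16_t) -> uint8x16_t {
    vaesmcq_u8(vaeseq_u8(state, round_key))
}
```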

vand_s8Experimentalneon and v7 and AArch64

Vector bitwise and

vand_s16Experimentalneon and v7 and AArch64

Vector bitwise and

vand_s32Experimentalneon and v7 and AArch64

Vector bitwise and

vand_s64Experimentalneon and v7 and AArch64

Vector bitwise and

vand_u8Experimentalneon and v7 and AArch64

Vector bitwise and

vand_u16Experimentalneon and v7 and AArch64

Vector bitwise and

vand_u32Experimentalneon and v7 and AArch64

Vector bitwise and

vand_u64Experimentalneon and v7 and AArch64

Vector bitwise and

vandq_s8Experimentalneon and v7 and AArch64

Vector bitwise and

vandq_s16Experimentalneon and v7 and AArch64

Vector bitwise and

vandq_s32Experimentalneon and v7 and AArch64

Vector bitwise and

vandq_s64Experimentalneon and v7 and AArch64

Vector bitwise and

vandq_u8Experimentalneon and v7 and AArch64

Vector bitwise and

vandq_u16Experimentalneon and v7 and AArch64

Vector bitwise and

vandq_u32Experimentalneon and v7 and AArch64

Vector bitwise and

vandq_u64Experimentalneon and v7 and AArch64

Vector bitwise and

vceq_f32Experimentalneon and v7 and AArch64

Floating-point compare equal

vceq_f64ExperimentalAArch64 and neon

Floating-point compare equal

vceq_p64ExperimentalAArch64 and neon

Compare bitwise Equal (vector)

vceq_s8Experimentalneon and v7 and AArch64

Compare bitwise Equal (vector)

vceq_s16Experimentalneon and v7 and AArch64

Compare bitwise Equal (vector)

vceq_s32Experimentalneon and v7 and AArch64

Compare bitwise Equal (vector)

vceq_s64ExperimentalAArch64 and neon

Compare bitwise Equal (vector)

vceq_u8Experimentalneon and v7 and AArch64

Compare bitwise Equal (vector)

vceq_u16Experimentalneon and v7 and AArch64

Compare bitwise Equal (vector)

vceq_u32Experimentalneon and v7 and AArch64

Compare bitwise Equal (vector)

vceq_u64ExperimentalAArch64 and neon

Compare bitwise Equal (vector)

vceqq_f32Experimentalneon and v7 and AArch64

Floating-point compare equal

vceqq_f64ExperimentalAArch64 and neon

Floating-point compare equal

vceqq_p64ExperimentalAArch64 and neon

Compare bitwise Equal (vector)

vceqq_s8Experimentalneon and v7 and AArch64

Compare bitwise Equal (vector)

vceqq_s16Experimentalneon and v7 and AArch64

Compare bitwise Equal (vector)

vceqq_s32Experimentalneon and v7 and AArch64

Compare bitwise Equal (vector)

vceqq_s64ExperimentalAArch64 and neon

Compare bitwise Equal (vector)

vceqq_u8Experimentalneon and v7 and AArch64

Compare bitwise Equal (vector)

vceqq_u16Experimentalneon and v7 and AArch64

Compare bitwise Equal (vector)

vceqq_u32Experimentalneon and v7 and AArch64

Compare bitwise Equal (vector)

vceqq_u64ExperimentalAArch64 and neon

Compare bitwise Equal (vector)
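
The compare intrinsics return a mask vector: each lane is all ones where the comparison holds and all zeros otherwise, which combines naturally with the bitwise vand/vorr families above. A sketch testing lanes against zero (nightly-only; `zero_mask` is a hypothetical helper name):

```rust
#![feature(stdsimd)]
use core::arch::aarch64::{uint8x16_t, vceqq_u8, vdupq_n_u8};

// Lanes equal to zero become 0xFF; all other lanes become 0x00.
#[target_feature(enable = "neon")]
unsafe fn zero_mask(v: uint8x16_t) -> uint8x16_t {
    vceqq_u8(v, vdupq_n_u8(0))
}
```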

vcge_f32Experimentalneon and v7 and AArch64

Floating-point compare greater than or equal

vcge_f64ExperimentalAArch64 and neon

Floating-point compare greater than or equal

vcge_s8Experimentalneon and v7 and AArch64

Compare signed greater than or equal

vcge_s16Experimentalneon and v7 and AArch64

Compare signed greater than or equal

vcge_s32Experimentalneon and v7 and AArch64

Compare signed greater than or equal

vcge_s64ExperimentalAArch64 and neon

Compare signed greater than or equal

vcge_u8Experimentalneon and v7 and AArch64

Compare unsigned greater than or equal

vcge_u16Experimentalneon and v7 and AArch64

Compare unsigned greater than or equal

vcge_u32Experimentalneon and v7 and AArch64

Compare unsigned greater than or equal

vcge_u64ExperimentalAArch64 and neon

Compare unsigned greater than or equal

vcgeq_f32Experimentalneon and v7 and AArch64

Floating-point compare greater than or equal

vcgeq_f64ExperimentalAArch64 and neon

Floating-point compare greater than or equal

vcgeq_s8Experimentalneon and v7 and AArch64

Compare signed greater than or equal

vcgeq_s16Experimentalneon and v7 and AArch64

Compare signed greater than or equal

vcgeq_s32Experimentalneon and v7 and AArch64

Compare signed greater than or equal

vcgeq_s64ExperimentalAArch64 and neon

Compare signed greater than or equal

vcgeq_u8Experimentalneon and v7 and AArch64

Compare unsigned greater than or equal

vcgeq_u16Experimentalneon and v7 and AArch64

Compare unsigned greater than or equal

vcgeq_u32Experimentalneon and v7 and AArch64

Compare unsigned greater than or equal

vcgeq_u64ExperimentalAArch64 and neon

Compare unsigned greater than or equal

vcgt_f32Experimentalneon and v7 and AArch64

Floating-point compare greater than

vcgt_f64ExperimentalAArch64 and neon

Floating-point compare greater than

vcgt_s8Experimentalneon and v7 and AArch64

Compare signed greater than

vcgt_s16Experimentalneon and v7 and AArch64

Compare signed greater than

vcgt_s32Experimentalneon and v7 and AArch64

Compare signed greater than

vcgt_s64ExperimentalAArch64 and neon

Compare signed greater than

vcgt_u8Experimentalneon and v7 and AArch64

Compare unsigned higher

vcgt_u16Experimentalneon and v7 and AArch64

Compare unsigned higher

vcgt_u32Experimentalneon and v7 and AArch64

Compare unsigned higher

vcgt_u64ExperimentalAArch64 and neon

Compare unsigned higher

vcgtq_f32Experimentalneon and v7 and AArch64

Floating-point compare greater than

vcgtq_f64ExperimentalAArch64 and neon

Floating-point compare greater than

vcgtq_s8Experimentalneon and v7 and AArch64

Compare signed greater than

vcgtq_s16Experimentalneon and v7 and AArch64

Compare signed greater than

vcgtq_s32Experimentalneon and v7 and AArch64

Compare signed greater than

vcgtq_s64ExperimentalAArch64 and neon

Compare signed greater than

vcgtq_u8Experimentalneon and v7 and AArch64

Compare unsigned higher

vcgtq_u16Experimentalneon and v7 and AArch64

Compare unsigned higher

vcgtq_u32Experimentalneon and v7 and AArch64

Compare unsigned higher

vcgtq_u64ExperimentalAArch64 and neon

Compare unsigned higher

vcle_f32Experimentalneon and v7 and AArch64

Floating-point compare less than or equal

vcle_f64ExperimentalAArch64 and neon

Floating-point compare less than or equal

vcle_s8Experimentalneon and v7 and AArch64

Compare signed less than or equal

vcle_s16Experimentalneon and v7 and AArch64

Compare signed less than or equal

vcle_s32Experimentalneon and v7 and AArch64

Compare signed less than or equal

vcle_s64ExperimentalAArch64 and neon

Compare signed less than or equal

vcle_u8Experimentalneon and v7 and AArch64

Compare unsigned less than or equal

vcle_u16Experimentalneon and v7 and AArch64

Compare unsigned less than or equal

vcle_u32Experimentalneon and v7 and AArch64

Compare unsigned less than or equal

vcle_u64ExperimentalAArch64 and neon

Compare unsigned less than or equal

vcleq_f32Experimentalneon and v7 and AArch64

Floating-point compare less than or equal

vcleq_f64ExperimentalAArch64 and neon

Floating-point compare less than or equal

vcleq_s8Experimentalneon and v7 and AArch64

Compare signed less than or equal

vcleq_s16Experimentalneon and v7 and AArch64

Compare signed less than or equal

vcleq_s32Experimentalneon and v7 and AArch64

Compare signed less than or equal

vcleq_s64ExperimentalAArch64 and neon

Compare signed less than or equal

vcleq_u8Experimentalneon and v7 and AArch64

Compare unsigned less than or equal

vcleq_u16Experimentalneon and v7 and AArch64

Compare unsigned less than or equal

vcleq_u32Experimentalneon and v7 and AArch64

Compare unsigned less than or equal

vcleq_u64ExperimentalAArch64 and neon

Compare unsigned less than or equal

vclt_f32Experimentalneon and v7 and AArch64

Floating-point compare less than

vclt_f64ExperimentalAArch64 and neon

Floating-point compare less than

vclt_s8Experimentalneon and v7 and AArch64

Compare signed less than

vclt_s16Experimentalneon and v7 and AArch64

Compare signed less than

vclt_s32Experimentalneon and v7 and AArch64

Compare signed less than

vclt_s64ExperimentalAArch64 and neon

Compare signed less than

vclt_u8Experimentalneon and v7 and AArch64

Compare unsigned less than

vclt_u16Experimentalneon and v7 and AArch64

Compare unsigned less than

vclt_u32Experimentalneon and v7 and AArch64

Compare unsigned less than

vclt_u64ExperimentalAArch64 and neon

Compare unsigned less than

vcltq_f32Experimentalneon and v7 and AArch64

Floating-point compare less than

vcltq_f64ExperimentalAArch64 and neon

Floating-point compare less than

vcltq_s8Experimentalneon and v7 and AArch64

Compare signed less than

vcltq_s16Experimentalneon and v7 and AArch64

Compare signed less than

vcltq_s32Experimentalneon and v7 and AArch64

Compare signed less than

vcltq_s64ExperimentalAArch64 and neon

Compare signed less than

vcltq_u8Experimentalneon and v7 and AArch64

Compare unsigned less than

vcltq_u16Experimentalneon and v7 and AArch64

Compare unsigned less than

vcltq_u32Experimentalneon and v7 and AArch64

Compare unsigned less than

vcltq_u64ExperimentalAArch64 and neon

Compare unsigned less than

vcombine_f32ExperimentalAArch64 and neon

Vector combine

vcombine_f64ExperimentalAArch64 and neon

Vector combine

vcombine_p8ExperimentalAArch64 and neon

Vector combine

vcombine_p16ExperimentalAArch64 and neon

Vector combine

vcombine_p64ExperimentalAArch64 and neon

Vector combine

vcombine_s8ExperimentalAArch64 and neon

Vector combine

vcombine_s16ExperimentalAArch64 and neon

Vector combine

vcombine_s32ExperimentalAArch64 and neon

Vector combine

vcombine_s64ExperimentalAArch64 and neon

Vector combine

vcombine_u8ExperimentalAArch64 and neon

Vector combine

vcombine_u16ExperimentalAArch64 and neon

Vector combine

vcombine_u32ExperimentalAArch64 and neon

Vector combine

vcombine_u64ExperimentalAArch64 and neon

Vector combine

vdupq_n_s8Experimentalneon and v7 and AArch64

Duplicate vector element to vector or scalar

vdupq_n_u8Experimentalneon and v7 and AArch64

Duplicate vector element to vector or scalar

veor_s8Experimentalneon and v7 and AArch64

Vector bitwise exclusive or (vector)

veor_s16Experimentalneon and v7 and AArch64

Vector bitwise exclusive or (vector)

veor_s32Experimentalneon and v7 and AArch64

Vector bitwise exclusive or (vector)

veor_s64Experimentalneon and v7 and AArch64

Vector bitwise exclusive or (vector)

veor_u8Experimentalneon and v7 and AArch64

Vector bitwise exclusive or (vector)

veor_u16Experimentalneon and v7 and AArch64

Vector bitwise exclusive or (vector)

veor_u32Experimentalneon and v7 and AArch64

Vector bitwise exclusive or (vector)

veor_u64Experimentalneon and v7 and AArch64

Vector bitwise exclusive or (vector)

veorq_s8Experimentalneon and v7 and AArch64

Vector bitwise exclusive or (vector)

veorq_s16Experimentalneon and v7 and AArch64

Vector bitwise exclusive or (vector)

veorq_s32Experimentalneon and v7 and AArch64

Vector bitwise exclusive or (vector)

veorq_s64Experimentalneon and v7 and AArch64

Vector bitwise exclusive or (vector)

veorq_u8Experimentalneon and v7 and AArch64

Vector bitwise exclusive or (vector)

veorq_u16Experimentalneon and v7 and AArch64

Vector bitwise exclusive or (vector)

veorq_u32Experimentalneon and v7 and AArch64

Vector bitwise exclusive or (vector)

veorq_u64Experimentalneon and v7 and AArch64

Vector bitwise exclusive or (vector)

vextq_s8Experimentalneon and v7 and AArch64

Extract vector from pair of vectors

vextq_u8Experimentalneon and v7 and AArch64

Extract vector from pair of vectors

vget_lane_u8Experimentalneon and v7 and AArch64

Move vector element to general-purpose register

vget_lane_u64Experimentalneon and v7 and AArch64

Move vector element to general-purpose register

vgetq_lane_u16Experimentalneon and v7 and AArch64

Move vector element to general-purpose register

vgetq_lane_u32Experimentalneon and v7 and AArch64

Move vector element to general-purpose register

vgetq_lane_u64Experimentalneon and v7 and AArch64

Move vector element to general-purpose register

vhadd_s8Experimentalneon and v7 and AArch64

Halving add

vhadd_s16Experimentalneon and v7 and AArch64

Halving add

vhadd_s32Experimentalneon and v7 and AArch64

Halving add

vhadd_u8Experimentalneon and v7 and AArch64

Halving add

vhadd_u16Experimentalneon and v7 and AArch64

Halving add

vhadd_u32Experimentalneon and v7 and AArch64

Halving add

vhaddq_s8Experimentalneon and v7 and AArch64

Halving add

vhaddq_s16Experimentalneon and v7 and AArch64

Halving add

vhaddq_s32Experimentalneon and v7 and AArch64

Halving add

vhaddq_u8Experimentalneon and v7 and AArch64

Halving add

vhaddq_u16Experimentalneon and v7 and AArch64

Halving add

vhaddq_u32Experimentalneon and v7 and AArch64

Halving add

vhsub_s8Experimentalneon and v7 and AArch64

Signed halving subtract

vhsub_s16Experimentalneon and v7 and AArch64

Signed halving subtract

vhsub_s32Experimentalneon and v7 and AArch64

Signed halving subtract

vhsub_u8Experimentalneon and v7 and AArch64

Unsigned halving subtract

vhsub_u16Experimentalneon and v7 and AArch64

Unsigned halving subtract

vhsub_u32Experimentalneon and v7 and AArch64

Unsigned halving subtract

vhsubq_s8Experimentalneon and v7 and AArch64

Signed halving subtract

vhsubq_s16Experimentalneon and v7 and AArch64

Signed halving subtract

vhsubq_s32Experimentalneon and v7 and AArch64

Signed halving subtract

vhsubq_u8Experimentalneon and v7 and AArch64

Unsigned halving subtract

vhsubq_u16Experimentalneon and v7 and AArch64

Unsigned halving subtract

vhsubq_u32Experimentalneon and v7 and AArch64

Unsigned halving subtract

vld1q_s8Experimentalneon and v7 and AArch64

Load multiple single-element structures to one, two, three, or four registers

vld1q_u8Experimentalneon and v7 and AArch64

Load multiple single-element structures to one, two, three, or four registers

vmaxv_f32ExperimentalAArch64 and neon

Horizontal vector max.

vmaxv_s8ExperimentalAArch64 and neon

Horizontal vector max.

vmaxv_s16ExperimentalAArch64 and neon

Horizontal vector max.

vmaxv_s32ExperimentalAArch64 and neon

Horizontal vector max.

vmaxv_u8ExperimentalAArch64 and neon

Horizontal vector max.

vmaxv_u16ExperimentalAArch64 and neon

Horizontal vector max.

vmaxv_u32ExperimentalAArch64 and neon

Horizontal vector max.

vmaxvq_f32ExperimentalAArch64 and neon

Horizontal vector max.

vmaxvq_f64ExperimentalAArch64 and neon

Horizontal vector max.

vmaxvq_s8ExperimentalAArch64 and neon

Horizontal vector max.

vmaxvq_s16ExperimentalAArch64 and neon

Horizontal vector max.

vmaxvq_s32ExperimentalAArch64 and neon

Horizontal vector max.

vmaxvq_u8ExperimentalAArch64 and neon

Horizontal vector max.

vmaxvq_u16ExperimentalAArch64 and neon

Horizontal vector max.

vmaxvq_u32ExperimentalAArch64 and neon

Horizontal vector max.

vminv_f32ExperimentalAArch64 and neon

Horizontal vector min.

vminv_s8ExperimentalAArch64 and neon

Horizontal vector min.

vminv_s16ExperimentalAArch64 and neon

Horizontal vector min.

vminv_s32ExperimentalAArch64 and neon

Horizontal vector min.

vminv_u8ExperimentalAArch64 and neon

Horizontal vector min.

vminv_u16ExperimentalAArch64 and neon

Horizontal vector min.

vminv_u32ExperimentalAArch64 and neon

Horizontal vector min.

vminvq_f32ExperimentalAArch64 and neon

Horizontal vector min.

vminvq_f64ExperimentalAArch64 and neon

Horizontal vector min.

vminvq_s8ExperimentalAArch64 and neon

Horizontal vector min.

vminvq_s16ExperimentalAArch64 and neon

Horizontal vector min.

vminvq_s32ExperimentalAArch64 and neon

Horizontal vector min.

vminvq_u8ExperimentalAArch64 and neon

Horizontal vector min.

vminvq_u16ExperimentalAArch64 and neon

Horizontal vector min.

vminvq_u32ExperimentalAArch64 and neon

Horizontal vector min.
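
The horizontal reductions collapse a whole vector to one scalar lane. A sketch that loads sixteen bytes with vld1q_u8 (listed above) and takes their maximum (nightly-only; `max_of` is a hypothetical helper name):

```rust
#![feature(stdsimd)]
use core::arch::aarch64::{vld1q_u8, vmaxvq_u8};

// Reduce sixteen u8 lanes to the single largest value.
#[target_feature(enable = "neon")]
unsafe fn max_of(bytes: &[u8; 16]) -> u8 {
    vmaxvq_u8(vld1q_u8(bytes.as_ptr()))
}
```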

vmovl_s8Experimentalneon and v7 and AArch64

Vector long move.

vmovl_s16Experimentalneon and v7 and AArch64

Vector long move.

vmovl_s32Experimentalneon and v7 and AArch64

Vector long move.

vmovl_u8Experimentalneon and v7 and AArch64

Vector long move.

vmovl_u16Experimentalneon and v7 and AArch64

Vector long move.

vmovl_u32Experimentalneon and v7 and AArch64

Vector long move.

vmovn_s16Experimentalneon and v7 and AArch64

Vector narrow integer.

vmovn_s32Experimentalneon and v7 and AArch64

Vector narrow integer.

vmovn_s64Experimentalneon and v7 and AArch64

Vector narrow integer.

vmovn_u16Experimentalneon and v7 and AArch64

Vector narrow integer.

vmovn_u32Experimentalneon and v7 and AArch64

Vector narrow integer.

vmovn_u64Experimentalneon and v7 and AArch64

Vector narrow integer.
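
Narrowing keeps the low half of each lane. A sketch that builds eight u16 lanes (reusing vdupq_n_u8 and vreinterpretq_u16_u8 from this listing) and narrows them back down (nightly-only):

```rust
#![feature(stdsimd)]
use core::arch::aarch64::{uint8x8_t, vdupq_n_u8, vmovn_u16, vreinterpretq_u16_u8};

// Every u16 lane holds 0x4141; narrowing keeps the low byte, 0x41.
#[target_feature(enable = "neon")]
unsafe fn narrow() -> uint8x8_t {
    let wide = vreinterpretq_u16_u8(vdupq_n_u8(0x41));
    vmovn_u16(wide)
}
```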

vmovq_n_u8Experimentalneon and v7 and AArch64

Duplicate vector element to vector or scalar

vmul_f32Experimentalneon and v7 and AArch64

Multiply

vmul_f64ExperimentalAArch64 and neon

Multiply

vmul_s8Experimentalneon and v7 and AArch64

Multiply

vmul_s16Experimentalneon and v7 and AArch64

Multiply

vmul_s32Experimentalneon and v7 and AArch64

Multiply

vmul_u8Experimentalneon and v7 and AArch64

Multiply

vmul_u16Experimentalneon and v7 and AArch64

Multiply

vmul_u32Experimentalneon and v7 and AArch64

Multiply

vmull_p64ExperimentalAArch64 and neon

Polynomial multiply long

vmulq_f32Experimentalneon and v7 and AArch64

Multiply

vmulq_f64ExperimentalAArch64 and neon

Multiply

vmulq_s8Experimentalneon and v7 and AArch64

Multiply

vmulq_s16Experimentalneon and v7 and AArch64

Multiply

vmulq_s32Experimentalneon and v7 and AArch64

Multiply

vmulq_u8Experimentalneon and v7 and AArch64

Multiply

vmulq_u16Experimentalneon and v7 and AArch64

Multiply

vmulq_u32Experimentalneon and v7 and AArch64

Multiply

vmvn_p8Experimentalneon and v7 and AArch64

Vector bitwise not.

vmvn_s8Experimentalneon and v7 and AArch64

Vector bitwise not.

vmvn_s16Experimentalneon and v7 and AArch64

Vector bitwise not.

vmvn_s32Experimentalneon and v7 and AArch64

Vector bitwise not.

vmvn_u8Experimentalneon and v7 and AArch64

Vector bitwise not.

vmvn_u16Experimentalneon and v7 and AArch64

Vector bitwise not.

vmvn_u32Experimentalneon and v7 and AArch64

Vector bitwise not.

vmvnq_p8Experimentalneon and v7 and AArch64

Vector bitwise not.

vmvnq_s8Experimentalneon and v7 and AArch64

Vector bitwise not.

vmvnq_s16Experimentalneon and v7 and AArch64

Vector bitwise not.

vmvnq_s32Experimentalneon and v7 and AArch64

Vector bitwise not.

vmvnq_u8Experimentalneon and v7 and AArch64

Vector bitwise not.

vmvnq_u16Experimentalneon and v7 and AArch64

Vector bitwise not.

vmvnq_u32Experimentalneon and v7 and AArch64

Vector bitwise not.

vorr_s8Experimentalneon and v7 and AArch64

Vector bitwise or (immediate, inclusive)

vorr_s16Experimentalneon and v7 and AArch64

Vector bitwise or (immediate, inclusive)

vorr_s32Experimentalneon and v7 and AArch64

Vector bitwise or (immediate, inclusive)

vorr_s64Experimentalneon and v7 and AArch64

Vector bitwise or (immediate, inclusive)

vorr_u8Experimentalneon and v7 and AArch64

Vector bitwise or (immediate, inclusive)

vorr_u16Experimentalneon and v7 and AArch64

Vector bitwise or (immediate, inclusive)

vorr_u32Experimentalneon and v7 and AArch64

Vector bitwise or (immediate, inclusive)

vorr_u64Experimentalneon and v7 and AArch64

Vector bitwise or (immediate, inclusive)

vorrq_s8Experimentalneon and v7 and AArch64

Vector bitwise or (immediate, inclusive)

vorrq_s16Experimentalneon and v7 and AArch64

Vector bitwise or (immediate, inclusive)

vorrq_s32Experimentalneon and v7 and AArch64

Vector bitwise or (immediate, inclusive)

vorrq_s64Experimentalneon and v7 and AArch64

Vector bitwise or (immediate, inclusive)

vorrq_u8Experimentalneon and v7 and AArch64

Vector bitwise or (immediate, inclusive)

vorrq_u16Experimentalneon and v7 and AArch64

Vector bitwise or (immediate, inclusive)

vorrq_u32Experimentalneon and v7 and AArch64

Vector bitwise or (immediate, inclusive)

vorrq_u64Experimentalneon and v7 and AArch64

Vector bitwise or (immediate, inclusive)

vpaddq_u8ExperimentalAArch64 and neon

Add pairwise

vpmax_f32Experimentalneon and v7 and AArch64

Folding maximum of adjacent pairs

vpmax_s8Experimentalneon and v7 and AArch64

Folding maximum of adjacent pairs

vpmax_s16Experimentalneon and v7 and AArch64

Folding maximum of adjacent pairs

vpmax_s32Experimentalneon and v7 and AArch64

Folding maximum of adjacent pairs

vpmax_u8Experimentalneon and v7 and AArch64

Folding maximum of adjacent pairs

vpmax_u16Experimentalneon and v7 and AArch64

Folding maximum of adjacent pairs

vpmax_u32Experimentalneon and v7 and AArch64

Folding maximum of adjacent pairs

vpmaxq_f32ExperimentalAArch64 and neon

Folding maximum of adjacent pairs

vpmaxq_f64ExperimentalAArch64 and neon

Folding maximum of adjacent pairs

vpmaxq_s8ExperimentalAArch64 and neon

Folding maximum of adjacent pairs

vpmaxq_s16ExperimentalAArch64 and neon

Folding maximum of adjacent pairs

vpmaxq_s32ExperimentalAArch64 and neon

Folding maximum of adjacent pairs

vpmaxq_u8ExperimentalAArch64 and neon

Folding maximum of adjacent pairs

vpmaxq_u16ExperimentalAArch64 and neon

Folding maximum of adjacent pairs

vpmaxq_u32ExperimentalAArch64 and neon

Folding maximum of adjacent pairs

vpmin_f32Experimentalneon and v7 and AArch64

Folding minimum of adjacent pairs

vpmin_s8Experimentalneon and v7 and AArch64

Folding minimum of adjacent pairs

vpmin_s16Experimentalneon and v7 and AArch64

Folding minimum of adjacent pairs

vpmin_s32Experimentalneon and v7 and AArch64

Folding minimum of adjacent pairs

vpmin_u8Experimentalneon and v7 and AArch64

Folding minimum of adjacent pairs

vpmin_u16Experimentalneon and v7 and AArch64

Folding minimum of adjacent pairs

vpmin_u32Experimentalneon and v7 and AArch64

Folding minimum of adjacent pairs

vpminq_f32ExperimentalAArch64 and neon

Folding minimum of adjacent pairs

vpminq_f64ExperimentalAArch64 and neon

Folding minimum of adjacent pairs

vpminq_s8ExperimentalAArch64 and neon

Folding minimum of adjacent pairs

vpminq_s16ExperimentalAArch64 and neon

Folding minimum of adjacent pairs

vpminq_s32ExperimentalAArch64 and neon

Folding minimum of adjacent pairs

vpminq_u8ExperimentalAArch64 and neon

Folding minimum of adjacent pairs

vpminq_u16ExperimentalAArch64 and neon

Folding minimum of adjacent pairs

vpminq_u32ExperimentalAArch64 and neon

Folding minimum of adjacent pairs

vqadd_s8Experimentalneon and v7 and AArch64

Saturating add

vqadd_s16Experimentalneon and v7 and AArch64

Saturating add

vqadd_s32Experimentalneon and v7 and AArch64

Saturating add

vqadd_u8Experimentalneon and v7 and AArch64

Saturating add

vqadd_u16Experimentalneon and v7 and AArch64

Saturating add

vqadd_u32Experimentalneon and v7 and AArch64

Saturating add

vqaddq_s8Experimentalneon and v7 and AArch64

Saturating add

vqaddq_s16Experimentalneon and v7 and AArch64

Saturating add

vqaddq_s32Experimentalneon and v7 and AArch64

Saturating add

vqaddq_u8Experimentalneon and v7 and AArch64

Saturating add

vqaddq_u16Experimentalneon and v7 and AArch64

Saturating add

vqaddq_u32Experimentalneon and v7 and AArch64

Saturating add
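
Unlike the plain vadd family, these clamp at the type bounds instead of wrapping. A sketch brightening pixel bytes without overflow (nightly-only; `brighten` is a hypothetical helper name):

```rust
#![feature(stdsimd)]
use core::arch::aarch64::{uint8x16_t, vdupq_n_u8, vqaddq_u8};

// 250 + 10 saturates to 255 instead of wrapping to 4.
#[target_feature(enable = "neon")]
unsafe fn brighten(pixels: uint8x16_t) -> uint8x16_t {
    vqaddq_u8(pixels, vdupq_n_u8(10))
}
```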

vqmovn_u64Experimentalneon and v7 and AArch64

Unsigned saturating extract narrow.

vqsub_s8Experimentalneon and v7 and AArch64

Saturating subtract

vqsub_s16Experimentalneon and v7 and AArch64

Saturating subtract

vqsub_s32Experimentalneon and v7 and AArch64

Saturating subtract

vqsub_u8Experimentalneon and v7 and AArch64

Saturating subtract

vqsub_u16Experimentalneon and v7 and AArch64

Saturating subtract

vqsub_u32Experimentalneon and v7 and AArch64

Saturating subtract

vqsubq_s8Experimentalneon and v7 and AArch64

Saturating subtract

vqsubq_s16Experimentalneon and v7 and AArch64

Saturating subtract

vqsubq_s32Experimentalneon and v7 and AArch64

Saturating subtract

vqsubq_u8Experimentalneon and v7 and AArch64

Saturating subtract

vqsubq_u16Experimentalneon and v7 and AArch64

Saturating subtract

vqsubq_u32Experimentalneon and v7 and AArch64

Saturating subtract

vqtbl1_p8ExperimentalAArch64 and neon

Table look-up

vqtbl1_s8ExperimentalAArch64 and neon

Table look-up

vqtbl1_u8ExperimentalAArch64 and neon

Table look-up

vqtbl1q_p8ExperimentalAArch64 and neon

Table look-up

vqtbl1q_s8ExperimentalAArch64 and neon

Table look-up

vqtbl1q_u8ExperimentalAArch64 and neon

Table look-up

vqtbl2_p8ExperimentalAArch64 and neon

Table look-up

vqtbl2_s8ExperimentalAArch64 and neon

Table look-up

vqtbl2_u8ExperimentalAArch64 and neon

Table look-up

vqtbl2q_p8ExperimentalAArch64 and neon

Table look-up

vqtbl2q_s8ExperimentalAArch64 and neon

Table look-up

vqtbl2q_u8ExperimentalAArch64 and neon

Table look-up

vqtbl3_p8ExperimentalAArch64 and neon

Table look-up

vqtbl3_s8ExperimentalAArch64 and neon

Table look-up

vqtbl3_u8ExperimentalAArch64 and neon

Table look-up

vqtbl3q_p8ExperimentalAArch64 and neon

Table look-up

vqtbl3q_s8ExperimentalAArch64 and neon

Table look-up

vqtbl3q_u8ExperimentalAArch64 and neon

Table look-up

vqtbl4_p8ExperimentalAArch64 and neon

Table look-up

vqtbl4_s8ExperimentalAArch64 and neon

Table look-up

vqtbl4_u8ExperimentalAArch64 and neon

Table look-up

vqtbl4q_p8ExperimentalAArch64 and neon

Table look-up

vqtbl4q_s8ExperimentalAArch64 and neon

Table look-up

vqtbl4q_u8ExperimentalAArch64 and neon

Table look-up
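
The table look-ups are byte shuffles: each index lane selects a byte from the table, and for the vqtbl forms an out-of-range index yields zero. A single-table sketch (nightly-only; `shuffle` is a hypothetical helper name):

```rust
#![feature(stdsimd)]
use core::arch::aarch64::{uint8x16_t, vld1q_u8, vqtbl1q_u8};

// result[i] = table[idx[i]] for idx[i] < 16, else 0.
#[target_feature(enable = "neon")]
unsafe fn shuffle(table: &[u8; 16], idx: &[u8; 16]) -> uint8x16_t {
    vqtbl1q_u8(vld1q_u8(table.as_ptr()), vld1q_u8(idx.as_ptr()))
}
```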

vqtbx1_p8ExperimentalAArch64 and neon

Extended table look-up

vqtbx1_s8ExperimentalAArch64 and neon

Extended table look-up

vqtbx1_u8ExperimentalAArch64 and neon

Extended table look-up

vqtbx1q_p8ExperimentalAArch64 and neon

Extended table look-up

vqtbx1q_s8ExperimentalAArch64 and neon

Extended table look-up

vqtbx1q_u8ExperimentalAArch64 and neon

Extended table look-up

vqtbx2_p8ExperimentalAArch64 and neon

Extended table look-up

vqtbx2_s8ExperimentalAArch64 and neon

Extended table look-up

vqtbx2_u8ExperimentalAArch64 and neon

Extended table look-up

vqtbx2q_p8ExperimentalAArch64 and neon

Extended table look-up

vqtbx2q_s8ExperimentalAArch64 and neon

Extended table look-up

vqtbx2q_u8ExperimentalAArch64 and neon

Extended table look-up

vqtbx3_p8ExperimentalAArch64 and neon

Extended table look-up

vqtbx3_s8ExperimentalAArch64 and neon

Extended table look-up

vqtbx3_u8ExperimentalAArch64 and neon

Extended table look-up

vqtbx3q_p8ExperimentalAArch64 and neon

Extended table look-up

vqtbx3q_s8ExperimentalAArch64 and neon

Extended table look-up

vqtbx3q_u8ExperimentalAArch64 and neon

Extended table look-up

vqtbx4_p8ExperimentalAArch64 and neon

Extended table look-up

vqtbx4_s8ExperimentalAArch64 and neon

Extended table look-up

vqtbx4_u8ExperimentalAArch64 and neon

Extended table look-up

vqtbx4q_p8ExperimentalAArch64 and neon

Extended table look-up

vqtbx4q_s8ExperimentalAArch64 and neon

Extended table look-up

vqtbx4q_u8ExperimentalAArch64 and neon

Extended table look-up

vreinterpret_u64_u32Experimentalneon and v7 and AArch64

Vector reinterpret cast operation

vreinterpretq_s8_u8Experimentalneon and v7 and AArch64

Vector reinterpret cast operation

vreinterpretq_u8_s8Experimentalneon and v7 and AArch64

Vector reinterpret cast operation

vreinterpretq_u16_u8Experimentalneon and v7 and AArch64

Vector reinterpret cast operation

vreinterpretq_u32_u8Experimentalneon and v7 and AArch64

Vector reinterpret cast operation

vreinterpretq_u64_u8Experimentalneon and v7 and AArch64

Vector reinterpret cast operation
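
Reinterpret casts change only the lane view of the same 128 bits; no data movement is performed. A sketch (nightly-only):

```rust
#![feature(stdsimd)]
use core::arch::aarch64::{uint16x8_t, vdupq_n_u8, vreinterpretq_u16_u8};

// Sixteen 0xAB bytes viewed as eight 0xABAB halfwords; bits untouched.
#[target_feature(enable = "neon")]
unsafe fn as_halfwords() -> uint16x8_t {
    vreinterpretq_u16_u8(vdupq_n_u8(0xAB))
}
```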

vrhadd_s8Experimentalneon and v7 and AArch64

Rounding halving add

vrhadd_s16Experimentalneon and v7 and AArch64

Rounding halving add

vrhadd_s32Experimentalneon and v7 and AArch64

Rounding halving add

vrhadd_u8Experimentalneon and v7 and AArch64

Rounding halving add

vrhadd_u16Experimentalneon and v7 and AArch64

Rounding halving add

vrhadd_u32Experimentalneon and v7 and AArch64

Rounding halving add

vrhaddq_s8Experimentalneon and v7 and AArch64

Rounding halving add

vrhaddq_s16Experimentalneon and v7 and AArch64

Rounding halving add

vrhaddq_s32Experimentalneon and v7 and AArch64

Rounding halving add

vrhaddq_u8Experimentalneon and v7 and AArch64

Rounding halving add

vrhaddq_u16Experimentalneon and v7 and AArch64

Rounding halving add

vrhaddq_u32Experimentalneon and v7 and AArch64

Rounding halving add

vrsqrte_f32ExperimentalAArch64 and neon

Reciprocal square-root estimate.

vsha1cq_u32ExperimentalAArch64 and crypto

SHA1 hash update accelerator, choose.

vsha1h_u32ExperimentalAArch64 and crypto

SHA1 fixed rotate.

vsha1mq_u32ExperimentalAArch64 and crypto

SHA1 hash update accelerator, majority.

vsha1pq_u32ExperimentalAArch64 and crypto

SHA1 hash update accelerator, parity.

vsha1su0q_u32ExperimentalAArch64 and crypto

SHA1 schedule update accelerator, first part.

vsha1su1q_u32ExperimentalAArch64 and crypto

SHA1 schedule update accelerator, second part.

vsha256h2q_u32ExperimentalAArch64 and crypto

SHA256 hash update accelerator, upper part.

vsha256hq_u32ExperimentalAArch64 and crypto

SHA256 hash update accelerator.

vsha256su0q_u32ExperimentalAArch64 and crypto

SHA256 schedule update accelerator, first part.

vsha256su1q_u32ExperimentalAArch64 and crypto

SHA256 schedule update accelerator, second part.

vshlq_n_u8Experimentalneon and v7 and AArch64

Shift left

vshrq_n_u8Experimentalneon and v7 and AArch64

Unsigned shift right

vsub_f32Experimentalneon and v7 and AArch64

Subtract

vsub_f64ExperimentalAArch64 and neon

Subtract

vsub_s8Experimentalneon and v7 and AArch64

Subtract

vsub_s16Experimentalneon and v7 and AArch64

Subtract

vsub_s32Experimentalneon and v7 and AArch64

Subtract

vsub_s64Experimentalneon and v7 and AArch64

Subtract

vsub_u8Experimentalneon and v7 and AArch64

Subtract

vsub_u16Experimentalneon and v7 and AArch64

Subtract

vsub_u32Experimentalneon and v7 and AArch64

Subtract

vsub_u64Experimentalneon and v7 and AArch64

Subtract

vsubq_f32Experimentalneon and v7 and AArch64

Subtract

vsubq_f64ExperimentalAArch64 and neon

Subtract

vsubq_s8Experimentalneon and v7 and AArch64

Subtract

vsubq_s16Experimentalneon and v7 and AArch64

Subtract

vsubq_s32Experimentalneon and v7 and AArch64

Subtract

vsubq_s64Experimentalneon and v7 and AArch64

Subtract

vsubq_u8Experimentalneon and v7 and AArch64

Subtract

vsubq_u16Experimentalneon and v7 and AArch64

Subtract

vsubq_u32Experimentalneon and v7 and AArch64

Subtract

vsubq_u64Experimentalneon and v7 and AArch64

Subtract

vtbl1_p8ExperimentalAArch64 and neon,v7

Table look-up

vtbl1_s8ExperimentalAArch64 and neon,v7

Table look-up

vtbl1_u8ExperimentalAArch64 and neon,v7

Table look-up

vtbl2_p8ExperimentalAArch64 and neon,v7

Table look-up

vtbl2_s8ExperimentalAArch64 and neon,v7

Table look-up

vtbl2_u8ExperimentalAArch64 and neon,v7

Table look-up

vtbl3_p8ExperimentalAArch64 and neon,v7

Table look-up

vtbl3_s8ExperimentalAArch64 and neon,v7

Table look-up

vtbl3_u8ExperimentalAArch64 and neon,v7

Table look-up

vtbl4_p8ExperimentalAArch64 and neon,v7

Table look-up

vtbl4_s8ExperimentalAArch64 and neon,v7

Table look-up

vtbl4_u8ExperimentalAArch64 and neon,v7

Table look-up

vtbx1_p8ExperimentalAArch64 and neon,v7

Extended table look-up

vtbx1_s8ExperimentalAArch64 and neon,v7

Extended table look-up

vtbx1_u8ExperimentalAArch64 and neon,v7

Extended table look-up

vtbx2_p8ExperimentalAArch64 and neon,v7

Extended table look-up

vtbx2_s8ExperimentalAArch64 and neon,v7

Extended table look-up

vtbx2_u8ExperimentalAArch64 and neon,v7

Extended table look-up

vtbx3_p8ExperimentalAArch64 and neon,v7

Extended table look-up

vtbx3_s8ExperimentalAArch64 and neon,v7

Extended table look-up

vtbx3_u8ExperimentalAArch64 and neon,v7

Extended table look-up

vtbx4_p8ExperimentalAArch64 and neon,v7

Extended table look-up

vtbx4_s8ExperimentalAArch64 and neon,v7

Extended table look-up

vtbx4_u8ExperimentalAArch64 and neon,v7

Extended table look-up