🔬 This is a nightly-only experimental API. (stdsimd #27731)
Available on ARM only.
Platform-specific intrinsics for the arm platform.
See the module documentation for more details.
Modules
- dsp (Experimental). References:
Structs
- ISH (Experimental): Inner Shareable is the required shareability domain; reads and writes are the required access types
- ISHLD (Experimental): Inner Shareable is the required shareability domain; reads are the required access type
- ISHST (Experimental): Inner Shareable is the required shareability domain; writes are the required access type
- LD (Experimental): Full system is the required shareability domain; reads are the required access type
- NSH (Experimental): Non-shareable is the required shareability domain; reads and writes are the required access types
- NSHLD (Experimental): Non-shareable is the required shareability domain; reads are the required access type
- NSHST (Experimental): Non-shareable is the required shareability domain; writes are the required access type
- OSH (Experimental): Outer Shareable is the required shareability domain; reads and writes are the required access types
- OSHLD (Experimental): Outer Shareable is the required shareability domain; reads are the required access type
- OSHST (Experimental): Outer Shareable is the required shareability domain; writes are the required access type
- ST (Experimental): Full system is the required shareability domain; writes are the required access type
- SY (Experimental): Full system is the required shareability domain; reads and writes are the required access types
- int8x4_t (Experimental): ARM-specific 32-bit wide vector of four packed i8.
- int16x2_t (Experimental): ARM-specific 32-bit wide vector of two packed i16.
- uint8x4_t (Experimental): ARM-specific 32-bit wide vector of four packed u8.
- uint16x2_t (Experimental): ARM-specific 32-bit wide vector of two packed u16.
- float32x2_t: ARM-specific 64-bit wide vector of two packed f32.
- float32x2x2_t: ARM-specific type containing two float32x2_t vectors.
- float32x2x3_t: ARM-specific type containing three float32x2_t vectors.
- float32x2x4_t: ARM-specific type containing four float32x2_t vectors.
- float32x4_t: ARM-specific 128-bit wide vector of four packed f32.
- float32x4x2_t: ARM-specific type containing two float32x4_t vectors.
- float32x4x3_t: ARM-specific type containing three float32x4_t vectors.
- float32x4x4_t: ARM-specific type containing four float32x4_t vectors.
- int8x8_t: ARM-specific 64-bit wide vector of eight packed i8.
- int8x8x2_t: ARM-specific type containing two int8x8_t vectors.
- int8x8x3_t: ARM-specific type containing three int8x8_t vectors.
- int8x8x4_t: ARM-specific type containing four int8x8_t vectors.
- int8x16_t: ARM-specific 128-bit wide vector of sixteen packed i8.
- int8x16x2_t: ARM-specific type containing two int8x16_t vectors.
- int8x16x3_t: ARM-specific type containing three int8x16_t vectors.
- int8x16x4_t: ARM-specific type containing four int8x16_t vectors.
- int16x4_t: ARM-specific 64-bit wide vector of four packed i16.
- int16x4x2_t: ARM-specific type containing two int16x4_t vectors.
- int16x4x3_t: ARM-specific type containing three int16x4_t vectors.
- int16x4x4_t: ARM-specific type containing four int16x4_t vectors.
- int16x8_t: ARM-specific 128-bit wide vector of eight packed i16.
- int16x8x2_t: ARM-specific type containing two int16x8_t vectors.
- int16x8x3_t: ARM-specific type containing three int16x8_t vectors.
- int16x8x4_t: ARM-specific type containing four int16x8_t vectors.
- int32x2_t: ARM-specific 64-bit wide vector of two packed i32.
- int32x2x2_t: ARM-specific type containing two int32x2_t vectors.
- int32x2x3_t: ARM-specific type containing three int32x2_t vectors.
- int32x2x4_t: ARM-specific type containing four int32x2_t vectors.
- int32x4_t: ARM-specific 128-bit wide vector of four packed i32.
- int32x4x2_t: ARM-specific type containing two int32x4_t vectors.
- int32x4x3_t: ARM-specific type containing three int32x4_t vectors.
- int32x4x4_t: ARM-specific type containing four int32x4_t vectors.
- int64x1_t: ARM-specific 64-bit wide vector of one packed i64.
- int64x1x2_t: ARM-specific type containing two int64x1_t vectors.
- int64x1x3_t: ARM-specific type containing three int64x1_t vectors.
- int64x1x4_t: ARM-specific type containing four int64x1_t vectors.
- int64x2_t: ARM-specific 128-bit wide vector of two packed i64.
- int64x2x2_t: ARM-specific type containing two int64x2_t vectors.
- int64x2x3_t: ARM-specific type containing three int64x2_t vectors.
- int64x2x4_t: ARM-specific type containing four int64x2_t vectors.
- poly8x8_t: ARM-specific 64-bit wide polynomial vector of eight packed p8.
- poly8x8x2_t: ARM-specific type containing two poly8x8_t vectors.
- poly8x8x3_t: ARM-specific type containing three poly8x8_t vectors.
- poly8x8x4_t: ARM-specific type containing four poly8x8_t vectors.
- poly8x16_t: ARM-specific 128-bit wide vector of sixteen packed p8.
- poly8x16x2_t: ARM-specific type containing two poly8x16_t vectors.
- poly8x16x3_t: ARM-specific type containing three poly8x16_t vectors.
- poly8x16x4_t: ARM-specific type containing four poly8x16_t vectors.
- poly16x4_t: ARM-specific 64-bit wide vector of four packed p16.
- poly16x4x2_t: ARM-specific type containing two poly16x4_t vectors.
- poly16x4x3_t: ARM-specific type containing three poly16x4_t vectors.
- poly16x4x4_t: ARM-specific type containing four poly16x4_t vectors.
- poly16x8_t: ARM-specific 128-bit wide vector of eight packed p16.
- poly16x8x2_t: ARM-specific type containing two poly16x8_t vectors.
- poly16x8x3_t: ARM-specific type containing three poly16x8_t vectors.
- poly16x8x4_t: ARM-specific type containing four poly16x8_t vectors.
- poly64x1_t: ARM-specific 64-bit wide vector of one packed p64.
- poly64x1x2_t: ARM-specific type containing two poly64x1_t vectors.
- poly64x1x3_t: ARM-specific type containing three poly64x1_t vectors.
- poly64x1x4_t: ARM-specific type containing four poly64x1_t vectors.
- poly64x2_t: ARM-specific 128-bit wide vector of two packed p64.
- poly64x2x2_t: ARM-specific type containing two poly64x2_t vectors.
- poly64x2x3_t: ARM-specific type containing three poly64x2_t vectors.
- poly64x2x4_t: ARM-specific type containing four poly64x2_t vectors.
- uint8x8_t: ARM-specific 64-bit wide vector of eight packed u8.
- uint8x8x2_t: ARM-specific type containing two uint8x8_t vectors.
- uint8x8x3_t: ARM-specific type containing three uint8x8_t vectors.
- uint8x8x4_t: ARM-specific type containing four uint8x8_t vectors.
- uint8x16_t: ARM-specific 128-bit wide vector of sixteen packed u8.
- uint8x16x2_t: ARM-specific type containing two uint8x16_t vectors.
- uint8x16x3_t: ARM-specific type containing three uint8x16_t vectors.
- uint8x16x4_t: ARM-specific type containing four uint8x16_t vectors.
- uint16x4_t: ARM-specific 64-bit wide vector of four packed u16.
- uint16x4x2_t: ARM-specific type containing two uint16x4_t vectors.
- uint16x4x3_t: ARM-specific type containing three uint16x4_t vectors.
- uint16x4x4_t: ARM-specific type containing four uint16x4_t vectors.
- uint16x8_t: ARM-specific 128-bit wide vector of eight packed u16.
- uint16x8x2_t: ARM-specific type containing two uint16x8_t vectors.
- uint16x8x3_t: ARM-specific type containing three uint16x8_t vectors.
- uint16x8x4_t: ARM-specific type containing four uint16x8_t vectors.
- uint32x2_t: ARM-specific 64-bit wide vector of two packed u32.
- uint32x2x2_t: ARM-specific type containing two uint32x2_t vectors.
- uint32x2x3_t: ARM-specific type containing three uint32x2_t vectors.
- uint32x2x4_t: ARM-specific type containing four uint32x2_t vectors.
- uint32x4_t: ARM-specific 128-bit wide vector of four packed u32.
- uint32x4x2_t: ARM-specific type containing two uint32x4_t vectors.
- uint32x4x3_t: ARM-specific type containing three uint32x4_t vectors.
- uint32x4x4_t: ARM-specific type containing four uint32x4_t vectors.
- uint64x1_t: ARM-specific 64-bit wide vector of one packed u64.
- uint64x1x2_t: ARM-specific type containing two uint64x1_t vectors.
- uint64x1x3_t: ARM-specific type containing three uint64x1_t vectors.
- uint64x1x4_t: ARM-specific type containing four uint64x1_t vectors.
- uint64x2_t: ARM-specific 128-bit wide vector of two packed u64.
- uint64x2x2_t: ARM-specific type containing two uint64x2_t vectors.
- uint64x2x3_t: ARM-specific type containing three uint64x2_t vectors.
- uint64x2x4_t: ARM-specific type containing four uint64x2_t vectors.
Functions
- __crc32b ⚠ (Experimental): CRC32 single round checksum for bytes (8 bits).
- __crc32cb ⚠ (Experimental): CRC32-C single round checksum for bytes (8 bits).
- __crc32ch ⚠ (Experimental): CRC32-C single round checksum for half words (16 bits).
- __crc32cw ⚠ (Experimental): CRC32-C single round checksum for words (32 bits).
- __crc32h ⚠ (Experimental): CRC32 single round checksum for half words (16 bits).
- __crc32w ⚠ (Experimental): CRC32 single round checksum for words (32 bits).
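Each CRC32 intrinsic performs one round of the reflected CRC-32 (or CRC-32C) polynomial over its input. As a rough sketch of the byte-wide variant's semantics, here is a portable model (`crc32b_model` is an illustrative name, not part of the API; on hardware the intrinsic lowers to a single CRC32B instruction):

```rust
// Portable model of one CRC-32 round over a byte, as performed by __crc32b.
// Uses the reflected CRC-32 polynomial 0xEDB88320.
fn crc32b_model(mut crc: u32, byte: u8) -> u32 {
    crc ^= byte as u32;
    for _ in 0..8 {
        // Shift right one bit; XOR in the polynomial when a bit falls off.
        crc = if crc & 1 != 0 { (crc >> 1) ^ 0xEDB8_8320 } else { crc >> 1 };
    }
    crc
}

fn main() {
    // Folding the model over "123456789" with the standard CRC-32
    // init/final XOR reproduces the well-known check value 0xCBF43926.
    let mut crc = 0xFFFF_FFFFu32;
    for &b in b"123456789" {
        crc = crc32b_model(crc, b);
    }
    assert_eq!(crc ^ 0xFFFF_FFFF, 0xCBF4_3926);
}
```

Note that the intrinsic itself applies no initial or final XOR; the caller supplies those, as in the `main` above.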
- __dbg ⚠ (Experimental): Generates a DBG instruction.
- __dmb ⚠ (Experimental): Generates a DMB (data memory barrier) instruction or equivalent CP15 instruction.
- __dsb ⚠ (Experimental): Generates a DSB (data synchronization barrier) instruction or equivalent CP15 instruction.
- __isb ⚠ (Experimental): Generates an ISB (instruction synchronization barrier) instruction or equivalent CP15 instruction.
- __nop ⚠ (Experimental): Generates an unspecified no-op instruction.
- __qadd ⚠ (Experimental): Signed saturating addition
- __qadd8 ⚠ (Experimental): Saturating four 8-bit integer additions
- __qadd16 ⚠ (Experimental): Saturating two 16-bit integer additions
- __qasx ⚠ (Experimental): Returns the 16-bit signed saturated equivalent of
- __qdbl ⚠ (Experimental): Insert a QADD instruction
- __qsax ⚠ (Experimental): Returns the 16-bit signed saturated equivalent of
- __qsub ⚠ (Experimental): Signed saturating subtraction
- __qsub8 ⚠ (Experimental): Saturating four 8-bit integer subtractions
- __qsub16 ⚠ (Experimental): Saturating two 16-bit integer subtractions
- __sadd8 ⚠ (Experimental): Returns the 8-bit signed saturated equivalent of
- __sadd16 ⚠ (Experimental): Returns the 16-bit signed saturated equivalent of
- __sasx ⚠ (Experimental): Returns the 16-bit signed equivalent of
- __sel ⚠ (Experimental): Select bytes from each operand according to APSR GE flags
- __sev ⚠ (Experimental): Generates a SEV (send a global event) hint instruction.
- __sevl ⚠ (Experimental): Generates a SEVL (send a local event) hint instruction.
- __shadd8 ⚠ (Experimental): Signed halving parallel byte-wise addition.
- __shadd16 ⚠ (Experimental): Signed halving parallel halfword-wise addition.
- __shsub8 ⚠ (Experimental): Signed halving parallel byte-wise subtraction.
- __shsub16 ⚠ (Experimental): Signed halving parallel halfword-wise subtraction.
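The packed DSP intrinsics above treat one 32-bit register as four 8-bit (or two 16-bit) lanes. A portable sketch of what `__qadd8` computes (`qadd8_model` is a hypothetical helper for illustration, not the intrinsic itself):

```rust
// Model of __qadd8: four lane-wise saturating signed 8-bit additions
// over the bytes of two 32-bit words.
fn qadd8_model(a: u32, b: u32) -> u32 {
    let mut out = 0u32;
    for lane in 0..4 {
        // Extract each byte lane as a signed value.
        let x = (a >> (8 * lane)) as i8;
        let y = (b >> (8 * lane)) as i8;
        // Saturate at i8::MIN / i8::MAX instead of wrapping.
        let s = x.saturating_add(y) as u8 as u32;
        out |= s << (8 * lane);
    }
    out
}

fn main() {
    // 0x7F + 0x01 saturates to 0x7F in the low lane.
    assert_eq!(qadd8_model(0x0000_007F, 0x0000_0001), 0x0000_007F);
    // Non-saturating lanes add normally.
    assert_eq!(qadd8_model(0x0102_0304, 0x0101_0101), 0x0203_0405);
}
```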
- __smlabb ⚠ (Experimental): Insert a SMLABB instruction
- __smlabt ⚠ (Experimental): Insert a SMLABT instruction
- __smlad ⚠ (Experimental): Dual 16-bit Signed Multiply with Addition of products and 32-bit accumulation.
- __smlatb ⚠ (Experimental): Insert a SMLATB instruction
- __smlatt ⚠ (Experimental): Insert a SMLATT instruction
- __smlawb ⚠ (Experimental): Insert a SMLAWB instruction
- __smlawt ⚠ (Experimental): Insert a SMLAWT instruction
- __smlsd ⚠ (Experimental): Dual 16-bit Signed Multiply with Subtraction of products and 32-bit accumulation and overflow detection.
- __smuad ⚠ (Experimental): Signed Dual Multiply Add.
- __smuadx ⚠ (Experimental): Signed Dual Multiply Add Reversed.
- __smulbb ⚠ (Experimental): Insert a SMULBB instruction
- __smulbt ⚠ (Experimental): Insert a SMULBT instruction
- __smultb ⚠ (Experimental): Insert a SMULTB instruction
- __smultt ⚠ (Experimental): Insert a SMULTT instruction
- __smulwb ⚠ (Experimental): Insert a SMULWB instruction
- __smulwt ⚠ (Experimental): Insert a SMULWT instruction
- __smusd ⚠ (Experimental): Signed Dual Multiply Subtract.
- __smusdx ⚠ (Experimental): Signed Dual Multiply Subtract Reversed.
- __ssub8 ⚠ (Experimental): Inserts a SSUB8 instruction.
- __usad8 ⚠ (Experimental): Sum of 8-bit absolute differences.
- __usada8 ⚠ (Experimental): Sum of 8-bit absolute differences and constant.
- __usub8 ⚠ (Experimental): Inserts a USUB8 instruction.
- __wfe ⚠ (Experimental): Generates a WFE (wait for event) hint instruction, or nothing.
- __wfi ⚠ (Experimental): Generates a WFI (wait for interrupt) hint instruction, or nothing.
- __yield ⚠ (Experimental): Generates a YIELD hint instruction.
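`__usad8` is a common building block for motion estimation and similar kernels: it sums the absolute differences of the four byte lanes of its operands. A portable sketch of its semantics (`usad8_model` is an illustrative name, not the intrinsic):

```rust
// Model of __usad8: sum of absolute differences of the four
// unsigned byte lanes packed into two 32-bit words.
fn usad8_model(a: u32, b: u32) -> u32 {
    (0..4)
        .map(|lane| {
            let x = (a >> (8 * lane)) & 0xFF;
            let y = (b >> (8 * lane)) & 0xFF;
            // abs_diff avoids underflow on unsigned subtraction.
            x.abs_diff(y)
        })
        .sum()
}

fn main() {
    // Lane diffs: |5-1| + |3-1| + |0-0| + |1-2| = 4 + 2 + 0 + 1 = 7.
    assert_eq!(usad8_model(0x0100_0305, 0x0200_0101), 7);
}
```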
- Dot product arithmetic (indexed)
- Dot product arithmetic (indexed)
- Dot product arithmetic (vector)
- Dot product arithmetic (vector)
- Dot product arithmetic (indexed)
- Dot product arithmetic (indexed)
- Dot product arithmetic (vector)
- Dot product arithmetic (vector)
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- 8-bit integer matrix multiply-accumulate
- 8-bit integer matrix multiply-accumulate
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Dot product index form with signed and unsigned integers
- Dot product index form with signed and unsigned integers
- Table look-up
- Table look-up
- Table look-up
- Table look-up
- Table look-up
- Table look-up
- Table look-up
- Table look-up
- Table look-up
- Table look-up
- Table look-up
- Table look-up
- Extended table look-up
- Extended table look-up
- Extended table look-up
- Extended table look-up
- Extended table look-up
- Extended table look-up
- Extended table look-up
- Extended table look-up
- Extended table look-up
- Extended table look-up
- Extended table look-up
- Extended table look-up
- Dot product index form with unsigned and signed integers
- Dot product vector form with unsigned and signed integers
- Dot product index form with unsigned and signed integers
- Dot product vector form with unsigned and signed integers
- Unsigned and signed 8-bit integer matrix multiply-accumulate
- vaba_s8 ⚠ neon
- vaba_s16 ⚠ neon
- vaba_s32 ⚠ neon
- vaba_u8 ⚠ neon
- vaba_u16 ⚠ neon
- vaba_u32 ⚠ neon
- vabal_s8 ⚠ neon: Signed Absolute difference and Accumulate Long
- vabal_s16 ⚠ neon: Signed Absolute difference and Accumulate Long
- vabal_s32 ⚠ neon: Signed Absolute difference and Accumulate Long
- vabal_u8 ⚠ neon: Unsigned Absolute difference and Accumulate Long
- vabal_u16 ⚠ neon: Unsigned Absolute difference and Accumulate Long
- vabal_u32 ⚠ neon: Unsigned Absolute difference and Accumulate Long
- vabaq_s8 ⚠ neon
- vabaq_s16 ⚠ neon
- vabaq_s32 ⚠ neon
- vabaq_u8 ⚠ neon
- vabaq_u16 ⚠ neon
- vabaq_u32 ⚠ neon
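The `vabal_*` family widens as it accumulates: each output lane is the accumulator lane plus the absolute difference of two narrower input lanes. A portable sketch of the `u8` variant's lane semantics (`vabal_u8_model` is an illustrative helper, not the intrinsic; the real signature uses `uint16x8_t`/`uint8x8_t`):

```rust
// Model of vabal_u8: acc[i] + |a[i] - b[i]|, widening each
// 8-bit difference into a 16-bit accumulator lane.
fn vabal_u8_model(acc: [u16; 8], a: [u8; 8], b: [u8; 8]) -> [u16; 8] {
    let mut r = acc;
    for i in 0..8 {
        // abs_diff on u8, then widen to u16 before accumulating.
        r[i] = r[i].wrapping_add(a[i].abs_diff(b[i]) as u16);
    }
    r
}

fn main() {
    // 1 + |10 - 3| = 8 in every lane.
    assert_eq!(vabal_u8_model([1; 8], [10; 8], [3; 8]), [8; 8]);
}
```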
- vabd_f32 ⚠ neon: Absolute difference between the arguments of Floating
- vabd_s8 ⚠ neon: Absolute difference between the arguments
- vabd_s16 ⚠ neon: Absolute difference between the arguments
- vabd_s32 ⚠ neon: Absolute difference between the arguments
- vabd_u8 ⚠ neon: Absolute difference between the arguments
- vabd_u16 ⚠ neon: Absolute difference between the arguments
- vabd_u32 ⚠ neon: Absolute difference between the arguments
- vabdl_s8 ⚠ neon: Signed Absolute difference Long
- vabdl_s16 ⚠ neon: Signed Absolute difference Long
- vabdl_s32 ⚠ neon: Signed Absolute difference Long
- vabdl_u8 ⚠ neon: Unsigned Absolute difference Long
- vabdl_u16 ⚠ neon: Unsigned Absolute difference Long
- vabdl_u32 ⚠ neon: Unsigned Absolute difference Long
- vabdq_f32 ⚠ neon: Absolute difference between the arguments of Floating
- vabdq_s8 ⚠ neon: Absolute difference between the arguments
- vabdq_s16 ⚠ neon: Absolute difference between the arguments
- vabdq_s32 ⚠ neon: Absolute difference between the arguments
- vabdq_u8 ⚠ neon: Absolute difference between the arguments
- vabdq_u16 ⚠ neon: Absolute difference between the arguments
- vabdq_u32 ⚠ neon: Absolute difference between the arguments
- vabs_f32 ⚠ neon: Floating-point absolute value
- vabs_s8 ⚠ neon: Absolute value (wrapping).
- vabs_s16 ⚠ neon: Absolute value (wrapping).
- vabs_s32 ⚠ neon: Absolute value (wrapping).
- vabsq_f32 ⚠ neon: Floating-point absolute value
- vabsq_s8 ⚠ neon: Absolute value (wrapping).
- vabsq_s16 ⚠ neon: Absolute value (wrapping).
- vabsq_s32 ⚠ neon: Absolute value (wrapping).
- vadd_f32 ⚠ neon: Vector add.
- vadd_p8 ⚠ neon: Bitwise exclusive OR
- vadd_p16 ⚠ neon: Bitwise exclusive OR
- vadd_p64 ⚠ neon: Bitwise exclusive OR
- vadd_s8 ⚠ neon: Vector add.
- vadd_s16 ⚠ neon: Vector add.
- vadd_s32 ⚠ neon: Vector add.
- vadd_u8 ⚠ neon: Vector add.
- vadd_u16 ⚠ neon: Vector add.
- vadd_u32 ⚠ neon: Vector add.
- vaddhn_high_s16 ⚠ neon: Add returning High Narrow (high half).
- vaddhn_high_s32 ⚠ neon: Add returning High Narrow (high half).
- vaddhn_high_s64 ⚠ neon: Add returning High Narrow (high half).
- vaddhn_high_u16 ⚠ neon: Add returning High Narrow (high half).
- vaddhn_high_u32 ⚠ neon: Add returning High Narrow (high half).
- vaddhn_high_u64 ⚠ neon: Add returning High Narrow (high half).
- vaddhn_s16 ⚠ neon: Add returning High Narrow.
- vaddhn_s32 ⚠ neon: Add returning High Narrow.
- vaddhn_s64 ⚠ neon: Add returning High Narrow.
- vaddhn_u16 ⚠ neon: Add returning High Narrow.
- vaddhn_u32 ⚠ neon: Add returning High Narrow.
- vaddhn_u64 ⚠ neon: Add returning High Narrow.
- vaddl_high_s8 ⚠ neon: Signed Add Long (vector, high half).
- vaddl_high_s16 ⚠ neon: Signed Add Long (vector, high half).
- vaddl_high_s32 ⚠ neon: Signed Add Long (vector, high half).
- vaddl_high_u8 ⚠ neon: Unsigned Add Long (vector, high half).
- vaddl_high_u16 ⚠ neon: Unsigned Add Long (vector, high half).
- vaddl_high_u32 ⚠ neon: Unsigned Add Long (vector, high half).
- vaddl_s8 ⚠ neon: Signed Add Long (vector).
- vaddl_s16 ⚠ neon: Signed Add Long (vector).
- vaddl_s32 ⚠ neon: Signed Add Long (vector).
- vaddl_u8 ⚠ neon: Unsigned Add Long (vector).
- vaddl_u16 ⚠ neon: Unsigned Add Long (vector).
- vaddl_u32 ⚠ neon: Unsigned Add Long (vector).
- vaddq_f32 ⚠ neon: Vector add.
- vaddq_p8 ⚠ neon: Bitwise exclusive OR
- vaddq_p16 ⚠ neon: Bitwise exclusive OR
- vaddq_p64 ⚠ neon: Bitwise exclusive OR
- vaddq_p128 ⚠ neon: Bitwise exclusive OR
- vaddq_s8 ⚠ neon: Vector add.
- vaddq_s16 ⚠ neon: Vector add.
- vaddq_s32 ⚠ neon: Vector add.
- vaddq_s64 ⚠ neon: Vector add.
- vaddq_u8 ⚠ neon: Vector add.
- vaddq_u16 ⚠ neon: Vector add.
- vaddq_u32 ⚠ neon: Vector add.
- vaddq_u64 ⚠ neon: Vector add.
- vaddw_high_s8 ⚠ neon: Signed Add Wide (high half).
- vaddw_high_s16 ⚠ neon: Signed Add Wide (high half).
- vaddw_high_s32 ⚠ neon: Signed Add Wide (high half).
- vaddw_high_u8 ⚠ neon: Unsigned Add Wide (high half).
- vaddw_high_u16 ⚠ neon: Unsigned Add Wide (high half).
- vaddw_high_u32 ⚠ neon: Unsigned Add Wide (high half).
- vaddw_s8 ⚠ neon: Signed Add Wide.
- vaddw_s16 ⚠ neon: Signed Add Wide.
- vaddw_s32 ⚠ neon: Signed Add Wide.
- vaddw_u8 ⚠ neon: Unsigned Add Wide.
- vaddw_u16 ⚠ neon: Unsigned Add Wide.
- vaddw_u32 ⚠ neon: Unsigned Add Wide.
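"Add returning High Narrow" keeps only the upper half of each widened sum, which is handy for fixed-point rescaling. A portable sketch of the `s16` variant's lane semantics (`vaddhn_s16_model` is an illustrative helper; the real intrinsic maps `int16x8_t` pairs to `int8x8_t`):

```rust
// Model of vaddhn_s16: add 16-bit lanes, keep the high 8 bits of each sum.
fn vaddhn_s16_model(a: [i16; 8], b: [i16; 8]) -> [i8; 8] {
    let mut r = [0i8; 8];
    for i in 0..8 {
        // Wrapping add matches the modular vector add; >> 8 takes the high half.
        r[i] = (a[i].wrapping_add(b[i]) >> 8) as i8;
    }
    r
}

fn main() {
    // 0x0100 + 0x0234 = 0x0334; high byte is 0x03.
    assert_eq!(vaddhn_s16_model([0x0100; 8], [0x0234; 8]), [3; 8]);
}
```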
- vaesdq_u8 ⚠ aes: AES single round decryption.
- vaeseq_u8 ⚠ aes: AES single round encryption.
- vaesimcq_u8 ⚠ aes: AES inverse mix columns.
- vaesmcq_u8 ⚠ aes: AES mix columns.
- vand_s8 ⚠ neon: Vector bitwise and
- vand_s16 ⚠ neon: Vector bitwise and
- vand_s32 ⚠ neon: Vector bitwise and
- vand_s64 ⚠ neon: Vector bitwise and
- vand_u8 ⚠ neon: Vector bitwise and
- vand_u16 ⚠ neon: Vector bitwise and
- vand_u32 ⚠ neon: Vector bitwise and
- vand_u64 ⚠ neon: Vector bitwise and
- vandq_s8 ⚠ neon: Vector bitwise and
- vandq_s16 ⚠ neon: Vector bitwise and
- vandq_s32 ⚠ neon: Vector bitwise and
- vandq_s64 ⚠ neon: Vector bitwise and
- vandq_u8 ⚠ neon: Vector bitwise and
- vandq_u16 ⚠ neon: Vector bitwise and
- vandq_u32 ⚠ neon: Vector bitwise and
- vandq_u64 ⚠ neon: Vector bitwise and
- vbic_s8 ⚠ neon: Vector bitwise bit clear
- vbic_s16 ⚠ neon: Vector bitwise bit clear
- vbic_s32 ⚠ neon: Vector bitwise bit clear
- vbic_s64 ⚠ neon: Vector bitwise bit clear
- vbic_u8 ⚠ neon: Vector bitwise bit clear
- vbic_u16 ⚠ neon: Vector bitwise bit clear
- vbic_u32 ⚠ neon: Vector bitwise bit clear
- vbic_u64 ⚠ neon: Vector bitwise bit clear
- vbicq_s8 ⚠ neon: Vector bitwise bit clear
- vbicq_s16 ⚠ neon: Vector bitwise bit clear
- vbicq_s32 ⚠ neon: Vector bitwise bit clear
- vbicq_s64 ⚠ neon: Vector bitwise bit clear
- vbicq_u8 ⚠ neon: Vector bitwise bit clear
- vbicq_u16 ⚠ neon: Vector bitwise bit clear
- vbicq_u32 ⚠ neon: Vector bitwise bit clear
- vbicq_u64 ⚠ neon: Vector bitwise bit clear
- vbsl_f32 ⚠ neon: Bitwise Select.
- vbsl_p8 ⚠ neon: Bitwise Select.
- vbsl_p16 ⚠ neon: Bitwise Select.
- vbsl_s8 ⚠ neon: Bitwise Select. This instruction sets each bit in the destination SIMD&FP register to the corresponding bit from the first source SIMD&FP register when the original destination bit was 1, otherwise from the second source SIMD&FP register.
- vbsl_s16 ⚠ neon: Bitwise Select.
- vbsl_s32 ⚠ neon: Bitwise Select.
- vbsl_s64 ⚠ neon: Bitwise Select.
- vbsl_u8 ⚠ neon: Bitwise Select.
- vbsl_u16 ⚠ neon: Bitwise Select.
- vbsl_u32 ⚠ neon: Bitwise Select.
- vbsl_u64 ⚠ neon: Bitwise Select.
- vbslq_f32 ⚠ neon: Bitwise Select. (128-bit)
- vbslq_p8 ⚠ neon: Bitwise Select. (128-bit)
- vbslq_p16 ⚠ neon: Bitwise Select. (128-bit)
- vbslq_s8 ⚠ neon: Bitwise Select. (128-bit)
- vbslq_s16 ⚠ neon: Bitwise Select. (128-bit)
- vbslq_s32 ⚠ neon: Bitwise Select. (128-bit)
- vbslq_s64 ⚠ neon: Bitwise Select. (128-bit)
- vbslq_u8 ⚠ neon: Bitwise Select. (128-bit)
- vbslq_u16 ⚠ neon: Bitwise Select. (128-bit)
- vbslq_u32 ⚠ neon: Bitwise Select. (128-bit)
- vbslq_u64 ⚠ neon: Bitwise Select. (128-bit)
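Bitwise Select merges two vectors bit-by-bit under a mask, which is how NEON code implements branchless conditionals. A portable sketch of the lane semantics (`vbsl_u8_model` is an illustrative helper; the real intrinsic takes the mask in the destination register):

```rust
// Model of vbsl_u8: for each bit, take it from `a` where the mask bit
// is 1, otherwise from `b`.
fn vbsl_u8_model(mask: [u8; 8], a: [u8; 8], b: [u8; 8]) -> [u8; 8] {
    let mut r = [0u8; 8];
    for i in 0..8 {
        r[i] = (mask[i] & a[i]) | (!mask[i] & b[i]);
    }
    r
}

fn main() {
    // High nibble from a (0xA_), low nibble from b (0x_D).
    assert_eq!(vbsl_u8_model([0xF0; 8], [0xAB; 8], [0xCD; 8]), [0xAD; 8]);
}
```

An all-ones mask lane selects `a` wholesale and an all-zeros lane selects `b`, which is why comparison results (all-ones/all-zeros per lane) compose directly with `vbsl`.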
- vcage_f32 ⚠ neon: Floating-point absolute compare greater than or equal
- vcageq_f32 ⚠ neon: Floating-point absolute compare greater than or equal
- vcagt_f32 ⚠ neon: Floating-point absolute compare greater than
- vcagtq_f32 ⚠ neon: Floating-point absolute compare greater than
- vcale_f32 ⚠ neon: Floating-point absolute compare less than or equal
- vcaleq_f32 ⚠ neon: Floating-point absolute compare less than or equal
- vcalt_f32 ⚠ neon: Floating-point absolute compare less than
- vcaltq_f32 ⚠ neon: Floating-point absolute compare less than
- vceq_f32 ⚠ neon: Floating-point compare equal
- vceq_p8 ⚠ neon: Compare bitwise Equal (vector)
- vceq_s8 ⚠ neon: Compare bitwise Equal (vector)
- vceq_s16 ⚠ neon: Compare bitwise Equal (vector)
- vceq_s32 ⚠ neon: Compare bitwise Equal (vector)
- vceq_u8 ⚠ neon: Compare bitwise Equal (vector)
- vceq_u16 ⚠ neon: Compare bitwise Equal (vector)
- vceq_u32 ⚠ neon: Compare bitwise Equal (vector)
- vceqq_f32 ⚠ neon: Floating-point compare equal
- vceqq_p8 ⚠ neon: Compare bitwise Equal (vector)
- vceqq_s8 ⚠ neon: Compare bitwise Equal (vector)
- vceqq_s16 ⚠ neon: Compare bitwise Equal (vector)
- vceqq_s32 ⚠ neon: Compare bitwise Equal (vector)
- vceqq_u8 ⚠ neon: Compare bitwise Equal (vector)
- vceqq_u16 ⚠ neon: Compare bitwise Equal (vector)
- vceqq_u32 ⚠ neon: Compare bitwise Equal (vector)
- vcge_f32 ⚠ neon: Floating-point compare greater than or equal
- vcge_s8 ⚠ neon: Compare signed greater than or equal
- vcge_s16 ⚠ neon: Compare signed greater than or equal
- vcge_s32 ⚠ neon: Compare signed greater than or equal
- vcge_u8 ⚠ neon: Compare unsigned greater than or equal
- vcge_u16 ⚠ neon: Compare unsigned greater than or equal
- vcge_u32 ⚠ neon: Compare unsigned greater than or equal
- vcgeq_f32 ⚠ neon: Floating-point compare greater than or equal
- vcgeq_s8 ⚠ neon: Compare signed greater than or equal
- vcgeq_s16 ⚠ neon: Compare signed greater than or equal
- vcgeq_s32 ⚠ neon: Compare signed greater than or equal
- vcgeq_u8 ⚠ neon: Compare unsigned greater than or equal
- vcgeq_u16 ⚠ neon: Compare unsigned greater than or equal
- vcgeq_u32 ⚠ neon: Compare unsigned greater than or equal
- vcgt_f32 ⚠ neon: Floating-point compare greater than
- vcgt_s8 ⚠ neon: Compare signed greater than
- vcgt_s16 ⚠ neon: Compare signed greater than
- vcgt_s32 ⚠ neon: Compare signed greater than
- vcgt_u8 ⚠ neon: Compare unsigned greater than
- vcgt_u16 ⚠ neon: Compare unsigned greater than
- vcgt_u32 ⚠ neon: Compare unsigned greater than
- vcgtq_f32 ⚠ neon: Floating-point compare greater than
- vcgtq_s8 ⚠ neon: Compare signed greater than
- vcgtq_s16 ⚠ neon: Compare signed greater than
- vcgtq_s32 ⚠ neon: Compare signed greater than
- vcgtq_u8 ⚠ neon: Compare unsigned greater than
- vcgtq_u16 ⚠ neon: Compare unsigned greater than
- vcgtq_u32 ⚠ neon: Compare unsigned greater than
- vcle_f32 ⚠ neon: Floating-point compare less than or equal
- vcle_s8 ⚠ neon: Compare signed less than or equal
- vcle_s16 ⚠ neon: Compare signed less than or equal
- vcle_s32 ⚠ neon: Compare signed less than or equal
- vcle_u8 ⚠ neon: Compare unsigned less than or equal
- vcle_u16 ⚠ neon: Compare unsigned less than or equal
- vcle_u32 ⚠ neon: Compare unsigned less than or equal
- vcleq_f32 ⚠ neon: Floating-point compare less than or equal
- vcleq_s8 ⚠ neon: Compare signed less than or equal
- vcleq_s16 ⚠ neon: Compare signed less than or equal
- vcleq_s32 ⚠ neon: Compare signed less than or equal
- vcleq_u8 ⚠ neon: Compare unsigned less than or equal
- vcleq_u16 ⚠ neon: Compare unsigned less than or equal
- vcleq_u32 ⚠ neon: Compare unsigned less than or equal
- vcls_s8 ⚠ neon: Count leading sign bits
- vcls_s16 ⚠ neon: Count leading sign bits
- vcls_s32 ⚠ neon: Count leading sign bits
- vcls_u8 ⚠ neon: Count leading sign bits
- vcls_u16 ⚠ neon: Count leading sign bits
- vcls_u32 ⚠ neon: Count leading sign bits
- vclsq_s8 ⚠ neon: Count leading sign bits
- vclsq_s16 ⚠ neon: Count leading sign bits
- vclsq_s32 ⚠ neon: Count leading sign bits
- vclsq_u8 ⚠ neon: Count leading sign bits
- vclsq_u16 ⚠ neon: Count leading sign bits
- vclsq_u32 ⚠ neon: Count leading sign bits
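"Count leading sign bits" counts how many bits below the sign bit match it, a measure of how much a signed value can be shifted left without overflow. One common portable formulation of the per-lane semantics (`vcls_s8_model` is an illustrative helper, not the intrinsic):

```rust
// Model of one lane of vcls_s8: number of leading bits that equal the
// sign bit, not counting the sign bit itself.
fn vcls_s8_model(x: i8) -> i8 {
    // x >> 1 is an arithmetic shift, so x ^ (x >> 1) marks the first
    // bit position that differs from the sign bit.
    (((x ^ (x >> 1)) as u8).leading_zeros() as i8) - 1
}

fn main() {
    assert_eq!(vcls_s8_model(0), 7);    // all bits match the 0 sign bit
    assert_eq!(vcls_s8_model(-1), 7);   // all bits match the 1 sign bit
    assert_eq!(vcls_s8_model(0x40), 0); // bit 6 already differs
    assert_eq!(vcls_s8_model(1), 6);    // six zero bits below the sign bit
}
```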
- vclt_f32 ⚠ neon: Floating-point compare less than
- vclt_s8 ⚠ neon: Compare signed less than
- vclt_s16 ⚠ neon: Compare signed less than
- vclt_s32 ⚠ neon: Compare signed less than
- vclt_u8 ⚠ neon: Compare unsigned less than
- vclt_u16 ⚠ neon: Compare unsigned less than
- vclt_u32 ⚠ neon: Compare unsigned less than
- vcltq_f32 ⚠ neon: Floating-point compare less than
- vcltq_s8 ⚠ neon: Compare signed less than
- vcltq_s16 ⚠ neon: Compare signed less than
- vcltq_s32 ⚠ neon: Compare signed less than
- vcltq_u8 ⚠ neon: Compare unsigned less than
- vcltq_u16 ⚠ neon: Compare unsigned less than
- vcltq_u32 ⚠ neon: Compare unsigned less than
- vclz_s8 ⚠ neon: Count leading zero bits
- vclz_s16 ⚠ neon: Count leading zero bits
- vclz_s32 ⚠ neon: Count leading zero bits
- vclz_u8 ⚠ neon: Count leading zero bits
- vclz_u16 ⚠ neon: Count leading zero bits
- vclz_u32 ⚠ neon: Count leading zero bits
- vclzq_s8 ⚠ neon: Count leading zero bits
- vclzq_s16 ⚠ neon: Count leading zero bits
- vclzq_s32 ⚠ neon: Count leading zero bits
- vclzq_u8 ⚠ neon: Count leading zero bits
- vclzq_u16 ⚠ neon: Count leading zero bits
- vclzq_u32 ⚠ neon: Count leading zero bits
- vcnt_p8 ⚠ neon: Population count per byte.
- vcnt_s8 ⚠ neon: Population count per byte.
- vcnt_u8 ⚠ neon: Population count per byte.
- vcntq_p8 ⚠ neon: Population count per byte.
- vcntq_s8 ⚠ neon: Population count per byte.
- vcntq_u8 ⚠ neon: Population count per byte.
- vcombine_f32 ⚠ neon: Vector combine
- vcombine_p8 ⚠ neon: Vector combine
- vcombine_p16 ⚠ neon: Vector combine
- vcombine_p64 ⚠ neon: Vector combine
- vcombine_s8 ⚠ neon: Vector combine
- vcombine_s16 ⚠ neon: Vector combine
- vcombine_s32 ⚠ neon: Vector combine
- vcombine_s64 ⚠ neon: Vector combine
- vcombine_u8 ⚠ neon: Vector combine
- vcombine_u16 ⚠ neon: Vector combine
- vcombine_u32 ⚠ neon: Vector combine
- vcombine_u64 ⚠ neon: Vector combine
- vcreate_f32 ⚠ neon: Insert vector element from another vector element
- vcreate_p8 ⚠ neon: Insert vector element from another vector element
- vcreate_p16 ⚠ neon: Insert vector element from another vector element
- vcreate_p64 ⚠ neon,aes: Insert vector element from another vector element
- vcreate_s8 ⚠ neon: Insert vector element from another vector element
- vcreate_s16 ⚠ neon: Insert vector element from another vector element
- vcreate_s32 ⚠ neon: Insert vector element from another vector element
- vcreate_s64 ⚠ neon: Insert vector element from another vector element
- vcreate_u8 ⚠ neon: Insert vector element from another vector element
- vcreate_u16 ⚠ neon: Insert vector element from another vector element
- vcreate_u32 ⚠ neon: Insert vector element from another vector element
- vcreate_u64 ⚠ neon: Insert vector element from another vector element
- vcvt_f32_s32 ⚠ neon: Fixed-point convert to floating-point
- vcvt_f32_u32 ⚠ neon: Fixed-point convert to floating-point
- vcvt_n_f32_s32 ⚠ neon: Fixed-point convert to floating-point
- vcvt_n_f32_u32 ⚠ neon: Fixed-point convert to floating-point
- vcvt_n_s32_f32 ⚠ neon: Floating-point convert to fixed-point, rounding toward zero
- vcvt_n_u32_f32 ⚠ neon: Floating-point convert to fixed-point, rounding toward zero
- vcvt_s32_f32 ⚠ neon: Floating-point convert to signed fixed-point, rounding toward zero
- vcvt_u32_f32 ⚠ neon: Floating-point convert to unsigned fixed-point, rounding toward zero
- vcvtq_f32_s32 ⚠ neon: Fixed-point convert to floating-point
- vcvtq_f32_u32 ⚠ neon: Fixed-point convert to floating-point
- vcvtq_n_f32_s32 ⚠ neon: Fixed-point convert to floating-point
- vcvtq_n_f32_u32 ⚠ neon: Fixed-point convert to floating-point
- vcvtq_n_s32_f32 ⚠ neon: Floating-point convert to fixed-point, rounding toward zero
- vcvtq_n_u32_f32 ⚠ neon: Floating-point convert to fixed-point, rounding toward zero
- vcvtq_s32_f32 ⚠ neon: Floating-point convert to signed fixed-point, rounding toward zero
- vcvtq_u32_f32 ⚠ neon: Floating-point convert to unsigned fixed-point, rounding toward zero
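The `_n_` conversion variants take a const number of fractional bits: the integer lane is interpreted as a fixed-point value and scaled by 2^-N. A portable sketch of one lane of `vcvt_n_f32_s32` (`vcvt_n_f32_s32_model` is an illustrative helper; in the intrinsic, N is a const generic on the function):

```rust
// Model of one lane of vcvt_n_f32_s32: treat x as signed fixed-point
// with n fractional bits and convert to f32 (i.e. divide by 2^n).
fn vcvt_n_f32_s32_model(x: i32, n: u32) -> f32 {
    // 1u64 << n avoids overflow for n up to 32.
    x as f32 / (1u64 << n) as f32
}

fn main() {
    // 384 with 8 fractional bits is 384 / 256 = 1.5.
    assert_eq!(vcvt_n_f32_s32_model(384, 8), 1.5);
    assert_eq!(vcvt_n_f32_s32_model(-256, 8), -1.0);
}
```

Exact equality is safe in these asserts because both operands are small powers-of-two multiples, representable exactly in f32.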
- vdup_lane_f32 ⚠ neon: Set all vector lanes to the same value
- vdup_lane_p8 ⚠ neon: Set all vector lanes to the same value
- vdup_lane_p16 ⚠ neon: Set all vector lanes to the same value
- vdup_lane_s8 ⚠ neon: Set all vector lanes to the same value
- vdup_lane_s16 ⚠ neon: Set all vector lanes to the same value
- vdup_lane_s32 ⚠ neon: Set all vector lanes to the same value
- vdup_lane_s64 ⚠ neon: Set all vector lanes to the same value
- vdup_lane_u8 ⚠ neon: Set all vector lanes to the same value
- vdup_lane_u16 ⚠ neon: Set all vector lanes to the same value
- vdup_lane_u32 ⚠ neon: Set all vector lanes to the same value
- vdup_lane_u64 ⚠ neon: Set all vector lanes to the same value
- vdup_laneq_f32 ⚠ neon: Set all vector lanes to the same value
- vdup_laneq_p8 ⚠ neon: Set all vector lanes to the same value
- vdup_laneq_p16 ⚠ neon: Set all vector lanes to the same value
- vdup_laneq_s8 ⚠ neon: Set all vector lanes to the same value
- vdup_laneq_s16 ⚠ neon: Set all vector lanes to the same value
- vdup_laneq_s32 ⚠ neon: Set all vector lanes to the same value
- vdup_laneq_s64 ⚠ neon: Set all vector lanes to the same value
- vdup_laneq_u8 ⚠ neon: Set all vector lanes to the same value
- vdup_laneq_u16 ⚠ neon: Set all vector lanes to the same value
- vdup_laneq_u32 ⚠ neon: Set all vector lanes to the same value
- vdup_laneq_u64 ⚠ neon: Set all vector lanes to the same value
- vdup_n_f32 ⚠ neon: Duplicate vector element to vector or scalar
- vdup_n_p8 ⚠ neon: Duplicate vector element to vector or scalar
- vdup_n_p16 ⚠ neon: Duplicate vector element to vector or scalar
- vdup_n_s8 ⚠ neon: Duplicate vector element to vector or scalar
- vdup_n_s16 ⚠ neon: Duplicate vector element to vector or scalar
- vdup_n_s32 ⚠ neon: Duplicate vector element to vector or scalar
- vdup_n_s64 ⚠ neon: Duplicate vector element to vector or scalar
- vdup_n_u8 ⚠ neon: Duplicate vector element to vector or scalar
- vdup_n_u16 ⚠ neon: Duplicate vector element to vector or scalar
- vdup_n_u32 ⚠ neon: Duplicate vector element to vector or scalar
- vdup_n_u64 ⚠ neon: Duplicate vector element to vector or scalar
- vdupq_lane_f32 ⚠ neon: Set all vector lanes to the same value
- vdupq_lane_p8 ⚠ neon: Set all vector lanes to the same value
- vdupq_lane_p16 ⚠ neon: Set all vector lanes to the same value
- vdupq_lane_s8 ⚠ neon: Set all vector lanes to the same value
- vdupq_lane_s16 ⚠ neon: Set all vector lanes to the same value
- vdupq_lane_s32 ⚠ neon: Set all vector lanes to the same value
- vdupq_lane_s64 ⚠ neon: Set all vector lanes to the same value
- vdupq_lane_u8 ⚠ neon: Set all vector lanes to the same value
- vdupq_lane_u16⚠neonSet all vector lanes to the same value
- vdupq_lane_u32⚠neonSet all vector lanes to the same value
- vdupq_lane_u64⚠neonSet all vector lanes to the same value
- vdupq_laneq_f32⚠neonSet all vector lanes to the same value
- vdupq_laneq_p8⚠neonSet all vector lanes to the same value
- vdupq_laneq_p16⚠neonSet all vector lanes to the same value
- vdupq_laneq_s8⚠neonSet all vector lanes to the same value
- vdupq_laneq_s16⚠neonSet all vector lanes to the same value
- vdupq_laneq_s32⚠neonSet all vector lanes to the same value
- vdupq_laneq_s64⚠neonSet all vector lanes to the same value
- vdupq_laneq_u8⚠neonSet all vector lanes to the same value
- vdupq_laneq_u16⚠neonSet all vector lanes to the same value
- vdupq_laneq_u32⚠neonSet all vector lanes to the same value
- vdupq_laneq_u64⚠neonSet all vector lanes to the same value
- vdupq_n_f32⚠neonDuplicate vector element to vector or scalar
- vdupq_n_p8⚠neonDuplicate vector element to vector or scalar
- vdupq_n_p16⚠neonDuplicate vector element to vector or scalar
- vdupq_n_s8⚠neonDuplicate vector element to vector or scalar
- vdupq_n_s16⚠neonDuplicate vector element to vector or scalar
- vdupq_n_s32⚠neonDuplicate vector element to vector or scalar
- vdupq_n_s64⚠neonDuplicate vector element to vector or scalar
- vdupq_n_u8⚠neonDuplicate vector element to vector or scalar
- vdupq_n_u16⚠neonDuplicate vector element to vector or scalar
- vdupq_n_u32⚠neonDuplicate vector element to vector or scalar
- vdupq_n_u64⚠neonDuplicate vector element to vector or scalar
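The `vdup` family above comes in two flavors: `vdup_n_*` broadcasts a scalar into every lane, while `vdup_lane_*` broadcasts one existing lane of a vector (the lane index is a const argument). A portable sketch on a hypothetical 4-lane `i16` vector:

```rust
// `vdup_n`: fill every lane with a scalar.
fn dup_n_s16(x: i16) -> [i16; 4] {
    [x; 4]
}

// `vdup_lane`: fill every lane with lane LANE of an existing vector.
fn dup_lane_s16<const LANE: usize>(v: [i16; 4]) -> [i16; 4] {
    [v[LANE]; 4]
}

fn main() {
    assert_eq!(dup_n_s16(7), [7, 7, 7, 7]);
    assert_eq!(dup_lane_s16::<2>([1, 2, 3, 4]), [3, 3, 3, 3]);
}
```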
- veor_s8⚠neonVector bitwise exclusive or (vector)
- veor_s16⚠neonVector bitwise exclusive or (vector)
- veor_s32⚠neonVector bitwise exclusive or (vector)
- veor_s64⚠neonVector bitwise exclusive or (vector)
- veor_u8⚠neonVector bitwise exclusive or (vector)
- veor_u16⚠neonVector bitwise exclusive or (vector)
- veor_u32⚠neonVector bitwise exclusive or (vector)
- veor_u64⚠neonVector bitwise exclusive or (vector)
- veorq_s8⚠neonVector bitwise exclusive or (vector)
- veorq_s16⚠neonVector bitwise exclusive or (vector)
- veorq_s32⚠neonVector bitwise exclusive or (vector)
- veorq_s64⚠neonVector bitwise exclusive or (vector)
- veorq_u8⚠neonVector bitwise exclusive or (vector)
- veorq_u16⚠neonVector bitwise exclusive or (vector)
- veorq_u32⚠neonVector bitwise exclusive or (vector)
- veorq_u64⚠neonVector bitwise exclusive or (vector)
- vext_f32⚠neonExtract vector from pair of vectors
- vext_p8⚠neonExtract vector from pair of vectors
- vext_p16⚠neonExtract vector from pair of vectors
- vext_s8⚠neonExtract vector from pair of vectors
- vext_s16⚠neonExtract vector from pair of vectors
- vext_s32⚠neonExtract vector from pair of vectors
- vext_s64⚠neonExtract vector from pair of vectors
- vext_u8⚠neonExtract vector from pair of vectors
- vext_u16⚠neonExtract vector from pair of vectors
- vext_u32⚠neonExtract vector from pair of vectors
- vext_u64⚠neonExtract vector from pair of vectors
- vextq_f32⚠neonExtract vector from pair of vectors
- vextq_p8⚠neonExtract vector from pair of vectors
- vextq_p16⚠neonExtract vector from pair of vectors
- vextq_s8⚠neonExtract vector from pair of vectors
- vextq_s16⚠neonExtract vector from pair of vectors
- vextq_s32⚠neonExtract vector from pair of vectors
- vextq_s64⚠neonExtract vector from pair of vectors
- vextq_u8⚠neonExtract vector from pair of vectors
- vextq_u16⚠neonExtract vector from pair of vectors
- vextq_u32⚠neonExtract vector from pair of vectors
- vextq_u64⚠neonExtract vector from pair of vectors
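"Extract vector from pair of vectors" means the two inputs are conceptually concatenated and a full-width window starting at lane `N` is taken; this is the standard way to realign data across two registers. A portable sketch on 8-lane `u8` vectors:

```rust
// Sketch of `vext_u8::<N>(a, b)`: lanes N.. of `a` followed by the
// first N lanes of `b`.
fn ext_u8<const N: usize>(a: [u8; 8], b: [u8; 8]) -> [u8; 8] {
    let mut out = [0u8; 8];
    for i in 0..8 {
        let idx = N + i;
        out[i] = if idx < 8 { a[idx] } else { b[idx - 8] };
    }
    out
}

fn main() {
    let a = [0, 1, 2, 3, 4, 5, 6, 7];
    let b = [8, 9, 10, 11, 12, 13, 14, 15];
    assert_eq!(ext_u8::<3>(a, b), [3, 4, 5, 6, 7, 8, 9, 10]);
}
```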
- vfma_f32⚠neonFloating-point fused multiply-add to accumulator (vector)
- vfma_n_f32⚠neonFloating-point fused multiply-add to accumulator (vector)
- vfmaq_f32⚠neonFloating-point fused multiply-add to accumulator (vector)
- vfmaq_n_f32⚠neonFloating-point fused multiply-add to accumulator (vector)
- vfms_f32⚠neonFloating-point fused multiply-subtract from accumulator
- vfms_n_f32⚠neonFloating-point fused multiply-subtract from accumulator (vector)
- vfmsq_f32⚠neonFloating-point fused multiply-subtract from accumulator
- vfmsq_n_f32⚠neonFloating-point fused multiply-subtract from accumulator (vector)
- vget_high_f32⚠neonDuplicate vector element to vector or scalar
- vget_high_p8⚠neonDuplicate vector element to vector or scalar
- vget_high_p16⚠neonDuplicate vector element to vector or scalar
- vget_high_s8⚠neonDuplicate vector element to vector or scalar
- vget_high_s16⚠neonDuplicate vector element to vector or scalar
- vget_high_s32⚠neonDuplicate vector element to vector or scalar
- vget_high_s64⚠neonDuplicate vector element to vector or scalar
- vget_high_u8⚠neonDuplicate vector element to vector or scalar
- vget_high_u16⚠neonDuplicate vector element to vector or scalar
- vget_high_u32⚠neonDuplicate vector element to vector or scalar
- vget_high_u64⚠neonDuplicate vector element to vector or scalar
- vget_lane_f32⚠neonDuplicate vector element to vector or scalar
- vget_lane_p8⚠neonMove vector element to general-purpose register
- vget_lane_p16⚠neonMove vector element to general-purpose register
- vget_lane_p64⚠neonMove vector element to general-purpose register
- vget_lane_s8⚠neonMove vector element to general-purpose register
- vget_lane_s16⚠neonMove vector element to general-purpose register
- vget_lane_s32⚠neonMove vector element to general-purpose register
- vget_lane_s64⚠neonMove vector element to general-purpose register
- vget_lane_u8⚠neonMove vector element to general-purpose register
- vget_lane_u16⚠neonMove vector element to general-purpose register
- vget_lane_u32⚠neonMove vector element to general-purpose register
- vget_lane_u64⚠neonMove vector element to general-purpose register
- vget_low_f32⚠neonDuplicate vector element to vector or scalar
- vget_low_p8⚠neonDuplicate vector element to vector or scalar
- vget_low_p16⚠neonDuplicate vector element to vector or scalar
- vget_low_s8⚠neonDuplicate vector element to vector or scalar
- vget_low_s16⚠neonDuplicate vector element to vector or scalar
- vget_low_s32⚠neonDuplicate vector element to vector or scalar
- vget_low_s64⚠neonDuplicate vector element to vector or scalar
- vget_low_u8⚠neonDuplicate vector element to vector or scalar
- vget_low_u16⚠neonDuplicate vector element to vector or scalar
- vget_low_u32⚠neonDuplicate vector element to vector or scalar
- vget_low_u64⚠neonDuplicate vector element to vector or scalar
- vgetq_lane_f32⚠neonDuplicate vector element to vector or scalar
- vgetq_lane_p8⚠neonMove vector element to general-purpose register
- vgetq_lane_p16⚠neonMove vector element to general-purpose register
- vgetq_lane_p64⚠neonMove vector element to general-purpose register
- vgetq_lane_s8⚠neonMove vector element to general-purpose register
- vgetq_lane_s16⚠neonMove vector element to general-purpose register
- vgetq_lane_s32⚠neonMove vector element to general-purpose register
- vgetq_lane_s64⚠neonMove vector element to general-purpose register
- vgetq_lane_u8⚠neonMove vector element to general-purpose register
- vgetq_lane_u16⚠neonMove vector element to general-purpose register
- vgetq_lane_u32⚠neonMove vector element to general-purpose register
- vgetq_lane_u64⚠neonMove vector element to general-purpose register
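The `vget` family above splits and extracts: `vget_low`/`vget_high` take the lower or upper half of a 128-bit "q" vector, and `vget_lane` moves one element out into a scalar. A portable sketch on an 8-lane `i16` q-vector:

```rust
// `vget_low`/`vget_high`: split an 8-lane vector into two 4-lane halves.
fn get_low_s16(v: [i16; 8]) -> [i16; 4] {
    [v[0], v[1], v[2], v[3]]
}

fn get_high_s16(v: [i16; 8]) -> [i16; 4] {
    [v[4], v[5], v[6], v[7]]
}

// `vget_lane`: extract one element as a scalar.
fn get_lane_s16<const LANE: usize>(v: [i16; 4]) -> i16 {
    v[LANE]
}

fn main() {
    let v = [1, 2, 3, 4, 5, 6, 7, 8];
    assert_eq!(get_low_s16(v), [1, 2, 3, 4]);
    assert_eq!(get_high_s16(v), [5, 6, 7, 8]);
    assert_eq!(get_lane_s16::<0>([9, 8, 7, 6]), 9);
}
```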
- vhadd_s8⚠neonHalving add
- vhadd_s16⚠neonHalving add
- vhadd_s32⚠neonHalving add
- vhadd_u8⚠neonHalving add
- vhadd_u16⚠neonHalving add
- vhadd_u32⚠neonHalving add
- vhaddq_s8⚠neonHalving add
- vhaddq_s16⚠neonHalving add
- vhaddq_s32⚠neonHalving add
- vhaddq_u8⚠neonHalving add
- vhaddq_u16⚠neonHalving add
- vhaddq_u32⚠neonHalving add
- vhsub_s8⚠neonSigned halving subtract
- vhsub_s16⚠neonSigned halving subtract
- vhsub_s32⚠neonSigned halving subtract
- vhsub_u8⚠neonSigned halving subtract
- vhsub_u16⚠neonSigned halving subtract
- vhsub_u32⚠neonSigned halving subtract
- vhsubq_s8⚠neonSigned halving subtract
- vhsubq_s16⚠neonSigned halving subtract
- vhsubq_s32⚠neonSigned halving subtract
- vhsubq_u8⚠neonSigned halving subtract
- vhsubq_u16⚠neonSigned halving subtract
- vhsubq_u32⚠neonSigned halving subtract
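"Halving add" and "halving subtract" form the sum or difference at double width and then shift right by one, so the intermediate can never overflow the element type — useful for averaging without widening. A portable per-element sketch:

```rust
// Sketch of `vhadd_u8` semantics: (a + b) >> 1 computed at double width.
fn hadd_u8(a: u8, b: u8) -> u8 {
    ((a as u16 + b as u16) >> 1) as u8
}

// Sketch of `vhsub_s8` semantics: (a - b) >> 1, arithmetic shift.
fn hsub_s8(a: i8, b: i8) -> i8 {
    ((a as i16 - b as i16) >> 1) as i8
}

fn main() {
    assert_eq!(hadd_u8(250, 10), 130); // 260 / 2, no u8 overflow
    assert_eq!(hsub_s8(-100, 100), -100); // -200 >> 1
}
```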
- vld1_dup_f32⚠neonLoad one single-element structure and replicate to all lanes (of one register).
- vld1_dup_p8⚠neonLoad one single-element structure and replicate to all lanes (of one register).
- vld1_dup_p16⚠neonLoad one single-element structure and replicate to all lanes (of one register).
- vld1_dup_p64⚠neon,aesLoad one single-element structure and replicate to all lanes (of one register).
- vld1_dup_s8⚠neonLoad one single-element structure and replicate to all lanes (of one register).
- vld1_dup_s16⚠neonLoad one single-element structure and replicate to all lanes (of one register).
- vld1_dup_s32⚠neonLoad one single-element structure and replicate to all lanes (of one register).
- vld1_dup_s64⚠neonLoad one single-element structure and replicate to all lanes (of one register).
- vld1_dup_u8⚠neonLoad one single-element structure and replicate to all lanes (of one register).
- vld1_dup_u16⚠neonLoad one single-element structure and replicate to all lanes (of one register).
- vld1_dup_u32⚠neonLoad one single-element structure and replicate to all lanes (of one register).
- vld1_dup_u64⚠neonLoad one single-element structure and replicate to all lanes (of one register).
- vld1_f32_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_f32_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_f32_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_lane_f32⚠neonLoad one single-element structure to one lane of one register.
- vld1_lane_p8⚠neonLoad one single-element structure to one lane of one register.
- vld1_lane_p16⚠neonLoad one single-element structure to one lane of one register.
- vld1_lane_p64⚠neon,aesLoad one single-element structure to one lane of one register.
- vld1_lane_s8⚠neonLoad one single-element structure to one lane of one register.
- vld1_lane_s16⚠neonLoad one single-element structure to one lane of one register.
- vld1_lane_s32⚠neonLoad one single-element structure to one lane of one register.
- vld1_lane_s64⚠neonLoad one single-element structure to one lane of one register.
- vld1_lane_u8⚠neonLoad one single-element structure to one lane of one register.
- vld1_lane_u16⚠neonLoad one single-element structure to one lane of one register.
- vld1_lane_u32⚠neonLoad one single-element structure to one lane of one register.
- vld1_lane_u64⚠neonLoad one single-element structure to one lane of one register.
- vld1_p8_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_p8_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_p8_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_p16_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_p16_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_p16_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_p64_x2⚠neon,aesLoad multiple single-element structures to one, two, three, or four registers
- vld1_p64_x3⚠neon,aesLoad multiple single-element structures to one, two, three, or four registers
- vld1_p64_x4⚠neon,aesLoad multiple single-element structures to one, two, three, or four registers
- vld1_s8_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_s8_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_s8_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_s16_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_s16_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_s16_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_s32_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_s32_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_s32_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_s64_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_s64_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_s64_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_u8_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_u8_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_u8_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_u16_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_u16_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_u16_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_u32_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_u32_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_u32_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_u64_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_u64_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_u64_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_dup_f32⚠neonLoad one single-element structure and replicate to all lanes (of one register).
- vld1q_dup_p8⚠neonLoad one single-element structure and replicate to all lanes (of one register).
- vld1q_dup_p16⚠neonLoad one single-element structure and replicate to all lanes (of one register).
- vld1q_dup_p64⚠neon,aesLoad one single-element structure and replicate to all lanes (of one register).
- vld1q_dup_s8⚠neonLoad one single-element structure and replicate to all lanes (of one register).
- vld1q_dup_s16⚠neonLoad one single-element structure and replicate to all lanes (of one register).
- vld1q_dup_s32⚠neonLoad one single-element structure and replicate to all lanes (of one register).
- vld1q_dup_s64⚠neonLoad one single-element structure and replicate to all lanes (of one register).
- vld1q_dup_u8⚠neonLoad one single-element structure and replicate to all lanes (of one register).
- vld1q_dup_u16⚠neonLoad one single-element structure and replicate to all lanes (of one register).
- vld1q_dup_u32⚠neonLoad one single-element structure and replicate to all lanes (of one register).
- vld1q_dup_u64⚠neonLoad one single-element structure and replicate to all lanes (of one register).
- vld1q_f32_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_f32_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_f32_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_lane_f32⚠neonLoad one single-element structure to one lane of one register.
- vld1q_lane_p8⚠neonLoad one single-element structure to one lane of one register.
- vld1q_lane_p16⚠neonLoad one single-element structure to one lane of one register.
- vld1q_lane_p64⚠neon,aesLoad one single-element structure to one lane of one register.
- vld1q_lane_s8⚠neonLoad one single-element structure to one lane of one register.
- vld1q_lane_s16⚠neonLoad one single-element structure to one lane of one register.
- vld1q_lane_s32⚠neonLoad one single-element structure to one lane of one register.
- vld1q_lane_s64⚠neonLoad one single-element structure to one lane of one register.
- vld1q_lane_u8⚠neonLoad one single-element structure to one lane of one register.
- vld1q_lane_u16⚠neonLoad one single-element structure to one lane of one register.
- vld1q_lane_u32⚠neonLoad one single-element structure to one lane of one register.
- vld1q_lane_u64⚠neonLoad one single-element structure to one lane of one register.
- vld1q_p8_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_p8_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_p8_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_p16_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_p16_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_p16_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_p64_x2⚠neon,aesLoad multiple single-element structures to one, two, three, or four registers
- vld1q_p64_x3⚠neon,aesLoad multiple single-element structures to one, two, three, or four registers
- vld1q_p64_x4⚠neon,aesLoad multiple single-element structures to one, two, three, or four registers
- vld1q_s8_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_s8_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_s8_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_s16_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_s16_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_s16_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_s32_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_s32_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_s32_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_s64_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_s64_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_s64_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_u8_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_u8_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_u8_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_u16_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_u16_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_u16_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_u32_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_u32_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_u32_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_u64_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_u64_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_u64_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
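The `vld1` load shapes above differ in how a memory read lands in the vector: `dup` reads one element and replicates it into every lane, `lane` reads one element into a single lane while preserving the rest, and `_x2`/`_x3`/`_x4` load several contiguous full vectors at once. A portable sketch of the first two on a 4-lane `u16` vector:

```rust
// `vld1_dup`: read one element and replicate it to all lanes.
fn ld1_dup_u16(p: &u16) -> [u16; 4] {
    [*p; 4]
}

// `vld1_lane`: read one element into lane LANE, keeping the other lanes.
fn ld1_lane_u16<const LANE: usize>(p: &u16, mut v: [u16; 4]) -> [u16; 4] {
    v[LANE] = *p;
    v
}

fn main() {
    let x = 9u16;
    assert_eq!(ld1_dup_u16(&x), [9, 9, 9, 9]);
    assert_eq!(ld1_lane_u16::<1>(&x, [1, 2, 3, 4]), [1, 9, 3, 4]);
}
```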
- vld2_dup_f32⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2_dup_p8⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2_dup_p16⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2_dup_p64⚠neon,aesLoad single 2-element structure and replicate to all lanes of two registers
- vld2_dup_s8⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2_dup_s16⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2_dup_s32⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2_dup_s64⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2_dup_u8⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2_dup_u16⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2_dup_u32⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2_dup_u64⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2_f32⚠neonLoad multiple 2-element structures to two registers
- vld2_lane_f32⚠neonLoad multiple 2-element structures to two registers
- vld2_lane_p8⚠neonLoad multiple 2-element structures to two registers
- vld2_lane_p16⚠neonLoad multiple 2-element structures to two registers
- vld2_lane_s8⚠neonLoad multiple 2-element structures to two registers
- vld2_lane_s16⚠neonLoad multiple 2-element structures to two registers
- vld2_lane_s32⚠neonLoad multiple 2-element structures to two registers
- vld2_lane_u8⚠neonLoad multiple 2-element structures to two registers
- vld2_lane_u16⚠neonLoad multiple 2-element structures to two registers
- vld2_lane_u32⚠neonLoad multiple 2-element structures to two registers
- vld2_p8⚠neonLoad multiple 2-element structures to two registers
- vld2_p16⚠neonLoad multiple 2-element structures to two registers
- vld2_p64⚠neon,aesLoad multiple 2-element structures to two registers
- vld2_s8⚠neonLoad multiple 2-element structures to two registers
- vld2_s16⚠neonLoad multiple 2-element structures to two registers
- vld2_s32⚠neonLoad multiple 2-element structures to two registers
- vld2_s64⚠neonLoad multiple 2-element structures to two registers
- vld2_u8⚠neonLoad multiple 2-element structures to two registers
- vld2_u16⚠neonLoad multiple 2-element structures to two registers
- vld2_u32⚠neonLoad multiple 2-element structures to two registers
- vld2_u64⚠neonLoad multiple 2-element structures to two registers
- vld2q_dup_f32⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2q_dup_p8⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2q_dup_p16⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2q_dup_s8⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2q_dup_s16⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2q_dup_s32⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2q_dup_u8⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2q_dup_u16⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2q_dup_u32⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2q_f32⚠neonLoad multiple 2-element structures to two registers
- vld2q_lane_f32⚠neonLoad multiple 2-element structures to two registers
- vld2q_lane_p16⚠neonLoad multiple 2-element structures to two registers
- vld2q_lane_s16⚠neonLoad multiple 2-element structures to two registers
- vld2q_lane_s32⚠neonLoad multiple 2-element structures to two registers
- vld2q_lane_u16⚠neonLoad multiple 2-element structures to two registers
- vld2q_lane_u32⚠neonLoad multiple 2-element structures to two registers
- vld2q_p8⚠neonLoad multiple 2-element structures to two registers
- vld2q_p16⚠neonLoad multiple 2-element structures to two registers
- vld2q_s8⚠neonLoad multiple 2-element structures to two registers
- vld2q_s16⚠neonLoad multiple 2-element structures to two registers
- vld2q_s32⚠neonLoad multiple 2-element structures to two registers
- vld2q_u8⚠neonLoad multiple 2-element structures to two registers
- vld2q_u16⚠neonLoad multiple 2-element structures to two registers
- vld2q_u32⚠neonLoad multiple 2-element structures to two registers
- vld3_dup_f32⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3_dup_p8⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3_dup_p16⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3_dup_p64⚠neon,aesLoad single 3-element structure and replicate to all lanes of three registers
- vld3_dup_s8⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3_dup_s16⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3_dup_s32⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3_dup_s64⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3_dup_u8⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3_dup_u16⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3_dup_u32⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3_dup_u64⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3_f32⚠neonLoad multiple 3-element structures to three registers
- vld3_lane_f32⚠neonLoad multiple 3-element structures to three registers
- vld3_lane_p8⚠neonLoad multiple 3-element structures to three registers
- vld3_lane_p16⚠neonLoad multiple 3-element structures to three registers
- vld3_lane_s8⚠neonLoad multiple 3-element structures to three registers
- vld3_lane_s16⚠neonLoad multiple 3-element structures to three registers
- vld3_lane_s32⚠neonLoad multiple 3-element structures to three registers
- vld3_lane_u8⚠neonLoad multiple 3-element structures to three registers
- vld3_lane_u16⚠neonLoad multiple 3-element structures to three registers
- vld3_lane_u32⚠neonLoad multiple 3-element structures to three registers
- vld3_p8⚠neonLoad multiple 3-element structures to three registers
- vld3_p16⚠neonLoad multiple 3-element structures to three registers
- vld3_p64⚠neon,aesLoad multiple 3-element structures to three registers
- vld3_s8⚠neonLoad multiple 3-element structures to three registers
- vld3_s16⚠neonLoad multiple 3-element structures to three registers
- vld3_s32⚠neonLoad multiple 3-element structures to three registers
- vld3_s64⚠neonLoad multiple 3-element structures to three registers
- vld3_u8⚠neonLoad multiple 3-element structures to three registers
- vld3_u16⚠neonLoad multiple 3-element structures to three registers
- vld3_u32⚠neonLoad multiple 3-element structures to three registers
- vld3_u64⚠neonLoad multiple 3-element structures to three registers
- vld3q_dup_f32⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3q_dup_p8⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3q_dup_p16⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3q_dup_s8⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3q_dup_s16⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3q_dup_s32⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3q_dup_u8⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3q_dup_u16⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3q_dup_u32⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3q_f32⚠neonLoad multiple 3-element structures to three registers
- vld3q_lane_f32⚠neonLoad multiple 3-element structures to three registers
- vld3q_lane_p16⚠neonLoad multiple 3-element structures to three registers
- vld3q_lane_s16⚠neonLoad multiple 3-element structures to three registers
- vld3q_lane_s32⚠neonLoad multiple 3-element structures to three registers
- vld3q_lane_u16⚠neonLoad multiple 3-element structures to three registers
- vld3q_lane_u32⚠neonLoad multiple 3-element structures to three registers
- vld3q_p8⚠neonLoad multiple 3-element structures to three registers
- vld3q_p16⚠neonLoad multiple 3-element structures to three registers
- vld3q_s8⚠neonLoad multiple 3-element structures to three registers
- vld3q_s16⚠neonLoad multiple 3-element structures to three registers
- vld3q_s32⚠neonLoad multiple 3-element structures to three registers
- vld3q_u8⚠neonLoad multiple 3-element structures to three registers
- vld3q_u16⚠neonLoad multiple 3-element structures to three registers
- vld3q_u32⚠neonLoad multiple 3-element structures to three registers
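Unlike `vld1`, the `vld2`/`vld3`/`vld4` structure loads de-interleave as they read: 2-, 3-, or 4-element structures in memory are split into one vector per structure field (e.g. interleaved RGB bytes become separate R, G, and B vectors). A portable sketch of the 2-stream case:

```rust
// Sketch of `vld2` semantics on [x0, y0, x1, y1, ...]: one vector of
// x's and one of y's. vld3/vld4 generalize to 3 and 4 streams.
fn ld2_u8(mem: &[u8; 8]) -> ([u8; 4], [u8; 4]) {
    let mut a = [0u8; 4];
    let mut b = [0u8; 4];
    for i in 0..4 {
        a[i] = mem[2 * i];
        b[i] = mem[2 * i + 1];
    }
    (a, b)
}

fn main() {
    let interleaved = [1, 10, 2, 20, 3, 30, 4, 40];
    assert_eq!(ld2_u8(&interleaved), ([1, 2, 3, 4], [10, 20, 30, 40]));
}
```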
- vld4_dup_f32⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4_dup_p8⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4_dup_p16⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4_dup_p64⚠neon,aesLoad single 4-element structure and replicate to all lanes of four registers
- vld4_dup_s8⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4_dup_s16⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4_dup_s32⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4_dup_s64⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4_dup_u8⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4_dup_u16⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4_dup_u32⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4_dup_u64⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4_f32⚠neonLoad multiple 4-element structures to four registers
- vld4_lane_f32⚠neonLoad multiple 4-element structures to four registers
- vld4_lane_p8⚠neonLoad multiple 4-element structures to four registers
- vld4_lane_p16⚠neonLoad multiple 4-element structures to four registers
- vld4_lane_s8⚠neonLoad multiple 4-element structures to four registers
- vld4_lane_s16⚠neonLoad multiple 4-element structures to four registers
- vld4_lane_s32⚠neonLoad multiple 4-element structures to four registers
- vld4_lane_u8⚠neonLoad multiple 4-element structures to four registers
- vld4_lane_u16⚠neonLoad multiple 4-element structures to four registers
- vld4_lane_u32⚠neonLoad multiple 4-element structures to four registers
- vld4_p8⚠neonLoad multiple 4-element structures to four registers
- vld4_p16⚠neonLoad multiple 4-element structures to four registers
- vld4_p64⚠neon,aesLoad multiple 4-element structures to four registers
- vld4_s8⚠neonLoad multiple 4-element structures to four registers
- vld4_s16⚠neonLoad multiple 4-element structures to four registers
- vld4_s32⚠neonLoad multiple 4-element structures to four registers
- vld4_s64⚠neonLoad multiple 4-element structures to four registers
- vld4_u8⚠neonLoad multiple 4-element structures to four registers
- vld4_u16⚠neonLoad multiple 4-element structures to four registers
- vld4_u32⚠neonLoad multiple 4-element structures to four registers
- vld4_u64⚠neonLoad multiple 4-element structures to four registers
- vld4q_dup_f32⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4q_dup_p8⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4q_dup_p16⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4q_dup_s8⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4q_dup_s16⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4q_dup_s32⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4q_dup_u8⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4q_dup_u16⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4q_dup_u32⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4q_f32⚠neonLoad multiple 4-element structures to four registers
- vld4q_lane_f32⚠neonLoad multiple 4-element structures to four registers
- vld4q_lane_p16⚠neonLoad multiple 4-element structures to four registers
- vld4q_lane_s16⚠neonLoad multiple 4-element structures to four registers
- vld4q_lane_s32⚠neonLoad multiple 4-element structures to four registers
- vld4q_lane_u16⚠neonLoad multiple 4-element structures to four registers
- vld4q_lane_u32⚠neonLoad multiple 4-element structures to four registers
- vld4q_p8⚠neonLoad multiple 4-element structures to four registers
- vld4q_p16⚠neonLoad multiple 4-element structures to four registers
- vld4q_s8⚠neonLoad multiple 4-element structures to four registers
- vld4q_s16⚠neonLoad multiple 4-element structures to four registers
- vld4q_s32⚠neonLoad multiple 4-element structures to four registers
- vld4q_u8⚠neonLoad multiple 4-element structures to four registers
- vld4q_u16⚠neonLoad multiple 4-element structures to four registers
- vld4q_u32⚠neonLoad multiple 4-element structures to four registers
- vldrq_p128⚠neonLoad SIMD&FP register (immediate offset)
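The `vld4_*` family above performs de-interleaving loads: memory holding packed 4-element structures is split into four separate vectors. A minimal portable sketch of that semantics in plain Rust (no NEON required; `vld4_like` is a hypothetical name, not a real intrinsic):

```rust
// Hedged sketch (hypothetical name): vld4_*-style de-interleave. Memory laid
// out as [a0,b0,c0,d0, a1,b1,c1,d1, ...] is split into four vectors a, b, c, d.
fn vld4_like(mem: &[u8; 32]) -> ([u8; 8], [u8; 8], [u8; 8], [u8; 8]) {
    let (mut a, mut b, mut c, mut d) = ([0u8; 8], [0u8; 8], [0u8; 8], [0u8; 8]);
    for i in 0..8 {
        a[i] = mem[4 * i];
        b[i] = mem[4 * i + 1];
        c[i] = mem[4 * i + 2];
        d[i] = mem[4 * i + 3];
    }
    (a, b, c, d)
}

fn main() {
    let mut mem = [0u8; 32];
    for (i, m) in mem.iter_mut().enumerate() {
        *m = i as u8;
    }
    let (a, b, _c, d) = vld4_like(&mem);
    assert_eq!(a, [0, 4, 8, 12, 16, 20, 24, 28]);
    assert_eq!(b[0], 1);
    assert_eq!(d[7], 31);
    println!("ok");
}
```

The `_dup` variants instead load one structure and broadcast it to every lane; the `_lane` variants load into a single chosen lane.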
- vmax_f32⚠neonMaximum (vector)
- vmax_s8⚠neonMaximum (vector)
- vmax_s16⚠neonMaximum (vector)
- vmax_s32⚠neonMaximum (vector)
- vmax_u8⚠neonMaximum (vector)
- vmax_u16⚠neonMaximum (vector)
- vmax_u32⚠neonMaximum (vector)
- vmaxnm_f32⚠neonFloating-point Maximum Number (vector)
- vmaxnmq_f32⚠neonFloating-point Maximum Number (vector)
- vmaxq_f32⚠neonMaximum (vector)
- vmaxq_s8⚠neonMaximum (vector)
- vmaxq_s16⚠neonMaximum (vector)
- vmaxq_s32⚠neonMaximum (vector)
- vmaxq_u8⚠neonMaximum (vector)
- vmaxq_u16⚠neonMaximum (vector)
- vmaxq_u32⚠neonMaximum (vector)
- vmin_f32⚠neonMinimum (vector)
- vmin_s8⚠neonMinimum (vector)
- vmin_s16⚠neonMinimum (vector)
- vmin_s32⚠neonMinimum (vector)
- vmin_u8⚠neonMinimum (vector)
- vmin_u16⚠neonMinimum (vector)
- vmin_u32⚠neonMinimum (vector)
- vminnm_f32⚠neonFloating-point Minimum Number (vector)
- vminnmq_f32⚠neonFloating-point Minimum Number (vector)
- vminq_f32⚠neonMinimum (vector)
- vminq_s8⚠neonMinimum (vector)
- vminq_s16⚠neonMinimum (vector)
- vminq_s32⚠neonMinimum (vector)
- vminq_u8⚠neonMinimum (vector)
- vminq_u16⚠neonMinimum (vector)
- vminq_u32⚠neonMinimum (vector)
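The `vmax_*`/`vmin_*` intrinsics above compare lane-wise. A portable sketch of the semantics (hypothetical name; note that the separate `vmaxnm_*`/`vminnm_*` forms differ only in NaN handling, preferring the numeric operand):

```rust
// Hedged sketch (hypothetical name): vmax_* takes the lane-wise maximum;
// vmin_* would use .min() instead.
fn vmax_like(a: [i16; 4], b: [i16; 4]) -> [i16; 4] {
    let mut r = [0i16; 4];
    for i in 0..4 {
        r[i] = a[i].max(b[i]);
    }
    r
}

fn main() {
    assert_eq!(vmax_like([1, -5, 7, 0], [2, -9, 3, 0]), [2, -5, 7, 0]);
    println!("ok");
}
```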
- vmla_f32⚠neonFloating-point multiply-add to accumulator
- vmla_lane_f32⚠neonVector multiply accumulate with scalar
- vmla_lane_s16⚠neonVector multiply accumulate with scalar
- vmla_lane_s32⚠neonVector multiply accumulate with scalar
- vmla_lane_u16⚠neonVector multiply accumulate with scalar
- vmla_lane_u32⚠neonVector multiply accumulate with scalar
- vmla_laneq_f32⚠neonVector multiply accumulate with scalar
- vmla_laneq_s16⚠neonVector multiply accumulate with scalar
- vmla_laneq_s32⚠neonVector multiply accumulate with scalar
- vmla_laneq_u16⚠neonVector multiply accumulate with scalar
- vmla_laneq_u32⚠neonVector multiply accumulate with scalar
- vmla_n_f32⚠neonVector multiply accumulate with scalar
- vmla_n_s16⚠neonVector multiply accumulate with scalar
- vmla_n_s32⚠neonVector multiply accumulate with scalar
- vmla_n_u16⚠neonVector multiply accumulate with scalar
- vmla_n_u32⚠neonVector multiply accumulate with scalar
- vmla_s8⚠neonMultiply-add to accumulator
- vmla_s16⚠neonMultiply-add to accumulator
- vmla_s32⚠neonMultiply-add to accumulator
- vmla_u8⚠neonMultiply-add to accumulator
- vmla_u16⚠neonMultiply-add to accumulator
- vmla_u32⚠neonMultiply-add to accumulator
- vmlal_lane_s16⚠neonVector widening multiply accumulate with scalar
- vmlal_lane_s32⚠neonVector widening multiply accumulate with scalar
- vmlal_lane_u16⚠neonVector widening multiply accumulate with scalar
- vmlal_lane_u32⚠neonVector widening multiply accumulate with scalar
- vmlal_laneq_s16⚠neonVector widening multiply accumulate with scalar
- vmlal_laneq_s32⚠neonVector widening multiply accumulate with scalar
- vmlal_laneq_u16⚠neonVector widening multiply accumulate with scalar
- vmlal_laneq_u32⚠neonVector widening multiply accumulate with scalar
- vmlal_n_s16⚠neonVector widening multiply accumulate with scalar
- vmlal_n_s32⚠neonVector widening multiply accumulate with scalar
- vmlal_n_u16⚠neonVector widening multiply accumulate with scalar
- vmlal_n_u32⚠neonVector widening multiply accumulate with scalar
- vmlal_s8⚠neonSigned multiply-add long
- vmlal_s16⚠neonSigned multiply-add long
- vmlal_s32⚠neonSigned multiply-add long
- vmlal_u8⚠neonUnsigned multiply-add long
- vmlal_u16⚠neonUnsigned multiply-add long
- vmlal_u32⚠neonUnsigned multiply-add long
- vmlaq_f32⚠neonFloating-point multiply-add to accumulator
- vmlaq_lane_f32⚠neonVector multiply accumulate with scalar
- vmlaq_lane_s16⚠neonVector multiply accumulate with scalar
- vmlaq_lane_s32⚠neonVector multiply accumulate with scalar
- vmlaq_lane_u16⚠neonVector multiply accumulate with scalar
- vmlaq_lane_u32⚠neonVector multiply accumulate with scalar
- vmlaq_laneq_f32⚠neonVector multiply accumulate with scalar
- vmlaq_laneq_s16⚠neonVector multiply accumulate with scalar
- vmlaq_laneq_s32⚠neonVector multiply accumulate with scalar
- vmlaq_laneq_u16⚠neonVector multiply accumulate with scalar
- vmlaq_laneq_u32⚠neonVector multiply accumulate with scalar
- vmlaq_n_f32⚠neonVector multiply accumulate with scalar
- vmlaq_n_s16⚠neonVector multiply accumulate with scalar
- vmlaq_n_s32⚠neonVector multiply accumulate with scalar
- vmlaq_n_u16⚠neonVector multiply accumulate with scalar
- vmlaq_n_u32⚠neonVector multiply accumulate with scalar
- vmlaq_s8⚠neonMultiply-add to accumulator
- vmlaq_s16⚠neonMultiply-add to accumulator
- vmlaq_s32⚠neonMultiply-add to accumulator
- vmlaq_u8⚠neonMultiply-add to accumulator
- vmlaq_u16⚠neonMultiply-add to accumulator
- vmlaq_u32⚠neonMultiply-add to accumulator
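The multiply-accumulate family above computes `a + b * c` per lane (the `_lane`/`_n` variants replace `c` with a single selected or broadcast scalar). A portable sketch under those assumptions (hypothetical name; integer lanes wrap on overflow):

```rust
// Hedged sketch (hypothetical name): vmla_* computes a + b * c lane-wise
// with modular integer arithmetic; vmls_* computes a - b * c instead.
fn vmla_like(a: [i32; 2], b: [i32; 2], c: [i32; 2]) -> [i32; 2] {
    let mut r = [0i32; 2];
    for i in 0..2 {
        r[i] = a[i].wrapping_add(b[i].wrapping_mul(c[i]));
    }
    r
}

fn main() {
    assert_eq!(vmla_like([10, 20], [3, 4], [5, 6]), [25, 44]);
    println!("ok");
}
```

The `vmlal_*` forms additionally widen: the products of narrow lanes are accumulated into a vector of double-width lanes.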
- vmls_f32⚠neonFloating-point multiply-subtract from accumulator
- vmls_lane_f32⚠neonVector multiply subtract with scalar
- vmls_lane_s16⚠neonVector multiply subtract with scalar
- vmls_lane_s32⚠neonVector multiply subtract with scalar
- vmls_lane_u16⚠neonVector multiply subtract with scalar
- vmls_lane_u32⚠neonVector multiply subtract with scalar
- vmls_laneq_f32⚠neonVector multiply subtract with scalar
- vmls_laneq_s16⚠neonVector multiply subtract with scalar
- vmls_laneq_s32⚠neonVector multiply subtract with scalar
- vmls_laneq_u16⚠neonVector multiply subtract with scalar
- vmls_laneq_u32⚠neonVector multiply subtract with scalar
- vmls_n_f32⚠neonVector multiply subtract with scalar
- vmls_n_s16⚠neonVector multiply subtract with scalar
- vmls_n_s32⚠neonVector multiply subtract with scalar
- vmls_n_u16⚠neonVector multiply subtract with scalar
- vmls_n_u32⚠neonVector multiply subtract with scalar
- vmls_s8⚠neonMultiply-subtract from accumulator
- vmls_s16⚠neonMultiply-subtract from accumulator
- vmls_s32⚠neonMultiply-subtract from accumulator
- vmls_u8⚠neonMultiply-subtract from accumulator
- vmls_u16⚠neonMultiply-subtract from accumulator
- vmls_u32⚠neonMultiply-subtract from accumulator
- vmlsl_lane_s16⚠neonVector widening multiply subtract with scalar
- vmlsl_lane_s32⚠neonVector widening multiply subtract with scalar
- vmlsl_lane_u16⚠neonVector widening multiply subtract with scalar
- vmlsl_lane_u32⚠neonVector widening multiply subtract with scalar
- vmlsl_laneq_s16⚠neonVector widening multiply subtract with scalar
- vmlsl_laneq_s32⚠neonVector widening multiply subtract with scalar
- vmlsl_laneq_u16⚠neonVector widening multiply subtract with scalar
- vmlsl_laneq_u32⚠neonVector widening multiply subtract with scalar
- vmlsl_n_s16⚠neonVector widening multiply subtract with scalar
- vmlsl_n_s32⚠neonVector widening multiply subtract with scalar
- vmlsl_n_u16⚠neonVector widening multiply subtract with scalar
- vmlsl_n_u32⚠neonVector widening multiply subtract with scalar
- vmlsl_s8⚠neonSigned multiply-subtract long
- vmlsl_s16⚠neonSigned multiply-subtract long
- vmlsl_s32⚠neonSigned multiply-subtract long
- vmlsl_u8⚠neonUnsigned multiply-subtract long
- vmlsl_u16⚠neonUnsigned multiply-subtract long
- vmlsl_u32⚠neonUnsigned multiply-subtract long
- vmlsq_f32⚠neonFloating-point multiply-subtract from accumulator
- vmlsq_lane_f32⚠neonVector multiply subtract with scalar
- vmlsq_lane_s16⚠neonVector multiply subtract with scalar
- vmlsq_lane_s32⚠neonVector multiply subtract with scalar
- vmlsq_lane_u16⚠neonVector multiply subtract with scalar
- vmlsq_lane_u32⚠neonVector multiply subtract with scalar
- vmlsq_laneq_f32⚠neonVector multiply subtract with scalar
- vmlsq_laneq_s16⚠neonVector multiply subtract with scalar
- vmlsq_laneq_s32⚠neonVector multiply subtract with scalar
- vmlsq_laneq_u16⚠neonVector multiply subtract with scalar
- vmlsq_laneq_u32⚠neonVector multiply subtract with scalar
- vmlsq_n_f32⚠neonVector multiply subtract with scalar
- vmlsq_n_s16⚠neonVector multiply subtract with scalar
- vmlsq_n_s32⚠neonVector multiply subtract with scalar
- vmlsq_n_u16⚠neonVector multiply subtract with scalar
- vmlsq_n_u32⚠neonVector multiply subtract with scalar
- vmlsq_s8⚠neonMultiply-subtract from accumulator
- vmlsq_s16⚠neonMultiply-subtract from accumulator
- vmlsq_s32⚠neonMultiply-subtract from accumulator
- vmlsq_u8⚠neonMultiply-subtract from accumulator
- vmlsq_u16⚠neonMultiply-subtract from accumulator
- vmlsq_u32⚠neonMultiply-subtract from accumulator
- vmov_n_f32⚠neonDuplicate vector element to vector or scalar
- vmov_n_p8⚠neonDuplicate vector element to vector or scalar
- vmov_n_p16⚠neonDuplicate vector element to vector or scalar
- vmov_n_s8⚠neonDuplicate vector element to vector or scalar
- vmov_n_s16⚠neonDuplicate vector element to vector or scalar
- vmov_n_s32⚠neonDuplicate vector element to vector or scalar
- vmov_n_s64⚠neonDuplicate vector element to vector or scalar
- vmov_n_u8⚠neonDuplicate vector element to vector or scalar
- vmov_n_u16⚠neonDuplicate vector element to vector or scalar
- vmov_n_u32⚠neonDuplicate vector element to vector or scalar
- vmov_n_u64⚠neonDuplicate vector element to vector or scalar
- vmovl_s8⚠neonVector long move.
- vmovl_s16⚠neonVector long move.
- vmovl_s32⚠neonVector long move.
- vmovl_u8⚠neonVector long move.
- vmovl_u16⚠neonVector long move.
- vmovl_u32⚠neonVector long move.
- vmovn_s16⚠neonVector narrow integer.
- vmovn_s32⚠neonVector narrow integer.
- vmovn_s64⚠neonVector narrow integer.
- vmovn_u16⚠neonVector narrow integer.
- vmovn_u32⚠neonVector narrow integer.
- vmovn_u64⚠neonVector narrow integer.
- vmovq_n_f32⚠neonDuplicate vector element to vector or scalar
- vmovq_n_p8⚠neonDuplicate vector element to vector or scalar
- vmovq_n_p16⚠neonDuplicate vector element to vector or scalar
- vmovq_n_s8⚠neonDuplicate vector element to vector or scalar
- vmovq_n_s16⚠neonDuplicate vector element to vector or scalar
- vmovq_n_s32⚠neonDuplicate vector element to vector or scalar
- vmovq_n_s64⚠neonDuplicate vector element to vector or scalar
- vmovq_n_u8⚠neonDuplicate vector element to vector or scalar
- vmovq_n_u16⚠neonDuplicate vector element to vector or scalar
- vmovq_n_u32⚠neonDuplicate vector element to vector or scalar
- vmovq_n_u64⚠neonDuplicate vector element to vector or scalar
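The move family above covers three shapes: `vmov_n_*` broadcasts a scalar to every lane, `vmovl_*` widens each lane, and `vmovn_*` narrows each lane by keeping its low half. A portable sketch of the widen/narrow pair (hypothetical names):

```rust
// Hedged sketch (hypothetical names): vmovl_* widens with sign extension
// (zero extension for the unsigned variants); vmovn_* truncates to the low
// half of each lane — it does NOT saturate (vqmovn_* is the saturating form).
fn vmovl_like(a: [i8; 4]) -> [i16; 4] {
    a.map(|x| x as i16) // sign-extend
}

fn vmovn_like(a: [i16; 4]) -> [i8; 4] {
    a.map(|x| x as i8) // truncate
}

fn main() {
    assert_eq!(vmovl_like([-1, 2, -3, 4]), [-1, 2, -3, 4]);
    assert_eq!(vmovn_like([300, -1, 127, -200]), [44, -1, 127, 56]);
    println!("ok");
}
```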
- vmul_f32⚠neonMultiply
- vmul_lane_f32⚠neonFloating-point multiply
- vmul_lane_s16⚠neonMultiply
- vmul_lane_s32⚠neonMultiply
- vmul_lane_u16⚠neonMultiply
- vmul_lane_u32⚠neonMultiply
- vmul_laneq_f32⚠neonFloating-point multiply
- vmul_laneq_s16⚠neonMultiply
- vmul_laneq_s32⚠neonMultiply
- vmul_laneq_u16⚠neonMultiply
- vmul_laneq_u32⚠neonMultiply
- vmul_n_f32⚠neonVector multiply by scalar
- vmul_n_s16⚠neonVector multiply by scalar
- vmul_n_s32⚠neonVector multiply by scalar
- vmul_n_u16⚠neonVector multiply by scalar
- vmul_n_u32⚠neonVector multiply by scalar
- vmul_p8⚠neonPolynomial multiply
- vmul_s8⚠neonMultiply
- vmul_s16⚠neonMultiply
- vmul_s32⚠neonMultiply
- vmul_u8⚠neonMultiply
- vmul_u16⚠neonMultiply
- vmul_u32⚠neonMultiply
- vmull_lane_s16⚠neonVector long multiply by scalar
- vmull_lane_s32⚠neonVector long multiply by scalar
- vmull_lane_u16⚠neonVector long multiply by scalar
- vmull_lane_u32⚠neonVector long multiply by scalar
- vmull_laneq_s16⚠neonVector long multiply by scalar
- vmull_laneq_s32⚠neonVector long multiply by scalar
- vmull_laneq_u16⚠neonVector long multiply by scalar
- vmull_laneq_u32⚠neonVector long multiply by scalar
- vmull_n_s16⚠neonVector long multiply with scalar
- vmull_n_s32⚠neonVector long multiply with scalar
- vmull_n_u16⚠neonVector long multiply with scalar
- vmull_n_u32⚠neonVector long multiply with scalar
- vmull_p8⚠neonPolynomial multiply long
- vmull_s8⚠neonSigned multiply long
- vmull_s16⚠neonSigned multiply long
- vmull_s32⚠neonSigned multiply long
- vmull_u8⚠neonUnsigned multiply long
- vmull_u16⚠neonUnsigned multiply long
- vmull_u32⚠neonUnsigned multiply long
- vmulq_f32⚠neonMultiply
- vmulq_lane_f32⚠neonFloating-point multiply
- vmulq_lane_s16⚠neonMultiply
- vmulq_lane_s32⚠neonMultiply
- vmulq_lane_u16⚠neonMultiply
- vmulq_lane_u32⚠neonMultiply
- vmulq_laneq_f32⚠neonFloating-point multiply
- vmulq_laneq_s16⚠neonMultiply
- vmulq_laneq_s32⚠neonMultiply
- vmulq_laneq_u16⚠neonMultiply
- vmulq_laneq_u32⚠neonMultiply
- vmulq_n_f32⚠neonVector multiply by scalar
- vmulq_n_s16⚠neonVector multiply by scalar
- vmulq_n_s32⚠neonVector multiply by scalar
- vmulq_n_u16⚠neonVector multiply by scalar
- vmulq_n_u32⚠neonVector multiply by scalar
- vmulq_p8⚠neonPolynomial multiply
- vmulq_s8⚠neonMultiply
- vmulq_s16⚠neonMultiply
- vmulq_s32⚠neonMultiply
- vmulq_u8⚠neonMultiply
- vmulq_u16⚠neonMultiply
- vmulq_u32⚠neonMultiply
- vmvn_p8⚠neonVector bitwise not.
- vmvn_s8⚠neonVector bitwise not.
- vmvn_s16⚠neonVector bitwise not.
- vmvn_s32⚠neonVector bitwise not.
- vmvn_u8⚠neonVector bitwise not.
- vmvn_u16⚠neonVector bitwise not.
- vmvn_u32⚠neonVector bitwise not.
- vmvnq_p8⚠neonVector bitwise not.
- vmvnq_s8⚠neonVector bitwise not.
- vmvnq_s16⚠neonVector bitwise not.
- vmvnq_s32⚠neonVector bitwise not.
- vmvnq_u8⚠neonVector bitwise not.
- vmvnq_u16⚠neonVector bitwise not.
- vmvnq_u32⚠neonVector bitwise not.
- vneg_f32⚠neonNegate
- vneg_s8⚠neonNegate
- vneg_s16⚠neonNegate
- vneg_s32⚠neonNegate
- vnegq_f32⚠neonNegate
- vnegq_s8⚠neonNegate
- vnegq_s16⚠neonNegate
- vnegq_s32⚠neonNegate
- vorn_s8⚠neonVector bitwise inclusive OR NOT
- vorn_s16⚠neonVector bitwise inclusive OR NOT
- vorn_s32⚠neonVector bitwise inclusive OR NOT
- vorn_s64⚠neonVector bitwise inclusive OR NOT
- vorn_u8⚠neonVector bitwise inclusive OR NOT
- vorn_u16⚠neonVector bitwise inclusive OR NOT
- vorn_u32⚠neonVector bitwise inclusive OR NOT
- vorn_u64⚠neonVector bitwise inclusive OR NOT
- vornq_s8⚠neonVector bitwise inclusive OR NOT
- vornq_s16⚠neonVector bitwise inclusive OR NOT
- vornq_s32⚠neonVector bitwise inclusive OR NOT
- vornq_s64⚠neonVector bitwise inclusive OR NOT
- vornq_u8⚠neonVector bitwise inclusive OR NOT
- vornq_u16⚠neonVector bitwise inclusive OR NOT
- vornq_u32⚠neonVector bitwise inclusive OR NOT
- vornq_u64⚠neonVector bitwise inclusive OR NOT
- vorr_s8⚠neonVector bitwise or (immediate, inclusive)
- vorr_s16⚠neonVector bitwise or (immediate, inclusive)
- vorr_s32⚠neonVector bitwise or (immediate, inclusive)
- vorr_s64⚠neonVector bitwise or (immediate, inclusive)
- vorr_u8⚠neonVector bitwise or (immediate, inclusive)
- vorr_u16⚠neonVector bitwise or (immediate, inclusive)
- vorr_u32⚠neonVector bitwise or (immediate, inclusive)
- vorr_u64⚠neonVector bitwise or (immediate, inclusive)
- vorrq_s8⚠neonVector bitwise or (immediate, inclusive)
- vorrq_s16⚠neonVector bitwise or (immediate, inclusive)
- vorrq_s32⚠neonVector bitwise or (immediate, inclusive)
- vorrq_s64⚠neonVector bitwise or (immediate, inclusive)
- vorrq_u8⚠neonVector bitwise or (immediate, inclusive)
- vorrq_u16⚠neonVector bitwise or (immediate, inclusive)
- vorrq_u32⚠neonVector bitwise or (immediate, inclusive)
- vorrq_u64⚠neonVector bitwise or (immediate, inclusive)
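`vorn_*` above is "OR NOT": each result lane is `a | !b`, which is why plain `vorr_*` OR sits alongside it. A portable sketch (hypothetical name):

```rust
// Hedged sketch (hypothetical name): vorn_* computes a | !b lane-wise.
// With b all-ones the result is a; with b zero the result is all-ones.
fn vorn_like(a: [u8; 8], b: [u8; 8]) -> [u8; 8] {
    let mut r = [0u8; 8];
    for i in 0..8 {
        r[i] = a[i] | !b[i];
    }
    r
}

fn main() {
    assert_eq!(vorn_like([0x12; 8], [0xFF; 8]), [0x12; 8]);
    assert_eq!(vorn_like([0x12; 8], [0x00; 8]), [0xFF; 8]);
    println!("ok");
}
```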
- vpadal_s8⚠neonSigned Add and Accumulate Long Pairwise.
- vpadal_s16⚠neonSigned Add and Accumulate Long Pairwise.
- vpadal_s32⚠neonSigned Add and Accumulate Long Pairwise.
- vpadal_u8⚠neonUnsigned Add and Accumulate Long Pairwise.
- vpadal_u16⚠neonUnsigned Add and Accumulate Long Pairwise.
- vpadal_u32⚠neonUnsigned Add and Accumulate Long Pairwise.
- vpadalq_s8⚠neonSigned Add and Accumulate Long Pairwise.
- vpadalq_s16⚠neonSigned Add and Accumulate Long Pairwise.
- vpadalq_s32⚠neonSigned Add and Accumulate Long Pairwise.
- vpadalq_u8⚠neonUnsigned Add and Accumulate Long Pairwise.
- vpadalq_u16⚠neonUnsigned Add and Accumulate Long Pairwise.
- vpadalq_u32⚠neonUnsigned Add and Accumulate Long Pairwise.
- vpadd_f32⚠neonFloating-point add pairwise
- vpadd_s8⚠neonAdd pairwise.
- vpadd_s16⚠neonAdd pairwise.
- vpadd_s32⚠neonAdd pairwise.
- vpadd_u8⚠neonAdd pairwise.
- vpadd_u16⚠neonAdd pairwise.
- vpadd_u32⚠neonAdd pairwise.
- vpaddl_s8⚠neonSigned Add Long Pairwise.
- vpaddl_s16⚠neonSigned Add Long Pairwise.
- vpaddl_s32⚠neonSigned Add Long Pairwise.
- vpaddl_u8⚠neonUnsigned Add Long Pairwise.
- vpaddl_u16⚠neonUnsigned Add Long Pairwise.
- vpaddl_u32⚠neonUnsigned Add Long Pairwise.
- vpaddlq_s8⚠neonSigned Add Long Pairwise.
- vpaddlq_s16⚠neonSigned Add Long Pairwise.
- vpaddlq_s32⚠neonSigned Add Long Pairwise.
- vpaddlq_u8⚠neonUnsigned Add Long Pairwise.
- vpaddlq_u16⚠neonUnsigned Add Long Pairwise.
- vpaddlq_u32⚠neonUnsigned Add Long Pairwise.
- vpmax_f32⚠neonFolding maximum of adjacent pairs
- vpmax_s8⚠neonFolding maximum of adjacent pairs
- vpmax_s16⚠neonFolding maximum of adjacent pairs
- vpmax_s32⚠neonFolding maximum of adjacent pairs
- vpmax_u8⚠neonFolding maximum of adjacent pairs
- vpmax_u16⚠neonFolding maximum of adjacent pairs
- vpmax_u32⚠neonFolding maximum of adjacent pairs
- vpmin_f32⚠neonFolding minimum of adjacent pairs
- vpmin_s8⚠neonFolding minimum of adjacent pairs
- vpmin_s16⚠neonFolding minimum of adjacent pairs
- vpmin_s32⚠neonFolding minimum of adjacent pairs
- vpmin_u8⚠neonFolding minimum of adjacent pairs
- vpmin_u16⚠neonFolding minimum of adjacent pairs
- vpmin_u32⚠neonFolding minimum of adjacent pairs
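The pairwise ("folding") operations above combine adjacent lanes rather than corresponding lanes of two vectors: the low half of the result folds `a`, the high half folds `b`. A portable sketch of `vpadd_*` under that layout (hypothetical name; `vpmax_*`/`vpmin_*` replace the sum with the pair's max/min, and `vpaddl_*`/`vpadal_*` widen the pair sums):

```rust
// Hedged sketch (hypothetical name): vpadd_* sums adjacent pairs; result
// lanes 0..2 come from a, lanes 2..4 from b. Integer lanes wrap.
fn vpadd_like(a: [i16; 4], b: [i16; 4]) -> [i16; 4] {
    [
        a[0].wrapping_add(a[1]),
        a[2].wrapping_add(a[3]),
        b[0].wrapping_add(b[1]),
        b[2].wrapping_add(b[3]),
    ]
}

fn main() {
    assert_eq!(vpadd_like([1, 2, 3, 4], [10, 20, 30, 40]), [3, 7, 30, 70]);
    println!("ok");
}
```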
- vqabs_s8⚠neonSigned saturating Absolute value
- vqabs_s16⚠neonSigned saturating Absolute value
- vqabs_s32⚠neonSigned saturating Absolute value
- vqabsq_s8⚠neonSigned saturating Absolute value
- vqabsq_s16⚠neonSigned saturating Absolute value
- vqabsq_s32⚠neonSigned saturating Absolute value
- vqadd_s8⚠neonSaturating add
- vqadd_s16⚠neonSaturating add
- vqadd_s32⚠neonSaturating add
- vqadd_s64⚠neonSaturating add
- vqadd_u8⚠neonSaturating add
- vqadd_u16⚠neonSaturating add
- vqadd_u32⚠neonSaturating add
- vqadd_u64⚠neonSaturating add
- vqaddq_s8⚠neonSaturating add
- vqaddq_s16⚠neonSaturating add
- vqaddq_s32⚠neonSaturating add
- vqaddq_s64⚠neonSaturating add
- vqaddq_u8⚠neonSaturating add
- vqaddq_u16⚠neonSaturating add
- vqaddq_u32⚠neonSaturating add
- vqaddq_u64⚠neonSaturating add
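The `q` prefix in `vqadd_*` (and throughout this family) means saturating: results clamp to the lane type's range instead of wrapping. Rust's scalar saturating arithmetic illustrates the lane-wise behavior directly (hypothetical name):

```rust
// Hedged sketch (hypothetical name): vqadd_* clamps each lane sum to the
// lane type's range; vqsub_* does the same with saturating_sub.
fn vqadd_like(a: [i8; 8], b: [i8; 8]) -> [i8; 8] {
    let mut r = [0i8; 8];
    for i in 0..8 {
        r[i] = a[i].saturating_add(b[i]);
    }
    r
}

fn main() {
    assert_eq!(vqadd_like([120; 8], [10; 8]), [127; 8]);   // clamps high
    assert_eq!(vqadd_like([-120; 8], [-10; 8]), [-128; 8]); // clamps low
    println!("ok");
}
```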
- vqdmlal_lane_s16⚠neonVector widening saturating doubling multiply accumulate with scalar
- vqdmlal_lane_s32⚠neonVector widening saturating doubling multiply accumulate with scalar
- vqdmlal_n_s16⚠neonVector widening saturating doubling multiply accumulate with scalar
- vqdmlal_n_s32⚠neonVector widening saturating doubling multiply accumulate with scalar
- vqdmlal_s16⚠neonSigned saturating doubling multiply-add long
- vqdmlal_s32⚠neonSigned saturating doubling multiply-add long
- vqdmlsl_lane_s16⚠neonVector widening saturating doubling multiply subtract with scalar
- vqdmlsl_lane_s32⚠neonVector widening saturating doubling multiply subtract with scalar
- vqdmlsl_n_s16⚠neonVector widening saturating doubling multiply subtract with scalar
- vqdmlsl_n_s32⚠neonVector widening saturating doubling multiply subtract with scalar
- vqdmlsl_s16⚠neonSigned saturating doubling multiply-subtract long
- vqdmlsl_s32⚠neonSigned saturating doubling multiply-subtract long
- vqdmulh_laneq_s16⚠neonVector saturating doubling multiply high by scalar
- vqdmulh_laneq_s32⚠neonVector saturating doubling multiply high by scalar
- vqdmulh_n_s16⚠neonVector saturating doubling multiply high with scalar
- vqdmulh_n_s32⚠neonVector saturating doubling multiply high with scalar
- vqdmulh_s16⚠neonSigned saturating doubling multiply returning high half
- vqdmulh_s32⚠neonSigned saturating doubling multiply returning high half
- vqdmulhq_laneq_s16⚠neonVector saturating doubling multiply high by scalar
- vqdmulhq_laneq_s32⚠neonVector saturating doubling multiply high by scalar
- vqdmulhq_n_s16⚠neonVector saturating doubling multiply high with scalar
- vqdmulhq_n_s32⚠neonVector saturating doubling multiply high with scalar
- vqdmulhq_s16⚠neonSigned saturating doubling multiply returning high half
- vqdmulhq_s32⚠neonSigned saturating doubling multiply returning high half
- vqdmull_lane_s16⚠neonVector saturating doubling long multiply by scalar
- vqdmull_lane_s32⚠neonVector saturating doubling long multiply by scalar
- vqdmull_n_s16⚠neonVector saturating doubling long multiply with scalar
- vqdmull_n_s32⚠neonVector saturating doubling long multiply with scalar
- vqdmull_s16⚠neonSigned saturating doubling multiply long
- vqdmull_s32⚠neonSigned saturating doubling multiply long
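"Saturating doubling multiply returning high half" (`vqdmulh_*`) is Q15/Q31 fixed-point multiplication: double the product, keep the high half, saturate. A one-lane sketch under that reading (hypothetical name; `vqrdmulh_*` additionally adds a rounding constant of `1 << 15` before the shift):

```rust
// Hedged sketch (hypothetical name): vqdmulh-style Q15 multiply on one lane.
// The only case that can overflow is i16::MIN * i16::MIN, which saturates.
fn vqdmulh_like(a: i16, b: i16) -> i16 {
    let p = 2 * (a as i64) * (b as i64);
    (p >> 16).clamp(i16::MIN as i64, i16::MAX as i64) as i16
}

fn main() {
    // 0.5 * 0.5 = 0.25 in Q15: 16384 * 16384 -> 8192
    assert_eq!(vqdmulh_like(16384, 16384), 8192);
    // -1.0 * -1.0 saturates to just under 1.0
    assert_eq!(vqdmulh_like(i16::MIN, i16::MIN), i16::MAX);
    println!("ok");
}
```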
- vqmovn_s16⚠neonSigned saturating extract narrow
- vqmovn_s32⚠neonSigned saturating extract narrow
- vqmovn_s64⚠neonSigned saturating extract narrow
- vqmovn_u16⚠neonUnsigned saturating extract narrow
- vqmovn_u32⚠neonUnsigned saturating extract narrow
- vqmovn_u64⚠neonUnsigned saturating extract narrow
- vqmovun_s16⚠neonSigned saturating extract unsigned narrow
- vqmovun_s32⚠neonSigned saturating extract unsigned narrow
- vqmovun_s64⚠neonSigned saturating extract unsigned narrow
- vqneg_s8⚠neonSigned saturating negate
- vqneg_s16⚠neonSigned saturating negate
- vqneg_s32⚠neonSigned saturating negate
- vqnegq_s8⚠neonSigned saturating negate
- vqnegq_s16⚠neonSigned saturating negate
- vqnegq_s32⚠neonSigned saturating negate
- vqrdmulh_lane_s16⚠neonVector rounding saturating doubling multiply high by scalar
- vqrdmulh_lane_s32⚠neonVector rounding saturating doubling multiply high by scalar
- vqrdmulh_laneq_s16⚠neonVector rounding saturating doubling multiply high by scalar
- vqrdmulh_laneq_s32⚠neonVector rounding saturating doubling multiply high by scalar
- vqrdmulh_n_s16⚠neonVector saturating rounding doubling multiply high with scalar
- vqrdmulh_n_s32⚠neonVector saturating rounding doubling multiply high with scalar
- vqrdmulh_s16⚠neonSigned saturating rounding doubling multiply returning high half
- vqrdmulh_s32⚠neonSigned saturating rounding doubling multiply returning high half
- vqrdmulhq_lane_s16⚠neonVector rounding saturating doubling multiply high by scalar
- vqrdmulhq_lane_s32⚠neonVector rounding saturating doubling multiply high by scalar
- vqrdmulhq_laneq_s16⚠neonVector rounding saturating doubling multiply high by scalar
- vqrdmulhq_laneq_s32⚠neonVector rounding saturating doubling multiply high by scalar
- vqrdmulhq_n_s16⚠neonVector saturating rounding doubling multiply high with scalar
- vqrdmulhq_n_s32⚠neonVector saturating rounding doubling multiply high with scalar
- vqrdmulhq_s16⚠neonSigned saturating rounding doubling multiply returning high half
- vqrdmulhq_s32⚠neonSigned saturating rounding doubling multiply returning high half
- vqrshl_s8⚠neonSigned saturating rounding shift left
- vqrshl_s16⚠neonSigned saturating rounding shift left
- vqrshl_s32⚠neonSigned saturating rounding shift left
- vqrshl_s64⚠neonSigned saturating rounding shift left
- vqrshl_u8⚠neonUnsigned saturating rounding shift left
- vqrshl_u16⚠neonUnsigned saturating rounding shift left
- vqrshl_u32⚠neonUnsigned saturating rounding shift left
- vqrshl_u64⚠neonUnsigned saturating rounding shift left
- vqrshlq_s8⚠neonSigned saturating rounding shift left
- vqrshlq_s16⚠neonSigned saturating rounding shift left
- vqrshlq_s32⚠neonSigned saturating rounding shift left
- vqrshlq_s64⚠neonSigned saturating rounding shift left
- vqrshlq_u8⚠neonUnsigned saturating rounding shift left
- vqrshlq_u16⚠neonUnsigned saturating rounding shift left
- vqrshlq_u32⚠neonUnsigned saturating rounding shift left
- vqrshlq_u64⚠neonUnsigned saturating rounding shift left
- vqrshrn_n_s16⚠neonSigned saturating rounded shift right narrow
- vqrshrn_n_s32⚠neonSigned saturating rounded shift right narrow
- vqrshrn_n_s64⚠neonSigned saturating rounded shift right narrow
- vqrshrn_n_u16⚠neonUnsigned saturating rounded shift right narrow
- vqrshrn_n_u32⚠neonUnsigned saturating rounded shift right narrow
- vqrshrn_n_u64⚠neonUnsigned saturating rounded shift right narrow
- vqrshrun_n_s16⚠neonSigned saturating rounded shift right unsigned narrow
- vqrshrun_n_s32⚠neonSigned saturating rounded shift right unsigned narrow
- vqrshrun_n_s64⚠neonSigned saturating rounded shift right unsigned narrow
- vqshl_n_s8⚠neonSigned saturating shift left
- vqshl_n_s16⚠neonSigned saturating shift left
- vqshl_n_s32⚠neonSigned saturating shift left
- vqshl_n_s64⚠neonSigned saturating shift left
- vqshl_n_u8⚠neonUnsigned saturating shift left
- vqshl_n_u16⚠neonUnsigned saturating shift left
- vqshl_n_u32⚠neonUnsigned saturating shift left
- vqshl_n_u64⚠neonUnsigned saturating shift left
- vqshl_s8⚠neonSigned saturating shift left
- vqshl_s16⚠neonSigned saturating shift left
- vqshl_s32⚠neonSigned saturating shift left
- vqshl_s64⚠neonSigned saturating shift left
- vqshl_u8⚠neonUnsigned saturating shift left
- vqshl_u16⚠neonUnsigned saturating shift left
- vqshl_u32⚠neonUnsigned saturating shift left
- vqshl_u64⚠neonUnsigned saturating shift left
- vqshlq_n_s8⚠neonSigned saturating shift left
- vqshlq_n_s16⚠neonSigned saturating shift left
- vqshlq_n_s32⚠neonSigned saturating shift left
- vqshlq_n_s64⚠neonSigned saturating shift left
- vqshlq_n_u8⚠neonUnsigned saturating shift left
- vqshlq_n_u16⚠neonUnsigned saturating shift left
- vqshlq_n_u32⚠neonUnsigned saturating shift left
- vqshlq_n_u64⚠neonUnsigned saturating shift left
- vqshlq_s8⚠neonSigned saturating shift left
- vqshlq_s16⚠neonSigned saturating shift left
- vqshlq_s32⚠neonSigned saturating shift left
- vqshlq_s64⚠neonSigned saturating shift left
- vqshlq_u8⚠neonUnsigned saturating shift left
- vqshlq_u16⚠neonUnsigned saturating shift left
- vqshlq_u32⚠neonUnsigned saturating shift left
- vqshlq_u64⚠neonUnsigned saturating shift left
- vqshlu_n_s8⚠neonSigned saturating shift left unsigned
- vqshlu_n_s16⚠neonSigned saturating shift left unsigned
- vqshlu_n_s32⚠neonSigned saturating shift left unsigned
- vqshlu_n_s64⚠neonSigned saturating shift left unsigned
- vqshluq_n_s8⚠neonSigned saturating shift left unsigned
- vqshluq_n_s16⚠neonSigned saturating shift left unsigned
- vqshluq_n_s32⚠neonSigned saturating shift left unsigned
- vqshluq_n_s64⚠neonSigned saturating shift left unsigned
- vqshrn_n_s16⚠neonSigned saturating shift right narrow
- vqshrn_n_s32⚠neonSigned saturating shift right narrow
- vqshrn_n_s64⚠neonSigned saturating shift right narrow
- vqshrn_n_u16⚠neonUnsigned saturating shift right narrow
- vqshrn_n_u32⚠neonUnsigned saturating shift right narrow
- vqshrn_n_u64⚠neonUnsigned saturating shift right narrow
- vqshrun_n_s16⚠neonSigned saturating shift right unsigned narrow
- vqshrun_n_s32⚠neonSigned saturating shift right unsigned narrow
- vqshrun_n_s64⚠neonSigned saturating shift right unsigned narrow
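The "shift right unsigned narrow" forms above (`vqshrun_n_*`) take signed wide lanes, arithmetic-shift them right by an immediate, and saturate into the unsigned narrow range. A portable sketch (hypothetical name):

```rust
// Hedged sketch (hypothetical name): vqshrun_n-style narrow. Each signed
// i16 lane is arithmetic-shifted right by n, then clamped to 0..=255.
fn vqshrun_n_like(a: [i16; 4], n: u32) -> [u8; 4] {
    a.map(|x| (x >> n).clamp(0, 255) as u8)
}

fn main() {
    // negative lanes clamp to 0; large lanes clamp to 255
    assert_eq!(vqshrun_n_like([-100, 0, 512, 4096], 2), [0, 0, 128, 255]);
    println!("ok");
}
```

The plain `vqshrn_n_*` variants saturate into the signed narrow range instead, and the `vqrshrn`/`vqrshrun` forms round before narrowing.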
- vqsub_s8⚠neonSaturating subtract
- vqsub_s16⚠neonSaturating subtract
- vqsub_s32⚠neonSaturating subtract
- vqsub_s64⚠neonSaturating subtract
- vqsub_u8⚠neonSaturating subtract
- vqsub_u16⚠neonSaturating subtract
- vqsub_u32⚠neonSaturating subtract
- vqsub_u64⚠neonSaturating subtract
- vqsubq_s8⚠neonSaturating subtract
- vqsubq_s16⚠neonSaturating subtract
- vqsubq_s32⚠neonSaturating subtract
- vqsubq_s64⚠neonSaturating subtract
- vqsubq_u8⚠neonSaturating subtract
- vqsubq_u16⚠neonSaturating subtract
- vqsubq_u32⚠neonSaturating subtract
- vqsubq_u64⚠neonSaturating subtract
- vraddhn_high_s16⚠neonRounding Add returning High Narrow (high half).
- vraddhn_high_s32⚠neonRounding Add returning High Narrow (high half).
- vraddhn_high_s64⚠neonRounding Add returning High Narrow (high half).
- vraddhn_high_u16⚠neonRounding Add returning High Narrow (high half).
- vraddhn_high_u32⚠neonRounding Add returning High Narrow (high half).
- vraddhn_high_u64⚠neonRounding Add returning High Narrow (high half).
- vraddhn_s16⚠neonRounding Add returning High Narrow.
- vraddhn_s32⚠neonRounding Add returning High Narrow.
- vraddhn_s64⚠neonRounding Add returning High Narrow.
- vraddhn_u16⚠neonRounding Add returning High Narrow.
- vraddhn_u32⚠neonRounding Add returning High Narrow.
- vraddhn_u64⚠neonRounding Add returning High Narrow.
- vrecpe_f32⚠neonReciprocal estimate.
- vrecpe_u32⚠neonUnsigned reciprocal estimate
- vrecpeq_f32⚠neonReciprocal estimate.
- vrecpeq_u32⚠neonUnsigned reciprocal estimate
- vrecps_f32⚠neonFloating-point reciprocal step
- vrecpsq_f32⚠neonFloating-point reciprocal step
- vreinterpret_f32_p8⚠neonVector reinterpret cast operation
- vreinterpret_f32_p16⚠neonVector reinterpret cast operation
- vreinterpret_f32_s8⚠neonVector reinterpret cast operation
- vreinterpret_f32_s16⚠neonVector reinterpret cast operation
- vreinterpret_f32_s32⚠neonVector reinterpret cast operation
- vreinterpret_f32_s64⚠neonVector reinterpret cast operation
- vreinterpret_f32_u8⚠neonVector reinterpret cast operation
- vreinterpret_f32_u16⚠neonVector reinterpret cast operation
- vreinterpret_f32_u32⚠neonVector reinterpret cast operation
- vreinterpret_f32_u64⚠neonVector reinterpret cast operation
- vreinterpret_p8_f32⚠neonVector reinterpret cast operation
- vreinterpret_p8_p16⚠neonVector reinterpret cast operation
- vreinterpret_p8_p64⚠neon,aesVector reinterpret cast operation
- vreinterpret_p8_s8⚠neonVector reinterpret cast operation
- vreinterpret_p8_s16⚠neonVector reinterpret cast operation
- vreinterpret_p8_s32⚠neonVector reinterpret cast operation
- vreinterpret_p8_s64⚠neonVector reinterpret cast operation
- vreinterpret_p8_u8⚠neonVector reinterpret cast operation
- vreinterpret_p8_u16⚠neonVector reinterpret cast operation
- vreinterpret_p8_u32⚠neonVector reinterpret cast operation
- vreinterpret_p8_u64⚠neonVector reinterpret cast operation
- vreinterpret_p16_f32⚠neonVector reinterpret cast operation
- vreinterpret_p16_p8⚠neonVector reinterpret cast operation
- vreinterpret_p16_p64⚠neon,aesVector reinterpret cast operation
- vreinterpret_p16_s8⚠neonVector reinterpret cast operation
- vreinterpret_p16_s16⚠neonVector reinterpret cast operation
- vreinterpret_p16_s32⚠neonVector reinterpret cast operation
- vreinterpret_p16_s64⚠neonVector reinterpret cast operation
- vreinterpret_p16_u8⚠neonVector reinterpret cast operation
- vreinterpret_p16_u16⚠neonVector reinterpret cast operation
- vreinterpret_p16_u32⚠neonVector reinterpret cast operation
- vreinterpret_p16_u64⚠neonVector reinterpret cast operation
- vreinterpret_p64_p8⚠neon,aesVector reinterpret cast operation
- vreinterpret_p64_p16⚠neon,aesVector reinterpret cast operation
- vreinterpret_p64_s8⚠neon,aesVector reinterpret cast operation
- vreinterpret_p64_s16⚠neon,aesVector reinterpret cast operation
- vreinterpret_p64_s32⚠neon,aesVector reinterpret cast operation
- vreinterpret_p64_u8⚠neon,aesVector reinterpret cast operation
- vreinterpret_p64_u16⚠neon,aesVector reinterpret cast operation
- vreinterpret_p64_u32⚠neon,aesVector reinterpret cast operation
- vreinterpret_s8_f32⚠neonVector reinterpret cast operation
- vreinterpret_s8_p8⚠neonVector reinterpret cast operation
- vreinterpret_s8_p16⚠neonVector reinterpret cast operation
- vreinterpret_s8_p64⚠neon,aesVector reinterpret cast operation
- vreinterpret_s8_s16⚠neonVector reinterpret cast operation
- vreinterpret_s8_s32⚠neonVector reinterpret cast operation
- vreinterpret_s8_s64⚠neonVector reinterpret cast operation
- vreinterpret_s8_u8⚠neonVector reinterpret cast operation
- vreinterpret_s8_u16⚠neonVector reinterpret cast operation
- vreinterpret_s8_u32⚠neonVector reinterpret cast operation
- vreinterpret_s8_u64⚠neonVector reinterpret cast operation
- vreinterpret_s16_f32⚠neonVector reinterpret cast operation
- vreinterpret_s16_p8⚠neonVector reinterpret cast operation
- vreinterpret_s16_p16⚠neonVector reinterpret cast operation
- vreinterpret_s16_p64⚠neon,aesVector reinterpret cast operation
- vreinterpret_s16_s8⚠neonVector reinterpret cast operation
- vreinterpret_s16_s32⚠neonVector reinterpret cast operation
- vreinterpret_s16_s64⚠neonVector reinterpret cast operation
- vreinterpret_s16_u8⚠neonVector reinterpret cast operation
- vreinterpret_s16_u16⚠neonVector reinterpret cast operation
- vreinterpret_s16_u32⚠neonVector reinterpret cast operation
- vreinterpret_s16_u64⚠neonVector reinterpret cast operation
- vreinterpret_s32_f32⚠neonVector reinterpret cast operation
- vreinterpret_s32_p8⚠neonVector reinterpret cast operation
- vreinterpret_s32_p16⚠neonVector reinterpret cast operation
- vreinterpret_s32_p64⚠neon,aesVector reinterpret cast operation
- vreinterpret_s32_s8⚠neonVector reinterpret cast operation
- vreinterpret_s32_s16⚠neonVector reinterpret cast operation
- vreinterpret_s32_s64⚠neonVector reinterpret cast operation
- vreinterpret_s32_u8⚠neonVector reinterpret cast operation
- vreinterpret_s32_u16⚠neonVector reinterpret cast operation
- vreinterpret_s32_u32⚠neonVector reinterpret cast operation
- vreinterpret_s32_u64⚠neonVector reinterpret cast operation
- vreinterpret_s64_f32⚠neonVector reinterpret cast operation
- vreinterpret_s64_p8⚠neonVector reinterpret cast operation
- vreinterpret_s64_p16⚠neonVector reinterpret cast operation
- vreinterpret_s64_s8⚠neonVector reinterpret cast operation
- vreinterpret_s64_s16⚠neonVector reinterpret cast operation
- vreinterpret_s64_s32⚠neonVector reinterpret cast operation
- vreinterpret_s64_u8⚠neonVector reinterpret cast operation
- vreinterpret_s64_u16⚠neonVector reinterpret cast operation
- vreinterpret_s64_u32⚠neonVector reinterpret cast operation
- vreinterpret_s64_u64⚠neonVector reinterpret cast operation
- vreinterpret_u8_f32⚠neonVector reinterpret cast operation
- vreinterpret_u8_p8⚠neonVector reinterpret cast operation
- vreinterpret_u8_p16⚠neonVector reinterpret cast operation
- vreinterpret_u8_p64⚠neon,aesVector reinterpret cast operation
- vreinterpret_u8_s8⚠neonVector reinterpret cast operation
- vreinterpret_u8_s16⚠neonVector reinterpret cast operation
- vreinterpret_u8_s32⚠neonVector reinterpret cast operation
- vreinterpret_u8_s64⚠neonVector reinterpret cast operation
- vreinterpret_u8_u16⚠neonVector reinterpret cast operation
- vreinterpret_u8_u32⚠neonVector reinterpret cast operation
- vreinterpret_u8_u64⚠neonVector reinterpret cast operation
- vreinterpret_u16_f32⚠neonVector reinterpret cast operation
- vreinterpret_u16_p8⚠neonVector reinterpret cast operation
- vreinterpret_u16_p16⚠neonVector reinterpret cast operation
- vreinterpret_u16_p64⚠neon,aesVector reinterpret cast operation
- vreinterpret_u16_s8⚠neonVector reinterpret cast operation
- vreinterpret_u16_s16⚠neonVector reinterpret cast operation
- vreinterpret_u16_s32⚠neonVector reinterpret cast operation
- vreinterpret_u16_s64⚠neonVector reinterpret cast operation
- vreinterpret_u16_u8⚠neonVector reinterpret cast operation
- vreinterpret_u16_u32⚠neonVector reinterpret cast operation
- vreinterpret_u16_u64⚠neonVector reinterpret cast operation
- vreinterpret_u32_f32⚠neonVector reinterpret cast operation
- vreinterpret_u32_p8⚠neonVector reinterpret cast operation
- vreinterpret_u32_p16⚠neonVector reinterpret cast operation
- vreinterpret_u32_p64⚠neon,aesVector reinterpret cast operation
- vreinterpret_u32_s8⚠neonVector reinterpret cast operation
- vreinterpret_u32_s16⚠neonVector reinterpret cast operation
- vreinterpret_u32_s32⚠neonVector reinterpret cast operation
- vreinterpret_u32_s64⚠neonVector reinterpret cast operation
- vreinterpret_u32_u8⚠neonVector reinterpret cast operation
- vreinterpret_u32_u16⚠neonVector reinterpret cast operation
- vreinterpret_u32_u64⚠neonVector reinterpret cast operation
- vreinterpret_u64_f32⚠neonVector reinterpret cast operation
- vreinterpret_u64_p8⚠neonVector reinterpret cast operation
- vreinterpret_u64_p16⚠neonVector reinterpret cast operation
- vreinterpret_u64_s8⚠neonVector reinterpret cast operation
- vreinterpret_u64_s16⚠neonVector reinterpret cast operation
- vreinterpret_u64_s32⚠neonVector reinterpret cast operation
- vreinterpret_u64_s64⚠neonVector reinterpret cast operation
- vreinterpret_u64_u8⚠neonVector reinterpret cast operation
- vreinterpret_u64_u16⚠neonVector reinterpret cast operation
- vreinterpret_u64_u32⚠neonVector reinterpret cast operation
- vreinterpretq_f32_p8⚠neonVector reinterpret cast operation
- vreinterpretq_f32_p16⚠neonVector reinterpret cast operation
- Vector reinterpret cast operation
- vreinterpretq_f32_s8⚠neonVector reinterpret cast operation
- vreinterpretq_f32_s16⚠neonVector reinterpret cast operation
- vreinterpretq_f32_s32⚠neonVector reinterpret cast operation
- vreinterpretq_f32_s64⚠neonVector reinterpret cast operation
- vreinterpretq_f32_u8⚠neonVector reinterpret cast operation
- vreinterpretq_f32_u16⚠neonVector reinterpret cast operation
- vreinterpretq_f32_u32⚠neonVector reinterpret cast operation
- vreinterpretq_f32_u64⚠neonVector reinterpret cast operation
- vreinterpretq_p8_f32⚠neonVector reinterpret cast operation
- vreinterpretq_p8_p16⚠neonVector reinterpret cast operation
- vreinterpretq_p8_p64⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p8_p128⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p8_s8⚠neonVector reinterpret cast operation
- vreinterpretq_p8_s16⚠neonVector reinterpret cast operation
- vreinterpretq_p8_s32⚠neonVector reinterpret cast operation
- vreinterpretq_p8_s64⚠neonVector reinterpret cast operation
- vreinterpretq_p8_u8⚠neonVector reinterpret cast operation
- vreinterpretq_p8_u16⚠neonVector reinterpret cast operation
- vreinterpretq_p8_u32⚠neonVector reinterpret cast operation
- vreinterpretq_p8_u64⚠neonVector reinterpret cast operation
- vreinterpretq_p16_f32⚠neonVector reinterpret cast operation
- vreinterpretq_p16_p8⚠neonVector reinterpret cast operation
- vreinterpretq_p16_p64⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p16_p128⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p16_s8⚠neonVector reinterpret cast operation
- vreinterpretq_p16_s16⚠neonVector reinterpret cast operation
- vreinterpretq_p16_s32⚠neonVector reinterpret cast operation
- vreinterpretq_p16_s64⚠neonVector reinterpret cast operation
- vreinterpretq_p16_u8⚠neonVector reinterpret cast operation
- vreinterpretq_p16_u16⚠neonVector reinterpret cast operation
- vreinterpretq_p16_u32⚠neonVector reinterpret cast operation
- vreinterpretq_p16_u64⚠neonVector reinterpret cast operation
- vreinterpretq_p64_p8⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p64_p16⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p64_p128⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p64_s8⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p64_s16⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p64_s32⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p64_u8⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p64_u16⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p64_u32⚠neon,aesVector reinterpret cast operation
- Vector reinterpret cast operation
- vreinterpretq_p128_p8⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p128_p16⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p128_p64⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p128_s8⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p128_s16⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p128_s32⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p128_s64⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p128_u8⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p128_u16⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p128_u32⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p128_u64⚠neon,aesVector reinterpret cast operation
- vreinterpretq_s8_f32⚠neonVector reinterpret cast operation
- vreinterpretq_s8_p8⚠neonVector reinterpret cast operation
- vreinterpretq_s8_p16⚠neonVector reinterpret cast operation
- vreinterpretq_s8_p64⚠neon,aesVector reinterpret cast operation
- vreinterpretq_s8_p128⚠neon,aesVector reinterpret cast operation
- vreinterpretq_s8_s16⚠neonVector reinterpret cast operation
- vreinterpretq_s8_s32⚠neonVector reinterpret cast operation
- vreinterpretq_s8_s64⚠neonVector reinterpret cast operation
- vreinterpretq_s8_u8⚠neonVector reinterpret cast operation
- vreinterpretq_s8_u16⚠neonVector reinterpret cast operation
- vreinterpretq_s8_u32⚠neonVector reinterpret cast operation
- vreinterpretq_s8_u64⚠neonVector reinterpret cast operation
- vreinterpretq_s16_f32⚠neonVector reinterpret cast operation
- vreinterpretq_s16_p8⚠neonVector reinterpret cast operation
- vreinterpretq_s16_p16⚠neonVector reinterpret cast operation
- vreinterpretq_s16_p64⚠neon,aesVector reinterpret cast operation
- vreinterpretq_s16_p128⚠neon,aesVector reinterpret cast operation
- vreinterpretq_s16_s8⚠neonVector reinterpret cast operation
- vreinterpretq_s16_s32⚠neonVector reinterpret cast operation
- vreinterpretq_s16_s64⚠neonVector reinterpret cast operation
- vreinterpretq_s16_u8⚠neonVector reinterpret cast operation
- vreinterpretq_s16_u16⚠neonVector reinterpret cast operation
- vreinterpretq_s16_u32⚠neonVector reinterpret cast operation
- vreinterpretq_s16_u64⚠neonVector reinterpret cast operation
- vreinterpretq_s32_f32⚠neonVector reinterpret cast operation
- vreinterpretq_s32_p8⚠neonVector reinterpret cast operation
- vreinterpretq_s32_p16⚠neonVector reinterpret cast operation
- vreinterpretq_s32_p64⚠neon,aesVector reinterpret cast operation
- vreinterpretq_s32_p128⚠neon,aesVector reinterpret cast operation
- vreinterpretq_s32_s8⚠neonVector reinterpret cast operation
- vreinterpretq_s32_s16⚠neonVector reinterpret cast operation
- vreinterpretq_s32_s64⚠neonVector reinterpret cast operation
- vreinterpretq_s32_u8⚠neonVector reinterpret cast operation
- vreinterpretq_s32_u16⚠neonVector reinterpret cast operation
- vreinterpretq_s32_u32⚠neonVector reinterpret cast operation
- vreinterpretq_s32_u64⚠neonVector reinterpret cast operation
- vreinterpretq_s64_f32⚠neonVector reinterpret cast operation
- vreinterpretq_s64_p8⚠neonVector reinterpret cast operation
- vreinterpretq_s64_p16⚠neonVector reinterpret cast operation
- vreinterpretq_s64_p128⚠neon,aesVector reinterpret cast operation
- vreinterpretq_s64_s8⚠neonVector reinterpret cast operation
- vreinterpretq_s64_s16⚠neonVector reinterpret cast operation
- vreinterpretq_s64_s32⚠neonVector reinterpret cast operation
- vreinterpretq_s64_u8⚠neonVector reinterpret cast operation
- vreinterpretq_s64_u16⚠neonVector reinterpret cast operation
- vreinterpretq_s64_u32⚠neonVector reinterpret cast operation
- vreinterpretq_s64_u64⚠neonVector reinterpret cast operation
- vreinterpretq_u8_f32⚠neonVector reinterpret cast operation
- vreinterpretq_u8_p8⚠neonVector reinterpret cast operation
- vreinterpretq_u8_p16⚠neonVector reinterpret cast operation
- vreinterpretq_u8_p64⚠neon,aesVector reinterpret cast operation
- vreinterpretq_u8_p128⚠neon,aesVector reinterpret cast operation
- vreinterpretq_u8_s8⚠neonVector reinterpret cast operation
- vreinterpretq_u8_s16⚠neonVector reinterpret cast operation
- vreinterpretq_u8_s32⚠neonVector reinterpret cast operation
- vreinterpretq_u8_s64⚠neonVector reinterpret cast operation
- vreinterpretq_u8_u16⚠neonVector reinterpret cast operation
- vreinterpretq_u8_u32⚠neonVector reinterpret cast operation
- vreinterpretq_u8_u64⚠neonVector reinterpret cast operation
- vreinterpretq_u16_f32⚠neonVector reinterpret cast operation
- vreinterpretq_u16_p8⚠neonVector reinterpret cast operation
- vreinterpretq_u16_p16⚠neonVector reinterpret cast operation
- vreinterpretq_u16_p64⚠neon,aesVector reinterpret cast operation
- vreinterpretq_u16_p128⚠neon,aesVector reinterpret cast operation
- vreinterpretq_u16_s8⚠neonVector reinterpret cast operation
- vreinterpretq_u16_s16⚠neonVector reinterpret cast operation
- vreinterpretq_u16_s32⚠neonVector reinterpret cast operation
- vreinterpretq_u16_s64⚠neonVector reinterpret cast operation
- vreinterpretq_u16_u8⚠neonVector reinterpret cast operation
- vreinterpretq_u16_u32⚠neonVector reinterpret cast operation
- vreinterpretq_u16_u64⚠neonVector reinterpret cast operation
- vreinterpretq_u32_f32⚠neonVector reinterpret cast operation
- vreinterpretq_u32_p8⚠neonVector reinterpret cast operation
- vreinterpretq_u32_p16⚠neonVector reinterpret cast operation
- vreinterpretq_u32_p64⚠neon,aesVector reinterpret cast operation
- vreinterpretq_u32_p128⚠neon,aesVector reinterpret cast operation
- vreinterpretq_u32_s8⚠neonVector reinterpret cast operation
- vreinterpretq_u32_s16⚠neonVector reinterpret cast operation
- vreinterpretq_u32_s32⚠neonVector reinterpret cast operation
- vreinterpretq_u32_s64⚠neonVector reinterpret cast operation
- vreinterpretq_u32_u8⚠neonVector reinterpret cast operation
- vreinterpretq_u32_u16⚠neonVector reinterpret cast operation
- vreinterpretq_u32_u64⚠neonVector reinterpret cast operation
- vreinterpretq_u64_f32⚠neonVector reinterpret cast operation
- vreinterpretq_u64_p8⚠neonVector reinterpret cast operation
- vreinterpretq_u64_p16⚠neonVector reinterpret cast operation
- vreinterpretq_u64_p128⚠neon,aesVector reinterpret cast operation
- vreinterpretq_u64_s8⚠neonVector reinterpret cast operation
- vreinterpretq_u64_s16⚠neonVector reinterpret cast operation
- vreinterpretq_u64_s32⚠neonVector reinterpret cast operation
- vreinterpretq_u64_s64⚠neonVector reinterpret cast operation
- vreinterpretq_u64_u8⚠neonVector reinterpret cast operation
- vreinterpretq_u64_u16⚠neonVector reinterpret cast operation
- vreinterpretq_u64_u32⚠neonVector reinterpret cast operation
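A reinterpret cast reuses the same register bits under a new element type; no value conversion happens. The single-lane `f32 <-> u32` case can be sketched in portable scalar Rust (the helper name is illustrative):

```rust
// Scalar model of one lane of vreinterpret_f32_u32: same bits, new type.
fn reinterpret_u32_as_f32(bits: u32) -> f32 {
    f32::from_bits(bits)
}

fn main() {
    // 0x3F800000 is the IEEE-754 single-precision encoding of 1.0.
    assert_eq!(reinterpret_u32_as_f32(0x3F80_0000), 1.0);
    // The round trip preserves the exact bit pattern.
    assert_eq!(reinterpret_u32_as_f32(0x4000_0000).to_bits(), 0x4000_0000);
}
```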
- vrev16_p8⚠neonReversing vector elements (swap endianness)
- vrev16_s8⚠neonReversing vector elements (swap endianness)
- vrev16_u8⚠neonReversing vector elements (swap endianness)
- vrev16q_p8⚠neonReversing vector elements (swap endianness)
- vrev16q_s8⚠neonReversing vector elements (swap endianness)
- vrev16q_u8⚠neonReversing vector elements (swap endianness)
- vrev32_p8⚠neonReversing vector elements (swap endianness)
- vrev32_p16⚠neonReversing vector elements (swap endianness)
- vrev32_s8⚠neonReversing vector elements (swap endianness)
- vrev32_s16⚠neonReversing vector elements (swap endianness)
- vrev32_u8⚠neonReversing vector elements (swap endianness)
- vrev32_u16⚠neonReversing vector elements (swap endianness)
- vrev32q_p8⚠neonReversing vector elements (swap endianness)
- vrev32q_p16⚠neonReversing vector elements (swap endianness)
- vrev32q_s8⚠neonReversing vector elements (swap endianness)
- vrev32q_s16⚠neonReversing vector elements (swap endianness)
- vrev32q_u8⚠neonReversing vector elements (swap endianness)
- vrev32q_u16⚠neonReversing vector elements (swap endianness)
- vrev64_f32⚠neonReversing vector elements (swap endianness)
- vrev64_p8⚠neonReversing vector elements (swap endianness)
- vrev64_p16⚠neonReversing vector elements (swap endianness)
- vrev64_s8⚠neonReversing vector elements (swap endianness)
- vrev64_s16⚠neonReversing vector elements (swap endianness)
- vrev64_s32⚠neonReversing vector elements (swap endianness)
- vrev64_u8⚠neonReversing vector elements (swap endianness)
- vrev64_u16⚠neonReversing vector elements (swap endianness)
- vrev64_u32⚠neonReversing vector elements (swap endianness)
- vrev64q_f32⚠neonReversing vector elements (swap endianness)
- vrev64q_p8⚠neonReversing vector elements (swap endianness)
- vrev64q_p16⚠neonReversing vector elements (swap endianness)
- vrev64q_s8⚠neonReversing vector elements (swap endianness)
- vrev64q_s16⚠neonReversing vector elements (swap endianness)
- vrev64q_s32⚠neonReversing vector elements (swap endianness)
- vrev64q_u8⚠neonReversing vector elements (swap endianness)
- vrev64q_u16⚠neonReversing vector elements (swap endianness)
- vrev64q_u32⚠neonReversing vector elements (swap endianness)
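The `vrev16`/`vrev32`/`vrev64` family reverses element order within each 16-, 32-, or 64-bit chunk of the vector, which is how byte order is swapped for endianness conversion. A scalar sketch of `vrev32_u8` (the helper name is illustrative):

```rust
// Scalar model of vrev32_u8: reverse byte order inside each 32-bit chunk.
fn rev32_u8(v: [u8; 8]) -> [u8; 8] {
    let mut out = v;
    for chunk in out.chunks_mut(4) {
        chunk.reverse();
    }
    out
}

fn main() {
    assert_eq!(rev32_u8([1, 2, 3, 4, 5, 6, 7, 8]),
               [4, 3, 2, 1, 8, 7, 6, 5]);
}
```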
- vrhadd_s8⚠neonRounding halving add
- vrhadd_s16⚠neonRounding halving add
- vrhadd_s32⚠neonRounding halving add
- vrhadd_u8⚠neonRounding halving add
- vrhadd_u16⚠neonRounding halving add
- vrhadd_u32⚠neonRounding halving add
- vrhaddq_s8⚠neonRounding halving add
- vrhaddq_s16⚠neonRounding halving add
- vrhaddq_s32⚠neonRounding halving add
- vrhaddq_u8⚠neonRounding halving add
- vrhaddq_u16⚠neonRounding halving add
- vrhaddq_u32⚠neonRounding halving add
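Rounding halving add computes `(a + b + 1) >> 1` per lane without the intermediate sum overflowing the element type. One way to see why that works, sketched per lane (the helper name is illustrative):

```rust
// Scalar model of vrhadd_u8: (a + b + 1) >> 1 with no widening needed.
// (a >> 1) + (b >> 1) recovers the sum's high bits; (a | b) & 1 supplies
// the rounded carry from the two low bits.
fn rhadd_u8(a: u8, b: u8) -> u8 {
    (a >> 1) + (b >> 1) + ((a | b) & 1)
}

fn main() {
    assert_eq!(rhadd_u8(1, 2), 2);       // (1 + 2 + 1) >> 1 = 2
    assert_eq!(rhadd_u8(255, 255), 255); // no overflow at the top of the range
}
```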
- vrndn_f32⚠neonFloating-point round to integral, to nearest with ties to even
- vrndnq_f32⚠neonFloating-point round to integral, to nearest with ties to even
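"To nearest with ties to even" is IEEE-754 `roundTiesToEven`: halfway cases go to the even neighbour rather than always away from zero. A scalar sketch of one lane (uses `f32::round_ties_even`, stable since Rust 1.77):

```rust
// Scalar model of vrndn_f32: round to nearest integral, ties to even.
fn rndn(x: f32) -> f32 {
    x.round_ties_even()
}

fn main() {
    assert_eq!(rndn(2.5), 2.0); // tie goes to the even neighbour, not 3.0
    assert_eq!(rndn(3.5), 4.0);
    assert_eq!(rndn(2.4), 2.0);
}
```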
- vrshl_s8⚠neonSigned rounding shift left
- vrshl_s16⚠neonSigned rounding shift left
- vrshl_s32⚠neonSigned rounding shift left
- vrshl_s64⚠neonSigned rounding shift left
- vrshl_u8⚠neonUnsigned rounding shift left
- vrshl_u16⚠neonUnsigned rounding shift left
- vrshl_u32⚠neonUnsigned rounding shift left
- vrshl_u64⚠neonUnsigned rounding shift left
- vrshlq_s8⚠neonSigned rounding shift left
- vrshlq_s16⚠neonSigned rounding shift left
- vrshlq_s32⚠neonSigned rounding shift left
- vrshlq_s64⚠neonSigned rounding shift left
- vrshlq_u8⚠neonUnsigned rounding shift left
- vrshlq_u16⚠neonUnsigned rounding shift left
- vrshlq_u32⚠neonUnsigned rounding shift left
- vrshlq_u64⚠neonUnsigned rounding shift left
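For the `vrshl` family the shift amount is a signed per-lane value: non-negative amounts shift left, negative amounts perform a *rounding* shift right. A scalar sketch of one lane (the helper name is illustrative):

```rust
// Scalar model of vrshl_s32: signed shift amount; negative means
// a rounding right shift (add half the discarded range, then shift).
fn rshl_s32(x: i32, shift: i32) -> i32 {
    if shift >= 0 {
        x << shift
    } else {
        let n = -shift;
        ((x as i64 + (1i64 << (n - 1))) >> n) as i32
    }
}

fn main() {
    assert_eq!(rshl_s32(3, 2), 12);  // plain left shift
    assert_eq!(rshl_s32(5, -1), 3);  // (5 + 1) >> 1: rounded, not truncated
    assert_eq!(rshl_s32(4, -1), 2);  // (4 + 1) >> 1 = 2
}
```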
- vrshr_n_s8⚠neonSigned rounding shift right
- vrshr_n_s16⚠neonSigned rounding shift right
- vrshr_n_s32⚠neonSigned rounding shift right
- vrshr_n_s64⚠neonSigned rounding shift right
- vrshr_n_u8⚠neonUnsigned rounding shift right
- vrshr_n_u16⚠neonUnsigned rounding shift right
- vrshr_n_u32⚠neonUnsigned rounding shift right
- vrshr_n_u64⚠neonUnsigned rounding shift right
- vrshrn_n_s16⚠neonRounding shift right narrow
- vrshrn_n_s32⚠neonRounding shift right narrow
- vrshrn_n_s64⚠neonRounding shift right narrow
- vrshrn_n_u16⚠neonRounding shift right narrow
- vrshrn_n_u32⚠neonRounding shift right narrow
- vrshrn_n_u64⚠neonRounding shift right narrow
- vrshrq_n_s8⚠neonSigned rounding shift right
- vrshrq_n_s16⚠neonSigned rounding shift right
- vrshrq_n_s32⚠neonSigned rounding shift right
- vrshrq_n_s64⚠neonSigned rounding shift right
- vrshrq_n_u8⚠neonUnsigned rounding shift right
- vrshrq_n_u16⚠neonUnsigned rounding shift right
- vrshrq_n_u32⚠neonUnsigned rounding shift right
- vrshrq_n_u64⚠neonUnsigned rounding shift right
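A rounding shift right by an immediate `N` adds `1 << (N - 1)` before shifting, so the result is rounded to nearest rather than truncated toward zero. A scalar sketch of one lane (the helper name is illustrative):

```rust
// Scalar model of vrshr_n_u32: add half the discarded range, then shift.
fn rshr_n_u32(x: u32, n: u32) -> u32 {
    ((x as u64 + (1u64 << (n - 1))) >> n) as u32
}

fn main() {
    assert_eq!(rshr_n_u32(7, 2), 2); // 7/4 = 1.75 rounds up to 2
    assert_eq!(rshr_n_u32(5, 2), 1); // 5/4 = 1.25 rounds down to 1
    assert_eq!(7 >> 2, 1);           // the plain (vshr) shift truncates
}
```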
- vrsqrte_f32⚠neonReciprocal square-root estimate.
- vrsqrte_u32⚠neonUnsigned reciprocal square root estimate
- vrsqrteq_f32⚠neonReciprocal square-root estimate.
- vrsqrteq_u32⚠neonUnsigned reciprocal square root estimate
- vrsqrts_f32⚠neonFloating-point reciprocal square root step
- vrsqrtsq_f32⚠neonFloating-point reciprocal square root step
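Like the reciprocal pair, `vrsqrte`/`vrsqrts` implement Newton-Raphson refinement of `1/sqrt(a)`: the step returns `(3 - a*b) / 2`. A scalar sketch of one lane (the loop and tolerance are illustrative):

```rust
// Scalar model of the reciprocal square-root step: (3 - a*b) / 2.
fn rsqrts(a: f32, b: f32) -> f32 {
    (3.0 - a * b) / 2.0
}

fn main() {
    let a = 4.0f32;
    let mut x = 0.4f32; // rough estimate of 1/sqrt(4) (vrsqrte's role)
    for _ in 0..4 {
        x *= rsqrts(a * x, x); // x' = x * (3 - a*x*x) / 2
    }
    assert!((x - 0.5).abs() < 1e-6);
}
```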
- vrsra_n_s8⚠neonSigned rounding shift right and accumulate
- vrsra_n_s16⚠neonSigned rounding shift right and accumulate
- vrsra_n_s32⚠neonSigned rounding shift right and accumulate
- vrsra_n_s64⚠neonSigned rounding shift right and accumulate
- vrsra_n_u8⚠neonUnsigned rounding shift right and accumulate
- vrsra_n_u16⚠neonUnsigned rounding shift right and accumulate
- vrsra_n_u32⚠neonUnsigned rounding shift right and accumulate
- vrsra_n_u64⚠neonUnsigned rounding shift right and accumulate
- vrsraq_n_s8⚠neonSigned rounding shift right and accumulate
- vrsraq_n_s16⚠neonSigned rounding shift right and accumulate
- vrsraq_n_s32⚠neonSigned rounding shift right and accumulate
- vrsraq_n_s64⚠neonSigned rounding shift right and accumulate
- vrsraq_n_u8⚠neonUnsigned rounding shift right and accumulate
- vrsraq_n_u16⚠neonUnsigned rounding shift right and accumulate
- vrsraq_n_u32⚠neonUnsigned rounding shift right and accumulate
- vrsraq_n_u64⚠neonUnsigned rounding shift right and accumulate
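The `vrsra` family combines the rounding right shift above with an accumulate: each lane of the shifted source is added into the corresponding destination lane. A scalar sketch (the helper name is illustrative):

```rust
// Scalar model of vrsra_n_u32: rounding shift right by N, then accumulate.
fn rsra_n_u32(acc: u32, x: u32, n: u32) -> u32 {
    acc + (((x as u64 + (1u64 << (n - 1))) >> n) as u32)
}

fn main() {
    assert_eq!(rsra_n_u32(10, 7, 2), 12); // 10 + round(7/4) = 10 + 2
    assert_eq!(rsra_n_u32(0, 4, 1), 2);   // (4 + 1) >> 1 = 2
}
```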
- vrsubhn_s16⚠neonRounding subtract returning high narrow
- vrsubhn_s32⚠neonRounding subtract returning high narrow
- vrsubhn_s64⚠neonRounding subtract returning high narrow
- vrsubhn_u16⚠neonRounding subtract returning high narrow
- vrsubhn_u32⚠neonRounding subtract returning high narrow
- vrsubhn_u64⚠neonRounding subtract returning high narrow
- vset_lane_f32⚠neonInsert vector element from another vector element
- vset_lane_p8⚠neonInsert vector element from another vector element
- vset_lane_p16⚠neonInsert vector element from another vector element
- vset_lane_p64⚠neon,aesInsert vector element from another vector element
- vset_lane_s8⚠neonInsert vector element from another vector element
- vset_lane_s16⚠neonInsert vector element from another vector element
- vset_lane_s32⚠neonInsert vector element from another vector element
- vset_lane_s64⚠neonInsert vector element from another vector element
- vset_lane_u8⚠neonInsert vector element from another vector element
- vset_lane_u16⚠neonInsert vector element from another vector element
- vset_lane_u32⚠neonInsert vector element from another vector element
- vset_lane_u64⚠neonInsert vector element from another vector element
- vsetq_lane_f32⚠neonInsert vector element from another vector element
- vsetq_lane_p8⚠neonInsert vector element from another vector element
- vsetq_lane_p16⚠neonInsert vector element from another vector element
- vsetq_lane_p64⚠neon,aesInsert vector element from another vector element
- vsetq_lane_s8⚠neonInsert vector element from another vector element
- vsetq_lane_s16⚠neonInsert vector element from another vector element
- vsetq_lane_s32⚠neonInsert vector element from another vector element
- vsetq_lane_s64⚠neonInsert vector element from another vector element
- vsetq_lane_u8⚠neonInsert vector element from another vector element
- vsetq_lane_u16⚠neonInsert vector element from another vector element
- vsetq_lane_u32⚠neonInsert vector element from another vector element
- vsetq_lane_u64⚠neonInsert vector element from another vector element
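`vset_lane` replaces a single lane (selected by a const index) with a scalar, leaving the other lanes untouched. A scalar sketch over an array model of a two-lane vector (the helper name is illustrative):

```rust
// Scalar model of vset_lane_u32: write one lane, keep the rest.
fn set_lane_u32(value: u32, v: [u32; 2], lane: usize) -> [u32; 2] {
    let mut out = v;
    out[lane] = value; // the real intrinsic takes the lane as a const generic
    out
}

fn main() {
    assert_eq!(set_lane_u32(9, [1, 2], 1), [1, 9]);
    assert_eq!(set_lane_u32(9, [1, 2], 0), [9, 2]);
}
```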
- vsha1cq_u32⚠sha2SHA1 hash update accelerator, choose.
- vsha1h_u32⚠sha2SHA1 fixed rotate.
- vsha1mq_u32⚠sha2SHA1 hash update accelerator, majority.
- vsha1pq_u32⚠sha2SHA1 hash update accelerator, parity.
- vsha1su0q_u32⚠sha2SHA1 schedule update accelerator, first part.
- vsha1su1q_u32⚠sha2SHA1 schedule update accelerator, second part.
- vsha256h2q_u32⚠sha2SHA256 hash update accelerator, upper part.
- vsha256hq_u32⚠sha2SHA256 hash update accelerator.
- vsha256su0q_u32⚠sha2SHA256 schedule update accelerator, first part.
- vsha256su1q_u32⚠sha2SHA256 schedule update accelerator, second part.
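Most of the SHA accelerators fuse whole hash-update rounds, but `vsha1h_u32` is simple enough to model: the SHA1 "fixed rotate" is a left rotation of the 32-bit hash word by 30 bits (equivalently, right by 2). A scalar sketch (the helper name is illustrative):

```rust
// Scalar model of vsha1h_u32: SHA1 fixed rotate, i.e. ROL by 30.
fn sha1h(x: u32) -> u32 {
    x.rotate_left(30)
}

fn main() {
    assert_eq!(sha1h(4), 1);          // bit 2 moves down to bit 0
    assert_eq!(sha1h(1), 1u32 << 30); // bit 0 moves up to bit 30
}
```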
- vshl_n_s8⚠neonShift left
- vshl_n_s16⚠neonShift left
- vshl_n_s32⚠neonShift left
- vshl_n_s64⚠neonShift left
- vshl_n_u8⚠neonShift left
- vshl_n_u16⚠neonShift left
- vshl_n_u32⚠neonShift left
- vshl_n_u64⚠neonShift left
- vshl_s8⚠neonSigned Shift left
- vshl_s16⚠neonSigned Shift left
- vshl_s32⚠neonSigned Shift left
- vshl_s64⚠neonSigned Shift left
- vshl_u8⚠neonUnsigned Shift left
- vshl_u16⚠neonUnsigned Shift left
- vshl_u32⚠neonUnsigned Shift left
- vshl_u64⚠neonUnsigned Shift left
- vshll_n_s8⚠neonSigned shift left long
- vshll_n_s16⚠neonSigned shift left long
- vshll_n_s32⚠neonSigned shift left long
- vshll_n_u8⚠neonUnsigned shift left long
- vshll_n_u16⚠neonUnsigned shift left long
- vshll_n_u32⚠neonUnsigned shift left long
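"Shift left long" widens each lane to twice its width before shifting, so no high bits are lost. A scalar sketch of one `vshll_n_u8` lane (the helper name is illustrative):

```rust
// Scalar model of vshll_n_u8: widen u8 -> u16, then shift left by N.
fn shll_n_u8(x: u8, n: u32) -> u16 {
    (x as u16) << n
}

fn main() {
    // A plain u8 shift would drop the top four bits; the long form keeps them.
    assert_eq!(shll_n_u8(0xFF, 4), 0x0FF0);
}
```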
- vshlq_n_s8⚠neonShift left
- vshlq_n_s16⚠neonShift left
- vshlq_n_s32⚠neonShift left
- vshlq_n_s64⚠neonShift left
- vshlq_n_u8⚠neonShift left
- vshlq_n_u16⚠neonShift left
- vshlq_n_u32⚠neonShift left
- vshlq_n_u64⚠neonShift left
- vshlq_s8⚠neonSigned Shift left
- vshlq_s16⚠neonSigned Shift left
- vshlq_s32⚠neonSigned Shift left
- vshlq_s64⚠neonSigned Shift left
- vshlq_u8⚠neonUnsigned Shift left
- vshlq_u16⚠neonUnsigned Shift left
- vshlq_u32⚠neonUnsigned Shift left
- vshlq_u64⚠neonUnsigned Shift left
- vshr_n_s8⚠neonShift right
- vshr_n_s16⚠neonShift right
- vshr_n_s32⚠neonShift right
- vshr_n_s64⚠neonShift right
- vshr_n_u8⚠neonShift right
- vshr_n_u16⚠neonShift right
- vshr_n_u32⚠neonShift right
- vshr_n_u64⚠neonShift right
- vshrn_n_s16⚠neonShift right narrow
- vshrn_n_s32⚠neonShift right narrow
- vshrn_n_s64⚠neonShift right narrow
- vshrn_n_u16⚠neonShift right narrow
- vshrn_n_u32⚠neonShift right narrow
- vshrn_n_u64⚠neonShift right narrow
- vshrq_n_s8⚠neonShift right
- vshrq_n_s16⚠neonShift right
- vshrq_n_s32⚠neonShift right
- vshrq_n_s64⚠neonShift right
- vshrq_n_u8⚠neonShift right
- vshrq_n_u16⚠neonShift right
- vshrq_n_u32⚠neonShift right
- vshrq_n_u64⚠neonShift right
- vsra_n_s8⚠neonSigned shift right and accumulate
- vsra_n_s16⚠neonSigned shift right and accumulate
- vsra_n_s32⚠neonSigned shift right and accumulate
- vsra_n_s64⚠neonSigned shift right and accumulate
- vsra_n_u8⚠neonUnsigned shift right and accumulate
- vsra_n_u16⚠neonUnsigned shift right and accumulate
- vsra_n_u32⚠neonUnsigned shift right and accumulate
- vsra_n_u64⚠neonUnsigned shift right and accumulate
- vsraq_n_s8⚠neonSigned shift right and accumulate
- vsraq_n_s16⚠neonSigned shift right and accumulate
- vsraq_n_s32⚠neonSigned shift right and accumulate
- vsraq_n_s64⚠neonSigned shift right and accumulate
- vsraq_n_u8⚠neonUnsigned shift right and accumulate
- vsraq_n_u16⚠neonUnsigned shift right and accumulate
- vsraq_n_u32⚠neonUnsigned shift right and accumulate
- vsraq_n_u64⚠neonUnsigned shift right and accumulate
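`vsra` is the truncating counterpart of `vrsra` above: shift each source lane right by an immediate (arithmetic shift for the signed variants), then add into the destination lane. A scalar sketch (the helper name is illustrative):

```rust
// Scalar model of vsra_n_s32: arithmetic shift right by N, then accumulate.
// Unlike vrsra, no rounding constant is added before the shift.
fn sra_n_s32(acc: i32, x: i32, n: u32) -> i32 {
    acc + (x >> n)
}

fn main() {
    assert_eq!(sra_n_s32(10, 7, 2), 11); // 10 + (7 >> 2) = 10 + 1
    assert_eq!(sra_n_s32(0, -8, 2), -2); // arithmetic shift keeps the sign
}
```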
- vst1_f32_x2⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_f32_x3⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_f32_x4⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_lane_f32⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_lane_p8⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_lane_p16⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_lane_p64⚠neon,aesStore multiple single-element structures from one, two, three, or four registers
- vst1_lane_s8⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_lane_s16⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_lane_s32⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_lane_s64⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_lane_u8⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_lane_u16⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_lane_u32⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_lane_u64⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_p8_x2⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_p8_x3⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_p8_x4⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_p16_x2⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_p16_x3⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_p16_x4⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_p64_x2⚠neon,aesStore multiple single-element structures from one, two, three, or four registers
- vst1_p64_x3⚠neon,aesStore multiple single-element structures from one, two, three, or four registers
- vst1_p64_x4⚠neon,aesStore multiple single-element structures from one, two, three, or four registers
- vst1_s8_x2⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_s8_x3⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_s8_x4⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_s16_x2⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_s16_x3⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_s16_x4⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_s32_x2⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_s32_x3⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_s32_x4⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_s64_x2⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_s64_x3⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_s64_x4⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_u8_x2⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_u8_x3⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_u8_x4⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_u16_x2⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_u16_x3⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_u16_x4⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_u32_x2⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_u32_x3⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_u32_x4⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_u64_x2⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_u64_x3⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_u64_x4⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_f32_x2⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_f32_x3⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_f32_x4⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_lane_f32⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_lane_p8⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_lane_p16⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_lane_p64⚠neon,aesStore multiple single-element structures from one, two, three, or four registers
- vst1q_lane_s8⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_lane_s16⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_lane_s32⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_lane_s64⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_lane_u8⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_lane_u16⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_lane_u32⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_lane_u64⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_p8_x2⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_p8_x3⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_p8_x4⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_p16_x2⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_p16_x3⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_p16_x4⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_p64_x2⚠neon,aesStore multiple single-element structures from one, two, three, or four registers
- vst1q_p64_x3⚠neon,aesStore multiple single-element structures from one, two, three, or four registers
- vst1q_p64_x4⚠neon,aesStore multiple single-element structures from one, two, three, or four registers
- vst1q_s8_x2⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_s8_x3⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_s8_x4⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_s16_x2⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_s16_x3⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_s16_x4⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_s32_x2⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_s32_x3⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_s32_x4⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_s64_x2⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_s64_x3⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_s64_x4⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_u8_x2⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_u8_x3⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_u8_x4⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_u16_x2⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_u16_x3⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_u16_x4⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_u32_x2⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_u32_x3⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_u32_x4⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_u64_x2⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_u64_x3⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_u64_x4⚠neonStore multiple single-element structures to one, two, three, or four registers
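The vst1 family writes vector elements to memory without interleaving: the `_x2`/`_x3`/`_x4` variants store two, three, or four whole vectors to consecutive memory, while the `_lane` variants store a single selected element. A minimal scalar sketch of both patterns (the `_model` names are invented for illustration and are not part of `core::arch::arm`):

```rust
// Scalar model of vst1q_u32_x2: store two 4-lane vectors to
// 8 consecutive u32 slots, first vector first.
fn vst1q_u32_x2_model(dst: &mut [u32; 8], a: [u32; 4], b: [u32; 4]) {
    dst[..4].copy_from_slice(&a);
    dst[4..].copy_from_slice(&b);
}

// Scalar model of vst1q_lane_u32::<LANE>: store only the
// selected lane to memory.
fn vst1q_lane_u32_model(dst: &mut u32, a: [u32; 4], lane: usize) {
    *dst = a[lane];
}
```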
- vst2_f32 ⚠ neon: Store multiple 2-element structures from two registers
- vst2_lane_f32 ⚠ neon: Store multiple 2-element structures from two registers
- vst2_lane_p8 ⚠ neon: Store multiple 2-element structures from two registers
- vst2_lane_p16 ⚠ neon: Store multiple 2-element structures from two registers
- vst2_lane_s8 ⚠ neon: Store multiple 2-element structures from two registers
- vst2_lane_s16 ⚠ neon: Store multiple 2-element structures from two registers
- vst2_lane_s32 ⚠ neon: Store multiple 2-element structures from two registers
- vst2_lane_u8 ⚠ neon: Store multiple 2-element structures from two registers
- vst2_lane_u16 ⚠ neon: Store multiple 2-element structures from two registers
- vst2_lane_u32 ⚠ neon: Store multiple 2-element structures from two registers
- vst2_p8 ⚠ neon: Store multiple 2-element structures from two registers
- vst2_p16 ⚠ neon: Store multiple 2-element structures from two registers
- vst2_p64 ⚠ neon,aes: Store multiple 2-element structures from two registers
- vst2_s8 ⚠ neon: Store multiple 2-element structures from two registers
- vst2_s16 ⚠ neon: Store multiple 2-element structures from two registers
- vst2_s32 ⚠ neon: Store multiple 2-element structures from two registers
- vst2_s64 ⚠ neon: Store multiple 2-element structures from two registers
- vst2_u8 ⚠ neon: Store multiple 2-element structures from two registers
- vst2_u16 ⚠ neon: Store multiple 2-element structures from two registers
- vst2_u32 ⚠ neon: Store multiple 2-element structures from two registers
- vst2_u64 ⚠ neon: Store multiple 2-element structures from two registers
- vst2q_f32 ⚠ neon: Store multiple 2-element structures from two registers
- vst2q_lane_f32 ⚠ neon: Store multiple 2-element structures from two registers
- vst2q_lane_p16 ⚠ neon: Store multiple 2-element structures from two registers
- vst2q_lane_s16 ⚠ neon: Store multiple 2-element structures from two registers
- vst2q_lane_s32 ⚠ neon: Store multiple 2-element structures from two registers
- vst2q_lane_u16 ⚠ neon: Store multiple 2-element structures from two registers
- vst2q_lane_u32 ⚠ neon: Store multiple 2-element structures from two registers
- vst2q_p8 ⚠ neon: Store multiple 2-element structures from two registers
- vst2q_p16 ⚠ neon: Store multiple 2-element structures from two registers
- vst2q_s8 ⚠ neon: Store multiple 2-element structures from two registers
- vst2q_s16 ⚠ neon: Store multiple 2-element structures from two registers
- vst2q_s32 ⚠ neon: Store multiple 2-element structures from two registers
- vst2q_u8 ⚠ neon: Store multiple 2-element structures from two registers
- vst2q_u16 ⚠ neon: Store multiple 2-element structures from two registers
- vst2q_u32 ⚠ neon: Store multiple 2-element structures from two registers
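Unlike vst1, the vst2 family interleaves its two input vectors in memory: element i of each register forms one 2-element structure. A scalar sketch of that layout (the `_model` name is invented for illustration):

```rust
// Scalar model of vst2_u16: elements of `a` and `b` are
// interleaved in memory as a0, b0, a1, b1, ...
fn vst2_u16_model(dst: &mut [u16; 8], a: [u16; 4], b: [u16; 4]) {
    for i in 0..4 {
        dst[2 * i] = a[i];
        dst[2 * i + 1] = b[i];
    }
}
```

This is the store-side counterpart of a structure load such as vld2, which de-interleaves the same layout back into two registers.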
- vst3_f32 ⚠ neon: Store multiple 3-element structures from three registers
- vst3_lane_f32 ⚠ neon: Store multiple 3-element structures from three registers
- vst3_lane_p8 ⚠ neon: Store multiple 3-element structures from three registers
- vst3_lane_p16 ⚠ neon: Store multiple 3-element structures from three registers
- vst3_lane_s8 ⚠ neon: Store multiple 3-element structures from three registers
- vst3_lane_s16 ⚠ neon: Store multiple 3-element structures from three registers
- vst3_lane_s32 ⚠ neon: Store multiple 3-element structures from three registers
- vst3_lane_u8 ⚠ neon: Store multiple 3-element structures from three registers
- vst3_lane_u16 ⚠ neon: Store multiple 3-element structures from three registers
- vst3_lane_u32 ⚠ neon: Store multiple 3-element structures from three registers
- vst3_p8 ⚠ neon: Store multiple 3-element structures from three registers
- vst3_p16 ⚠ neon: Store multiple 3-element structures from three registers
- vst3_p64 ⚠ neon,aes: Store multiple 3-element structures from three registers
- vst3_s8 ⚠ neon: Store multiple 3-element structures from three registers
- vst3_s16 ⚠ neon: Store multiple 3-element structures from three registers
- vst3_s32 ⚠ neon: Store multiple 3-element structures from three registers
- vst3_s64 ⚠ neon: Store multiple 3-element structures from three registers
- vst3_u8 ⚠ neon: Store multiple 3-element structures from three registers
- vst3_u16 ⚠ neon: Store multiple 3-element structures from three registers
- vst3_u32 ⚠ neon: Store multiple 3-element structures from three registers
- vst3_u64 ⚠ neon: Store multiple 3-element structures from three registers
- vst3q_f32 ⚠ neon: Store multiple 3-element structures from three registers
- vst3q_lane_f32 ⚠ neon: Store multiple 3-element structures from three registers
- vst3q_lane_p16 ⚠ neon: Store multiple 3-element structures from three registers
- vst3q_lane_s16 ⚠ neon: Store multiple 3-element structures from three registers
- vst3q_lane_s32 ⚠ neon: Store multiple 3-element structures from three registers
- vst3q_lane_u16 ⚠ neon: Store multiple 3-element structures from three registers
- vst3q_lane_u32 ⚠ neon: Store multiple 3-element structures from three registers
- vst3q_p8 ⚠ neon: Store multiple 3-element structures from three registers
- vst3q_p16 ⚠ neon: Store multiple 3-element structures from three registers
- vst3q_s8 ⚠ neon: Store multiple 3-element structures from three registers
- vst3q_s16 ⚠ neon: Store multiple 3-element structures from three registers
- vst3q_s32 ⚠ neon: Store multiple 3-element structures from three registers
- vst3q_u8 ⚠ neon: Store multiple 3-element structures from three registers
- vst3q_u16 ⚠ neon: Store multiple 3-element structures from three registers
- vst3q_u32 ⚠ neon: Store multiple 3-element structures from three registers
- vst4_f32 ⚠ neon: Store multiple 4-element structures from four registers
- vst4_lane_f32 ⚠ neon: Store multiple 4-element structures from four registers
- vst4_lane_p8 ⚠ neon: Store multiple 4-element structures from four registers
- vst4_lane_p16 ⚠ neon: Store multiple 4-element structures from four registers
- vst4_lane_s8 ⚠ neon: Store multiple 4-element structures from four registers
- vst4_lane_s16 ⚠ neon: Store multiple 4-element structures from four registers
- vst4_lane_s32 ⚠ neon: Store multiple 4-element structures from four registers
- vst4_lane_u8 ⚠ neon: Store multiple 4-element structures from four registers
- vst4_lane_u16 ⚠ neon: Store multiple 4-element structures from four registers
- vst4_lane_u32 ⚠ neon: Store multiple 4-element structures from four registers
- vst4_p8 ⚠ neon: Store multiple 4-element structures from four registers
- vst4_p16 ⚠ neon: Store multiple 4-element structures from four registers
- vst4_p64 ⚠ neon,aes: Store multiple 4-element structures from four registers
- vst4_s8 ⚠ neon: Store multiple 4-element structures from four registers
- vst4_s16 ⚠ neon: Store multiple 4-element structures from four registers
- vst4_s32 ⚠ neon: Store multiple 4-element structures from four registers
- vst4_s64 ⚠ neon: Store multiple 4-element structures from four registers
- vst4_u8 ⚠ neon: Store multiple 4-element structures from four registers
- vst4_u16 ⚠ neon: Store multiple 4-element structures from four registers
- vst4_u32 ⚠ neon: Store multiple 4-element structures from four registers
- vst4_u64 ⚠ neon: Store multiple 4-element structures from four registers
- vst4q_f32 ⚠ neon: Store multiple 4-element structures from four registers
- vst4q_lane_f32 ⚠ neon: Store multiple 4-element structures from four registers
- vst4q_lane_p16 ⚠ neon: Store multiple 4-element structures from four registers
- vst4q_lane_s16 ⚠ neon: Store multiple 4-element structures from four registers
- vst4q_lane_s32 ⚠ neon: Store multiple 4-element structures from four registers
- vst4q_lane_u16 ⚠ neon: Store multiple 4-element structures from four registers
- vst4q_lane_u32 ⚠ neon: Store multiple 4-element structures from four registers
- vst4q_p8 ⚠ neon: Store multiple 4-element structures from four registers
- vst4q_p16 ⚠ neon: Store multiple 4-element structures from four registers
- vst4q_s8 ⚠ neon: Store multiple 4-element structures from four registers
- vst4q_s16 ⚠ neon: Store multiple 4-element structures from four registers
- vst4q_s32 ⚠ neon: Store multiple 4-element structures from four registers
- vst4q_u8 ⚠ neon: Store multiple 4-element structures from four registers
- vst4q_u16 ⚠ neon: Store multiple 4-element structures from four registers
- vst4q_u32 ⚠ neon: Store multiple 4-element structures from four registers
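vst3 and vst4 generalize the vst2 interleave to strides of three and four: lane i of each input vector is written out as one contiguous structure. A scalar sketch of the 4-vector case (the `_model` name is invented for illustration):

```rust
// Scalar model of vst4_u8: the four 8-lane vectors in `v` are
// interleaved in memory as v0[0], v1[0], v2[0], v3[0], v0[1], ...
fn vst4_u8_model(dst: &mut [u8; 32], v: [[u8; 8]; 4]) {
    for i in 0..8 {
        for j in 0..4 {
            dst[4 * i + j] = v[j][i];
        }
    }
}
```

This layout is the natural store for, e.g., planar RGBA channels held in four separate registers.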
- vstrq_p128 ⚠ neon: Store SIMD&FP register (immediate offset)
- vsub_f32 ⚠ neon: Subtract
- vsub_s8 ⚠ neon: Subtract
- vsub_s16 ⚠ neon: Subtract
- vsub_s32 ⚠ neon: Subtract
- vsub_s64 ⚠ neon: Subtract
- vsub_u8 ⚠ neon: Subtract
- vsub_u16 ⚠ neon: Subtract
- vsub_u32 ⚠ neon: Subtract
- vsub_u64 ⚠ neon: Subtract
- vsubhn_high_s16 ⚠ neon: Subtract returning high narrow
- vsubhn_high_s32 ⚠ neon: Subtract returning high narrow
- vsubhn_high_s64 ⚠ neon: Subtract returning high narrow
- vsubhn_high_u16 ⚠ neon: Subtract returning high narrow
- vsubhn_high_u32 ⚠ neon: Subtract returning high narrow
- vsubhn_high_u64 ⚠ neon: Subtract returning high narrow
- vsubhn_s16 ⚠ neon: Subtract returning high narrow
- vsubhn_s32 ⚠ neon: Subtract returning high narrow
- vsubhn_s64 ⚠ neon: Subtract returning high narrow
- vsubhn_u16 ⚠ neon: Subtract returning high narrow
- vsubhn_u32 ⚠ neon: Subtract returning high narrow
- vsubhn_u64 ⚠ neon: Subtract returning high narrow
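"Subtract returning high narrow" means: subtract element-wise at the wide width, then keep only the upper half of each difference, narrowing the element type (the `_high` variants additionally pack the narrowed result into the upper half of an existing narrow vector). A scalar sketch for the s16 → s8 case (the `_model` name is invented for illustration):

```rust
// Scalar model of vsubhn_s16: wrapping i16 difference, then keep
// only the high byte of each result, narrowing to i8.
fn vsubhn_s16_model(a: [i16; 8], b: [i16; 8]) -> [i8; 8] {
    let mut r = [0i8; 8];
    for i in 0..8 {
        r[i] = (a[i].wrapping_sub(b[i]) >> 8) as i8;
    }
    r
}
```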
- vsubl_s8 ⚠ neon: Signed Subtract Long
- vsubl_s16 ⚠ neon: Signed Subtract Long
- vsubl_s32 ⚠ neon: Signed Subtract Long
- vsubl_u8 ⚠ neon: Unsigned Subtract Long
- vsubl_u16 ⚠ neon: Unsigned Subtract Long
- vsubl_u32 ⚠ neon: Unsigned Subtract Long
- vsubq_f32 ⚠ neon: Subtract
- vsubq_s8 ⚠ neon: Subtract
- vsubq_s16 ⚠ neon: Subtract
- vsubq_s32 ⚠ neon: Subtract
- vsubq_s64 ⚠ neon: Subtract
- vsubq_u8 ⚠ neon: Subtract
- vsubq_u16 ⚠ neon: Subtract
- vsubq_u32 ⚠ neon: Subtract
- vsubq_u64 ⚠ neon: Subtract
- vsubw_s8 ⚠ neon: Signed Subtract Wide
- vsubw_s16 ⚠ neon: Signed Subtract Wide
- vsubw_s32 ⚠ neon: Signed Subtract Wide
- vsubw_u8 ⚠ neon: Unsigned Subtract Wide
- vsubw_u16 ⚠ neon: Unsigned Subtract Wide
- vsubw_u32 ⚠ neon: Unsigned Subtract Wide
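The Long and Wide variants differ in which operands get widened: vsubl widens both narrow operands to the double width before subtracting, while vsubw takes an already-wide first operand and widens only the second. A scalar sketch of both for u8/u16 (the `_model` names are invented for illustration):

```rust
// Scalar model of vsubl_u8: widen both operands to u16, then subtract.
fn vsubl_u8_model(a: [u8; 8], b: [u8; 8]) -> [u16; 8] {
    core::array::from_fn(|i| (a[i] as u16).wrapping_sub(b[i] as u16))
}

// Scalar model of vsubw_u8: `a` is already wide; widen only `b`.
fn vsubw_u8_model(a: [u16; 8], b: [u8; 8]) -> [u16; 8] {
    core::array::from_fn(|i| a[i].wrapping_sub(b[i] as u16))
}
```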
- vtrn_f32 ⚠ neon: Transpose elements
- vtrn_p8 ⚠ neon: Transpose elements
- vtrn_p16 ⚠ neon: Transpose elements
- vtrn_s8 ⚠ neon: Transpose elements
- vtrn_s16 ⚠ neon: Transpose elements
- vtrn_s32 ⚠ neon: Transpose elements
- vtrn_u8 ⚠ neon: Transpose elements
- vtrn_u16 ⚠ neon: Transpose elements
- vtrn_u32 ⚠ neon: Transpose elements
- vtrnq_f32 ⚠ neon: Transpose elements
- vtrnq_p8 ⚠ neon: Transpose elements
- vtrnq_p16 ⚠ neon: Transpose elements
- vtrnq_s8 ⚠ neon: Transpose elements
- vtrnq_s16 ⚠ neon: Transpose elements
- vtrnq_s32 ⚠ neon: Transpose elements
- vtrnq_u8 ⚠ neon: Transpose elements
- vtrnq_u16 ⚠ neon: Transpose elements
- vtrnq_u32 ⚠ neon: Transpose elements
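vtrn views the two input vectors as a sequence of 2x2 element blocks and transposes each block, so the results interleave even-indexed elements of `a` with even-indexed elements of `b`, and likewise for odd indices. A scalar sketch (the `_model` name is invented for illustration):

```rust
// Scalar model of vtrn_u8: transpose each 2x2 block formed by
// adjacent element pairs of `a` and `b`.
fn vtrn_u8_model(a: [u8; 8], b: [u8; 8]) -> ([u8; 8], [u8; 8]) {
    let mut r0 = [0u8; 8];
    let mut r1 = [0u8; 8];
    for i in (0..8).step_by(2) {
        r0[i] = a[i];
        r0[i + 1] = b[i];
        r1[i] = a[i + 1];
        r1[i + 1] = b[i + 1];
    }
    (r0, r1)
}
```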
- vtst_p8 ⚠ neon: Signed compare bitwise test for nonzero bits
- vtst_p16 ⚠ neon: Signed compare bitwise test for nonzero bits
- vtst_s8 ⚠ neon: Signed compare bitwise test for nonzero bits
- vtst_s16 ⚠ neon: Signed compare bitwise test for nonzero bits
- vtst_s32 ⚠ neon: Signed compare bitwise test for nonzero bits
- vtst_u8 ⚠ neon: Unsigned compare bitwise test for nonzero bits
- vtst_u16 ⚠ neon: Unsigned compare bitwise test for nonzero bits
- vtst_u32 ⚠ neon: Unsigned compare bitwise test for nonzero bits
- vtstq_p8 ⚠ neon: Signed compare bitwise test for nonzero bits
- vtstq_p16 ⚠ neon: Signed compare bitwise test for nonzero bits
- vtstq_s8 ⚠ neon: Signed compare bitwise test for nonzero bits
- vtstq_s16 ⚠ neon: Signed compare bitwise test for nonzero bits
- vtstq_s32 ⚠ neon: Signed compare bitwise test for nonzero bits
- vtstq_u8 ⚠ neon: Unsigned compare bitwise test for nonzero bits
- vtstq_u16 ⚠ neon: Unsigned compare bitwise test for nonzero bits
- vtstq_u32 ⚠ neon: Unsigned compare bitwise test for nonzero bits
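Like the other NEON comparisons, vtst produces a per-lane mask: each lane becomes all ones if the bitwise AND of the corresponding input lanes is nonzero, and all zeros otherwise. A scalar sketch (the `_model` name is invented for illustration):

```rust
// Scalar model of vtst_u8: lanes where `a & b` has any bit set
// become 0xFF, the rest become 0x00.
fn vtst_u8_model(a: [u8; 8], b: [u8; 8]) -> [u8; 8] {
    core::array::from_fn(|i| if a[i] & b[i] != 0 { 0xFF } else { 0x00 })
}
```

The all-ones/all-zeros mask form is what makes the result directly usable with bitwise-select operations such as vbsl.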
- vuzp_f32 ⚠ neon: Unzip vectors
- vuzp_p8 ⚠ neon: Unzip vectors
- vuzp_p16 ⚠ neon: Unzip vectors
- vuzp_s8 ⚠ neon: Unzip vectors
- vuzp_s16 ⚠ neon: Unzip vectors
- vuzp_s32 ⚠ neon: Unzip vectors
- vuzp_u8 ⚠ neon: Unzip vectors
- vuzp_u16 ⚠ neon: Unzip vectors
- vuzp_u32 ⚠ neon: Unzip vectors
- vuzpq_f32 ⚠ neon: Unzip vectors
- vuzpq_p8 ⚠ neon: Unzip vectors
- vuzpq_p16 ⚠ neon: Unzip vectors
- vuzpq_s8 ⚠ neon: Unzip vectors
- vuzpq_s16 ⚠ neon: Unzip vectors
- vuzpq_s32 ⚠ neon: Unzip vectors
- vuzpq_u8 ⚠ neon: Unzip vectors
- vuzpq_u16 ⚠ neon: Unzip vectors
- vuzpq_u32 ⚠ neon: Unzip vectors
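vuzp de-interleaves a pair of vectors: the first result gathers the even-indexed elements of `a` followed by those of `b`, the second gathers the odd-indexed elements. A scalar sketch (the `_model` name is invented for illustration):

```rust
// Scalar model of vuzp_u8: split the concatenation of `a` and `b`
// into its even-indexed and odd-indexed element streams.
fn vuzp_u8_model(a: [u8; 8], b: [u8; 8]) -> ([u8; 8], [u8; 8]) {
    let mut r0 = [0u8; 8];
    let mut r1 = [0u8; 8];
    for i in 0..4 {
        r0[i] = a[2 * i];
        r0[i + 4] = b[2 * i];
        r1[i] = a[2 * i + 1];
        r1[i + 4] = b[2 * i + 1];
    }
    (r0, r1)
}
```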
- vzip_f32 ⚠ neon: Zip vectors
- vzip_p8 ⚠ neon: Zip vectors
- vzip_p16 ⚠ neon: Zip vectors
- vzip_s8 ⚠ neon: Zip vectors
- vzip_s16 ⚠ neon: Zip vectors
- vzip_s32 ⚠ neon: Zip vectors
- vzip_u8 ⚠ neon: Zip vectors
- vzip_u16 ⚠ neon: Zip vectors
- vzip_u32 ⚠ neon: Zip vectors
- vzipq_f32 ⚠ neon: Zip vectors
- vzipq_p8 ⚠ neon: Zip vectors
- vzipq_p16 ⚠ neon: Zip vectors
- vzipq_s8 ⚠ neon: Zip vectors
- vzipq_s16 ⚠ neon: Zip vectors
- vzipq_s32 ⚠ neon: Zip vectors
- vzipq_u8 ⚠ neon: Zip vectors
- vzipq_u16 ⚠ neon: Zip vectors
- vzipq_u32 ⚠ neon: Zip vectors
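vzip is the inverse of vuzp: it interleaves the two input vectors element by element, returning the low half of the zipped sequence in the first result and the high half in the second. A scalar sketch (the `_model` name is invented for illustration):

```rust
// Scalar model of vzip_u8: interleave `a` and `b` as
// a0, b0, a1, b1, ..., then split into low and high halves.
fn vzip_u8_model(a: [u8; 8], b: [u8; 8]) -> ([u8; 8], [u8; 8]) {
    let mut zipped = [0u8; 16];
    for i in 0..8 {
        zipped[2 * i] = a[i];
        zipped[2 * i + 1] = b[i];
    }
    let mut r0 = [0u8; 8];
    let mut r1 = [0u8; 8];
    r0.copy_from_slice(&zipped[..8]);
    r1.copy_from_slice(&zipped[8..]);
    (r0, r1)
}
```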