
Vector Extensions

Modern CPUs often support specialized vector types and vector operations, sometimes called "media instructions". A vector type is a fixed-length array of floating point or integer values, and a vector operation acts on all of the elements simultaneously, thus achieving great speed-ups.

When the compiler takes advantage of these instructions with standard D code to speed up loops over arithmetic data, this is called auto-vectorization. Auto-vectorization, however, has had only limited success and has not been able to fully exploit the richness (and often quirkiness) of the native vector instructions.

D has the array operation notation, such as:

int[] a,b;
a[] += b[];

which can be vectorized by the compiler, but again success is limited for the same reason auto-vectorization is.

The difficulties with trying to use vector instructions on regular arrays are:

  1. The vector types have stringent alignment requirements that are not and cannot be met by conventional arrays.
  2. C ABIs often have vector extensions, with special name mangling, call/return conventions, and symbolic debug support for them.
  3. The only way to get at the full vector instruction set would be to use inline assembler - but the compiler cannot do register allocation across inline assembler blocks (or other optimizations), leading to poor code performance.
  4. Interleaving conventional array code with vector operations on the same data can unwittingly lead to extremely poor runtime performance.

These issues are cleared up by using special vector types.


Vector types and operations are introduced to D code by importing core.simd:

import core.simd;

These types and operations will be the ones defined for the architecture the compiler is targeting. If a particular CPU family has varying support for vector types, an additional runtime check may be necessary. The compiler does not emit runtime checks; those must be done by the programmer.

Depending on the architecture, compiler flags may be required to activate support for SIMD types.
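For example (illustrative; the exact flag names vary by compiler and version, so consult each compiler's documentation):

```
dmd  -mcpu=avx   app.d    # DMD: select AVX code generation
gdc  -mavx       app.d    # GDC: GCC-style target flag
ldc2 -mattr=+avx app.d    # LDC: LLVM target attribute
```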

The types defined will all follow the naming convention:

typeNN

where type is the vector element type and NN is the number of those elements in the vector type. The type names will not be keywords.


Vector types have the following property:

Vector Type Properties
Property  Description
.array    Returns static array representation

All the properties of the static array representation also work.
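As a sketch of how the .array property can be used (guarded with static if, since int4 need not exist on every target):

```d
import core.simd;

void main()
{
    static if (is(int4))
    {
        int4 v = 5;          // broadcast 5 to every element
        int[4] a = v.array;  // copy out the static array representation
        assert(a == [5, 5, 5, 5]);

        // static array properties work directly on the vector:
        assert(v.array.length == 4);
    }
}
```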


Vector types of the same size can be implicitly converted among each other. Vector types can be cast to the static array representation.
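A sketch of these conversions (note that the implicit conversion between same-size vector types reinterprets the bits rather than converting element values):

```d
import core.simd;

void main()
{
    static if (is(int4) && is(float4))
    {
        int4 i = 1;
        float4 f = i;             // same 16-byte size: implicit, bits reinterpreted
        int4 back = cast(int4)f;  // explicit cast between vector types

        int[4] a = cast(int[4])i; // cast to the static array representation
        assert(a == [1, 1, 1, 1]);
        assert(back.array == [1, 1, 1, 1]);
    }
}
```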

Integers and floating point values can be implicitly converted to their vector equivalents:

int4 v = 7;
v = 3 + v;   // add 3 to each element in v

Accessing Individual Vector Elements

Individual vector elements cannot be accessed directly, but they can be accessed once the vector is reinterpreted as an array type:

int4 v;
(cast(int*)&v)[3] = 2;   // set element 3 of the 4 int vector
(cast(int[4])v)[3] = 2;  // set element 3 of the 4 int vector
v.array[3] = 2;          // set element 3 of the 4 int vector
v.ptr[3] = 2;            // set element 3 of the 4 int vector

Conditional Compilation

If vector extensions are implemented, the version identifier D_SIMD is set.
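For example, code can branch on that identifier (a minimal sketch):

```d
import std.stdio;

void main()
{
    version (D_SIMD)
        writeln("vector extensions implemented");
    else
        writeln("no vector extensions; fall back to array operations");
}
```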

Whether a type exists or not can be tested at compile time with an IsExpression:

static if (is(typeNN))
    ... yes, it is supported ...
else
    ... nope, use workaround ...

Whether a particular operation on a type is supported can be tested at compile time with:

float4 a,b;
static if (__traits(compiles, a+b))
    ... yes, it is supported ...
else
    ... nope, use workaround ...

For runtime testing to see if certain vector instructions are available, see the functions in core.cpuid.
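For example (sse2 and avx are among the feature queries in core.cpuid; check the module documentation for the full set):

```d
import core.cpuid : avx, sse2;
import std.stdio;

void main()
{
    // These query CPUID at runtime, so they reflect the machine the
    // program is executing on, not the one it was compiled on.
    if (sse2())
        writeln("SSE2 available: 128-bit vector instructions are safe");
    if (avx())
        writeln("AVX available: 256-bit vector instructions are safe");
}
```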

A typical workaround would be to use array vector operations instead:

float4 a,b,c;
static if (__traits(compiles, a/b))
    c = a / b;
else
    c[] = a[] / b[];

X86 And X86_64 Vector Extension Implementation

The rest of this document describes the specific implementation of the vector types for the X86 and X86_64 architectures.

The vector extensions are currently implemented for the OS X 32-bit target, and for all 64-bit targets.

core.simd defines the following types:

Vector Types
Type Name  Description               gcc Equivalent
void16     16 bytes of untyped data  no equivalent
byte16     16 bytes                  signed char __attribute__((vector_size(16)))
ubyte16    16 ubytes                 unsigned char __attribute__((vector_size(16)))
short8     8 shorts                  short __attribute__((vector_size(16)))
ushort8    8 ushorts                 unsigned short __attribute__((vector_size(16)))
int4       4 ints                    int __attribute__((vector_size(16)))
uint4      4 uints                   unsigned __attribute__((vector_size(16)))
long2      2 longs                   long __attribute__((vector_size(16)))
ulong2     2 ulongs                  unsigned long __attribute__((vector_size(16)))
float4     4 floats                  float __attribute__((vector_size(16)))
double2    2 doubles                 double __attribute__((vector_size(16)))
void32     32 bytes of untyped data  no equivalent
byte32     32 bytes                  signed char __attribute__((vector_size(32)))
ubyte32    32 ubytes                 unsigned char __attribute__((vector_size(32)))
short16    16 shorts                 short __attribute__((vector_size(32)))
ushort16   16 ushorts                unsigned short __attribute__((vector_size(32)))
int8       8 ints                    int __attribute__((vector_size(32)))
uint8      8 uints                   unsigned __attribute__((vector_size(32)))
long4      4 longs                   long __attribute__((vector_size(32)))
ulong4     4 ulongs                  unsigned long __attribute__((vector_size(32)))
float8     8 floats                  float __attribute__((vector_size(32)))
double4    4 doubles                 double __attribute__((vector_size(32)))

Note: for 32-bit gcc, it's long long instead of long.
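The sizes in the table can be checked at compile time; a sketch (guarded, since the 256-bit types require AVX support in the compilation target):

```d
import core.simd;

void main()
{
    // 128-bit types occupy 16 bytes, 256-bit types 32 bytes.
    static if (is(float4))
        static assert(float4.sizeof == 16);
    static if (is(double4))
        static assert(double4.sizeof == 32);
}
```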

Supported 128-bit Vector Operators
Supported 256-bit Vector Operators

Operators not listed are not supported at all.

Vector Operation Intrinsics

See core.simd for the supported intrinsics.

Application Binary Interface