New D Compiler Release: DMD 2.075.0

Posted on

DMD 2.075.0 was released a few days back. As with every release, the changelog is available so you can browse the list of fixed bugs and new features. 2.075.0 can be fetched from the download page, which always makes available the latest DMD release alongside a nightly build.

Notable Changes

Every DMD release brings with it a number of bug fixes, changes, and enhancements. Here are some of the more noteworthy changes in this release.

Two array properties removed

Anyone who does a lot of work with D’s ranges will likely have encountered this little annoyance that arises from the built-in .sort property of arrays.

void main() {
    import std.algorithm : remove, sort;
    import std.array : array;
    int[] nums = [5, 3, 1, 2, 4];
    nums = nums.sort.remove(2).array;
}

The .sort property has been deprecated for ages, so the above would result in the following error:

sorted.d(6): Deprecation: use std.algorithm.sort instead of .sort property

The workaround would be to add an empty set of parentheses to the sort call. With DMD 2.075.0, this is no longer necessary and the above will compile. Both the .sort and .reverse array properties have finally been removed from the language.

For the uninitiated, D has two features that have proven convenient in the functional pipeline programming style typically used with ranges. One is that parentheses on a function call are optional when there are no parameters. The other is Universal Function Call Syntax (UFCS), which allows a function call to be made using the dot notation on the first argument, so that a function int add(int a, int b) can be called as: 10.add(5).
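To make both features concrete, here is a tiny sketch (the add and five functions are hypothetical illustrations):

int add(int a, int b) { return a + b; }
int five() { return 5; }

void main() {
    import std.stdio : writeln;
    writeln(add(10, 5)); // a regular call
    writeln(10.add(5));  // UFCS: the first argument moves before the dot
    auto n = five;       // optional parentheses: five() is called, n is 5
    writeln(n);
}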

Each of D’s built-in types comes with a set of built-in properties. Given that the built-in properties are not functions, no parentheses are used to access them. The .sort array property has been around since the early days of D1. At the time, it was rather useful and convenient for anyone who was happy with the default implementation. When D2 came along with the range paradigm, the standard library was given a set of functions that can treat arrays as ranges, opening them up to use with the many range based functions in the std.algorithm package and elsewhere.

With optional parentheses, UFCS, and a range-based function in std.algorithm called sort, conflict was inevitable. Now range-based programmers can put that behind them and take one more pair of parentheses out of their pipelines.

The breaking up of std.datetime

The std.datetime module has had a reputation as the largest module in D’s standard library. Some developers have been known to use it as a stress test for their tooling. It was added to the library long before D got the special package module feature, which allows multiple modules in a package to be imported as a single module.
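For the uninitiated, a package module is a package.d file that publicly imports the package's submodules, so that importing the package name pulls them all in at once. A minimal sketch, with hypothetical module names:

// mylib/package.d (hypothetical layout, not std.datetime's actual structure)
module mylib;

// Publicly re-import the submodules so that `import mylib;`
// continues to bring in everything at once.
public import mylib.alpha;
public import mylib.beta;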

Once package modules were added, Jonathan M. Davis, the original std.datetime developer, found it challenging to split the monolith into multiple modules. Then, at DConf 2017, he could be seen toiling away on his laptop in the conference hall and the hotel lobby. On the final day of the conference, the day of the DConf Hackathon, he announced that std.datetime was now a package. DMD 2.075.0 is the first release where the new module structure is available.

Any existing code using the old module should still compile. However, any static libraries or object files lying around with the old symbols stuffed inside may need to be recompiled.
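In practice, that means both import styles below now work (Clock lives in the new std.datetime.systime submodule):

// The old monolithic import still works via the package module...
import std.datetime;

// ...and new code can import just the pieces it needs.
import std.datetime.systime : Clock;

void main() {
    import std.stdio : writeln;
    writeln(Clock.currTime); // prints the current system time
}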

Colorized compiler messages

This one is missing from the changelog. DMD now has the ability to output colorized messages. The implementation required going through the existing error messages and properly annotating them where appropriate, so there may well be some messages for which the colors are missing. Also, given that this is a brand new feature and people can be picky about their terminal colors, more work will likely be done on this in the future. Perhaps that might include support for customization.


Compiler Ddoc documentation online

DMD, though originally written in C++, was converted to D some time ago. Now that more D programmers are able to contribute to the compiler, work has gone into documenting its source using D’s built-in Ddoc syntax. The result is now online, accessible from the sidebar of the existing library reference. A good starting point is the ddmd.mars module.
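For those unfamiliar with it, Ddoc is the documentation comment syntax built into the compiler. A minimal sketch of what it looks like:

/**
 * Computes the sum of two integers.
 *
 * Params:
 *     a = the first addend
 *     b = the second addend
 *
 * Returns: the sum of the two arguments
 */
int add(int a, int b) { return a + b; }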

And more…

The above is a small part of the bigger picture. The bugfix list shows 89 bugs, regressions, and enhancements across the compiler, runtime, standard library, and web site. See the full changelog for the details.

Thanks to everyone who contributed to this release, whether it was by reporting issues, submitting or reviewing pull requests, testing out the beta, or carrying out any of the numerous small tasks that help a new release see the light of day.

DCompute: GPGPU with Native D for OpenCL and CUDA

Posted on

Nicholas Wilson is a student at Murdoch University, studying for his BEng (Hons)/BSc in Industrial Computer Systems (Hons) and Instrumentation & Control / Molecular Biology & Genetics and Biomedical Science. He just finished his thesis on low-cost defect detection of solar cells by electroluminescence imaging, which gives him time to work on DCompute and write about it for the D Blog. He plays the piano, ice skates, and has spent 7 years putting D to use on number bashing, automation, and anything else that he could make a computer do for him.

DCompute is a framework and compiler extension to support writing native kernels for OpenCL and CUDA in D, utilising GPUs and other accelerators for computationally intensive code. In development are drivers to automate the interactions between user code and the tedious, error-prone compute APIs, with the goal of enabling the rapid development of high-performance D libraries and applications.


After watching John Colvin’s DConf 2016 presentation in May of last year on using D’s metaprogramming to make the OpenCL API marginally less horrible to use, I thought, “This would be so much easier to do if we were able to write kernels in D, rather than doing string manipulations in OpenCL C”. At the time, I was coming up to the end of a rather busy semester and thought that would make a good winter[1] project. After all, LDC, the LLVM D Compiler, has access to LLVM’s SPIR-V and PTX backends, and I thought, “It can’t be too hard, it’s only glue code”. I slightly underestimated the time it would take, finishing the first stage of DCompute (because naming things is hard), mainlining the changes I made to LDC at the end of February, eight months later, just in time for the close of submissions to DConf, where I gave a talk on the progress I had made.

Apart from familiarising myself with the LDC and DMD front-end codebases, I also had to understand the LLVM SPIR-V and PTX backends that I was trying to target, because they require the use of special metadata (e.g. to denote that a function is a kernel) and address spaces, used to represent __global & friends in OpenCL C and __global__ & friends in CUDA, and then introduce those concepts into LDC.

But once I was familiar with the code and had sorted the above discrepancies, it was mostly smooth sailing translating the OpenCL and CUDA modifiers into compiler-recognised attributes and wrapping the intrinsics into an easy to use and consistent interface.

When it was all working and almost ready to merge into mainline LDC, I hit a bit of a snag with regards to CI: the SPIR-V backend that was being developed by Khronos was based on the quite old LLVM 3.6.1 and, despite my suggestions, did not have any releases. So I forward ported the backend and the conversion utility to the master branch of LLVM and made a release myself. Still in progress on this front are converting magic intrinsics to proper LLVM intrinsics and transitioning to a TableGen-driven approach for the backend in preparation for merging the backend into LLVM Trunk. This should hopefully be done soon™.

Current state of DCompute

With the current state of DCompute we are able to write kernels natively in D and have access to most of its language-defining features like templates & static introspection, UFCS, scope guards, ranges & algorithms and CTFE. Notably missing, for hardware and performance reasons, are those features commonly excluded in kernel languages, like function pointers, virtual functions, dynamic recursion, RTTI, exceptions and the use of the garbage collector. Note that unlike OpenCL C++ we allow kernel functions to be templated and have overloads and default values. Still in development is support for images and pipes.

Example code

To write kernels in D, we need to pass -mdcompute-targets=<targets> to LDC, where <targets> is a comma-separated list of the desired targets to build for, e.g. ocl-120,cuda-350 for OpenCL 1.2 and CUDA compute capability 3.5, respectively (yes, we can do them all at once!). We get one file for each target, e.g. kernels_ocl120_64.spv, when built in 64-bit mode, which contains all of the code for that device.

The vector add kernel in D is:

@compute(CompileFor.deviceOnly) module example;
import ldc.dcompute;
import dcompute.std.index;

alias gf = GlobalPointer!float;

@kernel void vadd(gf a, gf b, gf c)
{
    auto x = GlobalIndex.x;
    a[x] = b[x] + c[x];
}

Modules marked with the @compute attribute are compiled for each of the command line targets, @kernel makes a function a kernel, and GlobalPointer is the equivalent of the __global qualifier in OpenCL.

Kernels are not restricted to just functions; lambdas & templates also work:

@kernel void map(alias F)(KernelArgs!F args) { ... }

// In host code
AutoBuffer!float x, y, z; // y & z initialised with data
q.enqueue!(map!((a, b, c) => a = b + c))(x.length)(x, y, z);

Here, KernelArgs translates host types to device types (e.g. buffers to pointers or, as in this example, AutoBuffers to AutoIndexed Pointers) so that we encapsulate the differences in the host and device types.

The last line is the expected syntax for launching kernels, q.enqueue!kernel(dimensions)(args), akin to CUDA’s kernel<<<dimensions,queue>>>(args). The libraries for launching kernels are in development.

Unlike CUDA, where all the magic for transforming the above expression into code on the host lies in the compiler, q.enqueue!func(sizes)(args) will be processed by static introspection in DCompute's driver library. The sole reason we can do this in D is that we are able to query the mangled name the compiler will give to a symbol via the symbol’s .mangleof property. This, in combination with D’s easy-to-use and powerful templates, means we can significantly reduce the mental overhead associated with using the compute APIs. Also, implementing this in the library is much simpler, and therefore faster to do, than putting the same behaviour in the compiler. While this may not seem like much to CUDA users, it will be a breath of fresh air to OpenCL users (just look at steps 7-11 of the OpenCL vector add host code example).
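For the curious, here is a standalone sketch of .mangleof in ordinary D, independent of DCompute (saxpy is a hypothetical function):

void saxpy(float a, float* x, float* y) {}

// Prints the ABI-mangled symbol name at compile time,
// e.g. something like _D4test5saxpyFfPfPfZv for a module named test.
pragma(msg, saxpy.mangleof);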

While you can't do that just yet in DCompute, development should progress quickly and hopefully make it a reality soon.

I would like to thank John Colvin for the initial inspiration, Mike Parker for editing, and the LDC folks, David Nadlinger, Kai Nacke, Martin Kinke, with a special thanks to Johan Engelen, for their help with understanding the LDC codebase and reviewing my work.

If you would like to help develop DCompute (or be kept in the loop), feel free to drop a line at the libmir Gitter. Similarly, any efforts preparing the SPIR-V backend for inclusion into LLVM are also greatly appreciated.

[1] Southern hemisphere.

Go Your Own Way (Part One: The Stack)

Posted on

This is my third post in the GC series. In the first post, I introduced D’s garbage collector and the language features that require it, and touched on simple strategies to use it effectively. In the second post, I showed off the tools provided by the language and library to disable or prohibit the GC in specific parts of a code base, and how to use the compiler to assist in that endeavor. I recommended that D programs be written initially to embrace the GC, taking advantage of simple strategies to mitigate its impact, and later tuned to avoid the GC or further optimize its usage only when profiling shows it's warranted.

When garbage collection is turned off via GC.disable or prevented by the @nogc function annotation, memory will still need to be allocated from somewhere. And even when the GC is fully embraced, it’s still desirable to minimize the size and quantity of GC heap allocations. That means allocating either via the stack or via the non-GC heap. The focus of this post is the former. Non-GC heap allocations will be covered in my next post in this series.

Stack allocation

The simplest allocation strategy in D is the same as it is in C: avoid the heap and use the stack whenever possible. When a local array is needed and the size can be known at compile time, use a static rather than a dynamic array. Structs, which are value types and stack-allocated by default, should be preferred where possible over classes, which are reference types and are usually allocated from one heap or another. D’s compile-time features can present opportunities here that might not otherwise be available.

Static arrays

Static array declarations in D require the length to be known at compile time.

// OK
int[10] nums;

// Error: variable x cannot be read at compile time
int x = 10;
int[x] err;

Unlike dynamic arrays, static arrays can be initialized with array literals with no allocation taking place on the GC heap. The lengths must match; otherwise, the compiler will emit an error.

@nogc void main() {
    int[3] nums = [1, 2, 3];
}

Static arrays are automatically sliced when passed to any function taking a slice as a parameter, making them interchangeable with dynamic arrays.

void printNums(int[] nums) {
    import std.stdio : writeln;
    foreach(n; nums) writeln(n);
}

void main() {
    int[]  dnums = [0, 1, 2];
    int[3] snums = [0, 1, 2];
    printNums(dnums);
    printNums(snums);
}

When compiling with -vgc to find the potential GC allocations in a program and eliminate them where possible, this is an easy win. Just be wary of situations like the following:

int[] foo() {
    auto nums = [0, 1, 2];

    // Do work with nums...

    return nums;
}

Converting nums in this example to a static array would be a mistake. The return statement in that case would be returning a slice to stack-allocated memory, which is a programming error. Luckily, doing so will generate a compiler error.
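For clarity, this is the erroneous version (a sketch; it fails to compile):

int[] foo() {
    int[3] nums = [0, 1, 2];

    // Error: escaping a reference to local variable nums
    // (roughly the message DMD gives)
    return nums;
}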

On the other hand, if the return is conditional, it may be desirable to heap-allocate the array only when absolutely necessary rather than every time the function is called. In that scenario, a static array can be declared locally and a dynamic copy made on return. Enter the .dup property:

int[] foo() {
    int[3] nums = [0, 1, 2];

    // Let x = the result of some work with nums
    bool condition = x;

    if(condition) return nums.dup;
    else return [];
}

This function still uses the GC via .dup, but only allocates if it needs to and avoids allocation when it doesn’t. Note that [] is equivalent to null in this case, a slice (or dynamic array) with a .length of 0 and a .ptr of null.
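A quick sanity check of that equivalence (a sketch; these assertions pass with DMD):

void main() {
    int[] a = [];
    assert(a.length == 0);
    assert(a.ptr is null);
    assert(a is null); // an empty array literal is a null slice
}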

Structs vs. classes

Struct instances in D are allocated on the stack by default, but can be allocated on the heap when desired. Stack-allocated structs have deterministic destruction, with their destructors called as soon as the enclosing scope exits.

struct Foo {
    int x;
    ~this() {
        import std.stdio;
        writefln("#%s says bye!", x);
    }
}

void main() {
    Foo f1 = Foo(1);
    Foo f2 = Foo(2);
    Foo f3 = Foo(3);
}

As expected, this prints:

#3 says bye!
#2 says bye!
#1 says bye!

Classes, being reference types, are almost always allocated on the heap. Usually, that’s the GC heap via new, though it could also be the non-GC heap through a custom allocator. But there’s no rule saying they can’t be allocated on the stack. The standard library template std.typecons.scoped allows us to easily do so.

class Foo {
    int x;

    this(int x) {
        this.x = x;
    }

    ~this() {
        import std.stdio;
        writefln("#%s says bye!", x);
    }
}

void main() {
    import std.typecons : scoped;
    auto f1 = scoped!Foo(1);
    auto f2 = scoped!Foo(2);
    auto f3 = scoped!Foo(3);
}

Functionally, this is identical to the struct example above; it prints the same results. Deterministic destruction is achieved via the destroy function in DRuntime's implicitly imported object module, which allows destructors to be called outside of GC collections.
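destroy can also be called directly when deterministic destruction of a GC-allocated instance is desired. A minimal sketch:

class Bar {
    int x;
    this(int x) { this.x = x; }
    ~this() {
        import std.stdio : writefln;
        writefln("#%s says bye!", x);
    }
}

void main() {
    auto b = new Bar(1);
    destroy(b); // the destructor runs here, not at some future collection
}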

Note that neither scoped nor destroy is currently usable in @nogc functions. This isn’t necessarily a problem, as a function doesn’t have to be annotated as such to avoid the GC, but it can be a headache if you are trying to fit everything into a @nogc call tree. In future posts, we’ll look at some of the design issues that crop up when using @nogc and how to avoid them.

Generally, when implementing custom types in D, the choice between struct and class should depend on the type’s intended usage. POD types are obvious candidates for struct, whereas for types in something like a GUI system, where inheritance hierarchies and runtime interfaces are extremely useful, class is a more appropriate choice. Beyond those obvious cases, there are a number of other considerations which could be the focus of a separate blog post on the topic. For our purposes, just keep in mind that whether a type is implemented as a struct or a class need not always dictate whether instances can be allocated on the stack.


alloca

Given that D makes the standard C library available via DRuntime, alloca is also an option for stack allocation. It's an especially good candidate for arrays when you want to avoid or eliminate a local GC allocation, but the array size is only known at run time. The following example allocates a dynamic array with a runtime size on the stack.

import core.stdc.stdlib : alloca;

void main() {
    size_t size = 10;

    // Allocate room for size ints on the stack
    void* mem = alloca(size * int.sizeof);

    // Slice the memory block
    int[] arr = (cast(int*)mem)[0 .. size];
}

The same caution about using alloca in C applies here: be careful not to blow up the stack. And as with local static arrays, don’t return a slice of arr. Return arr.dup instead.
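Putting both tips together, here is a sketch of stack-allocated scratch space that's copied to the GC heap only because it must outlive the function (process is a hypothetical example, not from the post):

int[] process(size_t n) {
    import core.stdc.stdlib : alloca;

    // Scratch space on the stack, valid only inside this function
    int[] scratch = (cast(int*)alloca(n * int.sizeof))[0 .. n];

    // ... fill scratch with results ...

    return scratch.dup; // copy to the GC heap; never return scratch itself
}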

A simple example

Consider an implementation of a Queue data type. An idiomatic implementation in D is going to be a struct that’s templated on the type it’s intended to contain. In Java, collection usage is interface heavy and it’s recommended to declare an instance using an interface type rather than the implementation type. Structs in D can’t implement interfaces, but in many cases they can still be used to program to interfaces thanks to Design by Introspection (DbI). This allows programming to a common interface that is verified via compile-time introspection without the need for an interface type, so it can work with structs, classes and, thanks to Universal Function Call Syntax (UFCS), even free functions (when the functions are in scope).

D’s arrays are an obvious choice as the backing store for a Queue implementation. Moreover, there’s an opportunity to make the backing store a static array when a queue is intended to be bounded with a fixed size. Since it’s already a templated type, an additional parameter (a template value parameter with a default value) can easily be added to decide at compile time whether the array should be static and, if so, how much space it should require.

// A default Size of 0 means to use a dynamic array for the
// backing store; non-zero indicates a static array.
struct Queue(T, size_t Size = 0)
{
    // This constant will be inferred as a boolean. By making it
    // public, a DbI template outside of this module can determine
    // whether or not the Queue might grow.
    enum isFixedSize = Size > 0;

    void enqueue(T item)
    {
        static if(isFixedSize) {
            assert(_itemCount < _items.length);
        }
        else {
            ensureCapacity();
        }
        push(item);
    }

    T dequeue() {
        assert(_itemCount != 0);
        static if(isFixedSize) {
            return pop();
        }
        else {
            auto ret = pop();
            ensurePacked();
            return ret;
        }
    }

    // Only available on a growable array
    static if(!isFixedSize) {
        void reserve(size_t capacity) { /* Allocate space for new items */ }
    }

private:
    static if(isFixedSize) {
        T[Size] _items;
    }
    else T[] _items;

    size_t _head, _tail;
    size_t _itemCount;

    void push(T item) {
        /* Add item, update _head and _tail */
        static if(isFixedSize) { ... }
        else { ... }
    }

    T pop() {
        /* Remove item, update _head and _tail */
        static if(isFixedSize) { ... }
        else { ... }
    }

    // These are only available on a growable array
    static if(!isFixedSize) {
        void ensureCapacity() { /* Alloc memory if needed */ }
        void ensurePacked() { /* Shrink the array if needed */ }
    }
}

With this, the client can declare instances like so:

Queue!Foo qUnbounded;
Queue!(Foo, 128) qBounded;

qBounded requires no heap allocations. What happens with qUnbounded depends on the implementation. Moreover, compile-time introspection can be used to test if an instance is a fixed size or not. The isFixedSize constant is a convenience for that. Clients could alternatively use the built-in __traits(hasMember, T, "reserve") or the standard library function std.traits.hasMember!(T, "reserve") in one compile-time construct or another (__traits and std.traits are great for DbI, and the latter should be preferred when it provides similar functionality), but including the constant in the type is more convenient.

void doSomethingWithQueueInterface(T)(T queue)
{
    static if(T.isFixedSize) { ... }
    else { ... }
}
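For comparison, here is a sketch of the hasMember approach mentioned above (reserveIfPossible is a hypothetical helper, not part of the example):

void reserveIfPossible(T)(ref T queue, size_t capacity)
{
    import std.traits : hasMember;
    static if(hasMember!(T, "reserve"))
        queue.reserve(capacity);
}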


This has been a brief overview of a few options for stack allocation in D to avoid allocations from the GC heap. Making use of them when possible is an easy way to minimize the size and quantity of GC allocations, a proactive strategy for mitigating potential negative performance impacts from garbage collection.

The next post in this series will cover some of the options available for non-GC heap allocations.