
The ABC’s of Templates in D

D was not supposed to have templates.

Several months before Walter Bright released the first alpha version of the DMD compiler in December 2001, he completed a draft language specification in which he wrote:

Templates. A great idea in theory, but in practice leads to numbingly complex code to implement trivial things like a “next” pointer in a singly linked list.

The (freely available) HOPL IV paper, Origins of the D Programming Language, expands on this:

[Walter] initially objected to templates on the grounds that they added disproportionate complexity to the front end, they could lead to overly complex user code, and, as he had heard many C++ users complain, they had a syntax that was difficult to understand. He would later be convinced to change his mind.

It did take some convincing. As activity grew in the (then singular) D newsgroup, requests that templates be added to the language became more frequent. Eventually, Walter started mulling over how to approach templates in a way that was less confusing for both the programmer and the compiler. Then he had an epiphany: the difference between template parameters and function parameters is one of compile time vs. run time.

From this perspective, there’s no need to introduce a special template syntax (like the C++ style <T>) when there’s already a syntax for parameter lists in the form of (T). So a template declaration in D could look like this:

template foo(T, U) {
    // template members here
}

From there, the basic features fell into place and were expanded and enhanced over time.

In this article, we’re going to lay the foundation for future articles by introducing the most basic concepts and terminology of D’s template implementation. If you’ve never used templates before in any language, this may be confusing. That’s not unexpected. Even though many find D’s templates easier to understand than other implementations, the concept itself can still be confusing. You’ll find links at the end to some tutorial resources to help build a better understanding.

Template declarations

Inside a template declaration, one can nest any other valid D declaration:

template foo(T, U) {
    int x;
    T y;

    struct Bar {
        U thing;
    }

    void doSomething(T t, U u) {
        ...
    }
}

In the above example, the parameters T and U are template type parameters, meaning they are generic substitutes for concrete types, which might be built-in types like int, float, or any type the programmer might implement with a class or struct declaration. By declaring a template, it’s possible to, for example, write a single implementation of a function like doSomething that can accept multiple types for the same parameters. The compiler will generate as many copies of the function as it needs to accommodate the concrete types used in each unique template instantiation.
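As a quick preview (the ! instantiation syntax used here is explained later in this article), here is a trimmed-down sketch showing that each unique combination of template arguments produces its own copy of doSomething:

template foo(T, U) {
    void doSomething(T t, U u) { /* ... */ }
}

void main() {
    foo!(int, float).doSomething(1, 2.5f);      // generates doSomething(int, float)
    foo!(string, bool).doSomething("hi", true); // generates doSomething(string, bool)
}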

Other kinds of parameters are supported: value parameters, alias parameters, sequence (or variadic) parameters, and this parameters, all of which we’ll explore in future blog posts.

One name to rule them all

In practice, it’s not very common to implement templates with multiple members. By far, the most common form of template declaration is the single-member eponymous template. Consider the following:

template max(T) {
    T max(T a, T b) { ... }
}

An eponymous template can have multiple members that share the template name, but when there is only one, D provides us with an alternate template declaration syntax. In this example, we can opt for a normal function declaration that has the template parameter list in front of the function parameter list:

T max(T)(T a, T b) { ... }
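Here is a minimal sketch with the elided body filled in, plus an explicit instantiation of the kind covered in the next section (for real code, Phobos already provides std.algorithm.comparison.max):

T max(T)(T a, T b) {
    return a > b ? a : b;
}

unittest {
    assert(max!(int)(3, 7) == 7);
}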

The same holds for eponymous templates that declare an aggregate type:

// Instead of the longhand template declaration...
/* 
template MyStruct(T, U) {
    struct MyStruct { 
        T t;
        U u;
    }
}
*/

// ...just declare a struct with a type parameter list
struct MyStruct(T, U) {
    T t;
    U u;
}

Eponymous templates also offer a shortcut for instantiation, as we’ll see in the next section.

Instantiating templates

In relation to templates, the term instantiate means essentially the same as it does in relation to classes and structs: create an instance of something. A template instance is, essentially, a concrete implementation of the template in which the template parameters have been replaced by the arguments provided in the instantiation. For a template function, that means a new copy of the function is generated, just as if the programmer had written it. For a type declaration, a new copy of the declaration is generated, just as if the programmer had written it.

We’ll see an example, but first we need to see the syntax.

Explicit instantiations

An explicit instantiation is a template instance created by the programmer using the template instantiation syntax. To easily disambiguate template instantiations from function calls, D requires the template instantiation operator, !, to be present in explicit instantiations. If the template has multiple members, they can be accessed in the same manner that members of aggregates are accessed: using dot notation.

import std;

template Temp(T, U) {
    T x;
    struct Pair {
        T t;
        U u;
    }
}

void main()
{
    Temp!(int, float).x = 10;
    Temp!(int, float).Pair p;
    p.t = 4;
    p.u = 3.2;
    writeln(Temp!(int, float).x);
    writeln(p);            
}

Run it online at run.dlang.io

There is one template instantiation in this example: Temp!(int, float). Although it appears three times, it refers to the same instance of Temp, one in which T == int and U == float. The compiler will generate declarations of x and the Pair type as if the programmer had written the following:

int x;
struct Pair {
    int t;
    float u;
}

However, we can’t just refer to x and Pair by themselves. We might have other instantiations of the same template, like Temp!(double, long) or Temp!(MyStruct, short). To avoid conflict, the template members must be accessed through a namespace unique to each instantiation. In that regard, Temp!(int, float) is like a class or struct with static members; just as you would access a static x member variable in a struct Foo using the struct name, Foo.x, you access a template member using the template name, Temp!(int, float).x.

There is only ever one instance of the variable x for the instantiation Temp!(int, float), so no matter where you use it in a code base, you will always be reading and writing the same x. Hence, the first line of main isn’t declaring and initializing the variable x, but is assigning to the already declared variable. Temp!(int, float).Pair is a struct type, so after the declaration Temp!(int, float).Pair p, we can refer to p by itself. Unlike x, p is not a member of the template. The type Pair is a member, so we can’t refer to it without the prefix.
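The following self-contained sketch illustrates the point: two different instantiations of Temp each get their own x.

import std.stdio : writeln;

template Temp(T, U) {
    T x;
    struct Pair {
        T t;
        U u;
    }
}

void main() {
    Temp!(int, float).x = 10;
    Temp!(double, long).x = 42;     // a completely separate variable
    writeln(Temp!(int, float).x);   // prints 10
    writeln(Temp!(double, long).x); // prints 42
}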

Aliased instantiations

It’s possible to simplify the syntax by using an alias declaration for the instantiation:

import std;

template Temp(T, U) {
    T x;
    struct Pair {
        T t;
        U u;
    }
}
alias TempIF = Temp!(int, float);

void main()
{
    TempIF.x = 10;
    TempIF.Pair p = TempIF.Pair(4, 3.2);
    writeln(TempIF.x);
    writeln(p);            
}

Run it online at run.dlang.io

Since we no longer need to type the template argument list, using a struct literal to initialize p, in this case TempIF.Pair(4, 3.2), looks cleaner than it would with the template arguments. So I opted to use that here rather than first declare p and then initialize its members. We can trim it down still more using D’s auto attribute, but whether this is cleaner is a matter of preference:

auto p = TempIF.Pair(4, 3.2);

Run it online at run.dlang.io

Instantiating eponymous templates

Not only do eponymous templates have a shorthand declaration syntax, they also allow for a shorthand instantiation syntax. Let’s take the x out of Temp and rename the template to Pair. We’re left with a Pair template that provides a declaration struct Pair. Then we can take advantage of both the shorthand declaration and instantiation syntaxes:

import std;

struct Pair(T, U) {
    T t;
    U u;
}

// We can instantiate Pair without the dot operator, but still use
// the alias to avoid writing the argument list every time
alias PairIF = Pair!(int, float);

void main()
{
    PairIF p = PairIF(4, 3.2);
    writeln(p);            
}

Run it online at run.dlang.io

The shorthand instantiation syntax means we don’t have to use the dot operator to access the Pair type.

Even shorter syntax

When a template instantiation is passed only one argument, and the argument’s symbol is a single token (e.g., int as opposed to int[] or int*), the parentheses can be dropped from the template argument list. Take the standard library template function std.conv.to as an example:

void main() {
    import std.stdio : writeln;
    import std.conv : to;
    writeln(to!(int)("42"));
}

Run it online at run.dlang.io

std.conv.to is an eponymous template, so we can use the shortened instantiation syntax. But the fact that we’ve instantiated it as an argument to the writeln function means we’ve got three pairs of parentheses in close proximity. That sort of thing can impair readability if it pops up too often. We could move it out and store it in a variable if we really care about it, but since we’ve only got one template argument, this is a good place to drop the parentheses from the template argument list.

writeln(to!int("42"));

Whether that looks better is another case where it’s up to preference, but it’s fairly idiomatic these days to drop the parentheses for a single template argument no matter where the instantiation appears.

Not done with the short stuff yet

std.conv.to is an interesting example because it’s an eponymous template with multiple members that share the template name. That means that it must be declared using the longform syntax (as you can see in the source code), but we can still instantiate it without the dot notation. It’s also interesting because, even though it accepts two template arguments, it is generally only instantiated with one. This is because the second template argument can be deduced by the compiler based on the function argument.
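A small sketch of that deduction: only the target type is written out, while the source type is deduced from the function argument.

void main() {
    import std.conv : to;
    auto n = to!int("42");       // target type given, source type deduced from "42"
    auto d = to!double("3.14");  // same pattern with a different target type
    assert(n == 42 && d == 3.14);
}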

For a somewhat simpler example, take a look at std.utf.toUTF8:

void main()
{    
    import std.stdio : writeln;
    import std.utf : toUTF8;
    string s1 = toUTF8("This was a UTF-16 string."w);
    string s2 = toUTF8("This was a UTF-32 string."d);
    writeln(s1);
    writeln(s2);
}

Run it online at run.dlang.io

Unlike std.conv.to, toUTF8 takes exactly one parameter. The signature of the declaration looks like this:

string toUTF8(S)(S s)

But in the example, we aren’t passing a type as a template argument. Just as the compiler was able to deduce to’s second argument, it’s able to deduce toUTF8’s sole argument.

toUTF8 is an eponymous function template with a template parameter S and a function parameter s of type S. There are two things we can say about this: 1) the return type is independent of the template parameter and 2) the template parameter is the type of the function parameter. Because of 2), the compiler has all the information it needs from the function call itself and has no need for the template argument in the instantiation.

Take the first call to the toUTF8 function in the declaration of s1. In long form, it would be toUTF8!(wstring)("This was a UTF-16 string."w). The w at the end of the string literal indicates it is of type wstring, with UTF-16 encoding, as opposed to string, with UTF-8 encoding (the default for string literals). In this situation, having to specify !(wstring) in the template instantiation is completely redundant. The compiler already knows that the argument to the function is a wstring. It doesn’t need the programmer to tell it that. So we can drop the template instantiation operator and the template argument list, leaving what looks like a simple function call. The compiler knows that toUTF8 is a template, knows that the template is declared with one type parameter, and knows that the type should be wstring.

Similarly, the d suffix on the literal in the initialization of s2 indicates a UTF-32 encoded dstring, and the compiler knows all it needs to know for that instantiation. So in this case also, we drop the template argument and make the instantiation appear as a function call.

It does seem silly to convert a wstring or dstring literal to a string when we could just drop the w and d suffixes and have string literals that we can directly assign to s1 and s2. Contrived examples and all that. But the syntax the examples are demonstrating really shines when we work with variables.

wstring ws = "This is a UTF-16 string"w;
string s = ws.toUTF8;
writeln(s);

Run it online at run.dlang.io

Take a close look at the initialization of s. This combines the shorthand template instantiation syntax with Uniform Function Call Syntax (UFCS) and D’s shorthand function call syntax. We’ve already seen the template syntax in this post. As for the other two:

  • UFCS allows using dot notation on the first argument of any function call so that it appears as if a member function is being called. By itself, it doesn’t offer much benefit aside from, some believe, readability. Generally, it’s a matter of preference. But this feature can seriously simplify the implementation of generic templates that operate on aggregate types and built-in types.
  • The parentheses in a function call can be dropped when the function takes no arguments, so that foo() becomes foo. In this case, the function takes one argument, but we’ve taken it out of the argument list using UFCS, so the argument list is now empty and the parentheses can be dropped. (The compiler will lower this to a normal function call, toUTF8(ws)—it’s purely programmer convenience.) When and whether to do this in the general case is a matter of preference. The big win, again, is in the simplification of template implementations: a template can be implemented to accept a type T with a member variable foo or a member function foo, or a free function foo for which the first argument is of type T.
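Here is a minimal sketch (using a hypothetical addTwo function) that ties the two points above together:

int addTwo(int x) { return x + 2; }

void main() {
    int a = addTwo(5);  // ordinary call
    int b = 5.addTwo(); // UFCS: 5 becomes the first argument
    int c = 5.addTwo;   // argument list now empty, so the parentheses can be dropped
    assert(a == b && b == c);
}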

All of this shorthand syntax is employed to great effect in D’s range API, which allows chained function calls on types that are completely hidden from the public-facing API (aka Voldemort types).

More to come

In future articles, we’ll explore the different kinds of template parameters, introduce template constraints, see inside a template instantiation, and take a look at some of the ways people combine templates with D’s powerful compile-time features in the real world. In the meantime, here are some template tutorial resources to keep you busy:

  • Ali Çehreli’s Programming in D is an excellent introduction to D in general, suitable even for those with little programming experience. The two chapters on templates (the first called ‘Templates’ and the second ‘More Templates’) provide a great introduction. (The online version of the book is free, but if you find it useful, please consider throwing Ali some love by buying the ebook or print version linked at the top of the TOC.)
  • More experienced programmers may find Philippe Sigaud’s ‘D Template Tutorial’ a good read. It’s almost a decade old, but still relevant (and still free!). This tutorial goes beyond the basics into some of the more advanced template features. It includes a look at D’s compile-time features, provides a number of examples, and sports an appendix detailing the is expression (a key component of template constraints). It can also serve as a great reference when reading open source D code until you learn your way around D templates.

There are other resources, though other than my book ‘Learning D’ (this is a referral link that will benefit the D Language Foundation), I’m not aware of any that provide as much detail as the above. (And my book is currently not freely available). Eventually, we’ll be able to add this blog series to the list.

Thanks to Stefan Koch for reviewing this article.

A Pattern for Head-mutable Structures

When Andrei Alexandrescu introduced ranges to the D programming language, the gap between built-in and user-defined types (UDTs) narrowed, enabling new abstractions and greater composability. Even today, though, UDTs are still second-class citizens in D. One example of this is support for head mutability—the ability to manipulate a reference without changing the referenced value(s). This document details a pattern that will further narrow the UDT gap by introducing functions for defining and working with head-mutable user-defined types.

Introduction

D is neither Kernel nor Scheme—it has first-class and second-class citizens. Among its first-class citizens are arrays and pointers. One of the benefits these types enjoy is implicit conversion to head-mutable. For instance, const(T[]) is implicitly convertible to const(T)[]. Partly to address this difference, D has many ways to define how one type may convert to or behave like another – alias this, constructors, opDispatch, opCast, and, of course, subclassing. The way pointers and dynamic arrays decay into their head-mutable variants is different from the semantics of any of these features, so we would need to define a new type of conversion if we were to mimic this behavior.
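For built-in arrays, that decay looks like this:

unittest {
    const(int[]) a = [1, 2, 3];
    const(int)[] b = a; // fine: the reference is head-mutable, the elements stay const
    b = b[1 .. $];      // the slice itself can be rebound
    // b[0] = 7;        // error: the elements are still const
}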

Changing the compiler and the language to permit yet another way of converting one type into another is not desirable: it makes the job harder for compiler writers, makes an already complex language even harder to learn, and any implicit conversion can make code harder to read and maintain. If we can define conversions to head-mutable data structures without introducing compiler or language changes, the feature also becomes available to users sooner: such a mechanism would not require changes in the standard library, and users could gradually implement it in their own code, benefiting later when the standard library catches up.

Unqual

The tool used today to get a head-mutable version of a type is std.traits.Unqual. In some cases, this is the right tool—it strips away one layer of const, immutable, inout, and shared. For some types though, it either does not give a head-mutable result, or it gives a head-mutable result with mutable indirections:

struct S(T) {
    T[] arr;
}

With Unqual, this code fails to compile:

void foo(T)(T a) {
    Unqual!T b = a; // cannot implicitly convert immutable(S!int) to S!int
}

unittest {
    immutable s = S!int([1,2,3]);
    foo(s);
}

A programmer who sees that message hopefully finds a different way to achieve the same goal. However, the error message says only that the implicit conversion failed, hinting that an explicit conversion might be possible, perhaps even without issue. An inexperienced programmer, or one who knows that doing so is safe right now, could use a cast to shut the compiler up:

void bar(T)(T a) {
    Unqual!T b = cast(Unqual!T)a;
    b.arr[0] = 4;
}

unittest {
    immutable s = S!int([1,2,3]);
    bar(s);
    assert(s.arr[0] == 1); // Fails, since bar() changed it.
}

If, instead of S!int, the programmer had used int[], the first example would have compiled, and the cast in the second example would have never seen the light of day. However, since S!int is a user-defined type, we are forced to write a templated function that either fails to compile for some types it really should support or gives undesirable behavior at run time.

headMutable()

Clearly, we should be able to do better than Unqual, and in fact we can. D has template this parameters which pick up on the dynamic type of the this reference, and with that, its const or immutable status:

struct S {
    void foo(this T)() {
        import std.stdio : writeln;
        writeln(T.stringof);
    }
}
unittest {
    S s1;
    const S s2;
    s1.foo(); // Prints "S".
    s2.foo(); // Prints "const(S)".
}

This way, the type has the necessary knowledge of which type qualifiers a head-mutable version needs. We can now define a method that uses this information to create the correct head-mutable type:

struct S(T) {
    T[] arr;
    auto headMutable(this This)() const {
        import std.traits : CopyTypeQualifiers;
        return S!(CopyTypeQualifiers!(This, T))(arr);
    }
}
unittest {
    const a = S!int([1,2,3]);
    auto b = a.headMutable();
    assert(is(typeof(b) == S!(const int))); // The correct part of the type is now const.
    assert(a.arr is b.arr); // It's the same array, no copying has taken place.
    b.arr[0] = 3; // Correctly fails to compile: cannot modify const expression.
}

Thanks to the magic of Uniform Function Call Syntax, we can also define headMutable() for built-in types:

auto headMutable(T)(T value) {
    import std.traits;
    import std.typecons : rebindable;
    static if (isPointer!T) {
        // T is a pointer and decays naturally.
        return value;
    } else static if (isDynamicArray!T) {
        // T is a dynamic array and decays naturally.
        return value;
    } else static if (!hasAliasing!(Unqual!T)) {
        // T is a POD datatype - either a built-in type, or a struct with only POD members.
        return cast(Unqual!T)value;
    } else static if (is(T == class)) {
        // Classes are reference types, so only the reference may be made head-mutable.
        return rebindable(value);
    } else static if (isAssociativeArray!T) {
        // AAs are reference types, so only the reference may be made head-mutable.
        return rebindable(value);
    } else {
        static assert(false, "Type "~T.stringof~" cannot be made head-mutable.");
    }
}
unittest {
    const(int*[3]) a = [null, null, null];
    auto b = a.headMutable();
    assert(is(typeof(b) == const(int)*[3]));
}

Now, whenever we need a head-mutable variable to point to tail-const data, we can simply call headMutable() on the value we need to store. Unlike the ham-fisted approach of casting to Unqual!T, which may throw away important type information and also silences any error messages that may inform you of the foolishness of your actions, attempting to call headMutable() on a type that doesn’t support it will give an error message explaining what you tried to do and why it didn’t work (“Type T cannot be made head-mutable.”). The only thing missing now is a way to get the head-mutable type. Since headMutable() returns a value of that type, and is defined for all types we can convert to head-mutable, that’s a template one-liner:

import std.traits : ReturnType;
alias HeadMutable(T) = ReturnType!((T t) => t.headMutable());

Where Unqual returns a type with potentially the wrong semantics and only gives an error once you try assigning to it, HeadMutable disallows creating the type in the first place. The programmer will have to deal with that before casting or otherwise coercing a value into the variable. Since HeadMutable uses headMutable() to figure out the type, it also gives the same informative error message when it fails.
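As a sketch of the difference in practice, assuming the S(T) definition with headMutable() and the HeadMutable alias from above are in scope:

unittest {
    const s = S!int([1, 2, 3]);
    alias HM = HeadMutable!(typeof(s)); // resolves to S!(const int)
    HM hm = s.headMutable();
    static assert(is(HM == S!(const int)));
    hm = S!(const int)([4, 5, 6]);      // the handle itself can be rebound
}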

Lastly, since one common use case requires us to preserve the tail-const or tail-immutable properties of a type, it is beneficial to define a template that converts to head-mutable while propagating const or immutable using std.traits.CopyTypeQualifiers:

import std.traits : CopyTypeQualifiers;
alias HeadMutable(T, ConstSource) = HeadMutable!(CopyTypeQualifiers!(ConstSource, T));

This way, immutable(MyStruct!int) can become MyStruct!(immutable int), while the const version would propagate constness instead of immutability.
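A couple of static checks illustrate the idea, again assuming the S(T) struct from the headMutable() section above:

static assert(is(HeadMutable!(S!int, immutable int) == S!(immutable int)));
static assert(is(HeadMutable!(S!int, const int) == S!(const int)));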

Example Code

Since the pattern for range functions in Phobos is to have a constructor function (e.g. map) that forwards its arguments to a range type (e.g. MapResult), the code changes required to use headMutable() are rather limited. Likewise, user code should generally not need to change at all in order to use headMutable(). To give an impression of the code changes needed, I have implemented map and equal:

import std.range;

// Note that we check not if R is a range, but if HeadMutable!R is
auto map(alias Fn, R)(R range) if (isInputRange!(HeadMutable!R)) {
    // Using HeadMutable!R and range.headMutable() here.
    // This is basically the extent to which code that uses head-mutable data types will need to change.
    return MapResult!(Fn, HeadMutable!R)(range.headMutable());
}

struct MapResult(alias Fn, R) {
    R range;
    
    this(R _range) {
        range = _range;
    }
    
    void popFront() {
        range.popFront();
    }
    
    @property
    auto ref front() {
        return Fn(range.front);
    }
    
    @property
    bool empty() {
        return range.empty;
    }
    
    static if (isBidirectionalRange!R) {
        @property
        auto ref back() {
            return Fn(range.back);
        }

        void popBack() {
            range.popBack();
        }
    }

    static if (hasLength!R) {
        @property
        auto length() {
            return range.length;
        }
        alias opDollar = length;
    }

    static if (isRandomAccessRange!R) {
        auto ref opIndex(size_t idx) {
            return Fn(range[idx]);
        }
    }

    static if (isForwardRange!R) {
        @property
        auto save() {
            return MapResult(range.save);
        }
    }
    
    static if (hasSlicing!R) {
        auto opSlice(size_t from, size_t to) {
            return MapResult(range[from..to]);
        }
    }
    
    // All the above is as you would normally write it.
    // We also need to implement headMutable().
    // Generally, headMutable() will look very much like this - instantiate the same
    // type template that defines typeof(this), use HeadMutable!(T, ConstSource) to make
    // the right parts const or immutable, and call headMutable() on fields as we pass
    // them to the head-mutable type.
    auto headMutable(this This)() const {
        alias HeadMutableMapResult = MapResult!(Fn, HeadMutable!(R, This));
        return HeadMutableMapResult(range.headMutable());
    }
}

auto equal(R1, R2)(R1 r1, R2 r2) if (isInputRange!(HeadMutable!R1) && isInputRange!(HeadMutable!R2)) {
    // Need to get head-mutable version of the parameters to iterate over them.
    auto _r1 = r1.headMutable();
    auto _r2 = r2.headMutable();
    while (!_r1.empty && !_r2.empty) {
        if (_r1.front != _r2.front) return false;
        _r1.popFront();
        _r2.popFront();
    }
    return _r1.empty && _r2.empty;
}

unittest {
    // User code does not use headMutable at all:
    const arr = [1,2,3];
    const squares = arr.map!(a => a*a);
    const squaresPlusTwo = squares.map!(a => a+2);
    assert(equal(squaresPlusTwo, [3, 6, 11]));
}

(Note that these implementations are simplified slightly from the Phobos code to better showcase the use of headMutable.)

The unittest block shows a use case where the current Phobos map would fail—it is perfectly possible to create a const MapResult, but there is no way of iterating over it. Note that only two functions are impacted by the addition of headMutable(): map tests if HeadMutable!R is an input range and converts its arguments to head-mutable when passing them to MapResult, and MapResult needs to implement headMutable(). The rest of the code is exactly as you would otherwise write it.

The implementation of equal() shows a situation where implicit conversions would be beneficial. For const(int[]) the call to headMutable() is superfluous—it is implicitly converted to const(int)[] when passed to the function. For user-defined types however, this is not the case, so the call is necessary in the general case.

While I have chosen to implement a range here, ranges are merely the most common example of a place where headMutable would be useful; the idea has merits beyond ranges. Another type in the standard library that would benefit from headMutable is RefCounted!T: const(RefCounted!(T)) should convert to RefCounted!(const(T)).

Why not Tail-Const?

In previous discussions of this problem, the solution has been described as tail-const, and a function tailConst() has been proposed. While this idea might at first seem the most intuitive solution, it has some problems, which together make headMutable() far superior.

The main problem with tailConst() is that it does not play well with D’s existing const system. It needs to be called on a mutable value, and there is no way to convert a const(Foo!T) to Foo!(const(T)). It thus requires that the programmer explicitly call tailConst() on any value that is to be passed to a function expecting a non-mutable value and abstain from using const or immutable to convey the same information. This creates a separate world of tail-constness and plays havoc with generic code, which consequently has no way to guarantee that it won’t mutate its arguments.

Secondly, the onus is placed on library users to call tailConst() whenever they pass an argument anywhere, causing an inversion of responsibility: the user has to tell the library that it is not allowed to edit the data instead of the other way around. In the best case, this merely causes unnecessary verbiage. In other cases, the omission of const will lead to mutation of data expected to be immutable.

A minor quibble in comparison is that the tail-const solution also requires the existence of tailImmutable to cover the cases where the values are immutable.

Issues

The ideas outlined in this document concern only conversion to head-mutable. A related issue is conversion to tail const, e.g. from RefCounted!T or RefCounted!(immutable T) to RefCounted!(const T), a conversion that, again, is implicit for arrays and pointers in D today.

One issue that may be serious is the fact that headMutable often cannot be @safe and may, in fact, need to rely on undefined behavior in some places. For instance, RefCounted!T contains a pointer to the actual ref count. For immutable(RefCounted!T), headMutable() would need to cast away immutable, which is undefined behavior per the spec.

The Compiler Solution

It is logical to think that, as with built-in types, headMutable() could be elided in its entirety, and the compiler could handle the conversions for us. In many cases, this would be possible, and in fact the compiler already does so for POD types like struct S { int n; }—a const or immutable S may be assigned to a mutable variable of type S. This breaks down, however, when the type includes some level of mutable indirection. For templated types it would be possible to wiggle the template parameters to see if the resulting type compiles and has fields with the same offsets and similar types, but even such an intelligent solution breaks down in the presence of D’s Turing-complete template system, and some cases will always need to be handled by the implementer of a type.

It is also a virtue that the logic behind such an implementation be understandable to the average D programmer. The best case result of that not being true is that the forums would be inundated with a flood of posts about why types don’t convert the way users expect them to.

For these reasons, headMutable() will be necessary even with compiler support. But what would that support look like? Implicit casting to head-mutable happens in the language today in two situations:

  • Assignment to head-mutable variables: const(int)[] a = create!(const(int[]))(); (all POD types, pointers and arrays)
  • Function calls: fun(create!(const(int[]))()); (only pointers and arrays)

The first is covered by existing language features (alias headMutable this; fits the bill perfectly). The second is not but is equivalent to calling .headMutable whenever a const or immutable value is passed to a function that does not explicitly expect a const or immutable argument. This would change the behavior of existing code, in that templated functions would prefer a.headMutable over a, but would greatly improve the experience of working with const types that do define headMutable(). If headMutable is correctly implemented, the different choice of template instantiations should not cause any actual breakage.

Future Work

While this document proposes to implement the described feature without any changes to the compiler or language, it would be possible for the compiler in the future to recognize headMutable() and call it whenever a type that defines that method is passed to a function that doesn’t explicitly take exactly that type, or upon assignment to a variable that matches headMutable()’s return value. This behavior would mirror the current behavior of pointers and arrays.

Conclusion

It is possible to create a framework for defining head-mutable types in D today without compiler or language changes. It requires a little more code in the methods that use head-mutable types but offers a solution to a problem that has bothered the D community for a long time.

While this document deals mostly with ranges, other types will also benefit from this pattern: smart pointers and mutable graphs with immutable nodes are but two possible examples.

Definitions

Head-mutable

A type is head-mutable if some or all of its members without indirections are mutable. Note that a head-mutable datatype may also have const or immutable members without indirections; the requirement is merely that some subset of its members may be mutated. A head-mutable datatype may be tail-const, tail-immutable or tail-mutable—head-mutable only refers to its non-indirected members. Examples of head-mutable types include const(int)[], int*, string, and Rebindable!MyClass. Types without indirections (like int, float and struct S { int n; }) are trivially head-mutable.

Tail-const

A type is tail-const if some of its members with indirections have the const type qualifier. A tail-const type may be head-mutable or head-const. Examples of tail-const types are const(int)*, const(int[]), const(immutable(int)[])* and string.

Source

The source code for HeadMutable and headMutable is available here.

wc in D: 712 Characters Without a Single Branch

After reading “Beating C With 80 Lines Of Haskell: Wc”, which I found on Hacker News, I thought D could do better. So I wrote a wc in D.

The Program

It consists of one file and has 34 lines and 712 characters.

import std.stdio : writefln, File;
import std.algorithm : map, fold, splitter;
import std.range : walkLength;
import std.typecons : Yes;
import std.uni : byCodePoint;

struct Line {
	size_t chars;
	size_t words;
}

struct Output {
	size_t lines;
	size_t words;
	size_t chars;
}

Output combine(Output a, Line b) pure nothrow {
	return Output(a.lines + 1, a.words + b.words, a.chars + b.chars);
}

Line toLine(char[] l) pure {
	return Line(l.byCodePoint.walkLength, l.splitter.walkLength);
}

void main(string[] args) {
	auto f = File(args[1]);
	Output o = f
		.byLine(Yes.keepTerminator)
		.map!(l => toLine(l))
		.fold!(combine)(Output(0, 0, 0));

	writefln!"%u %u %u %s"(o.lines, o.words, o.chars, args[1]);
}

Sure, it is using Phobos, D’s standard library, but then why wouldn’t it? Phobos is awesome and ships with every D compiler. The program itself does not contain a single if statement. The Haskell wc implementation has several if statements. The D program, apart from the main function, contains three tiny functions. I could have easily put all the functionality in one range chain, but then it probably would have exceeded 80 characters per line. That’s a major code-smell.

The Performance

Is the D wc faster than the coreutils wc? No, but it took me 15 minutes to write mine (I had to search for walkLength, because I forgot its name).

file       lines  bytes  coreutils            haskell              D
app.d      46     906    3.5 ms +- 1.9 ms     39.6 ms +- 7.8 ms    8.9 ms +- 2.1 ms
big.txt    862    64k    4.7 ms +- 2.0 ms     39.6 ms +- 7.8 ms    9.8 ms +- 2.1 ms
vbig.txt   1.7M   96M    658.6 ms +- 24.5 ms  226.4 ms +- 29.5 ms  1.102 s +- 0.022 s
vbig2.txt  12.1M  671M   4.4 s +- 0.058 s     1.1 s +- 0.039 s     7.4 s +- 0.085 s

Memory:

file       coreutils  haskell  D
app.d      2052K      7228K    7708K
big.txt    2112K      7512K    7616K
vbig.txt   2288K      42620K   7712K
vbig2.txt  2360K      50860K   7736K

Is the Haskell wc faster? For big files, absolutely, but then it is using threads. For small files, GNU’s coreutils still beats the competition. At this stage my version is very likely IO bound, and it’s fast enough anyway.

I’ll not claim that one language is faster than another. If you spend a chunk of time on optimizing a micro-benchmark, you are likely going to beat the competition. That’s not real life. But I will claim that functional programming in D gives functional programming in Haskell a run for its money.

A Bit About Ranges

A range is an abstraction that you can consume through iteration without consuming the underlying collection (if there is one). Technically, a range can be a struct or a class that adheres to one of a handful of Range interfaces. The most basic form, the InputRange, requires the function

void popFront();

and two members or properties:

T front;
bool empty;

T is the generic type of the elements the range iterates.

In D, ranges are special in a way that other objects are not. When a range is given to a foreach statement, the compiler does a little rewrite.

foreach (e; range) { ... }

is rewritten to

for (auto __r = range; !__r.empty; __r.popFront()) {
    auto e = __r.front;
    ...
}

auto e = infers the type and is equivalent to T e =.

Given this knowledge, building a range that can be used by foreach is easy.

struct Iota {
	int front;
	int end;

	@property bool empty() const {
		return this.front == this.end;
	}

	void popFront() {
		++this.front;
	}
}

unittest {
	import std.stdio;
	foreach(it; Iota(0, 10)) {
		writeln(it);
	}
}

Iota is a very simple range. It functions as a generator, having no underlying collection. It iterates integers from front to end in steps of one. This snippet introduces a little bit of D syntax.

@property bool empty() const {

The @property attribute allows us to use the function empty the same way as a member variable (calling the function without the parentheses). The trailing const means that we don’t modify any data of the instance we call empty on. The built-in unit test prints the numbers 0 to 9.

Another small feature is the lack of an explicit constructor. The struct Iota has two member variables of type int. In the foreach statement in the test, we create an Iota instance as if it had a constructor that takes two ints. This is a struct literal. When the D compiler sees this, and the struct has no matching constructor, the ints will be assigned to the struct’s member variables from top to bottom in the order of declaration.
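Here is a tiny standalone illustration of struct literals, using a throwaway Point type:

struct Point {
	int x;
	int y;
}

unittest {
	auto p = Point(3, 4); // no constructor declared: x = 3, y = 4, in declaration order
	assert(p.x == 3 && p.y == 4);
}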

The relation between the three members is really simple. If empty is false, front returns the current element of the iteration; after a call to popFront, front will return the next element. After calling popFront the value of empty might have changed. If it is true, there are no more elements to iterate and any further calls to front are not valid. According to the InputRange documentation:

  • front can be legally evaluated if and only if evaluating empty has, or would have, equaled false.
  • front can be evaluated multiple times without calling popFront or otherwise mutating the range object or the underlying data, and it yields the same result for every evaluation.

Now, using foreach statements, or loops in general, is not really functional in my book. Let’s say we want to filter out all the odd numbers from the Iota range. We could put an if inside the foreach block, but that would only make it worse. It would be nicer if we had a range that takes a range and a predicate that can decide if an element is okay to pass along or not.

struct Filter {
	Iota input;
	bool function(int) predicate;

	this(Iota input, bool function(int) predicate) {
		this.input = input;
		this.predicate = predicate;
		this.testAndIterate();
	}

	void testAndIterate() {
		while(!this.input.empty
				&& !this.predicate(this.input.front))
		{
			this.input.popFront();
		}
	}

	void popFront() {
		this.input.popFront();
		this.testAndIterate();
	}

	@property int front() {
		return this.input.front;
	}

	@property bool empty() const {
		return this.input.empty;
	}
}

bool isEven(int a) {
	return a % 2 == 0;
}

unittest {
	import std.stdio : writeln;
	foreach(it; Filter(Iota(0,10), &isEven)) {
		writeln(it);
	}
}

Filter is again really simple: it takes one Iota and a function pointer. On construction of Filter, we call testAndIterate, which pops elements from Iota until it is either empty or the predicate returns true. The idea is that the passed predicate decides what to filter out and what to keep. The properties front and empty just forward to Iota. The only thing that actually does any work is popFront. It pops the current element and calls testAndIterate. That’s it. That’s an implementation of filter.

Sure, there is a new while loop in testAndIterate, but rewriting that with recursion is just silly, in my opinion. What makes D great is that you can use the right tool for the job. Functional programming is fine and dandy a lot of the time, but sometimes it’s not. If a bit of inline assembly would be necessary or nicer, use that.

The call to Filter still does not look very nice. Assuming, you are used to reading from left to right, Filter comes before Iota, even though it is executed after Iota. D has no pipe operator, but it does have Uniform Function Call Syntax (UFCS). If an expression can be implicitly converted to the first parameter of a function, the function can be called like it is a member function of the type of the expression. That’s a lot of words, I know. An example helps:

string foo(string a) {
	return a ~ "World";
}

unittest {
	string a = foo("Hello ");
	string b = "Hello ".foo();
	assert(a == b);
}

The above example shows two calls to the function foo. As the assert indicates, both calls are equivalent. What does that mean for our Iota Filter example? UFCS allows us to rewrite the unit test to:

unittest {
	import std.stdio : writeln;
	foreach(it; Iota(1,10).Filter(&isEven)) {
		writeln(it);
	}
}

Implementing a map/transform range should now be possible for every reader. Sure, Filter can be made more abstract through the use of templates, but that’s just work, nothing conceptually new.
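For reference, here is one possible non-templated Map in the same style as Filter (a sketch only; std.algorithm.map is the real thing):

struct Map {
	Iota input;
	int function(int) fn;

	@property int front() {
		return this.fn(this.input.front);
	}

	@property bool empty() const {
		return this.input.empty;
	}

	void popFront() {
		this.input.popFront();
	}
}

int square(int a) {
	return a * a;
}

unittest {
	import std.stdio : writeln;
	foreach (it; Map(Iota(0, 5), &square)) {
		writeln(it); // prints 0, 1, 4, 9, 16
	}
}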

Of course, there are different kinds of ranges, like a bidirectional range. Guess what that allows you to do. A small tip: a bidirectional range has two new primitives called back and popBack. There are other range types as well, but after you understand the input range demonstrated twice above, you pretty much know them all.

P.S. Just to be clear, do not implement your own filter, map, or fold; the D standard library Phobos has everything you ever need. Have a look at std.algorithm and std.range.

About the Author

Robert Schadek received a master’s degree in Computer Science at the University of Oldenburg. His master’s thesis, titled “DMCD A Distributed Multithreading Caching D Compiler”, involved building a D compiler from scratch. From 2012 to 2018 he was a computer science PhD student at the University of Oldenburg, where his research focused on quorum systems in combination with graphs. Since 2018 he has been happily using D in his day job working for Symmetry Investments.

What is Symmetry Investments?

Symmetry Investments is a global investment company with offices in Hong Kong, Singapore and London. We have been in business since 2014 after successfully spinning off from a major New York-based hedge fund.

At Symmetry, we seek to engage in intelligent risk-taking to create value for our clients, partners and employees. We derive our edge from our capacity to generate Win-Wins – in the broadest sense. Win-Win is our fundamental ethical and strategic principle. By generating Win-Wins, we can create unique solutions that reconcile perspectives that are usually seen as incompatible or opposites, and encompass the best that each side has to offer. We integrate fixed-income arbitrage with global macro strategies in a novel way. We invent and develop technology that focuses on the potential of human-machine integration. We build systems where machines do what they do best, supporting people to do what people do best. We are creating a collaborative meritocracy: a culture where individual contribution serves both personal and collective goals – and is rewarded accordingly. We value both ownership thinking AND cooperative team spirit, self-realisation AND community.

People at Symmetry Investments have been active participants in the D community since 2014. We have sponsored the development of excel-d, dpp, autowrap, libmir, and various other projects. We started Symmetry Autumn of Code in 2018 and hosted DConf 2019 in London.

My Vision of D’s Future

When Andrei Alexandrescu stepped down as deputy leader of the D programming language, I was asked to take over the role going forward. It’s needless to say, but I’ll say it anyway, that those are some pretty big shoes to fill.

I’m still settling into my new role in the community and figuring out how I want to do things and what those things even are. None of this happens in a vacuum either, since Walter needs to be on board as well.

I was asked on the D forums to write a blog post on my “dreams and way forward for D”, so here it is. What I’d like to happen with D in the near future:

Memory safety

“But D has a GC!”, I hear you exclaim. Yes, but it’s also a systems programming language with value types and pointers, meaning that today, D isn’t memory safe. DIP1000 was a step in the right direction, but we have to be memory safe unless programmers opt out via an “I know what I’m doing” @trusted block or function. This includes transitioning to @safe by default.

Safe and easy concurrency

We’re mostly there—using the actor model eliminates a lot of problems that would otherwise naturally occur. We need to finalize shared and make everything @safe as well.

Make D the default implementation language

D’s static reflection and code generation capabilities make it an ideal candidate to implement a codebase that needs to be called from several different languages and environments (e.g. Python, Excel, R, …). Traditionally this is done by specifying data structures and RPC calls in an Interface Definition Language (IDL) then translating that to the supported languages, with a wire protocol to go along with it.

With D, none of that is necessary. One can write the production code in D and have libraries automagically make that code callable from other languages. Add to all of this that it’s possible and easy to write D code that runs as fast or faster than the alternatives, and it’s a win on all fronts.

Second-to-none reflection

Instead of disparate ways of getting things done with fragmented APIs (__traits, std.traits, custom code), I’d like for there to be a library that centralizes all reflection needs with a great API. I’m currently working on it.

Easier interop with C++

As I mentioned in my DConf 2019 talk, C++ has had the success it’s enjoyed so far by making the transition from C virtually seamless. I want current C++ programmers with legacy codebases to just as easily be able to start writing D code. That’s why I wrote dpp, but it’s not quite there yet and we might have to make language changes to accommodate this going forward.

Fast development times

I think we need a ridiculously fast interpreter so that we can skip machine code generation and linking. To me, this should be the default way of running unittest blocks for faster feedback, with programmers only compiling their code for runtime performance and/or to ship binaries to final users. This would also enable a REPL.

String interpolation

I was initially against this, but the more I think about it the more it seems to make sense for D. Why? String mixins. Code generation is one of D’s greatest strengths, and token strings enable visually pleasing blocks of code that are actually “just strings”. String interpolation would make them vastly easier to use. As it happens, there’s a draft DIP for it in the pipeline.

That’s what I came up with after a long walk by Lake Geneva. I’d love to know what the community thinks of this, what their pet peeves and pet features would be, and how they think this would help or hinder D going forward.

Ownership and Borrowing in D

Nearly all non-trivial programs allocate and manage memory. Getting it right is becoming increasingly important, as programs get ever more complex and mistakes get ever more costly. The usual problems are:

  1. memory leaks (failure to free memory when no longer in use)
  2. double frees (freeing memory more than once)
  3. use-after-free (continuing to refer to memory already freed)

The challenge is in keeping track of which pointers are responsible for freeing the memory (i.e. owning the memory), which pointers are merely referring to the memory, where they are, and which are active (in scope).

The common solutions are:

  1. Garbage Collection – The GC owns the memory and periodically scans memory looking for any pointers to that memory. If none are found, the memory is released. This scheme is reliable and in common use in languages like Go and Java. It tends to use much more memory than strictly necessary, have pauses, and slow down code because of inserted write gates.
  2. Reference Counting – The RC object owns the memory and keeps a count of how many pointers point to it. When that count goes to zero, the memory is released. This is also reliable and is commonly used in languages like C++ and Objective-C. RC is memory efficient, needing only a slot for the count. The downside of RC is the expense of maintaining the count, building an exception handler to ensure the decrement is done, and the locking for all this needed for objects shared between threads. To regain efficiency, sometimes the programmer will cheat and temporarily refer to the RC object without dealing with the count, engendering a risk that this is not done correctly.
  3. Manual – Manual memory management is exemplified by C’s malloc and free. It is fast and memory efficient, but there’s no language help at all in using them correctly. It’s entirely up to the programmer’s skill and diligence in using it. I’ve been using malloc and free for 35 years, and through bitter and endless experience rarely make a mistake with them anymore. But that’s not the sort of thing a programming shop can rely on, and note I said “rarely” and not “never”.

Solutions 2 and 3 more or less rely on faith in the programmer to do it right. Faith-based systems do not scale well, and memory management issues have proven to be very difficult to audit (so difficult that some coding standards prohibit use of memory allocation).

But there is a fourth way – Ownership and Borrowing. It’s memory efficient, as performant as manual management, and mechanically auditable. It has been recently popularized by the Rust programming language. It has its downsides, too, in the form of a reputation for having to rethink how one composes algorithms and data structures.

The downsides are manageable, and the rest of this article is an outline of how the ownership/borrowing (OB) system works, and how we propose to fit it into D. I had originally thought this would be impossible, but after spending a lot of time thinking about it I’ve found a way to fit it in, much like we’ve fit functional programming into D (with transitive immutability and function purity).

Ownership

The solution to who owns the memory object is ridiculously simple—there is only one pointer to it, so that pointer must be the owner. It is responsible for releasing the memory, after which it will cease to be valid. It follows that any pointers in the memory object are the owners of what they point to, there are no other pointers into the data structure, and the data structure therefore forms a tree.

It also follows that pointers are not copied, they are moved:

T* f();
void g(T*);
T* p = f();
T* q = p; // value of p is moved to q, not copied
g(p);     // error, p has invalid value

Moving a pointer out of a data structure is not allowed:

struct S { T* p; }
S* f();
S* s = f();
T* q = s.p; // error, can't have two pointers to s.p

Why not just mark s.p as being invalid? The trouble there is one would need to do so with a runtime mark, and this is supposed to be a compile-time solution, so attempting it is simply flagged as an error.

Having an owning pointer fall out of scope is also an error:

void h() {
  T* p = f();
} // error, forgot to release p?

It’s necessary to move the pointer somewhere else:

void g(T*);
void h() {
  T* p = f();
  g(p);  // move to g(), it's now g()'s problem
}

This neatly solves memory leaks and use-after-free problems. (Hint: to make it clearer, replace f() with malloc(), and g() with free().)

This can all be tracked at compile time through a function by using Data Flow Analysis (DFA) techniques, like those used to compute Common Subexpressions. DFA can unravel whatever rat’s nest of gotos happens to be there.

Borrowing

The ownership system described above is sound, but it is a little too restrictive. Consider:

struct S { void car(); void bar(); }
struct S* f();
S* s = f();
s.car();  // s is moved to car()
s.bar();  // error, s is now invalid

To make it work, s.car() would have to have some way of moving the pointer value back into s when s.car() returns.

In a way, this is how borrowing works. s.car() borrows a copy of s for the duration of the execution of s.car(). s is invalid during that execution and becomes valid again when s.car() returns.

In D, struct member functions take the this by reference, so we can accommodate borrowing through an enhancement: taking an argument by ref borrows it.

D also supports scope pointers, which are also a natural fit for borrowing:

void g(scope T*);
T* f();
T* p = f();
g(p);      // g() borrows p
g(p);      // we can use p again after g() returns

(When functions take arguments by ref, or pointers by scope, they are not allowed to escape the ref or the pointer. This fits right in with borrow semantics.)

Borrowing in this way fulfills the promise that only one pointer to the memory object exists at any one time, so it works.

Borrowing can be enhanced further with a little insight that the ownership system is still safe if there are multiple const pointers to it, as long as there are no mutable pointers. (Const pointers can neither release their memory nor mutate it.) That means multiple const pointers can be borrowed from the owning mutable pointer, as long as the owning mutable pointer cannot be used while the const pointers are active.

For example:

T* f();
void g(T*);
T* p = f();  // p becomes owner
{
  scope const T* q = p; // borrow const pointer
  scope const T* r = p; // borrow another one
  g(p); // error, p is invalid while q and r are in scope
}
g(p); // ok

Principles

The above can be distilled into the notion that a memory object behaves as if it is in one of two states:

  1. there exists exactly one mutable pointer to it
  2. there exist one or more const pointers to it

The careful reader will notice something peculiar in what I wrote: “as if”. What do I mean by that weasel wording? Is there some skullduggery going on? Why yes, there is. Computer languages are full of “as if” dirty deeds under the hood, like the money you deposit in your bank account isn’t actually there (I apologize if this is a rude shock to anyone), and this isn’t any different. Read on!

But first, a bit more necessary exposition.

Folding Ownership/Borrowing into D

Isn’t this scheme incompatible with the way people normally write D code, and won’t it break pretty much every D program in existence? And not break them in an easily fixed way, but break them so badly they’ll have to redesign their algorithms from the ground up?

Yup, it sure is. Except that D has a (not so) secret weapon: function attributes. It turns out that the semantics for the Ownership/Borrowing (aka OB) system can be run on a per-function basis after the usual semantic pass has been run. The careful reader may have noticed that no new syntax is added, just restrictions on existing code. D has a history of using function attributes to alter the semantics of a function—for example, adding the pure attribute causes a function to behave as if it were pure. To enable OB semantics for a function, an attribute @live is added.

This means that OB can be added to D code incrementally, as needed, and as time and resources permit. It becomes possible to add OB while, and this is critical, keeping your project in a fully functioning, tested, and releasable state. It’s mechanically auditable how much of the project is memory safe in this manner. It adds to the list of D’s many other memory-safe guarantees (such as no pointers to the stack escaping).
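As a rough sketch of what opting a single function into OB checking might look like under this proposal (mirroring the earlier malloc/free hint; the exact diagnostics depend on the implementation):

@live void h() {
    import core.stdc.stdlib : malloc, free;
    void* p = malloc(16); // p owns the allocation
    void* q = p;          // ownership moves to q; p is no longer valid
    free(q);              // q hands the memory back to the allocator
    // free(p);           // would be flagged: p no longer owns anything
}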

As If

Some necessary things cannot be done with strict OB, such as reference counted memory objects. After all, the whole point of an RC object is to have multiple pointers to it. Since RC objects are memory safe (if built correctly), they can work with OB without compromising memory safety; they just cannot be built with OB. The solution is that D has other attributes for functions, like @system. Much of the safety checking is turned off in @system code, and of course OB checking will be turned off there as well. It's there that the RC object's implementation hides from the OB checker.

But in OB code, the RC object looks to the OB checker like it is obeying the rules, so no problemo!

A number of such library types will be needed to successfully use OB.

Conclusion

This article is a basic overview of OB. I am working on a much more comprehensive specification. It’s always possible I’ve missed something and that there’s a hole below the waterline, but so far it’s looking good. It’s a very exciting development for D and I’m looking forward to getting it implemented.

For further discussion and comments from Walter, see the discussion threads on the /r/programming subreddit and at Hacker News.

Using const to Enforce Design Decisions

The saying goes that the best code is no code. As soon as a project starts to grow, technical debt is introduced. When a team is forced to adapt to a new company guideline inconsistent with their previous vision, the debt results from a business decision. This could be tackled at the company level. Sometimes technical debt can arise simply due to the passage of time, when new versions of dependencies or of the compiler introduce breaking changes. You can try to tackle this by letting your local physicist stop the flow of time. More often however, technical debt is caused when issues are fixed by a quick hack, due to time pressure or a lack of knowledge of the code base. Design strategies that were carefully crafted are temporarily neglected. This blog post will focus on using the const modifier. It is one of the convenient tools D offers to minimize the increase of technical debt and enforce design decisions.

To keep a code base consistent, design guidelines, either explicit or implicit, are often put in place. Every developer on the team is expected to adhere to the guidelines as a gentleman’s agreement. This effectively results in a policy that is only enforced if both the programmer and the reviewer have had enough coffee. Simple changes, like adding a call to an object method, might seem innocent, but can reduce consistency and make errors harder to trace. Detecting this in a code review requires in-depth knowledge of the method’s implementation.

An example from a real world project I’ve worked on is generating financial transactions for a read-only view, the display function in the following code fragment. Nothing seemed wrong with it, until I realized that those transactions were persisted and eventually used for actual payments without being calculated again, as seen in the process method. Potentially different payments occurred depending on whether the user decided to glance at the summary, thereby triggering the generation with new currency exchange rates, or just blindly clicked the OK button. That’s not what an innocent bystander like myself expects and has caused many frowns already.

public class Order
{
    private Transaction[] _transactions;

    public Transaction[] getTransactions()
    {
        _transactions = calculate();
        return _transactions;
    }

    public void process()
    {
        foreach(t; _transactions){
            // ...
        }
    }
}

void display(Order order)
{
    auto t = order.getTransactions();
    show(t);
}

The internet has taught me that if it is possible, it will one day happen. Therefore, we should make an attempt to make the undesired impossible.

A constant feature

By default, variables and object instances in D are mutable, just like in many other programming languages. If we want to prevent objects from ever changing, we can mark them immutable, i.e. immutable MyClass obj = new MyClass();. immutable means that we can modify neither the object reference (head constant) nor the object’s properties (tail constant). The first case corresponds to final in Java and readonly in C#, both of which signify head constant only. D’s implementation means that nobody can ever modify an object marked immutable. What if an object needs to be mutable in one place, but immutable in another? That’s where D’s const pops in.

Unlike immutable, whose contract states that an object cannot be mutated through any reference, const allows an object to be modified through another, non-const reference. This means it’s illegal to initialize an immutable reference with a mutable one, but a const reference can be initialized with a mutable, const, or immutable reference. In a function parameter list, const is preferred over immutable because it can accept arguments with either qualifier or none. Schematically, it can be visualized as in the following figure.

const relationships
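
In code, the relationships in the figure look something like this (a minimal sketch with hypothetical names):

void takesConst(const(int)* p) {}          // accepts mutable, const, and immutable
void takesImmutable(immutable(int)* p) {}  // accepts immutable only

void demo()
{
    int m;
    immutable int i = 1;

    takesConst(&m);        // OK: mutable converts to const
    takesConst(&i);        // OK: immutable converts to const
    takesImmutable(&i);    // OK
    // takesImmutable(&m); // error: mutable does not convert to immutable
}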

A constant detour

D’s const differs from C++ const in a significant way: it’s transitive (see the const(FAQ) for more details). In other words, it’s not possible to declare any object in D as head constant. This isn’t obvious from examples with classes, since classes in D are reference types, but is more apparent with pointers and arrays. Consider these C++ declarations:

const int *cp0;         // mutable pointer to const data
int const *cp1;         // alternative syntax for the same

Variable declarations in C++ are best read from right to left. Although const int is likely the more common syntax, int const matches the way the declaration should be read: cp0 is a mutable pointer to a const int. In D, the equivalent of cp0 and cp1 is:

const(int)* dp0;

The next example shows what head constant looks like in C++.

int *const cp2;         // const pointer to mutable data

We can read the declaration of cp2 as: cp2 is a const pointer to a mutable int. There is no equivalent in D. It’s possible in C++ to have any number and combination of const and mutable pointers to const or mutable data. But in D, if const is applied to the outermost pointer, then it applies to all the inner pointers and the data as well. Or, as they say, it’s turtles all the way down.
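
A minimal sketch of that transitivity:

void demo()
{
    int x = 1;
    int* p = &x;
    const(int**) cpp = &p;  // const covers the outer pointer, the inner pointer, and the data
    // *cpp = null;         // error: the inner pointer is const
    // **cpp = 2;           // error: the data is const
}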

The equivalent in C++ looks like this:

int const *const cp3;         // const pointer to const data

This declaration says cp3 is a const pointer to const data, and is possible in D like so:

const(int*) dp1;
const int* dp2;     // same as dp1

All of the above syntax holds true for D arrays:

const(int)[] a1;    // mutable reference to const data
const(int[]) a2;    // const reference to const data
const int[] a3;     // same as a2

const in the examples can be replaced with immutable, with the caveat that initializers must match the declaration, e.g. immutable(int)* can only be initialized with immutable(int)*.

Finally, note that classes in D are reference types; the reference is baked in, so applying const or immutable to a class reference always behaves like the const(int*) case above, making both the reference and the referenced data const. There is no class equivalent of const(int)*, a mutable reference to const data. Structs in D, on the other hand, are value types, so pointers to structs can be declared like the int pointers in the examples above.
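
A minimal sketch of the difference:

class C {}
struct S {}

void demo(C someC, S* someS)
{
    const C c = someC;     // the reference and the data are both const
    // c = someC;          // error: a const class reference cannot be rebound

    const(S)* sp = someS;  // a mutable pointer to const struct data is possible
    sp = null;             // fine: only the data pointed to is const
}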

A constant example

For the sake of argument, we assume that updating the currency exchange rates is a function that, by definition, needs to mutate the order. After updating the order, we want to show the updated prices to our user. Conceptually, the display function should not modify the order. We can prevent mutation by adding the const modifier to our function parameter. The implicit rule is now made explicit: the function takes an Order as input, and treats the object as a constant. We no longer have a gentleman’s agreement, but a formal contract. Changing the contract will, hopefully, require thorough negotiation with your peer reviewer.

void updateExchangeRates(Order order);
void display(const Order order);

void updateAndDisplay()
{
    Order order = //…
    updateExchangeRates(order);
    display(order); // Implicitly converted to const.
}

As with all contracts, defining it is the easiest part. The hard part is enforcing it. Here, the D compiler is our friend. If we try to compile the code, we will get a compiler error.

void display(const Order order)
{
    // ERROR: cannot call mutable method
    auto t = order.getTransactions();
    show(t);
}

We never explicitly stated that getTransactions doesn’t modify the object. As the method is virtual by default, the compiler cannot derive the behavior either way. Without that knowledge, the compiler is required to assume that the method might modify the object. In other words, in the D justice system one is guilty until proven innocent. Let’s prove our innocence by marking the method itself const, telling the compiler that we do not intend to modify our data.

public class Order
{
    private Transaction[] _transactions;

    public Transaction[] getTransactions() const
    {
        _transactions = calculate(); // ERROR: cannot mutate field
        return _transactions;
    }
}

void display(const Order order)
{
    auto t = order.getTransactions(); // Now compiles :)
    show(t);
}

By marking the method const, we've made the promise that we do not modify any object state part of the method signature, and the original compile error at the call site is gone. The compiler is now satisfied with the method call in the display function, but it finds another problem: our getter, which we stated should not modify data, actually does modify it. We found our code smell by formalizing our guidelines and letting the compiler figure out the rest.

It seems promising enough to try it on a real project.

A constant application

I had a pet project lying around and decided to put the effort into enforcing the constraint. This is what inspired me to write this post. The project is a four-player mahjong game. The relevant part, in abstraction, is highlighted in the image.

Mahjong abstraction

The main engine behind the game is the white box in the center. A player or AI is sent a message with a const view of the game data for display purposes and to determine their next move. A message is sent back to the engine, which then determines the mutation to apply to the internally mutable game data. The most obvious win is that I cannot accidentally modify the game data when drawing my UI. Accidental modification, of course, turned out to be exactly what was happening before I refactored const-ness into the game data.

Upon closer inspection, coming from the UI there is only one way to manipulate the state of the game. The UI sends a message to the engine and remains oblivious of the state changes that need to be applied. This also encourages layered development and improves testability of the code. So far, so good. But during the refactoring, a problem arose. Recall that marking an object with const means that only member functions that promise not to modify the object, those marked with const themselves, can be called. Some of these could be trivially fixed by applying const or, at worst, inout (a sort of wildcard). However, the more persistent issues, like in the Order example, required me to go back to the drawing board and rethink my domain. In the end, being forced to think about mutability versus immutability improved my understanding of my own code base.
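
For reference, a minimal sketch of inout (with hypothetical names):

struct Tile { int value; }

struct Hand
{
    private Tile[] _tiles;

    // inout acts as a qualifier wildcard: it propagates the qualifier of this
    // (mutable, const, or immutable) to the returned slice.
    inout(Tile)[] tiles() inout { return _tiles; }
}

void demo()
{
    Hand m;
    const Hand c;
    static assert(is(typeof(m.tiles()) == Tile[]));
    static assert(is(typeof(c.tiles()) == const(Tile)[]));
}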

A constant verdict

Is const all good? It’s not a universal answer and certainly has downsides. The most notable one is that this kills lazy initialization where a property is computed only when first requested and then the result is cached. Sometimes, like in the earlier example, this is a code smell, but there are legit use cases. In my game, I have a class that composes the dashboard per player. Updating it is expensive and rarely required. The screen, however, gets rendered sixty times a second. It makes sense to cache the dashboards and only update them when the player objects change. I could split the method in two, but then my abstraction would leak its optimization. I settled for not using const here, as it was a module-private class and didn’t have a large impact on my codebase.
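
A minimal sketch (with hypothetical names) of the kind of caching that const rules out:

class PlayerDashboard
{
    private string _rendered;
    private bool _dirty = true;

    // This getter cannot be const: the first call mutates the cached state.
    string render()
    {
        if (_dirty)
        {
            _rendered = expensiveLayout();
            _dirty = false;
        }
        return _rendered;
    }

    private string expensiveLayout() const { return "..."; }
}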

A complaint that is sometimes heard regarding const is that it does not work well with one of D’s main selling points, ranges. A range is D’s implementation of a lazy iterator, usable in foreach loops and heavily used in std.algorithm. The functions in std.algorithm can handle ranges with const elements perfectly fine. However, iterating a range changes the internal values and ultimately consumes the range. Therefore, when a range itself is const, it cannot be iterated over. I think this makes sense by design, but I can imagine that this behavior can be surprising in edge cases. I haven’t encountered this as I generate new ranges on every query, both on mutable and const objects.

A guideline for when to use const would be to separate the queries from the commands, a.k.a. ye olde Command-Query Separation (CQS). All queries should, in principle, be const. To perform a mutation, even on nested objects, one should call a method on the object. I’d double down on this and state that member functions should be commands, with logic that can be overridden, and therefore never be marked const. Queries basically serve as a means to peek at encapsulated data and don’t need to be overridden; a query can be a final, non-virtual function that simply exposes a read-only view of the inner field. For example, by using D’s module-private access on the field in conjunction with a free function in the same module, we can put the logic inside the class definition and the queries outside.

// in order.d
public class Order
{
    private Transaction[] _transactions; // Accessible in order.d

    public void process(); // Virtual and not restricted
}

public const(Transaction)[] getTransactions(const Order order)
{
    // Function, not virtual, operates on a read-only view of Order
    return order._transactions;
}

// in view.d
import order;

void display(const Order order)
{
    auto t = order.getTransactions();
    show(t);
}

We should take care, however, not to overapply the modifier. The question that we need to answer is not “Do I modify this object here?”, but rather, “Does it make sense that the object is modified here?” Wrongly assuming objects are constant will result in trouble when a new feature finally requires you to change the instance. For example, in my game a central Game class contains the players’ hands, but doesn’t explicitly modify them. However, given the structure of my code, it does not make sense to make the player objects constant, as free functions in the engine do use mutable player instances obtained from the game object.

Reflecting on my design, even while writing this blog post, gave me valuable insights. Taking the effort to properly use the tool called const has paid off for me. It improved the structure of my code and deepened my understanding of the ramblings I write. Like any other tool, it is not a silver bullet. It serves to formalize our gentleman’s agreement and is therefore just as fragile as one.


Marco graduated in physics, where he used Fortran and Matlab. He used the book Programming in D to learn application programming. After three years as a C# developer, he is now a trainer-coach of mainly Java and C# developers at Sogyo. Marco uses D for programming experiments and side projects, including his darling mahjong game.

Lost in Translation: Encapsulation

I first learned programming in BASIC. Outgrew it, and switched to Fortran. Amusingly, my early Fortran code looked just like BASIC. My early C code looked like Fortran. My early C++ code looked like C. – Walter Bright, the creator of D

Programming in a language is not the same as thinking in that language. A natural side effect of experience with one programming language is that we view other languages through the prism of its features and idioms. Languages in the same family may look and feel similar, but there are guaranteed to be subtle differences that, when not accounted for, can lead to compiler errors, bugs, and missed opportunities. Even when good docs, books, and other materials are available, most misunderstandings are only going to be solved through trial-and-error.

D programmers come from a variety of programming backgrounds, C-family languages perhaps being the most common among them. Understanding the differences and how familiar features are tailored to D can open the door to more possibilities for organizing a code base, and designing and implementing an API. This article is the first of a few that will examine D features that can be overlooked or misunderstood by those experienced in similar languages.

We’re starting with a look at a particular feature that’s common among languages that support Object-Oriented Programming (OOP). There’s one aspect in particular of the D implementation that experienced programmers are sure they already fully understand and are often surprised to later learn they don’t.

Encapsulation

Most readers will already be familiar with the concept of encapsulation, but I want to make sure we’re on the same page. For the purpose of this article, I’m talking about encapsulation in the form of separating interface from implementation. Some people tend to think of it strictly as it relates to object-oriented programming, but it’s a concept that’s more broad than that. Consider this C code:

#include <stdio.h>
static size_t s_count;

void print_message(const char* msg) {
    puts(msg);
    s_count++;
}

size_t num_prints() { return s_count; }

In C, functions and global variables decorated with static become private to the translation unit (i.e. the source file along with any headers brought in via #include) in which they are declared. Non-static declarations are publicly accessible, usually provided in header files that lay out the public API for clients to use. Static functions and variables are used to hide implementation details from the public API.

Encapsulation in C is a minimal approach. C++ supports the same feature, but it also has anonymous namespaces that can encapsulate type definitions in addition to declarations. Like Java, C#, and other languages that support OOP, C++ also has access modifiers (alternatively known as access specifiers, protection attributes, visibility attributes) which can be applied to class and struct member declarations.

C++ supports the following three access modifiers, common among OOP languages:

  • public – accessible to the world
  • private – accessible only within the class
  • protected – accessible only within the class and its derived classes

An experienced Java programmer might raise a hand to say, “Um, excuse me. That’s not a complete definition of protected.” That’s because in Java, it looks like this:

  • protected – accessible within the class, its derived classes, and classes in the same package.

Every class in Java belongs to a package, so it makes sense to factor packages into the equation. Then there’s this:

  • package-private (not a keyword) – accessible within the class and classes in the same package.

This is the default access level in Java when no access modifier is specified. This combined with protected make packages a tool for encapsulation beyond classes in Java.

Similarly, C# has assemblies, which MSDN defines as “a collection of types and resources that forms a logical unit of functionality”. In C#, the meaning of protected is identical to that of C++, but the language has two additional forms of protection that relate to assemblies and that are analogous to Java’s protected and package-private.

  • internal – accessible within the class and classes in the same assembly.
  • protected internal – accessible within the class, its derived classes, and classes in the same assembly.

Examining encapsulation in other programming languages will continue to turn up similarities and differences. Common encapsulation idioms are generally adapted to language-specific features. The fundamental concept remains the same, but the scope and implementation vary. So it should come as no surprise that D also approaches encapsulation in its own, language-specific manner.

Modules

The foundation of D’s approach to encapsulation is the module. Consider this D version of the C snippet from above:

module mymod;

private size_t _count;

void printMessage(string msg) {
    import std.stdio : writeln;

    writeln(msg);
    _count++;
}

size_t numPrints() { return _count; }

In D, access modifiers can apply to module-scope declarations, not just class and struct members. _count is private, meaning it is not visible outside of the module. printMessage and numPrints have no access modifiers; they are public by default, making them visible and accessible outside of the module. Both functions could have been annotated with the keyword public.

Note that imports in module scope are private by default, meaning the symbols in the imported modules are not visible outside the module, and local imports, as in the example, are never visible outside of their parent scope.
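
A minimal sketch of the difference, using two hypothetical modules:

// a.d
module a;

import std.stdio;        // private by default: not visible to modules that import a
public import std.conv;  // public: modules that import a also see std.conv

// b.d
module b;
import a;

void demo()
{
    auto s = to!string(42);  // OK: std.conv came along via the public import in a
    // writeln(s);           // error: std.stdio is not visible through a
}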

Alternative syntaxes are supported, giving more flexibility to the layout of a module. For example, there’s C++ style:

module mymod;

// Everything below this is private until either another
// protection attribute or the end of file is encountered.
private:
    size_t _count;

// Turn public back on
public:
    void printMessage(string msg) {
        import std.stdio : writeln;

        writeln(msg);
        _count++;
    }

    size_t numPrints() { return _count; }

And this:

module mymod;

private {
    // Everything declared within these braces is private.
    size_t _count;
}

// The functions are still public by default
void printMessage(string msg) {
    import std.stdio : writeln;

    writeln(msg);
    _count++;
}

size_t numPrints() { return _count; }

Modules can belong to packages. A package is a way to group related modules together. In practice, the source files corresponding to each module should be grouped together in the same directory on disk. Then, in the source file, each directory becomes part of the module declaration:

// mypack/amodule.d
module mypack.amodule;

// mypack/subpack/anothermodule.d
module mypack.subpack.anothermodule;

Note that it’s possible to have package names that don’t correspond to directories and module names that don’t correspond to files, but it’s bad practice to do so. A deep dive into packages and modules will have to wait for a future post.

mymod does not belong to a package, as no packages were included in the module declaration. Inside printMessage, the function writeln is imported from the stdio module, which belongs to the std package. Packages have no special properties in D and primarily serve as namespaces, but they are a common part of the codescape.

In addition to public and private, the package access modifier can be applied to module-scope declarations to make them visible only within modules in the same package.

Consider the following example. There are three modules in three files (only one module per file is allowed), each belonging to the same root package.

// src/rootpack/subpack1/mod2.d
module rootpack.subpack1.mod2;
import std.stdio;

package void sayHello() {
    writeln("Hello!");
}

// src/rootpack/subpack1/mod1.d
module rootpack.subpack1.mod1;
import rootpack.subpack1.mod2;

class Speaker {
    this() { sayHello(); }
}

// src/rootpack/app.d
module rootpack.app;
import rootpack.subpack1.mod1;

void main() {
    auto speaker = new Speaker;
}

Compile this with the following command line:

cd src
dmd -i rootpack/app.d

The -i switch tells the compiler to automatically compile and link imported modules (excluding those in the standard library namespaces core and std). Without it, each module would have to be passed on the command line, else they wouldn’t be compiled and linked.

The class Speaker has access to sayHello because they belong to modules that are in the same package. Now imagine we do a refactor and we decide that it could be useful to have access to sayHello throughout the rootpack package. D provides the means to make that happen by allowing the package attribute to be parameterized with the fully-qualified name (FQN) of a package. So we can change the declaration of sayHello like so:

package(rootpack) void sayHello() {
    writeln("Hello!");
}

Now all modules in rootpack and all modules in packages that descend from rootpack will have access to sayHello. Don’t overlook that last part. A parameter to the package attribute is saying that a package and all of its descendants can access this symbol. It may sound overly broad, but it isn’t.

For one thing, only a package that is a direct ancestor of the module’s parent package can be used as a parameter. Consider a module rootpack.subpack.subsub.mymod. That name contains all of the packages that are legal parameters to the package attribute in mymod.d, namely rootpack, subpack, and subsub. So we can say the following about symbols declared in mymod:

  • package – visible only to modules in the parent package of mymod, i.e. the subsub package.
  • package(subsub) – visible to modules in the subsub package and modules in all packages descending from subsub.
  • package(subpack) – visible to modules in the subpack package and modules in all packages descending from subpack.
  • package(rootpack) – visible to modules in the rootpack package and modules in all packages descending from rootpack.

This feature makes packages another tool for encapsulation, allowing symbols to be hidden from the outside world but visible and accessible in specific subtrees of a package hierarchy. In practice, there are probably few cases where expanding access to a broad range of packages in an entire subtree is desirable.

It’s common to see parameterized package protection in situations where a package exposes a common public interface and hides implementations in one or more subpackages, such as a graphics package with subpackages containing implementations for DirectX, Metal, OpenGL, and Vulkan. Here, D’s access modifiers allow for three levels of encapsulation:

  • the graphics package as a whole
  • each subpackage containing the implementations
  • individual modules in each package

Notice that I didn’t include class or struct types as a fourth level. The next section explains why.

Classes and structs

Now we come to the motivation for this article. I can’t recall ever seeing anyone come to the D forums professing surprise about package protection, but the behavior of access modifiers in classes and structs is something that pops up now and then, largely because of expectations derived from experience in other languages.

Classes and structs use the same access modifiers as modules: public, package, package(some.pack), and private. The protected attribute can only be used in classes, as inheritance is not supported for structs (nor for modules, which aren’t even objects). public, package, and package(some.pack) behave exactly as they do at the module level. The thing that surprises some people is that private also behaves the same way.

import std.stdio;

class C {
    private int x;
}

void main() {
    C c = new C();
    c.x = 10;
    writeln(c.x);
}

Snippets like this are posted in the forums now and again by people exploring D, accompanying a question along the lines of, “Why does this compile?” (and sometimes, “I think I’ve found a bug!”). This is an example of where experience can cloud expectations. Everyone knows what private means, so it’s not something most people bother to look up in the language docs. However, those who do would find this:

Symbols with private visibility can only be accessed from within the same module.

private in D always means private to the module. The module is the lowest level of encapsulation. It’s easy to understand why some experience an initial resistance to this, feeling that it breaks encapsulation, but the intent behind the design is to strengthen encapsulation. The design was informed by experience with the C++ friend feature.

Having implemented and maintained a C++ compiler for many years, Walter understood the need for a feature like friend, but felt that it wasn’t the best way to go about it.

Being able to declare a “friend” that is somewhere in some other file runs against notions of encapsulation.

An alternative is to take a Java-like approach of one class per module, but he felt that was too restrictive.

One may desire a set of closely interrelated classes that encapsulate a concept, and those should go into a module.

So the way to view a module in D is not just as a single source file, but as a unit of encapsulation. It can contain free functions, classes, and structs, all operating on the same data declared in module scope and class scope. The public interface is still protected from changes to the private implementation inside the module. Along those same lines, protected class members are accessible not just in derived classes, but also in the module.
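
A minimal sketch (hypothetical module) of types and free functions cooperating on private state:

module bank;

class Account
{
    private double _balance;
}

// A free function in the same module can touch the private member...
void deposit(Account acct, double amount) { acct._balance += amount; }

// ...while other modules only ever see the public interface.
double balance(const Account acct) { return acct._balance; }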

Sometimes though, there really is a benefit to denying access to private members in a module. The bigger a module becomes, the more of a burden it is to maintain, especially when it’s being maintained by a team. Every place a private member of a class is accessed in a module means more places to update when a change is made, thereby increasing the maintenance burden. The language provides the means to alleviate the burden in the form of the special package module.

In some cases, we don’t want to require the user to import multiple modules individually. Splitting a large module into smaller ones is one of those cases. Consider the following file tree:

-- mypack
---- mod1.d
---- mod2.d

We have two modules in a package called mypack. Let’s say that mod1.d has grown extremely large and we’re starting to worry about maintaining it. For one, we want to ensure that private members aren’t manipulated outside of class declarations with hundreds or thousands of lines in between. We want to split the module into smaller ones, but at the same time we don’t want to break user code. Currently, users can get at the module’s symbols by importing it with import mypack.mod1. We want that to continue to work. Here’s how we do it:

-- mypack
---- mod1
------ package.d
------ split1.d
------ split2.d
---- mod2.d

We’ve split mod1.d into two new modules and put them in a package named mod1. We’ve also created a special package.d file, which looks like this:

module mypack.mod1;

public import mypack.mod1.split1,
              mypack.mod1.split2;

When the compiler sees package.d, it knows to treat it specially. Users will be able to continue using import mypack.mod1 without ever caring that it’s now split into two modules in a new package. The key is the module declaration at the top of package.d. It’s telling the compiler to treat this package as the module mod1. And instead of automatically importing all modules in the package, the requirement to list them as public imports in package.d allows more freedom in implementing the package. Sometimes, you might want to require the user to explicitly import a module even when a package.d is present.

Now users will continue seeing mod1 as a single module and can continue to import it as such. Meanwhile, encapsulation is now more stringently enforced internally. Because split1 and split2 are now separate modules, they can’t touch each other’s private parts. Any part of the API that needs to be shared by both modules can be annotated with package protection. Despite the internal transformation, the public interface remains unchanged, and encapsulation is maintained.
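
For instance, a minimal sketch of sharing between the split modules (hypothetical members):

// mypack/mod1/split1.d
module mypack.mod1.split1;

package void sharedHelper() {}     // visible to split2, hidden from users of mypack.mod1

// mypack/mod1/split2.d
module mypack.mod1.split2;
import mypack.mod1.split1;

void doWork() { sharedHelper(); }  // OK: both modules live in the mypack.mod1 package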

Wrapping up

The full list of access modifiers in D can be defined as such:

  • public – accessible everywhere.
  • package – accessible to modules in the same package.
  • package(some.pack) – accessible to modules in the package some.pack and to the modules in all of its descendant packages.
  • private – accessible only in the module.
  • protected (classes only) – accessible in the module and in derived classes.

Hopefully, this article has provided you with the perspective to think in D, rather than in your “native” language, when it comes to encapsulation.

Thanks to Ali Çehreli, Joakim Noah, and Nicholas Wilson for reviewing and providing feedback on this article.

Go Your Own Way (Part Two: The Heap)

This post is part of an ongoing series on garbage collection in the D Programming Language, and the second of two regarding the allocation of memory outside of the GC. Part One discusses stack allocation. Here, we’ll look at allocating memory from the non-GC heap.

Although this is only my fourth post in the series, it’s the third in which I talk about ways to avoid the GC. Lest anyone jump to the wrong conclusion, that fact does not signify an intent to warn programmers away from the D garbage collector. Quite the contrary. Knowing how and when to avoid the GC is integral to understanding how to efficiently embrace it.

To hammer home a repeated point, efficient garbage collection requires reducing stress on the GC. As highlighted in the first and subsequent posts in this series, that doesn’t necessarily mean avoiding it completely. It means being judicious in how often and how much GC memory is allocated. Fewer GC allocations means fewer opportunities for a collection to trigger. Less total memory allocated from the GC heap means less total memory to scan.

It’s impossible to make any accurate, generalized statement about what sort of applications may or may not feel an impact from the GC; such is highly application specific. What can be said is that it may not be necessary for many applications to temporarily avoid or disable the GC, but when it is, it’s important to know how. Allocating from the stack is an obvious approach, but D also allows allocating from the non-GC heap.

The ubiquitous C

For better or worse, C is everywhere. Any software written today, no matter the source language, is probably interacting with a C API at some level. Despite the C specification defining no standard ABI, its platform-specific quirks and differences are understood well enough that most languages know how to interface with it. D is no exception. In fact, all D programs have access to the C standard library by default.

The core.stdc package, part of DRuntime, is a collection of D modules translated from C standard library headers. When a D executable is linked, the C standard library is linked along with it. All that need be done to gain access is to import the appropriate modules.

import core.stdc.stdio : puts;
void main() 
{
    puts("Hello C standard library.");
}

Some who are new to D may be laboring under a misunderstanding that functions which call into C require an extern(C) annotation, or, after Walter Bright’s recent ‘D as a Better C’ article, must be compiled with -betterC on the command line. Neither is true. Normal D functions can call into C without any special effort beyond the presence of an extern(C) declaration of the function being called. In the snippet above, the declaration of puts is in the core.stdc.stdio module, and that’s all we need to call it.

malloc and friends

Given that we have access to C’s standard library in D, we therefore have access to the functions malloc, calloc, realloc and free. All of these can be made available by importing core.stdc.stdlib. And thanks to D’s slicing magic, using these functions as the foundation of a non-GC memory management strategy is a breeze.

import core.stdc.stdlib;
void main() 
{
    enum totalInts = 10;
    
    // Allocate memory for 10 ints
    int* intPtr = cast(int*)malloc(int.sizeof * totalInts);

    // assert(0) (and assert(false)) will always remain in the binary,
    // even when asserts are disabled, which makes it nice for handling
    // malloc failures    
    if(!intPtr) assert(0, "Out of memory!");

    // Free when the function exits. Not necessary for this example, but
    // a potentially useful strategy for temporary allocations in functions 
    // other than main.
    scope(exit) free(intPtr);

    // Slice the D pointer to get a more manageable length/pointer pair.
    int[] intArray = intPtr[0 .. totalInts];
}

Not only does this bypass the GC, it also bypasses D’s default initialization. A GC-allocated array of type T would have all of its elements initialized to T.init, which is 0 for int. If mimicking D’s default initialization is the desired behavior, more work needs to be done. In this example, we could replace malloc with calloc for the same effect, but that would only be correct for integrals. float.init, for example, is float.nan rather than 0.0f. We’ll come back to this later in the article.

Of course, it would be more idiomatic to wrap both malloc and free and work with slices of memory. A minimal example:

import core.stdc.stdlib;

// Allocate a block of untyped bytes that can be managed
// as a slice.
void[] allocate(size_t size)
{
    // malloc(0) is implementation defined (might return null 
    // or an address), but is almost certainly not what we want.
    assert(size != 0);

    void* ptr = malloc(size);
    if(!ptr) assert(0, "Out of memory!");
    
    // Return a slice of the pointer so that the address is coupled
    // with the size of the memory block.
    return ptr[0 .. size];
}

T[] allocArray(T)(size_t count) 
{ 
    // Make sure to account for the size of the
    // array element type!
    return cast(T[])allocate(T.sizeof * count); 
}

// Two versions of deallocate for convenience
void deallocate(void* ptr)
{   
    // free handles null pointers fine.
    free(ptr);
}

void deallocate(void[] mem) 
{ 
    deallocate(mem.ptr); 
}

void main() {
    import std.stdio : writeln;
    int[] ints = allocArray!int(10);
    scope(exit) deallocate(ints);
    
    foreach(i; 0 .. 10) {
        ints[i] = i;
    }

    foreach(i; ints[]) {
        writeln(i);
    }
}

allocate returns void[] rather than void* because it carries with it the number of allocated bytes in its length property. In this case, since we’re allocating an array, we could instead rewrite allocArray to slice the returned pointer immediately, but anyone calling allocate directly would still have to take into account the size of the memory. The disassociation between arrays and their length in C is a major source of bugs, so the sooner we can associate them the better. Toss in some templates for calloc and realloc and you’ve got the foundation of a memory manager based on the C heap.
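
A possible sketch of the realloc counterpart (a hypothetical name, following the same conventions as allocate):

// Resize a previously allocated block, preserving its contents.
void[] reallocate(void[] mem, size_t newSize)
{
    import core.stdc.stdlib : realloc;

    assert(newSize != 0);

    void* ptr = realloc(mem.ptr, newSize);
    if(!ptr) assert(0, "Out of memory!");

    return ptr[0 .. newSize];
}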

On a side note, the preceding three snippets (yes, even the one with the allocArray template) work with and without -betterC. But from here on out, we’ll restrict ourselves to features in normal D code.

Avoid leaking like a sieve

When working directly with slices of memory allocated outside of the GC heap, be careful about appending, concatenating, and resizing. By default, the append (~=) and concatenate (~) operators on built-in dynamic arrays and slices will allocate from the GC heap. Concatenation will always allocate a new memory block for the combined arrays. Normally, the append operator will allocate to expand the backing memory only when it needs to. As the following example demonstrates, it always needs to when it’s given a slice of non-GC memory.

import core.stdc.stdlib : malloc;
import std.stdio : writeln;

void main()
{
    int[] ints = (cast(int*)malloc(int.sizeof * 10))[0 .. 10];
    writeln("Capacity: ", ints.capacity);

    // Save the array pointer for comparison
    int* ptr = ints.ptr;
    ints ~= 22;
    writeln(ptr == ints.ptr);
}

This should print the following:

Capacity: 0
false

A capacity of 0 on a slice indicates that the next append will trigger an allocation. Arrays allocated from the GC heap normally have space for extra elements beyond what was requested, meaning some appending can occur without triggering a new allocation. Capacity is a property of the memory backing the array rather than of the array itself. Memory allocated from the GC does some internal bookkeeping to keep track of how many elements the memory block can hold so that it knows at any given time if a new allocation is needed. Here, because the memory for ints was not allocated by the GC, none of that bookkeeping is being done by the runtime on the existing memory block, so it must allocate on the next append (see Steven Schveighoffer’s ‘D Slices’ article for more info).

This isn’t necessarily a bad thing when it’s the desired behavior, but anyone who’s not prepared for it can easily run into ballooning memory usage thanks to leaks from malloced memory never being deallocated. Consider these two functions:

void leaker(ref int[] arr)
{
    ...
    arr ~= 10;
    ...
}

void cleaner(int[] arr)
{
    ...
    arr ~= 10;
    ...
}

Although arrays are reference types, meaning that modifying existing elements of an array argument inside a function will modify the elements in the original array, they are passed by value as function parameters. Any activity that modifies the structure of an array argument, i.e. its length and ptr properties, only affects the local variable inside the function. The original will remain unchanged unless the array is passed by reference.

So if an array backed by the C heap is passed to leaker, the append will cause a new array to be allocated from the GC heap. Worse, if free is subsequently called on the ptr property of the original array, which now points into the GC heap rather than the C heap, we’re in undefined behavior territory. cleaner, on the other hand, is fine. Any array passed into it will remain unchanged. Internally, the GC will allocate, but the ptr property of the original array still points to the original memory block.

As long as the original array isn’t overwritten or allowed to go out of scope, this is a non-issue. Functions like cleaner can do what they want with their local slice and things will be fine externally. Otherwise, if the original array is to be discarded, you can prevent all of this by tagging functions that you control with @nogc. Where that’s either not possible or not desirable, then either a copy of the pointer to the original malloced memory must be kept and freed at some point after the reallocation takes place, custom appending and concatenation need to be implemented, or the allocation strategy needs to be reevaluated.
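
As a minimal sketch, marking the function @nogc turns the hidden allocation into a compile-time error (the append line is rejected, which is the point):

@nogc void cleaner(int[] arr)
{
    arr ~= 10;   // error: cannot use operator ~= in @nogc function
}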

Note that std.container.array contains an Array type that does not rely on the GC and may be preferable over managing all of this manually.

Other APIs

The C standard library isn’t the only game in town for heap allocations. A number of alternative malloc implementations exist and any of those can be used instead. This requires manually compiling the source and linking with the resultant objects, but that’s not an onerous task. Heap memory can also be allocated through system APIs, like the Win32 HeapAlloc function on Windows (available by importing core.sys.windows.windows). As long as there’s a way to get a pointer to a block of heap memory, it can be sliced and manipulated in a D program in place of a block of GC memory.

Aggregate types

If we only had to worry about allocating arrays in D, then we could jump straight on to the next section. However, we also need to concern ourselves with struct and class types. For this discussion, we will focus only on the former; the next couple of posts in the series will focus exclusively on classes.

Allocating an array of struct types, or a single instance of one, is often no different than when the type is int.

struct Point { int x, y; }
Point* onePoint = cast(Point*)malloc(Point.sizeof);
Point* tenPoints = cast(Point*)malloc(Point.sizeof * 10);

Where things break down is when constructors enter the mix. malloc and friends know nothing about constructing D object instances. Thankfully, Phobos provides us with a function template that does.

std.conv.emplace can take either a pointer to typed memory or an untyped void[], along with an optional number of arguments, and return a pointer to a single, fully initialized and constructed instance of that type. This example shows how to do so using both malloc and the allocate function template from above:

struct Vertex4f 
{ 
    float x, y, z, w; 
    this(float x, float y, float z, float w = 1.0f)
    {
        this.x = x;
        this.y = y;
        this.z = z;
        this.w = w;
    }
}

void main()
{
    import core.stdc.stdlib : malloc;
    import std.conv : emplace;
    import std.stdio : writeln;
    
    Vertex4f* temp1 = cast(Vertex4f*)malloc(Vertex4f.sizeof);
    Vertex4f* vert1 = emplace(temp1, 4.0f, 3.0f, 2.0f); 
    writeln(*vert1);

    void[] temp2 = allocate(Vertex4f.sizeof);
    Vertex4f* vert2 = emplace!Vertex4f(temp2, 10.0f, 9.0f, 8.0f);
    writeln(*vert2);
}

Another feature of emplace is that it also handles default initialization. Consider that struct types in D need not implement constructors. Here’s what happens when we change the implementation of Vertex4f to remove the constructor:

struct Vertex4f 
{
    // x, y, z are default inited to float.nan
    float x, y, z;

    // w is default inited to 1.0f
    float w = 1.0f;
}

void main()
{
    import core.stdc.stdlib : malloc;
    import std.conv : emplace;
    import std.stdio : writeln;

    Vertex4f vert1, vert2 = Vertex4f(4.0f, 3.0f, 2.0f);
    writeln(vert1);
    writeln(vert2);    
    
    auto vert3 = emplace!Vertex4f(allocate(Vertex4f.sizeof));
    auto vert4 = emplace!Vertex4f(allocate(Vertex4f.sizeof), 4.0f, 3.0f, 2.0f);
    writeln(*vert3);
    writeln(*vert4);
}

This prints the following:

Vertex4f(nan, nan, nan, 1)
Vertex4f(4, 3, 2, 1)
Vertex4f(nan, nan, nan, 1)
Vertex4f(4, 3, 2, 1)

So emplace allows heap-allocated struct instances to be initialized in the same manner as stack allocated struct instances, with or without a constructor. It also works with the built-in types like int and float. Just always remember that emplace is intended to initialize and construct a single instance, not an array of instances.
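
To construct every element of an array, loop over it and emplace each one (a minimal sketch reusing the allocArray helper from earlier):

void main()
{
    import std.conv : emplace;

    Vertex4f[] verts = allocArray!Vertex4f(3);
    foreach(ref v; verts) {
        emplace(&v, 4.0f, 3.0f, 2.0f);   // construct each instance in place
    }
}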

If the aggregate type has a destructor, it should be invoked before its memory is deallocated. This can be achieved with the destroy function (always available through the implicitly imported object module).
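
A minimal sketch (hypothetical type) of tearing an instance down by hand:

struct Resource
{
    ~this() { /* release whatever the instance holds */ }
}

void dispose(Resource* r)
{
    import core.stdc.stdlib : free;

    destroy(*r);   // run the destructor; *r is left in its .init state
    free(r);       // then release the memory itself
}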

std.experimental.allocator

The entirety of the text above describes the fundamental building blocks of a custom memory manager. For many use cases, it may be sufficient to forego cobbling something together by hand and instead take advantage of the D standard library’s std.experimental.allocator package. This is a high-level API that makes use of low-level techniques like those described above, along with Design by Introspection, to facilitate the assembly of different types of allocators that know how to allocate, initialize, and construct arrays and type instances. Allocators like Mallocator and GCAllocator can be used to grab chunks of memory directly, or combined with other building blocks for specialized behavior. See the emsi-containers library for a real-world example.

Keeping the GC informed

Given that it’s rarely recommended to disable the GC entirely, most D programs allocating outside the GC heap will likely also be using memory from the GC heap in the same program. In order for the GC to properly do its job, it needs to be informed of any non-GC memory that contains, or may potentially contain, references to memory from the GC heap. For example, a linked list whose nodes are allocated with malloc might contain references to classes allocated with new.

The GC can be given the news via GC.addRange.

import core.memory;
import core.stdc.stdlib : malloc;

enum size = int.sizeof * 10;
void* p1 = malloc(size);
GC.addRange(p1, size);

void[] p2 = allocate(size);
GC.addRange(p2.ptr, p2.length);

When the memory block is no longer needed, the corresponding GC.removeRange can be called to prevent it from being scanned. This does not deallocate the memory block. That will need to be done manually via free or whatever allocator interface was used to allocate it. Be sure to read the documentation before using either function.

Given that one of the goals of allocating from outside the GC heap is to reduce the amount of memory the GC must scan, this may seem self-defeating. That’s the wrong way to look at it. If non-GC memory is going to hold references to GC memory, then it’s vital to let the GC know about it. Not doing so can cause the GC to free up memory to which a reference still exists. addRange is a tool specifically designed for that situation. If it can be guaranteed that no GC-memory references live inside a non-GC memory block, such as a malloced array of vertices, then addRange need not be called on that memory block.

A word of warning

Be careful when passing typed pointers to addRange. Because the function was implemented with the C like approach of taking a pointer to a block of memory and the number of bytes it contains, there is an opportunity for error.

struct Item { SomeClass foo; }
auto items = (cast(Item*)malloc(Item.sizeof * 10))[0 .. 10];
GC.addRange(items.ptr, items.length);

With this, the GC would be scanning a block of memory exactly ten bytes in size. The length property returns the number of elements the slice refers to. Only when the type is void (or the element type is one-byte long, like byte and ubyte) does it equate to the size of the memory block the slice refers to. The correct thing to do here is:

GC.addRange(items.ptr, items.length * Item.sizeof);

However, until DRuntime is updated with an alternative, it may be best to implement a wrapper that takes a void[] parameter.

void addRange(void[] mem) 
{
    import core.memory;
    GC.addRange(mem.ptr, mem.length);
}

Then calling addRange(items) will do the correct thing. The implicit conversion of the slice to void[] in the function call will mean that mem.length is the same as items.length * Item.sizeof.

The GC series marches on

This post has covered the very basics of using the non-GC heap in D programs. One glaring omission, in addition to class types, is what to do about destructors. I’m saving that topic for the post about classes, where it is highly relevant. That’s the next scheduled post in the GC series. Stay tuned!

Thanks to Walter Bright, Guillaume Piolat, Adam D. Ruppe, and Steven Schveighoffer for their valuable feedback on a draft of this article.

D as a Better C

Walter Bright is the BDFL of the D Programming Language and founder of Digital Mars. He has decades of experience implementing compilers and interpreters for multiple languages, including Zortech C++, the first native C++ compiler. He also created Empire, the Wargame of the Century. This post is the first in a series about D’s BetterC mode.


D was designed from the ground up to interface directly and easily to C, and to a lesser extent C++. This provides access to endless C libraries, the Standard C runtime library, and of course the operating system APIs, which are usually C APIs.

But there’s much more to C than that. There are large and immensely useful programs written in C, such as the Linux operating system and a very large chunk of the programs written for it. While D programs can interface with C libraries, the reverse isn’t true. C programs cannot interface with D ones. It’s not possible (at least not without considerable effort) to compile a couple of D files and link them in to a C program. The trouble is that compiled D files refer to things that only exist in the D runtime library, and linking that in (it’s a bit large) tends to be impractical.

D code also can’t exist in a program unless D controls the main() function, which is how the startup code in the D runtime library is managed. Hence D libraries remain inaccessible to C programs, and chimera programs (a mix of C and D) are not practical. One cannot pragmatically “try out” D by add D modules to an existing C program.

That is, until Better C came along.

It’s been done before; it’s an old idea. Bjarne Stroustrup wrote a paper in 1988 entitled “A Better C”. His early C++ compiler was able to compile C code pretty much unchanged, and then one could start using C++ features here and there as they made sense, all without disturbing the existing investment in C. This was a brilliant strategy, and it drove the early success of C++.

A more modern example is Kotlin, which uses a different method. Kotlin syntax is not compatible with Java, but it is fully interoperable with Java, relies on the existing Java libraries, and allows a gradual migration of Java code to Kotlin. Kotlin is indeed a “Better Java”, and this shows in its success.

D as Better C

D takes a radically different approach to making a better C. It is not an extension of C, it is not a superset of C, and does not bring along C’s longstanding issues (such as the preprocessor, array overflows, etc.). D’s solution is to subset the D language, removing or altering features that require the D startup code and runtime library. This is, simply, the charter of the -betterC compiler switch.

Doesn’t removing things from D make it no longer D? That’s a hard question to answer, and it’s really a matter of individual preference. The vast bulk of the core language remains. Certainly the D characteristics that are analogous to C remain. The result is a language somewhere in between C and D, but that is fully upward compatible with D.

Removed Things

Most obviously, the garbage collector is removed, along with the features that depend on the garbage collector. Memory can still be allocated the same way as in C – using malloc() or some custom allocator.

Although C++ classes and COM classes will still work, D polymorphic classes will not, as they rely on the garbage collector.

Exceptions, typeid, static construction/destruction, RAII, and unittests are removed. But it is possible we can find ways to add them back in.

Asserts are altered to call the C runtime library assert fail functions rather than the D runtime library ones.

(This isn’t a complete list; for the full list, see http://dlang.org/dmd-windows.html#switch-betterC.)

Retained Things

More importantly, what remains?

What may be initially most important to C programmers is memory safety in the form of array overflow checking, no more stray pointers into expired stack frames, and guaranteed initialization of locals. This is followed by what is expected in a modern language — modules, function overloading, constructors, member functions, Unicode, nested functions, dynamic closures, Compile Time Function Execution, automated documentation generation, highly advanced metaprogramming, and Design by Introspection.

Footprint

Consider a C program:

#include <stdio.h>

int main(int argc, char** argv) {
    printf("hello world\n");
    return 0;
}

It compiles to:

_main:
push EAX
mov [ESP],offset FLAT:_DATA
call near ptr _printf
xor EAX,EAX
pop ECX
ret

The executable size is 23,068 bytes.

Translate it to D:

import core.stdc.stdio;

extern (C) int main(int argc, char** argv) {
    printf("hello world\n");
    return 0;
}

The executable size is the same, 23,068 bytes. This is unsurprising because the C compiler and D compiler generate the same code, as they share the same code generator. (The equivalent full D program would clock in at 194Kb.) In other words, nothing extra is paid for using D rather than C for the same code.

The Hello World program is a little too trivial. Let’s step up in complexity to the infamous sieve benchmark program:

#include <stdio.h>

/* Eratosthenes Sieve prime number calculation. */

#define true    1
#define false   0
#define size    8190
#define sizepl  8191

char flags[sizepl];

int main() {
    int i, prime, k, count, iter;

    printf ("10 iterations\n");
    for (iter = 1; iter <= 10; iter++) {
        count = 0;
        for (i = 0; i <= size; i++)
            flags[i] = true;
        for (i = 0; i <= size; i++) {
            if (flags[i]) {
                prime = i + i + 3;
                k = i + prime;
                while (k <= size) {
                    flags[k] = false;
                    k += prime;
                }
                count += 1;
            }
        }
    }
    printf ("\n%d primes", count);
    return 0;
}

Rewriting it in Better C:

import core.stdc.stdio;

extern (C):

__gshared bool[8191] flags;

int main() {
    int count;

    printf("10 iterations\n");
    foreach (iter; 1 .. 11) {
        count = 0;
        flags[] = true;
        foreach (i; 0 .. flags.length) {
            if (flags[i]) {
                const prime = i + i + 3;
                auto k = i + prime;
                while (k < flags.length) {
                    flags[k] = false;
                    k += prime;
                }
                count += 1;
            }
        }
    }
    printf("%d primes\n", count);
    return 0;
}

It looks much the same, but some things are worthy of note:

  • extern (C): means the declarations that follow use C linkage and the C calling convention.
  • D normally puts static data into thread-local storage, while C puts it in global storage; __gshared accomplishes the latter.
  • foreach is a simpler way of doing for loops over known endpoints.
  • flags[] = true; sets all the elements in flags to true in one go.
  • Using const tells the reader that prime never changes once it is initialized.
  • The types of iter, i, prime and k are inferred, preventing inadvertent type coercion errors.
  • The number of elements in flags is given by flags.length, not some independent variable.

And the last item leads to a very important hidden advantage: accesses to the flags array are bounds checked. No more overflow errors! We didn’t have to do anything in particular to get that, either.

This is only the beginning of how D as Better C can improve the expressivity, readability, and safety of your existing C programs. For example, D has nested functions, which in my experience work very well at prying goto’s from my cold, dead fingers.
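
As a rough sketch of that idea (writeSquares and everything in it is hypothetical, not code from a real program), a nested function can take over the job of the classic goto fail cleanup pattern in C:

import core.stdc.stdlib : malloc, free;
import core.stdc.stdio : FILE, fopen, fclose, fprintf;

// Where C might reach for "goto fail" to share cleanup, a nested
// function keeps the cleanup in one place and sees the locals directly.
extern (C) int writeSquares(const(char)* path) {
    FILE* f = fopen(path, "w");
    int* buf = null;

    int fail() {
        if (buf) free(buf);
        if (f) fclose(f);
        return 1;
    }

    if (f is null)
        return fail();

    buf = cast(int*) malloc(100 * int.sizeof);
    if (buf is null)
        return fail();

    foreach (i; 0 .. 100) {
        buf[i] = i * i;
        fprintf(f, "%d\n", buf[i]);
    }

    free(buf);
    fclose(f);
    return 0;
}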

On a more personal note, ever since -betterC started working, I’ve been converting many of my old C programs still in use into D, one function at a time. Doing it one function at a time, and running the test suite after each change, keeps the program in a correctly working state at all times. If the program doesn’t work, I only have one function to look at to see where it went wrong. I don’t particularly care to maintain C programs anymore, and with -betterC there’s no longer any reason to.

The Better C ability of D is available in the 2.076.0 beta: download it and read the changelog.

Don’t Fear the Reaper

D, like many other programming languages in active use today, comes with a garbage collector out of the box. There are many types of software that can be written without worrying at all about the GC, taking full advantage of its benefits. But the GC does have drawbacks, and there are certainly scenarios in which garbage collection is undesirable. For those situations, the language provides the means to temporarily disable it, or even avoid it completely.

In order to maximize the positive impacts of garbage collection and minimize any negative, it’s necessary to have a good grounding in how the GC works in D. A good place to start is the Garbage Collection page at dlang.org, which outlines the rationale for D’s GC and provides some tips on working with it. This post is the first in a series intended to expand on the information provided on that page.

This time, we’ll look at the very basics, focusing on the language features that can trigger GC allocations. Future posts will introduce ways to disable the GC when necessary, as well as idioms useful in dealing with its nondeterministic nature (e.g. managing resources in the destructors of GC-managed objects).

The first thing to understand about D’s garbage collector is that it only runs during allocation, and only if there is no memory available to allocate. It isn’t sitting in the background, periodically scanning the heap and collecting garbage. This knowledge is fundamental to writing code that makes efficient use of GC-managed memory. Take the following example:

void main() {
    int[] ints;
    foreach(i; 0..100) {
        ints ~= i;
    }
}

This declares a dynamic array of int, then uses D’s append operator to append the numbers 0 to 99 in a foreach range loop. What isn’t obvious to the untrained eye is that the append operator makes use of the GC heap to allocate space for the values it adds to the array.

DRuntime’s array implementation isn’t a dumb one. In the example, there aren’t going to be one hundred allocations, one for each value. When more memory is needed, the implementation will allocate more space than is requested. In this particular case, it’s possible to determine how many allocations are actually made by making use of the capacity property of D’s dynamic arrays and slices. This returns the total number of elements the array can hold before an allocation is necessary.

void main() {
    import std.stdio : writefln;
    int[] ints;
    size_t before, after;
    foreach(i; 0..100) {
        before = ints.capacity;
        ints ~= i;
        after = ints.capacity;
        if(before != after) {
            writefln("Before: %s After: %s",
                before, after);
        }
    }
}

Executing this when compiled with DMD 2.073.2 shows the message is printed six times, meaning there were six total GC allocations in the loop and, therefore, six opportunities for the GC to collect garbage. In this small example, it almost certainly didn’t. If this loop were part of a larger program, with GC allocations throughout, it very well might.

On a side note, it’s informative to examine the values of before and after. Doing so shows a sequence of 0, 3, 7, 15, 31, 63, and 127. So by the end, ints contains 100 values and has space for appending 27 more before the next allocation, which, extrapolating from the values in the sequence, should be 255. That’s an implementation detail of DRuntime, though, and could be tweaked or changed in any release. For more details on how arrays and slices are managed by the GC, take a look at Steven Schveighoffer’s excellent article on the topic.

So, six allocations, six opportunities for the GC to initiate one of its pauses of unpredictable length, even in that simple little loop. In the general case, whether that becomes an issue depends on whether the loop is in a hot part of the code and on how much total memory is allocated from the GC heap. But even then, it’s not necessarily a reason to disable the GC in that part of the code.

Even with languages that don’t come with a stock GC out of the box, like C and C++, most programmers learn at some point that it’s better for overall performance to allocate as much as possible up front and minimize allocations in the inner loops. It’s one of the many types of premature optimization that are not actually the root of all evil, something we tend to call best practice. Given that D’s GC only runs when memory is allocated, the same strategy can be applied as a simple way to mitigate any potential impact it could have on performance. Here’s one way to rewrite the example:

void main() {
    int[] ints = new int[](100);
    foreach(i; 0..100) {
        ints[i] = i;
    }
}

Now we’ve gone from six allocations down to one. The only opportunity the GC has to run is before the inner loop. This actually allocates space for at least 100 values and initializes them all to 0 before entering the loop. The array will have a length of 100 after new, but will almost certainly have additional capacity.
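
One quick way to see that for yourself is to print both properties side by side; the exact capacity reported is an implementation detail of DRuntime and may vary between releases:

void main() {
    import std.stdio : writeln;
    int[] ints = new int[](100);
    // length is exactly 100; capacity is typically somewhat larger.
    writeln(ints.length, " ", ints.capacity);
}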

There’s an alternative to new for arrays: the reserve function:

void main() {
    int[] ints;
    ints.reserve(100);
    foreach(i; 0..100) {
        ints ~= i;
    }
}

This allocates memory for at least 100 values, but the array is still empty (its length property will return 0) when reserve returns, so nothing is default initialized. Given that the loop only appends 100 values, it’s guaranteed not to allocate.

In addition to new and reserve, it’s possible to call GC.malloc directly for explicit allocation.

import core.memory;
// Ask the GC for an untyped block big enough to hold 100 ints...
void* intsPtr = GC.malloc(int.sizeof * 100);
// ...then slice the pointer to treat that block as a 100-element int[].
auto ints = (cast(int*)intsPtr)[0 .. 100];

Array literals will usually allocate.

auto ints = [0, 1, 2];

This is also true when an array literal enum is used.

enum intsLiteral = [0, 1, 2];
auto ints1 = intsLiteral;
auto ints2 = intsLiteral;

An enum exists only at compile time and has no memory address. Its name is a synonym for its value. Everywhere you use one, it’s like copying and pasting its value in place of its name. Both ints1 and ints2 trigger allocations exactly as if they were declared like so:

auto ints1 = [0, 1, 2];
auto ints2 = [0, 1, 2];

Array literals do not allocate when the target is a static array. Also, string literals (strings in D are arrays under the hood) are an exception to the rule.

int[3] noAlloc1 = [0, 1, 2];
auto noAlloc2 = "No Allocation!";

The concatenation operator always allocates:

auto a1 = [0, 1, 2];
auto a2 = [3, 4, 5];
auto a3 = a1 ~ a2;

D’s associative arrays have their own allocation strategy, but you can expect them to allocate when items are added and potentially when they are removed. They also expose two properties, keys and values, which both allocate arrays and fill them with copies of the respective items. When it’s desirable to modify the underlying associative array during iteration, or when the items need to be sorted or otherwise manipulated independently of the associative array, these properties are just what the doctor ordered. Otherwise, they’re needless allocations that put undue pressure on the GC.

When the GC does run, the total amount of memory it needs to scan is going to determine how long it takes. The smaller, the better. Avoiding unnecessary allocations isn’t going to hurt anyone and is another good mitigation strategy. D’s associative arrays provide three properties that help do just that: byKey, byValue, and byKeyValue. These each return forward ranges that can be iterated lazily. They do not allocate because they actually refer to the items in the associative array, so it should not be modified while iterating them. For more on ranges, see the chapters titled Ranges and More Ranges in Ali Çehreli’s Programming in D.
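
Here’s a small sketch contrasting the two approaches; the associative array and its contents are made up purely for illustration:

void main() {
    import std.stdio : writeln;

    int[string] ages = ["Alice": 30, "Bob": 41];

    // keys allocates a new array and copies the keys into it.
    string[] copied = ages.keys;
    writeln(copied);

    // byKeyValue iterates the items in place, with no allocation.
    foreach (item; ages.byKeyValue) {
        writeln(item.key, ": ", item.value);
    }
}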

Closures, which are delegates or function literals that need to carry around a pointer to the local stack frame, may also allocate. The last allocating language feature listed on the Garbage Collection page is the assertion. An assertion will allocate when it fails because it needs to throw an AssertError, which is part of D’s class-based exception hierarchy (we’ll look at how classes interact with the GC in a future post).
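
For example, this minimal sketch returns a delegate that outlives the function it was declared in, so the captured variable has to live on the GC heap (makeCounter is just an illustrative name):

// The returned delegate still needs count after makeCounter returns,
// so the enclosing frame is allocated on the GC heap -- a closure.
int delegate() makeCounter() {
    int count;
    return () => ++count;
}

void main() {
    import std.stdio : writeln;
    auto next = makeCounter();
    writeln(next());  // 1
    writeln(next());  // 2
}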

Then there’s Phobos, D’s standard library. Once upon a time, much of Phobos was implemented with little concern for GC allocations, making it difficult to use in situations where they were undesirable. However, a massive effort was initiated to make it more conservative in its GC usage. Some functions were made to work with lazy ranges, others were rewritten to take buffers supplied by the caller, and some were reimplemented to avoid unnecessary allocations internally. The result is a standard library more amenable to GC-free code (though there are still probably some corners of the library that have not yet been renovated — PRs welcome).
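
As one example of the buffer-taking pattern, std.format.sformat writes the formatted result into a caller-supplied buffer rather than allocating a new string (the buffer size and values here are arbitrary):

void main() {
    import std.format : sformat;
    import std.stdio : writeln;

    // A fixed-size buffer on the stack; sformat writes into it and
    // returns a slice of it, so the result itself needs no GC allocation.
    char[64] buf;
    auto result = sformat(buf[], "%d primes", 100);
    writeln(result);
}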

Now that we’ve seen the basics of using the GC, the next post in this series looks at the tools the language and the compiler provide for turning the GC off and making sure specific sections of code are GC-free.

Thanks to Guillaume Piolat and Steven Schveighoffer for their help with this article.