
Memory Safety in a Modern Systems Programming Language Part 3

The first entry in this series shows how to use the new DIP1000 rules to have slices and pointers refer to the stack, all while being memory safe. The second entry in this series teaches about the ref storage class and how DIP1000 works with aggregate types (classes, structs, and unions).

So far the series has deliberately avoided templates and auto functions. This kept the first two posts simpler in that they did not have to deal with function attribute inference, which I have referred to as “attribute auto inference” in earlier posts. However, both auto functions and templates are very common in D code, so a series on DIP1000 can’t be complete without explaining how those features work with the language changes. Function attribute inference is our most important tool in avoiding so-called “attribute soup”, where a function is decorated with several attributes, which arguably decreases readability.

We will also dig deeper into unsafe code. The previous two posts in this series focused on the scope attribute, but this post is more focused on attributes and memory safety in general. Since DIP1000 is ultimately about memory safety, we can’t get around discussing those topics.

Avoiding repetition with attributes

Function attribute inference means that the language will analyze the body of a function and will automatically add the @safe, pure, nothrow, and @nogc attributes where applicable. It will also attempt to add scope or return scope attributes to parameters and return ref to ref parameters that can’t otherwise be compiled. Some attributes are never inferred. For instance, the compiler will not insert any ref, lazy, out, or @trusted attributes, because omitting them is almost certainly intentional.

There are many ways to turn on function attribute inference. One is by omitting the return type in the function signature. Note that the auto keyword is not required for this. auto is a placeholder keyword used when no return type, storage class, or attribute is specified. For example, the declaration half(int x) { return x/2; } does not parse, so we use auto half(int x) { return x/2; } instead. But we could just as well write @safe half(int x) { return x/2; } and the rest of the attributes (pure, nothrow, and @nogc) will be inferred just as they are with the auto keyword.
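
A minimal sketch of this (the unittest is illustrative, not from the original):

@safe half(int x) { return x/2; }

// If pure, nothrow, and @nogc were not inferred for half,
// this unittest would not compile.
@safe pure nothrow @nogc unittest
{
    assert(half(12) == 6);
}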

The second way to enable attribute inference is to templatize the function. With our half example, it can be done this way:

int divide(int denominator)(int x) { return x/denominator; }
alias half = divide!2;

The D spec does not say that a template must have any parameters. An empty parameter list can be used to turn attribute inference on: int half()(int x) { return x/2; }. Calling this function doesn’t even require the template instantiation syntax at the call site, e.g., half!()(12) is not required as half(12) will compile.

Another means to turn on attribute inference is to store the function inside another function. These are called nested functions. Inference is enabled not only on functions nested directly inside another function but also on most things nested in a type or a template inside the function. Example:

@safe void parentFun()
{
    // This is auto-inferred.
    int half(int x){ return x/2; }

    class NestedType
    {
        // This is auto inferred
        final int half1(int x) { return x/2; }

        // This is not auto inferred; it's a
        // virtual function and the compiler
        // can't know if it has an unsafe override
        // in a derived class.
        int half2(int x) { return x/2; }
    }

    int a = half(12); // Works. Inferred as @safe.
    auto cl = new NestedType;
    int b = cl.half1(18); // Works. Inferred as @safe.
    int c = cl.half2(26); // Error.
}

A downside of nested functions is that they can only be used in lexical order (the call site must be below the function declaration) unless both the nested function and the call are inside the same struct, class, union, or template that is in turn inside the parent function. Another downside is that they don’t work with Uniform Function Call Syntax.
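
A sketch of both limitations (names are illustrative):

@safe void parent()
{
    // int early = half(10); // Error: undefined identifier half

    int half(int x) { return x/2; }

    int late = half(10); // OK: the call site is below the declaration
    // int ufcs = 10.half; // Error: UFCS only considers module-level
                           // functions, never nested ones
}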

Finally, attribute inference is always enabled for function literals (a.k.a. lambda functions). The halving function would be defined as enum half = (int x) => x/2; and called exactly as normal. However, the language does not consider this declaration a function. It considers it a function pointer. This means that in global scope it’s important to use enum or immutable instead of auto. Otherwise, the lambda can be changed to something else from anywhere in the program and cannot be accessed from pure functions. In rare cases, such mutability can be desirable, but most often it is an antipattern (like global variables in general).
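
A sketch of the difference at global scope (halfMutable is an illustrative name):

enum half = (int x) => x/2;        // a compile-time constant
auto halfMutable = (int x) => x/2; // a mutable global function pointer

@safe pure unittest
{
    assert(half(12) == 6); // fine
    // Error: a pure function cannot access the mutable
    // global halfMutable.
    // assert(halfMutable(12) == 6);
}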

Limits of inference

Aiming for minimal manual typing isn’t always wise. Neither is aiming for maximal attribute bloat.

The primary problem of auto inference is that subtle changes in the code can lead to inferred attributes turning on and off in an uncontrolled manner. To see when it matters, we need to have an idea of what will be inferred and what will not.

The compiler in general will go to great lengths to infer @safe, pure, nothrow, and @nogc attributes. If your function can have those, it almost always will. The specification says that recursion is an exception: a function calling itself should not be @safe, pure, or nothrow unless explicitly specified as such. But in my testing, I found those attributes actually are inferred for recursive functions. It turns out, there is an ongoing effort to get recursive attribute inference working, and it partially works already.
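
A quick sketch for checking what your compiler version does; depending on the state of that effort, it may or may not compile:

// Templatized to turn inference on.
int factorial()(int n) { return n < 2 ? 1 : n * factorial(n - 1); }

@safe pure nothrow @nogc unittest
{
    // Compiles only if the attributes are inferred
    // despite the recursion.
    assert(factorial(5) == 120);
}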

Inference of scope and return on function parameters is less reliable. In the most mundane cases, it’ll work, but the compiler gives up pretty quickly. The smarter the inference engine is, the more time it takes to compile, so the current design decision is to infer those attributes in only the simplest of cases.

Where to let the compiler infer?

A D programmer should get into the habit of asking, “What will happen if I mistakenly do something that makes this function unsafe, impure, throwing, garbage-collecting, or escaping?” If the answer is “immediate compiler error”, auto inference is probably fine. On the other hand, the answer could be “user code will break when updating this library I’m maintaining”. In that case, annotate manually.

In addition to the potential of losing attributes the author intends to apply, there is also another risk:

@safe pure nothrow @nogc firstNewline(string from)
{
    foreach(i; 0 .. from.length) switch(from[i])
    {
        case '\r':
        if(from.length > i+1 && from[i+1] == '\n')
        {
            return "\r\n";
        }
        else return "\r";

        case '\n': return "\n";

        default: break;
    }

    return "";
}

You might think that since the author is manually specifying the attributes, there’s no problem. Unfortunately, that’s wrong. Suppose the author decides to rewrite the function such that all the return values are slices of the from parameter rather than string literals:

@safe pure nothrow @nogc firstNewline(string from)
{
    foreach(i; 0 .. from.length) switch(from[i])
    {
        case '\r':
        if (from.length > i + 1 && from[i + 1] == '\n')
        {
            return from[i .. i + 2];
        }
        else return from[i .. i + 1];

        case '\n': return from[i .. i + 1];

        default: break;
    }

    return "";
}

Surprise! The parameter from was previously inferred as scope, and a library user was relying on that, but now it’s inferred as return scope instead, breaking client code.
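
A sketch of client code that this change would break (assuming the client relied on the earlier scope inference):

@safe string client()
{
    immutable(char)[16] buffer = "no newline here!";

    // With from inferred as scope, the result could not refer to
    // the argument, so returning it compiled. With from inferred as
    // return scope, the result is treated as a slice of the
    // stack-allocated buffer, and this return becomes an error.
    return firstNewline(buffer[]);
}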

Still, for internal functions, auto inference is a great way to save both our fingers when writing and our eyes when reading. Note that it’s perfectly fine to rely on auto inference of the @safe attribute as long as the function is used in explicitly @safe functions or unit tests. If something potentially unsafe is done inside the auto-inferred function, it gets inferred as @system, not @trusted. Calling a @system function from a @safe function results in a compiler error, meaning auto inference is safe to rely on in this case.

It still sometimes makes sense to manually apply attributes to internal functions, because the error messages generated when they are violated tend to be better with manual attributes.

What about templates?

Auto inference is always enabled for templated functions. What if a library interface needs to expose one? There is a way to block the inference, albeit an ugly one:

private template FunContainer(T)
{
    // Not auto inferred
    // (only eponymous template functions are)
    @safe T fun(T arg){return arg + 3;}
}

// Auto inferred per se, but since the function it calls
// is not, only @safe is inferred.
auto addThree(T)(T arg){return FunContainer!T.fun(arg);}

However, which attributes a template should have often depends on its compile-time parameters. It would be possible to use metaprogramming to designate attributes depending on the template parameters, but that would be a lot of work, hard to read, and easily as error-prone as relying on auto inference.

It’s more practical to just test that the function template infers the wanted attributes. Such testing doesn’t have to, and probably shouldn’t, be done manually each time the function is changed. Instead:

float multiplyResults(alias fun)(float[] arr)
    if (is(typeof(fun(new float)) : float))
{
    float result = 1.0f;
    foreach (ref e; arr) result *= fun(&e);
    return result;
}

@safe pure nothrow unittest
{
    float fun(float* x){return *x+1;}
    // Using a static array makes sure
    // arr argument is inferred as scope or
    // return scope
    float[5] elements = [1.0f, 2.0f, 3.0f, 4.0f, 5.0f];

    // No need to actually do anything with
    // the result. The idea is that since this
    // compiles, multiplyResults is proven @safe
    // pure nothrow, and its argument is scope or
    // return scope.
    multiplyResults!fun(elements);
}

Thanks to D’s compile-time introspection powers, testing against unwanted attributes is also covered:

@safe unittest
{
    import std.traits : attr = FunctionAttribute,
        functionAttributes, isSafe;

    float fun(float* x)
    {
        // Makes the function both throwing
        // and garbage collector dependent.
        if (*x > 5) throw new Exception("");
        static float* impureVar;

        // Makes the function impure.
        auto result = impureVar? *impureVar: 5;

        // Makes the argument unscoped.
        impureVar = x;
        return result;
    }

    enum attrs = functionAttributes!(multiplyResults!fun);

    assert(!(attrs & attr.nothrow_));
    assert(!(attrs & attr.nogc));

    // Checks against accepting scope arguments.
    // Note that this check does not work with
    // @system functions.
    assert(!isSafe!(
    {
        float[5] stackFloats;
        multiplyResults!fun(stackFloats[]);
    }));

    // It's a good idea to do positive tests with
    // similar methods to make sure the tests above
    // would fail if the tested function had the
    // wrong attributes.
    assert(attrs & attr.safe);
    assert(isSafe!(
    {
        float[] heapFloats;
        multiplyResults!fun(heapFloats[]);
    }));
}

If assertion failures are wanted at compile time before the unit tests are run, adding the static keyword before each of those asserts will get the job done. Those compiler errors can even be had in non-unittest builds by converting that unit test to a regular function, e.g., by replacing @safe unittest with, say, private @safe testAttrs().
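
For example, the earlier test could be converted like this (a sketch):

private @safe testAttrs()
{
    import std.traits : attr = FunctionAttribute,
        functionAttributes;

    static float fun(float* x) { return *x + 1; }

    enum attrs = functionAttributes!(multiplyResults!fun);

    // Failures are now reported at compile time,
    // even in non-unittest builds.
    static assert(attrs & attr.safe);
    static assert(attrs & attr.nothrow_);
}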

The live fire exercise: @system

Let’s not forget that D is a systems programming language. As this series has shown, in most D code the programmer is well protected from memory errors, but D would not be D if it didn’t allow going low-level and bypassing the type system in the same manner as C or C++: bit arithmetic on pointers, writing and reading directly to hardware ports, executing a struct destructor on a raw byte blob… D is designed to do all of that.

The difference is that in C and C++ it takes only one mistake to break the type system and cause undefined behavior anywhere in the code. A D programmer is only at risk when not in a @safe function, or when using dangerous compiler switches such as -release or -check=assert=off (failing a disabled assertion is undefined behavior), and even then the semantics tend to be less UB-prone. For example:

float cube(float arg)
{
    float result;
    result *= arg;
    result *= arg;
    return result;
}

This is a language-agnostic function that compiles in C, C++, and D. Someone intended to calculate the cube of arg but forgot to initialize result with arg. In D, nothing dangerous happens despite this being a @system function. No initialization value means result is default initialized to NaN (not-a-number), which leads to the result also being NaN, which is a glaringly obvious “error” value when using this function the first time.

However, in C and C++, not initializing a local variable means reading it is (sans a few narrow exceptions) undefined behavior. This function does not even handle pointers, yet according to the standard, calling this function could just as well have *(int*) rand() = 0XDEADBEEF; in it, all due to a trivial mistake. While many compilers with enabled warnings will catch this one, not all do, and these languages are full of similar examples where even warnings don’t help.

In D, even if you explicitly requested no default initialization with float result = void, it’d just mean the return value of the function is undefined, not anything and everything that happens if the function is called. Consequently, that function could be annotated @safe even with such an initializer.
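
A sketch of that variant:

@safe float cube(float arg)
{
    // Explicitly skip default initialization. For a float this is
    // still allowed in @safe code; only the value of result is
    // undefined, not the behavior of the program.
    float result = void;
    result *= arg;
    result *= arg;
    return result;
}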

Still, for anyone who cares about memory safety, as they probably should for anything intended for a wide audience, it’s a bad idea to assume that D @system code is safe enough to be the default mode. Two examples will demonstrate what can happen.

What undefined behavior can do

Some people assume that “Undefined Behavior” simply means “erroneous behavior” or crashing at runtime. While that is often what ultimately happens, undefined behavior is far more dangerous than, say, an uncaught exception or an infinite loop. The difference is that with undefined behavior, you have no guarantees at all about what happens. This might not sound any worse than an infinite loop, but an accidental infinite loop is discovered the first time it’s entered. Code with undefined behavior, on the other hand, might do what was intended when it’s tested, but then do something completely different in production. Even if the code is tested with the same flags it’s compiled with in production, the behavior may change from one compiler version to another, or when making completely unrelated changes to the code. Time for an example:

// return whether the exception itself is in the array
bool replaceExceptions(Object[] arr, ref Exception e)
{
    bool result;
    foreach (ref o; arr)
    {
        if (&o is &e) result = true;
        if (cast(Exception) o) o = e;
    }

    return result;
}

The idea here is that the function replaces all exceptions in the array with e. If e itself is in the array, it returns true, otherwise false. And indeed, testing confirms it works. The function is used like this:

auto arr = [new Exception("a"), null, null, new Exception("c")];
auto result = replaceExceptions
(
    cast(Object[]) arr,
    arr[3]
);

This cast is not a problem, right? Object references are always of the same size regardless of their type, and we’re casting the exceptions to the parent type, Object. It’s not like the array contains anything other than object references.

Unfortunately, that’s not how the D specification views it. Having two class references (or any references, for that matter) in the same memory location but with different types, and then assigning one of them to the other, is undefined behavior. That’s exactly what happens in

if (cast(Exception) o) o = e;

if the array does contain the e argument. Since true can only be returned when undefined behavior is triggered, it means that any compiler would be free to optimize replaceExceptions to always return false. This is a dormant bug no amount of testing will find, but that might, years later, completely mess up the application when compiled with the powerful optimizations of an advanced compiler.

It may seem that requiring a cast to use a function is an obvious warning sign that a good D programmer would not ignore. I wouldn’t be so sure. Casts aren’t that rare even in fine high-level code. Even if you disagree, other cases are provably bad enough to bite anyone. Last summer, this case appeared in the D forums:

string foo(in string s)
{
    return s;
}

void main()
{
    import std.stdio;
    string[] result;
    foreach(c; "hello")
    {
        result ~= foo([c]);
    }
    writeln(result);
}

This problem was encountered by Steven Schveighoffer, a long-time D veteran who has himself lectured about @safe and @system on more than one occasion. Anything that can burn him can burn any of us.

Normally, this works just as one would think and is fine according to the spec. However, if one enables another soon-to-be-default language feature with the -preview=in compiler switch along with DIP1000, the program starts malfunctioning. The old semantics for in are the same as const, but the new semantics make it const scope.

Since the argument of foo is scope, the compiler assumes that foo will copy [c] before returning it, or return something else, and therefore it allocates [c] on the same stack position for each of the “hello” letters. The result is that the program prints ["o", "o", "o", "o", "o"]. At least for me, it’s already somewhat hard to understand what’s happening in this simple example. Hunting down this sort of bug in a complex codebase could be a nightmare.

(With my nightly DMD version somewhere between 2.100 and 2.101 a compile-time error is printed instead. With 2.100.2, the example runs as described above.)

The fundamental problem in both of these examples is the same: @safe is not used. Had it been, both of these undefined behaviors would have resulted in compilation errors (the replaceExceptions function itself can be @safe, but the cast at the usage site cannot). By now it should be clear that @system code should be used sparingly.

When to proceed anyway

Sooner or later, though, the time comes when the guard rail has to be temporarily lowered. Here’s an example of a good use case:

/// Undefined behavior: Passing a non-null pointer
/// to a standalone character other than '\0', or
/// to an array without '\0' at or after the
/// pointed character, as utf8Stringz
extern(C) @system pure
bool phobosValidateUTF8(const char* utf8Stringz)
{
    import std.string, std.utf;

    try utf8Stringz.fromStringz.validate();
    catch (UTFException) return false;

    return true;
}

This function lets code written in another language validate a UTF-8 string using Phobos. C being C, it tends to use zero-terminated strings, so the function accepts a pointer to one as the argument instead of a D array. This is why the function has to be unsafe. There is no way to safely check that utf8Stringz is pointing to either null or a valid C string. If the character being pointed to is not '\0', meaning the next character has to be read, the function has no way of knowing whether that character belongs to the memory allocated for the string. It can only trust that the calling code got it right.

Still, this function is a good use of the @system attribute. First, it is presumably called primarily from C or C++. Those languages do not get any safety guarantees anyway. Even a @safe function is safe only if it gets only those parameters that can be created in @safe D code. Passing cast(const char*) 0xFE0DA1 as an argument to a function is unsafe no matter what the attribute says, and nothing in C or C++ verifies what arguments are passed.

Second, the function clearly documents the cases that would trigger undefined behavior. However, it does not mention that passing an invalid pointer, such as the aforementioned cast(const char*) 0xFE0DA1, is UB, because UB is always the default assumption with @system-only values unless it can be shown otherwise.

Third, the function is small and easy to review manually. No function should be needlessly big, but it’s many times more important than usual to keep @system and @trusted functions small and simple to review. @safe functions can be debugged to pretty good shape by testing, but as we saw earlier, undefined behavior can be immune to testing. Analyzing the code is the only general answer to UB.

There is a reason why the parameter does not have a scope attribute. It could have one; no pointers to the string are escaped. However, it would not provide many benefits. Any code calling the function has to be @system, @trusted, or in a foreign language, meaning they can pass a pointer to the stack in any case. scope could potentially improve the performance of D client code in exchange for the increased potential for undefined behavior if this function is erroneously refactored. Such a tradeoff is unwanted in general unless it can be shown that the attribute helps with a performance problem. On the other hand, the attribute would make it clearer for the reader that the string is not supposed to escape. It’s a difficult judgment call whether scope would be a wise addition here.

Further improvements

It should be documented why a @system function is @system when it’s not obvious. Often there is a safer alternative—our example function could have taken a D array or the CString struct from the previous post in this series. Why was an alternative not taken? In our case, we could write that the ABI would be different for either of those options, complicating matters on the C side, and the intended client (C code) is unsafe anyway.

@trusted functions are like @system functions, except they can be called from @safe functions, whereas @system functions cannot. When something is declared @trusted, it means the authors have verified that it’s just as safe to use as an actual @safe function with any arguments that can be created within safe code. They need to be just as carefully reviewed, if not more so, as @system functions.

In these situations, it should be documented (for other developers, not users) how the function was deemed to be safe in all situations. Or, if the function is not fully safe to use and the attribute is just a temporary hack, it should have a big ugly warning about that.

Such greenwashing is of course highly discouraged, but if there’s a codebase full of @system code that’s just too difficult to make @safe otherwise, it’s better than giving up. Even as we often talk about the dangers of UB and memory corruption, in our actual work our attitudes tend to be much more carefree, meaning such codebases are unfortunately common.

It might be tempting to define a small @trusted function inside a bigger @safe function to do something unsafe without disabling checks for the whole function:

extern(C) @safe pure
bool phobosValidateUTF8(const char* utf8Stringz)
{
    import std.string, std.utf;

    try (() @trusted => utf8Stringz.fromStringz)()
      .validate();
    catch (UTFException) return false;

    return true;
}

Keep in mind though, that the parent function needs to be documented and reviewed like an overt @trusted function because the encapsulated @trusted function can let the parent function do anything. In addition, since the function is marked @safe, it isn’t obvious on a first look that it’s a function that needs special care. Thus, a visible warning comment is needed if you elect to use @trusted like this.

Most importantly, don’t trust yourself! Just like any codebase of non-trivial size has bugs, more than a handful of @system functions will include latent UB at some point. The remaining hardening features of D, meaning asserts, contracts, invariants, and bounds checking should be used aggressively and kept enabled in production. This is recommended even if the program is fully @safe. In addition, a project with a considerable amount of unsafe code should use external tools like LLVM address sanitizer and Valgrind to at least some extent.
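
As a minimal sketch of one such check, an in contract on an illustrative function (active as long as contracts are not compiled out):

@safe int withdraw(int balance, int amount)
in (amount > 0 && amount <= balance)
{
    // A violated contract crashes the program immediately,
    // before the bad value can do more serious damage.
    return balance - amount;
}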

Note that the idea in many of these hardening tools, both those in the language and those of the external tools, is to crash as soon as any fault is detected. It decreases the chance of any surprise from undefined behavior doing more serious damage.

This requires that the program is designed to accept a crash at any moment. The program must never hold such amounts of unsaved data that there would be any hesitation in crashing it. If it controls anything important, it must be able to regain control after being restarted by a user or another process, or it must have another backup program. Any program that “can’t afford” to run potentially crashing checks is in no business to be trusted with systems programming either.

Conclusion

That concludes this blog series on DIP1000. There are some topics related to DIP1000 we have left up to the readers to experiment with themselves, such as associative arrays. Still, this should be enough to get them going.

Though we have uncovered some practical tips in addition to language rules, there surely is a lot more that could be said. Tell us your memory safety tips in the D forums!

Thanks to Walter Bright and Dennis Korpel for providing feedback on this article.

Memory Safety in a Modern Systems Programming Language Part 2


The previous entry in this series shows how to use the new DIP1000 rules to have slices and pointers refer to the stack, all while being memory safe. But D can refer to the stack in other ways, too, and that’s the topic of this article.

Object-oriented instances are the easiest case

In Part 1, I said that if you understand how DIP1000 works with pointers, then you understand how it works with classes. An example is worth more than mere words:

@safe Object ifNull(return scope Object a, return scope Object b)
{
    return a? a: b;
}

The return scope in the above example works exactly as it does in the following:

@safe int* ifNull(return scope int* a, return scope int* b)
{
    return a? a: b;
}

The principle is: if the scope or return scope storage class is applied to an object in a parameter list, the address of the object instance is protected just as if the parameter were a pointer to the instance. From the perspective of machine code, it is a pointer to the instance.

From the point of view of regular functions, that’s all there is to it. What about member functions of a class or an interface? This is how it’s done:

interface Talkative
{
    @safe const(char)[] saySomething() scope;
}

class Duck : Talkative
{
    char[8] favoriteWord;
    @safe const(char)[] saySomething() scope
    {
        import std.random : dice;

        // This wouldn't work
        // return favoriteWord[];

        // This does
        return favoriteWord[].dup;

        // Also returning something totally
        // different works. This
        // returns the first entry 40% of the time,
        // The second entry 40% of the time, and
        // the third entry the rest of the time.
        return
        [
            "quack!",
            "Quack!!",
            "QUAAACK!!!"
        ][dice(2,2,1)];
    }
}

scope positioned either before or after the member function name marks the this reference as scope, preventing it from leaking out of the function. Because the address of the instance is protected, nothing that refers directly to the address of the fields is allowed to escape either. That’s why return favoriteWord[] is disallowed; it’s a static array stored inside the class instance, so the returned slice would refer directly to it. favoriteWord[].dup on the other hand returns a copy of the data that isn’t located in the class instance, which is why it’s okay.

Alternatively one could replace the scope attributes of both Talkative.saySomething and Duck.saySomething with return scope, allowing the return of favoriteWord without duplication.
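
That variant would look like this (a sketch):

interface Talkative
{
    @safe const(char)[] saySomething() return scope;
}

class Duck : Talkative
{
    char[8] favoriteWord;
    @safe const(char)[] saySomething() return scope
    {
        // Now allowed: the returned slice may refer to
        // the instance, but only via the return value.
        return favoriteWord[];
    }
}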

DIP1000 and Liskov Substitution Principle

The Liskov substitution principle states, in simplified terms, that an inherited function can give the caller more guarantees than its parent function, but never fewer. DIP1000-related attributes fall in that category. The rule works like this:

  • if a parameter (including the implicit this reference) in the parent function has no DIP1000 attributes, the child function may designate it scope or return scope
  • if a parameter is designated scope in the parent, it must be designated scope in the child
  • if a parameter is return scope in the parent, it must be either scope or return scope in the child

If there is no attribute, the caller can not assume anything; the function might store the address of the argument somewhere. If return scope is present, the caller can assume the address of the argument is not stored other than in the return value. With scope, the guarantee is that the address is not stored anywhere, which is an even stronger guarantee. Example:

class C1
{
    double*[] incomeLog;
    @safe double* imposeTax(double* pIncome)
    {
        incomeLog ~= pIncome;
        return new double(*pIncome * .15);
    }
}

class C2 : C1
{
    // Okay from language perspective (but maybe not fair
    // for the taxpayer)
    override @safe double* imposeTax
        (return scope double* pIncome)
    {
        return pIncome;
    }
}

class C3 : C2
{
    // Also okay.
    override @safe double* imposeTax
        (scope double* pIncome)
    {
        return new double(*pIncome * .18);
    }
}

class C4: C3
{
    // Not okay. The pIncome parameter of C3.imposeTax
    // is scope, and this tries to relax the restriction.
    override @safe double* imposeTax
        (double* pIncome)
    {
        incomeLog ~= pIncome;
        return new double(*pIncome * .16);
    }
}

The special pointer, ref

We still have not uncovered how to use structs and unions with DIP1000. Well, obviously we’ve uncovered pointers and arrays. When referring to a struct or a union, they work the same as they do when referring to any other type. But pointers and arrays are not the canonical way to use structs in D. They are most often passed around by value, or by reference when bound to ref parameters. Now is a good time to explain how ref works with DIP1000.

They don’t work like just any pointer. Once you understand ref, you can use DIP1000 in many ways you otherwise could not.

A simple ref int parameter

The simplest possible way to use ref is probably this:

@safe void fun(ref int arg) {
    arg = 5;
}

What does this mean? ref is internally a pointer—think int* pArg—but is used like a value in the source code. arg = 5 works internally like *pArg = 5. Also, the client calls the function as if the argument were passed by value:

auto anArray = [1,2];
fun(anArray[1]); // or, via UFCS: anArray[1].fun;
// anArray is now [1, 5]

instead of fun(&anArray[1]). Unlike C++ references, D references can be null, but the application will instantly terminate with a segmentation fault if a null ref is used for something other than reading the address with the & operator. So this:

int* ptr = null;
fun(*ptr);

…compiles, but crashes at runtime because the assignment inside fun lands at the null address.

The address of a ref variable is always guarded against escape. In this sense @safe void fun(ref int arg){arg = 5;} is like @safe void fun(scope int* pArg){*pArg = 5;}. For example, @safe int* fun(ref int arg){return &arg;} will not compile, just like @safe int* fun(scope int* pArg){return pArg;} will not.

There is a return ref storage class, however, that allows returning the address of the parameter but no other form of escape, just like return scope. This means that @safe int* fun(return ref int arg){return &arg;} works.

A reference to a reference

A reference to an int or similar type already allows much nicer syntax than one can get with pointers. But the real power of ref shows when it refers to a type that is itself a reference—a pointer or a class, for instance. scope or return scope can be applied to the reference that the ref refers to. For example:

@safe float[] mergeSort(ref return scope float[] arr)
{
    import std.algorithm: merge;
    import std.array : Appender;

    if(arr.length < 2) return arr;

    auto firstHalf = arr[0 .. $/2];
    auto secondHalf = arr[$/2 .. $];

    Appender!(float[]) output;
    output.reserve(arr.length);

    foreach
    (
        el;
        firstHalf.mergeSort
        .merge!floatLess(secondHalf.mergeSort)
    )   output ~= el;

    arr = output[];
    return arr;
}

@safe bool floatLess(float a, float b)
{
    import std.math: isNaN;

    return a.isNaN? false:
          b.isNaN? true:
          a<b;
}

mergeSort here guarantees it won’t leak the address of the floats in arr except in the return value. This is the same guarantee that would be had from a return scope float[] arr parameter. But at the same time, because arr is a ref parameter, mergeSort can mutate the array passed to it. Then the client can write:

float[] values = [5, 1.5, 0, 19, 1.5, 1];
values.mergeSort;

With a non-ref argument, the client would have to write values = values.mergeSort instead (not using ref would be a perfectly reasonable API in this case, because we do not always want to mutate the original array). This is something that cannot be accomplished with pointers, because return scope float[]* arr would protect the address of the array’s metadata (the length and ptr fields of the array), not the address of its contents.

It is also possible to return a ref argument that is itself a scope reference. Since this example includes a unit test, remember to use the -unittest compiler flag to include it in the compiled binary.

@safe ref Exception nullify(return ref scope Exception obj)
{
    obj = null;
    return obj;
}

@safe unittest
{
    scope obj = new Exception("Error!");
    assert(obj.msg == "Error!");
    obj.nullify;
    assert(obj is null);
    // Since nullify returns by ref, we can assign
    // to its return value.
    obj.nullify = new Exception("Fail!");
    assert(obj.msg == "Fail!");
}

Here we return the address of the argument passed to nullify, but still guard both the address of the object pointer and the address of the class instance against being leaked by other channels.

return is a free keyword that does not mandate ref or scope to follow it. What does void* fun(ref scope return int*) mean then? The spec states that return without a trailing scope is always treated as return ref. This example thus is equivalent to void* fun(return ref scope int*). However, this only applies if there is a ref to bind to. Writing void* fun(scope return int*) means void* fun(return scope int*). It’s even possible to write void* fun(return int*) with the latter meaning, but I leave it up to you to decide whether this qualifies as conciseness or obfuscation.
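
As a sketch, with hypothetical declarations:

// With a ref parameter, return binds to the ref:
void* funA(ref scope return int* p); // return ref + scope
void* funB(return ref scope int* p); // same as funA

// Without ref, return binds to scope:
void* funC(scope return int* p);     // return scope
void* funD(return int* p);           // also return scope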

Member functions and ref

ref and return ref often require careful consideration to keep track of which address is protected and what can be returned. It takes some experience to get comfortable with them. But once you do, understanding how structs and unions work with DIP1000 is pretty straightforward.

The major difference from classes is that whereas this is a regular class reference in the member functions of a class, this in a struct or union member function is ref StructOrUnionName.

union Uni
{
    int asInt;
    char[4] asCharArr;

    // Return value contains a reference to
    // this union, won't escape references
    // to it via any other channel
    @safe char[] latterHalf() return
    {
        return asCharArr[2 .. $];
    }

    // The this argument is implicitly ref. With no
    // return annotation, the signature promises that
    // the return value does not refer to this union
    // and that we don't leak it in any other way.
    @safe char[] latterHalfCopy()
    {
        return latterHalf.dup;
    }
}

Note that return ref should not be used with the this argument. char[] latterHalf() return ref fails to parse. The language already has to understand what ref char[] latterHalf() return means: the return value is a reference. The “ref” in return ref would be redundant anyway.

Note that we did not use the scope keyword here. scope would be meaningless with this union, because it does not contain references to anything. Just like it is meaningless to have a scope ref int, or a scope int function argument. scope makes sense only for types that refer to memory elsewhere.

scope in a struct or union means the same thing as it means in a static array. It means that the memory its members refer to cannot be escaped. Example:

struct CString
{
    // We need to put the pointer in an anonymous
    // union with a dummy member, otherwise @safe user
    // code could assign ptr to point to a character
    // not in a C string.
    union
    {
        // Empty string literals get optimized to null pointers by the D
        // compiler, so we have to do this for the .init value to really
        // point to a '\0'.
        immutable(char)* ptr = &nullChar;
        size_t dummy;
    }

    // In constructors, the "return value" is the
    // constructed data object. Thus, the return scope
    // here makes sure this struct won't live longer
    // than the memory in arr.
    @trusted this(return scope string arr)
    {
        // Note: a normal assert would not do! Asserts may be
        // removed from release builds, but this check is
        // necessary for memory safety, so we use
        // assert(0) instead, which never gets removed.
        if(arr[$-1] != '\0') assert(0, "not a C string!");
        ptr = arr.ptr;
    }

    // The return value refers to the same memory as the
    // members in this struct, but we don't leak references
    // to it via any other way, so return scope.
    @trusted ref immutable(char) front() return scope
    {
        return *ptr;
    }

    // No references to the pointed-to array passed
    // anywhere.
    @trusted void popFront() scope
    {
        // Otherwise the user could pop past the
        // end of the string and then read it!
        if(empty) assert(0, "out of bounds!");
        ptr++;
    }

    // Same.
    @safe bool empty() scope
    {
        return front == '\0';
    }
}

immutable nullChar = '\0';

@safe unittest
{
    import std.array : staticArray;

    auto localStr = "hello world!\0".staticArray;
    auto localCStr = localStr.CString;
    assert(localCStr.front == 'h');

    static immutable(char)* staticPtr;

    // Error, escaping reference to local.
    // staticPtr = &localCStr.front();

    // Fine.
    staticPtr = &CString("global\0").front();

    localCStr.popFront;
    assert(localCStr.front == 'e');
    assert(!localCStr.empty);
}

Part One said that @trusted is a terrible footgun with DIP1000. This example demonstrates why. Imagine how easy it’d be to use a regular assert or forget about them totally, or overlook the need to use the anonymous union. I think this struct is safe to use, but it’s entirely possible I overlooked something.

Finally

We almost know all there is to know about using structs, unions, and classes with DIP1000. We have two final things to learn today.

But before that, a short digression regarding the scope keyword. It is not used for just annotating parameters and local variables as illustrated. It is also used for scope classes and scope guard statements. This guide won’t be discussing those, because the former feature is deprecated, and the latter is not related to DIP1000 or control of variable lifetimes. The point of mentioning them is to dispel a potential misconception that scope always means limiting the lifetime of something. Learning about scope guard statements is still a good idea, as it’s a useful feature.

Back to the topic. The first thing is not really specific to structs or classes. We discussed what return, return ref, and return scope usually mean, but there’s an alternative meaning to them. Consider:

@safe void getFirstSpace
(
    ref scope string result,
    return scope string where
)
{
    //...
}

The usual meaning of the return attribute makes no sense here, as the function has a void return type. A special rule applies in this case: if the return type is void, and the first argument is ref or out, any subsequent return [ref/scope] is assumed to be escaped by assigning to the first argument. With struct member functions, they are assumed to be assigned to the struct itself.

@safe unittest
{
    static string output;
    immutable(char)[8] input = "on stack";
    //Trying to assign stack contents to a static
    //variable. Won't compile.
    getFirstSpace(output, input);
}

Since out came up, it should be said it would be a better choice for result here than ref. out works like ref, with the one difference that the referenced data is automatically default-initialized at the beginning of the function, meaning any data to which the out parameter refers is guaranteed to not affect the function.
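
With out, the signature would read (a sketch):

@safe void getFirstSpace
(
    out string result,
    return scope string where
)
{
    // result starts out as the empty string no matter what the
    // caller passed in; the found slice is assigned to it here.
    //...
}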

The second thing to learn is that scope is used by the compiler to optimize class allocations inside function bodies. If a new class is used to initialize a scope variable, the compiler can put it on the stack. Example:

class C{int a, b, c;}
@safe @nogc unittest
{
    // Since this unittest is @nogc, this wouldn't
    // compile without the scope optimization.
    scope C c = new C();
}

This feature requires using the scope keyword explicitly. Inference of scope does not work, because initializing a class this way does not normally (meaning, without the @nogc attribute) mandate limiting the lifetime of c. The feature currently works only with classes, but there is no reason it couldn’t work with newed struct pointers and array literals too.

Until next time

This is pretty much all that there is to manual DIP1000 usage. But this blog series shall not be over yet! DIP1000 is not intended to always be used explicitly—it works with attribute inference. That’s what the next post will cover.

It will also cover some considerations when daring to use @trusted and @system code. The need for dangerous systems programming exists and is part of the D language domain. But even systems programming is a responsible affair when people do what they can to minimize risks. We will see that even there it’s possible to do a lot.

Thanks to Walter Bright and Dennis Korpel for reviewing this article

Memory Safety in a Modern Systems Programming Language Part 1

Memory safety needs no checks

D is both a garbage-collected programming language and an efficient raw memory access language. Modern high-level languages like D are memory safe, preventing users from accidentally reading or writing to unused memory or breaking the type system of the language.

As a systems programming language, not all of D can give such guarantees, but it does have a memory-safe subset that uses the garbage collector to take care of memory management much like Java, C#, or Go. A D codebase, even in a systems programming project, should aim to remain within that memory-safe subset where practical. D provides the @safe function attribute to verify that a function uses only memory-safe features of the language. For instance, try this:

@safe string getBeginning(immutable(char)* cString)
{
    return cString[0..3];
}

The compiler will refuse to compile this code. There’s no way to know what will result from the three-character slice of cString, which could be referring to an empty string (i.e., cString[0] is \0), a string with a length of 1, or even one or two characters without the terminating NUL. The result in those cases would be a memory violation.

@safe does not mean slow

Note that I said above that even a low-level systems programming project should use @safe where practical. How is that possible, given that such projects sometimes cannot use the garbage collector, a major tool used in D to guarantee memory safety?

Indeed, such projects must resort to memory-unsafe constructs every now and then. Even higher-level projects often have reasons to do so, as they want to create interfaces to C or C++ libraries, or avoid the garbage collector when indicated by runtime performance. But still, surprisingly large parts of code can be made @safe without using the garbage collector at all.

D can do this because the memory safe subset does not prevent raw memory access per se.

@safe void add(int* a, int* b, int* sum)
{
    *sum = *a + *b;
}

This compiles and is fully memory safe, despite dereferencing those pointers in the same completely unchecked way they are dereferenced in C. This is memory safe because @safe D does not allow creating an int* that points to unallocated memory areas, or to a float**, for instance. int* can point to the null address, but this is generally not a memory safety problem because the null address is protected by the operating system. Any attempt to dereference it would crash the program before any memory corruption can happen. The garbage collector isn’t involved, because D’s GC can only run if more memory is requested from it, or if a collection is explicitly triggered.

D slices are similar. When indexed, they check at runtime that the index is less than their length, and that’s it. They do no checking whatsoever on whether they are referring to a legal memory area. Memory safety is achieved by preventing the creation of slices that could refer to illegal memory in the first place, as demonstrated in the first example of this article. And again, there’s no GC involved.
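
For example, assuming the default bounds checking is enabled:

@safe void sliceChecks()
{
    int[] slice = [1, 2, 3];
    int a = slice[2];  // In range: only the index < length check runs.
    // int b = slice[3]; // Out of range: throws a RangeError at
                         // runtime instead of reading foreign memory.
}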

This enables many patterns that are memory-safe, efficient, and independent of the garbage collector.

struct Struct
{
    int[] slice;
    int* pointer;
    int[10] staticArray;
}

@safe @nogc Struct examples(Struct arg)
{
    arg.slice[5] = *arg.pointer;
    arg.staticArray[0..5] = arg.slice[5..10];
    arg.pointer = &arg.slice[8];
    return arg;
}

As demonstrated, D liberally lets one do unchecked memory handling in @safe code. The memory referred to by arg.slice and arg.pointer may be on the garbage collected heap, or it may be in the static program memory. There is no reason the language needs to care. The program will probably need to either call the garbage collector or do some unsafe memory management to allocate memory for the pointer and the slice, but handling already allocated memory does not need to do either. If this function needed the garbage collector, it would fail to compile because of the @nogc attribute.

However…

There’s a historical design flaw here in that the memory may also be on the stack. Consider what happens if we change our function a bit.

@safe @nogc Struct examples(Struct arg)
{
    arg.pointer = &arg.staticArray[8];
    arg.slice = arg.staticArray[0..8];
    return arg;
}

Struct arg is a value type. Its contents are copied to the stack when examples is called and can be overwritten after the function returns. staticArray is also a value type. It’s copied along with the rest of the struct just as if there were ten integers in the struct instead. When we return arg, the contents of staticArray are copied to the return value, but pointer and slice continue to point to arg, not the returned copy!

But we have a fix. It allows one to write code just as performant in @safe functions as before, including references to the stack. It even enables a few formerly @system (the opposite of @safe) tricks to be written in a safe way. That fix is DIP1000. It’s the reason why this example already causes a deprecation warning by default if it’s compiled with the latest nightly dmd.

Born first, dead last

DIP1000 is a set of enhancements to the language rules regarding pointers, slices, and other references. The name stands for D Improvement Proposal number 1000, as that document is what the new rules were initially based on. One can enable the new rules with the preview compiler switch, -preview=dip1000. Existing code may need some changes to work with the new rules, which is why the switch is not enabled by default. It’s going to be the default in the future, so it’s best to enable it where possible and work to make code compatible with it where not.

The basic idea is to let people limit the lifetime of a reference (an array or pointer, for example). A pointer to the stack is not dangerous if it does not exist longer than the stack variable it is pointing to. Regular references continue to exist, but they can refer only to data with an unlimited lifetime—that is, garbage collected memory, or static or global variables.

Let’s get started

The simplest way to construct a limited-lifetime reference is to assign something with a limited lifetime to it.

@safe int* test(int arg1, int arg2)
{
    int* notScope = new int(5);
    int* thisIsScope = &arg1;
    int* alsoScope; // Not initially scope...
    alsoScope = thisIsScope; // ...but this makes it so.

    // Error! The variable declared earlier is
    // considered to have a longer lifetime,
    // so disallowed.
    thisIsScope = alsoScope;

    return notScope; // ok
    return thisIsScope; // error
    return alsoScope; // error
}

When testing these examples, remember to use the compiler switch -preview=dip1000 and to mark the function @safe. The checks are not done for non-@safe functions.

Alternatively, the scope keyword can be explicitly used to limit the lifetime of a reference.

@safe int[] test()
{
    int[] normalRef;
    scope int[] limitedRef;

    if(true)
    {
        int[5] stackData = [-1, -2, -3, -4, -5];

        // Lifetime of stackData ends
        // before limitedRef, so this is
        // disallowed.
        limitedRef = stackData[];

        //This is how you do it
        scope int[] evenMoreLimited
            = stackData[];
    }

    return normalRef; // Okay.
    return limitedRef; // Forbidden.
}

If we can’t return limited lifetime references, how are they used at all? Easy. Remember, only the address of the data is protected, not the data itself. It means that we have many ways to pass scoped data out of the function.

@safe int[] fun()
{
    scope int[] dontReturnMe = [1,2,3];

    int[] result = new int[](dontReturnMe.length);
    // This copies the data, instead of having
    // result refer to protected memory.
    result[] = dontReturnMe[];
    return result;

    // Shorthand way of doing the same as above
    return dontReturnMe.dup;

    // Also you are not always interested
    // in the contents as a whole; you
    // might want to calculate something else
    // from them
    return
    [
        dontReturnMe[0] * dontReturnMe[1],
        cast(int) dontReturnMe.length
    ];
}

Getting interprocedural

With the tricks discussed so far, DIP1000 would be restricted to language primitives when handling limited lifetime references, but the scope storage class can be applied to function parameters, too. Because this guarantees the memory won’t be used after the function exits, local data references can be used as arguments to scope parameters.

@safe double average(scope int[] data)
{
    double result = 0;
    foreach(el; data) result += el;
    return result / data.length;
}

@safe double use()
{
    int[10] data = [1,2,3,4,5,6,7,8,9,10];
    return data[].average; // works!
}

Initially, it’s probably best to keep attribute auto inference off. Auto inference in general is a good tool, but it silently adds scope attributes to all parameters it can, meaning it’s easy to lose track of what’s happening. That makes the learning process a lot harder. Avoid this by always explicitly specifying the return type (or lack thereof with void or noreturn): @safe const(char[]) fun(int* val) as opposed to @safe auto fun(int* val) or @safe const fun(int* val). The function also must not be a template or inside a template. We’ll dig deeper into scope auto inference in a future post.

scope allows handling pointers and arrays that point to the stack, but forbids returning them. What if that’s the goal? Enter the return scope attribute:

//Being character arrays, strings also work with DIP1000.
@safe string latterHalf(return scope string arg)
{
    return arg[$/2 .. $];
}

@safe string test()
{
    // allocated in static program memory
    auto hello1 = "Hello world!";
    // allocated on the stack, copied from hello1
    immutable(char)[12] hello2 = hello1;

    auto result1 = hello1.latterHalf; // ok
    return result1; // ok

    auto result2 = hello2[].latterHalf; // ok
    // Nice try! result2 is scope and can't
    // be returned.
    return result2;
}

return scope parameters work by checking if any of the arguments passed to them are scope. If so, the return value is treated as a scope value that may not outlive any of the return scope arguments. If none are scope, the return value is treated as a global reference that can be copied freely. Like scope, return scope is conservative. Even if one does not actually return the address protected by return scope, the compiler will still perform the call site lifetime checks just as if one did.
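
A sketch demonstrating how conservative it is:

@safe string alwaysGlobal(return scope string arg)
{
    return "a string literal"; // never actually returns arg
}

@safe string use()
{
    immutable(char)[5] local = "stack";
    auto result = alwaysGlobal(local[]);
    // Error: result is treated as scope because the argument was,
    // even though it really refers to static program memory.
    return result;
}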

scope is shallow

@safe void test()
{
    scope a = "first";
    scope b = "second";
    string[] arr = [a, b];
}

In test, initializing arr does not compile. This may be surprising given that the language automatically adds scope to a variable on initialization if needed.

However, consider what the scope on scope string[] arr would protect. There are two things it could potentially protect: the addresses of the strings in the array, or the addresses of the characters in the strings. For this assignment to be safe, scope would have to protect the characters in the strings, but it only protects the top-level reference, i.e., the strings in the array. Thus, the example does not work. Now change arr so that it’s a static array:

@safe void test()
{
    scope a = "first";
    scope b = "second";
    string[2] arr = [a, b];
}

This works because static arrays are not references. Memory for all of their elements is allocated in place on the stack (i.e., they contain their elements), as opposed to dynamic arrays which contain a reference to elements stored elsewhere. When a static array is scope, its elements are treated as scope. And since the example would not compile were arr not scope, it follows that scope is inferred.

Some practical tips

Let’s face it, the DIP1000 rules take time to understand, and many would rather spend that time coding something useful. The first and most important tip is: avoid non-@safe code like the plague if doable. Of course, this advice is not new, but it appears even more important with DIP1000. In a nutshell, the language does not check the validity of scope and return scope in a non-@safe function, but when calling those functions the compiler assumes that the attributes are respected.

This makes scope and return scope terrible footguns in unsafe code. But by resisting the temptation to mark code @trusted to avoid thinking, a D coder can hardly do damage. Misusing DIP1000 in @safe code can cause needless compilation errors, but it won’t corrupt memory and is unlikely to cause other bugs either.

A second important point worth mentioning is that there is no need for scope and return scope attributes on functions that receive only static or GC-allocated data. Many languages do not let coders refer to the stack at all; just because D programmers can do so does not mean they must. This way, they don’t have to spend any more time solving compiler errors than they did before DIP1000. And if a desire to work with the stack arises after all, the authors can then return to annotate the functions. Most likely they will accomplish this without breaking the interface.

What’s next?

This concludes today’s blog post. This is enough to know how to use arrays and pointers with DIP1000. In principle, it also enables readers to use DIP1000 with classes and interfaces. The only thing to learn is that a class reference, including the this pointer in member functions, works with DIP1000 just like a pointer would. Still, it’s hard to grasp what that means from one sentence, so later posts shall illustrate the subject.

In any case, there is more to know. DIP1000 has some features for ref function parameters, structs, and unions that we didn’t cover here. We’ll also dig deeper into how DIP1000 plays with non-@safe functions and attribute auto inference. Currently, the plan is to do two more posts for this series.

Do let us know in the comment section or the D forums if you have any useful DIP1000 tips that were not covered!

Thanks to Walter Bright for reviewing this article.

The ABC’s of Templates in D

D was not supposed to have templates.

Several months before Walter Bright released the first alpha version of the DMD compiler in December 2001, he completed a draft language specification in which he wrote:

Templates. A great idea in theory, but in practice leads to numbingly complex code to implement trivial things like a “next” pointer in a singly linked list.

The (freely available) HOPL IV paper, Origins of the D Programming Language, expands on this:

[Walter] initially objected to templates on the grounds that they added disproportionate complexity to the front end, they could lead to overly complex user code, and, as he had heard many C++ users complain, they had a syntax that was difficult to understand. He would later be convinced to change his mind.

It did take some convincing. As activity grew in the (then singular) D newsgroup, requests that templates be added to the language became more frequent. Eventually, Walter started mulling over how to approach templates in a way that was less confusing for both the programmer and the compiler. Then he had an epiphany: the difference between template parameters and function parameters is one of compile time vs. run time.

From this perspective, there’s no need to introduce a special template syntax (like the C++ style <T>) when there’s already a syntax for parameter lists in the form of (T). So a template declaration in D could look like this:

template foo(T, U) {
    // template members here
}

From there, the basic features fell into place and were expanded and enhanced over time.

In this article, we’re going to lay the foundation for future articles by introducing the most basic concepts and terminology of D’s template implementation. If you’ve never used templates before in any language, this may be confusing. That’s not unexpected. Even though many find D’s templates easier to understand than other implementations, the concept itself can still be confusing. You’ll find links at the end to some tutorial resources to help build a better understanding.

Template declarations

Inside a template declaration, one can nest any other valid D declaration:

template foo(T, U) {
    int x;
    T y;

    struct Bar {
        U thing;
    }

    void doSomething(T t, U u) {
        ...
    }
}

In the above example, the parameters T and U are template type parameters, meaning they are generic substitutes for concrete types, which might be built-in types like int, float, or any type the programmer might implement with a class or struct declaration. By declaring a template, it’s possible to, for example, write a single implementation of a function like doSomething that can accept multiple types for the same parameters. The compiler will generate as many copies of the function as it needs to accommodate the concrete types used in each unique template instantiation.
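
As a hedged sketch (using the explicit instantiation syntax covered later in this post), two separate copies of doSomething could be stamped out like this:

// Assumes the foo template above; the calls are illustrative.
void example()
{
    foo!(int, string).doSomething(42, "hello"); // T = int, U = string
    foo!(double, int).doSomething(3.14, 7);     // a second, independent copy
}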

Other kinds of parameters are supported: value parameters, alias parameters, sequence (or variadic) parameters, and this parameters, all of which we’ll explore in future blog posts.

One name to rule them all

In practice, it’s not very common to implement templates with multiple members. By far, the most common form of template declaration is the single-member eponymous template. Consider the following:

template max(T) {
    T max(T a, T b) { ... }
}

An eponymous template can have multiple members that share the template name, but when there is only one, D provides us with an alternate template declaration syntax. In this example, we can opt for a normal function declaration that has the template parameter list in front of the function parameter list:

T max(T)(T a, T b) { ... }

The same holds for eponymous templates that declare an aggregate type:

// Instead of the longhand template declaration...
/* 
template MyStruct(T, U) {
    struct MyStruct { 
        T t;
        U u;
    }
}
*/

// ...just declare a struct with a type parameter list
struct MyStruct(T, U) {
    T t;
    U u;
}

Eponymous templates also offer a shortcut for instantiation, as we’ll see in the next section.

Instantiating templates

In relation to templates, the term instantiate means essentially the same as it does in relation to classes and structs: create an instance of something. A template instance is a concrete implementation of the template in which the template parameters have been replaced by the arguments provided in the instantiation. For a function template, that means a new copy of the function is generated; for a type declaration, a new copy of the type. In either case, it is just as if the programmer had written it by hand.

We’ll see an example, but first we need to see the syntax.

Explicit instantiations

An explicit instantiation is a template instance created by the programmer using the template instantiation syntax. To easily disambiguate template instantiations from function calls, D requires the template instantiation operator, !, to be present in explicit instantiations. If the template has multiple members, they can be accessed in the same manner that members of aggregates are accessed: using dot notation.

import std;

template Temp(T, U) {
    T x;
    struct Pair {
        T t;
        U u;
    }
}

void main()
{
    Temp!(int, float).x = 10;
    Temp!(int, float).Pair p;
    p.t = 4;
    p.u = 3.2;
    writeln(Temp!(int, float).x);
    writeln(p);            
}

Run it online at run.dlang.io

There is one template instantiation in this example: Temp!(int, float). Although it appears three times, it refers to the same instance of Temp, one in which T == int and U == float. The compiler will generate declarations of x and the Pair type as if the programmer had written the following:

int x;
struct Pair {
    int t;
    float u;
}

However, we can’t just refer to x and Pair by themselves. We might have other instantiations of the same template, like Temp!(double, long) or Temp!(MyStruct, short). To avoid conflict, the template members must be accessed through a namespace unique to each instantiation. In that regard, Temp!(int, float) is like a class or struct with static members; just as you would access a static x member variable in a struct Foo using the struct name, Foo.x, you access a template member using the template name, Temp!(int, float).x.

There is only ever one instance of the variable x for the instantiation Temp!(int, float), so no matter where you use it in a code base, you will always be reading and writing the same x. Hence, the first line of main isn’t declaring and initializing the variable x, but is assigning to the already declared variable. Temp!(int, float).Pair is a struct type, so that after the declaration Temp!(int, float).Pair p, we can refer to p by itself. Unlike x, p is not a member of the template. The type Pair is a member, so we can’t refer to it without the prefix.

Aliased instantiations

It’s possible to simplify the syntax by using an alias declaration for the instantiation:

import std;

template Temp(T, U) {
    T x;
    struct Pair {
        T t;
        U u;
    }
}
alias TempIF = Temp!(int, float);

void main()
{
    TempIF.x = 10;
    TempIF.Pair p = TempIF.Pair(4, 3.2);
    writeln(TempIF.x);
    writeln(p);            
}

Run it online at run.dlang.io

Since we no longer need to type the template argument list, using a struct literal to initialize p, in this case TempIF.Pair(4, 3.2), looks cleaner than it would with the template arguments. So I opted to use that here rather than first declare p and then initialize its members. We can trim it down still more using D’s auto attribute, but whether this is cleaner is a matter of preference:

auto p = TempIF.Pair(4, 3.2);

Run it online at run.dlang.io

Instantiating eponymous templates

Not only do eponymous templates have a shorthand declaration syntax, they also allow for a shorthand instantiation syntax. Let’s take the x out of Temp and rename the template to Pair. We’re left with a Pair template that provides a declaration struct Pair. Then we can take advantage of both the shorthand declaration and instantiation syntaxes:

import std;

struct Pair(T, U) {
    T t;
    U u;
}

// We can instantiate Pair without the dot operator, but still use
// the alias to avoid writing the argument list every time
alias PairIF = Pair!(int, float);

void main()
{
    PairIF p = PairIF(4, 3.2);
    writeln(p);            
}

Run it online at run.dlang.io

The shorthand instantiation syntax means we don’t have to use the dot operator to access the Pair type.

Even shorter syntax

When a template instantiation is passed only one argument, and the argument’s symbol is a single token (e.g., int as opposed to int[] or int*), the parentheses can be dropped from the template argument list. Take the standard library template function std.conv.to as an example:

void main() {
    import std.stdio : writeln;
    import std.conv : to;
    writeln(to!(int)("42"));
}

Run it online at run.dlang.io

std.conv.to is an eponymous template, so we can use the shortened instantiation syntax. But the fact that we’ve instantiated it as an argument to the writeln function means we’ve got three pairs of parentheses in close proximity. That sort of thing can impair readability if it pops up too often. We could move it out and store it in a variable if we really care about it, but since we’ve only got one template argument, this is a good place to drop the parentheses from the template argument list.

writeln(to!int("42"));

Whether that looks better is another case where it’s up to preference, but it’s fairly idiomatic these days to drop the parentheses for a single template argument no matter where the instantiation appears.

Not done with the short stuff yet

std.conv.to is an interesting example because it’s an eponymous template with multiple members that share the template name. That means that it must be declared using the longform syntax (as you can see in the source code), but we can still instantiate it without the dot notation. It’s also interesting because, even though it accepts two template arguments, it is generally only instantiated with one. This is because the second template argument can be deduced by the compiler based on the function argument.
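
A hedged sketch of that deduction:

void main()
{
    import std.conv : to;
    // Only the target type is spelled out; the source type (string,
    // then int) is deduced from the function argument:
    int n = to!int("42");
    string s = to!string(42);
    assert(n == 42 && s == "42");
}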

For a somewhat simpler example, take a look at std.utf.toUTF8:

void main()
{    
    import std.stdio : writeln;
    import std.utf : toUTF8;
    string s1 = toUTF8("This was a UTF-16 string."w);
    string s2 = toUTF8("This was a UTF-32 string."d);
    writeln(s1);
    writeln(s2);
}

Run it online at run.dlang.io

Unlike std.conv.to, toUTF8 takes exactly one parameter. The signature of the declaration looks like this:

string toUTF8(S)(S s)

But in the example, we aren’t passing a type as a template argument. Just as the compiler was able to deduce to’s second argument, it’s able to deduce toUTF8’s sole argument.

toUTF8 is an eponymous function template with a template parameter S and a function parameter s of type S. There are two things we can say about this: 1) the return type is independent of the template parameter and 2) the template parameter is the type of the function parameter. Because of 2), the compiler has all the information it needs from the function call itself and has no need for the template argument in the instantiation.

Take the first call to the toUTF8 function in the declaration of s1. In long form, it would be toUTF8!(wstring)("blah"w). The w at the end of the string literal indicates it is of type wstring, with UTF-16 encoding, as opposed to string, with UTF-8 encoding (the default for string literals). In this situation, having to specify !(wstring) in the template instantiation is completely redundant. The compiler already knows that the argument to the function is a wstring. It doesn’t need the programmer to tell it that. So we can drop the template instantiation operator and the template argument list, leaving what looks like a simple function call. The compiler knows that toUTF8 is a template, knows that the template is declared with one type parameter, and knows that the type should be wstring.

Similarly, the d suffix on the literal in the initialization of s2 indicates a UTF-32 encoded dstring, and the compiler knows all it needs to know for that instantiation. So in this case also, we drop the template argument and make the instantiation appear as a function call.

It does seem silly to convert a wstring or dstring literal to a string when we could just drop the w and d suffixes and have string literals that we can directly assign to s1 and s2. Contrived examples and all that. But the syntax the examples are demonstrating really shines when we work with variables.

wstring ws = "This is a UTF-16 string"w;
string s = ws.toUTF8;
writeln(s);

Run it online at run.dlang.io

Take a close look at the initialization of s. This combines the shorthand template instantiation syntax with Uniform Function Call Syntax (UFCS) and D’s shorthand function call syntax. We’ve already seen the template syntax in this post. As for the other two:

  • UFCS allows using dot notation on the first argument of any function call so that it appears as if a member function is being called. By itself, it doesn’t offer much benefit aside from, some believe, readability. Generally, it’s a matter of preference. But this feature can seriously simplify the implementation of generic templates that operate on aggregate types and built-in types.
  • The parentheses in a function call can be dropped when the function takes no arguments, so that foo() becomes foo. In this case, the function takes one argument, but we’ve taken it out of the argument list using UFCS, so the argument list is now empty and the parentheses can be dropped. (The compiler will lower this to a normal function call, toUTF8(ws)—it’s purely programmer convenience.) When and whether to do this in the general case is a matter of preference. The big win, again, is in the simplification of template implementations: a template can be implemented to accept a type T with a member variable foo or a member function foo, or a free function foo for which the first argument is of type T.

All of this shorthand syntax is employed to great effect in D’s range API, which allows chained function calls on types that are completely hidden from the public-facing API (aka Voldemort types).
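
As a small, hedged illustration of such a chain using Phobos ranges:

void main()
{
    import std.stdio : writeln;
    import std.algorithm : filter, map;
    import std.range : iota;

    // Every step below is a template instantiation, but UFCS plus the
    // shorthand syntaxes make it read as a left-to-right pipeline:
    iota(1, 6)                    // 1, 2, 3, 4, 5
        .filter!(n => n % 2 == 1) // 1, 3, 5
        .map!(n => n * n)         // 1, 9, 25
        .writeln;                 // prints [1, 9, 25]
}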

More to come

In future articles, we’ll explore the different kinds of template parameters, introduce template constraints, see inside a template instantiation, and take a look at some of the ways people combine templates with D’s powerful compile-time features in the real world. In the meantime, here are some template tutorial resources to keep you busy:

  • Ali Çehreli’s Programming in D is an excellent introduction to D in general, suitable even for those with little programming experience. The two chapters on templates (the first called ‘Templates’ and the second ‘More Templates’) provide a great introduction. (The online version of the book is free, but if you find it useful, please consider throwing Ali some love by buying the ebook or print version linked at the top of the TOC.)
  • More experienced programmers may find Philippe Sigaud’s ‘D Template Tutorial’ a good read. It’s almost a decade old, but still relevant (and still free!). This tutorial goes beyond the basics into some of the more advanced template features. It includes a look at D’s compile-time features, provides a number of examples, and sports an appendix detailing the is expression (a key component of template constraints). It can also serve as a great reference when reading open source D code until you learn your way around D templates.

There are other resources, though other than my book ‘Learning D’ (this is a referral link that will benefit the D Language Foundation), I’m not aware of any that provide as much detail as the above. (And my book is currently not freely available). Eventually, we’ll be able to add this blog series to the list.

Thanks to Stefan Koch for reviewing this article.

A Pattern for Head-mutable Structures

When Andrei Alexandrescu introduced ranges to the D programming language, the gap between built-in and user-defined types (UDTs) narrowed, enabling new abstractions and greater composability. Even today, though, UDTs are still second-class citizens in D. One example of this is support for head mutability—the ability to manipulate a reference without changing the referenced value(s). This document details a pattern that will further narrow the UDT gap by introducing functions for defining and working with head-mutable user-defined types.

Introduction

D is neither Kernel nor Scheme—it has first-class and second-class citizens. Among its first-class citizens are arrays and pointers. One of the benefits these types enjoy is implicit conversion to head-mutable. For instance, const(T[]) is implicitly convertible to const(T)[]. Partly to address this difference, D has many ways to define how one type may convert to or behave like another – alias this, constructors, opDispatch, opCast, and, of course, subclassing. The way pointers and dynamic arrays decay into their head-mutable variants is different from the semantics of any of these features, so we would need to define a new type of conversion if we were to mimic this behavior.
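
To make the decay concrete, here is a minimal sketch with a built-in slice:

@safe void example()
{
    const(int[]) a = [1, 2, 3];
    const(int)[] b = a; // implicit: only the head (the slice itself) loses const
    b = [4, 5];         // the reference can be rebound...
    // b[0] = 9;        // ...but the elements remain const: compile error
}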

Changing the compiler and the language to permit yet another way of converting one type into another is not desirable: it makes the job harder for compiler writers, makes an already complex language even harder to learn, and any implicit conversion can make code harder to read and maintain. If we can define conversions to head-mutable data structures without introducing compiler or language changes, the feature also becomes available to users sooner: such a mechanism would not require changes in the standard library, and users could gradually implement it in their own code, benefiting when the standard library catches up at a later point.

Unqual

The tool used today to get a head-mutable version of a type is std.traits.Unqual. In some cases, this is the right tool—it strips away one layer of const, immutable, inout, and shared. For some types though, it either does not give a head-mutable result, or it gives a head-mutable result with mutable indirections:

struct S(T) {
    T[] arr;
}

With Unqual, this code fails to compile:

void foo(T)(T a) {
    Unqual!T b = a; // cannot implicitly convert immutable(S!int) to S!int
}

unittest {
    immutable s = S!int([1,2,3]);
    foo(s);
}

A programmer who sees that message hopefully finds a different way to achieve the same goal. However, the error message says that the conversion failed, indicating that a conversion is possible, perhaps even without issue. An inexperienced programmer, or one who knows that doing so is safe right now, could use a cast to shut the compiler up:

void bar(T)(T a) {
    Unqual!T b = cast(Unqual!T)a;
    b.arr[0] = 4;
}

unittest {
    immutable s = S!int([1,2,3]);
    bar(s);
    assert(s.arr[0] == 1); // Fails, since bar() changed it.
}

If, instead of S!int, the programmer had used int[], the first example would have compiled, and the cast in the second example would have never seen the light of day. However, since S!int is a user-defined type, we are forced to write a templated function that either fails to compile for some types it really should support or gives undesirable behavior at run time.

headMutable()

Clearly, we should be able to do better than Unqual, and in fact we can. D has template this parameters, which pick up the type of the object a member function is called on, and with that, its const or immutable status:

struct S {
    void foo(this T)() {
        import std.stdio : writeln;
        writeln(T.stringof);
    }
}
unittest {
    S s1;
    const S s2;
    s1.foo(); // Prints "S".
    s2.foo(); // Prints "const(S)".
}

This way, the type has the necessary knowledge of which type qualifiers a head-mutable version needs. We can now define a method that uses this information to create the correct head-mutable type:

struct S(T) {
    T[] arr;
    auto headMutable(this This)() const {
        import std.traits : CopyTypeQualifiers;
        return S!(CopyTypeQualifiers!(This, T))(arr);
    }
}
unittest {
    const a = S!int([1,2,3]);
    auto b = a.headMutable();
    assert(is(typeof(b) == S!(const int))); // The correct part of the type is now const.
    assert(a.arr is b.arr); // It's the same array, no copying has taken place.
    b.arr[0] = 3; // Correctly fails to compile: cannot modify const expression.
}

Thanks to the magic of Uniform Function Call Syntax, we can also define headMutable() for built-in types:

auto headMutable(T)(T value) {
    import std.traits;
    import std.typecons : rebindable;
    static if (isPointer!T) {
        // T is a pointer and decays naturally.
        return value;
    } else static if (isDynamicArray!T) {
        // T is a dynamic array and decays naturally.
        return value;
    } else static if (!hasAliasing!(Unqual!T)) {
        // T is a POD datatype - either a built-in type, or a struct with only POD members.
        return cast(Unqual!T)value;
    } else static if (is(T == class)) {
        // Classes are reference types, so only the reference may be made head-mutable.
        return rebindable(value);
    } else static if (isAssociativeArray!T) {
        // AAs are reference types, so only the reference may be made head-mutable.
        return rebindable(value);
    } else {
        static assert(false, "Type "~T.stringof~" cannot be made head-mutable.");
    }
}
unittest {
    const(int*[3]) a = [null, null, null];
    auto b = a.headMutable();
    assert(is(typeof(b) == const(int)*[3]));
}

Now, whenever we need a head-mutable variable to point to tail-const data, we can simply call headMutable() on the value we need to store. Unlike the ham-fisted approach of casting to Unqual!T, which may throw away important type information and also silences any error messages that may inform you of the foolishness of your actions, attempting to call headMutable() on a type that doesn’t support it will give an error message explaining what you tried to do and why it didn’t work (“Type T cannot be made head-mutable.”). The only thing missing now is a way to get the head-mutable type. Since headMutable() returns a value of that type, and is defined for all types we can convert to head-mutable, that’s a template one-liner:

import std.traits : ReturnType;
alias HeadMutable(T) = ReturnType!((T t) => t.headMutable());

Where Unqual returns a type with potentially the wrong semantics and only gives an error once you try assigning to it, HeadMutable disallows creating the type in the first place. The programmer will have to deal with that before casting or otherwise coercing a value into the variable. Since HeadMutable uses headMutable() to figure out the type, it also gives the same informative error message when it fails.

Lastly, since one common use case requires us to preserve the tail-const or tail-immutable properties of a type, it is beneficial to define a template that converts to head-mutable while propagating const or immutable using std.traits.CopyTypeQualifiers:

import std.traits : CopyTypeQualifiers;
alias HeadMutable(T, ConstSource) = HeadMutable!(CopyTypeQualifiers!(ConstSource, T));

This way, immutable(MyStruct!int) can become MyStruct!(immutable int), while the const version would propagate constness instead of immutability.
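
Assuming the S template from the headMutable() section above together with these aliases, a couple of illustrative compile-time checks (a sketch, not library code):

// Hedged checks; they rely on the S!T with headMutable() defined earlier.
static assert(is(HeadMutable!(S!int, immutable int) == S!(immutable int)));
static assert(is(HeadMutable!(S!int, const int) == S!(const int)));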

Example Code

Since the pattern for range functions in Phobos is to have a constructor function (e.g. map) that forwards its arguments to a range type (e.g. MapResult), the code changes required to use headMutable() are rather limited. Likewise, user code should generally not need to change at all in order to use headMutable(). To give an impression of the code changes needed, I have implemented map and equal:

import std.range;

// Note that we check not if R is a range, but if HeadMutable!R is
auto map(alias Fn, R)(R range) if (isInputRange!(HeadMutable!R)) {
    // Using HeadMutable!R and range.headMutable() here.
    // This is basically the extent to which code that uses head-mutable data types will need to change.
    return MapResult!(Fn, HeadMutable!R)(range.headMutable());
}

struct MapResult(alias Fn, R) {
    R range;
    
    this(R _range) {
        range = _range;
    }
    
    void popFront() {
        range.popFront();
    }
    
    @property
    auto ref front() {
        return Fn(range.front);
    }
    
    @property
    bool empty() {
        return range.empty;
    }
    
    static if (isBidirectionalRange!R) {
        @property
        auto ref back() {
            return Fn(range.back);
        }

        void popBack() {
            range.popBack();
        }
    }

    static if (hasLength!R) {
        @property
        auto length() {
            return range.length;
        }
        alias opDollar = length;
    }

    static if (isRandomAccessRange!R) {
        auto ref opIndex(size_t idx) {
            return Fn(range[idx]);
        }
    }

    static if (isForwardRange!R) {
        @property
        auto save() {
            return MapResult(range.save);
        }
    }
    
    static if (hasSlicing!R) {
        auto opSlice(size_t from, size_t to) {
            return MapResult(range[from..to]);
        }
    }
    
    // All the above is as you would normally write it.
    // We also need to implement headMutable().
    // Generally, headMutable() will look very much like this - instantiate the same
    // type template that defines typeof(this), use HeadMutable!(T, ConstSource) to make
    // the right parts const or immutable, and call headMutable() on fields as we pass
    // them to the head-mutable type.
    auto headMutable(this This)() const {
        alias HeadMutableMapResult = MapResult!(Fn, HeadMutable!(R, This));
        return HeadMutableMapResult(range.headMutable());
    }
}

auto equal(R1, R2)(R1 r1, R2 r2) if (isInputRange!(HeadMutable!R1) && isInputRange!(HeadMutable!R2)) {
    // Need to get head-mutable version of the parameters to iterate over them.
    auto _r1 = r1.headMutable();
    auto _r2 = r2.headMutable();
    while (!_r1.empty && !_r2.empty) {
        if (_r1.front != _r2.front) return false;
        _r1.popFront();
        _r2.popFront();
    }
    return _r1.empty && _r2.empty;
}

unittest {
    // User code does not use headMutable at all:
    const arr = [1,2,3];
    const squares = arr.map!(a => a*a);
    const squaresPlusTwo = squares.map!(a => a+2);
    assert(equal(squaresPlusTwo, [3, 6, 11]));
}

(Note that these implementations are simplified slightly from Phobos code to better showcase the use of headMutable.)

The unittest block shows a use case where the current Phobos map would fail—it is perfectly possible to create a const MapResult, but there is no way of iterating over it. Note that only two functions are impacted by the addition of headMutable(): map tests if HeadMutable!R is an input range and converts its arguments to head-mutable when passing them to MapResult, and MapResult needs to implement headMutable(). The rest of the code is exactly as you would otherwise write it.

The implementation of equal() shows a situation where implicit conversions would be beneficial. For const(int[]) the call to headMutable() is superfluous—it is implicitly converted to const(int)[] when passed to the function. For user-defined types however, this is not the case, so the call is necessary in the general case.

While I have chosen to implement a range here, ranges are merely the most common example of a place where headMutable would be useful; the idea has merits beyond ranges. Another type in the standard library that would benefit from headMutable is RefCounted!T: const(RefCounted!(T)) should convert to RefCounted!(const(T)).

Why not Tail-Const?

In previous discussions of this problem, the solution has been described as tail-const, and a function tailConst() has been proposed. While this idea might at first seem the most intuitive solution, it has some problems, which together make headMutable() far superior.

The main problem with tailConst() is that it does not play well with D’s existing const system. It needs to be called on a mutable value, and there is no way to convert a const(Foo!T) to Foo!(const(T)). It thus requires that the programmer explicitly call tailConst() on any value that is to be passed to a function expecting a non-mutable value and abstain from using const or immutable to convey the same information. This creates a separate world of tail-constness and plays havoc with generic code, which consequently has no way to guarantee that it won’t mutate its arguments.

Secondly, the onus is placed on library users to call tailConst() whenever they pass an argument anywhere, causing an inversion of responsibility: the user has to tell the library that it is not allowed to edit the data instead of the other way around. In the best case, this merely causes unnecessary verbiage. In other cases, forgetting the tailConst() call will lead to mutation of data that was expected to stay untouched.

A minor quibble in comparison is that the tail-const solution also requires the existence of tailImmutable to cover the cases where the values are immutable.

Issues

The ideas outlined in this document concern only conversion to head-mutable. A related issue is conversion to tail const, e.g. from RefCounted!T or RefCounted!(immutable T) to RefCounted!(const T), a conversion that, again, is implicit for arrays and pointers in D today.

One issue that may be serious is the fact that headMutable often cannot be @safe and may, in fact, need to rely on undefined behavior in some places. For instance, RefCounted!T contains a pointer to the actual ref count. For immutable(RefCounted!T), headMutable() would need to cast away immutable, which is undefined behavior per the spec.

The Compiler Solution

It is logical to think that, as with built-in types, headMutable() could be elided in its entirety, and the compiler could handle the conversions for us. In many cases, this would be possible, and in fact the compiler already does so for POD types like struct S { int n; }—a const or immutable S may be assigned to a mutable variable of type S. This breaks down, however, when the type includes some level of mutable indirection. For templated types it would be possible to wiggle the template parameters to see if the resulting type compiles and has fields with the same offsets and similar types, but even such an intelligent solution breaks down in the presence of D’s Turing-complete template system, and some cases will always need to be handled by the implementer of a type.

It is also a virtue that the logic behind such an implementation be understandable to the average D programmer. The best-case result of that not being true is that the forums would be flooded with posts about why types don’t convert the way users expect them to.

For these reasons, headMutable() will be necessary even with compiler support. But what would that support look like? Implicit casting to head-mutable happens in the language today in two situations:

  • Assignment to head-mutable variables: const(int)[] a = create!(const(int[]))(); (all POD types, pointers and arrays)
  • Function calls: fun(create!(const(int[]))()); (only pointers and arrays)

The first is covered by existing language features (alias headMutable this; fits the bill perfectly). The second is not but is equivalent to calling .headMutable whenever a const or immutable value is passed to a function that does not explicitly expect a const or immutable argument. This would change the behavior of existing code, in that templated functions would prefer a.headMutable over a, but would greatly improve the experience of working with const types that do define headMutable(). If headMutable is correctly implemented, the different choice of template instantiations should not cause any actual breakage.

Future Work

While this document proposes to implement the described feature without any changes to the compiler or language, it would be possible for the compiler in the future to recognize headMutable() and call it whenever a type that defines that method is passed to a function that doesn’t explicitly take exactly that type, or upon assignment to a variable that matches headMutable()’s return value. This behavior would mirror the current behavior of pointers and arrays.

Conclusion

It is possible to create a framework for defining head-mutable types in D today without compiler or language changes. It requires a little more code in the methods that use head-mutable types but offers a solution to a problem that has bothered the D community for a long time.

While this document deals mostly with ranges, other types will also benefit from this pattern: smart pointers and mutable graphs with immutable nodes are but two possible examples.

Definitions

Head-mutable

A type is head-mutable if some or all of its members without indirections are mutable. Note that a head-mutable datatype may also have const or immutable members without indirections; the requirement is merely that some subset of its members may be mutated. A head-mutable datatype may be tail-const, tail-immutable or tail-mutable—head-mutable only refers to its non-indirected members. Examples of head-mutable types include const(int)[], int*, string, and Rebindable!MyClass. Types without indirections (like int, float and struct S { int n; }) are trivially head-mutable.

Tail-const

A type is tail-const if some of its members with indirections have the const type qualifier. A tail-const type may be head-mutable or head-const. Examples of tail-const types are const(int)*, const(int[]), const(immutable(int)[])* and string.

Source

The source code for HeadMutable and headMutable is available here.

wc in D: 712 Characters Without a Single Branch

After reading “Beating C With 80 Lines Of Haskell: Wc”, which I found on Hacker News, I thought D could do better. So I wrote a wc in D.

The Program

It consists of one file and has 34 lines and 712 characters.

import std.stdio : writefln, File;
import std.algorithm : map, fold, splitter;
import std.range : walkLength;
import std.typecons : Yes;
import std.uni : byCodePoint;

struct Line {
	size_t chars;
	size_t words;
}

struct Output {
	size_t lines;
	size_t words;
	size_t chars;
}

Output combine(Output a, Line b) pure nothrow {
	return Output(a.lines + 1, a.words + b.words, a.chars + b.chars);
}

Line toLine(char[] l) pure {
	return Line(l.byCodePoint.walkLength, l.splitter.walkLength);
}

void main(string[] args) {
	auto f = File(args[1]);
	Output o = f
		.byLine(Yes.keepTerminator)
		.map!(l => toLine(l))
		.fold!(combine)(Output(0, 0, 0));

	writefln!"%u %u %u %s"(o.lines, o.words, o.chars, args[1]);
}

Sure, it is using Phobos, D’s standard library, but then why wouldn’t it? Phobos is awesome and ships with every D compiler. The program itself does not contain a single if statement. The Haskell wc implementation has several if statements. The D program, apart from the main function, contains three tiny functions. I could have easily put all the functionality in one range chain, but then it probably would have exceeded 80 characters per line. That’s a major code smell.

The Performance

Is the D wc faster than the coreutils wc? No, but it took me 15 minutes to write mine (I had to search for walkLength, because I forgot its name).

file       lines  bytes  coreutils            haskell             D
app.d      46     906    3.5 ms ± 1.9 ms      39.6 ms ± 7.8 ms    8.9 ms ± 2.1 ms
big.txt    862    64k    4.7 ms ± 2.0 ms      39.6 ms ± 7.8 ms    9.8 ms ± 2.1 ms
vbig.txt   1.7M   96M    658.6 ms ± 24.5 ms   226.4 ms ± 29.5 ms  1.102 s ± 0.022 s
vbig2.txt  12.1M  671M   4.4 s ± 0.058 s      1.1 s ± 0.039 s     7.4 s ± 0.085 s

Memory:

file       coreutils  haskell  D
app.d      2052K      7228K    7708K
big.txt    2112K      7512K    7616K
vbig.txt   2288K      42620K   7712K
vbig2.txt  2360K      50860K   7736K

Is the Haskell wc faster? For big files, absolutely, but then it is using threads. For small files, GNU’s coreutils still beats the competition. At this stage my version is very likely IO bound, and it’s fast enough anyway.

I’ll not claim that one language is faster than another. If you spend a chunk of time on optimizing a micro-benchmark, you are likely going to beat the competition. That’s not real life. But I will claim that functional programming in D gives functional programming in Haskell a run for its money.

A Bit About Ranges

A range is an abstraction that you can consume through iteration without consuming the underlying collection (if there is one). Technically, a range can be a struct or a class that adheres to one of a handful of Range interfaces. The most basic form, the InputRange, requires the function

void popFront();

and two members or properties:

T front;
bool empty;

T is the generic type of the elements the range iterates.

In D, ranges are special in a way that other objects are not. When a range is given to a foreach statement, the compiler does a little rewrite.

foreach (e; range) { ... }

is rewritten to

for (auto __r = range; !__r.empty; __r.popFront()) {
    auto e = __r.front;
    ...
}

The auto in auto e = infers the type and is equivalent to writing T e =.

Given this knowledge, building a range that can be used by foreach is easy.

struct Iota {
	int front;
	int end;

	@property bool empty() const {
		return this.front == this.end;
	}

	void popFront() {
		++this.front;
	}
}

unittest {
	import std.stdio;
	foreach(it; Iota(0, 10)) {
		writeln(it);
	}
}

Iota is a very simple range. It functions as a generator, having no underlying collection. It iterates integers from front to end in steps of one. This snippet introduces a little bit of D syntax.

@property bool empty() const {

The @property attribute allows us to use the function empty the same way as a member variable (calling the function without the parentheses). The trailing const means that we don’t modify any data of the instance we call empty on. The built-in unit test prints the numbers 0 to 9.

Another small feature is the lack of an explicit constructor. The struct Iota has two member variables of type int. In the foreach statement in the test, we create an Iota instance as if it had a constructor that takes two ints. This is a struct literal. When the D compiler sees this, and the struct has no matching constructor, the ints will be assigned to the struct’s member variables from top to bottom in the order of declaration.

The relation between the three members is really simple. If empty is false, front is guaranteed to return a different element, the next one in the iteration, after a call to popFront. After calling popFront the value of empty might have changed. If it is true, this means there are no more elements to iterate and any further calls to front are not valid. According to the InputRange documentation:

  • front can be legally evaluated if and only if evaluating empty has, or would have, equaled false.
  • front can be evaluated multiple times without calling popFront or otherwise mutating the range object or the underlying data, and it yields the same result for every evaluation.

Now, using foreach statements, or loops in general, is not really functional in my book. Let’s say we want to filter out all the odd numbers of the Iota range. We could put an if inside the foreach block, but that would only make it worse. It would be nicer if we had a range that takes a range and a predicate that decides whether an element is okay to pass along.

struct Filter {
	Iota input;
	bool function(int) predicate;

	this(Iota input, bool function(int) predicate) {
		this.input = input;
		this.predicate = predicate;
		this.testAndIterate();
	}

	void testAndIterate() {
		while(!this.input.empty
				&& !this.predicate(this.input.front))
		{
			this.input.popFront();
		}
	}

	void popFront() {
		this.input.popFront();
		this.testAndIterate();
	}

	@property int front() {
		return this.input.front;
	}

	@property bool empty() const {
		return this.input.empty;
	}
}

bool isEven(int a) {
	return a % 2 == 0;
}

unittest {
	import std.stdio : writeln;
	foreach(it; Filter(Iota(0,10), &isEven)) {
		writeln(it);
	}
}

Filter is again really simple: it takes one Iota and a function pointer. On construction of Filter, we call testAndIterate, which pops elements from Iota until it is either empty or the predicate returns true for the front element. The idea is that the passed predicate decides what to filter out and what to keep. The properties front and empty just forward to Iota. The only thing that actually does any work is popFront. It pops the current element and calls testAndIterate. That’s it. That’s an implementation of filter.

Sure, there is a new while loop in testAndIterate, but rewriting that with recursion is just silly, in my opinion. What makes D great is that you can use the right tool for the job. Functional programming is fine and dandy a lot of the time, but sometimes it’s not. If a bit of inline assembly would be necessary or nicer, use that.

The call to Filter still does not look very nice. Assuming you are used to reading from left to right, Filter comes before Iota, even though it is executed after Iota. D has no pipe operator, but it does have Uniform Function Call Syntax (UFCS). If an expression can be implicitly converted to the first parameter of a function, the function can be called as if it were a member function of the expression’s type. That’s a lot of words, I know. An example helps:

string foo(string a) {
	return a ~ "World";
}

unittest {
	string a = foo("Hello ");
	string b = "Hello ".foo();
	assert(a == b);
}

The above example shows two calls to the function foo. As the assert indicates, both calls are equivalent. What does that mean for our Iota Filter example? UFCS allows us to rewrite the unit test to:

unittest {
	import std.stdio : writeln;
	foreach(it; Iota(0,10).Filter(&isEven)) {
		writeln(it);
	}
}

Implementing a map/transform range should now be possible for every reader. Sure, Filter can be made more abstract through the use of templates, but that’s just work, nothing conceptually new.
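
For reference, here is one way it could look, following the same pattern as Filter (a sketch, not Phobos’s map):

struct Map {
	Iota input;
	int function(int) fun;

	void popFront() {
		this.input.popFront();
	}

	@property int front() {
		// Transform each element as it is read.
		return this.fun(this.input.front);
	}

	@property bool empty() const {
		return this.input.empty;
	}
}

int square(int a) {
	return a * a;
}

unittest {
	import std.stdio : writeln;
	foreach (it; Map(Iota(0, 5), &square)) {
		writeln(it); // prints 0, 1, 4, 9, 16
	}
}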

Of course, there are different kinds of ranges, like a bidirectional range. Guess what that allows you to do. A small tip: a bidirectional range has two new primitives called back and popBack. There are other range types as well, but after you understand the input range demonstrated twice above, you pretty much know them all.

P.S. Just to be clear, do not implement your own filter, map, or fold; the D standard library Phobos has everything you ever need. Have a look at std.algorithm and std.range.

About the Author

Robert Schadek received a master’s degree in Computer Science at the University of Oldenburg. His master’s thesis, titled “DMCD A Distributed Multithreading Caching D Compiler”, saw him build a D compiler from scratch. From 2012 to 2018 he was a computer science PhD student at the University of Oldenburg, where his research focused on quorum systems in combination with graphs. Since 2018 he has been happily using D in his day job working for Symmetry Investments.

What is Symmetry Investments?

Symmetry Investments is a global investment company with offices in Hong Kong, Singapore and London. We have been in business since 2014 after successfully spinning off from a major New York-based hedge fund.

At Symmetry, we seek to engage in intelligent risk-taking to create value for our clients, partners and employees. We derive our edge from our capacity to generate Win-Wins – in the broadest sense. Win-Win is our fundamental ethical and strategic principle. By generating Win-Wins, we can create unique solutions that reconcile perspectives that are usually seen as incompatible or opposites, and encompass the best that each side has to offer. We integrate fixed-income arbitrage with global macro strategies in a novel way. We invent and develop technology that focuses on the potential of human-machine integration. We build systems where machines do what they do best, supporting people to do what people do best. We are creating a collaborative meritocracy: a culture where individual contribution serves both personal and collective goals – and is rewarded accordingly. We value both ownership thinking AND cooperative team spirit, self-realisation AND community.

People at Symmetry Investments have been active participants in the D community since 2014. We have sponsored the development of excel-d, dpp, autowrap, libmir, and various other projects. We started Symmetry Autumn of Code in 2018 and hosted DConf 2019 in London.

My Vision of D’s Future

When Andrei Alexandrescu stepped down as deputy leader of the D programming language, I was asked to take over the role going forward. It’s needless to say, but I’ll say it anyway, that those are some pretty big shoes to fill.

I’m still settling into my new role in the community and figuring out how I want to do things and what those things even are. None of this happens in a vacuum either, since Walter needs to be on board as well.

I was asked on the D forums to write a blog post on my “dreams and way forward for D”, so here it is. What I’d like to happen with D in the near future:

Memory safety

“But D has a GC!”, I hear you exclaim. Yes, but it’s also a systems programming language with value types and pointers, meaning that today, D isn’t memory safe. DIP1000 was a step in the right direction, but we have to be memory safe unless programmers opt out via an “I know what I’m doing” @trusted block or function. This includes transitioning to @safe by default.

Safe and easy concurrency

We’re mostly there—using the actor model eliminates a lot of problems that would otherwise naturally occur. We need to finalize shared and make everything @safe as well.

Make D the default implementation language

D’s static reflection and code generation capabilities make it an ideal candidate to implement a codebase that needs to be called from several different languages and environments (e.g. Python, Excel, R, …). Traditionally this is done by specifying data structures and RPC calls in an Interface Definition Language (IDL) then translating that to the supported languages, with a wire protocol to go along with it.

With D, none of that is necessary. One can write the production code in D and have libraries automagically make that code callable from other languages. Add to all of this that it’s possible and easy to write D code that runs as fast or faster than the alternatives, and it’s a win on all fronts.

Second-to-none reflection

Instead of disparate ways of getting things done with fragmented APIs (__traits, std.traits, custom code), I’d like for there to be a library that centralizes all reflection needs with a great API. I’m currently working on it.

Easier interop with C++

As I mentioned in my DConf 2019 talk, C++ has had the success it’s enjoyed so far by making the transition from C virtually seamless. I want current C++ programmers with legacy codebases to just as easily be able to start writing D code. That’s why I wrote dpp, but it’s not quite there yet and we might have to make language changes to accommodate this going forward.

Fast development times

I think we need a ridiculously fast interpreter so that we can skip machine code generation and linking. To me, this should be the default way of running unittest blocks for faster feedback, with programmers only compiling their code for runtime performance and/or to ship binaries to final users. This would also enable a REPL.

String interpolation

I was initially against this, but the more I think about it the more it seems to make sense for D. Why? String mixins. Code generation is one of D’s greatest strengths, and token strings enable visually pleasing blocks of code that are actually “just strings”. String interpolation would make them vastly easier to use. As it happens, there’s a draft DIP for it in the pipeline.
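
For context, here is a hedged sketch of how such code generation looks today, pairing a token string with std.format; interpolation would eliminate the placeholder shuffling:

// A sketch: code generation via mixin + format today. The generated
// names (counter, bump) are purely illustrative.
import std.format : format;

enum name = "counter";
mixin(format(q{
    int %s;
    void bump() { ++%s; }
}, name, name));

void main()
{
    bump();
    assert(counter == 1);
}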

That’s what I came up with after a long walk by Lake Geneva. I’d love to know what the community thinks of this, what their pet peeves and pet features would be, and how they think this would help or hinder D going forward.

Ownership and Borrowing in D

Nearly all non-trivial programs allocate and manage memory. Getting it right is becoming increasingly important, as programs get ever more complex and mistakes get ever more costly. The usual problems are:

  1. memory leaks (failure to free memory when no longer in use)
  2. double frees (freeing memory more than once)
  3. use-after-free (continuing to refer to memory already freed)

The challenge is in keeping track of which pointers are responsible for freeing the memory (i.e. owning the memory), which pointers are merely referring to the memory, where they are, and which are active (in scope).

The common solutions are:

  1. Garbage Collection – The GC owns the memory and periodically scans memory looking for any pointers to that memory. If none are found, the memory is released. This scheme is reliable and in common use in languages like Go and Java. It tends to use much more memory than strictly necessary, have pauses, and slow down code because of inserted write gates.
  2. Reference Counting – The RC object owns the memory and keeps a count of how many pointers point to it. When that count goes to zero, the memory is released. This is also reliable and is commonly used in languages like C++ and Objective-C. RC is memory efficient, needing only a slot for the count. The downsides of RC are the expense of maintaining the count, building an exception handler to ensure the decrement is done, and the locking needed for objects shared between threads. To regain efficiency, sometimes the programmer will cheat and temporarily refer to the RC object without dealing with the count, engendering a risk that this is not done correctly.
  3. Manual – Manual memory management is exemplified by C’s malloc and free. It is fast and memory efficient, but there’s no language help at all in using them correctly. It’s entirely up to the programmer’s skill and diligence in using it. I’ve been using malloc and free for 35 years, and through bitter and endless experience rarely make a mistake with them anymore. But that’s not the sort of thing a programming shop can rely on, and note I said “rarely” and not “never”.

Solutions 2 and 3 more or less rely on faith in the programmer to do it right. Faith-based systems do not scale well, and memory management issues have proven to be very difficult to audit (so difficult that some coding standards prohibit use of memory allocation).

But there is a fourth way – Ownership and Borrowing. It’s memory efficient, as performant as manual management, and mechanically auditable. It has been recently popularized by the Rust programming language. It has its downsides, too, in the form of a reputation for having to rethink how one composes algorithms and data structures.

The downsides are manageable, and the rest of this article is an outline of how the ownership/borrowing (OB) system works, and how we propose to fit it into D. I had originally thought this would be impossible, but after spending a lot of time thinking about it I’ve found a way to fit it in, much like we’ve fit functional programming into D (with transitive immutability and function purity).

Ownership

The solution to who owns the memory object is ridiculously simple—there is only one pointer to it, so that pointer must be the owner. It is responsible for releasing the memory, after which it will cease to be valid. It follows that any pointers in the memory object are the owners of what they point to, there are no other pointers into the data structure, and the data structure therefore forms a tree.

It also follows that pointers are not copied, they are moved:

T* f();
void g(T*);
T* p = f();
T* q = p; // value of p is moved to q, not copied
g(p);     // error, p has invalid value

Moving a pointer out of a data structure is not allowed:

struct S { T* p; }
S* f();
S* s = f();
T* q = s.p; // error, can't have two pointers to s.p

Why not just mark s.p as being invalid? The trouble there is one would need to do so with a runtime mark, and this is supposed to be a compile-time solution, so attempting it is simply flagged as an error.

Having an owning pointer fall out of scope is also an error:

void h() {
  T* p = f();
} // error, forgot to release p?

It’s necessary to move the pointer somewhere else:

void g(T*);
void h() {
  T* p = f();
  g(p);  // move to g(), it's now g()'s problem
}

This neatly solves memory leaks and use-after-free problems. (Hint: to make it clearer, replace f() with malloc(), and g() with free().)
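
Applying that hint literally (a sketch; assumes the ownership checks described here are active):

void h()
{
    import core.stdc.stdlib : malloc, free;
    void* p = malloc(16); // p becomes the owner
    free(p);              // ownership moves to free(); p is now invalid
    // free(p);           // would be flagged: use of a moved, invalid value
}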

This can all be tracked at compile time through a function by using Data Flow Analysis (DFA) techniques, like those used to compute Common Subexpressions. DFA can unravel whatever rat’s nest of gotos happen to be there.

Borrowing

The ownership system described above is sound, but it is a little too restrictive. Consider:

struct S { void car(); void bar(); }
struct S* f();
S* s = f();
s.car();  // s is moved to car()
s.bar();  // error, s is now invalid

To make it work, s.car() would have to have some way of moving the pointer value back into s when s.car() returns.

In a way, this is how borrowing works. s.car() borrows a copy of s for the duration of the execution of s.car(). s is invalid during that execution and becomes valid again when s.car() returns.

In D, struct member functions take the this by reference, so we can accommodate borrowing through an enhancement: taking an argument by ref borrows it.

D also supports scope pointers, which are also a natural fit for borrowing:

void g(scope T*);
T* f();
T* p = f();
g(p);      // g() borrows p
g(p);      // we can use p again after g() returns

(When functions take arguments by ref, or pointers by scope, they are not allowed to escape the ref or the pointer. This fits right in with borrow semantics.)

Borrowing in this way fulfills the promise that only one pointer to the memory object exists at any one time, so it works.

Borrowing can be enhanced further with a little insight that the ownership system is still safe if there are multiple const pointers to it, as long as there are no mutable pointers. (Const pointers can neither release their memory nor mutate it.) That means multiple const pointers can be borrowed from the owning mutable pointer, as long as the owning mutable pointer cannot be used while the const pointers are active.

For example:

T* f();
void g(T*);
T* p = f();  // p becomes owner
{
  scope const T* q = p; // borrow const pointer
  scope const T* r = p; // borrow another one
  g(p); // error, p is invalid while q and r are in scope
}
g(p); // ok

Principles

The above can be distilled into the notion that a memory object behaves as if it is in one of two states:

  1. there exists exactly one mutable pointer to it
  2. there exist one or more const pointers to it

The careful reader will notice something peculiar in what I wrote: “as if”. What do I mean by that weasel wording? Is there some skullduggery going on? Why yes, there is. Computer languages are full of “as if” dirty deeds under the hood, like the money you deposit in your bank account isn’t actually there (I apologize if this is a rude shock to anyone), and this isn’t any different. Read on!

But first, a bit more necessary exposition.

Folding Ownership/Borrowing into D

Isn’t this scheme incompatible with the way people normally write D code, and won’t it break pretty much every D program in existence? And not break them in an easily fixed way, but break them so badly they’ll have to redesign their algorithms from the ground up?

Yup, it sure is. Except that D has a (not so) secret weapon: function attributes. It turns out that the semantics for the Ownership/Borrowing (aka OB) system can be run on a per-function basis after the usual semantic pass has been run. The careful reader may have noticed that no new syntax is added, just restrictions on existing code. D has a history of using function attributes to alter the semantics of a function—for example, adding the pure attribute causes a function to behave as if it were pure. To enable OB semantics for a function, an attribute @live is added.

This means that OB can be added to D code incrementally, as needed, and as time and resources permit. It becomes possible to add OB while, and this is critical, keeping your project in a fully functioning, tested, and releasable state. It’s mechanically auditable how much of the project is memory safe in this manner. It adds to the list of D’s many other memory-safe guarantees (such as no pointers to the stack escaping).
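
A hedged sketch of what that looks like (allocate and release are hypothetical helpers, not a real API):

int* allocate();     // hypothetical owning allocator
void release(int*);  // hypothetical matching release

@live void usage()
{
    int* p = allocate(); // p owns the allocation
    int* q = p;          // ownership moves to q
    // release(p);       // error under @live: p holds a moved, invalid value
    release(q);          // ok: q is the owner
}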

As If

Some necessary things cannot be done with strict OB, such as reference counted memory objects. After all, the whole point of an RC object is to have multiple pointers to it. Since RC objects are memory safe (if built correctly), they can work with OB without negatively impinging on memory safety. They just cannot be built with OB. The solution is that D has other attributes for functions, like @system. @system is where much of the safety checking is turned off. Of course, OB will also be turned off in @system code. It’s there that the RC object’s implementation hides from the OB checker.

But in OB code, the RC object looks to the OB checker like it is obeying the rules, so no problemo!

A number of such library types will be needed to successfully use OB.

Conclusion

This article is a basic overview of OB. I am working on a much more comprehensive specification. It’s always possible I’ve missed something and that there’s a hole below the waterline, but so far it’s looking good. It’s a very exciting development for D and I’m looking forward to getting it implemented.

For further discussion and comments from Walter, see the discussion threads on the /r/programming subreddit and at Hacker News.

Using const to Enforce Design Decisions

The saying goes that the best code is no code. As soon as a project starts to grow, technical debt is introduced. When a team is forced to adapt to a new company guideline inconsistent with their previous vision, the debt results from a business decision. This could be tackled at the company level. Sometimes technical debt can arise simply due to the passage of time, when new versions of dependencies or of the compiler introduce breaking changes. You can try to tackle this by letting your local physicist stop the flow of time. More often, however, technical debt is caused when issues are fixed with a quick hack due to time pressure or a lack of knowledge of the code base. Carefully crafted design strategies are temporarily neglected. This blog post focuses on the const modifier, one of the convenient tools D offers to minimize the growth of technical debt and enforce design decisions.

To keep a code base consistent, design guidelines, either explicit or implicit, are often put in place. Every developer on the team is expected to adhere to the guidelines as a gentleman’s agreement. This effectively results in a policy that is only enforced if both the programmer and the reviewer have had enough coffee. Simple changes, like adding a call to an object method, might seem innocent, but can reduce the consistency and the traceability of errors. Detecting this in a code review requires in-depth knowledge of the method’s implementation.

An example from a real world project I’ve worked on is generating financial transactions for a read-only view, the display function in the following code fragment. Nothing seemed wrong with it, until I realized that those transactions were persisted and eventually used for actual payments without being calculated again, as seen in the process method. Potentially different payments occurred depending on whether the user decided to glance at the summary, thereby triggering the generation with new currency exchange rates, or just blindly clicked the OK button. That’s not what an innocent bystander like myself expects and has caused many frowns already.

public class Order
{
    private Transaction[] _transactions;

    public Transaction[] getTransactions()
    {
        _transactions = calculate();
        return _transactions;
    }

    public void process()
    {
        foreach(t; _transactions){
            // ...
        }
    }
}

void display(Order order)
{
    auto t = order.getTransactions();
    show(t);
}

The internet has taught me that if it is possible, it will one day happen. Therefore, we should make an attempt to make the undesired impossible.

A constant feature

By default, variables and object instances in D are mutable, just like in many other programming languages. If we want to prevent objects from ever changing, we can mark them immutable, e.g., immutable MyClass obj = new MyClass();. immutable means that we can modify neither the object reference (head constant) nor the object’s properties (tail constant). The first case corresponds to final in Java and readonly in C#, both of which signify head constant only. D’s implementation means that nobody can ever modify an object marked immutable. What if an object needs to be mutable in one place, but immutable in another? That’s where D’s const comes in.

Unlike immutable, whose contract states that an object cannot be mutated through any reference, const allows an object to be modified through another, non-const reference. This means it’s illegal to initialize an immutable reference with a mutable one, but a const reference can be initialized with a mutable, const, or immutable reference. In a function parameter list, const is preferred over immutable because it can accept arguments with either qualifier or none. Schematically, it can be visualized as in the following figure.

(Figure: const relationships)
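A compilable sketch of these conversion rules, using int arrays for illustration (takesConst is just an illustrative helper):

void takesConst(const(int)[] numbers) {}

void main()
{
    int[] m = [1, 2, 3];
    immutable(int)[] i = [4, 5, 6];
    const(int)[] c = m;    // fine: const accepts a mutable reference

    takesConst(m);         // fine: mutable converts to const
    takesConst(i);         // fine: immutable converts to const
    takesConst(c);         // fine: const stays const
    // immutable(int)[] e = m; // error: mutable does not convert to immutable
}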

A constant detour

D’s const differs from C++ const in a significant way: it’s transitive (see the const(FAQ) for more details). In other words, it’s not possible to declare any object in D as head constant. This isn’t obvious from examples with classes, since classes in D are reference types, but is more apparent with pointers and arrays. Consider these C++ declarations:

const int *cp0;         // mutable pointer to const data
int const *cp1;         // alternative syntax for the same

Variable declarations in C++ are best read from right to left. Although const int is likely the more common syntax, int const matches the way the declaration should be read: cp0 is a mutable pointer to a const int. In D, the equivalent of cp0 and cp1 is:

const(int)* dp0;

The next example shows what head constant looks like in C++.

int *const cp2;         // const pointer to mutable data

We can read the declaration of cp2 as: cp2 is a const pointer to a mutable int. There is no equivalent in D. It’s possible in C++ to have any number and combination of const and mutable pointers to const or mutable data. But in D, if const is applied to the outermost pointer, then it applies to all the inner pointers and the data as well. Or, as they say, it’s turtles all the way down.

The C++ equivalent of D’s fully transitive const looks like this:

int const *const cp3;         // const pointer to const data

This declaration reads: cp3 is a const pointer to const data. It can be expressed in D like so:

const(int*) dp1;
const int* dp2;     // same as dp1

All of the above syntax holds true for D arrays:

const(int)[] a1;    // mutable reference to const data
const(int[]) a2;    // const reference to const data
const int[] a3;     // same as a2

const in the examples can be replaced with immutable, with the caveat that initializers must match the declaration, e.g. immutable(int)* can only be initialized with immutable(int)*.
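Here is a small sketch showing the transitivity in action, reusing the pointer declarations from above:

void main()
{
    int x = 1;
    int* p = &x;

    const(int)* dp0 = p;  // mutable pointer to const data
    dp0 = null;           // fine: the pointer itself can be rebound
    // *dp0 = 2;          // error: the data is const

    const(int*) dp1 = p;  // const pointer to const data
    // dp1 = null;        // error: the pointer is const
    // *dp1 = 2;          // error: transitivity makes the data const too
}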

Finally, note that classes in D are reference types; the reference is baked in, so applying const or immutable to a class reference is always equivalent to const(T*), and there is no equivalent of const(T)* for classes. Structs in D, on the other hand, are value types, so pointers to structs can be declared like the int pointers in the examples above.

A constant example

For the sake of argument, we assume that updating the currency exchange rates is a function that, by definition, needs to mutate the order. After updating the order, we want to show the updated prices to our user. Conceptually, the display function should not modify the order. We can prevent mutation by adding the const modifier to our function parameter. The implicit rule is now made explicit: the function takes an Order as input, and treats the object as a constant. We no longer have a gentleman’s agreement, but a formal contract. Changing the contract will, hopefully, require thorough negotiation with your peer reviewer.

void updateExchangeRates(Order order);
void display(const Order order);

void updateAndDisplay()
{
    Order order = //…
    updateExchangeRates(order);
    display(order); // Implicitly converted to const.
}

As with all contracts, defining it is the easiest part. The hard part is enforcing it. The D compiler is our friend in this problem. If we try to compile the code, we will get a compiler error.

void display(const Order order)
{
    // ERROR: cannot call mutable method
    auto t = order.getTransactions();
    show(t);
}

We never explicitly stated that getTransactions doesn’t modify the object. As the method is virtual by default, the compiler cannot derive the behavior either way. Without that knowledge, the compiler is required to assume that the method might modify the object. In other words, in the D justice system one is guilty until proven innocent. Let’s prove our innocence by marking the method itself const, telling the compiler that we do not intend to modify our data.

public class Order
{
    private Transaction[] _transactions;

    public Transaction[] getTransactions() const
    {
        _transactions = calculate(); // ERROR: cannot mutate field
        return _transactions;
    }
}

void display(const Order order)
{
    auto t = order.getTransactions(); // Now compiles :)
    show(t);
}

By marking the method const, we make the promise that it does not modify any object state part of the method signature, and the original compile error goes away. The compiler is now satisfied with the method call in the display function, but it finds another problem: our getter, which we stated should not modify data, actually does modify it. We found our code smell by formalizing our guidelines and letting the compiler figure out the rest.

It seems promising enough to try it on a real project.

A constant application

I had a pet project lying around and decided to put the effort into enforcing the constraint. This is what inspired me to write this post. The project is a four-player mahjong game. The relevant part, in abstraction, is highlighted in the image.

(Figure: Mahjong abstraction)

The main engine behind the game is the white box in the center. A player or AI is sent a message with a const view of the game data for display purposes and to determine their next move. A message is sent back to the engine, which then determines the mutation to apply to the internally mutable game data. The most obvious win is that I cannot accidentally modify the game data when drawing my UI. Which, of course, turned out to be happening before I refactored the game data to be const.

Upon closer inspection, coming from the UI there is only one way to manipulate the state of the game. The UI sends a message to the engine and remains oblivious of the state changes that need to be applied. This also encourages layered development and improves testability of the code. So far, so good. But during the refactoring, a problem arose. Recall that marking an object with const means that only member functions that promise not to modify the object, those marked with const themselves, can be called. Some of these could be trivially fixed by applying const or, at worst, inout (a sort of wildcard). However, the more persistent issues, like in the Order example, required me to go back to the drawing board and rethink my domain. In the end, being forced to think about mutability versus immutability improved my understanding of my own code base.
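As an illustration of that inout fix (Player and Tile here are simplified stand-ins, not the actual game code), a single inout getter serves callers with mutable, const, and immutable instances alike:

struct Tile { int number; }

class Player
{
    private Tile[] _tiles;

    // The returned slice inherits the qualifier of `this`:
    // mutable for mutable players, const for const players.
    inout(Tile)[] tiles() inout { return _tiles; }
}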

A constant verdict

Is const all good? It’s not a universal answer and certainly has downsides. The most notable one is that it kills lazy initialization, where a property is computed only when first requested and the result is then cached. Sometimes, like in the earlier example, this is a code smell, but there are legit use cases. In my game, I have a class that composes the dashboard per player. Updating it is expensive and rarely required. The screen, however, gets rendered sixty times a second. It makes sense to cache the dashboards and only update them when the player objects change. I could split the method in two, but then my abstraction would leak its optimization. I settled for not using const here, as it was a module-private class and didn’t have a large impact on my codebase.

A complaint that is sometimes heard regarding const is that it does not work well with one of D’s main selling points, ranges. A range is D’s implementation of a lazy iterator, usable in foreach loops and heavily used in std.algorithm. The functions in std.algorithm can handle ranges with const elements perfectly fine. However, iterating a range changes the internal values and ultimately consumes the range. Therefore, when a range itself is const, it cannot be iterated over. I think this makes sense by design, but I can imagine that this behavior can be surprising in edge cases. I haven’t encountered this as I generate new ranges on every query, both on mutable and const objects.
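A small sketch of that behavior, using a hand-rolled range (Counter is made up for illustration):

struct Counter
{
    int current, limit;
    bool empty() const { return current >= limit; }
    int front() const { return current; }
    void popFront() { ++current; } // mutates the range, so it can't be const
}

void main()
{
    auto counter = Counter(0, 3);
    foreach (n; counter) {}  // fine: iterates over a mutable copy

    const constCounter = Counter(0, 3);
    // foreach (n; constCounter) {} // error: popFront isn't callable on const
}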

A guideline for when to use const would be to separate the queries from the commands, a.k.a. ye olde Command-Query Separation (CQS). All queries should, in principle, be const. To perform a mutation, even on nested objects, one should call a method on the object. I’d double down on this and state that member functions should be commands, with logic that can be overridden, and therefore never be marked constant. Queries basically serve as a means to peek at encapsulated data and don’t need to be overridden. A query can be a final, non-virtual function that simply exposes a read-only view of the inner field. For example, using D’s module-private modifier on the field in conjunction with a free function in the same module, we can put the logic inside the class definition and the queries outside.

// in order.d
public class Order
{
    private Transaction[] _transactions; // Accessible in order.d

    public void process(); // Virtual and not restricted
}

public const(Transaction)[] getTransactions(const Order order)
{
    // Function, not virtual, operates on a read-only view of Order
    return order._transactions;
}

// in view.d
void display(const Order order)
{
    auto t = order.getTransactions();
    show(t);
}

We should take care, however, not to overapply the modifier. The question that we need to answer is not “Do I modify this object here?”, but rather, “Does it make sense that the object is modified here?” Wrongly assuming constant objects will result in trouble when you finally need to change the instance due to a new feature. For example, in my game a central Game class contains the players’ hands, but doesn’t explicitly modify them. However, given the structure of my code, it does not make sense to make the player objects constant, as free functions in the engine do use mutable player instances of the game object.

Reflecting on my design, even while writing this blog post, gave me valuable insights. Taking the effort to properly use the tool called const has paid off for me. It improved the structure of my code and my understanding of the ramblings I write. Like any other tool, it is not a silver bullet. It serves to formalize our gentleman’s agreement and is therefore just as fragile as one.


Marco graduated in physics, where he used Fortran and Matlab. He used the book Programming in D to learn application programming. After three years as a C# developer, he is now a trainer-coach for mainly Java and C# developers at Sogyo. Marco uses D for programming experiments and side projects, including his darling mahjong game.

Lost in Translation: Encapsulation

I first learned programming in BASIC. Outgrew it, and switched to Fortran. Amusingly, my early Fortran code looked just like BASIC. My early C code looked like Fortran. My early C++ code looked like C. – Walter Bright, the creator of D

Programming in a language is not the same as thinking in that language. A natural side effect of experience with one programming language is that we view other languages through the prism of its features and idioms. Languages in the same family may look and feel similar, but there are guaranteed to be subtle differences that, when not accounted for, can lead to compiler errors, bugs, and missed opportunities. Even when good docs, books, and other materials are available, most misunderstandings are only going to be solved through trial-and-error.

D programmers come from a variety of programming backgrounds, C-family languages perhaps being the most common among them. Understanding the differences and how familiar features are tailored to D can open the door to more possibilities for organizing a code base, and designing and implementing an API. This article is the first of a few that will examine D features that can be overlooked or misunderstood by those experienced in similar languages.

We’re starting with a look at a particular feature that’s common among languages that support Object-Oriented Programming (OOP). There’s one aspect in particular of the D implementation that experienced programmers are sure they already fully understand and are often surprised to later learn they don’t.

Encapsulation

Most readers will already be familiar with the concept of encapsulation, but I want to make sure we’re on the same page. For the purpose of this article, I’m talking about encapsulation in the form of separating interface from implementation. Some people tend to think of it strictly as it relates to object-oriented programming, but it’s a concept that’s more broad than that. Consider this C code:

#include <stdio.h>
static size_t s_count;

void print_message(const char* msg) {
    puts(msg);
    s_count++;
}

size_t num_prints() { return s_count; }

In C, functions and global variables decorated with static become private to the translation unit (i.e. the source file along with any headers brought in via #include) in which they are declared. Non-static declarations are publicly accessible, usually provided in header files that lay out the public API for clients to use. Static functions and variables are used to hide implementation details from the public API.

C’s approach to encapsulation is minimal. C++ supports the same feature, but it also has anonymous namespaces that can encapsulate type definitions in addition to declarations. Like Java, C#, and other languages that support OOP, C++ also has access modifiers (alternatively known as access specifiers, protection attributes, or visibility attributes) which can be applied to class and struct member declarations.

C++ supports the following three access modifiers, common among OOP languages:

  • public – accessible to the world
  • private – accessible only within the class
  • protected – accessible only within the class and its derived classes

An experienced Java programmer might raise a hand to say, “Um, excuse me. That’s not a complete definition of protected.” That’s because in Java, it looks like this:

  • protected – accessible within the class, its derived classes, and classes in the same package.

Every class in Java belongs to a package, so it makes sense to factor packages into the equation. Then there’s this:

  • package-private (not a keyword) – accessible within the class and classes in the same package.

This is the default access level in Java when no access modifier is specified. This, combined with protected, makes packages a tool for encapsulation beyond classes in Java.

Similarly, C# has assemblies, which MSDN defines as “a collection of types and resources that forms a logical unit of functionality”. In C#, the meaning of protected is identical to that of C++, but the language has two additional forms of protection that relate to assemblies and that are analogous to Java’s protected and package-private.

  • internal – accessible within the class and classes in the same assembly.
  • protected internal – accessible within the class, its derived classes, and classes in the same assembly.

Examining encapsulation in other programming languages will continue to turn up similarities and differences. Common encapsulation idioms are generally adapted to language-specific features. The fundamental concept remains the same, but the scope and implementation vary. So it should come as no surprise that D also approaches encapsulation in its own, language-specific manner.

Modules

The foundation of D’s approach to encapsulation is the module. Consider this D version of the C snippet from above:

module mymod;

private size_t _count;

void printMessage(string msg) {
    import std.stdio : writeln;

    writeln(msg);
    _count++;
}

size_t numPrints() { return _count; }

In D, access modifiers can apply to module-scope declarations, not just class and struct members. _count is private, meaning it is not visible outside of the module. printMessage and numPrints have no access modifiers; they are public by default, making them visible and accessible outside of the module. Both functions could have been annotated with the keyword public.

Note that imports in module scope are private by default, meaning the symbols in the imported modules are not visible outside the module, and local imports, as in the example, are never visible outside of their parent scope.
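A short sketch of that behavior across two hypothetical modules:

// a.d
module a;

import std.stdio;        // private by default: not visible through a
public import std.conv;  // explicitly public: re-exported to importers

// b.d
module b;

import a;

void demo()
{
    auto s = to!string(42); // fine: std.conv is publicly imported by a
    // writeln(s);          // error: std.stdio is private to a
}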

Alternative syntaxes are supported, giving more flexibility to the layout of a module. For example, there’s C++ style:

module mymod;

// Everything below this is private until either another
// protection attribute or the end of file is encountered.
private:
    size_t _count;

// Turn public back on
public:
    void printMessage(string msg) {
        import std.stdio : writeln;

        writeln(msg);
        _count++;
    }

    size_t numPrints() { return _count; }

And this:

module mymod;

private {
    // Everything declared within these braces is private.
    size_t _count;
}

// The functions are still public by default
void printMessage(string msg) {
    import std.stdio : writeln;

    writeln(msg);
    _count++;
}

size_t numPrints() { return _count; }

Modules can belong to packages. A package is a way to group related modules together. In practice, the source files corresponding to each module should be grouped together in the same directory on disk. Then, in the source file, each directory becomes part of the module declaration:

// mypack/amodule.d
module mypack.amodule;

// mypack/subpack/anothermodule.d
module mypack.subpack.anothermodule;

Note that it’s possible to have package names that don’t correspond to directories and module names that don’t correspond to files, but it’s bad practice to do so. A deep dive into packages and modules will have to wait for a future post.

mymod does not belong to a package, as no packages were included in the module declaration. Inside printMessage, the function writeln is imported from the stdio module, which belongs to the std package. Packages have no special properties in D and primarily serve as namespaces, but they are a common part of the codescape.

In addition to public and private, the package access modifier can be applied to module-scope declarations to make them visible only within modules in the same package.

Consider the following example. There are three modules in three files (only one module per file is allowed), each belonging to the same root package.

// src/rootpack/subpack1/mod2.d
module rootpack.subpack1.mod2;
import std.stdio;

package void sayHello() {
    writeln("Hello!");
}

// src/rootpack/subpack1/mod1.d
module rootpack.subpack1.mod1;
import rootpack.subpack1.mod2;

class Speaker {
    this() { sayHello(); }
}

// src/rootpack/app.d
module rootpack.app;
import rootpack.subpack1.mod1;

void main() {
    auto speaker = new Speaker;
}

Compile this with the following command line:

cd src
dmd -i rootpack/app.d

The -i switch tells the compiler to automatically compile and link imported modules (excluding those in the standard library namespaces core and std). Without it, each module would have to be passed on the command line, else they wouldn’t be compiled and linked.
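Without -i, the equivalent invocation lists each module explicitly:

cd src
dmd rootpack/app.d rootpack/subpack1/mod1.d rootpack/subpack1/mod2.d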

The class Speaker has access to sayHello because they belong to modules that are in the same package. Now imagine we do a refactor and we decide that it could be useful to have access to sayHello throughout the rootpack package. D provides the means to make that happen by allowing the package attribute to be parameterized with the fully-qualified name (FQN) of a package. So we can change the declaration of sayHello like so:

package(rootpack) void sayHello() {
    writeln("Hello!");
}

Now all modules in rootpack and all modules in packages that descend from rootpack will have access to sayHello. Don’t overlook that last part. A parameter to the package attribute is saying that a package and all of its descendants can access this symbol. It may sound overly broad, but it isn’t.

For one thing, only a package that is a direct ancestor of the module’s parent package can be used as a parameter. Consider a module rootpack.subpack.subsub.mymod. That name contains all of the packages that are legal parameters to the package attribute in mymod.d, namely rootpack, subpack, and subsub. So we can say the following about symbols declared in mymod:

  • package – visible only to modules in the parent package of mymod, i.e. the subsub package.
  • package(subsub) – visible to modules in the subsub package and modules in all packages descending from subsub.
  • package(subpack) – visible to modules in the subpack package and modules in all packages descending from subpack.
  • package(rootpack) – visible to modules in the rootpack package and modules in all packages descending from rootpack.

This feature makes packages another tool for encapsulation, allowing symbols to be hidden from the outside world but visible and accessible in specific subtrees of a package hierarchy. In practice, there are probably few cases where expanding access to a broad range of packages in an entire subtree is desirable.

It’s common to see parameterized package protection in situations where a package exposes a common public interface and hides implementations in one or more subpackages, such as a graphics package with subpackages containing implementations for DirectX, Metal, OpenGL, and Vulkan. Here, D’s access modifiers allow for three levels of encapsulation:

  • the graphics package as a whole
  • each subpackage containing the implementations
  • individual modules in each package

Notice that I didn’t include class or struct types as a fourth level. The next section explains why.

Classes and structs

Now we come to the motivation for this article. I can’t recall ever seeing anyone come to the D forums professing surprise about package protection, but the behavior of access modifiers in classes and structs is something that pops up now and then, largely because of expectations derived from experience in other languages.

Classes and structs use the same access modifiers as modules: public, package, package(some.pack), and private. The protected attribute can only be used in classes, as inheritance is not supported for structs (nor for modules, which aren’t even objects). public, package, and package(some.pack) behave exactly as they do at the module level. The thing that surprises some people is that private also behaves the same way.

import std.stdio;

class C {
    private int x;
}

void main() {
    C c = new C();
    c.x = 10;
    writeln(c.x);
}


Snippets like this are posted in the forums now and again by people exploring D, accompanying a question along the lines of, “Why does this compile?” (and sometimes, “I think I’ve found a bug!”). This is an example of where experience can cloud expectations. Everyone knows what private means, so it’s not something most people bother to look up in the language docs. However, those who do would find this:

Symbols with private visibility can only be accessed from within the same module.

private in D always means private to the module. The module is the lowest level of encapsulation. It’s easy to understand why some experience an initial resistance to this, that it breaks encapsulation, but the intent behind the design is to strengthen encapsulation. It’s inspired by the C++ friend feature.

Having implemented and maintained a C++ compiler for many years, Walter understood the need for a feature like friend, but felt that it wasn’t the best way to go about it.

Being able to declare a “friend” that is somewhere in some other file runs against notions of encapsulation.

An alternative is to take a Java-like approach of one class per module, but he felt that was too restrictive.

One may desire a set of closely interrelated classes that encapsulate a concept, and those should go into a module.

So the way to view a module in D is not just as a single source file, but as a unit of encapsulation. It can contain free functions, classes, and structs, all operating on the same data declared in module scope and class scope. The public interface is still protected from changes to the private implementation inside the module. Along those same lines, protected class members are accessible not just in derived classes, but also in the module.
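To make that concrete, here is a minimal sketch (the bank module and its names are invented for illustration):

module bank;

class Account
{
    private double _balance = 0;

    public double balance() const { return _balance; }
}

// A free function in the same module fills the role of a C++ friend:
// private in D means module-private, so _balance is accessible here.
void deposit(Account account, double amount)
{
    account._balance += amount;
}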

Sometimes though, there really is a benefit to denying access to private members in a module. The bigger a module becomes, the more of a burden it is to maintain, especially when it’s being maintained by a team. Every place a private member of a class is accessed in a module means more places to update when a change is made, thereby increasing the maintenance burden. The language provides the means to alleviate the burden in the form of the special package module.

In some cases, we don’t want to require the user to import multiple modules individually. Splitting a large module into smaller ones is one of those cases. Consider the following file tree:

-- mypack
---- mod1.d
---- mod2.d

We have two modules in a package called mypack. Let’s say that mod1.d has grown extremely large and we’re starting to worry about maintaining it. For one, we want to ensure that private members aren’t manipulated outside of class declarations with hundreds or thousands of lines in between. We want to split the module into smaller ones, but at the same time we don’t want to break user code. Currently, users can get at the module’s symbols by importing it with import mypack.mod1. We want that to continue to work. Here’s how we do it:

-- mypack
---- mod1
------ package.d
------ split1.d
------ split2.d
---- mod2.d

We’ve split mod1.d into two new modules and put them in a package named mod1. We’ve also created a special package.d file, which looks like this:

module mypack.mod1;

public import mypack.mod1.split1,
              mypack.mod1.split2;

When the compiler sees package.d, it knows to treat it specially. Users will be able to continue using import mypack.mod1 without ever caring that it’s now split into two modules in a new package. The key is the module declaration at the top of package.d. It’s telling the compiler to treat this package as the module mod1. And instead of automatically importing all modules in the package, the requirement to list them as public imports in package.d allows more freedom in implementing the package. Sometimes, you might want to require the user to explicitly import a module even when a package.d is present.

Now users will continue seeing mod1 as a single module and can continue to import it as such. Meanwhile, encapsulation is now more stringently enforced internally. Because split1 and split2 are now separate modules, they can’t touch each other’s private parts. Any part of the API that needs to be shared by both modules can be annotated with package protection. Despite the internal transformation, the public interface remains unchanged, and encapsulation is maintained.

Wrapping up

The full list of access modifiers in D can be defined as such:

  • public – accessible everywhere.
  • package – accessible to modules in the same package.
  • package(some.pack) – accessible to modules in the package some.pack and to the modules in all of its descendant packages.
  • private – accessible only in the module.
  • protected (classes only) – accessible in the module and in derived classes.

Hopefully, this article has provided you with the perspective to think in D instead of your “native” language when thinking about encapsulation in D.

Thanks to Ali Çehreli, Joakim Noah, and Nicholas Wilson for reviewing and providing feedback on this article.