Memory Safety in a Modern Systems Programming Language Part 3

The first entry in this series shows how to use the new DIP1000 rules to have slices and pointers refer to the stack, all while being memory safe. The second entry in this series teaches about the ref storage class and how DIP1000 works with aggregate types (classes, structs, and unions).

So far the series has deliberately avoided templates and auto functions. This kept the first two posts simpler in that they did not have to deal with function attribute inference, which I have referred to as “attribute auto inference” in earlier posts. However, both auto functions and templates are very common in D code, so a series on DIP1000 can’t be complete without explaining how those features work with the language changes. Function attribute inference is our most important tool in avoiding so-called “attribute soup”, where a function is decorated with several attributes, which arguably decreases readability.

We will also dig deeper into unsafe code. The previous two posts in this series focused on the scope attribute, but this post is more focused on attributes and memory safety in general. Since DIP1000 is ultimately about memory safety, we can’t get around discussing those topics.

Avoiding repetition with attributes

Function attribute inference means that the language will analyze the body of a function and will automatically add the @safe, pure, nothrow, and @nogc attributes where applicable. It will also attempt to add scope or return scope attributes to parameters and return ref to ref parameters that can’t otherwise be compiled. Some attributes are never inferred. For instance, the compiler will not insert any ref, lazy, out, or @trusted attributes, because leaving them out is most likely intentional.

There are many ways to turn on function attribute inference. One is by omitting the return type in the function signature. Note that the auto keyword is not required for this. auto is a placeholder keyword used when no return type, storage class, or attribute is specified. For example, the declaration half(int x) { return x/2; } does not parse, so we use auto half(int x) { return x/2; } instead. But we could just as well write @safe half(int x) { return x/2; } and the rest of the attributes (pure, nothrow, and @nogc) will be inferred just as they are with the auto keyword.
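
To make that concrete, here is a small sketch of those variants (half1 and half2 are just illustrative names):

// half(int x) { return x/2; }        // Error: does not parse
auto half1(int x) { return x/2; }     // @safe pure nothrow @nogc all inferred
@safe half2(int x) { return x/2; }    // @safe written out; pure nothrow @nogc still inferred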

The second way to enable attribute inference is to templatize the function. With our half example, it can be done this way:

int divide(int denominator)(int x) { return x/denominator; }
alias half = divide!2;

The D spec does not say that a template must have any parameters. An empty parameter list can be used to turn attribute inference on: int half()(int x) { return x/2; }. Calling this function doesn’t even require the template instantiation syntax at the call site, e.g., half!()(12) is not required as half(12) will compile.
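
As a compilable sketch of that empty-parameter-list trick:

int half()(int x) { return x/2; }

@safe unittest
{
    assert(half(12) == 6);    // no instantiation syntax needed
    assert(half!()(12) == 6); // the explicit form still works
}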

Another means to turn on attribute inference is to store the function inside another function. These are called nested functions. Inference is enabled not only on functions nested directly inside another function but also on most things nested in a type or a template inside the function. Example:

@safe void parentFun()
{
    // This is auto-inferred.
    int half(int x){ return x/2; }

    class NestedType
    {
        // This is auto inferred
        final int half1(int x) { return x/2; }

        // This is not auto inferred; it's a
        // virtual function and the compiler
        // can't know if it has an unsafe override
        // in a derived class.
        int half2(int x) { return x/2; }
    }

    int a = half(12); // Works. Inferred as @safe.
    auto cl = new NestedType;
    int b = cl.half1(18); // Works. Inferred as @safe.
    int c = cl.half2(26); // Error.
}

A downside of nested functions is that they can only be used in lexical order (the call site must be below the function declaration) unless both the nested function and the call are inside the same struct, class, union, or template that is in turn inside the parent function. Another downside is that they don’t work with Uniform Function Call Syntax.
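
Here is a hedged sketch of both limitations, with invented names:

@safe void parent()
{
    // int tooEarly = half(4); // Error: half is not yet declared at this point

    int half(int x) { return x/2; }

    int fine = half(4);        // OK: the call site is below the declaration
    // int viaUfcs = 4.half;   // Error: UFCS only works with module-level functions
}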

Finally, attribute inference is always enabled for function literals (a.k.a. lambda functions). The halving function would be defined as enum half = (int x) => x/2; and called exactly as normal. However, the language does not consider this declaration a function. It considers it a function pointer. This means that in global scope it’s important to use enum or immutable instead of auto. Otherwise, the lambda can be changed to something else from anywhere in the program and cannot be accessed from pure functions. In rare cases, such mutability can be desirable, but most often it is an antipattern (like global variables in general).
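
A minimal sketch of the difference at module scope (my example, not from the original code):

// Cannot be reassigned and is callable from @safe pure code.
enum half = (int x) => x/2;

// A mutable module-level function pointer instead: any code could
// reassign it, and pure functions could not call through it.
// auto half = (int x) => x/2;

@safe pure unittest
{
    assert(half(12) == 6);
}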

Limits of inference

Aiming for minimal manual typing isn’t always wise. Neither is aiming for maximal attribute bloat.

The primary problem of auto inference is that subtle changes in the code can lead to inferred attributes turning on and off in an uncontrolled manner. To see when it matters, we need to have an idea of what will be inferred and what will not.

The compiler in general will go to great lengths to infer @safe, pure, nothrow, and @nogc attributes. If your function can have those, it almost always will. The specification says that recursion is an exception: a function calling itself should not be @safe, pure, or nothrow unless explicitly specified as such. But in my testing, I found those attributes actually are inferred for recursive functions. It turns out, there is an ongoing effort to get recursive attribute inference working, and it partially works already.

Inference of scope and return on function parameters is less reliable. In the most mundane cases, it’ll work, but the compiler gives up pretty quickly. The smarter the inference engine is, the more time it takes to compile, so the current design decision is to infer those attributes in only the simplest of cases.

Where to let the compiler infer?

A D programmer should get into the habit of asking, “What will happen if I mistakenly do something that makes this function unsafe, impure, throwing, garbage-collecting, or escaping?” If the answer is “immediate compiler error”, auto inference is probably fine. On the other hand, the answer could be “user code will break when updating this library I’m maintaining”. In that case, annotate manually.

In addition to the potential of losing attributes the author intends to apply, there is also another risk:

@safe pure nothrow @nogc firstNewline(string from)
{
    foreach(i; 0 .. from.length) switch(from[i])
    {
        case '\r':
        if(from.length > i+1 && from[i+1] == '\n')
        {
            return "\r\n";
        }
        else return "\r";

        case '\n': return "\n";

        default: break;
    }

    return "";
}

You might think that since the author is manually specifying the attributes, there’s no problem. Unfortunately, that’s wrong. Suppose the author decides to rewrite the function such that all the return values are slices of the from parameter rather than string literals:

@safe pure nothrow @nogc firstNewline(string from)
{
    foreach(i; 0 .. from.length) switch(from[i])
    {
        case '\r':
        if (from.length > i + 1 && from[i + 1] == '\n')
        {
            return from[i .. i + 2];
        }
        else return from[i .. i + 1];

        case '\n': return from[i .. i + 1];

        default: break;
    }

    return "";
}

Surprise! The parameter from was previously inferred as scope, and a library user was relying on that, but now it’s inferred as return scope instead, breaking client code.

Still, for internal functions, auto inference is a great way to save both our fingers when writing and our eyes when reading. Note that it’s perfectly fine to rely on auto inference of the @safe attribute as long as the function is used only in explicitly @safe functions or unit tests. If something potentially unsafe is done inside the auto-inferred function, it gets inferred as @system, not @trusted. Calling a @system function from a @safe function results in a compiler error, meaning auto inference is safe to rely on in this case.
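
A small illustration of that guarantee, with invented names:

// Pointer arithmetic is never allowed in @safe code, so this
// function is inferred @system (not @trusted).
auto nextValue(int* p) { return *(p + 1); }

@safe void caller()
{
    int x;
    // nextValue(&x); // Error: cannot call a @system function from @safe code
}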

It still sometimes makes sense to manually apply attributes to internal functions, because the error messages generated when they are violated tend to be better with manual attributes.

What about templates?

Auto inference is always enabled for templated functions. What if a library interface needs to expose one? There is a way to block the inference, albeit an ugly one:

private template FunContainer(T)
{
    // Not auto inferred
    // (only eponymous template functions are)
    @safe T fun(T arg){return arg + 3;}
}

// Auto inferred per se, but since the function it calls
// is not, only @safe is inferred.
auto addThree(T)(T arg){return FunContainer!T.fun(arg);}

However, which attributes a template should have often depends on its compile-time parameters. It would be possible to use metaprogramming to designate attributes depending on the template parameters, but that would be a lot of work, hard to read, and easily as error-prone as relying on auto inference.

It’s more practical to just test that the function template infers the wanted attributes. Such testing doesn’t have to, and probably shouldn’t, be done manually each time the function is changed. Instead:

float multiplyResults(alias fun)(float[] arr)
    if (is(typeof(fun(new float)) : float))
{
    float result = 1.0f;
    foreach (ref e; arr) result *= fun(&e);
    return result;
}

@safe pure nothrow unittest
{
    float fun(float* x){return *x+1;}
    // Using a static array makes sure
    // arr argument is inferred as scope or
    // return scope
    float[5] elements = [1.0f, 2.0f, 3.0f, 4.0f, 5.0f];

    // No need to actually do anything with
    // the result. The idea is that since this
    // compiles, multiplyResults is proven @safe
    // pure nothrow, and its argument is scope or
    // return scope.
    multiplyResults!fun(elements);
}

Thanks to D’s compile-time introspection powers, testing against unwanted attributes is also covered:

@safe unittest
{
    import std.traits : attr = FunctionAttribute,
        functionAttributes, isSafe;

    float fun(float* x)
    {
        // Makes the function both throwing
        // and garbage collector dependant.
        if (*x > 5) throw new Exception("");
        static float* impureVar;

        // Makes the function impure.
        auto result = impureVar? *impureVar: 5;

        // Makes the argument unscoped.
        impureVar = x;
        return result;
    }

    enum attrs = functionAttributes!(multiplyResults!fun);

    assert(!(attrs & attr.nothrow_));
    assert(!(attrs & attr.nogc));

    // Checks against accepting scope arguments.
    // Note that this check does not work with
    // @system functions.
    assert(!isSafe!(
    {
        float[5] stackFloats;
        multiplyResults!fun(stackFloats[]);
    }));

    // It's a good idea to do positive tests with
    // similar methods to make sure the tests above
    // would fail if the tested function had the
    // wrong attributes.
    assert(attrs & attr.safe);
    assert(isSafe!(
    {
        float[] heapFloats;
        multiplyResults!fun(heapFloats[]);
    }));
}

If assertion failures are wanted at compile time before the unit tests are run, adding the static keyword before each of those asserts will get the job done. Those compiler errors can even be had in non-unittest builds by converting that unit test to a regular function, e.g., by replacing @safe unittest with, say, private @safe testAttrs().
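
For illustration, here is a hedged sketch of such a conversion, assuming multiplyResults from the earlier example is visible in the same module:

private @safe testAttrs()
{
    import std.traits : attr = FunctionAttribute, functionAttributes;

    static float fun(float* x) { return *x + 1; }

    enum attrs = functionAttributes!(multiplyResults!fun);
    static assert(attrs & attr.safe);
    static assert(attrs & attr.nothrow_);
}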

The live fire exercise: @system

Let’s not forget that D is a systems programming language. As this series has shown, in most D code the programmer is well protected from memory errors, but D would not be D if it didn’t allow going low-level and bypassing the type system in the same manner as C or C++: bit arithmetic on pointers, writing and reading directly to hardware ports, executing a struct destructor on a raw byte blob… D is designed to do all of that.

The difference is that in C and C++ it takes only one mistake to break the type system and cause undefined behavior anywhere in the code. A D programmer is only at risk when not in a @safe function, or when using dangerous compiler switches such as -release or -check=assert=off (failing a disabled assertion is undefined behavior), and even then the semantics tend to be less UB-prone. For example:

float cube(float arg)
{
    float result;
    result *= arg;
    result *= arg;
    return result;
}

This is a language-agnostic function that compiles in C, C++, and D. Someone intended to calculate the cube of arg but forgot to initialize result with arg. In D, nothing dangerous happens despite this being a @system function. No initialization value means result is default initialized to NaN (not-a-number), which leads to the result also being NaN, which is a glaringly obvious “error” value when using this function the first time.

However, in C and C++, not initializing a local variable means reading it is (sans a few narrow exceptions) undefined behavior. This function does not even handle pointers, yet according to the standard, calling this function could just as well have *(int*) rand() = 0XDEADBEEF; in it, all due to a trivial mistake. While many compilers with enabled warnings will catch this one, not all do, and these languages are full of similar examples where even warnings don’t help.

In D, even if you explicitly requested no default initialization with float result = void, it’d just mean the return value of the function is undefined, not anything and everything that happens if the function is called. Consequently, that function could be annotated @safe even with such an initializer.
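
A hedged sketch of what that looks like:

@safe float cube(float arg)
{
    // Explicitly skip default initialization: the value of result is
    // garbage, but reading a garbage float is not undefined behavior in D.
    float result = void;
    result *= arg;
    result *= arg;
    return result; // a meaningless value, yet the program remains well defined
}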

Still, for anyone who cares about memory safety, as they probably should for anything intended for a wide audience, it’s a bad idea to assume that D @system code is safe enough to be the default mode. Two examples will demonstrate what can happen.

What undefined behavior can do

Some people assume that “Undefined Behavior” simply means “erroneous behavior” or crashing at runtime. While that is often what ultimately happens, undefined behavior is far more dangerous than, say, an uncaught exception or an infinite loop. The difference is that with undefined behavior, you have no guarantees at all about what happens. This might not sound any worse than an infinite loop, but an accidental infinite loop is discovered the first time it’s entered. Code with undefined behavior, on the other hand, might do what was intended when it’s tested, but then do something completely different in production. Even if the code is tested with the same flags it’s compiled with in production, the behavior may change from one compiler version to another, or when making completely unrelated changes to the code. Time for an example:

// return whether the exception itself is in the array
bool replaceExceptions(Object[] arr, ref Exception e)
{
    bool result;
    foreach (ref o; arr)
    {
        if (&o is &e) result = true;
        if (cast(Exception) o) o = e;
    }

    return result;
}

The idea here is that the function replaces all exceptions in the array with e. If e itself is in the array, it returns true, otherwise false. And indeed, testing confirms it works. The function is used like this:

auto arr = [new Exception("a"), null, null, new Exception("c")];
auto result = replaceExceptions
(
    cast(Object[]) arr,
    arr[3]
);

This cast is not a problem, right? Object references are always of the same size regardless of their type, and we’re casting the exceptions to the parent type, Object. It’s not like the array contains anything other than object references.

Unfortunately, that’s not how the D specification views it. Having two class references (or any references, for that matter) in the same memory location but with different types, and then assigning one of them to the other, is undefined behavior. That’s exactly what happens in

if (cast(Exception) o) o = e;

if the array does contain the e argument. Since true can only be returned when undefined behavior is triggered, it means that any compiler would be free to optimize replaceExceptions to always return false. This is a dormant bug no amount of testing will find, but that might, years later, completely mess up the application when compiled with the powerful optimizations of an advanced compiler.

It may seem that requiring a cast to use a function is an obvious warning sign that a good D programmer would not ignore. I wouldn’t be so sure. Casts aren’t that rare even in fine high-level code. Even if you disagree, other cases are provably bad enough to bite anyone. Last summer, this case appeared in the D forums:

string foo(in string s)
{
    return s;
}

void main()
{
    import std.stdio;
    string[] result;
    foreach(c; "hello")
    {
        result ~= foo([c]);
    }
    writeln(result);
}

This problem was encountered by Steven Schveighoffer, a long-time D veteran who has himself lectured about @safe and @system on more than one occasion. Anything that can burn him can burn any of us.

Normally, this works just as one would think and is fine according to the spec. However, if one enables another soon-to-be-default language feature with the -preview=in compiler switch along with DIP1000, the program starts malfunctioning. The old semantics for in are the same as const, but the new semantics make it const scope.

Since the argument of foo is scope, the compiler assumes that foo will copy [c] before returning it, or return something else, and therefore it allocates [c] on the same stack position for each of the “hello” letters. The result is that the program prints ["o", "o", "o", "o", "o"]. At least for me, it’s already somewhat hard to understand what’s happening in this simple example. Hunting down this sort of bug in a complex codebase could be a nightmare.

(With my nightly DMD version somewhere between 2.100 and 2.101 a compile-time error is printed instead. With 2.100.2, the example runs as described above.)

The fundamental problem in both of these examples is the same: @safe is not used. Had it been, both of these undefined behaviors would have resulted in compilation errors (the replaceExceptions function itself can be @safe, but the cast at the usage site cannot). By now it should be clear that @system code should be used sparingly.

When to proceed anyway

Sooner or later, though, the time comes when the guard rail has to be temporarily lowered. Here’s an example of a good use case:

/// Undefined behavior: Passing a non-null pointer
/// to a standalone character other than '\0', or
/// to an array without '\0' at or after the
/// pointed character, as utf8Stringz
extern(C) @system pure
bool phobosValidateUTF8(const char* utf8Stringz)
{
    import std.string, std.utf;

    try utf8Stringz.fromStringz.validate();
    catch (UTFException) return false;

    return true;
}

This function lets code written in another language validate a UTF-8 string using Phobos. C being C, it tends to use zero-terminated strings, so the function accepts a pointer to one as the argument instead of a D array. This is why the function has to be unsafe. There is no way to safely check that utf8Stringz is pointing to either null or a valid C string. If the character being pointed to is not '\0', meaning the next character has to be read, the function has no way of knowing whether that character belongs to the memory allocated for the string. It can only trust that the calling code got it right.

Still, this function is a good use of the @system attribute. First, it is presumably called primarily from C or C++. Those languages do not get any safety guarantees anyway. Even a @safe function is safe only if it gets only those parameters that can be created in @safe D code. Passing cast(const char*) 0xFE0DA1 as an argument to a function is unsafe no matter what the attribute says, and nothing in C or C++ verifies what arguments are passed.

Second, the function clearly documents the cases that would trigger undefined behavior. However, it does not mention that passing an invalid pointer, such as the aforementioned cast(const char*) 0xFE0DA1, is UB, because UB is always the default assumption with @system-only values unless it can be shown otherwise.

Third, the function is small and easy to review manually. No function should be needlessly big, but it’s many times more important than usual to keep @system and @trusted functions small and simple to review. @safe functions can be debugged to pretty good shape by testing, but as we saw earlier, undefined behavior can be immune to testing. Analyzing the code is the only general answer to UB.

There is a reason why the parameter does not have a scope attribute. It could have one, since no pointers to the string are escaped. However, it would not provide many benefits. Any code calling the function has to be @system, @trusted, or in a foreign language, meaning it can pass a pointer to the stack in any case. scope could potentially improve the performance of D client code in exchange for an increased potential for undefined behavior if this function is erroneously refactored. Such a tradeoff is unwanted in general unless it can be shown that the attribute helps with a performance problem. On the other hand, the attribute would make it clearer to the reader that the string is not supposed to escape. It’s a difficult judgment call whether scope would be a wise addition here.

Further improvements

It should be documented why a @system function is @system when it’s not obvious. Often there is a safer alternative—our example function could have taken a D array or the CString struct from the previous post in this series. Why was an alternative not taken? In our case, we could write that the ABI would be different for either of those options, complicating matters on the C side, and the intended client (C code) is unsafe anyway.

@trusted functions are like @system functions, except they can be called from @safe functions, whereas @system functions cannot. When something is declared @trusted, it means the authors have verified that it’s just as safe to use as an actual @safe function with any arguments that can be created within safe code. They need to be just as carefully reviewed, if not more so, as @system functions.

In these situations, it should be documented (for other developers, not users) how the function was deemed to be safe in all situations. Or, if the function is not fully safe to use and the attribute is just a temporary hack, it should have a big ugly warning about that.

Such greenwashing is of course highly discouraged, but if there’s a codebase full of @system code that’s just too difficult to make @safe otherwise, it’s better than giving up. Even as we often talk about the dangers of UB and memory corruption, in our actual work our attitudes tend to be much more carefree, meaning such codebases are unfortunately common.

It might be tempting to define a small @trusted function inside a bigger @safe function to do something unsafe without disabling checks for the whole function:

extern(C) @safe pure
bool phobosValidateUTF8(const char* utf8Stringz)
{
    import std.string, std.utf;

    try (() @trusted => utf8Stringz.fromStringz)()
      .validate();
    catch (UTFException) return false;

    return true;
}

Keep in mind though, that the parent function needs to be documented and reviewed like an overt @trusted function because the encapsulated @trusted function can let the parent function do anything. In addition, since the function is marked @safe, it isn’t obvious on a first look that it’s a function that needs special care. Thus, a visible warning comment is needed if you elect to use @trusted like this.

Most importantly, don’t trust yourself! Just like any codebase of non-trivial size has bugs, more than a handful of @system functions will include latent UB at some point. The remaining hardening features of D, meaning asserts, contracts, invariants, and bounds checking should be used aggressively and kept enabled in production. This is recommended even if the program is fully @safe. In addition, a project with a considerable amount of unsafe code should use external tools like LLVM address sanitizer and Valgrind to at least some extent.

Note that the idea behind many of these hardening tools, both those in the language and the external ones, is to crash as soon as any fault is detected. This decreases the chance that any surprise from undefined behavior does more serious damage.

This requires that the program is designed to accept a crash at any moment. The program must never hold such amounts of unsaved data that there would be any hesitation in crashing it. If it controls anything important, it must be able to regain control after being restarted by a user or another process, or it must have another backup program. Any program that “can’t afford” to run potentially crashing checks is in no business to be trusted with systems programming either.

Conclusion

That concludes this blog series on DIP1000. There are some topics related to DIP1000 we have left up to the readers to experiment with themselves, such as associative arrays. Still, this should be enough to get them going.

Though we have uncovered some practical tips in addition to language rules, there surely is a lot more that could be said. Tell us your memory safety tips in the D forums!

Thanks to Walter Bright and Dennis Korpel for providing feedback on this article.

Memory Safety in a Modern Systems Programming Language Part 2

The previous entry in this series shows how to use the new DIP1000 rules to have slices and pointers refer to the stack, all while being memory safe. But D can refer to the stack in other ways, too, and that’s the topic of this article.

Object-oriented instances are the easiest case

In Part 1, I said that if you understand how DIP1000 works with pointers, then you understand how it works with classes. An example is worth more than mere words:

@safe Object ifNull(return scope Object a, return scope Object b)
{
    return a? a: b;
}

The return scope in the above example works exactly as it does in the following:

@safe int* ifNull(return scope int* a, return scope int* b)
{
    return a? a: b;
}

The principle is: if the scope or return scope storage class is applied to an object in a parameter list, the address of the object instance is protected just as if the parameter were a pointer to the instance. From the perspective of machine code, it is a pointer to the instance.

From the point of view of regular functions, that’s all there is to it. What about member functions of a class or an interface? This is how it’s done:

interface Talkative
{
    @safe const(char)[] saySomething() scope;
}

class Duck : Talkative
{
    char[8] favoriteWord;
    @safe const(char)[] saySomething() scope
    {
        import std.random : dice;

        // This wouldn't work
        // return favoriteWord[];

        // This does
        return favoriteWord[].dup;

        // Also returning something totally
        // different works. This
        // returns the first entry 40% of the time,
        // The second entry 40% of the time, and
        // the third entry the rest of the time.
        return
        [
            "quack!",
            "Quack!!",
            "QUAAACK!!!"
        ][dice(2,2,1)];
    }
}

scope positioned either before or after the member function name marks the this reference as scope, preventing it from leaking out of the function. Because the address of the instance is protected, nothing that refers directly to the address of the fields is allowed to escape either. That’s why return favoriteWord[] is disallowed; it’s a static array stored inside the class instance, so the returned slice would refer directly to it. favoriteWord[].dup on the other hand returns a copy of the data that isn’t located in the class instance, which is why it’s okay.

Alternatively one could replace the scope attributes of both Talkative.saySomething and Duck.saySomething with return scope, allowing the return of favoriteWord without duplication.
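
A hedged sketch of that return scope alternative:

interface Talkative
{
    @safe const(char)[] saySomething() return scope;
}

class Duck : Talkative
{
    char[8] favoriteWord;
    @safe const(char)[] saySomething() return scope
    {
        // Now allowed: the slice of the instance may escape,
        // but only through the return value.
        return favoriteWord[];
    }
}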

DIP1000 and Liskov Substitution Principle

The Liskov substitution principle states, in simplified terms, that an inherited function can give the caller more guarantees than its parent function, but never fewer. DIP1000-related attributes fall in that category. The rule works like this:

  • if a parameter (including the implicit this reference) in the parent functions has no DIP1000 attributes, the child function may designate it scope or return scope
  • if a parameter is designated scope in the parent, it must be designated scope in the child
  • if a parameter is return scope in the parent, it must be either scope or return scope in the child

If there is no attribute, the caller can not assume anything; the function might store the address of the argument somewhere. If return scope is present, the caller can assume the address of the argument is not stored other than in the return value. With scope, the guarantee is that the address is not stored anywhere, which is an even stronger guarantee. Example:

class C1
{
    double*[] incomeLog;
    @safe double* imposeTax(double* pIncome)
    {
        incomeLog ~= pIncome;
        return new double(*pIncome * .15);
    }
}

class C2 : C1
{
    // Okay from language perspective (but maybe not fair
    // for the taxpayer)
    override @safe double* imposeTax
        (return scope double* pIncome)
    {
        return pIncome;
    }
}

class C3 : C2
{
    // Also okay.
    override @safe double* imposeTax
        (scope double* pIncome)
    {
        return new double(*pIncome * .18);
    }
}

class C4: C3
{
    // Not okay. The pIncome parameter of C3.imposeTax
    // is scope, and this tries to relax the restriction.
    override @safe double* imposeTax
        (double* pIncome)
    {
        incomeLog ~= pIncome;
        return new double(*pIncome * .16);
    }
}

The special pointer, ref

We still have not covered how to use structs and unions with DIP1000. Pointers and arrays we have covered, and when they refer to a struct or a union they work the same as when they refer to any other type. But pointers and arrays are not the canonical way to use structs in D. Structs are most often passed around by value, or by reference when bound to ref parameters. Now is a good time to explain how ref works with DIP1000.

ref parameters don’t work like just any pointer. Once you understand ref, you can use DIP1000 in many ways you otherwise could not.

A simple ref int parameter

The simplest possible way to use ref is probably this:

@safe void fun(ref int arg) {
    arg = 5;
}

What does this mean? ref is internally a pointer—think int* pArg—but is used like a value in the source code. arg = 5 works internally like *pArg = 5. Also, the client calls the function as if the argument were passed by value:

auto anArray = [1,2];
fun(anArray[1]); // or, via UFCS: anArray[1].fun;
// anArray is now [1, 5]

instead of fun(&anArray[1]). Unlike C++ references, D references can be null, but the application will instantly terminate with a segmentation fault if a null ref is used for something other than reading the address with the & operator. So this:

int* ptr = null;
fun(*ptr);

…compiles, but crashes at runtime because the assignment inside fun lands at the null address.

The address of a ref variable is always guarded against escape. In this sense @safe void fun(ref int arg){arg = 5;} is like @safe void fun(scope int* pArg){*pArg = 5;}. For example, @safe int* fun(ref int arg){return &arg;} will not compile, just like @safe int* fun(scope int* pArg){return pArg;} will not.

There is a return ref storage class, however, that allows returning the address of the parameter but no other form of escape, just like return scope. This means that @safe int* fun(return ref int arg){return &arg;} works.
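
For instance, a small sketch (the names are invented, not from the original article):

int someGlobal;

@safe int* addressOf(return ref int arg)
{
    return &arg;
}

@safe int* usage()
{
    return addressOf(someGlobal); // fine: someGlobal outlives any caller

    // int local;
    // return addressOf(local);   // Error: would escape a reference to a local
}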

A reference to a reference

A reference to an int or a similar type already allows much nicer syntax than one can get with pointers. But the real power of ref shows when it refers to a type that is a reference itself—a pointer or a class, for instance. scope or return scope can be applied to a reference that is referred to by ref. For example:

@safe float[] mergeSort(ref return scope float[] arr)
{
    import std.algorithm: merge;
    import std.array : Appender;

    if(arr.length < 2) return arr;

    auto firstHalf = arr[0 .. $/2];
    auto secondHalf = arr[$/2 .. $];

    Appender!(float[]) output;
    output.reserve(arr.length);

    foreach
    (
        el;
        firstHalf.mergeSort
        .merge!floatLess(secondHalf.mergeSort)
    )   output ~= el;

    arr = output[];
    return arr;
}

@safe bool floatLess(float a, float b)
{
    import std.math: isNaN;

    return a.isNaN? false:
          b.isNaN? true:
          a<b;
}

mergeSort here guarantees it won’t leak the address of the floats in arr except in the return value. This is the same guarantee that would be had from a return scope float[] arr parameter. But at the same time, because arr is a ref parameter, mergeSort can mutate the array passed to it. Then the client can write:

float[] values = [5, 1.5, 0, 19, 1.5, 1];
values.mergeSort;

With a non-ref argument, the client would have to write values = values.mergeSort instead (not using ref would be a perfectly reasonable API in this case, because we do not always want to mutate the original array). This is something that cannot be accomplished with pointers, because return scope float[]* arr would protect the address of the array’s metadata (the length and ptr fields of the array), not the address of its contents.

It is also possible to have a returnable ref argument that is itself a scope reference. Since this example has a unit test, remember to use the -unittest compiler flag to include it in the compiled binary.

@safe ref Exception nullify(return ref scope Exception obj)
{
    obj = null;
    return obj;
}

@safe unittest
{
    scope obj = new Exception("Error!");
    assert(obj.msg == "Error!");
    obj.nullify;
    assert(obj is null);
    // Since nullify returns by ref, we can assign
    // to its return value.
    obj.nullify = new Exception("Fail!");
    assert(obj.msg == "Fail!");
}

Here we return the address of the argument passed to nullify, but still guard both the address of the object pointer and the address of the class instance against being leaked by other channels.

return is a free keyword that does not mandate ref or scope to follow it. What does void* fun(ref scope return int*) mean then? The spec states that return without a trailing scope is always treated as return ref. This example is thus equivalent to void* fun(return ref scope int*). However, this only applies if there is a ref to bind to. Writing void* fun(scope return int*) means void* fun(return scope int*). It’s even possible to write void* fun(return int*) with the latter meaning, but I leave it up to you to decide whether this qualifies as conciseness or obfuscation.

Member functions and ref

ref and return ref often require careful consideration to keep track of which address is protected and what can be returned. It takes some experience to get comfortable with them. But once you do, understanding how structs and unions work with DIP1000 is pretty straightforward.

The major difference from classes is that whereas this is just a regular class reference in class member functions, this in a struct or union member function is ref StructOrUnionName.

union Uni
{
    int asInt;
    char[4] asCharArr;

    // Return value contains a reference to
    // this union, won't escape references
    // to it via any other channel
    @safe char[] latterHalf() return
    {
        return asCharArr[2 .. $];
    }

    // This argument is implicitly ref, so the
    // following means the return value does
    // not refer to this union, and also that
    // we don't leak it in any other way.
    @safe char[] latterHalfCopy()
    {
        return latterHalf.dup;
    }
}

Note that return ref should not be used with the this argument. char[] latterHalf() return ref fails to parse. The language already has to understand what ref char[] latterHalf() return means: the return value is a reference. The “ref” in return ref would be redundant anyway.

Note that we did not use the scope keyword here. scope would be meaningless with this union, because it does not contain references to anything. Just like it is meaningless to have a scope ref int, or a scope int function argument. scope makes sense only for types that refer to memory elsewhere.

scope in a struct or union means the same thing as it means in a static array. It means that the memory its members refer to cannot be escaped. Example:

struct CString
{
    // We need to put the pointer in an anonymous
    // union with a dummy member, otherwise @safe user
    // code could assign ptr to point to a character
    // not in a C string.
    union
    {
        // Empty string literals get optimised to null pointers by D
        // compiler, we have to do this for the .init value to really point to
        // a '\0'.
        immutable(char)* ptr = &nullChar;
        size_t dummy;
    }

    // In constructors, the "return value" is the
    // constructed data object. Thus, the return scope
    // here makes sure this struct won't live longer
    // than the memory in arr.
    @trusted this(return scope string arr)
    {
        // Note: a normal assert would not do! Asserts may be
        // removed from release builds, but this check is
        // necessary for memory safety, so we use
        // assert(0) instead, which never gets
        // removed.
        if(arr[$-1] != '\0') assert(0, "not a C string!");
        ptr = arr.ptr;
    }

    // The return value refers to the same memory as the
    // members in this struct, but we don't leak references
    // to it via any other way, so return scope.
    @trusted ref immutable(char) front() return scope
    {
        return *ptr;
    }

    // No references to the pointed-to array passed
    // anywhere.
    @trusted void popFront() scope
    {
        // Otherwise the user could pop past the
        // end of the string and then read it!
        if(empty) assert(0, "out of bounds!");
        ptr++;
    }

    // Same.
    @safe bool empty() scope
    {
        return front == '\0';
    }
}

immutable nullChar = '\0';

@safe unittest
{
    import std.array : staticArray;

    auto localStr = "hello world!\0".staticArray;
    auto localCStr = localStr.CString;
    assert(localCStr.front == 'h');

    static immutable(char)* staticPtr;

    // Error, escaping reference to local.
    // staticPtr = &localCStr.front();

    // Fine.
    staticPtr = &CString("global\0").front();

    localCStr.popFront;
    assert(localCStr.front == 'e');
    assert(!localCStr.empty);
}

Part One said that @trusted is a terrible footgun with DIP1000. This example demonstrates why. Imagine how easy it’d be to use a regular assert or forget about them totally, or overlook the need to use the anonymous union. I think this struct is safe to use, but it’s entirely possible I overlooked something.

Finally

We almost know all there is to know about using structs, unions, and classes with DIP1000. We have two final things to learn today.

But before that, a short digression regarding the scope keyword. It is not used for just annotating parameters and local variables as illustrated. It is also used for scope classes and scope guard statements. This guide won’t be discussing those, because the former feature is deprecated, and the latter is not related to DIP1000 or control of variable lifetimes. The point of mentioning them is to dispel a potential misconception that scope always means limiting the lifetime of something. Learning about scope guard statements is still a good idea, as it’s a useful feature.

Back to the topic. The first thing is not really specific to structs or classes. We discussed what return, return ref, and return scope usually mean, but there’s an alternative meaning to them. Consider:

@safe void getFirstSpace
(
    ref scope string result,
    return scope string where
)
{
    //...
}

The usual meaning of the return attribute makes no sense here, as the function has a void return type. A special rule applies in this case: if the return type is void, and the first argument is ref or out, any subsequent return [ref/scope] is assumed to be escaped by assigning to the first argument. With struct member functions, they are assumed to be assigned to the struct itself.

@safe unittest
{
    static string output;
    immutable(char)[8] input = "on stack";
    //Trying to assign stack contents to a static
    //variable. Won't compile.
    getFirstSpace(output, input);
}

Since out came up, it should be said it would be a better choice for result here than ref. out works like ref, with the one difference that the referenced data is automatically default-initialized at the beginning of the function, meaning any data to which the out parameter refers is guaranteed to not affect the function.
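
A minimal sketch of that difference, with invented names:

@safe void divide(int a, int b, out int quotient, out int remainder)
{
    if (b == 0) return;   // both out parameters keep their freshly reset .init value
    quotient = a / b;
    remainder = a % b;
}

@safe unittest
{
    int q = 42, r = 42;       // stale values in the caller
    divide(7, 2, q, r);
    assert(q == 3 && r == 1);
    divide(7, 0, q, r);
    assert(q == 0 && r == 0); // reset to int.init on entry, despite the early return
}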

The second thing to learn is that scope is used by the compiler to optimize class allocations inside function bodies. If a new class is used to initialize a scope variable, the compiler can put it on the stack. Example:

class C{int a, b, c;}
@safe @nogc unittest
{
    // Since this unittest is @nogc, this wouldn't
    // compile without the scope optimization.
    scope C c = new C();
}

This feature requires using the scope keyword explicitly. Inference of scope does not work, because initializing a class this way does not normally (meaning, without the @nogc attribute) mandate limiting the lifetime of c. The feature currently works only with classes, but there is no reason it couldn’t work with newed struct pointers and array literals too.

Until next time

This is pretty much all that there is to manual DIP1000 usage. But this blog series shall not be over yet! DIP1000 is not intended to always be used explicitly—it works with attribute inference. That’s what the next post will cover.

It will also cover some considerations when daring to use @trusted and @system code. The need for dangerous systems programming exists and is part of the D language domain. But even systems programming is a responsible affair when people do what they can to minimize risks. We will see that even there it’s possible to do a lot.

Thanks to Walter Bright and Dennis Korpel for reviewing this article.

Memory Safety in a Modern Systems Programming Language Part 1

Memory safety needs no checks

D is both a garbage-collected programming language and an efficient raw memory access language. Modern high-level languages like D are memory safe, preventing users from accidentally reading or writing to unused memory or breaking the type system of the language.

As a systems programming language, not all of D can give such guarantees, but it does have a memory-safe subset that uses the garbage collector to take care of memory management much like Java, C#, or Go. A D codebase, even in a systems programming project, should aim to remain within that memory-safe subset where practical. D provides the @safe function attribute to verify that a function uses only memory-safe features of the language. For instance, try this.

@safe string getBeginning(immutable(char)* cString)
{
    return cString[0..3];
}

The compiler will refuse to compile this code. There’s no way to know what will result from the three-character slice of cString, which could be referring to an empty string (i.e., cString[0] is \0), a string with a length of 1, or even one or two characters without the terminating NUL. The result in those cases would be a memory violation.

@safe does not mean slow

Note that I said above that even a low-level systems programming project should use @safe where practical. How is that possible, given that such projects sometimes cannot use the garbage collector, a major tool used in D to guarantee memory safety?

Indeed, such projects must resort to memory-unsafe constructs every now and then. Even higher-level projects often have reasons to do so, as they want to create interfaces to C or C++ libraries, or avoid the garbage collector when indicated by runtime performance. But still, surprisingly large parts of code can be made @safe without using the garbage collector at all.

D can do this because the memory safe subset does not prevent raw memory access per se.

@safe void add(int* a, int* b, int* sum)
{
    *sum = *a + *b;
}

This compiles and is fully memory safe, despite dereferencing those pointers in the same completely unchecked way they are dereferenced in C. This is memory safe because @safe D does not allow creating an int* that points to unallocated memory areas, or to a float**, for instance. int* can point to the null address, but this is generally not a memory safety problem because the null address is protected by the operating system. Any attempt to dereference it would crash the program before any memory corruption can happen. The garbage collector isn’t involved, because D’s GC can only run if more memory is requested from it, or if a collection is explicitly triggered.

D slices are similar. When indexed at runtime, they will check at runtime that the index is less than their length and that’s it. They will do no checking whatsoever on whether they are referring to a legal memory area. Memory safety is achieved by preventing creation of slices that could refer to illegal memory in the first place, as demonstrated in the first example of this article. And again, there’s no GC involved.
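
A minimal sketch of that runtime check:

@safe unittest
{
    int[] arr = [1, 2, 3];
    assert(arr[2] == 3);           // the index is checked against arr.length
    // auto x = arr[3];            // would abort the program with a range error
    assert(arr[1 .. 3] == [2, 3]); // slice bounds are checked the same way
}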

This enables many patterns that are memory-safe, efficient, and independent of the garbage collector.

struct Struct
{
    int[] slice;
    int* pointer;
    int[10] staticArray;
}

@safe @nogc Struct examples(Struct arg)
{
    arg.slice[5] = *arg.pointer;
    arg.staticArray[0..5] = arg.slice[5..10];
    arg.pointer = &arg.slice[8];
    return arg;
}

As demonstrated, D liberally lets one do unchecked memory handling in @safe code. The memory referred to by arg.slice and arg.pointer may be on the garbage collected heap, or it may be in the static program memory. There is no reason the language needs to care. The program will probably need to either call the garbage collector or do some unsafe memory management to allocate memory for the pointer and the slice, but handling already allocated memory does not need to do either. If this function needed the garbage collector, it would fail to compile because of the @nogc attribute.

However…

There’s a historical design flaw here in that the memory may also be on the stack. Consider what happens if we change our function a bit.

@safe @nogc Struct examples(Struct arg)
{
    arg.pointer = &arg.staticArray[8];
    arg.slice = arg.staticArray[0..8];
    return arg;
}

Struct arg is a value type. Its contents are copied to the stack when examples is called and can be overwritten after the function returns. staticArray is also a value type. It’s copied along with the rest of the struct just as if there were ten integers in the struct instead. When we return arg, the contents of staticArray are copied to the return value, but pointer and slice continue to point to arg, not the returned copy!

But we have a fix. It allows one to write code just as performant in @safe functions as before, including references to the stack. It even enables a few formerly @system (the opposite of @safe) tricks to be written in a safe way. That fix is DIP1000. It’s the reason why this example already causes a deprecation warning by default if it’s compiled with the latest nightly dmd.

Born first, dead last

DIP1000 is a set of enhancements to the language rules regarding pointers, slices, and other references. The name stands for D Improvement Proposal number 1000, as that document is what the new rules were initially based on. One can enable the new rules with the preview compiler switch, -preview=dip1000. Existing code may need some changes to work with the new rules, which is why the switch is not enabled by default. It’s going to be the default in the future, so it’s best to enable it where possible and work to make code compatible with it where not.

The basic idea is to let people limit the lifetime of a reference (an array or pointer, for example). A pointer to the stack is not dangerous if it does not exist longer than the stack variable it is pointing to. Regular references continue to exist, but they can refer only to data with an unlimited lifetime—that is, garbage collected memory, or static or global variables.

Let’s get started

The simplest way to construct a limited-lifetime reference is to assign something with a limited lifetime to it.

@safe int* test(int arg1, int arg2)
{
    int* notScope = new int(5);
    int* thisIsScope = &arg1;
    int* alsoScope; // Not initially scope...
    alsoScope = thisIsScope; // ...but this makes it so.

    // Error! The variable declared earlier is
    // considered to have a longer lifetime,
    // so disallowed.
    thisIsScope = alsoScope;

    return notScope; // ok
    return thisIsScope; // error
    return alsoScope; // error
}

When testing these examples, remember to use the compiler switch -preview=dip1000 and to mark the function @safe. The checks are not done for non-@safe functions.

Alternatively, the scope keyword can be explicitly used to limit the lifetime of a reference.

@safe int[] test()
{
    int[] normalRef;
    scope int[] limitedRef;

    if(true)
    {
        int[5] stackData = [-1, -2, -3, -4, -5];

        // Lifetime of stackData ends
        // before limitedRef, so this is
        // disallowed.
        limitedRef = stackData[];

        //This is how you do it
        scope int[] evenMoreLimited
            = stackData[];
    }

    return normalRef; // Okay.
    return limitedRef; // Forbidden.
}

If we can’t return limited lifetime references, how are they used at all? Easy. Remember, only the address of the data is protected, not the data itself. This means we have many ways to pass scoped data out of the function.

@safe int[] fun()
{
    scope int[] dontReturnMe = [1,2,3];

    int[] result = new int[](dontReturnMe.length);
    // This copies the data, instead of having
    // result refer to protected memory.
    result[] = dontReturnMe[];
    return result;

    // Shorthand way of doing the same as above
    return dontReturnMe.dup;

    // Also you are not always interested
    // in the contents as a whole; you
    // might want to calculate something else
    // from them
    return
    [
        dontReturnMe[0] * dontReturnMe[1],
        cast(int) dontReturnMe.length
    ];
}

Getting interprocedural

With the tricks discussed so far, DIP1000 would be restricted to language primitives when handling limited lifetime references, but the scope storage class can be applied to function parameters, too. Because this guarantees the memory won’t be used after the function exits, local data references can be used as arguments to scope parameters.

@safe double average(scope int[] data)
{
    double result = 0;
    foreach(el; data) result += el;
    return result / data.length;
}

@safe double use()
{
    int[10] data = [1,2,3,4,5,6,7,8,9,10];
    return data[].average; // works!
}

Initially, it’s probably best to keep attribute auto inference off. Auto inference in general is a good tool, but it silently adds scope attributes to all parameters it can, meaning it’s easy to lose track of what’s happening. That makes the learning process a lot harder. Avoid this by always explicitly specifying the return type (or lack thereof with void or noreturn): @safe const(char[]) fun(int* val) as opposed to @safe auto fun(int* val) or @safe const fun(int* val). The function also must not be a template or inside a template. We’ll dig deeper on scope auto inference in a future post.

scope allows handling pointers and arrays that point to the stack, but forbids returning them. What if that’s the goal? Enter the return scope attribute:

//Being character arrays, strings also work with DIP1000.
@safe string latterHalf(return scope string arg)
{
    return arg[$/2 .. $];
}

@safe string test()
{
    // allocated in static program memory
    auto hello1 = "Hello world!";
    // allocated on the stack, copied from hello1
    immutable(char)[12] hello2 = hello1;

    auto result1 = hello1.latterHalf; // ok
    return result1; // ok

    auto result2 = hello2[].latterHalf; // ok
    // Nice try! result2 is scope and can't
    // be returned.
    return result2;
}

return scope parameters work by checking if any of the arguments passed to them are scope. If so, the return value is treated as a scope value that may not outlive any of the return scope arguments. If none are scope, the return value is treated as a global reference that can be copied freely. Like scope, return scope is conservative. Even if one does not actually return the address protected by return scope, the compiler will still perform the call site lifetime checks just as if one did.

scope is shallow

@safe void test()
{
    scope a = "first";
    scope b = "second";
    string[] arr = [a, b];
}

In test, initializing arr does not compile. This may be surprising given that the language automatically adds scope to a variable on initialization if needed.

However, consider what the scope on scope string[] arr would protect. There are two things it could potentially protect: the addresses of the strings in the array, or the addresses of the characters in the strings. For this assignment to be safe, scope would have to protect the characters in the strings, but it only protects the top-level reference, i.e., the strings in the array. Thus, the example does not work. Now change arr so that it’s a static array:

@safe void test()
{
    scope a = "first";
    scope b = "second";
    string[2] arr = [a, b];
}

This works because static arrays are not references. Memory for all of their elements is allocated in place on the stack (i.e., they contain their elements), as opposed to dynamic arrays which contain a reference to elements stored elsewhere. When a static array is scope, its elements are treated as scope. And since the example would not compile were arr not scope, it follows that scope is inferred.

Some practical tips

Let’s face it, the DIP1000 rules take time to understand, and many would rather spend that time coding something useful. The first and most important tip is: avoid non-@safe code like the plague if doable. Of course, this advice is not new, but it appears even more important with DIP1000. In a nutshell, the language does not check the validity of scope and return scope in a non-@safe function, but when calling those functions the compiler assumes that the attributes are respected.

This makes scope and return scope terrible footguns in unsafe code. But by resisting the temptation to mark code @trusted to avoid thinking, a D coder can hardly do damage. Misusing DIP1000 in @safe code can cause needless compilation errors, but it won’t corrupt memory and is unlikely to cause other bugs either.

A second important point worth mentioning is that there is no need for scope and return scope attributes on functions that receive only static or GC-allocated data. Many languages do not let coders refer to the stack at all; just because D programmers can do so does not mean they must. This way, they don’t have to spend any more time solving compiler errors than they did before DIP1000. And if a desire to work with the stack arises after all, the authors can then return to annotate the functions. Most likely they will accomplish this without breaking the interface.

What’s next?

This concludes today’s blog post. This is enough to know how to use arrays and pointers with DIP1000. In principle, it also enables readers to use DIP1000 with classes and interfaces. The only thing to learn is that a class reference, including the this pointer in member functions, works with DIP1000 just like a pointer would. Still, it’s hard to grasp what that means from one sentence, so later posts shall illustrate the subject.

In any case, there is more to know. DIP1000 has some features for ref function parameters, structs, and unions that we didn’t cover here. We’ll also dig deeper on how DIP1000 plays with non-@safe functions and attribute auto inference. Currently, the plan is to do two more posts for this series.

Do let us know in the comment section or the D forums if you have any useful DIP1000 tips that were not covered!

Thanks to Walter Bright for reviewing this article.

DIP Reviews: Discussion vs. Feedback

For a while now, I’ve been including a link to the DIP Reviewer Guidelines in the initial forum post for every DIP review. My hope was that it would encourage reviewers to keep the thread on topic and also to provide more focused feedback. As it turns out, a link to reviewer guidelines is not quite enough. Recent review threads have turned into massive, 20+ page discussions touching on a number of tangential topics.

The primary purpose of the DIP review process, as I’ve tried to make clear in blog posts, forum discussions, and the reviewer guidelines, is to improve the DIP. It is not a referendum on the DIP. In every review round, the goal is to strengthen the content where it is lacking, bring clarity and precision to the language, make sure all the bases are covered, etc.

At the same time, we don’t want to discourage discussion on the merits of the proposal. Opinions about the necessity or the validity of a DIP can raise points that the language maintainers can take into consideration when they are deciding whether to approve or reject it, or even cause the DIP author to withdraw the proposal. It’s happened before. That’s why such discussion is encouraged in the Community Review rounds (though it’s generally discouraged in Final Review, which should be focused wholly on improving the proposal).

The problem

One issue with allowing such free-form discussion in the review threads is that there is a tremendous amount of noise drowning out the signal. Finding specific DIP-related feedback requires trawling through every post, digging through multiple paragraphs of mixed discussion and feedback. Sometimes, one or more people will level a criticism that spawns a long discussion and results in a changing of minds. This makes it time consuming for me as the DIP manager when I have to summarize the review. It also increases the likelihood that I’ll overlook something.

My summary isn’t just for the ‘Reviews’ section at the bottom of the DIP. It’s also my way of ensuring that the DIP author is aware of and has considered all the unique points of feedback. More than once I have found something the DIP author missed or had forgotten about. But if I overlook something and the DIP author also overlooks it, then we may have missed an opportunity to improve the DIP.

I have threatened to delete posts that go off topic in these threads, but I can count on one hand the number of posts I’ve actually deleted. In reality, these discussions branch off in so many directions that it’s not easy to say definitively that a post that isn’t focused on the DIP itself is actually off topic. So I tend to let the posts stand rather than risk derailing the thread or removing information that is actually relevant.

The Solution

Starting with the upcoming Final Review of DIP 1027, I’m going to take a new approach to soliciting feedback. Rather than one review thread, I’ll be launching two for each DIP.

The Discussion Thread will be much the same as the current review thread. Opinions and discussion will be welcome and encouraged. I’ll still delete posts that are completely off topic, but other than that I’ll let the discussion flow where it may.

The Feedback Thread will be exclusively for feedback on the document and its contents. There will be no discussion allowed. Every post must contain specific points of feedback (preferably actionable items) intended to improve the proposal. Each post should be a direct reply to my initial post. There are only two exceptions: a poster who decides to retract feedback made in a previous post may reply to that post in order to make the retraction, and the DIP author may reply directly to any feedback post in order to indicate agreement or disagreement.

Posts in the feedback thread should contain answers to the questions posed in the DIP Reviewer Guidelines. It would be great if reviewers could take the time to do what Joseph Rushton Wakeling did in the Community Review for DIP 1028, where he explicitly listed and answered each question, but we won’t be requiring it. Feedback as bullet points is also very welcome.

Opinions on the validity of the proposed feature will be allowed in the feedback thread as long as they are backed with supporting arguments. In other words, “I’m against this! This is a terrible feature.” is not valid for the feedback thread. That sort of post goes in the discussion thread. However, “I’m against this. This is a terrible feature because <reasoned argument goes here>” is acceptable.

The rules of the feedback thread will be enforced without prejudice. Any post that is not a reply to my initial post, retraction of previous feedback, or a response by the DIP author will be deleted. Any post that does not provide the sort of feedback described above will be deleted. If I do delete a post, I won’t leave a new post explaining why. I’m going to update the DIP Reviewer Guidelines and each opening post in a feedback thread will include a link to that document as well as a paragraph or two summarizing the rules.

I’ll require DIP authors to follow both threads and to participate in the discussion thread. When it comes time to summarize the review, the feedback thread will be my primary source. I will, of course, follow the discussion thread as well and take notes on anything relevant. But if you want to ensure any specific criticisms you may have about a DIP are accounted for, be sure to post them in the feedback thread.

Hopefully, this new approach won’t be too disruptive. We’ll see how it goes.

 

Revisions to the DIP Process

At the AGM that was held prior to the Hackathon at DConf 2019 in London, I announced that I would be making revisions to the DIP process aimed at shortening the length of time required to go from the Community Review to a final verdict. I also, in response to Joseph Rushton Wakeling’s feedback about guidance for reviewers, agreed to enhance the existing documentation to clarify what is expected of reviewers during the Community and Final Review stages and to provide guidance on how to write a good review.

The new documentation is now live in four documents under the new docs subdirectory in the DIP repository (all of which are linked in the README). The PROCEDURE and GUIDELINES documents are still there so that existing links remain valid, but all of their content has been replaced with links to the new documentation. Please consider the new documentation to be in draft form. I have not yet subjected them to intense editing, so any corrections are welcome, as are suggestions on how to enhance them.

Anyone intending to participate in a DIP review by leaving feedback in a review thread should familiarize themselves with the DIP Reviewer Guidelines. I can’t force anyone to do so, but I do consider this mandatory. In my role as DIP manager, trying to summarize long threads that have gone off into in-depth discussions of one thing or another, reading through post after post for any sign of actual feedback, is a time-consuming (and highly annoying!) process. Henceforth, I will be deleting any posts in Community and Final Review discussion threads that do not adhere to the guidelines laid out in the above document. As I declare in the document, such posts will be copied and pasted into a separate thread where off-topic discussions of the DIP may continue. It’s not my intention to stifle debate, censor anyone, or in any other way restrict discussion of the DIP; I just want to make my job, and the DIP author’s, easier. So please, for my sake and yours, read and understand the reviewer guidelines.

The content of the GUIDELINES document, which was targeted at DIP authors, has been moved (and slightly modified) into the DIP Author Guidelines. The portion of the PROCEDURE document that was aimed at DIP authors is now in The DIP Authoring Process document. All potential DIP authors should read and understand both documents before submitting a DIP. Failure to do so may result in surprises. For example, I’m going to be more proactive in closing pull requests that are submitted while a DIP is still in development and not yet in a first draft state. So please read!

Finally, the portion of the PROCEDURE document that described the different review stages is now found in the document titled The DIP Review Process. Everyone should read this. The primary difference from the previous document is that I’ve explicitly declared that Community Reviews will always begin in the first seven days of a month and Final Reviews will always begin in the third week of a month. Additionally, where I would formerly allow multiple Community Reviews to take place simultaneously, I now restrict it to one at any given time. The goal is to streamline the process and minimize the time it takes to go from Community Review to a final verdict.

As the new document outlines, the best-case scenario, in which only one round of Community Review is required and no other DIP is under active consideration by the language maintainers when this one finishes its Final Review, should look like this:

  • the DIP enters Community Review in the first week of Month A.
  • after Community Review, the DIP author will have four weeks to complete any required revisions.
  • in the third week of Month B, the Final Review begins.
  • after the Final Review, no revisions are required and no other DIP is under active consideration, so the DIP may immediately move into Formal Assessment.
  • the language maintainers have enough information to render a verdict on the DIP within 30 days.

So it should take between two and three months for the review process to complete. Again, this is the best-case scenario. I expect it more likely that it will typically take between four and five months, given that some DIPs will need multiple Community Review rounds and some will require revision during Formal Assessment.

As I said at the AGM, I’m always open to improving the process to the extent I can within the boundaries of the current framework. Any fundamental structural changes will need approval by Walter and Átila. If you have any suggestions to strengthen the documentation, please let me know.

Updates in D Land

As we approach the end of 2018, a recent Reddit thread wishing D a happy 17th birthday reminded me how far the language has come since I first stumbled upon it in 2003. Many of the steps along the way were powered by the energy of users who had little incentive to contribute beyond personal interest. And here we are, all these years later, still moving forward.

There are a number of current and upcoming happenings that will play a role in keeping that progress going. In this post, I’d like to remind you, update you, or inform you about some of them.

The Pull Request Manager Campaign

If you haven’t heard, the D Language Foundation has hired a pull request manager, to be paid out of a pool of donations. This is our first major fundraising campaign through Flipcause. I’m happy to report that it’s going well. As I write, we’ve raised $1,864 of our $3,000 target in 66 days thanks to the kindness of 30 supporters. If you’d like to support us in this cause, click on the campaign card.

You can access our full campaign menu at any time via the “Donate Now” button in the sidebar here on the blog. A pull request has also been submitted to integrate the menu into dlang.org’s donation page. Currently, we only have two campaigns (this one and the General Fund) but any future campaigns will be accessible through those menus.

Symmetry Autumn of Code

Earlier this year, Symmetry Investments partnered with the D Language Foundation to sponsor the Symmetry Autumn of Code (SAoC). Three participants were selected to work on D-related projects over the course of four months, with milestones to mark their progress. If you haven’t heard of it or had forgotten, you can read the details on the SAoC page here at the blog.

Unfortunately, one participant was unable to continue after the first milestone. The other two, whom we have come to refer to as the two Francescos, have each successfully completed three milestones and are in the home stretch, aiming for that final payment and free trip to DConf 2019.

Francesco Mecca is working on porting an old D1 GC to modern D, and Francesco Galla’ is busy adding HTTP/2 support to the vibe-http library. Both have made significant progress and are on track for a successful SAoC. Read more about their projects in my previous SAoC update.

DIP Updates

I’ve received partial feedback on a decision regarding DIP 1013 (The Deprecation Process) and expect to hear the final verdict soon. As soon as I do, I’ll move Manu’s DIP 1016 (ref T accepts r-values) into the Formal Assessment stage for Walter and Andrei to consider.

I had intended to move Razvan’s Copy Constructor DIP into Community Review by now, as that is a high priority for Walter and Andrei. However, he’s been working out some more details so it’s not quite yet ready. So as not to hold up the process any longer, I’ll be starting Community Review for one of the other DIPs in the PR queue at the end of this week. When the Copy Constructor DIP is ready, I’ll run its review in parallel.

Google Summer of Code (GSoC) 2019

At the end of last month, I announced in the forums that we’re ramping up for GSoC 2019. I seeded our Wiki page with two potential project ideas to get us started. So far, only one additional idea has been added and no one has contacted me about participating as a student or a mentor.

It’s been a while since we were last accepted into GSoC, and we’d very much like to get into it this time. To do so, we need more project ideas, students to execute them, and mentors to provide guidance to the students. If you’re looking for another way to contribute to the D community, this is a great way to do so. Adding project ideas costs little beyond the time it takes to add the details to the Wiki and, if you are lacking in ideas already dying to escape the confines of your neurocranium, the time it takes to brainstorm something. Student and mentor participation is a more significant commitment, but it’s also a lot more rewarding. If you’re interested, contact me at aldacron@gmail.com.

DConf 2019

Finally, I’m happy to announce… Just kidding. I can’t announce anything yet about DConf 2019, but I hope to be able to soon. What I can say with certainty is that in 2019, DConf will be where DConf has never gone before. We’re currently working out some details with an eye toward making 2019 a big year for DConf.

I’m really excited about it and eager to let everyone know. I’ll do so as soon as I’m able. Watch this space!