
automem: Hands-Free RAII for D

Átila Neves has used both C++ and D professionally. He’s responsible for several D libraries and tools, like unit-threaded, cerealed, and reggae.


Garbage collected languages tend to suffer from a framing problem, and D is no exception. Its inclusion of a mark-and-sweep garbage collector makes safe memory management easy and convenient, but, thanks to a widespread perception that GC is a performance killer, its mere existence alienates a lot of potential users.

Coming to D as I did from C++, the one thing I didn’t like about the language initially was the GC. I’ve since come to realize that my fears were mostly unfounded, but the fact remains that, for many people, the GC is reason enough to avoid the language. Whether or not that is reasonable given their use cases is debatable (and something over which reasonable people may disagree), but the existence of the perception is not.

A lot of work has been done over the years to make sure that D code can be written without depending on the GC. The @nogc annotation is especially important here, and I don’t think it’s been publicized enough. A @nogc main function is a compile-time guarantee that the program will never allocate any GC memory. For the types of applications that need those sorts of guarantees, this is invaluable.
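
What that looks like in practice:

// With @nogc on main, any GC allocation anywhere in the program
// is a compile-time error rather than a runtime surprise
// (@nogc is transitive: main may only call @nogc functions).
void main() @nogc
{
    int[3] a = [1, 2, 3];   // fine: static arrays live on the stack
    // int[] b = [1, 2, 3]; // error: the array literal allocates from the GC
}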

But if not allocating from the GC heap, where does one get the memory? Still in the experimental package of the standard library, std.experimental.allocator provides building blocks for composing allocators that should satisfy any and all memory allocation needs where the GC is deemed inappropriate. Better still, via the IAllocator interface, one can even switch between GC and custom allocation strategies as needed at runtime.

I’ve recently used std.experimental.allocator in order to achieve @nogc guarantees and, while it works, there’s one area in which the experience wasn’t as smooth as in C++ or Rust: disposing of memory. Like C++ and Rust, D has RAII, and, as is usual in all three, it’s considered bad form to explicitly release resources. And yet, as things stand, one has to manually dispose of memory when using std.experimental.allocator from the standard library. D makes it easier than most languages that support exceptions, due to scope(exit), but in a language with RAII that’s just boilerplate. And as the good lazy programmer that I am, I abhor writing code that doesn’t need to be, and shouldn’t be, written. An itch developed.
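
For the record, here’s a sketch of the manual version with std.experimental.allocator (using Mallocator for concreteness):

import std.experimental.allocator : make, dispose;
import std.experimental.allocator.mallocator : Mallocator;

void example()
{
    auto p = Mallocator.instance.make!int(42);
    // this is the boilerplate: forget it, and the allocation leaks
    scope(exit) Mallocator.instance.dispose(p);
    assert(*p == 42);
}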

The inspiration for the solution I came up with was C++; ever since C++11 I’ve been delighted with using std::unique_ptr and std::shared_ptr and basically never again worrying about manually managing memory. D’s standard library has Unique and RefCounted in std.typecons but they predate std.experimental.allocator and so “bake in” the allocation strategy. Can we have our allocation cake and eat it too?

Enter automem, a library I wrote providing C++-style smart pointers that integrate with std.experimental.allocator. It was clear to me that the design had to be different from the smart pointers it took inspiration from. In C++, it’s assumed that memory is allocated with new and freed with delete (although it’s possible to override both). With custom allocators and no obvious default choice, I made it so that the smart pointers allocate memory themselves. As a side benefit, this makes it impossible to allocate with one allocator and deallocate with a different one.

Another goal was to preserve the possibility for Unique, like std::unique_ptr, to be a zero-cost abstraction. To that end, the allocator type must be specified (it defaults to IAllocator); if it’s a value type with no state, then it takes up no space. In fact, if it’s a singleton (determined at compile time by probing whether Allocator.instance exists), it doesn’t even need to be passed to the constructor! As in much modern D code, Design by Introspection pays off here. Example code:

import automem;
import std.experimental.allocator.mallocator : Mallocator;
import std.algorithm : move;

struct Point {
    int x;
    int y;
}

{
    // must pass arguments to initialise the contained object,
    // but not an allocator instance, since Mallocator is a
    // singleton: Mallocator.instance returns the only instantiation

    auto u1 = Unique!(Point, Mallocator)(2, 3);
    assert(*u1 == Point(2, 3));
    assert(u1.y == 3); // forwards to the contained object

    // auto u2 = u1; // won't compile, can only move
    typeof(u1) u2;
    move(u1, u2);
    assert(cast(bool)u1 == false); // u1 is now empty
}
// memory freed for the Point structure created in the block

RefCounted is automem’s equivalent of C++’s std::shared_ptr. Unlike std::shared_ptr, however, it doesn’t always do an atomic reference count increment/decrement. The reason is that it leverages D’s type system to determine when it has to: if the payload is shared, the reference count is changed atomically. If not, it can’t be sent to other threads anyway, and the performance penalty doesn’t have to be paid. C++ always does an atomic increment/decrement. Rust gets around this with two types, Arc and Rc. In D, the type system disambiguates. Another win for Design by Introspection, something that really is only possible in D. Example code:

{
    auto s1 = RefCounted!(Point, Mallocator)(4, 5);
    assert(*s1 == Point(4, 5));
    assert(s1.x == 4);
    {
        auto s2 = s1; // can be copied, non-atomic reference count
    } // ref count goes to 1 here

} // ref count goes to 0 here, memory released
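
The technique is easy to sketch. The following illustrates the idea only, not automem’s actual implementation: a static if on the payload’s shared qualifier selects the counting strategy.

import core.atomic : atomicOp;

// Illustration only (not automem's real code): introspect the
// payload type to choose the reference counting strategy.
struct Rc(T)
{
    static if (is(T == shared))
        shared(int)* count; // may be touched from multiple threads
    else
        int* count;         // provably thread-local

    void retain()
    {
        static if (is(T == shared))
            atomicOp!"+="(*count, 1); // atomic: payload crosses threads
        else
            ++(*count);               // plain: no synchronisation needed
    }
}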

Since the allocator type is usually specified, using a @nogc allocator (which most of them are) means the code using automem can itself be made @nogc, with RAII taking care of any memory management duties. That means compile-time guarantees of no GC allocation for the applications that need them.
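
A sketch of what that looks like in user code (assuming automem as a dependency and the Point struct from earlier):

void noGcWork() @nogc
{
    import automem : Unique;
    import std.experimental.allocator.mallocator : Mallocator;

    auto u = Unique!(Point, Mallocator)(1, 2);
    // ... use *u freely; no GC allocation anywhere in this function
} // the Point is freed here via RAII, no manual disposal needed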

I hope automem and std.experimental.allocator manage to solve D’s GC framing problem. Now it should be possible to write @nogc code with no manual memory disposal in D, just as it is in C++ and Rust.

The DConf Experience

April 23rd, the deadline for DConf 2017 registrations, is just a few days away. Personally, I wasn’t sure if I’d be able to attend this year or not, but fortunately things worked out. While I’m looking forward to more great presentations this year, that’s not what keeps drawing me back (or makes me regret being unable to attend in 2014 and 2015).

The main draw for me is putting faces to all the GitHub and forum handles, then getting the opportunity to reconnect in person with those I’ve met at past conferences to talk about more than just the D programming language or the projects to which we all contribute.

I’m not the only one who feels that way. Walter Bright had this to say of DConf 2016:

Besides the technical content, I enjoyed the camaraderie of the conference. The pleasant walking and talking going between the Ibis hotel and the conference. The great food and conversations at the event. Chatting at the Ibis after hours.

If you’re staying at the Hotel Ibis Neukölln (which, unfortunately, is booked solid the week of the conference last I heard) and you’re an early riser, you’ll likely find Walter down in the lobby having First Breakfast and up for a good conversation.

Of course, the mingling isn’t necessarily confined to the conference hall or the hotel lobby. Conference goers sometimes share a room in order to save costs. That’s a great opportunity to learn something new about someone you might not otherwise have discovered, as GDC meister Iain Buclaw found out at DConf 2013 when he and Manu Evans were roomies for a few days:

I recall that I had gotten some microwavable food from the nearby groceries ahead of his arrival — “I didn’t know what you would like, or if you had allergies, so I went with the safe option and only got Vegan foods”. It was a pleasant surprise to discover we have a diet in common also.

At that edition, Iain, Manu, and Brad Roberts were shuttled to and from the conference by Walter in his rental car. Carpooling has been a common part of the Stateside DConfs. At the venue in Berlin, there’s no need for it. The semi-official hotel is within walking distance, and there’s a subway stop nearby for those who are staying further off. And, as Steven Schveighoffer can attest, rental bicycles are always an option.

I remember last year, Joseph Wakeling, Andrew Edwards and his son, and myself went for a great bike tour of Berlin given by native David Eckardt of Sociomantic. This included the experience of dragging our rented bikes on the subway, and getting turned away by police in riot gear when we had inadvertently tried to go somewhere that political demonstrations were happening. Highly recommended for anyone, especially if David is gracious enough to do the tour again! We randomly met Ethan Watson riding a rented bike along the way too 🙂

Speaking of Ethan, it appears there’s a good reason that he was riding his bike alone and in the wrong direction.

My first BeerConf was in 2016, located in the rather interesting city of Berlin. Every evening, we would gather and drink large amounts of very cheap beer. This would also coincide with some rather enjoyable shenanigans, be they in-depth conversations; pub crawls; more in-depth conversations; convenience store crawls (where the beer is even cheaper); even more in-depth conversations; and a remarkable lack of late night food.

Hours during the day were passed away at a parallel conference that I kinda stumbled in to, DConf. Interestingly, this was stocked with the same characters that I socialised with during BeerConf. Maybe I was drunk then. Maybe I’m drunk now. Did I present a talk there? I think I did, although I can only assume the talk did not have anything to do with beer.

I’ve found out recently that I’m dual-booked for BeerConf 2017 as well as presenting again at DConf. Since attending these events can only be achieved through a submission process, my deduction is that I had a thoroughly good time meeting all these people and socialising with them, as well as sharing knowledge and technical expertise, and decided I must do it again. Either that, or someone else signed me up. Don’t tell me which option is the truth, I’ll work it out.

Yes, the beer is a big part of it, too. This blog was born at the small bar in the lobby of the Ibis, over a couple of cold ones and the background cacophony of D programmer talk. Who could say what new D initiatives will spawn from the fermented grains this year?

But let’s not forget the presentations. While they’ll be livestreamed, and those who don’t attend can watch them on YouTube in perpetuity, the social aspect lends a flavor to them that the videos just can’t capture. Andrew Edwards is a fan of one speaker in particular.

I’m always fascinated by Don Clugston’s talks. The most memorable is DConf 2013, during which he stated that, “The language made minor steps… We got here like miners following a vein of gold, not by an original master plan,” in his explanation of how D arrived at its powerful metaprogramming capabilities.

Where else can you hear “wonks” recounting their first hand experiences of what makes D “berry, berry good” for them? It is just a great experience talking and exchanging ideas with some of the greatest minds in the D community.

And there’s no substitute for being in the room when something unexpected happens, as it did during Stefan Koch’s lightning talk at DConf 2016, when he wanted to do a bit of live coding on stage.

There was no DMD on the computer 🙂 That surprised me.

Whether your interest is social, technical, alcohological, or otherwise, DConf is just a fun place to be. If you have the time and the means to attend, you’re warmly invited to make your own DConf memories. Do yourself a favor and register before the window closes. If you can’t make it, be sure to follow the livestream (pay attention to the Announce forum for details) or pull up #D on freenode.net to ask questions of the presenters. But we really hope to see you there!

I’ll leave you to consider Andrei Alexandrescu’s take on DConf, which concisely sums up the impact the conference has on many who attend:

To me, DConf has already become the anchoring event that has me inspired and motivated for the whole year. We are a unique programming language community in that we’re entirely decentralized geographically. Even the leadership is spread all over across the two US coasts, Western Europe, Eastern Europe, and Asia. Therefore, the event that brings us all together is powerful – an annual systole of the live, beating heart of a great community.

We don’t lack a good program with strong talks, but to me the best moments are found in the interstices – the communal meals, the long discussions with Walter and others in the hallway, the late nights in the hotel lobby. All of these are much needed accelerated versions of online exchanges.

I’m very much looking forward to this year’s edition. In addition to the usual suspects, we’ll have a strong showing of work done by our graduate scholarship recipients.

Thanks to Walter, Iain, Steven, Ethan, Andrew, Stefan and Andrei for sharing their anecdotes and thoughts.

The New CTFE Engine

Stefan Koch is the maintainer of sqlite-d, a native D sqlite reader, and has contributed to projects like SDC (the Stupid D Compiler) and vibe.d. He was also responsible for a 10% performance improvement in D’s current CTFE implementation and is currently writing a new CTFE engine, the subject of this post.


For the past nine months, I’ve been working on a project called NewCTFE, a reimplementation of the Compile-Time Function Evaluation (CTFE) feature of the D compiler front-end. CTFE is considered one of the game-changing features of D.

As the name implies, CTFE allows certain functions to be evaluated by the compiler while it is compiling the source code in which those functions are implemented. As long as all arguments to a function are available at compile time and the function is pure (has no side effects), it qualifies for CTFE. The compiler will replace the function call with the result.
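
A trivial example:

// square is pure and its argument is a compile-time constant,
// so the compiler evaluates the call during compilation.
int square(int x) pure { return x * x; }

enum area = square(4);     // CTFE happens here
static assert(area == 16); // verified while compiling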

Since this is an integral part of the language, pure functions can be evaluated anywhere a compile-time constant may go. A simple example can be found in the standard library module, std.uri, where CTFE is used to compute a lookup table. It looks like this:

private immutable ubyte[128] uri_flags = // indexed by character
({
    ubyte[128] uflags;

    // Compile time initialize
    uflags['#'] |= URI_Hash;

    foreach (c; 'A' .. 'Z' + 1)
    {
        uflags[c] |= URI_Alpha;
        uflags[c + 0x20] |= URI_Alpha; // lowercase letters
    }

    foreach (c; '0' .. '9' + 1) uflags[c] |= URI_Digit;

    foreach (c; ";/?:@&=+$,") uflags[c] |= URI_Reserved;

    foreach (c; "-_.!~*'()") uflags[c] |= URI_Mark;

    return uflags;
})();

Instead of populating the table with magic values, a simple, expressive function literal is used. This is much easier to understand and debug than some opaque static array. The ({ starts the function literal, the }) closes it. The () at the end tells the compiler to immediately invoke the literal, such that uri_flags becomes its result.

Functions are only evaluated at compile time if they need to be. uri_flags in the snippet above is declared in module scope. When a module-scope variable is initialized in this manner, the initializer must be available at compile time. In this case, since the initializer is a function literal, an attempt will be made to perform CTFE. This particular literal has no arguments and is pure, so the attempt succeeds.

For a more in-depth discussion of CTFE, see this article by H. S. Teoh at the D Wiki.

Of course, the same technique can be applied to more complicated problems as well; std.regex, for example, can build a specialized automaton for a regex at compile time using CTFE. However, as soon as std.regex is used with CTFE for non-trivial patterns, compile times can become extremely high (in D everything that takes longer than a second to compile is bloat-ware :)). Eventually, as patterns get more complex, the compiler will run out of memory and probably take the whole system down with it.

The blame for this can be laid at the feet of the current CTFE interpreter’s architecture. It’s an AST interpreter, which means that it interprets the AST while traversing it. To represent the results of interpreted expressions, it uses DMD’s AST node classes. This means that every expression encountered will allocate one or more AST nodes. Within a tight loop, the interpreter can easily generate over 100_000_000 nodes and eat a few gigabytes of RAM, exhausting available memory quite quickly.

Issue 12844 complains about std.regex taking more than 16GB of RAM. For one pattern. Then there’s issue 6498, which executes a simple 0 to 10_000_000 while loop via CTFE and runs out of memory.
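
The loop from issue 6498 is roughly this shape (a reconstruction, not the exact code from the report):

// Each loop iteration allocates fresh AST nodes in the current
// interpreter, so forcing this through CTFE exhausts memory.
int count() pure
{
    int i = 0;
    while (i < 10_000_000)
        ++i;
    return i;
}

enum forced = count(); // the enum initializer forces CTFE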

Simply freeing nodes doesn’t fix the problem; we don’t know which nodes to free, and enabling the garbage collector makes the whole compiler brutally slow. Luckily, there is another approach, one which doesn’t allocate for every expression encountered. It involves compiling the function to a virtual ISA (Instruction Set Architecture). This virtual ISA, also known as bytecode, is then given to a dedicated interpreter for that ISA (when the virtual ISA happens to be the same as the host’s ISA, we call it a JIT (Just in Time) interpreter).
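
To make the distinction concrete, here is a toy bytecode interpreter for an invented three-instruction ISA (nothing like NewCTFE’s actual design): a flat dispatch loop over pre-compiled instructions, with no per-expression allocation.

enum Op : ubyte { pushImm, add, halt }

// A toy stack machine: dispatch over flat bytecode instead of
// walking an AST and allocating a node per expression.
long run(const(ubyte)[] code)
{
    long[8] stack;
    size_t sp, ip;
    while (true)
    {
        final switch (cast(Op) code[ip++])
        {
        case Op.pushImm:
            stack[sp++] = code[ip++]; // next byte is the immediate value
            break;
        case Op.add:
            stack[sp - 2] += stack[sp - 1];
            --sp;
            break;
        case Op.halt:
            return stack[sp - 1];
        }
    }
}

unittest
{
    // compute 2 + 3 on the toy ISA
    immutable ubyte[] program = [Op.pushImm, 2, Op.pushImm, 3, Op.add, Op.halt];
    assert(run(program) == 5);
}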

The NewCTFE project concerns itself with implementing such a bytecode interpreter. Writing the actual interpreter (a CPU emulator for a virtual CPU/ISA) is reasonably simple. However, compiling code to a virtual ISA is exactly as much work as compiling it to a real ISA (though, a virtual ISA has the added benefit that it can be extended for customized needs, but that makes it harder to do JIT later). That’s why it took a month just to get the first simple examples running on the new CTFE engine, and why slightly more complicated ones still aren’t running even after 9 months of development. At the end of the post, you’ll find an approximate timeline of the work done so far.

I’ll be giving a presentation at DConf 2017, where I’ll discuss my experience implementing the engine and explain some of the technical details, particularly regarding the trade-offs and design choices I’ve made. The current estimation is that the 1.0 goals will not be met by then, but I’ll keep coding away until it’s done.

Those interested in keeping up with my progress can follow my status updates in the D forums. At some point in the future, I will write another article on some of the technical details of the implementation. In the meantime, I hope the following listing sheds some light on how much work it is to implement NewCTFE 🙂

  • May 9th 2016
    Announcement of the plan to improve CTFE.
  • May 27th 2016
    Announcement that work on the new engine has begun.
  • May 28th 2016
    Simple memory management change failed.
  • June 3rd 2016
    Decision to implement a bytecode interpreter.
  • June 30th 2016
    First code (taken from issue 6498) consisting of simple integer arithmetic runs.
  • July 14th 2016
    ASCII string indexing works.
  • July 15th 2016
    Initial struct support
  • Sometime between July and August
    First switches work.
  • August 17th 2016
    Support for the special cases if(__ctfe) and if(!__ctfe)
  • Sometime between August and September
    Ternary expressions are supported
  • September 08th 2016
    First Phobos unit tests pass.
  • September 25th 2016
    Support for returning strings and ternary expressions.
  • October 16th 2016
    First (almost working) version of the LLVM backend.
  • October 30th 2016
    First failed attempts to support function calls.
  • November 1st 2016
    DRuntime unit tests pass for the first time.
  • November 10th 2016
    Failed attempt to implement string concatenation.
  • November 14th 2016
    Array expansion, e.g. assignment to the length property, is supported.
  • November 14th 2016
    Assignment of array indexes is supported.
  • November 18th 2016
    Support for arrays as function parameters.
  • November 19th 2016
    Performance fixes.
  • November 20th 2016
    Fixing the broken while(true) / for (;;) loops; they can now be broken out of 🙂
  • November 25th 2016
    Fixes to goto and switch handling.
  • November 29th 2016
    Fixes to continue and break handling.
  • November 30th 2016
    Initial support for assert
  • December 02nd 2016
    Bailout on void-initialized values (since they can lead to undefined behavior).
  • December 03rd 2016
    Initial support for returning struct literals.
  • December 05th 2016
    Performance fix to the bytecode generator.
  • December 07th 2016
    Fixes to continue and break in for statements (continue must not skip the increment step)
  • December 08th 2016
    Array literals with variables inside are now supported: [1, n, 3]
  • December 08th 2016
    Fixed a bug in switch statements.
  • December 10th 2016
    Fixed a nasty evaluation order bug.
  • December 13th 2016
    Some progress on function calls.
  • December 14th 2016
    Initial support for strings in switches.
  • December 15th 2016
    Assignment of static arrays is now supported.
  • December 17th 2016
    Fixing goto statements (we were ignoring the last goto to any label :)).
  • December 17th 2016
    De-macrofied string-equals.
  • December 20th 2016
    Implement check to guard against dereferencing null pointers (yes… that one was oh so fun).
  • December 22nd 2016
    Initial support for pointers.
  • December 25th 2016
    static immutable variables can now be accessed (yes the result is recomputed … who cares).
  • January 02nd 2017
    First Function calls are supported !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
  • January 17th 2017
    Recursive function calls work now 🙂
  • January 23rd 2017
    The interpret3.d unit-test passes.
  • January 24th 2017
    We are green on 64bit!
  • January 25th 2017
    Green on all platforms !!!!! (with blacklisting though)
  • January 26th 2017
    Fixed special case cast(void*) size_t.max (this one cannot go through the normal pointer support, which assumes that you have something valid to dereference).
  • January 26th 2017
    Member function calls are supported!
  • January 31st 2017
    Fixed a bug in switch handling.
  • February 03rd 2017
    Initial function pointer support.
  • Rest of February 2017
    Wild goose chase for issue #17220
  • March 11th 2017
    Initial support for slices.
  • March 15th 2017
    String slicing works.
  • March 18th 2017
    $ in slice expressions is now supported.
  • March 19th 2017
    The concatenation operator (c = a ~ b) works.
  • March 22nd 2017
    Fixed a switch fallthrough bug.