
The D Blog in 2017

The first full year of the D blog is now in the rear view. Last year around this time, I posted some statistics from the seven months of 2016 that the blog was in business. This time, looking back on 2017, we’ve got a full twelve to draw on.

The fun stuff

From my perspective, 2017 was a fun year for managing the blog. The only negatives for me are that I didn’t get as many Project Highlights as I would have liked and I never started the series on DUB that I had envisioned. But there were some new features that I quite enjoyed working on:

  • the GC series came about in response to some posts in the D forums. The next post in the series should have come at the end of December, but I’ve had to put it off for just a bit. I’m not done with the series yet. I’m also still looking for more contributors willing to share their strategies and libraries for working with, around, and without the GC.
  • I started a new series on interfacing D with C. I’ll be continuing on with that in the coming months, eventually writing about the other direction (interfacing C with D), and pushing out a few words about -betterC mode.
  • at DConf 2017, the Funkwerk crew agreed to cooperate with me in putting out a series of posts from their perspective of using D in production. This is the first of what I hope will become a regular series highlighting companies working with D in production, the projects they’re building, and the tools they use.
  • as a reaction to the pain that comes when I cut content out of posts that feel too long, I decided to create a new domain for “extended posts”: dblog-ext.info, and a corresponding page of links here at the blog. The separate domain and the simple layout are both to make it clear that it’s not part of the official blog. I don’t know if it will be permanent, but I intend to keep it alive for a while yet to see if more people actually make use of it. I now encourage anyone writing guest posts not to feel constrained by word count. If the post is too long, we can split it into multiple posts or, where there’s not enough content for that, make an extended post for it when it makes sense to do so.

The stats

We saw 44 new posts added to the blog in 2017. Across the entire blog, including the front page, there were a total of 132,985 page views from 96,101 visitors who left 117 comments.

The top five referrers:

Referrer Page Views
Reddit 21,920
Hacker News 20,693
Google Search 10,761
D Forums 7,008
Twitter 5,265

The top five countries:

Country Page Views
United States 43,495
Germany 9,606
United Kingdom 8,226
Russia 5,022
Canada 4,890

Several posts included links to D projects at GitHub. Counting projects, profiles, organizations, and specific file links, the top five most-clicked were:

  1. voxelman
  2. dlangui
  3. DerelictOrg
  4. DIP 1005
  5. yomm11 (Open multi-methods for C++)

The top five posts of 2017:

Post Title Page Views
D as a Better C 17,502
Faster Command Line Tools in D 10,143
Don’t Fear the Reaper 8,277
D’s Newfangled Name Mangling 6,011
Compile-Time Sort in D 4,874

All time (as of a few minutes before the timestamp on this post), there are 70 posts (aside from this one) that have had 187,946 views from 136,850 visitors. The top five most-viewed posts of all time:

Post Title Page Views
D as a Better C 17,553
Faster Command Line Tools in D 10,151
Don’t Fear the Reaper 8,282
Find Was Too Damn Slow, So We Fixed It 6,228
D’s Newfangled Name Mangling 6,102

In 2018…

This year, look for the GC series and the D & C series to continue. I’m hoping to recruit a couple of semi-regular guest posters to help me up the post count a little bit. At the moment, I’m pretty much at peak output and could use the help. I’m on constant lookout for projects to highlight, and plan to bring at least one more company highlight this year (Funkwerk’s series will be wrapping up this month). I hope to bring some DConf 2018-themed posts in the runup to this year’s conference.

Finally, as always, if you have something D to write about, whether it’s your project, a language feature, a tutorial, an algorithm… anything about programming in D, please let me know!

DMD 2.078.0 Has Been Released

Another major release of DMD, this time 2.078.0, has been packaged and delivered in time for the new year. See the full changelog at dlang.org and download the compiler for your platform either from the main download page or the 2.078.0 release directory.

This release brings a number of quality-of-life improvements, fixing some minor annoyances and inconsistencies, three of which are targeted at smoothing out the experience of programming in D without DRuntime.

C runtime construction and destruction

D has included static constructors and destructors, both as aggregate type members and at module level, for most of its existence. The former are called in lexical order as DRuntime goes through its initialization routine, and the latter are called in reverse lexical order as the runtime shuts down. But when programming in an environment without DRuntime, such as when using the -betterC compiler switch, or using a stubbed-out runtime, static construction and destruction are not available.
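
For reference, here’s what ordinary module constructors and destructors look like when DRuntime is present (a minimal sketch; the file name modctor.d and the messages are just for illustration):

modctor.d

// Compile with:    dmd modctor.d

import std.stdio : writeln;

shared static this()
{
    writeln("module constructor");
}

shared static ~this()
{
    writeln("module destructor");
}

void main()
{
    writeln("D main");
}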

DMD 2.078.0 brings static module construction and destruction to those environments in the form of two new pragmas, pragma(crt_constructor) and pragma(crt_destructor) respectively. The former causes any function to which it’s applied to be executed before the C main, and the latter after the C main, as in this example:

crun1.d

// Compile with:    dmd crun1.d
// Alternatively:   dmd -betterC crun1.d

import core.stdc.stdio;

// Each of the following functions should have
// C linkage (cdecl).
extern(C):

pragma(crt_constructor)
void init()
{
    puts("init");
}

pragma(crt_destructor)
void fini()
{
    puts("fini");
}

void main()
{
    puts("C main");
}

The compiler requires that any function annotated with the new pragmas be declared with the extern(C) linkage attribute. In this example, though it isn’t required, main is also declared as extern(C). The colon syntax on line 8 applies the attribute to every function that follows, up to the end of the module or until a new linkage attribute appears.

In a normal D program, the C main is the entry point for DRuntime and is generated by the compiler. When the C runtime calls the C main, the D runtime does its initialization, which includes starting up the GC, executing static constructors, gathering command-line arguments into a string array, and calling the application’s main function, a.k.a. D main.

When a D module’s main is annotated with extern(C), it essentially replaces DRuntime’s implementation, as the compiler will never generate a C main function for the runtime in that case. If -betterC is not supplied on the command line, or an alternative implementation is not provided, DRuntime itself is still available and can be manually initialized and terminated.
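
As a minimal sketch of that last point (the file name and messages are purely illustrative and this is not part of the release examples), DRuntime can be started and stopped by hand from a replacement extern(C) main via core.runtime:

manualrt.d

// Compile with:    dmd manualrt.d

import core.stdc.stdio : puts;

extern(C) int main()
{
    import core.runtime : Runtime;

    puts("Before runtime initialization");

    // Bring up DRuntime: the GC, static constructors, etc.
    if(!Runtime.initialize()) return 1;

    // Tear it down again (running static destructors) on exit.
    scope(exit) Runtime.terminate();

    // With the runtime up, the GC and Phobos are usable.
    import std.stdio : writeln;
    writeln("Hello from a manually initialized runtime!");

    return 0;
}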

The crun1.d example is intended to clearly show that the crt_constructor pragma causes init to execute before the C main and that crt_destructor causes fini to run after it. This introduces new options for scenarios where DRuntime is unavailable. However, take away the extern(C) from main and the same execution order will print to the command line:

crun2.d

// Compile with:    dmd crun2.d

import core.stdc.stdio;

pragma(crt_constructor)
extern(C) void init()
{
    puts("init");
}

pragma(crt_destructor)
extern(C) void fini()
{
    puts("fini");
}

void main()
{
    import std.stdio : writeln;
    writeln("D main");
}

The difference is that the C main now belongs to DRuntime and our main is the D main. The execution order is: init, C main, D main, fini. This means init is effectively called before DRuntime is initialized and fini after it terminates. Because this example uses writeln from std.stdio, which depends on DRuntime, it can’t be compiled with -betterC.

You may discover that writeln works if you import it at the top of the module and substitute it for puts in the example. However, always remember that even though DRuntime may be available, it’s not in a valid state when a crt_constructor and a crt_destructor are executed.

RAII for -betterC

One of the limitations in -betterC mode has been the absence of RAII. In normal D code, struct destructors are executed when an instance goes out of scope. This has always depended on DRuntime, and since the runtime isn’t available in -betterC mode, neither are struct destructors. With DMD 2.078.0, the are in the preceding sentence becomes were.

destruct.d

// Compile with:    dmd -betterC destruct.d

import core.stdc.stdio : puts;

struct DestroyMe
{
    ~this()
    {
        puts("Destruction complete.");
    }
}

extern(C) void main()
{
    DestroyMe d;
}

Interestingly, this is implemented in terms of try..finally, so a side-effect is that -betterC mode now supports try and finally blocks:

cleanup1.d

// Compile with:    dmd -betterC cleanup1.d

import core.stdc.stdlib,
       core.stdc.stdio;

extern(C) void main()
{
    int* ints;
    try
    {
        // acquire resources here
        ints = cast(int*)malloc(int.sizeof * 10);
        puts("Allocated!");
    }
    finally
    {
        // release resources here
        free(ints);
        puts("Freed!");
    }
}

Since D’s scope(exit) feature is also implemented in terms of try..finally, it now works in -betterC mode as well:

cleanup2.d

// Compile with: dmd -betterC cleanup2.d

import core.stdc.stdlib,
       core.stdc.stdio;

extern(C) void main()
{
    auto ints1 = cast(int*)malloc(int.sizeof * 10);
    scope(exit)
    {
        puts("Freeing ints1!");
        free(ints1);
    }

    auto ints2 = cast(int*)malloc(int.sizeof * 10);
    scope(exit)
    {
        puts("Freeing ints2!");
        free(ints2);
    }
}

Note that exceptions are not implemented for -betterC mode, so there’s no catch, scope(success), or scope(failure).

Optional ModuleInfo

One of the seemingly obscure features dependent upon DRuntime is the ModuleInfo type. It’s a type that works quietly behind the scenes as one of the enabling mechanisms of reflection, and most D programmers will likely never hear of it. That is, unless they start trying to stub out their own minimal runtime. That’s when linker errors start cropping up complaining about the missing ModuleInfo type, since the compiler will have generated an instance of it for each module in the program.

DMD 2.078.0 changes things up. The compiler is aware of the presence of the runtime implementation at compile time, so it can see whether or not the current implementation provides a ModuleInfo declaration. If it does, instances will be generated as appropriate. If it doesn’t, then the instances won’t be generated. This makes it just that much easier to stub out your own runtime, which is something you’d want to do if you were, say, writing a kernel in D.

Other notable changes

New users of DMD on Windows will now have an easier time getting a 64-bit environment set up. It’s still necessary to install the Microsoft build tools, but now DMD will detect the installation of either the Microsoft Build Tools package or Visual Studio at runtime when either -m64 or -m32mscoff is specified on the command line. Previously, configuration was handled automatically only by the installer; installations from the zip archive had to be configured by hand.

DRuntime has been enhanced to allow more fine-grained control over unit tests. Of particular note is the --DRT-testmode flag, which can be passed to any D executable and accepts one of three arguments:

  • “run-main” (the current default): any unit tests present will be run, then main will execute if they all pass.
  • “test-or-main” (the planned default beginning with DMD 2.080.0): any unit tests present will run and the program will exit with a summary of the results; otherwise, main will execute.
  • “test-only”: main will not be executed, but test results will still be summarized if present.
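
For example, assuming a program compiled with -unittest and named tester (a hypothetical name), the mode can be selected when the executable is launched:

tester --DRT-testmode=test-only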

Onward into 2018

This is the first DMD release of 2018. We can’t wait to see what the next 12 months bring for the D programming language community. From everyone at the D Language Foundation, we hope you have a very Happy New Year!

Interfacing D with C: Getting Started

One of the early design goals behind the D programming language was the ability to interface with C. To that end, it provides ABI compatibility, allows access to the C standard library, and makes use of the same object file formats and system linkers that C and C++ compilers use. Most built-in D types, even structs, are directly compatible with their C counterparts and can be passed freely to C functions, provided the functions have been declared in D with the appropriate linkage attribute. In many cases, one can copy a chunk of C code, paste it into a D module, and compile it with minimal adjustment. Conversely, appropriately declared D functions can be called from C.

That’s not to say that D carries with it all of C’s warts. It includes features intended to eliminate, or more easily avoid, some of the errors that are all too easy to make in C. For example, bounds checking of arrays is enabled by default, and a safe subset of the language provides compile-time enforcement of memory safety. D also changes or avoids some things that C got wrong, such as what Walter Bright sees as C’s biggest mistake: conflating pointers with arrays. It’s in these differences of implementation that surprises lurk for the uninformed.
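
As a quick illustration (a contrived sketch; the readAt function is invented for this example), the following D program compiles without complaint, but the out-of-bounds access fails loudly with a RangeError at run time rather than silently reading past the end of the array as the equivalent C code might:

bounds.d

int readAt(int[] arr, size_t i)
{
    // An out-of-bounds index is caught here at run time,
    // throwing core.exception.RangeError.
    return arr[i];
}

void main()
{
    int[3] arr = [1, 2, 3];
    readAt(arr[], 10);
}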

This post is the first in a series exploring the interaction of D and C in an effort to inform the uninformed. I’ve previously written about the basics of this topic in an article at GameDev.net, and in my book, ‘Learning D’, where the entirety of Chapter 9 covers it in depth.

This blog series will focus on those aforementioned corner cases so that it’s not necessary to buy the book or to employ trial and error in order to learn them. As such, I’ll leave the basics to the GameDev.net article and recommend that anyone interfacing D with C (or C++) give it a read along with the official documentation.

The C and D code that I provide to highlight certain behavior is intended to be compiled and linked by the reader. The code demonstrates both error and success conditions. Recognizing and understanding compiler errors is just as important as knowing how to fix them, and seeing them in action can help toward that end. That implies some prerequisite knowledge of compiling and linking C and D source files. Happily, that’s the focus of the next section of this post.

For the C code, we’ll be using the Digital Mars C/C++ and Microsoft C/C++ compilers on Windows, and GCC and Clang elsewhere. On the D side, we’ll be working exclusively with the D reference compiler, DMD. Windows users unfamiliar with setting up DMD to work with the Microsoft tools will be well served by the post on this blog titled, ‘DMD, Windows, and C’.

We’ll finish the post with a look at one of the corner cases, one that is likely to rear its head early on in any exploration of interfacing D with C, particularly when creating bindings to existing C libraries.

Compiling and linking

The articles in this series will present example C source code that is intended to be saved and compiled into object files for linking with D programs. The command lines for generating the object files look pretty much the same on every platform, with a couple of caveats. We’ll look first at Windows, then lump all the other supported systems together in a single section.

In the next two sections, we’ll be working with the following C and D source files. Save them in the same directory (for convenience) and make sure to keep the names distinct. If both files have the same name in the same directory, then the object files created by the C compiler and DMD will also have the same name, causing the latter to overwrite the former. There are compiler switches to get around this, but for a tutorial we’re better off keeping the command lines simple.

chello.c

#include <stdio.h>
void say_hello(void) 
{
    puts("Hello from C!");
}

hello.d

extern(C) void say_hello();
void main() 
{
    say_hello();
}

The extern(C) bit in the declaration of the C function in the D code is a linkage attribute. That’s covered by the other material I referenced above, but it’s a potential gotcha we’ll look at later in this series.

Windows

The official DMD packages for Windows, available at dlang.org as a zip archive and an installer, are the only released versions of DMD that do not require any additional tooling to be installed as a prerequisite to compile D files. These packages ship with everything they need to compile 32-bit executables in the OMF format (again, I refer you to ‘DMD, Windows, and C’ for the details).

When linking any foreign object files with a D program, it’s important that the object file format and architecture match the D compiler output. The former is an issue primarily on Windows, while attention must be paid to the latter on all platforms.

Compiling C source to a format compatible with vanilla DMD on Windows requires the Digital Mars C/C++ compiler. It’s a free download and ships with some of the same tools as DMD. It outputs object files in the OMF format. With both it and DMD installed and on the system path, the above source files can be compiled, linked, and executed like so:

dmc -c chello.c
dmd hello.d chello.obj
hello

The -c option tells DMC to forego linking, causing it to only compile the C source and write out the object file chello.obj.

To get 64-bit output on Windows, DMC is not an option. In that case, DMD requires the Microsoft build tools. Once the MS build tools are installed and set up, open the preconfigured x64 Native Tools Command Prompt from the Start menu and execute the following commands (again, see ‘DMD, Windows, and C’ on this blog for information on how to get the Microsoft build tools and open the preconfigured command prompt, which may have a slightly different name depending on the version of Visual Studio or the MS Build Tools installed):

cl /c chello.c
dmd -m64 hello.d chello.obj
hello

Again, the /c option tells the compiler not to link. To produce 32-bit output with the MS compiler, open a preconfigured x86 Native Tools Command Prompt and execute these commands:

cl /c chello.c
dmd -m32mscoff hello.d chello.obj
hello

DMD recognizes the -m32 switch on Windows, but that tells it to produce 32-bit OMF output (the default), which is not compatible with Microsoft’s linker, so we must use -m32mscoff here instead.

Other platforms

On the other platforms D supports, the system C compiler is likely going to be GCC or Clang, one of which you will already have installed if you have a functioning dmd command. On Mac OS, Clang can be installed with Xcode from the App Store. Most Linux and BSD systems have a GCC package available, such as the often-recommended apt-get install build-essential on Debian and Debian-based systems. Please see the documentation for your system for details.

On these systems, the environment variable CC is often set to the system compiler command. Feel free to substitute either gcc or clang for CC in the lines below as appropriate for your system.

CC -c chello.c
dmd hello.d chello.o
./hello

This will produce either 32-bit or 64-bit output, depending on your system configuration. If you are on a 64-bit system and have 32-bit developer tools installed, you can pass -m32 to both CC and dmd to generate 32-bit binaries.
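
For example, on a 64-bit Linux system with GCC and the 32-bit development libraries installed, a 32-bit build might look like this:

gcc -m32 -c chello.c
dmd -m32 hello.d chello.o
./hello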

The long way

Now that we’re configured to compile and link C and D source in the same binary, let’s take a look at a rather common gotcha. To fully appreciate this one, it helps to compile it on both Windows and another platform.

One of the features of D is that all of the integral types have a fixed size. A short is always 2 bytes and an int is always 4. This never changes, no matter the underlying system architecture. This is quite different from C, where the spec only imposes relative requirements on the size of each integral type and leaves the specifics to the implementation. Even so, there is enough agreement among modern compilers that, on every platform D currently supports, the sizes of almost all the integral types match those in D. The exceptions are long and ulong.

In D, long and ulong are always 8 bytes across all platforms. This never changes. It lines up with the corresponding C types just fine on most 64-bit systems under the version(Posix) umbrella, where the C long and unsigned long are also 8 bytes. However, they are 4 bytes on 32-bit architectures. Moreover, they’re always 4 bytes on Windows, even on a 64-bit architecture.

Most C code these days will account for these differences either by using the preprocessor to define custom integral types or by making use of the C99 stdint.h where types such as int32_t and int64_t are unambiguously defined. Yet, it’s still possible to encounter C libraries using long in the wild.

Consider the following C function:

maxval.c

#include <limits.h>
unsigned long max_val(void)
{
    return ULONG_MAX;
}

The naive D implementation looks like this:

showmax1.d

extern(C) ulong max_val();
void main()
{
    import std.stdio : writeln;
    writeln(max_val());
}

What this does depends on the C compiler and architecture. For example, on Windows with dmc I get 7316910580432895, with x86 cl I get 59663353508790271, and 4294967295 with x64 cl. The last one is actually the correct value, even though the size of the unsigned long on the C side is still 4 bytes as it is in the other two scenarios. I assume this is because the x64 ABI stores return values in the 8-byte RAX register, so it can be read into the 8-byte ulong on the D side with no corruption. The important point here is that the two values in the x86 code are garbage because the D side is expecting a 64-bit return value from 32-bit registers, so it’s reading more than it’s being given.

Thankfully, DRuntime provides a way around this in core.stdc.config, where you’ll find c_long and c_ulong. Both of these are conditionally configured to match the C runtime implementation and architecture being targeted at compile time. With this, all that’s needed is to change the declaration of max_val in the D module, like so:

showmax2.d

import core.stdc.config : c_ulong;
extern(C) c_ulong max_val();

void main()
{
    import std.stdio : writeln;
    writeln(max_val());
}

Compile and run with this and you’ll find it does the right thing everywhere. On Windows, it’s 4294967295 across the board.

Though less commonly encountered, core.stdc.config also declares a portable c_long_double type to match any long double that might pop up in a C library to which a D module must bind.
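
For example, a D declaration for a hypothetical C function long double scaled(long double x) (the name is invented for illustration) would look like this:

import core.stdc.config : c_long_double;
extern(C) c_long_double scaled(c_long_double x);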

Looking ahead

In this post, we’ve gotten set up to compile and link C and D in the same executable and have looked at the first of several potential problem spots. We used DMD here, but it should be possible to substitute one of the other D compilers (ldc or gdc) without changing the command line (with the exception of -m32mscoff, which is specific to DMD). The next post in this series will focus entirely on getting D arrays and C arrays to cooperate. See you there!

Project Highlight: Diamond MVC Framework

Anyone who has been around the D community for longer than an eye blink will have heard of vibe.d, undoubtedly the most widely-used web application framework written in the D programming language. Those same people could be excused if they haven’t yet heard of Diamond, announcements for which have only started showing up in the D forums relatively recently.

According to its website, Diamond is “a full-stack, cross-platform MVC/Template Framework” that’s “inspired by ASP.NET and uses vibe.d for its backend”. Jacob Jensen, the project’s author, explains:

I have always been interested in web development, so one of the few projects I started writing in D was a web server using Phobos’s std.socket. It wasn’t a notable project or anything, but more an experiment. Then I discovered vibe.d and toyed around with it. Unfortunately, coming from a background working with ASP.NET using Razor, I wasn’t a big fan of the Diet templates. So initially I started Diamond as an alternative template engine. However, I ended up just adding more and more to the project and it soon became more and more independent. At this point, you don’t really write a lot of vibe.d code when using it, because the most general vibe.d features have wrappers in Diamond to better interact with the rest of the project.

Development on Diamond began in early 2016, but he put it aside a few months later. Then in October of 2017, after picking up some contract web app work, Diamond was resurrected in its 2.0 form.

I decided I wanted to use D, so I simply did a complete revamp of the project and plan to keep maintaining it.

His biggest hurdle has been keeping the design of the framework user-friendly and minimizing complex interactions between controllers, views, and models.

It was a big challenge to ensure that everything worked together in a way that made it feel natural to work with. On top of that I had to make sure that Diamond worked under multiple build types. At first Diamond was written solely with the web in mind and thus only supported websites and web APIs, but I saw the potential to use the template engine for more than just the web, like email templates. Thus I introduced yet a third way of building Diamond applications, which had to be completely separated from the web part of Diamond without introducing complexity into user code or the build process. By introducing stand-alone support, Diamond is now able to be used with existing projects that aren’t already using it for the web, e.g. someone could use Diamond to extend their existing web pages without having to switch the whole project to Diamond, or simply use Diamond for only a small portion of it.

Aside from the challenge of maintaining user-friendliness, Jacob says he’s encountered only a few issues in developing the framework, most of which came when he was refactoring for the 2.0 release. One in particular is interesting, as his solution for it was a D language feature you don’t often hear much about.

When I was introducing attributes to controllers to avoid manual mapping of actions, it took a while to figure out the best approach to it without having to pass additional information about the controller to its base class.

To demonstrate, Controller subclasses could originally be declared like so:

class MyController(TView) : Controller!TView
{
    ...
}

After he initially added attributes, the refactoring required the base class template to know the derived type at compile time in order to reflect on its attributes. His initial solution required that subclasses specify their own types as an additional template parameter to the Controller template in the class declaration.

class MyController(TView) : Controller!(TView, MyController!TView)
{
    ...
}

He didn’t like it, but it was the only way he could see to make the base class aware of the derived type. Then he discovered D’s template this parameters.

Template this parameters allow any templated member function to know at compile time the static, i.e. declared, type of the instance on which the function is being called.

module base;
class Base 
{
    void printType(this T)() 
    {
        import std.stdio;
        writeln(typeid(T));
    }
}

class Derived : Base {}

void main()
{
    Derived d1 = new Derived;
    auto d2 = new Derived;
    Base b1 = new Derived;

    d1.printType();
    d2.printType();
    b1.printType();
}

And this prints (in modulename.TypeName format):

base.Derived
base.Derived
base.Base

In Diamond, this is used in the Controller constructor in order to parse the UDAs (User Defined Attributes) attached to the derived type at compile time:

class Controller(TView) : BaseController
{
    ...
    this(this TController)(TView view)
    {
        ...

        static if (hasUDA!(TController, HttpAuthentication))
        {
            ...
        }

        static if (hasUDA!(TController, HttpVersion))
        {
            ...
        }
        ...
    }
}

The caveat, and the price Jacob is willing to pay for the increased convenience to users, is that instances of derived types should never be declared to have the type of the base class. When working with templated types in D, it’s idiomatic to use type inference anyway:

// This won't pick up the MyController attributes, as the declared
// type is that of the base class
Controller!ViewImpl controller1 = new MyController!ViewImpl;

// But this will
MyController!ViewImpl controller3 = new MyController!ViewImpl;

// And so will this -- it's also more idiomatic
auto controller2 = new MyController!ViewImpl;

Overall, Jacob has found the transition from C# to D fairly painless.

Most code I was used to writing, coming from C#, is pretty straight-forward in D. One of the pros of D, however, is its compile-time functionality. I use it heavily in Diamond to parse templates, map routes and controller actions, etc. It’s a really powerful tool in development and probably the most powerful tool in D. I also really like templates in D. They’re implemented in a way that doesn’t make them seem complex, unlike in C++, where templates can often seem obscure and cryptic. D is probably the most natural programming language that I’ve used.

Diamond indirectly supports Mongo and Redis through vibe.d, and has its own MySQL ORM interface that uses the native MySQL library under the hood. He has some plans to improve upon the database support, however.

I plan to rewrite the whole MySQL part, since it currently uses some deprecated features – it was based on some old code I had been using. Along with that, I plan on implementing some “generic services” that can be used to create internal services in the project, which will of course wrap database engines such as MySQL, Mongo, Redis, etc., creating a similar API between them all and exposing an easier way to implement sharding.

He also intends to add support for textual data formats other than JSON (such as XML) to make Diamond compatible with SOAP or WCF services, add improved support for components in the view, and provide better integration with JavaScript. He also would like to implement an app server for hosting Diamond applications.

Anyone intending to use D for web work today who, like Jacob, has experience using ASP.NET and Razor should feel right at home using Diamond. For the rest, it’s an alternative to using vibe.d directly that some may find more comfortable. You can find the Diamond source, the current documentation, and the in-development official website (for which Jacob is dog-fooding Diamond) all at GitHub.

DConf 2018: Assemblage in Bavaria

It’s official! The D Language Foundation has put out a call for submissions for the next iteration of the annual gathering of D programming language enthusiasts. DConf 2018, hosted by QA Systems, is taking place in Munich from May 2nd to the 5th, 2018.

This time around, there’s a focus on growth and outreach. DConf has always been open to all, but past editions largely targeted those already “in the know”. For DConf 2018, the D Language Foundation is actively reaching out, encouraging anyone with little or no D language experience to stop by and see what all the fuss is about.

In the coming months, the D Blog will feature a series of posts related to DConf 2018. To get us started, Andrei Alexandrescu, Vice President and Treasurer of the D Language Foundation, sat down to answer a few questions about the event.


Q: Thanks for taking time out of your schedule for this, Andrei. The first thing I want to get to is the choice of location. At the end of DConf 2017, there was a lot of speculation about where the next edition would be held. We’ve seen two in Menlo Park, California, one in Orem, Utah, and two in Berlin. What led to the choice of Munich?

A: It has a lot to do with my recent visit there. I had mentioned a while ago to our tireless collaborator Sebastian Wilzbach (who studies at both the Technical University of Munich and Ludwig Maximilian University) about the annual classes I teach in neighboring Stuttgart. He suggested I make two trips in one and give a talk in Munich as well.

Once we committed to a date, I was shocked by the earnestness of everybody involved with organizing. The event filled within an hour of opening, in comparable amounts by existing D programmers (there’s a strong D community in Munich) and by curious programmers coming from other languages. There was even some competition among companies willing to host the event.

We ended up holding it at Brainlab’s new headquarters (check it out, they are a great innovator in medical technology). The event was a triumph! The folks in the audience were that combination of smart, receptive, and inquisitive that makes for an amazing interaction. We started at 6:30 and quite a few of us segued into beers, dinner, and of course more chatting, to finally part around midnight.

At that point I thought, Munich sounds like a perfect place for DConf. Later I spoke to my business partner (Andreas Sczepansky, owner of QA Systems) about the great reception the talk got in Munich. He got intrigued and agreed to work with us on DConf 2018. And here we are.

Q: What can attendees expect to see at DConf 2018?

A: We’re counting on a strong technical program, as has been the case in the past events. Also, last year’s day-long hackathon (a largely unstructured “let’s work on cool stuff in small groups” day) was surprisingly successful and enjoyed by everyone involved. So we’re making it bigger and hopefully better this year. It will be on the last day of the event, May 5th.

This year we also want to promote a growth theme. We’re working on bringing a strong outside keynote speaker, and QA Systems will help us to market to companies and grass-roots coders who are currently using other languages. We believe D offers many strategic advantages to the high-tech milieu in Bavaria and beyond.

Q: What do you mean by that? What makes Bavaria special?

A: I noticed there’s a strong IT industry in the area built around automotive, industrial machinery, healthcare, scientific computing, and more. Really serious software with difficult demands and high stakes. We’re talking about systems ranging from memory-constrained embedded systems to high-performance desktop software to large systems that take a long time to design, build, and test. D is all about building fast software, fast. So we have a great opportunity to make the strong case that the D language could help these application domains.

Q: You and Walter Bright have traditionally given the opening and closing keynotes at every DConf. What are you guys planning to talk about this time?

A: I know Walter is considering giving a talk on Project Detente – a multifaceted approach to smooth interoperation with C and C++ that also allows easy incremental migration of large projects from those languages to D. As for me, I haven’t decided yet. I’m really excited by the opportunities opened by this Design by Introspection thing I discussed in my DConf 2017 keynote [Also, see the blog post he wrote about his presentation at Google’s Tel Aviv campus – Ed.].

Q: Last question: what’s the elevator pitch for DConf? If you only had 30 seconds to sell a prospective attendee on the event, what would you say?

A: D is a language with depth. Richness. It has unique solutions to some difficult problems, such as reconciling compile-time computation, partial evaluation, domain-specific languages, and metaprogramming all together in a wholesome manner. Such matters are so fundamental to the way we design, build, and execute our programs that we either consider them solved or unsolvable. Chances are, attending DConf will make you like the D language more. But more importantly, your view of your own métier will be improved regardless of your languages of choice.


Be sure to keep an eye on this space for more details about DConf 2018 as they are released. And if you’re planning to submit a talk, don’t procrastinate. The submission deadline is Feb 25th.

See you in Munich!

DMD 2.077.0 Released

The D Language Foundation is happy to announce DMD 2.077.0. This latest release of the reference compiler for the D programming language is available from the dlang.org Downloads page. Among the usual slate of bug and regression fixes, this release brings a couple of particularly beneficial enhancements that will have an immediate impact on some existing projects.

Cutting symbol bloat

Thanks to Rainer Schütze, the compiler now produces significantly smaller mangled names in situations where they had begun to get out of control, particularly in the case of IFTI (Implicit Function Template Instantiation) where Voldemort types are involved. That may call for a bit of a detour here.

The types that shall not be named

Voldemort types are perhaps one of D’s more interesting features. They look like this:

auto getHeWhoShallNotBeNamed() 
{
    struct NoName 
    {
        void castSpell() 
        {
            import std.stdio : writeln;
            writeln("Crucio!");
        }           
    }
    return NoName();
}

void main() 
{
    auto voldemort = getHeWhoShallNotBeNamed();
    voldemort.castSpell();
}

Here we have an auto function, a function for which the return type is inferred, returning an instance of a type declared inside the function. It’s possible to access public members on the instance even though its type can never be named outside of the function where it was declared. Coupled with type inference in variable declarations, it’s possible to store the returned instance and reuse it. This serves as an extra level of encapsulation where it’s desired.

In D, for any given API, as far as the world outside of a module is concerned, module private is the lowest level of encapsulation.

module foobar;

private struct Foo
{
    int x;
}

struct Bar 
{
    private int y;
    int z;
}

Here, the type Foo is module private. Bar is shown here for completeness, as those new to D are often surprised to learn that private members of an aggregate type are also module private (D’s equivalent of the C++ friend relationship). There is no keyword that indicates a lower level of encapsulation.
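
To illustrate (a small sketch continuing the foobar module above; the sum function is invented for this example), any other code in the same module has full access to both:

// Still inside module foobar
int sum(Bar b)
{
    auto f = Foo(10);
    return f.x + b.y + b.z; // private members are fair game within the module
}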

Sometimes you just may not want Foo to be visible to the entire module. While it’s true that anyone making a breaking change to Foo’s interface also has access to the parts of the module that break (which is the rationale behind module-private members), there are times when you may not want the entire module to have access to Foo at all. Voldemort types fill that role of hiding details not just from the world, but from the rest of the module.

The evil side of Voldemort types

One unforeseen consequence of Voldemort types that was first reported in mid-2016 was that, when used in templated functions, they caused a serious explosion in the size of the mangled function names (in some cases up to 1 MB!), making for some massive object files. There was a good bit of forum discussion on how to trim them down, with a number of ideas tossed around. Ultimately, Rainer Schütze took it on. His persistence has resulted in shorter mangled names all around, but the wins are particularly impressive when it comes to IFTI and Voldemort types. (Rainer is also the maintainer of Visual D, the D programming language plugin for Visual Studio.)

D’s name-mangling scheme is detailed in the ABI documentation. The description of the new enhancement is in the section titled ‘Back references’.

Improved vectorization

D has long supported array operations such as element-wise addition, multiplication, etc. For example:

int[] arr1 = [0, 1, 2];
int[] arr2 = [3, 4, 5];
int[3] arr3 = arr1[] + arr2[];
assert(arr3 == [3, 5, 7]);

In some cases, such operations could be vectorized. The reason it was some cases and not all cases is because dedicated assembly routines were used to achieve the vectorization and they weren’t implemented for every case.

With 2.077.0, that’s no longer true. Vectorization is now templated so that all array operations benefit. Any codebase out there using array operations that were not previously vectorized can expect a sizable performance increase for those operations thanks to the increased throughput (though whether an application benefits overall is of course context-dependent). How the benefit is received depends on the compiler being used. From the changelog:

For GDC/LDC the implementation relies on auto-vectorization, for DMD the implementation performs the vectorization itself. Support for vector operations with DMD is determined statically (-mcpu=native, -mcpu=avx2) to avoid binary bloat and the small test overhead. DMD enables SSE2 for 64-bit targets by default.

Note that the changelog initially showed -march instead of -mcpu in the quoted lines, and the updated version had not yet been posted when this announcement was published.
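
For example, to let DMD use whatever vector instructions the machine doing the compiling supports (with app.d standing in for your own source file):

dmd -mcpu=native app.d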

DMD’s vectorization is implemented in terms of core.simd, which is also part of DRuntime’s public API.

The changelog also notes that there’s a potential for division performed on float arrays in existing code to see a performance decrease in exchange for an increase in precision.

The implementation no longer weakens floating point divisions (e.g. ary[] / scalar) to multiplication (ary[] * (1.0 / scalar)) as that may reduce precision. To preserve the higher performance of float multiplication when loss of precision is acceptable, use either -ffast-math with GDC/LDC or manually rewrite your code to multiply by (1.0 / scalar) for DMD.
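
A sketch of that manual rewrite for DMD users (the function and parameter names are just for illustration):

void scaleDown(float[] samples, float divisor)
{
    // Precise, but now compiled as an actual per-element division.
    samples[] = samples[] / divisor;

    // Faster when the loss of precision is acceptable: multiply by
    // the reciprocal instead.
    samples[] = samples[] * (1.0f / divisor);
}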

Other assorted treats

Just the other day, someone asked in the forums if DMD supports reproducible builds. As of 2.077.0, the answer is affirmative. DMD now ensures that compilation is deterministic such that, given the same source code and the same compiler version, the binaries produced will be identical. If this is important to you, be sure not to use any of the non-deterministic lexer tokens (__DATE__, __TIME__, and __TIMESTAMP__) in your code.

DMD’s -betterC command line option gets some more love in this release. When it’s enabled, DRuntime is not available. Library authors can now use the predefined version D_BetterC to determine when that is the case so that, where it’s feasible, they can more conveniently support applications with and without the runtime. Also, the option’s behavior is now documented, so it’s no longer necessary to go to the forums or parse through search results to figure out what is and isn’t actually supported in BetterC mode.
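
A minimal sketch of how a library might take advantage of it (the logError function is invented for illustration):

void logError(string msg)
{
    version(D_BetterC)
    {
        // No DRuntime here, so fall back on the C standard library.
        import core.stdc.stdio : printf;
        printf("%.*s\n", cast(int)msg.length, msg.ptr);
    }
    else
    {
        import std.stdio : writeln;
        writeln(msg);
    }
}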

The entire changelog is, as always, available at dlang.org.

DMD, Windows, and C

The ability to interface with C was baked into D from the beginning. Most of the time, it’s something that requires little thought – as long as the declarations on the D side match what exists on the C side, things will usually just work. However, there are a few corner-case gotchas that arise from the simple fact that D, though compatible, is not C.

An upcoming series of posts here on the D blog will delve into some of these dark corners and shine a light on the traps lying in wait. In these posts, readers will be asked to follow along by compiling and executing the examples themselves so they may more thoroughly understand the issues discussed. This means that, in addition to a D compiler, readers will need access to a C compiler.

That raises a potential snafu. On the systems that the DMD frontend groups under the version(Posix) umbrella, it’s a reliable assumption that a C compiler is easily available (if a D compiler is installed and functioning properly, the C compiler will already be installed). The concept of a system compiler is a long established tradition on those systems. On Windows… not so much.

So before diving into a series about C and D, a bit of a primer is called for. That’s where this post comes in. The primary goal is to help ensure a C environment is installed and working on Windows. It’s also useful to understand why things are different on that platform than on the others. Before we get to the why, we’ll dig into the how.

First, assume we have the following two source files in the same directory.

cfoo.c

#include <stdio.h>

void say_hello(void) 
{
    puts("Hello!");
}

dfoo.d

extern(C) void say_hello();

void main() 
{
    say_hello();
}

Now let’s see how to get the two working together.

DMD and C

The DMD packages for Windows ship with everything the compiler needs: a linker and other tools, plus a handful of critical system libraries. So on the one hand, Windows is the only platform where DMD has no external dependencies out of the box. On the other hand, it’s the only platform where a working DMD installation does not imply a C compiler is also installed. And these days, the out-of-the-box experience often isn’t the one you want.

On all the other platforms, the C compiler option is the system compiler, which in practice means GCC or Clang. The system linker to which DMD sends its generated object files might be ld, lld, ld.gold, or any ld-compatible linker. On Windows, there are currently two compiler choices, and neither can be assumed to be installed by default: the Digital Mars C and C++ compiler, dmc, or the Microsoft compiler, cl. We’ll look at each in turn.

DMD and DMC

The linker (Optlink) and other tools that ship with DMD are also part of the DMC distribution. DMD uses these tools by default (or when the -m32 switch is passed on the command line). To link any C objects or static libraries, they should be in the OMF format. Compilers that can generate OMF are a rare breed these days, and while something like Open Watcom may work, using DMC will guarantee 100% compatibility.

The DMC package (version 8.57 as I write) can be downloaded from digitalmars.com. It’s a 3 MB zip file that can be unzipped anywhere. Personally, since I only ever use it in conjunction with DMD, I keep it in C:\D so that the dm directory is a sibling of the dmd2 directory. Once it’s unzipped, it can be added to the path if desired. Be aware that some of the tools DMD and DMC ship with may conflict with tools in other packages if they are on the global path.

For example, both come with Digital Mars make and Optlink, which is named link.exe. The former might conflict with Cygwin, or a MinGW distribution that’s independent of MSYS2 (if mingw32-make has been renamed), and the latter with Microsoft’s linker (which generally shouldn’t be on the global path anyway). Some may prefer just to keep it all off the global path. In that case, it’s simple to configure a command prompt shortcut that sets the PATH when it launches. For example, create a batch file that looks like this:

echo Welcome to your Digital Mars environment.
@set PATH=C:\D\dmd2\windows\bin;C:\D\dm\bin;%PATH%

Save it as C:\D\dmenv.bat. Right click an empty spot on the desktop and, from the popup menu, select New->Shortcut. In the location field, enter the following:

C:\Windows\System32\cmd.exe /k C:\D\dmenv.bat

Now you have a shortcut that, when double clicked, will launch a command prompt that has both dmd and dmc on the path.

Once installed, documentation on the command-line switches for the tools is available at the Digital Mars site. The most relevant are the docs for DMC, Optlink, and Librarian (lib.exe). The latter two will come in handy even when doing pure D development with vanilla DMD, as those are the tools needed when manually manipulating its object file output.

That’s all there is to it. As long as both dmc.exe and dmd.exe are on the path in any given command prompt, both compilers will find the tools they need via the default settings in their configuration files. For knocking together quick tests with both C and D on Windows, it’s a quick thing to launch a command prompt, compile & link, and execute:

dmc -c cfoo.c
dmd dfoo.d cfoo.obj
dfoo

Easy peasy. Now let’s look at the other option.

DMD and Microsoft’s CL

Getting DMD to work with the Microsoft toolchain requires installing the Microsoft build tools and the Windows SDK. The easiest way to get everything is to use one of the Community editions of Visual Studio. The installer will download and install all the tools and the SDK. The latest is always available from https://www.visualstudio.com. With VS 2017, the installer has been overhauled such that it’s possible to minimize the size of the install more than was possible with past editions. An alternative is to install the Microsoft Build Tools and the Windows SDK separately. However, this is still a large install that isn’t much of a win in light of the new VS 2017 installer options (for those on Windows 8.1 or 10).

Once the tooling is installed, DMD’s configuration file needs to be modified to point its environment variables to the proper locations. Rather than repeat all of that here, I’ll direct you to the DMD installation page at the D Wiki. One of the reasons to prefer the DMD installer over the zip archive is that it will detect any installation of Visual Studio or the Microsoft Build Tools and automatically modify the configuration as needed. This is more convenient than needing to remember to update the configuration every time a new version of DMD is installed. It also offers to install VS 2013 Community if the tooling isn’t found, can install Visual D (the D plugin for Visual Studio), and will add DMD to the system path if you want it to.

It’s a bit of an annoyance to launch Visual Studio for simple tests between C and D. Since it’s not recommended to put the MS tools on the system path, each VS and Microsoft Build Tools installation ships with a number of batch files that will set the path for you (like the one we created for DMC above). The installer sets up shortcuts in the Windows Start menu. There are several different options to choose from. To launch a 64-bit environment with VS 2017 (or the 2017 build tools), find Visual Studio 2017 in the Start menu and select x64 Native Tools Command Prompt for VS 2017. For VS 2015 (or the 2015 build tools), go to Visual Studio 2015 and click on VS 2015 x64 Native Build Tools Command Prompt. Similar options exist for 32-bit (where x86 replaces x64) and cross compiling.

From the VS-enabled 64-bit environment, we can run the following commands to compile our two files.

cl /c cfoo.c
dmd -m64 dfoo.d cfoo.obj
dfoo

In a 32-bit VS environment, replace -m64 with -m32mscoff.

The consequences of history

When a new programming language is born these days, it’s not uncommon for its tooling to be built on top of an existing toolchain rather than completely from scratch. Whether we’re talking about languages like Kotlin built on the JVM, or those like Rust using LLVM, reusing existing tools saves time and allows the developers to focus their precious man-hours on the language itself and any language-specific tooling they require.

When Walter Bright first started putting D together in 1999, that trend had not yet come around. However, he already had an existing toolchain in the form of the Digital Mars C and C++ compiler tools. So it was a no-brainer to make use of his existing tools and compiler backend and just focus on making a new frontend for DMD. There were four major side-effects of this decision, all of which had varying consequences in D’s future development.

First, the DMC tools were Windows-only, so the early versions of DMD would be as well. Second, the linker, Optlink, only supports the OMF format. That meant that DMD’s output would be incompatible with the more common COFF output of most modern C and C++ compilers on Windows. Third, the DMC tools do not support 64-bit, so DMD would be restricted to 32-bit output. Finally, Symantec had the legal rights to the existing backend, which meant their license would apply to DMD. While the frontend was open source, the backend license required one to get permission from Walter to distribute DMD (on a side note, this prevented DMD from being included in official Linux package repositories once Linux support was added, but Symantec granted permission to relicense the backend earlier this year and it is now freely distributable under the Boost license).

DMD 0.00 was released in December of 2001. The 0.63 release brought Linux support in May of 2003. Walter could have based the Linux version on the GCC backend, but as a business owner, and through a caution born from past experience, he was concerned about any legal issues that could arise from his working with GPL code on one platform and maintaining a proprietary backend on another. Instead, he modified the DMD backend to generate ELF objects and hand them off to the GCC tools. This decision to enhance the backend became the approach for all new formats going forward. He did the same when adding support for Mac OS X: he modified the backend to work with the Mach-O object format.

Along with the new formats, the compiler gained the ability to generate 64-bit binaries everywhere except Windows. In order to interface with C on Windows, it was usually necessary to convert COFF object files and static libraries to OMF, to use a tool like coffimplib to generate DLL import libraries in the OMF format, or to create dynamic bindings and load DLLs manually via LoadLibrary and GetProcAddress. Then Remedy Games decided to use D.

Quantum Break was the first AAA game title to ship with D as part of its development process. Remedy used it for their gameplay code, creating their own open source tool to bind with their C++ game engine. Before they could get that far, however, they needed 64-bit support in DMD on Windows. That was the motivator to get it implemented. It took a while (apparently, there are some undocumented quirks in Microsoft’s variant of COFF, a.k.a. PECOFF, a.k.a. MS-COFF), but Walter eventually got it done, and support for 32-bit COFF along with it. Again, as he had on other platforms, he modified the backend to generate object files in the new format.

This is why it’s necessary to have the Microsoft toolchain installed in order to produce 64-bit binaries with DMD on Windows. Microsoft’s cl is as close to a system compiler as one is going to get on Windows. There is, however, an option that has not yet been fully explored. It’s a toolchain that can be freely distributed, packaged with a reasonable download size, supports 32-bit and 64-bit output, and is mostly compatible with PECOFF. There is a possibility that it may be investigated as an option for future DMD releases to be built upon.

Going from here

Now that this primer is out of the way, the short series on C is just about ready to go. It will kick off with a brief summary of existing material, showing how easy it is to get D and C to work together in the general case. That will be followed up by two posts on arrays and strings. This is where most of the gotchas come into play, and anyone using D and C in the same program should understand what they are and how to avoid them.

Go Your Own Way (Part Two: The Heap)

This post is part of an ongoing series on garbage collection in the D Programming Language, and the second of two regarding the allocation of memory outside of the GC. Part One discusses stack allocation. Here, we’ll look at allocating memory from the non-GC heap.

Although this is only my fourth post in the series, it’s the third in which I talk about ways to avoid the GC. Lest anyone jump to the wrong conclusion, that fact does not signify an intent to warn programmers away from the D garbage collector. Quite the contrary. Knowing how and when to avoid the GC is integral to understanding how to efficiently embrace it.

To hammer home a repeated point, efficient garbage collection requires reducing stress on the GC. As highlighted in the first and subsequent posts in this series, that doesn’t necessarily mean avoiding it completely. It means being judicious in how often and how much GC memory is allocated. Fewer GC allocations means fewer opportunities for a collection to trigger. Less total memory allocated from the GC heap means less total memory to scan.

It’s impossible to make any accurate, generalized statement about what sort of applications may or may not feel an impact from the GC; such is highly application specific. What can be said is that it may not be necessary for many applications to temporarily avoid or disable the GC, but when it is, it’s important to know how. Allocating from the stack is an obvious approach, but D also allows allocating from the non-GC heap.

The ubiquitous C

For better or worse, C is everywhere. Any software written today, no matter the source language, is probably interacting with a C API at some level. Despite the C specification defining no standard ABI, its platform-specific quirks and differences are understood well enough that most languages know how to interface with it. D is no exception. In fact, all D programs have access to the C standard library by default.

The core.stdc package, part of DRuntime, is a collection of D modules translated from C standard library headers. When a D executable is linked, the C standard library is linked along with it. All that need be done to gain access is to import the appropriate modules.

import core.stdc.stdio : puts;
void main() 
{
    puts("Hello C standard library.");
}

Some who are new to D may be laboring under the misunderstanding that functions which call into C require an extern(C) annotation, or, after Walter Bright’s recent ‘D as a Better C’ article, must be compiled with -betterC on the command line. Neither is true. Normal D functions can call into C without any special effort beyond the presence of an extern(C) declaration of the function being called. In the snippet above, the declaration of puts is in the core.stdc.stdio module, and that’s all we need to call it.
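To drive the point home, here’s a minimal sketch using C’s abs (which core.stdc.stdlib would normally declare for us; the hand-written declaration here is purely illustrative):

// Declaring the C function ourselves. In practice, importing
// core.stdc.stdlib would provide this declaration.
extern(C) int abs(int n);

void main()
{
    import std.stdio : writeln;

    // A normal D function calling into C -- no extern(C) on main,
    // no -betterC required.
    writeln(abs(-42));    // prints 42
}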

malloc and friends

Given that we have access to C’s standard library in D, we therefore have access to the functions malloc, calloc, realloc and free. All of these can be made available by importing core.stdc.stdlib. And thanks to D’s slicing magic, using these functions as the foundation of a non-GC memory management strategy is a breeze.

import core.stdc.stdlib;
void main() 
{
    enum totalInts = 10;
    
    // Allocate memory for 10 ints
    int* intPtr = cast(int*)malloc(int.sizeof * totalInts);

    // assert(0) (and assert(false)) will always remain in the binary,
    // even when asserts are disabled, which makes it nice for handling
    // malloc failures    
    if(!intPtr) assert(0, "Out of memory!");

    // Free when the function exits. Not necessary for this example, but
    // a potentially useful strategy for temporary allocations in functions 
    // other than main.
    scope(exit) free(intPtr);

    // Slice the D pointer to get a more manageable length/pointer pair.
    int[] intArray = intPtr[0 .. totalInts];
}

Not only does this bypass the GC, it also bypasses D’s default initialization. A GC-allocated array of type T would have all of its elements initialized to T.init, which is 0 for int. If mimicking D’s default initialization is the desired behavior, more work needs to be done. In this example, we could replace malloc with calloc for the same effect, but that would only be correct for integrals. float.init, for example, is float.nan rather than 0.0f. We’ll come back to this later in the article.
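In the meantime, a rough sketch of what that extra work might look like (this helper is illustrative and not something defined elsewhere in the article) is to fill the freshly allocated slice with T.init:

import core.stdc.stdlib : malloc;

// Allocate count instances of T and mimic D's default initialization
// by filling the memory with T.init (float.nan for float, 0 for int, etc.).
// Fine for basic types; aggregate types are better handled with emplace,
// which is covered later in the article.
T[] mallocDefaultInited(T)(size_t count)
{
    auto ptr = cast(T*)malloc(T.sizeof * count);
    if(!ptr) assert(0, "Out of memory!");

    auto arr = ptr[0 .. count];
    arr[] = T.init;
    return arr;
}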

Of course, it would be more idiomatic to wrap both malloc and free and work with slices of memory. A minimal example:

import core.stdc.stdlib;

// Allocate a block of untyped bytes that can be managed
// as a slice.
void[] allocate(size_t size)
{
    // malloc(0) is implementation defined (might return null 
    // or an address), but is almost certainly not what we want.
    assert(size != 0);

    void* ptr = malloc(size);
    if(!ptr) assert(0, "Out of memory!");
    
    // Return a slice of the pointer so that the address is coupled
    // with the size of the memory block.
    return ptr[0 .. size];
}

T[] allocArray(T)(size_t count) 
{ 
    // Make sure to account for the size of the
    // array element type!
    return cast(T[])allocate(T.sizeof * count); 
}

// Two versions of deallocate for convenience
void deallocate(void* ptr)
{   
    // free handles null pointers fine.
    free(ptr);
}

void deallocate(void[] mem) 
{ 
    deallocate(mem.ptr); 
}

void main() {
    import std.stdio : writeln;
    int[] ints = allocArray!int(10);
    scope(exit) deallocate(ints);
    
    foreach(i; 0 .. 10) {
        ints[i] = i;
    }

    foreach(i; ints[]) {
        writeln(i);
    }
}

allocate returns void[] rather than void* because it carries with it the number of allocated bytes in its length property. In this case, since we’re allocating an array, we could instead rewrite allocArray to slice the returned pointer immediately, but anyone calling allocate directly would still have to take into account the size of the memory. The disassociation between arrays and their length in C is a major source of bugs, so the sooner we can associate them the better. Toss in some templates for calloc and realloc and you’ve got the foundation of a memory manager based on the C heap.

On a side note, the preceding three snippets (yes, even the one with the allocArray template) work with and without -betterC. But from here on out, we’ll restrict ourselves to features in normal D code.
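As another aside, the calloc and realloc counterparts hinted at above might look something like this minimal sketch (hypothetical helpers, not part of the article’s memory manager):

import core.stdc.stdlib : calloc, realloc;

// Like allocate, but returns zero-initialized memory.
void[] allocateZeroed(size_t size)
{
    assert(size != 0);
    void* ptr = calloc(1, size);
    if(!ptr) assert(0, "Out of memory!");
    return ptr[0 .. size];
}

// Resize a block previously obtained from allocate or allocateZeroed.
// Any new bytes are uninitialized, and the block must not have come
// from the GC heap.
void[] reallocate(void[] mem, size_t newSize)
{
    assert(newSize != 0);
    void* ptr = realloc(mem.ptr, newSize);
    if(!ptr) assert(0, "Out of memory!");
    return ptr[0 .. newSize];
}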

Avoid leaking like a sieve

When working directly with slices of memory allocated outside of the GC heap, be careful about appending, concatenating, and resizing. By default, the append (~=) and concatenate (~) operators on built-in dynamic arrays and slices will allocate from the GC heap. Concatenation will always allocate a new memory block for the combined array. Normally, the append operator will allocate to expand the backing memory only when it needs to. As the following example demonstrates, it always needs to when it’s given a slice of non-GC memory.

import core.stdc.stdlib : malloc;
import std.stdio : writeln;

void main()
{
    int[] ints = (cast(int*)malloc(int.sizeof * 10))[0 .. 10];
    writeln("Capacity: ", ints.capacity);

    // Save the array pointer for comparison
    int* ptr = ints.ptr;
    ints ~= 22;
    writeln(ptr == ints.ptr);
}

This should print the following:

Capacity: 0
false

A capacity of 0 on a slice indicates that the next append will trigger an allocation. Arrays allocated from the GC heap normally have space for extra elements beyond what was requested, meaning some appending can occur without triggering a new allocation. Capacity is more a property of the memory backing the array than of the array itself. The runtime does some internal bookkeeping on GC-allocated memory to track how many elements the memory block can hold, so it knows at any given time whether a new allocation is needed. Here, because the memory for ints was not allocated by the GC, none of that bookkeeping exists for the existing memory block, so the runtime must allocate on the next append (see Steven Schveighoffer’s ‘D Slices’ article for more info).

This isn’t necessarily a bad thing when it’s the desired behavior, but anyone who’s not prepared for it can easily run into ballooning memory usage thanks to leaks from malloced memory never being deallocated. Consider these two functions:

void leaker(ref int[] arr)
{
    ...
    arr ~= 10;
    ...
}

void cleaner(int[] arr)
{
    ...
    arr ~= 10;
    ...
}

Although arrays are reference types, meaning that modifying existing elements of an array argument inside a function will modify the elements in the original array, they are passed by value as function parameters. Any activity that modifies the structure of an array argument, i.e. its length and ptr properties, only affects the local variable inside the function. The original will remain unchanged unless the array is passed by reference.

So if an array backed by the C heap is passed to leaker, the append will cause a new array to be allocated from the GC heap. Worse, if free is subsequently called on the ptr property of the original array, which now points into the GC heap rather than the C heap, we’re in undefined behavior territory. cleaner, on the other hand, is fine. Any array passed into it will remain unchanged. Internally, the GC will allocate, but the ptr property of the original array still points to the original memory block.

As long as the original array isn’t overwritten or allowed to go out of scope, this is a non-issue. Functions like cleaner can do what they want with their local slice and things will be fine externally. Otherwise, if the original array is to be discarded, you can prevent all of this by tagging functions that you control with @nogc. Where that’s either not possible or not desirable, then either keep a copy of the pointer to the original malloced memory and free it at some point after the reallocation takes place, implement custom appending and concatenation, or reevaluate the allocation strategy.
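For example, here’s a minimal sketch of the @nogc approach (the function name is hypothetical):

@nogc void noHiddenAllocations(ref int[] arr)
{
    // Uncommenting the next line is rejected at compile time,
    // because ~= may allocate from the GC heap and this function
    // is marked @nogc.
    // arr ~= 10;
}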

Note that std.container.array contains an Array type that does not rely on the GC and may be preferable to managing all of this manually.
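A quick sketch of what that looks like (a minimal example, not from the article):

void main()
{
    import std.container.array : Array;
    import std.stdio : writeln;

    // Array manages its own non-GC storage and supports appending
    // without touching the GC heap.
    auto ints = Array!int(1, 2, 3);
    ints ~= 4;

    foreach(i; ints[])
        writeln(i);
}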

Other APIs

The C standard library isn’t the only game in town for heap allocations. A number of alternative malloc implementations exist and any of those can be used instead. This requires manually compiling the source and linking with the resultant objects, but that’s not an onerous task. Heap memory can also be allocated through system APIs, like the Win32 HeapAlloc function on Windows (available by importing core.sys.windows.windows). As long as there’s a way to get a pointer to a block of heap memory, it can be sliced and manipulated in a D program in place of a block of GC memory.
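For instance, a minimal Windows-only sketch of the same slicing pattern over HeapAlloc (illustrative, not from the article) might look like this:

version(Windows)
{
    import core.sys.windows.windows;

    // Grab a block from the default process heap and expose it as a slice.
    void[] heapAllocate(size_t size)
    {
        void* ptr = HeapAlloc(GetProcessHeap(), 0, size);
        if(!ptr) assert(0, "Out of memory!");
        return ptr[0 .. size];
    }

    void heapDeallocate(void[] mem)
    {
        HeapFree(GetProcessHeap(), 0, mem.ptr);
    }
}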

Aggregate types

If we only had to worry about allocating arrays in D, then we could jump straight to the next section. However, we also need to concern ourselves with struct and class types. This discussion focuses only on the former; the next couple of posts in the series will focus exclusively on classes.

Allocating an array of struct types, or a single instance of one, is often no different than when the type is int.

struct Point { int x, y; }
Point* onePoint = cast(Point*)malloc(Point.sizeof);
Point* tenPoints = cast(Point*)malloc(Point.sizeof * 10);

Where things break down is when constructors enter the mix. malloc and friends know nothing about constructing D object instances. Thankfully, Phobos provides us with a function template that does.

std.conv.emplace can take either a pointer to typed memory or an untyped void[], along with an optional number of arguments, and return a pointer to a single, fully initialized and constructed instance of that type. This example shows how to do so using both malloc and the allocate function template from above:

struct Vertex4f 
{ 
    float x, y, z, w; 
    this(float x, float y, float z, float w = 1.0f)
    {
        this.x = x;
        this.y = y;
        this.z = z;
        this.w = w;
    }
}

void main()
{
    import core.stdc.stdlib : malloc;
    import std.conv : emplace;
    import std.stdio : writeln;
    
    Vertex4f* temp1 = cast(Vertex4f*)malloc(Vertex4f.sizeof);
    Vertex4f* vert1 = emplace(temp1, 4.0f, 3.0f, 2.0f); 
    writeln(*vert1);

    void[] temp2 = allocate(Vertex4f.sizeof);
    Vertex4f* vert2 = emplace!Vertex4f(temp2, 10.0f, 9.0f, 8.0f);
    writeln(*vert2);
}

Another feature of emplace is that it also handles default initialization. Consider that struct types in D need not implement constructors. Here’s what happens when we change the implementation of Vertex4f to remove the constructor:

struct Vertex4f 
{
    // x, y, z are default inited to float.nan
    float x, y, z;

    // w is default inited to 1.0f
    float w = 1.0f;
}

void main()
{
    import core.stdc.stdlib : malloc;
    import std.conv : emplace;
    import std.stdio : writeln;

    Vertex4f vert1, vert2 = Vertex4f(4.0f, 3.0f, 2.0f);
    writeln(vert1);
    writeln(vert2);    
    
    auto vert3 = emplace!Vertex4f(allocate(Vertex4f.sizeof));
    auto vert4 = emplace!Vertex4f(allocate(Vertex4f.sizeof), 4.0f, 3.0f, 2.0f);
    writeln(*vert3);
    writeln(*vert4);
}

This prints the following:

Vertex4f(nan, nan, nan, 1)
Vertex4f(4, 3, 2, 1)
Vertex4f(nan, nan, nan, 1)
Vertex4f(4, 3, 2, 1)

So emplace allows heap-allocated struct instances to be initialized in the same manner as stack allocated struct instances, with or without a constructor. It also works with the built-in types like int and float. Just always remember that emplace is intended to initialize and construct a single instance, not an array of instances.

If the aggregate type has a destructor, it should be invoked before its memory is deallocated. This can be achieved with the destroy function (available without any import, as it lives in the implicitly imported object module).
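A minimal sketch of that cleanup sequence (the Resource struct here is hypothetical):

struct Resource
{
    ~this() { /* release whatever the struct owns */ }
}

void cleanupExample()
{
    import core.stdc.stdlib : malloc, free;
    import std.conv : emplace;

    auto mem = cast(Resource*)malloc(Resource.sizeof);
    if(!mem) assert(0, "Out of memory!");
    auto res = emplace(mem);

    // Run the destructor first, then release the memory.
    destroy(*res);
    free(mem);
}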

std.experimental.allocator

The entirety of the text above describes the fundamental building blocks of a custom memory manager. For many use cases, it may be sufficient to forego cobbling something together by hand and instead take advantage of the D standard library’s std.experimental.allocator package. This is a high-level API that makes use of low-level techniques like those described above, along with Design by Introspection, to facilitate the assembly of different types of allocators that know how to allocate, initialize, and construct arrays and type instances. Allocators like Mallocator and GCAllocator can be used to grab chunks of memory directly, or combined with other building blocks for specialized behavior. See the emsi-containers library for a real-world example.
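To give a flavor of the API, here’s a minimal sketch using Mallocator with the package’s make, makeArray, and dispose helpers (reusing the Vertex4f struct from above):

void main()
{
    import std.experimental.allocator : make, makeArray, dispose;
    import std.experimental.allocator.mallocator : Mallocator;
    import std.stdio : writeln;

    // Construct a single instance and an array from the C heap,
    // with initialization and construction handled for us.
    auto vert = Mallocator.instance.make!Vertex4f(4.0f, 3.0f, 2.0f);
    auto ints = Mallocator.instance.makeArray!int(10);

    scope(exit)
    {
        Mallocator.instance.dispose(vert);
        Mallocator.instance.dispose(ints);
    }

    writeln(*vert);
    writeln(ints.length);
}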

Keeping the GC informed

Given that it’s rarely recommended to disable the GC entirely, most D programs allocating outside the GC heap will likely also be using memory from the GC heap in the same program. In order for the GC to properly do its job, it needs to be informed of any non-GC memory that contains, or may potentially contain, references to memory from the GC heap. For example, a linked list whose nodes are allocated with malloc might contain references to classes allocated with new.

The GC can be given the news via GC.addRange.

import core.memory;
import core.stdc.stdlib : malloc;

enum size = int.sizeof * 10;
void* p1 = malloc(size);
GC.addRange(p1, size);

// allocate returns void[], so length here is already a byte count.
void[] p2 = allocate(int.sizeof * 10);
GC.addRange(p2.ptr, p2.length);

When the memory block is no longer needed, the corresponding GC.removeRange can be called to prevent it from being scanned. This does not deallocate the memory block. That will need to be done manually via free or whatever allocator interface was used to allocate it. Be sure to read the documentation before using either function.

Given that one of the goals of allocating from outside the GC heap is to reduce the amount of memory the GC must scan, this may seem self-defeating. That’s the wrong way to look at it. If non-GC memory is going to hold references to GC memory, then it’s vital to let the GC know about it. Not doing so can cause the GC to free up memory to which a reference still exists. addRange is a tool specifically designed for that situation. If it can be guaranteed that no GC-memory references live inside a non-GC memory block, such as a malloced array of vertices, then addRange need not be called on that memory block.

A word of warning

Be careful when passing typed pointers to addRange. Because the function was implemented with the C-like approach of taking a pointer to a block of memory and the number of bytes it contains, there is an opportunity for error.

struct Item { SomeClass foo; }
auto items = (cast(Item*)malloc(Item.sizeof * 10))[0 .. 10];
GC.addRange(items.ptr, items.length);

With this, the GC would be scanning a block of memory exactly ten bytes in size. The length property returns the number of elements the slice refers to. Only when the type is void (or the element type is one-byte long, like byte and ubyte) does it equate to the size of the memory block the slice refers to. The correct thing to do here is:

GC.addRange(items.ptr, items.length * Item.sizeof);

However, until DRuntime is updated with an alternative, it may be best to implement a wrapper that takes a void[] parameter.

void addRange(void[] mem) 
{
    import core.memory;
    GC.addRange(mem.ptr, mem.length);
}

Then calling addRange(items) will do the correct thing. The implicit conversion of the slice to void[] in the function call will mean that mem.length is the same as items.length * Item.sizeof.

The GC series marches on

This post has covered the very basics of using the non-GC heap in D programs. One glaring omission, in addition to class types, is what to do about destructors. I’m saving that topic for the post about classes, where it is highly relevant. That’s the next scheduled post in the GC series. Stay tuned!

Thanks to Walter Bright, Guillaume Piolat, Adam D. Ruppe, and Steven Schveighoffer for their valuable feedback on a draft of this article.

DMD 2.076.0 Released

The core D team is proud to announce that version 2.076.0 of DMD, the reference compiler for the D programming language, is ready for download. The two biggest highlights in this release are the new static foreach feature for improved generative and generic programming, and significantly enhanced C language integration, making incremental conversion of C projects to D easy and profitable.

static foreach

As part of its support for generic and generative programming, D allows for conditional compilation by way of constructs such as version and static if statements. These are used to choose different code paths during compilation, or to generate blocks of code in conjunction with string and template mixins. Although these features enable possibilities that continue to be discovered, the lack of a compile-time loop construct has been a steady source of inconvenience.

Consider this example, where a series of constants named val0 to valN needs to be generated based on a number N+1 specified in a configuration file. A real configuration file would require a function to parse it, but for this example, assume the file val.cfg is defined to contain a single numerical value, such as 10, and nothing else. Further assuming that val.cfg is in the same directory as the valgen.d source file, use the command line dmd -J. valgen.d to compile.

module valgen;
import std.conv : to;

enum valMax = to!uint(import("val.cfg"));

string genVals() 
{
    string ret;
    foreach(i; 0 .. valMax) 
    {
        ret ~= "enum val" ~ to!string(i) ~ "=" ~ to!string(i) ~ ";";
    }
    return ret;
}

string genWrites() 
{
    string ret;
    foreach(i; 0 .. valMax) 
    {
        ret ~= "writeln(val" ~ to!string(i) ~ ");";
    }
    return ret;
}

mixin(genVals);

void main() 
{
    import std.stdio : writeln;
    mixin(genWrites);
}

The manifest constant valMax is initialized by the import expression, which reads in a file during compilation and treats it as a string literal. Since we’re dealing only with a single number in the file, we can pass the string directly to the std.conv.to function template to convert it to a uint. Because valMax is an enum, the call to to must happen during compilation. Finally, because to meets the criteria for compile-time function evaluation (CTFE), the compiler hands it off to the interpreter to do so.

The genVals function exists solely to generate the declarations of the constants val0 to valN, where N is determined by the value of valMax. The string mixin at module scope (mixin(genVals);) forces the call to genVals to happen during compilation, which means this function is also evaluated by the compile-time interpreter. The loop inside the function builds up a single string containing the declaration of each constant, then returns it so that it can be mixed in as several constant declarations.

Similarly, the genWrites function has the single-minded purpose of generating one writeln call for each constant produced by genVals. Again, each line of code is built up as a single string, and the string mixin inside the main function forces genWrites to be executed at compile-time so that its return value can be mixed in and compiled.

Even with such a trivial example, the fact that the generation of the declarations and function calls is tucked away inside two functions is a detriment to readability. Code generation can get quite complex, and any functions created only to be executed during compilation add to that complexity. The need for iteration is not uncommon for anyone working with D’s compile-time constructs, and in turn neither is the implementation of functions that exist just to provide a compile-time loop. The desire to avoid such boilerplate has put the idea of a static foreach as a companion to static if high on many wish lists.

At DConf 2017, Timon Gehr rolled up his sleeves during the hackathon and implemented a pull request to add support for static foreach to the compiler. He followed that up with a D Improvement Proposal, DIP 1010, so that he could make it official, and the DIP met with enthusiastic approval from the language authors. With DMD 2.076, it’s finally ready for prime time.

With this new feature, the above example can be rewritten as follows:

module valgen2;
import std.conv : to;

enum valMax = to!uint(import("val.cfg"));

static foreach(i; 0 .. valMax) 
{
    mixin("enum val" ~ to!string(i) ~ "=" ~ to!string(i) ~ ";");
}

void main() 
{
    import std.stdio : writeln;
    static foreach(i; 0 .. valMax) 
    {
        mixin("writeln(val" ~ to!string(i) ~ ");");
    }
}

Even such a trivial example brings a noticeable improvement in readability. Don’t be surprised to see compile-time heavy D libraries (and aren’t most of them?) get some major updates in the wake of this compiler release.
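As a further illustration (a hypothetical example, not taken from the release notes), static foreach isn’t limited to integer ranges; it also iterates over compile-time arrays and sequences, which pairs naturally with mixins:

// Generate one string field per name in a compile-time array.
enum fieldNames = ["id", "name", "email"];

struct Record
{
    static foreach(field; fieldNames)
    {
        mixin("string " ~ field ~ ";");
    }
}

void main()
{
    import std.stdio : writeln;

    auto r = Record("1", "Bob", "bob@example.com");
    writeln(r);    // Record("1", "Bob", "bob@example.com")
}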

Better C integration and interoperation

DMD’s -betterC command line switch has been around for quite a while, though it didn’t really do much and had languished from inattention while more pressing concerns were addressed. With DMD 2.076, its time has come.

The idea behind the feature is to make it even easier to combine both D and C in the same program, with an emphasis on incrementally replacing C code with D code in a working project. D has been compatible with the C ABI from the beginning and, with some work to translate C headers to D modules, can directly make C API calls without going through any sort of middleman. Going the other way and incorporating D into C programs has also been possible, but not as smooth a process.

Perhaps the biggest issue has been DRuntime. There are certain D language features that depend on its presence, so any D code intended to be used in C needs to bring the runtime along and ensure that it’s initialized. That, or all references to the runtime need to be excised from the D binaries before linking with the C side, something that requires more than a little effort both while writing code and while compiling it.

-betterC aims to dramatically reduce the effort required to bring D libraries into the C world and modernize C projects by partially or entirely converting them to D. DMD 2.076 makes significant progress toward that end. When -betterC is specified on the command line, all asserts in D modules will now use the C assert handler rather than the D assert handler. And, importantly, neither DRuntime nor Phobos, the D standard library, will be automatically linked in as they normally are. This means it’s no longer necessary to manually configure the build process or fix up the binaries when using -betterC. Now, object files and libraries generated from D modules can be directly linked into a C program without any special effort. This is especially easy when using VisualD, the D plugin for Visual Studio. Not too long ago, it gained support for mixing C and D modules in the same project. The updated -betterC switch makes it an even more convenient feature.
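For a sense of the workflow, here’s a minimal hypothetical sketch of the D side. Compiled with dmd -betterC -c, the resulting object file can be linked into a C program that simply declares int square(int) and calls it:

// square.d -- compile with: dmd -betterC -c square.d
// No DRuntime, no Phobos; just a plain C-callable function.
extern(C) int square(int x)
{
    return x * x;
}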

While the feature is now more usable, it’s not yet complete. More work remains to be done in future releases to allow the use of more D features currently prohibited in betterC. Read more about the feature in Walter Bright’s article here on the D Blog, D as a Better C.

A new release schedule

This isn’t a compiler or language feature, but it’s a process feature worth noting. This is the first release conforming to a new release schedule. From here on out, beta releases will be announced on the 15th of every even month, such as 2017-10-15, 2017-12-15, 2018-02-15, etc. All final releases will be scheduled for the 1st of every odd month: 2017-11-01, 2018-01-01, 2018-03-01, etc. This will bring some reliability and predictability to the release schedule, and make it easier to plan milestones for enhancements, changes, and new features.

Get it now!

As always, the changes, fixes, and enhancements for this release can be found in the changelog. This specific release will always be available for download at http://downloads.dlang.org/releases/2.x/2.076.0, and the latest release plus betas and nightlies can be found at the download page on the DLang website.

Project Highlight: Funkwerk

Funkwerk is a German company that develops intelligent communication technology. One of their projects is a passenger information system for long-distance and local transport that is deployed by long-distance rail networks in Germany, Austria, Switzerland, Finland, Norway and Luxembourg, as well as city railways in Berlin and Munich. The system is developed at the company’s Munich location and, some time ago, they came to the conclusion that it needed a rewrite. According to Funkwerk’s Mario Kröplin:

From a bird’s-eye view, the long-term replacement of our aged passenger information system is our primary project. At some point, it became obvious that the system was getting hard to maintain and hard to change. In 2008, a new head of the development department was hired. It was time for a change.

They wanted to do more than just rewrite the system. They also wanted to make changes in their development process, and that led to questions of how best to approach the changes so that they didn’t negatively impact productivity.

There is an inertia when experienced programmers are asked to suddenly start with unit testing and code reviews. The motivation for unit testing and code reviews is higher when you learn a new programming language. Especially one where unit testing is a built-in feature. It takes a new language to break old habits.

The decision was made to select a different language for the project, preferably one with built-in support for unit testing. They also allowed themselves a great deal of leeway in making that choice.

Compared with other monolithic legacy systems, we were in the advantageous position that the passenger information system was constructed as a set of services. There is some interprocess communication between the services, but otherwise the services are largely independent. The service architecture made such a decision less serious: just make an experiment with any one of the services in the new language. Should the experiment fail, then it’s not a heavy loss.

So they cast out a net and found D. One thing about the language that struck them immediately was that it fit in a comfort zone somewhere between the languages with which their team was already familiar.

The languages in use were C++ (which rather feels like being part of the problem than part of the solution) and Java. D was presented as a better C++. With garbage collection, changing to D is easy enough for both C++ and Java programmers.

They took D for a spin with some experimentation. The first experiment was a bit of a failure, but the second was a success. In fact, it went well enough that it convinced them to move forward with D.

In retrospect, the change of programming language was one aspect that led to an agile transition. One early example of a primary D project was the replacement of the announcement service. This one gets all information about train journeys – especially about the disruptions – and decides when to play which announcements. The business rules make this quite complicated. The new service uses object-oriented programming to handle the complicated business rules nicely. So you could say it’s a Java program written in D. However, as the business rules change from customer to customer, the announcement service is constantly changing. And with the development of D and with our learning, we are now in a better position to evolve the software further.

When they first started the changeover, D2 was still in its early stages of development and the infamous Phobos/Tango divide was still a thing in the D community (an issue that, for those of you out of the loop the last several years, was put to rest long ago). Since D1 was considered stable, that was an obvious choice for them. They also chose Tango over Phobos, primarily because Tango provided logging and HTTP facilities, while Phobos did not. In 2012, as D2 was stabilizing and official support for D1 was being phased out, it became apparent that they would have to make a transition from D1 and Tango to D2 and Phobos, which is what they use today.

One of the D features that hooked Mario early on was its built-in support for contracts.

Contracts won me over from the beginning. No joke! When we had hard times to hunt down segmentation faults, a failed assertion boosted the diagnostics. I used the assert macro a lot in C++, but with contracts in D there is no question about whom to blame. A failed precondition: caller’s fault. A failed postcondition or invariant: my fault. We have contracts everywhere. And we don’t use the -release switch. We would rather give up performance than information that helps us fixing bugs.
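For readers unfamiliar with the feature, here’s a minimal illustration (not from Funkwerk’s codebase) of how D contracts divide the blame:

int addDelay(int currentDelay, int extraMinutes)
in
{
    // A failed precondition points at the caller.
    assert(extraMinutes >= 0, "negative delay passed in");
}
out(result)
{
    // A failed postcondition (or invariant) points at this function.
    assert(result >= currentDelay);
}
do
{
    return currentDelay + extraMinutes;
}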

As with many other active D users, D’s dynamic arrays (slices) and the built-in associative arrays stand out for the Funkwerk programmers.

They work out of the box. It’s not like in Java, where you’d better not use the arrays of the language, but the ArrayList of the library.

They also have put D’s ranges and Uniform Function Call Syntax to heavy use.

With ranges and UFCS, it became possible to replace loops with intention-revealing chains of map and filter and whatever. While the loop describes how something is done, the ranges describe what is done. We enjoy setting the delay in our tests to 5.minutes. And we use UDAs in accessors and dunit. It’s hard to imagine what pieces of our clean code would look like in any other language.
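To illustrate the style being described (a toy example, not Funkwerk code):

void main()
{
    import core.time : minutes;
    import std.algorithm : filter, map;
    import std.stdio : writeln;

    // UFCS on a literal, as in the tests mentioned above.
    auto delay = 5.minutes;
    writeln(delay);

    // A chain that says *what* is done rather than *how*.
    [4, 7, 10, 13]
        .filter!(d => d > 5)
        .map!(d => d * 60)
        .writeln();    // [420, 600, 780]
}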

accessors is a library that uses templates, mixins, and UDAs to automatically generate property getters and setters. dunit is a unit testing library that, in an ironic twist, Mario maintains. As it turned out, their intuition that their developers would be more motivated to write unit tests in a language with built-in unit testing was correct (that’s been cited by Walter Bright as a reason the feature was included in the language in the first place). However, that first failed experiment in D came down to the simplistic capabilities the built-in unit testing provides.

One of the failures of our first experiment was an inappropriate use of the unittest functions. You’re on your own when you have to write more code per test case, for example for testing interactions of objects. With xUnit testing frameworks like JUnit, you get a toolbox and lots of instructions, like the exhaustive http://xunitpatterns.com/. With D, you get the single tool unittest and lots of examples of test cases that can be expressed as one-liners. This does not help in scenarios where such test cases are the exception. Even Phobos has examples where the unittest blocks are lengthy and obscure. Often, such unittest blocks are not clean code but a collection of “Test Smells”. We soon replaced the built-in unit testing (one of the benefits we’ve seen) with DUnit.

The transition to D2 and Phobos necessitated a move away from the Tango-based DUnit. Mario found the open source (and lowercase) dunit (the original project is here) a promising replacement, and ultimately found himself maintaining a fork that has evolved considerably from the original. However, the team recently discovered a way to combine xUnit testing with D’s built-in unittest, which may lead to another transition in their unit testing.

Additionally, Phobos still had no logging package in 2012, so they needed a replacement for the Tango logger API. Mario rolled up his sleeves and implemented log, which, like accessors and dunit, is available under the Boost Software License.

It was meant as an improvement proposal for an early stage of std.experimental.logger. I was excited that it was possible to implement the logging framework together with support for rolling log files, for logrotate, and for syslog in just 500 lines. We’re still using it.

Funkwerk has also contributed code to Phobos.

I was looking for a way to implement a delay calculation. The problem is that you have an initial delay and you assume that the driver runs faster and makes shorter stops in order to catch up. But the algorithm was missing. So we made a pull request for std.algorithm.cumulativeFold. It’s there because we strived for clean code in a forecast service.

Garbage collection has been a topic of interest on this blog lately. Mario has an interesting anecdote from Funkwerk’s usage of D’s GC.

Our service was losing connections from time to time. It took a while to find that the cause was the interaction between garbage collection and networking. The signals used by the garbage collector interrupt the blocking read. The blocking read is hidden in a library, for example, in std.socketstream, where the retry is missing. Currently, we abuse inheritance to patch the SocketStream class.

The technical description of the derived class, from its Ddoc comment, reads:

/**
 * When a timeout has been set on the socket, the interface functions "recv"
 * and "send" are not restarted after being interrupted by a signal handler.
 * They fail with the error "EINTR" when interrupted, especially
 * by the signal used to "stop the world" during garbage collection.
 *
 * In this case, retries avoid that the stream is mistaken to be exhausted.
 * Note, however, that these retries will likely exceed the specified timeout.
 *
 * See also: Linux manual page signal(7)
 */

After nearly a decade of active development with D, Mario and his team have been pleased with their choice. They plan to continue using the language in new projects.

Over the years, we’ve replaced services in the periphery of the passenger information system whenever this was justified by customer needs. We chose D for the implementation of all of the backend services. We are now working on a small greenfield project (it’s a few buses to be monitored instead of lots of trains), but we intend to grow this project into the next generation of the passenger information system.

In the coming months, we’ll hear more from Mario and the Funkwerk developers about how they use D to develop their system and tooling, and what it’s like to be a professional D programmer.