The Making of ‘D Web Development’

A long-time contributor to the D community, Kai Nacke is the author of ‘D Web Development’ and the maintainer of LDC, the LLVM D Compiler. In this post, he tells the story of how his book came together. Currently, the eBook version is on sale for USD $10.00 as part of the publisher’s Back to School sale, as are ‘D Cookbook’ by Adam Ruppe and ‘Learning D’ by Michael Parker.


At the beginning of 2014, I was asked by Packt Publishing if I wanted to review the D Cookbook by Adam Ruppe. Of course I wanted to!

The review was stressful, but it was a lot of fun. At the end of the year, a surprising question came my way: would I be willing to switch sides and write a book myself? Here I hesitated. Sure, writing your own book is a dream, but is it at all possible on top of a regular job? The proposed topic, D Web Development, was interesting. I knew web technologies, of course, but to me the vibe.d framework was little more than a large unit test for each LDC release.

My interest was piqued, and I created a chapter overview based solely on my experience as a developer and the online documentation of vibe.d. The result came out well and I was offered a contract. It came with an immediate challenge: I had to set up a small project plan. How do you plan the writing of a book?!?

Without any experience in this area, I stuck to a few simple rules. For each chapter, I planned a small time frame: each would include at least one weekend, for the larger chapters perhaps even two. I reserved some time for the Easter holiday, too. Starting in mid-February, the first version of the book would therefore be ready at the beginning of July.

Even the first chapter showed that this plan was much too optimistic. The writing itself went quickly – as soon as I had something I could write about. But experimenting and testing took a lot of time. For one thing, I didn’t have much experience with vibe.d. There were sample programs that I wanted to develop on Saturday so I could write about them on Sunday. However, I was still hunting for errors on Monday, without having written a single line!

For another, vibe.d still had a few rough edges at the time, and I did not want to simply write that these would be changed or implemented in later versions of the library. So I developed a few patches for vibe.d myself, e.g. for digest authentication. On top of that, there were also new LDC releases to create. Fortunately, the LDC team had grown, so I only had to take care of the release itself (thanks so much, folks!). The result, of course, was that I missed many of my milestones.

In May, the first chapters came back from the review process. Other content also had to be written, such as the text for the back of the book. In mid-December, the last chapter was finished and almost all review notes on the other chapters had been incorporated. After a short Christmas break, the remaining notes were quickly worked in, and the pre-final version of each chapter was created in January. And then, on February 1, 2016, came the news that my book was published. I’d done it! Almost exactly one year after I had started on the first chapter.

Was the work worth it? In any case, it was a very special experience. Would I do it again? Yes! Right now, I am playing with the idea of updating the book and expanding a chapter. Let’s see what happens…

The Evolution of the accessors Library

Ronny Spiegel is a developer at Funkwerk AG, a German company whose passenger information system is developed in D and was recently highlighted on this blog. In this post, Ronny tells the story of the company’s open source accessors library, which provides a mechanism for users to automatically generate property getters and setters using D’s robust compile-time features.


A little bit of history.

We’ve always used UML tools to visualize the internal structure and document details of software. That’s true for me not only at Funkwerk, but also in the companies I worked for before I joined the team here in Karlsfeld. One of the major issues of documentation is that at some point it will diverge from the actual implementation and become outdated. Additionally, if you have to support old versions of your components, you will have to take care of old versions of your documentation as well.

The first approach to connecting code and model is to generate code from the model, which requires the model to reflect the current implementation. When I joined Funkwerk, we were using ArgoUML to manage class diagrams, which were used as input to generate code. Not only were class and struct skeletons generated (existing code was kept), but also methods to access members, which were not even part of the model. To control whether a member should be accessible, annotations similar to UDAs (User-Defined Attributes) were used as part of the member documentation. So it was very common for us to annotate a member with @Read or @Write, even though this existed only in the documentation. The tool we used to generate code was powerful enough to create the implementations of these field accessor methods, supported by annotations to synchronize access or to automatically use invariants for pre- and post-conditions as well.

Anyway, the approach of using the model as a base for code generation always suffers from the same problem: it is very hard to merge models!

So we reversed the whole thing and decided to create documentation from code. We could still use code which had been generated before, but all the new classes had to be supplied with accessor functions. You can imagine that this was very annoying.

public class Journey
{
    private Leg[] legs_;

    public Leg[] legs()
    {
        return this.legs_.dup;
    }

    // ...
}

(Yes, we’ve been writing Java and compiling as D.)

Code which was generated before still had these @Read and @Write annotations next to the fields. So I thought, “These look like UDAs. Why not just use those to generate the methods automatically?” I’d always wanted to use mixins and compile-time introspection in order to move forward with a more D-like development approach.

A first draft…

The very first version of the accessors library was able to generate basic read- and write-accessor methods using the allMembers trait, filtering by UDAs, and generating some basic code like:

public final Leg[] legs() { return this.legs_.dup; }

It works… Yes, it does.

We did not replace all existing accessor methods at once, but since we were working on a large project at the time, we introduced many of them. The automated generation of accessor methods was a real simplification for us.
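
For illustration, a minimal sketch of that kind of generator might look like the code below. The UDA names, the generateReaders helper, and the Leg type are simplified stand-ins rather than the library’s actual API; the real implementation also handles write access, visibility, and more.

import std.traits : hasUDA;

// Hypothetical marker UDAs standing in for the library's @Read/@Write annotations.
enum Read;
enum Write;

struct Leg { string from; string to; }  // placeholder type for the example

// Builds read accessors for every field of T tagged with @Read.
// Field names are assumed to end with an underscore (legs_ becomes legs()).
string generateReaders(T)()
{
    string code;
    foreach (member; __traits(allMembers, T))
    {
        static if (hasUDA!(__traits(getMember, T, member), Read))
        {
            code ~= "public final auto " ~ member[0 .. $ - 1]
                  ~ "() { return this." ~ member ~ ".dup; }\n";
        }
    }
    return code;
}

public class Journey
{
    @Read private Leg[] legs_;

    // The string mixin inserts the generated accessors as regular members,
    // so callers use journey.legs() as if it had been written by hand.
    mixin(generateReaders!(typeof(this)));
}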

…always has some issues.

The first implementation looked so simple – surely there had to be issues. And yes, there were. I cannot list all of them because I no longer remember them all, but some of these issues were:

Explicitly defined properties suppressed generated ones

We ran into a situation where we explicitly defined a setter method (e.g. because it had to notify an observer) but wanted to use the generated getter method. The result was that the defined setter method could be used but accessing the generated getter method (with the same name) was impossible.

According to the specification, the compiler places template mixins in a nested scope and then imports their symbols into the surrounding scope. If a function with the same name already exists in the surrounding scope, it hides the function from the mixin. So if a field has a @Read annotation and there is also an explicitly defined mutating accessor for it, the generated @Read accessor is hidden by the explicitly defined one.

The solution to this issue was rather simple. We had to use a string mixin to insert the generated code directly into the class where it is to be used.
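
A simplified sketch of the difference follows; the class and method names are illustrative and not the library’s actual code.

struct Leg { string from; string to; }

// With a template mixin, the generated getter lives in a nested scope...
mixin template ReadLegs()
{
    Leg[] legs() { return this.legs_.dup; }
}

class JourneyA
{
    private Leg[] legs_;
    mixin ReadLegs;

    // ...so this explicitly defined setter in the class scope hides it:
    void legs(Leg[] value) { this.legs_ = value; }
    // journeyA.legs() no longer compiles; only the setter is reachable.
}

class JourneyB
{
    private Leg[] legs_;

    // With a string mixin, the getter is inserted directly into the class
    // scope and overloads with the explicit setter as expected.
    mixin("Leg[] legs() { return this.legs_.dup; }");

    void legs(Leg[] value) { this.legs_ = value; }
}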

Flags

We have a guideline to avoid magic bools wherever possible and use much more verbose flags instead. So a simple field like:

private bool isExtraJourney_;

Becomes:

private Flag!"isExtraJourney" isExtraJourney_;

This approach has two advantages: providing a value as Yes.isExtraJourney is much more expressive than a bare true, and it creates a distinct type. When two or more flags are part of a method signature, you cannot (accidentally) swap their order as you could with bools.
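
For example (the function below is hypothetical), swapping two Flag arguments is a compile error, whereas two bools would silently accept it:

import std.typecons : Flag, No, Yes;

// Hypothetical signature using two flags instead of two bools.
void createJourney(Flag!"isExtraJourney" extra, Flag!"isCancelled" cancelled)
{
    // ...
}

unittest
{
    createJourney(Yes.isExtraJourney, No.isCancelled);    // reads clearly
    // createJourney(No.isCancelled, Yes.isExtraJourney); // compile error: arguments swapped
}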

To generate the type of the return value (or, in the case of mutable access, of the parameter), we used T.stringof, where T is the type of the field. Unfortunately, this does not work as expected for Flags.

Flag!"foo" fooFlag;

static assert(typeof(fooFlag).stringof == `Flag!"foo"`); // Fails!
static assert(typeof(fooFlag).stringof == `Flag`);       // Succeeds!

Unit Tests

When using the mixin in private types defined in unit tests, a similar issue arose. Classes defined in unittest blocks have a prefix like __unittestL526_8. It was necessary to strip this prefix from the used type string.

Private Classes

While iterating over members of private classes, we stumbled across the issue that the allMembers (or derivedMembers) trait returns, in addition to __ctor, an inaccessible member called this. This issue remains unsolved.

The current implementation…

The currently released version covers the aforementioned issues, although there is still room for new features.

An example might be to provide a predicate which is then used to synchronize access to the field. That was possible with the old version of the code generator by annotating the field with @GuardedBy("this"). Fortunately, we’ve advanced in our D coding skills and have moved away from Java code compiled with DMD to a more D-like style, using structs wherever we need value semantics (and don’t have to deal with thousands of copies of that value). So at the moment, this doesn’t really hurt that much.

Another interesting (and still open) issue is creating accessors for aliased imported types. The generated code still refers to the real name of the type, which is then unknown in the compilation unit where the code is mixed in.

…has room for improvement!

If you’re interested in dealing with this kind of problem and want to dive into CTFE and compile-time introspection, we welcome contributions!

DMD 2.076.0 Released

The core D team is proud to announce that version 2.076.0 of DMD, the reference compiler for the D programming language, is ready for download. The two biggest highlights in this release are the new static foreach feature for improved generative and generic programming, and significantly enhanced C language integration making incremental conversion of C projects to D easy and profitable.

static foreach

As part of its support for generic and generative programming, D allows for conditional compilation by way of constructs such as version and static if statements. These are used to choose different code paths during compilation, or to generate blocks of code in conjunction with string and template mixins. Although these features enable possibilities that continue to be discovered, the lack of a compile-time loop construct has been a steady source of inconvenience.

Consider this example, where a series of constants named val0 to valN needs to be generated based on a number N+1 specified in a configuration file. A real configuration file would require a function to parse it, but for this example, assume the file val.cfg is defined to contain a single numerical value, such as 10, and nothing else. Further assuming that val.cfg is in the same directory as the valgen.d source file, use the command line dmd -J. valgen.d to compile.

module valgen;
import std.conv : to;

enum valMax = to!uint(import("val.cfg"));

string genVals() 
{
    string ret;
    foreach(i; 0 .. valMax) 
    {
        ret ~= "enum val" ~ to!string(i) ~ "=" ~ to!string(i) ~ ";";
    }
    return ret;
}

string genWrites() 
{
    string ret;
    foreach(i; 0 .. valMax) 
    {
        ret ~= "writeln(val" ~ to!string(i) ~ ");";
    }
    return ret;
}

mixin(genVals);

void main() 
{
    import std.stdio : writeln;
    mixin(genWrites);
}

The manifest constant valMax is initialized by the import expression, which reads in a file during compilation and treats it as a string literal. Since we’re dealing only with a single number in the file, we can pass the string directly to the std.conv.to function template to convert it to a uint. Because valMax is an enum, the call to to must happen during compilation. Finally, because to meets the criteria for compile-time function evaluation (CTFE), the compiler hands it off to the interpreter to do so.

The genVals function exists solely to generate the declarations of the constants val0 to valN, where N is determined by the value of valMax. The string mixin at module scope, mixin(genVals), forces the call to genVals to happen during compilation, which means this function is also evaluated by the compile-time interpreter. The loop inside the function builds up a single string containing the declaration of each constant, then returns it so that it can be mixed in as several constant declarations.

Similarly, the genWrites function has the single-minded purpose of generating one writeln call for each constant produced by genVals. Again, each line of code is built up as a single string, and the string mixin inside the main function forces genWrites to be executed at compile-time so that its return value can be mixed in and compiled.

Even with such a trivial example, the fact that the generation of the declarations and function calls is tucked away inside two functions is a detriment to readability. Code generation can get quite complex, and any functions created only to be executed during compilation add to that complexity. The need for iteration is not uncommon for anyone working with D’s compile-time constructs, and in turn neither is the implementation of functions that exist just to provide a compile-time loop. The desire to avoid such boilerplate has put the idea of a static foreach as a companion to static if high on many wish lists.

At DConf 2017, Timon Gehr rolled up his sleeves during the hackathon and implemented a pull request to add support for static foreach to the compiler. He followed that up with a D Improvement Proposal, DIP 1010, so that he could make it official, and the DIP met with enthusiastic approval from the language authors. With DMD 2.076, it’s finally ready for prime time.

With this new feature, the above example can be rewritten as follows:

module valgen2;
import std.conv : to;

enum valMax = to!uint(import("val.cfg"));

static foreach(i; 0 .. valMax) 
{
    mixin("enum val" ~ to!string(i) ~ "=" ~ to!string(i) ~ ";");
}

void main() 
{
    import std.stdio : writeln;
    static foreach(i; 0 .. valMax) 
    {
        mixin("writeln(val" ~ to!string(i) ~ ");");
    }
}

Even such a trivial example brings a noticeable improvement in readability. Don’t be surprised to see compile-time heavy D libraries (and aren’t most of them?) get some major updates in the wake of this compiler release.

Better C integration and interoperation

DMD’s -betterC command line switch has been around for quite a while, though it didn’t really do much and had languished from inattention while more pressing concerns were addressed. With DMD 2.076, its time has come.

The idea behind the feature is to make it even easier to combine both D and C in the same program, with an emphasis on incrementally replacing C code with D code in a working project. D has been compatible with the C ABI from the beginning and, with some work to translate C headers to D modules, can directly make C API calls without going through any sort of middleman. Going the other way and incorporating D into C programs has also been possible, but not as smooth a process.

Perhaps the biggest issue has been DRuntime. There are certain D language features that depend on its presence, so any D code intended to be used in C needs to bring the runtime along and ensure that it’s initialized. That, or all references to the runtime need to be excised from the D binaries before linking with the C side, something that requires more than a little effort both while writing code and while compiling it.

-betterC aims to dramatically reduce the effort required to bring D libraries into the C world and modernize C projects by partially or entirely converting them to D. DMD 2.076 makes significant progress toward that end. When -betterC is specified on the command line, all asserts in D modules will now use the C assert handler rather than the D assert handler. And, importantly, neither DRuntime nor Phobos, the D standard library, will be automatically linked in as they normally are. This means it’s no longer necessary to manually configure the build process or fix up the binaries when using -betterC. Now, object files and libraries generated from D modules can be directly linked into a C program without any special effort. This is especially easy when using VisualD, the D plugin for Visual Studio. Not too long ago, it gained support for mixing C and D modules in the same project. The updated -betterC switch makes it an even more convenient feature.
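
As a rough sketch of what that workflow looks like (the file names, function, and build commands below are illustrative, not taken from the release notes): a D module compiled with -betterC exposes ordinary C-linkage functions, and the resulting object file links into a C program like any other.

// sum.d -- compile without linking:  dmd -betterC -c sum.d
// The resulting object file can then be linked into a C program,
// e.g.:  cc main.c sum.o -o demo
// where main.c declares:  extern int sum_ints(const int* values, size_t len);

extern (C) int sum_ints(const(int)* values, size_t len)
{
    // Only runtime-free language features are available here; under -betterC,
    // any assert failures are routed to the C assert handler.
    int total = 0;
    foreach (i; 0 .. len)
        total += values[i];
    return total;
}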

While the feature is now more usable, it’s not yet complete. More work remains to be done in future releases to allow the use of more D features currently prohibited in betterC. Read more about the feature in Walter Bright’s article here on the D Blog, D as a Better C.

A new release schedule

This isn’t a compiler or language feature, but it’s a process feature worth noting. This is the first release conforming to a new release schedule. From here on out, beta releases will be announced on the 15th of every even month, such as 2017-10-15, 2017-12-15, 2018-02-15, etc. All final releases will be scheduled for the 1st of every odd month: 2017-11-01, 2018-01-01, 2018-03-01, etc. This will bring some reliability and predictability to the release schedule, and make it easier to plan milestones for enhancements, changes, and new features.

Get it now!

As always, the changes, fixes, and enhancements for this release can be found in the changelog. This specific release will always be available for download at http://downloads.dlang.org/releases/2.x/2.076.0, and the latest release plus betas and nightlies can be found at the download page on the DLang website.