Communal Benevolence Required

DConf 2018 is coming up fast. The final pieces of preparation are soon to fall into place and then the day will be upon us. Already, we’re looking at what comes next.

We recently put out a survey to gather a good deal of feedback from the community. At that time, we promised that the survey results would be used to guide some initiatives going forward. Though we’ve been silent on that front, it’s still in the works. You should be hearing an announcement at DConf about the first such initiative, and a subsequent blog post will lay out the framework we’ll be following going forward.

One thing all of our initiatives will have in common is that they will require resources. That, unfortunately, is not something the D Language Foundation has in large quantities. Whether we’re talking about time, money, or manpower, D’s development is constrained by the limited resources available. Fortunately for us, one can be used to drum up the other two, so in the absence of enough folks able to donate their time and manpower, we’ll be relying on donations of money to buy what we need to meet the community’s goals.

The same holds for DConf. In order to keep the registration fees relatively affordable, reimburse the speakers, buy the T-shirts, and fund all of the costs associated with the event, the Foundation uses money from the donation pool to cover what the registrations don’t. That’s exactly the sort of thing the donations are for, of course, but the more we dip into the pool for the conference, the longer it takes to replenish after.

So if you’re looking to help the D community, this is an excellent way to do it. You can head over to our OpenCollective page, or choose one of the alternatives on our donation page at dlang.org, and throw a few dollars at us. Leave a note on the donation form letting us know you’re doing this to support DConf 2018.

As a reminder, if you haven’t registered for DConf and aren’t planning to, the Hackathon on May 5th is open to all. So if you’re in the area that day, come on in and see us!

D Goes Business

A long-time contributor to the D community, Kai Nacke is the author of ‘D Web Development’ and the maintainer of LDC, the LLVM D Compiler. Be sure to catch Kai at DConf 2018 in Munich, May 2 – 5, where he’ll be speaking about “D for the Blockchain”.


Many companies run their business processes on an SAP system. Often other applications need to access the SAP system because they provide data or request some calculation. There are many ways to achieve this… Let’s use D for it!


The SDK and the D binding

As an SAP customer, you can download the SAP NetWeaver Remote Function Call SDK. With it, you can call RFC-enabled functions in the SAP system from a native application. Conversely, it is possible from an ABAP program to call a function in a native program. The API documentation for the library is available as a separate download. I recommend downloading it together with the library.

The C/C++ interface is well structured but can be very tedious to use. That’s why I not only created a D binding for the library but also used some D magic to make the developer’s life much easier. I introduced the following additions:

  • Most functions have a parameter of type RFC_ERROR_INFO. As the name implies, this is only needed in case of an error. For each of these functions, a new function is generated without the RFC_ERROR_INFO parameter and with the return type changed from RFC_RC to void. Instead, a SAPException is thrown in case of an error.
  • Functions named RfcGet*Count now return a size_t value instead of using an out parameter. This is possible because of the above step which changed the return type to void.
  • Functions named Rfc*ByIndex use a size_t parameter for the index, thus better integrating with D.
  • Functions which have a pointer and length value now accept a D array instead.
  • Use of pointers to zero-terminated UTF-16 strings is replaced with wstring type parameters.
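
To illustrate the effect of these additions, here is a rough sketch contrasting the two styles. The D call follows the rules above and matches the binding used later in this article; the C prototype is paraphrased from memory and may differ in detail from the SDK headers, and desc stands for a function description handle.

// C style (paraphrased): check the return code and inspect the error info manually.
//   RFC_ERROR_INFO errorInfo;
//   unsigned count;
//   RFC_RC rc = RfcGetParameterCount(desc, &count, &errorInfo);
//   if (rc != RFC_OK) { /* handle errorInfo */ }

// D binding: the count is simply returned; a SAPException is thrown on failure.
size_t count = RfcGetParameterCount(desc);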

Using the library is easy: just add

dependency "sapnwrfc-d" version="~>0.0.5"

to your dub.sdl file. One caveat: the libraries provided by the SAP package must be installed in such a way that the linker can find them. On Windows, you can add the path to the lib folder of the SDK to the LIB and PATH environment variables.
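
For example, in a Windows command prompt (the SDK path here is only a placeholder for wherever you unpacked the SDK):

set LIB=%LIB%;C:\nwrfcsdk\lib
set PATH=%PATH%;C:\nwrfcsdk\lib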

An example application

Let’s create an application calling a remote-enabled function on the SAP system. I use the DATE_COMPUTE_DAY function because it is simple but still has import and export parameters. This function takes a date (a string of format “YYYYMMDD”) and returns the weekday as a number (1 = Monday, 2 = Tuesday and so on).

The application needs two parameters: the system identifier of the SAP system and the date. The system identifier denotes the destination for the RFC call. The parameters for the connection must be in the sapnwrfc.ini file, which must be located in the same folder as the application executable. An invocation of the application using the SAP system X01 looks like this:

D:\OpenSource\sapnwrfc-example>sapnwrfc-example.exe X01 20180316
Date 20180316 is day 5

First, create the application with DUB:

D:\OpenSource>dub init sapnwrfc-example
Package recipe format (sdl/json) [json]: sdl
Name [sapnwrfc-example]:
Description [A minimal D application.]: An example rfc application
Author name [Kai]: Kai Nacke
License [proprietary]:
Copyright string [Copyright © 2018, Kai Nacke]:
Add dependency (leave empty to skip) []: sapnwrfc-d
Added dependency sapnwrfc-d ~>0.0.5
Add dependency (leave empty to skip) []:
Successfully created an empty project in 'D:\OpenSource\sapnwrfc-example'.
Package successfully created in sapnwrfc-example

D:\OpenSource>

Let’s edit the application in source\app.d. Since this is only an example application, I’ve put all the code into the main() function. In order to use the library you simply import the sapnwrfc module. Most functions can throw a SAPException, so you want to wrap them in a try block.

import std.stdio;
import sapnwrfc;

int main(string[] args)
{
    try
    {
        // Put your code here
    }
    catch (SAPException e)
    {
        writefln("Error occurred %d %s", e.code, e.codeAsString);
        writefln("'%s'", e.message);
        return 100;
    }
    return 0;
}

The library uses only UTF-16. Like the C/C++ version, the alias cU() can be used to create a zero-terminated UTF-16 string from a D string. I convert the command line parameters first:

    auto dest = cU(args[1]);
    auto date = cU(args[2]);

Now initialize the library. Most importantly, this function loads the sapnwrfc.ini file and initializes the environment. If this call is missing, it is done implicitly inside the library. Nevertheless, I recommend calling the function. It is possible that I will add more functionality to this function.

    RfcInit();

The next step is to open a connection to the SAP system. Since the connection parameters are in the sapnwrfc.ini file, it is only necessary to provide the destination. In your own application you do not need to use the sapnwrfc.ini file. You can provide all parameters in the RFC_CONNECTION_PARAMETER[] array which is passed to the RfcOpenConnection() function.

    RFC_CONNECTION_PARAMETER[1] conParams = [ { "DEST"w.ptr, dest } ];
    auto connection = RfcOpenConnection(conParams);
    scope(exit) RfcCloseConnection(connection);

Please note the D features used here. A string literal is always zero-terminated, therefore there is no need to use cU() on the "DEST"w literal. With the scope guard, I make sure that the connection is closed at the end of the block.

Before you can make an RFC call, you have to retrieve the function meta data (function description) and create the function from it.

    auto desc = RfcGetFunctionDesc(connection, "DATE_COMPUTE_DAY"w);
    auto func = RfcCreateFunction(desc);
    scope(exit) RfcDestroyFunction(func);

RfcGetFunctionDesc() calls the SAP system to look up the metadata. The result is cached to avoid a network round trip each time you invoke this function. The implication is that the remote user needs the right to perform the lookup. If this step fails with a security-related error, you should talk to your SAP administrator and review the rights of the remote user.

The DATE_COMPUTE_DAY function has one import parameter, DATE. To pass a parameter you call one of the RfcSetXXX() or RfcSetXXXByIndex() functions. The difference is that the first variant uses the parameter name (here: "DATE") while the second uses the index of the parameter in the signature (here: 1). I often use the named parameter because the resulting code is much more readable. The date data type expects an 8-character UTF-16 array.

    RfcSetDate(func, "DATE", date[0..8]);

Now the function can be called:

    RfcInvoke(connection, func);

The computed weekday is returned in the export parameter DAY. There is a set of RfcGetXXX() and RfcGetXXXByIndex() functions to retrieve the value.

    wchar[1] day;
    RfcGetChars(func, "DAY", day);

Let’s print the result:

    writefln("Date %s is day %s", date[0..8], day);

Congratulations! You’ve finished your first RFC call.

Build the application with dub build. Before you can run the application you still need to create the sapnwrfc.ini file. It looks like this:

DEST=X01
MSHOST=x01.your.domain
GROUP=PUBLIC
CLIENT=001
USER=kai
PASSWD=secret
LANG=EN

The SDK contains a commented sapnwrfc.ini file in the demo folder. If you are on Windows and your system still uses SAP GUI with the (deprecated) saplogon.ini file, then you can use the createini example application from my bindings library to convert the saplogon.ini file into the sapnwrfc.ini file.

Summary

Calling an RFC function of an SAP system with D is very easy. D features like support for UTF–16 strings, scope guards, and exceptions make the source quite readable. The presented example application is part of the D bindings library and can be downloaded from GitHub at https://github.com/redstar/sapnwrfc-d.

std.variant Is Everything Cool About D

Jared Hanson has been involved with the D community since 2012, and an active contributor since 2013. Recently, he joined the Phobos team and devised a scheme to make it look like he’s contributing by adding at least 1 tag to every new PR. He holds a Bachelor of Computer Science degree from the University of New Brunswick and works as a Level 3 Support Engineer at one of the largest cybersecurity companies in the world.


I recently read a great article by Matt Kline on how std::visit is everything wrong with modern C++. My C++ skills have grown rusty from disuse (I have long since left for the greener pastures of D), but I was curious as to how things had changed in my absence.

Despite my relative unfamiliarity with post–2003 C++, I had heard about the addition of a library-based sum type in std for C++17. My curiosity was mildly piqued by the news, although like many new additions to C++ in the past decade, it’s something D has had for years. Given the seemingly sensational title of Mr. Kline’s article, I wanted to see just what was so bad about std::visit, and to get a feel for how well D’s equivalent measures up.

My intuition was that the author was exaggerating for the sake of an interesting article. We’ve all heard the oft-repeated criticism that C++ is complex and inconsistent (even some of its biggest proponents think so), and it is true that the ergonomics of templates in D are vastly improved over C++. Nevertheless, the underlying concepts are the same; I was dubious that std::visit could be much harder to use than std.variant.visit, if at all.

For the record, my intuition was completely and utterly wrong.

Exploring std.variant

Before we continue, let me quickly introduce D’s std.variant module. The module centres around the Variant type: this is not actually a sum type like C++’s std::variant, but a type-safe container that can contain a value of any type. It also knows the type of the value it currently contains (if you’ve ever implemented a type-safe union, you’ll realize why that part is important). This is akin to C++’s std::any as opposed to std::variant; very unfortunate, then, that C++ chose to use the name variant for its implementation of a sum type instead. C’est la vie. The type is used as follows:

import std.variant;

Variant a;
a = 42;
assert(a.type == typeid(int));
a += 1;
assert(a == 43);

float f = a.get!float; //convert a to float
assert(f == 43);
a /= 2;
f /= 2;
assert(a == 21 && f == 21.5);

a = "D rocks!";
assert(a.type == typeid(string));

Variant b = new Object();
Variant c = b;
assert(c is b); //c and b point to the same object
b /= 2; //Error: no possible match found for Variant / int

Luckily, std.variant does provide a sum type as well: enter Algebraic. The name Algebraic refers to algebraic data types, of which one kind is a “sum type”. Another example is the tuple, which is a “product type”.

In actuality, Algebraic is not a separate type from Variant; the former is an alias for the latter that takes a compile-time specified list of types. The values which an Algebraic may take on are limited to those whose type is in that list. For example, an Algebraic!(int, string) can contain a value of type int or string, but if you try to assign a string value to an Algebraic!(float, bool), you’ll get an error at compile time. The result is that we effectively get an in-library sum type for free! Pretty darn cool. It’s used like this:

alias Null = typeof(null); //for convenience
alias Option(T) = Algebraic!(T, Null);

Option!size_t indexOf(int[] haystack, int needle) {
    foreach (size_t i, int n; haystack)
        if (n == needle)
            return Option!size_t(i);
    return Option!size_t(null);
}

int[] a = [4, 2, 210, 42, 7];
Option!size_t index = a.indexOf(42); //call indexOf like a method using UFCS
assert(!index.peek!Null); //assert that `index` does not contain a value of type Null
assert(index == size_t(3));

Option!size_t index2 = a.indexOf(117);
assert(index2.peek!Null);

The peek function takes a Variant as a runtime argument, and a type T as a compile-time argument. It returns a pointer to T that points to the Variant’s contained value iff the Variant contains a value of type T; otherwise, the pointer is null.
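
To make this concrete, here is a small sketch using the Option alias defined above:

auto found = Option!size_t(size_t(3));
size_t* p = found.peek!size_t;   // non-null: `found` currently holds a size_t
assert(p !is null && *p == 3);
assert(found.peek!Null is null); // ...and therefore does not hold a Null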

Note: I’ve made use of Universal Function Call Syntax to call the free function indexOf as if it were a member function of int[].

In addition, just like C++, D’s standard library has a special visit function that operates on Algebraic. It allows the user to supply a visitor for each type of value the Algebraic may hold, which will be executed iff it holds data of that type at runtime. More on that in a moment.

To recap:

  • std.variant.Variant is the equivalent of std::any. It is a type-safe container that can contain a value of any type.
  • std.variant.Algebraic is the equivalent of std::variant and is a sum type similar to what you’d find in Swift, Haskell, Rust, etc. It is a thin wrapper over Variant that restricts what type of values it may contain via a compile-time specified list.
  • std.variant provides a visit function akin to std::visit which dispatches based on the type of the contained value.

With that out of the way, let’s now talk about what’s wrong with std::visit in C++, and how D makes std.variant.visit much more pleasant to use by leveraging its powerful toolbox of compile-time introspection and code generation features.

The problem with std::visit

The main problems with the C++ implementation are that – aside from clunkier template syntax – metaprogramming is very arcane and convoluted, and there are very few static introspection tools included out of the box. You get the absolute basics in std::type_traits, but that’s it (there are a couple third-party solutions, which are appropriately horrifying and verbose). This makes implementing std::visit much more difficult than it has to be, and also pushes that complexity down to consumers of the library, which makes using it that much more difficult as well. My eyes bled at this code from Mr. Kline’s article which generates a visitor struct from the provided lambda functions:

template <class... Fs>
struct overload;

template <class F0, class... Frest>
struct overload<F0, Frest...> : F0, overload<Frest...>
{
    overload(F0 f0, Frest... rest) : F0(f0), overload<Frest...>(rest...) {}

    using F0::operator();
    using overload<Frest...>::operator();
};

template <class F0>
struct overload<F0> : F0
{
    overload(F0 f0) : F0(f0) {}

    using F0::operator();
};

template <class... Fs>
auto make_visitor(Fs... fs)
{
    return overload<Fs...>(fs...);
}

Now, as he points out, this can be simplified down to the following in C++17:

template<class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
template<class... Ts> overloaded(Ts...) -> overloaded<Ts...>;

template <class... Fs>
auto make_visitor(Fs... fs)
{
    return overloaded<Fs...>{fs...};
}

However, this code is still quite ugly (though I suspect I could get used to the ellipsis syntax eventually). Despite being a massive improvement on the preceding example, it’s hard to get right when writing it, and hard to understand when reading it. To write (and more importantly, read) code like this, you have to know about variadic templates, parameter pack expansion, template specialization, class template argument deduction guides, and using declarations that pull in the base classes’ call operators.

There’s a lot of complicated template expansion and code generation going on, but it’s hidden behind the scenes. And boy oh boy, if you screw something up you’d better believe that the compiler is going to spit some supremely perplexing errors back at you.

Note: As a fun exercise, try leaving out an overload for one of the types contained in your variant and marvel at the truly cryptic error message your compiler prints.

Here’s an example from cppreference.com showcasing the minimal amount of work necessary to use std::visit:

template<class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
template<class... Ts> overloaded(Ts...) -> overloaded<Ts...>;

using var_t = std::variant<int, long, double, std::string>;
std::vector<var_t> vec = {10, 15l, 1.5, "hello"};

for (auto& v: vec) {
    std::visit(overloaded {
        [](auto arg) { std::cout << arg << ' '; },
        [](double arg) { std::cout << std::fixed << arg << ' '; },
        [](const std::string& arg) { std::cout << std::quoted(arg) << ' '; },
    }, v);
}

Note: I don’t show it in this article, but if you want to see this example re-written in D, it’s here.

Why is this extra work forced on us by C++, just to make use of std::visit? Users are stuck between a rock and a hard place: either write some truly stigmata-inducing code to generate a struct with the necessary overloads, or bite the bullet and write a new struct every time you want to use std::visit. Neither is very appealing, and both are a one-way ticket to Boilerplate Hell. The fact that you have to jump through such ridiculous hoops and write some ugly-looking boilerplate for something that should be very simple is just… ridiculous. As Mr. Kline astutely puts it:

The rigmarole needed for std::visit is entirely insane.

We can do better in D.

The D solution

This is how the typical programmer would implement make_visitor, using D’s powerful compile-time type introspection tools and code generation abilities:

import std.traits: Parameters;

struct variant_visitor(Funs...)
{
    Funs fs;
    this(Funs fs) { this.fs = fs; }

    static foreach(i, Fun; Funs) //Generate a different overload of opCall for each function in Funs
        auto opCall(Parameters!Fun params) { return fs[i](params); }
}

auto make_visitor(Funs...)(Funs fs)
{
    return variant_visitor!Funs(fs);
}

And… that’s it. We’re done. No pain, no strain, no bleeding from the eyes. It is a few more lines than the C++ version, granted, but in my opinion, it is also much simpler than the C++ version. To write and/or read this code, you have to understand a demonstrably smaller number of concepts:

  • Templates
  • Template argument packs
  • static foreach
  • Operator overloading

However, a D programmer would not write this code. Why? Because std.variant.visit does not take a callable struct. From the documentation:

Applies a delegate or function to the given Algebraic depending on the held type, ensuring that all types are handled by the visiting functions. Visiting handlers are passed in the template parameter list. (emphasis mine)

So visit only accepts a delegate or function, and figures out which one to pass the contained value to based on the functions’ signatures.

Why do this and give the user fewer options? D is what I like to call an anti-boilerplate language. In all things, D prefers the most direct method, and thus, visit takes a compile-time specified list of functions as template arguments. std.variant.visit may give the user fewer options, but unlike std::visit, it does not require them to painstakingly create a new struct that overloads opCall for each case, or to waste time writing something like make_visitor.

This also highlights the difference between the two languages themselves. D may sometimes give the user fewer options (don’t worry though – D is a systems programming language, so you’re never completely without options), but it is in service of making their lives easier. With D, you get faster, safer code, that combines the speed of native compilation with the productivity of scripting languages (hence, D’s motto: “Fast code, fast”). With std.variant.visit, there’s no messing around defining structs with callable methods or unpacking tuples or wrangling arguments; just straightforward, understandable code:

Algebraic!(string, int, bool) v = "D rocks!";
v.visit!(
    (string s) => writeln("string: ", s),
    (int    n) => writeln("int: ", n),
    (bool   b) => writeln("bool: ", b),
);

And in a puff of efficiency, we’ve completely obviated all this machinery that C++ requires for std::visit, and greatly simplified our users’ lives.

For comparison, the C++ equivalent:

template<class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
template<class... Ts> overloaded(Ts...) -> overloaded<Ts...>;

std::variant<std::string, int, bool> v = "C++ rocks?";
std::visit(overloaded {
    [](const std::string& s) { std::cout << s << '\n'; },
    [](int  n) { std::cout << n << '\n'; },
    [](bool b) { std::cout << b << '\n'; }
}, v);

Note: Unlike the C++ version, the error message you get when you accidentally leave out a function to handle one of the types is comprehensible by mere mortals. Check it out for yourself.

As a bonus, our D example looks very similar to the built-in pattern matching syntax that you find in many up-and-coming languages that take inspiration from their functional forebears, but is implemented completely in user code.

That’s powerful.

Other considerations

As my final point – if you’ll indulge me for a moment – I’d like to argue with a strawman C++ programmer of my own creation. In his article, Mr. Kline also mentions the new if constexpr feature added in C++17 (which of course, D has had for over a decade now). I’d like to forestall arguments from my strawman friend such as:

But you’re cheating! You can use the new if constexpr to simplify the code and cut out make_visitor entirely, just like in your D example!

Yes, you could use if constexpr (and by the same token, static if in D), but you shouldn’t (and Mr. Kline explicitly rejects using it in his article). There are a few problems with this approach which make it all-around inferior in both C++ and D. One, this method is error prone and inflexible in the case where you need to add a new type to your variant/Algebraic. Your old code will still compile but will now be wrong. Two, doing it this way is uglier and more complicated than just passing functions to visit directly (at least, it is in D). Three, the D version would still blow C++ out of the water on readability. Consider:

//D
v.visit!((arg) {
    alias T = Unqual!(typeof(arg)); //Remove const, shared, etc.

    static if (is(T == string)) {
        writeln("string: ", arg);
    }
    else static if (is(T == int)) {
        writeln("int: ", arg);
    }
    else static if (is(T == bool)) {
        writeln("bool: ", arg);
    }
});

vs.

//C++
visit([](auto& arg) {
    using T = std::decay_t<decltype(arg)>;

    if constexpr (std::is_same_v<T, string>) {
        printf("string: %s\n", arg.c_str());
    }
    else if constexpr (std::is_same_v<T, int>) {
        printf("integer: %d\n", arg);
    }
    else if constexpr (std::is_same_v<T, bool>) {
        printf("bool: %d\n", arg);
    }
}, v);

Which version of the code would you want to have to read, understand, and modify? For me, it’s the D version – no contest.


If this article has whetted your appetite and you want to find out more about D, you can visit the official Dlang website, and join us on the mailing list or #D on IRC at freenode.net. D is a community-driven project, so we’re also always looking for people eager to jump in and get their hands dirty – pull requests welcome!

Announcing the DConf 2018 Hackathon Pass

As I write, General Admission for DConf 2018 in Munich has been open for one week. The conference runs from May 2–5, with three days of talks capped off by a day-long Hackathon. We’ve heard there are some in the area who would like to attend, but just can’t get away from work for three days for the full conference. So we’ve decided to do something about it.

This year, the Hackathon is open, free of charge, to anyone interested in hanging out with us on Saturday, May 5th, the final day of the conference. You’ll have to bring your own lunch, but you’ll get access to the full day. This time around, it’s more than just hacking on and talking about D code.

We’ve added one more new item to the list this year: the Hackathon will be kicked off with a talk by Shachar Shemesh from WekaIO. His talk, Announcing Mecca, introduces the company’s open source D project to the world, a perfect way to get us started on an event intended to motivate hacking on open source D projects.

Last year, our inaugural Hackathon was a great success. It’s not a hackathon in the traditional sense, but an opportunity for D programmers to come together and spend the day working on the projects that they think matter, improve the D ecosystem, or learn more about D programming.

Any attendee with an open source D project is welcome to seek out collaborators to help them get work done on site, or to hunker down and help others hack features and bug fixes into their own projects, or any project in the D ecosystem. Some may simply want to sit and code by themselves, getting instant feedback from others on site when it’s needed, and that’s fine, too. Last year’s event proved to be quite a productive day, particularly for the core D projects.

Those not in the mood for coding, or who aren’t yet up to speed on D programming, are covered, too. Last year, groups gathered to hold brainstorming sessions on ideas for new projects, or for solving complex issues in the core. Others, D’s BDFL Walter Bright among them, took some time during the day to tutor less experienced D programmers, on specific language features and other programming topics, in informal sessions that came about because a few people sitting near each other started asking questions.

In short, the DConf Hackathon is a day of getting things done in a friendly and collaborative environment full of people with shared interests. It’s a loosely organized event, the direction of which is determined by those participating. We’ll throw some ideas out there for you to work on if you need the cues, but you’re free to go your own way.

Last year’s Hackathon fell on a Sunday, and some of the conference attendees had to skip it or leave early in order to get back home for work on Monday. This year, it’s on a Saturday, so we hope that if you weren’t able to stick around last year you can do so this time around.

If you’ve never been to a DConf or don’t know much about the D language, the Hackathon Pass is a great remedy for both ailments. So if you’re in or near Munich Saturday, May 5th, we invite you to come on by and hear about a great new open source project, spend a day with a friendly crowd of interesting people, and learn a little about the D programming language in the process.

DConf 2018 Programme & Open Registration

The programme for DConf 2018, May 2–5 in Munich, has been published and general registration is now open.

As always, we’ve got keynotes from Walter and Andrei, and an invited guest speaker, this time in the form of Martin Odersky, professor at EPFL and creator of the Scala language. Our closing keynote comes in the pleasant shape of Liran Zvibel, CEO of WekaIO. In between we’ve got another strong lineup of speakers this year, covering a number of topics from the familiar to the new.

We’ve got experience reports, both commercial and personal, web app development, design methodology, the runtime, tooling, and more. Two topics not seen at DConf before include using D for blockchain application development and for genomic bioinformatics. Some of the Foundation’s scholarship students are back for their progress reports, we’ve got our traditional round of Lightning Talks, and Walter and Andrei will be taking the stage in an Ask Us Anything session at the end of Day One!

Don’t forget, we’re running another Hackathon this year. Those who stick around on May 5 (and we hope everyone does) can get together for the morning and afternoon sessions to mingle, talk, hack, and squash bugs. It’s not a hackathon in the traditional sense, but one that is driven by those who attend. Find D coders to help you with your projects, contribute to other D projects, hack at some bugs from the D bug tracker, brainstorm ideas for making the D programming experience better… just go where it takes you.

21 hours of talks and an all-day Hackathon are nothing to sneeze at, but there’s much more to DConf than that. It’s the meetup of meetups for members of the D community, a chance to put faces to names and establish real-world relationships with those we’ve only met online. It’s an opportunity to introduce the language to those who don’t know it well and welcome them into the community. Some of the best DConf memories are made in between the talks, in the restaurants at dinner, and in the hotel lobby, where you’ll find an ongoing stream of interesting and intelligent conversation. And don’t forget the great German beer!

So book your flights and your rooms, pack your bags, and register for Four Days of DLang at DConf 2018 in Munich!

User Stories: Funkwerk

The deadline for the early-bird registration for DConf 2018 in Munich is coming up on March 17th. The price will go up from $340 to $400. If you’d like to go, hurry and sign up to save yourself $60. And remember, the NH Munich Messe hotel, the conference venue, is offering a special deal on single rooms plus breakfast for attendees.


A few of the DConf attendees are coming from a local company called Funkwerk. They’re a D shop that we’ve highlighted here on this blog in a series of posts about their projects (you’ll see one of their products in action if you take the subway or local train service in Munich).

In this post, we cap off the Funkwerk series with the launch of a new feature we creatively call “User Stories”. Now and again, we’ll publish a post in which D users talk of their experiences with D, not about specific projects, but about the language itself. They’ll tell of things like their favorite features, why they use it, how it has changed the way they write code, or anything they’d like to say that expresses how they feel about programming in D.

For this inaugural post, we’ve got three programmers from Funkwerk. First up, Michael Schnelle talks about the power of ranges. Next, Ronny Spiegel tells why generated code is better code. Finally, Stefan Rohe enlightens us on Funkwerk’s community outreach.

The power of ranges

Michael Schnelle has been working as a software developer for about 5 years. Before starting with D 3 years ago, he worked in (Web)Application Development, mostly with Java, Ruby on Rails, and C++, and did Thread Modeling for Applications. He enjoys coding in D and likes how it helps programmers write clean code.

In my experience, no matter what I am programming, I always end up applying functions to a set of data and filtering this set of data. Occasionally I also execute something with side effects in between. Let’s look at a simplified use case: the transformation of a given set of data and filtering for a condition afterwards. I could simply write:

foreach(element; elements) {
  auto transformed = transform(element);
  if (metCondition(transformed)) {
     results ~= transformed;
  }
}

Using the power from std.algorithm, I can instead write:

filter!(element => metCondition(element))
       (map!(element => transform(element))(elements));

At this point, we have a mixture of functional and object-oriented code, which is quite nice, but still not quite as readable or easy to understand as it could be. Let’s combine it with UFCS (Uniform Function Call Syntax):

elements.map!(element => element.transform)
        .filter!(element => element.metCondition);

I really like this kind of code, because it is clearly self-explanatory. The foreach loop, on the other hand, only tells me how it is being done. If I look through our code at Funkwerk, it is almost impossible to find traditional loops.

But this only takes you one step further. In many cases, there happen to be side effects which need to be executed during the workflow of the program. For this kind of thing, the library provides functions like std.range.tee. Let’s say I want to execute something external with the transformed value before filtering:

elements
  .map!(element => element.transform)
  .tee!(element => operation(element))
  .filter!(element => element.metCondition)
  .array;

It is crucial that operations with side effects are only executed with higher-order functions that are built for that purpose.

int square(int a) { writefln("square value"); return a*a; }

[4, 5, 8]
  .map!(a => square(a))
  .tee!(a => writeln(a))
  .array;

The code above would print "square value" six times, because tee calls range.front twice. It is possible to avoid this by using functions like std.algorithm.iteration.cache, but in my opinion, the nice way would be to avoid side effects in functions that are not meant for that.
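
For illustration, this is roughly what the cache-based workaround mentioned above would look like (a sketch building on the square snippet; not code from our codebase):

import std.algorithm : cache, map;
import std.array : array;
import std.range : tee;
import std.stdio : writeln;

[4, 5, 8]
  .map!(a => square(a))
  .cache                  // evaluate each front only once and reuse the result
  .tee!(a => writeln(a))
  .array;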

In the end, D gives you the possibility to combine the advantages of object-oriented and functional programming, resulting in more readable and maintainable code.

Generated code is better code

Ronny Spiegel has worked as a professional software developer for almost 20 years. He started out using C and C++, but when he joined Funkwerk he really started to love the D language and the tools it provides to introspect code and to automate things at compile time.

In a previous blog post, I gave a short overview of the evolution of the accessors library. As you might imagine, I really like the idea of using the compiler to generate code; in the end this usually results in less work for me and, as a direct result, causes fewer errors.

The establishment of coding guidelines is crucial for a team in order to create maintainable software, and so we have them here at Funkwerk. There is a rule that every value object (or entity) has to implement the toString method in order to provide diagnostic output. The provided string shall be unambiguous so that it’s more like Python’s __repr__ than __str__.

Example:

StationMessage(GeneralMessage(4711, 2017-12-12T10:00:00Z), station="BAR", …)

The generated string should follow some conventions:

  • provide a way to uniquely reconstruct data from a string
    • start with the class name
    • continue with any potential superclasses
    • list all fields providing their name and value separated by a comma
  • be compact but still human readable (for developers)
    • skip the name where it matches the type (e.g. a field of type SysTime is called time)
    • skip the name if the field is called id (usually there’s an IdType used for type safety)
    • there’s some special output format defined for types like Date and SysTime
    • Nullable!T’s will be skipped if null etc.

To format output in a consistent manner, we implemented a SinkWriter wrapping formattedWrite in a way that follows the listed conventions. Using this SinkWriter everywhere is the first step toward fully generating the toString method.

Unfortunately that’s not enough; it’s very common to forget something when adding a new field to a class. Today I stumbled across some code where a field was missing in the diagnostics output and that led to some confusion.

Using (template) mixins together with CTFE (Compile Time Function Execution) and the provided type traits, D provides a powerful toolset which enables us to generate such functions automatically.

We usually implement an alternative toString method which uses a sink delegate as described in https://wiki.dlang.org/Defining_custom_print_format_specifiers. The implementation is a no-brainer and looks like this:

public void toString(scope void delegate(const(char)[]) sink) const
{
    alias MySelf = Unqual!(typeof(this));

    sink(MySelf.stringof);
    sink("(");

    with (SinkWriter(sink))
    {
        write("%s", this.id_);
        write("station=%s", this.station_);
        // ...
    }

    sink(")");
}

This code seems to be so easy that it might be generalized like this:

public void toString(scope void delegate(const(char)[]) sink) const
{
    import std.traits : FieldNameTuple, Unqual;

    alias MySelf = Unqual!(typeof(this));

    sink(MySelf.stringof);
    sink("(");

    with (SinkWriter(sink))
    {
        static foreach (fieldName; FieldNameTuple!MySelf)
        {{
            mixin("const value = this." ~ fieldName ~ ";");
            write!"%s=%s"(fieldName, value);
        }}
    }

    sink(")");
}

The above is just a rough sketch of how such a generic function might look. For a class to use this generation approach, simply call something like

mixin(GenerateToString);

inside the class declaration, and that’s it. Never again will a field be missing in the class’s toString output.

Generating the toString method automatically might also help us to switch from the common toString method to an alternative implementation. If more conventions are added over time, we will only have to extend the SinkWriter and/or the toString template, and that’s it.

As a summary: Try to generate code if possible – it is less error prone and D supports you with a great set of tools!

Funkwerk and the D-Community

Stefan Rohe started the D-train at Funkwerk back in 2008. They have loved DLang since then and replaced D1-Tango with D2-Phobos in 2013. They are strong believers in open source and local communities, and are thrilled to see you all in Munich at DConf 2018.

Funkwerk is the largest D shop in south Germany, so we hire D-velopers, mainly just through being known for programming in D. In order to give a little bit back to the D community at large and help the local community grow, Funkwerk hosted the foundational edition of the Munich D Meetup.

The local community is important …

[Photo: Munich Meetup at Brainlab]

The meetup was founded in August 2016, 8 years after the first line of D code at Funkwerk was written. Since then, the Meetup has grown steadily to ~350 members. At that number, it is still not the biggest D Meetup, but it is the most visited and the most active. It provides a chance for locals in Munich to interact with like-minded D-interested people each month. And with an alternating level of detail and a different location each month, it stays interesting and attracts different participants.

… and so is the global community

To engage with the global community, Funkwerk is willing to open source some of its general-purpose D libraries. They can all be found under github.com/funkwerk, and some are registered in the DUB registry.

Among them are:

  • accessors – a library to auto generate getters and setters with UDAs
  • depend – a tool that checks actual import dependencies against a UML model of target dependencies
  • d2uml – reverse engineering of D source code into PlantUML class outlines

Feel free to use these and let us know how you like them.

The D Language Foundation at Open Collective

In its work guiding the development of D and promoting its adoption, the D Language Foundation is driven primarily by donations big and small. The money comes in from different sources, the most visible being those listed on the website’s donation page, and is put to use in different ways.

Donors typically receive an email thanking them for their generosity. Recently, we added a sponsors page to shine a light on those who have given and who are willing to have their names on public display. That and a line in the Vision Document about average monthly expenses are the only obvious bits of transparency in the process.

Today, the D Language Foundation is opening a new chapter in the donation story with our Open Collective page. According to OpenCollective.com, an open collective is,

A group of people with a shared mission that operates in full transparency.

The site allows us to set up packages that donors can choose from, with or without rewards, for one-time and recurring donations, at levels within reach of individuals and those more suited for corporate budgets. Donors can leave notes with their donations to tell us what they think of our work or what’s important to them. We can submit expenses to show how the money is being used, and set up fund drives for specific targets.

In short, Open Collective gives us new possibilities in raising money, spending it, and showing how it’s spent, while also providing more opportunities for D community members to participate. We could, for example, designate a specific amount to put toward the development of a particular language or ecosystem feature of importance to the community and ask community members to help us meet that goal. A perfect opportunity to contribute for those eager to see progress in areas that matter to them. Crowd-sourcing for a niche crowd.

Those who wish to remain in the shadows can still donate behind the scenes via our other sources. Additionally, I wouldn’t expect all Foundation expenses to be listed at Open Collective. We’re still new to this platform, so it will take us a bit of time to learn how to make it work best for all of us, but we have high hopes that it will prove beneficial in the long run.

On a related note, if you shop at Amazon, you can help us out by making your purchases via smile.amazon.com and choosing the D Language Foundation as your charity. A small percentage of your purchases through smile.amazon.com will go to the Foundation as long as it is selected as your charity. From now until March 31, the donation percentage is tripled.


Don’t forget, the State of D Survey is still open for a couple more days. If you haven’t completed it yet, please take the time to do so. We’re looking forward to seeing what comes of it.

The New New DIP Process

When I took on the role of DIP Manager last year, my number one goal was to clear out the queue. I made a few revisions to the process and got busy. Over the next few months, things went along fairly well, not so much from anything I did as from the quality of the submissions. But at some point, things broke down and the process stalled.

Near the end of the year, Andrei asked me to make two specific changes to the process. One of them was to come up with a different approach for handling the final review. To that end, he suggested I look at how other languages handle the review of their enhancement proposals.

Before I started, I identified some other areas of the process that were problematic so that I could keep an eye out for ideas on how to shore them up. Might as well overhaul the whole process rather than one small part of it. So I put the entire DIP process on hold until the new process was ready to go.

The main thing I learned from looking at other language processes was that about the only thing they have in common is that they use GitHub repositories to store their proposals and that new submissions are made through PRs. Beyond that, there’s quite a bit of variety in how they handle review and evaluation.

Ultimately, I decided the basic framework we already had was well-suited for our circumstances. It just needed some serious tweaking to iron out the problem spots. So I set out to write up a new procedure document to address the things that needed addressing. When it was done, it took a while to get the seal of approval, but it finally came through and the process is no longer stalled.

So first, before describing the major changes, it will help to understand their motivation.

What went wrong

One of the earliest stumbles I had as DIP Manager was a mix up over DIP 1006. I had gotten it in my head that the author intended to rewrite it before moving forward. The reality was that I had informed him before DConf that I would get it going at the end of May. The result was that it sat there for several months before I realized my error.

Another problem came with DIP 1009. There was an issue with the way it was written – the style didn’t meet the standard laid out in the Guidelines document. This led to multiple email exchanges, with massive delays and more misunderstandings, that resulted in the DIP being stuck in limbo for quite a while.

The communication problem over DIP 1009 was what prompted the process revision. For the Final Review of each DIP, I was acting as the middle-man between Walter and Andrei on one side and the DIP Author on the other. It worked well enough as long as little further effort on the DIP was needed, but as soon as there were questions that needed answering or more work to do, it became inefficient, cumbersome, and prone to misunderstanding.

Perhaps the biggest issue of all was time. DIP 1006 sat in Draft Review with nothing from me for several months. DIPs 1006, 1009, and 1011 have been in the Final Review stage for ages. There’s no reason any DIP author should have to wait for months on end with no feedback or only vague promises, no matter which stage of the process a DIP is in. It’s discouraging and demotivating. The process should require some motivation and effort from DIP authors, but it should also require a commitment from the other side to keep the authors informed and to get each DIP through from beginning to end as efficiently as possible.

Some of these problems could have been avoided if I had taken a different view of my role. I saw the role of DIP Manager more as that of a Shepherd than a Gatekeeper. The ultimate fate of a DIP rested on the Author’s shoulders, not mine. The Guidelines were “more what you call guidelines than actual rules”. After I made my revisions to the Guidelines document early on, they fell right out of my head and I never looked at the document again.

Righting the wrongs

The new Procedure document outlines the new process. What follows is a summary of the biggest changes.

A minor issue is that there was some confusion about the existing review-stage names. There are now four review stages rather than three: Draft Review, Community Review, Final Review, and Formal Assessment. The Draft Review is the same as before. The Community Review is the new name for the old Preliminary Review. The old Final Review, which had two parts, has been split out into the Final Review and the Formal Assessment – the former is the last chance for the community to leave feedback, and the latter is Walter and Andrei’s decision round.

For all but the Draft Review, each stage specifies a maximum amount of time that a DIP can go without progress. For example, a DIP may remain in the Post-Community Round N state for 180 days, and a DIP in Formal Assessment should receive a final disposition within 30 days. The document defines the steps that must be taken when these deadlines are not met.

Related, though not in the document, is what I will do to keep to the deadlines. I’ll be making use of the calendar in the D Foundation’s Google account to post the start and finish date of each stage for each DIP. When a DIP is between stages, I’ll set milestone dates so that the DIP Author and I can have a clear target to aim for. If we’re on the same page, there will be less opportunity for uncertainty and misunderstanding.

The document provides for a new process for handling the Formal Assessment. No longer will I be a middleman between two email chains. Now, Walter and Andrei will provide their feedback on a private gist, with direct participation by the DIP Author. This should help things move more quickly and will eliminate (or greatly reduce) the chance of anyone (me in particular) causing more delay by getting things mixed up.

Another change is the requirement for a Point of Contact (POC). From here on out, every DIP must have a POC. For a single-author DIP, the DIP Author is the POC. If there are multiple authors, they must select one from among themselves. The need for this came to light after a misunderstanding that arose from the communication problem. The POC must commit to seeing the DIP through to the end of the process. The document outlines what happens to a DIP when the POC becomes unavailable.

Another change that’s not outlined in the document is in how I view my role as DIP Manager. From here on out, I will consider the guidelines as actual rules. I’ll do my best to make sure a DIP meets the standards expected in terms of language and style before it leaves the Draft Review stage. We can tweak it as we go, of course, but never again should a DIP be sent back for revision because it’s too informal.

Open to refinement

The new Procedure document and the undocumented tweaks to my process are the result of lessons learned over several months. That doesn’t mean they’re perfect. We’ll always be open to suggestions on how to patch up any holes that are identified. Not every change was mentioned above, so please read the document for the details.

Hopefully, the three DIPs currently awaiting a final disposition will be resolved before too much longer. After that, DIP 1012 will be moved forward for the Final Review and Formal Assessment to become the first DIP to go through the new gist-based review. DIP 1013 (which will likely be the one introducing binary assignment operators for properties), will be the first test-case for the new process in its entirety. Let’s all keep an eye open for what works and what needs work.

And to everyone, thanks for your patience while I went through my growing pains and we got the mess sorted out. Now that the train is back on the tracks, I’ll do my best to keep it moving.

LDC 1.8.0 Released

LDC, the D compiler using the LLVM backend, has been actively developed for going on a dozen years, as laid out by co-maintainer David Nadlinger in his DConf 2013 talk. It is considered one of two production compilers for D, along with GDC, which uses the gcc backend and has been accepted for inclusion into the gcc tree.

The LDC developers are proud to announce the release of version 1.8.0, following a short year and a half from the 1.0 release. This version integrates version 2.078.3 of the D front-end (see the DMD 2.078.0 changelog for the important front-end changes), which is itself written in D, making LDC one of the most prominent mixed D/C++ codebases. You can download LDC 1.8.0 and read about the major changes and bug fixes for this release at GitHub.

More platforms

Kai Nacke, the other LDC co-maintainer, talked at DConf 2016 about taking D everywhere, to every CPU architecture that LLVM supports and as many OS platforms as we can. LDC is a cross-compiler: the same program can compile code for different platforms, in contrast to DMD and GDC, where a different DMD/GDC binary is needed for each platform. Towards that end, this release continues the existing Android cross-compilation support, which was introduced with LDC 1.4. A native LDC compiler to both build and run D apps on your Android phone is also available for the Termux Android app. See the wiki page for instructions on how to use one of the desktop compilers or the native compiler to compile D for Android.

The LDC team has also been putting out LDC builds for ARM boards, such as the Raspberry Pi 3. Download the armhf build if you want to try it out. Finally, some developers have expressed interest in using D for microservices, which usually means running in an Alpine container. This release also comes with an Alpine build of LDC, using the just-merged Musl port by @yshui. This port is brand new. Please try it out and let us know how well it works.

Linking options – shared default libraries

LDC now makes it easier to link with the shared version of the default libraries (DRuntime and the standard library, called Phobos) through the -link-defaultlib-shared compiler flag. The change was paired with a rework of linking-related options. See the new help output:

Linking options:

-L= - Pass to the linker
-Xcc= - Pass to GCC/Clang for linking
-defaultlib=<lib1,lib2,…> - Default libraries to link with (overrides previous)
-disable-linker-strip-dead - Do not try to remove unused symbols during linking
-link-defaultlib-debug - Link with debug versions of default libraries
-link-defaultlib-shared - Link with shared versions of default libraries
-linker=<lld-link|lld|gold|bfd|…> - Linker to use
-mscrtlib=<libcmt[d]|msvcrt[d]> - MS C runtime library to link with
-static
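
For instance, a minimal invocation using the new flag might look like this (app.d is just a placeholder source file):

ldc2 -link-defaultlib-shared app.d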

Other new options

  • -plugin=... for compiling with LLVM-IR pass plugins, such as the AFLfuzz LLVM-mode plugin
  • -fprofile-{generate,use} for Profile-Guided Optimization (PGO) based on the LLVM IR code (instead of PGO based on the D abstract syntax tree)
  • -fxray-{instrument,instruction-threshold} for generating code for XRay instrumentation
  • -profile (LDMD2) and -fdmd-trace-functions (LDC2) to support DMD-style profiling of programs

Vanilla compiler-rt libraries

LDC uses LLVM’s compiler-rt runtime library for Profile-Guided Optimization (PGO), Address Sanitizer, and fuzzing. When PGO was first added to LDC 1.1.0, the required portion of compiler-rt was copied to LDC’s source repository. This made it easy to ship the correct version of the library with LDC, and make changes for LDC specifically. However, a copy was needed for each LLVM version that LDC supports (compiler-rt is compatible with only one LLVM version): the source of LDC 1.7.0 has 6 (!) copies of compiler-rt’s profile library.

For the introduction of ASan and libFuzzer in the official LDC binary packages, a different mechanism was used: when building LDC, we check whether the compiler-rt libraries are available in LLVM’s installation and, if so, copy them into LDC’s lib/ directory. To use the same mechanism for the PGO runtime library, we had to remove our additions to that library. Although the added functionality is rarely used, we didn’t want to just drop it. Instead, the functionality was turned into template-only code, such that it does not need to be compiled into the library (if the templated functionality is called by the user, the template’s code will be generated in the caller’s object file).

With this change, LDC no longer needs its own copy of compiler-rt’s profile library and all copies were removed from LDC’s source repository. LDC 1.8.0 ships with vanilla compiler-rt libraries. LDC’s users shouldn’t notice any difference, but for the LDC team it means less maintenance burden.

Onward

A compiler developer’s work is never done. With this release out the door, we march onward toward 1.9. Until then, start optimizing your D programs by downloading the pre-compiled LDC 1.8.0 binaries for Linux, Mac, Windows, Alpine, ARM, and Android, or build the compiler from source from our GitHub repository.

Thanks to LDC contributors Johan Engelen and Joakim for coauthoring this post.

DMD 2.079.0 Released

The D Language Foundation is happy to announce version 2.079.0 of DMD, the reference compiler for the D programming language. This latest version is available for download in multiple packages. The changelog details the changes and bugfixes that were the product of 78 contributors for this release.

It’s not always easy to choose which enhancements or changes from a release to highlight on the blog. What’s important to some will elicit a shrug from others. This time, there’s so much to choose from that my head is spinning. But two in particular stand out as having the potential to result in a significant impact on the D programming experience, especially for those who are new to the language.

No Visual Studio required

Although it has only a small entry in the changelog, this is a very big deal for programming in D on Windows: the Microsoft toolchain is no longer required to link 64-bit executables. The previous release made things easier by eliminating the need to configure the compiler; it now searches for a Visual Studio or Microsoft Build Tools installation when either -m32mscoff or -m64 is passed on the command line. This release goes much further.

DMD on Windows now ships with a set of platform libraries built from the MinGW definitions and a wrapper library for the VC 2010 C runtime (the changelog only mentions the installer, but this is all bundled in the zip package as well). When given the -m32mscoff or -m64 flags, if the compiler fails to find a Windows SDK installation (which comes installed with newer versions of Visual Studio – with older versions it must be installed separately), it will fall back on these libraries. Moreover, the compiler now ships with lld, the LLVM linker. If it fails to find the MS linker, this will be used instead (note, however, that the use of this linker is currently considered experimental).

So the 64-bit and 32-bit COFF output is now an out-of-the-box experience on Windows, as it has always been with the OMF output (-m32, which is the default). This should make things a whole lot easier for those coming to D without a C or C++ background on Windows, for some of whom the need to install and configure Visual Studio has been a source of pain.
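
For example, on a Windows machine with neither Visual Studio nor the Windows SDK installed, compiling for 64-bit or 32-bit COFF now works out of the box (hello.d stands for any D source file, such as the example in the next section):

dmd -m64 hello.d
dmd -m32mscoff hello.d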

Automatically compiled imports

Another trigger for some new D users, particularly those coming from a mostly Java background, has been the way imports are handled. Consider the venerable ‘Hello World’ example:

import std.stdio;

void main() {
    writeln("Hello, World!");
}

Someone coming to D for the first time from a language that automatically compiles imported modules could be forgiven for assuming that’s what’s happening here. Of course, that’s not the case. The std.stdio module is part of Phobos, the D standard library, which ships with the compiler as a precompiled library. When compiling an executable or shared library, the compiler passes it on to the linker along with any generated object files.

The surprise comes when that same newcomer attempts to compile multiple files, such as:

// hellolib.d
module hellolib;
import std.stdio;

void sayHello() {
    writeln("Hello!");
}

// hello.d
import hellolib;

void main() {
    sayHello();
}

The common mistake is to do this, which results in a linker error about the missing sayHello symbol:

dmd hello.d

D compilers have never considered imported modules for compilation. Only source files passed on the command line are actually compiled. So the proper way to compile the above is like so:

dmd hello.d hellolib.d

The import statement informs the compiler which symbols are visible and accessible in the current compilation unit, not which source files should be compiled. In other words, during compilation, the compiler doesn’t care whether imported modules have already been compiled or are intended to be compiled. The user must explicitly pass either all source modules intended for compilation on the command line, or their precompiled object or library files for linking.
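
For example, the library module from the snippet above can be compiled once into an object file, and that object file then passed explicitly when building the program (a sketch; the .obj extension assumes Windows, it would be .o elsewhere):

dmd -c hellolib.d
dmd hello.d hellolib.obj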

It’s not that adding support for compiling imported modules is impossible. It’s that doing so comes with some configuration issues that are unavoidable thanks to the link step. For example, you don’t want to compile imported modules from libFoo when you’re already linking with the libFoo static library. This is getting into the realm of build tools, and so the philosophy has been to leave it up to build tools to handle.

DMD 2.079.0 changes the game. Now, the above example can be compiled and linked like so:

dmd -i hello.d

The -i switch tells the compiler to treat imported modules as if they were passed on the command line. It can be limited to specific modules or packages by passing a module or package name, and the same can be excluded by preceding the name with a dash, e.g.:

dmd -i=foo -i=-foo.bar main.d

Here, any imported module whose fully-qualified name starts with foo will be compiled, unless the name starts with foo.bar. By default, -i means to compile all imported modules except for those from Phobos and DRuntime, i.e.:

-i=-core -i=-std -i=-etc -i=-object

While this is no substitute for a full on build tool, it makes quick tests and programs with no complex configuration requirements much easier to compile.

The #dbugfix Campaign

On a related note, last month I announced the #dbugfix Campaign. The short of it is, if there’s a D Bugzilla issue you’d really like to see fixed, tweet the issue number along with #dbugfix, or, if you don’t have a Twitter account or you’d like to have a discussion about the issue, make a post in the General forum with the issue number and #dbugfix in the title. The core team will commit to fixing at least two of those issues for a subsequent compiler release.

Normally, I’ll collect the data for the two months between major compiler releases. For the initial batch, we’re going three months to give people time to get used to it. I anticipated it would be slow to catch on, and it seems I was right. There were a few issues tweeted and posted in the days after the announcement, but then it went quiet. So far, this is what we have:

DMD 2.080.0 is scheduled for release just as DConf 2018 kicks off. The cutoff date for consideration during this run will be the day the 2.080.0 beta is announced. That will give our bugfixers time to consider which bugs to work on. I’ll include the tally and the issues they select in the DMD release announcement, then they will work to get the fixes implemented and the PRs merged in a subsequent release (hopefully 2.081.0). When 2.080.0 is released, I’ll start collecting #dbugfix issues for the next cycle.

So if there’s an issue you want fixed that isn’t on that list above, put it out there with #dbugfix! Also, don’t be shy about retweeting #dbugfix issues or +1’ing them in the forums. This will add weight to the consideration of which ones to fix. And remember, include an issue number, otherwise it isn’t going to count!