Author Archives: Michael Parker

User Stories: Funkwerk

The deadline for the early-bird registration for DConf 2018 in Munich is coming up on March 17th. The price will go up from $340 to $400. If you’d like to go, hurry and sign up to save yourself $60. And remember, the NH Munich Messe hotel, the conference venue, is offering a special deal on single rooms plus breakfast for attendees.


A few of the DConf attendees are coming from a local company called Funkwerk. They’re a D shop that we’ve highlighted here on this blog in a series of posts about their projects (you’ll see one of their products in action if you take the subway or local train service in Munich).

In this post, we cap off the Funkwerk series with the launch of a new feature we creatively call “User Stories”. Now and again, we’ll publish a post in which D users talk of their experiences with D, not about specific projects, but about the language itself. They’ll tell of things like their favorite features, why they use it, how it has changed the way they write code, or anything they’d like to say that expresses how they feel about programming in D.

For this inaugural post, we’ve got three programmers from Funkwerk. First up, Michael Schnelle talks about the power of ranges. Next, Ronny Spiegel tells why generated code is better code. Finally, Stefan Rohe enlightens us on Funkwerk’s community outreach.

The power of ranges

Michael Schnelle has been working as a software developer for about 5 years. Before starting with D 3 years ago, he worked in (web) application development, mostly with Java, Ruby on Rails, and C++, and did threat modeling for applications. He enjoys coding in D and likes how it helps programmers write clean code.

In my experience, no matter what I am programming, I always end up applying functions to a set of data and filtering that data. Occasionally I also execute something with side effects in between. Let’s look at a simplified use case: transforming a given set of data and then filtering on a condition. I could simply write:

foreach(element; elements) {
  auto transformed = transform(element);
  if (metCondition(transformed)) {
     results ~= transformed;
  }
}

Using the power of std.algorithm, I can instead write:

filter!(element => metCondition(element))
       (map!(element => transform(element))(elements));

At this point, we have a mixture of functional and object-oriented code, which is quite nice, but still not quite as readable or easy to understand as it could be. Let’s combine it with UFCS (Uniform Function Call Syntax):

elements.map!(element => element.transform)
        .filter!(element => element.metCondition);

I really like this kind of code, because it is clearly self-explanatory. The foreach loop, on the other hand, only tells me how it is being done. If I look through our code at Funkwerk, it is almost impossible to find traditional loops.

But this only takes you one step further. In many cases, there happen to be side effects which need to be executed during the workflow of the program. For this kind of thing, the library provides functions like std.range.tee. Let’s say I want to execute something external with the transformed value before filtering:

elements
  .map!(element => element.transform)
  .tee!(element => operation(element))
  .filter!(element => element.metCondition)
  .array;

It is crucial that operations with side effects are only executed with higher-order functions that are built for that purpose.

int square(int a) { writefln("square value"); return a*a; }

[4, 5, 8]
  .map!(a => square(a))
  .tee!(a => writeln(a))
  .array;

The code above would print “square value” six times, because tee causes the underlying range’s front to be evaluated twice per element. It is possible to avoid this by using functions like std.algorithm.iteration.cache, but in my opinion, the nicer way is to avoid side effects in functions that are not meant for them.
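
For completeness, here is what the cache approach might look like (a minimal sketch, not code from the Funkwerk code base): inserting cache between map and tee means front is evaluated only once per element, so “square value” is printed three times instead of six.

import std.algorithm : cache, map;
import std.array : array;
import std.range : tee;
import std.stdio : writefln, writeln;

int square(int a) { writefln("square value"); return a*a; }

void main()
{
    // Sketch only: same example as above, with cache added.
    [4, 5, 8]
      .map!(a => square(a))
      .cache                  // front is evaluated once per element and stored
      .tee!(a => writeln(a))
      .array;
}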

In the end, D gives you the possibility to combine the advantages of object-oriented and functional programming, resulting in more readable and maintainable code.

Generated code is better code

Ronny Spiegel has worked as a professional software developer for almost 20 years. He started out using C and C++, but when he joined Funkwerk he really started to love the D language and the tools it provides to introspect code and to automate things at compile time.

In a previous blog post, I gave a short overview of the evolution of the accessors library. As you might imagine, I really like the idea of using the compiler to generate code; in the end this usually results in less work for me and, as a direct result, causes fewer errors.

The establishment of coding guidelines is crucial for a team in order to create maintainable software, and so we have them here at Funkwerk. There is a rule that every value object (or entity) has to implement the toString method in order to provide diagnostic output. The provided string shall be unambiguous so that it’s more like Python’s __repr__ than __str__.

Example:

StationMessage(GeneralMessage(4711, 2017-12-12T10:00:00Z), station="BAR", …)

The generated string should follow some conventions:

  • provide a way to uniquely reconstruct data from a string
    • start with the class name
    • continue with any potential superclasses
    • list all fields providing their name and value separated by a comma
  • be compact but still human readable (for developers)
    • skip the name where it matches the type (e.g. a field of type SysTime is called time)
    • skip the name if the field is called id (usually there’s an IdType used for type safety)
    • there’s some special output format defined for types like Date and SysTime
    • Nullable!T fields will be skipped if null, etc.

To format output in a consistent manner, we implemented a SinkWriter wrapping formattedWrite in a way that follows the listed conventions. Using this SinkWriter everywhere is the first step toward fully generating the toString method.
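
To give an idea of the shape of such a wrapper, here is a hypothetical sketch (not Funkwerk’s actual SinkWriter, which implements more of the conventions above): it takes the sink delegate and inserts the separating commas between entries.

// Hypothetical sketch only; Funkwerk's actual SinkWriter does more.
struct SinkWriter
{
    import std.format : formattedWrite;

    void delegate(const(char)[]) sink;
    bool first = true;

    // write!"station=%s"(value): compile-time format string
    void write(string fmt, Args...)(Args args)
    {
        separate();
        formattedWrite!fmt(sink, args);
    }

    // write("station=%s", value): runtime format string
    void write(Args...)(string fmt, Args args)
    {
        separate();
        formattedWrite(sink, fmt, args);
    }

    private void separate()
    {
        if (!first)
            sink(", ");
        first = false;
    }
}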

Unfortunately that’s not enough; it’s very common to forget something when adding a new field to a class. Today I stumbled across some code where a field was missing in the diagnostics output and that led to some confusion.

Using (template) mixins together with CTFE (Compile-Time Function Evaluation) and the provided type traits, D provides a powerful toolset which enables us to generate such functions automatically.

We usually implement an alternative toString method which uses a sink delegate as described in https://wiki.dlang.org/Defining_custom_print_format_specifiers. The implementation is a no-brainer and looks like this:

public void toString(scope void delegate(const(char)[]) sink) const
{
    import std.traits : Unqual;

    alias MySelf = Unqual!(typeof(this));

    sink(MySelf.stringof);
    sink("(");

    with (SinkWriter(sink))
    {
        write("%s", this.id_);
        write("station=%s", this.station_);
        // ...
    }

    sink(")");
}

This code is simple enough that it can be generalized like this:

public void toString(scope void delegate(const(char)[]) sink) const
{
    import std.traits : FieldNameTuple, Unqual;

    alias MySelf = Unqual!(typeof(this));

    sink(MySelf.stringof);
    sink("(");

    with (SinkWriter(sink))
    {
        static foreach (fieldName; FieldNameTuple!MySelf)
        {{
            mixin("const value = this." ~ fieldName ~ ";");
            write!"%s=%s"(fieldName, value);
        }}
    }

    sink(")");
}

The above is just a rough sketch of how such a generic function might look. For a class to use this generation approach, simply call something like

mixin(GenerateToString);

inside the class declaration, and that’s it. Never again will a field be missing in the class’s toString output.
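
To make the connection to the generic function above concrete, here is one hypothetical way GenerateToString could be declared (a sketch only, not Funkwerk’s actual implementation): a manifest constant holding the method as a token string, ready to be mixed into any class.

// Hypothetical sketch; not Funkwerk's actual implementation.
enum GenerateToString = q{
    public void toString(scope void delegate(const(char)[]) sink) const
    {
        import std.traits : FieldNameTuple, Unqual;

        alias MySelf = Unqual!(typeof(this));

        sink(MySelf.stringof);
        sink("(");

        with (SinkWriter(sink))
        {
            static foreach (fieldName; FieldNameTuple!MySelf)
            {{
                mixin("const value = this." ~ fieldName ~ ";");
                write!"%s=%s"(fieldName, value);
            }}
        }

        sink(")");
    }
};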

Generating the toString method automatically might also help us to switch from the common toString method to an alternative implementation. If more conventions are added over time, we will only have to extend the SinkWriter and/or the toString template, and that’s it.

To summarize: try to generate code where possible – it is less error-prone, and D supports you with a great set of tools!

Funkwerk and the D-Community

Stefan Rohe started the D-train at Funkwerk back in 2008. They have loved DLang since then and replaced D1-Tango with D2-Phobos in 2013. They are strong believers in open source and local communities, and are thrilled to see you all in Munich at DConf 2018.

Funkwerk is the largest D shop in southern Germany, so we hire D-velopers, mainly just through being known for programming in D. In order to give a little bit back to the D community at large and help the local community grow, Funkwerk hosted the founding edition of the Munich D Meetup.

The local community is important …

Munich Meetup at Brainlab

The meetup was founded in August 2016, 8 years after the first line of D code at Funkwerk was written. Since then, the Meetup has grown steadily to ~350 members. At that number, it is still not the biggest D Meetup, but it is the most visited and the most active. It provides a chance for locals in Munich to interact with like-minded D-interested people each month. And with an alternating level of detail and a different location each month, it stays interesting and attracts different participants.

… and so is the global community

To engage with the global community, Funkwerk is willing to open source some of its general-purpose D libraries. They can all be found under github.com/funkwerk, and some are registered in the DUB registry.

Among them are:

  • accessors – a library to auto generate getters and setters with UDAs
  • depend – a tool that checks actual import dependencies against a UML model of target dependencies
  • d2uml – reverse engineering of D source code into PlantUML class outlines

Feel free to use these and let us know how you like them.

The D Language Foundation at Open Collective

In its work guiding the development of D and promoting its adoption, the D Language Foundation is driven primarily by donations big and small. The money comes in from different sources, the most visible being those listed on the website’s donation page, and is put to use in different ways.

Donors typically receive an email thanking them for their generosity. Recently, we added a sponsors page to shine a light on those who have given and who are willing to have their names on public display. That and a line in the Vision Document about average monthly expenses are the only obvious bits of transparency in the process.

Today, the D Language Foundation is opening a new chapter in the donation story with our Open Collective page. According to OpenCollective.com, an open collective is,

A group of people with a shared mission that operates in full transparency.

The site allows us to set up packages that donors can choose from, with or without rewards, for one-time and recurring donations, at levels within reach of individuals and those more suited for corporate budgets. Donors can leave notes with their donations to tell us what they think of our work or what’s important to them. We can submit expenses to show how the money is being used, and set up fund drives for specific targets.

In short, Open Collective gives us new possibilities in raising money, spending it, and showing how it’s spent, while also providing more opportunities for D community members to participate. We could, for example, designate a specific amount to put toward the development of a particular language or ecosystem feature of importance to the community and ask community members to help us meet that goal. A perfect opportunity to contribute for those eager to see progress in areas that matter to them. Crowd-sourcing for a niche crowd.

Those who wish to remain in the shadows can still donate behind the scenes via our other sources. Additionally, I wouldn’t expect all Foundation expenses to be listed at Open Collective. We’re still new to this platform, so it will take us a bit of time to learn how to make it work best for all of us, but we have high hopes that it will prove beneficial in the long run.

On a related note, if you shop at Amazon, you can help us out by making your purchases via smile.amazon.com and choosing the D Language Foundation as your charity. A small percentage of your purchases through smile.amazon.com will go to the Foundation as long as it is selected as your charity. From now until March 31, the donation percentage is tripled.

Amazon Smile

Don’t forget, the State of D Survey is still open for a couple more days. If you haven’t completed it yet, please take the time to do so. We’re looking forward to seeing what comes of it.

The New New DIP Process

When I took on the role of DIP Manager last year, my number one goal was to clear out the queue. I made a few revisions to the process and got busy. Over the next few months, things went along fairly well, not so much from anything I did as from the quality of the submissions. But at some point, things broke down and the process stalled.

Near the end of the year, Andrei asked me to make two specific changes to the process. One of them was to come up with a different approach for handling the final review. To that end, he suggested I look at how other languages handle the review of their enhancement proposals.

Before I started, I identified some other areas of the process that were problematic so that I could keep an eye out for ideas on how to shore them up. Might as well overhaul the whole process rather than one small part of it. So I put the entire DIP process on hold until the new process was ready to go.

The main thing I learned from looking at other language processes was that about the only things they have in common are that they use GitHub repositories to store their proposals and that new submissions are made through PRs. Beyond that, there’s quite a bit of variety in how they handle review and evaluation.

Ultimately, I decided the basic framework we already had was well-suited for our circumstances. It just needed some serious tweaking to iron out the problem spots. So I set out to write up a new procedure document to address the things that needed addressing. When it was done, it took a while to get the seal of approval, but it finally came through and the process is no longer stalled.

So first, before describing the major changes, it will help to understand their motivation.

What went wrong

One of the earliest stumbles I had as DIP Manager was a mix-up over DIP 1006. I had gotten it into my head that the author intended to rewrite it before moving forward. The reality was that I had informed him before DConf that I would get it going at the end of May. The result was that it sat there for several months before I realized my error.

Another problem came with DIP 1009. There was an issue with the way it was written – the style didn’t meet the standard laid out in the Guidelines document. This led to multiple email exchanges, with massive delays and more misunderstandings, that resulted in the DIP being stuck in limbo for quite a while.

The communication problem over DIP 1009 was what prompted the process revision. For the Final Review of each DIP, I was acting as the middle-man between Walter and Andrei on one side and the DIP Author on the other. It worked well enough as long as little further effort on the DIP was needed, but as soon as there were questions that needed answering or more work to do, it became inefficient, cumbersome, and prone to misunderstanding.

Perhaps the biggest issue of all was time. DIP 1006 sat in Draft Review with nothing from me for several months. DIPs 1006, 1009, and 1011 have been in the Final Review stage for ages. There’s no reason any DIP author should have to wait for months on end with no feedback, or only vague promises, no matter which stage of the process a DIP is in. It’s discouraging and demotivating. The process should require some motivation and effort from DIP authors, but it should also require a commitment from the other side to keep the authors informed and to get each DIP through from beginning to end as efficiently as possible.

Some of these problems could have been avoided if I had taken a different view of my role. I saw the role of DIP Manager more as that of a Shepherd than a Gatekeeper. The ultimate fate of a DIP rested on the Author’s shoulders, not mine. The Guidelines were “more what you call guidelines than actual rules”. After I made my revisions to the Guidelines document early on, they fell right out of my head and I never looked at the document again.

Righting the wrongs

The new Procedure document outlines the new process. What follows is a summary of the biggest changes.

A minor issue is that there was some confusion about the existing review-stage names. There are now four review stages rather than three: Draft Review, Community Review, Final Review, and Formal Assessment. The Draft Review is the same as before. The Community Review is the new name for the old Preliminary Review. The old Final Review, which had two parts, has been split out into the Final Review and the Formal Assessment – the former is the last chance for the community to leave feedback, and the latter is Walter and Andrei’s decision round.

For all but the Draft Review, each stage specifies a maximum amount of time that a DIP can go without progress. For example, a DIP may remain in the Post-Community Round N state for 180 days, and a DIP in Formal Assessment should receive a final disposition within 30 days. The document defines the steps that must be taken when these deadlines are not met.

Related, though not in the document, is what I will do to keep to the deadlines. I’ll be making use of the calendar in the D Foundation’s Google account to post the start and finish date of each stage for each DIP. When a DIP is between stages, I’ll set milestone dates so that the DIP Author and I can have a clear target to aim for. If we’re on the same page, there will be less opportunity for uncertainty and misunderstanding.

The document provides for a new process for handling the Formal Assessment. No longer will I be a middleman between two email chains. Now, Walter and Andrei will provide their feedback on a private gist, with direct participation by the DIP Author. This should help things move more quickly and will eliminate (or greatly reduce) the chance of anyone (me in particular) causing more delay by getting things mixed up.

Another change is the requirement for a Point of Contact (POC). From here on out, every DIP must have a POC. For a single-author DIP, the DIP Author is the POC. If there are multiple authors, they must select one from among themselves. The need for this came to light after a misunderstanding that arose from the communication problem. The POC must commit to seeing the DIP through to the end of the process. The document outlines what happens to a DIP when the POC becomes unavailable.

Another change that’s not outlined in the document is in how I view my role as DIP Manager. From here on out, I will consider the guidelines as actual rules. I’ll do my best to make sure a DIP meets the standards expected in terms of language and style before it leaves the Draft Review stage. We can tweak it as we go, of course, but never again should a DIP be sent back for revision because it’s too informal.

Open to refinement

The new Procedure document and the undocumented tweaks to my process are the result of lessons learned over several months. That doesn’t mean they’re perfect. We’ll always be open to suggestions on how to patch up any holes that are identified. Not every change was mentioned above, so please read the document for the details.

Hopefully, the three DIPs currently awaiting a final disposition will be resolved before too much longer. After that, DIP 1012 will be moved forward for the Final Review and Formal Assessment to become the first DIP to go through the new gist-based review. DIP 1013 (which will likely be the one introducing binary assignment operators for properties), will be the first test-case for the new process in its entirety. Let’s all keep an eye open for what works and what needs work.

And to everyone, thanks for your patience while I went through my growing pains and we got the mess sorted out. Now that the train is back on the tracks, I’ll do my best to keep it moving.

DMD 2.079.0 Released

The D Language Foundation is happy to announce version 2.079.0 of DMD, the reference compiler for the D programming language. This latest version is available for download in multiple packages. The changelog details the changes and bugfixes that were the product of 78 contributors for this release.

It’s not always easy to choose which enhancements or changes from a release to highlight on the blog. What’s important to some will elicit a shrug from others. This time, there’s so much to choose from that my head is spinning. But two in particular stand out as having the potential to result in a significant impact on the D programming experience, especially for those who are new to the language.

No Visual Studio required

Although it has only a small entry in the changelog, this is a very big deal for programming in D on Windows: the Microsoft toolchain is no longer required to link 64-bit executables. The previous release made things easier by eliminating the need to configure the compiler; it now searches for a Visual Studio or Microsoft Build Tools installation when either -m32mscoff or -m64 is passed on the command line. This release goes much further.

DMD on Windows now ships with a set of platform libraries built from the MinGW definitions and a wrapper library for the VC 2010 C runtime (the changelog only mentions the installer, but this is all bundled in the zip package as well). When given the -m32mscoff or -m64 flags, if the compiler fails to find a Windows SDK installation (which comes installed with newer versions of Visual Studio – with older versions it must be installed separately), it will fall back on these libraries. Moreover, the compiler now ships with lld, the LLVM linker. If it fails to find the MS linker, this will be used instead (note, however, that the use of this linker is currently considered experimental).

So the 64-bit and 32-bit COFF output is now an out-of-the-box experience on Windows, as it has always been with the OMF output (-m32, which is the default). This should make things a whole lot easier for those coming to D without a C or C++ background on Windows, for some of whom the need to install and configure Visual Studio has been a source of pain.
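
As a quick sanity check (assuming DMD 2.079.0 or later is installed and on the PATH, and a Hello World program saved as hello.d), the following should now work on a Windows machine with no Visual Studio installation at all:

dmd -m64 hello.d
hello.exe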

Automatically compiled imports

Another trigger for some new D users, particularly those coming from a mostly Java background, has been the way imports are handled. Consider the venerable ‘Hello World’ example:

import std.stdio;

void main() {
    writeln("Hello, World!");
}

Someone coming to D for the first time from a language that automatically compiles imported modules could be forgiven for assuming that’s what’s happening here. Of course, that’s not the case. The std.stdio module is part of Phobos, the D standard library, which ships with the compiler as a precompiled library. When compiling an executable or shared library, the compiler passes it on to the linker along with any generated object files.

The surprise comes when that same newcomer attempts to compile multiple files, such as:

// hellolib.d
module hellolib;
import std.stdio;

void sayHello() {
    writeln("Hello!");
}

// hello.d
import hellolib;

void main() {
    sayHello();
}

The common mistake is to do this, which results in a linker error about the missing sayHello symbol:

dmd hello.d

D compilers have never considered imported modules for compilation. Only source files passed on the command line are actually compiled. So the proper way to compile the above is like so:

dmd hello.d hellolib.d

The import statement informs the compiler which symbols are visible and accessible in the current compilation unit, not which source files should be compiled. In other words, during compilation, the compiler doesn’t care whether imported modules have already been compiled or are intended to be compiled. The user must explicitly pass either all source modules intended for compilation on the command line, or their precompiled object or library files for linking.
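
For example, the precompiled route might look like this (a sketch assuming a POSIX system, where the generated object file gets a .o extension):

dmd -c hellolib.d
dmd hello.d hellolib.o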

It’s not that adding support for compiling imported modules is impossible. It’s that doing so comes with some configuration issues that are unavoidable thanks to the link step. For example, you don’t want to compile imported modules from libFoo when you’re already linking with the libFoo static library. This is getting into the realm of build tools, and so the philosophy has been to leave it up to build tools to handle.

DMD 2.079.0 changes the game. Now, the above example can be compiled and linked like so:

dmd -i hello.d

The -i switch tells the compiler to treat imported modules as if they were passed on the command line. It can be limited to specific modules or packages by passing a module or package name, and modules or packages can be excluded by preceding the name with a dash, e.g.:

dmd -i=foo -i=-foo.bar main.d

Here, any imported module whose fully-qualified name starts with foo will be compiled, unless the name starts with foo.bar. By default, -i means to compile all imported modules except for those from Phobos and DRuntime, i.e.:

-i=-core -i=-std -i=-etc -i=-object

While this is no substitute for a full-on build tool, it makes quick tests and programs with no complex configuration requirements much easier to compile.

The #dbugfix Campaign

On a related note, last month I announced the #dbugfix Campaign. The short of it is, if there’s a D Bugzilla issue you’d really like to see fixed, tweet the issue number along with #dbugfix, or, if you don’t have a Twitter account or you’d like to have a discussion about the issue, make a post in the General forum with the issue number and #dbugfix in the title. The core team will commit to fixing at least two of those issues for a subsequent compiler release.

Normally, I’ll collect the data for the two months between major compiler releases. For the initial batch, we’re going three months to give people time to get used to it. I anticipated it would be slow to catch on, and it seems I was right. There were a few issues tweeted and posted in the days after the announcement, but then it went quiet. So far, this is what we have:

DMD 2.080.0 is scheduled for release just as DConf 2018 kicks off. The cutoff date for consideration during this run will be the day the 2.080.0 beta is announced. That will give our bugfixers time to consider which bugs to work on. I’ll include the tally and the issues they select in the DMD release announcement, then they will work to get the fixes implemented and the PRs merged in a subsequent release (hopefully 2.081.0). When 2.080.0 is released, I’ll start collecting #dbugfix issues for the next cycle.

So if there’s an issue you want fixed that isn’t on that list above, put it out there with #dbugfix! Also, don’t be shy about retweeting #dbugfix issues or +1’ing them in the forums. This will add weight to the consideration of which ones to fix. And remember, include an issue number, otherwise it isn’t going to count!

The State of D 2018 Survey

NOTE: The survey is closed. Thanks to everyone who participated!


Strange things are afoot at the D Language Foundation. Odd noises and varicolored lights have been reported emanating from the cellar into the wee hours of the morning. Foundation members have been sighted, stumbling dazed and bleary-eyed in and out of the front door, arms full of mysterious black boxes. Neighbors whisper, and rumor has it that the spawn of so much secretive activity is only one arcane ritual away from seeing the light of day.

How right they are! For the past few weeks, the initiate Sebastian Wilzbach has devoted his energies to studying the Book of Modern Arcana in preparation for the ritual known as the State of D 2018 Survey. With feedback from those already steeped in the Dark arts, he has been refining the incantations of the ritual so that they prove most effective. Now, at long last, his preparations are complete and the ritual has been unleashed upon the world!

Upon the D community anyway.

And now it’s on you. This is your chance to turn your praise, complaints, and nitpicks into action. By participating in the State of D survey, you’ll be providing guidance to the D Language Foundation to help identify both short and long-term goals for the future development of D and its ecosystem.

A handful of initiatives are already coming together in the cellar. You’ll be able to read about them in more detail here as they are announced in the coming days, weeks, and months. The 2018 H1 document, presenting an overview of the current focus, will be announced soon. This survey will help build on existing plans, fill in the details of the general goals, and identify any course corrections that are necessary for 2018 H2 and beyond.

The Foundation is eager to address the issues that matter most to the community. Members frequently make their thoughts known in the forums, but the conversations can be long, sprawling, wandering, and hard to follow. The State of D Survey will allow the core team to see at a glance what’s going right and what’s not, and to focus their attention where it’s needed most. It could take 15 minutes or more of your time to complete, depending on the amount of thought you put into your answers. If you care about the D programming language, it’s well worth every minute.

We’ll leave the survey open for at least two weeks. A short while after it closes, we’ll publish the results here. From then on, the blog will cover what’s happening in response, providing updates on progress at reasonable intervals. Then next year we’ll do it all again.

Remember, D is a community-driven language, but not everyone is able to contribute as much as they would like. Whether you’re a frequent contributor or just getting started writing D programs, this is your chance to help make D an even better language than it is today.

Make your opinion count and take the survey!

DConf 2018 Munich: The Venue

The deadline for DConf 2018 submissions is this Sunday. If you’re on the fence about sending in a proposal, don’t still be poised there when midnight AOE strikes on the 25th! Come down before then on the submission side. If you’re selected to speak, you may be eligible for reimbursement for your hotel and travel expenses (reasonable expenses will be covered). This is our first time in Munich, and if you can pad out your visit by two or three days, there’s a lot to see while you’re there.

The venue is the NH Munich Messe hotel, located in the Zamdorf area of the city.

There’s a bus stop right outside that will get you to the Marienplatz and the New Town Hall, the world-famous center of the Bavarian capital, in short order. Not far from there, you’ll find the original Hofbräuhaus, where servers in traditional costumes pamper thousands of daily visitors from Munich and around the world, who come for the regional cuisine, music, folk dances, and historic atmosphere.

After you see the Old Town, be sure to make time for the modern world. The Deutsches Museum, which according to Wikipedia is the world’s largest science and technology museum, is a good place to start. With over 28,000 exhibits, it may be difficult to pull yourself away.

There are plenty of daytrip destinations outside of the city. One must-see spot is Neuschwanstein castle, one of the most recognizable structures in the world. World War II history buffs may be interested in a trip to Nuremberg. There are plenty of options for guided tours that can get you to these and other locations and back in a day, but it’s not difficult to get there on your own. Sites like TripAdvisor can help with the planning.

As for the hotel:

All the 253 rooms have just been refurbished, so you can expect stylish, comfortable bases. Nice touches include free Wi-Fi and pillow menus. Other highlights include a restaurant serving Bavarian dishes, a stylish lobby bar, and a compact fitness center. The Hotel also has Sky TV, allowing you to catch up on the day’s sporting events.

There’s a bar with a terrace which has the look and feel of a typical Bavarian beer garden. It’s surrounded by a little garden and is a great spot to enjoy a glass of wine or a light meal in the sunshine.

For the health-conscious, they also have a gym that’s open from 2:00 pm to 11:00 pm, and it can be opened at other times upon request. It was refurbished in 2015 and includes a sauna.

Most importantly for us, they are offering DConf attendees a discount on single rooms. Drop a line to reservierungen@nh-hotels.com to take advantage of this offer.

When the submission deadline passes this weekend, the next date to focus on is March 17th. That’s when the early-bird registration discount ends. Head over to the registration page before then!

Project Highlight: The D Community Hub

As has been stressed on this blog before, D is a community-driven language. Because of that, the ecosystem depends on the work of volunteers who are willing to contribute their time and open their projects to the community at large. From IDE and editor plugins to libraries in the DUB registry, it’s all down to the efforts of people who (usually) receive no monetary reward for their work.

There are some inherent downsides to that reality. Sometimes projects are abandoned. Sometimes they aren’t updated as frequently as users would like. This can become an issue for those who depend upon these projects, but it’s alleviated by the fact that most D projects are open source and their repositories are publicly available. To keep a project alive and up-to-date only requires more volunteers willing to pitch in.

That’s the motivation behind the D Community Hub (dlang-community) at GitHub. According to Sebastian Wilzbach, it started with Brian Schott’s popular tools used by several IDE and editor plugins:

There were maintenance issues with Brian’s (aka Hackerpilot) awesome projects. He has a full-time job and often could only respond to simple issues every few weeks. This meant that simple bug fix PRs sat in the queue for quite a while. For example, there was one case where the same PR to fix the Windows build script was submitted by three different people (there was no Windows CI at the time).

Brian’s projects weren’t the only ones that motivated the idea. Sebastian and Petar Kirov maintain the DLang Tour, and some of the projects they depend upon were either inactive or slow to update. However, Brian’s tools are widely used, so they started with him. Eventually, they convinced him to move some of his projects to the new organization and others followed.

Sebastian lays out the following benefits that have come from moving multiple projects from disparate developers under an umbrella group:

  • Common best policies (e.g. all repositories have GitHub branch protection)
  • No need to fork an inactive repository – work can be shared easily.
  • No dependence on a single person who might be busy or on vacation (this is especially important for swiftly pulling and releasing bug fixes)
  • One common location whenever updates are required (e.g. package bumps or deprecation fixes)
  • Many of the projects are enabled on the Project Tester (their test suite is run on every PR for the DMD, DRuntime, Phobos, Dub, and tools repositories to prevent regressions) – this is possible because many people have merge rights in case an improvement in the compiler finds critical bugs or deprecations are moved forward
  • Shared knowledge (e.g. all projects support “auto-merge” like the dlang repositories)
  • Automation with bots – Mark Rz (@skl131313) created a bot that automatically triggers update PRs whenever dependencies are updated (some of the projects in dlang-community still support builds with only git submodules and make)
  • Less overhead for automation with CIs (everyone can connect a repo to a third-party provider or restart a failing CI job)

It has also resulted in increased participation. For example, other D users have joined the group, and Sociomantic Labs (the D shop in Berlin that hosted the 2016 and 2017 editions of DConf) has taken over the release process for dfmt, Brian’s tool for formatting D source code.

There are currently 22 repositories in the dlang-community organization, including the following:

  • DCD (the D Completion Daemon) – an autocomplete program that is used by several D IDE and editor plugins
  • dfmt – a formatter for D source code, also used by many IDE and editor plugins
  • D-Scanner – a tool for analyzing D source code
  • dfix – a tool for automatically upgrading D source code
  • libdparse – a library for lexing and parsing D source code
  • drepl – a DMD-based REPL for D
  • stdx-allocator – a frozen version of std.experimental.allocator (which is due for an overhaul)
  • containers – a set of containers backed by stdx.allocator to easily switch between different allocation strategies, with or without the GC
  • D-YAML – A YAML parser and emitter
  • harbored-mod – a documentation generator that supports both D’s built-in Ddoc syntax and Markdown

In addition to other D projects, there’s a repository set up specifically to discuss the dlang-community organization via GitHub issues, and repositories that contain artwork. If you decide to use any of these projects, the discussion repository is the place to ask for help when you need it.

Other projects may be added in the future. According to Sebastian, there are a few questions that form a set of loose criteria for inclusion under the dlang-community umbrella.

  • Is there enough interest from the general public so that it is “worth maintaining”?
  • Is there a similar library with active development out there?
  • Is at least one DLang community member competent in the domain covered by the project? If not, is there anyone who’s willing to fill the role?

Sebastian and the others are looking to add a few features over time. These include:

  • More automatic documentation builds
  • Automatic build of binaries on new tags (especially for Windows)
  • d-apt: Sociomantic is working on moving d-apt to GitHub and enabling full automatic CI builds for it.
  • dfmt: Leandro Lucarella / Sociomantic is introducing neptune and a proper release process

For anyone interested in joining the dlang-community organization, there are two options. If you are already a well-known participant in the D community, simply ping one of the existing members for merge rights. For anyone else, the best approach is to start contributing to one or more of the dlang-community projects to build up trust. At some point, frequent trustworthy contributors will be welcomed into the fold.

As for the current contributors, Sebastian says:

There are many people working behind the scenes on the dlang-community libraries. A special thanks goes to the active reviewers who make it possible that your potential PR gets merged in a timely manner.

  • Basile Burg
  • Brian Schott
  • Jan Jurzitza
  • Leandro Lucarella
  • Martin Nowak
  • Petar Kirov
  • Richard Andrew Cattermole
  • skl131313
  • Stefan Koch

If you have or know of a D project that is suffering from a lack of attention, bringing it to the dlang-community might be the way to breathe new life into it. Don’t be shy in asking for help.

The #dbugfix Campaign

Why so many bugs?

Every major release of DMD comes with a list of closed issues from Bugzilla. For example, looking at the changelog for DMD 2.078.0 shows the following counts for closed regressions, bugs, and enhancements: 51 for the compiler, 37 for the standard library, 6 for the runtime, 17 for the website, and 1 for the linker. That’s 112 total issues, the majority related to the compiler. The total number of closed issues fluctuates between releases, but the compiler and standard library normally get the lion’s share.

This isn’t news to anyone who regularly follows DMD releases. But spend enough time on the forums and you’ll eventually see someone under the impression that bugs aren’t getting fixed. They cite the number of open issues in the database, or the age of some of the open issues, or the fact that they can’t find any formal process for fixing bugs. In reaction, it’s easy to point to the changelogs, or cite the number of closed issues in the database, or bring up the number of open issues in other language compilers. And, of course, to explain once again that this is a volunteer community where people work on the things that matter to them, and organizing groups to complete specific tasks is like herding cats.

That’s all quite reasonable, but really isn’t going to matter to someone who found the motivation to check D out, but is still looking for the motivation to stay. For me personally, I really don’t care how many issues are in the database, or the age of the oldest. All I care about is that it works for me. But I’m already invested in D. I don’t need to be motivated to stick around. And while I wouldn’t use a bug database as criteria to judge a new language, I can see that others do. It’s akin to looking at a stable repository on GitHub and dismissing it as abandoned because of its lack of recent activity. If you don’t see the whole picture, you can’t make an informed judgement.

If perception were the only issue, then it would simply be a matter of web design and PR. However, there have been, and are, people invested in D who have become frustrated because issues they reported, or that directly affect them, have languished in Bugzilla for months or even years. This can’t simply be dismissed as not seeing the whole picture. This is a matter of manpower and process. A number of issues are still open because there isn’t a simple fix, or perhaps because no one has taken an interest. The set of people who can solve complex issues is small. The set of people willing to work on issues that aren’t interesting to them is smaller. But again, how do you get a disparate group of volunteers of varying skill levels to devote their free time to fixing other peoples’ problems?

This is something the D community has struggled with as it has grown. There are no easy, comprehensive solutions without a full-time team of dedicated personnel, something we simply don’t have. However, it’s possible that there are opportunities to take baby steps and chip away at some of these issues without the complications inherent in herding cats.

The #dbugfix campaign

To recap, there are two primary complaints about the D bug-fixing process (such as it is):

  • Too many (old) bugs in the database
  • Bugs you care about aren’t getting fixed

In an effort to alleviate these problems, and as one baby step toward chipping away at them, I’m announcing the #dbugfix campaign.

It works like this. If there is an issue in D’s Bugzilla that you want to see fixed, whatever the reason (maybe it’s blocking you, or it’s annoying you, or it’s an enhancement you want, or you think it’s too old – it doesn’t matter), then either tweet out the issue number with #dbugfix in the tweet, or create a topic in the general forum with the issue number and #dbugfix in the subject line. I’ll monitor both Twitter and the forums and keep a running tally of issue numbers.

A week before a major version of DMD is released (starting with 2.080.0, which is slated for May 1), I’ll look at the tally and find the top five issues. I’ve already gotten people to commit to fixing at least two of the top five. That doesn’t mean only two. It could well be more. It depends on the complexity of the issues and how many other volunteers we can scrounge up. Hopefully, the two (or more) fixed bugs will be ready to be merged in the subsequent major release.

In the blog post announcing each major release, I’ll report on which bugs in the current release were fixed as a result of the campaign and announce the two selected for the subsequent release. If any of the top five from the previous release were not fixed, I’ll call for volunteers to help so that they can be squashed as soon as possible.

Yes, I know. We enabled voting on Bugzilla issues and that didn’t change anything. That’s because there was no real commitment to fixing any of the highest-voted issues. The votes simply served as a guideline for the people browsing the database, looking for issues to fix. For this campaign, there really are people committed to fixing at least two of the issues that float to the top for every major release.

But two is not a lot! No, it isn’t. But it also isn’t zero. As I mentioned at the top of this post, dozens of issues are already fixed with each major DMD release. The problem (for those who see it as such) is that there’s currently next to zero community involvement in deciding which issues get fixed. This campaign gives the community more input into the selection process and it provides public updates on the status of that process. It is my hope that, in addition to changing perception and chipping away at the bug count, it encourages more people to help fix bugs.

If you would like to volunteer your time and knowledge to helping out with this campaign and increase the number of #dbugfix bugs fixed in each release, please email me at aldacron@gmail.com. For everyone else, I’ve got a search for #dbugfix set up in my Twitter client, so start tweeting!

DConf 2018: Register Now!

It was the middle of November when DConf 2018 was announced here on this blog in a Q & A session with Andrei Alexandrescu. Since then, the DConf train has slowly been building up steam as things have been happening behind the scenes. Now it’s full steam ahead!


The venue

DConf 2018 is being hosted at the NH München Messe hotel. They’re offering a discount (single room, breakfast included) to all conference attendees. If you’d like to cut out the commute time between your hotel and DConf, everything you need to take advantage of the discount and get to the hotel is over at the DConf 2018 venue page.

The registration fees

The cost of general admission to DConf 2018 is US $400. A 15% early-bird discount is available from now until March 17. This year, there’s a special deal for past attendees. If you signed up for DConf 2017, the 15% discount doesn’t go away in March. For you, it applies right up to the regular registration deadline. Whenever you’re ready to sign up, head on over to the DConf 2018 registration page where you can pay via PayPal or Eventbrite.

The invited keynote speaker

The D Language Foundation is excited to announce that Martin Odersky, the inventor of the Scala language, a professor at EPFL in Lausanne, Switzerland, and a founder of Lightbend, is this year’s invited keynote speaker. He’ll be presenting a talk titled, “How to Abstract Over Context”, in which he’ll “argue that implicit parameters as they are found in Scala are a canonical way to express context and that implicit function types are the right way to abstract over it.” We’re looking forward to it!

The partners

This time around, the conference is being hosted by QA Systems, a provider of tools for automating unit testing, code coverage, integration testing, and static analysis, in conjunction with HLMC, a company specializing in the organization of IT events. They’re working hard to ensure DConf 2018 is a success. We know it will be.

The call for submissions

Don’t forget, we’re taking submissions for D-language related papers, talks, demos, panels, and research reports until February 25. We’re eager to hear about what’s happening out there in the world of the D programming language. Put your proposal together and send it to foundation@dlang.org for consideration. If you’ve never submitted to DConf before, please give the guidelines a look over before you do so.

The uninitiated

We’re eager to see new faces this year, especially those who know little or nothing about the D programming language. If you or someone you know hasn’t yet figured out what all the fuss is about, we want a chance to show you. The D language community are a friendly bunch, happy to partake in engaging and intelligent conversation well into the night. And we love to meet new people! Every DConf is a chance to reinforce old bonds and forge new ones. You may arrive as a stranger, but you’ll leave as a friend.

The months ahead

As May 2 draws closer, keep an eye on this blog for more DConf 2018 updates, including posts from our partners, scheduled speakers, and the D Language Foundation.

Project Highlight: BSDScheme

Last year, Phil Eaton started working on BSDScheme, a Scheme interpreter that he ultimately intends to bring up to full Scheme R7RS support. In college, he had completed two compiler projects in C++ for two different courses. One was a Scheme to Forth compiler, the other an implementation of the Tiger language from Andrew Appel’s ‘Modern Compiler Implementation’ books.

I hadn’t really written a complete interpreter or compiler since then, but I’d been trying to get back into language implementation. I planned to write a Scheme interpreter that was at least bootstrapped from a compiled language so I could go through the traditional steps of lexing, parsing, optimizing, compiling, etc., and not just build a language of the meta-circular interpreter variety. I was spurred to action when my co-worker at Linode, Brian Steffens, wrote bshift, a compiler for a C-like language.

For his new project, he wanted to use something other than C++. Though he knows the language and likes some of the features, he overall finds it a “complicated mess”. So he started on BSDScheme using C, building generic ADTs via macros.

As he worked on the project, he referred to Brian’s bshift for inspiration. As it happens, bshift is implemented in D. Over time, he discovered that he “really liked the power and simplicity of D”. That eventually led him to drop C for D.

It was clear it would save me a ton of time implementing all the same data structures and flows one implements in almost every new C project. The combination of compile-time type checking, GC, generic ADT support, and great C interoperability was appealing.

He’s been developing the project on Mac and FreeBSD, using LDC, the LLVM-based D compiler. In that time, he has found a number of D features beneficial, but two stand out for him above the rest. The first is nested functions.

They’re a good step up from C, where nested functions are not part of the standard and only unofficially supported (in different ways) by different compilers. C++ has lambdas, but that’s not really the same thing. It is a helpful syntactic sugar used in BSDScheme for defining new functions with external context (the names of the parameters to bind).
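
For those unfamiliar with the feature, here’s a generic illustration (not code from BSDScheme): a nested function can directly capture the locals of its enclosing function.

// Generic illustration of D's nested functions; not BSDScheme code.
import std.stdio : writeln;

void defineProcedure(string[] parameterNames)
{
    // The nested function closes over parameterNames from the enclosing scope.
    void bind(string[] arguments)
    {
        foreach (i, name; parameterNames)
            writeln(name, " = ", arguments[i]);
    }

    bind(["1", "2"]);
}

void main()
{
    defineProcedure(["x", "y"]);
}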

As for the second, hold on to your seats: it’s the GC.

The existence of a standard GC is a big benefit of D over C and C++. Sure, you could use the Boehm GC, but how that works with threads is up to you to discover. It is not fun to do prototyping in a GC-less language because the amount of boilerplate distracts from the goals. People often say this when they’re referring to Python, Ruby, or Node, but D is not at all comparable for (among) a few reasons: 1) compile-time type-checking, 2) dead-simple C interop, 3) multi-processing support.

Spend some time in the D forums and you’ll often find newcomers from C and C++ who, unlike Phil, have a strong aversion to garbage collection and are actively seeking to avoid it. You’ll also find replies from long-time D coders who started out the same way, but eventually came to embrace the GC completely and learned how to make it work for them. The D GC can certainly be problematic for certain types of software, but it is a net win for others. This point is reiterated frequently in the GC series on this blog, which shows how to get the most out of the GC, profile it, and mitigate its impact if it does become a performance problem.

As Phil learned the language, he identified areas for improvement in the D documentation.

Certainly it is advantageous compared to the C preprocessor that there is not an entirely separate language for doing compile-time macros, but the behavior difference and transition guides are missing or poorly written. A comparison between D templates and C++ templates (in all their complexity) could also be a great source of explanation.

We’re always looking to improve the documentation and make it more friendly to newcomers of all backgrounds. The docs are often written by people who are already well-versed in the language and its ecosystem, making blind spots somewhat inevitable. Anyone in the process of learning D is welcome and encouraged to help improve both the Language and Library docs. In the top right corner of each page are two links: “Report a bug” and “Improve this page”. The first takes you to D’s bug tracker to report a documentation bug, the second allows anyone with a logged-in GitHub account to quickly fork dlang.org, edit the page online, and submit a pull request.

In addition to the ultimate goal of supporting Scheme R7RS, Phil plans to add FFI support in order to allow BSDScheme to call D functions directly, as well as support for D threads and an LLVM-based backend.

Overall, he seems satisfied with his decision to move to D for the implementation.

I think D has been a good time investment. It is a very practical language with a lot of the necessary high-level aspects and libraries for modern development. In the future, I plan to dig more into the libraries and ecosystem for backend web systems. Furthermore, unlike with C or C++, so far I’d feel comfortable choosing D for projects where I am not the sole developer. This includes issues ranging from prospective ease of onboarding to long-term performance and maintainability.

A big thanks to Phil for taking the time to contribute to this post (be sure to check out his blog, too). We’re always happy to hear about new projects in D. BSDScheme is released under the three-clause BSD license, so it’s a great place to start for anyone looking for an interesting open-source D project to contribute to or learn from. Have fun!