
A Gas Dynamics Toolkit in D

The Eilmer flow simulation code is the main program in our collection of gas dynamics simulation tools. An example of its application is shown here: the simulation of hypersonic flow over the BoLT-II research vehicle, which is to be flown in 2022.

BoLT-II simulation with steady-state variant of the Eilmer code. Flow is from bottom-left to top-right of the picture. Only one quarter of the vehicle surface, coloured grey, is shown. Several slices through the flow domain are coloured with the local Mach number, with blue for low values and red for high values. Several streamlines, drawn in black, start at the blunt leading edge of the vehicle and follow the gas flow along the vehicle surface. Image produced by Kyle Damm.

Some history

This simulation program, originally called cns4u, started as a relatively small C program that ran on the Cray-Y/MP supercomputer at NASA Langley Research Center in 1991. A PDF of an early report with the title ‘Single-Block Navier-Stokes Integrator’ can be found here. It describes the simple finite-volume formulation of the code that allows simulation of a nonreacting gas on a single, structured grid. Thirty years on, many capabilities have been added through the efforts of a number of academic staff and students. These capabilities include high-temperature thermochemical effects with reacting gases and distributed-memory parallel simulations on cluster computers. The language in which the program was written changed from C to C++, with connections to Tcl, Python and Lua.

The motivation for using C++ in combination with the scripting languages was to allow many code variations and user programmability so that we could tackle any number of initially unimagined gas-dynamic processes as new PhD students arrived to do their studies. By 2010, the Eilmer3 code (as it was called by then) was sitting at about 100k lines of code and was growing. We were, and still are, mechanical engineers and students of gas dynamics first and programmers second. C++ was a lot of trouble for us. Over the next four years, C++ became even more trouble as the Eilmer3 code grew to about 250k lines of code and many PhD students used it to do all manner of simulations for their thesis studies.

Also in 2010, a couple of us (PAJ and RJG), living in different parts of the world (Queensland, Australia and Virginia, USA), came across the D programming language and took note of Andrei Alexandrescu’s promise of stability into the future. Here was a potential C++ replacement that we could use to rebuild our code and remain somewhat sane. We each bought a copy of Andrei’s book and experimented with the D language to see if it really was the C++-done-right that we wished for. One of us still has his copy of the initial printing of Andrei’s book, the one without his name on the front cover.

Rebuilding in D

In 2014 we got serious about using D for the next iteration of Eilmer and started porting the core gas dynamics code from C++ to D. Over the next four years, in between university teaching activities, we reimplemented much of the Eilmer3 C++ code in D and extended it. We think that this was done to good effect. This conference paper, from late 2015, documents our initial port of the structured grid solver. (A preprint is hosted on our site.) The Eilmer4 program is as fast as the earlier C++ program but is far more versatile while being implemented in fewer lines of code. It now works with unstructured as well as structured grids, has a new flexible boundary condition model and a high-temperature thermochemistry module, and in the past two years has gained the Newton-Krylov-accelerated steady-state solver that was used to produce the simulation shown above. And importantly for us, with the code now being in D, we have many fewer WTF moments.

If you want more details on our development of the Eilmer4 code in D, we have the slides from a number of presentations given to the Centre for Hypersonics over the past six years.

Features of D that have been of benefit to us include:

  • Template programming that other mechanical engineers can understand (thanks Walter!). Many of our numerical routines are defined to work with numbers that we define as an alias to either double or Complex!double values (see the sketch after this list). This has been important to us because we can use the same basic update code and get the sensitivity coefficients via finite differences in the complex direction. We think this saved us a large number of lines of code.

  • String mixins have replaced our use of the M4 preprocessor to generate C++ code in Eilmer3. We still have to do a bit of head-scratching while building the code with mixins, but we have retained most of our hair—something that we did not expect to do if we continued to work with C++.

  • Good error messages from the compiler. We often used to be overwhelmed by the C++ template error messages that could run to hundreds of lines. The D compilers have been much nicer to us and we have found the “did you mean” suggestions to be quite useful.

  • A comprehensive standard library in combination with language features such as delegates and closures that allow us to write less code and instead concentrate on our gas dynamics calculations. I think that having to use C++ functors was about the tipping point in our 25-year adventure with C++.

  • Ranges and foreach loops make our D code so much tidier than our equivalent C++ code.

  • Low-barrier shared-memory parallelism. We do many of the flow update calculations in parallel over blocks of cells and we like to take advantage of the many cores that are available on a typical workstation.

  • Simple and direct linkage to C libraries. We make extensive use of Lua for our configuration and do large simulations by using many processors in parallel via the OpenMPI library.

  • The garbage collector is wonderful, even if other people complain about it. It makes life simpler for us. For the input and output, we take the comfortable path of letting the runtime manage the memory and then telling the garbage collector to tidy up after us. Of course, we don’t want to overuse it: @nogc is used in the core of the code to force us not to generate garbage. We allocate much of our data storage at the start of a simulation and then pass references to parts of it into the core functions.

  • Fast compilation and good optimizing compilers. Nearly an hour’s build time was fairly common for our old C++ code; now we expect a DMD or LDC debug build of a basic version of the main simulation code in about a quarter of a minute on a Lenovo ThinkPad with a Core i7 processor. An optimized build can take a little over a minute, but that cost is repaid by orders of magnitude when a simulation job runs for several hours over hundreds of processors.

  • version(xxxx) { ... } has been a good way to maintain variants of the code: some use complex numbers while others use plain double values, and some support multiple chemical species while others work with a single-species nonreacting gas. This reduction in physical modelling allows us to reduce the memory required by a simulation. For big simulations of 3D flows, the required memory can be on the order of hundreds of gigabytes.

  • debug { ... } gets used to hide IO code in @nogc functions. If a simulation fails, our first action is often to run the debug flavor of the code to get more information and then, if needed, run that flavor under the control of gdb to dig into the details.
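
To make a few of these points concrete, here is a small illustrative sketch (simplified, and not code from Eilmer itself) of how the number alias, the version blocks, @nogc, and debug fit together. Compiling with -version=complex_numbers selects the complex flavour:

import std.complex;

version (complex_numbers)
{
    // Complex flavour: sensitivity coefficients come from finite
    // differences in the complex direction (the complex step).
    alias number = Complex!double;
}
else
{
    alias number = double; // the plain flavour of the code
}

@nogc
number idealGasPressure(number rho, number e, double gamma = 1.4)
{
    // The same update code works for both flavours: p = (gamma - 1) * rho * e.
    number p = (gamma - 1.0) * rho * e;
    debug
    {
        // debug statements are exempt from the @nogc check, so
        // diagnostic IO can sit inside the garbage-free core.
        import std.stdio : writefln;
        writefln("pressure: rho=%s e=%s p=%s", rho, e, p);
    }
    return p;
}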

We have a very specialized application and so don’t make use of much of the software ecosystem that has built up around the D language. For a build tool, we use make and for an IDE, we use emacs. The D major mode is very convenient.

There are lots of other features that just work together to make our programming lives a bit better. We are six years in on our adventure with the D programming language and we are still liking it.

Project Highlight: DPP

D was designed from the beginning to be ABI compatible with C. Translate the declarations from a C header file into a D module and you can link directly with the corresponding C library or object files. The same is true in the other direction as long as the functions in the D code are annotated with the appropriate linkage attribute. These days, it’s possible to bind with C++ and even Objective-C.
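
For example, here is what a minimal hand-written binding to a C runtime function looks like. (druntime already ships this particular declaration in core.stdc.stdio; it’s spelled out here purely for illustration.)

// Declare the C function with C linkage and a matching signature,
// then call it like any D function. The C runtime is linked by default.
extern(C) nothrow @nogc int printf(scope const(char)* format, ...);

void main()
{
    // D string literals are null-terminated and convert to const(char)*.
    printf("Hello from D, via the C ABI\n");
}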

Binding with C is easy, but can sometimes be a bit tedious, particularly when done by hand. I can speak to this personally as I originally implemented the Derelict collection of bindings by hand and, though I slapped together some automation when I ported it all over to its successor project, BindBC, everything there is maintained by hand. Tools like dstep exist and can work well enough, though they come with limitations which require careful attention to and massaging of the output.

Tediousness is an enemy of productivity. That’s why several pages of discussion were generated from Átila Neves’s casual announcement a few weeks before DConf 2018 that it was now possible to #include C headers in D code.

dpp is a compiler wrapper that will parse a D source file with the .dpp extension and expand in place any #include directives it encounters, translating all of the C or C++ symbols to D, and then pass the result to a D compiler (DMD by default). Says Átila:

What motivated the project was a day at Cisco when I wanted to use D but ended up choosing C++ for the task at hand. Why? Because with C++ I could include the relevant header and be on my way, whereas with D (or any other language really) I’d have to somehow translate the header and all its transitive dependencies. I tried dstep and it failed miserably. Then there’s the fact that the preprocessor is nearly always needed to properly use a C API. I wanted to remove one advantage C++ has over D, so I wrote dpp.

Here’s the example he presented in the blog post accompanying the initial announcement:

// stdlib.dpp
#include <stdio.h>
#include <stdlib.h>

void main() {
    printf("Hello world\n".ptr);

    enum numInts = 4;
    auto ints = cast(int*) malloc(int.sizeof * numInts);
    scope(exit) free(ints);

    foreach(int i; 0 .. numInts) {
        ints[i] = i;
        printf("ints[%d]: %d ".ptr, i, ints[i]);
    }

    printf("\n".ptr);
}

Three months later, dpp was successfully compiling the julia.h header, allowing the Julia language to be embedded in a D program. The following month, it was enabled by default on run.dlang.io.

C support is fairly solid, though not perfect.

Although preprocessor macro support is one of dpp’s key features, some macros just can’t be translated because they expand to C/C++ code fragments. I can’t parse them because they’re not actual code yet (they only make sense in context), and I can’t guess what the macro parameters are. Strings? Ints? Quoted strings? How does one programmatically determine that #define FOO(S) (S) is meant to be a C cast? Did you know that in C macros can have the same name as functions and it all works? Me neither until I got a bug report!

Push the stdlib.dpp code block from above through run.dlang.io and read the output to see an example of translation difficulties.

The C++ story is more challenging. Átila recently wrote about one of the problems he faced. That one he managed to solve, but others remain.

dpp can’t translate C++ template specialisations on reference types because reference types don’t exist in D. I don’t know how to translate anything that depends on SFINAE because it also doesn’t exist in D.

For those not in the know, classes in D are reference types in the same way that Java classes are reference types, and function parameters annotated with ref accept arguments by reference. When it comes to variable declarations, however, D has no equivalent for the C++ lvalue reference declarator, e.g. int& someRef = i;.
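
In code, the distinction looks like this:

// D has ref on function parameters (and in foreach), but, unlike C++,
// no ref variable declarations.
void addOne(ref int x)
{
    x += 1;
}

void main()
{
    int i = 41;
    addOne(i);      // i is bound by reference; i is now 42
    // int& r = i;  // legal C++, but there is no D equivalent
    assert(i == 42);
}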

Despite the difficulties, Átila persists.

The holy grail is to be able to #include a C++ standard library header, but that’s so difficult that I’m currently concentrating on a much easier problem first: being able to successfully translate C++ headers from a much simpler library that happens to use standard library types (std::string, std::vector, std::map, the usual suspects). The idea there is to treat all types that dpp can’t currently handle as opaque binary blobs, focusing instead on the production library types and member functions. This sounds simple, but in practice I’ve run into issues with LLVM IR with ldc, ABI issues with dmd, mangling issues, and the most fun of all: how to create instances of C++ stdlib types in D to pass back into C++? If a function takes a reference to std::string, how do I give it one? I did find a hacky way to pass D slices to C++ functions though, so that was cool!

On the plus side, he’s found some of D’s features particularly helpful in implementing dpp, though he did say that “this is harder for me to recall since at this point I mostly take D’s advantages for granted.” The first thing that came to mind was a combination of built-in unit tests and token strings:

unittest {
    shouldCompile(
        C(q{ struct Foo { int i; } }),
        D(q{ static assert(is(typeof(Foo.i) == int)); })
    );
}

It’s almost self-explanatory: the first parameter to shouldCompile is C code (a header), and the second is D code to be compiled after translating the C header. D’s token strings allow the editor to highlight the code inside, and the fact that C syntax is so similar to D lets me use them on C code as well!

He also found help from D’s contracts and the garbage collector.

libclang is a C library and as such has hardly any abstractions or invariant enforcement. All nodes in the AST are represented by a libclang “cursor”, which can have several “kinds”. D’s contracts allowed me to document and enforce at runtime which kind(s) of cursors a function expects, preventing bugs. Also, libclang in certain places requires the client code to manually manage memory. D’s GC makes for a wrapper API in which that is never a concern.
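
As a hedged illustration of that pattern (the types and names below are invented for this post, not dpp’s actual API), a contract on a translation function might look like:

// Hypothetical wrapper types, for illustration only.
enum CursorKind { structDecl, enumDecl, functionDecl }

struct Cursor
{
    CursorKind kind;
    string spelling;
}

// The in contract documents, and enforces at run time, which cursor
// kinds this translation function accepts.
string translateAggregate(const Cursor cursor)
in (cursor.kind == CursorKind.structDecl, "expected a struct declaration")
{
    return "struct " ~ cursor.spelling ~ " { /* members */ }";
}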

During development, he exposed some bugs in the DMD frontend.

I tried using sumtype in a separate branch of dpp to first convert libclang AST entities into types that are actually enforced at compile time instead of run time. Unfortunately that caused me to have to switch to compiling all code at once since sumtype behaves differently in separate compilation, triggering previously unseen frontend bugs.

For unit testing, he uses unit-threaded, a library he created to augment D’s built-in unit testing feature with advanced functionality. To achieve this, the library makes use of D’s compile-time reflection features. But dpp has a lot of unit tests.

Given the number of tests I wrote for dpp, compiling takes a very long time. This is exacerbated by -unittest, which is a known issue. Not using unit-threaded’s runner would speed up compilation, but then I’d lose all the features. It’d be better if the compile-time reflection required were made faster.

Perhaps he’ll see some joy there when Stefan Koch’s ongoing work on NewCTFE is completed.

Átila will be speaking about dpp on May 9 as part of his presentation at DConf 2019 in London. The conference runs from May 8–11, so as I write there’s still plenty of time to register. For those who can’t make it, you can watch the livestream (a link to which will be provided in the D forums each day of the conference) or see the videos of all the talks on the D Language Foundation’s YouTube channel after DConf is complete.

Project Highlight: Spasm

In 2014, Sebastiaan Koppe was working on a React project. The app’s target market included three-year-old mobile phones. He encountered some performance issues that, after investigating, he discovered weren’t attributable solely to the mobile platform.

It all became clear to me once I saw the flame graph. There were so many function calls! We managed to fix the performance issues by being a little smarter about updates and redraws, but the sight of that flame graph never quite left me. It made me wonder if things couldn’t be done more efficiently.

As time went on, Sebastiaan gained the insight that a UI is like a state machine.

Interacting with the UI moves you from one state to the next, but the total set of states and the rules governing the UI are all static. In fact, we require them to be static. Each time a user clicks a button, we want it to respond the same way. We want the same input validations to run each time a user submits a form.

So, if everything is static and defined up-front, why do all the javascript UI frameworks work so hard during runtime to figure out exactly what you want rendered? Would it not be more efficient to figure that out before you send the code to the browsers?

This led him to the idea of analyzing UI definitions to create an optimal UI renderer, but he was unable to act on it at the time. Then in 2018, native-language DOM frameworks targeting WebAsm, like asm-dom for C++ and Percy for Rust, came to his attention. Around the same time, the announcement of Vladimir Panteleev’s dscripten-tools introduced him to Sebastien Alaiwan’s older dscripten project. The former is an alternative build toolchain for the latter, which is an example of compiling D to asm.js via Emscripten. Here he saw an opportunity to revisit his old idea using D.

D’s static introspection gave me the tools to create render code at compile time, bypassing the need for a virtual DOM. The biggest challenge was to map existing UI declarations and patterns to plain D code in such a way that static introspection can be used to extract all of the information necessary for generating the rendering code.

One thing he really wanted to avoid was the need to embed React’s Javascript extension, JSX, in the D code, as that would require the creation of a compile-time parser. Instead, he decided to leverage the D compiler.

For the UI declarations, I ended up at a design where every HTML node is represented by a D struct and all the node’s attributes and properties are members of that struct. Combined with annotations, it gives enough information to generate optimal render code. With that, I implemented the famous todo-mvc application. The end result was quite satisfying. The actual source code was on par with or shorter than most javascript implementations, the compiled code was only 60kB after gzip, and rendering various stages in the todo app took less than 2ms.
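
Here’s a toy sketch of the general idea (not Spasm’s actual API): each node is a struct, and compile-time introspection drives the render code.

// The renderer walks the struct's fields at compile time; the loop below
// is unrolled during compilation, so there is no runtime reflection and
// no virtual DOM. A real renderer would emit DOM updates, not prints.
struct Button
{
    string innerText = "Click me";
    string className = "primary";
}

void render(T)(ref T node)
{
    import std.stdio : writeln;
    import std.traits : FieldNameTuple;

    static foreach (name; FieldNameTuple!T)
        writeln("set ", name, " = ", __traits(getMember, node, name));
}

void main()
{
    auto button = Button();
    render(button); // set innerText = Click me, set className = primary
}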

He announced his work on the D forums in September of 2018 (the live demo is still active as I write).

Unfortunately, he wasn’t satisfied with the amount of effort involved to get the end result. It required using the LLVM-based D compiler, LDC, to compile D to LLVM IR, then using Emscripten to produce asm.js, and finally using binaryen to compile that into WebAssembly. On top of that…

…you needed a patched version of LLVM to generate the asm.js, which required a patched LDC. Compiling all those dependencies takes a long time. Not anything I expect end users to willfully subject themselves to. They just want a compiler that’s easy to install and that just works.

As it happened, the 1.11.0 release of LDC in August 2018 actually had rudimentary support for WebAssembly baked in. Sebastiaan started rewriting the todo app and his Javascript glue code to use LDC’s new WebAssembly target. In doing so, he lost Emscripten’s bundled musl libc, so he switched his D code to use -betterC mode, which eliminates D’s dependency on DRuntime and, in turn, the C standard library.

With that, he had the easy-to-install-and-use package he wanted and was able to get the todo-mvc binary down to 5kB after gzip. When he announced this news in the D forums, he was led down a new path.

Someone asked about WebGL. That got me motivated to think about creating bindings to the APIs of the browser. That same night I did a search and found underrun, an entry in the 2018 js13k competition. I decided to port it to D and use it to figure out how to write bindings to WebGL and WebAudio.

He created the bindings by hand, but later he discovered WebIDL. After the underrun port was complete, he started work on using WebIDL to generate bindings.

It came together very quickly over the course of 2–3 months. The hardest part was mapping every feature of WebIDL to D, and at the same time figuring out how to move data, objects and callbacks between D and Javascript. All kinds of hard choices came up. Do I support undefined? Or optional types? Or union types? What about the “any” type? The answer is yes, all are supported.

He’s been happy with the result. D bindings to web APIs are included in Spasm and they follow the Javascript API as much as possible. A post-compile step is used to generate Javascript glue code.

It runs fairly quickly and generates only the Javascript glue code you actually need. It does that by collecting imported functions from the generated WebAssembly binary, cross-referencing those functions to WebIDL definitions, and then generating Javascript code for those.

The result of all this is Spasm, a library for developing single-page WebAssembly applications in D. The latest version is always available in the DUB repository.

I can’t wait to start working on hot module replacement with Spasm. Many Javascript frameworks provide it out-of-the box and it is really valuable when developing. Server-side rendering is also something that just wants to get written. While Spasm doesn’t need it so much to reduce page load times, it is necessary when you want to unittest HTML output or do SEO.

At the moment, his efforts are directed toward creating a set of basic material components. He’s had a hard time getting something together that works in plain D, and at one point considered abandoning the effort and working instead on a declarative UI language that compiles to D, but ultimately he persisted and will be announcing the project soon.

After the material project there are still plenty of challenges. The biggest thing I have been postponing is memory management. Right now the allocator in Spasm is a simple bump-the-pointer allocator. The memory for the WebAssembly instance in the browser is hardcoded to 16mb and when it’s full, it’s full. I could grow the memory of course, but I really need a way to reuse memory. Without help from the compiler – like Rust has – that either means manual memory management or a GC.
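
For reference, a bump-the-pointer allocator is about as simple as allocators get. A toy version (not Spasm’s actual code) looks like this:

// Allocation just advances an offset into a fixed block; nothing is
// ever freed or reused.
struct BumpAllocator
{
    ubyte[] heap;   // fixed backing memory, e.g. the 16mb WebAssembly block
    size_t offset;

    void[] allocate(size_t size)
    {
        enum size_t alignment = 8;
        immutable start = (offset + alignment - 1) & ~(alignment - 1);
        if (start + size > heap.length)
            return null;            // when it's full, it's full
        offset = start + size;
        return heap[start .. offset];
    }
}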

One way to solve the problem would be to port DRuntime to WebAssembly, something he says he’s considered “a few times” already.

At least the parts that I need. But so far the GC has always been an issue. In WebAssembly, memory is linear and starts at 0. When you combine that with an imprecise GC, suddenly everything looks like a pointer and, as a consequence, it won’t free any memory. Recently someone wrote a precise GC implementation. So that is definitely back on the table.

He’s also excited that he recently ran WebAssembly generated from D on Cloudflare workers.

The environment is different from a browser, but it’s the same in many ways. This is all very exciting and lots of possibilities will emerge for D, in part because you can generate WebAssembly binaries that are pretty lean and mean.

We’re pretty excited about the work Sebastiaan is doing and can’t wait to see where it goes. Keep an eye on the Dlang Newsfeed (@dlang_ng) on Twitter, or the official D Twitter feed (@D_Programming) to learn about future Spasm announcements.

Project Highlight: The D Community Hub

As has been stressed on this blog before, D is a community-driven language. Because of that, the ecosystem depends on the work of volunteers who are willing to contribute their time and open their projects to the community at large. From IDE and editor plugins to libraries in the DUB registry, it’s all about the efforts of people who are (usually) getting no monetary reward for their efforts.

There are some inherent downsides to that reality. Sometimes projects are abandoned. Sometimes they aren’t updated as frequently as users would like. This can become an issue for those who depend upon these projects, but it’s alleviated by the fact that most D projects are open source and their repositories are publicly available. To keep a project alive and up-to-date only requires more volunteers willing to pitch in.

That’s the motivation behind the D Community Hub (dlang-community) at GitHub. According to Sebastian Wilzbach, it started with Brian Schott’s popular tools used by several IDE and editor plugins:

There were maintenance issues with Brian’s (aka Hackerpilot) awesome projects. He has a full-time job and often could only respond to simple issues every few weeks. This meant that simple bug fix PRs sat in the queue for quite a while. For example, there was one case where the same PR to fix the Windows build script was submitted by three different people (there was no Windows CI at the time).

Brian’s projects weren’t the only ones that motivated the idea. Sebastian and Petar Kirov maintain the DLang Tour, and some of the projects they depend upon were either inactive or slow to update. However, Brian’s tools are widely used, so they started with him. Eventually, they convinced him to move some of his projects to the new organization and others followed.

Sebastian lays out the following benefits that have come from moving multiple projects from disparate developers under an umbrella group:

  • Common best policies (e.g. all repositories have GitHub branch protection)
  • No need to fork an inactive repository – work can be shared easily.
  • No dependence on a single person who might be busy or on vacation (this is especially important for swiftly pulling and releasing bug fixes)
  • One common location whenever updates are required (e.g. package bumps or deprecation fixes)
  • Many of the projects are enabled on the Project Tester (their test suites are run on every PR to the DMD, DRuntime, Phobos, Dub, and tools repositories to prevent regressions) – this is possible because many people have merge rights, so fixes can land quickly when a compiler improvement reveals critical bugs or a deprecation is moved forward
  • Shared knowledge (e.g. all projects support “auto-merge” like the dlang repositories)
  • Automation with bots – Mark Rz (@skl131313) created a bot that automatically triggers update PRs whenever dependencies are updated (some of the projects in dlang-community still support builds with only git submodules and make)
  • Less overhead for automation with CIs (everyone can connect a repo to a third-party provider or restart a failing CI job)

It has also resulted in increased participation. For example, other D users have joined the group, and Sociomantic Labs (the D shop in Berlin that hosted the 2016 and 2017 editions of DConf) has taken over the release process for dfmt, Brian’s tool for formatting D source code.

There are currently 22 repositories in the dlang-community organization, including the following:

  • DCD (the D Completion Daemon) – an autocomplete program that is used by several D IDE and editor plugins
  • dfmt – a formatter for D source code, also used by many IDE and editor plugins
  • D-Scanner – a tool for analyzing D source code
  • dfix – a tool for automatically upgrading D source code
  • libdparse – a library for lexing and parsing D source code
  • drepl – a DMD-based REPL for D
  • stdx-allocator – a frozen version of std.experimental.allocator (which is due for an overhaul)
  • containers – a set of containers backed by stdx.allocator to easily switch between different allocation strategies, with or without the GC
  • D-YAML – A YAML parser and emitter
  • harbored-mod – a documentation generator that supports both D’s built-in Ddoc syntax and Markdown

In addition to other D projects, there’s a repository set up specifically to discuss the dlang-community organization via GitHub issues, and repositories that contain artwork. If you decide to use any of these projects, the discussion repository is the place to ask for help when you need it.

Other projects may be added in the future. According to Sebastian, there are a few questions that form a set of loose criteria for inclusion under the dlang-community umbrella.

  • Is there enough interest from the general public so that it is “worth maintaining”?
  • Is there a similar library with active development out there?
  • Is at least one DLang community member competent in the domain covered by the project? If not, is there anyone willing to fill the role?

Sebastian and the others are looking to add a few features over time. These include:

  • More automatic documentation builds
  • Automatic build of binaries on new tags (especially for Windows)
  • d-apt: Sociomantic is working on moving d-apt to GitHub and enabling full automatic CI builds for it.
  • dfmt: Leandro Lucarella / Sociomantic is introducing neptune and a proper release process

For anyone interested in joining the dlang-community organization, there are two options. If you are already a well-known participant in the D community, simply ping one of the existing members for merge rights. For anyone else, the best approach is to start contributing to one or more of the dlang-community projects to build up trust. At some point, frequent trustworthy contributors will be welcomed into the fold.

As for the current contributors, Sebastian says:

There are many people working behind the scenes on the dlang-community libraries. A special thanks goes to the active reviewers who make it possible for your potential PR to be merged in a timely manner.

  • Basile Burg
  • Brian Schott
  • Jan Jurzitza
  • Leandro Lucarella
  • Martin Nowak
  • Petar Kirov
  • Richard Andrew Cattermole
  • skl131313
  • Stefan Koch

If you have or know of a D project that is suffering from a lack of attention, bringing it to the dlang-community might be the way to breathe new life into it. Don’t be shy in asking for help.

Project Highlight: BSDScheme

Last year, Phil Eaton started working on BSDScheme, a Scheme interpreter that he ultimately intends to grow into a full implementation of Scheme R7RS. In college, he had completed two compiler projects in C++ for two different courses: one a Scheme-to-Forth compiler, the other an implementation of the Tiger language from Andrew Appel’s ‘Modern Compiler Implementation’ books.

I hadn’t really written a complete interpreter or compiler since then, but I’d been trying to get back into language implementation. I planned to write a Scheme interpreter that was at least bootstrapped from a compiled language so I could go through the traditional steps of lexing, parsing, optimizing, compiling, etc., and not just build a language of the meta-circular interpreter variety. I was spurred to action when my co-worker at Linode, Brian Steffens, wrote bshift, a compiler for a C-like language.

For his new project, he wanted to use something other than C++. Though he knows the language and likes some of the features, he overall finds it a “complicated mess”. So he started on BSDScheme using C, building generic ADTs via macros.

As he worked on the project, he referred to Brian’s bshift for inspiration. As it happens, bshift is implemented in D. Over time, he discovered that he “really liked the power and simplicity of D”. That eventually led him to drop C for D.

It was clear it would save me a ton of time implementing all the same data structures and flows one implements in almost every new C project. The combination of compile-time type checking, GC, generic ADT support, and great C interoperability was appealing.

He’s been developing the project on Mac and FreeBSD, using LDC, the LLVM-based D compiler. In that time, he has found a number of D features beneficial, but two stand out for him above the rest. The first is nested functions.

They’re a good step up from C, where nested functions are not part of the standard and only unofficially supported (in different ways) by different compilers. C++ has lambdas, but that’s not really the same thing. It is a helpful syntactic sugar used in BSDScheme for defining new functions with external context (the names of the parameters to bind).
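
Here’s a small illustration of the pattern (invented for this post, not BSDScheme’s code): a nested function capturing the context of its enclosing scope.

alias Value = long; // a stand-in for an interpreter's value type

Value delegate(Value) makeAdder(Value n)
{
    // add is a nested function that closes over the parameter n,
    // the "external context" mentioned above.
    Value add(Value x)
    {
        return x + n;
    }
    return &add; // escaping the scope is fine: the closure is GC-allocated
}

void main()
{
    auto add5 = makeAdder(5);
    assert(add5(37) == 42);
}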

As for the second, hold on to your seats: it’s the GC.

The existence of a standard GC is a big benefit of D over C and C++. Sure, you could use the Boehm GC, but how that works with threads is up to you to discover. It is not fun to do prototyping in a GC-less language because the amount of boilerplate distracts from the goals. People often say this when they’re referring to Python, Ruby, or Node, but D is not at all comparable, for a few reasons (among others): 1) compile-time type-checking, 2) dead-simple C interop, 3) multi-processing support.

Spend some time in the D forums and you’ll often find newcomers from C and C++ who, unlike Phil, have a strong aversion to garbage collection and are actively seeking to avoid it. You’ll also find replies from long-time D coders who started out the same way, but eventually came to embrace the GC completely and learned how to make it work for them. The D GC can certainly be problematic for certain types of software, but it is a net win for others. This point is reiterated frequently in the GC series on this blog, which shows how to get the most out of the GC, profile it, and mitigate its impact if it does become a performance problem.

As Phil learned the language, he identified areas for improvement in the D documentation.

Certainly it is advantageous compared to the C preprocessor that there is not an entirely separate language for doing compile-time macros, but the behavior difference and transition guides are missing or poorly written. A comparison between D templates and C++ templates (in all their complexity) could also be a great source of explanation.

We’re always looking to improve the documentation and make it more friendly to newcomers of all backgrounds. The docs are often written by people who are already well-versed in the language and its ecosystem, making blind spots somewhat inevitable. Anyone in the process of learning D is welcome and encouraged to help improve both the Language and Library docs. In the top right corner of each page are two links: “Report a bug” and “Improve this page”. The first takes you to D’s bug tracker to report a documentation bug; the second allows anyone with a logged-in GitHub account to quickly fork dlang.org, edit the page online, and submit a pull request.

In addition to the ultimate goal of supporting Scheme R7RS, Phil plans to add FFI support in order to allow BSDScheme to call D functions directly, as well as support for D threads and an LLVM-based backend.

Overall, he seems satisfied with his decision to move to D for the implementation.

I think D has been a good time investment. It is a very practical language with a lot of the necessary high-level aspects and libraries for modern development. In the future, I plan to dig more into the libraries and ecosystem for backend web systems. Furthermore, unlike with C or C++, so far I’d feel comfortable choosing D for projects where I am not the sole developer. This includes issues ranging from prospective ease of onboarding to long-term performance and maintainability.

A big thanks to Phil for taking the time to contribute to this post (be sure to check out his blog, too). We’re always happy to hear about new projects in D. BSDScheme is released under the three-clause BSD license, so it’s a great place to start for anyone looking for an interesting open-source D project to contribute to or learn from. Have fun!

Project Highlight: Diamond MVC Framework

Anyone who has been around the D community for longer than an eye blink will have heard of vibe.d, undoubtedly the most widely-used web application framework written in the D programming language. Those same people could be excused if they haven’t yet heard of Diamond, announcements for which have only started showing up in the D forums relatively recently.

According to its website, Diamond is “a full-stack, cross-platform MVC/Template Framework” that’s “inspired by ASP.NET and uses vibe.d for its backend”. Jacob Jensen, the project’s author, explains.

I have always been interested in web development, so one of the few projects I started writing in D was a web server using Phobos’s std.socket. It wasn’t a notable project or anything, but more an experiment. Then I discovered vibe.d and toyed around with it. Unfortunately, coming from a background working with ASP.NET using Razor, I wasn’t a big fan of the Diet templates. So initially I started Diamond as an alternative template engine. However, I ended up just adding more and more to the project and it soon became more and more independent. At this point, you don’t really write a lot of vibe.d code when using it, because the most general vibe.d features have wrappers in Diamond to better interact with the rest of the project.

Development on Diamond began in early 2016, but he put it aside a few months later. Then in October of 2017, after picking up some contract web app work, Diamond was resurrected in its 2.0 form.

I decided I wanted to use D, so I simply did a complete revamp of the project and plan to keep maintaining it.

His biggest hurdle has been keeping the design of the framework user-friendly and minimizing complex interactions between controllers, views, and models.

It was a big challenge to ensure that everything worked together in a way that made it feel natural to work with. On top of that I had to make sure that Diamond worked under multiple build types. At first Diamond was written solely with the web in mind and thus only supported websites and web APIs, but I saw the potential to use the template engine for more than just the web, like email templates. Thus I introduced yet a third way of building Diamond applications, which had to be completely separated from the web part of Diamond without introducing complexity into user code or the build process. By introducing stand-alone support, Diamond is now able to be used with existing projects that aren’t already using it for the web, e.g. someone could use Diamond to extend their existing web pages without having to switch the whole project to Diamond, or simply use Diamond for only a small portion of it.

Aside from the challenge of maintaining user-friendliness, Jacob says he’s encountered only a few issues in developing the framework, most of which came when he was refactoring for the 2.0 release. One in particular is interesting, as his solution for it was a D language feature you don’t often hear much about.

When I was introducing attributes to controllers to avoid manual mapping of actions, it took a while to figure out the best approach to it without having to pass additional information about the controller to its base class.

To demonstrate, Controller subclasses could originally be declared like so:

class MyController(TView) : Controller!TView
{
    ...
}

After he initially added attributes, the refactoring required the base class template to know the derived type at compile time in order to reflect on its attributes. His initial solution required that subclasses specify their own types as an additional template parameter to the Controller template in the class declaration.

class MyController(TView) : Controller!(TView, MyController!TView)
{
    ...
}

He didn’t like it, but it was the only way he could see to make the base class aware of the derived type. Then he discovered D’s template this parameters.

Template this parameters allow any templated member function to know at compile time the static, i.e. declared, type of the instance on which the function is being called.

module base;
class Base 
{
    void printType(this T)() 
    {
        import std.stdio;
        writeln(typeid(T));
    }
}

class Derived : Base {}

void main()
{
    Derived d1 = new Derived;
    auto d2 = new Derived;
    Base b1 = new Derived;

    d1.printType();
    d2.printType();
    b1.printType();
}

And this prints (in modulename.TypeName format):

base.Derived
base.Derived
base.Base

In Diamond, this is used in the Controller constructor in order to parse the UDAs (User Defined Attributes) attached to the derived type at compile time:

class Controller(TView) : BaseController
{
    ...
    this(this TController)(TView view)
    {
        ...

        static if (hasUDA!(TController, HttpAuthentication))
        {
            ...
        }

        static if (hasUDA!(TController, HttpVersion))
        {
            ...
        }
        ...
    }
}

The caveat, and the price Jacob is willing to pay for the increased convenience to users, is that instances of derived types should never be declared to have the type of the base class. When working with templated types in D, it’s idiomatic to use type inference anyway:

// This won't pick up the MyController attributes, as the declared
// type is that of the base class
Controller!ViewImpl controller1 = new MyController!ViewImpl;

// But this will
MyController!ViewImpl controller2 = new MyController!ViewImpl;

// And so will this -- it's also more idiomatic
auto controller3 = new MyController!ViewImpl;

Overall, Jacob has found the transition from C# to D fairly painless.

Most code I was used to writing, coming from C#, is pretty straight-forward in D. One of the pros of D, however, is its compile-time functionality. I use it heavily in Diamond to parse templates, map routes and controller actions, etc. It’s a really powerful tool in development and probably the most powerful tool in D. I also really like templates in D. They’re implemented in a way that doesn’t make them seem complex, unlike in C++, where templates can often seem obscure and cryptic. D is probably the most natural programming language that I’ve used.

Diamond indirectly supports Mongo and Redis through vibe.d, and has its own MySQL ORM interface that uses the native MySQL library under the hood. He has some plans to improve upon the database support, however.

I plan to rewrite the whole MySQL part, since it currently uses some deprecated features – it was based on some old code I had been using. Along with that, I plan on implementing some “generic services” that can be used to create internal services in the project, which will of course wrap database engines such as MySQL, Mongo, Redis, etc., creating a similar API between them all and exposing an easier way to implement sharding.

He also intends to add support for textual data formats other than JSON (such as XML) to make Diamond compatible with SOAP or WCF services, to improve support for components in the view, and to provide better integration with JavaScript. He would also like to implement an app server for hosting Diamond applications.

Anyone intending to use D for web work today who, like Jacob, has experience using ASP.NET and Razor should feel right at home using Diamond. For the rest, it’s an alternative to using vibe.d directly that some may find more comfortable. You can find the Diamond source, the current documentation, and the in-development official website (for which Jacob is dog-fooding Diamond) all at GitHub.

Project Highlight: Funkwerk

Funkwerk is a German company that develops intelligent communication technology. One of their projects is a passenger information system for long-distance and local transport that is deployed by long-distance rail networks in Germany, Austria, Switzerland, Finland, Norway and Luxembourg, as well as city railways in Berlin and Munich. The system is developed at the company’s Munich location and, some time ago, they came to the conclusion that it needed a rewrite. According to Funkwerk’s Mario Kröplin:

From a bird’s-eye view, the long-term replacement of our aged passenger information system is our primary project. At some point, it became obvious that the system was getting hard to maintain and hard to change. In 2008, a new head of the development department was hired. It was time for a change.

They wanted to do more than just rewrite the system. They also wanted to make changes in their development process, and that led to questions of how best to approach the changes so that they didn’t negatively impact productivity.

There is an inertia when experienced programmers are asked to suddenly start with unit testing and code reviews. The motivation for unit testing and code reviews is higher when you learn a new programming language. Especially one where unit testing is a built-in feature. It takes a new language to break old habits.

The decision was made to select a different language for the project, preferably one with built-in support for unit testing. They also allowed themselves a great deal of leeway in making that choice.

Compared with other monolithic legacy systems, we were in the advantageous position that the passenger information system was constructed as a set of services. There is some interprocess communication between the services, but otherwise the services are largely independent. The service architecture made such a decision less serious: just make an experiment with any one of the services in the new language. Should the experiment fail, then it’s not a heavy loss.

So they cast out a net and found D. One thing about the language that struck them immediately was that it fit in a comfort zone somewhere between the languages with which their team was already familiar.

The languages in use were C++ (which rather feels like being part of the problem than part of the solution) and Java. D was presented as a better C++. With garbage collection, changing to D is easy enough for both C++ and Java programmers.

They took D for a spin with some experimentation. The first experiment was a bit of a failure, but the second was a success. In fact, it went well enough that it convinced them to move forward with D.

In retrospect, the change of programming language was one aspect that led to an agile transition. One early example of a primary D project was the replacement of the announcement service. This one gets all information about train journeys – especially about the disruptions – and decides when to play which announcements. The business rules make this quite complicated. The new service uses object-oriented programming to handle the complicated business rules nicely. So you could say it’s a Java program written in D. However, as the business rules change from customer to customer, the announcement service is constantly changing. And with the development of D and with our learning, we are now in a better position to evolve the software further.

When they first started the changeover, D2 was still in its early stages of development and the infamous Phobos/Tango divide was still a thing in the D community (an issue that, for those of you out of the loop the last several years, was put to rest long ago). Since D1 was considered stable, it was the obvious choice for them. They also chose Tango over Phobos, primarily because Tango provided logging and HTTP facilities, while Phobos did not. In 2012, as D2 was stabilizing and official support for D1 was being phased out, it became apparent that they would have to make a transition from D1 and Tango to D2 and Phobos, which is what they use today.

One of the D features that hooked Mario early on was its built-in support for contracts.

Contracts won me over from the beginning. No joke! When we had a hard time hunting down segmentation faults, a failed assertion boosted the diagnostics. I used the assert macro a lot in C++, but with contracts in D there is no question about whom to blame. A failed precondition: caller’s fault. A failed postcondition or invariant: my fault. We have contracts everywhere. And we don’t use the -release switch. We would rather give up performance than the information that helps us fix bugs.
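
That blame assignment looks like this in code (a small invented example, not Funkwerk’s):

// A failed in contract is the caller's fault; a failed out contract is
// the implementation's.
int delayedDeparture(int scheduled, int delay)
in (scheduled >= 0 && delay >= 0) // violated: blame the caller
out (r; r >= scheduled)           // violated: blame this function
{
    return scheduled + delay;
}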

As with many other active D users, D’s dynamic arrays (slices) and the built-in associative arrays stand out for the Funkwerk programmers.

They work out of the box. It’s not like in Java, where you’d better not use the arrays of the language, but the ArrayList of the library.

They also have put D’s ranges and Uniform Function Call Syntax to heavy use.

With ranges and UFCS, it became possible to replace loops with intention-revealing chains of map and filter and whatever. While the loop describes how something is done, the ranges describe what is done. We enjoy setting the delay in our tests to 5.minutes. And we use UDAs in accessors and dunit. It’s hard to imagine what pieces of our clean code would look like in any other language.
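
In code, that shift from how to what looks something like this (an invented example, not Funkwerk’s):

import core.time : minutes;
import std.algorithm : filter, map;
import std.array : array;

struct Stop
{
    string name;
    bool cancelled;
}

string[] announcedStops(Stop[] stops)
{
    return stops
        .filter!(s => !s.cancelled) // what: drop the disrupted stops
        .map!(s => s.name)          // what: keep only the names
        .array;
}

unittest
{
    auto delay = 5.minutes; // UFCS: reads like prose in tests
    auto stops = [Stop("A", false), Stop("B", true), Stop("C", false)];
    assert(announcedStops(stops) == ["A", "C"]);
}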

accessors is a library that uses templates, mixins, and UDAs to automatically generate property getters and setters. dunit is a unit testing library that, in an ironic twist, Mario maintains. As it turned out, their intuition that their developers would be more motivated to write tests when a language has built-in unit testing was correct (that’s been cited by Walter Bright as a reason it was included in the language in the first place). However, that first failed experiment in D came down to the simplistic capabilities the built-in unit testing provides.

One of the failures of our first experiment was an inappropriate use of the unittest functions. You’re on your own when you have to write more code per test case, for example for testing interactions of objects. With xUnit testing frameworks like JUnit, you get a toolbox and lots of instructions, like the exhaustive http://xunitpatterns.com/. With D, you get the single tool unittest and lots of examples of test cases that can be expressed as one-liners. This does not help in scenarios where such test cases are the exception. Even Phobos has examples where the unittest blocks are lengthy and obscure. Often, such unittest blocks are not clean code but a collection of “Test Smells”. We soon replaced the built-in unit testing (one of the benefits we’ve seen) with DUnit.

The transition to D2 and Phobos necessitated a move away from the Tango-based DUnit. Mario found the open source (and lowercase) dunit (the original project is here) a promising replacement, and ultimately found himself maintaining a fork that has evolved considerably from the original. However, the team recently discovered a way to combine xUnit testing with D’s built-in unittest, which may lead to another transition in their unit testing.

Additionally, Phobos still had no logging package in 2012, so they needed a replacement for the Tango logger API. Mario rolled up his sleeves and implemented log, which, like accessors and dunit, is available under the Boost Software License.

It was meant as an improvement proposal for an early stage of std.experimental.logger. I was excited that it was possible to implement the logging framework together with support for rolling log files, for logrotate, and for syslog in just 500 lines. We’re still using it.

Funkwerk has also contributed code to Phobos.

I was looking for a way to implement a delay calculation. The problem is that you have an initial delay and you assume that the driver runs faster and makes shorter stops in order to catch up. But the algorithm was missing. So we made a pull request for std.algorithm.cumulativeFold. It’s there because we strived for clean code in a forecast service.
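
Here’s a small, made-up example in that spirit. cumulativeFold produces the running fold lazily:

import std.algorithm.comparison : equal, max;
import std.algorithm.iteration : cumulativeFold;

unittest
{
    // A 10-minute initial delay shrinks at each stop by the minutes the
    // driver catches up, but never below zero. Figures are invented.
    auto caughtUp = [3, 4, 5];
    auto remaining = caughtUp.cumulativeFold!((delay, gain) => max(0, delay - gain))(10);
    assert(remaining.equal([7, 3, 0]));
}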

Garbage collection has been a topic of interest on this blog lately. Mario has an interesting anecdote from Funkwerk’s usage of D’s GC.

Our service was losing connections from time to time. It took a while to find that the cause was the interaction between garbage collection and networking. The signals used by the garbage collector interrupt the blocking read. The blocking read is hidden in a library, for example, in std.socketstream, where the retry is missing. Currently, we abuse inheritance to patch the SocketStream class.

The technical description of the derived class, from its Ddoc comment, reads:

/**
 * When a timeout has been set on the socket, the interface functions "recv"
 * and "send" are not restarted after being interrupted by a signal handler.
 * They fail with the error "EINTR" when interrupted, especially
 * by the signal used to "stop the world" during garbage collection.
 *
 * In this case, retries avoid that the stream is mistaken to be exhausted.
 * Note, however, that these retries will likely exceed the specified timeout.
 *
 * See also: Linux manual page signal(7)
 */
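
The retry itself is a small loop. Here’s a generic sketch of the pattern (not Funkwerk’s actual patch):

import core.stdc.errno : EINTR, errno;

// Repeat an interruptible read while it fails with EINTR, e.g. when a
// blocking recv is interrupted by the GC's stop-the-world signal. The
// doRead callback stands in for the underlying socket read.
ptrdiff_t readRetrying(scope ptrdiff_t delegate() doRead)
{
    ptrdiff_t n;
    do
    {
        n = doRead(); // returns -1 with errno == EINTR when interrupted
    }
    while (n == -1 && errno == EINTR);
    return n; // a short read is real data, not an exhausted stream
}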

After nearly a decade of active development with D, Mario and his team have been pleased with their choice. They plan to continue using the language in new projects.

Over the years, we’ve replaced services in the periphery of the passenger information system whenever this was justified by customer needs. We chose D for the implementation of all of the backend services. We are now working on a small greenfield project (it’s a few buses to be monitored instead of lots of trains), but we intend to grow this project into the next generation of the passenger information system.

In the coming months, we’ll hear more from Mario and the Funkwerk developers about how they use D to develop their system and tooling, and what it’s like to be a professional D programmer.

Project Highlight: Derelict

Previous project highlights on this blog were written up both in my own words and in quotes from the project maintainers. This time around is different — it would be a little odd to quote myself while writing about my own project.

Derelict is a collection of D bindings to C libraries. In its present incarnation, it resides in a GitHub organization called DerelictOrg. There you’ll find bindings to a number of libraries such as SDL, GLFW, SFML, OpenGL, OpenAL, FreeType, FreeImage and more. There are currently 26 repositories in the organization: one for the documentation, one for a utility package, 21 active packages and three that are currently unsupported, maintained primarily by me and Guillaume Piolat.

The Derelict packages primarily provide dynamic bindings, though some can be configured as static bindings at compile time. Dynamic bindings require the C libraries to which they bind to be dynamically loaded at run time. The mechanism for this is provided by the DerelictUtil package. Loading a library is as simple as a function call, e.g. DerelictSDL2.load();.
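
Using a dynamic binding looks something like this (a minimal sketch, with error handling elided):

import derelict.sdl2.sdl;

void main()
{
    // Locates and loads the SDL2 shared library at run time;
    // throws if the library can't be found and loaded.
    DerelictSDL2.load();

    SDL_Init(SDL_INIT_VIDEO);
    scope(exit) SDL_Quit();
    // ... create a window, run the event loop, etc.
}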

Static bindings have a link-time dependency, requiring either statically or dynamically linking with the bound C library when building an executable. In that case, the dependency on DerelictUtil goes away (some packages may still use a few DerelictUtil declarations, requiring one of its modules to be imported, but it need not be linked into the program). On Windows, this introduces potential issues with object file formats, but these days they are quite easy to solve.

It’s hard to believe now, but Derelict has been around continuously, in one form or another, since March of 2004. I was first drawn to D because it fit almost perfectly between the two languages I had the most experience with back in 2003, C and Java. It addressed some of the frustrations I had with each while combining things I loved from both. But early on I ran into my first frustration with D.

At the time, interfacing with C libraries on Windows was a bit annoying. D has excellent support for and compatibility with C libraries in the language, but the object file format issues I alluded to above were frustrating. DMD on Windows could only output object files in the OMF format and could only use the OPTLINK linker, an ancient 32-bit linker that only understands OMF. This is because DMD was implemented on top of the backend and toolchain used by DMC, the Digital Mars C and C++ compiler. Meanwhile, the rest of the Microsoft ecosystem was (and is) using the COFF format. In practice, a static binding to a C library required either using tools to convert the object files or libraries from OMF to COFF, or compiling the C library with DMC. You don’t tend to find support for DMC in most build scripts.

When I found that the bindings for the libraries I wanted to use (SDL and OpenGL) were static, I resolved to create my own. I’m a big fan of dynamic loading, so it was a no-brainer to create dynamic bindings. When you load a DLL via the LoadLibrary/GetProcAddress API, the object file format is irrelevant. From that point on, I never had to worry about the COFF/OMF problem again. Even though the problem didn’t exist outside of Windows, I made the bindings cross-platform anyway to keep a consistent interface.

In those days, DMD 1.0.0 wasn’t yet a thing, but a new web site, dsource.org, had just been launched to host open source D projects (much of the site is still online in archive mode thanks to Vladimir Panteleev). All of the projects there used Subversion, so I decided to learn my way around it and take a stab at maintaining an open source project by making my new bindings available. Part of my motivation for picking “Derelict” as the title is because I fully expected no one else would be interested and, whether they were or not, I would eventually abandon it anyway.

As it turned out, people really were interested. Contributions started coming in almost immediately, along with questions and suggestions on the DSource forums. I added new bindings, set up criteria for new binding submissions (they had to be gamedev-related and cross-platform), and added documentation on how to create “Derelictified” bindings. Tomasz Stachowiak (now a Frostbite game engine developer) came along and started making contributions, including a templated loader in DerelictUtil that replaced the one I had hacked together and became the basis for Derelict2.

Eventually, with the dawning of the D2 era, D development moved away from DSource to GitHub. So did Derelict, in the guise of Derelict 3. I had made use of every D build tool that came along before then, but since each of them had eventually died off, I settled on a custom build script (written in D) for this iteration. When DUB came along and looked like it was here to stay, I fully committed to it. I took the opportunity to finally split up the monolithic repository, gave every package its own repo, and created DerelictOrg as their new home. Aside from the appalling lack of documentation (a state which is rapidly improving, but is still a WIP), things have been fairly stable.

If I were still counting from Derelict 3, I think we might be at Derelict 6 by now, but that’s not how it rolls anymore. Since the move to DerelictOrg, I’ve twice made iterative improvements to DerelictUtil that broke binary compatibility with previous releases. The first time around, I didn’t properly manage the rollout. Anyone building the Derelict libraries manually, i.e. without using DUB to manage their application project, could run into issues where one package used the new version and another the old. It seems like it ought to have been a mess, but I heard very little about it. Still, in the most recent overhaul, I took steps to keep the new stuff distinctly separate from the old and rolled it all out at once (which is why most of the latest releases as I write have a -beta suffix).

As D has evolved over the years, so has Derelict. Now that DMD supports COFF on Windows, some of the packages can be configured at compile time as static bindings. Long ago, DerelictUtil gained the ability to selectively ignore missing symbols (a feature curiously absent from the latest docs, though described in the old DerelictUtil Wiki). More recently, it gained a feature that lets a package support multiple versions of a C library via a configurable call to the loader (SharedLibVersion); only a handful of packages support it, and only a few more likely will, since API compatibility issues with type sizes or enum values rule it out for the rest. The latest version of DerelictGL3 is massively configurable at compile time. For a little while, I even relaxed my criteria for allowing new packages under the DerelictOrg umbrella, but after I ended up wholly responsible for them, that’s something I’m not inclined to continue. With the DUB registry, it’s not really necessary.
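Using those two loader features looks roughly like this (a minimal sketch, with derelict-sdl2 standing in for any package that supports them; the module paths follow DerelictUtil’s layout):

import derelict.sdl2.sdl;
import derelict.util.exception : ShouldThrow;
import derelict.util.loader : SharedLibVersion;

// Tell the loader that a missing symbol is acceptable rather than fatal.
ShouldThrow ignoreMissing(string symbolName)
{
    return ShouldThrow.No;
}

void main()
{
    DerelictSDL2.missingSymbolCallback = &ignoreMissing;

    // Require only the symbols present in SDL 2.0.2, even though the
    // binding itself covers later releases of the library.
    DerelictSDL2.load(SharedLibVersion(2, 0, 2));
}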

There are a number of C library bindings using DerelictUtil out in the wild. You can find them, as well as all of the DerelictOrg packages, at the DUB package registry. Some of the third-party bindings have a name like “derelict-extras-foo”, while others use “derelict-foo” like those in DerelictOrg. You can also find a collection of static bindings in the Deimos organization, as well as third-party static bindings in the DUB registry that have adopted the Deimos approach of providing C headers alongside the D modules.

Anyone who needs help with any of the Derelict packages can often find it in the #d IRC channel on freenode.net. I drop by infrequently, but I monitor the D forums every day. Asking for Derelict help in the Learn forum is not off-topic.

The Derelict bindings have gone well beyond targeting game developers. Though they’re often used in games (like Voxelman and the first D game on Steam, Mayhem Intergalactic), they are also used elsewhere (like DLangUI). If you’re using any of the Derelict bindings in your own projects, I’d love to hear about it. Particularly since I’m always looking for projects I’ve never heard about to highlight here on the blog.

Project Highlight: excel-d

Ever had the need to write an Excel plugin? Check this out.

Atila Neves opened his lightning talk at DConf 2017 like this:

I’m going to talk about how you can write Excel add-ins in D. Don’t ask me why. It’s just because people need it.

From there he goes into a quick intro on how to write plugins for Excel and gives a taste of what it looks like to register a single function in a C++ implementation:

Excel12f(
    xlfRegister, NULL, 11,          // 11: Number of args
    &xDLL,                          // name of the DLL
    TempStr12(L"Fibonacci"),        // procedure name
    TempStr12(L"UU"),               // type signature
    TempStr12(L"Compute to..."),    // argument text
    TempStr12(L"1"),                // macro type
    TempStr12(L"Generic Add-In"),   // category
    TempStr12(L""),                 // shortcut text
    TempStr12(L""),                 // help topic
    TempStr12(L"Number to compute to"), // function help
    TempStr12(L"Computes the nth Fibonacci number") // arg help 
);

Two things to note about this. First, Excel12f is a C++ function (wrapping a C API) that must be called in an add-in DLL’s entry point (xlAutoOpen) for each function that needs to be registered when the add-in is loaded by Excel. In a small plugin, the implementations of the registered functions might live in the same source file, but in a larger one they could be located somewhere else entirely, making it a maintenance headache to keep registration and implementation in sync. Also take note of all the comments used to document the function arguments, a common sight in C and C++ code bases.

The example D code Atila showed using excel-d is a world of difference:

@Register(
    ArgumentText("Array"),
    HelpTopic("Length of Array"),
    FunctionHelp("Length of an Array"),
    ArgumentHelp(["array"])
)
double DoublesLength(double[] arg) {
    return arg.length;
}

Here, the boilerplate for the registration is generated at compile time from a User Defined Attribute (UDA) that annotates the function. Implementation and registration happen in the same place. Another key difference is that the UDA has fields with descriptive names, eliminating the need to comment each argument. Finally, the UDA requires only four arguments, seven fewer than the eleven registration arguments passed in the C++ call. That’s because excel-d uses D’s compile-time introspection to figure out as much as it possibly can, and optional arguments (like the shortcut text) can simply be omitted.
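To give a flavor of the introspection involved, here’s a toy sketch (not excel-d’s actual implementation) of deriving a registration type string, like the “UU” in the C++ example, from a D function’s signature (“B” denotes a double in the Excel C API’s type text):

import std.traits : Parameters, ReturnType;

// Map a D type to its Excel type-text character.
template excelTypeChar(T)
{
    static if (is(T == double))
        enum excelTypeChar = "B";
    else
        static assert(0, "unsupported type: " ~ T.stringof);
}

// Build the type signature from the return and parameter types.
string typeSignature(alias fn)()
{
    string sig = excelTypeChar!(ReturnType!fn);
    static foreach (P; Parameters!fn)
        sig ~= excelTypeChar!P;
    return sig;
}

double fib(double n) { return n; } // stand-in implementation
static assert(typeSignature!fib() == "BB"); // double fib(double) -> "BB"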

Since this is a Project Highlight on the D Blog, we’re going to ignore Atila’s opening request and ask, “Why?” There are actually two parts to that. First, why Excel?

Our customers are traders, and they work with Excel as one of their main tools. They need/want to, amongst other things, receive live stock updates in a cell and have their formulae automatically update. There’s other functionality they’d like to have and that means adding this to Excel somehow.

Of all the possible languages that could be used for this purpose, the business chose D. That brings us to the second part of the question: why D?

This is possible in Visual Basic, Python or C#, and possibly other languages. But none of them match D’s performance. C++ does, but it’s tedious and requires a lot of boilerplate to get going. D combines the speed and power of C++ with the reflection capabilities of those other languages. No boilerplate, just code, runs fast.

There’s more to the story, of course. The company is heavily invested in D.

We use D for nearly everything, even some “scripts”. The bulk of it is calculations for market indicators. Lots of data in -> munge -> new data out that needs to look pretty for traders. So integrating with existing code was an important factor.

Even though excel-d targets Windows, much of it was actually developed on Linux.

We use a Linux container as our reference development machine, but people use what they want. I do nearly all of my work on Linux and only boot into Windows when I have to. For the Excel work, that’s a necessity. But, as usual for me, I wrote all the tests to be platform agnostic, so I do the Excel development on Linux and test there. Every now and again that means a particular quirk of Excel wasn’t captured well in my mocking code, but it’s usually a quick fix after that.

He says they use both DMD and LDC for development, and both are running in continuous integration.

Although DMD doesn’t technically require Visual Studio to be installed (out of the box, it generates 32-bit OMF objects and links with OPTLINK rather than producing VS-compatible COFF), anyone doing serious work on Windows is going to need VS (or the Visual Studio Build Tools and the Windows SDK) for 64-bit and 32-bit COFF support. The latest LDC binary releases currently require the MS tools (support for MinGW was dropped but, according to the D Wiki, could be picked up again if someone champions it).

Atila already had VS on his Windows partition. For this project, he got a bit of help from the VS plugin for D, Visual D.

I had to install VisualD because our reference project for Excel was in a Visual Studio solution, but afterwards I reverse engineered the build and didn’t open Visual Studio ever again.

Currently, excel-d has no support for custom dialogs or menus. Both items are on his TODO list.

If you’re working with D and need to write an Excel add-in, or want to try something cleaner than C++ to do so, excel-d is available in the DUB package registry. If not, the sponsors of the project, Kaleidic Associates and Symmetry Investments, have made several other open source projects available. They are interested in hiring talented hackers with a moral compass who aspire towards excellence and would like to work in D.

excel-d was developed by Stefan Koch and Atila Neves.

Project Highlight: workspace-d

Not so long ago, Jan Jurzitza sat down at his keyboard intent on writing a D plugin for Atom, his text editor of choice at the time. Then came disappointment.

“I was pretty unhappy with their API,” he says.

Visual Studio Code was released a short time after. He decided to give it a go and “instantly fell in love with it”. His Atom plugin was pushed aside and he started work on a new plugin for VS Code called code-d.

However, I did not want to maintain the same functionality in two plugins for two different text editors, so I thought that making a program which contains most of the plugin logic, like starting and calling dcd, dscanner, dfmt, etc., would be beneficial and would also help with including D support in more editors and IDEs in the future.

For the uninitiated, DCD (the D Completion Daemon), DScanner, and Dfmt are D-oriented tools for plugin developers, all maintained by Brian Schott. They are, respectively, a client-server based auto-completion program, a source code analyzer, and a code formatter. A number of IDE and text editor plugins employ them directly.

So Jan started work on his new tool and named it workspace-d.

With workspace-d I want to make it simple for plugin developers to integrate D functionality into their editor of choice. workspace-d is designed to work both as a standalone program through stdio and as a D library. Once I ported most of the code from my Atom extension to workspace-d, I could simply spawn it as a subprocess in code-d, which I got working with it quite quickly.

In addition to porting his Atom plugin to use workspace-d, he also created one for Sublime Text. Currently, he’s not devoting any time to either and is looking for someone else to take over maintenance of one or both. Anyone interested might start by submitting pull requests. Aside from workspace-d itself, Jan’s focus is on code-d.

He’s recently been working on version 2.0 of workspace-d, with a focus on streamlining the way it handles requests.

Using traits, templates, and CTFE (Compile-Time Function Evaluation), basically all of D’s compile-time magic, I was able to make an automatic wrapper for the functions for version 2.0. Basically, when a request like {"cmd":"hello"} comes in, it runs the D function hello with its default arguments. If the arguments don’t match, it responds with an error. This system automatically binds function arguments to JSON values and generates a response from the return value.
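The mechanism he describes can be sketched like this (a toy version handling only no-argument commands; workspace-d’s real wrapper also binds JSON fields to function parameters):

import std.json;

// Hypothetical command set; each static method is one command.
struct Commands
{
    static string hello() { return "hello world"; }
    static string ping() { return "pong"; }
}

JSONValue handleRequest(JSONValue request)
{
    immutable cmd = request["cmd"].str;

    // Walk every member of Commands at compile time; only symbols that
    // are functions become dispatch candidates.
    static foreach (name; __traits(allMembers, Commands))
    {
        static if (is(typeof(__traits(getMember, Commands, name)) == function))
        {
            if (cmd == name)
                return JSONValue(__traits(getMember, Commands, name)());
        }
    }
    return JSONValue(["error": "unknown command: " ~ cmd]);
}

unittest
{
    assert(handleRequest(parseJSON(`{"cmd":"hello"}`)).str == "hello world");
}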

To deserialize the JSON requests, he’s using painlessjson, a third-party library available in the DUB package registry.

It works really great and I can really recommend it for some simple and easy conversions between D types and JSON. This change really cleaned up all the code and made it possible to use workspace-d as a library.
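The conversions are essentially one-liners in each direction (a sketch assuming a dub dependency on painlessjson; the Request struct here is just for illustration):

import painlessjson : fromJSON, toJSON;
import std.json : parseJSON;

struct Request
{
    string cmd;
    int id;
}

void main()
{
    // Struct -> std.json JSONValue, e.g. {"cmd":"hello","id":1}.
    auto json = Request("hello", 1).toJSON;

    // JSONValue -> struct, with fields bound by name.
    auto req = fromJSON!Request(parseJSON(`{"cmd":"hello","id":1}`));
    assert(req.cmd == "hello" && req.id == 1);
}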

He’s also working on a new project, serve-d, that works with Microsoft’s Language Server Protocol.

serve-d is an alternative to the workspace-d command-line I/O for those who prefer JSON-RPC over my custom binary/JSON mix. It’s fiber based and uses workspace-d as a library, which results in really clean code. There’s an alpha version of the implementation on GitHub already, both the server and a new branch on code-d. With the Language Server Protocol, I’m hoping for easier integration in other editors. The concept is basically the same as workspace-d’s command line interface but, because Microsoft is such a big company, I’m hoping that more editors by big developers are going to implement this protocol.

Building and installing workspace-d should go pretty smoothly on Linux or OS X, but it’s currently a little bumpy on Windows. Because of an issue Jan has yet to resolve, it can only be built on Windows with LDC.

The auto completion didn’t work for some people on Windows because it got stuck in the std.process.execute function when creating a pipe to write to. I couldn’t find any way to reproduce it in a standalone program so I couldn’t file a bug either. So what we did to avoid this issue in the short term was to simply disallow compilation on Windows using DMD. It works just fine when compiled with LDC.

Jan’s primarily a Linux user (he doesn’t own a Mac and only runs Windows in a VM). He credits GitHub user @Andrepuel for getting it operational on OS X, and @aka-demik for finding the issue on Windows and verifying that it compiles with LDC. He’ll be grateful to anyone who can help fully resolve the Windows/DMD issue once and for all.

If you’re looking to develop a D plugin for your favorite editor, consider taking advantage of the work Jan has already done with workspace-d to save yourself some effort. And VS Code users can put it to use via code-d to get code completion and more. Visit its VS Code marketplace page to read reviews and installation instructions.