
Fuzzing Your D Application with LDC and AFL

Fuzzing, or fuzz testing, is a powerful method to find hidden bugs in your application. The basic idea is to present random input to your application and monitor how it behaves. If it crashes or shows some other unusual behavior then you have found a bug.

The use of true random input is not very effective, as most applications reject such input. Therefore many fuzz testing tools mutate valid input, e.g. flipping one or two bits, and present this mutated input to the application. This approach is easy to automate. A fuzz test can run for hours or days until an input is found which crashes your application.
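The bit-flipping idea can be sketched in a few lines of D. This is only an illustration of the general technique, not AFL's actual mutation engine (real fuzzers combine many mutation strategies and use coverage feedback to pick inputs):

```d
import std.random : uniform;

// Illustrative sketch of input mutation: flip one random bit in a copy
// of a valid input. Assumes a non-empty input buffer.
ubyte[] flipOneBit(const(ubyte)[] input)
{
    auto mutated = input.dup;
    // Pick a random bit position in the buffer.
    immutable bit = uniform(0, mutated.length * 8);
    mutated[bit / 8] ^= cast(ubyte)(1 << (bit % 8));
    return mutated;
}
```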

Fuzz testing is very popular. A lot of security bugs have been found with this method. So it’s better to fuzz test your application by yourself instead of waiting for your users to report serious bugs!

Johan Engelen showed at DConf 2018, and in more detail in a blog post, how you can use LLVM's libFuzzer to fuzz test your application. For libFuzzer, you need to write a test driver. This is powerful because you can decide precisely which functions to test and how. The downside is that you have to code the test driver yourself.

AFL (short for American Fuzzy Lop, a rabbit breed) is another tool for fuzz testing an application. AFL takes a different approach than libFuzzer and does not require coding. The application under test has to read its data from stdin or from a file. The binary must be instrumented, which requires recompiling the application. In case you have no source code for the application, you can use AFL together with QEMU; no instrumentation is required then, but the tests run much slower.

Because random input is not a good choice, you give AFL one or more valid input files, preferably of a small size. AFL mutates such an input file, e.g. by flipping a single bit. This new input file is presented to your application and the reaction to it is observed. With the instrumentation in place, AFL discovers the path the data takes through your application. The relationship between bit flips and the different code paths they trigger is recorded and used to discover new paths and to provoke unexpected behavior. Input which causes a crash is saved in a directory. The main UI shows a lot of information, including how many unique crashes occurred in the test session.

AFL works best if the input is a small binary, e.g. a PNG or a ZIP file. If your application has a more verbose and structured input (e.g. a programming language) then you can provide a dictionary which helps AFL with the basic syntax.
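An AFL dictionary is a plain text file with one `name="value"` entry per line. A tiny dictionary for a grammar-like input format might look as follows (the token names and values here are made up for illustration, not taken from a real LLtool dictionary):

```
token_kw="%token"
colon=":"
semicolon=";"
quote_plus="\"+\""
```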

The latest release of AFL has an interesting feature. For instrumenting code compiled with clang, a small LLVM plugin is used. This plugin can also be used with LDC, making it possible to fuzz test your D application!

I used AFL to fuzz test LLtool, my recursive-descent parser generator presented at DConf 2019. LLtool expects a grammar description as a file or on stdin. If no error is found, then a D fragment of a recursive-descent parser is produced. Here, I show my approach.

First of all, you need to install AFL. It is included in most Linux distributions, e.g. Ubuntu. A FreeBSD port is also available. One caveat here: please make sure that the AFL plugin is compiled with the same LLVM version as LDC. Otherwise you will see an error message like

/usr/local/lib/afl/...: Undefined symbol "..."

during compilation. In this case, download the AFL source and compile it yourself.

Different distributions install AFL in different locations. You need to find out the path. E.g. Ubuntu uses /usr/lib/afl, FreeBSD uses /usr/local/lib/afl. I use an environment variable to record this value for later use (bash syntax):

export AFL_PATH=/usr/lib/afl

To instrument your code you have to specify the AFL plugin on the LDC command line:

ldc2 -plugin=$AFL_PATH/afl-llvm-pass.so *.d

You will see a short statistic emitted by the new pass:

afl-llvm-pass 2.52b by <>
[+] Instrumented 16118 locations (non-hardened mode, ratio 100%).

For LLVM instrumentation, AFL requires a small runtime library. You need to link the object file $AFL_PATH/afl-llvm-rt.o into your application.

In my dub.sdl file I created a special build type for AFL. This puts all the steps above into a single place. Plus, you can copy and paste this build type directly to your own dub.sdl file because the only dependencies are AFL and LDC!

buildType "afl" {
    toolchainRequirements dmd="no" gdc="no" ldc=">=1.0.0"
    dflags "-plugin=$AFL_PATH/afl-llvm-pass.so"
    sourceFiles "$AFL_PATH/afl-llvm-rt.o"
    versions "AFL"
    buildOptions "debugMode" "debugInfo" "unittests"
}

Now you can type dub build -b=afl on the command line to instrument your application for use with AFL. Do not forget to set the AFL_PATH environment variable, otherwise dub will complain.

Now create two new directories called testcases and findings. Put a small, valid input file into the testcases directory. For example, save this

%token number
expr: term "+" term;
term: factor "*" factor;
factor: number;

as file t1.g in the testcases folder. Inputs which crash the application will be saved in the findings directory.

To call AFL, you type on the command line:

afl-fuzz -i testcases -o findings ./LLtool --DRT-trapExceptions=0 @@

Two parts of the command line require further explanation. If the application requires a file for input, you specify the file path as @@. Otherwise AFL assumes that the application reads the input from stdin.

If the application crashes, then AFL saves the input causing the crash in the findings/crashes directory. But the D runtime is very friendly. Exceptions uncaught by the application are caught by the D runtime, a stack trace is printed, and the application terminates. This does not count as a crash for AFL. To produce a crash you have to specify the D runtime option --DRT-trapExceptions=0. For more information, read the relevant edition of This week in D.

It is worth reading the AFL documentation because it provides a lot of tips and background information. Enjoy watching AFL crash your application and produce test cases for you!

A long-time contributor to the D community, Kai Nacke is the author of ‘D Web Development‘ and a maintainer of LDC, the LLVM D Compiler.

Containerize Your D Server Application

A container consists of an application packed together with all of its required dependencies. The container is run as an isolated process on Linux or Windows. The Docker tool has made the handling of containers very popular and is now the de-facto standard for deploying containers to a cloud environment. In this blog post I discuss how to create a simple vibe.d application and ship it as a Docker container.

Setting up the build environment

I use Ubuntu 18.10 as my development environment for this application. Additionally, I installed the packages ldc (the LLVM-based D compiler), dub (the D package manager and build tool), gcc, zlib1g-dev, and libssl-dev (all required for compiling my vibe.d application). To build and run my container I use Docker CE, installed following the official instructions. As the last step, I added my user to the docker group (sudo adduser kai docker).

A sample REST application

My vibe.d application is a very simple REST server. You can call the /hello endpoint (with an optional name parameter) and you get back a friendly message in JSON format. The second endpoint, /healthz, is intended as a health check and simply returns the string "OK". You can clone my source repository to get the source code. Here is the application:

import vibe.d;
import std.conv : to;
import std.format : format;
import std.process : environment;
import std.typecons : Nullable;

shared static this()
{
    logInfo("Environment dump");
    auto env = environment.toAA;
    foreach (k, v; env)
        logInfo("%s = %s", k, v);

    auto host = environment.get("HELLO_HOST", "");
    auto port = to!ushort(environment.get("HELLO_PORT", "17890"));

    auto router = new URLRouter;
    router.registerRestInterface(new HelloImpl());

    auto settings = new HTTPServerSettings;
    settings.port = port;
    settings.bindAddresses = [host];

    listenHTTP(settings, router);

    logInfo("Please open http://%s:%d/hello in your browser.", host, port);
}

interface Hello
{
    @queryParam("name", "name")
    Msg hello(Nullable!string name);

    string healthz();
}

class HelloImpl : Hello
{
    Msg hello(Nullable!string name) @safe
    {
        logInfo("hello called");
        return Msg(format("Hello %s", name.isNull ? "visitor" : name.get));
    }

    string healthz() @safe
    {
        logInfo("healthz called");
        return "OK";
    }
}

struct Msg
{
    string msg;
}

And this is the dub.sdl file to compile the application:

name "hellorest"
description "A minimal REST server."
authors "Kai Nacke"
copyright "Copyright © 2018, Kai Nacke"
license "BSD 2-clause"
dependency "vibe-d" version="~>0.8.4"
dependency "vibe-d:tls" version="*"
subConfiguration "vibe-d:tls" "openssl-1.1"
versions "VibeDefaultMain"

Compile and run the application with dub. Then open the /hello URL in your browser to check that you get a JSON result.

A cloud-native application should follow the twelve-factor app methodology. In this post I only highlight two of its factors: III. Config and XI. Logs.

Ideally, you build an application only once and then deploy it into different environments, e.g. first to your quality testing environment and then to production. When you ship your application as a container, it comes with all of its required dependencies. This solves the problem that different versions of a library might be installed in different environments, possibly causing hard-to-find errors. You still need a solution for dealing with different configuration settings. Port numbers, passwords, or the location of databases are all configuration settings which typically differ from environment to environment. Factor III. Config recommends that the configuration be stored in environment variables. This has the advantage that you can change the configuration without touching a single file. My application follows this recommendation. It uses the environment variable HELLO_HOST for the configuration of the host IP and the variable HELLO_PORT for the port number. For easy testing, the application falls back to default values (e.g. port 17890) in case the variables are not set. (To be sure that every configuration is complete, it would be safer to stop the application with an error message in case an environment variable is not found.)
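The stricter fail-fast variant mentioned in the parenthesis could be sketched like this (requireEnv is a hypothetical helper, not part of the application above):

```d
import std.process : environment;

// Fail-fast configuration lookup: abort startup with a clear error
// message instead of silently falling back to a default.
string requireEnv(string name)
{
    auto value = environment.get(name, null);
    if (value is null)
        throw new Exception("Missing required environment variable: " ~ name);
    return value;
}
```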

The application writes log entries on startup and whenever a URL endpoint is called. The log is written to stdout. This is exactly the point of factor XI. Logs: an application should not bother with handling logs at all. Instead, it should treat logs as an event stream and write everything to stdout. The cloud environment is then responsible for collecting, storing, and analyzing the logs.

Building the container

A Docker container is specified with a Dockerfile. Here is the Dockerfile for the application:

FROM ubuntu:cosmic

RUN apt-get update && \
  apt-get install -y libphobos2-ldc-shared81 zlib1g libssl1.1 && \
  rm -rf /var/lib/apt/lists/*

COPY hellorest /

USER nobody

ENTRYPOINT ["/hellorest"]

A Docker container is a stack of read-only layers. With the first line, FROM ubuntu:cosmic, I specify that I want to use this specific Ubuntu version as the base layer of my container. During the first build, this layer is downloaded from Docker Hub. Every other line in the Dockerfile creates a new layer. The RUN line is executed at build time; I use it to install the libraries the application depends on. The COPY command copies the executable into the root directory inside the container. And last, ENTRYPOINT specifies the command which the container will run.

Run the Docker command

docker build -t vibed-docker/hello:v1 .

to build the Docker container. After the container is built successfully, you can run it with

docker run -p 17890:17890 vibed-docker/hello:v1

Now open the URL again. You should get the same result as before. Congratulations! Your vibe.d application is now running in a container!

Using a multi-stage build for the container

The binary hellorest was compiled outside the container. This creates difficulties as soon as the dependencies in your development environment change. It is easy to integrate compilation into the Dockerfile, but this creates another issue: the requirements for compiling and running the application are different, e.g. the compiler is not required to run the application.

The solution is to use a multi-stage build. In the first stage, the application is built. The second stage contains only the runtime dependencies and the application binary built in the first stage. This is possible because Docker allows the copying of files between stages. Here is the multi-stage Dockerfile:

FROM ubuntu:cosmic AS build

RUN apt-get update && \
  apt-get install -y ldc gcc dub zlib1g-dev libssl-dev && \
  rm -rf /var/lib/apt/lists/*

COPY . /tmp

WORKDIR /tmp

RUN dub -v build

FROM ubuntu:cosmic

RUN apt-get update && \
  apt-get install -y libphobos2-ldc-shared81 zlib1g libssl1.1 && \
  rm -rf /var/lib/apt/lists/*

COPY --from=build /tmp/hellorest /

USER nobody

ENTRYPOINT ["/hellorest"]

In my repository I called this file Dockerfile.multi. Therefore, you have to specify the file on the command line:

docker build -f Dockerfile.multi -t vibed-docker/hello:v1 .

Building the container now requires much more time because a clean build of the application is included. The advantage is that your build environment is now independent of your host environment.

Where to go from here?

Using containers is fun. But the fun diminishes as soon as the containers get larger. Using Ubuntu as the base image is comfortable but not the best solution. To reduce the size of your container, you may want to try Alpine Linux as the base image, or use no base image at all.

If your application is split over several containers then you can use Docker Compose to manage your containers. For real container orchestration in the cloud you will want to learn about Kubernetes.


D Goes Business

A long-time contributor to the D community, Kai Nacke is the author of ‘D Web Development‘ and the maintainer of LDC, the LLVM D Compiler. Be sure to catch Kai at DConf 2018 in Munich, May 2 – 5, where he’ll be speaking about “D for the Blockchain“.

Many companies run their business processes on an SAP system. Often other applications need to access the SAP system because they provide data or request some calculation. There are many ways to achieve this… Let’s use D for it!


The SDK and the D binding

As an SAP customer, you can download the SAP NetWeaver Remote Function Call SDK. With it, you can call RFC-enabled functions in the SAP system from a native application. Conversely, it is possible for an ABAP program to call a function in a native program. The API documentation for the library is available as a separate download; I recommend downloading it together with the library.

The C/C++ interface is well structured but can be very tedious to use. That’s why I not only created a D binding for the library but also used some D magic to make the developer’s life much easier. I introduced the following additions:

  • Most functions have a parameter of type RFC_ERROR_INFO. As the name implies, this is only needed in case of an error. For each of these functions a new function is generated without the RFC_ERROR_INFO parameter and the return type RFC_RC changed to void. Instead, a SAPException is thrown in case of error.
  • Functions named RfcGet*Count now return a size_t value instead of using an out parameter. This is possible because of the above step which changed the return type to void.
  • Functions named Rfc*ByIndex use a size_t parameter for the index, thus better integrating with D.
  • Functions which have a pointer and length value now accept a D array instead.
  • Use of pointers to zero-terminated UTF-16 strings is replaced with wstring type parameters.
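The error-handling transformation from the first bullet can be sketched as follows. All names here (cStyleGetCount, RFC_RC, RFC_ERROR_INFO) are toy stand-ins to illustrate the pattern, not the real SDK API:

```d
// Toy model of the C-style API and the D-ified wrapper the binding generates.
enum RFC_RC { RFC_OK, RFC_FAILURE }
struct RFC_ERROR_INFO { int code; }

class SAPException : Exception
{
    int code;
    this(int code) { super("RFC call failed"); this.code = code; }
}

// C-style function: result via out parameter, error details via ref parameter.
RFC_RC cStyleGetCount(out uint count, ref RFC_ERROR_INFO err)
{
    count = 42; // stand-in for the real implementation
    return RFC_RC.RFC_OK;
}

// D-style wrapper: returns the value directly and throws on error.
size_t getCount()
{
    uint count;
    RFC_ERROR_INFO err;
    if (cStyleGetCount(count, err) != RFC_RC.RFC_OK)
        throw new SAPException(err.code);
    return count;
}
```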

Using the library is easy, just add

dependency "sapnwrfc-d"

to your dub.sdl file. One caveat: the libraries provided by the SAP package must be installed in such a way that the linker can find them. On Windows, you can add the path to the lib folder of the SDK to the LIB and PATH environment variables.

An example application

Let’s create an application calling a remote-enabled function on the SAP system. I use the DATE_COMPUTE_DAY function because it is simple but still has import and export parameters. This function takes a date (a string of format “YYYYMMDD”) and returns the weekday as a number (1 = Monday, 2 = Tuesday and so on).
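To make the convention concrete, the same computation can be done locally with std.datetime. The real work happens in the SAP system; this is just an illustration of the 1 = Monday … 7 = Sunday numbering:

```d
import std.conv : to;
import std.datetime.date : Date, DayOfWeek;

// Compute the weekday for a "YYYYMMDD" date string, using the same
// numbering as DATE_COMPUTE_DAY (1 = Monday ... 7 = Sunday).
int computeDay(string yyyymmdd)
{
    auto d = Date(yyyymmdd[0 .. 4].to!int,
                  yyyymmdd[4 .. 6].to!int,
                  yyyymmdd[6 .. 8].to!int);
    // std.datetime numbers days sun = 0 .. sat = 6, so remap Sunday to 7.
    auto dow = d.dayOfWeek;
    return dow == DayOfWeek.sun ? 7 : cast(int) dow;
}
```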

The application needs two parameters: the system identifier of the SAP system and the date. The system identifier denotes the destination for the RFC call. The parameters for the connection must be in the sapnwrfc.ini file, which must be located in the same folder as the application executable. An invocation of the application using the SAP system X01 looks like this:

D:\OpenSource\sapnwrfc-example>sapnwrfc-example.exe X01 20180316
Date 20180316 is day 5

First, create the application with DUB:

D:\OpenSource>dub init sapnwrfc-example
Package recipe format (sdl/json) [json]: sdl
Name [sapnwrfc-example]:
Description [A minimal D application.]: An example rfc application
Author name [Kai]: Kai Nacke
License [proprietary]:
Copyright string [Copyright © 2018, Kai Nacke]:
Add dependency (leave empty to skip) []: sapnwrfc-d
Added dependency sapnwrfc-d ~>0.0.5
Add dependency (leave empty to skip) []:
Successfully created an empty project in 'D:\OpenSource\sapnwrfc-example'.
Package successfully created in sapnwrfc-example


Let’s edit the application in source\app.d. Since this is only an example application, I’ve put all the code into the main() function. In order to use the library you simply import the sapnwrfc module. Most functions can throw a SAPException, so you want to wrap them in a try block.

import std.stdio;
import sapnwrfc;

int main(string[] args)
{
    try
    {
        // Put your code here
    }
    catch (SAPException e)
    {
        writefln("Error occurred %d %s", e.code, e.codeAsString);
        writefln("'%s'", e.message);
        return 100;
    }
    return 0;
}

The library uses only UTF-16. Like the C/C++ version, the alias cU() can be used to create a zero-terminated UTF-16 string from a D string. I convert the command line parameters first:

    auto dest = cU(args[1]);
    auto date = cU(args[2]);

Now initialize the library. Most importantly, this function loads the sapnwrfc.ini file and initializes the environment. If this call is missing, it is done implicitly inside the library. Nevertheless, I recommend calling the function explicitly, as I may add more functionality to it in the future.


The next step is to open a connection to the SAP system. Since the connection parameters are in the sapnwrfc.ini file, it is only necessary to provide the destination. In your own application you do not need to use the sapnwrfc.ini file; you can provide all parameters in the RFC_CONNECTION_PARAMETER[] array which is passed to the RfcOpenConnection() function.

    RFC_CONNECTION_PARAMETER[1] conParams = [ { "DEST"w.ptr, dest } ];
    auto connection = RfcOpenConnection(conParams);
    scope(exit) RfcCloseConnection(connection);

Please note the D features used here. A string literal is always zero-terminated, therefore there is no need to use cU() on the "DEST"w literal. With the scope guard, I make sure that the connection is closed at the end of the block.

Before you can make an RFC call, you have to retrieve the function meta data (function description) and create the function from it.

    auto desc = RfcGetFunctionDesc(connection, "DATE_COMPUTE_DAY"w);
    auto func = RfcCreateFunction(desc);
    scope(exit) RfcDestroyFunction(func);

The RfcGetFunctionDesc() function calls the SAP system to look up the metadata. The result is cached to avoid a network round trip each time you invoke this function. The implication is that the remote user needs the right to perform the lookup. If this step fails with a security-related error, you should talk to your SAP administrator and review the rights of the remote user.

The DATE_COMPUTE_DAY function has one import parameter, DATE. To pass a parameter you call one of the RfcSetXXX() or RfcSetXXXByIndex() functions. The difference is that the first variant uses the parameter name (here: "DATE") while the second uses the index of the parameter in the signature (here: 1). I often use the named variant because the resulting code is much more readable. The date data type expects an 8-character UTF-16 array.

    RfcSetDate(func, "DATE", date[0..8]);

Now the function can be called:

    RfcInvoke(connection, func);

The computed weekday is returned in the export parameter DAY. There is a set of RfcGetXXX() and RfcGetXXXByIndex() functions to retrieve the value.

    wchar[1] day;
    RfcGetChars(func, "DAY", day);

Let’s print the result:

    writefln("Date %s is weekday %s", date[0..8], day);

Congratulations! You’ve finished your first RFC call.

Build the application with dub build. Before you can run the application, you still need to create the sapnwrfc.ini file with the connection parameters for your destination.


The SDK contains a commented sapnwrfc.ini file in the demo folder. If you are on Windows and your system still uses SAP GUI with the (deprecated) saplogon.ini file, then you can use the createini example application from my bindings library to convert the saplogon.ini file into the sapnwrfc.ini file.


Calling an RFC function of an SAP system with D is very easy. D features like support for UTF-16 strings, scope guards, and exceptions make the source quite readable. The presented example application is part of the D bindings library and can be downloaded from GitHub.

The Making of ‘D Web Development’

A long-time contributor to the D community, Kai Nacke is the author of ‘D Web Development‘ and the maintainer of LDC, the LLVM D Compiler. In this post, he tells the story of how his book came together.

At the beginning of 2014, I was asked by Packt Publishing if I wanted to review the D Cookbook by Adam Ruppe. Of course I wanted to!

The review was stressful, but it was a lot of fun. At the end of the year came a surprising question for me: would I be willing to switch sides and write a book myself? Here, I hesitated. Sure, writing your own book is a dream, but is this at all possible on top of a regular job? The proposed topic, D Web Development, was interesting. Web technologies I knew, of course, but the vibe.d framework was for me only a large unit test for each LDC release.

My interest was awakened and I created a chapter overview, based solely on my experience as a developer and the online documentation of vibe.d. The result came out well and I was offered a contract. It came with an immediate challenge: I should set up a small project plan. How do you plan to write a book?!?

Without any experience in this area, I stuck to the following rules. For each chapter, I planned a little time frame. Each should include at least one weekend, for the larger chapters perhaps even two. I reserved some time for the Easter holiday, too. The first version of the book would therefore be ready at the beginning of July, when I started writing in mid-February.

Even the first chapter showed that this plan was much too optimistic. The writing itself went quickly once I had something I could write about. But experimenting and testing took a lot of time. For one thing, I didn't have much experience with vibe.d. There were sample programs that I wanted to develop on Saturday in order to write about them on Sunday. However, I was still searching for errors on Monday, without having written a single line!

On the other hand, there were still a few rough edges in vibe.d at the time, but I did not want to write that these would be changed or implemented in later versions of the library. So I developed a few patches for vibe.d, e.g. digest authentication. By the way, there were also new LDC releases to create. Fortunately, the LDC team had expanded, so I just took care of the release itself (thanks so much, folks!). The result was, of course, that I missed many of my milestones.

In May, the first chapters came back from the review process. Other content also had to be written, such as the text for the back of the book. In mid-December, the last chapter was finished and almost all review notes on the other chapters were incorporated. After a little Christmas break, the remaining notes were quickly incorporated and the pre-final version of each chapter was created in January. And then, on February 1, 2016, the news came that my book was now published. I’d done it! Almost exactly one year after I had started with the first chapter.

Was the work worth it? In any case, it was a very special experience. Would I do it again? Yes! Right now, I am playing with the idea of updating the book and expanding a chapter. Let’s see what happens…

Making Of: LDC 1.0

This is a guest post from Kai Nacke. A long-time contributor to the D community, Kai is the author of D Web Development and the maintainer of LDC, the LLVM D Compiler.

LDC has been under development for more than 10 years. From release to release, the software has gotten better and better, but the version number has always implied that LDC was still the new kid on the block. Who would use a version 0.xx compiler for production code?

These were my thoughts when I raised the question, “Version number: Are we ready for 1.0?” in the forum about a year ago. At that time, the current LDC compiler was 0.15.1. In several discussions, the idea was born that the first version of LDC based on the frontend written in D should be version 1.0, because this would really be a major milestone. Version 0.18.0 should become 1.0!

Was LDC really as mature as I thought? Looking back, this was an optimistic view. At DConf 2015, Liran Zvibel from Weka.IO mentioned in his talk about large scale primary storage systems that he couldn’t use LDC because of bugs! Additionally, the beta version of 0.15.2 had some serious issues and was finally abandoned in favor of 0.16.0. And did I mention that I was busy writing a book about vibe.d?

Fortunately, over the past two years, more and more people began contributing to LDC. The number of active committers grew. Suddenly, the progress of LDC was very impressive: Johan added DMD-style code coverage and worked on merging the new frontend. Dan worked on an iOS version and Joakim on an Android version. Together, they made ARM a first class target of LDC. Martin and Rainer worked on the Windows version. David went ahead and fixed a lot of the errors which had occurred with the Weka code base. I spent some time working on the ports to PowerPC and AArch64. Uncounted small fixes from other contributors improved the overall quality.

Now it was obvious that a 1.x version was overdue. Shortly after DMD made the transition to the D-based frontend, LDC was able to use it. After the usual alpha and beta versions, I built the final release version on Sunday, June 5, and officially announced it the next day. Version 1.0 is shipping now!

Creating a release is always a major effort. I would like to say “Thank you!” to everybody who made this special release happen. And a big thanks to our users; your feedback is always a motivation to make the next LDC release even better.

Onward to 1.1!