New D Compiler Release: DMD 2.075.0

Posted on

DMD 2.075.0 was released a few days back. As with every release, the changelog is available so you can browse the list of fixed bugs and new features. 2.075.0 can be fetched from the dlang.org download page, which always makes available the latest DMD release alongside a nightly build.

Notable Changes

Every DMD release brings with it a number of bug fixes, changes, and enhancements. Here are some of the more noteworthy changes in this release.

Two array properties removed

Anyone who does a lot of work with D’s ranges will likely have encountered this little annoyance that arises from the built-in .sort property of arrays.

void main()
{
    import std.algorithm : remove, sort;
    import std.array : array;
    int[] nums = [5, 3, 1, 2, 4];
    nums = nums.sort.remove(2).array;
}

The .sort property has been deprecated for ages, so the above would result in the following error:

sorted.d(6): Deprecation: use std.algorithm.sort instead of .sort property

The workaround would be to add an empty set of parentheses to the sort call. With DMD 2.075.0, this is no longer necessary and the above will compile. Both the .sort and .reverse array properties have finally been removed from the language.

For the uninitiated, D has two features that have proven convenient in the functional pipeline programming style typically used with ranges. One is that parentheses on a function call are optional when there are no parameters. The other is Universal Function Call Syntax (UFCS), which allows a function call to be made using the dot notation on the first argument, so that a function int add(int a, int b) can be called as: 10.add(5).
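
Here’s a small, self-contained illustration of both features together (iota, map, and sum are from the standard library):

import std.algorithm : map, sum;
import std.range : iota;
import std.stdio : writeln;

int add(int a, int b) { return a + b; }

void main()
{
    // UFCS: the first argument moves in front of the dot
    assert(10.add(5) == add(10, 5));

    // Optional parentheses: a call with no arguments needs no (),
    // which keeps range pipelines tidy
    auto total = 5.iota.map!(x => x * 2).sum;
    writeln(total); // 20
}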

Each of D’s built-in types comes with a set of built-in properties. Given that the built-in properties are not functions, no parentheses are used to access them. The .sort array property has been around since the early days of D1. At the time, it was rather useful and convenient for anyone who was happy with the default implementation. When D2 came along with the range paradigm, the standard library was given a set of functions that can treat arrays as ranges, opening them up to use with the many range-based functions in the std.algorithm package and elsewhere.

With optional parentheses, UFCS, and a range-based function in std.algorithm called sort, conflict was inevitable. Now range-based programmers can put that behind them and take one more pair of parentheses out of their pipelines.

The breaking up of std.datetime

The std.datetime module has had a reputation as the largest module in D’s standard library. Some developers have been known to use it as a stress test for their tooling. It was added to the library long before D got the special package module feature, which allows multiple modules in a package to be imported as a single module.

Once package modules were added, Jonathan M. Davis, the original std.datetime developer, found it challenging to split the monolith into multiple modules. Then, at DConf 2017, he could be seen toiling away on his laptop in the conference hall and the hotel lobby. On the final day of the conference, the day of the DConf Hackathon, he announced that std.datetime was now a package. DMD 2.075.0 is the first release where the new module structure is available.

Any existing code using the old module should still compile. However, any static libraries or object files lying around with the old symbols stuffed inside may need to be recompiled.
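
In practice, the package import keeps working, and code that only needs part of the package can now import a submodule directly. A minimal sketch (std.datetime.systime is one of the new submodules):

import std.datetime;                  // the package import works as before
import std.datetime.systime : Clock;  // or pull in just the submodule you need

void main()
{
    import std.stdio : writeln;
    writeln(Clock.currTime());
}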

Colorized compiler messages

This one is missing from the changelog. DMD now has the ability to output colorized messages. The implementation required going through the existing error messages and properly annotating them where appropriate, so there may well be some messages for which the colors are missing. Also, given that this is a brand new feature and people can be picky about their terminal colors, more work will likely be done on this in the future. Perhaps that might include support for customization.


Compiler Ddoc documentation online

DMD, though originally written in C++, was converted to D some time ago. Now that more D programmers are able to contribute to the compiler, work has gone into documenting its source using D’s built-in Ddoc syntax. The result is now online, accessible from the sidebar of the existing library reference. A good starting point is the ddmd.mars module.

And more…

The above is a small part of the bigger picture. The bugfix list shows 89 bugs, regressions, and enhancements across the compiler, runtime, standard library, and web site. See the full changelog for the details.

Thanks to everyone who contributed to this release, whether it was by reporting issues, submitting or reviewing pull requests, testing out the beta, or carrying out any of the numerous small tasks that help a new release see the light of day.

DCompute: GPGPU with Native D for OpenCL and CUDA

Posted on

Nicholas Wilson is a student at Murdoch University, studying for his BEng (Hons)/BSc in Industrial Computer Systems (Hons) and Instrumentation & Control / Molecular Biology & Genetics and Biomedical Science. He just finished his thesis on low-cost defect detection of solar cells by electroluminescence imaging, which gives him time to work on DCompute and write about it for the D Blog. He plays the piano, ice skates, and has spent 7 years putting D to use on number bashing, automation, and anything else that he could make a computer do for him.


DCompute is a framework and compiler extension to support writing native kernels for OpenCL and CUDA in D to utilise GPUs and other accelerators for computationally intensive code. In development are drivers to automate the interactions between user code and the tedious and error prone compute APIs with the goal of enabling the rapid development of high performance D libraries and applications.

Introduction

After watching John Colvin’s DConf 2016 presentation in May of last year on using D’s metaprogramming to make the OpenCL API marginally less horrible to use, I thought, “This would be so much easier to do if we were able to write kernels in D, rather than doing string manipulations in OpenCL C”. At the time, I was coming up to the end of a rather busy semester and thought that would make a good winter[1] project. After all, LDC, the LLVM D Compiler, has access to LLVM’s SPIR-V and PTX backends, and I thought, “It can’t be too hard, it’s only glue code”. I slightly underestimated the time it would take. I finished the first stage of DCompute (because naming things is hard) and mainlined my changes to LDC at the end of February, eight months later, just in time for the close of submissions to DConf, where I gave a talk on the progress I had made.

Apart from familiarising myself with the LDC and DMD front-end codebases, I also had to understand the LLVM SPIR-V and PTX backends I was targeting, because they require the use of special metadata (e.g. to denote that a function is a kernel) and address spaces (used to represent __global & friends in OpenCL C and __global__ & friends in CUDA), and I had to introduce these concepts into LDC.

But once I was familiar with the code and had sorted the above discrepancies, it was mostly smooth sailing translating the OpenCL and CUDA modifiers into compiler-recognised attributes and wrapping the intrinsics into an easy to use and consistent interface.

When it was all working and almost ready to merge into mainline LDC, I hit a bit of a snag with regards to CI: the SPIR-V backend that was being developed by Khronos was based on the quite old LLVM 3.6.1 and, despite my suggestions, did not have any releases. So I forward ported the backend and the conversion utility to the master branch of LLVM and made a release myself. Still in progress on this front are converting magic intrinsics to proper LLVM intrinsics and transitioning to a TableGen-driven approach for the backend in preparation for merging the backend into LLVM Trunk. This should hopefully be done soon™.

Current state of DCompute

With the current state of DCompute we are able to write kernels natively in D and have access to most of its language-defining features like templates & static introspection, UFCS, scope guards, ranges & algorithms and CTFE. Notably missing, for hardware and performance reasons, are those features commonly excluded in kernel languages, like function pointers, virtual functions, dynamic recursion, RTTI, exceptions and the use of the garbage collector. Note that unlike OpenCL C++ we allow kernel functions to be templated and have overloads and default values. Still in development is support for images and pipes.

Example code

To write kernels in D, we need to pass -mdcompute-targets=<targets> to LDC, where <targets> is a comma-separated list of the desired targets to build for, e.g. ocl-120,cuda-350 for OpenCL 1.2 and CUDA compute capability 3.5, respectively (yes, we can do them all at once!). We get one file per target, e.g. kernels_ocl120_64.spv when building in 64-bit mode, containing all of the code for that device.
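
For example, a build invocation might look something like this (ldc2 is LDC’s compiler binary; any other flags your project needs are passed as usual):

ldc2 -mdcompute-targets=ocl-120,cuda-350 kernels.d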

The vector add kernel in D is:

@compute(CompileFor.deviceOnly) module example;
import ldc.dcompute;
import dcompute.std.index;

alias gf = GlobalPointer!float;

@kernel void vadd(gf a, gf b, gf c) 
{
	auto x = GlobalIndex.x;
	a[x] = b[x]+c[x];
}

Modules marked with the @compute attribute are compiled for each of the command line targets, @kernel makes a function a kernel, and GlobalPointer is the equivalent of the __global qualifier in OpenCL.

Kernels are not restricted to plain functions; lambdas & templates also work:

@kernel void map(alias F)(KernelArgs!F args)
{
    F(args);
}
//In host code
AutoBuffer!float x,y,z; // y & z initialised with data
q.enqueue!(map!((a,b,c) => a=b+c))(x.length)(x, y, z);

Here, KernelArgs translates host types to device types (e.g. buffers to pointers or, as in this example, AutoBuffers to AutoIndexed Pointers) so that we encapsulate the differences between the host and device types.

The last line is the expected syntax for launching kernels, q.enqueue!kernel(dimensions)(args), akin to CUDA’s kernel<<<dimensions,queue>>>(args). The libraries for launching kernels are in development.

Unlike CUDA, where all the magic for transforming the above expression into code on the host lies in the compiler, q.enqueue!func(sizes)(args) will be processed by static introspection in DCompute’s driver library.
The sole reason we can do this in D is that we are able to query the mangled name the compiler will give to a symbol via the symbol’s .mangleof property. This, in combination with D’s easy to use and powerful templates, means we can significantly reduce the mental overhead associated with using the compute APIs. Also, implementing this in the library is much simpler, and therefore faster, than putting the same behaviour in the compiler. While this may not seem like much to CUDA users, it will be a breath of fresh air to OpenCL users (just look at the OpenCL vector add host code example, steps 7-11).
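
The .mangleof property itself is ordinary D and easy to try; a minimal example:

import std.stdio : writeln;

void vadd(float* a, float* b, float* c) {}

void main()
{
    // Prints the exact symbol name the compiler emits for vadd,
    // which is what a driver library needs in order to locate the
    // kernel in the compiled device binary.
    writeln(vadd.mangleof);
}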

While you can’t do that just yet in DCompute, development should progress quickly and hopefully make it a reality soon.

I would like to thank John Colvin for the initial inspiration, Mike Parker for editing, and the LDC folks, David Nadlinger, Kai Nacke, Martin Kinke, with a special thanks to Johan Engelen, for their help with understanding the LDC codebase and reviewing my work.

If you would like to help develop DCompute (or be kept in the loop), feel free to drop a line at the libmir Gitter. Similarly, any efforts preparing the SPIR-V backend for inclusion into LLVM are also greatly appreciated.

[1] Southern hemisphere.

The New CTFE Engine

Posted on

Stefan Koch is the maintainer of sqlite-d, a native D sqlite reader, and has contributed to projects like SDC (the Stupid D Compiler) and vibe.d. He was also responsible for a 10% performance improvement in D’s current CTFE implementation and is currently writing a new CTFE engine, the subject of this post.


For the past nine months, I’ve been working on a project called NewCTFE, a reimplementation of the Compile-Time Function Evaluation (CTFE) feature of the D compiler front-end. CTFE is considered one of the game-changing features of D.

As the name implies, CTFE allows certain functions to be evaluated by the compiler while it is compiling the source code in which the functions are implemented. As long as all arguments to a function are available at compile time and the function is pure (has no side effects), then the function qualifies for CTFE. The compiler will replace the function call with the result.
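
A trivial illustration: any context that requires a compile-time constant, such as an enum initializer or a static assert, triggers CTFE.

int square(int x) { return x * x; }

// The initializer of an enum must be known at compile time,
// so square(7) is evaluated by CTFE during compilation.
enum area = square(7);
static assert(area == 49);

void main()
{
    // The very same function still runs normally at run time.
    int n = 7;
    assert(square(n) == 49);
}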

Since this is an integral part of the language, pure functions can be evaluated anywhere a compile-time constant may go. A simple example can be found in the standard library module, std.uri, where CTFE is used to compute a lookup table. It looks like this:

private immutable ubyte[128] uri_flags = // indexed by character
({

    ubyte[128] uflags;

    // Compile time initialize
    uflags['#'] |= URI_Hash;

    foreach (c; 'A' .. 'Z' + 1)
    {
        uflags[c] |= URI_Alpha;
        uflags[c + 0x20] |= URI_Alpha; // lowercase letters

    }

    foreach (c; '0' .. '9' + 1) uflags[c] |= URI_Digit;

    foreach (c; ";/?:@&=+$,") uflags[c] |= URI_Reserved;

    foreach (c; "-_.!~*'()") uflags[c] |= URI_Mark;

    return uflags;

})();

Instead of populating the table with magic values, a simple expressive function literal is used. This is much easier to understand and debug than some opaque static array. The ({ starts a function literal, the }) closes it. The () at the end tells the compiler to immediately invoke that literal, so that uri_flags becomes the result of the literal.

Functions are only evaluated at compile time if they need to be. uri_flags in the snippet above is declared in module scope. When a module-scope variable is initialized in this manner, the initializer must be available at compile time. In this case, since the initializer is a function literal, an attempt will be made to perform CTFE. This particular literal has no arguments and is pure, so the attempt succeeds.

For a more in-depth discussion of CTFE, see this article by H. S. Teoh at the D Wiki.

Of course, the same technique can be applied to more complicated problems as well; std.regex, for example, can build a specialized automaton for a regex at compile time using CTFE. However, as soon as std.regex is used with CTFE for non-trivial patterns, compile times can become extremely high (in D everything that takes longer than a second to compile is bloat-ware :)). Eventually, as patterns get more complex, the compiler will run out of memory and probably take the whole system down with it.

The blame for this can be laid at the feet of the current CTFE interpreter’s architecture. It’s an AST interpreter, which means that it interprets the AST while traversing it. To represent the result of interpreted expressions, it uses DMD’s AST node classes. This means that every expression encountered will allocate one or more AST nodes. Within a tight loop, the interpreter can easily generate over 100_000_000 nodes and eat a few gigabytes of RAM. That can exhaust memory quite quickly.

Issue 12844 complains about std.regex taking more than 16GB of RAM. For one pattern. Then there’s issue 6498, which executes a simple 0 to 10_000_000 while loop via CTFE and runs out of memory.
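
A sketch of what such a stress case looks like, modeled on the loop from issue 6498:

// Harmless at run time, but the current AST interpreter allocates
// AST nodes for every expression in every iteration when this is
// evaluated at compile time.
int count()
{
    int i;
    while (i < 10_000_000)
        ++i;
    return i;
}

enum result = count(); // forces compile-time evaluation of the loop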

Simply freeing nodes doesn’t fix the problem; we don’t know which nodes to free and enabling the garbage collector makes the whole compiler brutally slow. Luckily there is another approach which doesn’t allocate for every expression encountered. It involves compiling the function to a virtual ISA (Instruction Set Architecture). This virtual ISA, also known as bytecode, is then given to a dedicated interpreter for that ISA (in the case in which a virtual ISA happens to be the same as the ISA of the host, we call it a JIT (Just in Time) interpreter).

The NewCTFE project concerns itself with implementing such a bytecode interpreter. Writing the actual interpreter (a CPU emulator for a virtual CPU/ISA) is reasonably simple. However, compiling code to a virtual ISA is exactly as much work as compiling it to a real ISA (though, a virtual ISA has the added benefit that it can be extended for customized needs, but that makes it harder to do JIT later). That’s why it took a month just to get the first simple examples running on the new CTFE engine, and why slightly more complicated ones still aren’t running even after 9 months of development. At the end of the post, you’ll find an approximate timeline of the work done so far.

I’ll be giving a presentation at DConf 2017, where I’ll discuss my experience implementing the engine and explain some of the technical details, particularly regarding the trade-offs and design choices I’ve made. The current estimation is that the 1.0 goals will not be met by then, but I’ll keep coding away until it’s done.

Those interested in keeping up with my progress can follow my status updates in the D forums. At some point in the future, I will write another article on some of the technical details of the implementation. In the meantime, I hope the following listing sheds some light on how much work it is to implement NewCTFE 🙂

  • May 9th 2016
    Announcement of the plan to improve CTFE.
  • May 27th 2016
    Announcement that work on the new engine has begun.
  • May 28th 2016
    Simple memory management change failed.
  • June 3rd 2016
    Decision to implement a bytecode interpreter.
  • June 30th 2016
    First code (taken from issue 6498) consisting of simple integer arithmetic runs.
  • July 14th 2016
    ASCII string indexing works.
  • July 15th 2016
    Initial struct support
  • Sometime between July and August
    First switches work.
  • August 17th 2016
    Support for the special cases if(__ctfe) and if(!__ctfe)
  • Sometime between August and September
    Ternary expressions are supported
  • September 08th 2016
    First Phobos unit tests pass.
  • September 25th 2016
    Support for returning strings and ternary expressions.
  • October 16th 2016
    First (almost working) version of the LLVM backend.
  • October 30th 2016
    First failed attempts to support function calls.
  • November 1st 2016
    DRuntime unit tests pass for the first time.
  • November 10th 2016
    Failed attempt to implement string concatenation.
  • November 14th 2016
    Array expansion, e.g. assignment to the length property, is supported.
  • November 14th 2016
    Assignment of array indexes is supported.
  • November 18th 2016
    Support for arrays as function parameters.
  • November 19th 2016
    Performance fixes.
  • November 20th 2016
    Fixing the broken while(true) / for (;;) loops; they can now be broken out of 🙂
  • November 25th 2016
    Fixes to goto and switch handling.
  • November 29th 2016
    Fixes to continue and break handling.
  • November 30th 2016
    Initial support for assert
  • December 02nd 2016
    Bailout on void-initialized values (since they can lead to undefined behavior).
  • December 03rd 2016
    Initial support for returning struct literals.
  • December 05th 2016
    Performance fix to the bytecode generator.
  • December 07th 2016
    Fixes to continue and break in for statements (continue must not skip the increment step)
  • December 08th 2016
    Array literals with variables inside are now supported: [1, n, 3]
  • December 08th 2016
    Fixed a bug in switch statements.
  • December 10th 2016
    Fixed a nasty evaluation order bug.
  • December 13th 2016
    Some progress on function calls.
  • December 14th 2016
    Initial support for strings in switches.
  • December 15th 2016
    Assignment of static arrays is now supported.
  • December 17th 2016
    Fixing goto statements (we were ignoring the last goto to any label :)).
  • December 17th 2016
    De-macrofied string-equals.
  • December 20th 2016
    Implement check to guard against dereferencing null pointers (yes… that one was oh so fun).
  • December 22nd 2016
    Initial support for pointers.
  • December 25th 2016
    static immutable variables can now be accessed (yes the result is recomputed … who cares).
  • January 02nd 2017
    First Function calls are supported !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
  • January 17th 2017
    Recursive function calls work now 🙂
  • January 23rd 2017
    The interpret3.d unit-test passes.
  • January 24th 2017
    We are green on 64bit!
  • January 25th 2017
    Green on all platforms !!!!! (with blacklisting though)
  • January 26th 2017
    Fixed special case cast(void*) size_t.max (this one cannot go through the normal pointer support, which assumes that you have something valid to dereference).
  • January 26th 2017
    Member function calls are supported!
  • January 31st 2017
    Fixed a bug in switch handling.
  • February 03rd 2017
    Initial function pointer support.
  • Rest of February 2017
    Wild goose chase for issue #17220
  • March 11th 2017
    Initial support for slices.
  • March 15th 2017
    String slicing works.
  • March 18th 2017
    $ in slice expressions is now supported.
  • March 19th 2017
    The concatenation operator (c = a ~ b) works.
  • March 22nd 2017
    Fixed a switch fallthrough bug.

Project Highlight: workspace-d

Posted on

Not so long ago, Jan Jurzitza sat down at his keyboard intent on writing a D plugin for Atom, his text editor of choice at the time. Then came disappointment.

“I was pretty unhappy with their API,” he says.

Visual Studio Code was released a short time after. He decided to give it a go and “instantly fell in love with it”. His Atom plugin was pushed aside and he started work on a new plugin for VS Code called code-d.

However, I did not want to maintain the same functionality in two plugins for two different text editors, so I thought that making a program which contains most of the plugin logic, like starting and calling dcd, dscanner, dfmt, etc., would be beneficial and would also help with including D support in more editors and IDEs in the future.

For the uninitiated, DCD (the D Completion Daemon), DScanner, and Dfmt are D-oriented tools for plugin developers, all maintained by Brian Schott. They are, respectively, a client-server based auto-completion program, a source code analyzer, and a code formatter. A number of IDE and text editor plugins employ them directly.

So Jan started work on his new tool and named it workspace-d.

With workspace-d I want to make it simple for plugin developers to integrate D functionality into their editor of choice. workspace-d is designed to work both as a standalone program through stdio and as a D library. Once I ported most of the code from my Atom extension to workspace-d, I could simply spawn it as a subprocess in code-d, which I got working with it quite quickly.

In addition to porting his Atom plugin to use workspace-d, he also created one for Sublime Text. Currently, he’s not devoting any time to either and is looking for someone else to take over maintenance of one or both. Anyone interested might start by submitting pull requests. Aside from workspace-d itself, Jan’s focus is on code-d.

He’s recently been working on version 2.0 of workspace-d, with a focus on streamlining the way it handles requests.

Using traits, templates, and CTFE (Compile-Time Function Execution), basically all D compile time magic, I was able to make an automatic wrapper for the functions for version 2.0. Basically, when a request like {"cmd":"hello"} comes in, it runs the D function hello with its default arguments. If the arguments don’t match, it responds with an error. This system automatically binds function arguments to JSON values and generates a response from the return value.
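
The general idea can be sketched in a few lines of D. The handler names and the dispatch helper below are hypothetical, and the real wrapper in workspace-d also binds JSON arguments to the function’s parameters, which this sketch omits:

import std.json : JSONValue, parseJSON;
import std.stdio : writeln;

// Hypothetical command handlers.
string hello() { return "hello from the server"; }
string status() { return "ok"; }

// Match the "cmd" field against a compile-time list of handlers and
// call the one whose name matches. The foreach over the template
// argument list is unrolled at compile time.
JSONValue handle(handlers...)(JSONValue request)
{
    immutable cmd = request["cmd"].str;
    foreach (h; handlers)
    {
        if (cmd == __traits(identifier, h))
            return JSONValue(h());
    }
    return JSONValue(["error": "unknown command: " ~ cmd]);
}

void main()
{
    writeln(handle!(hello, status)(parseJSON(`{"cmd":"hello"}`)));
}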

To deserialize the JSON requests, he’s using painlessjson, a third-party library available in the DUB package registry.

It works really great and I can really recommend it for some simple and easy conversions between D types and JSON. This change really cleaned up all the code and made it possible to use workspace-d as a library.

He’s also working on a new project, serve-d, that works with Microsoft’s Language Server Protocol.

serve-d is an alternative for the workspace-d command line I/O for those who prefer JSON RPC over my custom binary/JSON mix. It’s fiber based and uses workspace-d as a library, which results in really clean code. There’s an alpha version of the implementation on github already, both the server and a new branch on code-d. With the Language Server Protocol, I’m hoping for easier integration in other editors. The concept is basically the same as workspace-d’s command line interface, but, because Microsoft is such a big company, I’m hoping that more editors by big developers are going to implement this protocol.

Building and installing workspace-d should go pretty smoothly on Linux or OS X, but it’s currently a little bumpy on Windows. Because of an issue Jan has yet to resolve, it can only be built on Windows with LDC.

The auto completion didn’t work for some people on Windows because it got stuck in the std.process.execute function when creating a pipe to write to. I couldn’t find any way to reproduce it in a standalone program so I couldn’t file a bug either. So what we did to avoid this issue in the short term was to simply disallow compilation on Windows using DMD. It works just fine when compiled with LDC.

Jan’s primarily a Linux user (he doesn’t own a Mac and only runs Windows in a VM). He credits GitHub user @Andrepuel for getting it operational on OS X, and @aka-demik for finding the issue on Windows and verifying that it compiles with LDC. He’ll be grateful to anyone who can help fully resolve the Windows/DMD issue once and for all.

If you’re looking to develop a D plugin for your favorite editor, consider taking advantage of the work Jan has already done with workspace-d to save yourself some effort. And VS Code users can put it to use via code-d to get code completion and more. Visit its VS Code marketplace page to read reviews and installation instructions.

Editable and Runnable Doc Examples on dlang.org

Posted on

Sebastian Wilzbach was a GSoC student for the D Language Foundation in 2016 and has since become a regular contributor to Phobos, D’s standard library, and dlang.org.


This article explains the steps that were needed to have editable and runnable examples in the documentation on dlang.org. First, let’s begin with the building blocks.

Unit testing in D

One of D’s coolest features is its unittest block, which allows the insertion of testable code anywhere in a program. It has become idiomatic for a function to be followed directly by its tests. For example, let’s consider a simple function add which is accompanied by two tests:

auto add(int a, int b)
{
    return a + b;
}

unittest
{
    assert(2.add(2) == 4);
    assert(3.add(4) == 7);
}

By default, all unittest blocks will be ignored by the compiler. Specifying -unittest on the compiler’s command line will cause the unit tests to be included in the compiled binary. Combined with -main, tests in D can be directly executed with:

rdmd -main -unittest add.d

If a unittest block is annotated with embedded documentation, a D documentation generator can also display the tests as examples in the generated documentation. The DMD compiler ships with a built-in documentation generator (DDoc), which can be run with the -D flag, so executing:

dmd -D -main add.d

would yield the documentation of the add function above with its tests displayed as examples, as demonstrated here:
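
In source form, all it takes is attaching DDoc comments to both the function and the unittest block directly below it; a sketch based on the add example above:

/// Adds two integers.
auto add(int a, int b)
{
    return a + b;
}

/// A documented unittest placed right below the symbol becomes its example.
unittest
{
    assert(2.add(2) == 4);
    assert(3.add(4) == 7);
}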

Please note that the documentation on dlang.org is generated with DDoc. However, in case you don’t like DDoc, there are several other options.

Executing code on the web

Frequent readers of the D Blog might remember Damian Ziemba’s DPaste – an online compiler for the D Programming language. In 2012, he made the examples on the front page of D’s website runnable via his service. Back in those old days, the website of the D Programming language looked like this:

As a matter of fact, until 2015, communication with DPaste was done in XML.

Putting things together

So D has a unit test system that allows placing executable unit tests next to the functions they test, the tests can also be rendered as examples in the generated documentation, and there exists a service, in the form of DPaste, that allows D code to be executed on the web. The only thing missing was to link them together to produce interactive documentation for a compiled language.

There was one big caveat that needed to be addressed before that could happen. While D’s test suite, which is run on ten different build machines, ensures that all unit tests compile & run without errors, an extracted test might contain symbols that were imported at module scope and thus wouldn’t be runnable on dlang.org. A unittest block can only be completely independent of the module in which it is declared if all of its symbols are imported locally in the test’s scope. The solution was rather simple: extract all tests from Phobos, then compile and execute them separately to ensure that a user won’t hit a “missing import” error on dlang.org. Thanks to D’s ultra-fast frontend, this step takes less than a minute on a typical machine in single-core build mode.
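
A self-contained test therefore looks something like this; because every symbol it uses is imported inside the test’s own scope, the extracted example compiles and runs on its own:

///
unittest
{
    // Local imports keep the extracted example independent of the module.
    import std.algorithm.iteration : sum;
    import std.range : iota;

    assert(4.iota.sum == 6);
}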

Moreover, to prevent any regressions, this has been integrated into Phobos’s test suite and is run for every PR via CircleCi. As Phobos has extensive coverage with unit tests, we started this transition with a blacklist and, step-by-step, removed modules for which all extracted tests compile. With continuous checking in place, we were certain that none of the exposed unit tests would throw any errors when executed in the documentation, so we could do the flip and replace the syntax-highlighted unit test examples with an interactive code editor.

Going one step further

With this setup in place, hitting the “Run” button would merely show the users “All tests passed”. While that’s always good feedback, it conveys less information than is usually desirable.

Documentation that supports runnable examples tends to send any output to stdout. This allows the reader to take the example and modify it as needed while still seeing useful output about the modifications. So, for example, instead of using assertions to validate the output of a function, which is idiomatic in D unit tests and examples:

assert(myFun() == 4);

Other documentation usually prints to stdout and shows the expected output in a comment. In D, that would look like this:

writeln(myFun()); // 4

I initially tried to do such a transformation with regular expressions, but I was quickly bitten by the complexity of a context-free language. So I made another attempt using Brian Schott’s libdparse, a library to parse and lex D source code. libdparse allows one to traverse the abstract syntax tree (AST) of a D source file. During the traversal of the AST, the transformation tool can rewrite all AssertExpressions into writeln calls, similar to the way other documentation displays examples. To speak in the vocabulary of compiler devs: we are lowering AssertExpressions into the more humanly digestible writeln calls!
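
A rough sketch of that traversal is below. The general shape (lex, parse, then walk the tree with an ASTVisitor subclass) follows libdparse’s usual API, but the exact names and signatures here are written from memory and may differ between libdparse versions; the real tool also rewrites the source text, whereas this sketch only counts the matches.

import dparse.ast;
import dparse.lexer : LexerConfig, StringCache, getTokensForParser;
import dparse.parser : parseModule;
import dparse.rollback_allocator : RollbackAllocator;
import std.stdio : writeln;

// Counts assert expressions; a real tool would use each node's location
// to rewrite the assertion into an equivalent writeln call.
class AssertCollector : ASTVisitor
{
    alias visit = ASTVisitor.visit;
    size_t count;

    override void visit(const AssertExpression ae)
    {
        ++count;
        ae.accept(this); // keep descending into nested expressions
    }
}

void main()
{
    auto code = cast(ubyte[]) "unittest { assert(myFun() == 4); }".dup;

    LexerConfig config;
    auto cache = StringCache(StringCache.defaultBucketCount);
    RollbackAllocator rba;

    auto tokens = getTokensForParser(code, config, &cache);
    auto mod = parseModule(tokens, "example.d", &rba);

    auto collector = new AssertCollector;
    collector.visit(mod);
    writeln(collector.count, " assert expression(s) found");
}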

Once the AST has been traversed and modified, it needs to be transformed into source code again. This led to improvements in libdparse’s formatting capabilities (1, 2).

The future

As of now, there are still a small number of functions in Phobos that don’t have a nice public example that is runnable on dlang.org. Tooling to check for this has recently been activated in Phobos. So now you can use this tool (make -f posix.mak has_public_example) to find functions lacking public tests and remove those modules from the blacklist.

Another target for improvement is DPaste. For example, it currently doesn’t cache incoming requests, which could improve the performance of executed examples on dlang.org. However, due to the fast compilation speed of the DMD compiler, this “slow-down” isn’t noticeable and is more of a perfectionist wish.

I hope you enjoy the new “Run” button on the documentation and have as much fun playing with it as I do. Click here to get started.