Editable and Runnable Doc Examples on dlang.org

Posted on

Sebastian Wilzbach was a GSoC student for the D Language Foundation in 2016 and has since become a regular contributor to Phobos, D’s standard library, and dlang.org.


This article explains the steps that were needed to have editable and runnable examples in the documentation on dlang.org. First, let’s begin with the building blocks.

Unit testing in D

One of D’s coolest features is its unittest block, which allows the insertion of testable code anywhere in a program. It has become idiomatic for a function to be followed directly by its tests. For example, let’s consider a simple function add which is accompanied by two tests:

auto add(int a, int b)
{
    return a + b;
}

unittest
{
    assert(2.add(2) == 4);
    assert(3.add(4) == 7);
}

By default, all unittest blocks will be ignored by the compiler. Specifying -unittest on the compiler’s command line will cause the unit tests to be included in the compiled binary. Combined with -main, tests in D can be directly executed with:

rdmd -main -unittest add.d

If a unittest block is annotated with a documentation comment, a D documentation generator can also display the tests as examples in the generated documentation. The DMD compiler ships with a built-in documentation generator (DDoc), which can be run with the -D flag, so executing:

dmd -D -main add.d

would yield the documentation of the add function above with its tests displayed as examples.
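
For the tests to be picked up, the unittest block needs its own documentation comment. A minimal sketch of the annotated version of add (the /// comments are the only additions to the earlier snippet) looks like this:

/// Adds two numbers.
auto add(int a, int b)
{
    return a + b;
}

///
unittest
{
    assert(2.add(2) == 4);
    assert(3.add(4) == 7);
}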

Please note that the documentation on dlang.org is generated with DDoc. However, in case you don’t like DDoc, there are several other options.

Executing code on the web

Frequent readers of the D Blog might remember Damian Ziemba’s DPaste – an online compiler for the D Programming language. In 2012, he made the examples on the front page of D’s website runnable via his service. Back in those old days, the D website looked quite different from how it does today.

As a matter of fact, until 2015, communication with DPaste was done in XML.

Putting things together

So D has a unit test system that allows placing executable unit tests next to the functions they test, the tests can also be rendered as examples in the generated documentation, and there exists a service, in the form of DPaste, that allows D code to be executed on the web. The only thing missing was to link them together to produce interactive documentation for a compiled language.

There was one big caveat that needed to be addressed before that could happen. While D’s test suite, which is run on ten different build machines, ensures that all unit tests compile & run without errors, an extracted test might contain symbols that were imported at module scope and thus wouldn’t be runnable on dlang.org. A unittest block can only be completely independent of the module in which it is declared if all of its symbols are imported locally in the test’s scope. The solution was rather simple: extract all tests from Phobos, then compile and execute them separately to ensure that a user won’t hit a “missing import” error on dlang.org. Thanks to D’s ultra-fast frontend, this step takes less than a minute on a typical machine in single-core build mode.
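
The pattern that makes a unittest self-contained is to import everything it needs locally. A minimal sketch (the use of std.algorithm.comparison here is just an arbitrary example):

unittest
{
    // The import lives inside the test's scope, so the extracted
    // example compiles on its own, without the rest of the module.
    import std.algorithm.comparison : max;
    assert(max(2, 3) == 3);
}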

Moreover, to prevent any regressions, this has been integrated into Phobos’s test suite and is run for every PR via CircleCI. As Phobos has extensive unit test coverage, we started this transition with a blacklist and, step by step, removed modules for which all extracted tests compile. With continuous checking in place, we were certain that none of the exposed unit tests would throw any errors when executed in the documentation, so we could flip the switch and replace the syntax-highlighted unit test examples with an interactive code editor.

Going one step further

With this setup in place, hitting the “Run” button would merely show the users “All tests passed”. While that’s always good feedback, it conveys less information than is usually desirable.

Documentation that supports runnable examples tends to print its output to stdout, which allows the reader to take the example, modify it as needed, and still see useful output about the modifications. So, for example, instead of using assertions to validate the output of a function, which is idiomatic in D unit tests and examples:

assert(myFun() == 4);

Other documentation usually prints to stdout and shows the expected output in a comment. In D, that would look like this:

writeln(myFun()); // 4

I initially tried to do such a transformation with regular expressions, but I was quickly bitten by the complexity of a context-free language. So I made another attempt using Brian Schott’s libdparse, a library to parse and lex D source code. libdparse allows one to traverse the abstract syntax tree (AST) of a D source file. During the traversal of the AST, the transformation tool can rewrite all AssertExpressions into writeln calls, similar to the way other documentation displays examples. To speak in the vocabulary of compiler devs: we are lowering AssertExpressions into the more humanly digestible writeln calls!
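
As a rough sketch of the approach (this is not the actual transformation tool, and the code assumes libdparse's documented visitor API, which may vary between versions), an ASTVisitor subclass can intercept every AssertExpression during traversal:

import dparse.ast;
import dparse.lexer;
import dparse.parser : parseModule;
import dparse.rollback_allocator : RollbackAllocator;
import std.stdio : writeln;

class AssertFinder : ASTVisitor
{
    alias visit = ASTVisitor.visit;

    override void visit(const AssertExpression ae)
    {
        // The real tool would rewrite the assertion into a writeln call here.
        writeln("found an assert expression");
        ae.accept(this);
    }
}

void main()
{
    auto source = cast(ubyte[]) "unittest { assert(2.add(2) == 4); }".dup;
    LexerConfig config;
    auto cache = StringCache(StringCache.defaultBucketCount);
    auto tokens = getTokensForParser(source, config, &cache);
    RollbackAllocator rba;
    auto mod = parseModule(tokens, "example.d", &rba);
    new AssertFinder().visit(mod);
}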

Once the AST has been traversed and modified, it needs to be transformed into source code again. This led to improvements in libdparse’s formatting capabilities (1, 2).

The future

As of now, there are still a small number of functions in Phobos that don’t have a nice public example that is runnable on dlang.org. Tooling to check for this has recently been activated in Phobos. So now you can use this tool (make -f posix.mak has_public_example) to find functions lacking public tests and remove those modules from the blacklist.

Another target for improvement is DPaste. For example, it currently doesn’t cache incoming requests, which could improve the performance of executed examples on dlang.org. However, due to the fast compilation speed of the DMD compiler, this “slow-down” isn’t noticeable and is more of a perfectionist wish.

I hope you enjoy the new “Run” button on the documentation and have as much fun playing with it as I do. Click here to get started.

Snowflake Strings

Posted on

Walter Bright is the BDFL of the D Programming Language and founder of Digital Mars. He has decades of experience implementing compilers and interpreters for multiple languages, including Zortech C++, the first native C++ compiler. He also created Empire, the Wargame of the Century.


Consider the following D code in file.d:

int foo(int i) {
    assert(i < 3);
    return i;
}

This is equivalent to the C code:

#include <assert.h>

int foo(int i) {
    assert(i < 3);
    return i;
}

The assert() in the D code is “lowered” (i.e. rewritten by the compiler) to the following:

(i < 3 || _d_assertp("file.d", 2))

We’re interested in how the compiler writes that string literal, "file.d" to the generated object file. The most obvious implementation is to write the characters into the data section and push the address of that to call _d_assertp().

Indeed, that does work, and it’s tempting to stop there. But since we’re professional compiler nerds obsessed with performance, details, etc., there’s a lot more oil in that olive (to borrow one of Andrei’s favorite sayings). Let’s put it in a press and start turning the screw, because assert()s are used a lot.

First off, string literals are immutable (they were originally mutable in C and C++, but are no longer, and D tries to learn from such mistakes). This suggests the string can be put in read-only memory. What advantages does that deliver?

  • Read-only memory is, well, read only. Attempts to write to it are met with a seg fault exception courtesy of the CPU’s memory management logic. Seg faults are a bit graceless, like a cop pulling you over for driving on the wrong side of the road, but at least there wasn’t a head-on collision or corruption of the program’s memory.
  • Read-only pages are never swapped out by the virtual memory system; they never get marked as “dirty” because they are never written to. They may get discarded and reloaded, but that’s much less costly.
  • Read-only memory is safe from corruption by malware (unless the malware infects the MMU, sigh).
  • Read-only memory in a shared library is shared – copies do not have to be made for each user of the shared library.
  • Read-only memory does not need to be scanned for pointers to the heap by the garbage collection system (if the application does GC).

Essentially, shoving as much as possible into read-only memory is good for performance, size and safety. There’s the first drop of oil.

The next issue crops up as soon as there’s more than one assert:

int foo(int i, int j) {
    assert(i < 3);
    assert(j & 1);
    return i + j;
}

The compiler emits two copies of "file.d" into the object file. Since they’re identical, and read-only, it makes sense to only emit one copy:

string TMP = "file.d";
int foo(int i, int j) {
    (i < 3 || _d_assertp(TMP, 2))
    (j & 1 || _d_assertp(TMP, 3))
    return i + j;
}

This is called string pooling and is fairly simple to implement. The compiler maintains a hash table of string literals and their associated symbol names (TMP in this case).
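
As a toy illustration (this is not dmd's actual code), the pool can be modeled as an associative array mapping each literal to the symbol created for its first occurrence:

// A toy model of string pooling, not dmd's implementation.
string[string] pool; // literal => symbol name

string intern(string literal)
{
    import std.conv : to;

    if (auto existing = literal in pool)
        return *existing; // already pooled, reuse its symbol
    auto symbol = "TMP" ~ pool.length.to!string;
    pool[literal] = symbol; // first occurrence gets a fresh symbol
    return symbol;
}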

So far, this is working reasonably well. But when asserts migrate into header files, macros, and templates, the same string can appear in lots of object files, since the compiler doesn’t know what is happening in other object files (the separate compilation model). Other string literals can exhibit this behavior, too, when generic coding practices are used. There needs to be some way to present these in the object file so the linker can pool identical strings.

The dmd D compiler currently supports four different object file formats on different platforms:

  • Elf, for Linux and FreeBSD
  • Mach-O, for OSX
  • MS-COFF, for Win64
  • OMF, for Win32

Each does it in a different way, with different tradeoffs. The methods tend to be woefully underdocumented, and figuring this stuff out is why I get paid the big bucks.

Elf

Elf turns out to have a magic section just for this purpose. It’s named .rodata.strM.N, where M is the number of bytes a character occupies and N is the alignment. For good old char strings, that would be .rodata.str1.1. The compiler just dumps the strings into that section, and the Elf linker looks through it, removing the redundant strings and adjusting the relocations accordingly. It’ll handle the usual string types – char, wchar, and dchar – with aplomb.

There are just a couple of flaws. The end of a string is determined by a nul character. This means that strings cannot have embedded nuls, or the linker will regard them as multiple strings and shuffle them about in unexpected ways. One cannot have relocations in those sections, either. This means it’s only good for C string literals, not other kinds of data.

This poses a problem for D, where strings are length-delineated, not nul-terminated. Does this mean D is doomed to being unable to take advantage of the C-centric file formats and linker design? Not at all. The D compiler simply appends a nul when emitting string literals. If the string does have an embedded nul (allowed in D), it is not put in these special sections (and the benefit is lost, but such strings are thankfully rare).
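
The decision itself is trivial. A minimal sketch (the helper name is made up for illustration): only literals free of embedded nuls are eligible for the mergeable string section.

import std.algorithm.searching : canFind;

// Hypothetical helper for illustration: literals with embedded nuls
// are emitted as ordinary read-only data instead.
bool fitsInMergeableSection(const(char)[] literal)
{
    return !literal.canFind('\0');
}

unittest
{
    assert(fitsInMergeableSection("file.d"));
    assert(!fitsInMergeableSection("file\0d"));
}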

Mach-O

Mach-O uses a variant of the Elf approach, a special section named __cstring. It’s more limited in that it only works with single byte chars. No wchar_ts for you! If there ever was confirmation that UTF-16 and UTF-32 are dead end string types, this should be it.

MS-COFF

Microsoft invented MS-COFF by extending the old Unix COFF format. It has many magic sections, but none specifically for strings. Instead, it uses what are called COMDAT sections, one for each string. COMDATs are sections tagged with a unique name, and when the linker is faced with multiple COMDATs with the same name, one is picked and all references to the other COMDATs are rewritten to refer to the Anointed One. COMDATs first appeared in object formats with the advent of C++ templates, since template code generation tends to generate the same code over and over in separate files.

(Isn’t it interesting how object file formats are driven by the needs of C and C++?)

The COMDAT for "hello" would look something like this:

??_C@_05CJBACGMB@hello?$AA@:
db 'h', 'e', 'l', 'l', 'o', 0

The tty noise there is the mangled name of the COMDAT, which is generated from the string literal’s contents. The algorithm must match across compilation units, as that is how the linker decides which ones are the same (experimenting with it will show that the substring CJBACGMB is some sort of hash). Microsoft’s algorithm for the mangling and hash is undocumented as far as I can determine, but it doesn’t matter anyway; it only has to have a 1:1 mapping between name and string literal. That screams “MD5 hash” to me, so that’s what dmd does. The name is an MD5 hash of the string literal’s contents, which also has the nice property that no matter how large the string gets, the identifier doesn’t grow.
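
Microsoft’s exact mangling is different, but the core idea is easy to sketch with Phobos’ std.digest.md: hash the literal’s contents and use the fixed-length hex digest in the symbol name.

void main()
{
    import std.digest.md : md5Of, toHexString;
    import std.stdio : writeln;

    // Identical literals always hash to the same fixed-length name,
    // no matter how long the string itself is.
    auto hex = toHexString(md5Of("hello"));
    string name = "_str_" ~ hex[].idup;
    writeln(name); // "_str_" followed by the 32-character hex digest of "hello"
}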

COMDATs can have anything stuffed in them, so this is a feature that is usable for a lot more than just strings.

The downside of the COMDAT scheme is the space taken up by all those names; a program shipped with debug symbols in it could get much larger.

OMF

The caboose is OMF, an ancient format going back to the early 80’s. It was extended with a kludgy COMDAT system to support C++ just before most everyone abandoned it. DMD still emits it for Win32 programs. We’re stuck with it because it’s the only format the default linker (OPTLINK) understands, and so we find a way to press it into service.

Since it has COMDATs, that’s the mechanism used. The wrinkle is that COMDATs are code sections or data sections only; there are no other options. We want it to be read-only, so the string COMDATs are emitted as code sections (!). Hey, it works.

Conclusion

I don’t think we’ve pressed all the oil out of that olive yet. It may be like memcpy, where every new crop of programmers thinks of a way to speed it up.

I hope you’ve enjoyed our little tour of literals, and may all your string literals be unique snowflakes.

Thanks to Mike Parker for his help with this article.

Project Highlight: DPaste

Posted on

DPaste is an online compiler and collaboration tool for the D Programming Language. Type in some D code, click run, and see the results. Code can also be saved so that it can be shared with others. Since it was first announced in the forums back in 2012, it has become a frequently used tool in facilitating online discussions in the D community. But Damian Ziemba, the creator and maintainer of DPaste, didn’t set out with that goal in mind.

Actually it was quite spontaneous and random. I was hanging out on the #D IRC channel at freenode. I was quite amazed at how active this channel was. People were dropping by asking questions, lots of code snippets were floating around. One of the members created an IRC bot that was able to compile code snippets, but it was for his own language that he created with D. Someone else followed and created the same kind of bot, but with the ability to compile code in D, though it didn’t last long as it was run on his own PC. So I wrote my own, purely in D, that was compiling D snippets. It took me maybe 4-5 hours to write both an IRC support lib and the logic itself. Then came some hardening of the server where the bot was running and, voilà, we had nazbot @ #D, which was able to evaluate statements like ^stmt import std.stdio; writeln("hello world"); and would respond with "hello world".

Nazbot became popular and people started floating new ideas. That ultimately led Damian to take a CMS he had already written in PHP and repurpose it to use as a frontend for what then became DPaste.

The frontend is written in PHP and uses MySQL for storage. It acts as a web interface (using a Bootstrap HTML template and jQuery) and API provider for 3rd Parties. The backend is responsible for actual compilation and execution. It’s possible to use multiple backends. The frontend is a kind of load-balancer when it comes to choosing a backend. The frontend and the backend may live on different machines.

DPaste is primarily used through the web interface, but it’s also used by dlang.org.

Once dpaste.dzfl.pl was well received, the idea popped up that maybe we could provide runnable examples on the main site. So it was implemented. The next idea, proposed by Andrei Alexandrescu, was to enable runnable examples on all of the Phobos documentation. I got swallowed by real life and couldn’t contribute at the time, but eventually Sebastian Wilzbach took it up and finished the implementation. So today we have interactive examples in the Phobos documentation.

When Damian first started work on DPaste in 2011, the D ecosystem looked a bit different than it does today.

There weren’t as many 3rd party libraries as we have now; there was no DUB, there was no vibe.d, etc. I wish I’d had vibe.d back then. I would have implemented the frontend in D instead of PHP.

What I enjoy the most about D is just how “nice” to the eye the language is (compared to C and C++, which I work with on a daily basis) and how easy it is to express what’s in your mind. I’ve never had to stop and think, “how the hell can I implement this”, which is quite common with C++ in my case. In the current state, what is also amazing is how D is becoming a “batteries-included” package. Whatever you need, you just dub fetch it.

He’s implemented DPaste such that it requires very little in terms of maintenance costs. It automatically updates itself to the latest compiler release and also knows how to restart itself if the backend hangs for some reason. He says the only real issue he’s had to deal with over the past five years is spam, which has forced him to reimplement the captcha mechanism several times.

As for the future? He has a few things in mind.

I plan to rewrite the backend from scratch, open source it and use a docker image so anybody can easily pick up development or host his own backend (which is almost done). Functionally, I want to maintain different compiler versions like DMD 2.061.0, DMD 2.062.1, DMD 2.063.0, LDC 0.xx, GDC x.xx.xx, etc., and connect more architectures as backends (currently x86, arm and aarch64 are planned).

I also want to rewrite the frontend in D using vibe.d, websockets, and angular.js. In general, I would like to make the created applications more interactive. So, for example, you could use the output from your code snippet in realtime as it is produced. I would like also to split a middle end off from the frontend. The middle end would provide communication with backends and offer both a REST API and websockets. Then the frontend would be responsible purely for user interaction and nothing else.

He would also like to see DPaste become more official, perhaps through making it a part of dlang.org. And for a point further down the road, Damian has an even grander plan.

I hope to make a full-blown online IDE for dlang.org with workspaces, compilers to choose from, and so on.

That would be cool to see!

Project Highlight: The New CTFE Engine

Posted on

CTFE (Compile-Time Function Execution) is today a core feature of the D Programming Language. D creator Walter Bright first implemented it in DMD as an extension of the constant folding logic that was already there. Don Clugston (of FastDelegate fame) made a pass at improving it and, according to Walter, “took it much further”. Since that time, usage of CTFE has shown up in one D project after another, including in D’s standard library. For example, Dmitry Olshansky employed it in his overhaul of std.regex to great effect.

On the last day of DConf 2016, Stefan Koch gave a lightning talk on his thoughts about CTFE in D. At the end of the talk, in response to a question from Andrei Alexandrescu on how D’s implementation could be improved, he said the following:

CTFE is really a hack. You can see that it’s a hack. It’s implemented as a hack. It is the most useful hack that I’ve ever seen, and it is definitely a hacker’s tool to do stuff that are like magic. But to be fast, it would need to be heavily redesigned, reimplemented, possibly executed in multiple threads, because it is used for stuff that we could never have envisioned when it was invented.

Not long after that, Stefan opened a discussion on the forums and took up the torch to improve the CTFE engine. As to why he got started on this journey in the first place, Stefan says, “I started work on the CTFE engine because I said so at DConf.” But, of course, there’s more to it than that.

I have pretty heavyweight CTFE needs (I worked on a compile-time trans-compiler). Also, my CTFE SQLite reader is failing if you want to read a database bigger than 2MB at CTFE.

His investigations into the performance of the CTFE interpreter shed light on its problems.

The current interpreter interprets every AST-Node it sees directly. This leaves very little space to collect information about the code that is being interpreted. It doesn’t know when something will be used as a reference, so it needs to copy every variable on every mutation. It has to do a deep-copy for this. That means it copies the whole chain of mutations every time.

To clarify, he offers the following example.

Imagine foreach(i; 0 .. 10) { a = i; }. On the first iteration we save a` = 0 and set a`` to 1. On the second iteration we save a``` = 1 and a```` = 0, and we set a````` to 2; then a`````` = 1 and a``````` = 0, and so on. As you can see, the memory requirements just shoot up. It’s basically a factorial function with a very small coefficient. That is why for very small workloads this extreme overhead is not noticeable.

That flaw looked unfixable. Indeed, the whole architecture in dinterpret.d is very convoluted and hard to understand. I did a few experiments on improving the memory management of the interpreter, but they proved fruitless.

Once he realized there was going to be no quick fix, Stefan sat down and drew up a plan to avoid digging himself into the same hole the current interpreter was in. The result of his planning led him down a road he hadn’t expected to travel.

Direct interpretation was out of the question, since it would give the new engine too little time to analyze data flow and decide whether a copy was really needed or not. I had to implement an intermediate representation. It had to be portable to different evaluation back ends. I ended up with a solution, inspired by OpenGL, of defining my interface in the form of function calls an evaluation back end had to implement. That meant I would not be able to simply modify the current interpreter. This made the start very steep, but it is a decision I do not regret.

His implementation consists of a front end and a back end.

The front end walks the AST and issues calls to the back end, and the back end transforms those calls into actual bytecode. This bytecode is interpreted by the back end as soon as the front end requires it.

In terms of functionality, he likens the current implementation to an immediate mode graphics API, and his revamp to retained mode. In this case, though, it’s the immediate mode that’s the memory hog.

You can read about his progress in the CTFE Status thread, where he has been posting frequent updates. His updates include problems he encounters, features he implements, and performance statistics. Eventually, every compiler that uses the DMD front end will benefit from his improvements.

GSoC Report: DStep

Posted on

Wojciech Szęszoł is a Computer Science major at the University of Wrocław. As part of Google Summer of Code 2016, he chose to make improvements to Jacob Carlborg’s DStep, a tool to generate D bindings from C and Objective-C header files.


It was December of last year and I was writing an image processing project for a course at my university. I would normally use Python, but the project required some custom processing, so I wasn’t able to use numpy. And writing the inner loops of image processing algorithms in plain Python isn’t the best idea. So I decided to use D.

I’ve been conscious of the existence of the D language for as long as I can remember, but I’d never convinced myself to try it out before. The first thing I needed to do was to load an image. At the time, I didn’t know that there is a DUB repository containing bindings to image loading libraries, so I started writing bindings to libjpeg by myself. It didn’t end very well, so I thought there should be a tool that would do the job for me. That’s when I found DStep and htod.

Unluckily, the capabilities of DStep weren’t satisfying (mostly the lack of any kind of support for the preprocessor) and htod didn’t run on Linux. I ended up coding my project in C++, but as GSoC (Google Summer of Code) was lurking on the horizon, I decided that I should give it a try with DStep. I began by contacting Craig Dillabaugh (Ed. Note: Craig volunteers to organize GSoC projects using D) to learn if there was any need for developing such a project. It sparked some discussion on the forum, the idea was accepted, and, more importantly, Russel Winder agreed to be the mentor of the project. After some time I needed to prepare and submit an official proposal. There was an interview and fortunately I was accepted.

The first commit I made for DStep is dated February 1. It was a proof of concept that C preprocessor definitions can be translated to D using libclang. Then I improved the testing framework by replacing the old Cucumber-based tests with some written in D. I made a few more improvements before the actual GSoC coding period began.

During GSoC, I added support for translation of preprocessor macros (constants and functions). It required implementing a parser for a small part of the C language as the information from libclang was insufficient. I implemented translation of comments, improved formatting of the output code (e.g. made DStep keep the spacing from C sources), fixed most of the issues from the GitHub issue list and ported DStep to Windows. While I was coding I was getting support from Jacob Carlborg. He did a great job by reviewing all of the commits I made. When I didn’t know how to accomplish something with D, I could always count on help on forum.dlang.org.
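
For illustration only (the names are made up and this isn’t necessarily DStep’s verbatim output), a constant macro and a function-like macro from a C header map naturally onto a D manifest constant and a template function:

// C:  #define BUFFER_SIZE 256
enum BUFFER_SIZE = 256;

// C:  #define SQUARE(x) ((x) * (x))
auto SQUARE(T)(T x)
{
    return x * x;
}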

DStep was the first project of such a size that I coded in D. I enjoyed its modern features, notably the module system, garbage collector, built-in arrays, and powerful templates. I used unittest blocks a lot. It would be nice to have named unit tests, so that they can be run selectively. From the perspective of a newcomer, the lack of consistency and symmetry in some features is troubling, at least before getting used to it. For example, there is a built-in hash map but no hash set, some identifiers that should be keywords start with @ (Ed. Note: see the Attributes documentation), etc. I was very sad when I read that the in keyword is not yet fully implemented. Despite those little issues, the language served me very well overall. I suppose I will use D for my personal toy projects in the future. And for DStep of course. I have some unfinished business with it :).

I would like to encourage all students to take part in future editions of GSoC. And I must say the D Language Foundation is a very good place to do this. I learned a lot during this past summer and it was a very exciting experience.