Project Highlight: Funkwerk

Funkwerk is a German company that develops intelligent communication technology. One of their projects is a passenger information system for long-distance and local transport that is deployed by long-distance rail networks in Germany, Austria, Switzerland, Finland, Norway and Luxembourg, as well as city railways in Berlin and Munich. The system is developed at the company’s Munich location and, some time ago, they came to the conclusion that it needed a rewrite. According to Funkwerk’s Mario Kröplin:

From a bird’s-eye view, the long-term replacement of our aged passenger information system is our primary project. At some point, it became obvious that the system was getting hard to maintain and hard to change. In 2008, a new head of the development department was hired. It was time for a change.

They wanted to do more than just rewrite the system. They also wanted to make changes in their development process, and that led to questions of how best to approach the changes so that they didn’t negatively impact productivity.

There is an inertia when experienced programmers are asked to suddenly start with unit testing and code reviews. The motivation for unit testing and code reviews is higher when you learn a new programming language. Especially one where unit testing is a built-in feature. It takes a new language to break old habits.

The decision was made to select a different language for the project, preferably one with built-in support for unit testing. They also allowed themselves a great deal of leeway in making that choice.

Compared with other monolithic legacy systems, we were in the advantageous position that the passenger information system was constructed as a set of services. There is some interprocess communication between the services, but otherwise the services are largely independent. The service architecture made such a decision less serious: just make an experiment with any one of the services in the new language. Should the experiment fail, then it’s not a heavy loss.

So they cast out a net and found D. One thing about the language that struck them immediately was that it fit in a comfort zone somewhere between the languages with which their team was already familiar.

The languages in use were C++ (which rather feels like being part of the problem than part of the solution) and Java. D was presented as a better C++. With garbage collection, changing to D is easy enough for both C++ and Java programmers.

They took D for a spin with some experimentation. The first experiment was a bit of a failure, but the second was a success. In fact, it went well enough that it convinced them to move forward with D.

In retrospect, the change of programming language was one aspect that led to an agile transition. One early example of a primary D project was the replacement of the announcement service. This one gets all information about train journeys – especially about the disruptions – and decides when to play which announcements. The business rules make this quite complicated. The new service uses object-oriented programming to handle the complicated business rules nicely. So you could say it’s a Java program written in D. However, as the business rules change from customer to customer, the announcement service is constantly changing. And with the development of D and with our learning, we are now in a better position to evolve the software further.

When they first started the changeover, D2 was still in its early stages of development and the infamous Phobos/Tango divide was still a thing in the D community (an issue that, for those of you out of the loop the last several years, was put to rest long ago). Since D1 was considered stable, that was an obvious choice for them. They also chose Tango over Phobos, primarily because Tango provided logging and HTTP facilities, while Phobos did not. In 2012, as D2 was stabilizing and official support for D1 was being phased out, it became apparent that they would have to make a transition from D1 and Tango to D2 and Phobos, which is what they use today.

One of the D features that hooked Mario early on was its built-in support for contracts.

Contracts won me over from the beginning. No joke! When we had a hard time hunting down segmentation faults, a failed assertion boosted the diagnostics. I used the assert macro a lot in C++, but with contracts in D there is no question about whom to blame. A failed precondition: caller’s fault. A failed postcondition or invariant: my fault. We have contracts everywhere. And we don’t use the -release switch. We would rather give up performance than information that helps us fix bugs.
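
For readers unfamiliar with D’s contract syntax, here is a minimal sketch (the class and its members are hypothetical, not Funkwerk’s code) of how in, out, and invariant blocks divide the blame the way Mario describes:

// A failed precondition (in) is the caller's fault; a failed postcondition (out)
// or a failed invariant is the implementation's fault.
class DelayEstimate
{
    private int minutes;

    invariant
    {
        assert(minutes >= 0, "a delay can never be negative");
    }

    void add(int extraMinutes)
    in
    {
        assert(extraMinutes > 0, "caller must pass a positive delay");
    }
    out
    {
        assert(minutes >= extraMinutes);
    }
    do
    {
        minutes += extraMinutes;
    }
}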

As with many other active D users, D’s dynamic arrays and slices, along with the built-in associative arrays, stand out for the Funkwerk programmers.

They work out of the box. It’s not like in Java, where you’d better not use the arrays of the language, but the ArrayList of the library.
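
A quick, illustrative sketch of what “out of the box” means here (not Funkwerk’s code):

void main()
{
    int[] delays = [3, 5, 8];          // a dynamic array (slice)
    delays ~= 13;                      // appending is built in
    auto firstTwo = delays[0 .. 2];    // slicing is a view, no copy

    string[string] announcements;      // a built-in associative array
    announcements["ICE 123"] = "delayed by 5 minutes";
    assert("ICE 123" in announcements);
    assert(firstTwo.length == 2);
}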

They have also put D’s ranges and Uniform Function Call Syntax (UFCS) to heavy use.

With ranges and UFCS, it became possible to replace loops with intention-revealing chains of map and filter and whatever. While the loop describes how something is done, the ranges describe what is done. We enjoy setting the delay in our tests to 5.minutes. And we use UDAs in accessors and dunit. It’s hard to imagine what pieces of our clean code would look like in any other language.
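
Here is a small sketch of the style Mario describes, including the 5.minutes trick, using made-up data rather than Funkwerk’s real types:

import std.algorithm : filter, map;
import std.array : array;
import core.time : minutes;

void main()
{
    auto delays = [5, 0, 12, 3];

    // The chain says what is done (keep the real delays, convert to durations),
    // rather than how a loop would do it.
    auto delayed = delays.filter!(d => d > 0)
                         .map!(d => d.minutes)
                         .array;

    assert(delayed == [5.minutes, 12.minutes, 3.minutes]);
}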

accessors is a library that uses templates, mixins, and UDAs to automatically generate property getters and setters. dunit is a unit testing library that, in an ironic twist, Mario maintains. As it turned out, their intuition that their developers would be more motivated to write unit tests in a language with built-in unit testing was correct (that’s been cited by Walter Bright as a reason it was included in the language in the first place). However, that first failed experiment in D came down to the simplistic capabilities the built-in unit testing provides.

One of the failures of our first experiment was an inappropriate use of the unittest functions. You’re on your own when you have to write more code per test case, for example for testing interactions of objects. With xUnit testing frameworks like JUnit, you get a toolbox and lots of instructions, like the exhaustive http://xunitpatterns.com/. With D, you get the single tool unittest and lots of examples of test cases that can be expressed as one-liners. This does not help in scenarios where such test cases are the exception. Even Phobos has examples where the unittest blocks are lengthy and obscure. Often, such unittest blocks are not clean code but a collection of “Test Smells”. We soon replaced the built-in unit testing (one of the benefits we’ve seen) with DUnit.
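
For context, the built-in facility looks like this; a trivial, hedged example of the one-liner style of test case the quote refers to:

size_t wordCount(string sentence)
{
    import std.array : split;
    return sentence.split.length;
}

unittest
{
    // Fine for one-liners; interaction tests need more scaffolding than this.
    assert(wordCount("next stop Munich central station") == 5);
}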

The transition to D2 and Phobos necessitated a move away from the Tango-based DUnit. Mario found the open source (and lowercase) dunit (the original project is here) a promising replacement, and ultimately found himself maintaining a fork that has evolved considerably from the original. However, the team recently discovered a way to combine xUnit testing with D’s built-in unittest, which may lead to another transition in their unit testing.

Additionally, Phobos still had no logging package in 2012, so they needed a replacement for the Tango logger API. Mario rolled up his sleeves and implemented log, which, like accessors and dunit, is available under the Boost Software License.

It was meant as an improvement proposal for an early stage of std.experimental.logger. I was excited that it was possible to implement the logging framework together with support for rolling log files, for logrotate, and for syslog in just 500 lines. We’re still using it.

Funkwerk has also contributed code to Phobos.

I was looking for a way to implement a delay calculation. The problem is that you have an initial delay and you assume that the driver runs faster and makes shorter stops in order to catch up. But the algorithm was missing. So we made a pull request for std.algorithm.cumulativeFold. It’s there because we strived for clean code in a forecast service.
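
A hedged sketch of the idea behind that contribution (not Funkwerk’s forecast code): cumulativeFold produces the running accumulation of a range, which is the shape of a delay forecast where the driver catches up stop by stop:

import std.algorithm : cumulativeFold, equal;

void main()
{
    // Minutes recovered on each segment; the running totals form the forecast.
    auto recovered = [1, 2, 1, 3].cumulativeFold!((a, b) => a + b);
    assert(recovered.equal([1, 3, 4, 7]));
}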

Garbage collection has been a topic of interest on this blog lately. Mario has an interesting anecdote from Funkwerk’s usage of D’s GC.

Our service was losing connections from time to time. It took a while to find that the cause was the interaction between garbage collection and networking. The signals used by the garbage collector interrupt the blocking read. The blocking read is hidden in a library, for example, in std.socketstream, where the retry is missing. Currently, we abuse inheritance to patch the SocketStream class.

The technical description of the derived class, from its Ddoc comment, reads:

/**
 * When a timeout has been set on the socket, the interface functions "recv"
 * and "send" are not restarted after being interrupted by a signal handler.
 * They fail with the error "EINTR" when interrupted, especially
 * by the signal used to "stop the world" during garbage collection.
 *
 * In this case, retries avoid that the stream is mistaken to be exhausted.
 * Note, however, that these retries will likely exceed the specified timeout.
 *
 * See also: Linux manual page signal(7)
 */
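
The patch itself isn’t shown here, but the idea can be sketched like this (a hypothetical helper, not Funkwerk’s actual subclass): retry the interrupted call instead of treating the interruption as an exhausted stream:

import core.stdc.errno : EINTR, errno;
import std.socket : Socket;

ptrdiff_t receiveWithRetry(Socket sock, void[] buffer)
{
    for (;;)
    {
        const n = sock.receive(buffer);
        // Socket.ERROR with errno == EINTR means a signal handler (such as the
        // GC's stop-the-world signal) interrupted the call; try again.
        if (n == Socket.ERROR && errno == EINTR)
            continue;
        return n;
    }
}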

After nearly a decade of active development with D, Mario and his team have been pleased with their choice. They plan to continue using the language in new projects.

Over the years, we’ve replaced services in the periphery of the passenger information system whenever this was justified by customer needs. We chose D for the implementation of all of the backend services. We are now working on a small greenfield project (it’s a few buses to be monitored instead of lots of trains), but we intend to grow this project into the next generation of the passenger information system.

In the coming months, we’ll hear more from Mario and the Funkwerk developers about how they use D to develop their system and tooling, and what it’s like to be a professional D programmer.

Project Highlight: Derelict

Previous project highlights on this blog were written up both in my own words and in quotes from the project maintainers. This time around is different — it would be a little odd to quote myself while writing about my own project.

Derelict is a collection of D bindings to C libraries. In its present incarnation, it resides in a GitHub organization called DerelictOrg. There you’ll find bindings to a number of libraries such as SDL, GLFW, SFML, OpenGL, OpenAL, FreeType, FreeImage and more. There are currently 26 repositories in the organization, maintained primarily by me and Guillaume Piolat: one for the documentation, one for a utility package, 21 active packages, and three that are currently unsupported.

The Derelict packages primarily provide dynamic bindings, though some can be configured as static bindings at compile time. Dynamic bindings require the C libraries to which they bind to be dynamically loaded at run time. The mechanism for this is provided by the DerelictUtil package. Loading a library is as simple as a function call, e.g. DerelictSDL2.load();.
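
A minimal sketch of what that looks like in practice, assuming the derelict-sdl2 package is listed as a DUB dependency:

import derelict.sdl2.sdl;

void main()
{
    // Loads the SDL 2 shared library at run time and binds its functions.
    DerelictSDL2.load();

    SDL_Init(SDL_INIT_VIDEO);
    scope(exit) SDL_Quit();
}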

Static bindings have a link-time dependency, requiring either statically or dynamically linking with the bound C library when building an executable. In that case, the dependency on DerelictUtil goes away (some packages may still use a few DerelictUtil declarations, requiring one of its modules to be imported, but it need not be linked into the program). On Windows, this introduces potential issues with object file formats, but these days they are quite easy to solve.

It’s hard to believe now, but Derelict has been around continuously, in one form or another, since March of 2004. I was first drawn to D because it fit almost perfectly between the two languages I had the most experience with back in 2003, C and Java. It addressed some of the frustrations I had with each while combining things I loved from both. But early on I ran into my first frustration with D.

At the time, interfacing with C libraries on Windows was a bit annoying. D has excellent support for and compatibility with C libraries in the language, but the object file format issues I alluded to above were frustrating. DMD on Windows could only output object files in the OMF format and could only use the OPTLINK linker, an ancient 32-bit linker that only understands OMF. This is because DMD was implemented on top of the backend and toolchain used by DMC, the Digital Mars C and C++ compiler. Meanwhile, the rest of the Microsoft ecosystem was (and is) using the COFF format. In practice, a static binding to a C library required either using tools to convert the object files or libraries from OMF to COFF, or compiling the C library with DMC. You don’t tend to find support for DMC in most build scripts.

When I found that the bindings for the libraries I wanted to use (SDL and OpenGL) were static, I resolved to create my own. I’m a big fan of dynamic loading, so it was a no-brainer to create dynamic bindings. When you load a DLL via the LoadLibrary/GetProcAddress API, the object file format is irrelevant. From that point on, I never had to worry about the COFF/OMF problem again. Even though the problem didn’t exist outside of Windows, I made the bindings cross-platform anyway to keep a consistent interface.

In those days, DMD 1.0.0 wasn’t yet a thing, but a new web site, dsource.org, had just been launched to host open source D projects (much of the site is still online in archive mode thanks to Vladimir Panteleev). All of the projects there used Subversion, so I decided to learn my way around it and take a stab at maintaining an open source project by making my new bindings available. Part of my motivation for picking “Derelict” as the name was that I fully expected no one else would be interested and, whether they were or not, I would eventually abandon it anyway.

As it turned out, people really were interested. Contributions started coming in almost immediately, along with questions and suggestions on the DSource forums. I added new bindings, set up criteria on new binding submissions (they must be gamedev-related and cross-platform), and added documentation on how to create “Derelictified” bindings. Tomasz Stachowiak (now a Frostbite game engine developer) came along and started making contributions, including a templated loader in DerelictUtil that replaced the one I had hacked together and became the basis for Derelict2.

Eventually, with the dawning of the D2 era, D development moved away from DSource to GitHub. So did Derelict, in the guise of Derelict 3. I had made use of every D build tool that came along before then, but finally settled on a custom build script (written in D) for this iteration, since the build tools all died. When DUB came along and looked like it was here to stay, I fully committed to it. I took the opportunity to finally split up the monolithic repository, gave every package its own repo, and created DerelictOrg as their new home. Aside from the appalling lack of documentation (a state which is rapidly changing, but is still a WIP), things have been fairly stable.

If I were still counting from Derelict 3, I think we might be at Derelict 6 by now, but that’s not how it rolls anymore. Since the move to DerelictOrg, I’ve twice made iterative improvements to DerelictUtil that broke binary compatibility with previous releases. The first time around, I didn’t properly manage the roll out. Anyone building the Derelict libraries manually, i.e. when they weren’t using DUB to manage their application project, could run into issues where one package used the new version and another used the older. It seems like it ought to have been a mess, but I heard very little about it. Still, in the most recent overhaul, I took steps to keep the new stuff distinctly separate from the old stuff and rolled it all out at once (which is why most of the latest releases as I write have a -beta suffix).

As D has evolved over the years, so has Derelict. Now that DMD supports COFF on Windows, some of the packages can be configured at compile time as static bindings. Long ago, DerelictUtil gained the ability to selectively ignore missing symbols (curiously absent from the latest docs, but described in the old DerelictUtil Wiki). More recently, it gained a feature that lets a package support multiple versions of a C library via a configurable call to the loader (SharedLibVersion); only a handful of packages support it, and few more are likely to, since API compatibility issues with type sizes or enum values mean not every package can. The latest version of DerelictGL3 is massively configurable at compile time. For a little while, I even relaxed my criteria for allowing new packages under the DerelictOrg umbrella, but after I ended up being wholly responsible for them, that’s something I’m not inclined to continue. With the DUB registry, it’s not really necessary.
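
For a package that supports it, the version request is a small addition to the load call. A hedged sketch, again assuming derelict-sdl2 (the exact module housing SharedLibVersion is an assumption here):

import derelict.sdl2.sdl;
import derelict.util.loader : SharedLibVersion;   // assumed location of the struct

void main()
{
    // Refuses to load if the installed SDL library is older than 2.0.2.
    DerelictSDL2.load(SharedLibVersion(2, 0, 2));
}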

There are a number of C library bindings using DerelictUtil out in the wild. You can find them, as well as all of the DerelictOrg packages, at the DUB package registry. Some of the third-party bindings have a name like “derelict-extras-foo”, but others use “derelict-foo” like those in DerelictOrg. You can also find a collection of static bindings in the Deimos organization, and third-party static bindings in the DUB registry that have adopted the Deimos approach of providing C headers alongside the D modules.

Anyone who needs help with any of the Derelict packages can often find it in the #D irc channel at freenode.net. I drop by infrequently, but I monitor the D forums every day. Asking for Derelict help in the Learn forum is not off-topic.

The Derelict bindings have gone well beyond targeting game developers. Though they’re often used in games (like Voxelman and the first D game on Steam, Mayhem Intergalactic), they are also used elsewhere (like DLangUI). If you’re using any of the Derelict bindings in your own projects, I’d love to hear about it. Particularly since I’m always looking for projects I’ve never heard about to highlight here on the blog.

Project Highlight: excel-d

Ever had the need to write an Excel plugin? Check this out.

Atila Neves opened his lightning talk at DConf 2017 like this:

I’m going to talk about how you can write Excel add-ins in D. Don’t ask me why. It’s just because people need it.

From there he goes into a quick intro on how to write plugins for Excel and gives a taste of what it looks like to register a single function in a C++ implementation:

Excel12f(
    xlfRegister, NULL, 11,          // 11: Number of args
    &xDLL,                          // name of the DLL
    TempStr12(L"Fibonacci"),        // procedure name
    TempStr12(L"UU"),               // type signature
    TempStr12(L"Compute to..."),    // argument text
    TempStr12(L"1"),                // macro type
    TempStr12(L"Generic Add-In"),   // category
    TempStr12(L""),                 // shortcut text
    TempStr12(L""),                 // help topic
    TempStr12(L"Number to compute to"), // function help
    TempStr12(L"Computes the nth Fibonacci number") // arg help 
);

Two things to note about this. First, Excel12f is a C++ function (wrapping a C API) that must be called in an add-in DLL’s entry point (xlAutoOpen) for each function that needs to be registered when the add-in is loaded by Excel. For a small add-in, the implementations of the registered functions might live in the same source file as the registration calls, but in a larger one they are likely to be located elsewhere, which can become a maintenance headache. Also take note of all the comments used to document the function arguments, a common sight in C and C++ code bases.

The example D code Atila showed using excel-d is a world apart:

@Register(
    ArgumentText("Array"),
    HelpTopic("Length of Array"),
    FunctionHelp("Length of an Array"),
    ArgumentHelp(["array"])
)
double DoublesLength(double[] arg) {
    return arg.length;
}

Here, the boilerplate for the registration is being generated at compile time via a User Defined Attribute, which is used to annotate the function. Implementation and registration are happening in the same place. Another key difference is that the UDA has fields with descriptive names, eliminating the need to comment each argument. Finally, the UDA only requires four arguments, nine fewer than the C++ function. This is because it makes use of D’s compile-time introspection features to figure out as much as it possibly can and, at the same time, optional arguments (like the shortcut text) can just be omitted.

Since this is a Project Highlight on the D Blog, we’re going to ignore Atila’s opening request and ask, “Why?” There are actually two parts to that. First, why Excel?

Our customers are traders, and they work with Excel as one of their main tools. They need/want to, amongst other things, receive live stock updates in a cell and have their formulae automatically update. There’s other functionality they’d like to have and that means adding this to Excel somehow.

Of all the possible languages that could be used for this purpose, the business chose D. That brings us to the second part of the question: why D?

This is possible in Visual Basic, Python or C#, and possibly other languages. But none of them match D’s performance. C++ does, but it’s tedious and requires a lot of boilerplate to get going. D combines the speed and power of C++ with the reflection capabilities of those other languages. No boilerplate, just code, runs fast.

There’s more to the story, of course. The company is heavily invested in D.

We use D for nearly everything, even some “scripts”. The bulk of it is calculations for market indicators. Lots of data in -> munge -> new data out that needs to look pretty for traders. So integrating with existing code was an important factor.

Even though excel-d is targeting Windows, much of it was actually developed on Linux.

We use a Linux container as our reference development machine, but people use what they want. I do nearly all of my work on Linux and only boot into Windows when I have to. For the Excel work, that’s a necessity. But, as usual for me, I wrote all the tests to be platform agnostic, so I do the Excel development on Linux and test there. Every now and again that means a particular quirk of Excel wasn’t captured well in my mocking code, but it’s usually a quick fix after that.

He says they use both DMD and LDC for development, and both are running in continuous integration.

Although DMD doesn’t technically require Visual Studio to be installed (out of the box it generates 32-bit OMF objects and uses the OPTLINK linker rather than producing VS-compatible COFF), anyone doing serious work on Windows is going to need VS (or the Visual Studio Build Tools and the Windows SDK) for 64-bit and 32-bit COFF support. The latest LDC binary releases currently require the MS tools (support for MinGW was dropped, but according to the D Wiki, could be picked up again if there’s a champion for it).

Atila already had VS on his Windows partition. For this project, he got a bit of help from the VS plugin for D, Visual D.

I had to install VisualD because our reference project for Excel was in a Visual Studio solution, but afterwards I reverse engineered the build and didn’t open Visual Studio ever again.

Currently, excel-d has no support for custom dialogs or menus. Both items are on his TODO list.

If you’re working with D and need to write an Excel add-in, or want to try something cleaner than C++ to do so, excel-d is available in the DUB package registry. If not, the sponsors of the project, Kaleidic Associates and Symmetry Investments, have made several other open source projects available. They are interested in hiring talented hackers with a moral compass who aspire towards excellence and would like to work in D.

excel-d was developed by Stefan Koch and Atila Neves.

Project Highlight: workspace-d

Not so long ago, Jan Jurzitza sat down at his keyboard intent on writing a D plugin for Atom, his text editor of choice at the time. Then came disappointment.

“I was pretty unhappy with their API,” he says.

Visual Studio Code was released a short time after. He decided to give it a go and “instantly fell in love with it”. His Atom plugin was pushed aside and he started work on a new plugin for VS Code called code-d.

However, I did not want to maintain the same functionality in two plugins for two different text editors, so I thought that making a program which contains most of the plugin logic, like starting and calling dcd, dscanner, dfmt, etc., would be beneficial and would also help with including D support in more editors and IDEs in the future.

For the uninitiated, DCD (the D Completion Daemon), DScanner, and Dfmt are D-oriented tools for plugin developers, all maintained by Brian Schott. They are, respectively, a client-server based auto-completion program, a source code analyzer, and a code formatter. A number of IDE and text editor plugins employ them directly.

So Jan started work on his new tool and named it workspace-d.

With workspace-d I want to make it simple for plugin developers to integrate D functionality into their editor of choice. workspace-d is designed to work both as a standalone program through stdio and as a D library. Once I ported most of the code from my Atom extension to workspace-d, I could simply spawn it as a subprocess in code-d, which I got working with it quite quickly.

In addition to porting his Atom plugin to use workspace-d, he also created one for Sublime Text. Currently, he’s not devoting any time to either and is looking for someone else to take over maintenance of one or both. Anyone interested might start by submitting pull requests. Aside from workspace-d itself, Jan’s focus is on code-d.

He’s recently been working on version 2.0 of workspace-d, with a focus on streamlining the way it handles requests.

Using traits, templates, and CTFE (Compile-Time Function Evaluation), basically all D compile time magic, I was able to make an automatic wrapper for the functions for version 2.0. Basically, when a request like {"cmd":"hello"} comes in, it runs the D function hello with its default arguments. If the arguments don’t match, it responds with an error. This system automatically binds function arguments to JSON values and generates a response from the return value.
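
A simplified, hedged sketch of that idea (not workspace-d’s actual code; the command functions are made up, and only string arguments are handled):

import std.json : JSONValue, parseJSON;
import std.stdio : writeln;
import std.traits : ParameterIdentifierTuple, Parameters;

string hello() { return "hello there"; }
string echo(string text) { return text; }

// Generates a dispatcher at compile time that maps {"cmd":"hello"} to hello()
// and binds JSON fields to parameters by name.
JSONValue dispatch(funcs...)(JSONValue request)
{
    const cmd = request["cmd"].str;
    foreach (f; funcs)                              // unrolled at compile time
    {
        if (cmd == __traits(identifier, f))
        {
            Parameters!f args;
            foreach (i, name; ParameterIdentifierTuple!f)
                args[i] = request[name].str;        // sketch: string arguments only
            return JSONValue(f(args));
        }
    }
    return JSONValue(["error": "unknown command: " ~ cmd]);
}

void main()
{
    writeln(dispatch!(hello, echo)(parseJSON(`{"cmd":"hello"}`)));
    writeln(dispatch!(hello, echo)(parseJSON(`{"cmd":"echo","text":"hi"}`)));
}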

To deserialize the JSON requests, he’s using painlessjson, a third-party library available in the DUB package registry.

It works really great and I can really recommend it for some simple and easy conversions between D types and JSON. This change really cleaned up all the code and made it possible to use workspace-d as a library.

He’s also working on a new project, serve-d, that works with Microsoft’s Language Server Protocol.

serve-d is an alternative for the workspace-d command line I/O for those who prefer JSON RPC over my custom binary/JSON mix. It’s fiber based and uses workspace-d as a library, which results in really clean code. There’s an alpha version of the implementation on github already, both the server and a new branch on code-d. With the Language Server Protocol, I’m hoping for easier integration in other editors. The concept is basically the same as workspace-d’s command line interface, but, because Microsoft is such a big company, I’m hoping that more editors by big developers are going to implement this protocol.

Building and installing workspace-d should go pretty smoothly on Linux or OS X, but it’s currently a little bumpy on Windows. Because of an issue Jan has yet to resolve, it can only be built on Windows with LDC.

The auto completion didn’t work for some people on Windows because it got stuck in the std.process.execute function when creating a pipe to write to. I couldn’t find any way to reproduce it in a standalone program so I couldn’t file a bug either. So what we did to avoid this issue in the short term was to simply disallow compilation on Windows using DMD. It works just fine when compiled with LDC.

Jan’s primarily a Linux user (he doesn’t own a Mac and only runs Windows in a VM). He credits GitHub user @Andrepuel for getting it operational on OS X, and @aka-demik for finding the issue on Windows and verifying that it compiles with LDC. He’ll be grateful to anyone who can help fully resolve the Windows/DMD issue once and for all.

If you’re looking to develop a D plugin for your favorite editor, consider taking advantage of the work Jan has already done with workspace-d to save yourself some effort. And VS Code users can put it to use via code-d to get code completion and more. Visit its VS Code marketplace page to read reviews and installation instructions.

Project Highlight: vibe.d

Since the day Sönke Ludwig first announced vibe.d on the D Forums, the project has been a big hit in the D community. It’s the exclusive subject of one book, has a chapter of its own in another, and has been proven in production both commercially and otherwise. As so many projects do, it all started out of frustration.

I was dissatisfied with existing network web libraries (in particular with Node.js, which was the new big thing back then, because it was also built on an asynchronous I/O model). D 2.0 gained cross-platform fiber support through the integration of DRuntime, which seemed like a perfect opportunity to avoid the shortcomings of Node.js’s programming model (“callback hell”). Together with D’s strong type checking and the high performance of natively compiled applications, these were ideal preconditions for creating a network framework.

From the initial release, work progressed on adding web and REST interface generators (vibe.web.web and vibe.web.rest, respectively).

This was made possible by D’s advanced meta programming facilities, string mixins and compile-time reflection in particular. The eventual addition of user-defined attributes to the language enabled some important advances later on, such as the recently added authorization framework.

Vibe.d is at its core an I/O and concurrency framework that makes heavy use of fibers, which run in quasi-parallel fashion.

Every time an operation (e.g. reading from a socket) needs to wait, the fiber yields execution, so another fiber can run instead. Each fiber uses up very little memory compared to a full thread and switching between fibers is very cheap. This enables highly scalable applications that behave like normal multithreaded applications (save for the low-level issues associated with real multithreading).

At a higher level, it can serve as a web framework for backend development and provides functionality for protocols like HTTP and SMTP, database connectivity, and the parsing of data formats. A number of third-party packages that extend or complement vibe.d can be found in the DUB repository (Sönke is also the creator and maintainer of DUB, the D build tool and package manager).
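
The canonical hello-world gives a feel for both layers at once: each request handler runs in its own fiber, so it can block on I/O without stalling other connections. A minimal sketch along the lines of the project’s own examples:

import vibe.vibe;

void main()
{
    // Registers an HTTP listener on port 8080; the handler is invoked in a
    // fresh fiber for each request.
    listenHTTP(":8080", (req, res) {
        res.writeBody("Hello from vibe.d!");
    });
    runApplication();
}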

Big changes are currently afoot with the project. Beginning with the release of vibe.d 0.7.27 in February 2016, work began on splitting the monolithic project into independent DUB packages. One goal is to make it possible to use one vibe.d component without pulling them all in, reducing build times in the process.

Another goal is to employ modern D idioms where possible and to improve memory usage and performance as far as possible. It is surprising how much D evolved in just the short amount of time that vibe.d has been alive!

Diet-NG, vibe.d’s template engine based on Jade, was the first to be granted independence. It went through a complete rewrite that adds a strong test suite, makes use of D’s ranges where possible, provides a more flexible API, and eliminates dependencies on other vibe.d packages. Now he’s working on the core package.

The vibe-core package encapsulates the whole event and fiber logic, including I/O, tasks, concurrency primitives and general operating system abstraction. The original design was based heavily on classes and interfaces and had a very high level operating system abstraction layer, resulting in several downsides. For example, there was a dependence on the GC and virtual function calls could be an issue on certain platforms. One of the main goals was to minimize performance overhead in the new implementation.

As part of his experimentation with different API idioms and slimming down the code base, he produced the eventcore library.

The API follows a proactor pattern, meaning that a callback gets invoked whenever a certain asynchronous operation finishes. This is in contrast to the reactor pattern that is exposed by the non-blocking Posix APIs. It was chosen mainly so that asynchronous I/O APIs, such as Windows overlapped I/O and POSIX AIO, could be supported.

On top of eventcore, the fiber logic was implemented in a completely general way, using a central fiber scheduler and a generic asyncAwait function. This means that lots of corner cases are handled in a much more robust way now and that improvements in all areas can be made much faster and with fewer chances of breaking anything.

Next on the list for independence is the HTTP package. Sönke plans to completely rebuild the package from the ground up, adding HTTP/2 support and making it possible to enable allocation-free request/response processing.

Other obvious candidates are MongoDB and Redis clients, JSON and BSON support, the serialization framework, and the Markdown parser. The goal for each of these packages is always to take a look at the code and employ modern D idioms during the process.

If you’ve never taken D for a spin, vibe.d is a fun playground in which to do so. It’s not difficult to get up and running, and easier still if you have experience with such frameworks in other languages. It’s also ready to put into production in its current state, despite the leading zero in the version number. As Sönke makes progress on breaking it up into separate packages, it will almost certainly become an even more integral part of the growing D community.