Project Highlight: Timur Gafarov

To begin with, let’s be clear that Timur Gafarov is a person and not a project. The impetus for this post was an open source first-person shooter, called Atrium, that he develops and maintains. In the course of making the game, he has created a few other D projects, each of which could be the focus of its own post. So this time around, we’re going to do a plural Project Highlight and introduce you to the GitHub repository of Timur Gafarov.

When Timur first came to D six years ago, you might say it was love at first sight.

As an indie game developer with a strong bias toward graphics engines and rendering tech, I always try to keep track of modern compiled languages efficient enough for writing real-time stuff. The most obvious choice in this field is C++, and I actually used it for several years until I found D in 2010. I immediately fell in love with the language’s clean, beautiful syntax, its powerful template system, the numerous built-in features absent in C++, and the rich and easy-to-use standard library.

Interestingly, he was actually attracted by one of the things often cited as a turn-off about D back then: the lack of libraries. It was a situation in which he saw opportunities to create things from scratch, without worrying about reinventing the wheel. At the time, DUB was not yet a thing, so the first task he set himself was coding up a build system called Cook, which he still uses sometimes instead of DUB.

After that, he was ready to start making a game. He wanted to use OpenGL, and found an existing binding in Derelict (a collection of dynamic bindings maintained by this post’s author — there also exists a collection of static bindings called Deimos) that allowed him to do so in D. With that off his plate, he next sat down to write a game engine. The first few steps in that direction resulted in a set of utility packages that coalesced into dlib.

At first, there were no clear plans or goals. After a previous period of using third-party engines, I had some experience with low-level graphics coding in C++ and just wanted to port my stuff to D for a start. I began with vector/matrix algebra and image I/O. These efforts resulted in dlib, a general purpose library.

He next turned his attention to 3D physics. Even though there were existing libraries with bindable C interfaces, like ODE (with a dynamic binding that existed then in the form of DerelictODE and a static binding in Deimos) and Newton (for which Deimos-like and Derelict-like bindings have since been created by third parties), he just couldn’t help himself. Enter dmech.

Atrium was born as part of my experiment with writing a 3D physics engine, called dmech. OK, this wasn’t strictly necessary, but the chance of being the first one to write this kind of thing in D was so attractive that I couldn’t resist 🙂 Of course, I didn’t dare to compete with such industry standards as Bullet, but nevertheless it was an amazing experience. I’ve learned a lot.

A screenshot from Atrium.

A physics engine and utility library weren’t all he needed. That’s where DGL comes in.

The next milestone was writing a graphics engine that I call DGL. I can’t consider this step fully completed, because I’m never happy with my abstractions and design solutions. Finally, I ended up with some kind of simplified Physically-Based Rendering pipeline, Percentage Closer Filtering shadows and multipass rendering, with which I’m sort of satisfied for now.

There was a period when I had to use outdated hardware, so DGL uses OpenGL 1.x, relying on extensions to utilize modern GPU technologies. Yeah, in 2016 this sounds funny 🙂 But recently I started experimenting with OpenGL 4, so the engine is likely to be rewritten. Again!

But for Timur, the graphics system isn’t the most important part of Atrium.

It’s the collision detection and character kinematics. I think that believable character behavior and interactions with the virtual world are crucial for any realistic game. In Atrium, these are fully physics-based, with as few hacks and workarounds as possible. Rigid body dynamics natively ‘talk’ with player-controllable kinematics. That means, for example, that a character can be pushed with moving objects and can push other objects himself. A stack of dynamic boxes can be transported via a kinematic moving platform. A gravity gun can be used to move things. And so on. I’m deeply inspired by Valve’s masterpieces, Half-Life 2 and Portal, so I want to make my own ‘physical’ first person puzzle.

Working on this project has certainly been a labor of love.

It has already taken me about six years of spare-time work and it still exists only in the form of an early gameplay demo. Of course, if I’d used an existing mature engine, like Unity or UE, things would be much simpler. Constraining myself to use D for such a complex task may look strange, but it’s fun. And that’s that.

Through those years, he has learned a good deal about the ups and downs of game development with D, particularly that ever-popular bugbear known as the GC.

Modern D is a very attractive choice as a language for game development. Even the garbage collector is not a problem, because you can use object pools, custom allocators, or simply malloc and free. The key point is to know when the GC is invoked and try to avoid those cases in performance-critical code. Personally, I prefer using malloc so that I can free the memory when I want, since delete has been deprecated and destroy just releases all the references to an object instance without actually deleting it. Using manual memory management imposes some restrictions on the code (for example, you can’t use closures or D’s built-in containers), but that, again, is not a big problem. A large effort is currently underway to lessen GC usage in dlib, so that you can use it to write fully unmanaged applications with ease. It has GC-free containers, file I/O streams, image decoders, and so on.
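For readers unfamiliar with the pattern Timur describes, here is a minimal sketch of GC-free class allocation in D (the spawn and despawn names are hypothetical, not dlib API):

    import core.stdc.stdlib : free, malloc;
    import std.conv : emplace;

    class Bullet
    {
        float x, y, z;
        this(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
    }

    T spawn(T, Args...)(Args args) if (is(T == class))
    {
        // Allocate raw memory outside the GC and construct the object in place.
        enum size = __traits(classInstanceSize, T);
        void[] mem = malloc(size)[0 .. size];
        return emplace!T(mem, args);
    }

    void despawn(T)(T obj)
    {
        destroy(obj);           // run the destructor deterministically
        free(cast(void*) obj);  // then release the memory ourselves
    }

    void main()
    {
        auto b = spawn!Bullet(1, 2, 3);
        despawn(b);
    }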

If you are interested in game development with D, Timur’s GitHub repository should be an early port of call. Even if you aren’t making a game or a game engine, you may well find something useful in dlib. With its growing list of contributors, it’s getting a good deal of care and attention.

Thanks to Timur for taking the time to contribute to this post. We wish him luck with all of his projects!

GSoC Report: DStep

Wojciech Szęszoł is a Computer Science major at the University of Wrocław. As part of Google Summer of Code 2016, he chose to make improvements to Jacob Carlborg’s DStep, a tool to generate D bindings from C and Objective-C header files.


GSoC-icon-192It was December of last year and I was writing an image processing project for a course at my university. I would normally use Python, but the project required some custom processing, so I wasn’t able to use numpy. And writing the inner loops of image processing algorithms in plain Python isn’t the best idea. So I decided to use D.

I’ve been conscious of the existence of the D language for as long as I can remember, but I’d never convinced myself before to try it out. The first thing I needed to do was to load an image. At the time, I didn’t know that there is a DUB repository containing bindings to image loading libraries, so I started writing bindings to libjpeg by myself. It didn’t end very well, so I thought there should be a tool that would do the job for me. That’s when I found DStep and htod.

Unfortunately, the capabilities of DStep weren’t satisfactory (mostly the lack of any kind of support for the preprocessor) and htod didn’t run on Linux. I ended up coding my project in C++, but as GSoC (Google Summer of Code) was lurking on the horizon, I decided that I should give it a try with DStep. I began by contacting Craig Dillabaugh (Ed. Note: Craig volunteers to organize GSoC projects using D) to learn if there was any need for developing such a project. It sparked some discussion on the forum, the idea was accepted, and, more importantly, Russel Winder agreed to be the mentor of the project. After some time I needed to prepare and submit an official proposal. There was an interview and fortunately I was accepted.

The first commit I made for DStep is dated February 1. It was a proof of concept that C preprocessor definitions can be translated to D using libclang. Then I improved the testing framework by replacing the old Cucumber-based tests with some written in D. I made a few more improvements before the actual GSoC coding period began.

During GSoC, I added support for the translation of preprocessor macros (constants and functions). It required implementing a parser for a small part of the C language, as the information from libclang was insufficient. I implemented translation of comments, improved the formatting of the output code (e.g. made DStep keep the spacing from the C sources), fixed most of the issues from the GitHub issue list, and ported DStep to Windows. While I was coding, I was getting support from Jacob Carlborg. He did a great job reviewing all of the commits I made. When I didn’t know how to accomplish something with D, I could always count on help at forum.dlang.org.
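(Ed. Note: to make the macro work concrete, here is the kind of translation involved. This is an illustrative sketch; DStep’s actual output may differ from version to version.)

    // Given a C header containing:
    //
    //     #define BUFFER_SIZE 1024
    //     #define SQUARE(x) ((x) * (x))
    //
    // a macro-aware binding generator can emit D along these lines:

    enum BUFFER_SIZE = 1024;      // object-like macro becomes a manifest constant

    auto SQUARE(T)(auto ref T x)  // function-like macro becomes a template
    {
        return x * x;
    }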

DStep was the first project of such a size that I had coded in D. I enjoyed its modern features, notably the module system, garbage collector, built-in arrays, and powerful templates. I used unittest blocks a lot. It would be nice to have named unit tests, so that they can be run selectively. From the perspective of a newcomer, the lack of consistency and symmetry in some features is troubling, at least before getting used to it. For example, there is a built-in hash map but no hash set, some identifiers that should be keywords start with @ (Ed. Note: see the Attributes documentation), etc. I was very sad when I read that the in keyword is not yet fully implemented. Despite those little issues, the language served me very well overall. I suppose I will use D for my personal toy projects in the future. And for DStep of course. I have some unfinished business with it :).
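(Ed. Note: for those who haven’t met them, D’s unittest blocks sit right next to the code they exercise and are compiled and run with a compiler switch. A small illustrative example, not DStep code:)

    size_t countDefines(string source)
    {
        import std.algorithm : count, startsWith;
        import std.string : splitLines, strip;

        // Count the lines that begin with a #define directive.
        return source.splitLines.count!(line => line.strip.startsWith("#define"));
    }

    unittest  // included only when building with `dmd -unittest`
    {
        assert(countDefines("#define A 1\nint x;\n#define B 2") == 2);
        assert(countDefines("int x;") == 0);
    }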

I would like to encourage all students to take part in future editions of GSoC. And I must say the D Language Foundation is a very good place to do this. I learned a lot during this past summer and it was a very exciting experience.

Ruminations on D: An Interview with Walter Bright

Joakim is the resident interviewer for the D Blog. He has also interviewed members of the D community for This Week in D and is responsible for the Android port of LDC.


Walter Bright is the creator and first implementer of the D programming language. He was an early developer of C++ compilers starting from the mid-’80s, including the first C++ compiler to translate source code directly to object code without using C as an intermediate, and has written compilers for ABEL, C, Java, and Javascript. He believes he is the only person to have written a full C++98 compiler by himself. Empire, one of the first computer strategy games, was written by Walter at Caltech. Before getting into computer programming full-time, Walter worked for Boeing from 1979-1982 on the 757’s flight controls, particularly the stabilizer trim gearbox and system and the elevators.

Joakim: D appears to be picking up speed. The fourth straight DConf took place earlier this year, Wired wrote a nice article a couple years ago, Gartner ranked it in the top 20 languages soon afterwards, and downloads of the reference compiler, DMD, average 1200 per day in recent years. Talk about the current popularity and status of the language and what you’re doing to take it to the next level.

Walter: I don’t worry too much about that. I spend my efforts making D the best language possible, and let the metrics take care of themselves. It’s like being a CEO; he shouldn’t be sweating the stock price, he should be working on making money for the company, then the stock price will take care of itself.

There’s always a stack of things to do, more or less with the most important one on the top, and I pop the top off and work on it, mostly the one with the maximum benefit/cost ratio. The ordering in the stack changes all the time. If an item has others strongly interested in working on it, I defer to them.

I get the jobs that nobody else wants to do. 🙂 Fixing regressions for complex problems is a big one. It’s much more fun designing new stuff than doing maintenance on the existing stuff, dealing with technical debt, etc.

Joakim: In the last year, @nogc and interfacing with C++ have been at the top of your stack. Why? You always used to say that interfacing more with C++ would be too much work.

Walter: It became clear that the garbage collector didn’t need to be embedded in most things, that memory allocation could be decided separately from the algorithm. Letting the user decide seemed like a great way forward.

As for C++, I figured out a way to support it that avoided the problems I thought were impractical to deal with. I did a talk about it (PDF slides here), and a Rust user wrote up a partial transcript.

Since that talk, I’ve come up with a solution for exceptions. C++ can throw/catch exceptions of any type. But few people do anything other than throw/catch a reference to a class, like std::exception. All D needed to do was allow throw/catch of a class marked with the extern (C++) attribute. Throw/catch of other types remains unsupported, and most likely always will.

In order to make that work, the custom exception handling code in the code generation and the library had to change to work with the DWARF exception handling mechanism. That turned out to be a fair amount of work, as it is rather under-documented. But it’s pretty simple for the user.

Joakim: What do you plan on working on in D for the rest of the year? Work on @nogc and C++ support seems to be ongoing and you’ve tried to increase the use of ranges in the standard library for some time now. Anything else? What is the status of those efforts?

Walter: Those are still high-priority and ongoing efforts. But we’re also flexible; if a major opportunity comes up that needs us to push in a different direction, we’ll adapt. The pervasive use of ranges is advancing rapidly, and I’ve been very pleased with the results.

The current focus is on improving memory safety. It’s become more and more important to have verifiable memory safety in a programming language, as the expenses involved when unsafe code is exploited in order to install malware become greater and greater. D has always supported memory safety, but recently we’ve embarked on a much more comprehensive review of memory safety in the core language and are making changes to close the gaps.
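(Ed. Note: as a small illustration of verifiable memory safety, the @safe attribute makes the compiler reject operations whose safety it cannot prove:)

    @safe int first(int[] arr)
    {
        // return *(arr.ptr + 1);  // error: pointer arithmetic is not allowed in @safe code
        return arr[0];             // bounds-checked access is fine
    }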

Joakim: How much time do you spend on D and what is your daily routine?

Walter: I work full time on D. I probably spend half my time working on the language, and the other half helping others with it, discussing things, doing interviews (!), writing articles, doing conferences, etc.

Joakim: You were 42 when you started working on D and I guess it was the first language you designed? Talk about why you started working on it so late (you were probably older then than most of the D contributors now) and what insights your experience gave you that your younger self or other young contributors may not have.

Walter: ABEL is the first language I designed, and was a solid success for Data I/O. It’s obsolete now, because the electronic devices it was aimed at are now obsolete, but it had a great run for 10 years or so.

Having been writing compilers my whole career, doing tech support for them, and following the various changes in the languages inevitably gave me much insight on what worked and what didn’t. Probably the biggest thing is that simpler is better. But making something simple is actually quite difficult. Anybody can (and does) design a complex solution, but few can see through the complexity to find the underlying simplicity.

Many successful languages were designed by older engineers.

Joakim: Please give some examples of such simplicity and how you were able to find it.

Walter: We nailed it with arrays (Jan Knepper’s idea), the basic template design, compile-time function execution (CTFE), and static if. I have no idea what the thought process is in any repeatable manner. If anything, it’s simply a dogged sense that there’s got to be a better way. It took me years to suddenly realize that a template function is nothing more than a function with two sets of parameters, compile time and run time, and then everything just fell into place.
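(Ed. Note: that insight is easy to see in code. An illustrative example:)

    // A function template is a function with two parameter lists:
    // (T) is bound at compile time, (value, factor) at run time.
    T scaled(T)(T value, T factor)
    {
        return value * factor;
    }

    void main()
    {
        auto a = scaled!int(3, 4);  // compile-time parameter given explicitly
        auto b = scaled(1.5, 2.0);  // or deduced from the run-time arguments
        assert(a == 12 && b == 3.0);
    }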

I was more or less blinded by the complexity of templates such that I had simply missed what they fundamentally were. There was also a bit of the “gee, templates are hard” that predisposes one to believe they actually are hard, and then confirmation bias sets in.

I once attended a Scott Meyers presentation on type lists. He took an hour to explain it, with slide after slide of complexity. I had the thought that if it was an array of ints, his presentation would be 2 minutes long. I realized that an array of types should be equally straightforward.
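(Ed. Note: D eventually delivered on that thought; a list of types supports the same length and indexing operations as an array, as this quick illustration using std.meta shows:)

    import std.meta : AliasSeq;

    alias Types = AliasSeq!(int, double, string);

    static assert(Types.length == 3);       // an "array of types"...
    static assert(is(Types[1] == double));  // ...indexed like any other array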

With CTFE, we just went straight in the front door from asking the question “why can’t we execute this function at compile time, just like constant folding?” I did the initial one just by extending the constant folding logic. Don Clugston took it much further, but at its heart it’s still a modified constant folder. Stefan Koch is currently working on making a real interpreter out of it.
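(Ed. Note: the upshot is that ordinary functions run at compile time whenever a compile-time value is required:)

    int fib(int n)
    {
        return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }

    enum f10 = fib(10);        // `enum` forces evaluation by the compiler (CTFE)
    static assert(f10 == 55);  // checked before the program ever runs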

Joakim: Why do you think there hasn’t been a killer app for D yet? For example, Ruby was kind of an obscure language for a dozen years till Rails propelled it into the spotlight.

Walter: I suspect the age of the killer app is behind us. There is so much software existing and being written, for every imaginable purpose, that it’s hard to believe there will ever be another killer app of any sort. Of course, predictions are notoriously unreliable.

Joakim: D has a unique approach in the compiled languages market, being mostly community-developed without a major corporate sponsor. C++ had Bell Labs, Rust has Mozilla, Go has Google; all have full-time paid devs on the language, even if they also take open-source contributions. Do you think this is a problem and is D being left behind?

Walter: We recently started a D foundation which will make it a lot easier for corporations to sponsor D.

Joakim: Do you make money off D? I know you’ve contracted with Facebook to write a C++ linter, Flint, and preprocessor, Warp, and that you work with Sociomantic and other companies using D.

Walter: I do some paid consulting work for D, but am careful not to take on so much that it interferes with working on D itself.

Joakim: Do you write much code in D outside of the standard library? If so, talk about recent stuff you’ve written and how the experience has been, plus a bit about using some of the new features.

Walter: D consumes so much of my efforts, there’s not much time to write D apps other than smallish utilities. Of course, the D compiler front end itself is now in D. But it’s been translated from C++, so it isn’t idiomatic D.

Joakim: OK, so I guess Warp is the largest program you’ve written in D lately, about 10 klocs. Can you talk about the experience of actually coding that in D, as opposed to C/C++? What stood out for you?

Walter: What stood out is the speed with which it went together, and the remarkably small number of bugs that surfaced in it after it was released. I credit the extensive use of unit tests for the latter.

Having written another preprocessor before certainly helped, and it would be hard to tease that out as a separate effect. But I still believe that unit testing made the difference, because the way the preprocessor worked was very different from the one I’d done before.

What stood out with D was the ease of changing the data structures to try different ways, compared with doing this in other languages.

Joakim: When you think back to your vision for D as you were starting out in 1999, does it at all resemble that today? Anything big missing that you wanted back then?

Walter: D is far more advanced than what I thought of 15 years ago. Programming language ideas have certainly advanced since then, and D along with it.

D originally wasn’t going to have templates at all, based on my earlier experience with them. But finding a simple way to do them changed everything – even to the point where well over half of a modern D program is templates!

The idea of ranges slowly evolved over time; we’re still learning how to do it right.

bit as a basic type was unworkable. complex as a basic type turned out to be pointless (it works better as a library type). Auto-decoding of UTF-8 to code points turned out to be not nearly as useful as expected.

Transitive const was a leap of faith, and is consistently overlooked by other languages looking to adopt D features. I have a lot of faith in transitive const, and it is already paying off in making it possible to have pure functions, a key feature for modern programming.
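(Ed. Note: a brief illustration of what that transitivity buys:)

    struct Node
    {
        int value;
        Node* next;
    }

    // The parameter is const, and so, transitively, is everything reachable
    // through it; that guarantee is what lets the compiler trust `pure`.
    pure int total(const(Node)* head)
    {
        int sum;
        for (auto p = head; p !is null; p = p.next)
        {
            sum += p.value;
            // p.next.value = 0;  // error: cannot modify through a const reference
        }
        return sum;
    }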

Inside D Version Manager

In his day job, Jacob Carlborg is a Ruby backend developer for Derivco Sweden, but he’s been using D on his own time since 2006. He is the maintainer of numerous open source projects, including DStep, a utility that generates D bindings from C and Objective-C headers, DWT, a port of the Java GUI library SWT, and the topic of this post, DVM. He implemented native Thread Local Storage support for DMD on OS X and contributed, along with Michel Fortin, to the integration of Objective-C in D.


D Version Manager (DVM) is a cross-platform tool that allows you to easily download, install, and manage multiple D compiler versions. With DVM, you can select a specific version of the compiler to use without having to manually modify the PATH environment variable. A selected compiler is unique to each shell session, and it’s possible to configure a default compiler.

The main advantage of DVM is the easy downloading and installation of different compiler versions. Specify the version of the compiler you would like to install, e.g. dvm install 2.071.1, and it will automatically download and install that version. Then you can tell DVM to use that version by executing dvm use 2.071.1. After that, you can invoke the compiler as usual with dmd. The selected compiler version will persist until the end of the shell session.

DVM makes it possible for the user to select a specific compiler version without having to modify any makefiles or build scripts. It’s enough for any build script to refer to the compiler by name, i.e. dmd, as long as the user selects the compiler version with DVM before invoking the script.

History

DVM was created in the beginning of 2011. That was a different time for D. No proper installers existed, D1 was still a viable option, and each new release of DMD brought with it a number of regressions. Because of all the regressions, it was basically impossible to use the latest compiler, and often even older ones, for all of your projects. Taking into consideration projects from other developers, some were written in D1 and some in D2, making it inconvenient to have only one compiler version installed.

It was for these reasons I created DVM. Being able to have different versions of the compiler active in different shell sessions makes it easy to work on different projects requiring different versions of the compiler. For example, it was possible to open one tab for a D1 compiler and another for a D2 compiler.

The concept of DVM comes directly from the Ruby tool RVM. Where DVM installs D compilers, RVM installs Ruby interpreters. RVM can do everything DVM can do and a lot more. One of the major things I did not want to copy from RVM is that it’s completely written in shell script (bash). I wanted DVM to be written in D. Because it’s written in shell script, RVM enables some really useful features that DVM does not support, but some of them are questionable (some might call them hacks). For example, when navigating to an RVM-enabled project, RVM will automatically select the correct Ruby interpreter. However, it accomplishes this by overriding the built-in cd command. When the command is invoked, RVM will look in the target directory for one of the files .rvmrc or .ruby-version. If either is present, it will read that file to determine which Ruby interpreter to select.

Implementation and Usage

One of the goals of DVM was that it should be implemented in D. In the end, it was mostly written in D with a few bits of shell script. Note that the following implementation details are specific to the platforms that fall under D’s Posix umbrella, i.e. version(Posix), but DVM is certainly available for Windows with the same functionality.

Structure of the DVM Installation

Before DVM can be used, it needs to install itself. This is accomplished with the command dvm install dvm. This will create the ~/.dvm directory. It contains the following subdirectories: archives, bin, compilers, env, and scripts.

The archives directory contains a cache of the downloaded zip archives of D compilers.

The bin directory contains shell scripts, acting as symbolic links, to all installed D compilers. The name of each contains the version of the compiler, e.g. dmd-2.071.1, making it possible to invoke a specific compiler without first having to invoke the use command. This directory also contains one shell script, dvm-current-dc, pointing to the currently active D compiler. This allows the currently active D compiler to be invoked without knowing which version has been set. This can be useful for executing the compiler from within an editor or IDE, for example. A shell script for the default compiler exists as well. Finally, this directory also contains the dvm binary itself.

The compilers directory contains all installed compilers; all of the downloaded compilers are unpacked here. Due to the varying quality of the D compiler archives throughout the years, the install command will also make a few adjustments if necessary. In the old days, there was only one archive for all platforms; in that case, only the binaries and libraries for the current platform are included. Another adjustment is to make sure all executables have the executable permission set.

The env directory contains helper shell scripts for the use command. There’s one script for each installed compiler and one for the default selected compiler.

The scripts directory currently only contains one file, dvm. It’s a shell script which wraps the dvm binary in the bin directory. The purpose of this wrapper is to aid the use command.

The use Command

The most interesting part of the implementation is the use command, which selects a specific compiler, e.g. dvm use 2.071.1. The selection of a compiler will persist for the duration of the shell session (window, tab, script file).

The command works by prepending the path of the specified compiler to the PATH environment variable. This can be ~/.dvm/compilers/dmd-2.071.1/{platform}/bin for example, where {platform} is the currently running platform. By prepending the path to the environment variable, it guarantees the selected compiler takes precedence over any other possible compilers in the PATH. The reason the {platform} section of the path exists is related to the structure of the downloaded archive. Keeping this structure avoids having to modify the compiler’s configuration file, dmd.conf.

The interesting part here is that it’s not possible to modify the environment variables of the parent process, which in this case is the shell. The magic behind the use command is that the dvm command that you’re actually invoking is not the D binary; it’s the shell script in the ~/.dvm/scripts path. This shell script contains a function called dvm. This can be verified by invoking type dvm | head -n 1, which should print dvm is a function if everything is installed correctly.

The installation of DVM adds a line to the shell initialization file, .bashrc, .bash_profile, or similar. This line will load/source the DVM shell script in the ~/.dvm/scripts path, which makes the dvm command available. When the dvm function is invoked, it forwards the call to the dvm binary located in ~/.dvm/bin/dvm. The dvm binary contains all of the command logic.

When the use command is invoked, the dvm binary will write a new file to ~/.dvm/tmp/result and exit. This file contains a command for loading/sourcing the environment file available in ~/.dvm/env that corresponds to the version specified when the use command was invoked. After the dvm binary has exited, the shell script function takes over again and loads/sources the result file if it exists. Since the shell script is loaded/sourced instead of executed, the code is evaluated in the current shell instead of a sub-shell. This is what makes it possible to modify the PATH environment variable. After the result file is loaded/sourced, it’s removed.
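(Ed. Note: a simplified sketch, from the binary’s side, of how that handoff might look. This is hypothetical code for illustration, not DVM’s actual source:)

    import std.file : write;
    import std.format : format;
    import std.path : expandTilde;

    // `dvm use` cannot modify the parent shell's environment directly, so
    // the binary writes a result file for the wrapping shell function to
    // load/source after the binary exits.
    void useCommand(string ver)
    {
        auto envScript  = expandTilde("~/.dvm/env/dmd-" ~ ver);
        auto resultFile = expandTilde("~/.dvm/tmp/result");

        // Sourcing the env script is what actually prepends the compiler's
        // bin directory to PATH in the current shell session.
        write(resultFile, format(". '%s'\n", envScript));
    }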

If you find yourself with the need to build your D project(s) with multiple compiler versions, such as the current release of DMD, one or more previous releases, and/or the latest beta, then DVM will allow you to do so in a hassle-free manner. Pull up a shell, execute use on the version you want, and away you go.

Project Highlight: Visual D

In the world of modern software development, a language that is not supported in any of the major Integrated Development Environments is not going to gain very much traction. For better or worse, the IDE has become a widespread and permanent fixture. Programmers are free to use any editor or environment they want in their own time, but those who write software for a living and are not self-employed are going to have to work in the environment(s) allowed by their employer. In Windows shops, that often means Visual Studio.

For the first several years of D’s existence, not only was there no support for it in Visual Studio, but the toolchain on Windows also produced binaries only in the OMF format, making them incompatible with the Microsoft tools out of the box. Today, that has changed, and both DMD and LDC can output COFF files that are compatible with the MS linker. But even before DMD got support for COFF, one community member initiated Visual D, a project to bring D to the Visual Studio environment.

Rainer Schuetze first discovered D shortly after work on D2 began.

It must have been an article in the German c’t magazine (maybe this one) that brought my attention to the D language. The meta programming features of D2 caught my eye, namely templates and compile-time function evaluation. It was refreshing to see these elements being neatly integrated into the language, not bolted on top of it with a different language as in C++.

He decided to give D a try. The road that led him to Visual D originated not from a longing for an IDE, but from his attempts at debugging DMD’s output.

The options D had to offer on Windows were rather disappointing: the version of Microsoft’s Windbg distributed with DMD was barely usable, as it was already 12 years old back then in 2008. Newer versions didn’t work at all with the executables generated by DMD. The debugger being one of the highlights of Visual Studio, I tried it, too, but anything newer than VS.NET 2003 failed miserably, with the latter being almost acceptable, though it didn’t stop execution on breakpoints. So it seemed that there might be just a small tweak needed to make debugging work considerably better with Visual Studio.

It was this desire to debug D programs that led Rainer to start his first open source project, cv2pdb. Rainer’s tool converts the CodeView debug information output by DMD to the PDB format used by the Visual Studio debugger. Once that was working, it was no great leap to want better integration with the Visual Studio environment, such as build support and code completion.

Rainer found one abandoned project that had been started toward that end already, but he couldn’t get it functional. He also experimented with modifying an existing language service written in C#, but had trouble getting it to work with a parser written in D. In the end, he decided that spending his time with D “should not mean writing code in other languages.” So, in 2009, Visual D was born. He ran into some difficulties almost immediately, as anyone using D in those days was bound to.

At the time, you would hit a compiler bug every few hundred lines of code, but these were not the biggest obstacles on the way to getting this extension to work. Visual Studio is a Win32 application and loads its extensions dynamically as DLLs. D very much relies on Thread Local Storage (TLS) as it is the default for global variables. Under Windows this uses “implicit TLS” built into the binary. Unfortunately, Windows Vista was the first version to support this for dynamically loaded DLLs, but Windows XP was still the most widely used version. It only supports TLS for the DLLs that are loaded implicitly with the application.

Eventually, after a lot of debugging, he managed to work around his Windows XP problems by tricking the system into believing a manually loaded DLL had been implicitly loaded with the application. The result of these efforts can be seen in the DRuntime modules core.sys.windows.dll and core.sys.windows.threadaux. This implementation comes with the drawback that DLLs using it cannot be unloaded. An improved version by Denis Shelomovskii works around this. Given the decline of XP usage, the need for this will eventually fade away.

TLS wasn’t his only problem.

Another issue turned up with the interfaces that create the API between Visual Studio and its extensions: these are all COM interfaces. Fortunately, D supports COM out of the box; virtual function calls work nicely. Unfortunately, the implementation in the library is currently unusable. The allocation of a COM object (every class derived from IUnknown) is done using C’s malloc, assuming the usual approach of freeing this memory in the Release() function when the reference count goes down to zero. Unfortunately, this is not possible from within a member function of a D class, as the invariant is still called before the function returns. That’s why the runtime implementation just leaks the memory.

As a solution, he initially implemented his own ComObject class and overloaded new. Later, when the overloading of new was deprecated in the language, he switched to using a factory function for COM objects.
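In outline, the approach looks something like this heavily simplified, hypothetical sketch (not Visual D’s actual code):

    import core.atomic : atomicOp;
    import core.stdc.stdlib : free, malloc;
    import std.conv : emplace;

    class ComObject  // stands in for a class implementing IUnknown
    {
        extern (Windows) uint AddRef()
        {
            return atomicOp!"+="(refCount, 1);
        }

        extern (Windows) uint Release()
        {
            auto count = atomicOp!"-="(refCount, 1);
            if (count == 0)
                free(cast(void*) this);  // manual free; the GC never owned this object
            return count;
        }

        private shared uint refCount = 1;
    }

    // A factory function standing in for the deprecated overloaded `new`:
    T newCom(T : ComObject, Args...)(Args args)
    {
        enum size = __traits(classInstanceSize, T);
        return emplace!T(malloc(size)[0 .. size], args);
    }

He wasn’t finished yet.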

The Visual Studio SDK provides COM interface definitions for C++ that I started to convert manually on a per case basis whenever I needed a declaration in D. This proved rather tedious as dependencies grew, so I considered converting the interface definitions automatically. These are given by Interface Definition Language (IDL) files or C++ header files.

So he created a conversion tool.

It uses a tokenizer followed by some custom conversion functions (mostly dealing with C pre-processor code) and a long list of replacement rules using the function now accessible within Visual D as Search/Replace Tokens. Each token comes with the comment preceding it, so the D files very much resemble the original files and still contain documentation, which is very helpful for the VS SDK. The conversion now works for all files of the VS SDK versions 2008 to 2015 and a selection of required files from the Windows SDKs 6.0 to 10 (about 90 header files converted, some more stubbed to be mostly empty).

Visual D has come a long way since Rainer first started working on it, but the journey is by no means complete. The next release will integrate the expression evaluator of the Mago debugger with the Visual Studio debug engine, thanks to Microsoft’s publication of information about the debug engine’s extensibility. Additionally, Visual D users will gain the ability to integrate with C and C++ projects.

Just drop your D files into a C/C++ project and they compile with the rest of your application without further ado. Enjoy settings integration instead of the ancient-looking project dialogs of visualdproj files. More importantly, all the additional tools like the Manifest Tool, MIDL Compiler, and even Build Extensions (e.g. Assembler) are easily available to D projects, too.

These new features are available in a preview version right now. Rainer has been short on time, so the features haven’t improved much since the preview release, but once he finds the time you can be sure to see these new features in the next release.

The Origins of the D Cookbook

Adam Ruppe is the author of D Cookbook and maintainer of This Week in D. Modules from his freely available arsd package are used throughout the D community. He is also known for his legendary DConf 2014 presentation.


In 2013, Packt Publishing approached me to write D Cookbook. Over the next several months, I was tasked with collecting over 100 D programming language “recipes” and writing a couple of pages about each one.

I wanted it to be a mix of practical and fun examples of what D can do over the wide variety of fields in which I have used it, each one illustrative of some language concept that the reader could use in different contexts. As such, for the majority of the book, I avoided the use of libraries, even keeping the presence of Phobos or my own modules to a minimum, allowing me to focus on the implementations of the ideas.

That said, I didn’t come up with a hundred recipes out of thin air! They came from three main sources: my arsd libraries; my support efforts on the D forums, the #D IRC channel, and Stack Overflow (it was Stack Overflow that got Packt’s attention in the first place); and a few other side projects, like my minimal D runtime (zip file).

I first saw D way back in 2002. At the time, I was still fairly new to programming and was going to download another copy of Digital Mars C++ (which I used to compile 16-bit DOS programs!) when I saw a link on the Digital Mars website about this D programming language. My first impression – upon seeing the import keyword – was that it was too Java-like and I wasn’t interested. Oh, how I regret that naive snap decision! I didn’t take my next look at D until 2006, and that time, with far more real world experience and theoretical knowledge under my belt, I quickly fell in love.

Over the next year, I wrote a game library based on SDL and OpenGL and made a few little games for myself. My D code at this point wasn’t terribly unique. Much of it was based on my existing C++ codebases. That fairly uninteresting fact would later become one of my most recurring D tips: a great deal of your C, C++, and Java knowledge can carry over directly to D! Thanks to the similarities between the languages, it’s easy to learn. While your code is unlikely to work perfectly if simply copy/pasted into a .d file, porting it probably isn’t hard. I found the D language very comfortable right off the bat. Even today, I tell people with D questions to simply consider how they would do it in C++ and apply that existing knowledge to D. This can also work with C and C++ solutions gleaned from Internet resources like Stack Overflow.

I started working as a full-time web developer in 2008, which put a time constraint on my hobby game development efforts, but didn’t ice my love of D. Indeed, it wasn’t long at all before I worked D into my professional life by writing web libraries and eventually transitioning my work projects away from PHP and onto D!

At the same time, the D language was going through a series of rapid changes. Templates and compile time function evaluation got overhauled, immutable was introduced… Most interestingly to me, compile time reflection got massively expanded and a few features like opDispatch got added. Compile time reflection in older D was limited to the is expression and template tricks, but new D had __traits, making it competitive even with dynamic languages, without sacrificing D’s other strengths.
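(Ed. Note: a small taste of the features Adam mentions, as an illustrative example:)

    struct Config
    {
        int port = 8080;
        string host = "localhost";
    }

    // __traits-based reflection: walk a type's members at compile time.
    void describe(T)(T value)
    {
        import std.stdio : writeln;

        foreach (name; __traits(allMembers, T))
            writeln(name, " = ", __traits(getMember, value, name));
    }

    // opDispatch: calls to undeclared members are resolved at compile time.
    struct Dynamic
    {
        string opDispatch(string name)()
        {
            return "you called " ~ name;
        }
    }

    void main()
    {
        describe(Config());  // prints each member name with its value
        Dynamic d;
        assert(d.anything == "you called anything");
    }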

I was a very early adopter of these features and set out to discover how to combine them in ways to make my work easier, to compete with the other web languages, and to just show off a little 🙂 If someone came on the forum and said, “D can’t do this”, then I’d reply, “Challenge accepted”.

In the following years, I wrote: a DOM and JSON library, taking the most interesting ideas from Javascript into D; a web framework, realizing some dreams I had in rapidly churning out prototypes that could also survive the change process of real world development; OAuth and database libraries to support the needs of the projects; and more. One of the most interesting modules was my web.d, which takes a D class definition and builds web infrastructure around it: an automatically generated web site with CRUD forms from the static type information as well as Javascript and PHP code for API access to that functionality. This stretched D’s reflection capabilities and hit quite a few bugs on the bleeding edge, necessitating creative workarounds or alternative approaches. If I was really desperate, I’d fix the bug in DMD myself!

At the same time, I was regularly seen in the D community, helping other people with their problems, and, of course, reading insights from other members of the community. Every few days, I had another tip, and was also building up a mental picture of people’s common difficulties with D.

By 2013, I had years of experience with almost every corner and combination of D. Now, you can get a good slice of that in just a few days by reading the book!

Core Team Update: Martin Nowak

In the early days of DMD, new releases were put out when Walter decided they were ready. There was no formal process, no commonly accepted set of criteria that defined the point at which a new compiler version was due or the steps involved in building the release packages. At first, that was just fine. As time passed, not so much.

According to Martin Nowak:

The old release process was completely opaque, inconsistent, irregular, and also very time-consuming for Walter. So at some point it more or less failed and Walter could no longer manage to build the next release.

Martin was eager to do something about it, but he wasn’t the first person to take action on the issue. He decided to start with the work that had come before.

At this point, I took a fairly complex script from Nick Sabalausky, which tried to emulate Walter’s process, and started to improve upon it. It got trimmed down to a much smaller size over time.

But that was just the beginning.

Next, I prepared OS images for Vagrant for all supported platforms. That alone took more than a week. After that, I wired up the release script with Vagrant. From there on we had kind of reproducible builds.

That was a major step forward and eliminated some of the common mistakes that had crept in now and again with the previous, not-so-reproducible, process. With the pieces in place, Martin got some help from another community member.

Andrew Edwards took over the actual release building, but I was still doing the main work of keeping the scripts running and managing bugfixes. At some point, Andrew no longer had time. As the back and forth between us was almost as much work as the release build process itself, I completely took over. Nowadays, we have the script running fully automated to build nightlies.

Thanks to Martin, the process for building DMD release packages is described step-by-step in the D Wiki so that anyone can do it if necessary. Moreover, he coauthored and implemented a D Improvement Proposal to clearly define a release schedule. This process was adopted beginning with DMD 2.068 and continues today.

With the improved release schedule, DMD users can plan ahead to anticipate new releases, or take advantage of nightly builds and point releases to test out bug fixes or, whenever they come around, new features in the language or the standard library. However, the schedule is not etched in stone, as any particular release may be delayed for one reason or another. For example, the 2.071 release introduced major changes to D’s import and symbol lookup to fix some long-standing annoyances, with the subsequent 2.071.1 fixing some regressions they introduced. The release of 2.072 has been delayed until all of the known issues related to the changes have been fixed.

Those who were around before Martin stepped up to take charge of the release process surely notice the difference. He and the others who have contributed along the way (like Nick and Andrew) have done the D community a major service. As a result, Martin has become an essential member of the core D team.

D in Games: Ethan Watson of Remedy Games Goes to GDC Europe

At DConf 2013 at the Facebook HQ, Manu Evans, then of Remedy Games, gave a talk titled, “Using D Alongside a Game Engine” (follow the link for the slides and video). He talked about Remedy’s experience getting D to work with their existing C++ game engine. The D landscape was a bit different when they got underway than it is today. For starters, DMD did not support 64-bit targets on Windows and the language had not yet gained support for directly binding to C++. Manu’s talk discusses the steps they took to work around such issues and achieve a working system.

Fast forward to 2016. Remedy releases Quantum Break, the first commercial AAA game to ship with bits implemented in D. Ethan Watson comes to DConf 2016 with a talk titled, “Quantum Break: AAA Gaming with Some D Code” (follow the link for the slides, go to YouTube for the video). In it, he expanded on what Manu had talked about before. Now, he’s getting ready to take that message to Game Developers Conference Europe.

On the 16th of August this year in Cologne, Germany, I will be presenting a talk titled, “D: Using an Emerging Language in Quantum Break“. For those who have tuned in to DConf previously, it will ostensibly be a combination of both my talk from 2016 and Manu Evans’s talk from 2013, but necessarily cut down, as it will need to fit two hours of content into one.

He has more in mind than just discussing the technical aspects of Remedy’s solution. During informal discussions at DConf, Ethan spoke of how he would like to bring D to the attention of his colleagues at other studios in the game industry. His upcoming talk presents just such an opportunity.

The approach I’ll be taking is something of a sales pitch to other game developers. My talk at DConf this year was a bit apologetic on my part as I felt we did not use D anywhere near as much as we should, and that we still had unsolved problems at the end of it. GDC Europe will be a platform for me to talk about the journey toward shipping a game with an emerging language in use; a way to show what benefits the language has over the industry stalwart C++ and other modern languages like C# and Rust; and as a bit of a way to drum up support from other developers and get them to at least look at the language and evaluate it for their own usage.

Additionally, a future event Ethan hinted at in his DConf talk will become reality in time for GDC Europe.

We will also be open sourcing our binding solution for GDC. I’ve been cleaning it up over the last couple of weeks, to the point where it’s effectively a complete reimplementation designed for ease of use, ease of reading, and to support some of the additional ideas I’ve had for the system since I took over from Manu. It’ll basically be our way of saying, “D is great! Here’s some code we prepared earlier to help you get up to speed.”

That’s awesome news for the D community as a whole. D has had built-in support for binding to C++ for a little while and it has improved over several releases, though it’s not as fully functional as the support for binding to C (a far simpler task) and likely never will be. There is also an experimental third-party solution called Calypso, which is a fork of the LDC compiler that ultimately aims to provide full and direct interfacing with C++. The availability of an open-source alternative that has been battle-tested in production code can only be a good thing.

Unfortunately, there is one downside to Ethan’s presentation.

They’ve scheduled my talk at the same time as John Romero’s talk. Never thought I’d be competing with him for attention.

Who would want to compete for an audience with one of the co-founders of id Software and co-creator of franchises like Doom and Quake, who is giving a talk about the programming principles the id team developed in their early years? Well, if you ask Ethan, the downside is that he can’t skip out of his own talk for Romero’s. Ah, well. As they say, them’s the breaks.

The Why and Wherefore of the New D Improvement Proposal Process

When a programmer sees fellow programmers contributing code to open source projects, it is unlikely that the question of why, in the sense of motive, will ever come to mind. Regarding technical merit or usefulness, sure. But motive? Coders who are inclined to contribute bug fixes, new features, build systems, or other resources to open source projects do so for a variety of reasons, most of which other programmers will intuitively understand. But when a programmer volunteers to take over what is solely an administrative process for an open source project, that’s a different story altogether.

Code contributions are usually one-off affairs. They require a time investment up front, but once the PR or patch is merged, that’s often the end of it. Managing an administrative process, on the other hand, is an ongoing commitment. Time that could be spent coding is instead spent doing the proverbial paperwork. And the benefit to the volunteer may not always be clear. Implementing a ray tracer and going bungee jumping are things you might do for the hell of it. Administrative tasks? Not so much.

The most recent post on this blog highlighted two non-coding initiatives which were kicked off by members of the D community. One of them is a new process for handling D Improvement Proposals, or DIPs. This was the work of Mihails Strasuns. Not only did he put together a new process, he also volunteered to manage it. What on earth was he thinking?

There isn’t anything special or unique about DIPs. Most programming languages that rely heavily on community input get something similar at some point. Once a language matures, changing even small bits of it becomes challenging and needs careful consideration. All of the relevant information has to be published somewhere.

The previous DIP process looked like a collection of articles on a community wiki. It seemed to work pretty well initially, when the DIP count was low enough, but with the number of improvement proposals reaching 100 and most of them still remaining in “draft” status, scalability issues became noticeable.

He highlights three issues specifically. First up is quality control.

There was no quality control for submitted proposals and, as a result, submissions were often lacking detailed information about language impact.

With the DIPs being Wiki-based, anyone with a D Wiki account could add a page for a new DIP, but there was no one designated to ensure that new DIPs met any sort of standard. This resulted in quality ranging from a detailed treatise that covered all the bases, to something that looked like notes on a cocktail napkin. Some remain in draft status, having never been finalized by their authors.

Another issue is the lack of response from the leadership.

It was very rare to get a formal response from Andrei or Walter regarding a DIP, partially because proper reviewing of incomplete proposals was demanding too much time.

Walter and Andrei already have their hands full. The last thing they need is to spend time reviewing a DIP that isn’t ready for review. A DIP should be ready for their consideration before it is brought to their attention and not require them to ask for information that should already be there. Without a designated arbiter to decide when a DIP is complete, how are they to know? The author of a DIP may believe it ready, but all too often will be wrong.

Finally, there is the issue of maintaining the list of DIPs.

Abandoned or rejected proposals often were not marked as such because editing the list was purely a community effort.

With no one person responsible for maintaining the list, it was updated only when someone in the community noticed an update was needed and was willing to make the appropriate edits. The result was that there were only a small number of people who ever touched it.

So the solution Mihails came up with to address all of these problems, which he says is similar to those used with other modern programming languages, is described in the README of the new GitHub repository he set up to replace the Wiki. It outlines the procedure, including how to submit a DIP and how to get it approved, gives advice on how to write a great DIP, and even describes the responsibilities of the new DIP manager. Which, for now, is him.

And that brings us back to the topic of the first couple of paragraphs. We now know the problems he saw with the process and how he decided to improve it, but we still don’t know why. So, why, Mihails?

There is, of course, a personal reason which caused me to step up with this process proposal. In a way, this is an attempt to solve a conflict of interest that arises from being employed by one of the largest D commercial users while being actively engaged in the language on my own. I want for the language to change and improve, but, at the same time, I have to ensure any such changes don’t cause too much disruption to our systems at work.

That commercial user of which he speaks is Sociomantic, hosts of the excellent DConf 2016 in Berlin and maintainers of the ocean and makd projects they opened up for the D community.

Having a formalized process for language changes helps a lot in that regard. It means there is a single information feed to follow if one doesn’t want to miss anything of major impact. It’s much more concise than a newsgroup or mailing list. And volunteering to handle the DIP bureaucracy allows me to make timely suggestions for better ways to handle any changes so that they don’t cause too much trouble to existing code.

The DIP process arose organically to fill a void. It was, perhaps, inevitable that its ad hoc existence would take another step in its evolution. It was just waiting for the right person with the right ideas and the willingness to make it happen. That person turned out to be Mihails Strasuns. The D community will be better off thanks to his efforts.

The DLang Vision and Improvement Process

The evolution of D has been a process of steady, if not always speedy, improvement. In the early days, Walter was the only person working on the language, the reference compiler, and the standard library, though he did accept contributions. Today, numerous people contribute to or maintain a piece of every aspect of D’s development. Semi-formal processes exist where none did before. Things have come a long way.

As with anything in life, those who come later are unaware of the improvements that came before. If you’ve just stepped off the boat into D Land, you don’t remember the days before core D development moved to github, or the time before the first release of Visual D, the D plugin for Visual Studio developed and maintained by Rainer Schuetze. All you’re going to see is that your pet Pull Request hasn’t yet been merged, or that Visual D doesn’t allow you to do all of the things that the core of Visual Studio allows. You’ll rightly head over to the forums to complain about the PR merge process, or request a new feature for Visual D (or gripe about its absence).

And so the story goes, ad infinitum. Along the way, the community loses some members who once found promise in the language, but become frustrated because their pet peeves have not been addressed. And herein lies the point of this post.

The D Programming Language is a community-driven language. As such, there is no formal process for creating opportunities to fix and improve the areas that need fixing and improving. Walter has his head down in compiler code and often barely has time to come up for air. Andrei is stretched thin between the additions he has planned for Phobos, the work he does on behalf of the D Foundation, and his numerous other commitments. It’s up to the community to get things moving on other fronts.

Most often, improvements come at the instigation of one highly motivated individual, who either gets busy with the rolling up of sleeves or keeps pushing until someone better positioned takes up the challenge. Still others want to contribute, but aren’t sure where to begin. One place is the documentation, as described the last time this blog touched on this theme. For other areas, the place to look is the D Vision Document.

Andrei started the Vision Document as a means to let the community know the direction in which he and Walter expect the language to go, with plans to release an updated document every six months. Each document lists the progress that has been made in the preceding six months and the goals for the coming six. The first document focused on goals the D leadership will personally enable or make happen. The latest document, which was put up for feedback recently in the forums, includes additional goals the leadership strongly believe are important for the success of the D language. This new addition is the result of an initiative by community member Robert Schadek.

Robert took the simple step of creating a Wiki page to collate the wishlist items that Walter and Andrei mention in the forum now and again. These are things they would like to see completed, but have no time to make happen themselves. Andrei merged these items into the Vision Document so that there is only one document to maintain and one source for the information. This is a perfect example of a low-barrier community contribution that has led to an overall improvement (though Robert is no stranger to higher-barrier contributions) and shows how there are ways other than submitting Pull Requests to improve the D Programming Language experience.

A higher-level means of helping to improve the language is through D Improvement Proposals. A DIP is a formal proposal to make changes or additions to the language. The author of the DIP need not be the implementer. For some time, these have primarily been loosely maintained in an informal process at the D Wiki. Recently, Mihails Strasuns decided the process needed better management, so he created a GitHub repository for DIPs as part of a new process to address some shortcomings of the old system. Then he made a forum announcement about it, including the news that he is, for now, volunteering his services as the new DIP manager.

So, to recap, if you are itching to contribute to D in some way that you know will have an impact, the latest Vision Document is the place to go for inspiration. As I write, that’s the 2016H2 document. If you have an idea for a new feature or improvement to the language, creating a DIP is the way to go (a future post here will say more about the DIP process). There are numerous other areas where contributions are welcome, like fixing or reporting bugs, reviewing PRs, and taking the initiative to contribute to the D process like Robert and Mihails have done.

If you don’t have the time, motivation, or inclination to contribute, that’s fine, too. Just keep in mind when you come to the forums to vent about one frustration or another (and please, feel free to do so) that much of what exists in the D ecosystem today, along with nearly every improvement that will come for the foreseeable future, is the result of someone in the community expending their own time and effort to make it so, usually with no financial incentive (the exception being bug bounties). Please let that knowledge color your perspective and your expectations. If you can’t, or won’t, work to resolve shortcomings in the language, the processes, or the ecosystem yourself, your goal in bringing them to the community should be to motivate someone else to do so. Not only will that increase the odds of fixing your issues, it will help all D users inch closer to the ultimate D Nirvana.