DConf 2018: Assemblage in Bavaria


It’s official! The D Language Foundation has put out a call for submissions for the next iteration of the annual gathering of D programming language enthusiasts. DConf 2018, hosted by QA Systems, is taking place in Munich from May 2nd to the 5th, 2018.

This time around, there’s a focus on growth and outreach. DConf has always been open to all, but past editions largely targeted those already “in the know”. For DConf 2018, the D Language Foundation is actively reaching out, encouraging anyone with little or no D language experience to stop by and see what all the fuss is about.

In the coming months, the D Blog will feature a series of posts related to DConf 2018. To get us started, Andrei Alexandrescu, Vice President and Treasurer of the D Language Foundation, sat down to answer a few questions about the event.


Q: Thanks for taking time out of your schedule for this, Andrei. The first thing I want to get to is the choice of location. At the end of DConf 2017, there was a lot of speculation about where the next edition would be held. We’ve seen two in Menlo Park, California, one in Orem, Utah, and two in Berlin. What led to the choice of Munich?

A: It has a lot to do with my recent visit there. A while ago I had mentioned to our tireless collaborator Sebastian Wilzbach (who studies at both the Technical University of Munich and Ludwig Maximilian University) the annual classes I teach in neighboring Stuttgart. He suggested I make two trips in one and give a talk in Munich as well.

Once we committed to a date, I was shocked by the earnestness of everybody involved with organizing. The event filled up within an hour of opening, in roughly equal parts by existing D programmers (there’s a strong D community in Munich) and by curious programmers coming from other languages. There was even some competition among companies willing to host the event.

We ended up holding it at Brainlab’s new headquarters (check it out, they are a great innovator in medical technology). The event was a triumph! The folks in the audience were that combination of smart, receptive, and inquisitive that makes for an amazing interaction. We started at 6:30 and quite a few of us segued into beers, dinner, and of course more chatting, to finally part around midnight.

At that point I thought, Munich sounds like a perfect place for DConf. Later I spoke to my business partner (Andreas Sczepansky, owner of QA Systems) about the great reception the talk got in Munich. He got intrigued and agreed to work with us on DConf 2018. And here we are.

Q: What can attendees expect to see at DConf 2018?

A: We’re counting on a strong technical program, as has been the case at past events. Also, last year’s day-long hackathon (a largely unstructured “let’s work on cool stuff in small groups” day) was surprisingly successful and enjoyed by everyone involved. So we’re making it bigger and hopefully better this year. It will be held on the last day of the event, May 5th.

This year we also want to promote a growth theme. We’re working on bringing a strong outside keynote speaker, and QA Systems will help us to market to companies and grass-roots coders who are currently using other languages. We believe D offers many strategic advantages to the high-tech milieu in Bavaria and beyond.

Q: What do you mean by that? What makes Bavaria special?

A: I noticed there’s a strong IT industry in the area built around automotive, industrial machinery, healthcare, scientific computing, and more. Really serious software with difficult demands and high stakes. We’re talking about systems ranging from memory-constrained embedded systems to high-performance desktop software to large systems that take a long time to design, build, and test. D is all about building fast software, fast. So we have a great opportunity to make the strong case that the D language could help these application domains.

Q: You and Walter Bright have traditionally given the opening and closing keynotes at every DConf. What are you guys planning to talk about this time?

A: I know Walter is considering giving a talk on Project Detente – a multifaceted approach to smooth interoperation with C and C++ that also allows easy incremental migration of large projects from those languages to D. As for me, I haven’t decided yet. I’m really excited by the opportunities opened by this Design by Introspection thing I discussed in my DConf 2017 keynote [Also, see the blog post he wrote about his presentation at Google’s Tel Aviv campus – Ed.].

Q: Last question: what’s the elevator pitch for DConf? If you only had 30 seconds to sell a prospective attendee on the event, what would you say?

A: D is a language with depth. Richness. It has unique solutions to some difficult problems, such as reconciling compile-time computation, partial evaluation, domain-specific languages, and metaprogramming all together in a wholesome manner. Such matters are so fundamental to the way we design, build, and execute our programs that we either consider them solved or unsolvable. Chances are, attending DConf will make you like the D language more. But more importantly, your view of your own métier will be improved regardless of your languages of choice.


Be sure to keep an eye on this space for more details about DConf 2018 as they are released. And if you’re planning to submit a talk, don’t procrastinate. The submission deadline is Feb 25th.

See you in Munich!

DMD 2.077.0 Released


The D Language Foundation is happy to announce DMD 2.077.0. This latest release of the reference compiler for the D programming language is available from the dlang.org Downloads page. Among the usual slate of bug and regression fixes, this release brings a couple of particularly beneficial enhancements that will have an immediate impact on some existing projects.

Cutting symbol bloat

Thanks to Rainer Schütze, the compiler now produces significantly smaller mangled names in situations where they had begun to get out of control, particularly in the case of IFTI (Implicit Function Template Instantiation) where Voldemort types are involved. That may call for a bit of a detour here.

The types that shall not be named

Voldemort types are perhaps one of D’s more interesting features. They look like this:

auto getHeWhoShallNotBeNamed() 
{
    struct NoName 
    {
        void castSpell() 
        {
            import std.stdio : writeln;
            writeln("Crucio!");
        }           
    }
    return NoName();
}

void main() 
{
    auto voldemort = getHeWhoShallNotBeNamed();
    voldemort.castSpell();
}

Here we have an auto function, a function for which the return type is inferred, returning an instance of a type declared inside the function. It’s possible to access public members on the instance even though its type can never be named outside of the function where it was declared. Coupled with type inference in variable declarations, it’s possible to store the returned instance and reuse it. This serves as an extra level of encapsulation where it’s desired.

In D, for any given API, as far as the world outside of a module is concerned, module private is the lowest level of encapsulation.

module foobar;

private struct Foo
{
    int x;
}

struct Bar 
{
    private int y;
    int z;
}

Here, the type Foo is module private. Bar is shown here for completeness, as those new to D are often surprised to learn that private members of an aggregate type are also module private (D’s equivalent of the C++ friend relationship). There is no keyword that indicates a lower level of encapsulation.

Sometimes you may not want Foo to be visible to the entire module at all. It’s true that anyone making a breaking change to Foo’s interface also has access to the parts of the module that the change would break (which is the rationale behind module-private members), but there are still times when you’d rather the rest of the module have no access to Foo whatsoever. Voldemort types fill that role, hiding details not just from the outside world, but from the rest of the module.

The evil side of Voldemort types

One unforeseen consequence of Voldemort types, first reported in mid-2016, was that, when used in templated functions, they caused a serious explosion in the size of the mangled function names (in some cases up to 1 MB!), making for some massive object files. There was a good bit of forum discussion on how to trim them down, with a number of ideas tossed around. Ultimately, Rainer Schütze took it on. His persistence has resulted in shorter mangled names all around, but the wins are particularly impressive when it comes to IFTI and Voldemort types. (Rainer is also the maintainer of Visual D, the D programming language plugin for Visual Studio.)
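
To get a sense of how the explosion happened, consider a chain of templated functions that each return a Voldemort type. The sketch below is ours, not code from the original reports: each call wraps the previous return type, and prior to 2.077.0 the mangled name of the resulting symbol grew explosively with each level of nesting.

auto wrap(R)(R r)
{
    // Each instantiation returns a distinct Voldemort type that embeds R,
    // so the mangled symbol name compounds at every level of wrapping.
    struct Wrapper
    {
        R inner;
    }
    return Wrapper(r);
}

void main()
{
    auto w = 42.wrap.wrap.wrap.wrap;
    // Prints the length of the mangled type name at compile time. Before
    // 2.077.0 this grew dramatically with each additional .wrap; the new
    // back references keep it far more compact.
    pragma(msg, typeof(w).mangleof.length);
}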

D’s name-mangling scheme is detailed in the ABI documentation. The description of the new enhancement is in the section titled ‘Back references’.

Improved vectorization

D has long supported array operations such as element-wise addition, multiplication, etc. For example:

int[] arr1 = [0, 1, 2];
int[] arr2 = [3, 4, 5];
int[3] arr3 = arr1[] + arr2[];
assert(arr3 == [3, 5, 7]);

In some cases, such operations could be vectorized. The reason it was some cases rather than all cases is that the vectorization was achieved with dedicated assembly routines, and those routines weren’t implemented for every case.

With 2.077.0, that’s no longer true. Vectorization is now templated so that all array operations benefit. Any codebase out there using array operations that were not previously vectorized can expect a sizable performance increase for those operations thanks to the increased throughput (though whether an application benefits overall is of course context-dependent). How the benefit is received depends on the compiler being used. From the changelog:

For GDC/LDC the implementation relies on auto-vectorization, for DMD the implementation performs the vectorization itself. Support for vector operations with DMD is determined statically (-mcpu=native, -mcpu=avx2) to avoid binary bloat and the small test overhead. DMD enables SSE2 for 64-bit targets by default.

Note that the changelog initially showed -march instead of -mcpu in the quoted lines, and the updated version had not yet been posted when this announcement was published.

DMD’s vectorization is implemented in terms of core.simd, which is also part of DRuntime’s public API.
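
Most code never needs to touch core.simd directly, but for the curious, here’s a minimal sketch of what it exposes (our example, assuming an x86-64 target where the float4 vector type is available):

import core.simd;

void main()
{
    // float4 maps to a 128-bit SIMD register holding four floats.
    float4 a = [1.0f, 2.0f, 3.0f, 4.0f];
    float4 b = [5.0f, 6.0f, 7.0f, 8.0f];

    // A single vector addition operates on all four lanes at once.
    float4 c = a + b;

    // The .array property exposes the result as an ordinary static array.
    assert(c.array == [6.0f, 8.0f, 10.0f, 12.0f]);
}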

The changelog also notes that there’s a potential for division performed on float arrays in existing code to see a performance decrease in exchange for an increase in precision.

The implementation no longer weakens floating point divisions (e.g. ary[] / scalar) to multiplication (ary[] * (1.0 / scalar)) as that may reduce precision. To preserve the higher performance of float multiplication when loss of precision is acceptable, use either -ffast-math with GDC/LDC or manually rewrite your code to multiply by (1.0 / scalar) for DMD.
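
In practical terms, the choice now rests with the programmer. Here’s a quick sketch of the two forms with DMD (the array and divisor are purely illustrative):

float[] samples = [2.0f, 4.0f, 8.0f];
float divisor = 4.0f;

// Precise: one floating point division per element (the new default behavior).
samples[] = samples[] / divisor;

// Faster but potentially less precise: multiply by the reciprocal instead,
// the manual rewrite suggested for DMD when the loss of precision is acceptable.
samples[] = samples[] * (1.0f / divisor);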

Other assorted treats

Just the other day, someone asked in the forums if DMD supports reproducible builds. As of 2.077.0, the answer is affirmative. DMD now ensures that compilation is deterministic: given the same source code and the same compiler version, the binaries produced will be identical. If this is important to you, be sure not to use any of the non-deterministic lexer tokens (__DATE__, __TIME__, and __TIMESTAMP__) in your code.
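
For example, any of the following re-introduces non-determinism, since the values change from one compilation to the next (our illustration, not from the changelog):

// Each of these tokens expands to a value tied to the moment of compilation,
// so any binary embedding them is no longer reproducible.
enum buildDate  = __DATE__;      // e.g. "Nov 11 2017"
enum buildTime  = __TIME__;      // e.g. "14:30:05"
enum buildStamp = __TIMESTAMP__; // e.g. "Sat Nov 11 14:30:05 2017"

pragma(msg, "Built: " ~ buildStamp);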

DMD’s -betterC command line option gets some more love in this release. When it’s enabled, DRuntime is not available. Library authors can now use the predefined version D_BetterC to determine when that is the case so that, where it’s feasible, they can more conveniently support applications with and without the runtime. Also, the option’s behavior is now documented, so it’s no longer necessary to go to the forums or parse through search results to figure out what is and isn’t actually supported in BetterC mode.
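
For library authors, supporting both modes typically takes the shape of a version block. Here’s a minimal sketch (the function reportFailure is hypothetical, purely to illustrate the branching):

// Report an error in a way that works both with and without DRuntime.
void reportFailure(const(char)* msg)
{
    version (D_BetterC)
    {
        // No DRuntime available: fall back to the C standard library.
        import core.stdc.stdio : printf;
        printf("error: %s\n", msg);
    }
    else
    {
        // Full D runtime: exceptions are available.
        import std.string : fromStringz;
        throw new Exception(msg.fromStringz.idup);
    }
}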

The entire changelog is, as always, available at dlang.org.