My Thoughts on Sutter’s “C++ and Beyond 2011”

Around a month ago, Herb Sutter gave a talk on why C++ is once more gaining relevance in the world of programming, and how, after a decade of neglect and abandonment, it is set to pave the way into the future. I downloaded it a while ago and finally had a chance to watch it last night. The talk itself is most excellent and runs around 40 minutes; you can stream it online or download it in a higher-resolution format on Channel 9.

As someone who has been using both C/C++ and .NET extensively over the past years, I found there was one very important point that Sutter touched on, danced around, and did everything short of actually naming in his talk: if you’re doing anything remotely intricate or complicated, leaky abstractions in managed languages will bite you in the ass and end up lowering your productivity, sometimes (and if what you’re working on is truly complicated, often) to the point where you’d have been more productive using C or C++ in the first place.

The concept of leaky abstractions isn’t anything new and I’m hardly the first to point out how it can turn a knight in shining armor into a harbinger of doom and destruction. It’s the number one problem fundamentally present in almost any framework, but even more so in managed languages where the framework is all you have, and you’re not allowed to side-step it and build your own foundations to work with (p/invoke and interop aside). But lately it’s becoming more and more of a problem as the “push” for innovation that Sutter speaks of has become a fundamental requirement in just about all corners of the industry.

Five years ago, few in the .NET community could tell you what p/invoke was or how you’d use it. Now it’s considered fairly basic knowledge, and a working familiarity with the underlying C Win32 API is a must for any desktop software developer looking to make a memorable, high-performing product in the world of .NET. Pretty much every Win32 API documentation page on MSDN has a comment from someone on how to import that particular function into .NET with an interop definition, and people are running into the limitations of the managed framework far more often than they used to.
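For anyone who hasn’t run into one before, an interop definition is just a [DllImport] declaration that maps a native entry point into managed code. Here’s a minimal sketch, using a couple of the Win32 console codepage functions from kernel32.dll purely as an example:

    using System.Runtime.InteropServices;

    static class NativeMethods
    {
        // Map the Win32 console codepage functions from kernel32.dll into .NET
        // so they can be called like ordinary static methods.
        [DllImport("kernel32.dll", SetLastError = true)]
        internal static extern uint GetConsoleOutputCP();

        [DllImport("kernel32.dll", SetLastError = true)]
        [return: MarshalAs(UnmanagedType.Bool)]
        internal static extern bool SetConsoleOutputCP(uint codePageId);
    }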

I’m still a big fan of .NET in general, as it makes it really easy to jump from idea to prototype to finished product for quick, one-off applications or basic tools and utilities. Even for huge products and projects, .NET’s GUI tools far exceed any C++ offerings in ease of use, and even in functionality, when it comes to having standard controls and features a single click away. But the leaky abstractions in .NET are proving to be a real pain in the back, and sometimes even interop doesn’t cut it for the most trivial and basic of things.

Recently, I hit a wall with .NET in attempting to make EasyBCD support parsing internationalized BCD output in the form of bcdedit’s stdout. bcdedit, like many other Microsoft tools and utilities, is only UTF-8 or Unicode aware if the console codepage is explicitly set in the application’s console window. I spent an entire day trying to shoehorn this functionality in via various .NET hacks and failed interop attempts, and a week off-and-on testing and debugging my various incomplete solutions. In the end, my solution was to create a C++ “proxy” application that creates a console, sets the codepage to UTF-8, runs the command-line utility, and pipes the output back to the host (source code here, MIT license: UtfRedirect). It was a guaranteed fix, and it took all of half an hour including testing it on 4 different platforms and experimenting with various internationalized stdout texts.
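For the curious, the host side of that arrangement is the easy part. The sketch below is illustrative rather than lifted from the actual UtfRedirect source (the proxy’s name and argument handling here are assumptions), but it shows the idea: launch the proxy instead of bcdedit itself, then decode the piped stdout as UTF-8:

    using System.Diagnostics;
    using System.IO;
    using System.Text;

    class BcdQuery
    {
        static string RunBcdEdit(string arguments)
        {
            // Launch the C++ proxy rather than bcdedit directly; the proxy sets
            // the console codepage to UTF-8 before running the real tool, so the
            // bytes piped back are genuine UTF-8 regardless of the system locale.
            var psi = new ProcessStartInfo("UtfRedirect.exe", "bcdedit.exe " + arguments)
            {
                UseShellExecute = false,
                RedirectStandardOutput = true,
                CreateNoWindow = true,
            };

            using (var proxy = Process.Start(psi))
            using (var reader = new StreamReader(proxy.StandardOutput.BaseStream, Encoding.UTF8))
            {
                string output = reader.ReadToEnd();
                proxy.WaitForExit();
                return output; // safe to parse no matter what language Windows is in
            }
        }
    }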

I personally believe the “best” compromise for medium-sized projects is to use C#/.NET to create the GUI and small helper scripts/utilities, but to build the core in C++ with an exposed C API or command-line interface. The long and short of the matter is, making a language truly and properly productive is a lot more work than just providing a sandbox with prettied-up API calls. You have to build productivity into a language organically, from the bottom up, making sure it goes hand-in-hand with power and flexibility and always plays second fiddle to being able to get the job done; after all, what use is a one-liner if, no matter how you twist and turn, you can’t get it to do what you want?
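To make that split concrete: the glue between the two halves doesn’t have to be anything fancier than a handful of flat C exports consumed via p/invoke from the GUI layer. The DLL and function names below are made up purely for illustration:

    using System;
    using System.Runtime.InteropServices;

    // Hypothetical C API exported from the native core ("core.dll" and these
    // entry points are invented for the sake of the example).
    static class CoreApi
    {
        [DllImport("core.dll", CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Unicode)]
        public static extern int core_open_store(string path, out IntPtr storeHandle);

        [DllImport("core.dll", CallingConvention = CallingConvention.Cdecl)]
        public static extern int core_entry_count(IntPtr storeHandle);

        [DllImport("core.dll", CallingConvention = CallingConvention.Cdecl)]
        public static extern void core_close_store(IntPtr storeHandle);
    }

The C#/.NET GUI never sees a C++ type; everything crosses the boundary as plain integers, strings, and opaque handles, which keeps the interop surface small and easy to debug.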

  • 5 thoughts on “My Thoughts on Sutter’s “C++ and Beyond 2011””

    1. Microsoft has a tool called “P/Invoke Interop Assistant”. You just paste native C code (structs, calls, functions, etc.), set 32/64-bit, and get out C# or VB.NET code. It gets most of the Win32 calls right the first time. The results are often different, and better, than your own conversion, the examples you find on the http://www.pinvoke.net website, or the comments on MSDN.

      I’m posting this because I think many people are unaware of this tool; I used to dig through pinvoke.net for several years before I stumbled upon it.

    2. ‘I personally believe the “best” compromise for medium-sized projects is to use C#/.NET to create the GUI and small helper scripts/utilities, but to build the core in C++ with an exposed C API or command-line interface.’

      So in your opinion managed languages in general today should basically be used the way VB was used a decade ago? I sort of agree. I’ve worked on small/medium desktop C++ applications and a few .Net webapps and utilities, and my impression of all traditional CLR languages has been that they represent the next iteration of VB6. This isn’t a bad thing. VB was a successful language that enabled a lot of people to get things done very quickly. .Net and the CLR build on that.

      Also, you seem to be suggesting that leaky abstractions are something managed languages have a monopoly on. They’ll bite you in the ass in a native code project as well. Abstractions will leak. This is an inevitable consequence of the fact that they are nothing more than convenient models. How any given leaking abstraction is handled is not a language concern, as you seem to imply, but instead a design/architecture concern.

      Lastly, a problem that I see with your idea of building a core in C++ is that .Net has such a rich framework available to it. Outside of GUI-centric WPF, Win/WebForms, Silverlight, and more recently “Metro”, there has been so much investment in, for example, data modeling and access. You’ll take a big productivity hit by dropping to C++ and not being able to leverage these components. If you’ve got deep pockets and a lot of time then more power to you.

      (Incidentally, data access in a native world is relegated to what? ODBC? ADO/COM? I’ve spent too much time with ADO.NET, EF, LINQ to SQL, and occasionally NHibernate to want to go back to CRecordsets and plain ODBC.)

    3. @Garren: Thanks for your comment. The point I was trying to make is that while abstractions always leak regardless of the language, if you’re sandboxed into your abstractions due to language limitations or JIT virtualization, a leaky abstraction won’t just make things hard, it’ll make things impossible. The UtfRedirect app is an example of something that, to the best of my knowledge, simply cannot be done from within .NET without modifying the source code of the framework or writing your own Process class; i.e., you can’t patch it, you can only rewrite it. That said, I fully agree that .NET makes some things incredibly easy versus native C/C++, and I’m the first to admit to taking advantage of that whenever I can.

      @sense: Thanks! I’d never heard of it before; I think that’s something that’ll come in handy!

    4. First of all, thank you for EasyBCD; it saved my life at least once when I was working with Linux on my machine. I watched the presentation, and the point is basically that performance can be annoying to get right in a virtual machine (code generation).
      In the end, I think (as someone who has written a fairly large C#/C++ application, an open-source CAD application) that if you think about your application from a performance standpoint, the .NET one can be better than a generation-old codebase that did not take advantage of some performance paradigms (because, for instance, templates did not compile right when the C++ framework was started), and using generics will shrink the performance gap with C++. In the end, I think people have to go as high (in level of abstraction) as possible and get lower only as performance requires. In fact, there are parts of C++ where it does not perform that well (the first thing that comes to mind is when new CPU instructions appear, the compiler may not generate optimum code for them, or if you want the code to work only on this or that CPU in a particular way, you may need to go down to assembly).
      Sorry if this comes across as a rant, but if the application is in the GAC, you can basically have performance guarantees for most desktop applications (because GAC assemblies are AOT-compiled by NGen). RAII may be missing from C#, just as a performant GC is missing from C++.

    5. Hi —

      Interesting topic, and one that is near and dear to my heart. It is precisely the need for the kind of control you talk about that has led me to develop, as a foundation for the language modeling tool I am building, my own virtual machine that gives me access to run-time assembly language generation as well as higher-level procedural and functional programming facilities. Yet I don’t want to give up the power of runtime-based libraries either, which is why I am working on integrating with both the .NET CLR and the JVM at the bytecode and JIT mechanism level.

      But I’m curious what your thoughts are on the whole issue of multi-core support, and specifically the new parallel programming facilities that have appeared in more recent .NET runtime and library versions. Here’s an area where even seasoned C++ developers have difficulty, and I think being able to take many of the hard issues and sweep them under the covers of a runtime abstraction is an incredibly useful thing to do. Making it easier to program with continuations, and letting the runtime take care of the hard work of passing component state forward from functional unit to functional unit and thread to thread, is a powerful mechanism that is really only possible with a managed runtime. As the need to leverage (or at least play nicely with) multiple cores becomes more and more important, I’m wondering whether moving back to a C++ do-it-all-yourself mindset is really the way to go.

      Very interesting question.
