Using build.rs to integrate rust applications with system libraries like a pro

I’m happy to announce the release of version 0.2 of the rsconf crate, with new support for informing Cargo about the presence of custom cfg keys and values (to work around a major change that has resulted in hundreds of warnings for many popular crates under 1.80 nightly).

rsconf itself is a newer crate, born (during the fish-shell transition from C++ to rust) out of the need to replace work that’s traditionally been relegated to the build system (e.g. CMake or autoconf): “feature detecting” various native system capabilities in the kernel, the selected runtime (e.g. libc), or installed libraries. It (optionally) integrates with the popular cc crate so you can test and configure the build toolchain for various underlying features or behaviors, and then unlock conditional compilation of native rust code that interops with the system or external libraries accordingly.
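
To give a rough idea of what this kind of feature detection looks like at the Cargo level, here’s a minimal, hand-rolled build.rs sketch (not rsconf’s actual api; the cfg name and the C probe are made up for illustration). It tries to compile a tiny C snippet via the cc crate and only enables a custom cfg if that succeeds, while declaring the cfg up front to keep rustc 1.80’s check-cfg lint happy:

```rust
// build.rs — a minimal sketch of build-time feature detection.
// The cfg name `have_eventfd` and the probe source are invented for
// illustration; rsconf wraps this kind of logic behind a nicer API.
use std::{env, fs, path::PathBuf};

fn main() {
    // Always declare the cfg so rustc 1.80+ doesn't warn about an
    // unexpected cfg key (this is the kind of thing rsconf 0.2 automates).
    println!("cargo::rustc-check-cfg=cfg(have_eventfd)");

    // Write a tiny C probe to OUT_DIR and try to compile it; if it builds,
    // we assume the capability exists on the target.
    let out = PathBuf::from(env::var("OUT_DIR").unwrap());
    let probe = out.join("probe_eventfd.c");
    fs::write(
        &probe,
        "#include <sys/eventfd.h>\nint main(void) { return eventfd(0, 0) >= 0 ? 0 : 1; }\n",
    )
    .unwrap();

    let have_it = cc::Build::new()
        .file(&probe)
        .try_compile("probe_eventfd")
        .is_ok();

    if have_it {
        // Unlock `#[cfg(have_eventfd)]` blocks in the crate's source.
        println!("cargo::rustc-cfg=have_eventfd");
    }

    // Re-run only when the build script itself changes.
    println!("cargo::rerun-if-changed=build.rs");
}
```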

Continue reading

tcpproxy 0.4 released

Image courtesy of Hack A Day

This blog post was a bit delayed in the pipeline, but a new release of tcpproxy, our educational async (tokio) rust command line proxy project, is now available for download (precompiled binaries or install via cargo).

I was actually surprised to find that we haven’t written about tcpproxy before (you can see our other rust-related posts here), but it’s a command line tcp proxy “server” written with two purposes in mind: a) serving as a real-world example of an async (tokio-based) rust networking project, and b) serving as a minimal but still useful tcp proxy you can run and use directly from the command line, without needing complex installation or configuration procedures. (You can think of it as being like Minix, but for rust and async networking.)

The tcpproxy project has been around for quite some time, originally published in 2017 before rust’s async support was even stabilized. At the time, it manually chained futures to achieve scalability without relying on the thread-per-connection model – but today its codebase is a lot easier to follow and understand thanks to rust’s first-class async/await support.
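
For the curious, the heart of such a proxy is surprisingly small under async/await. The following is only an illustrative sketch (not tcpproxy’s actual source, and the addresses are placeholders): accept a client connection, dial the upstream server, then pump bytes in both directions until either side hangs up:

```rust
// A minimal sketch of an async tcp proxy's core loop using tokio.
use tokio::io::copy_bidirectional;
use tokio::net::{TcpListener, TcpStream};

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Listen address and upstream target are placeholders for illustration.
    let listener = TcpListener::bind("127.0.0.1:8080").await?;
    loop {
        let (mut client, _) = listener.accept().await?;
        tokio::spawn(async move {
            match TcpStream::connect("127.0.0.1:9000").await {
                Ok(mut upstream) => {
                    // Forward traffic in both directions; resolves when
                    // either side closes (returns the byte counts).
                    let _ = copy_bidirectional(&mut client, &mut upstream).await;
                }
                Err(e) => eprintln!("failed to connect upstream: {e}"),
            }
        });
    }
}
```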

Continue reading

Implementing truly safe semaphores in rust

Discuss this article on r/rust or on Hacker News.

Low-level or systems programming languages generally strive to provide libraries and interfaces that enable developers, boost productivity, enhance safety, provide resistance to misuse, and more — all while trying to reduce the runtime cost of such initiatives. Strong type systems turn runtime safety/sanity checks into compile-time errors, optimizing compilers try to reduce an enforced sequence of api calls into a single instruction, and library developers think up clever hacks to completely erase any trace of an abstraction from the resulting binaries. And as anyone familiar with them can tell you, the rust programming language and its developers/community have truly embraced this ethos of zero-cost abstractions, perhaps more so than any other.

I’m not going to go into detail about what the rust language and standard library do to enable zero-cost abstractions, or spend a lot of time going over the many examples of zero-cost interfaces available to rust programmers, but I’ll quickly mention a few of my favorites. Iterators and all the methods the Iterator trait exposes have to be at the top of every list, given the amount of black magic voodoo the compiler has to do to turn these into their loop-based equivalents. Zero-sized types make developing embedded firmware in rust a dream, and it’s really crazy to see how all the various peripheral abstractions can be completely erased, giving you small firmware blobs despite all the safety abstractions. And no list is complete without the newest member of the team, async/await, and how rust manages to turn an entire web server api into a single state machine and event loop. (And to think this can be used even on embedded without a relatively heavy async framework like tokio, and with zero allocations to boot!)
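
To make the iterator point concrete, the two functions below do the same thing; the adapter chain on top typically compiles down to essentially the same machine code as the hand-written loop beneath it:

```rust
// Iterator adapters: a zero-cost abstraction over the manual loop.
fn sum_of_even_squares(values: &[u32]) -> u32 {
    values
        .iter()
        .filter(|&&x| x % 2 == 0) // keep the even numbers
        .map(|&x| x * x)          // square them
        .sum()                    // add them up
}

// The hand-written equivalent the compiler effectively produces.
fn sum_of_even_squares_loop(values: &[u32]) -> u32 {
    let mut total = 0;
    for &x in values {
        if x % 2 == 0 {
            total += x * x;
        }
    }
    total
}
```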

Continue reading

SecureStore 0.100: KISS, git-versioned secrets management for rust

A few days ago, we published a new version of both the securestore library/crate and the ssclient CLI used to create, manage, and retrieve secrets from SecureStore vaults, an open and cross-language protocol for KISS secrets management. SecureStore vaults are a more secure and far more reliable alternative to storing secrets in environment variables and a simpler, less error-prone alternative to network-based secrets management solutions, and they make setting up development environments a breeze.

For some background, the SecureStore protocol (first published in 2017) is an open specification and cross-language library/frontend for securely storing encrypted secrets versioned in git, alongside your code. We have implementations available in rust (crate, cli) and for C#/.NET (api and cli, nuget) and the specification is purposely designed to be both easy-to-use and easy-to-port to other languages or frameworks.

This is the first update with (minor) breaking changes to the securestore public api, although pains have been taken to ensure that most common workflows won’t break. The changes are primarily to improve ergonomics when retrieving secrets from rust, and come with completely rewritten docs and READMEs (for the project, the lib, and the cli).
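
As a taste of what retrieval looks like from rust, loading a vault and pulling out a secret is only a couple of lines. (Caveat: the type and method names below are written from memory and may not exactly match the current api; check the rewritten docs for the authoritative interface.)

```rust
// A rough sketch of reading a secret from a SecureStore vault in rust.
// Names here (SecretsManager, KeySource, get) are approximations, not a
// guaranteed reflection of the crate's current public api.
use securestore::{KeySource, SecretsManager};
use std::path::Path;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load an existing vault, decrypting with a key file kept out of git.
    let secrets =
        SecretsManager::load("secrets.json", KeySource::File(Path::new("secrets.key")))?;

    // Retrieve a secret by name; the encrypted vault itself is safe to commit.
    let db_password: String = secrets.get("db:password")?;
    println!("retrieved a {}-character password", db_password.len());
    Ok(())
}
```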

Continue reading

C# file size formatting library PrettySize 3.1 released

Hot on the heels of an update to our rust port of PrettySize, we have a new release of PrettySize.NET that brings new features and capabilities to the best .NET library for formatting file sizes for human-readable output and display.

PrettySize 3.1, available on GitHub and via NuGet, has just been released and contains a number of improvements, requested features, and new abilities to make handling file sizes (and not just formatting them) easier and more enjoyable.

Continue reading

SecureStore: the open secrets container format

It’s been a while since we first released our SecureStore.NET library for C# and ASP.NET developers back in 2017, as a solution for developers looking for an uncomplicated way of safely and securely storing secrets without needing to build and maintain an entire infrastructure catering to that end. Originally built way back in 2015 to support secrets storage in legacy ASP.NET applications, SecureStore.NET has since been updated for ASP.NET Core and UWP desktop application development, and now we’re proud to announce the release of SecureStore 1.0, with multi-platform and cross-framework support, an updated schema that makes a few more features possible, and official implementations in C#/.NET and Rust.

Continue reading

A persistent cache for ASP.NET Core

One of the nicest things about ASP.NET Core is the availability of certain singleton models that greatly simplify some very common developer needs. Given that (depending on who you ask) one of the two hardest problems in computing is caching1, it’s extremely helpful that ASP.NET Core ships with several models for caching data, chief among them IMemoryCache and IDistributedCache, which are added to an ASP.NET Core application via dependency injection and are then available to both the framework and the application itself. Although these two expose almost identical APIs, they differ rather significantly in semantics.2

Continue reading


  1. Although that *probably* refers more to cache coherence rather than simply key-value persistence, to be perfectly frank. 

  2. It is extremely refreshing to see Microsoft adopting the Haskell/Rust approach of using types to express/convey intention and semantics rather than merely shape. 

Compare streams quickly and efficiently in C# and .NET

Have you ever needed to compare the contents of two files (or other streams), and it mattered how quickly you got it done? To be frank, it doesn’t normally come up in the list of things you may need on a daily code-crunching basis, but that rather depends on what kind of programs you tend to write. In our world, let’s just say it’s not an uncommon task.

At first blush, it would seem to be no harder than comparing two arrays: keep a pointer reading from each file, compare bytes as you come across them, and bail when things differ. And it would be that easy if you were to use memory-mapped files and let the OS map a file on disk to a range in memory, but that has some drawbacks that may not always be acceptable depending on what you’re trying to do with the files (or streams) in question. It also requires having a physical path on the filesystem that you can pass in to the kernel, and it unduly burdens the kernel with some not-insignificant workloads that aren’t (in practice) subject to the same scheduling and fairness guarantees as user code, which can slow down older machines significantly1.
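
For a concrete picture of the buffered alternative, here’s a rough sketch of the idea (shown in rust rather than C#, and not the library’s actual code): read both streams in fixed-size chunks and bail out at the first mismatch, without ever mapping either file into memory:

```rust
use std::io::Read;

/// Fill `buf` as much as possible from `r`, returning the number of bytes
/// read (less than buf.len() only at end-of-stream).
fn fill<R: Read>(r: &mut R, buf: &mut [u8]) -> std::io::Result<usize> {
    let mut filled = 0;
    while filled < buf.len() {
        match r.read(&mut buf[filled..])? {
            0 => break,
            n => filled += n,
        }
    }
    Ok(filled)
}

/// Compare two streams chunk-by-chunk, stopping at the first difference.
fn streams_equal<A: Read, B: Read>(mut a: A, mut b: B) -> std::io::Result<bool> {
    let mut buf_a = [0u8; 8192];
    let mut buf_b = [0u8; 8192];
    loop {
        let n_a = fill(&mut a, &mut buf_a)?;
        let n_b = fill(&mut b, &mut buf_b)?;
        if n_a != n_b || buf_a[..n_a] != buf_b[..n_b] {
            return Ok(false);
        }
        if n_a == 0 {
            // Both streams reached EOF with identical contents.
            return Ok(true);
        }
    }
}
```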

Continue reading


  1. Especially under Windows 

Faster lookups, insertions, and in-order traversal than a red-black or AVL tree

NeoSmart Technologies is pleased to announce the immediate availability of its open source NeoSmart.Collections library/package of specialized containers and collections designed to eke out performance beyond that which is available in typical libraries and frameworks, by purposely optimizing for specific (common!) use cases at the cost of pretty much everything else.

In many regards, the data structures/containers/collections that ship with pretty much any given framework or standard library are a study in compromise. Language and framework developers have no idea what sort of data users will throw at their collections, or in what order. They have no clue whether or not a malicious third party would ultimately be at liberty to insert carefully crafted values into a container, or even what the general ratio of reads to writes would look like. The shipped code must be resilient, fairly performant, free of any obvious pathological cases with catastrophic memory or computation complexities, and above all, dependable.

It isn’t just language and framework developers that are forced to make choices that don’t necessarily align with your own use case. Even when attempting to identify what alternative data structure you could write up and use for your needs, you’ll often be presented with theoretical 𝒪 numbers that don’t necessarily apply or even have any relevance at all in the real world. An algorithm thrown out the window for having a horrible 𝒪 in Algorithms and Data Structures 101 may very well be your best friend if you can reasonably confine it to a certain subset of conditions or input values – and that’s without delving into processor instruction pipelines, execution units, spatial and temporal data locality, branch prediction, or SIMD processing.
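
As one concrete example of that kind of trade-off (an illustration only, sketched in rust rather than C#, and not the library’s code): a plain sorted vector gives you logarithmic lookups, effectively free in-order traversal, and excellent cache locality, in exchange for worst-case linear insertion, a trade that looks terrible on paper but can win handily for read-heavy or mostly-sorted workloads:

```rust
// A sorted-vec "tree replacement": O(log n) lookups, contiguous in-order
// traversal, O(n) worst-case insertion.
struct SortedList<T: Ord> {
    items: Vec<T>,
}

impl<T: Ord> SortedList<T> {
    fn new() -> Self {
        SortedList { items: Vec::new() }
    }

    fn insert(&mut self, value: T) {
        // Find the insertion point with a binary search, then shift the tail.
        let idx = self.items.partition_point(|x| x < &value);
        self.items.insert(idx, value);
    }

    fn contains(&self, value: &T) -> bool {
        self.items.binary_search(value).is_ok()
    }

    fn iter(&self) -> impl Iterator<Item = &T> {
        // Already sorted: in-order traversal is a linear walk over contiguous memory.
        self.items.iter()
    }
}
```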

Continue reading

ignore-result: A simple crate to make ignoring rust Results safe and easy

Rust, both by design and by convention, has a fairly strongly defined model for strict error handling, designed to force developers to deal with errors up front rather than assume they don’t exist. If you stick to a few conventions and best practices, error handling in an all-rust world becomes fairly straightforward (although what you ultimately do with the errors is a different question). The problems start when you step outside the comfortable world of crates and Results, such as when using FFI to interface with C libraries or when using rust in an embedded context.

The typical approach to dealing with errors in rust in 2018 is for any function that can encounter a scenario in which it is unable to return a valid value to declare a return type of Result<T, E>, where T is the expected result type in normal cases and E is a type covering the possible errors that can arise during execution. When calling from one such function into other functions whose success isn’t guaranteed, the typical control flow in the event of an error is almost always “early out”:
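
A generic sketch of that pattern (not code from the ignore-result crate itself): the ? operator hands the callee’s error straight back to our own caller instead of letting execution continue with an invalid value:

```rust
use std::fs;
use std::io;

fn read_config(path: &str) -> Result<String, io::Error> {
    // If read_to_string fails, `?` returns its error immediately ("early out")
    // rather than continuing with a value we don't have.
    let raw = fs::read_to_string(path)?;
    Ok(raw.trim().to_string())
}
```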

Continue reading