tcpproxy 0.4 released

Educational async rust tokio proxy project updated

Image courtesy of Hack A Day

This blog post was a bit delayed in the pipeline, but a new release of tcpproxy, our educational async (tokio) rust command line proxy project, is now available for download (as precompiled binaries, or install via cargo).

I was actually surprised to find that we haven’t written about tcpproxy before (you can see our other rust-related posts here), but it’s a command line tcp proxy “server” written with two purposes in mind: a) serving as a real-world example of an async (tokio-based) rust networking project, and b) serving as a minimal but still useful tcp proxy you can run and use directly from the command line, without needing complex installation or configuration procedures. (You can think of it as being like Minix, but for rust and async networking.)

The tcpproxy project has been around for quite some time, originally published in 2017 before rust’s async support was even stabilized. At the time, it manually chained futures to achieve scalability without relying on the thread-per-connection model – but today its codebase is a lot easier to follow and understand thanks to rust’s first-class async/await support.

That doesn’t mean there aren’t “gotchas” rust devs need to be aware of when developing long-lived async applications, and part of tcpproxy’s purpose is to serve as a real-world illustration of the correct way to handle some of the thornier issues, such as tying the lifetimes of the various connections (or halves of connections) to one another and aborting all remaining tasks when the first one terminates (without blocking or polling).
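To give a flavor of what that looks like (this is a simplified sketch rather than tcpproxy’s actual code), one common tokio pattern is to drive both halves of the proxied connection concurrently with tokio::select!, so that whichever direction finishes or errors first causes the other future to be dropped, with no polling or explicit signaling involved:

```rust
use tokio::io::{self, AsyncWriteExt};
use tokio::net::TcpStream;

// Sketch: tie the two halves of a proxied connection together so that when
// either direction completes, the other is cancelled by being dropped.
async fn proxy(client: TcpStream, upstream: TcpStream) -> io::Result<()> {
    let (mut client_read, mut client_write) = client.into_split();
    let (mut upstream_read, mut upstream_write) = upstream.into_split();

    let client_to_upstream = async {
        let n = io::copy(&mut client_read, &mut upstream_write).await?;
        upstream_write.shutdown().await?;
        Ok::<_, io::Error>(n)
    };
    let upstream_to_client = async {
        let n = io::copy(&mut upstream_read, &mut client_write).await?;
        client_write.shutdown().await?;
        Ok::<_, io::Error>(n)
    };

    // select! polls both futures on the same task; the one that doesn't
    // finish first is simply dropped, cancelling its pending I/O.
    tokio::select! {
        result = client_to_upstream => { result?; }
        result = upstream_to_client => { result?; }
    }
    Ok(())
}
```

Dropping the losing future is what aborts the other half; a real proxy also has to decide how (and whether) to report the bytes copied by the direction that was cancelled.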

The 0.4.0 release doesn’t contain any major changes, but it tweaks a number of things to improve the usability of the application and to model the correct way of handling a few things, such as not using an Arc<T> to share state that remains alive (and static) for the duration of the program’s execution.¹
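As a rough sketch of that pattern (with hypothetical names, not taken from tcpproxy itself), a value that is going to live for the entire run of the program can be leaked once at startup to obtain a &'static reference, avoiding the Arc<T> entirely:

```rust
/// Hypothetical program-wide options, for illustration only.
struct Options {
    listen_addr: String,
    debug: bool,
}

fn main() {
    // Leak the allocation once at startup: the value lives until the process
    // exits anyway, so a one-time "leak" buys us a plain &'static reference
    // that can be copied into any task or thread for free.
    let options: &'static Options = Box::leak(Box::new(Options {
        listen_addr: "0.0.0.0:8080".to_string(),
        debug: false,
    }));

    // `options` is just a reference (and Copy), so each spawned task or
    // thread can capture it without any Arc::clone() and the associated
    // atomic reference-count updates.
    std::thread::spawn(move || {
        if options.debug {
            eprintln!("listening on {}", options.listen_addr);
        }
    })
    .join()
    .unwrap();
}
```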

One of the user-visible changes in this release is that ECONNRESET and ECONNABORTED are no longer treated as exceptional, meaning that tcpproxy proceeds as if the connection in question were closed normally and uneventfully. While a compliant TCP client shouldn’t just abort a tcp connection (and a server shouldn’t reset one), these things happen quite often in the real world, and since all tcpproxy connections are stateless, there’s really no reason to handle them any differently from a normal, compliant tcp connection tear-down. Since we no longer report a connection error in these cases, tcpproxy prints (when executed in debug mode with -d, that is) the normal messages about the number of bytes proxied in each direction, hopefully leading to less confusion.
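In rust terms, ECONNRESET and ECONNABORTED surface as io::ErrorKind::ConnectionReset and io::ErrorKind::ConnectionAborted; a hedged sketch (not the literal tcpproxy code) of folding them into the normal end-of-stream path might look like this:

```rust
use std::io::ErrorKind;
use tokio::io::{self, AsyncRead, AsyncWrite};

// Sketch: treat an abrupt reset/abort from the peer the same as a normal
// end-of-stream instead of propagating it as an error.
async fn copy_half<R, W>(reader: &mut R, writer: &mut W) -> io::Result<u64>
where
    R: AsyncRead + Unpin,
    W: AsyncWrite + Unpin,
{
    match io::copy(reader, writer).await {
        Ok(bytes) => Ok(bytes),
        // The peer reset or aborted the connection; for a stateless proxy
        // this is no different from the connection closing normally.
        // (A real implementation would want to report the bytes copied so
        // far rather than 0.)
        Err(e)
            if matches!(
                e.kind(),
                ErrorKind::ConnectionReset | ErrorKind::ConnectionAborted
            ) =>
        {
            Ok(0)
        }
        Err(e) => Err(e),
    }
}
```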

For those of you hearing about the tcpproxy project for the first time, I invite you to look over the core event loop which remains fairly small even when correctly handling all the cases we need to account for and synchronizing lifetimes the way we like. If you spot something that’s wrong, not quite right, or could be done in a more idiomatic way, please do leave a comment, send an email, or open an issue – tcpproxy is an open source project and it takes a village to raise and nurture even the smallest of projects to a healthy state!

You can follow me on twitter @mqudsi or sign up below for our rust-only mailing list to receive a heads-up when new rust educational content or rust open source crates are released. If you’re in a position to do so, I am also experimenting with accepting sponsors on my Patreon page and would greatly appreciate your patronage and support!



  1. In cases like this, the recommendation is to actually just leak the memory instead, to avoid the cache coherency traffic (under the MESI or MOESI protocols) caused when each new task increments or decrements the shared reference count in the Arc<T>. If you know the value is going to live until the end of the application’s lifetime anyway, there’s no need to incur that cost, and any future (read-only) access to the shared variable from any thread on any core will be ~free.
