This blog is written in Rust, and I wanted a way to reload the web pages automatically while I change the posts' contents, styles, etc. This is commonplace with JavaScript frameworks, but not automatic in Rust land. So I embarked on a side quest to achieve just that: the "type and auto-reload" experience. In the end, I was surprised to learn a bit more about sockets and processes in Linux.

This post is a note to myself about the nuggets I've learned, and a way to share the solution. It may be helpful for future me and, I hope, for someone else out there.

TL;DR

You can check the solution here: https://git.sitegui.dev/sitegui/axum-web-auto-reload-example/src/branch/main/src/main.rs. The README in that repo has some nice diagrams as well.

Shopping list

To reload the browser page on a source file change, you will need:

  • your server: I'm developing mine with axum in Rust
  • a tool to listen to a port, pass down the socket to the server, detect file changes, and restart the server. I'm using watchexec
  • a browser

To run it all, I use this command:

watchexec \
  --socket 8080 \
  --restart \
  --stop-signal SIGINT -- \
  cargo run

Accepting the passed socket

This part is very important for smooth reloads: when the browser reloads, it tries to re-establish a connection with your server. However, at that very moment your server is restarting and probably not yet available on the localhost port, so the browser fails immediately with a passive-aggressive message telling you it was ghosted by its dearest friend localhost.

The solution is a bit involved but quite brilliant: leave the listening socket to the watchexec process, which stays alive through the whole session. When the browser tries to connect, the connection no longer fails immediately for lack of a listening process. Of course, watchexec has no idea what to do with that incoming connection: only your server knows. So watchexec spawns your server and passes it the socket, so that it can do its serverly stuff, like accepting connections and spitting out HTML or whatever. To "pass the socket", watchexec uses two environment variables:

  • LISTEN_FDS: the number of sockets being passed
  • LISTEN_FDS_FIRST_FD: the file-descriptor id of the first socket being passed

The sockets are created with the ReuseAddr and ReusePort options, so that the server can listen on them again after a restart.

To make it work, your server should detect that it was launched by something like watchexec and that it received a socket. I'm using the listenfd crate to help with that:

use listenfd::ListenFd;
use std::error::Error;
use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    // Use the crate `listenfd` to get the socket passed by `watchexec`
    let inherited_socket = ListenFd::from_env().take_tcp_listener(0)?;

    let (auto_refresh, listener) = if let Some(inherited_socket) = inherited_socket {
        println!("Listening on inherited socket");
        inherited_socket.set_nonblocking(true)?; // required by TcpListener::from_std()
        (true, TcpListener::from_std(inherited_socket)?)
    } else {
        // Fall back to the typical way of listening on a new port
        let listener = TcpListener::bind(("127.0.0.1", 8000)).await?;
        println!("Listening on http://{}", listener.local_addr()?);
        (false, listener)
    };

    // ... build the router and serve on `listener` ...

    Ok(())
}

Add a page script to reload

When auto_refresh is true in the code above, it means that we're in "let's reload guys!" territory.

In my server, I'm using minijinja to render Jinja2 templates, so I do something like this:

{% if AUTO_REFRESH %}
<script>
  // Connect to the server and reload when a new message is received
  const eventSource = new EventSource(`/auto_refresh`)
  eventSource.addEventListener("message", () => location.reload())
</script>
{% endif %}

and in my main:

fn main() {
    // ... after creating the minijinja Environment ...
    jinja_env.add_global("AUTO_REFRESH", auto_refresh);
}

To tell the browser that it should reload, I'm using server-sent events (SSE), which are pretty cool in fact! My use case here is very simple: whenever the server sends an event, reload.

Please call me

The last piece of the puzzle is to implement the /auto_refresh SSE endpoint in the server:

use axum::response::sse::{Event, KeepAlive};
use axum::response::Sse;
use std::convert::Infallible;
use tokio::signal;
use tokio::sync::mpsc;
use tokio_stream::Stream;
use tokio_stream::wrappers::UnboundedReceiverStream;

/// Create a server-sent event stream that will send a single "goodbye" event when the server is
/// stopped.
pub async fn get_auto_refresh() -> Sse<impl Stream<Item = Result<Event, Infallible>>> {
    println!("GET /auto_refresh");

    let (tx, rx) = mpsc::unbounded_channel();
    tokio::spawn(async move {
        wait_ctrl_c().await;
        println!("SSE: sending goodbye");
        let event = Event::default().data("goodbye");
        let _ = tx.send(Ok(event));
    });

    Sse::new(UnboundedReceiverStream::new(rx)).keep_alive(KeepAlive::new())
}

async fn wait_ctrl_c() {
    let _ = signal::ctrl_c().await;
}

And that's it! For sure, the JavaScript ecosystem has polished this integration of different pieces into a smoother experience, but now I can enjoy it too. Maybe as a next step I can package all these pieces into a single crate?