For our company hackathon this year, three of us tried out the Axum crate to see what we could do with it and feel out its limitations. We loved a lot about it, but there’s still much more growth left for Rust libraries for backend Web development.
This post is going to be technical and assume some level of Rust familiarity. Jump to the conclusion for our non-technical summary.
Project Structure
Coming from Rails, we seldom consider how to structure a Web app. That simply isn’t the business goal, so Rails decided it for us. Axum dictated no such convention.
We ended up with a pattern where lib.rs has one function, router(), that returns a Router<AppState> but without the state. Then main.rs would call that function, add a state to it, and turn that into a service.
This gave us the flexibility of creating the state as needed for the app runner. As I’ll explain in a moment, we had different app runners that created state in different ways.
We divided handlers by resource under a controllers/ directory: controllers/posts.rs, controllers/comments.rs. Each module exposed its own router() function that was then used by the outer router() in lib.rs. This way each module knows how to route to itself.
We additionally had models, views, filters (for templates), forms, errors, and state modules.
While it was possible to assemble all of this, and it was nice to be able to
see the details, we were just trying to build an app. Minute details like
configuring the logger to show line numbers or telling Axum that an AppError
is a 500 were certainly not part of our business logic.
Environments
Everybody has a testing environment. Some people are lucky enough to have a totally separate environment to run production in.
We want at least three environments: test, development, and production. We know that the production server is going to be configured differently from a dev server. Moreover, we know that they should all use different databases.
To achieve this we used three cargo workspaces: production, development, and app. The first two only have a src/main.rs and the minimal Cargo.toml needed to get those running. They exist to make an AppState (from env, from a TOML file, etc.) and then to run the server. Meanwhile, the app workspace contains the actual business logic.
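A sketch of the state-building half of a runner crate; from_env and the fallback URL are our names and values here, not necessarily what we shipped:

```rust
// development/src/main.rs -- the runner builds the state, not the app crate.
#[derive(Clone, Debug, PartialEq)]
pub struct AppState {
    pub database_url: String,
}

impl AppState {
    // The development runner reads the environment, with a local fallback;
    // the production runner might parse a TOML file instead.
    pub fn from_env() -> Self {
        Self {
            database_url: std::env::var("DATABASE_URL")
                .unwrap_or_else(|_| "postgres://localhost/app_dev".into()),
        }
    }
}

// main() in the runner then does roughly (Axum 0.6 style):
//   let app = app::router().with_state(AppState::from_env());
//   axum::Server::bind(&addr).serve(app.into_make_service()).await
```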
This division would have helped discussions with our devops team – they could
iterate on the production
workspace without slowing development for the rest
of the team using the development
workspace.
This technique could use some love – it led to leaky abstractions. And again, we had to find and glue together all the logging, env reading, database pooling, etc. ourselves.
Substates
Zooming in a bit on the AppState, we made use of Axum 0.6’s substates and FromRef. Axum follows a classic callback pattern: give the runner (Router) an object, and the runner will pass that to the callbacks (Handler). In 0.6 they made it easy to have a state struct with sub-structs, and to extract those sub-structs within handlers.
We stuffed our DatabaseConnection
in the state, and extracted it nicely
within the handlers. This was a nice compromise between global state/magic
variables and verbose dependency injection.
This technique of extracting the data from a state struct as you need it is
quite handy. That said, #[derive(FromRef)]
is borderline magic, and when it
failed it left cryptic typecheck messages.
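To take some of the magic out of it: #[derive(FromRef)] essentially writes an impl of a small conversion trait for each field of the state struct. A hand-rolled sketch of the idea – Axum’s real trait is axum::extract::FromRef, and DatabaseConnection here is a stand-in type:

```rust
// A local copy of the trait, just to show the shape the derive generates.
pub trait FromRef<T> {
    fn from_ref(input: &T) -> Self;
}

#[derive(Clone, Debug, PartialEq)]
pub struct DatabaseConnection(pub String); // stand-in for a pool wrapper

#[derive(Clone)]
pub struct AppState {
    pub db: DatabaseConnection,
}

// The derive writes one of these per field:
impl FromRef<AppState> for DatabaseConnection {
    fn from_ref(state: &AppState) -> Self {
        state.db.clone()
    }
}

// A handler taking State<DatabaseConnection> works because Axum calls
// DatabaseConnection::from_ref(&app_state) before invoking the handler.
```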
Tasks
Axum, being a Web library, does not ship with a command runner. Fair enough. But we still need to easily run project-specific commands!
We turned to two ideas for this: Just, and xtask.
Just is a command runner. It has a handy feature where it can load in an .env file first. We made use of this to run a server with full backtracing (RUST_BACKTRACE) and logging (RUST_LOG). We ended up with two tasks: dev-server (alias: s) and migrate.
set dotenv-load

default:
    @just --list --justfile {{justfile()}}

alias s := dev-server

dev-server:
    cargo run -p development

migrate:
    cargo xtask migrate:up
We also needed to run tasks written in Rust, such as migrations. For this we tried the xtask pattern, with an xtask workspace and one main.rs that parses a subcommand and runs it.
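That binary can be as small as a match on the first argument; a minimal sketch, with task names that are ours:

```rust
// xtask/src/main.rs -- a minimal subcommand dispatcher.
fn run(task: &str) -> Result<(), String> {
    match task {
        "migrate:up" => {
            // the real version calls into the migration code here
            println!("running migrations");
            Ok(())
        }
        other => Err(format!("unknown task: {other}")),
    }
}

// main() then dispatches the first CLI argument:
//   let task = std::env::args().nth(1).unwrap_or_default();
//   run(&task).unwrap_or_else(|e| { eprintln!("{e}"); std::process::exit(1) });
```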
Once we figured out the tooling, it really did not get in our way. The Just tool works nicely, and xtask provides an easy way to add “scripts” that use Rust libraries.
In the future we may stick with just xtask to simplify the competing processes, at the expense of losing out on easily running anything that is not Rust, such as shell scripts. Those in the Rails world might be reminded of the rake vs rails command runner split, which was eventually merged to save us the headache of remembering which command offers which task.
All that said, collecting all the tasks needed for a Web app proved tedious. As a hurdle atop that, SQLx’s docs led to confusion about which tools we need to build ourselves – we learned about SQLx CLI while writing this post! Additionally, dotenv management differs between Just and xtask (dotenvy): Just allows you to override .env via the command line, but dotenvy does not.
Tests
Rust’s unit tests are good, and we had no question that they’d help us verify our Askama filters. So we instead concentrated our two-day hackathon time on integration tests.
For this we wanted to spin up a server then control a Web browser as it navigates around. We started with the example from the Axum repo, which was easy enough to get running (once we figured out how our real app differed from their trivial example handler), but left us wanting more. An assertion that the response code is 200 was easy enough, but verifying anything about the response itself was more cumbersome.
So we turned our attention to the thirtyfour crate, which gives us WebDriver (Selenium) access. Unfortunately we ran out of time before we got it to a state we were happy with. One thing we noticed was the lack of high-level helpers: we’d be in charge of finding our own buttons and then clicking them, instead of using a high-level click_button("Create post") function. The idea of re-implementing all of Ruby’s Capybara felt out of scope for the hackathon.
Database
Coming from Rails, we were particularly interested in how to use Rust to access the DB, and how to migrate the schema.
We did not expect to spend any time configuring a connection, but we did end up doing just that. From setting a DATABASE_URL environment variable, to making our own ConnectOptions struct so we could log statements, to building our own DB pool, to passing the pool in the state, we had to look up and write some 15 lines of common infrastructure code.
ORM
Despite Diesel being written by a thoughtbot alum, we tried SQLx. SQLx is not an object-relational mapper, which left much up to us. Next time we may evaluate Diesel, SeaORM, and Canyon to see if they give us more to grab onto.
In practice we came up with a framework that allowed us to iterate quickly. As described earlier, our AppState contains a struct DatabaseConnection(sqlx::PgPool). We used that to hang table-like methods on the DatabaseConnection itself, reducing the need for our Handlers to care about PgPools.
#[derive(Clone, FromRef)]
pub struct DatabaseConnection(pub PgPool);

impl DatabaseConnection {
    pub fn posts(&self) -> PostsTable {
        PostsTable(self.0.clone())
    }
}
This gave us a PostsTable
struct that could contain all the database
abstractions related to the posts
table.
pub struct Post {
    pub id: sqlx::types::Uuid,
    pub title: String,
    pub content: String,
    pub created_at: OffsetDateTime,
}

pub struct PostsTable(pub PgPool);

impl PostsTable {
    pub async fn get(&self, id: uuid::Uuid) -> Result<Post, sqlx::Error> {
        sqlx::query_as!(Post, "select * from posts where id = $1", id)
            .fetch_one(&self.0)
            .await
    }
}
We decided to expose the sqlx::Error return values from these methods rather than wrapping them in AppError::SqlxError structs, and instead wrapped them in the handler as needed. More on that below.
This ad hoc framework made it quick to add new DB methods (all, create_from_form, etc.). Note, however, that we need to maintain a Post struct that mirrors the DB exactly. SQLx does conveniently detect abnormalities in static analysis, but it is just another thing to do.
While these new DB methods were convenient, they were not a rich, composable language like we are used to in ActiveRecord scopes and ARel objects. SQLx is not an ORM, though, so that’s on us.
SQLx also has a lot of components with a scattered set of docs, which made the process a little slow. That said, an ORM would also have a large set of docs to work through.
Migrations
One of the extremely cool features that SQLx offers is static analysis of your queries, using the query! and query_as! macros. This means your build will fail if you reference a table that doesn’t exist, for example.
Our original setup, inspired by examples around the Web, was to run the migrations when the app first boots. This means that we would compile the app, run it, and then the migrations would run.
However, if a query depends on a migration having been run, then it won’t compile, and thus the migration won’t run.
We solved this by running migrations manually, first via a simple Rust script, then via an xtask, and finally via the SQLx CLI program.
By following the SQLx docs, we ended up without support for down migrations – all our migrations go in one direction: up. This happened because down migrations require a completely different structure. Discovering this was tricky – we didn’t actually learn this until writing this post.
Regardless, we especially liked the _sqlx_migrations table, with its checksum and execution_time columns.
HTML
We were building a traditional Web program: request comes in, response with HTML goes back. For that we wanted to use a templating language. We went with Askama over Tera because it came with Axum support and was very simple to use.
We defined structs deriving Template in a views module, and each of our rendering handlers returned one of these structs. This was a lovely way to communicate the goal of the handler: each handler exists to make that struct, and everything in the handler works towards that cause.
The Askama syntax itself is fine. I like that they support matching on enums right in the syntax. I have two wishes for improvement.
The first is that since so much depends on the filters, I wish Askama shipped with more filters. We ended up writing our own for showing inline form errors with the validator crate (discussed below). Ultimately we’d have liked something that could render structs representing forms, though we see how that would require a lot of iteration to get right.
The second is for a development mode where templates are rendered at runtime, not compile-time. Re-compiling any time we wanted to see HTML changes was a bit of a bummer.
But overall, Askama did its job and stayed out of the way.
I18n
We typically build every app with internationalization from the start. This is primarily a concern of the templates, the forms, and the error messages.
We looked at a few options and landed on Fluent, which looks incredibly well thought out. We tried to make headway with fluent-rs, then fluent-resmgr, then fluent-templates, but ultimately we tabled that for when we have more time.
It does look like the basic tooling is there, but the integration could use a lot of love.
Errors
All of our handlers returned an axum::response::Result<T>, where T is one of the Askama template structs mentioned previously. We defined our own AppError enum, deriving the thiserror::Error trait. Our custom Axum glue for this was a simple 500:
impl IntoResponse for AppError {
    fn into_response(self) -> Response {
        (StatusCode::INTERNAL_SERVER_ERROR, self.to_string()).into_response()
    }
}
As mentioned previously, our models returned sqlx::Error on failure. This meant that we needed to map_err our way into the right type. We could not figure out the right From incantation to avoid using map_err.
However, being able to use ? everywhere was extremely nice. The happy path was easy to read while the error path had just normal Rust magic.
Forms
We used the Axum Form
extractor, with minimal fuss. We did have to do our
own research when it came to validations, though.
We started down a path where our Serde deserialization for the form structs had validation built in, using the try_from attribute. This was fun, but dealing with extractor errors felt like we were splitting a concept (“create an article”) too far.
Instead we used the validator crate on the form structs, and then handled invalid data manually in our handlers. This felt like the right level of abstraction to us.
While we did enjoy the validator library, it was a little verbose to render the errors on the form. We ended up converting the errors into a HashMap and then working with that.
Glue
A few more general notes:
Verbosity
Axum makes a lot of sense, but is also a bit verbose. For example, to redirect:
Ok(Redirect::to(&format!("posts/{}", post.id)).into_response())
Compare in Rails:
redirect_to post
Or to render a user error:
Ok((
    StatusCode::UNPROCESSABLE_ENTITY,
    views::PostsNewTemplate {
        errors: errors.into_errors(),
    },
).into_response())
Again, compare Rails:
render :new, status: :unprocessable_entity
It would be nice to get such common handler code slimmed down while still maintaining the flexibility.
Logging
Out of the box, I want to develop in an environment where I have a log of when a request is processed, a response is sent, a DB query run, and verbose error messages, all to a simple file on my own laptop. In production, we likely want something much more complex with filtering. That we had to set this up ourselves was a bit of a time sink.
On top of that, learning about which libraries could be influenced by RUST_LOG often involved reading library code. That said, Rust’s logging and tracing facilities are quite nice.
Types
Rust’s type system is known to be fantastic. This hackathon team included someone who has been writing Rust for a few years now all the way down to someone who has made a “hello, world” and not much else. At all levels, the typechecker was in conversation with us and part of the mob and pair programming sessions, pointing out things we overlooked. The types seldom got in the way.
At one point we started to refactor in “the Ruby style”: tiny, tiny changes that we could keep entirely in our head. Partway through we switched to a “Rust style”: moving around large swaths of code with guidance from rust-analyzer and cargo check. The Rust style worked perfectly.
Conclusion
Three Rails devs with mixed Rust knowledge were able to get an ugly blog running in Axum over two days, building out the app’s framework for extensibility while we went. It was fun the entire time, and Rust truly is a pleasant language to use.
Two days is slow going for something as simple as a database of (title, body)
pairs. But Axum, Hyper, and Rust laid a solid foundation that can be built upon
quickly. When more libraries, more functionality, and more glue hit the ecosystem, Web development in Rust will be another rapid application development option. We look forward to that possibility!