When packaging an application written in Rust as a container image, there are several ways to write the Dockerfile, so I compared the image build time for each.
Rust applications take a long time to build, so depending on how the Dockerfile is written, development efficiency can suffer.
For example, suppose you want to run a Rust application on Kubernetes and use skaffold for local development. skaffold detects code changes, rebuilds the image, and redeploys the application to Kubernetes. Here you run into Rust's long build times: the image rebuilds can't keep up with the code changes.
That is just one example, but in any case I wanted a Dockerfile that builds quickly, so I compared several approaches.
In this post I measure and compare the image rebuild time after a code change. I don't care much about the size of the resulting image, and development in a Cargo workspace is out of scope.
Imagine an app that uses serde and rocket. Cargo.toml and main.rs are as follows.
[dependencies]
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
rocket = { git = "https://github.com/SergioBenitez/Rocket" }
#[macro_use] extern crate rocket;

#[get("/")]
fn hello() -> &'static str {
    "Hello, before build!"
}

#[launch]
fn rocket() -> rocket::Rocket {
    rocket::ignite().mount("/", routes![hello])
}
After building the image once, I change the code and rebuild the image. Roughly, the measurement script looks like the following. The return value of the hello() function above differs between the code before and after the change.
rm -rf src && cp -r testsrc/before src
docker build -q -f Dockerfile.base -t rust-docker-base .
rm -rf src && cp -r testsrc/after src
time docker build -q -f Dockerfile.base -t rust-docker-base .
I put each Dockerfile and script on GitHub.
The most basic Dockerfile.
FROM rust:1.48.0
WORKDIR /app
COPY . .
RUN cargo build --release
ENTRYPOINT ["/app/target/release/app"]
This is the image build method introduced in Fast + Small Docker Image Builds for Rust Apps. A dummy main.rs is built first so that the dependencies are compiled and cached before the real sources are copied in and built.
# https://shaneutt.com/blog/rust-fast-small-docker-image-builds/
FROM rust:1.48.0
WORKDIR /app
COPY Cargo.toml Cargo.toml
RUN mkdir src/
RUN echo "fn main() {println!(\"if you see this, the build broke\")}" > src/main.rs
RUN cargo build --release
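# remove the compiled dummy binary so the real main.rs is definitely rebuilt in the next step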
RUN rm -f target/release/deps/app*
COPY . .
RUN cargo build --release
ENTRYPOINT ["/app/target/release/app"]
This approach builds the image using cargo-chef. cargo-chef computes a "recipe" from the project's manifests so that the dependencies can be built and cached in a layer that source code changes do not invalidate.
# https://github.com/LukeMathWalker/cargo-chef
FROM rust as planner
WORKDIR app
# We only pay the installation cost once,
# it will be cached from the second build onwards
RUN cargo install cargo-chef
COPY . .
RUN cargo chef prepare --recipe-path recipe.json
FROM rust as cacher
WORKDIR app
RUN cargo install cargo-chef
COPY --from=planner /app/recipe.json recipe.json
RUN cargo chef cook --release --recipe-path recipe.json
FROM rust as builder
WORKDIR app
COPY . .
# Copy over the cached dependencies
COPY --from=cacher /app/target target
COPY --from=cacher $CARGO_HOME $CARGO_HOME
RUN cargo build --release
ENTRYPOINT ["/app/target/release/app"]
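To get a feel for what the recipe contains, cargo-chef can also be run locally, outside Docker. A minimal sketch, assuming cargo-chef is installed on the host:
# hypothetical local run to inspect the recipe that the planner stage produces
cargo install cargo-chef
cargo chef prepare --recipe-path recipe.json
cat recipe.json   # roughly: a JSON skeleton of the manifests and lock file, with no application source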
This takes the base Dockerfile from the first approach and adds BuildKit cache mounts.
FROM rust:1.48.0
WORKDIR /app
COPY . .
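# cache the cargo registry and the build's target directory across builds (BuildKit cache mounts)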
RUN --mount=type=cache,target=/usr/local/cargo/registry \
--mount=type=cache,target=/app/target \
cargo build --release
ENTRYPOINT ["/app/target/release/app"]
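Note that the --mount flags only work when the build runs under BuildKit, so the build has to be invoked with BuildKit enabled; depending on your Docker version, a # syntax=docker/dockerfile:experimental (or newer) line at the top of the Dockerfile may also be required. A minimal sketch of how the build might be invoked (the image tag here is just an example):
# enable BuildKit for this invocation only
time DOCKER_BUILDKIT=1 docker build -q -f Dockerfile.buildkit-base -t rust-docker-buildkit-base .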
This one also uses BuildKit, but only the directory where sccache stores its cache is mounted as a cache.
FROM rust:1.47.0
RUN cargo install sccache
ENV HOME=/app
ENV SCCACHE_CACHE_SIZE="1G"
ENV SCCACHE_DIR=$HOME/.cache/sccache
ENV RUSTC_WRAPPER="/usr/local/cargo/bin/sccache"
WORKDIR $HOME
COPY . .
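# persist only sccache's cache directory across builds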
RUN --mount=type=cache,target=/app/.cache/sccache cargo build --release
ENTRYPOINT ["/app/target/release/app"]
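To confirm that sccache is actually getting cache hits, it can print its statistics. A minimal sketch, assuming you temporarily extend the build step; it has to run in the same RUN as the build so the mounted cache is visible to the same sccache server:
# hypothetical: appended to the existing RUN --mount step
cargo build --release && sccache --show-stats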
I measured on the following machine. Just in case, I ran docker system prune -f -a before running the builds.
macOS Catalina v10.15.7
MacBook Pro (16-inch, 2019)
Processor 2.4GHz 8-Core Intel Core i9
Memory 64GB 2667 MHz DDR4
The results are as follows.
Time of Dockerfile.base
87.27 real 2.86 user 0.86 sys
Time of Dockerfile.echo
21.49 real 2.86 user 0.85 sys
Time of Dockerfile.cargochef
18.40 real 2.86 user 0.84 sys
Time of Dockerfile.buildkit-base
14.09 real 0.12 user 0.06 sys
Time of Dockerfile.buildkit-sccache
34.50 real 0.16 user 0.08 sys
Even when I changed the order of the runs, the results were almost the same.
Time of Dockerfile.buildkit-base
15.63 real 0.15 user 0.08 sys
Time of Dockerfile.cargochef
18.81 real 3.05 user 0.92 sys
Time of Dockerfile.echo
22.82 real 3.00 user 0.92 sys
Time of Dockerfile.buildkit-sccache
35.12 real 0.16 user 0.09 sys
Time of Dockerfile.base
90.60 real 2.93 user 0.89 sys
In this experiment, the BuildKit approach was the fastest. In my personal project I've been using the echo-style Dockerfile, so I'd like to try swapping it out and see whether it's really faster (and whether it can be replaced at all).
I'd also like to try emk/rust-musl-builder and see what happens.
If you have any other suggestions, please leave a comment, GitHub issue, or PR! The code for this post can be found at mkazutaka/rust-dockerfile-comparison.
Finally, here is the Dockerfile I ended up with for the actual application I'm developing, which uses the Rocket framework. skaffold v1.17.2 doesn't work with BuildKit (skaffold issue #5178), so I use a bleeding edge build of skaffold. I use debian as the base image because, as the article "Why does musl make my Rust code so slow?" describes, musl can be slow.
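For reference, a rough sketch of installing a bleeding edge skaffold build on macOS; the URL pattern comes from the skaffold docs, so verify the current one before using it:
# hypothetical install of a bleeding edge skaffold build on macOS
curl -Lo skaffold https://storage.googleapis.com/skaffold/builds/latest/skaffold-darwin-amd64
chmod +x skaffold && sudo mv skaffold /usr/local/bin/
The Dockerfile itself: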
FROM debian:buster-slim as runner
RUN apt update; apt install -y libssl1.1
FROM rust:1.48.0 as builder
WORKDIR /usr/src
RUN rustup target add x86_64-unknown-linux-musl
COPY Cargo.toml Cargo.lock ./
COPY src ./src
RUN --mount=type=cache,target=/usr/local/cargo/registry \
--mount=type=cache,target=/usr/src/target \
cargo install --path .
FROM runner
COPY --from=builder /usr/local/cargo/bin/myapp .
COPY Rocket.toml .
USER 1000
CMD ["./myapp"]