Over the last few days I've figured out how to use Forgejo Actions. I was excited to try it since it's integrated directly into Forgejo these days, and compatibility with GitHub Actions means there are already loads of third-party actions to take advantage of. Previously I was using Drone CI. I still am in a number of projects, but I'm hoping to get everything moved over to Forgejo Actions over the next week or so.
But that's not why I'm writing this blog post. I'm writing it because I haven't found very many useful resources for understanding how Actions works, despite the prevalence of GitHub Actions. All the documentation makes it look so simple to use, and in many cases I'm sure it is, but when you're running it yourself, you'll find the edges.
Getting Started
Setting up Forgejo Actions isn't a huge deal. I'm running it via docker-compose, but it can be run natively, in kubernetes, with lxc, etc. The one crucial thing is that it needs access to a docker daemon in order to operate. Giving it a Docker-in-Docker container is fine. Here's my DinD section:
```yaml
services:
  docker-in-docker:
    image: docker:dind
    privileged: true
    command: ["dockerd", "-H", "tcp://0.0.0.0:2375", "--tls=false"]
    restart: always
```
Simple enough? Now we're going to get a little more complicated with the runner configuration. The runner needs to be brought up in two steps: the first registers the runner with Forgejo and creates the runner configuration file; the second launches the runner process. This is unfortunately not as simple as launching a container, but it isn't too bad.
```yaml
services:
  forgejo-runner-1-register:
    image: code.forgejo.org/forgejo/runner:3.3.0
    links:
      - docker-in-docker
    environment:
      DOCKER_HOST: tcp://docker-in-docker:2375
    volumes:
      - /storage/forgejo-actions/runner-1:/data
    user: 0:0
    command: >-
      bash -ec '
        if [ -f config.yml ]; then
          exit 0 ;
        fi ;
        while : ; do
          forgejo-runner register --no-interactive --instance https://git.asonix.dog --name bluestar-runner-1 --token TOKEN && break ;
          sleep 1 ;
        done ;
        forgejo-runner generate-config > config.yml ;
        sed -i -e "s|network: .*|network: host|" config.yml ;
        sed -i -e "s|labels: \[\]|labels: \[\"docker:docker://bash:alpine3.19\"\]|" config.yml ;
        chown -R 1000:1000 /data
      '
```
This is the first step. If there isn't an existing configuration file, the script loops attempting to register the runner with Forgejo, and when registration succeeds it writes the configuration file and updates some values. This can be more-or-less copied verbatim, with the exception of TOKEN, which needs to be copied from the Forgejo Actions admin panel. We'll come back to the config.yml file later. Next up we actually run the runner.
```yaml
services:
  forgejo-runner-1-daemon:
    image: code.forgejo.org/forgejo/runner:3.3.0
    links:
      - docker-in-docker
    environment:
      DOCKER_HOST: tcp://docker-in-docker:2375
    depends_on:
      forgejo-runner-1-register:
        condition: service_completed_successfully
    volumes:
      - /storage/forgejo-actions/runner-1:/data
    command: "forgejo-runner --config config.yml daemon"
    restart: always
```
A lot less going on. We let the Forgejo runner access our Docker-in-Docker daemon and launch it
after the registration container finishes. Off to a great start.
Using my basic "make sure it works" action that I cobbled together after reading some of the
documentation, we can make sure the runner works:
```yaml
on:
  pull_request:
  push:
    branches:
      - main
    tags:
      - "v*.*.*"

env:
  BINARY: example

jobs:
  test:
    runs-on: docker
    strategy:
      matrix:
        info:
          - arch: amd64
          - arch: arm64v8
          - arch: arm32v7
    steps:
      - run: env
      - run: echo "${{ matrix.info.arch }} Good"

  test2:
    runs-on: docker
    container:
      image: debian:bookworm-slim
    steps:
      - run: echo "Hello, debian"

  test3:
    runs-on: docker
    container:
      image: docker.io/asonix/rust-builder:latest-linux-arm32v7
    steps:
      - run: cargo init --bin --name $BINARY
      - run: build
```
This all runs successfully when a branch or a tag matching v*.*.* is pushed. We did it! We're done!

This is a deliberately simplified version of the setup I actually went through, which involved bash not existing at first.
Let's add it to an existing project (say, pict-rs):
```yaml
on:
  push:
  pull_request:
    branches:
      - main

jobs:
  clippy:
    runs-on: docker
    container:
      image: docker.io/asonix/rust-builder:latest-linux-arm32v7
    steps:
      -
        name: Checkout pict-rs
        uses: actions/checkout@v4
      -
        name: Clippy
        run: |
          cargo clippy --no-default-features -- -D warnings
          cargo clippy --no-default-features --features io-uring -- -D warnings
```
And run it!

```
OCI runtime exec failed: exec failed: unable to start container process: exec: "node": executable file not found in $PATH: unknown
```

Oh.

...what?

Hmm...
The Problems
So actions/checkout@v4 depends on node to run, but my rust-builder container doesn't have node in it, so... I can't check out my code? Well, let's just split this up a bit, then.
```yaml
on:
  push:
  pull_request:
    branches:
      - main

jobs:
  clone:
    runs-on: docker
    container:
      image: docker.io/node:20-bookworm
    steps:
      -
        name: Checkout pict-rs
        uses: actions/checkout@v4

  clippy:
    needs: [clone]
    runs-on: docker
    container:
      image: docker.io/asonix/rust-builder:latest-linux-amd64
    steps:
      -
        name: Clippy
        run: |
          cargo clippy --no-default-features -- -D warnings
          cargo clippy --no-default-features --features io-uring -- -D warnings
```
Except that doesn't work. The cloned repo doesn't stick around between jobs. If we had more than one runner, the jobs might not even run on the same one! We could try solving this with artifacts, but wait... actions/download-artifact@v4 also depends on node, so we can't run it in the rust-builder container.
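For what it's worth, the artifact route would have looked something like the sketch below, and it fails for the same reason: both artifact actions are node-based, so the download step can't run inside rust-builder. (This is illustrative, not a workflow I shipped.)

```yaml
# Sketch only: this does NOT work with a node-less job container
jobs:
  clone:
    runs-on: docker
    container:
      image: docker.io/node:20-bookworm
    steps:
      - uses: actions/checkout@v4
      - uses: actions/upload-artifact@v4
        with:
          name: source
          path: .

  clippy:
    needs: [clone]
    runs-on: docker
    container:
      image: docker.io/asonix/rust-builder:latest-linux-amd64
    steps:
      # fails just like checkout did: download-artifact needs node
      - uses: actions/download-artifact@v4
        with:
          name: source
```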
So we have Actions but we can't use them. How does GitHub handle this? Well, GitHub's answer is to install anything you could ever need into their default actions images. Meaning node, go, python, ruby, docker (oh, docker... we'll need that too) and more are all bundled into a 60GB image.
If you remember, we had a script that ran sed on the config earlier:

```shell
sed -i -e "s|labels: \[\]|labels: \[\"docker:docker://bash:alpine3.19\"\]|" config.yml ;
```

We were setting a value in the Forgejo runner's config to provide our runners with default containers. We started with bash on alpine. While that particular container is very small and doesn't take long to download, it doesn't contain the majority of things that actions expect to exist.
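To see what that substitution actually does, here's a small reproducible demo against a stand-in file (the real one comes from forgejo-runner generate-config):

```shell
# Stand-in for a freshly generated config: the generated file contains
# an empty labels list under the runner section.
printf 'runner:\n  labels: []\n' > /tmp/demo-config.yml

# The same substitution the registration script runs:
sed -i -e "s|labels: \[\]|labels: \[\"docker:docker://bash:alpine3.19\"\]|" /tmp/demo-config.yml

# Now the runner advertises one label mapping "docker" to a default image.
cat /tmp/demo-config.yml
```

The label format is name:docker://image, which is why jobs declaring runs-on: docker were landing in bash:alpine3.19 by default.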
In order to build pict-rs we need rust, and not just any rust but a rust that is capable of
cross-building (I target armv7, aarch64, and x86_64 with musl libc). In order to clone pict-rs, use
the actions cache, use the actions artifacts, and more we need nodejs. In order to build docker
containers we need docker and qemu. The container image we need to have or make to run our CI is now
nontrivial.
As a proof of concept, I first wrote my actions to install everything by hand. I started with the node:20-bookworm image from dockerhub. I had steps to apt-get install docker, download and install rustup, add the proper targets, add clippy, add cargo-binstall, cargo binstall cargo-zigbuild, and install zig. While this worked, it took a while just in the setup phase, which isn't what we want for CI.
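Roughly, those by-hand steps looked like the sketch below. Treat the exact package names, URLs, and the zig action as illustrative stand-ins rather than my actual workflow:

```yaml
# Illustrative reconstruction of the proof-of-concept setup steps
steps:
  # docker CLI, so later steps can build images
  - run: apt-get update && apt-get install -y docker.io
  # rustup, plus the cross targets and clippy
  - run: curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
  - run: |
      . "$HOME/.cargo/env"
      rustup target add x86_64-unknown-linux-musl aarch64-unknown-linux-musl armv7-unknown-linux-musleabihf
      rustup component add clippy
  # cargo-binstall, then cargo-zigbuild via binstall
  - run: |
      . "$HOME/.cargo/env"
      curl -L https://github.com/cargo-bins/cargo-binstall/releases/latest/download/cargo-binstall-x86_64-unknown-linux-musl.tgz | tar xz -C "$HOME/.cargo/bin"
      cargo binstall -y cargo-zigbuild
  # stand-in for whichever install-zig action you use
  - uses: goto-bus-stop/setup-zig@v2
```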
My previous CI for pict-rs used my rust-builder image, which doesn't actually use cargo-zigbuild. I opted to use zig's linker for pict-rs in Forgejo Actions because I knew it would be easier than manually constructing a cross-compile environment. It would also enable me to use a single container to build for any platform, rather than my previous CI, which had a unique container for each platform I targeted.
So I wrote a bit of caching. I had used an action to install zig, and that action cached zig on the
runner. I wrote my own caching layer for all the rustup and cargo bits. That sped things up as well,
but it still meant using space in the runner cache, and potentially installing everything again on a
cache miss. In this process, I also hit the github rate limit for downloading cargo-zigbuild via
cargo-binstall, meaning I had to start compiling it from crates.io instead (which doesn't take too
long, but it's still longer than downloading a binary).
As an aside, I had to set DOCKER_HOST: tcp://docker-in-docker:2375 as an environment variable in the runner config.yml file so that my use of docker in the actions container would find my docker-in-docker daemon.
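In config.yml terms, that means an entry under the runner's envs map. This is a sketch of just the relevant fragment, going by the shape of the generated config, not my full file:

```yaml
runner:
  envs:
    # forwarded into job containers so `docker` finds the DinD daemon
    DOCKER_HOST: tcp://docker-in-docker:2375
```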
Giving In
I decided I needed a universal base container image to run my CI the way github does it, because
it's the only way that makes sense for their CI system. If all the actions people write are going to
expect me to have things installed, then I better have them installed. You can find the actions
workflow I use to produce my base image in my
actions-base-image repository. I'm sure that in
the future I will encounter more actions that fail to run on this image and I will need to update it
to add more dependencies.
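To give a flavor of what such an image involves, here's a hedged Dockerfile sketch. Everything in it (base image aside, which I mentioned starting from earlier) is illustrative, not a copy of actions-base-image, and the zig version is just an example:

```dockerfile
# Illustrative kitchen-sink CI base image, not the real actions-base-image
FROM docker.io/node:20-bookworm

# docker CLI so workflow steps can talk to the docker-in-docker daemon
RUN apt-get update \
    && apt-get install -y --no-install-recommends docker.io curl xz-utils \
    && rm -rf /var/lib/apt/lists/*

# rust with the musl cross targets, plus clippy
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
ENV PATH="/root/.cargo/bin:${PATH}"
RUN rustup target add \
      x86_64-unknown-linux-musl \
      aarch64-unknown-linux-musl \
      armv7-unknown-linux-musleabihf \
    && rustup component add clippy

# zig, used as the cross linker via cargo-zigbuild
RUN curl -L https://ziglang.org/download/0.11.0/zig-linux-x86_64-0.11.0.tar.xz \
      | tar -xJ -C /opt \
    && ln -s /opt/zig-linux-x86_64-0.11.0/zig /usr/local/bin/zig
RUN cargo install cargo-zigbuild
```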
I also wrote another caching action, simpler now than before since all the rust, zig, and docker bits are baked into the base image. You can find it in the cache-rust-dependencies folder. It's extremely basic but saves me a download from crates.io for each job pict-rs runs.
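The real action lives in that folder; a minimal composite action in the same spirit (all names and paths here are illustrative, not the actual implementation) could look like:

```yaml
# Hypothetical sketch of a cargo-caching composite action
name: cache rust dependencies
description: cache the cargo registry between runs
runs:
  using: composite
  steps:
    - uses: actions/cache@v4
      with:
        # the registry index and downloaded crates are the expensive bits
        path: |
          ~/.cargo/registry/index
          ~/.cargo/registry/cache
        key: cargo-registry-${{ hashFiles('**/Cargo.lock') }}
```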
So?
Am I happy with Forgejo Actions? Not really. I think the design of Actions is pretty bad for the Forgejo Actions case, where you control the runners yourself. On GitHub it's fine, since GitHub manages the 60GB image behind the scenes and you never need to think about it. Outside of GitHub it's less ideal. I'm still going to migrate all my projects to it now that I have everything working and a not-too-big base image for myself (it's 1GB).
I hope anyone else struggling with Actions (GitHub, Gitea, Forgejo, or otherwise) gains some insight from this post. It's not like I built this blog in the first place to be able to put this online or anything. Let's hope now that I've written all this that I don't need to update my base image to publish this to garage.
I ended up adding minio-client to my base image in order to publish this.