Automatically build and publish to Github Container Registry
#213
Open
proffalken wants to merge 6 commits from proffalken/feature/auto_build_and_release into master
pull from: proffalken/feature/auto_build_and_release
merge into: topaLE:master
This PR complements the sentiment behind #53, however it does the following:
It also creates tagged images for the following scenarios:
- latest for the most recent version

This is a "copy & paste" from https://github.com/MakeMonmouth/mmbot/blob/main/.github/workflows/container_build.yml and will require packages to be enabled for this repo, however it already uses the new dynamic GITHUB_TOKEN setting to authenticate.

I think this is a great improvement and is the 'standard' I've seen across many repos these days. Builds can still be done locally for testing if needed.
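The authentication the description refers to can be sketched as follows. This is a minimal, hypothetical excerpt (the action versions and job layout are assumptions, not the PR's actual file): the repository-scoped GITHUB_TOKEN is used to log in to ghcr.io, so no extra secrets need to be configured.

```yaml
# Sketch: authenticating to GHCR with the dynamic GITHUB_TOKEN.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Log in using the token GitHub injects for this workflow run;
      # requires the "packages" feature to be enabled on the repo.
      - name: Login to GitHub Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
```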
Uptime is missing the e
Good catch, I'll fix this when I'm back at a computer
Resolved in latest commit
would be helpful for a project that has daily pushes
This PR would affect teams using uptime-kuma behind corporate proxies, since ghcr.io is not usually added to those.

True, although given hub.docker.com's policies on charging for downloads etc., I suspect that many organisations will start to migrate over to GHCR or similar in the near future.

I'd also argue that, as a rule, getting new sites added to a whitelist, especially when the site is owned by a "trusted entity" such as GitHub, is probably less of a challenge than if we were hosting on "Dave's Docker Service", so this shouldn't prevent us from implementing this approach.
I'd say the workflow could publish the final image to both ghcr.io and docker.io 👍🏻
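Publishing to both registries from one build step could look roughly like this. A sketch only: the DOCKERHUB_USERNAME/DOCKERHUB_TOKEN secret names and the image names are assumptions.

```yaml
# Sketch: push the same image to docker.io and ghcr.io by logging in
# to both registries and listing a tag for each in the build step.
- name: Login to Docker Hub
  uses: docker/login-action@v1
  with:
    username: ${{ secrets.DOCKERHUB_USERNAME }}   # assumed secret name
    password: ${{ secrets.DOCKERHUB_TOKEN }}      # assumed secret name
- name: Login to GitHub Container Registry
  uses: docker/login-action@v1
  with:
    registry: ghcr.io
    username: ${{ github.actor }}
    password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: ./
    file: ./docker/dockerfile
    push: true
    tags: |
      louislam/uptime-kuma:latest
      ghcr.io/louislam/uptime-kuma:latest
```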
I like this PR because it makes the release process open source as well; right now it isn't.
context: ./
file: ./docker/dockerfile
push: true
platforms: linux/amd64, linux/arm/v7, linux/arm/v6, linux/arm64
Can we use a multiline array here (or a list, like for branches)?
Related issue: https://github.com/louislam/uptime-kuma/issues/440
@proffalken Can you add support for pushing to both ghcr.io and hub.docker.com?

@gaby I'll see what I can do in the next week, work and life are reasonably busy right now!
My preference is still the npm run build-docker command on my local machine.

Not saying that this PR is not good, but I just don't want to maintain more things in the future. For example, a few weeks ago I added Debian + Alpine based image support, and I believe this yaml file would have to be updated too.
Also, the build time is very good on my local machine, while GitHub Actions is not always good.
But maybe we can just set up CI in GitHub that only builds (and doesn't publish)?
I bet you don't build a Docker image after each commit.
If we have CI, we could start fixing build issues right after pushing a wrong commit/PR.
^ this! Also if we build a :dev image on each merge, etc., it's super easy to create a release since that would only take retagging the image.

FWIW, the code in this PR automatically generates a number of images with each run, including the following:
- a version tag when building from master/main (so uptime-kuma:1.2.3 when a commit is tagged v1.2.3 etc.)
- latest when building from master/main (uptime-kuma:latest)
- a PR tag (uptime-kuma:PR-213, for example)
- a commit tag (uptime-kuma:<SHA SUM>)

I think there's a couple of other tags it builds as well, so all of this is already in there.
If you take a look at the MVentory setup that I took this from you can see the expected output.
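The tag set listed above is what docker/metadata-action typically produces. A sketch of a step that would yield roughly those tags — the image name and action version are assumptions, and the generated PR tag is usually lowercase (pr-213):

```yaml
# Sketch: generate semver, PR, short-SHA, and latest tags automatically.
- name: Docker meta
  id: meta
  uses: docker/metadata-action@v3
  with:
    images: ghcr.io/${{ github.repository }}   # assumed image path
    tags: |
      type=semver,pattern={{version}}   # v1.2.3 tag -> 1.2.3
      type=ref,event=pr                 # pr-213 for pull requests
      type=sha                          # short commit SHA
      type=raw,value=latest,enable=${{ endsWith(github.ref, github.event.repository.default_branch) }}
```

The build step then consumes `steps.meta.outputs.tags` and `steps.meta.outputs.labels`, as quoted elsewhere in this thread.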
with:
context: ./
file: ./docker/dockerfile
push: true
This should be:
push: ${{ github.event_name != 'pull_request' }}
Otherwise every pull request is going to try to push an image.
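In context, the suggested change would make the build step look something like this (a sketch; only the push value differs from the quoted snippet):

```yaml
# Sketch: build on every event, but only push when the trigger is not a PR.
- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: ./
    file: ./docker/dockerfile
    push: ${{ github.event_name != 'pull_request' }}
    platforms: linux/amd64, linux/arm/v7, linux/arm/v6, linux/arm64
```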
type=sha
type=raw,value=latest,enable=${{ endsWith(github.ref, github.event.repository.default_branch) }}
- name: Login to Docker Hub
This should be wrapped in an IF statement like:
That way it doesn't try to log in to ghcr.io on every pull request.
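The IF statement the comment refers to was not captured here; one plausible shape, assuming the same pull-request condition used elsewhere in this thread:

```yaml
# Sketch: skip the registry login entirely on pull-request runs.
- name: Login to GitHub Container Registry
  if: github.event_name != 'pull_request'
  uses: docker/login-action@v1
  with:
    registry: ghcr.io
    username: ${{ github.actor }}
    password: ${{ secrets.GITHUB_TOKEN }}
```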
with:
context: ./
file: ./docker/dockerfile
push: true
Yup, that's deliberate; it means we can test the code for each PR in a dedicated docker image rather than waiting for the code to reach main/master before we know whether the container works properly.
type=sha
type=raw,value=latest,enable=${{ endsWith(github.ref, github.event.repository.default_branch) }}
- name: Login to Docker Hub
See comment above - we should be logging in, building, and pushing an image for every PR in order to ensure confidence in the platform.
@louislam whilst we continue to discuss this, is there any chance you can add a "latest" tag when you publish new images?
My deployments are all automated, and it gets frustrating when I forget that I've hard-coded the version for Uptime-Kuma when everything else is set to :latest.

It's a minor irritation, but it would be nice to have ;)
It should always point to the latest version of uptime-kuma. Using it is not recommended because of the breaking changes coming in version 2.x.
https://hub.docker.com/layers/louislam/uptime-kuma/latest/images/sha256-d4947d0d9ed82b22b6364b55f52932c79ee4dbf22a330fb6b92bd09eda234cdc?context=explore
I completely missed this! :D
Thanks, and yeah, understand the issues with v2, this is for my test network so I'd expect stuff there to break every now and again with new releases of the various things running on it.
with:
context: ./
file: ./docker/dockerfile
push: true
The problem is that building and pushing an image per PR will create a lot of unused tags in the registry. The latest changes made by @louislam are now testing/linting the code using GitHub Actions, which should cover your use case.
This won't work, because the main branch for uptime-kuma is master. You can replace it with:
That way it doesn't matter whether master or main is the default branch.
with:
context: ./
file: ./docker/dockerfile
push: true
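The replacement snippet appears to have been elided from the comment above; judging from the default-branch expression quoted elsewhere in this thread, it was presumably something along these lines:

```yaml
# Sketch: enable the "latest" tag based on the repo's configured default
# branch, so the workflow works whether that branch is master or main.
tags: |
  type=raw,value=latest,enable=${{ endsWith(github.ref, github.event.repository.default_branch) }}
```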
OK, so let's say I'm working on a PR, I push that code up, it lints/tests fine via Github actions, and we merge to master.
We then find that there's an issue with the container setup rather than the code (libc version change or something equally obscure), but we only find that out after it's been released to the wider world and causes a slew of github issues to be created.
If we create a container on each PR (and note that this is each PR, not each commit; it is rebuilt with the same tags each time), then we can test that the container itself works as well.
I'm really not convinced that "too many tags in the registry" is a strong enough argument when the alternative is a failing deployment for users of the application.
Happy to change this, good spot, thanks :)
with:
context: ./
file: ./docker/dockerfile
push: true
Fair enough, we do this against a private registry instead of a public one. I do get your point.
@proffalken With the recent changes of moving everything under docker/, all the checks on this PR are going to fail.

Yeah, just spotted that, should be good to go now :)
push: true
platforms: linux/amd64, linux/arm/v7, linux/arm/v6, linux/arm64
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
Maybe add docker layer caching like this?
This should save some time on pulling.
Reference: Cache eviction policy
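The caching snippet itself was not captured in this thread; one possible form, using build-push-action's (experimental, at the time) GitHub Actions cache backend:

```yaml
# Sketch: reuse Docker layers between workflow runs via the GHA cache.
- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: ./
    file: ./docker/dockerfile
    push: ${{ github.event_name != 'pull_request' }}
    cache-from: type=gha            # restore layers from previous runs
    cache-to: type=gha,mode=max     # save all intermediate layers
```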
push: true
platforms: linux/amd64, linux/arm/v7, linux/arm/v6, linux/arm64
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
The cache policy is experimental though
@proffalken The build is having issues again, related to the base images not being found.
This can be fixed by adding a step for building the base images before building the final image. The actions you are using support caching, so the final build will just use the cache from the previous stage. Ex.
The same would need to be duplicated for Alpine.
The same would need to be duplicated for Alpine.
Caching between jobs is covered here: https://github.com/docker/build-push-action/blob/master/docs/advanced/test-before-push.md
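The "Ex." above was elided; a hedged sketch of what such a two-stage build might look like (the base-image tag and dockerfile paths here are illustrative assumptions, not the repo's actual names):

```yaml
# Sketch: build the Debian base image first and load it into the local
# Docker daemon, so the final build can resolve it without a registry pull.
- name: Build base image (Debian)
  uses: docker/build-push-action@v2
  with:
    context: ./
    file: ./docker/dockerfile-base     # assumed path
    load: true
    tags: uptime-kuma-base:debian      # assumed tag
- name: Build final image (Debian)
  uses: docker/build-push-action@v2
  with:
    context: ./
    file: ./docker/dockerfile
    push: ${{ github.event_name != 'pull_request' }}
```

As noted, the same pair of steps would need to be duplicated for the Alpine variant.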
- 'master'
pull_request:
branches:
- master
For every commit on a branch that has an open pull request to master, a new image is built.
It shouldn't be like this; images should only be built when pushing commits to master (merge commits).
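The restriction suggested above would amount to dropping the pull_request trigger, roughly:

```yaml
# Sketch: trigger the workflow only on pushes to master
# (i.e. merge commits), not on pull-request events.
on:
  push:
    branches:
      - master
```

Alternatively, the pull_request trigger can stay while the push/login steps are gated on `github.event_name != 'pull_request'`, so PRs still get a build-only check.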
- 'master'
pull_request:
branches:
- master
IMO we should do it, because building on Windows is different from building in a Linux container.
- 'master'
pull_request:
branches:
- master
Building the container only on specific branches (i.e. master/main/dev) means that there is very exact control over when a new image gets built, and anyone attempting to put up a devious image would have to get it to pass code review. It is much harder for someone to slip a devious image in accidentally.
- 'master'
pull_request:
branches:
- master
Images are deliberately built for each PR so that they can be tested before releasing.
Failure to do so could result in underlying issues with the container (a glibc / nodejs upgrade at the OS level, etc.) breaking the deployment, and unless we test at the PR level we would only find this out after the "production" container has been released.
Building and testing on branches is an established pattern within the ci/cd community because it brings value without adding unnecessary complication.
To specifically address your point about "bad" images: all images built from a PR would be tagged with the PR number, in the format uptime-kuma:PR-3. This means that someone would have to deliberately update their configuration to pull these images, and would never see them otherwise, even if their setup was configured for uptime-kuma:latest.

As the code would not be merged until after a review, the chances of a "bad" container making it through to release are just as likely as "bad" code making it through, at which point it stops being an issue with how we package and publish and becomes an issue with how we review the code.
- 'master'
pull_request:
branches:
- master
I'm used to a PR -> dev/nightly -> master/main process. All steps are still done before prod, in dev/nightly. Everything is still fully tested, including the container builds. Some things however, like steps accessing keys or pushing test images to public, wait for code review & approval/merging to dev before running.
I don't like the possibility of ANY container potentially containing un-reviewed code being public. It's unlikely, but we shouldn't assume someone won't just sort by newest tag and use that. Imagine someone inexperienced, who doesn't know git[hub/lab], reversing PR to mean Public Release instead of Pull Request. The more I think on it, the more I am against blindly pushing PRs to the registry.
Can you link some other projects that are using this process please? I will still be paranoid, but I will withdraw my comments if this is indeed a commonly accepted pattern.
Sorry if I'm overly security paranoid, but it pays to be so nowadays unfortunately.