Automatically build and publish to GitHub Container Registry #213

Open
proffalken wants to merge 6 commits from proffalken/feature/auto_build_and_release into master
proffalken commented 4 years ago (Migrated from github.com)
Owner

This PR complements the sentiment behind #53; however, it does the following:

  1. Build for multiple architectures using the GitHub Actions free workflow builders
  2. Release to the GitHub Container Registry instead of hub.docker.com to avoid the [container download restrictions](https://docs.docker.com/docker-hub/download-rate-limit/)

It also creates tagged images for the following scenarios:

  • Each new PR that is created, tagged with the PR number and SHA-1
  • Each merge to master, tagged with the SHA-1, auto-incremented version number, and `latest` for the most recent version

This is a "copy&paste" from https://github.com/MakeMonmouth/mmbot/blob/main/.github/workflows/container_build.yml and will require packages to be enabled for this repo, however it already uses the new dynamic GITHUB_TOKEN setting to authenticate.

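For illustration, the workflow being proposed looks roughly like the sketch below. The action versions, step names, and image path are assumptions for readability rather than the exact contents of the PR's workflow file.

```yaml
name: Build and publish container

on:
  push:
    branches:
      - master
  pull_request:
    branches:
      - master

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v2

      # QEMU + Buildx are needed for the multi-architecture builds
      - uses: docker/setup-qemu-action@v1
      - uses: docker/setup-buildx-action@v1

      # Authenticate against ghcr.io with the dynamic GITHUB_TOKEN
      - uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      # Generate image tags and labels from the git context
      - id: meta
        uses: docker/metadata-action@v3
        with:
          images: ghcr.io/${{ github.repository }}

      - name: Build and push
        uses: docker/build-push-action@v2
        with:
          context: ./
          file: ./docker/dockerfile
          push: true
          platforms: linux/amd64, linux/arm/v7, linux/arm/v6, linux/arm64
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
```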
bsord commented 4 years ago (Migrated from github.com)
Owner

I think this is a great improvement and is the 'standard' I've seen across many repos these days. Builds can still be done locally for testing if needed.

austin-engle (Migrated from github.com) reviewed 4 years ago
austin-engle (Migrated from github.com) commented 4 years ago
Owner

Uptime is missing the e

proffalken (Migrated from github.com) reviewed 4 years ago
proffalken (Migrated from github.com) commented 4 years ago
Poster
Owner

Good catch, I'll fix this when I'm back at a computer

proffalken (Migrated from github.com) reviewed 4 years ago
proffalken (Migrated from github.com) commented 4 years ago
Poster
Owner

Resolved in latest commit

MichelBaie commented 4 years ago (Migrated from github.com)
Owner

Would be helpful for a project that has daily pushes.

gaby commented 4 years ago (Migrated from github.com)
Owner

This PR would affect teams using uptime-kuma behind corporate proxies, since `ghcr.io` is not usually added to those.

proffalken commented 4 years ago (Migrated from github.com)
Poster
Owner

This PR would affect teams using uptime-kuma behind corporate proxies, since ghcr.io is not usually added to those.

True, although given hub.docker.com's policies on charging for downloads etc., I suspect that many organisations will start to migrate over to GHCR or similar in the near future.

I'd also argue that, as a rule, getting new sites added to a whitelist, especially when that site is owned by a "trusted entity" such as GitHub, is probably less of a challenge than if we were hosting on "Dave's Docker Service", so it shouldn't prevent us from implementing this approach.

gaby commented 4 years ago (Migrated from github.com)
Owner

This PR would affect teams using uptime-kuma behind corporate proxies, since ghcr.io is not usually added to those.

True, although given hub.docker.com's policies on charging for downloads etc., I suspect that many organisations will start to migrate over to GHCR or similar in the near future.

I'd also argue that, as a rule, getting new sites added to a whitelist, especially when that site is owned by a "trusted entity" such as GitHub, is probably less of a challenge than if we were hosting on "Dave's Docker Service", so it shouldn't prevent us from implementing this approach.

I'd say the workflow could publish the final image to both ghcr.io and docker.io 👍🏻

I like this PR because it makes the release process open source as well; right now it isn't.

Saibamen (Migrated from github.com) reviewed 4 years ago
context: ./
file: ./docker/dockerfile
push: true
platforms: linux/amd64, linux/arm/v7, linux/arm/v6, linux/arm64
Saibamen (Migrated from github.com) commented 4 years ago
Owner

Can we use a multiline array here (or a list like for `branches`)?

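For illustration, a sketch of the same value written as a newline-delimited list, assuming the list-type inputs of docker/build-push-action accept that form as documented:

```yaml
# Sketch: the same platforms value as a newline-delimited list,
# which the action parses the same way as the comma-separated form.
platforms: |
  linux/amd64
  linux/arm/v7
  linux/arm/v6
  linux/arm64
```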
Saibamen commented 4 years ago (Migrated from github.com)
Owner
Related issue: https://github.com/louislam/uptime-kuma/issues/440
gaby commented 4 years ago (Migrated from github.com)
Owner

@proffalken Can you add support for pushing to both `ghcr.io` and `hub.docker.com`?

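A common way to do this is sketched below; the Docker Hub secret names are assumptions, and `louislam/uptime-kuma` is used as the Hub image name.

```yaml
# Hedged sketch: two login steps, then one metadata step whose `images`
# input lists both registries so a single build-and-push publishes to each.
# DOCKERHUB_USERNAME / DOCKERHUB_TOKEN are assumed secret names.
- name: Login to GitHub Container Registry
  uses: docker/login-action@v1
  with:
    registry: ghcr.io
    username: ${{ github.actor }}
    password: ${{ secrets.GITHUB_TOKEN }}

- name: Login to Docker Hub
  uses: docker/login-action@v1
  with:
    username: ${{ secrets.DOCKERHUB_USERNAME }}
    password: ${{ secrets.DOCKERHUB_TOKEN }}

- name: Extract metadata
  id: meta
  uses: docker/metadata-action@v3
  with:
    images: |
      ghcr.io/${{ github.repository }}
      louislam/uptime-kuma
```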
proffalken commented 4 years ago (Migrated from github.com)
Poster
Owner

@gaby I'll see what I can do in the next week, work and life are reasonably busy right now!

louislam commented 4 years ago (Migrated from github.com)
Owner

My preference is still the `npm run build-docker` command on my local machine.

Not saying that this PR is not good, but I just don't want to maintain more things in the future. For example, a few weeks ago I added Debian + Alpine based image support, and I believe this YAML file would have to be updated too.

Also, the build time is very good on my local machine, while GitHub Actions is not always good.

Saibamen commented 4 years ago (Migrated from github.com)
Owner

But maybe we can just set up CI in GitHub that only builds (does not publish)?

I bet you don't build the Docker image after each commit.

If we have CI, we could start fixing build issues right after pushing a bad commit/PR.

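A build-only check along these lines would catch Dockerfile breakage on every PR without publishing anything (a sketch, using the same build-push-action as the rest of the workflow):

```yaml
# Sketch: build-only check, nothing is logged in to or pushed.
- name: Build (no publish)
  uses: docker/build-push-action@v2
  with:
    context: ./
    file: ./docker/dockerfile
    push: false
```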
gaby commented 4 years ago (Migrated from github.com)
Owner

But maybe we can just set up CI in GitHub that only builds (does not publish)?

I bet you don't build the Docker image after each commit.

If we have CI, we could start fixing build issues right after pushing a bad commit/PR.

^ this! Also, if we build a `:dev` image on each merge, etc., it's super easy to create a release since that would only take retagging the image.

proffalken commented 4 years ago (Migrated from github.com)
Poster
Owner

^ this! Also, if we build a `:dev` image on each merge, etc., it's super easy to create a release since that would only take retagging the image.

FWIW, the code in this PR automatically generates a number of images with each run, including the following:

  • Git tag when building from `master`/`main` (so `uptime-kuma:1.2.3` when a commit is tagged `v1.2.3`, etc.)
  • `latest` when building from `master`/`main` (`uptime-kuma:latest`)
  • PR ID (`uptime-kuma:PR-213`, for example)
  • Checksum (`uptime-kuma:<SHA SUM>`)

I think there are a couple of other tags it builds as well, so all of this is already in there.

If you take a look at the [MVentory setup](https://github.com/MakeMonmouth/mventory/pkgs/container/mventory) that I took this from, you can see the expected output.

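The tag list above roughly corresponds to a docker/metadata-action configuration along the lines of the sketch below; the exact rules in the PR's workflow file may differ.

```yaml
# Sketch of tag rules that would produce the tags listed above:
#   type=ref,event=pr -> uptime-kuma:pr-213
#   type=sha          -> uptime-kuma:sha-<short SHA>
#   type=semver       -> uptime-kuma:1.2.3 from a v1.2.3 git tag
- name: Extract metadata
  id: meta
  uses: docker/metadata-action@v3
  with:
    images: ghcr.io/${{ github.repository }}
    tags: |
      type=ref,event=pr
      type=sha
      type=semver,pattern={{version}}
      type=raw,value=latest,enable=${{ endsWith(github.ref, github.event.repository.default_branch) }}
```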
gaby (Migrated from github.com) reviewed 4 years ago
with:
context: ./
file: ./docker/dockerfile
push: true
gaby (Migrated from github.com) commented 4 years ago
Owner

This should be:

${{ github.event_name != 'pull_request' }}

Otherwise, every pull request is going to try to push an image.

gaby (Migrated from github.com) reviewed 4 years ago
type=sha
type=raw,value=latest,enable=${{ endsWith(github.ref, github.event.repository.default_branch) }}
- name: Login to Docker Hub
gaby (Migrated from github.com) commented 4 years ago
Owner

This should be wrapped in an `if` statement like:

- if: github.event_name != 'pull_request'
  name: Login to Docker Hub

That way it doesn't try to log in to ghcr.io on every pull request.

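Put together with the earlier `push:` suggestion, that would look roughly like this sketch (the step name is kept from the existing workflow even though it logs in to ghcr.io):

```yaml
# Sketch: skip both the registry login and the push on pull requests.
- if: github.event_name != 'pull_request'
  name: Login to Docker Hub
  uses: docker/login-action@v1
  with:
    registry: ghcr.io
    username: ${{ github.actor }}
    password: ${{ secrets.GITHUB_TOKEN }}

- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: ./
    file: ./docker/dockerfile
    push: ${{ github.event_name != 'pull_request' }}
```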
proffalken (Migrated from github.com) reviewed 4 years ago
with:
context: ./
file: ./docker/dockerfile
push: true
proffalken (Migrated from github.com) commented 4 years ago
Poster
Owner

Yup, that's deliberate: it means we can test the code for each PR in a dedicated Docker image rather than waiting for the code to reach main/master before we know whether the container works properly.

proffalken (Migrated from github.com) reviewed 4 years ago
type=sha
type=raw,value=latest,enable=${{ endsWith(github.ref, github.event.repository.default_branch) }}
- name: Login to Docker Hub
proffalken (Migrated from github.com) commented 4 years ago
Poster
Owner

See comment above - we should be logging in, building, and pushing an image for every PR in order to ensure confidence in the platform.

proffalken commented 4 years ago (Migrated from github.com)
Poster
Owner

@louislam whilst we continue to discuss this, is there any chance you can add a "latest" tag when you publish new images?

My deployments are all automated and it gets frustrating when I forget that I've hard-coded the version for Uptime-Kuma when everything else is set to `:latest`.

It's a minor irritation, but it would be nice to have ;)

louislam commented 4 years ago (Migrated from github.com)
Owner

@louislam whilst we continue to discuss this, is there any chance you can add a "latest" tag when you publish new images?

My deployments are all automated and it gets frustrating when I forget that I've hard-coded the version for Uptime-Kuma when everything else is set to :latest.

It's a minor irritation, but it would be nice to have ;)

The `latest` tag should always point to the latest version of Uptime Kuma. Using it is not recommended, though, because of the breaking changes coming in version 2.x.

https://hub.docker.com/layers/louislam/uptime-kuma/latest/images/sha256-d4947d0d9ed82b22b6364b55f52932c79ee4dbf22a330fb6b92bd09eda234cdc?context=explore

proffalken commented 4 years ago (Migrated from github.com)
Poster
Owner

The `latest` tag should always point to the latest version of Uptime Kuma. Using it is not recommended, though, because of the breaking changes coming in version 2.x.

I completely missed this! :D

Thanks, and yeah, I understand the issues with v2; this is for my test network, so I'd expect stuff there to break every now and again with new releases of the various things running on it.

gaby (Migrated from github.com) reviewed 4 years ago
with:
context: ./
file: ./docker/dockerfile
push: true
gaby (Migrated from github.com) commented 4 years ago
Owner

The problem is that building and pushing an image per PR will create a lot of unused tags in the registry. The latest changes made by @louislam now test/lint the code using GitHub Actions, which should cover your use case.

gaby (Migrated from github.com) reviewed 4 years ago
gaby (Migrated from github.com) commented 4 years ago
Owner

This won't work, because the main branch for `uptime-kuma` is `master`. You can replace it with:

type=raw,value=latest,enable=${{ endsWith(github.ref, github.event.repository.default_branch) }}

That way it doesn't matter whether `master` or `main` is the default branch.

proffalken (Migrated from github.com) reviewed 4 years ago
with:
context: ./
file: ./docker/dockerfile
push: true
proffalken (Migrated from github.com) commented 4 years ago
Poster
Owner

OK, so let's say I'm working on a PR: I push the code up, it lints/tests fine via GitHub Actions, and we merge to master.

We then find that there's an issue with the container setup rather than the code (a libc version change or something equally obscure), but we only find that out *after* it's been released to the wider world and caused a slew of GitHub issues to be created.

If we create a container on each PR (and note that this is each PR, not each commit; it is rebuilt with the same tags each time), then we can test that the container itself works as well.

I'm really not convinced that "too many tags in the registry" is a strong enough argument when the alternative is a failing deployment for users of the application.

proffalken (Migrated from github.com) reviewed 4 years ago
proffalken (Migrated from github.com) commented 4 years ago
Poster
Owner

Happy to change this, good spot, thanks :)

gaby (Migrated from github.com) reviewed 4 years ago
with:
context: ./
file: ./docker/dockerfile
push: true
gaby (Migrated from github.com) commented 4 years ago
Owner

Fair enough, we do this against a private registry instead of a public one. I do get your point.

gaby commented 4 years ago (Migrated from github.com)
Owner

@proffalken With the recent changes of moving everything under `docker/`, all the checks on this PR are going to fail.

proffalken commented 4 years ago (Migrated from github.com)
Poster
Owner

@proffalken With the recent changes of moving everything under docker/ all the checks on this PR are going to fail.

Yeah, just spotted that, should be good to go now :)

TonyRL (Migrated from github.com) reviewed 4 years ago
push: true
platforms: linux/amd64, linux/arm/v7, linux/arm/v6, linux/arm64
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
TonyRL (Migrated from github.com) commented 4 years ago
Owner

Maybe add [docker layer caching](https://github.com/docker/build-push-action/blob/master/docs/advanced/cache.md#cache-backend-api) like this?

cache-from: type=gha, scope=${{ github.workflow }}
cache-to: type=gha, mode=max, scope=${{ github.workflow }}

This should save some time on pulling.
Reference: [Cache eviction policy](https://docs.github.com/en/actions/advanced-guides/caching-dependencies-to-speed-up-workflows#usage-limits-and-eviction-policy)

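Applied to the existing build step, that would look roughly like the sketch below; the GitHub Actions cache backend for BuildKit was still experimental at the time.

```yaml
# Sketch: reuse image layers from the GitHub Actions cache between runs.
- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: ./
    file: ./docker/dockerfile
    push: true
    platforms: linux/amd64, linux/arm/v7, linux/arm/v6, linux/arm64
    tags: ${{ steps.meta.outputs.tags }}
    labels: ${{ steps.meta.outputs.labels }}
    cache-from: type=gha,scope=${{ github.workflow }}
    cache-to: type=gha,mode=max,scope=${{ github.workflow }}
```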
gaby (Migrated from github.com) reviewed 4 years ago
push: true
platforms: linux/amd64, linux/arm/v7, linux/arm/v6, linux/arm64
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
gaby (Migrated from github.com) commented 4 years ago
Owner

The cache policy is experimental, though.

gaby commented 4 years ago (Migrated from github.com)
Owner

@proffalken The build is having issues again, related to the base images not being found.

This can be fixed by adding a step that builds the `base` images before building the final image. The actions you are using support caching, so it will just use the cache from the previous stage.

Ex.

      # Build the shared base image first...
      - name: Build Base and push
        id: docker_build_base
        uses: docker/build-push-action@v2
        with:
          context: ./
          file: ./docker/debian-base.dockerfile

      # ...then build the final image on top of it.
      - name: Build Final and push
        id: docker_build_final
        uses: docker/build-push-action@v2
        with:
          context: ./
          file: ./docker/dockerfile

The same would need to be duplicated for Alpine.

Caching between jobs is covered here: https://github.com/docker/build-push-action/blob/master/docs/advanced/test-before-push.md

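Fleshed out slightly, that could look like the sketch below. The base image tag is an assumption, and the final Dockerfile is assumed to `FROM` whatever tag the base step publishes.

```yaml
# Hedged sketch: publish the Debian base under a known tag first, then
# build the final image, whose Dockerfile is assumed to reference that tag.
- name: Build Base and push
  uses: docker/build-push-action@v2
  with:
    context: ./
    file: ./docker/debian-base.dockerfile
    push: true
    tags: ghcr.io/${{ github.repository }}:base-debian   # assumed tag name

- name: Build Final and push
  uses: docker/build-push-action@v2
  with:
    context: ./
    file: ./docker/dockerfile
    push: true
    tags: ${{ steps.meta.outputs.tags }}
    labels: ${{ steps.meta.outputs.labels }}
```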
deanomus (Migrated from github.com) reviewed 4 years ago
- 'master'
pull_request:
branches:
- master
deanomus (Migrated from github.com) commented 4 years ago
Owner

For every commit on a branch for which there is a pull request to master, a new image is built.
It shouldn't be like this; images should only be built when pushing commits to master (merge commits).

Saibamen (Migrated from github.com) reviewed 4 years ago
- 'master'
pull_request:
branches:
- master
Saibamen (Migrated from github.com) commented 4 years ago
Owner

IMO we should do it, because building on Windows is different from building in a Linux container.

deefdragon (Migrated from github.com) reviewed 4 years ago
- 'master'
pull_request:
branches:
- master
deefdragon (Migrated from github.com) commented 4 years ago
Owner

Building the container only on specific branches (i.e. master/main/dev) means that there is very precise control over when a new image gets built, and anyone attempting to put up a devious image would have to get it to pass code review. Much harder for someone to accidentally get a devious image.

proffalken (Migrated from github.com) reviewed 4 years ago
- 'master'
pull_request:
branches:
- master
proffalken (Migrated from github.com) commented 4 years ago
Poster
Owner

Images are deliberately built for each PR so that they can be tested before releasing.

Failure to do so could result in underlying issues with the container (a glibc / Node.js upgrade at the OS level, etc.) breaking the deployment, and unless we test at the PR level we would only find this out *after* the "production" container has been released.

Building and testing on branches is an established pattern within the CI/CD community because it brings value without adding unnecessary complication.

To specifically address your point about "bad" images: all images built from a PR would be tagged with the PR number, in the format `uptime-kuma:PR-3`. This means that someone would have to **deliberately** update their configuration to pull these images, and would **never** see them otherwise, even if their setup was configured for `uptime-kuma:latest`.

As the code would not be merged until after a review, a "bad" container is just as likely to make it through to release as "bad" code is, at which point it stops being an issue with how we package and publish and becomes an issue with how we review the code.

deefdragon (Migrated from github.com) reviewed 4 years ago
- 'master'
pull_request:
branches:
- master
deefdragon (Migrated from github.com) commented 4 years ago
Owner

I'm used to a PR -> dev/nightly -> master/main process. All steps are still done before prod, in dev/nightly. Everything *is still fully tested*, including the container builds. Some things, however, like steps accessing keys or pushing test images to the public, wait for code review & approval/merging to dev before running.

I don't like the possibility of ANY container potentially containing un-reviewed code being public. It's unlikely, but we shouldn't assume someone won't just sort by newest tag and use that. Imagine someone inexperienced, who doesn't know git[hub/lab], reading PR as Public Release instead of Pull Request. The more I think on it, the more I am against blindly pushing PRs to the registry.

an established pattern within the CI/CD community

Can you link some other projects that are using this process, please? I will still be paranoid, but I will withdraw my comments if this is indeed a commonly accepted pattern.

Sorry if I'm being overly security-paranoid, but it pays to be so nowadays, unfortunately.
