Compare commits

..

41 commits

Author SHA1 Message Date
Alexis Métaireau
a647485fdb
Update the docs
2025-02-11 17:43:05 +01:00
Alexis Métaireau
769a78dd27
Reorganize the registry.py module to be simpler 2025-02-11 17:20:01 +01:00
Alexis Métaireau
46f510ab79
Check if the logIndex is greater than the last known one before upgrading
Each signature is logged to Rekor, and the log index is then part of
the signature itself. Checking that the logIndex in a given container
image signature is greater than the last known one ensures that we only
move forward in time, and avoids installing older container images in
the belief that they are newer than the current one.
2025-02-11 16:13:28 +01:00
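The downgrade protection described in this commit boils down to a small piece of persisted state plus a comparison. The sketch below is illustrative only: the function names and the `last_log_index` file handling are assumptions, not the actual Dangerzone helpers.

```python
# Hypothetical sketch of the logIndex monotonicity check: only accept a new
# image if its Rekor log index is at least the last one we recorded.
from pathlib import Path


def get_last_log_index(state_file: Path) -> int:
    # The first run has no recorded index; treat it as 0.
    if not state_file.exists():
        return 0
    return int(state_file.read_text())


def check_and_record(incoming: int, state_file: Path) -> None:
    last = get_last_log_index(state_file)
    if incoming < last:
        # Accepting an older signature would allow downgrade attacks.
        raise ValueError("log index is not higher than the last known one")
    state_file.write_text(str(incoming))
```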
Alexis Métaireau
8159d6ccb7
FIXUP: Update the default provenance workflow 2025-02-11 16:13:28 +01:00
Alexis Métaireau
5c2c401be6
DEMO Time! 2025-02-11 16:13:28 +01:00
Alexis Métaireau
a9043cef2c
Fix cli.py 2025-02-11 16:13:28 +01:00
Alexis Métaireau
d95d46ecc4
Add the ability to download diffoci for multiple platforms 2025-02-11 16:13:28 +01:00
Alexis Métaireau
351653ff37
Build images every day, on main and test/ commits 2025-02-11 16:13:28 +01:00
Alexis Métaireau
0daeeb867e
Check signatures before invoking the container.
Also, check for new container images when starting the application.
This replaces the usage of `share/image-id.txt` to ensure the image is trusted.
2025-02-11 16:13:27 +01:00
Alexis Métaireau
a5b5a78215
Fixup: remove runtime.py 2025-02-11 16:13:27 +01:00
Alexis Métaireau
dca0bd4bf2
Fixup: update docs 2025-02-11 16:13:27 +01:00
Alexis Métaireau
02e62c93f6
Fixup: use digest instead of hash 2025-02-11 16:13:27 +01:00
Alexis Métaireau
9a44110313
CI: Rename github workflow for multi-arch images publication 2025-02-11 16:13:27 +01:00
Alexis Métaireau
7d26c798c6
Fixup: registry, split Accept lines 2025-02-11 16:13:27 +01:00
Alexis Métaireau
8041ae2fb6
feat(icu): Add verification support for multi-arch images 2025-02-11 16:13:27 +01:00
Alexis Métaireau
2d9c00d681
fixup: Fix docs 2025-02-11 16:13:27 +01:00
Alex Pyrgiotis
1b7cfe4c7f
WIP: Add CI job for multi-arch builds 2025-02-11 16:13:27 +01:00
Alex Pyrgiotis
5accaef357
WIP: Verify local image 2025-02-11 16:13:27 +01:00
Alex Pyrgiotis
b42833df47
WIP: Make verify-attestation work for SLSA 3 attestations 2025-02-11 16:13:26 +01:00
Alexis Métaireau
858d31458b
fix(icu): update documentation and fixes 2025-02-11 16:13:26 +01:00
Alexis Métaireau
3b858dac27
Get image name from signatures for air-gapped archives
This ensures that the image name is verified by a known public key,
rather than relying on user input, which can lead to issues.
2025-02-11 16:13:26 +01:00
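Reading the image name out of the signature itself, as this commit does, relies on the cosign "simple signing" payload, where the name lives under `critical.identity["docker-reference"]`. A minimal sketch (the function name is illustrative, not the Dangerzone API):

```python
import json
from base64 import b64decode


def image_name_from_signature(signature: dict) -> str:
    # The base64-encoded payload is "simple signing" JSON; the verified
    # image name is stored under critical.identity["docker-reference"].
    payload = json.loads(b64decode(signature["Payload"]))
    return payload["critical"]["identity"]["docker-reference"]
```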
Alexis Métaireau
c6f5e61e0b
Add a dangerzone-image prepare-archive command 2025-02-11 16:13:26 +01:00
Alexis Métaireau
4d27449351
Locally store the signatures for oci-images archives
On air-gapped environments, it's now possible to load signatures
generated by `cosign save` commands. The signatures embedded in this
format will be converted to the one used by `cosign download signature`.
2025-02-11 16:13:26 +01:00
Alexis Métaireau
f30ced7834
Allow installation on air-gapped systems
- Verify the archive against the known public signature
- Prepare a new archive format (with signature removed)
- Load the new image and retag it with the expected tag

During this process, the signatures are lost and should instead be
converted to a known format. Additionally, the name of the repository
should ideally come from the signatures rather than from the command
line.
2025-02-11 16:13:26 +01:00
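The "archive format with signature removed" step above amounts to filtering cosign's signature manifests out of the OCI layout's `index.json`. The sketch below is a guess at that filter: the assumption that signature manifests carry a ref name ending in `.sig` (e.g. `sha256-<digest>.sig`) reflects cosign's usual tagging convention, not confirmed Dangerzone code.

```python
def strip_signature_manifests(index_json: dict) -> dict:
    # Assumption: cosign stores signatures as manifests whose
    # org.opencontainers.image.ref.name annotation ends in ".sig".
    kept = [
        m
        for m in index_json.get("manifests", [])
        if not m.get("annotations", {})
        .get("org.opencontainers.image.ref.name", "")
        .endswith(".sig")
    ]
    # Return a new index.json with only the non-signature manifests.
    return {**index_json, "manifests": kept}
```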
Alexis Métaireau
d4547b8964
Ensure cosign is installed before trying to use it 2025-02-11 16:13:26 +01:00
Alexis Métaireau
9b60a101a1
Add a dev_scripts/dangerzone-image 2025-02-11 16:13:26 +01:00
Alexis Métaireau
2e7af4aebf
Some more refactoring 2025-02-11 16:13:26 +01:00
Alexis Métaireau
5921289454
Refactoring of dangerzone/updater/* 2025-02-11 16:13:26 +01:00
Alexis Métaireau
ab15d25a18
Move registry and cosign utilities to dangerzone/updater/*.
Placing these inside the `dangerzone` Python package lets them ship
with the software itself, and also makes it possible for end-users to
attest the image.
2025-02-11 16:13:25 +01:00
Alexis Métaireau
225839960c
Verify podman/docker images against locally stored signatures 2025-02-11 16:13:25 +01:00
Alexis Métaireau
83a38eab0d
Automate the verification of image signatures 2025-02-11 16:13:25 +01:00
Alexis Métaireau
1ea76ded9b
Add a utility to retrieve manifest info 2025-02-11 16:13:25 +01:00
Alexis Métaireau
66ac7e56f8
Add a script to verify GitHub attestations 2025-02-11 16:13:25 +01:00
Alexis Métaireau
3f428d4824
FIXUP: test 2025-02-11 16:13:25 +01:00
Alexis Métaireau
2839c3b1ff
Add logs 2025-02-11 16:13:25 +01:00
Alexis Métaireau
fa540e53fa
Remove the tag from the attestation; what we attest is the hash, so there is no need for it 2025-02-11 16:13:25 +01:00
Alexis Métaireau
56b464fe58
Add the tag to the subject 2025-02-11 16:13:25 +01:00
Alexis Métaireau
2235cb1b36
Get the tag from git before retagging it 2025-02-11 16:13:25 +01:00
Alexis Métaireau
4c78a0117c
Checkout with depth:0, otherwise git commands aren't functional 2025-02-11 16:13:24 +01:00
Alexis Métaireau
13d12de087
Build: Use GitHub runners to build and sign container images on new tags 2025-02-11 16:13:24 +01:00
Alex Pyrgiotis
856de3fd46
grype: Ignore CVE-2025-0665
Ignore the CVE-2025-0665 vulnerability, since it's a libcurl one, and
the Dangerzone container does not make network calls. Also, it seems
that Debian Bookworm is not affected.
2025-02-10 12:31:08 +02:00
8 changed files with 160 additions and 136 deletions


@@ -37,3 +37,12 @@ ignore:
   # [bookworm] - raptor2 <postponed> (Minor issue, revisit when fixed upstream)
   #
   - vulnerability: CVE-2024-57823
+  # CVE-2025-0665
+  # ==============
+  #
+  # Debian tracker: https://security-tracker.debian.org/tracker/CVE-2025-0665
+  # Verdict: Dangerzone is not affected because the vulnerable code is not
+  # present in Debian Bookworm. Also, libcurl is an HTTP client, and the
+  # Dangerzone container does not make any network calls.
+  - vulnerability: CVE-2025-0665


@@ -9,7 +9,7 @@ from . import errors
 from .util import get_resource_path, get_subprocess_startupinfo

 OLD_CONTAINER_NAME = "dangerzone.rocks/dangerzone"
-CONTAINER_NAME = "ghcr.io/almet/dangerzone/dangerzone"
+CONTAINER_NAME = "ghcr.io/freedomofpress/dangerzone/dangerzone"

 log = logging.getLogger(__name__)
@@ -111,7 +111,7 @@ def delete_image_tag(tag: str) -> None:
     )

-def load_image_tarball_in_memory() -> None:
+def load_image_tarball_from_gzip() -> None:
     log.info("Installing Dangerzone container image...")
     p = subprocess.Popen(
         [get_runtime(), "load"],
@@ -142,7 +142,7 @@ def load_image_tarball_in_memory() -> None:
     log.info("Successfully installed container image from")

-def load_image_tarball_file(tarball_path: str) -> None:
+def load_image_tarball_from_tar(tarball_path: str) -> None:
     cmd = [get_runtime(), "load", "-i", tarball_path]
     subprocess.run(cmd, startupinfo=get_subprocess_startupinfo(), check=True)


@@ -3,7 +3,6 @@ from tempfile import NamedTemporaryFile
 from . import cosign

 # NOTE: You can grab the SLSA attestation for an image/tag pair with the following
 # commands:
 #
@@ -51,7 +50,11 @@ def generate_cue_policy(repo, workflow, commit, branch):
 def verify(
-    image_name: str, branch: str, commit: str, repository: str, workflow: str,
+    image_name: str,
+    branch: str,
+    commit: str,
+    repository: str,
+    workflow: str,
 ) -> bool:
     """
     Look up the image attestation to see if the image has been built


@@ -97,8 +97,8 @@ def list_remote_tags(image: str) -> None:
 @main.command()
 @click.argument("image")
 def get_manifest(image: str) -> None:
-    """Retrieves a remove manifest for a given image and displays it."""
-    click.echo(registry.get_manifest(image))
+    """Retrieves a remote manifest for a given image and displays it."""
+    click.echo(registry.get_manifest(image).content)

 @main.command()
@@ -121,7 +121,7 @@ def get_manifest(image: str) -> None:
     )
 @click.option(
     "--workflow",
-    default=".github/workflows/multi_arch_build.yml",
+    default=".github/workflows/release-container-image.yml",
     help="The path of the GitHub actions workflow this image was created from",
 )
 def attest_provenance(


@@ -52,3 +52,7 @@ class LocalSignatureNotFound(SignatureError):
 class CosignNotInstalledError(SignatureError):
     pass
+
+
+class InvalidLogIndex(SignatureError):
+    pass


@@ -28,13 +28,7 @@ ACCEPT_MANIFESTS_HEADER = ",".join(
 )

-class Image(namedtuple("Image", ["registry", "namespace", "image_name", "tag"])):
-    __slots__ = ()
-
-    @property
-    def full_name(self) -> str:
-        tag = f":{self.tag}" if self.tag else ""
-        return f"{self.registry}/{self.namespace}/{self.image_name}{tag}"
+Image = namedtuple("Image", ["registry", "namespace", "image_name", "tag"])

 def parse_image_location(input_string: str) -> Image:
@@ -58,102 +52,67 @@ def parse_image_location(input_string: str) -> Image:
 )

-class RegistryClient:
-    def __init__(
-        self,
-        image: Image | str,
-    ):
-        if isinstance(image, str):
-            image = parse_image_location(image)
-        self._image = image
-        self._registry = image.registry
-        self._namespace = image.namespace
-        self._image_name = image.image_name
-        self._auth_token = None
-        self._base_url = f"https://{self._registry}"
-        self._image_url = f"{self._base_url}/v2/{self._namespace}/{self._image_name}"
-
-    def get_auth_token(self) -> Optional[str]:
-        if not self._auth_token:
-            auth_url = f"{self._base_url}/token"
-            response = requests.get(
-                auth_url,
-                params={
-                    "service": f"{self._registry}",
-                    "scope": f"repository:{self._namespace}/{self._image_name}:pull",
-                },
-            )
-            response.raise_for_status()
-            self._auth_token = response.json()["token"]
-        return self._auth_token
-
-    def get_auth_header(self) -> Dict[str, str]:
-        return {"Authorization": f"Bearer {self.get_auth_token()}"}
-
-    def list_tags(self) -> list:
-        url = f"{self._image_url}/tags/list"
-        response = requests.get(url, headers=self.get_auth_header())
-        response.raise_for_status()
-        tags = response.json().get("tags", [])
-        return tags
-
-    def get_manifest(
-        self,
-        tag: str,
-    ) -> requests.Response:
-        """Get manifest information for a specific tag"""
-        manifest_url = f"{self._image_url}/manifests/{tag}"
-        headers = {
-            "Accept": ACCEPT_MANIFESTS_HEADER,
-            "Authorization": f"Bearer {self.get_auth_token()}",
-        }
-        response = requests.get(manifest_url, headers=headers)
-        response.raise_for_status()
-        return response
-
-    def list_manifests(self, tag: str) -> list:
-        return (
-            self.get_manifest(
-                tag,
-            )
-            .json()
-            .get("manifests")
-        )
-
-    def get_blob(self, digest: str) -> requests.Response:
-        url = f"{self._image_url}/blobs/{digest}"
-        response = requests.get(
-            url,
-            headers={
-                "Authorization": f"Bearer {self.get_auth_token()}",
-            },
-        )
-        response.raise_for_status()
-        return response
-
-    def get_manifest_digest(
-        self, tag: str, tag_manifest_content: Optional[bytes] = None
-    ) -> str:
-        if not tag_manifest_content:
-            tag_manifest_content = self.get_manifest(tag).content
-        return sha256(tag_manifest_content).hexdigest()
-
-
-# XXX Refactor this with regular functions rather than a class
-def get_manifest_digest(image_str: str) -> str:
-    image = parse_image_location(image_str)
-    return RegistryClient(image).get_manifest_digest(image.tag)
-
-
-def list_tags(image_str: str) -> list:
-    return RegistryClient(image_str).list_tags()
-
-
-def get_manifest(image_str: str) -> bytes:
-    image = parse_image_location(image_str)
-    client = RegistryClient(image)
-    resp = client.get_manifest(image.tag)
-    return resp.content
+def _get_auth_header(image) -> Dict[str, str]:
+    auth_url = f"https://{image.registry}/token"
+    response = requests.get(
+        auth_url,
+        params={
+            "service": f"{image.registry}",
+            "scope": f"repository:{image.namespace}/{image.image_name}:pull",
+        },
+    )
+    response.raise_for_status()
+    token = response.json()["token"]
+    return {"Authorization": f"Bearer {token}"}
+
+
+def _url(image):
+    return f"https://{image.registry}/v2/{image.namespace}/{image.image_name}"
+
+
+def list_tags(image_str: str) -> list:
+    image = parse_image_location(image_str)
+    url = f"{_url(image)}/tags/list"
+    response = requests.get(url, headers=_get_auth_header(image))
+    response.raise_for_status()
+    tags = response.json().get("tags", [])
+    return tags
+
+
+def get_manifest(image_str) -> requests.Response:
+    """Get manifest information for a specific tag"""
+    image = parse_image_location(image_str)
+    manifest_url = f"{_url(image)}/manifests/{image.tag}"
+    headers = {
+        "Accept": ACCEPT_MANIFESTS_HEADER,
+    }
+    headers.update(_get_auth_header(image))
+    response = requests.get(manifest_url, headers=headers)
+    response.raise_for_status()
+    return response
+
+
+def list_manifests(image_str) -> list:
+    return get_manifest(image_str).json().get("manifests")
+
+
+def get_blob(image, digest: str) -> requests.Response:
+    response = requests.get(
+        f"{_url(image)}/blobs/{digest}",
+        headers={
+            "Authorization": f"Bearer {_get_auth_token(image)}",
+        },
+    )
+    response.raise_for_status()
+    return response
+
+
+def get_manifest_digest(
+    image_str: str, tag_manifest_content: Optional[bytes] = None
+) -> str:
+    image = parse_image_location(image_str)
+    if not tag_manifest_content:
+        tag_manifest_content = get_manifest(image).content
+    return sha256(tag_manifest_content).hexdigest()


@@ -4,6 +4,7 @@ import re
 import subprocess
 import tarfile
 from base64 import b64decode, b64encode
+from functools import reduce
 from hashlib import sha256
 from io import BytesIO
 from pathlib import Path
@@ -27,6 +28,8 @@ def get_config_dir() -> Path:
 # XXX Store this somewhere else.
 DEFAULT_PUBKEY_LOCATION = get_resource_path("freedomofpress-dangerzone-pub.key")
 SIGNATURES_PATH = get_config_dir() / "signatures"
+LAST_LOG_INDEX = SIGNATURES_PATH / "last_log_index"
+
 __all__ = [
     "verify_signature",
     "load_signatures",
@@ -127,22 +130,26 @@ def verify_signatures(
     return True


-def upgrade_container_image(image: str, manifest_digest: str, pubkey: str) -> bool:
-    """Verify and upgrade the image to the latest, if signed."""
-    update_available, _ = is_update_available(image)
-    if not update_available:
-        raise errors.ImageAlreadyUpToDate("The image is already up to date")
-    signatures = get_remote_signatures(image, manifest_digest)
-    verify_signatures(signatures, manifest_digest, pubkey)
-    # At this point, the signatures are verified
-    # We store the signatures just now to avoid storing unverified signatures
-    store_signatures(signatures, manifest_digest, pubkey)
-    # let's upgrade the image
-    # XXX Use the image digest here to avoid race conditions
-    return runtime.container_pull(image)
+def get_last_log_index() -> int:
+    SIGNATURES_PATH.mkdir(parents=True, exist_ok=True)
+    if not LAST_LOG_INDEX.exists():
+        return 0
+
+    with open(LAST_LOG_INDEX) as f:
+        return int(f.read())
+
+
+def get_log_index_from_signatures(signatures: List[Dict]) -> int:
+    return reduce(
+        lambda acc, sig: max(acc, sig["Bundle"]["Payload"]["logIndex"]), signatures, 0
+    )
+
+
+def write_log_index(log_index: int) -> None:
+    last_log_index_path = SIGNATURES_PATH / "last_log_index"
+    with open(log_index, "w") as f:
+        f.write(str(log_index))


 def _get_blob(tmpdir: str, digest: str) -> Path:
@@ -178,7 +185,7 @@ def upgrade_container_image_airgapped(container_tar: str, pubkey: str) -> str:
     if not cosign.verify_local_image(tmpdir, pubkey):
         raise errors.SignatureVerificationError()

-    # Remove the signatures from the archive.
+    # Remove the signatures from the archive, otherwise podman is not able to load it
     with open(Path(tmpdir) / "index.json") as f:
         index_json = json.load(f)
@@ -195,6 +202,15 @@ def upgrade_container_image_airgapped(container_tar: str, pubkey: str) -> str:
         image_name, signatures = convert_oci_images_signatures(json.load(f), tmpdir)
     log.info(f"Found image name: {image_name}")

+    # Ensure that we only upgrade if the log index is higher than the last known one
+    incoming_log_index = get_log_index_from_signatures(signatures)
+    last_log_index = get_last_log_index()
+    if incoming_log_index < last_log_index:
+        raise errors.InvalidLogIndex(
+            "The log index is not higher than the last known one"
+        )
+
     image_digest = index_json["manifests"][0].get("digest").replace("sha256:", "")

     # Write the new index.json to the temp folder
@@ -208,7 +224,7 @@ def upgrade_container_image_airgapped(container_tar: str, pubkey: str) -> str:
         archive.add(Path(tmpdir) / "oci-layout", arcname="oci-layout")
         archive.add(Path(tmpdir) / "blobs", arcname="blobs")

-    runtime.load_image_tarball_file(temporary_tar.name)
+    runtime.load_image_tarball_from_tar(temporary_tar.name)
     runtime.tag_image_by_digest(image_digest, image_name)
     store_signatures(signatures, image_digest, pubkey)
@@ -283,9 +299,13 @@ def store_signatures(signatures: list[Dict], image_digest: str, pubkey: str) ->
     Store signatures locally in the SIGNATURE_PATH folder, like this:

     ~/.config/dangerzone/signatures/
     <pubkey-digest>
         <image-digest>.json
         <image-digest>.json
+        last_log_index
+
+    The last_log_index file is used to keep track of the last log index
+    processed by the updater.

     The format used in the `.json` file is the one of `cosign download
     signature`, which differs from the "bundle" one used afterwards.
@@ -344,6 +364,7 @@ def get_remote_signatures(image: str, digest: str) -> List[Dict]:
     """Retrieve the signatures from the registry, via `cosign download`."""
     cosign.ensure_installed()

+    # XXX: try/catch here
     process = subprocess.run(
         ["cosign", "download", "signature", f"{image}@sha256:{digest}"],
         capture_output=True,
@@ -382,3 +403,31 @@ def prepare_airgapped_archive(image_name, destination):

     with tarfile.open(destination, "w") as archive:
         archive.add(tmpdir, arcname=".")
+
+
+def upgrade_container_image(image: str, manifest_digest: str, pubkey: str) -> bool:
+    """Verify and upgrade the image to the latest, if signed."""
+    update_available, _ = is_update_available(image)
+    if not update_available:
+        raise errors.ImageAlreadyUpToDate("The image is already up to date")
+
+    signatures = get_remote_signatures(image, manifest_digest)
+    verify_signatures(signatures, manifest_digest, pubkey)
+
+    # Ensure that we only upgrade if the log index is higher than the last known one
+    incoming_log_index = get_log_index_from_signatures(signatures)
+    last_log_index = get_last_log_index()
+    if incoming_log_index < last_log_index:
+        raise errors.InvalidLogIndex(
+            "The log index is not higher than the last known one"
+        )
+
+    # let's upgrade the image
+    # XXX Use the image digest here to avoid race conditions
+    upgraded = runtime.container_pull(image)
+
+    # At this point, the signatures are verified
+    # We store the signatures just now to avoid storing unverified signatures
+    store_signatures(signatures, manifest_digest, pubkey)
+
+    return upgraded
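The `get_log_index_from_signatures` helper introduced in this file folds over all signatures and keeps the highest Rekor `logIndex`, defaulting to 0 for an empty list. It can be exercised standalone:

```python
# Standalone copy of the reduce-based helper: take the highest Rekor
# logIndex across all signatures (0 when there are none).
from functools import reduce
from typing import Dict, List


def get_log_index_from_signatures(signatures: List[Dict]) -> int:
    return reduce(
        lambda acc, sig: max(acc, sig["Bundle"]["Payload"]["logIndex"]),
        signatures,
        0,
    )
```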


@@ -22,13 +22,13 @@ In case of sucess, it will report back:
 ```
 🎉 Successfully verified image
-'ghcr.io/freedomofpress/dangerzone/dangerzone:20250129-0.8.0-149-gbf2f5ac@sha256:4da441235e84e93518778827a5c5745d532d7a4079886e1647924bee7ef1c14d'
+'ghcr.io/freedomofpress/dangerzone/dangerzone:<tag>@sha256:<digest>'
 and its associated claims:
 - ✅ SLSA Level 3 provenance
-- ✅ GitHub repo: apyrgio/dangerzone
-- ✅ GitHub actions workflow: .github/workflows/multi_arch_build.yml
-- ✅ Git branch: test/multi-arch
-- ✅ Git commit: bf2f5accc24bd15a4f5c869a7f0b03b8fe48dfb6
+- ✅ GitHub repo: freedomofpress/dangerzone
+- ✅ GitHub actions workflow: <workflow>
+- ✅ Git branch: <branch>
+- ✅ Git commit: <commit>
 ```

 ## Sign and publish the remote image
@@ -37,11 +37,11 @@ Once the image has been reproduced locally, we can add a signature to the contai
 and update the `latest` tag to point to the proper hash.

 ```bash
-cosign sign --sk ghcr.io/freedomofpress/dangerzone/dangerzone:20250129-0.8.0-149-gbf2f5ac@sha256:4da441235e84e93518778827a5c5745d532d7a4079886e1647924bee7ef1c14d
+cosign sign --sk ghcr.io/freedomofpress/dangerzone/dangerzone:${TAG}@sha256:${DIGEST}

 # And mark bump latest
 crane auth login ghcr.io -u USERNAME --password $(cat pat_token)
-crane tag ghcr.io/freedomofpress/dangerzone/dangerzone@sha256:4da441235e84e93518778827a5c5745d532d7a4079886e1647924bee7ef1c14d latest
+crane tag ghcr.io/freedomofpress/dangerzone/dangerzone@sha256:${DIGEST} latest
 ```

 ## Install updates
@@ -49,7 +49,7 @@ crane tag ghcr.io/freedomofpress/dangerzone/dangerzone@sha256:4da441235e84e93518
 To check if a new container image has been released, and update your local installation with it, you can use the following commands:

 ```bash
-dangerzone-image upgrade ghcr.io/almet/dangerzone/dangerzone
+dangerzone-image upgrade ghcr.io/freedomofpress/dangerzone/dangerzone
 ```

 ## Verify locally
@@ -57,7 +57,7 @@ dangerzone-image upgrade ghcr.io/almet/dangerzone/dangerzone
 You can verify that the image you have locally matches the stored signatures, and that these have been signed with a trusted public key:

 ```bash
-dangerzone-image verify-local ghcr.io/almet/dangerzone/dangerzone
+dangerzone-image verify-local ghcr.io/freedomofpress/dangerzone/dangerzone
 ```

 ## Installing image updates to air-gapped environments
@@ -73,7 +73,7 @@ This archive will contain all the needed material to validate that the new conta
 On the machine on which you prepare the packages:

 ```bash
-dangerzone-image prepare-archive --output dz-fa94872.tar ghcr.io/almet/dangerzone/dangerzone@sha256:fa948726aac29a6ac49f01ec8fbbac18522b35b2491fdf716236a0b3502a2ca7
+dangerzone-image prepare-archive --output dz-fa94872.tar ghcr.io/freedomofpress/dangerzone/dangerzone@sha256:<digest>
 ```

 On the airgapped machine, copy the file and run the following command: