Compare commits

...

96 commits

Author SHA1 Message Date
Alexis Métaireau
2a56c7f35c
Allow a different runtime on dangerzone-image commands.
This can be done with the newly added `--runtime` flag, which needs to
be passed to the first group, e.g.:

```bash
dangerzone-cli --runtime docker COMMAND
```
2025-04-14 18:25:04 +02:00
Alexis Métaireau
a603700ae6
Display the {podman,docker} pull progress when installing a new image
The progress bars we see when using these same commands on the
command line don't seem to be passed to the Python process here,
unfortunately.
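
A minimal sketch of streaming the pull output through a pipe, assuming a plain `subprocess.Popen` shell-out (illustrative names, not the actual implementation); as noted above, the runtimes' interactive progress bars still don't survive this:

```python
import subprocess

def pull_image(runtime: str, image: str) -> None:
    # Stream the runtime's output line by line so the user sees progress.
    with subprocess.Popen(
        [runtime, "pull", image],
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
    ) as proc:
        assert proc.stdout is not None
        for line in proc.stdout:
            print(line, end="")
    if proc.returncode != 0:
        raise RuntimeError(f"{runtime} pull failed with code {proc.returncode}")
```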
2025-04-14 18:23:33 +02:00
Alexis Métaireau
9599291fa3
Add a dangerzone-image store-signature CLI command
This can be useful when signatures are missing from the system for an
already-present image, and can be used as a way to fix user issues.
2025-04-14 18:11:18 +02:00
Alexis Métaireau
e4b7d400c9
Replace the updater_check setting by updater_check_all
This new setting triggers the same user prompts, but its meaning differs:
users will now be accepting to upgrade the container image rather than
just checking for new releases.

Changing the name of the setting will trigger this prompt for all users, effectively
ensuring they want their image to be automatically upgraded.
2025-04-14 18:11:16 +02:00
Alexis Métaireau
5a48de46a2
Split updater GUI code from the code checking for release updates
The code making the actual requests and checks now lives in the
`updater.releases` module. The code should be easier to read and to
reason about.

Tests have been updated to reflect this.
2025-04-14 18:10:57 +02:00
Alexis Métaireau
f668a44cdb
FIXUP commit for signature tests 2025-04-14 18:10:07 +02:00
Alexis Métaireau
6427d7a38b
Provide an is_update_available function
This function does all the needed checks before returning `True`, making it a good external API.

Under the hood, the registry now has an `is_new_remote_image_available`
method, which only checks for the presence of a new image but doesn't do
any verifications on it, and there is also a new `check_signatures_and_logindex` that ensures that these two are valid.
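
A minimal sketch of how these checks could compose, using only the names from the commit message; the exact signatures, parameters, and module layout are assumptions:

```python
def is_update_available(registry, container_name: str) -> bool:
    """Run all the needed checks before reporting that an update exists."""
    if not registry.is_new_remote_image_available(container_name):
        return False
    # Raises (rather than returning False) when the signatures or the
    # Rekor log index are invalid -- see the "throw rather than bools" fixup.
    registry.check_signatures_and_logindex(container_name)
    return True
```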
2025-04-14 18:10:07 +02:00
Alexis Métaireau
4bcd9ee660
FIXUP: Add a comment to update the DEFAULT_LOG_INDEX with releases 2025-04-14 18:10:07 +02:00
Alexis Métaireau
00baf6c87a
FIXUP: throw rather than bools 2025-04-14 18:10:07 +02:00
Alexis Métaireau
0ca9e714f4
FIXUP: Use user data dir rather than config 2025-04-14 18:10:07 +02:00
Alexis Métaireau
7e52fe4459
FIXUP: Use exceptions to ease the flow 2025-04-14 18:10:06 +02:00
Alexis Métaireau
d37630d37b
Introduce a subprocess_run utility function
This is done to avoid forgetting Windows-specific arguments when calling `subprocess.run`.
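
A sketch of such a wrapper, assuming the Windows-specific argument is `creationflags=subprocess.CREATE_NO_WINDOW` (an assumption; the real wrapper may set other flags):

```python
import subprocess
import sys

def subprocess_run(*args, **kwargs) -> subprocess.CompletedProcess:
    """Call subprocess.run with the Windows-specific flags always applied."""
    if sys.platform == "win32":
        # Assumed flag: hide the console window that would otherwise flash
        # when a frozen GUI app spawns a subprocess.
        kwargs.setdefault("creationflags", subprocess.CREATE_NO_WINDOW)
    return subprocess.run(*args, **kwargs)
```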
2025-04-14 18:10:04 +02:00
Alexis Métaireau
a797a2b6f8
FIXUP: Use the digest when pulling the container 2025-04-14 18:07:00 +02:00
Alexis Métaireau
0907ddc561
Add tests for registry 2025-04-14 18:02:20 +02:00
Alexis Métaireau
87a00d0f38
make the signature tests pass 2025-04-14 18:02:18 +02:00
Alexis Métaireau
26638a5f2a
(WIP) some more tests 2025-04-14 18:01:49 +02:00
Alexis Métaireau
651d988a37
(WIP) Add tests
2025-04-14 18:00:48 +02:00
Alexis Métaireau
a19e341378
(WIP) Check for container updates rather than using image-id.txt 2025-04-14 17:49:07 +02:00
Alexis Métaireau
a6a4aa1a3b
Add documentation for independent container updates 2025-04-14 17:46:27 +02:00
Alex Pyrgiotis
e4fccd791f
Publish and attest multi-architecture container images
A new `dangerzone-image attest-provenance` script is now available,
making it possible to verify the attestations of an image published on
the github container registry.

Container images are now built nightly and uploaded to the container
registry.
2025-04-14 17:46:27 +02:00
Alexis Métaireau
ee3689b32a
Add the ability to download diffoci for multiple platforms
The hash list provided on the Github releases page is now bundled in the
`reproduce-image.py` script, and the proper hashes are checked after
download.
2025-04-14 17:45:32 +02:00
Alexis Métaireau
3a6d73dcb8
Download and verify cosign signatures
Signatures are stored in the OCI Manifest v2 registry, and are
expected to follow the Cosign Signature Specification [0].

The following CLI utilities are provided with `dangerzone-image`:

For checking new container images, upgrading them and downloading them:

- `upgrade` upgrades the currently installed image to the latest one
  available on the OCI registry, downloading and storing the
  signatures in the process.
- `verify-local` verifies the currently installed image against the
  downloaded signatures and public key.

To prepare and install archives on air-gapped environments:

- `prepare-archive` helps prepare an archive to install on another
  machine.
- `load-archive` helps upgrade the local image to the archive given
  as argument.

Signatures are stored locally using the format provided by `cosign
download signature`, and the Rekor log index is used to ensure the
requested-to-install container image is fresher than the one already
present on the system.

[0] https://github.com/sigstore/cosign/blob/main/specs/SIGNATURE_SPEC.md
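
A sketch of the freshness rule described in the last paragraph, assuming Rekor log indexes are compared as plain integers; names are illustrative:

```python
def check_is_fresher(remote_log_index: int, local_log_index: int) -> None:
    # Only allow installing an image whose signature was added to the
    # Rekor transparency log after that of the locally installed image.
    if remote_log_index <= local_log_index:
        raise ValueError("Remote image is not fresher than the installed one")
```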
2025-04-14 17:17:19 +02:00
Alexis Métaireau
ff22c6e160
Add a dangerzone-image CLI script
It contains utilities to interact with OCI registries, like getting the list of
published tags and getting the content of a manifest. It does so via the
Docker Registry API v2 [0].

The script has been added to the `dev_scripts`, and is also installed on
the system under `dangerzone-image`.

[0]  https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry
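
An illustrative sketch of the two registry operations mentioned above, against the standard Docker Registry API v2 endpoints; the image name and the use of `requests` are assumptions:

```python
import requests

REGISTRY = "ghcr.io"
IMAGE = "freedomofpress/dangerzone/dangerzone"  # assumed image name

def _token() -> str:
    # Anonymous pull tokens are enough for reading public images.
    resp = requests.get(
        f"https://{REGISTRY}/token",
        params={"scope": f"repository:{IMAGE}:pull"},
    )
    resp.raise_for_status()
    return resp.json()["token"]

def list_tags() -> list:
    resp = requests.get(
        f"https://{REGISTRY}/v2/{IMAGE}/tags/list",
        headers={"Authorization": f"Bearer {_token()}"},
    )
    resp.raise_for_status()
    return resp.json()["tags"]

def get_manifest(tag: str) -> dict:
    resp = requests.get(
        f"https://{REGISTRY}/v2/{IMAGE}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {_token()}",
            "Accept": "application/vnd.oci.image.manifest.v1+json",
        },
    )
    resp.raise_for_status()
    return resp.json()
```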
2025-04-14 16:27:40 +02:00
Alex Pyrgiotis
04096380ff
Include Ubuntu Plucky and Fedora 42 in our nightly repo checks
2025-04-10 12:00:15 +02:00
Alexis Métaireau
21ca927b8b
Send release notes to editorial during the release process
2025-04-09 20:55:31 +02:00
Alexis Métaireau
05040de212
Point download links to the 0.9.0 release 2025-04-09 17:08:50 +02:00
Alexis Métaireau
4014c8591b
Docs: Update the Podman Desktop docs for macOS
In order to access our custom seccomp policy, we require it to be
mounted on the podman machine.

Co-authored-by: Alex Pyrgiotis <alex.p@freedom.press>
2025-04-09 17:04:42 +02:00
Alex Pyrgiotis
6cd706af10
windows: Minor change to uninstallation message
Refs #1026
2025-04-09 14:26:45 +02:00
Alex Pyrgiotis
634b171b97
windows: Detect Dangerzone 0.8.1 during install
Detect Dangerzone 0.8.1 versions during install, so that we can prompt
users to manually uninstall it.

Refs #929
2025-04-09 14:26:44 +02:00
Alexis Métaireau
c99c424f87
Document Podman Desktop experimental support for Windows and macOS
2025-04-08 16:08:55 +02:00
Alex Pyrgiotis
19fa11410b
Update reference template for Qubes to Fedora 41
Closes #1078
2025-04-08 16:37:28 +03:00
Alex Pyrgiotis
10be85b9f2
container: Add workarounds for Podman Desktop support on Windows
In case we run on Windows and use Podman Desktop (for which we currently
offer experimental support), we must not pass some Podman flags in order
to avoid conversion errors.

Refs #1127
2025-04-08 16:36:08 +03:00
Alexis Métaireau
47d732e603
Document the Makefile targets
It now outputs the following:

```
build-linux                  Build linux packages (.rpm and .deb)
build-macos-arm              Build macOS Apple Silicon package (.dmg)
build-macos-intel            Build macOS intel package (.dmg)
Dockerfile                   Regenerate the Dockerfile from its template
fix                          apply all the suggestions from ruff
help                         Print this message and exit.
lint                         Check the code for linting, formatting, and typing issues with ruff and mypy
regenerate-reference-pdfs    Regenerate the reference PDFs
test                         Run the tests
test-large                   Run large test set
```
2025-04-08 16:34:34 +03:00
Alexis Métaireau
d6451290db
Move the multithreading patch up so that it works in the GUI 2025-04-08 16:34:34 +03:00
Alex Pyrgiotis
f0bb65cb4e
Bypass a cx-freeze issue for fitz._wxcolors
Bypass an issue with `cx-freeze` that fails to include the
`fitz._wxcolors` module in the final Windows artifact.

Refs #1128
2025-04-08 16:34:34 +03:00
Alex Pyrgiotis
0c741359cc
Make our build-image.py script runnable on Windows 2025-04-08 16:34:34 +03:00
Alex Pyrgiotis
8c61894e25
Handle the case where Docker is not installed
Refs #1132
2025-04-08 16:33:15 +03:00
Alex Pyrgiotis
57667a96be
Add a way to unset the container runtime
Add a way to set the container runtime that Dangerzone uses back to the
default.
2025-04-07 18:23:13 +03:00
Alex Pyrgiotis
1a644e2506
Do not install poetry-plugin-export
Do not unconditionally install the Poetry plugin for exporting
dependencies as a requirements.txt file, since it's used only when
building a Debian package. Keep it instead in the Linux instructions and
when building a Dangerzone environment.
2025-04-07 18:23:10 +03:00
Alex Pyrgiotis
843e68cdf7
Handle the case of empty tesseract dirs during download 2025-04-07 18:22:52 +03:00
Alex Pyrgiotis
33b2a183ce
docs: Improve doit docs 2025-04-07 18:22:52 +03:00
Alex Pyrgiotis
c7121b69a3
Prefer poetry sync to poetry install --sync
Use `poetry sync` instead of `poetry install --sync`, since the latter
is deprecated and will be removed after June 2025, as seen in the
following warning message:

  The `--sync` option is deprecated and slated for removal in the next
  minor release after June 2025, use the `poetry sync` command instead.
2025-04-07 18:22:50 +03:00
Alex Pyrgiotis
0b3bf89d5b
Implicitly run doit with poetry run
Implicitly run `doit` with `poetry run`, else `poetry env remove --all`
will remove the calling Python interpreter.
2025-04-02 12:01:14 +03:00
Alex Pyrgiotis
e0b10c5e40
doit: Remove tessdata dir from targets
Remove the tesseract data dir from the doit targets, else we encounter
the following error:

  Traceback (most recent call last):
    [...]
    File "[...]/Library/Caches/pypoetry/virtualenvs/dangerzone-52Yr5wv_-py3.11/lib/python3.11/site-packages/doit/dependency.py", line 39, in get_file_md5
      with open(path, 'rb') as file_data:
           ^^^^^^^^^^^^^^^^
  IsADirectoryError: [Errno 21] Is a directory: 'share/tessdata'
2025-04-02 11:46:20 +03:00
Alex Pyrgiotis
092eec55d1
doit: Remove unused 'DEBIAN_VERSIONS' variable 2025-04-02 11:45:47 +03:00
Alex Pyrgiotis
14a480c3a3
doit: Fix typo in Fedora targets
Fix a typo when building a Fedora target. Also, add Fedora 42 support.
2025-04-02 11:44:50 +03:00
Alex Pyrgiotis
9df825db5c
debian: Use abbreviated months in changelog
Use abbreviated months in the Debian changelog, else we'll have warnings
like the following:

  LINE:  -- Freedom of the Press Foundation   <info@freedom.press>  Mon, 31 March 2025 15:57:18 +0300
  dpkg-source: warning: dangerzone/debian/changelog(l5): cannot parse non-conformant date '31 March 20
2025-04-02 11:35:31 +03:00
Alex Pyrgiotis
2ee22a497a
Reinstall deps after doit cleans everything
Make sure to reinstall the project dependencies once `doit clean` runs,
since it also removes itself.
2025-04-02 11:30:31 +03:00
Alex Pyrgiotis
b5c09e51d8
Update minimum Docker Desktop version
Update the minimum Docker Desktop version prior to the 0.9.0 release.
The new version should also fix a recent Docker bug whereby the container's
stdout was truncated, causing our conversions to fail.

Fixes #1101
2025-04-01 10:33:57 +03:00
Alex Pyrgiotis
37c7608c0f
Bump download links for 0.9.0 2025-04-01 10:33:57 +03:00
Alex Pyrgiotis
972b264236
Update the Dangerzone image and its dependencies
Bump the Debian container image, gVisor version, and H2Orestart plugin.
2025-04-01 10:33:55 +03:00
Alex Pyrgiotis
e38d8e5db0
Update changelog
Update our changelog with all the new changes that have been merged in
the 0.9.0 version.
2025-04-01 10:31:43 +03:00
Alex Pyrgiotis
f92833cdff
Bump version to 0.9.0 2025-04-01 10:26:27 +03:00
Alex Pyrgiotis
07aad5edba
Bump poetry.lock file
Bump the poetry.lock file using `poetry lock --regenerate`.
2025-04-01 10:26:26 +03:00
sudoforge
e8ca12eb11
Use a fully qualified URI for the debian image
This change adds the registry prefix to the `debian` image we pull
from `docker.io/library`. By adding this, we improve support for
non-interactive builds, as users who do not have a preferred default
registry defined in their local configuration will no longer be prompted
to select which registry to pull this from.
2025-03-31 09:26:25 -07:00
sudoforge
491cca6341
Use a digest for the debian base image
66600f32dc introduced various improvements
to the determinism of the container image in this repository. This
change builds on this effort by ensuring that the base image is pulled
by digest. Image digests are immutable references, unlike tags, which
are mutable (except when optionally configured as immutable in certain
container registries, but not `docker.io`).
2025-03-31 08:04:05 -07:00
Alexis Métaireau
0a7b79f61a
Add a set-container-runtime option to dangerzone-cli
This sets the container runtime in the settings and provides an easy
way for users to do so, without having to mess with the JSON settings.

When setting the container runtime, one can just pass "podman" and the
path to the executable will be stored in the settings.
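
A sketch of how the bare name could be expanded to a full path before being stored; `shutil.which` is an assumption, not necessarily what the commit uses:

```python
import shutil

def resolve_runtime(name_or_path: str) -> str:
    # Expand a bare name like "podman" into the full executable path
    # before storing it in the settings.
    path = shutil.which(name_or_path)
    if path is None:
        raise ValueError(f"Cannot find container runtime: {name_or_path}")
    return path
```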
2025-03-31 16:20:29 +02:00
Alexis Métaireau
86eab5d222
Ensure that only podman and docker container runtimes can be used 2025-03-31 16:20:29 +02:00
Alexis Métaireau
ed39c056bb
Reset terminal colors after printing the banner 2025-03-31 16:20:29 +02:00
Alexis Métaireau
983622fe59
Update CHANGELOG 2025-03-31 16:20:29 +02:00
Alexis Métaireau
8e99764952
Use a Runtime class to get information about container runtimes
This is useful to avoid parsing the settings too many times.
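
A minimal sketch of such a class, assuming `functools.cached_property` for the memoization; the real class almost certainly differs:

```python
from functools import cached_property
from pathlib import Path

class Runtime:
    """Parse runtime-related settings once and memoize the results."""

    def __init__(self, settings):
        self._settings = settings

    @cached_property
    def path(self) -> Path:
        return Path(self._settings.get("container_runtime"))

    @cached_property
    def name(self) -> str:
        return self.path.name
```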
2025-03-31 16:20:28 +02:00
Alexis Métaireau
20cd9cfc5c
Allow defining a container_runtime_path 2025-03-31 16:20:28 +02:00
Alexis Métaireau
f082641b71
Only check Docker version if the container runtime is set to docker 2025-03-31 16:20:28 +02:00
Alexis Métaireau
c0215062bc
Allow reading the container runtime from the settings
Add a few tests for this along the way, and update the end-user messages
about Docker/Podman to account for this change.
2025-03-31 16:20:28 +02:00
Alexis Métaireau
b551a4dec4
Mock the settings rather than monkeypatching external modules 2025-03-31 16:20:28 +02:00
Alexis Métaireau
5a56a7f055
Decouple the Settings class from DangerzoneCore
No real reason to pass the whole object where what we really need is
just the location of the configuration folder.
2025-03-31 16:20:28 +02:00
Alexis Métaireau
ab6dd9c01d
Use pathlib.Path to return path locations 2025-03-31 16:20:28 +02:00
Alex Pyrgiotis
dfcb74b427
Improve our release instructions regarding versioned links
Update our `RELEASE.md` so that we don't forget to bump the download
links in `INSTALL.md` prior to tagging a release. This way, we won't
have a versioned `INSTALL.md` page pointing to an older download link.

Note that this means that the latest version of the `INSTALL.md` page
will point to a broken link, in the short period of time between the
pre-release and the actual release. That's not an issue in our case,
because we don't point to the latest version of our `INSTALL.md` from
our `README.md`. We use versioned links instead, and thus we minimize
the chance that a user may encounter a broken link.

Fixes #1100
2025-03-28 15:04:05 +02:00
Alexis Métaireau
a910ccc273
Provide a way to opt-out from CHANGELOG check
Co-authored-by: Alex Pyrgiotis <alex.p@freedom.press>
2025-03-28 13:53:05 +01:00
dependabot[bot]
d868699bab
build(deps): bump slsa-framework/slsa-github-generator
Bumps [slsa-framework/slsa-github-generator](https://github.com/slsa-framework/slsa-github-generator) from 2.0.0 to 2.1.0.
- [Release notes](https://github.com/slsa-framework/slsa-github-generator/releases)
- [Changelog](https://github.com/slsa-framework/slsa-github-generator/blob/main/CHANGELOG.md)
- [Commits](https://github.com/slsa-framework/slsa-github-generator/compare/v2.0.0...v2.1.0)

---
updated-dependencies:
- dependency-name: slsa-framework/slsa-github-generator
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-26 14:54:50 +01:00
Alexis Métaireau
d6adfbc6c1
Skip PDF-diffing tests when using a dummy isolation provider. 2025-03-26 11:45:46 +01:00
Alexis Métaireau
687bd8585f
Update reference documents to their last version 2025-03-26 11:45:46 +01:00
Alexis Métaireau
b212bfc47e
Add a makefile target to regenerate reference PDFs
This leverages a new flag that can be passed during the tests to
regenerate the PDFs if needed.
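
A sketch of how such a test flag could be wired up with pytest's `pytest_addoption` hook; the flag name is hypothetical:

```python
# conftest.py (hypothetical flag name)
def pytest_addoption(parser):
    parser.addoption(
        "--regenerate-reference-pdfs",
        action="store_true",
        default=False,
        help="Overwrite the reference PDFs instead of diffing against them",
    )
```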
2025-03-26 11:45:45 +01:00
Alexis Métaireau
bbc90be217
Publish the resulting diffs as GitHub artifacts
This makes them easier to inspect after CI run failures.
2025-03-26 11:45:45 +01:00
Alexis Métaireau
2d321bf257
Add a dependency to numpy for the tests
This is useful to reduce the computation time when creating PDF visual
diffs. Here is a comparison of the same operation using python arrays
and numpy arrays + lookups:

Python arrays:
```
diff took 5.094218431997433 seconds
diff took 3.1553626069980965 seconds
diff took 3.3721952960004273 seconds
diff took 3.2134646750018874 seconds
diff took 3.3410625500000606 seconds
diff took 3.2893160990024626 seconds
```

Numpy:
```
diff took 0.13705662599750212 seconds
diff took 0.05698924000171246 seconds
diff took 0.15319590600120137 seconds
diff took 0.06126453700198908 seconds
diff took 0.12916332699751365 seconds
diff took 0.05839455900058965 seconds
```
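
For reference, a sketch of the kind of vectorized comparison that yields this speedup, assuming the diff boils down to counting differing pixel bytes (an assumption about the test internals):

```python
import numpy as np

def count_differing_pixels(a: bytes, b: bytes) -> int:
    arr_a = np.frombuffer(a, dtype=np.uint8)
    arr_b = np.frombuffer(b, dtype=np.uint8)
    # One vectorized comparison instead of a per-pixel Python loop.
    return int(np.count_nonzero(arr_a != arr_b))
```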
2025-03-26 11:45:44 +01:00
Alexis Métaireau
8bfeae4eed
tests: Test for regressions when converting PDFs
This stores a reference version of the converted PDFs and diffs them
against the newly converted documents during the tests.
2025-03-26 11:45:43 +01:00
Alexis Métaireau
3ed71e8ee0
Document Operating System support
The goal is to have rules rather than specific versions, and a table to summarize everything.
2025-03-21 12:08:30 +01:00
Alexis Métaireau
fa8e8c6dbb
CI: Enforce updating the CHANGELOG in the CI
Currently, this only returns warnings, but we seem to just skip
them. As it's possible to merge PRs when the CI is red, issuing an error
would help us remember to populate this file.
2025-03-21 11:10:56 +01:00
Alex Pyrgiotis
8d05b5779d
ci: Reproducibly build a container image
Create a reusable GitHub Actions workflow that does the following:

1. Create a multi-architecture container image for Dangerzone, instead
   of having two different tarballs (or no option at all)
2. Build the Dangerzone container image on our supported architectures
   (linux/amd64 and linux/arm64). It so happens that GitHub also offers
   ARM machine runners, which speeds up the build.
3. Combine the images from these two architectures into one, multi-arch
   image.
4. Generate provenance info for each manifest, and the root manifest
   list.
5. Check the image's reproducibility.

Also, remove an older CI job for checking the reproducibility of the
image, which is now obsolete.

Fixes #1035
2025-03-20 17:24:42 +02:00
Alex Pyrgiotis
e1dbdff1da
Completely overhaul the reproduce-image.py script
Make a major change to the `reproduce-image.py` script: drop `diffoci`,
build the container image, and ensure it has the exact same hash as the
source image.

We can drop the `diffoci` script when comparing the two images, because
we are now able to build bit-for-bit reproducible images.
2025-03-20 17:17:46 +02:00
Alex Pyrgiotis
a1402d5b6b
Fix a Podman regression regarding Buildkit images
Loading an image built with Buildkit in Podman 3.4 messes up its name.
The tag somehow becomes the name of the loaded image.

We know that older Podman versions are not generally affected, since
Podman v3.0.1 on Debian Bullseye works properly. Also, Podman v4.0 is
not affected, so it makes sense to target only Podman v3.4 for a fix.

The fix is simple: tag the image properly based on the expected tag from
`share/image-id.txt` and delete the incorrect tag.
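
A sketch of the workaround as described, shelling out to podman; the actual fix lives in Dangerzone's container code, and these details are assumptions:

```python
import subprocess

def fix_loaded_image_tag(loaded_ref: str, expected_ref: str) -> None:
    # Tag the image under the name we expect...
    subprocess.run(["podman", "tag", loaded_ref, expected_ref], check=True)
    # ...then drop only the mangled name (untag, don't remove the image).
    subprocess.run(["podman", "untag", expected_ref, loaded_ref], check=True)
```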

Refs containers/podman#16490
2025-03-20 17:17:40 +02:00
Alex Pyrgiotis
51f432be6b
Fix references to container.tar.gz
Find all references to the `container.tar.gz` file, and replace them
with references to `container.tar`. Moreover, remove the `--no-save`
argument of `build-image.py` since we now always save the image.

Finally, fix some stale references to Poetry, which are not necessary
anymore.
2025-03-20 17:15:15 +02:00
Alex Pyrgiotis
69234507c4
Build container image using repro-build
Invoke the `repro-build` script when building a container image, instead
of the underlying Docker/Podman commands. The `repro-build` script
handles the underlying complexity to call Docker/Podman in a manner that
makes the image reproducible.

Moreover, mirror some arguments from the `repro-build` script, so that
consumers of `build-image.py` can pass them to it.

Important: the resulting image will be in .tar format, not .tar.gz,
starting from this commit. This means that our tests will be broken for
the next few commits.

Fixes #1074
2025-03-20 17:15:15 +02:00
Alex Pyrgiotis
94fad78f94
Vendor repro-build script
Vendor the `repro-build` script in our codebase, which will be used to
build our container image in a reproducible manner. We prefer to copy it
verbatim for the time being, since its interface is not stable enough,
and the repro-build repo is not reviewed after all.

In the future, we want to store this script in a separate place, and
pull it when necessary.

Refs #1085
2025-03-20 17:15:15 +02:00
Alex Pyrgiotis
66600f32dc
Remove sources of non-determinism from our image
Make our container image more reproducible, by changing the following in
our Dockerfile:
1. Touch `/etc/apt/sources.list` with a UTC timestamp. Else, builds on
   different countries (!?) may result to different Unix epochs for the
   same date, and therefore different modification time for the
   file.
2. Turn the third column of `/etc/shadow` (date of last password change)
   for the `dangerzone` user into a constant number.
3. Fix r-s file permissions in some copied files, due to inconsistent
   COPY behavior in containerized vs non-containerized Buildkit. This
   requires creating a full file hierarchy in a separate directory (see
   new_root/).
4. Set a specific modification time for the entrypoint script, because
   rewrite-timestamp=true does not overwrite it.
2025-03-20 17:15:15 +02:00
Alex Pyrgiotis
d41f604969
Bump container image parameters
Bump all the values in Dockerfile.env, since there are new releases out
for all of them.
2025-03-20 17:15:15 +02:00
Alex Pyrgiotis
6d269572ae
Add support for Ubuntu 25.04 (plucky)
Closes #1090
2025-03-20 16:56:58 +02:00
Alex Pyrgiotis
c7ba9ee75c
Add support for Fedora 42
Closes #1091
2025-03-20 16:53:37 +02:00
Alexis Métaireau
418b68d4ca
Avoid passing the wrong -B option to subprocesses
This is a common pitfall of PyInstaller when using multiprocessing.

In our case, the spawned process is passed the -B option, as if it were
Python (but it's Dangerzone).

> -B     Don't write .pyc files on import. See also PYTHONDONTWRITEBYTECODE.

As a result, dangerzone is spawned with the -B option, which doesn't
mean anything for it.

> In the frozen application, sys.executable points to your application
> executable. So when the multiprocessing module in your main process
> attempts to spawn a subprocess (a worker or the resource tracker), it
> runs another instance of your program, with the following arguments for
> resource tracker:
>
> my_program -B -S -I -c "from multiprocessing.resource_tracker import main;main(5)"

https://pyinstaller.org/en/stable/common-issues-and-pitfalls.html#multi-processing
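
A common mitigation from the PyInstaller docs is to call `multiprocessing.freeze_support()` first thing in the entry point, so re-spawned worker invocations are handled there instead of reaching Dangerzone's own argument handling; whether this commit's fix takes exactly that shape is an assumption:

```python
import multiprocessing

def main() -> None:
    ...  # placeholder for the real entry point

if __name__ == "__main__":
    # In a frozen app, this recognizes multiprocessing's re-spawned
    # invocations (workers, resource tracker) and exits early for them.
    multiprocessing.freeze_support()
    main()
```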
2025-03-17 17:47:42 +01:00
Alex Pyrgiotis
9ba95b5c20
Use correct Ubuntu version for conmon notice
2025-03-17 15:40:25 +02:00
Alex Pyrgiotis
b043c97c41
Unpin the Debian-vendored PyMuPDF package
Unpin the PyMuPDF package that we vendor in our Debian packages. We
originally pinned it to version 1.24.11, because it was the last version
that supported Ubuntu Focal, but we can now unpin it, since we have
dropped Ubuntu Focal support.

Fixes #1018
2025-03-17 15:40:25 +02:00
Alex Pyrgiotis
4a48a2551b
Drop Ubuntu 20.04 (Focal) support
Drop Ubuntu 20.04 (Focal) support, because it's nearing its end-of-life
date. By doing so, we can remove several workarounds and notices we had
in place for this version, and most importantly, remove the pin to our
vendored PyMuPDF package.

Refs #1018
Refs #965
2025-03-17 15:40:25 +02:00
Alex Pyrgiotis
56663023f5
ci: Security scan ARM images
Scan ARM images using Anchore's scan action, by utilizing the Ubuntu ARM
runners provided by GitHub. While our ARM images are used only on macOS
Apple Silicon platforms, we can use the Ubuntu ARM runners just for scanning.

Closes #1008
2025-03-10 18:45:26 +02:00
Alex Pyrgiotis
53a952235c
Specify version when installing WiX
Update our CI job and build instructions with the latest WiX version, so
that we don't encounter any installation issues when new WiX versions
are released.

Also, add a reminder in our release instructions to bump the WiX version
before we start a new release.

Fixes #1087
2025-03-10 18:03:24 +02:00
Erik Moeller
d2652ef6cd
Add reference to funding.json (required by floss.fund application)
2025-03-06 15:54:36 +01:00
Alex Pyrgiotis
a6aa66f925
Remove a stale Shiboken6 pin
Remove the Shiboken6 pin for our Linux and macOS platforms, since a new
upstream package has been released that has wheels for every platform.

Also, remove the `sed` command from our dangerzone.spec, whose purpose
was to nullify this pin for our Fedora packages.

Fixes #1061
2025-02-19 11:43:30 +02:00
117 changed files with 4994 additions and 1737 deletions

```
@@ -21,7 +21,7 @@ body:
       label: Linux distribution
       description: |
         What is the name and version of your Linux distribution? You can find it out with `cat /etc/os-release`
-      placeholder: Ubuntu 20.04.6 LTS
+      placeholder: Ubuntu 22.04.5 LTS
     validations:
       required: true
   - type: textarea
```

.github/workflows/build-push-image.yml (new file, 248 lines)

```yaml
name: Build and push multi-arch container image

on:
  workflow_call:
    inputs:
      registry:
        required: true
        type: string
      registry_user:
        required: true
        type: string
      image_name:
        required: true
        type: string
      reproduce:
        required: true
        type: boolean
    secrets:
      registry_token:
        required: true

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install dev. dependencies
        run: |-
          sudo apt-get update
          sudo apt-get install -y git python3-poetry --no-install-recommends
          poetry install --only package
      - name: Verify that the Dockerfile matches the commited template and params
        run: |-
          cp Dockerfile Dockerfile.orig
          make Dockerfile
          diff Dockerfile.orig Dockerfile

  prepare:
    runs-on: ubuntu-latest
    outputs:
      debian_archive_date: ${{ steps.params.outputs.debian_archive_date }}
      source_date_epoch: ${{ steps.params.outputs.source_date_epoch }}
      image: ${{ steps.params.outputs.full_image_name }}
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Compute image parameters
        id: params
        run: |
          source Dockerfile.env
          DEBIAN_ARCHIVE_DATE=$(date -u +'%Y%m%d')
          SOURCE_DATE_EPOCH=$(date -u -d ${DEBIAN_ARCHIVE_DATE} +"%s")
          TAG=${DEBIAN_ARCHIVE_DATE}-$(git describe --long --first-parent | tail -c +2)
          FULL_IMAGE_NAME=${{ inputs.registry }}/${{ inputs.image_name }}:${TAG}
          echo "debian_archive_date=${DEBIAN_ARCHIVE_DATE}" >> $GITHUB_OUTPUT
          echo "source_date_epoch=${SOURCE_DATE_EPOCH}" >> $GITHUB_OUTPUT
          echo "tag=${DEBIAN_ARCHIVE_DATE}-${TAG}" >> $GITHUB_OUTPUT
          echo "full_image_name=${FULL_IMAGE_NAME}" >> $GITHUB_OUTPUT
          echo "buildkit_image=${BUILDKIT_IMAGE}" >> $GITHUB_OUTPUT

  build:
    name: Build ${{ matrix.platform.name }} image
    runs-on: ${{ matrix.platform.runs-on }}
    needs:
      - prepare
    outputs:
      debian_archive_date: ${{ needs.prepare.outputs.debian_archive_date }}
      source_date_epoch: ${{ needs.prepare.outputs.source_date_epoch }}
      image: ${{ needs.prepare.outputs.image }}
    strategy:
      fail-fast: false
      matrix:
        platform:
          - runs-on: "ubuntu-24.04"
            name: "linux/amd64"
          - runs-on: "ubuntu-24.04-arm"
            name: "linux/arm64"
    steps:
      - uses: actions/checkout@v4
      - name: Prepare
        run: |
          platform=${{ matrix.platform.name }}
          echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
      - name: Login to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ inputs.registry_user }}
          password: ${{ secrets.registry_token }}
      # Instructions for reproducibly building a container image are taken from:
      # https://github.com/freedomofpress/repro-build?tab=readme-ov-file#build-and-push-a-container-image-on-github-actions
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
        with:
          driver-opts: image=${{ needs.prepare.outputs.buildkit_image }}
      - name: Build and push by digest
        id: build
        uses: docker/build-push-action@v6
        with:
          context: ./dangerzone/
          file: Dockerfile
          build-args: |
            DEBIAN_ARCHIVE_DATE=${{ needs.prepare.outputs.debian_archive_date }}
            SOURCE_DATE_EPOCH=${{ needs.prepare.outputs.source_date_epoch }}
          provenance: false
          outputs: type=image,"name=${{ inputs.registry }}/${{ inputs.image_name }}",push-by-digest=true,push=true,rewrite-timestamp=true,name-canonical=true
          cache-from: type=gha
          cache-to: type=gha,mode=max
      - name: Export digest
        run: |
          mkdir -p ${{ runner.temp }}/digests
          digest="${{ steps.build.outputs.digest }}"
          touch "${{ runner.temp }}/digests/${digest#sha256:}"
          echo "Image digest is: ${digest}"
      - name: Upload digest
        uses: actions/upload-artifact@v4
        with:
          name: digests-${{ env.PLATFORM_PAIR }}
          path: ${{ runner.temp }}/digests/*
          if-no-files-found: error
          retention-days: 1

  merge:
    runs-on: ubuntu-latest
    needs:
      - build
    outputs:
      debian_archive_date: ${{ needs.build.outputs.debian_archive_date }}
      source_date_epoch: ${{ needs.build.outputs.source_date_epoch }}
      image: ${{ needs.build.outputs.image }}
      digest_root: ${{ steps.image.outputs.digest_root }}
      digest_amd64: ${{ steps.image.outputs.digest_amd64 }}
      digest_arm64: ${{ steps.image.outputs.digest_arm64 }}
    steps:
      - uses: actions/checkout@v4
      - name: Download digests
        uses: actions/download-artifact@v4
        with:
          path: ${{ runner.temp }}/digests
          pattern: digests-*
          merge-multiple: true
      - name: Login to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ inputs.registry_user }}
```
password: ${{ secrets.registry_token }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
driver-opts: image=${{ env.BUILDKIT_IMAGE }}
- name: Create manifest list and push
working-directory: ${{ runner.temp }}/digests
run: |
DIGESTS=$(printf '${{ needs.build.outputs.image }}@sha256:%s ' *)
docker buildx imagetools create -t ${{ needs.build.outputs.image }} ${DIGESTS}
- name: Inspect image
id: image
run: |
# Inspect the image
docker buildx imagetools inspect ${{ needs.build.outputs.image }}
docker buildx imagetools inspect ${{ needs.build.outputs.image }} --format "{{json .Manifest}}" > manifest
# Calculate and print the digests
digest_root=$(jq -r .digest manifest)
digest_amd64=$(jq -r '.manifests[] | select(.platform.architecture=="amd64") | .digest' manifest)
digest_arm64=$(jq -r '.manifests[] | select(.platform.architecture=="arm64") | .digest' manifest)
echo "The image digests are:"
echo " Root: $digest_root"
echo " linux/amd64: $digest_amd64"
echo " linux/arm64: $digest_arm64"
# NOTE: Set the digests as an output because the `env` context is not
# available to the inputs of a reusable workflow call.
echo "digest_root=$digest_root" >> "$GITHUB_OUTPUT"
echo "digest_amd64=$digest_amd64" >> "$GITHUB_OUTPUT"
echo "digest_arm64=$digest_arm64" >> "$GITHUB_OUTPUT"
# This step calls the container workflow to generate provenance and push it to
# the container registry.
provenance:
needs:
- merge
strategy:
matrix:
manifest_type:
- root
- amd64
- arm64
permissions:
actions: read # for detecting the Github Actions environment.
id-token: write # for creating OIDC tokens for signing.
packages: write # for uploading attestations.
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_container_slsa3.yml@v2.1.0
with:
digest: ${{ needs.merge.outputs[format('digest_{0}', matrix.manifest_type)] }}
image: ${{ needs.merge.outputs.image }}
registry-username: ${{ inputs.registry_user }}
secrets:
registry-password: ${{ secrets.registry_token }}
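For context: the SLSA attestations pushed by this job can later be checked client-side. A hedged sketch using the `slsa-verifier` tool (the image reference and digest are placeholders, not values taken from this diff):

```bash
# Verify the provenance of a published image (placeholders, not real values).
slsa-verifier verify-image \
  "ghcr.io/freedomofpress/dangerzone/dangerzone@sha256:<digest>" \
  --source-uri github.com/freedomofpress/dangerzone
```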
# This step ensures that the image is reproducible
check-reproducibility:
if: ${{ inputs.reproduce }}
needs:
- merge
runs-on: ${{ matrix.platform.runs-on }}
strategy:
fail-fast: false
matrix:
platform:
- runs-on: "ubuntu-24.04"
name: "amd64"
- runs-on: "ubuntu-24.04-arm"
name: "arm64"
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Reproduce the same container image
run: |
./dev_scripts/reproduce-image.py \
--runtime \
docker \
--debian-archive-date \
${{ needs.merge.outputs.debian_archive_date }} \
--platform \
linux/${{ matrix.platform.name }} \
${{ needs.merge.outputs[format('digest_{0}', matrix.platform.name)] }}
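The same reproducibility check can be run outside CI. A minimal sketch built from the flags used above (the digest is a placeholder; the Debian archive date matches the one in Dockerfile.env):

```bash
# Rebuild the image locally and compare it against the published digest.
./dev_scripts/reproduce-image.py \
  --runtime docker \
  --debian-archive-date 20250331 \
  --platform linux/amd64 \
  sha256:<digest>
```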


@ -33,14 +33,14 @@ jobs:
strategy:
matrix:
include:
- distro: ubuntu
version: "20.04"
- distro: ubuntu
version: "22.04"
- distro: ubuntu
version: "24.04"
- distro: ubuntu
version: "24.10"
- distro: ubuntu
version: "25.04"
- distro: debian
version: bullseye
- distro: debian
@ -51,6 +51,8 @@ jobs:
version: "40"
- distro: fedora
version: "41"
- distro: fedora
version: "42"
steps:
- name: Checkout
@ -85,19 +87,12 @@ jobs:
id: cache-container-image
uses: actions/cache@v4
with:
key: v4-${{ steps.date.outputs.date }}-${{ hashFiles('Dockerfile', 'dangerzone/conversion/*.py', 'dangerzone/container_helpers/*', 'install/common/build-image.py') }}
key: v5-${{ steps.date.outputs.date }}-${{ hashFiles('Dockerfile', 'dangerzone/conversion/*.py', 'dangerzone/container_helpers/*', 'install/common/build-image.py') }}
path: |
share/container.tar.gz
share/container.tar
share/image-id.txt
- name: Build and push Dangerzone image
- name: Build Dangerzone image
if: ${{ steps.cache-container-image.outputs.cache-hit != 'true' }}
run: |
sudo apt-get install -y python3-poetry
python3 ./install/common/build-image.py
echo ${{ github.token }} | podman login ghcr.io -u USERNAME --password-stdin
gunzip -c share/container.tar.gz | podman load
tag=$(cat share/image-id.txt)
podman push \
dangerzone.rocks/dangerzone:$tag \
${{ env.IMAGE_REGISTRY }}/dangerzone/dangerzone:tag


@ -1,6 +1,7 @@
name: Check branch conformity
on:
pull_request:
types: ["opened", "labeled", "unlabeled", "reopened", "synchronize"]
jobs:
prevent-fixup-commits:
@ -20,16 +21,10 @@ jobs:
check-changelog:
runs-on: ubuntu-latest
name: Ensure CHANGELOG.md is populated for user-visible changes
steps:
- name: Checkout code
uses: actions/checkout@v4
# Pin the GitHub action to a specific commit that we have audited and know
# how it works.
- uses: tarides/changelog-check-action@509965da3b8ac786a5e2da30c2ccf9661189121f
with:
fetch-depth: 0
- name: ensure CHANGELOG.md is populated
env:
BASE_REF: ${{ github.event.pull_request.base.ref }}
shell: bash
run: |
if git diff --exit-code "origin/${BASE_REF}" -- CHANGELOG.md; then
echo "::warning::No CHANGELOG.md modifications were found in this pull request."
fi
changelog: CHANGELOG.md


@ -19,14 +19,14 @@ jobs:
strategy:
matrix:
include:
- distro: ubuntu
version: "25.04" # plucky
- distro: ubuntu
version: "24.10" # oracular
- distro: ubuntu
version: "24.04" # noble
- distro: ubuntu
version: "22.04" # jammy
- distro: ubuntu
version: "20.04" # focal
- distro: debian
version: "trixie" # 13
- distro: debian
@ -34,18 +34,6 @@ jobs:
- distro: debian
version: "11" # bullseye
steps:
- name: Add Podman repo for Ubuntu Focal
if: matrix.distro == 'ubuntu' && matrix.version == 20.04
run: |
apt-get update && apt-get -y install curl wget gnupg2
. /etc/os-release
sh -c "echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/ /' \
> /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list"
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/xUbuntu_${VERSION_ID}/Release.key -O- \
| apt-key add -
apt update
apt-get install python-all -y
- name: Add packages.freedom.press PGP key (gpg)
if: matrix.version != 'trixie'
run: |
@ -93,6 +81,8 @@ jobs:
version: 40
- distro: fedora
version: 41
- distro: fedora
version: 42
steps:
- name: Add packages.freedom.press to our YUM sources
run: |


@ -59,9 +59,9 @@ jobs:
id: cache-container-image
uses: actions/cache@v4
with:
key: v4-${{ steps.date.outputs.date }}-${{ hashFiles('Dockerfile', 'dangerzone/conversion/*.py', 'dangerzone/container_helpers/*', 'install/common/build-image.py') }}
key: v5-${{ steps.date.outputs.date }}-${{ hashFiles('Dockerfile', 'dangerzone/conversion/*.py', 'dangerzone/container_helpers/*', 'install/common/build-image.py') }}
path: |-
share/container.tar.gz
share/container.tar
share/image-id.txt
- name: Build Dangerzone container image
@ -72,8 +72,8 @@ jobs:
- name: Upload container image
uses: actions/upload-artifact@v4
with:
name: container.tar.gz
path: share/container.tar.gz
name: container.tar
path: share/container.tar
download-tessdata:
name: Download and cache Tesseract data
@ -125,9 +125,9 @@ jobs:
with:
dotnet-version: "8.x"
- name: Install WiX Toolset
run: dotnet tool install --global wix
run: dotnet tool install --global wix --version 5.0.2
- name: Add WiX UI extension
run: wix extension add --global WixToolset.UI.wixext
run: wix extension add --global WixToolset.UI.wixext/5.0.2
- name: Build the MSI installer
# NOTE: This also builds the .exe internally.
run: poetry run .\install\windows\build-app.bat
@ -186,14 +186,14 @@ jobs:
strategy:
matrix:
include:
- distro: ubuntu
version: "20.04"
- distro: ubuntu
version: "22.04"
- distro: ubuntu
version: "24.04"
- distro: ubuntu
version: "24.10"
- distro: ubuntu
version: "25.04"
- distro: debian
version: bullseye
- distro: debian
@ -226,9 +226,9 @@ jobs:
- name: Restore container cache
uses: actions/cache/restore@v4
with:
key: v4-${{ steps.date.outputs.date }}-${{ hashFiles('Dockerfile', 'dangerzone/conversion/*.py', 'dangerzone/container_helpers/*', 'install/common/build-image.py') }}
key: v5-${{ steps.date.outputs.date }}-${{ hashFiles('Dockerfile', 'dangerzone/conversion/*.py', 'dangerzone/container_helpers/*', 'install/common/build-image.py') }}
path: |-
share/container.tar.gz
share/container.tar
share/image-id.txt
fail-on-cache-miss: true
@ -255,14 +255,14 @@ jobs:
strategy:
matrix:
include:
- distro: ubuntu
version: "20.04"
- distro: ubuntu
version: "22.04"
- distro: ubuntu
version: "24.04"
- distro: ubuntu
version: "24.10"
- distro: ubuntu
version: "25.04"
- distro: debian
version: bullseye
- distro: debian
@ -310,7 +310,7 @@ jobs:
strategy:
matrix:
distro: ["fedora"]
version: ["40", "41"]
version: ["40", "41", "42"]
steps:
- name: Checkout
uses: actions/checkout@v4
@ -333,9 +333,9 @@ jobs:
- name: Restore container image
uses: actions/cache/restore@v4
with:
key: v4-${{ steps.date.outputs.date }}-${{ hashFiles('Dockerfile', 'dangerzone/conversion/*.py', 'dangerzone/container_helpers/*', 'install/common/build-image.py') }}
key: v5-${{ steps.date.outputs.date }}-${{ hashFiles('Dockerfile', 'dangerzone/conversion/*.py', 'dangerzone/container_helpers/*', 'install/common/build-image.py') }}
path: |-
share/container.tar.gz
share/container.tar
share/image-id.txt
fail-on-cache-miss: true
@ -383,14 +383,14 @@ jobs:
strategy:
matrix:
include:
- distro: ubuntu
version: "20.04"
- distro: ubuntu
version: "22.04"
- distro: ubuntu
version: "24.04"
- distro: ubuntu
version: "24.10"
- distro: ubuntu
version: "25.04"
- distro: debian
version: bullseye
- distro: debian
@ -401,6 +401,8 @@ jobs:
version: "40"
- distro: fedora
version: "41"
- distro: fedora
version: "42"
steps:
- name: Checkout
@ -428,9 +430,9 @@ jobs:
- name: Restore container image
uses: actions/cache/restore@v4
with:
key: v4-${{ steps.date.outputs.date }}-${{ hashFiles('Dockerfile', 'dangerzone/conversion/*.py', 'dangerzone/container_helpers/*', 'install/common/build-image.py') }}
key: v5-${{ steps.date.outputs.date }}-${{ hashFiles('Dockerfile', 'dangerzone/conversion/*.py', 'dangerzone/container_helpers/*', 'install/common/build-image.py') }}
path: |-
share/container.tar.gz
share/container.tar
share/image-id.txt
fail-on-cache-miss: true
@ -472,29 +474,10 @@ jobs:
xvfb-run -s '-ac' ./dev_scripts/env.py --distro ${{ matrix.distro }} --version ${{ matrix.version }} run --dev \
bash -c 'cd dangerzone; poetry run make test'
check-reproducibility:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Upload PDF diffs
uses: actions/upload-artifact@v4
with:
fetch-depth: 0
- name: Install dev. dependencies
run: |-
sudo apt-get update
sudo apt-get install -y git python3-poetry --no-install-recommends
poetry install --only package
- name: Verify that the Dockerfile matches the committed template and params
run: |-
cp Dockerfile Dockerfile.orig
make Dockerfile
diff Dockerfile.orig Dockerfile
- name: Build Dangerzone container image
run: |
python3 ./install/common/build-image.py --no-save
- name: Reproduce the same container image
run: |
./dev_scripts/reproduce-image.py
name: pdf-diffs-${{ matrix.distro }}-${{ matrix.version }}
path: tests/test_docs/diffs/*.jpeg
# Always run this step to publish test results, even on failures
if: ${{ always() }}


@ -0,0 +1,22 @@
name: Release multi-arch container image
on:
workflow_dispatch:
push:
branches:
- main
- "test/**"
schedule:
- cron: "0 0 * * *" # Run every day at 00:00 UTC.
jobs:
build-push-image:
uses: ./.github/workflows/build-push-image.yml
with:
registry: ghcr.io/${{ github.repository_owner }}
registry_user: ${{ github.actor }}
image_name: dangerzone/dangerzone
reproduce: true
secrets:
registry_token: ${{ secrets.GITHUB_TOKEN }}
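Besides the push and cron triggers, `workflow_dispatch` allows starting a release by hand. A hedged sketch using the GitHub CLI (using `gh` here is an assumption, not something shown in this diff):

```bash
# Manually trigger the release workflow by its name, on the main branch.
gh workflow run "Release multi-arch container image" --ref main
```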


@ -10,25 +10,23 @@ on:
jobs:
security-scan-container:
runs-on: ubuntu-latest
strategy:
matrix:
runs-on:
- ubuntu-24.04
- ubuntu-24.04-arm
runs-on: ${{ matrix.runs-on }}
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Install container build dependencies
run: |
sudo apt install pipx
pipx install poetry
pipx inject poetry poetry-plugin-export
poetry install --only package
- name: Bump date of Debian snapshot archive
run: |
date=$(date "+%Y%m%d")
sed -i "s/DEBIAN_ARCHIVE_DATE=[0-9]\+/DEBIAN_ARCHIVE_DATE=${date}/" Dockerfile.env
make Dockerfile
- name: Build container image
run: python3 ./install/common/build-image.py --runtime docker --no-save
run: |
python3 ./install/common/build-image.py \
--debian-archive-date $(date "+%Y%m%d") \
--runtime docker
docker load -i share/container.tar
- name: Get image tag
id: tag
run: echo "tag=$(cat share/image-id.txt)" >> $GITHUB_OUTPUT
@ -58,7 +56,12 @@ jobs:
severity-cutoff: critical
security-scan-app:
runs-on: ubuntu-latest
strategy:
matrix:
runs-on:
- ubuntu-24.04
- ubuntu-24.04-arm
runs-on: ${{ matrix.runs-on }}
steps:
- name: Checkout
uses: actions/checkout@v4


@ -9,11 +9,10 @@ jobs:
strategy:
matrix:
include:
- runs-on: ubuntu-latest
- runs-on: ubuntu-24.04
arch: i686
# Do not scan Silicon mac for now to avoid masking release scan results for other platforms.
# - runs-on: macos-latest
# arch: arm64
- runs-on: ubuntu-24.04-arm
arch: arm64
runs-on: ${{ matrix.runs-on }}
steps:
- name: Checkout
@ -55,7 +54,12 @@ jobs:
severity-cutoff: critical
security-scan-app:
runs-on: ubuntu-latest
strategy:
matrix:
runs-on:
- ubuntu-24.04
- ubuntu-24.04-arm
runs-on: ${{ matrix.runs-on }}
steps:
- name: Checkout
uses: actions/checkout@v4


@ -0,0 +1 @@
https://dangerzone.rocks/assets/json/funding.json


@ -34,29 +34,6 @@ Install dependencies:
</table>
<table>
<tr>
<td>
<details>
<summary><i>:memo: Expand this section if you are on Ubuntu 20.04 (Focal).</i></summary>
</br>
The default Python version that ships with Ubuntu Focal (3.8) is not
compatible with PySide6, which requires Python 3.9 or greater.
You can install Python 3.9 using the `python3.9` package.
```bash
sudo apt install -y python3.9
```
Poetry will automatically pick up the correct version when running.
</details>
</td>
</tr>
</table>
```sh
sudo apt install -y podman dh-python build-essential make libqt6gui6 \
pipx python3 python3-dev
@ -132,33 +109,11 @@ sudo dnf install -y rpm-build podman python3 python3-devel python3-poetry-core \
pipx qt6-qtbase-gui
```
<table>
<tr>
<td>
<details>
<summary><i>:memo: Expand this section if you are on Fedora 41.</i></summary>
</br>
The default Python version that ships with Fedora 41 (3.13) is not
compatible with PySide6, which requires Python 3.12 or earlier.
You can install Python 3.12 using the `python3.12` package.
```bash
sudo dnf install -y python3.12
```
Poetry will automatically pick up the correct version when running.
</details>
</td>
</tr>
</table>
Install Poetry using `pipx`:
```sh
pipx install poetry
pipx inject poetry poetry-plugin-export
pipx inject poetry
```
Clone this repository:
@ -232,27 +187,27 @@ Overview of the qubes you'll create:
|--------------|----------|---------|
| dz | app qube | Dangerzone development |
| dz-dvm | app qube | offline disposable template for performing conversions |
| fedora-40-dz | template | template for the other two qubes |
| fedora-41-dz | template | template for the other two qubes |
#### In `dom0`:
The following instructions require typing commands in a terminal in dom0.
1. Create a new Fedora **template** (`fedora-40-dz`) for Dangerzone development:
1. Create a new Fedora **template** (`fedora-41-dz`) for Dangerzone development:
```
qvm-clone fedora-40 fedora-40-dz
qvm-clone fedora-41 fedora-41-dz
```
> :bulb: Alternatively, you can use your base Fedora 40 template in the
> following instructions. In that case, skip this step and replace
> `fedora-40-dz` with `fedora-40` in the steps below.
> `fedora-41-dz` with `fedora-41` in the steps below.
2. Create an offline disposable template (app qube) called `dz-dvm`, based on the `fedora-40-dz`
2. Create an offline disposable template (app qube) called `dz-dvm`, based on the `fedora-41-dz`
template. This will be the qube where the documents will be sanitized:
```
qvm-create --class AppVM --label red --template fedora-40-dz \
qvm-create --class AppVM --label red --template fedora-41-dz \
--prop netvm="" --prop template_for_dispvms=True \
--prop default_dispvm='' dz-dvm
```
@ -261,7 +216,7 @@ The following instructions require typing commands in a terminal in dom0.
and initiating the sanitization process:
```
qvm-create --class AppVM --label red --template fedora-40-dz dz
qvm-create --class AppVM --label red --template fedora-41-dz dz
qvm-volume resize dz:private $(numfmt --from=auto 20Gi)
```
@ -306,12 +261,12 @@ test it.
./install/linux/build-rpm.py --qubes
```
4. Copy the produced `.rpm` file into `fedora-40-dz`
4. Copy the produced `.rpm` file into `fedora-41-dz`
```sh
qvm-copy dist/*.x86_64.rpm
```
#### In the `fedora-40-dz` template
#### In the `fedora-41-dz` template
1. Install the `.rpm` package you just copied
@ -319,7 +274,7 @@ test it.
sudo dnf install ~/QubesIncoming/dz/*.rpm
```
2. Shutdown the `fedora-40-dz` template
2. Shutdown the `fedora-41-dz` template
### Developing Dangerzone
@ -350,7 +305,7 @@ For changes in the server side components, you can simply edit them locally,
and they will be mirrored to the disposable qube through the `dz.ConvertDev`
RPC call.
The only reason to build a new Qubes RPM and install it in the `fedora-40-dz`
The only reason to build a new Qubes RPM and install it in the `fedora-41-dz`
template for development is if:
1. The project requires new server-side components.
2. The code for `qubes/dz.ConvertDev` needs to be updated.
@ -371,7 +326,7 @@ cd dangerzone
Install Python dependencies:
```sh
python3 -m pip install poetry poetry-plugin-export
python3 -m pip install poetry
poetry install
```
@ -432,7 +387,7 @@ Install Microsoft Visual C++ 14.0 or greater. Get it with ["Microsoft C++ Build
Install [poetry](https://python-poetry.org/). Open PowerShell, and run:
```
python -m pip install poetry poetry-plugin-export
python -m pip install poetry
```
Install git from [here](https://git-scm.com/download/win), open a Windows terminal (`cmd.exe`) and clone this repository:
@ -478,13 +433,13 @@ poetry shell
Install [.NET SDK](https://dotnet.microsoft.com/en-us/download) version 6 or later. Then, open a terminal and install the latest version of [WiX Toolset .NET tool](https://wixtoolset.org/) **v5** with:
```sh
dotnet tool install --global wix --version 5.*
dotnet tool install --global wix --version 5.0.2
```
Install the WiX UI extension. You may need to open a new terminal in order to use the newly installed `wix` .NET tool:
```sh
wix extension add --global WixToolset.UI.wixext/5.x.y
wix extension add --global WixToolset.UI.wixext/5.0.2
```
> [!IMPORTANT]


@ -5,9 +5,70 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
since 0.4.1, and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased](https://github.com/freedomofpress/dangerzone/compare/v0.8.1...HEAD)
## [Unreleased](https://github.com/freedomofpress/dangerzone/compare/v0.9.0...HEAD)
-
## [0.9.0](https://github.com/freedomofpress/dangerzone/compare/v0.9.0...0.8.1)
### Added
- Platform support: Add support for Fedora 42 ([#1091](https://github.com/freedomofpress/dangerzone/issues/1091))
- Platform support: Add support for Ubuntu 25.04 (Plucky Puffin) ([#1090](https://github.com/freedomofpress/dangerzone/issues/1090))
- (experimental): It is now possible to specify a custom container runtime in
the settings, by using the `container_runtime` key. It should contain the path
to the container runtime you want to use. Please note that this doesn't mean
we support more container runtimes than Podman and Docker for the time being,
but enables you to choose which one you want to use, independently of your
platform. ([#925](https://github.com/freedomofpress/dangerzone/issues/925))
- Document Operating System support [#986](https://github.com/freedomofpress/dangerzone/issues/986)
- Tests: Look for regressions when converting PDFs [#321](https://github.com/freedomofpress/dangerzone/issues/321)
- Ensure container image reproducibility across different container runtimes and versions ([#1074](https://github.com/freedomofpress/dangerzone/issues/1074))
- Implement container image attestations ([#1035](https://github.com/freedomofpress/dangerzone/issues/1035))
- Inform user of outdated Docker Desktop Version ([#693](https://github.com/freedomofpress/dangerzone/issues/693))
- Add support for Python 3.13 ([#992](https://github.com/freedomofpress/dangerzone/issues/992))
- Publish the built artifacts in our CI pipelines ([#972](https://github.com/freedomofpress/dangerzone/pull/972))
### Fixed
- Fix our Debian Trixie installation instructions using Sequoia PGP ([#1052](https://github.com/freedomofpress/dangerzone/issues/1052))
- Fix the way multiprocessing works on macOS ([#873](https://github.com/freedomofpress/dangerzone/issues/873))
- Update minimum Docker Desktop version to fix a stdout truncation issue ([#1101](https://github.com/freedomofpress/dangerzone/issues/1101))
### Removed
- Platform support: Drop support for Ubuntu Focal, since it's nearing end-of-life ([#1018](https://github.com/freedomofpress/dangerzone/issues/1018))
- Platform support: Drop support for Fedora 39 ([#999](https://github.com/freedomofpress/dangerzone/issues/999))
### Changed
- Switch base image to Debian Stable ([#1046](https://github.com/freedomofpress/dangerzone/issues/1046))
- Track image tags instead of image IDs in `image-id.txt` ([#1020](https://github.com/freedomofpress/dangerzone/issues/1020))
- Migrate to Wix 4 (windows building tool) ([#602](https://github.com/freedomofpress/dangerzone/issues/602)).
Thanks [@jkarasti](https://github.com/jkarasti) for the contribution.
- Add a `--debug` flag to the CLI to help retrieve more logs ([#941](https://github.com/freedomofpress/dangerzone/pull/941))
- The `debian` base image is now fetched by digest. As a result, your local
container storage will no longer show a tag for this dependency
([#1116](https://github.com/freedomofpress/dangerzone/pull/1116)).
Thanks [@sudoforge](https://github.com/sudoforge) for the contribution.
- The `debian` base image is now referenced with a fully qualified URI,
including the registry hostname ([#1118](https://github.com/freedomofpress/dangerzone/pull/1118)).
Thanks [@sudoforge](https://github.com/sudoforge) for the contribution.
- Update the Dangerzone container image and its dependencies (gVisor, Debian base image, H2Orestart) to the latest versions:
* Debian image release: `bookworm-20250317-slim@sha256:1209d8fd77def86ceb6663deef7956481cc6c14a25e1e64daec12c0ceffcc19d`
* Debian snapshots date: `2025-03-31`
* gVisor release date: `2025-03-26`
* H2Orestart plugin: `v0.7.2` (`d09bc5c93fe2483a7e4a57985d2a8d0e4efae2efb04375fe4b59a68afd7241e2`)
### Development changes
- Make container image scanning work for Silicon macOS ([#1008](https://github.com/freedomofpress/dangerzone/issues/1008))
- Automate the main bulk of our release tasks ([#1016](https://github.com/freedomofpress/dangerzone/issues/1016))
- CI: Enforce updating the CHANGELOG in the CI ([#1108](https://github.com/freedomofpress/dangerzone/pull/1108))
- Add reference to funding.json (required by floss.fund application) ([#1092](https://github.com/freedomofpress/dangerzone/pull/1092))
- Lint: add ruff for linting and formatting ([#1029](https://github.com/freedomofpress/dangerzone/pull/1029)).
Thanks [@jkarasti](https://github.com/jkarasti) for the contribution.
- Work around a `cx_freeze` build issue ([#974](https://github.com/freedomofpress/dangerzone/issues/974))
- tests: mark the hancom office suite tests for rerun on failures ([#991](https://github.com/freedomofpress/dangerzone/pull/991))
- Update reference template for Qubes to Fedora 41 ([#1078](https://github.com/freedomofpress/dangerzone/issues/1078))
## [0.8.1](https://github.com/freedomofpress/dangerzone/compare/v0.8.1...0.8.0)
@ -22,6 +83,10 @@ since 0.4.1, and this project adheres to [Semantic Versioning](https://semver.or
- Platform support: Drop support for Fedora 39, since it's end-of-life ([#999](https://github.com/freedomofpress/dangerzone/pull/999))
### Updated
- Bump `slsa-framework/slsa-github-generator` from 2.0.0 to 2.1.0 ([#1109](https://github.com/freedomofpress/dangerzone/pull/1109))
### Development changes
Thanks [@jkarasti](https://github.com/jkarasti) for the contribution.


@ -2,14 +2,14 @@
# Dockerfile args below. For more info about this file, read
# docs/developer/reproducibility.md.
ARG DEBIAN_IMAGE_DATE=20250113
ARG DEBIAN_IMAGE_DIGEST=sha256:1209d8fd77def86ceb6663deef7956481cc6c14a25e1e64daec12c0ceffcc19d
FROM debian:bookworm-${DEBIAN_IMAGE_DATE}-slim as dangerzone-image
FROM docker.io/library/debian@${DEBIAN_IMAGE_DIGEST} AS dangerzone-image
ARG GVISOR_ARCHIVE_DATE=20250120
ARG DEBIAN_ARCHIVE_DATE=20250127
ARG H2ORESTART_CHECKSUM=7760dc2963332c50d15eee285933ec4b48d6a1de9e0c0f6082946f93090bd132
ARG H2ORESTART_VERSION=v0.7.0
ARG GVISOR_ARCHIVE_DATE=20250326
ARG DEBIAN_ARCHIVE_DATE=20250331
ARG H2ORESTART_CHECKSUM=935e68671bde4ca63a364128077f1c733349bbcc90b7e6973bc7a2306494ec54
ARG H2ORESTART_VERSION=v0.7.2
ENV DEBIAN_FRONTEND=noninteractive
@ -22,8 +22,8 @@ RUN \
--mount=type=bind,source=./container_helpers/repro-sources-list.sh,target=/usr/local/bin/repro-sources-list.sh \
--mount=type=bind,source=./container_helpers/gvisor.key,target=/tmp/gvisor.key \
: "Hacky way to set a date for the Debian snapshot repos" && \
touch -d ${DEBIAN_ARCHIVE_DATE} /etc/apt/sources.list.d/debian.sources && \
touch -d ${DEBIAN_ARCHIVE_DATE} /etc/apt/sources.list && \
touch -d ${DEBIAN_ARCHIVE_DATE}Z /etc/apt/sources.list.d/debian.sources && \
touch -d ${DEBIAN_ARCHIVE_DATE}Z /etc/apt/sources.list && \
repro-sources-list.sh && \
: "Setup APT to install gVisor from its separate APT repo" && \
apt-get update && \
@ -52,9 +52,13 @@ RUN mkdir /opt/libreoffice_ext && cd /opt/libreoffice_ext \
&& rm /root/.wget-hsts
# Create an unprivileged user both for gVisor and for running Dangerzone.
# XXX: Make the shadow field "date of last password change" a constant
# number.
RUN addgroup --gid 1000 dangerzone
RUN adduser --uid 1000 --ingroup dangerzone --shell /bin/true \
--disabled-password --home /home/dangerzone dangerzone
--disabled-password --home /home/dangerzone dangerzone \
&& chage -d 99999 dangerzone \
&& rm /etc/shadow-
# Copy Dangerzone's conversion logic under /opt/dangerzone, and allow Python to
# import it.
@ -165,20 +169,50 @@ RUN mkdir /home/dangerzone/.containers
# The `ln` binary, even if you specify it by its full path, cannot run
# (probably because `ld-linux.so` can't be found). For this reason, we have
# to create the symlinks beforehand, in a previous build stage. Then, in an
# empty contianer image (scratch images), we can copy these symlinks and the
# /usr, and stich everything together.
# empty container image (scratch images), we can copy these symlinks and the
# /usr, and stitch everything together.
###############################################################################
# Create the filesystem hierarchy that will be used to symlink /usr.
RUN mkdir /new_root
RUN mkdir /new_root/root /new_root/run /new_root/tmp
RUN chmod 777 /new_root/tmp
RUN mkdir -p \
/new_root \
/new_root/root \
/new_root/run \
/new_root/tmp \
/new_root/home/dangerzone/dangerzone-image/rootfs
# Copy the /etc and /var directories under the new root directory. Also,
# copy /etc/, /opt, and /usr to the Dangerzone image rootfs.
#
# NOTE: We also have to remove the resolv.conf file, in order to not leak any
# DNS servers added there during image build time.
RUN cp -r /etc /var /new_root/ \
&& rm /new_root/etc/resolv.conf
RUN cp -r /etc /opt /usr /new_root/home/dangerzone/dangerzone-image/rootfs \
&& rm /new_root/home/dangerzone/dangerzone-image/rootfs/etc/resolv.conf
RUN ln -s /home/dangerzone/dangerzone-image/rootfs/usr /new_root/usr
RUN ln -s usr/bin /new_root/bin
RUN ln -s usr/lib /new_root/lib
RUN ln -s usr/lib64 /new_root/lib64
RUN ln -s usr/sbin /new_root/sbin
RUN ln -s usr/bin /new_root/home/dangerzone/dangerzone-image/rootfs/bin
RUN ln -s usr/lib /new_root/home/dangerzone/dangerzone-image/rootfs/lib
RUN ln -s usr/lib64 /new_root/home/dangerzone/dangerzone-image/rootfs/lib64
# Fix permissions in /home/dangerzone, so that our entrypoint script can make
# changes in the following folders.
RUN chown dangerzone:dangerzone \
/new_root/home/dangerzone \
/new_root/home/dangerzone/dangerzone-image/
# Fix permissions in /tmp, so that it can be used by unprivileged users.
RUN chmod 777 /new_root/tmp
COPY container_helpers/entrypoint.py /new_root
# HACK: For reasons we are not yet sure of, we need to explicitly specify the
# modification time of this file.
RUN touch -d ${DEBIAN_ARCHIVE_DATE}Z /new_root/entrypoint.py
## Final image
@ -188,24 +222,7 @@ FROM scratch
# /usr can be a symlink.
COPY --from=dangerzone-image /new_root/ /
# Copy the bare minimum to run Dangerzone in the inner container image.
COPY --from=dangerzone-image /etc/ /home/dangerzone/dangerzone-image/rootfs/etc/
COPY --from=dangerzone-image /opt/ /home/dangerzone/dangerzone-image/rootfs/opt/
COPY --from=dangerzone-image /usr/ /home/dangerzone/dangerzone-image/rootfs/usr/
RUN ln -s usr/bin /home/dangerzone/dangerzone-image/rootfs/bin
RUN ln -s usr/lib /home/dangerzone/dangerzone-image/rootfs/lib
RUN ln -s usr/lib64 /home/dangerzone/dangerzone-image/rootfs/lib64
# Copy the bare minimum to let the security scanner find vulnerabilities.
COPY --from=dangerzone-image /etc/ /etc/
COPY --from=dangerzone-image /var/ /var/
# Allow our entrypoint script to make changes in the following folders.
RUN chown dangerzone:dangerzone /home/dangerzone /home/dangerzone/dangerzone-image/
# Switch to the dangerzone user for the rest of the script.
USER dangerzone
COPY container_helpers/entrypoint.py /
ENTRYPOINT ["/entrypoint.py"]


@ -1,9 +1,16 @@
# Can be bumped to the latest date in https://hub.docker.com/_/debian/tags?name=bookworm-
DEBIAN_IMAGE_DATE=20250113
# Should be the INDEX DIGEST from an image tagged `bookworm-<DATE>-slim`:
# https://hub.docker.com/_/debian/tags?name=bookworm-
#
# Tag for this digest: bookworm-20250317-slim
DEBIAN_IMAGE_DIGEST=sha256:1209d8fd77def86ceb6663deef7956481cc6c14a25e1e64daec12c0ceffcc19d
# Can be bumped to today's date
DEBIAN_ARCHIVE_DATE=20250127
DEBIAN_ARCHIVE_DATE=20250331
# Can be bumped to the latest date in https://github.com/google/gvisor/tags
GVISOR_ARCHIVE_DATE=20250120
GVISOR_ARCHIVE_DATE=20250326
# Can be bumped to the latest version and checksum from https://github.com/ebandal/H2Orestart/releases
H2ORESTART_CHECKSUM=7760dc2963332c50d15eee285933ec4b48d6a1de9e0c0f6082946f93090bd132
H2ORESTART_VERSION=v0.7.0
H2ORESTART_CHECKSUM=935e68671bde4ca63a364128077f1c733349bbcc90b7e6973bc7a2306494ec54
H2ORESTART_VERSION=v0.7.2
# Buildkit image (taken from freedomofpress/repro-build)
BUILDKIT_IMAGE="docker.io/moby/buildkit:v19.0@sha256:14aa1b4dd92ea0a4cd03a54d0c6079046ea98cd0c0ae6176bdd7036ba370cbbe"
BUILDKIT_IMAGE_ROOTLESS="docker.io/moby/buildkit:v0.19.0-rootless@sha256:e901cffdad753892a7c3afb8b9972549fca02c73888cf340c91ed801fdd96d71"
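Dockerfile.env feeds the `make Dockerfile` target shown further down in this diff. A minimal sketch of the bump-and-regenerate flow (the `sed` invocation mirrors the one removed from the security-scan workflow above):

```bash
# Bump the Debian snapshot date, then regenerate the Dockerfile from its template.
sed -i "s/DEBIAN_ARCHIVE_DATE=[0-9]\+/DEBIAN_ARCHIVE_DATE=$(date +%Y%m%d)/" Dockerfile.env
make Dockerfile   # runs: poetry run jinja2 Dockerfile.in Dockerfile.env > Dockerfile
git diff Dockerfile Dockerfile.env
```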


@ -2,9 +2,9 @@
# Dockerfile args below. For more info about this file, read
# docs/developer/reproducibility.md.
ARG DEBIAN_IMAGE_DATE={{DEBIAN_IMAGE_DATE}}
ARG DEBIAN_IMAGE_DIGEST={{DEBIAN_IMAGE_DIGEST}}
FROM debian:bookworm-${DEBIAN_IMAGE_DATE}-slim as dangerzone-image
FROM docker.io/library/debian@${DEBIAN_IMAGE_DIGEST} AS dangerzone-image
ARG GVISOR_ARCHIVE_DATE={{GVISOR_ARCHIVE_DATE}}
ARG DEBIAN_ARCHIVE_DATE={{DEBIAN_ARCHIVE_DATE}}
@ -22,8 +22,8 @@ RUN \
--mount=type=bind,source=./container_helpers/repro-sources-list.sh,target=/usr/local/bin/repro-sources-list.sh \
--mount=type=bind,source=./container_helpers/gvisor.key,target=/tmp/gvisor.key \
: "Hacky way to set a date for the Debian snapshot repos" && \
touch -d ${DEBIAN_ARCHIVE_DATE} /etc/apt/sources.list.d/debian.sources && \
touch -d ${DEBIAN_ARCHIVE_DATE} /etc/apt/sources.list && \
touch -d ${DEBIAN_ARCHIVE_DATE}Z /etc/apt/sources.list.d/debian.sources && \
touch -d ${DEBIAN_ARCHIVE_DATE}Z /etc/apt/sources.list && \
repro-sources-list.sh && \
: "Setup APT to install gVisor from its separate APT repo" && \
apt-get update && \
@ -52,9 +52,13 @@ RUN mkdir /opt/libreoffice_ext && cd /opt/libreoffice_ext \
&& rm /root/.wget-hsts
# Create an unprivileged user both for gVisor and for running Dangerzone.
# XXX: Make the shadow field "date of last password change" a constant
# number.
RUN addgroup --gid 1000 dangerzone
RUN adduser --uid 1000 --ingroup dangerzone --shell /bin/true \
--disabled-password --home /home/dangerzone dangerzone
--disabled-password --home /home/dangerzone dangerzone \
&& chage -d 99999 dangerzone \
&& rm /etc/shadow-
# Copy Dangerzone's conversion logic under /opt/dangerzone, and allow Python to
# import it.
@ -165,20 +169,50 @@ RUN mkdir /home/dangerzone/.containers
# The `ln` binary, even if you specify it by its full path, cannot run
# (probably because `ld-linux.so` can't be found). For this reason, we have
# to create the symlinks beforehand, in a previous build stage. Then, in an
# empty contianer image (scratch images), we can copy these symlinks and the
# /usr, and stich everything together.
# empty container image (scratch images), we can copy these symlinks and the
# /usr, and stitch everything together.
###############################################################################
# Create the filesystem hierarchy that will be used to symlink /usr.
RUN mkdir /new_root
RUN mkdir /new_root/root /new_root/run /new_root/tmp
RUN chmod 777 /new_root/tmp
RUN mkdir -p \
/new_root \
/new_root/root \
/new_root/run \
/new_root/tmp \
/new_root/home/dangerzone/dangerzone-image/rootfs
# Copy the /etc and /var directories under the new root directory. Also,
# copy /etc/, /opt, and /usr to the Dangerzone image rootfs.
#
# NOTE: We also have to remove the resolv.conf file, in order to not leak any
# DNS servers added there during image build time.
RUN cp -r /etc /var /new_root/ \
&& rm /new_root/etc/resolv.conf
RUN cp -r /etc /opt /usr /new_root/home/dangerzone/dangerzone-image/rootfs \
&& rm /new_root/home/dangerzone/dangerzone-image/rootfs/etc/resolv.conf
RUN ln -s /home/dangerzone/dangerzone-image/rootfs/usr /new_root/usr
RUN ln -s usr/bin /new_root/bin
RUN ln -s usr/lib /new_root/lib
RUN ln -s usr/lib64 /new_root/lib64
RUN ln -s usr/sbin /new_root/sbin
RUN ln -s usr/bin /new_root/home/dangerzone/dangerzone-image/rootfs/bin
RUN ln -s usr/lib /new_root/home/dangerzone/dangerzone-image/rootfs/lib
RUN ln -s usr/lib64 /new_root/home/dangerzone/dangerzone-image/rootfs/lib64
# Fix permissions in /home/dangerzone, so that our entrypoint script can make
# changes in the following folders.
RUN chown dangerzone:dangerzone \
/new_root/home/dangerzone \
/new_root/home/dangerzone/dangerzone-image/
# Fix permissions in /tmp, so that it can be used by unprivileged users.
RUN chmod 777 /new_root/tmp
COPY container_helpers/entrypoint.py /new_root
# HACK: For reasons we are not yet sure of, we need to explicitly specify the
# modification time of this file.
RUN touch -d ${DEBIAN_ARCHIVE_DATE}Z /new_root/entrypoint.py
## Final image
@ -188,24 +222,7 @@ FROM scratch
# /usr can be a symlink.
COPY --from=dangerzone-image /new_root/ /
# Copy the bare minimum to run Dangerzone in the inner container image.
COPY --from=dangerzone-image /etc/ /home/dangerzone/dangerzone-image/rootfs/etc/
COPY --from=dangerzone-image /opt/ /home/dangerzone/dangerzone-image/rootfs/opt/
COPY --from=dangerzone-image /usr/ /home/dangerzone/dangerzone-image/rootfs/usr/
RUN ln -s usr/bin /home/dangerzone/dangerzone-image/rootfs/bin
RUN ln -s usr/lib /home/dangerzone/dangerzone-image/rootfs/lib
RUN ln -s usr/lib64 /home/dangerzone/dangerzone-image/rootfs/lib64
# Copy the bare minimum to let the security scanner find vulnerabilities.
COPY --from=dangerzone-image /etc/ /etc/
COPY --from=dangerzone-image /var/ /var/
# Allow our entrypoint script to make changes in the following folders.
RUN chown dangerzone:dangerzone /home/dangerzone /home/dangerzone/dangerzone-image/
# Switch to the dangerzone user for the rest of the script.
USER dangerzone
COPY container_helpers/entrypoint.py /
ENTRYPOINT ["/entrypoint.py"]


@ -1,7 +1,41 @@
## Operating System support
Dangerzone can run on various Operating Systems (OS), and has automated tests
for most of them.
This section explains which OS we support, how long we support each version, and
how we test Dangerzone against them.
You can find general support information in this table, and more details in the
following sections.
(Unless specified, the architecture of the OS is AMD64)
| Distribution | Supported releases | Automated tests | Manual QA |
| ------------ | ------------------------- | ---------------------- | --------- |
| Windows | 2 last releases | 🗹 (`windows-latest`) ◎ | 🗹 |
| macOS intel | 3 last releases | 🗹 (`macos-13`) ◎ | 🗹 |
| macOS silicon | 3 last releases | 🗹 (`macos-latest`) ◎ | 🗹 |
| Ubuntu | Follow upstream support ✰ | 🗹 | 🗹 |
| Debian | Current stable, Oldstable and LTS releases | 🗹 | 🗹 |
| Fedora | Follow upstream support | 🗹 | 🗹 |
| Qubes OS | [Beta support](https://github.com/freedomofpress/dangerzone/issues/413) ✢ | 🗷 | Latest Fedora template |
| Tails | Only the last release | 🗷 | Last release only |
Notes:
✰ Support for Ubuntu Focal [was dropped](https://github.com/freedomofpress/dangerzone/issues/1018)
✢ Qubes OS support assumes the use of a Fedora template. The supported releases follow our general support for Fedora.
◎ More information about where that points [in the runner-images repository](https://github.com/actions/runner-images/tree/main)
## macOS
- Download [Dangerzone 0.8.1 for Mac (Apple Silicon CPU)](https://github.com/freedomofpress/dangerzone/releases/download/v0.8.1/Dangerzone-0.8.1-arm64.dmg)
- Download [Dangerzone 0.8.1 for Mac (Intel CPU)](https://github.com/freedomofpress/dangerzone/releases/download/v0.8.1/Dangerzone-0.8.1-i686.dmg)
- Download [Dangerzone 0.9.0 for Mac (Apple Silicon CPU)](https://github.com/freedomofpress/dangerzone/releases/download/v0.9.0/Dangerzone-0.9.0-arm64.dmg)
- Download [Dangerzone 0.9.0 for Mac (Intel CPU)](https://github.com/freedomofpress/dangerzone/releases/download/v0.9.0/Dangerzone-0.9.0-i686.dmg)
> [!TIP]
> We support the releases of macOS that are still within Apple's servicing timeline. Apple usually provides security updates for the latest 3 releases, but this isn't consistently applied and security fixes aren't guaranteed for the non-latest releases. We are also dependent on [Docker Desktop macOS support](https://docs.docker.com/desktop/setup/install/mac-install/)
You can also install Dangerzone for Mac using [Homebrew](https://brew.sh/): `brew install --cask dangerzone`
@ -11,24 +45,44 @@ You can also install Dangerzone for Mac using [Homebrew](https://brew.sh/): `bre
## Windows
- Download [Dangerzone 0.8.1 for Windows](https://github.com/freedomofpress/dangerzone/releases/download/v0.8.1/Dangerzone-0.8.1.msi)
- Download [Dangerzone 0.9.0 for Windows](https://github.com/freedomofpress/dangerzone/releases/download/v0.9.0/Dangerzone-0.9.0.msi)
> **Note**: you will also need to install [Docker Desktop](https://www.docker.com/products/docker-desktop/).
> This program needs to run alongside Dangerzone at all times, since it is what allows Dangerzone to
> create the secure environment.
> [!TIP]
> We generally support Windows releases that are still within [Microsoft's servicing timeline](https://support.microsoft.com/en-us/help/13853/windows-lifecycle-fact-sheet).
>
> Docker sets the bottom line:
>
> > Docker only supports Docker Desktop on Windows for those versions of Windows that are still within [Microsoft's servicing timeline](https://support.microsoft.com/en-us/help/13853/windows-lifecycle-fact-sheet). Docker Desktop is not supported on server versions of Windows, such as Windows Server 2019 or Windows Server 2022.
## Linux
On Linux, Dangerzone uses [Podman](https://podman.io/) instead of Docker Desktop for creating
an isolated environment. It will be installed automatically when installing Dangerzone.
> [!TIP]
> We support Ubuntu, Debian, and Fedora releases that are still within
> their respective servicing timelines, with a few twists:
>
> - Ubuntu: We follow upstream support with an extra cutoff date. No support for
> versions prior to the second oldest LTS release.
> - Fedora: We follow upstream support
> - Debian: current stable, oldstable and LTS releases.
Dangerzone is available for:
- Ubuntu 25.04 (plucky)
- Ubuntu 24.10 (oracular)
- Ubuntu 24.04 (noble)
- Ubuntu 22.04 (jammy)
- Ubuntu 20.04 (focal)
- Debian 13 (trixie)
- Debian 12 (bookworm)
- Debian 11 (bullseye)
- Fedora 42
- Fedora 41
- Fedora 40
- Tails
@ -40,35 +94,7 @@ Dangerzone is available for:
<tr>
<td>
<details>
<summary><i>:memo: Expand this section if you are on Ubuntu 20.04 (Focal).</i></summary>
</br>
Dangerzone requires [Podman](https://podman.io/), which is not available
through the official Ubuntu Focal repos. To proceed with the Dangerzone
installation, you need to add an extra OpenSUSE repo that provides Podman to
Ubuntu Focal users. You can follow the instructions below, which have been
copied from the [official Podman blog](https://podman.io/new/2021/06/16/new.html):
```bash
sudo apt-get update && sudo apt-get install curl wget gnupg2 -y
. /etc/os-release
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/ /' \
> /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list"
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/xUbuntu_${VERSION_ID}/Release.key -O- \
| sudo apt-key add -
sudo apt update
```
</details>
</td>
</tr>
</table>
<table>
<tr>
<td>
<details>
<summary><i>:information_source: Backport notice for Ubuntu 24.04 (Noble) users regarding the <code>conmon</code> package</i></summary>
<summary><i>:information_source: Backport notice for Ubuntu 22.04 (Jammy) users regarding the <code>conmon</code> package</i></summary>
</br>
The `conmon` version that Podman uses and Ubuntu Jammy ships has a bug
@ -205,8 +231,8 @@ After confirming that it matches, type `y` (for yes) and the installation should
> [!IMPORTANT]
> This section will install Dangerzone in your **default template**
> (`fedora-40` as of writing this). If you want to install it in a different
> one, make sure to replace `fedora-40` with the template of your choice.
> (`fedora-41` as of writing this). If you want to install it in a different
> one, make sure to replace `fedora-41` with the template of your choice.
The following steps must be completed once. Make sure you run them in the
specified qubes.
@ -223,7 +249,7 @@ Create a **disposable**, offline app qube (`dz-dvm`), based on your default
template. This will be the qube where the documents will be sanitized:
```
qvm-create --class AppVM --label red --template fedora-40 \
qvm-create --class AppVM --label red --template fedora-41 \
--prop netvm="" --prop template_for_dispvms=True \
--prop default_dispvm='' dz-dvm
```
@ -236,7 +262,7 @@ document, with the following contents:
dz.Convert * @anyvm @dispvm:dz-dvm allow
```
#### In the `fedora-40` template
#### In the `fedora-41` template
Install Dangerzone:
@ -297,7 +323,7 @@ Our [GitHub Releases page](https://github.com/freedomofpress/dangerzone/releases
hosts the following files:
* Windows installer (`Dangerzone-<version>.msi`)
* macOS archives (`Dangerzone-<version>-<arch>.dmg`)
* Container images (`container-<version>-<arch>.tar.gz`)
* Container images (`container-<version>-<arch>.tar`)
* Source package (`dangerzone-<version>.tar.gz`)
All these files are accompanied by signatures (as `.asc` files). We'll explain
@ -325,7 +351,7 @@ gpg --verify Dangerzone-0.6.1-i686.dmg.asc Dangerzone-0.6.1-i686.dmg
For the container images:
```
gpg --verify container-0.6.1-i686.tar.gz.asc container-0.6.1-i686.tar.gz
gpg --verify container-0.6.1-i686.tar.asc container-0.6.1-i686.tar
```
For the source package:


@ -22,7 +22,7 @@ fix: ## apply all the suggestions from ruff
ruff format
.PHONY: test
test:
test: ## Run the tests
# Make each GUI test run as a separate process, to avoid segfaults due to
# shared state.
# See more in https://github.com/freedomofpress/dangerzone/issues/493
@ -47,25 +47,32 @@ test-large: test-large-init ## Run large test set
python -m pytest --tb=no tests/test_large_set.py::TestLargeSet -v $(JUNIT_FLAGS) --junitxml=$(TEST_LARGE_RESULTS)
python $(TEST_LARGE_RESULTS)/report.py $(TEST_LARGE_RESULTS)
Dockerfile: Dockerfile.env Dockerfile.in
Dockerfile: Dockerfile.env Dockerfile.in ## Regenerate the Dockerfile from its template
poetry run jinja2 Dockerfile.in Dockerfile.env > Dockerfile
.PHONY: poetry-install
poetry-install: ## Install project dependencies
poetry install
.PHONY: build-clean
build-clean:
doit clean
poetry run doit clean
.PHONY: build-macos-intel
build-macos-intel: build-clean
doit -n 8
build-macos-intel: build-clean poetry-install ## Build macOS intel package (.dmg)
poetry run doit -n 8
.PHONY: build-macos-arm
build-macos-arm: build-clean
doit -n 8 macos_build_dmg
build-macos-arm: build-clean poetry-install ## Build macOS Apple Silicon package (.dmg)
poetry run doit -n 8 macos_build_dmg
.PHONY: build-linux
build-linux: build-clean
doit -n 8 fedora_rpm debian_deb
build-linux: build-clean poetry-install ## Build linux packages (.rpm and .deb)
poetry run doit -n 8 fedora_rpm debian_deb
.PHONY: regenerate-reference-pdfs
regenerate-reference-pdfs: ## Regenerate the reference PDFs
pytest tests/test_cli.py -k regenerate --generate-reference-pdfs
# Makefile self-help borrowed from the securedrop-client project
# Explanation of the shell command below, should it ever break.
# 1. Set the field separator to ": ##" and any make targets that might appear between : and ##
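Taken together, the annotated targets map onto the common workflows; for instance:

```bash
make poetry-install   # install project dependencies via Poetry
make test             # run the test suite
make build-linux      # build the .rpm and .deb packages
```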


@ -14,13 +14,15 @@ _Read more about Dangerzone in the [official site](https://dangerzone.rocks/abou
Follow the instructions for each platform:
* [macOS](https://github.com/freedomofpress/dangerzone/blob/v0.8.1/INSTALL.md#macos)
* [Windows](https://github.com/freedomofpress/dangerzone/blob/v0.8.1//INSTALL.md#windows)
* [Ubuntu Linux](https://github.com/freedomofpress/dangerzone/blob/v0.8.1/INSTALL.md#ubuntu-debian)
* [Debian Linux](https://github.com/freedomofpress/dangerzone/blob/v0.8.1/INSTALL.md#ubuntu-debian)
* [Fedora Linux](https://github.com/freedomofpress/dangerzone/blob/v0.8.1/INSTALL.md#fedora)
* [Qubes OS (beta)](https://github.com/freedomofpress/dangerzone/blob/v0.8.0/INSTALL.md#qubes-os)
* [Tails](https://github.com/freedomofpress/dangerzone/blob/v0.8.1/INSTALL.md#tails)
* [macOS](https://github.com/freedomofpress/dangerzone/blob/v0.9.0/INSTALL.md#macos)
* [Windows](https://github.com/freedomofpress/dangerzone/blob/v0.9.0//INSTALL.md#windows)
* [Ubuntu Linux](https://github.com/freedomofpress/dangerzone/blob/v0.9.0/INSTALL.md#ubuntu-debian)
* [Debian Linux](https://github.com/freedomofpress/dangerzone/blob/v0.9.0/INSTALL.md#ubuntu-debian)
* [Fedora Linux](https://github.com/freedomofpress/dangerzone/blob/v0.9.0/INSTALL.md#fedora)
* [Qubes OS (beta)](https://github.com/freedomofpress/dangerzone/blob/v0.9.0/INSTALL.md#qubes-os)
* [Tails](https://github.com/freedomofpress/dangerzone/blob/v0.9.0/INSTALL.md#tails)
You can read more about our operating system support [here](https://github.com/freedomofpress/dangerzone/blob/v0.9.0/INSTALL.md#operating-system-support).
## Some features
@ -80,3 +82,7 @@ Dangerzone gets updates to improve its features _and_ to fix problems. So, updat
1. Check which version of Dangerzone you are currently using: run Dangerzone, then look for a series of numbers to the right of the logo within the app. The format of the numbers will look similar to `0.4.1`
2. Now find the latest available version of Dangerzone: go to the [download page](https://dangerzone.rocks/#downloads). Look for the version number displayed. The number will be using the same format as in Step 1.
3. Is the version on the Dangerzone download page higher than the version of your installed app? Go ahead and update.
### Can I use Podman Desktop?
Yes! We've introduced [experimental support for Podman Desktop](https://github.com/freedomofpress/dangerzone/blob/main/docs/podman-desktop.md) on Windows and macOS.


@ -10,15 +10,18 @@ Here is a list of tasks that should be done before issuing the release:
You can generate its content with the `poetry run ./dev_scripts/generate-release-tasks.py` command.
- [ ] [Add new Linux platforms and remove obsolete ones](https://github.com/freedomofpress/dangerzone/blob/main/RELEASE.md#add-new-linux-platforms-and-remove-obsolete-ones)
- [ ] Bump the Python dependencies using `poetry lock`
- [ ] Check for new [WiX releases](https://github.com/wixtoolset/wix/releases) and update it if needed
- [ ] Update `version` in `pyproject.toml`
- [ ] Update `share/version.txt`
- [ ] Update the "Version" field in `install/linux/dangerzone.spec`
- [ ] Bump the Debian version by adding a new changelog entry in `debian/changelog`
- [ ] [Bump the minimum Docker Desktop versions](https://github.com/freedomofpress/dangerzone/blob/main/RELEASE.md#bump-the-minimum-docker-desktop-version) in `isolation_provider/container.py`
- [ ] Bump the dates and versions in the `Dockerfile`
- [ ] Update the download links in our `INSTALL.md` page to point to the new version (the download links will be populated after the release)
- [ ] Update screenshot in `README.md`, if necessary
- [ ] CHANGELOG.md should be updated to include a list of all major changes since the last release
- [ ] A draft release should be created. Copy the release notes text from the template at [`docs/templates/release-notes`](https://github.com/freedomofpress/dangerzone/tree/main/docs/templates/)
- [ ] Send the release notes to editorial for review
- [ ] Do the QA tasks
## Add new Linux platforms and remove obsolete ones
@ -121,7 +124,7 @@ Here is what you need to do:
# In case of a new Python installation or minor version upgrade, e.g., from
# 3.11 to 3.12, reinstall Poetry
python3 -m pip install poetry poetry-plugin-export
python3 -m pip install poetry
# You can verify the correct Python version is used
poetry debug info
@ -139,7 +142,7 @@ Here is what you need to do:
poetry env remove --all
# Install the dependencies
poetry install --sync
poetry sync
```
- [ ] Build the container image and the OCR language data
@ -149,7 +152,7 @@ Here is what you need to do:
poetry run ./install/common/download-tessdata.py
# Copy the container image to the assets folder
cp share/container.tar.gz ~dz/release-assets/$VERSION/dangerzone-$VERSION-arm64.tar.gz
cp share/container.tar ~dz/release-assets/$VERSION/dangerzone-$VERSION-arm64.tar
cp share/image-id.txt ~dz/release-assets/$VERSION/.
```
@ -203,7 +206,7 @@ The Windows release is performed in a Windows 11 virtual machine (as opposed to
```bash
# In case of a new Python installation or minor version upgrade, e.g., from
# 3.11 to 3.12, reinstall Poetry
python3 -m pip install poetry poetry-plugin-export
python3 -m pip install poetry
# You can verify the correct Python version is used
poetry debug info
@ -221,12 +224,12 @@ The Windows release is performed in a Windows 11 virtual machine (as opposed to
poetry env remove --all
# Install the dependencies
poetry install --sync
poetry sync
```
- [ ] Copy the container image into the VM
> [!IMPORTANT]
> Instead of running `python .\install\windows\build-image.py` in the VM, run the build image script on the host (making sure to build for `linux/amd64`). Copy `share/container.tar.gz` and `share/image-id.txt` from the host into the `share` folder in the VM.
> Instead of running `python .\install\windows\build-image.py` in the VM, run the build image script on the host (making sure to build for `linux/amd64`). Copy `share/container.tar` and `share/image-id.txt` from the host into the `share` folder in the VM.
- [ ] Run `poetry run .\install\windows\build-app.bat`
- [ ] When you're done you will have `dist\Dangerzone.msi`
@ -317,9 +320,8 @@ To publish the release, you can follow these steps:
- [ ] Run container scan on the produced container images (some time may have passed since the artifacts were built)
```bash
gunzip --keep -c ./share/container.tar.gz > /tmp/container.tar
docker pull anchore/grype:latest
docker run --rm -v /tmp/container.tar:/container.tar anchore/grype:latest /container.tar
docker run --rm -v ./share/container.tar:/container.tar anchore/grype:latest /container.tar
```
- [ ] Collect the assets in a single directory, calculate their SHA-256 hashes, and sign them.
@ -340,7 +342,7 @@ To publish the release, you can follow these steps:
- [ ] Update the [Dangerzone website](https://github.com/freedomofpress/dangerzone.rocks) to link to the new installers.
- [ ] Update the brew cask release of Dangerzone with a [PR like this one](https://github.com/Homebrew/homebrew-cask/pull/116319)
- [ ] Update version and download links in `README.md`
- [ ] Update version and links to our installation instructions (`INSTALL.md`) in `README.md`
## Post-release


@ -4,6 +4,12 @@ import sys
logger = logging.getLogger(__name__)
# Call freeze_support() to avoid passing unknown options to the subprocess.
# See https://github.com/freedomofpress/dangerzone/issues/873
import multiprocessing
multiprocessing.freeze_support()
try:
from . import vendor # type: ignore [attr-defined]


@ -11,6 +11,7 @@ from .isolation_provider.container import Container
from .isolation_provider.dummy import Dummy
from .isolation_provider.qubes import Qubes, is_qubes_native_conversion
from .logic import DangerzoneCore
from .settings import Settings
from .util import get_version, replace_control_chars
@ -37,7 +38,7 @@ def print_header(s: str) -> None:
)
@click.argument(
"filenames",
required=True,
required=False,
nargs=-1,
type=click.UNPROCESSED,
callback=args.validate_input_filenames,
@ -48,17 +49,43 @@ def print_header(s: str) -> None:
flag_value=True,
help="Run Dangerzone in debug mode, to get logs from gVisor.",
)
@click.option(
"--set-container-runtime",
required=False,
help=(
"The name or full path of the container runtime you want Dangerzone to use."
" You can specify the value 'default' if you want to revert your choice, and"
" let Dangerzone use the default runtime for this OS"
),
)
@click.version_option(version=get_version(), message="%(version)s")
@errors.handle_document_errors
def cli_main(
output_filename: Optional[str],
ocr_lang: Optional[str],
filenames: List[str],
filenames: Optional[List[str]],
archive: bool,
dummy_conversion: bool,
debug: bool,
set_container_runtime: Optional[str] = None,
) -> None:
setup_logging()
display_banner()
if set_container_runtime:
settings = Settings()
if set_container_runtime == "default":
settings.unset_custom_runtime()
click.echo(
"Instructed Dangerzone to use the default container runtime for this OS"
)
else:
container_runtime = settings.set_custom_runtime(
set_container_runtime, autosave=True
)
click.echo(f"Set the settings container_runtime to {container_runtime}")
sys.exit(0)
elif not filenames:
raise click.UsageError("Missing argument 'FILENAMES...'")
if getattr(sys, "dangerzone_dev", False) and dummy_conversion:
dangerzone = DangerzoneCore(Dummy())
@ -67,7 +94,6 @@ def cli_main(
else:
dangerzone = DangerzoneCore(Container(debug=debug))
display_banner()
if len(filenames) == 1 and output_filename:
dangerzone.add_document_from_filename(filenames[0], output_filename, archive)
elif len(filenames) > 1 and output_filename:
@ -320,4 +346,10 @@ def display_banner() -> None:
+ Style.DIM
+ ""
)
print(Back.BLACK + Fore.YELLOW + Style.DIM + "╰──────────────────────────╯")
print(
Back.BLACK
+ Fore.YELLOW
+ Style.DIM
+ "╰──────────────────────────╯"
+ Style.RESET_ALL
)

View file

@ -1,28 +1,69 @@
import gzip
import logging
import os
import platform
import shutil
import subprocess
from typing import List, Tuple
from pathlib import Path
from typing import IO, Callable, List, Optional, Tuple
from . import errors
from .settings import Settings
from .util import get_resource_path, get_subprocess_startupinfo
CONTAINER_NAME = "dangerzone.rocks/dangerzone"
OLD_CONTAINER_NAME = "dangerzone.rocks/dangerzone"
CONTAINER_NAME = (
"ghcr.io/almet/dangerzone/dangerzone"
) # FIXME: Change this to the correct container name
log = logging.getLogger(__name__)
def get_runtime_name() -> str:
if platform.system() == "Linux":
runtime_name = "podman"
else:
# Windows, Darwin, and unknown use docker for now, dangerzone-vm eventually
runtime_name = "docker"
return runtime_name
class Runtime(object):
"""Represents the container runtime to use.
- It can be specified via the settings, using the "container_runtime" key,
which should point to the full path of the runtime;
- If the runtime is not specified via the settings, it defaults
to "podman" on Linux and "docker" on macOS and Windows.
"""
def __init__(self) -> None:
settings = Settings()
if settings.custom_runtime_specified():
self.path = Path(settings.get("container_runtime"))
if not self.path.exists():
raise errors.UnsupportedContainerRuntime(self.path)
self.name = self.path.stem
else:
self.name = self.get_default_runtime_name()
self.path = Runtime.path_from_name(self.name)
if self.name not in ("podman", "docker"):
raise errors.UnsupportedContainerRuntime(self.name)
@staticmethod
def path_from_name(name: str) -> Path:
name_path = Path(name)
if name_path.is_file():
return name_path
else:
runtime = shutil.which(name_path)
if runtime is None:
raise errors.NoContainerTechException(name)
return Path(runtime)
@staticmethod
def get_default_runtime_name() -> str:
return "podman" if platform.system() == "Linux" else "docker"
def get_runtime_version() -> Tuple[int, int]:
def subprocess_run(*args, **kwargs) -> subprocess.CompletedProcess:
"""subprocess.run with the correct startupinfo for Windows."""
return subprocess.run(*args, startupinfo=get_subprocess_startupinfo(), **kwargs)
def get_runtime_version(runtime: Optional[Runtime] = None) -> Tuple[int, int]:
"""Get the major/minor parts of the Docker/Podman version.
Some of the operations we perform in this module rely on some Podman features
@ -31,23 +72,23 @@ def get_runtime_version() -> Tuple[int, int]:
just knowing the major and minor version, since writing/installing a full-blown
semver parser is overkill.
"""
runtime = runtime or Runtime()
# Get the Docker/Podman version, using a Go template.
runtime = get_runtime_name()
if runtime == "podman":
if runtime.name == "podman":
query = "{{.Client.Version}}"
else:
query = "{{.Server.Version}}"
cmd = [runtime, "version", "-f", query]
cmd = [str(runtime.path), "version", "-f", query]
try:
version = subprocess.run(
version = subprocess_run(
cmd,
startupinfo=get_subprocess_startupinfo(),
capture_output=True,
check=True,
).stdout.decode()
except Exception as e:
msg = f"Could not get the version of the {runtime.capitalize()} tool: {e}"
msg = f"Could not get the version of the {runtime.name.capitalize()} tool: {e}"
raise RuntimeError(msg) from e
# Parse this version and return the major/minor parts, since we don't need the
@ -57,20 +98,12 @@ def get_runtime_version() -> Tuple[int, int]:
return (int(major), int(minor))
except Exception as e:
msg = (
f"Could not parse the version of the {runtime.capitalize()} tool"
f"Could not parse the version of the {runtime.name.capitalize()} tool"
f" (found: '{version}') due to the following error: {e}"
)
raise RuntimeError(msg)
def get_runtime() -> str:
container_tech = get_runtime_name()
runtime = shutil.which(container_tech)
if runtime is None:
raise errors.NoContainerTechException(container_tech)
return runtime
def list_image_tags() -> List[str]:
"""Get the tags of all loaded Dangerzone images.
@ -78,10 +111,11 @@ def list_image_tags() -> List[str]:
images. This can be useful when we want to find which are the local image tags,
and which image ID does the "latest" tag point to.
"""
runtime = Runtime()
return (
subprocess.check_output(
[
get_runtime(),
str(runtime.path),
"image",
"list",
"--format",
@ -96,54 +130,156 @@ def list_image_tags() -> List[str]:
)
def add_image_tag(image_id: str, new_tag: str) -> None:
"""Add a tag to the Dangerzone image."""
runtime = Runtime()
log.debug(f"Adding tag '{new_tag}' to image '{image_id}'")
subprocess.check_output(
[str(runtime.path), "tag", image_id, new_tag],
startupinfo=get_subprocess_startupinfo(),
)
def delete_image_tag(tag: str) -> None:
"""Delete a Dangerzone image tag."""
name = CONTAINER_NAME + ":" + tag
log.warning(f"Deleting old container image: {name}")
runtime = Runtime()
log.warning(f"Deleting old container image: {tag}")
try:
subprocess.check_output(
[get_runtime(), "rmi", "--force", name],
[str(runtime.path), "rmi", "--force", tag],
startupinfo=get_subprocess_startupinfo(),
)
except Exception as e:
log.warning(
f"Couldn't delete old container image '{name}', so leaving it there."
f"Couldn't delete old container image '{tag}', so leaving it there."
f" Original error: {e}"
)
def get_expected_tag() -> str:
"""Get the tag of the Dangerzone image tarball from the image-id.txt file."""
with open(get_resource_path("image-id.txt")) as f:
return f.read().strip()
def load_image_tarball() -> None:
runtime = Runtime()
log.info("Installing Dangerzone container image...")
p = subprocess.Popen(
[get_runtime(), "load"],
stdin=subprocess.PIPE,
startupinfo=get_subprocess_startupinfo(),
)
chunk_size = 4 << 20
compressed_container_path = get_resource_path("container.tar.gz")
with gzip.open(compressed_container_path) as f:
while True:
chunk = f.read(chunk_size)
if len(chunk) > 0:
if p.stdin:
p.stdin.write(chunk)
else:
break
_, err = p.communicate()
if p.returncode < 0:
if err:
error = err.decode()
tarball_path = get_resource_path("container.tar")
try:
res = subprocess.run(
[str(runtime.path), "load", "-i", str(tarball_path)],
startupinfo=get_subprocess_startupinfo(),
capture_output=True,
check=True,
)
except subprocess.CalledProcessError as e:
if e.stderr:
error = e.stderr.decode()
else:
error = "No output"
raise errors.ImageInstallationException(
f"Could not install container image: {error}"
)
log.info("Successfully installed container image from")
# Loading an image built with Buildkit in Podman 3.4 messes up its name. The tag
# somehow becomes the name of the loaded image [1].
#
# We know that older Podman versions are not generally affected, since Podman v3.0.1
# on Debian Bullseye works properly. Also, Podman v4.0 is not affected, so it makes
# sense to target only Podman v3.4 for a fix.
#
# The fix is simple, tag the image properly based on the expected tag from
# `share/image-id.txt` and delete the incorrect tag.
#
# [1] https://github.com/containers/podman/issues/16490
if runtime.name == "podman" and get_runtime_version(runtime) == (3, 4):
expected_tag = get_expected_tag()
bad_tag = f"localhost/{expected_tag}:latest"
good_tag = f"{CONTAINER_NAME}:{expected_tag}"
log.debug(
f"Dangerzone images loaded in Podman v3.4 usually have an invalid tag."
" Fixing it..."
)
add_image_tag(bad_tag, good_tag)
delete_image_tag(bad_tag)
def tag_image_by_digest(digest: str, tag: str) -> None:
"""Tag a container image by digest.
The sha256: prefix should be omitted from the digest.
"""
runtime = Runtime()
image_id = get_image_id_by_digest(digest)
cmd = [str(runtime.path), "tag", image_id, tag]
log.debug(" ".join(cmd))
subprocess_run(cmd, check=True)
def get_image_id_by_digest(digest: str) -> str:
"""Get an image ID from a digest.
The sha256: prefix should be omitted from the digest.
"""
runtime = Runtime()
cmd = [
str(runtime.path),
"images",
"-f",
f"digest=sha256:{digest}",
"--format",
"{{.Id}}",
]
log.debug(" ".join(cmd))
process = subprocess_run(cmd, check=True, capture_output=True)
# In case we have multiple lines, we only want the first one.
return process.stdout.decode().strip().split("\n")[0]
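As an illustration, the helper above can give a digest-pinned image a human-readable tag; the digest below is a made-up placeholder:

```python
# Made-up digest (64 hex chars, "sha256:" prefix omitted), for illustration only.
digest = "f00dfeed" * 8
tag_image_by_digest(digest, f"{CONTAINER_NAME}:20250414")
```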
def container_pull(image: str, manifest_digest: str, callback: Callable) -> None:
"""Pull a container image from a registry."""
runtime = Runtime()
cmd = [str(runtime.path), "pull", f"{image}@sha256:{manifest_digest}"]
process = subprocess.Popen(
cmd,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
text=True,
bufsize=1,
)
for line in process.stdout: # type: ignore
callback(line)
process.wait()
if process.returncode != 0:
raise errors.ContainerPullException(
f"Could not pull the container image: {process.returncode}"
)
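Because `container_pull()` streams the runtime's output line by line, the caller decides how to surface progress. A minimal sketch, reusing the placeholder digest from above:

```python
container_pull(
    CONTAINER_NAME,
    manifest_digest="f00dfeed" * 8,  # made-up placeholder digest
    callback=lambda line: print(line, end=""),  # echo progress as it arrives
)
```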
def get_local_image_digest(image: str) -> str:
"""
Returns an image hash from a local image name
"""
# Get the image hash from the "podman images" command.
# It's not possible to use "podman inspect" here as it
# returns the digest of the architecture-bound image
runtime = Runtime()
cmd = [str(runtime.path), "images", image, "--format", "{{.Digest}}"]
log.debug(" ".join(cmd))
try:
result = subprocess_run(
cmd,
capture_output=True,
check=True,
)
lines = result.stdout.decode().strip().split("\n")
if len(lines) != 1:
raise errors.MultipleImagesFoundException(
f"Expected a single line of output, got {len(lines)} lines"
)
image_digest = lines[0].replace("sha256:", "")
if not image_digest:
raise errors.ImageNotPresentException(
f"The image {image} does not exist locally"
)
return image_digest
except subprocess.CalledProcessError as e:
raise errors.ImageNotPresentException(
f"The image {image} does not exist locally"
)
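A sketch of how a caller can use this together with the exception hierarchy below, falling back to a fresh install when no image is present:

```python
try:
    digest = get_local_image_digest(CONTAINER_NAME)
except errors.ImageNotPresentException:
    digest = None  # no local image; trigger an install/pull instead
```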


@ -122,21 +122,37 @@ def handle_document_errors(func: F) -> F:
#### Container-related errors
class ImageNotPresentException(Exception):
class ContainerException(Exception):
pass
class ImageInstallationException(Exception):
class ImageNotPresentException(ContainerException):
pass
class NoContainerTechException(Exception):
class MultipleImagesFoundException(ContainerException):
pass
class ImageInstallationException(ContainerException):
pass
class NoContainerTechException(ContainerException):
def __init__(self, container_tech: str) -> None:
super().__init__(f"{container_tech} is not installed")
class NotAvailableContainerTechException(Exception):
class NotAvailableContainerTechException(ContainerException):
def __init__(self, container_tech: str, error: str) -> None:
self.error = error
self.container_tech = container_tech
super().__init__(f"{container_tech} is not available")
class UnsupportedContainerRuntime(ContainerException):
pass
class ContainerPullException(ContainerException):
pass
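The shared `ContainerException` base class lets callers handle the whole family of container failures with a single handler; a sketch (assuming the package layout `dangerzone.errors`):

```python
from dangerzone import container_utils, errors

try:
    tags = container_utils.list_image_tags()
except errors.ContainerException as e:
    # Covers a missing or unsupported runtime, missing images, pull failures, ...
    print(f"Container problem: {e}")
```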


@ -24,6 +24,8 @@ from ..document import Document
from ..isolation_provider.container import Container
from ..isolation_provider.dummy import Dummy
from ..isolation_provider.qubes import Qubes, is_qubes_native_conversion
from ..updater import errors as updater_errors
from ..updater import releases
from ..util import get_resource_path, get_version
from .logic import DangerzoneGui
from .main_window import MainWindow
@ -51,7 +53,7 @@ class Application(QtWidgets.QApplication):
def __init__(self, *args: typing.Any, **kwargs: typing.Any) -> None:
super(Application, self).__init__(*args, **kwargs)
self.setQuitOnLastWindowClosed(False)
with open(get_resource_path("dangerzone.css"), "r") as f:
with get_resource_path("dangerzone.css").open("r") as f:
style = f.read()
self.setStyleSheet(style)
@ -161,16 +163,15 @@ def gui_main(dummy_conversion: bool, filenames: Optional[List[str]]) -> bool:
window.register_update_handler(updater.finished)
log.debug("Consulting updater settings before checking for updates")
if updater.should_check_for_updates():
should_check = updater.should_check_for_updates()
if should_check:
log.debug("Checking for updates")
updater.start()
else:
log.debug("Will not check for updates, based on updater settings")
# Ensure the status of the toggle updates checkbox is updated, after the user is
# prompted to enable updates.
window.toggle_updates_action.setChecked(bool(updater.check))
window.toggle_updates_action.setChecked(should_check)
if filenames:
open_files(filenames)


@ -63,7 +63,7 @@ class DangerzoneGui(DangerzoneCore):
path = get_resource_path("dangerzone.ico")
else:
path = get_resource_path("icon.png")
return QtGui.QIcon(path)
return QtGui.QIcon(str(path))
def open_pdf_viewer(self, filename: str) -> None:
if platform.system() == "Darwin":
@ -252,7 +252,7 @@ class Alert(Dialog):
def create_layout(self) -> QtWidgets.QBoxLayout:
logo = QtWidgets.QLabel()
logo.setPixmap(
QtGui.QPixmap.fromImage(QtGui.QImage(get_resource_path("icon.png")))
QtGui.QPixmap.fromImage(QtGui.QImage(str(get_resource_path("icon.png"))))
)
label = QtWidgets.QLabel()


@ -1,3 +1,4 @@
import io
import logging
import os
import platform
@ -5,30 +6,32 @@ import tempfile
import typing
from multiprocessing.pool import ThreadPool
from pathlib import Path
from typing import List, Optional
from typing import Callable, List, Optional
# FIXME: See https://github.com/freedomofpress/dangerzone/issues/320 for more details.
if typing.TYPE_CHECKING:
from PySide2 import QtCore, QtGui, QtSvg, QtWidgets
from PySide2.QtCore import Qt
from PySide2.QtGui import QTextCursor
from PySide2.QtWidgets import QAction, QTextEdit
else:
try:
from PySide6 import QtCore, QtGui, QtSvg, QtWidgets
from PySide6.QtCore import Qt
from PySide6.QtGui import QAction
from PySide6.QtGui import QAction, QTextCursor
from PySide6.QtWidgets import QTextEdit
except ImportError:
from PySide2 import QtCore, QtGui, QtSvg, QtWidgets
from PySide2.QtCore import Qt
from PySide2.QtGui import QTextCursor
from PySide2.QtWidgets import QAction, QTextEdit
from .. import errors
from ..document import SAFE_EXTENSION, Document
from ..isolation_provider.qubes import is_qubes_native_conversion
from ..updater.releases import UpdateReport
from ..util import format_exception, get_resource_path, get_version
from .logic import Alert, CollapsibleBox, DangerzoneGui, UpdateDialog
from .updater import UpdateReport
log = logging.getLogger(__name__)
@ -55,20 +58,13 @@ about updates.</p>
HAMBURGER_MENU_SIZE = 30
WARNING_MESSAGE = """\
<p><b>Warning:</b> Ubuntu Focal systems and their derivatives will
stop being supported in subsequent Dangerzone releases. We encourage you to upgrade to a
more recent version of your operating system in order to get security updates.</p>
"""
def load_svg_image(filename: str, width: int, height: int) -> QtGui.QPixmap:
"""Load an SVG image from a filename.
This answer is basically taken from: https://stackoverflow.com/a/25689790
"""
path = get_resource_path(filename)
svg_renderer = QtSvg.QSvgRenderer(path)
svg_renderer = QtSvg.QSvgRenderer(str(path))
image = QtGui.QImage(width, height, QtGui.QImage.Format_ARGB32)
# Set the ARGB to 0 to prevent rendering artifacts
image.fill(0x00000000)
@ -136,9 +132,8 @@ class MainWindow(QtWidgets.QMainWindow):
# Header
logo = QtWidgets.QLabel()
logo.setPixmap(
QtGui.QPixmap.fromImage(QtGui.QImage(get_resource_path("icon.png")))
)
icon_path = str(get_resource_path("icon.png"))
logo.setPixmap(QtGui.QPixmap.fromImage(QtGui.QImage(icon_path)))
header_label = QtWidgets.QLabel("Dangerzone")
header_label.setFont(self.dangerzone.fixed_font)
header_label.setStyleSheet("QLabel { font-weight: bold; font-size: 50px; }")
@ -171,7 +166,7 @@ class MainWindow(QtWidgets.QMainWindow):
self.toggle_updates_action.triggered.connect(self.toggle_updates_triggered)
self.toggle_updates_action.setCheckable(True)
self.toggle_updates_action.setChecked(
bool(self.dangerzone.settings.get("updater_check"))
bool(self.dangerzone.settings.get("updater_check_all"))
)
# Add the "Exit" action
@ -192,6 +187,9 @@ class MainWindow(QtWidgets.QMainWindow):
header_layout.addWidget(self.hamburger_button)
header_layout.addSpacing(15)
# Content widget, contains all the window content except waiting widget
self.content_widget = ContentWidget(self.dangerzone)
if self.dangerzone.isolation_provider.should_wait_install():
# Waiting widget replaces content widget while container runtime isn't available
self.waiting_widget: WaitingWidget = WaitingWidgetContainer(self.dangerzone)
@ -201,9 +199,6 @@ class MainWindow(QtWidgets.QMainWindow):
self.waiting_widget = WaitingWidget()
self.dangerzone.is_waiting_finished = True
# Content widget, contains all the window content except waiting widget
self.content_widget = ContentWidget(self.dangerzone)
# Only use the waiting widget if container runtime isn't available
if self.dangerzone.is_waiting_finished:
self.waiting_widget.hide()
@ -228,11 +223,16 @@ class MainWindow(QtWidgets.QMainWindow):
self.setProperty("OSColorMode", self.dangerzone.app.os_color_mode.value)
if hasattr(self.dangerzone.isolation_provider, "check_docker_desktop_version"):
is_version_valid, version = (
self.dangerzone.isolation_provider.check_docker_desktop_version()
)
if not is_version_valid:
self.handle_docker_desktop_version_check(is_version_valid, version)
try:
is_version_valid, version = (
self.dangerzone.isolation_provider.check_docker_desktop_version()
)
if not is_version_valid:
self.handle_docker_desktop_version_check(is_version_valid, version)
except errors.UnsupportedContainerRuntime as e:
pass # It's caught later in the flow.
except errors.NoContainerTechException as e:
pass # It's caught later in the flow.
self.show()
@ -284,7 +284,7 @@ class MainWindow(QtWidgets.QMainWindow):
def toggle_updates_triggered(self) -> None:
"""Change the underlying update check settings based on the user's choice."""
check = self.toggle_updates_action.isChecked()
self.dangerzone.settings.set("updater_check", check)
self.dangerzone.settings.set("updater_check_all", check)
self.dangerzone.settings.save()
def handle_docker_desktop_version_check(
@ -439,15 +439,21 @@ class MainWindow(QtWidgets.QMainWindow):
class InstallContainerThread(QtCore.QThread):
finished = QtCore.Signal(str)
process_stdout = QtCore.Signal(str)
def __init__(self, dangerzone: DangerzoneGui) -> None:
def __init__(
self, dangerzone: DangerzoneGui, callback: Optional[Callable] = None
) -> None:
super(InstallContainerThread, self).__init__()
self.dangerzone = dangerzone
def run(self) -> None:
error = None
try:
installed = self.dangerzone.isolation_provider.install()
should_upgrade = self.dangerzone.settings.get("updater_check_all")
installed = self.dangerzone.isolation_provider.install(
should_upgrade=bool(should_upgrade), callback=self.process_stdout.emit
)
except Exception as e:
log.error("Container installation problem")
error = format_exception(e)
@ -482,11 +488,20 @@ class TracebackWidget(QTextEdit):
# Enable copying
self.setTextInteractionFlags(Qt.TextSelectableByMouse)
self.current_output = ""
def set_content(self, error: Optional[str] = None) -> None:
if error:
self.setPlainText(error)
self.setVisible(True)
def process_output(self, line):
self.current_output += line
self.setText(self.current_output)
cursor = self.textCursor()
cursor.movePosition(QTextCursor.MoveOperation.End)
self.setTextCursor(cursor)
class WaitingWidgetContainer(WaitingWidget):
# These are the possible states that the WaitingWidget can show.
@ -581,8 +596,15 @@ class WaitingWidgetContainer(WaitingWidget):
self.finished.emit()
def state_change(self, state: str, error: Optional[str] = None) -> None:
custom_runtime = self.dangerzone.settings.custom_runtime_specified()
if state == "not_installed":
if platform.system() == "Linux":
if custom_runtime:
self.show_error(
"<strong>We could not find the container runtime defined in your settings</strong><br><br>"
"Please check your settings, install it if needed, and retry."
)
elif platform.system() == "Linux":
self.show_error(
"<strong>Dangerzone requires Podman</strong><br><br>"
"Install it and retry."
@ -595,26 +617,38 @@ class WaitingWidgetContainer(WaitingWidget):
)
elif state == "not_running":
if platform.system() == "Linux":
if custom_runtime:
self.show_error(
"<strong>We were unable to start the container runtime defined in your settings</strong><br><br>"
"Please check your settings, install it if needed, and retry."
)
elif platform.system() == "Linux":
# "not_running" here means that the `podman image ls` command failed.
message = (
self.show_error(
"<strong>Dangerzone requires Podman</strong><br><br>"
"Podman is installed but cannot run properly. See errors below"
"Podman is installed but cannot run properly. See errors below",
error,
)
else:
message = (
self.show_error(
"<strong>Dangerzone requires Docker Desktop</strong><br><br>"
"Docker is installed but isn't running.<br><br>"
"Open Docker and make sure it's running in the background."
"Open Docker and make sure it's running in the background.",
error,
)
self.show_error(message, error)
else:
self.show_message(
"Installing the Dangerzone container image.<br><br>"
"This might take a few minutes..."
)
self.traceback.setVisible(True)
self.install_container_t = InstallContainerThread(self.dangerzone)
self.install_container_t.finished.connect(self.installation_finished)
self.install_container_t.process_stdout.connect(
self.traceback.process_output
)
self.install_container_t.start()
@ -626,17 +660,6 @@ class ContentWidget(QtWidgets.QWidget):
self.dangerzone = dangerzone
self.conversion_started = False
self.warning_label = None
if platform.system() == "Linux":
# Add the warning message only for ubuntu focal
os_release_path = Path("/etc/os-release")
if os_release_path.exists():
os_release = os_release_path.read_text()
if "Ubuntu 20.04" in os_release or "focal" in os_release:
self.warning_label = QtWidgets.QLabel(WARNING_MESSAGE)
self.warning_label.setWordWrap(True)
self.warning_label.setProperty("style", "warning")
# Doc selection widget
self.doc_selection_widget = DocSelectionWidget(self.dangerzone)
self.doc_selection_widget.documents_selected.connect(self.documents_selected)
@ -662,8 +685,6 @@ class ContentWidget(QtWidgets.QWidget):
# Layout
layout = QtWidgets.QVBoxLayout()
if self.warning_label:
layout.addWidget(self.warning_label) # Add warning at the top
layout.addWidget(self.settings_widget, stretch=1)
layout.addWidget(self.documents_list, stretch=1)
layout.addWidget(self.doc_selection_wrapper, stretch=1)
@ -894,22 +915,16 @@ class SettingsWidget(QtWidgets.QWidget):
self.safe_extension_name_layout.setSpacing(0)
self.safe_extension_name_layout.addWidget(self.safe_extension_filename)
self.safe_extension_name_layout.addWidget(self.safe_extension)
# FIXME: Workaround for https://github.com/freedomofpress/dangerzone/issues/339.
# We should drop this once we drop Ubuntu Focal support.
if hasattr(QtGui, "QRegularExpressionValidator"):
QRegEx = QtCore.QRegularExpression
QRegExValidator = QtGui.QRegularExpressionValidator
else:
QRegEx = QtCore.QRegExp # type: ignore [assignment]
QRegExValidator = QtGui.QRegExpValidator # type: ignore [assignment]
self.dot_pdf_validator = QRegExValidator(QRegEx(r".*\.[Pp][Dd][Ff]"))
self.dot_pdf_validator = QtGui.QRegularExpressionValidator(
QtCore.QRegularExpression(r".*\.[Pp][Dd][Ff]")
)
if platform.system() == "Linux":
illegal_chars_regex = r"[/]"
elif platform.system() == "Darwin":
illegal_chars_regex = r"[\\]"
else:
illegal_chars_regex = r"[\"*/:<>?\\|]"
self.illegal_chars_regex = QRegEx(illegal_chars_regex)
self.illegal_chars_regex = QtCore.QRegularExpression(illegal_chars_regex)
self.safe_extension_layout = QtWidgets.QHBoxLayout()
self.safe_extension_layout.addWidget(self.save_checkbox)
self.safe_extension_layout.addWidget(self.safe_extension_label)
@ -1328,7 +1343,7 @@ class DocumentWidget(QtWidgets.QWidget):
def load_status_image(self, filename: str) -> QtGui.QPixmap:
path = get_resource_path(filename)
img = QtGui.QImage(path)
img = QtGui.QImage(str(path))
image = QtGui.QPixmap.fromImage(img)
return image.scaled(QtCore.QSize(15, 15))


@ -1,15 +1,7 @@
"""A module that contains the logic for checking for updates."""
import json
import logging
import platform
import sys
import time
import typing
from typing import Optional
from packaging import version
if typing.TYPE_CHECKING:
from PySide2 import QtCore, QtWidgets
else:
@ -18,36 +10,33 @@ else:
except ImportError:
from PySide2 import QtCore, QtWidgets
# XXX implicit import for "markdown" module required for Cx_Freeze to build on Windows
# See https://github.com/freedomofpress/dangerzone/issues/501
import html.parser # noqa: F401
import markdown
import requests
from ..util import get_version
from ..updater import errors, releases
from .logic import Alert, DangerzoneGui
log = logging.getLogger(__name__)
MSG_CONFIRM_UPDATE_CHECKS = """\
<p><b>Do you want Dangerzone to automatically check for updates?</b></p>
<p>
<b>Do you want Dangerzone to automatically check for updates and apply them?</b>
</p>
<p>If you accept, Dangerzone will check the
<p>If you accept, Dangerzone will check for updates of the sandbox and apply them
automatically. This will ensure that you always have the latest version of the sandbox,
which is critical for the software to operate securely.</p>
<p>Sandbox updates may include security patches and bug fixes, but won't include new features.</p>
<p>Additionally, Dangerzone will check the
<a href="https://github.com/freedomofpress/dangerzone/releases">latest releases page</a>
in github.com on startup. Otherwise it will make no network requests and
won't inform you about new releases.</p>
in github.com, and inform you about new releases.
Otherwise it will make no network requests and won't inform you about new releases.</p>
<p>If you prefer another way of getting notified about new releases, we suggest adding
to your RSS reader our
<a href="https://fosstodon.org/@dangerzone.rss">Mastodon feed</a>. For more information
about updates, check
<a href="https://github.com/freedomofpress/dangerzone/wiki/Updates">this webpage</a>.</p>
<a href="https://dangerzone.rocks/feed.xml">Dangerzone News feed</a>.</p>
"""
UPDATE_CHECK_COOLDOWN_SECS = 60 * 60 * 12 # Check for updates at most every 12 hours.
class UpdateCheckPrompt(Alert):
"""The prompt that asks the users if they want to enable update checks."""
@ -55,7 +44,7 @@ class UpdateCheckPrompt(Alert):
x_pressed = False
def closeEvent(self, event: QtCore.QEvent) -> None:
"""Detect when a user has pressed "X" in the title bar.
"""Detect when a user has pressed "X" in the title bar (to close the dialog).
This function is called when a user clicks on "X" in the title bar. We want to
differentiate between the user clicking on "Cancel" and clicking on "X", since
@ -76,72 +65,32 @@ class UpdateCheckPrompt(Alert):
return buttons_layout
class UpdateReport:
"""A report for an update check."""
def __init__(
self,
version: Optional[str] = None,
changelog: Optional[str] = None,
error: Optional[str] = None,
):
self.version = version
self.changelog = changelog
self.error = error
def empty(self) -> bool:
return self.version is None and self.changelog is None and self.error is None
class UpdaterThread(QtCore.QThread):
"""Check asynchronously for Dangerzone updates.
The Updater class is mainly responsible for the following:
1. Asking the user if they want to enable update checks or not.
2. Determining when it's the right time to check for updates.
3. Hitting the GitHub releases API and learning about updates.
The Updater class is mainly responsible for
asking the user if they want to enable update checks or not.
Since checking for updates is a task that may take some time, we perform it
asynchronously, in a Qt thread. This thread then triggers a signal, and informs
whoever has connected to it.
asynchronously, in a Qt thread.
When finished, this thread triggers a signal with the results.
"""
finished = QtCore.Signal(UpdateReport)
GH_RELEASE_URL = (
"https://api.github.com/repos/freedomofpress/dangerzone/releases/latest"
)
REQ_TIMEOUT = 15
finished = QtCore.Signal(releases.UpdateReport)
def __init__(self, dangerzone: DangerzoneGui):
super().__init__()
self.dangerzone = dangerzone
###########
# Helpers for updater settings
#
# These helpers make it easy to retrieve specific updater-related settings, as well
# as save the settings file, only when necessary.
@property
def check(self) -> Optional[bool]:
return self.dangerzone.settings.get("updater_check")
@check.setter
def check(self, val: bool) -> None:
self.dangerzone.settings.set("updater_check", val, autosave=True)
def prompt_for_checks(self) -> Optional[bool]:
"""Ask the user if they want to be informed about Dangerzone updates."""
log.debug("Prompting the user for update checks")
# FIXME: Handle the case where a user clicks on "X", instead of explicitly
# making a choice. We should probably ask them again on the next run.
prompt = UpdateCheckPrompt(
self.dangerzone,
message=MSG_CONFIRM_UPDATE_CHECKS,
ok_text="Check Automatically",
cancel_text="Don't Check",
ok_text="Enable sandbox updates",
cancel_text="Do not make any requests",
)
check = prompt.launch()
if not check and prompt.x_pressed:
@ -149,167 +98,18 @@ class UpdaterThread(QtCore.QThread):
return bool(check)
def should_check_for_updates(self) -> bool:
"""Determine if we can check for updates based on settings and user prefs.
Note that this method only checks if the user has expressed an interest for
learning about new updates, and not whether we should actually make an update
check. Those two things are distinct, actually. For example:
* A user may have expressed that they want to learn about new updates.
* A previous update check may have found out that there's a new version out.
* Thus we will always show to the user the cached info about the new version,
and won't make a new update check.
"""
log.debug("Checking platform type")
# TODO: Disable updates for Homebrew installations.
if platform.system() == "Linux" and not getattr(sys, "dangerzone_dev", False):
log.debug("Running on Linux, disabling updates")
if not self.check: # if not overridden by user
self.check = False
return False
log.debug("Checking if first run of Dangerzone")
if self.dangerzone.settings.get("updater_last_check") is None:
log.debug("Dangerzone is running for the first time, updates are stalled")
self.dangerzone.settings.set("updater_last_check", 0, autosave=True)
return False
log.debug("Checking if user has already expressed their preference")
if self.check is None:
log.debug("User has not been asked yet for update checks")
self.check = self.prompt_for_checks()
return bool(self.check)
elif not self.check:
log.debug("User has expressed that they don't want to check for updates")
return False
return True
def can_update(self, cur_version: str, latest_version: str) -> bool:
if version.parse(cur_version) == version.parse(latest_version):
return False
elif version.parse(cur_version) > version.parse(latest_version):
# FIXME: This is a sanity check, but we should improve its wording.
raise Exception("Received version is older than the latest version")
else:
return True
def _get_now_timestamp(self) -> int:
return int(time.time())
def _should_postpone_update_check(self) -> bool:
"""Consult and update cooldown timer.
If the previous check happened before the cooldown period expires, do not check
again.
"""
current_time = self._get_now_timestamp()
last_check = self.dangerzone.settings.get("updater_last_check")
if current_time < last_check + UPDATE_CHECK_COOLDOWN_SECS:
log.debug("Cooling down update checks")
return True
else:
return False
def get_latest_info(self) -> UpdateReport:
"""Get the latest release info from GitHub.
Also, render the changelog from Markdown format to HTML, so that we can show it
to the users.
"""
try:
res = requests.get(self.GH_RELEASE_URL, timeout=self.REQ_TIMEOUT)
except Exception as e:
raise RuntimeError(
f"Encountered an exception while checking {self.GH_RELEASE_URL}: {e}"
should_check: Optional[bool] = releases.should_check_for_releases(
self.dangerzone.settings
)
if res.status_code != 200:
raise RuntimeError(
f"Encountered an HTTP {res.status_code} error while checking"
f" {self.GH_RELEASE_URL}"
)
try:
info = res.json()
except json.JSONDecodeError:
raise ValueError(f"Received a non-JSON response from {self.GH_RELEASE_URL}")
try:
version = info["tag_name"].lstrip("v")
changelog = markdown.markdown(info["body"])
except KeyError:
raise ValueError(
f"Missing required fields in JSON response from {self.GH_RELEASE_URL}"
)
return UpdateReport(version=version, changelog=changelog)
# XXX: This happens in parallel with other tasks. DO NOT alter global state!
def _check_for_updates(self) -> UpdateReport:
"""Check for updates locally and remotely.
Check for updates in two places:
1. In our settings, in case we have cached the latest version/changelog from a
previous run.
2. In GitHub, by hitting the latest releases API.
"""
log.debug("Checking for Dangerzone updates")
latest_version = self.dangerzone.settings.get("updater_latest_version")
if version.parse(get_version()) < version.parse(latest_version):
log.debug("Determined that there is an update due to cached results")
return UpdateReport(
version=latest_version,
changelog=self.dangerzone.settings.get("updater_latest_changelog"),
)
# If the previous check happened before the cooldown period expires, do not
# check again. Else, bump the last check timestamp, before making the actual
# check. This is to ensure that even failed update checks respect the cooldown
# period.
if self._should_postpone_update_check():
return UpdateReport()
else:
self.dangerzone.settings.set(
"updater_last_check", self._get_now_timestamp(), autosave=True
)
log.debug("Checking the latest GitHub release")
report = self.get_latest_info()
log.debug(f"Latest version in GitHub is {report.version}")
if report.version and self.can_update(latest_version, report.version):
log.debug(
f"Determined that there is an update due to a new GitHub version:"
f" {latest_version} < {report.version}"
)
return report
log.debug("No need to update")
return UpdateReport()
##################
# Logic for running update checks asynchronously
def check_for_updates(self) -> UpdateReport:
"""Check for updates and return a report with the findings:
There are three scenarios when we check for updates, and each scenario returns a
slightly different answer:
1. No new updates: Return an empty update report.
2. Updates are available: Return an update report with the latest version and
changelog, in HTML format.
3. Update check failed: Return an update report that holds just the error
message.
"""
try:
res = self._check_for_updates()
except Exception as e:
log.exception("Encountered an error while checking for upgrades")
res = UpdateReport(error=str(e))
return res
except errors.NeedUserInput:
should_check = self.prompt_for_checks()
if should_check is not None:
self.dangerzone.settings.set(
"updater_check_all", should_check, autosave=True
)
return bool(should_check)
def run(self) -> None:
self.finished.emit(self.check_for_updates())
has_updates = releases.check_for_updates(self.dangerzone.settings)
self.finished.emit(has_updates)
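A sketch of consuming the thread's result (assuming `releases.UpdateReport` keeps the `version`/`changelog`/`error` fields of the class it replaces):

```python
updater = UpdaterThread(dangerzone_gui)
updater.finished.connect(lambda report: print(report.version, report.error))
updater.start()
```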


@ -95,7 +95,7 @@ class IsolationProvider(ABC):
return self.debug or getattr(sys, "dangerzone_dev", False)
@abstractmethod
def install(self) -> bool:
def install(self, should_upgrade: bool, callback: Callable) -> bool:
pass
def convert(


@ -3,17 +3,18 @@ import os
import platform
import shlex
import subprocess
from typing import List, Tuple
from typing import Callable, List, Tuple
from .. import container_utils, errors
from .. import container_utils, errors, updater
from ..container_utils import Runtime
from ..document import Document
from ..util import get_resource_path, get_subprocess_startupinfo
from .base import IsolationProvider, terminate_process_group
TIMEOUT_KILL = 5 # Timeout in seconds until the kill command returns.
MINIMUM_DOCKER_DESKTOP = {
"Darwin": "4.36.0",
"Windows": "4.36.0",
"Darwin": "4.40.0",
"Windows": "4.40.0",
}
# Define startupinfo for subprocesses
@ -50,11 +51,19 @@ class Container(IsolationProvider):
* Do not map the host user to the container, with `--userns nomap` (available
from Podman 4.1 onwards)
"""
if container_utils.get_runtime_name() == "podman":
runtime = Runtime()
if runtime.name == "podman":
security_args = ["--log-driver", "none"]
security_args += ["--security-opt", "no-new-privileges"]
if container_utils.get_runtime_version() >= (4, 1):
security_args += ["--userns", "nomap"]
# We perform a platform check to avoid the following Podman Desktop
# error on Windows:
#
# Error: nomap is only supported in rootless mode
#
# See also: https://github.com/freedomofpress/dangerzone/issues/1127
if platform.system() != "Windows":
security_args += ["--userns", "nomap"]
else:
security_args = ["--security-opt=no-new-privileges:true"]
@ -64,8 +73,16 @@ class Container(IsolationProvider):
#
# [1] https://github.com/freedomofpress/dangerzone/issues/846
# [2] https://github.com/containers/common/blob/d3283f8401eeeb21f3c59a425b5461f069e199a7/pkg/seccomp/seccomp.json
seccomp_json_path = get_resource_path("seccomp.gvisor.json")
security_args += ["--security-opt", f"seccomp={seccomp_json_path}"]
seccomp_json_path = str(get_resource_path("seccomp.gvisor.json"))
# We perform a platform check to avoid the following Podman Desktop
# error on Windows:
#
# Error: opening seccomp profile failed: open
# C:\[...]\dangerzone\share\seccomp.gvisor.json: no such file or directory
#
# See also: https://github.com/freedomofpress/dangerzone/issues/1127
if runtime.name == "podman" and platform.system() != "Windows":
security_args += ["--security-opt", f"seccomp={seccomp_json_path}"]
security_args += ["--cap-drop", "all"]
security_args += ["--cap-add", "SYS_CHROOT"]
@ -77,41 +94,37 @@ class Container(IsolationProvider):
return security_args
@staticmethod
def install() -> bool:
"""Install the container image tarball, or verify that it's already installed.
Perform the following actions:
1. Get the tags of any locally available images that match Dangerzone's image
name.
2. Get the expected image tag from the image-id.txt file.
- If this tag is present in the local images, then we can return.
- Else, prune the older container images and continue.
3. Load the image tarball and make sure it matches the expected tag.
"""
old_tags = container_utils.list_image_tags()
expected_tag = container_utils.get_expected_tag()
if expected_tag not in old_tags:
# Prune older container images.
log.info(
f"Could not find a Dangerzone container image with tag '{expected_tag}'"
)
for tag in old_tags:
container_utils.delete_image_tag(tag)
def install(
should_upgrade: bool, callback: Callable, last_try: bool = False
) -> bool:
"""Check if an update is available and install it if necessary."""
if not should_upgrade:
log.debug("Skipping container upgrade check as requested by the settings")
else:
return True
# Load the image tarball into the container runtime.
container_utils.load_image_tarball()
# Check that the container image has the expected image tag.
# See https://github.com/freedomofpress/dangerzone/issues/988 for an example
# where this was not the case.
new_tags = container_utils.list_image_tags()
if expected_tag not in new_tags:
raise errors.ImageNotPresentException(
f"Could not find expected tag '{expected_tag}' after loading the"
" container image tarball"
update_available, image_digest = updater.is_update_available(
container_utils.CONTAINER_NAME,
updater.DEFAULT_PUBKEY_LOCATION,
)
if update_available and image_digest:
log.debug("Upgrading container image to %s", image_digest)
updater.upgrade_container_image(
container_utils.CONTAINER_NAME,
image_digest,
updater.DEFAULT_PUBKEY_LOCATION,
callback=callback,
)
else:
log.debug("No update available for the container")
try:
updater.verify_local_image(
container_utils.CONTAINER_NAME, updater.DEFAULT_PUBKEY_LOCATION
)
except errors.ImageNotPresentException:
if last_try:
raise
log.debug("Container image not found, trying to install it.")
return Container.install(
should_upgrade=should_upgrade, callback=callback, last_try=True
)
return True
@ -122,12 +135,11 @@ class Container(IsolationProvider):
@staticmethod
def is_available() -> bool:
container_runtime = container_utils.get_runtime()
runtime_name = container_utils.get_runtime_name()
runtime = Runtime()
# Can we run `docker/podman image ls` without an error
with subprocess.Popen(
[container_runtime, "image", "ls"],
[str(runtime.path), "image", "ls"],
stdout=subprocess.DEVNULL,
stderr=subprocess.PIPE,
startupinfo=get_subprocess_startupinfo(),
@ -135,14 +147,18 @@ class Container(IsolationProvider):
_, stderr = p.communicate()
if p.returncode != 0:
raise errors.NotAvailableContainerTechException(
runtime_name, stderr.decode()
runtime.name, stderr.decode()
)
return True
def check_docker_desktop_version(self) -> Tuple[bool, str]:
# On windows and darwin, check that the minimum version is met
version = ""
if platform.system() != "Linux":
runtime = Runtime()
runtime_is_docker = runtime.name == "docker"
platform_is_not_linux = platform.system() != "Linux"
if runtime_is_docker and platform_is_not_linux:
with subprocess.Popen(
["docker", "version", "--format", "{{.Server.Platform.Name}}"],
stdout=subprocess.PIPE,
@ -192,7 +208,15 @@ class Container(IsolationProvider):
command: List[str],
name: str,
) -> subprocess.Popen:
container_runtime = container_utils.get_runtime()
runtime = Runtime()
image_digest = container_utils.get_local_image_digest(
container_utils.CONTAINER_NAME
)
updater.verify_local_image(
container_utils.CONTAINER_NAME,
updater.DEFAULT_PUBKEY_LOCATION,
)
security_args = self.get_runtime_security_args()
debug_args = []
if self.debug:
@ -201,9 +225,7 @@ class Container(IsolationProvider):
enable_stdin = ["-i"]
set_name = ["--name", name]
prevent_leakage_args = ["--rm"]
image_name = [
container_utils.CONTAINER_NAME + ":" + container_utils.get_expected_tag()
]
image_name = [container_utils.CONTAINER_NAME + "@sha256:" + image_digest]
args = (
["run"]
+ security_args
@ -214,7 +236,7 @@ class Container(IsolationProvider):
+ image_name
+ command
)
return self.exec([container_runtime] + args)
return self.exec([str(runtime.path)] + args)
def kill_container(self, name: str) -> None:
"""Terminate a spawned container.
@ -226,8 +248,8 @@ class Container(IsolationProvider):
connected to the Docker daemon, and killing it will just close the associated
standard streams.
"""
container_runtime = container_utils.get_runtime()
cmd = [container_runtime, "kill", name]
runtime = Runtime()
cmd = [str(runtime.path), "kill", name]
try:
# We do not check the exit code of the process here, since the container may
# have stopped right before invoking this command. In that case, the
@ -283,10 +305,10 @@ class Container(IsolationProvider):
# after a podman kill / docker kill invocation, this will likely be the case,
# else the container runtime (Docker/Podman) has experienced a problem, and we
# should report it.
container_runtime = container_utils.get_runtime()
runtime = Runtime()
name = self.doc_to_pixels_container_name(document)
all_containers = subprocess.run(
[container_runtime, "ps", "-a"],
[str(runtime.path), "ps", "-a"],
capture_output=True,
startupinfo=get_subprocess_startupinfo(),
)
@ -297,19 +319,20 @@ class Container(IsolationProvider):
# FIXME hardcoded 1 until length conversions are better handled
# https://github.com/freedomofpress/dangerzone/issues/257
return 1
runtime = Runtime() # type: ignore [unreachable]
n_cpu = 1 # type: ignore [unreachable]
n_cpu = 1
if platform.system() == "Linux":
# if on linux containers run natively
cpu_count = os.cpu_count()
if cpu_count is not None:
n_cpu = cpu_count
elif container_utils.get_runtime_name() == "docker":
elif runtime.name == "docker":
# For Windows and MacOS containers run in VM
# So we obtain the CPU count for the VM
n_cpu_str = subprocess.check_output(
[container_utils.get_runtime(), "info", "--format", "{{.NCPU}}"],
[str(runtime.path), "info", "--format", "{{.NCPU}}"],
text=True,
startupinfo=get_subprocess_startupinfo(),
)


@ -130,7 +130,6 @@ def is_qubes_native_conversion() -> bool:
# This disambiguates if it is running a Qubes targeted build or not
# (Qubes-specific builds don't ship the container image)
compressed_container_path = get_resource_path("container.tar.gz")
return not os.path.exists(compressed_container_path)
return not get_resource_path("container.tar").exists()
else:
return False


@ -23,16 +23,13 @@ class DangerzoneCore(object):
# Initialize terminal colors
colorama.init(autoreset=True)
# App data folder
self.appdata_path = util.get_config_dir()
# Languages supported by tesseract
with open(get_resource_path("ocr-languages.json"), "r") as f:
with get_resource_path("ocr-languages.json").open("r") as f:
unsorted_ocr_languages = json.load(f)
self.ocr_languages = dict(sorted(unsorted_ocr_languages.items()))
# Load settings
self.settings = Settings(self)
self.settings = Settings()
self.documents: List[Document] = []
self.isolation_provider = isolation_provider


@ -1,29 +1,25 @@
import json
import logging
import os
import platform
from pathlib import Path
from typing import TYPE_CHECKING, Any, Dict
from packaging import version
from .document import SAFE_EXTENSION
from .util import get_version
from .util import get_config_dir, get_version
log = logging.getLogger(__name__)
if TYPE_CHECKING:
from .logic import DangerzoneCore
SETTINGS_FILENAME: str = "settings.json"
class Settings:
settings: Dict[str, Any]
def __init__(self, dangerzone: "DangerzoneCore") -> None:
self.dangerzone = dangerzone
self.settings_filename = os.path.join(
self.dangerzone.appdata_path, SETTINGS_FILENAME
)
def __init__(self) -> None:
self.settings_filename = get_config_dir() / SETTINGS_FILENAME
self.default_settings: Dict[str, Any] = self.generate_default_settings()
self.load()
@ -37,7 +33,7 @@ class Settings:
"open": True,
"open_app": None,
"safe_extension": SAFE_EXTENSION,
"updater_check": None,
"updater_check_all": None,
"updater_last_check": None, # last check in UNIX epoch (secs since 1970)
# FIXME: How to invalidate those if they change upstream?
"updater_latest_version": get_version(),
@ -45,6 +41,22 @@ class Settings:
"updater_errors": 0,
}
def custom_runtime_specified(self) -> bool:
return "container_runtime" in self.settings
def set_custom_runtime(self, runtime: str, autosave: bool = False) -> Path:
from .container_utils import Runtime # Avoid circular import
container_runtime = Runtime.path_from_name(runtime)
self.settings["container_runtime"] = str(container_runtime)
if autosave:
self.save()
return container_runtime
def unset_custom_runtime(self) -> None:
self.settings.pop("container_runtime")
self.save()
def get(self, key: str) -> Any:
return self.settings[key]
@ -91,6 +103,6 @@ class Settings:
self.save()
def save(self) -> None:
os.makedirs(self.dangerzone.appdata_path, exist_ok=True)
with open(self.settings_filename, "w") as settings_file:
self.settings_filename.parent.mkdir(parents=True, exist_ok=True)
with self.settings_filename.open("w") as settings_file:
json.dump(self.settings, settings_file, indent=4)
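A sketch of how the new runtime-related settings methods fit together (the path shown is illustrative):

```python
settings = Settings()
if not settings.custom_runtime_specified():
    # A bare name is resolved through Runtime.path_from_name(), so it must
    # either be an absolute path or be found on PATH.
    settings.set_custom_runtime("docker", autosave=True)
settings.get("container_runtime")  # e.g. "/usr/local/bin/docker"
settings.unset_custom_runtime()    # revert to the OS default runtime
```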


@ -0,0 +1,10 @@
import logging
log = logging.getLogger(__name__)
from .signatures import (
DEFAULT_PUBKEY_LOCATION,
is_update_available,
upgrade_container_image,
verify_local_image,
)


@ -0,0 +1,90 @@
import subprocess
from tempfile import NamedTemporaryFile
from . import cosign
# NOTE: You can grab the SLSA attestation for an image/tag pair with the following
# commands:
#
# IMAGE=ghcr.io/apyrgio/dangerzone/dangerzone
# TAG=20250129-0.8.0-149-gbf2f5ac
# DIGEST=$(crane digest ${IMAGE?}:${TAG?})
# ATT_MANIFEST=${IMAGE?}:${DIGEST/:/-}.att
# ATT_BLOB=${IMAGE?}@$(crane manifest ${ATT_MANIFEST?} | jq -r '.layers[0].digest')
# crane blob ${ATT_BLOB?} | jq -r '.payload' | base64 -d | jq
CUE_POLICY = r"""
// The predicateType field must match this string
predicateType: "https://slsa.dev/provenance/v0.2"
predicate: {{
// This condition verifies that the builder is the builder we
// expect and trust. The following condition can be used
// unmodified. It verifies that the builder is the container
// workflow.
builder: {{
id: =~"^https://github.com/slsa-framework/slsa-github-generator/.github/workflows/generator_container_slsa3.yml@refs/tags/v[0-9]+.[0-9]+.[0-9]+$"
}}
invocation: {{
configSource: {{
// This condition verifies the entrypoint of the workflow.
// Replace with the relative path to your workflow in your
// repository.
entryPoint: "{workflow}"
// This condition verifies that the image was generated from
// the source repository we expect. Replace this with your
// repository.
uri: =~"^git\\+https://github.com/{repository}@refs/heads/{branch}"
// Add a condition to check for a specific commit hash
digest: {{
sha1: "{commit}"
}}
}}
}}
}}
"""
def verify(
image_name: str,
branch: str,
commit: str,
repository: str,
workflow: str,
) -> bool:
"""
Look up the image attestation to see if the image has been built
on Github runners, and from a given repository.
"""
cosign.ensure_installed()
policy = CUE_POLICY.format(
repository=repository, workflow=workflow, commit=commit, branch=branch
)
# Put the value in files and verify with cosign
with (
NamedTemporaryFile(mode="w", suffix=".cue") as policy_f,
):
policy_f.write(policy)
policy_f.flush()
# Call cosign with the temporary file paths
cmd = [
"cosign",
"verify-attestation",
"--type",
"slsaprovenance",
"--policy",
policy_f.name,
"--certificate-oidc-issuer",
"https://token.actions.githubusercontent.com",
"--certificate-identity-regexp",
"^https://github.com/slsa-framework/slsa-github-generator/.github/workflows/generator_container_slsa3.yml@refs/tags/v[0-9]+.[0-9]+.[0-9]+$",
image_name,
]
result = subprocess.run(cmd, capture_output=True)
if result.returncode != 0:
error = result.stderr.decode()
raise Exception(f"Attestation cannot be verified. {error}")
return True
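A hypothetical invocation, using the image and tag from the note at the top of the module; the commit value is a placeholder:

```python
verify(
    image_name="ghcr.io/apyrgio/dangerzone/dangerzone:20250129-0.8.0-149-gbf2f5ac",
    branch="main",
    commit="0000000000000000000000000000000000000000",  # placeholder SHA-1
    repository="freedomofpress/dangerzone",
    workflow=".github/workflows/release-container-image.yml",
)
```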

dangerzone/updater/cli.py

@ -0,0 +1,183 @@
#!/usr/bin/python
import functools
import logging
import click
from .. import container_utils
from ..container_utils import get_runtime_name
from . import attestations, errors, log, registry, signatures
DEFAULT_REPOSITORY = "freedomofpress/dangerzone"
DEFAULT_BRANCH = "main"
DEFAULT_IMAGE_NAME = "ghcr.io/freedomofpress/dangerzone/dangerzone"
@click.group()
@click.option("--debug", is_flag=True)
@click.option("--runtime", default=get_runtime_name())
def main(debug: bool, runtime: str) -> None:
if debug:
click.echo("Debug mode enabled")
level = logging.DEBUG
else:
level = logging.INFO
logging.basicConfig(level=level)
if runtime != get_runtime_name():
click.echo(f"Using container runtime: {runtime}")
container_utils.RUNTIME_NAME = runtime
@main.command()
@click.argument("image", default=DEFAULT_IMAGE_NAME)
@click.option("--pubkey", default=signatures.DEFAULT_PUBKEY_LOCATION)
def upgrade(image: str, pubkey: str) -> None:
"""Upgrade the image to the latest signed version."""
manifest_digest = registry.get_manifest_digest(image)
try:
callback = functools.partial(click.echo, nl=False)
signatures.upgrade_container_image(image, manifest_digest, pubkey, callback)
click.echo(f"✅ The local image {image} has been upgraded")
click.echo(f"✅ The image has been signed with {pubkey}")
click.echo(f"✅ Signatures has been verified and stored locally")
except errors.ImageAlreadyUpToDate as e:
click.echo(f"{e}")
raise click.Abort()
except Exception as e:
click.echo(f"{e}")
raise click.Abort()
@main.command()
@click.argument("image", default=DEFAULT_IMAGE_NAME)
@click.option("--pubkey", default=signatures.DEFAULT_PUBKEY_LOCATION)
def store_signatures(image: str, pubkey: str) -> None:
manifest_digest = registry.get_manifest_digest(image)
sigs = signatures.get_remote_signatures(image, manifest_digest)
signatures.verify_signatures(sigs, manifest_digest, pubkey)
signatures.store_signatures(sigs, manifest_digest, pubkey, update_logindex=False)
click.echo(f"✅ Signatures has been verified and stored locally")
@main.command()
@click.argument("image_filename")
@click.option("--pubkey", default=signatures.DEFAULT_PUBKEY_LOCATION)
@click.option("--force", is_flag=True)
def load_archive(image_filename: str, pubkey: str, force: bool) -> None:
"""Upgrade the local image to the one in the archive."""
try:
loaded_image = signatures.upgrade_container_image_airgapped(
image_filename, pubkey, bypass_logindex=force
)
click.echo(
f"✅ Installed image {image_filename} on the system as {loaded_image}"
)
except errors.ImageAlreadyUpToDate as e:
click.echo(f"{e}")
except errors.InvalidLogIndex as e:
click.echo(f"❌ Trying to install image older that the currently installed one")
raise click.Abort()
except Exception as e:
click.echo(f"{e}")
raise click.Abort()
@main.command()
@click.argument("image")
@click.option("--output", default="dangerzone-airgapped.tar")
def prepare_archive(image: str, output: str) -> None:
"""Prepare an archive to upgrade the dangerzone image on an airgapped environment."""
signatures.prepare_airgapped_archive(image, output)
click.echo(f"✅ Archive {output} created")
@main.command()
@click.argument("image", default=DEFAULT_IMAGE_NAME)
@click.option("--pubkey", default=signatures.DEFAULT_PUBKEY_LOCATION)
def verify_local(image: str, pubkey: str) -> None:
"""
Verify the local image signature against a public key and the stored signatures.
"""
# XXX remove a potential :tag
if signatures.verify_local_image(image, pubkey):
click.echo(
(
f"Verifying the local image:\n\n"
f"pubkey: {pubkey}\n"
f"image: {image}\n\n"
f"✅ The local image {image} has been signed with {pubkey}"
)
)
@main.command()
@click.argument("image")
def list_remote_tags(image: str) -> None:
"""List the tags available for a given image."""
click.echo(f"Existing tags for {image}")
for tag in registry.list_tags(image):
click.echo(tag)
@main.command()
@click.argument("image")
def get_manifest(image: str) -> None:
"""Retrieves a remote manifest for a given image and displays it."""
click.echo(registry.get_manifest(image).content)
@main.command()
@click.argument("image_name")
# XXX: Do we really want to check against this?
@click.option(
"--branch",
default=DEFAULT_BRANCH,
help="The Git branch that the image was built from",
)
@click.option(
"--commit",
required=True,
help="The Git commit the image was built from",
)
@click.option(
"--repository",
default=DEFAULT_REPOSITORY,
help="The github repository to check the attestation for",
)
@click.option(
"--workflow",
default=".github/workflows/release-container-image.yml",
help="The path of the GitHub actions workflow this image was created from",
)
def attest_provenance(
image_name: str,
branch: str,
commit: str,
repository: str,
workflow: str,
) -> None:
"""
Look up the image attestation to see if the image has been built
on Github runners, and from a given repository.
"""
# TODO: Parse image and make sure it has a tag. Might even check for a digest.
# parsed = registry.parse_image_location(image)
verified = attestations.verify(image_name, branch, commit, repository, workflow)
if verified:
click.echo(
f"🎉 Successfully verified image '{image_name}' and its associated claims:"
)
click.echo(f"- ✅ SLSA Level 3 provenance")
click.echo(f"- ✅ GitHub repo: {repository}")
click.echo(f"- ✅ GitHub actions workflow: {workflow}")
click.echo(f"- ✅ Git branch: {branch}")
click.echo(f"- ✅ Git commit: {commit}")
if __name__ == "__main__":
main()
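
As a quick smoke test of the commands above, click's test runner can drive the group directly; a minimal sketch, assuming the CLI group is importable as `dangerzone.updater.cli.main` (as `dev_scripts/dangerzone-image` further down suggests):

```python
from click.testing import CliRunner

from dangerzone.updater import cli

runner = CliRunner()
# Invoke the `verify-local` subcommand the same way the shell entry point would.
result = runner.invoke(
    cli.main, ["verify-local", "ghcr.io/freedomofpress/dangerzone/dangerzone"]
)
print(result.exit_code, result.output)
```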

View file

@ -0,0 +1,32 @@
import subprocess
from . import errors, log
def ensure_installed() -> None:
try:
subprocess.run(["cosign", "version"], capture_output=True, check=True)
except subprocess.CalledProcessError:
raise errors.CosignNotInstalledError()
def verify_local_image(oci_image_folder: str, pubkey: str) -> bool:
"""Verify the given path against the given public key"""
ensure_installed()
cmd = [
"cosign",
"verify",
"--key",
pubkey,
"--offline",
"--local-image",
oci_image_folder,
]
log.debug(" ".join(cmd))
result = subprocess.run(cmd, capture_output=True)
if result.returncode == 0:
log.info("Signature verified")
return True
log.info("Failed to verify signature", result.stderr)
return False
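
A hedged sketch of calling this helper directly; both paths are hypothetical, and `verify_local_image` expects an extracted OCI layout directory plus a cosign public key file:

```python
from dangerzone.updater import cosign

cosign.ensure_installed()  # raises CosignNotInstalledError if cosign is absent
# Hypothetical paths: an extracted OCI image directory and a public key.
if cosign.verify_local_image("/tmp/dangerzone-oci", "/path/to/pub.key"):
    print("image verified")
```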

View file

@ -0,0 +1,64 @@
class UpdaterError(Exception):
pass
class ImageAlreadyUpToDate(UpdaterError):
pass
class ImageNotFound(UpdaterError):
pass
class SignatureError(UpdaterError):
pass
class RegistryError(UpdaterError):
pass
class AirgappedImageDownloadError(UpdaterError):
pass
class NoRemoteSignatures(SignatureError):
pass
class SignatureVerificationError(SignatureError):
pass
class SignatureExtractionError(SignatureError):
pass
class SignaturesFolderDoesNotExist(SignatureError):
pass
class InvalidSignatures(SignatureError):
pass
class SignatureMismatch(SignatureError):
pass
class LocalSignatureNotFound(SignatureError):
pass
class CosignNotInstalledError(SignatureError):
pass
class InvalidLogIndex(SignatureError):
pass
class NeedUserInput(UpdaterError):
"""The user has not yet been prompted to know if they want to check for updates."""
pass
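
Because every exception derives from `UpdaterError`, with signature-related failures grouped under `SignatureError`, callers can catch at whichever granularity fits; a small illustrative sketch:

```python
from dangerzone.updater import errors, signatures

try:
    signatures.verify_local_image(
        "ghcr.io/freedomofpress/dangerzone/dangerzone",
        signatures.DEFAULT_PUBKEY_LOCATION,
    )
except errors.SignatureError as e:
    # Mismatched digests, missing signatures, cosign absent, bad log index...
    print(f"signature problem: {e}")
except errors.UpdaterError as e:
    # Anything else updater-related (registry errors, airgapped downloads, ...)
    print(f"updater error: {e}")
```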

View file

@ -0,0 +1,139 @@
import re
from collections import namedtuple
from hashlib import sha256
from typing import Dict, Optional, Tuple
import requests
from .. import container_utils as runtime
from .. import errors as dzerrors
from . import errors, log
__all__ = [
"get_manifest_digest",
"list_tags",
"get_manifest",
"parse_image_location",
]
SIGSTORE_BUNDLE = "application/vnd.dev.sigstore.bundle.v0.3+json"
IMAGE_INDEX_MEDIA_TYPE = "application/vnd.oci.image.index.v1+json"
ACCEPT_MANIFESTS_HEADER = ",".join(
[
"application/vnd.docker.distribution.manifest.v1+json",
"application/vnd.docker.distribution.manifest.v1+prettyjws",
"application/vnd.docker.distribution.manifest.v2+json",
"application/vnd.oci.image.manifest.v1+json",
"application/vnd.docker.distribution.manifest.list.v2+json",
IMAGE_INDEX_MEDIA_TYPE,
]
)
Image = namedtuple("Image", ["registry", "namespace", "image_name", "tag", "digest"])
def parse_image_location(input_string: str) -> Image:
"""Parses container image location into an Image namedtuple"""
pattern = (
r"^"
r"(?P<registry>[a-zA-Z0-9.-]+)/"
r"(?P<namespace>[a-zA-Z0-9-]+)/"
r"(?P<image_name>[^:@]+)"
r"(?::(?P<tag>[a-zA-Z0-9.-]+))?"
r"(?:@(?P<digest>sha256:[a-zA-Z0-9]+))?"
r"$"
)
match = re.match(pattern, input_string)
if not match:
raise ValueError("Malformed image location")
return Image(
registry=match.group("registry"),
namespace=match.group("namespace"),
image_name=match.group("image_name"),
tag=match.group("tag") or "latest",
digest=match.group("digest"),
)
def _get_auth_header(image: Image) -> Dict[str, str]:
auth_url = f"https://{image.registry}/token"
response = requests.get(
auth_url,
params={
"service": f"{image.registry}",
"scope": f"repository:{image.namespace}/{image.image_name}:pull",
},
)
response.raise_for_status()
token = response.json()["token"]
return {"Authorization": f"Bearer {token}"}
def _url(image: Image) -> str:
return f"https://{image.registry}/v2/{image.namespace}/{image.image_name}"
def list_tags(image_str: str) -> list:
image = parse_image_location(image_str)
url = f"{_url(image)}/tags/list"
response = requests.get(url, headers=_get_auth_header(image))
response.raise_for_status()
tags = response.json().get("tags", [])
return tags
def get_manifest(image_str: str) -> requests.Response:
"""Get manifest information for a specific tag"""
image = parse_image_location(image_str)
manifest_url = f"{_url(image)}/manifests/{image.tag}"
headers = {
"Accept": ACCEPT_MANIFESTS_HEADER,
}
headers.update(_get_auth_header(image))
response = requests.get(manifest_url, headers=headers)
response.raise_for_status()
return response
def list_manifests(image_str: str) -> list:
return get_manifest(image_str).json().get("manifests")
def get_blob(image: Image, digest: str) -> requests.Response:
response = requests.get(
f"{_url(image)}/blobs/{digest}", headers=_get_auth_header(image)
)
response.raise_for_status()
return response
def get_manifest_digest(
image_str: str, tag_manifest_content: Optional[bytes] = None
) -> str:
if not tag_manifest_content:
tag_manifest_content = get_manifest(image_str).content
return sha256(tag_manifest_content).hexdigest()
def is_new_remote_image_available(image_str: str) -> Tuple[bool, str]:
"""
Check if a new remote image is available on the registry.
"""
remote_digest = get_manifest_digest(image_str)
image = parse_image_location(image_str)
if image.digest:
local_digest = image.digest
else:
try:
local_digest = runtime.get_local_image_digest(image_str)
except dzerrors.ImageNotPresentException:
log.debug("No local image found")
return True, remote_digest
log.debug("Remote digest: %s", remote_digest)
log.debug("Local digest: %s", local_digest)
return (remote_digest != local_digest, remote_digest)
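
A short sketch of the helpers above; note that `parse_image_location` falls back to the `latest` tag when none is given:

```python
from dangerzone.updater import registry

img = registry.parse_image_location("ghcr.io/freedomofpress/dangerzone/dangerzone")
print(img.registry, img.namespace, img.image_name, img.tag)
# -> ghcr.io freedomofpress dangerzone/dangerzone latest

# Compare the remote manifest digest with the locally installed image (if any).
is_new, remote_digest = registry.is_new_remote_image_available(
    "ghcr.io/freedomofpress/dangerzone/dangerzone"
)
print(is_new, remote_digest)
```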

View file

@ -0,0 +1,191 @@
import json
import platform
import sys
import time
from typing import Optional
import markdown
import requests
from packaging import version
from .. import util
from ..settings import Settings
from . import errors, log
# Check for updates at most every 12 hours.
UPDATE_CHECK_COOLDOWN_SECS = 60 * 60 * 12
GH_RELEASE_URL = (
"https://api.github.com/repos/freedomofpress/dangerzone/releases/latest"
)
REQ_TIMEOUT = 15
class UpdateReport:
"""A report for an update check."""
def __init__(
self,
version: Optional[str] = None,
changelog: Optional[str] = None,
error: Optional[str] = None,
):
self.version = version
self.changelog = changelog
self.error = error
def empty(self) -> bool:
return self.version is None and self.changelog is None and self.error is None
def _get_now_timestamp() -> int:
return int(time.time())
def _should_postpone_update_check(settings) -> bool:
"""Consult and update cooldown timer.
If the previous check happened before the cooldown period expires, do not check
again.
"""
current_time = _get_now_timestamp()
last_check = settings.get("updater_last_check")
if current_time < last_check + UPDATE_CHECK_COOLDOWN_SECS:
log.debug("Cooling down update checks")
return True
else:
return False
def ensure_sane_update(cur_version: str, latest_version: str) -> bool:
if version.parse(cur_version) == version.parse(latest_version):
return False
elif version.parse(cur_version) > version.parse(latest_version):
# FIXME: This is a sanity check, but we should improve its wording.
raise Exception("Received version is older than the latest version")
else:
return True
def fetch_release_info() -> UpdateReport:
"""Get the latest release info from GitHub.
Also, render the changelog from Markdown format to HTML, so that we can show it
to the users.
"""
try:
res = requests.get(GH_RELEASE_URL, timeout=REQ_TIMEOUT)
except Exception as e:
raise RuntimeError(
f"Encountered an exception while checking {GH_RELEASE_URL}: {e}"
)
if res.status_code != 200:
raise RuntimeError(
f"Encountered an HTTP {res.status_code} error while checking"
f" {GH_RELEASE_URL}"
)
try:
info = res.json()
except json.JSONDecodeError:
raise ValueError(f"Received a non-JSON response from {GH_RELEASE_URL}")
try:
version = info["tag_name"].lstrip("v")
changelog = markdown.markdown(info["body"])
except KeyError:
raise ValueError(
f"Missing required fields in JSON response from {GH_RELEASE_URL}"
)
return UpdateReport(version=version, changelog=changelog)
def should_check_for_releases(settings: Settings) -> bool:
"""Determine if we can check for release updates based on settings and user prefs.
Note that this method only checks if the user has expressed an interest for
learning about new updates, and not whether we should actually make an update
check. Those two things are distinct, actually. For example:
* A user may have expressed that they want to learn about new updates.
* A previous update check may have found out that there's a new version out.
* Thus we will always show to the user the cached info about the new version,
and won't make a new update check.
"""
check = settings.get("updater_check_all")
log.debug("Checking platform type")
# TODO: Disable updates for Homebrew installations.
if platform.system() == "Linux" and not getattr(sys, "dangerzone_dev", False):
log.debug("Running on Linux, disabling updates")
        if not check:  # if not overridden by user
settings.set("updater_check_all", False, autosave=True)
return False
log.debug("Checking if first run of Dangerzone")
if settings.get("updater_last_check") is None:
log.debug("Dangerzone is running for the first time, updates are stalled")
settings.set("updater_last_check", 0, autosave=True)
return False
log.debug("Checking if user has already expressed their preference")
if check is None:
log.debug("User has not been asked yet for update checks")
raise errors.NeedUserInput()
elif not check:
log.debug("User has expressed that they don't want to check for updates")
return False
return True
def check_for_updates(settings) -> UpdateReport:
"""Check for updates locally and remotely.
Check for updates (locally and remotely) and return a report with the findings:
There are three scenarios when we check for updates, and each scenario returns a
slightly different answer:
1. No new updates: Return an empty update report.
2. Updates are available: Return an update report with the latest version and
changelog, in HTML format.
3. Update check failed: Return an update report that holds just the error
message.
"""
try:
log.debug("Checking for Dangerzone updates")
latest_version = settings.get("updater_latest_version")
if version.parse(util.get_version()) < version.parse(latest_version):
log.debug("Determined that there is an update due to cached results")
return UpdateReport(
version=latest_version,
changelog=settings.get("updater_latest_changelog"),
)
# If the previous check happened before the cooldown period expires, do not
# check again. Else, bump the last check timestamp, before making the actual
# check. This is to ensure that even failed update checks respect the cooldown
# period.
if _should_postpone_update_check(settings):
return UpdateReport()
else:
settings.set("updater_last_check", _get_now_timestamp(), autosave=True)
log.debug("Checking the latest GitHub release")
report = fetch_release_info()
log.debug(f"Latest version in GitHub is {report.version}")
if report.version and ensure_sane_update(latest_version, report.version):
log.debug(
f"Determined that there is an update due to a new GitHub version:"
f" {latest_version} < {report.version}"
)
return report
log.debug("No need to update")
return UpdateReport()
except Exception as e:
log.exception("Encountered an error while checking for upgrades")
return UpdateReport(error=str(e))
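
A hedged sketch of how a caller might drive this module; the no-argument `Settings()` constructor is an assumption here:

```python
from dangerzone.settings import Settings
from dangerzone.updater import errors, releases

settings = Settings()  # assumed no-argument constructor
try:
    if releases.should_check_for_releases(settings):
        report = releases.check_for_updates(settings)
        if report.error:
            print(f"update check failed: {report.error}")
        elif not report.empty():
            print(f"new version: {report.version}")
except errors.NeedUserInput:
    pass  # first run: prompt the user before checking
```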

View file

@ -0,0 +1,500 @@
import json
import platform
import re
import subprocess
import tarfile
from base64 import b64decode, b64encode
from functools import reduce
from hashlib import sha256
from io import BytesIO
from pathlib import Path
from tempfile import NamedTemporaryFile, TemporaryDirectory
from typing import Callable, Dict, List, Optional, Tuple
from .. import container_utils as runtime
from .. import errors as dzerrors
from ..util import get_resource_path
from . import cosign, errors, log, registry
try:
import platformdirs
except ImportError:
import appdirs as platformdirs # type: ignore[no-redef]
def appdata_dir() -> Path:
return Path(platformdirs.user_data_dir("dangerzone"))
# RELEASE: Bump this value to the log index of the latest signature
# to ensure the software can't upgrade to container images that predate it.
DEFAULT_LOG_INDEX = 0
# XXX Store this somewhere else.
DEFAULT_PUBKEY_LOCATION = get_resource_path("freedomofpress-dangerzone-pub.key")
SIGNATURES_PATH = appdata_dir() / "signatures"
LAST_LOG_INDEX = SIGNATURES_PATH / "last_log_index"
__all__ = [
"verify_signature",
"load_and_verify_signatures",
"store_signatures",
"verify_offline_image_signature",
]
def signature_to_bundle(sig: Dict) -> Dict:
"""Convert a cosign-download signature to the format expected by cosign bundle."""
bundle = sig["Bundle"]
payload = bundle["Payload"]
return {
"base64Signature": sig["Base64Signature"],
"Payload": sig["Payload"],
"cert": sig["Cert"],
"chain": sig["Chain"],
"rekorBundle": {
"SignedEntryTimestamp": bundle["SignedEntryTimestamp"],
"Payload": {
"body": payload["body"],
"integratedTime": payload["integratedTime"],
"logIndex": payload["logIndex"],
"logID": payload["logID"],
},
},
"RFC3161Timestamp": sig["RFC3161Timestamp"],
}
def verify_signature(signature: dict, image_digest: str, pubkey: str | Path) -> None:
"""
Verifies that:
- the signature has been signed by the given public key
- the signature matches the given image digest
"""
# XXX - Also verify the identity/docker-reference field against the expected value
# e.g. ghcr.io/freedomofpress/dangerzone/dangerzone
cosign.ensure_installed()
signature_bundle = signature_to_bundle(signature)
try:
payload_bytes = b64decode(signature_bundle["Payload"])
payload_digest = json.loads(payload_bytes)["critical"]["image"][
"docker-manifest-digest"
]
except Exception as e:
raise errors.SignatureVerificationError(
f"Unable to extract the payload digest from the signature: {e}"
)
if payload_digest != f"sha256:{image_digest}":
raise errors.SignatureMismatch(
"The given signature does not match the expected image digest "
f"({payload_digest}, {image_digest})"
)
with (
NamedTemporaryFile(mode="w") as signature_file,
NamedTemporaryFile(mode="bw") as payload_file,
):
json.dump(signature_bundle, signature_file)
signature_file.flush()
payload_file.write(payload_bytes)
payload_file.flush()
if isinstance(pubkey, str):
pubkey = Path(pubkey)
cmd = [
"cosign",
"verify-blob",
"--key",
str(pubkey.absolute()),
"--bundle",
signature_file.name,
payload_file.name,
]
log.debug(" ".join(cmd))
result = subprocess.run(cmd, capture_output=True)
if result.returncode != 0 or result.stderr != b"Verified OK\n":
log.debug("Failed to verify signature", result.stderr)
raise errors.SignatureVerificationError("Failed to verify signature")
log.debug("Signature verified")
class Signature:
def __init__(self, signature: Dict):
self.signature = signature
@property
def payload(self) -> Dict:
return json.loads(b64decode(self.signature["Payload"]))
@property
def manifest_digest(self) -> str:
full_digest = self.payload["critical"]["image"]["docker-manifest-digest"]
return full_digest.replace("sha256:", "")
def is_update_available(image_str: str, pubkey: str) -> Tuple[bool, Optional[str]]:
"""
Check if a new image is available, doing all the necessary checks ensuring it
would be safe to upgrade.
"""
new_image_available, remote_digest = registry.is_new_remote_image_available(
image_str
)
if not new_image_available:
return False, None
try:
check_signatures_and_logindex(image_str, remote_digest, pubkey)
return True, remote_digest
except errors.InvalidLogIndex:
return False, None
def check_signatures_and_logindex(
image_str: str, remote_digest: str, pubkey: str
) -> list[Dict]:
signatures = get_remote_signatures(image_str, remote_digest)
verify_signatures(signatures, remote_digest, pubkey)
incoming_log_index = get_log_index_from_signatures(signatures)
last_log_index = get_last_log_index()
if incoming_log_index < last_log_index:
raise errors.InvalidLogIndex(
f"The incoming log index ({incoming_log_index}) is "
f"lower than the last known log index ({last_log_index})"
)
return signatures
def verify_signatures(
signatures: List[Dict],
image_digest: str,
pubkey: str,
) -> bool:
if len(signatures) < 1:
raise errors.SignatureVerificationError("No signatures found")
for signature in signatures:
verify_signature(signature, image_digest, pubkey)
return True
def get_last_log_index() -> int:
SIGNATURES_PATH.mkdir(parents=True, exist_ok=True)
if not LAST_LOG_INDEX.exists():
return DEFAULT_LOG_INDEX
with open(LAST_LOG_INDEX) as f:
return int(f.read())
def get_log_index_from_signatures(signatures: List[Dict]) -> int:
def _reducer(accumulator: int, signature: Dict) -> int:
try:
logIndex = int(signature["Bundle"]["Payload"]["logIndex"])
except (KeyError, ValueError):
return accumulator
return max(accumulator, logIndex)
return reduce(_reducer, signatures, 0)
def write_log_index(log_index: int) -> None:
    with open(LAST_LOG_INDEX, "w") as f:
        f.write(str(log_index))
def _get_blob(tmpdir: str, digest: str) -> Path:
return Path(tmpdir) / "blobs" / "sha256" / digest.replace("sha256:", "")
def upgrade_container_image_airgapped(
container_tar: str, pubkey: str, bypass_logindex: bool = False
) -> str:
"""
Verify the given archive against its self-contained signatures, then
upgrade the image and retag it to the expected tag.
Right now, the archive is extracted and reconstructed, requiring some space
on the filesystem.
:return: The loaded image name
"""
# XXX Use a memory buffer instead of the filesystem
with TemporaryDirectory() as tmpdir:
def _get_signature_filename(manifests: List[Dict]) -> Path:
for manifest in manifests:
if (
manifest["annotations"].get("kind")
== "dev.cosignproject.cosign/sigs"
):
return _get_blob(tmpdir, manifest["digest"])
raise errors.SignatureExtractionError()
with tarfile.open(container_tar, "r") as archive:
archive.extractall(tmpdir)
if not cosign.verify_local_image(tmpdir, pubkey):
raise errors.SignatureVerificationError()
# Remove the signatures from the archive, otherwise podman is not able to load it
with open(Path(tmpdir) / "index.json") as f:
index_json = json.load(f)
signature_filename = _get_signature_filename(index_json["manifests"])
index_json["manifests"] = [
manifest
for manifest in index_json["manifests"]
if manifest["annotations"].get("kind")
in ("dev.cosignproject.cosign/imageIndex", "dev.cosignproject.cosign/image")
]
with open(signature_filename, "r") as f:
image_name, signatures = convert_oci_images_signatures(json.load(f), tmpdir)
log.info(f"Found image name: {image_name}")
if not bypass_logindex:
# Ensure that we only upgrade if the log index is higher than the last known one
incoming_log_index = get_log_index_from_signatures(signatures)
last_log_index = get_last_log_index()
if incoming_log_index < last_log_index:
raise errors.InvalidLogIndex(
"The log index is not higher than the last known one"
)
image_digest = index_json["manifests"][0].get("digest").replace("sha256:", "")
# Write the new index.json to the temp folder
with open(Path(tmpdir) / "index.json", "w") as f:
json.dump(index_json, f)
with NamedTemporaryFile(suffix=".tar") as temporary_tar:
with tarfile.open(temporary_tar.name, "w") as archive:
# The root is the tmpdir
archive.add(Path(tmpdir) / "index.json", arcname="index.json")
archive.add(Path(tmpdir) / "oci-layout", arcname="oci-layout")
archive.add(Path(tmpdir) / "blobs", arcname="blobs")
runtime.load_image_tarball_from_tar(temporary_tar.name)
runtime.tag_image_by_digest(image_digest, image_name)
store_signatures(signatures, image_digest, pubkey)
return image_name
def convert_oci_images_signatures(
signatures_manifest: Dict, tmpdir: str
) -> Tuple[str, List[Dict]]:
def _to_cosign_signature(layer: Dict) -> Dict:
signature = layer["annotations"]["dev.cosignproject.cosign/signature"]
bundle = json.loads(layer["annotations"]["dev.sigstore.cosign/bundle"])
payload_body = json.loads(b64decode(bundle["Payload"]["body"]))
payload_location = _get_blob(tmpdir, layer["digest"])
with open(payload_location, "rb") as f:
payload_b64 = b64encode(f.read()).decode()
return {
"Base64Signature": payload_body["spec"]["signature"]["content"],
"Payload": payload_b64,
"Cert": None,
"Chain": None,
"Bundle": bundle,
"RFC3161Timestamp": None,
}
layers = signatures_manifest.get("layers", [])
signatures = [_to_cosign_signature(layer) for layer in layers]
if not signatures:
raise errors.SignatureExtractionError()
payload_location = _get_blob(tmpdir, layers[0]["digest"])
with open(payload_location, "r") as f:
payload = json.load(f)
image_name = payload["critical"]["identity"]["docker-reference"]
return image_name, signatures
def get_file_digest(file: Optional[str] = None, content: Optional[bytes] = None) -> str:
"""Get the sha256 digest of a file or content"""
if not file and not content:
raise errors.UpdaterError("No file or content provided")
if file:
with open(file, "rb") as f:
content = f.read()
if content:
return sha256(content).hexdigest()
return ""
def load_and_verify_signatures(
image_digest: str,
pubkey: str,
bypass_verification: bool = False,
signatures_path: Optional[Path] = None,
) -> List[Dict]:
"""
Load signatures from the local filesystem
See store_signatures() for the expected format.
"""
if not signatures_path:
signatures_path = SIGNATURES_PATH
pubkey_signatures = signatures_path / get_file_digest(pubkey)
if not pubkey_signatures.exists():
msg = (
f"Cannot find a '{pubkey_signatures}' folder."
"You might need to download the image signatures first."
)
raise errors.SignaturesFolderDoesNotExist(msg)
with open(pubkey_signatures / f"{image_digest}.json") as f:
log.debug("Loading signatures from %s", f.name)
signatures = json.load(f)
if not bypass_verification:
verify_signatures(signatures, image_digest, pubkey)
return signatures
def store_signatures(
signatures: list[Dict], image_digest: str, pubkey: str, update_logindex: bool = True
) -> None:
"""
    Store signatures locally in the SIGNATURES_PATH folder, like this:
    ~/.local/share/dangerzone/signatures/
<pubkey-digest>
<image-digest>.json
<image-digest>.json
last_log_index
The last_log_index file is used to keep track of the last log index
processed by the updater.
The format used in the `.json` file is the one of `cosign download
signature`, which differs from the "bundle" one used afterwards.
It can be converted to the one expected by cosign verify --bundle with
the `signature_to_bundle()` function.
This function must be used only if the provided signatures have been verified.
"""
def _get_digest(sig: Dict) -> str:
payload = json.loads(b64decode(sig["Payload"]))
return payload["critical"]["image"]["docker-manifest-digest"]
# All the signatures should share the same digest.
digests = list(map(_get_digest, signatures))
if len(set(digests)) != 1:
raise errors.InvalidSignatures("Signatures do not share the same image digest")
if f"sha256:{image_digest}" != digests[0]:
raise errors.SignatureMismatch(
f"Signatures do not match the given image digest (sha256:{image_digest}, {digests[0]})"
)
pubkey_signatures = SIGNATURES_PATH / get_file_digest(pubkey)
pubkey_signatures.mkdir(parents=True, exist_ok=True)
with open(pubkey_signatures / f"{image_digest}.json", "w") as f:
log.info(
f"Storing signatures for {image_digest} in {pubkey_signatures}/{image_digest}.json"
)
json.dump(signatures, f)
if update_logindex:
write_log_index(get_log_index_from_signatures(signatures))
def verify_local_image(image: str, pubkey: str) -> bool:
"""
Verifies that a local image has a valid signature
"""
log.info(f"Verifying local image {image} against pubkey {pubkey}")
try:
image_digest = runtime.get_local_image_digest(image)
except subprocess.CalledProcessError:
raise errors.ImageNotFound(f"The image {image} does not exist locally")
log.debug(f"Image digest: {image_digest}")
load_and_verify_signatures(image_digest, pubkey)
return True
def get_remote_signatures(image: str, digest: str) -> List[Dict]:
"""Retrieve the signatures from the registry, via `cosign download signatures`."""
cosign.ensure_installed()
try:
process = subprocess.run(
["cosign", "download", "signature", f"{image}@sha256:{digest}"],
capture_output=True,
check=True,
)
except subprocess.CalledProcessError as e:
raise errors.NoRemoteSignatures(e)
    # Strip the trailing newline, split on newlines, and parse each line as JSON
signatures_raw = process.stdout.decode("utf-8").strip().split("\n")
signatures = list(filter(bool, map(json.loads, signatures_raw)))
if len(signatures) < 1:
raise errors.NoRemoteSignatures("No signatures found for the image")
return signatures
def prepare_airgapped_archive(image_name: str, destination: str) -> None:
if "@sha256:" not in image_name:
raise errors.AirgappedImageDownloadError(
"The image name must include a digest, e.g. ghcr.io/freedomofpress/dangerzone/dangerzone@sha256:123456"
)
cosign.ensure_installed()
# Get the image from the registry
with TemporaryDirectory() as tmpdir:
msg = f"Downloading image {image_name}. \nIt might take a while."
log.info(msg)
        try:
            subprocess.run(
                ["cosign", "save", image_name, "--dir", tmpdir],
                capture_output=True,
                check=True,
            )
        except subprocess.CalledProcessError as e:
            raise errors.AirgappedImageDownloadError(e)
with tarfile.open(destination, "w") as archive:
archive.add(tmpdir, arcname=".")
def upgrade_container_image(
image: str, manifest_digest: str, pubkey: str, callback: Callable
) -> str:
"""Verify and upgrade the image to the latest, if signed."""
update_available, remote_digest = registry.is_new_remote_image_available(image)
if not update_available:
raise errors.ImageAlreadyUpToDate("The image is already up to date")
signatures = check_signatures_and_logindex(image, remote_digest, pubkey)
runtime.container_pull(image, manifest_digest, callback=callback)
    # Store the signatures only now, to avoid persisting unverified ones
store_signatures(signatures, manifest_digest, pubkey)
return manifest_digest
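
Tying the module together, a hedged sketch of the online upgrade path (the exact progress `callback` signature is an assumption):

```python
from dangerzone.updater import signatures

image = "ghcr.io/freedomofpress/dangerzone/dangerzone"
pubkey = signatures.DEFAULT_PUBKEY_LOCATION

update_available, digest = signatures.is_update_available(image, pubkey)
if update_available:
    # Verifies signatures and the log index, pulls by digest, then stores
    # the (now verified) signatures locally.
    signatures.upgrade_container_image(
        image, digest, pubkey, callback=print  # callback semantics assumed
    )
```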

View file

@ -1,50 +1,49 @@
import pathlib
import platform
import subprocess
import sys
import traceback
import unicodedata
from pathlib import Path
try:
import platformdirs
except ImportError:
import appdirs as platformdirs
import appdirs as platformdirs # type: ignore[no-redef]
def get_config_dir() -> str:
return platformdirs.user_config_dir("dangerzone")
def get_config_dir() -> Path:
return Path(platformdirs.user_config_dir("dangerzone"))
def get_resource_path(filename: str) -> str:
def get_resource_path(filename: str) -> Path:
if getattr(sys, "dangerzone_dev", False):
# Look for resources directory relative to python file
project_root = pathlib.Path(__file__).parent.parent
project_root = Path(__file__).parent.parent
prefix = project_root / "share"
else:
if platform.system() == "Darwin":
bin_path = pathlib.Path(sys.executable)
bin_path = Path(sys.executable)
app_path = bin_path.parent.parent
prefix = app_path / "Resources" / "share"
elif platform.system() == "Linux":
prefix = pathlib.Path(sys.prefix) / "share" / "dangerzone"
prefix = Path(sys.prefix) / "share" / "dangerzone"
elif platform.system() == "Windows":
exe_path = pathlib.Path(sys.executable)
exe_path = Path(sys.executable)
dz_install_path = exe_path.parent
prefix = dz_install_path / "share"
else:
raise NotImplementedError(f"Unsupported system {platform.system()}")
resource_path = prefix / filename
return str(resource_path)
return prefix / filename
def get_tessdata_dir() -> pathlib.Path:
def get_tessdata_dir() -> Path:
if getattr(sys, "dangerzone_dev", False) or platform.system() in (
"Windows",
"Darwin",
):
# Always use the tessdata path from the Dangerzone ./share directory, for
# development builds, or in Windows/macOS platforms.
return pathlib.Path(get_resource_path("tessdata"))
return get_resource_path("tessdata")
# In case of Linux systems, grab the Tesseract data from any of the following
# locations. We have found some of the locations through trial and error, whereas
@ -55,11 +54,11 @@ def get_tessdata_dir() -> pathlib.Path:
#
# [1] https://tesseract-ocr.github.io/tessdoc/Installation.html
tessdata_dirs = [
pathlib.Path("/usr/share/tessdata/"), # on some Debian
pathlib.Path("/usr/share/tesseract/tessdata/"), # on Fedora
pathlib.Path("/usr/share/tesseract-ocr/tessdata/"), # ? (documented)
pathlib.Path("/usr/share/tesseract-ocr/4.00/tessdata/"), # on Ubuntu Focal
pathlib.Path("/usr/share/tesseract-ocr/5/tessdata/"), # on Debian Trixie
Path("/usr/share/tessdata/"), # on some Debian
Path("/usr/share/tesseract/tessdata/"), # on Fedora
Path("/usr/share/tesseract-ocr/tessdata/"), # ? (documented)
Path("/usr/share/tesseract-ocr/4.00/tessdata/"), # on Debian Bullseye
Path("/usr/share/tesseract-ocr/5/tessdata/"), # on Debian Trixie
]
for dir in tessdata_dirs:
@ -70,8 +69,9 @@ def get_tessdata_dir() -> pathlib.Path:
def get_version() -> str:
"""Returns the Dangerzone version string."""
try:
with open(get_resource_path("version.txt")) as f:
with get_resource_path("version.txt").open() as f:
version = f.read().strip()
except FileNotFoundError:
# In dev mode, in Windows, get_resource_path doesn't work properly for the container, but luckily

8
debian/changelog vendored
View file

@ -1,8 +1,14 @@
dangerzone (0.9.0) unstable; urgency=low
* Released Dangerzone 0.9.0
-- Freedom of the Press Foundation <info@freedom.press> Mon, 31 Mar 2025 15:57:18 +0300
dangerzone (0.8.1) unstable; urgency=low
* Released Dangerzone 0.8.1
-- Freedom of the Press Foundation <info@freedom.press> Tue, 22 December 2024 22:03:28 +0300
-- Freedom of the Press Foundation <info@freedom.press> Tue, 22 Dec 2024 22:03:28 +0300
dangerzone (0.8.0) unstable; urgency=low

13
dev_scripts/dangerzone-image Executable file
View file

@ -0,0 +1,13 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os
import sys
# Load dangerzone module and resources from the source code tree
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
sys.dangerzone_dev = True
from dangerzone.updater import cli
cli.main()

View file

@ -60,24 +60,6 @@ Run Dangerzone in the end-user environment:
"""
# NOTE: For Ubuntu 20.04 specifically, we need to install some extra deps, mainly for
# Podman. This needs to take place both in our dev and end-user environment. See the
# corresponding note in our Installation section:
#
# https://github.com/freedomofpress/dangerzone/blob/main/INSTALL.md#ubuntu-debian
DOCKERFILE_UBUNTU_2004_DEPS = r"""
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update \
&& apt-get install -y python-all python3.9 curl wget gnupg2 \
&& rm -rf /var/lib/apt/lists/*
RUN . /etc/os-release \
&& sh -c "echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_$VERSION_ID/ /' \
> /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list" \
&& wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/xUbuntu_$VERSION_ID/Release.key -O- \
| apt-key add -
"""
# XXX: overcome the fact that ubuntu images (starting on 23.04) ship with the 'ubuntu'
# user by default https://bugs.launchpad.net/cloud-images/+bug/2005129
# Related issue https://github.com/freedomofpress/dangerzone/pull/461
@ -114,33 +96,18 @@ RUN apt-get update \
RUN apt-get update \
&& apt-get install -y --no-install-recommends dh-python make build-essential \
git {qt_deps} pipx python3 python3-pip python3-venv dpkg-dev debhelper python3-setuptools \
python3-dev \
&& rm -rf /var/lib/apt/lists/*
# NOTE: `pipx install poetry` fails on Ubuntu Focal, when installed through APT. By
# installing the latest version, we sidestep this issue.
RUN bash -c 'if [[ "$(pipx --version)" < "1" ]]; then \
apt-get update \
&& apt-get remove -y pipx \
&& apt-get install -y --no-install-recommends python3-pip \
&& pip install pipx \
&& rm -rf /var/lib/apt/lists/*; \
else true; fi'
RUN pipx install poetry
RUN apt-get update \
&& apt-get install -y --no-install-recommends mupdf thunar \
&& rm -rf /var/lib/apt/lists/*
"""
# NOTE: Fedora 41 comes with Python 3.13 installed. Our Python project is not compatible
# yet with Python 3.13, because PySide6 cannot work with this Python version. To
# sidestep this, install Python 3.12 *only* in dev environments.
DOCKERFILE_BUILD_DEV_FEDORA_41_DEPS = r"""
# Install Python 3.12 since our project is not compatible yet with Python 3.13.
RUN dnf install -y python3.12
"""
# FIXME: Install Poetry on Fedora via package manager.
DOCKERFILE_BUILD_DEV_FEDORA_DEPS = r"""
RUN dnf install -y git rpm-build podman python3 python3-devel python3-poetry-core \
pipx make qt6-qtbase-gui \
pipx make qt6-qtbase-gui gcc gcc-c++\
&& dnf clean all
# FIXME: Drop this fix after it's resolved upstream.
@ -564,8 +531,6 @@ class Env:
if self.distro == "fedora":
install_deps = DOCKERFILE_BUILD_DEV_FEDORA_DEPS
if self.version == "41":
install_deps += DOCKERFILE_BUILD_DEV_FEDORA_41_DEPS
else:
# Use Qt6 in all of our Linux dev environments, and add a missing
# libxcb-cursor0 dependency
@ -573,12 +538,7 @@ class Env:
# See https://github.com/freedomofpress/dangerzone/issues/482
qt_deps = "libqt6gui6 libxcb-cursor0"
install_deps = DOCKERFILE_BUILD_DEV_DEBIAN_DEPS
if self.distro == "ubuntu" and self.version in ("20.04", "focal"):
qt_deps = "libqt5gui5 libxcb-cursor0" # Ubuntu Focal has only Qt5.
install_deps = (
DOCKERFILE_UBUNTU_2004_DEPS + DOCKERFILE_BUILD_DEV_DEBIAN_DEPS
)
elif self.distro == "ubuntu" and self.version in ("22.04", "jammy"):
if self.distro == "ubuntu" and self.version in ("22.04", "jammy"):
# Ubuntu Jammy misses a dependency to `libxkbcommon-x11-0`, which we can
# install indirectly via `qt6-qpa-plugins`.
qt_deps += " qt6-qpa-plugins"
@ -592,6 +552,8 @@ class Env:
"noble",
"24.10",
"ocular",
"25.04",
"plucky",
):
install_deps = (
DOCKERFILE_UBUNTU_REM_USER + DOCKERFILE_BUILD_DEV_DEBIAN_DEPS
@ -642,11 +604,7 @@ class Env:
install_cmd = "dnf install -y"
else:
install_deps = DOCKERFILE_BUILD_DEBIAN_DEPS
if self.distro == "ubuntu" and self.version in ("20.04", "focal"):
install_deps = (
DOCKERFILE_UBUNTU_2004_DEPS + DOCKERFILE_BUILD_DEBIAN_DEPS
)
elif self.distro == "ubuntu" and self.version in ("22.04", "jammy"):
if self.distro == "ubuntu" and self.version in ("22.04", "jammy"):
# Ubuntu Jammy requires a more up-to-date conmon
# package (see https://github.com/freedomofpress/dangerzone/issues/685)
install_deps = DOCKERFILE_CONMON_UPDATE + DOCKERFILE_BUILD_DEBIAN_DEPS
@ -655,6 +613,8 @@ class Env:
"noble",
"24.10",
"ocular",
"25.04",
"plucky",
):
install_deps = DOCKERFILE_UBUNTU_REM_USER + DOCKERFILE_BUILD_DEBIAN_DEPS
package_pattern = f"dangerzone_{version}-*_*.deb"

View file

@ -251,29 +251,6 @@ Install dependencies:
</table>
<table>
<tr>
<td>
<details>
<summary><i>:memo: Expand this section if you are on Ubuntu 20.04 (Focal).</i></summary>
</br>
The default Python version that ships with Ubuntu Focal (3.8) is not
compatible with PySide6, which requires Python 3.9 or greater.
You can install Python 3.9 using the `python3.9` package.
```bash
sudo apt install -y python3.9
```
Poetry will automatically pick up the correct version when running.
</details>
</td>
</tr>
</table>
```sh
sudo apt install -y podman dh-python build-essential make libqt6gui6 \
pipx python3 python3-dev
@ -350,33 +327,11 @@ sudo dnf install -y rpm-build podman python3 python3-devel python3-poetry-core \
pipx qt6-qtbase-gui
```
<table>
<tr>
<td>
<details>
<summary><i>:memo: Expand this section if you are on Fedora 41.</i></summary>
</br>
The default Python version that ships with Fedora 41 (3.13) is not
compatible with PySide6, which requires Python 3.12 or earlier.
You can install Python 3.12 using the `python3.12` package.
```bash
sudo dnf install -y python3.12
```
Poetry will automatically pick up the correct version when running.
</details>
</td>
</tr>
</table>
Install Poetry using `pipx`:
```sh
pipx install poetry
pipx inject poetry poetry-plugin-export
pipx inject poetry
```
Clone this repository:
@ -442,7 +397,7 @@ Install Microsoft Visual C++ 14.0 or greater. Get it with ["Microsoft C++ Build
Install [poetry](https://python-poetry.org/). Open PowerShell, and run:
```
python -m pip install poetry poetry-plugin-export
python -m pip install poetry
```
Install git from [here](https://git-scm.com/download/win), open a Windows terminal (`cmd.exe`) and clone this repository:
@ -880,8 +835,8 @@ class QAWindows(QABase):
"Install Poetry and the project's dependencies", ref=REF_BUILD, auto=True
)
def install_poetry(self):
self.run("python", "-m", "pip", "install", "poetry", "poetry-plugin-export")
self.run("poetry", "install", "--sync")
self.run("python", "-m", "pip", "install", "poetry")
self.run("poetry", "sync")
@QABase.task("Build Dangerzone container image", ref=REF_BUILD, auto=True)
def build_image(self):
@ -1035,11 +990,6 @@ class QADebianTrixie(QADebianBased):
VERSION = "trixie"
class QAUbuntu2004(QADebianBased):
DISTRO = "ubuntu"
VERSION = "20.04"
class QAUbuntu2204(QADebianBased):
DISTRO = "ubuntu"
VERSION = "22.04"
@ -1055,6 +1005,11 @@ class QAUbuntu2410(QADebianBased):
VERSION = "24.10"
class QAUbuntu2504(QADebianBased):
DISTRO = "ubuntu"
VERSION = "25.04"
class QAFedora(QALinux):
"""Base class for Fedora distros.
@ -1072,6 +1027,10 @@ class QAFedora(QALinux):
)
class QAFedora42(QAFedora):
VERSION = "42"
class QAFedora41(QAFedora):
VERSION = "41"

680
dev_scripts/repro-build.py Executable file
View file

@ -0,0 +1,680 @@
#!/usr/bin/env python3
import argparse
import datetime
import hashlib
import json
import logging
import os
import pprint
import shlex
import shutil
import subprocess
import sys
import tarfile
from pathlib import Path
logger = logging.getLogger(__name__)
MEDIA_TYPE_INDEX_V1_JSON = "application/vnd.oci.image.index.v1+json"
MEDIA_TYPE_MANIFEST_V1_JSON = "application/vnd.oci.image.manifest.v1+json"
ENV_RUNTIME = "REPRO_RUNTIME"
ENV_DATETIME = "REPRO_DATETIME"
ENV_SDE = "REPRO_SOURCE_DATE_EPOCH"
ENV_CACHE = "REPRO_CACHE"
ENV_BUILDKIT = "REPRO_BUILDKIT_IMAGE"
ENV_ROOTLESS = "REPRO_ROOTLESS"
DEFAULT_BUILDKIT_IMAGE = "moby/buildkit:v0.19.0@sha256:14aa1b4dd92ea0a4cd03a54d0c6079046ea98cd0c0ae6176bdd7036ba370cbbe"
DEFAULT_BUILDKIT_IMAGE_ROOTLESS = "moby/buildkit:v0.19.0-rootless@sha256:e901cffdad753892a7c3afb8b9972549fca02c73888cf340c91ed801fdd96d71"
MSG_BUILD_CTX = """Build environment:
- Container runtime: {runtime}
- BuildKit image: {buildkit_image}
- Rootless support: {rootless}
- Caching enabled: {use_cache}
- Build context: {context}
- Dockerfile: {dockerfile}
- Output: {output}
Build parameters:
- SOURCE_DATE_EPOCH: {sde}
- Build args: {build_args}
- Tag: {tag}
- Platform: {platform}
Podman-only arguments:
- BuildKit arguments: {buildkit_args}
Docker-only arguments:
- Docker Buildx arguments: {buildx_args}
"""
def pretty_error(obj: dict, msg: str):
raise Exception(f"{msg}\n{pprint.pprint(obj)}")
def get_key(obj: dict, key: str) -> object:
if key not in obj:
pretty_error(f"Could not find key '{key}' in the dictionary:", obj)
return obj[key]
def run(cmd, dry=False, check=True):
action = "Would have run" if dry else "Running"
logger.debug(f"{action}: {shlex.join(cmd)}")
if not dry:
subprocess.run(cmd, check=check)
def snip_contents(contents: str, num: int) -> str:
contents = contents.replace("\n", "")
if len(contents) > num:
return (
contents[:num]
+ f" [... {len(contents) - num} characters omitted."
+ " Pass --show-contents to print them in their entirety]"
)
return contents
def detect_container_runtime() -> str | None:
"""Auto-detect the installed container runtime in the system."""
if shutil.which("docker"):
return "docker"
elif shutil.which("podman"):
return "podman"
else:
return None
def parse_runtime(args) -> str:
if args.runtime is not None:
return args.runtime
runtime = os.environ.get(ENV_RUNTIME)
if runtime is None:
raise RuntimeError("No container runtime detected in your system")
if runtime not in ("docker", "podman"):
raise RuntimeError(
"Only 'docker' or 'podman' container runtimes"
" are currently supported by this script"
)
def parse_use_cache(args) -> bool:
if args.no_cache:
return False
return bool(int(os.environ.get(ENV_CACHE, "1")))
def parse_rootless(args, runtime: str) -> bool:
rootless = args.rootless or bool(int(os.environ.get(ENV_ROOTLESS, "0")))
if runtime != "podman" and rootless:
raise RuntimeError("Rootless mode is only supported with Podman runtime")
return rootless
def parse_sde(args) -> str:
sde = os.environ.get(ENV_SDE, args.source_date_epoch)
dt = os.environ.get(ENV_DATETIME, args.datetime)
if (sde is not None and dt is not None) or (sde is None and dt is None):
raise RuntimeError("You need to pass either a source date epoch or a datetime")
if sde is not None:
return str(sde)
if dt is not None:
d = datetime.datetime.fromisoformat(dt)
# If the datetime is naive, assume its timezone is UTC. The check is
# taken from:
# https://docs.python.org/3/library/datetime.html#determining-if-an-object-is-aware-or-naive
if d.tzinfo is None or d.tzinfo.utcoffset(d) is None:
d = d.replace(tzinfo=datetime.timezone.utc)
        return str(int(d.timestamp()))
def parse_buildkit_image(args, rootless: bool, runtime: str) -> str:
default = DEFAULT_BUILDKIT_IMAGE_ROOTLESS if rootless else DEFAULT_BUILDKIT_IMAGE
img = args.buildkit_image or os.environ.get(ENV_BUILDKIT, default)
if runtime == "podman" and not img.startswith("docker.io/"):
img = "docker.io/" + img
return img
def parse_build_args(args) -> list:
return args.build_arg or []
def parse_buildkit_args(args, runtime: str) -> list:
if not args.buildkit_args:
return []
if runtime != "podman":
raise RuntimeError("Cannot specify BuildKit arguments using the Podman runtime")
return shlex.split(args.buildkit_args)
def parse_buildx_args(args, runtime: str) -> list:
if not args.buildx_args:
return []
if runtime != "docker":
raise RuntimeError(
"Cannot specify Docker Buildx arguments using the Podman runtime"
)
return shlex.split(args.buildx_args)
def parse_image_digest(args) -> str | None:
if not args.expected_image_digest:
return None
parsed = args.expected_image_digest.split(":", 1)
if len(parsed) == 1:
return parsed[0]
else:
return parsed[1]
def parse_path(path: str | None) -> str | None:
return path and str(Path(path).absolute())
##########################
# OCI parsing logic
#
# Compatible with:
# * https://github.com/opencontainers/image-spec/blob/main/image-layout.md
def oci_print_info(parsed: dict, full: bool) -> None:
print(f"The OCI tarball contains an index and {len(parsed) - 1} manifest(s):")
print()
print(f"Image digest: {parsed[1]['digest']}")
for i, info in enumerate(parsed):
print()
if i == 0:
print(f"Index ({info['path']}):")
else:
print(f"Manifest {i} ({info['path']}):")
print(f" Digest: {info['digest']}")
print(f" Media type: {info['media_type']}")
print(f" Platform: {info['platform'] or '-'}")
contents = info["contents"] if full else snip_contents(info["contents"], 600)
print(f" Contents: {contents}")
print()
def oci_normalize_path(path):
if path.startswith("sha256:"):
hash_algo, checksum = path.split(":")
path = f"blobs/{hash_algo}/{checksum}"
return path
def oci_get_file_from_tarball(tar: tarfile.TarFile, path: str) -> str:
"""Get file from an OCI tarball.
If the filename cannot be found, search again by prefixing it with "./", since we
have encountered path names in OCI tarballs prefixed with "./".
"""
try:
return tar.extractfile(path).read().decode()
except KeyError:
if not path.startswith("./") and not path.startswith("/"):
path = "./" + path
try:
return tar.extractfile(path).read().decode()
except KeyError:
# Do not raise here, so that we can raise the original exception below.
pass
raise
def oci_parse_manifest(tar: tarfile.TarFile, path: str, platform: dict | None) -> dict:
"""Parse manifest information in JSON format.
Interestingly, the platform info for a manifest is not included in the
manifest itself, but in the descriptor that points to it. So, we have to
carry it from the previous manifest and include in the info here.
"""
path = oci_normalize_path(path)
contents = oci_get_file_from_tarball(tar, path)
digest = "sha256:" + hashlib.sha256(contents.encode()).hexdigest()
contents_dict = json.loads(contents)
media_type = get_key(contents_dict, "mediaType")
manifests = contents_dict.get("manifests", [])
if platform:
os = get_key(platform, "os")
arch = get_key(platform, "architecture")
platform = f"{os}/{arch}"
return {
"path": path,
"contents": contents,
"digest": digest,
"media_type": media_type,
"platform": platform,
"manifests": manifests,
}
def oci_parse_manifests_dfs(
tar: tarfile.TarFile, path: str, parsed: list, platform: dict | None = None
) -> None:
info = oci_parse_manifest(tar, path, platform)
parsed.append(info)
for m in info["manifests"]:
oci_parse_manifests_dfs(tar, m["digest"], parsed, m.get("platform"))
def oci_parse_tarball(path: Path) -> list:
parsed = []
with tarfile.TarFile.open(path) as tar:
oci_parse_manifests_dfs(tar, "index.json", parsed)
return parsed
##########################
# Image building logic
def podman_build(
context: str,
dockerfile: str | None,
tag: str | None,
buildkit_image: str,
sde: int,
rootless: bool,
use_cache: bool,
output: Path,
build_args: list,
platform: str,
buildkit_args: list,
dry: bool,
):
rootless_args = []
rootful_args = []
if rootless:
rootless_args = [
"--userns",
"keep-id:uid=1000,gid=1000",
"--security-opt",
"seccomp=unconfined",
"--security-opt",
"apparmor=unconfined",
"-e",
"BUILDKITD_FLAGS=--oci-worker-no-process-sandbox",
]
else:
rootful_args = ["--privileged"]
dockerfile_args_podman = []
dockerfile_args_buildkit = []
if dockerfile:
dockerfile_args_podman = ["-v", f"{dockerfile}:/tmp/Dockerfile"]
dockerfile_args_buildkit = ["--local", "dockerfile=/tmp"]
else:
dockerfile_args_buildkit = ["--local", "dockerfile=/tmp/work"]
tag_args = f",name={tag}" if tag else ""
cache_args = []
if use_cache:
cache_args = [
"--export-cache",
"type=local,mode=max,dest=/tmp/cache",
"--import-cache",
"type=local,src=/tmp/cache",
]
_build_args = []
for arg in build_args:
_build_args.append("--opt")
_build_args.append(f"build-arg:{arg}")
platform_args = ["--opt", f"platform={platform}"] if platform else []
cmd = [
"podman",
"run",
"-it",
"--rm",
"-v",
"buildkit_cache:/tmp/cache",
"-v",
f"{output.parent}:/tmp/image",
"-v",
f"{context}:/tmp/work",
"--entrypoint",
"buildctl-daemonless.sh",
*rootless_args,
*rootful_args,
*dockerfile_args_podman,
buildkit_image,
"build",
"--frontend",
"dockerfile.v0",
"--local",
"context=/tmp/work",
"--opt",
f"build-arg:SOURCE_DATE_EPOCH={sde}",
*_build_args,
"--output",
f"type=docker,dest=/tmp/image/{output.name},rewrite-timestamp=true{tag_args}",
*cache_args,
*dockerfile_args_buildkit,
*platform_args,
*buildkit_args,
]
run(cmd, dry)
def docker_build(
context: str,
dockerfile: str | None,
tag: str | None,
buildkit_image: str,
sde: int,
use_cache: bool,
output: Path,
build_args: list,
platform: str,
buildx_args: list,
dry: bool,
):
builder_id = hashlib.sha256(buildkit_image.encode()).hexdigest()
builder_name = f"repro-build-{builder_id}"
tag_args = ["-t", tag] if tag else []
cache_args = [] if use_cache else ["--no-cache", "--pull"]
cmd = [
"docker",
"buildx",
"create",
"--name",
builder_name,
"--driver-opt",
f"image={buildkit_image}",
]
run(cmd, dry, check=False)
dockerfile_args = ["-f", dockerfile] if dockerfile else []
_build_args = []
for arg in build_args:
_build_args.append("--build-arg")
_build_args.append(arg)
platform_args = ["--platform", platform] if platform else []
cmd = [
"docker",
"buildx",
"--builder",
builder_name,
"build",
"--build-arg",
f"SOURCE_DATE_EPOCH={sde}",
*_build_args,
"--provenance",
"false",
"--output",
f"type=docker,dest={output},rewrite-timestamp=true",
*cache_args,
*tag_args,
*dockerfile_args,
*platform_args,
*buildx_args,
context,
]
run(cmd, dry)
##########################
# Command logic
def build(args):
runtime = parse_runtime(args)
use_cache = parse_use_cache(args)
sde = parse_sde(args)
rootless = parse_rootless(args, runtime)
buildkit_image = parse_buildkit_image(args, rootless, runtime)
build_args = parse_build_args(args)
platform = args.platform
buildkit_args = parse_buildkit_args(args, runtime)
buildx_args = parse_buildx_args(args, runtime)
tag = args.tag
dockerfile = parse_path(args.file)
output = Path(parse_path(args.output))
dry = args.dry
context = parse_path(args.context)
logger.info(
MSG_BUILD_CTX.format(
runtime=runtime,
buildkit_image=buildkit_image,
sde=sde,
rootless=rootless,
use_cache=use_cache,
context=context,
dockerfile=dockerfile or "(not provided)",
tag=tag or "(not provided)",
output=output,
build_args=",".join(build_args) or "(not provided)",
platform=platform or "(default)",
buildkit_args=" ".join(buildkit_args) or "(not provided)",
buildx_args=" ".join(buildx_args) or "(not provided)",
)
)
try:
if runtime == "docker":
docker_build(
context,
dockerfile,
tag,
buildkit_image,
sde,
use_cache,
output,
build_args,
platform,
buildx_args,
dry,
)
else:
podman_build(
context,
dockerfile,
tag,
buildkit_image,
sde,
rootless,
use_cache,
output,
build_args,
platform,
buildkit_args,
dry,
)
except subprocess.CalledProcessError as e:
logger.error(f"Failed with {e.returncode}")
sys.exit(e.returncode)
def analyze(args) -> None:
expected_image_digest = parse_image_digest(args)
tarball_path = Path(args.tarball)
parsed = oci_parse_tarball(tarball_path)
oci_print_info(parsed, args.show_contents)
if expected_image_digest:
cur_digest = parsed[1]["digest"].split(":")[1]
if cur_digest != expected_image_digest:
raise Exception(
f"The image does not have the expected digest: {cur_digest} != {expected_image_digest}"
)
print(f"✅ Image digest matches {expected_image_digest}")
def define_build_cmd_args(parser: argparse.ArgumentParser) -> None:
parser.add_argument(
"--runtime",
choices=["docker", "podman"],
default=detect_container_runtime(),
help="The container runtime for building the image (default: %(default)s)",
)
parser.add_argument(
"--datetime",
metavar="YYYY-MM-DD",
default=None,
help=(
"Provide a date and (optionally) a time in ISO format, which will"
" be used as the timestamp of the image layers"
),
)
parser.add_argument(
"--buildkit-image",
metavar="NAME:TAG@DIGEST",
default=None,
help=(
"The BuildKit container image which will be used for building the"
" reproducible container image. Make sure to pass the '-rootless'"
" variant if you are using rootless Podman"
" (default: docker.io/moby/buildkit:v0.19.0)"
),
)
parser.add_argument(
"--source-date-epoch",
"--sde",
metavar="SECONDS",
type=int,
default=None,
help="Provide a Unix timestamp for the image layers",
)
parser.add_argument(
"--no-cache",
default=False,
action="store_true",
help="Do not use existing cached images for the container build. Build from the start with a new set of cached layers.",
)
parser.add_argument(
"--rootless",
default=False,
action="store_true",
help="Run BuildKit in rootless mode (Podman only)",
)
parser.add_argument(
"-f",
"--file",
metavar="FILE",
default=None,
help="Pathname of a Dockerfile",
)
parser.add_argument(
"-o",
"--output",
metavar="FILE",
default=Path.cwd() / "image.tar",
help="Path to save OCI tarball (default: %(default)s)",
)
parser.add_argument(
"-t",
"--tag",
metavar="TAG",
default=None,
help="Tag the built image with the name %(metavar)s",
)
parser.add_argument(
"--build-arg",
metavar="ARG=VALUE",
action="append",
default=None,
help="Set build-time variables",
)
parser.add_argument(
"--platform",
metavar="PLAT1,PLAT2",
default=None,
help="Set platform for the image",
)
parser.add_argument(
"--buildkit-args",
metavar="'ARG1 ARG2'",
default=None,
help="Extra arguments for BuildKit (Podman only)",
)
parser.add_argument(
"--buildx-args",
metavar="'ARG1 ARG2'",
default=None,
help="Extra arguments for Docker Buildx (Docker only)",
)
parser.add_argument(
"--dry",
default=False,
action="store_true",
help="Do not run any commands, just print what would happen",
)
parser.add_argument(
"context",
metavar="CONTEXT",
help="Path to the build context",
)
def parse_args() -> dict:
parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers(dest="command", help="Available commands")
build_parser = subparsers.add_parser("build", help="Perform a build operation")
build_parser.set_defaults(func=build)
define_build_cmd_args(build_parser)
analyze_parser = subparsers.add_parser("analyze", help="Analyze an OCI tarball")
analyze_parser.set_defaults(func=analyze)
analyze_parser.add_argument(
"tarball",
metavar="FILE",
help="Path to OCI image in .tar format",
)
analyze_parser.add_argument(
"--expected-image-digest",
metavar="DIGEST",
default=None,
help="The expected digest for the provided image",
)
analyze_parser.add_argument(
"--show-contents",
default=False,
action="store_true",
help="Show full file contents",
)
return parser.parse_args()
def main() -> None:
logging.basicConfig(
level=logging.DEBUG,
format="%(asctime)s - %(levelname)s - %(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
)
args = parse_args()
if not hasattr(args, "func"):
args.func = build
args.func(args)
if __name__ == "__main__":
sys.exit(main())
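
For instance, a deterministic build followed by a digest check could be scripted like this (the expected digest is a hypothetical placeholder):

```python
import subprocess

# Build with a pinned SOURCE_DATE_EPOCH so layer timestamps are deterministic.
subprocess.run(
    ["./dev_scripts/repro-build.py", "build",
     "--source-date-epoch", "0", "--output", "image.tar", "."],
    check=True,
)
# Parse the OCI tarball and fail if the image digest is not the expected one.
subprocess.run(
    ["./dev_scripts/repro-build.py", "analyze", "image.tar",
     "--expected-image-digest", "sha256:<expected>"],  # hypothetical digest
    check=True,
)
```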

View file

@ -4,6 +4,7 @@ import argparse
import hashlib
import logging
import pathlib
import platform
import stat
import subprocess
import sys
@ -11,131 +12,72 @@ import urllib.request
logger = logging.getLogger(__name__)
DIFFOCI_URL = "https://github.com/reproducible-containers/diffoci/releases/download/v0.1.5/diffoci-v0.1.5.linux-amd64"
DIFFOCI_CHECKSUM = "01d25fe690196945a6bd510d30559338aa489c034d3a1b895a0d82a4b860698f"
DIFFOCI_PATH = (
pathlib.Path.home() / ".local" / "share" / "dangerzone-dev" / "helpers" / "diffoci"
)
IMAGE_NAME = "dangerzone.rocks/dangerzone"
if platform.system() in ["Darwin", "Windows"]:
CONTAINER_RUNTIME = "docker"
elif platform.system() == "Linux":
CONTAINER_RUNTIME = "podman"
def run(*args):
"""Simple function that runs a command, validates it, and returns the output"""
"""Simple function that runs a command and checks the result."""
logger.debug(f"Running command: {' '.join(args)}")
return subprocess.run(
args,
check=True,
stdout=subprocess.PIPE,
).stdout
return subprocess.run(args, check=True)
def git_commit_get():
return run("git", "rev-parse", "--short", "HEAD").decode().strip()
def git_determine_tag():
return run("git", "describe", "--long", "--first-parent").decode().strip()[1:]
def git_verify(commit, source):
if not commit in source:
raise RuntimeError(
f"Image '{source}' does not seem to be built from commit '{commit}'"
)
def diffoci_hash_matches(diffoci):
"""Check if the hash of the downloaded diffoci bin matches the expected one."""
m = hashlib.sha256()
m.update(diffoci)
diffoci_checksum = m.hexdigest()
return diffoci_checksum == DIFFOCI_CHECKSUM
def diffoci_is_installed():
"""Determine if diffoci has been installed.
Determine if diffoci has been installed, by checking if the binary exists, and if
its hash is the expected one. If the binary exists but the hash is different, then
this is a sign that we need to update the local diffoci binary.
"""
if not DIFFOCI_PATH.exists():
return False
return diffoci_hash_matches(DIFFOCI_PATH.open("rb").read())
def diffoci_download():
"""Download the diffoci tool, based on a URL and its checksum."""
with urllib.request.urlopen(DIFFOCI_URL) as f:
diffoci_bin = f.read()
if not diffoci_hash_matches(diffoci_bin):
raise ValueError(
"Unexpected checksum for downloaded diffoci binary:"
f" {diffoci_checksum} !={DIFFOCI_CHECKSUM}"
)
DIFFOCI_PATH.parent.mkdir(parents=True, exist_ok=True)
DIFFOCI_PATH.open("wb+").write(diffoci_bin)
DIFFOCI_PATH.chmod(DIFFOCI_PATH.stat().st_mode | stat.S_IEXEC)
def diffoci_diff(source, local_target):
"""Diff the source image against the recently built target image using diffoci."""
target = f"podman://{local_target}"
try:
return run(
str(DIFFOCI_PATH),
"diff",
source,
target,
"--semantic",
"--verbose",
)
except subprocess.CalledProcessError as e:
error = e.stdout.decode()
raise RuntimeError(
f"Could not rebuild an identical image to {source}. Diffoci report:\n{error}"
)
def build_image(tag, use_cache=False):
def build_image(
platform=None,
runtime=None,
cache=True,
date=None,
):
"""Build the Dangerzone container image with a special tag."""
platform_args = [] if not platform else ["--platform", platform]
runtime_args = [] if not runtime else ["--runtime", runtime]
cache_args = [] if cache else ["--use-cache", "no"]
date_args = [] if not date else ["--debian-archive-date", date]
run(
"python3",
"./install/common/build-image.py",
"--no-save",
"--use-cache",
str(use_cache),
"--tag",
tag,
*platform_args,
*runtime_args,
*cache_args,
*date_args,
)
def parse_args():
    parser = argparse.ArgumentParser(
        prog=sys.argv[0],
        description="Dev script for verifying container image reproducibility",
    )
    parser.add_argument(
        "--platform",
        default=None,
        help="The platform for building the image (default: current platform)",
    )
    parser.add_argument(
        "--runtime",
        choices=["docker", "podman"],
        default=CONTAINER_RUNTIME,
        help=f"The container runtime for building the image (default: {CONTAINER_RUNTIME})",
    )
    parser.add_argument(
        "--no-cache",
        default=False,
        action="store_true",
        help=(
            "Do not use existing cached images for the container build."
            " Build from the start with a new set of cached layers."
        ),
    )
    parser.add_argument(
        "--debian-archive-date",
        default=None,
        help="Use a specific Debian snapshot archive, by its date",
    )
    parser.add_argument(
        "digest",
        help="The digest of the image that you want to reproduce",
    )
    return parser.parse_args()
@ -148,32 +90,25 @@ def main():
)
    args = parse_args()

    logger.info("Building container image")
    build_image(
        args.platform,
        args.runtime,
        not args.no_cache,
        args.debian_archive_date,
    )

    logger.info(
        f"Check that the reproduced image has the expected digest: {args.digest}"
    )
    run(
        "./dev_scripts/repro-build.py",
        "analyze",
        "--show-contents",
        "share/container.tar",
        "--expected-image-digest",
        args.digest,
    )
if __name__ == "__main__":

View file

@ -11,8 +11,8 @@ log = logging.getLogger(__name__)
DZ_ASSETS = [
"container-{version}-i686.tar",
"container-{version}-arm64.tar",
"Dangerzone-{version}.msi",
"Dangerzone-{version}-arm64.dmg",
"Dangerzone-{version}-i686.dmg",

View file

@ -42,7 +42,8 @@ doit <task>
## Tips and tricks
* You can run `doit list --all -s` to see the full list of tasks, their
dependencies, and whether they are up to date (U) or will run (R). Note that
certain small tasks are always configured to run.
* You can run `doit info <task>` to see which dependencies are missing.
* You can pass the following environment variables to the script, in order to
affect some global parameters:

View file

@ -0,0 +1,83 @@
# Independent Container Updates
Since version 0.9.0, Dangerzone is able to ship container images independently
from releases of the software.
One of the main benefits of doing so is to shorten the time needed to distribute security fixes for the containers. Since the container is where the actual conversion of documents happens, this helps keep Dangerzone users secure.
If you are a Dangerzone user, this all happens behind the scenes, and you should not have to know anything about it to enjoy these "in-app" updates. If you are using Dangerzone in an air-gapped environment, check the sections below.
## Checking attestations
Each night, new images are built and pushed to the container registry, alongside
a provenance attestation, enabling anybody to ensure that the image has
been originally built by GitHub CI runners, from a defined source repository (in our case `freedomofpress/dangerzone`).
To verify the attestations against our expectations, use the following command:
```bash
dangerzone-image attest-provenance ghcr.io/freedomofpress/dangerzone/dangerzone --repository freedomofpress/dangerzone
```
On success, it will report back:
```
🎉 Successfully verified image
'ghcr.io/freedomofpress/dangerzone/dangerzone:<tag>@sha256:<digest>'
and its associated claims:
- ✅ SLSA Level 3 provenance
- ✅ GitHub repo: freedomofpress/dangerzone
- ✅ GitHub actions workflow: <workflow>
- ✅ Git branch: <branch>
- ✅ Git commit: <commit>
```
## Sign and publish the remote image
Once the image has been reproduced locally, we can add a signature to the container registry,
and update the `latest` tag to point to the proper hash.
```bash
cosign sign --sk ghcr.io/freedomofpress/dangerzone/dangerzone:${TAG}@sha256:${DIGEST}
# And bump the latest tag
crane auth login ghcr.io -u USERNAME --password $(cat pat_token)
crane tag ghcr.io/freedomofpress/dangerzone/dangerzone@sha256:${DIGEST} latest
```
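
If you only have the tag at hand, a registry tool can resolve the digest it
points to. For instance, with [`crane`](https://github.com/google/go-containerregistry)
(shown as a sketch; any tool that prints the manifest digest works):

```bash
# Resolve the manifest digest that a tag currently points to, for use as
# ${DIGEST} in the commands above.
DIGEST=$(crane digest ghcr.io/freedomofpress/dangerzone/dangerzone:${TAG})
echo "${DIGEST}"
```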
## Install updates
To check if a new container image has been released, and update your local installation with it, you can use the following command:
```bash
dangerzone-image upgrade ghcr.io/freedomofpress/dangerzone/dangerzone
```
## Verify locally
You can verify that the image you have locally matches the stored signatures, and that these have been signed with a trusted public key:
```bash
dangerzone-image verify-local ghcr.io/freedomofpress/dangerzone/dangerzone
```
## Installing image updates to air-gapped environments
Three steps are required:
1. Prepare the archive
2. Transfer the archive to the air-gapped system
3. Install the archive on the air-gapped system
This archive will contain all the needed material to validate that the new container image has been signed and is valid.
On the machine on which you prepare the packages:
```bash
dangerzone-image prepare-archive --output dz-fa94872.tar ghcr.io/freedomofpress/dangerzone/dangerzone@sha256:<digest>
```
On the air-gapped machine, copy the file and run the following command:
```bash
dangerzone-image load-archive dz-fa94872.tar
```

View file

@ -27,7 +27,7 @@ This means that rebuilding the image without updating our Dockerfile will
Here are the necessary variables that make up our image in the `Dockerfile.env`
file:
* `DEBIAN_IMAGE_DIGEST`: The index digest for the Debian container image
* `DEBIAN_ARCHIVE_DATE`: The Debian snapshot repo that we want to use
* `GVISOR_ARCHIVE_DATE`: The gVisor APT repo that we want to use
* `H2ORESTART_CHECKSUM`: The SHA-256 checksum of the H2ORestart plugin
@ -47,21 +47,21 @@ trigger a CI error.
For a simple way to reproduce a Dangerzone container image, you can checkout the
commit this image was built from (you can find it from the image tag in its
`g<commit>` portion), retrieve the date it was built (also included in the image
tag), and run the following command in any environment:

```
./dev_scripts/reproduce-image.py \
    --debian-archive-date <date> \
    <digest>
```
where:

* `<date>` should be given in YYYYMMDD format, e.g., 20250226
* `<digest>` is the SHA-256 hash of the image for the **current platform**, with
  or without the `sha256:` prefix.

This command will build a container image from the current Git commit and the
provided date for the Debian archives. Then, it will compare the digest of the
manifest against the provided one. This is a simple way to ensure that the
created image is bit-for-bit reproducible.
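
Under the hood, the script delegates this comparison to
`./dev_scripts/repro-build.py`. If needed, you can also run that step manually
against the build output, with the same flags the script uses:

```
./dev_scripts/repro-build.py analyze \
    --show-contents share/container.tar \
    --expected-image-digest <digest>
```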

View file

@ -11,8 +11,7 @@ https://github.com/freedomofpress/dangerzone/wiki/Updates
## Design overview
A hamburger icon is visible across almost all of the Dangerzone windows, and is used to notify the users when there are new releases.
### First run
@ -21,8 +20,7 @@ _We detect it's the first time Dangerzone runs because the
Add the following keys in our `settings.json` file.
* `"updater_check_all": True`: Whether or not to check and apply independent container updates and check for new releases.
* `"updater_last_check": None`: The last time we checked for updates (in seconds
from Unix epoch). None means that we haven't checked yet.
* `"updater_latest_version": "0.4.2"`: The latest version that the Dangerzone
@ -32,43 +30,19 @@ Add the following keys in our `settings.json` file.
* `"updater_errors: 0`: The number of update check errors that we have
encountered in a row.
Note:
Previously, `"updater_check"` was used to determine if we should check for new releases, and has been replaced by `"updater_check_all"` when adding support for independent container updates.
### Second run
_We detect it's the second time Dangerzone runs because
`settings["updater_check_all"] is not None and settings["updater_last_check"] is
None`._
Before starting up the main window, the user is prompted if they want to enable update checks.
### Subsequent runs
_We perform the following only if `settings["updater_check_all"] == True`._
1. Spawn a new thread so that we don't block the main window.
2. Check if we have cached information about a release (version and changelog).

docs/podman-desktop.md
View file

@ -0,0 +1,53 @@
# Podman Desktop support
Starting with Dangerzone 0.9.0, it is possible to use Podman Desktop on
Windows and macOS. The support for this container runtime is currently
experimental. If you try it out and encounter issues, please reach out to us,
we'll be glad to help.
With [Podman Desktop](https://podman-desktop.io/) installed on your machine,
here are the required steps to change the Dangerzone container runtime.
You will need to open a terminal and follow the steps below.
## On macOS
You will need to configure podman to access the shared Dangerzone resources:
```bash
podman machine stop
podman machine rm
cat > ~/.config/containers/containers.conf <<EOF
[machine]
volumes = ["/Users:/Users", "/private:/private", "/var/folders:/var/folders", "/Applications/Dangerzone.app:/Applications/Dangerzone.app"]
EOF
podman machine init
podman machine set --rootful=false
podman machine start
```
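
Optionally, you can confirm that the new machine is up and running before
switching the runtime:

```bash
podman machine list
```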
Then, set the container runtime to podman using this command:
```bash
/Applications/Dangerzone.app/Contents/MacOS/dangerzone-cli --set-container-runtime podman
```
In order to get back to the default behaviour (Docker Desktop on macOS), pass
the `default` value instead:
```bash
/Applications/Dangerzone.app/Contents/MacOS/dangerzone-cli --set-container-runtime default
```
## On Windows
To set the container runtime to podman, use this command:
```bash
'C:\Program Files\Dangerzone\dangerzone-cli.exe' --set-container-runtime podman
```
To revert back to the default behavior, pass the `default` value:
```bash
'C:\Program Files\Dangerzone\dangerzone-cli.exe' --set-container-runtime default
```

dodo.py
View file

@ -8,8 +8,7 @@ from doit.action import CmdAction
ARCH = "arm64" if platform.machine() == "arm64" else "i686"
VERSION = open("share/version.txt").read().strip()
FEDORA_VERSIONS = ["40", "41", "42"]
### Global parameters
@ -44,7 +43,6 @@ def list_language_data():
tessdata_dir = Path("share") / "tessdata"
langs = json.loads(open(tessdata_dir.parent / "ocr-languages.json").read()).values()
targets = [tessdata_dir / f"{lang}.traineddata" for lang in langs]
targets.append(tessdata_dir)
return targets
@ -57,7 +55,7 @@ IMAGE_DEPS = [
*list_files("dangerzone/container_helpers"),
"install/common/build-image.py",
]
IMAGE_TARGETS = ["share/container.tar.gz", "share/image-id.txt"]
IMAGE_TARGETS = ["share/container.tar", "share/image-id.txt"]
SOURCE_DEPS = [
*list_files("assets"),
@ -124,7 +122,7 @@ def build_deb(cwd):
def build_rpm(version, cwd, qubes=False):
"""Build an .rpm package on the requested Fedora distro."""
return build_linux_pkg(distro="Fedora", version=version, cwd=cwd, qubes=qubes)
return build_linux_pkg(distro="fedora", version=version, cwd=cwd, qubes=qubes)
### Tasks
@ -188,8 +186,8 @@ def task_download_tessdata():
def task_build_image():
"""Build the container image using ./install/common/build-image.py"""
img_src = "share/container.tar.gz"
img_dst = RELEASE_DIR / f"container-{VERSION}-{ARCH}.tar.gz" # FIXME: Add arch
img_src = "share/container.tar"
img_dst = RELEASE_DIR / f"container-{VERSION}-{ARCH}.tar" # FIXME: Add arch
img_id_src = "share/image-id.txt"
img_id_dst = RELEASE_DIR / "image-id.txt" # FIXME: Add arch
@ -208,7 +206,7 @@ def task_build_image():
def task_poetry_install():
"""Setup the Poetry environment"""
return {"actions": ["poetry install --sync"], "clean": ["poetry env remove --all"]}
return {"actions": ["poetry sync"], "clean": ["poetry env remove --all"]}
def task_macos_build_dmg():

View file

@ -1,20 +1,17 @@
import argparse
import platform
import subprocess
import sys
from pathlib import Path

BUILD_CONTEXT = "dangerzone"
IMAGE_NAME = "dangerzone.rocks/dangerzone"
if platform.system() in ["Darwin", "Windows"]:
CONTAINER_RUNTIME = "docker"
elif platform.system() == "Linux":
CONTAINER_RUNTIME = "podman"
ARCH = platform.machine()
def str2bool(v):
if isinstance(v, bool):
@ -50,6 +47,16 @@ def determine_git_tag():
)
def determine_debian_archive_date():
"""Get the date of the Debian archive from Dockerfile.env."""
for env in Path("Dockerfile.env").read_text().split("\n"):
if env.startswith("DEBIAN_ARCHIVE_DATE"):
return env.split("=")[1]
raise Exception(
"Could not find 'DEBIAN_ARCHIVE_DATE' build argument in Dockerfile.env"
)
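
# For illustration, a hypothetical Dockerfile.env line that the parser above
# would match:
#
#     DEBIAN_ARCHIVE_DATE=20250120
#
# in which case determine_debian_archive_date() returns "20250120".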
def main():
parser = argparse.ArgumentParser()
parser.add_argument(
@ -59,16 +66,15 @@ def main():
help=f"The container runtime for building the image (default: {CONTAINER_RUNTIME})",
)
parser.add_argument(
    "--platform",
    default=None,
    help="The platform for building the image (default: current platform)",
)
parser.add_argument(
    "--output",
    "-o",
    default=str(Path("share") / "container.tar"),
    help="Path to store the container image",
)
parser.add_argument(
"--use-cache",
@ -83,63 +89,63 @@ def main():
default=None,
help="Provide a custom tag for the image (for development only)",
)
parser.add_argument(
"--debian-archive-date",
"-d",
default=determine_debian_archive_date(),
help="Use a specific Debian snapshot archive, by its date (default %(default)s)",
)
parser.add_argument(
"--dry",
default=False,
action="store_true",
help="Do not run any commands, just print what would happen",
)
args = parser.parse_args()
tag = args.tag or f"{args.debian_archive_date}-{determine_git_tag()}"
image_name_tagged = f"{IMAGE_NAME}:{tag}"

print(f"Will tag the container image as '{image_name_tagged}'")
image_id_path = Path("share") / "image-id.txt"
if not args.dry:
    with open(image_id_path, "w") as f:
        f.write(tag)
# Build the container image, and tag it with the calculated tag
print("Building container image")
cache_args = [] if args.use_cache else ["--no-cache"]
platform_args = [] if not args.platform else ["--platform", args.platform]
rootless_args = []
dry_args = [] if not args.dry else ["--dry"]
subprocess.run(
    [
        sys.executable,
        str(Path("dev_scripts") / "repro-build.py"),
        "build",
        "--runtime",
        args.runtime,
        "--datetime",
        args.debian_archive_date,
        *dry_args,
        *cache_args,
        *platform_args,
        *rootless_args,
        "--tag",
        image_name_tagged,
        "--output",
        args.output,
        "-f",
        "Dockerfile",
        BUILD_CONTEXT,
    ],
    check=True,
)
if __name__ == "__main__":
sys.exit(main())

View file

@ -51,6 +51,8 @@ def main():
if files == expected_files:
logger.info("Skipping tessdata download, language data already exists")
return
elif not files:
logger.info("Tesseract dir is empty, proceeding to download language data")
else:
logger.info(f"Found {tessdata_dir} but contents do not match")
return 1

View file

@ -66,14 +66,14 @@ def build(build_dir, qubes=False):
print("* Creating a Python sdist")
tessdata = root / "share" / "tessdata"
tessdata_bak = root / "tessdata.bak"
container_tar = root / "share" / "container.tar"
container_tar_bak = root / "container.tar.bak"
if tessdata.exists():
tessdata.rename(tessdata_bak)
stash_container = qubes and container_tar.exists()
if stash_container and container_tar.exists():
    container_tar.rename(container_tar_bak)
try:
subprocess.run(["poetry", "build", "-f", "sdist"], cwd=root, check=True)
# Copy and unlink the Dangerzone sdist, instead of just renaming it. If the
@ -84,8 +84,8 @@ def build(build_dir, qubes=False):
finally:
if tessdata_bak.exists():
tessdata_bak.rename(tessdata)
if stash_container and container_tar_bak.exists():
    container_tar_bak.rename(container_tar)
print("* Building RPM package")
cmd = [

View file

@ -18,7 +18,7 @@
#
# * Qubes packages include some extra files under /etc/qubes-rpc, whereas
# regular RPM packages include the container image under
# /usr/share/container.tar
# * Qubes packages have some extra dependencies.
# 3. It is best to consume this SPEC file using the `install/linux/build-rpm.py`
# script, which handles the necessary scaffolding for building the package.
@ -32,7 +32,7 @@ Name: dangerzone-qubes
Name: dangerzone
%endif
Version: 0.9.0
Release: 1%{?dist}
Summary: Take potentially dangerous PDFs, office documents, or images and convert them to safe PDFs
@ -216,12 +216,6 @@ convert the documents within a secure sandbox.
%prep
%autosetup -p1 -n dangerzone-%{version}
# Bypass the version pin for Fedora as the 6.8.1.1 package is causing trouble
# A 6.8.1.1 package was only released with a wheel for macOS, but was picked by
# Fedora packagers. We cannot use "*" when PyPI is involved as it will fail to download the latest version.
# For Fedora, we can pick any of the released versions.
sed -i '/shiboken6 = \[/,/\]/c\shiboken6 = "*"' pyproject.toml
%generate_buildrequires
%pyproject_buildrequires -R

View file

@ -31,23 +31,6 @@ def main():
cmd = ["poetry", "export", "--only", "debian"]
container_requirements_txt = subprocess.check_output(cmd)
# XXX: Hack for Ubuntu Focal.
#
# The `requirements.txt` file is generated from our `pyproject.toml` file, and thus
# specifies that the minimum Python version is 3.9. This was to accommodate to
# PySide6, which is installed in macOS / Windows via `poetry` and works with Python
# 3.9+. [1]
#
# The Python version in Ubuntu Focal though is 3.8. This generally was not much of
# an issue, since we used the package manager to install dependencies. However, it
# becomes an issue when we want to vendor the PyMuPDF package, using `pip`. In order
# to sidestep this virtual limitation, we can just change the Python version in the
# generated `requirements.txt` file in Ubuntu Focal from 3.9 to 3.8.
#
# [1] https://github.com/freedomofpress/dangerzone/pull/818
if sys.version.startswith("3.8"):
container_requirements_txt = container_requirements_txt.replace(b"3.9", b"3.8")
logger.info(f"Vendoring PyMuPDF under '{args.dest}'")
# We prefer to call the CLI version of `pip`, instead of importing it directly, as
# instructed here:

View file

@ -1,40 +0,0 @@
#!/bin/bash
# Development script for installing Podman on Ubuntu Focal. Mainly to be used as
# part of our CI pipelines, where we may install Podman on environments that
# don't have sudo.
set -e
if [[ "$EUID" -ne 0 ]]; then
SUDO=sudo
else
SUDO=
fi
provide() {
$SUDO apt-get update
$SUDO apt-get install curl wget gnupg2 -y
source /etc/os-release
$SUDO sh -c "echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/ /' \
> /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list"
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/xUbuntu_${VERSION_ID}/Release.key -O- \
| $SUDO apt-key add -
$SUDO apt-get update -qq -y
}
install() {
$SUDO apt-get -qq --yes install podman
podman --version
}
if [[ "$1" == "--repo-only" ]]; then
provide
elif [[ "$1" == "" ]]; then
provide
install
else
echo "Unexpected argument: $1"
echo "Usage: $0 [--repo-only]"
exit 1
fi

View file

@ -193,7 +193,7 @@ def main():
Path="C:\\Program Files (x86)\\Dangerzone",
)
ET.SubElement(directory_search_el, "FileSearch", Name="dangerzone.exe")
registry_search_el = ET.SubElement(package_el, "Property", Id="DANGERZONE08FOUND")
ET.SubElement(
registry_search_el,
"RegistrySearch",
@ -202,11 +202,19 @@ def main():
Name="DisplayName",
Type="raw",
)
ET.SubElement(
registry_search_el,
"RegistrySearch",
Root="HKLM",
Key="SOFTWARE\\WOW6432Node\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\{8AAC0808-3556-4164-9D15-6EC1FB673AB2}",
Name="DisplayName",
Type="raw",
)
ET.SubElement(
package_el,
"Launch",
Condition="NOT OLDDANGERZONEFOUND AND NOT DANGERZONE08FOUND",
Message='A previous version of [ProductName] is already installed. Please uninstall it from "Apps & Features" before proceeding with the installation.',
)
# Add the ProgramMenuFolder StandardDirectory

poetry.lock

File diff suppressed because it is too large

View file

@ -1,6 +1,6 @@
[tool.poetry]
name = "dangerzone"
version = "0.9.0"
description = "Take potentially dangerous PDFs, office documents, or images and convert them to safe PDFs"
authors = ["Freedom of the Press Foundation <info@freedom.press>", "Micah Lee <micah.lee@theintercept.com>"]
license = "AGPL-3.0"
@ -23,17 +23,11 @@ pyxdg = {version = "*", platform = "linux"}
requests = "*"
markdown = "*"
packaging = "*"
# shiboken6 released a 6.8.1.1 version only for macOS
# and it's getting picked by poetry, so pin it instead.
shiboken6 = [
{version = "*", platform = "darwin"},
{version = "<6.8.1.1", platform = "linux"},
{version = "<6.8.1.1", platform = "win32"},
]
[tool.poetry.scripts]
dangerzone = 'dangerzone:main'
dangerzone-cli = 'dangerzone:main'
dangerzone-image = "dangerzone.updater.cli:main"
# Dependencies required for packaging the code on various platforms.
[tool.poetry.group.package.dependencies]
@ -64,9 +58,10 @@ pytest-cov = "^5.0.0"
strip-ansi = "*"
pytest-subprocess = "^1.5.2"
pytest-rerunfailures = "^14.0"
numpy = "2.0" # bump when we remove python 3.9 support
[tool.poetry.group.debian.dependencies]
pymupdf = "^1.24.11"
[tool.poetry.group.dev.dependencies]
httpx = "^0.27.2"

View file

@ -13,7 +13,7 @@ setup(
description="Dangerzone",
options={
"build_exe": {
"packages": ["dangerzone", "dangerzone.gui", "pymupdf._wxcolors"],
"excludes": ["test", "tkinter"],
"include_files": [("share", "share"), ("LICENSE", "LICENSE")],
"include_msvcr": True,

View file

@ -1 +1 @@
0.9.0

View file

@ -0,0 +1,7 @@
This folder contains signature-folders used for testing the signatures implementation.
The following folders are used:
- `valid`: this folder contains signatures which should be considered valid and generated with the key available at `tests/assets/test.pub.key`
- `invalid`: this folder contains signatures which should be considered invalid, because their format doesn't match the expected one, e.g., they use plain text instead of base64-encoded text.
- `tempered`: this folder contains signatures which have been tampered with. The goal is to have signatures that look valid, but actually aren't.
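
The resulting layout looks roughly like this (the exact parent directory is an
assumption, shown for orientation only):

```
tests/assets/signatures/
├── valid/
├── invalid/
└── tempered/
```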

View file

@ -0,0 +1,18 @@
[
{
"Base64Signature": "Invalid base64 signature",
"Payload": "eyJjcml0aWNhbCI6eyJpZGVudGl0eSI6eyJkb2NrZXItcmVmZXJlbmNlIjoiZ2hjci5pby9hbG1ldC9kYW5nZXJ6b25lL2RhbmdlcnpvbmUifSwiaW1hZ2UiOnsiZG9ja2VyLW1hbmlmZXN0LWRpZ2VzdCI6InNoYTI1NjoxOWU4ZWFjZDc1ODc5ZDA1ZjY2MjFjMmVhOGRkOTU1ZTY4ZWUzZTA3YjQxYjlkNTNmNGM4Y2M5OTI5YTY4YTY3In0sInR5cGUiOiJjb3NpZ24gY29udGFpbmVyIGltYWdlIHNpZ25hdHVyZSJ9LCJvcHRpb25hbCI6bnVsbH0=",
"Cert": null,
"Chain": null,
"Bundle": {
"SignedEntryTimestamp": "MEUCIC9oXH9VVP96frVOmDw704FBqMN/Bpm2RMdTm6BtSwL/AiEA6mCIjhV65fYuy4CwjsIzQHi/oW6IBwtd6oCvN2dI6HQ=",
"Payload": {
"body": "eyJhcGlWZXJzaW9uIjoiMC4wLjEiLCJraW5kIjoiaGFzaGVkcmVrb3JkIiwic3BlYyI6eyJkYXRhIjp7Imhhc2giOnsiYWxnb3JpdGhtIjoic2hhMjU2IiwidmFsdWUiOiJmMjEwNDJjY2RjOGU0ZjA1ZGEzNmE5ZjU4ODg5MmFlZGRlMzYzZTQ2ZWNjZGZjM2MyNzAyMTkwZDU0YTdmZmVlIn19LCJzaWduYXR1cmUiOnsiY29udGVudCI6Ik1FWUNJUUNWaTJGUFI3Mjl1aHAvY3JFdUNTOW9yQzRhMnV0OHN3dDdTUnZXYUVSTGp3SWhBSlM1dzU3MHhsQnJsM2Nhd1Y1akQ1dk85RGh1dkNrdCtzOXJLdGc2NzVKQSIsInB1YmxpY0tleSI6eyJjb250ZW50IjoiTFMwdExTMUNSVWRKVGlCUVZVSk1TVU1nUzBWWkxTMHRMUzBLVFVacmQwVjNXVWhMYjFwSmVtb3dRMEZSV1VsTGIxcEplbW93UkVGUlkwUlJaMEZGYjBVd1ExaE1SMlptTnpsbVVqaExlVkJ1VTNaUFdUYzBWVUpyZEFveWMweHBLMkZXUmxWNlV6RlJkM1EwZDI5emVFaG9ZMFJPTWtJMlVWTnpUR3gyWjNOSU9ESnhObkZqUVRaUVRESlRaRk12Y0RScVYwZEJQVDBLTFMwdExTMUZUa1FnVUZWQ1RFbERJRXRGV1MwdExTMHRDZz09In19fX0=",
"integratedTime": 1738752154,
"logIndex": 168898587,
"logID": "c0d23d6ad406973f9559f3ba2d1ca01f84147d8ffc5b8445c224f98b9591801d"
}
},
"RFC3161Timestamp": null
}
]

View file

@ -0,0 +1,18 @@
[
{
"Base64Signature": "MEQCICi2AOAJbS1k3334VMSo+qxaI4f5VoNnuVExZ4tfIu7rAiAiwuKdo8rGfFMGMLSFSQvoLF3JuwFy4JtNW6kQlwH7vg==",
"Payload": "Invalid base64 payload",
"Cert": null,
"Chain": null,
"Bundle": {
"SignedEntryTimestamp": "MEUCIEvx6NtFeAag9TplqMLjVczT/tC6lpKe9SnrxbehBlxfAiEA07BE3f5JsMLsUsmHD58D6GaZr2yz+yQ66Os2ps8oKz8=",
"Payload": {
"body": "eyJhcGlWZXJzaW9uIjoiMC4wLjEiLCJraW5kIjoiaGFzaGVkcmVrb3JkIiwic3BlYyI6eyJkYXRhIjp7Imhhc2giOnsiYWxnb3JpdGhtIjoic2hhMjU2IiwidmFsdWUiOiI4YmJmNGRiNjBmMmExM2IyNjI2NTI3MzljNWM5ZTYwNjNiMDYyNjVlODU1Zjc3MTdjMTdlYWY4YzViZTQyYWUyIn19LCJzaWduYXR1cmUiOnsiY29udGVudCI6Ik1FUUNJQ2kyQU9BSmJTMWszMzM0Vk1TbytxeGFJNGY1Vm9ObnVWRXhaNHRmSXU3ckFpQWl3dUtkbzhyR2ZGTUdNTFNGU1F2b0xGM0p1d0Z5NEp0Tlc2a1Fsd0g3dmc9PSIsInB1YmxpY0tleSI6eyJjb250ZW50IjoiTFMwdExTMUNSVWRKVGlCUVZVSk1TVU1nUzBWWkxTMHRMUzBLVFVacmQwVjNXVWhMYjFwSmVtb3dRMEZSV1VsTGIxcEplbW93UkVGUlkwUlJaMEZGYjBVd1ExaE1SMlptTnpsbVVqaExlVkJ1VTNaUFdUYzBWVUpyZEFveWMweHBLMkZXUmxWNlV6RlJkM1EwZDI5emVFaG9ZMFJPTWtJMlVWTnpUR3gyWjNOSU9ESnhObkZqUVRaUVRESlRaRk12Y0RScVYwZEJQVDBLTFMwdExTMUZUa1FnVUZWQ1RFbERJRXRGV1MwdExTMHRDZz09In19fX0=",
"integratedTime": 1738859497,
"logIndex": 169356501,
"logID": "c0d23d6ad406973f9559f3ba2d1ca01f84147d8ffc5b8445c224f98b9591801d"
}
},
"RFC3161Timestamp": null
}
]

View file

@ -0,0 +1,18 @@
[
{
"Base64Signature": "MEQCIDJxvB7lBU+VNYBD0xw/3Bi8wY7GPJ2fBP7mUFbguApoAiAIpuQT+sgatOY6yXkkA8K/sM40d5/gt7jQywWPbq5+iw==",
"Payload": "eyJjcml0aWNhbCI6eyJpZGVudGl0eSI6eyJkb2NrZXItcmVmZXJlbmNlIjoiZ2hjci5pby9hcHlyZ2lvL2RhbmdlcnpvbmUvZGFuZ2Vyem9uZSJ9LCJpbWFnZSI6eyJkb2NrZXItbWFuaWZlc3QtZGlnZXN0Ijoic2hhMjU2OjRkYTQ0MTIzNWU4NGU5MzUxODc3ODgyN2E1YzU3NDVkNTMyZDdhNDA3OTg4NmUxNjQ3OTI0YmVlN2VmMWMxNGQifSwidHlwZSI6ImNvc2lnbiBjb250YWluZXIgaW1hZ2Ugc2lnbmF0dXJlIn0sIm9wdGlvbmFsIjpudWxsfQ==",
"Cert": null,
"Chain": null,
"Bundle": {
"SignedEntryTimestamp": "Invalid signed entry timestamp",
"Payload": {
"body": "eyJhcGlWZXJzaW9uIjoiMC4wLjEiLCJraW5kIjoiaGFzaGVkcmVrb3JkIiwic3BlYyI6eyJkYXRhIjp7Imhhc2giOnsiYWxnb3JpdGhtIjoic2hhMjU2IiwidmFsdWUiOiIyMGE2ZDU1NTk4Y2U0NjU3NWZkZjViZGU3YzhhYWE2YTU2ZjZlMGRmOWNiYTY1MTJhMDAxODhjMTU1NGIzYjE3In19LCJzaWduYXR1cmUiOnsiY29udGVudCI6Ik1FUUNJREp4dkI3bEJVK1ZOWUJEMHh3LzNCaTh3WTdHUEoyZkJQN21VRmJndUFwb0FpQUlwdVFUK3NnYXRPWTZ5WGtrQThLL3NNNDBkNS9ndDdqUXl3V1BicTUraXc9PSIsInB1YmxpY0tleSI6eyJjb250ZW50IjoiTFMwdExTMUNSVWRKVGlCUVZVSk1TVU1nUzBWWkxTMHRMUzBLVFVacmQwVjNXVWhMYjFwSmVtb3dRMEZSV1VsTGIxcEplbW93UkVGUlkwUlJaMEZGYjBVd1ExaE1SMlptTnpsbVVqaExlVkJ1VTNaUFdUYzBWVUpyZEFveWMweHBLMkZXUmxWNlV6RlJkM1EwZDI5emVFaG9ZMFJPTWtJMlVWTnpUR3gyWjNOSU9ESnhObkZqUVRaUVRESlRaRk12Y0RScVYwZEJQVDBLTFMwdExTMUZUa1FnVUZWQ1RFbERJRXRGV1MwdExTMHRDZz09In19fX0=",
"integratedTime": 1738688492,
"logIndex": 168652066,
"logID": "c0d23d6ad406973f9559f3ba2d1ca01f84147d8ffc5b8445c224f98b9591801d"
}
},
"RFC3161Timestamp": null
}
]

View file

@ -0,0 +1,18 @@
[
{
"Base64Signature": "MEUCIQC2WlJH+B8VuX1c6i4sDwEGEZc53hXUD6/ds9TMJ3HrfwIgCxSnrNYRD2c8XENqfqc+Ik1gx0DK9kPNsn/Lt8V/dCo=",
"Payload": "eyJjcml0aWNhbCI6eyJpZGVudGl0eSI6eyJkb2NrZXItcmVmZXJlbmNlIjoiZ2hjci5pby9hbG1ldC9kYW5nZXJ6b25lL2RhbmdlcnpvbmUifSwiaW1hZ2UiOnsiZG9ja2VyLW1hbmlmZXN0LWRpZ2VzdCI6InNoYTI1Njo3YjIxZGJkZWJmZmVkODU1NjIxZGZjZGVhYTUyMjMwZGM2NTY2OTk3Zjg1MmVmNWQ2MmIwMzM4YjQ2Nzk2ZTAxIn0sInR5cGUiOiJjb3NpZ24gY29udGFpbmVyIGltYWdlIHNpZ25hdHVyZSJ9LCJvcHRpb25hbCI6bnVsbH0=",
"Cert": null,
"Chain": null,
"Bundle": {
"SignedEntryTimestamp": "MEYCIQDn04gOHqiZcwUO+NVV9+29+abu6O/k1ve9zatJ3gVu9QIhAJL3E+mqVPdMPfMSdhHt2XDQsYzfRDDJNJEABQlbV3Jg",
"Payload": {
"body": "Invalid bundle payload body",
"integratedTime": 1738862352,
"logIndex": 169369149,
"logID": "c0d23d6ad406973f9559f3ba2d1ca01f84147d8ffc5b8445c224f98b9591801d"
}
},
"RFC3161Timestamp": null
}
]

View file

@ -0,0 +1,18 @@
[
{
"Base64Signature": "MAIhAJWLYU9Hvb26Gn9ysS4JL2isLhra63yzC3tJG9ZoREuPAiEAlLnDnvTGUGuXdxrBXmMPm870OG68KS36z2sq2DrvkkAK",
"Payload": "eyJjcml0aWNhbCI6eyJpZGVudGl0eSI6eyJkb2NrZXItcmVmZXJlbmNlIjoiZ2hjci5pby9hbG1ldC9kYW5nZXJ6b25lL2RhbmdlcnpvbmUifSwiaW1hZ2UiOnsiZG9ja2VyLW1hbmlmZXN0LWRpZ2VzdCI6InNoYTI1NjoxOWU4ZWFjZDc1ODc5ZDA1ZjY2MjFjMmVhOGRkOTU1ZTY4ZWUzZTA3YjQxYjlkNTNmNGM4Y2M5OTI5YTY4YTY3In0sInR5cGUiOiJjb3NpZ24gY29udGFpbmVyIGltYWdlIHNpZ25hdHVyZSJ9LCJvcHRpb25hbCI6bnVsbH0=",
"Cert": null,
"Chain": null,
"Bundle": {
"SignedEntryTimestamp": "MEUCIC9oXH9VVP96frVOmDw704FBqMN/Bpm2RMdTm6BtSwL/AiEA6mCIjhV65fYuy4CwjsIzQHi/oW6IBwtd6oCvN2dI6HQ=",
"Payload": {
"body": "eyJhcGlWZXJzaW9uIjoiMC4wLjEiLCJraW5kIjoiaGFzaGVkcmVrb3JkIiwic3BlYyI6eyJkYXRhIjp7Imhhc2giOnsiYWxnb3JpdGhtIjoic2hhMjU2IiwidmFsdWUiOiJmMjEwNDJjY2RjOGU0ZjA1ZGEzNmE5ZjU4ODg5MmFlZGRlMzYzZTQ2ZWNjZGZjM2MyNzAyMTkwZDU0YTdmZmVlIn19LCJzaWduYXR1cmUiOnsiY29udGVudCI6Ik1FWUNJUUNWaTJGUFI3Mjl1aHAvY3JFdUNTOW9yQzRhMnV0OHN3dDdTUnZXYUVSTGp3SWhBSlM1dzU3MHhsQnJsM2Nhd1Y1akQ1dk85RGh1dkNrdCtzOXJLdGc2NzVKQSIsInB1YmxpY0tleSI6eyJjb250ZW50IjoiTFMwdExTMUNSVWRKVGlCUVZVSk1TVU1nUzBWWkxTMHRMUzBLVFVacmQwVjNXVWhMYjFwSmVtb3dRMEZSV1VsTGIxcEplbW93UkVGUlkwUlJaMEZGYjBVd1ExaE1SMlptTnpsbVVqaExlVkJ1VTNaUFdUYzBWVUpyZEFveWMweHBLMkZXUmxWNlV6RlJkM1EwZDI5emVFaG9ZMFJPTWtJMlVWTnpUR3gyWjNOSU9ESnhObkZqUVRaUVRESlRaRk12Y0RScVYwZEJQVDBLTFMwdExTMUZUa1FnVUZWQ1RFbERJRXRGV1MwdExTMHRDZz09In19fX0=",
"integratedTime": 1738752154,
"logIndex": 168898587,
"logID": "c0d23d6ad406973f9559f3ba2d1ca01f84147d8ffc5b8445c224f98b9591801d"
}
},
"RFC3161Timestamp": null
}
]

View file

@ -0,0 +1,18 @@
[
{
"Base64Signature": "MEQCICi2AOAJbS1k3334VMSo+qxaI4f5VoNnuVExZ4tfIu7rAiAiwuKdo8rGfFMGMLSFSQvoLF3JuwFy4JtNW6kQlwH7vg==",
"Payload": "eyJjcml0aWNhbCI6eyJpZGVudGl0eSI6eyJkb2NrZXItcmVmZXJlbmNlIjoiZ2hjci5pby9oNHh4MHIvZGFuZ2Vyem9uZS9kYW5nZXJ6b25lIn0sImltYWdlIjp7ImRvY2tlci1tYW5pZmVzdC1kaWdlc3QiOiJzaGEyNTY6MjIwYjUyMjAwZTNlNDdiMWI0MjAxMDY2N2ZjYWE5MzM4NjgxZTY0ZGQzZTM0YTM0ODczODY2Y2IwNTFkNjk0ZSJ9LCJ0eXBlIjoiY29zaWduIGNvbnRhaW5lciBpbWFnZSBzaWduYXR1cmUifSwib3B0aW9uYWwiOm51bGx9Cg==",
"Cert": null,
"Chain": null,
"Bundle": {
"SignedEntryTimestamp": "MEUCIEvx6NtFeAag9TplqMLjVczT/tC6lpKe9SnrxbehBlxfAiEA07BE3f5JsMLsUsmHD58D6GaZr2yz+yQ66Os2ps8oKz8=",
"Payload": {
"body": "eyJhcGlWZXJzaW9uIjoiNi42LjYiLCJraW5kIjoiaGFzaGVkcmVrb3JkIiwic3BlYyI6eyJkYXRhIjp7Imhhc2giOnsiYWxnb3JpdGhtIjoic2hhMjU2IiwidmFsdWUiOiI4YmJmNGRiNjBmMmExM2IyNjI2NTI3MzljNWM5ZTYwNjNiMDYyNjVlODU1Zjc3MTdjMTdlYWY4YzViZTQyYWUyIn19LCJzaWduYXR1cmUiOnsiY29udGVudCI6Ik1FUUNJQ2kyQU9BSmJTMWszMzM0Vk1TbytxeGFJNGY1Vm9ObnVWRXhaNHRmSXU3ckFpQWl3dUtkbzhyR2ZGTUdNTFNGU1F2b0xGM0p1d0Z5NEp0Tlc2a1Fsd0g3dmc9PSIsInB1YmxpY0tleSI6eyJjb250ZW50IjoiTFMwdExTMUNSVWRKVGlCUVZVSk1TVU1nUzBWWkxTMHRMUzBLVFVacmQwVjNXVWhMYjFwSmVtb3dRMEZSV1VsTGIxcEplbW93UkVGUlkwUlJaMEZGYjBVd1ExaE1SMlptTnpsbVVqaExlVkJ1VTNaUFdUYzBWVUpyZEFveWMweHBLMkZXUmxWNlV6RlJkM1EwZDI5emVFaG9ZMFJPTWtJMlVWTnpUR3gyWjNOSU9ESnhObkZqUVRaUVRESlRaRk12Y0RScVYwZEJQVDBLTFMwdExTMUZUa1FnVUZWQ1RFbERJRXRGV1MwdExTMHRDZz09In19fX0K",
"integratedTime": 1738859497,
"logIndex": 169356501,
"logID": "c0d23d6ad406973f9559f3ba2d1ca01f84147d8ffc5b8445c224f98b9591801d"
}
},
"RFC3161Timestamp": null
}
]

View file

@ -0,0 +1 @@
This folder contains signatures which have been tampered with. The goal is to have signatures that look valid, but actually aren't.

View file

@ -0,0 +1 @@
[{"Base64Signature": "MEYCIQCVi2FPR729uhp/crEuCS9orC4a2ut8swt7SRvWaERLjwIhAJS5w570xlBrl3cawV5jD5vO9DhuvCkt+s9rKtg675JA", "Payload": "eyJjcml0aWNhbCI6eyJpZGVudGl0eSI6eyJkb2NrZXItcmVmZXJlbmNlIjoiZ2hjci5pby9hbG1ldC9kYW5nZXJ6b25lL2RhbmdlcnpvbmUifSwiaW1hZ2UiOnsiZG9ja2VyLW1hbmlmZXN0LWRpZ2VzdCI6InNoYTI1NjoxOWU4ZWFjZDc1ODc5ZDA1ZjY2MjFjMmVhOGRkOTU1ZTY4ZWUzZTA3YjQxYjlkNTNmNGM4Y2M5OTI5YTY4YTY3In0sInR5cGUiOiJjb3NpZ24gY29udGFpbmVyIGltYWdlIHNpZ25hdHVyZSJ9LCJvcHRpb25hbCI6bnVsbH0=", "Cert": null, "Chain": null, "Bundle": {"SignedEntryTimestamp": "MEUCIC9oXH9VVP96frVOmDw704FBqMN/Bpm2RMdTm6BtSwL/AiEA6mCIjhV65fYuy4CwjsIzQHi/oW6IBwtd6oCvN2dI6HQ=", "Payload": {"body": "eyJhcGlWZXJzaW9uIjoiMC4wLjEiLCJraW5kIjoiaGFzaGVkcmVrb3JkIiwic3BlYyI6eyJkYXRhIjp7Imhhc2giOnsiYWxnb3JpdGhtIjoic2hhMjU2IiwidmFsdWUiOiJmMjEwNDJjY2RjOGU0ZjA1ZGEzNmE5ZjU4ODg5MmFlZGRlMzYzZTQ2ZWNjZGZjM2MyNzAyMTkwZDU0YTdmZmVlIn19LCJzaWduYXR1cmUiOnsiY29udGVudCI6Ik1FWUNJUUNWaTJGUFI3Mjl1aHAvY3JFdUNTOW9yQzRhMnV0OHN3dDdTUnZXYUVSTGp3SWhBSlM1dzU3MHhsQnJsM2Nhd1Y1akQ1dk85RGh1dkNrdCtzOXJLdGc2NzVKQSIsInB1YmxpY0tleSI6eyJjb250ZW50IjoiTFMwdExTMUNSVWRKVGlCUVZVSk1TVU1nUzBWWkxTMHRMUzBLVFVacmQwVjNXVWhMYjFwSmVtb3dRMEZSV1VsTGIxcEplbW93UkVGUlkwUlJaMEZGYjBVd1ExaE1SMlptTnpsbVVqaExlVkJ1VTNaUFdUYzBWVUpyZEFveWMweHBLMkZXUmxWNlV6RlJkM1EwZDI5emVFaG9ZMFJPTWtJMlVWTnpUR3gyWjNOSU9ESnhObkZqUVRaUVRESlRaRk12Y0RScVYwZEJQVDBLTFMwdExTMUZUa1FnVUZWQ1RFbERJRXRGV1MwdExTMHRDZz09In19fX0=", "integratedTime": 1738752154, "logIndex": 168898587, "logID": "c0d23d6ad406973f9559f3ba2d1ca01f84147d8ffc5b8445c224f98b9591801d"}}, "RFC3161Timestamp": null}]

View file

@ -0,0 +1 @@
[{"Base64Signature": "MEQCICi2AOAJbS1k3334VMSo+qxaI4f5VoNnuVExZ4tfIu7rAiAiwuKdo8rGfFMGMLSFSQvoLF3JuwFy4JtNW6kQlwH7vg==", "Payload": "eyJjcml0aWNhbCI6eyJpZGVudGl0eSI6eyJkb2NrZXItcmVmZXJlbmNlIjoiZ2hjci5pby9hbG1ldC9kYW5nZXJ6b25lL2RhbmdlcnpvbmUifSwiaW1hZ2UiOnsiZG9ja2VyLW1hbmlmZXN0LWRpZ2VzdCI6InNoYTI1NjoyMjBiNTIyMDBlM2U0N2IxYjQyMDEwNjY3ZmNhYTkzMzg2ODFlNjRkZDNlMzRhMzQ4NzM4NjZjYjA1MWQ2OTRlIn0sInR5cGUiOiJjb3NpZ24gY29udGFpbmVyIGltYWdlIHNpZ25hdHVyZSJ9LCJvcHRpb25hbCI6bnVsbH0=", "Cert": null, "Chain": null, "Bundle": {"SignedEntryTimestamp": "MEUCIEvx6NtFeAag9TplqMLjVczT/tC6lpKe9SnrxbehBlxfAiEA07BE3f5JsMLsUsmHD58D6GaZr2yz+yQ66Os2ps8oKz8=", "Payload": {"body": "eyJhcGlWZXJzaW9uIjoiMC4wLjEiLCJraW5kIjoiaGFzaGVkcmVrb3JkIiwic3BlYyI6eyJkYXRhIjp7Imhhc2giOnsiYWxnb3JpdGhtIjoic2hhMjU2IiwidmFsdWUiOiI4YmJmNGRiNjBmMmExM2IyNjI2NTI3MzljNWM5ZTYwNjNiMDYyNjVlODU1Zjc3MTdjMTdlYWY4YzViZTQyYWUyIn19LCJzaWduYXR1cmUiOnsiY29udGVudCI6Ik1FUUNJQ2kyQU9BSmJTMWszMzM0Vk1TbytxeGFJNGY1Vm9ObnVWRXhaNHRmSXU3ckFpQWl3dUtkbzhyR2ZGTUdNTFNGU1F2b0xGM0p1d0Z5NEp0Tlc2a1Fsd0g3dmc9PSIsInB1YmxpY0tleSI6eyJjb250ZW50IjoiTFMwdExTMUNSVWRKVGlCUVZVSk1TVU1nUzBWWkxTMHRMUzBLVFVacmQwVjNXVWhMYjFwSmVtb3dRMEZSV1VsTGIxcEplbW93UkVGUlkwUlJaMEZGYjBVd1ExaE1SMlptTnpsbVVqaExlVkJ1VTNaUFdUYzBWVUpyZEFveWMweHBLMkZXUmxWNlV6RlJkM1EwZDI5emVFaG9ZMFJPTWtJMlVWTnpUR3gyWjNOSU9ESnhObkZqUVRaUVRESlRaRk12Y0RScVYwZEJQVDBLTFMwdExTMUZUa1FnVUZWQ1RFbERJRXRGV1MwdExTMHRDZz09In19fX0=", "integratedTime": 1738859497, "logIndex": 169356501, "logID": "c0d23d6ad406973f9559f3ba2d1ca01f84147d8ffc5b8445c224f98b9591801d"}}, "RFC3161Timestamp": null}]

View file

@ -0,0 +1 @@
[{"Base64Signature": "MEQCIDJxvB7lBU+VNYBD0xw/3Bi8wY7GPJ2fBP7mUFbguApoAiAIpuQT+sgatOY6yXkkA8K/sM40d5/gt7jQywWPbq5+iw==", "Payload": "eyJjcml0aWNhbCI6eyJpZGVudGl0eSI6eyJkb2NrZXItcmVmZXJlbmNlIjoiZ2hjci5pby9hcHlyZ2lvL2RhbmdlcnpvbmUvZGFuZ2Vyem9uZSJ9LCJpbWFnZSI6eyJkb2NrZXItbWFuaWZlc3QtZGlnZXN0Ijoic2hhMjU2OjRkYTQ0MTIzNWU4NGU5MzUxODc3ODgyN2E1YzU3NDVkNTMyZDdhNDA3OTg4NmUxNjQ3OTI0YmVlN2VmMWMxNGQifSwidHlwZSI6ImNvc2lnbiBjb250YWluZXIgaW1hZ2Ugc2lnbmF0dXJlIn0sIm9wdGlvbmFsIjpudWxsfQ==", "Cert": null, "Chain": null, "Bundle": {"SignedEntryTimestamp": "MEYCIQDuuuHoyZ2i4HKxik4Ju/MWkELwc1w5SfzcpCV7G+vZHAIhAO25R/+lIfQ/kMfC4PfeoWDwLpvnH9cq6dVSzl12i1su", "Payload": {"body": "eyJhcGlWZXJzaW9uIjoiMC4wLjEiLCJraW5kIjoiaGFzaGVkcmVrb3JkIiwic3BlYyI6eyJkYXRhIjp7Imhhc2giOnsiYWxnb3JpdGhtIjoic2hhMjU2IiwidmFsdWUiOiIyMGE2ZDU1NTk4Y2U0NjU3NWZkZjViZGU3YzhhYWE2YTU2ZjZlMGRmOWNiYTY1MTJhMDAxODhjMTU1NGIzYjE3In19LCJzaWduYXR1cmUiOnsiY29udGVudCI6Ik1FUUNJREp4dkI3bEJVK1ZOWUJEMHh3LzNCaTh3WTdHUEoyZkJQN21VRmJndUFwb0FpQUlwdVFUK3NnYXRPWTZ5WGtrQThLL3NNNDBkNS9ndDdqUXl3V1BicTUraXc9PSIsInB1YmxpY0tleSI6eyJjb250ZW50IjoiTFMwdExTMUNSVWRKVGlCUVZVSk1TVU1nUzBWWkxTMHRMUzBLVFVacmQwVjNXVWhMYjFwSmVtb3dRMEZSV1VsTGIxcEplbW93UkVGUlkwUlJaMEZGYjBVd1ExaE1SMlptTnpsbVVqaExlVkJ1VTNaUFdUYzBWVUpyZEFveWMweHBLMkZXUmxWNlV6RlJkM1EwZDI5emVFaG9ZMFJPTWtJMlVWTnpUR3gyWjNOSU9ESnhObkZqUVRaUVRESlRaRk12Y0RScVYwZEJQVDBLTFMwdExTMUZUa1FnVUZWQ1RFbERJRXRGV1MwdExTMHRDZz09In19fX0=", "integratedTime": 1738688492, "logIndex": 168652066, "logID": "c0d23d6ad406973f9559f3ba2d1ca01f84147d8ffc5b8445c224f98b9591801d"}}, "RFC3161Timestamp": null}]

View file

@ -0,0 +1 @@
[{"Base64Signature": "MEUCIQC2WlJH+B8VuX1c6i4sDwEGEZc53hXUD6/ds9TMJ3HrfwIgCxSnrNYRD2c8XENqfqc+Ik1gx0DK9kPNsn/Lt8V/dCo=", "Payload": "eyJjcml0aWNhbCI6eyJpZGVudGl0eSI6eyJkb2NrZXItcmVmZXJlbmNlIjoiZ2hjci5pby9hbG1ldC9kYW5nZXJ6b25lL2RhbmdlcnpvbmUifSwiaW1hZ2UiOnsiZG9ja2VyLW1hbmlmZXN0LWRpZ2VzdCI6InNoYTI1Njo3YjIxZGJkZWJmZmVkODU1NjIxZGZjZGVhYTUyMjMwZGM2NTY2OTk3Zjg1MmVmNWQ2MmIwMzM4YjQ2Nzk2ZTAxIn0sInR5cGUiOiJjb3NpZ24gY29udGFpbmVyIGltYWdlIHNpZ25hdHVyZSJ9LCJvcHRpb25hbCI6bnVsbH0=", "Cert": null, "Chain": null, "Bundle": {"SignedEntryTimestamp": "MEYCIQDn04gOHqiZcwUO+NVV9+29+abu6O/k1ve9zatJ3gVu9QIhAJL3E+mqVPdMPfMSdhHt2XDQsYzfRDDJNJEABQlbV3Jg", "Payload": {"body": "eyJhcGlWZXJzaW9uIjoiMC4wLjEiLCJraW5kIjoiaGFzaGVkcmVrb3JkIiwic3BlYyI6eyJkYXRhIjp7Imhhc2giOnsiYWxnb3JpdGhtIjoic2hhMjU2IiwidmFsdWUiOiIzZWQwNWJlYTc2ZWFmMzBmYWM1NzBlNzhlODBlZmQxNDNiZWQxNzFjM2VjMDY5MWI2MDU3YjdhMDAzNGEyMzhlIn19LCJzaWduYXR1cmUiOnsiY29udGVudCI6Ik1FVUNJUUMyV2xKSCtCOFZ1WDFjNmk0c0R3RUdFWmM1M2hYVUQ2L2RzOVRNSjNIcmZ3SWdDeFNuck5ZUkQyYzhYRU5xZnFjK0lrMWd4MERLOWtQTnNuL0x0OFYvZENvPSIsInB1YmxpY0tleSI6eyJjb250ZW50IjoiTFMwdExTMUNSVWRKVGlCUVZVSk1TVU1nUzBWWkxTMHRMUzBLVFVacmQwVjNXVWhMYjFwSmVtb3dRMEZSV1VsTGIxcEplbW93UkVGUlkwUlJaMEZGYjBVd1ExaE1SMlptTnpsbVVqaExlVkJ1VTNaUFdUYzBWVUpyZEFveWMweHBLMkZXUmxWNlV6RlJkM1EwZDI5emVFaG9ZMFJPTWtJMlVWTnpUR3gyWjNOSU9ESnhObkZqUVRaUVRESlRaRk12Y0RScVYwZEJQVDBLTFMwdExTMUZUa1FnVUZWQ1RFbERJRXRGV1MwdExTMHRDZz09In19fX0=", "integratedTime": 1738862352, "logIndex": 169369149, "logID": "c0d23d6ad406973f9559f3ba2d1ca01f84147d8ffc5b8445c224f98b9591801d"}}, "RFC3161Timestamp": null}]

View file

@ -0,0 +1 @@
[{"Base64Signature": "MEQCIHqXEMuAmt1pFCsHC71+ejlG5kjKrf1+AQW202OY3vhsAiA0BoDAVgAk9K7SgIRBpIV6u0veyB1iypzV0DteNh3IoQ==", "Payload": "eyJjcml0aWNhbCI6eyJpZGVudGl0eSI6eyJkb2NrZXItcmVmZXJlbmNlIjoiZ2hjci5pby9hbG1ldC9kYW5nZXJ6b25lL2RhbmdlcnpvbmUifSwiaW1hZ2UiOnsiZG9ja2VyLW1hbmlmZXN0LWRpZ2VzdCI6InNoYTI1NjpmYTk0ODcyNmFhYzI5YTZhYzQ5ZjAxZWM4ZmJiYWMxODUyMmIzNWIyNDkxZmRmNzE2MjM2YTBiMzUwMmEyY2E3In0sInR5cGUiOiJjb3NpZ24gY29udGFpbmVyIGltYWdlIHNpZ25hdHVyZSJ9LCJvcHRpb25hbCI6bnVsbH0=", "Cert": null, "Chain": null, "Bundle": {"SignedEntryTimestamp": "MEUCIQCrZ+2SSYdpIOEbyUXXaBxeqT8RTujpqdXipls9hmNvDgIgdWV84PiCY2cI49QjHjun7lj25/znGMDiwjCuPjIPA6Q=", "Payload": {"body": "eyJhcGlWZXJzaW9uIjoiMC4wLjEiLCJraW5kIjoiaGFzaGVkcmVrb3JkIiwic3BlYyI6eyJkYXRhIjp7Imhhc2giOnsiYWxnb3JpdGhtIjoic2hhMjU2IiwidmFsdWUiOiI5ZjcwM2I4NTM4MjM4N2U2OTgwNzYxNDg1YzU0NGIzNmJmMThmNTA5ODQwMTMxYzRmOTJhMjE4OTI3MTJmNDJmIn19LCJzaWduYXR1cmUiOnsiY29udGVudCI6Ik1FUUNJSHFYRU11QW10MXBGQ3NIQzcxK2VqbEc1a2pLcmYxK0FRVzIwMk9ZM3Zoc0FpQTBCb0RBVmdBazlLN1NnSVJCcElWNnUwdmV5QjFpeXB6VjBEdGVOaDNJb1E9PSIsInB1YmxpY0tleSI6eyJjb250ZW50IjoiTFMwdExTMUNSVWRKVGlCUVZVSk1TVU1nUzBWWkxTMHRMUzBLVFVacmQwVjNXVWhMYjFwSmVtb3dRMEZSV1VsTGIxcEplbW93UkVGUlkwUlJaMEZGYjBVd1ExaE1SMlptTnpsbVVqaExlVkJ1VTNaUFdUYzBWVUpyZEFveWMweHBLMkZXUmxWNlV6RlJkM1EwZDI5emVFaG9ZMFJPTWtJMlVWTnpUR3gyWjNOSU9ESnhObkZqUVRaUVRESlRaRk12Y0RScVYwZEJQVDBLTFMwdExTMUZUa1FnVUZWQ1RFbERJRXRGV1MwdExTMHRDZz09In19fX0=", "integratedTime": 1737478056, "logIndex": 164177381, "logID": "c0d23d6ad406973f9559f3ba2d1ca01f84147d8ffc5b8445c224f98b9591801d"}}, "RFC3161Timestamp": null}, {"Base64Signature": "MEYCIQDg8MeymBLOn+Khue0yK1yQy4Fu/+GXmyC/xezXO/p1JgIhAN6QLojKzkZGxyYirbqRbZCVcIM4YN3Y18FXwpW4RuUy", "Payload": "eyJjcml0aWNhbCI6eyJpZGVudGl0eSI6eyJkb2NrZXItcmVmZXJlbmNlIjoiZ2hjci5pby9hbG1ldC9kYW5nZXJ6b25lL2RhbmdlcnpvbmUifSwiaW1hZ2UiOnsiZG9ja2VyLW1hbmlmZXN0LWRpZ2VzdCI6InNoYTI1NjpmYTk0ODcyNmFhYzI5YTZhYzQ5ZjAxZWM4ZmJiYWMxODUyMmIzNWIyNDkxZmRmNzE2MjM2YTBiMzUwMmEyY2E3In0sInR5cGUiOiJjb3NpZ24gY29udGFpbmVyIGltYWdlIHNpZ25hdHVyZSJ9LCJvcHRpb25hbCI6bnVsbH0=", "Cert": null, "Chain": null, "Bundle": {"SignedEntryTimestamp": "MEUCIQCQLlrH2xo/bA6r386vOwA0OjUe0TqcxROT/Wo220jvGgIgPgRlKnQxWoXlD/Owf1Ogk5XlfXAt2f416LDbk4AoEvk=", "Payload": {"body": "eyJhcGlWZXJzaW9uIjoiMC4wLjEiLCJraW5kIjoiaGFzaGVkcmVrb3JkIiwic3BlYyI6eyJkYXRhIjp7Imhhc2giOnsiYWxnb3JpdGhtIjoic2hhMjU2IiwidmFsdWUiOiI5ZjcwM2I4NTM4MjM4N2U2OTgwNzYxNDg1YzU0NGIzNmJmMThmNTA5ODQwMTMxYzRmOTJhMjE4OTI3MTJmNDJmIn19LCJzaWduYXR1cmUiOnsiY29udGVudCI6Ik1FWUNJUURnOE1leW1CTE9uK0todWUweUsxeVF5NEZ1LytHWG15Qy94ZXpYTy9wMUpnSWhBTjZRTG9qS3prWkd4eVlpcmJxUmJaQ1ZjSU00WU4zWTE4Rlh3cFc0UnVVeSIsInB1YmxpY0tleSI6eyJjb250ZW50IjoiTFMwdExTMUNSVWRKVGlCUVZVSk1TVU1nUzBWWkxTMHRMUzBLVFVacmQwVjNXVWhMYjFwSmVtb3dRMEZSV1VsTGIxcEplbW93UkVGUlkwUlJaMEZGYjBVd1ExaE1SMlptTnpsbVVqaExlVkJ1VTNaUFdUYzBWVUpyZEFveWMweHBLMkZXUmxWNlV6RlJkM1EwZDI5emVFaG9ZMFJPTWtJMlVWTnpUR3gyWjNOSU9ESnhObkZqUVRaUVRESlRaRk12Y0RScVYwZEJQVDBLTFMwdExTMUZUa1FnVUZWQ1RFbERJRXRGV1MwdExTMHRDZz09In19fX0=", "integratedTime": 1737557525, "logIndex": 164445483, "logID": "c0d23d6ad406973f9559f3ba2d1ca01f84147d8ffc5b8445c224f98b9591801d"}}, "RFC3161Timestamp": null}, {"Base64Signature": "MEQCIEhUVYVW6EdovGDSSZt1Ffc86OfzEKAas94M4eFK7hoFAiA4+6219LktmgJSKuc2ObsnL5QjHyNLk58BwY0s8gBHbQ==", "Payload": 
"eyJjcml0aWNhbCI6eyJpZGVudGl0eSI6eyJkb2NrZXItcmVmZXJlbmNlIjoiZ2hjci5pby9hbG1ldC9kYW5nZXJ6b25lL2RhbmdlcnpvbmUifSwiaW1hZ2UiOnsiZG9ja2VyLW1hbmlmZXN0LWRpZ2VzdCI6InNoYTI1NjpmYTk0ODcyNmFhYzI5YTZhYzQ5ZjAxZWM4ZmJiYWMxODUyMmIzNWIyNDkxZmRmNzE2MjM2YTBiMzUwMmEyY2E3In0sInR5cGUiOiJjb3NpZ24gY29udGFpbmVyIGltYWdlIHNpZ25hdHVyZSJ9LCJvcHRpb25hbCI6bnVsbH0=", "Cert": null, "Chain": null, "Bundle": {"SignedEntryTimestamp": "MEQCIDRUTMwL+/eW79ARRLE8h/ByCrvo0rOn3vUYQg1E6KIBAiBi/bzoqcL2Ik27KpwfFosww4l7yI+9IqwCvUlkQgEB7g==", "Payload": {"body": "eyJhcGlWZXJzaW9uIjoiMC4wLjEiLCJraW5kIjoiaGFzaGVkcmVrb3JkIiwic3BlYyI6eyJkYXRhIjp7Imhhc2giOnsiYWxnb3JpdGhtIjoic2hhMjU2IiwidmFsdWUiOiI5ZjcwM2I4NTM4MjM4N2U2OTgwNzYxNDg1YzU0NGIzNmJmMThmNTA5ODQwMTMxYzRmOTJhMjE4OTI3MTJmNDJmIn19LCJzaWduYXR1cmUiOnsiY29udGVudCI6Ik1FUUNJRWhVVllWVzZFZG92R0RTU1p0MUZmYzg2T2Z6RUtBYXM5NE00ZUZLN2hvRkFpQTQrNjIxOUxrdG1nSlNLdWMyT2Jzbkw1UWpIeU5MazU4QndZMHM4Z0JIYlE9PSIsInB1YmxpY0tleSI6eyJjb250ZW50IjoiTFMwdExTMUNSVWRKVGlCUVZVSk1TVU1nUzBWWkxTMHRMUzBLVFVacmQwVjNXVWhMYjFwSmVtb3dRMEZSV1VsTGIxcEplbW93UkVGUlkwUlJaMEZGYjBVd1ExaE1SMlptTnpsbVVqaExlVkJ1VTNaUFdUYzBWVUpyZEFveWMweHBLMkZXUmxWNlV6RlJkM1EwZDI5emVFaG9ZMFJPTWtJMlVWTnpUR3gyWjNOSU9ESnhObkZqUVRaUVRESlRaRk12Y0RScVYwZEJQVDBLTFMwdExTMUZUa1FnVUZWQ1RFbERJRXRGV1MwdExTMHRDZz09In19fX0=", "integratedTime": 1737567664, "logIndex": 164484602, "logID": "c0d23d6ad406973f9559f3ba2d1ca01f84147d8ffc5b8445c224f98b9591801d"}}, "RFC3161Timestamp": null}]

View file

@ -0,0 +1,4 @@
-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEoE0CXLGff79fR8KyPnSvOY74UBkt
2sLi+aVFUzS1Qwt4wosxHhcDN2B6QSsLlvgsH82q6qcA6PL2SdS/p4jWGA==
-----END PUBLIC KEY-----

View file

@ -13,6 +13,7 @@ from dangerzone.gui import Application
sys.dangerzone_dev = True # type: ignore[attr-defined]
# Use this fixture to make `pytest-qt` invoke our custom QApplication.
# See https://pytest-qt.readthedocs.io/en/latest/qapplication.html#testing-custom-qapplications
@pytest.fixture(scope="session")
@ -122,7 +123,7 @@ test_docs_compressed_dir = Path(__file__).parent.joinpath(SAMPLE_COMPRESSED_DIRE
test_docs = [
p
for p in test_docs_dir.glob("*")
if p.is_file()
and not (p.name.endswith(SAFE_EXTENSION) or p.name.startswith("sample_bad"))
]
@ -133,6 +134,7 @@ for_each_doc = pytest.mark.parametrize(
)
# External Docs - base64 docs encoded for externally sourced documents
# XXX to reduce the chance of accidentally opening them
test_docs_external_dir = Path(__file__).parent.joinpath(SAMPLE_EXTERNAL_DIRECTORY)
@ -160,3 +162,31 @@ def for_each_external_doc(glob_pattern: str = "*") -> Callable:
class TestBase:
sample_doc = str(test_docs_dir.joinpath(BASIC_SAMPLE_PDF))
def pytest_configure(config: pytest.Config) -> None:
config.addinivalue_line(
"markers",
"reference_generator: Used to mark the test cases that regenerate reference documents",
)
def pytest_addoption(parser: pytest.Parser) -> None:
parser.addoption(
"--generate-reference-pdfs",
action="store_true",
default=False,
help="Regenerate reference PDFs",
)
def pytest_collection_modifyitems(
config: pytest.Config, items: List[pytest.Item]
) -> None:
if not config.getoption("--generate-reference-pdfs"):
skip_generator = pytest.mark.skip(
reason="Only run when --generate-reference-pdfs is provided"
)
for item in items:
if "reference_generator" in item.keywords:
item.add_marker(skip_generator)
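
# Example invocation (a sketch): run only the reference-generator tests, with
# the flag that un-skips them:
#
#     pytest --generate-reference-pdfs -m reference_generator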

View file

@ -21,34 +21,25 @@ def get_qt_app() -> Application:
def generate_isolated_updater(
tmp_path: Path,
mocker: MockerFixture,
mock_app: bool = False,
) -> UpdaterThread:
"""Generate an Updater class with its own settings."""
app = mocker.MagicMock() if mock_app else get_qt_app()
dummy = Dummy()
mocker.patch("dangerzone.settings.get_config_dir", return_value=tmp_path)
dangerzone = DangerzoneGui(app, isolation_provider=dummy)
updater = UpdaterThread(dangerzone)
return updater
@pytest.fixture
def updater(tmp_path: Path, mocker: MockerFixture) -> UpdaterThread:
    return generate_isolated_updater(tmp_path, mocker, mock_app=True)
@pytest.fixture
def qt_updater(tmp_path: Path, mocker: MockerFixture) -> UpdaterThread:
    return generate_isolated_updater(tmp_path, mocker, mock_app=False)

View file

@ -24,9 +24,10 @@ from dangerzone.gui.main_window import (
QtGui,
WaitingWidgetContainer,
)
from dangerzone.gui.updater import UpdaterThread
from dangerzone.isolation_provider.container import Container
from dangerzone.isolation_provider.dummy import Dummy
from dangerzone.updater import releases
from .test_updater import assert_report_equal, default_updater_settings
@ -96,7 +97,7 @@ def test_default_menu(
updater: UpdaterThread,
) -> None:
"""Check that the default menu entries are in order."""
updater.dangerzone.settings.set("updater_check_all", True)
window = MainWindow(updater.dangerzone)
menu_actions = window.hamburger_button.menu().actions()
@ -114,7 +115,7 @@ def test_default_menu(
toggle_updates_action.trigger()
assert not toggle_updates_action.isChecked()
assert updater.dangerzone.settings.get("updater_check_all") is False
def test_no_update(
@ -127,12 +128,12 @@ def test_no_update(
# Check that when no update is detected, e.g., due to update cooldown, an empty
# report is received that does not affect the menu entries.
curtime = int(time.time())
updater.dangerzone.settings.set("updater_check_all", True)
updater.dangerzone.settings.set("updater_errors", 9)
updater.dangerzone.settings.set("updater_last_check", curtime)
expected_settings = default_updater_settings()
expected_settings["updater_check_all"] = True
expected_settings["updater_errors"] = 0 # errors must be cleared
expected_settings["updater_last_check"] = curtime
@ -147,7 +148,7 @@ def test_no_update(
# Check that the callback function gets an empty report.
handle_updates_spy.assert_called_once()
assert_report_equal(handle_updates_spy.call_args.args[0], releases.UpdateReport())
# Check that the menu entries remain exactly the same.
menu_actions_after = window.hamburger_button.menu().actions()
@ -165,14 +166,14 @@ def test_update_detected(
) -> None:
"""Test that a newly detected version leads to a notification to the user."""
qt_updater.dangerzone.settings.set("updater_check_all", True)
qt_updater.dangerzone.settings.set("updater_last_check", 0)
qt_updater.dangerzone.settings.set("updater_errors", 9)
# Make requests.get().json() return the following dictionary.
mock_upstream_info = {"tag_name": "99.9.9", "body": "changelog"}
mocker.patch("dangerzone.updater.releases.requests.get")
requests_mock = releases.requests.get
requests_mock().status_code = 200 # type: ignore [call-arg]
requests_mock().json.return_value = mock_upstream_info # type: ignore [attr-defined, call-arg]
@ -191,12 +192,13 @@ def test_update_detected(
# Check that the callback function gets an update report.
handle_updates_spy.assert_called_once()
assert_report_equal(
handle_updates_spy.call_args.args[0],
releases.UpdateReport("99.9.9", "<p>changelog</p>"),
)
# Check that the settings have been updated properly.
expected_settings = default_updater_settings()
expected_settings["updater_check_all"] = True
expected_settings["updater_last_check"] = qt_updater.dangerzone.settings.get(
"updater_last_check"
)
@ -277,13 +279,13 @@ def test_update_error(
) -> None:
"""Test that an error during an update check leads to a notification to the user."""
# Test 1 - Check that the first error does not notify the user.
qt_updater.dangerzone.settings.set("updater_check_all", True)
qt_updater.dangerzone.settings.set("updater_last_check", 0)
qt_updater.dangerzone.settings.set("updater_errors", 0)
# Make requests.get() return an error
mocker.patch("dangerzone.updater.releases.requests.get")
requests_mock = releases.requests.get
requests_mock.side_effect = Exception("failed") # type: ignore [attr-defined]
window = MainWindow(qt_updater.dangerzone)
@ -304,7 +306,7 @@ def test_update_error(
# Check that the settings have been updated properly.
expected_settings = default_updater_settings()
expected_settings["updater_check_all"] = True
expected_settings["updater_last_check"] = qt_updater.dangerzone.settings.get(
"updater_last_check"
)

View file

@ -12,7 +12,9 @@ from pytestqt.qtbot import QtBot
from dangerzone import settings
from dangerzone.gui import updater as updater_module
from dangerzone.gui.updater import UpdaterThread
from dangerzone.updater import releases
from dangerzone.updater.releases import UpdateReport
from dangerzone.util import get_version
from ..test_settings import default_settings_0_4_1, save_settings
@ -48,9 +50,7 @@ def test_default_updater_settings(updater: UpdaterThread) -> None:
)
def test_pre_0_4_2_settings(tmp_path: Path, mocker: MockerFixture) -> None:
"""Check settings of installations prior to 0.4.2.
Check that installations that have been upgraded from a version < 0.4.2 to >= 0.4.2
@ -58,7 +58,7 @@ def test_pre_0_4_2_settings(
in their settings.json file.
"""
save_settings(tmp_path, default_settings_0_4_1())
updater = generate_isolated_updater(tmp_path, mocker, mock_app=True)
assert (
updater.dangerzone.settings.get_updater_settings() == default_updater_settings()
)
@ -83,12 +83,10 @@ def test_post_0_4_2_settings(
# version is 0.4.3.
expected_settings = default_updater_settings()
expected_settings["updater_latest_version"] = "0.4.3"
monkeypatch.setattr(settings, "get_version", lambda: "0.4.3")
# Ensure that the Settings class will correct the latest version field to 0.4.3.
updater = generate_isolated_updater(tmp_path, mocker, mock_app=True)
assert updater.dangerzone.settings.get_updater_settings() == expected_settings
# Simulate an updater check that found a newer Dangerzone version (e.g., 0.4.4).
@@ -108,7 +106,7 @@ def test_post_0_4_2_settings(
def test_linux_no_check(updater: UpdaterThread, monkeypatch: MonkeyPatch) -> None:
"""Ensure that Dangerzone on Linux does not make any update check."""
expected_settings = default_updater_settings()
expected_settings["updater_check"] = False
expected_settings["updater_check_all"] = False
expected_settings["updater_last_check"] = None
# XXX: Simulate Dangerzone installed via package manager.
@@ -118,19 +116,18 @@ def test_linux_no_check(updater: UpdaterThread, monkeypatch: MonkeyPatch) -> None:
assert updater.dangerzone.settings.get_updater_settings() == expected_settings
def test_user_prompts(
updater: UpdaterThread, monkeypatch: MonkeyPatch, mocker: MockerFixture
) -> None:
def test_user_prompts(updater: UpdaterThread, mocker: MockerFixture) -> None:
"""Test prompting users to ask them if they want to enable update checks."""
settings = updater.dangerzone.settings
# First run
#
# When Dangerzone runs for the first time, users should not be asked to enable
# updates.
expected_settings = default_updater_settings()
expected_settings["updater_check"] = None
expected_settings["updater_check_all"] = None
expected_settings["updater_last_check"] = 0
assert updater.should_check_for_updates() is False
assert updater.dangerzone.settings.get_updater_settings() == expected_settings
assert settings.get_updater_settings() == expected_settings
# Second run
#
@@ -142,16 +139,16 @@ def test_user_prompts(
# Check disabling update checks.
prompt_mock().launch.return_value = False # type: ignore [attr-defined]
expected_settings["updater_check"] = False
expected_settings["updater_check_all"] = False
assert updater.should_check_for_updates() is False
assert updater.dangerzone.settings.get_updater_settings() == expected_settings
assert settings.get_updater_settings() == expected_settings
# Reset the "updater_check" field and check enabling update checks.
updater.dangerzone.settings.set("updater_check", None)
# Reset the "updater_check_all" field and check enabling update checks.
settings.set("updater_check_all", None)
prompt_mock().launch.return_value = True # type: ignore [attr-defined]
expected_settings["updater_check"] = True
expected_settings["updater_check_all"] = True
assert updater.should_check_for_updates() is True
assert updater.dangerzone.settings.get_updater_settings() == expected_settings
assert settings.get_updater_settings() == expected_settings
# Third run
#
@@ -159,7 +156,7 @@ def test_user_prompts(
# checks.
prompt_mock().side_effect = RuntimeError("Should not be called") # type: ignore [attr-defined]
for check in [True, False]:
updater.dangerzone.settings.set("updater_check", check)
settings.set("updater_check_all", check)
assert updater.should_check_for_updates() == check
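Taken together, the prompt tests above pin down a tri-state `updater_check_all` setting: `True` and `False` are honored as-is, `None` means the user has not decided yet, and only a dismissed dialog leaves the setting at `None`. A minimal sketch of that decision flow follows; the `_is_first_run` and `_prompt_user` helpers are hypothetical stand-ins, not part of the actual `UpdaterThread` API:

```python
# A minimal sketch of the tri-state logic the tests above encode. The helpers
# _is_first_run() and _prompt_user() are hypothetical stand-ins for the real
# first-run detection and the Qt dialog.
def should_check_for_updates(self) -> bool:
    check = self.dangerzone.settings.get("updater_check_all")
    if check is not None:
        return check  # the user already made a decision; honor it
    if self._is_first_run():
        return False  # first run: never prompt, never check
    decision = self._prompt_user()  # True (enable), False (disable), None ("X")
    if decision is not None:
        # Dismissing the dialog stores nothing, so the user is asked again later.
        self.dangerzone.settings.set("updater_check_all", decision)
    return bool(decision)
```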
@@ -167,43 +164,44 @@ def test_update_checks(
updater: UpdaterThread, monkeypatch: MonkeyPatch, mocker: MockerFixture
) -> None:
"""Test version update checks."""
settings = updater.dangerzone.settings
# This dictionary will simulate GitHub's response.
mock_upstream_info = {"tag_name": f"v{get_version()}", "body": "changelog"}
# Make requests.get().json() return the above dictionary.
mocker.patch("dangerzone.gui.updater.requests.get")
requests_mock = updater_module.requests.get
mocker.patch("dangerzone.updater.releases.requests.get")
requests_mock = updater_module.releases.requests.get
requests_mock().status_code = 200 # type: ignore [call-arg]
requests_mock().json.return_value = mock_upstream_info # type: ignore [attr-defined, call-arg]
# Always assume that we can perform multiple update checks in a row.
monkeypatch.setattr(updater, "_should_postpone_update_check", lambda: False)
mocker.patch(
"dangerzone.updater.releases._should_postpone_update_check", return_value=False
)
# Test 1 - Check that the current version triggers no updates.
report = updater.check_for_updates()
report = releases.check_for_updates(settings)
assert_report_equal(report, UpdateReport())
# Test 2 - Check that a newer version triggers updates, and that the changelog is
# rendered from Markdown to HTML.
mock_upstream_info["tag_name"] = "v99.9.9"
report = updater.check_for_updates()
report = releases.check_for_updates(settings)
assert_report_equal(
report, UpdateReport(version="99.9.9", changelog="<p>changelog</p>")
)
# Test 3 - Check that HTTP errors are converted to error reports.
requests_mock.side_effect = Exception("failed") # type: ignore [attr-defined]
report = updater.check_for_updates()
error_msg = (
f"Encountered an exception while checking {updater.GH_RELEASE_URL}: failed"
)
report = releases.check_for_updates(settings)
error_msg = f"Encountered an exception while checking {updater_module.releases.GH_RELEASE_URL}: failed"
assert_report_equal(report, UpdateReport(error=error_msg))
# Test 4 - Check that cached version/changelog info does not trigger an update check.
updater.dangerzone.settings.set("updater_latest_version", "99.9.9")
updater.dangerzone.settings.set("updater_latest_changelog", "<p>changelog</p>")
settings.set("updater_latest_version", "99.9.9")
settings.set("updater_latest_changelog", "<p>changelog</p>")
report = updater.check_for_updates()
report = releases.check_for_updates(settings)
assert_report_equal(
report, UpdateReport(version="99.9.9", changelog="<p>changelog</p>")
)
@@ -211,14 +209,16 @@ def test_update_checks(
def test_update_checks_cooldown(updater: UpdaterThread, mocker: MockerFixture) -> None:
"""Make sure Dangerzone only checks for updates every X hours"""
updater.dangerzone.settings.set("updater_check", True)
updater.dangerzone.settings.set("updater_last_check", 0)
settings = updater.dangerzone.settings
settings.set("updater_check_all", True)
settings.set("updater_last_check", 0)
# Mock some functions before the tests start
cooldown_spy = mocker.spy(updater, "_should_postpone_update_check")
timestamp_mock = mocker.patch.object(updater, "_get_now_timestamp")
mocker.patch("dangerzone.gui.updater.requests.get")
requests_mock = updater_module.requests.get
cooldown_spy = mocker.spy(updater_module.releases, "_should_postpone_update_check")
timestamp_mock = mocker.patch.object(updater_module.releases, "_get_now_timestamp")
mocker.patch("dangerzone.updater.releases.requests.get")
requests_mock = updater_module.releases.requests.get
# Make requests.get().json() return the version info that we want.
mock_upstream_info = {"tag_name": "99.9.9", "body": "changelog"}
@@ -231,9 +231,9 @@ def test_update_checks_cooldown(updater: UpdaterThread, mocker: MockerFixture) -> None:
curtime = int(time.time())
timestamp_mock.return_value = curtime
report = updater.check_for_updates()
report = releases.check_for_updates(settings)
assert cooldown_spy.spy_return is False
assert updater.dangerzone.settings.get("updater_last_check") == curtime
assert settings.get("updater_last_check") == curtime
assert_report_equal(report, UpdateReport("99.9.9", "<p>changelog</p>"))
# Test 2: Advance the current time by 1 second, and ensure that no update will take
@@ -242,41 +242,39 @@ def test_update_checks_cooldown(updater: UpdaterThread, mocker: MockerFixture) -> None:
curtime += 1
timestamp_mock.return_value = curtime
requests_mock.side_effect = Exception("failed") # type: ignore [attr-defined]
updater.dangerzone.settings.set("updater_latest_version", get_version())
updater.dangerzone.settings.set("updater_latest_changelog", None)
settings.set("updater_latest_version", get_version())
settings.set("updater_latest_changelog", None)
report = updater.check_for_updates()
report = releases.check_for_updates(settings)
assert cooldown_spy.spy_return is True
assert updater.dangerzone.settings.get("updater_last_check") == curtime - 1 # type: ignore [unreachable]
assert settings.get("updater_last_check") == curtime - 1 # type: ignore [unreachable]
assert_report_equal(report, UpdateReport())
# Test 3: Advance the current time by <cooldown period> seconds. Ensure that
# Dangerzone checks for updates again, and the last check timestamp gets bumped.
curtime += updater_module.UPDATE_CHECK_COOLDOWN_SECS
curtime += updater_module.releases.UPDATE_CHECK_COOLDOWN_SECS
timestamp_mock.return_value = curtime
requests_mock.side_effect = None
report = updater.check_for_updates()
report = releases.check_for_updates(settings)
assert cooldown_spy.spy_return is False
assert updater.dangerzone.settings.get("updater_last_check") == curtime
assert settings.get("updater_last_check") == curtime
assert_report_equal(report, UpdateReport("99.9.9", "<p>changelog</p>"))
# Test 4: Make Dangerzone check for updates again, but this time, it should
# encounter an error while doing so. In that case, the last check timestamp
# should be bumped, so that subsequent checks don't take place.
updater.dangerzone.settings.set("updater_latest_version", get_version())
updater.dangerzone.settings.set("updater_latest_changelog", None)
settings.set("updater_latest_version", get_version())
settings.set("updater_latest_changelog", None)
curtime += updater_module.UPDATE_CHECK_COOLDOWN_SECS
curtime += updater_module.releases.UPDATE_CHECK_COOLDOWN_SECS
timestamp_mock.return_value = curtime
requests_mock.side_effect = Exception("failed")
report = updater.check_for_updates()
report = releases.check_for_updates(settings)
assert cooldown_spy.spy_return is False
assert updater.dangerzone.settings.get("updater_last_check") == curtime
error_msg = (
f"Encountered an exception while checking {updater.GH_RELEASE_URL}: failed"
)
assert settings.get("updater_last_check") == curtime
error_msg = f"Encountered an exception while checking {updater_module.releases.GH_RELEASE_URL}: failed"
assert_report_equal(report, UpdateReport(error=error_msg))
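For context, the cooldown assertions above hinge on two module-level helpers in `dangerzone.updater.releases`: `_get_now_timestamp()` and `_should_postpone_update_check()`. Here is a rough sketch of the behavior being tested; only the names come from the diff, while the helper bodies and the value of `UPDATE_CHECK_COOLDOWN_SECS` are assumptions for illustration:

```python
# A rough sketch of the cooldown helpers the tests above spy on and mock.
import time

UPDATE_CHECK_COOLDOWN_SECS = 12 * 60 * 60  # assumed value, for illustration only


def _get_now_timestamp() -> int:
    return int(time.time())


def _should_postpone_update_check(settings) -> bool:
    """Postpone when the previous check happened within the cooldown window."""
    last_check = settings.get("updater_last_check")
    if not last_check:  # None or 0: we have never checked, so don't postpone
        return False
    return _get_now_timestamp() < last_check + UPDATE_CHECK_COOLDOWN_SECS
```

When the check is postponed, `check_for_updates()` returns a report built from the cached version/changelog and leaves `updater_last_check` untouched, which is exactly what Test 2 asserts.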
@@ -284,16 +282,17 @@ def test_update_errors(
updater: UpdaterThread, monkeypatch: MonkeyPatch, mocker: MockerFixture
) -> None:
"""Test update check errors."""
settings = updater.dangerzone.settings
# Mock requests.get().
mocker.patch("dangerzone.gui.updater.requests.get")
requests_mock = updater_module.requests.get
mocker.patch("dangerzone.updater.releases.requests.get")
requests_mock = releases.requests.get
# Always assume that we can perform multiple update checks in a row.
monkeypatch.setattr(updater, "_should_postpone_update_check", lambda: False)
monkeypatch.setattr(releases, "_should_postpone_update_check", lambda: False)
# Test 1 - Check that request exceptions are being detected as errors.
requests_mock.side_effect = Exception("bad url") # type: ignore [attr-defined]
report = updater.check_for_updates()
report = releases.check_for_updates(settings)
assert report.error is not None
assert "bad url" in report.error
assert "Encountered an exception" in report.error
@@ -304,7 +303,7 @@ def test_update_errors(
requests_mock.return_value = MockResponse500() # type: ignore [attr-defined]
requests_mock.side_effect = None # type: ignore [attr-defined]
report = updater.check_for_updates()
report = releases.check_for_updates(settings)
assert report.error is not None
assert "Encountered an HTTP 500 error" in report.error
@@ -316,7 +315,7 @@ def test_update_errors(
return json.loads("bad json")
requests_mock.return_value = MockResponseBadJSON() # type: ignore [attr-defined]
report = updater.check_for_updates()
report = releases.check_for_updates(settings)
assert report.error is not None
assert "Received a non-JSON response" in report.error
@@ -328,7 +327,7 @@ def test_update_errors(
return {}
requests_mock.return_value = MockResponseEmpty() # type: ignore [attr-defined]
report = updater.check_for_updates()
report = releases.check_for_updates(settings)
assert report.error is not None
assert "Missing required fields in JSON" in report.error
@@ -340,7 +339,7 @@ def test_update_errors(
return {"tag_name": "vbad_version", "body": "changelog"}
requests_mock.return_value = MockResponseBadVersion() # type: ignore [attr-defined]
report = updater.check_for_updates()
report = releases.check_for_updates(settings)
assert report.error is not None
assert "Invalid version" in report.error
@@ -352,7 +351,7 @@ def test_update_errors(
return {"tag_name": "v99.9.9", "body": ["bad", "markdown"]}
requests_mock.return_value = MockResponseBadMarkdown() # type: ignore [attr-defined]
report = updater.check_for_updates()
report = releases.check_for_updates(settings)
assert report.error is not None
# Test 7 - Check that a valid response passes.
@@ -363,36 +362,38 @@ def test_update_errors(
return {"tag_name": "v99.9.9", "body": "changelog"}
requests_mock.return_value = MockResponseValid() # type: ignore [attr-defined]
report = updater.check_for_updates()
report = releases.check_for_updates(settings)
assert_report_equal(report, UpdateReport("99.9.9", "<p>changelog</p>"))
def test_update_check_prompt(
qtbot: QtBot,
qt_updater: UpdaterThread,
monkeypatch: MonkeyPatch,
mocker: MockerFixture,
) -> None:
"""Test that the prompt to enable update checks works properly."""
# Force Dangerzone to check immediately for updates
qt_updater.dangerzone.settings.set("updater_last_check", 0)
settings = qt_updater.dangerzone.settings
settings.set("updater_last_check", 0)
# Test 1 - Check that on the second run of Dangerzone, the user is prompted to
# choose if they want to enable update checks.
def check_button_labels() -> None:
dialog = qt_updater.dangerzone.app.activeWindow()
assert dialog.ok_button.text() == "Check Automatically" # type: ignore [attr-defined]
assert dialog.cancel_button.text() == "Don't Check" # type: ignore [attr-defined]
assert dialog.ok_button.text() == "Enable sandbox updates" # type: ignore [attr-defined]
assert dialog.cancel_button.text() == "Do not make any requests" # type: ignore [attr-defined]
dialog.ok_button.click() # type: ignore [attr-defined]
QtCore.QTimer.singleShot(500, check_button_labels)
mocker.patch(
"dangerzone.updater.releases._should_postpone_update_check", return_value=False
)
res = qt_updater.should_check_for_updates()
assert res is True
# Test 2 - Check that when the user chooses to enable update checks, we
# store that decision in the settings.
qt_updater.check = None
settings.set("updater_check_all", None, autosave=True)
def click_ok() -> None:
dialog = qt_updater.dangerzone.app.activeWindow()
@@ -402,11 +403,11 @@ def test_update_check_prompt(
res = qt_updater.should_check_for_updates()
assert res is True
assert qt_updater.check is True
assert settings.get("updater_check_all") is True
# Test 3 - Same as the previous test, but check that clicking on cancel stores the
# opposite decision.
qt_updater.check = None # type: ignore [unreachable]
settings.set("updater_check_all", None) # type: ignore [unreachable]
def click_cancel() -> None:
dialog = qt_updater.dangerzone.app.activeWindow()
@@ -416,11 +417,11 @@ def test_update_check_prompt(
res = qt_updater.should_check_for_updates()
assert res is False
assert qt_updater.check is False
assert settings.get("updater_check_all") is False
# Test 4 - Same as the previous test, but check that clicking on "X" does not store
# any decision.
qt_updater.check = None
settings.set("updater_check_all", None, autosave=True)
def click_x() -> None:
dialog = qt_updater.dangerzone.app.activeWindow()
@@ -430,4 +431,4 @@ def test_update_check_prompt(
res = qt_updater.should_check_for_updates()
assert res is False
assert qt_updater.check is None
assert settings.get("updater_check_all") is None

@@ -5,9 +5,11 @@ import pytest
from pytest_mock import MockerFixture
from pytest_subprocess import FakeProcess
from dangerzone import container_utils, errors
from dangerzone import errors
from dangerzone.container_utils import Runtime
from dangerzone.isolation_provider.container import Container
from dangerzone.isolation_provider.qubes import is_qubes_native_conversion
from dangerzone.util import get_resource_path
from .base import IsolationProviderTermination, IsolationProviderTest
@@ -23,42 +25,51 @@ def provider() -> Container:
return Container()
@pytest.fixture
def runtime_path() -> str:
return str(Runtime().path)
class TestContainer(IsolationProviderTest):
def test_is_available_raises(self, provider: Container, fp: FakeProcess) -> None:
def test_is_available_raises(
self, provider: Container, fp: FakeProcess, runtime_path: str
) -> None:
"""
NotAvailableContainerTechException should be raised when
the "podman image ls" command fails.
"""
fp.register_subprocess(
[container_utils.get_runtime(), "image", "ls"],
[runtime_path, "image", "ls"],
returncode=-1,
stderr="podman image ls logs",
)
with pytest.raises(errors.NotAvailableContainerTechException):
provider.is_available()
def test_is_available_works(self, provider: Container, fp: FakeProcess) -> None:
def test_is_available_works(
self, provider: Container, fp: FakeProcess, runtime_path: str
) -> None:
"""
No exception should be raised when the "podman image ls" command returns properly.
"""
fp.register_subprocess(
[container_utils.get_runtime(), "image", "ls"],
[runtime_path, "image", "ls"],
)
provider.is_available()
def test_install_raise_if_image_cant_be_installed(
self, mocker: MockerFixture, provider: Container, fp: FakeProcess
self, provider: Container, fp: FakeProcess, runtime_path: str
) -> None:
"""When an image installation fails, an exception should be raised"""
fp.register_subprocess(
[container_utils.get_runtime(), "image", "ls"],
[runtime_path, "image", "ls"],
)
# First check should return nothing.
fp.register_subprocess(
[
container_utils.get_runtime(),
runtime_path,
"image",
"list",
"--format",
@@ -68,11 +79,13 @@ class TestContainer(IsolationProviderTest):
occurrences=2,
)
# Make podman load fail
mocker.patch("gzip.open", mocker.mock_open(read_data=""))
fp.register_subprocess(
[container_utils.get_runtime(), "load"],
[
runtime_path,
"load",
"-i",
get_resource_path("container.tar").absolute(),
],
returncode=-1,
)
@@ -80,18 +93,22 @@ class TestContainer(IsolationProviderTest):
provider.install()
def test_install_raises_if_still_not_installed(
self, mocker: MockerFixture, provider: Container, fp: FakeProcess
self, provider: Container, fp: FakeProcess, runtime_path: str
) -> None:
"""When an image keep being not installed, it should return False"""
fp.register_subprocess(
[runtime_path, "version", "-f", "{{.Client.Version}}"],
stdout="4.0.0",
)
fp.register_subprocess(
[container_utils.get_runtime(), "image", "ls"],
[runtime_path, "image", "ls"],
)
# First check should return nothing.
fp.register_subprocess(
[
container_utils.get_runtime(),
runtime_path,
"image",
"list",
"--format",
@@ -101,10 +118,13 @@ class TestContainer(IsolationProviderTest):
occurrences=2,
)
# Patch gzip.open and podman load so that it works
mocker.patch("gzip.open", mocker.mock_open(read_data=""))
fp.register_subprocess(
[container_utils.get_runtime(), "load"],
[
runtime_path,
"load",
"-i",
get_resource_path("container.tar").absolute(),
],
)
with pytest.raises(errors.ImageNotPresentException):
provider.install()
@@ -191,7 +211,7 @@ class TestContainer(IsolationProviderTest):
reason="Linux specific",
)
def test_linux_skips_desktop_version_check_returns_true(
self, mocker: MockerFixture, provider: Container
self, provider: Container
) -> None:
assert (True, "") == provider.check_docker_desktop_version()

@@ -7,10 +7,13 @@ import platform
import shutil
import sys
import tempfile
import time
import traceback
from pathlib import Path
from typing import Optional, Sequence
import fitz
import numpy as np
import pytest
from click.testing import CliRunner, Result
from pytest_mock import MockerFixture
@@ -190,11 +193,68 @@ class TestCliConversion(TestCliBasic):
result = self.run_cli([sample_pdf, "--ocr-lang", "piglatin"])
result.assert_failure()
@pytest.mark.reference_generator
@for_each_doc
def test_formats(self, doc: Path) -> None:
result = self.run_cli(str(doc))
def test_regenerate_reference(self, doc: Path) -> None:
reference = (doc.parent / "reference" / doc.stem).with_suffix(".pdf")
result = self.run_cli([str(doc), "--output-filename", str(reference)])
result.assert_success()
@for_each_doc
def test_formats(self, doc: Path, tmp_path_factory: pytest.TempPathFactory) -> None:
reference = (doc.parent / "reference" / doc.stem).with_suffix(".pdf")
destination = tmp_path_factory.mktemp(doc.stem).with_suffix(".pdf")
result = self.run_cli([str(doc), "--output-filename", str(destination)])
result.assert_success()
# Do not check against reference versions when using a dummy isolation provider
if os.environ.get("DUMMY_CONVERSION", False):
return
converted = fitz.open(destination)
ref = fitz.open(reference)
errors = []
if len(converted) != len(ref):
errors.append("different number of pages")
diffs = doc.parent / "diffs"
diffs.mkdir(parents=True, exist_ok=True)
for page, ref_page in zip(converted, ref):
curr_pixmap = page.get_pixmap(dpi=150)
ref_pixmap = ref_page.get_pixmap(dpi=150)
if curr_pixmap.tobytes() != ref_pixmap.tobytes():
errors.append(f"page {page.number} differs")
t0 = time.perf_counter()
arr_ref = np.frombuffer(ref_pixmap.samples, dtype=np.uint8).reshape(
ref_pixmap.height, ref_pixmap.width, ref_pixmap.n
)
arr_curr = np.frombuffer(curr_pixmap.samples, dtype=np.uint8).reshape(
curr_pixmap.height, curr_pixmap.width, curr_pixmap.n
)
# Find differences (any channel differs)
diff = (arr_ref != arr_curr).any(axis=2)
# Get coordinates of differences
diff_coords = np.where(diff)
# Mark differences in red
for y, x in zip(diff_coords[0], diff_coords[1]):
# Note: PyMuPDF's set_pixel takes (x, y) not (y, x)
ref_pixmap.set_pixel(int(x), int(y), (255, 0, 0)) # Red
t1 = time.perf_counter()
print(f"diff took {t1 - t0} seconds")
ref_pixmap.save(diffs / f"{destination.stem}_{page.number}.jpeg")
if len(errors) > 0:
raise AssertionError(
f"The resulting document differs from the reference. See {str(diffs)} for a visual diff."
)
def test_output_filename(self, sample_pdf: str) -> None:
temp_dir = tempfile.mkdtemp(prefix="dangerzone-")
output_filename = str(Path(temp_dir) / "safe.pdf")
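As the `perf_counter` timing above hints, painting differences with one `set_pixel()` call per pixel is the slow path. Below is a hedged sketch of a fully vectorized alternative: compute the same boolean diff mask, assign red to every differing pixel in a single numpy operation, and rebuild a pixmap from the raw samples. It assumes RGB pixmaps without an alpha channel (`n == 3`) and uses PyMuPDF's documented `Pixmap(colorspace, width, height, samples, alpha)` constructor form:

```python
# A sketch of a vectorized red-marking pass, assuming RGB pixmaps (n == 3).
import fitz
import numpy as np


def mark_diff_red(ref_pixmap: fitz.Pixmap, curr_pixmap: fitz.Pixmap) -> fitz.Pixmap:
    # frombuffer() returns a read-only view, so copy before writing in place.
    arr_ref = (
        np.frombuffer(ref_pixmap.samples, dtype=np.uint8)
        .reshape(ref_pixmap.height, ref_pixmap.width, ref_pixmap.n)
        .copy()
    )
    arr_curr = np.frombuffer(curr_pixmap.samples, dtype=np.uint8).reshape(
        curr_pixmap.height, curr_pixmap.width, curr_pixmap.n
    )
    diff = (arr_ref != arr_curr).any(axis=2)  # True wherever any channel differs
    arr_ref[diff] = (255, 0, 0)  # paint all differing pixels red at once
    return fitz.Pixmap(
        fitz.csRGB, ref_pixmap.width, ref_pixmap.height, arr_ref.tobytes(), False
    )
```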

@@ -0,0 +1,60 @@
from pathlib import Path
import pytest
from pytest_mock import MockerFixture
from dangerzone import errors
from dangerzone.container_utils import Runtime
from dangerzone.settings import Settings
def test_get_runtime_name_from_settings(mocker: MockerFixture, tmp_path: Path) -> None:
mocker.patch("dangerzone.settings.get_config_dir", return_value=tmp_path)
mocker.patch("dangerzone.container_utils.Path.exists", return_value=True)
settings = Settings()
settings.set("container_runtime", "/opt/somewhere/docker", autosave=True)
assert Runtime().name == "docker"
def test_get_runtime_name_linux(mocker: MockerFixture, tmp_path: Path) -> None:
mocker.patch("dangerzone.settings.get_config_dir", return_value=tmp_path)
mocker.patch("platform.system", return_value="Linux")
mocker.patch(
"dangerzone.container_utils.shutil.which", return_value="/usr/bin/podman"
)
mocker.patch("dangerzone.container_utils.os.path.exists", return_value=True)
runtime = Runtime()
assert runtime.name == "podman"
assert runtime.path == Path("/usr/bin/podman")
def test_get_runtime_name_non_linux(mocker: MockerFixture, tmp_path: Path) -> None:
mocker.patch("platform.system", return_value="Windows")
mocker.patch("dangerzone.settings.get_config_dir", return_value=tmp_path)
mocker.patch(
"dangerzone.container_utils.shutil.which", return_value="/usr/bin/docker"
)
mocker.patch("dangerzone.container_utils.os.path.exists", return_value=True)
runtime = Runtime()
assert runtime.name == "docker"
assert runtime.path == Path("/usr/bin/docker")
mocker.patch("platform.system", return_value="Something else")
runtime = Runtime()
assert runtime.name == "docker"
assert runtime.path == Path("/usr/bin/docker")
assert Runtime().name == "docker"
def test_get_unsupported_runtime_name(mocker: MockerFixture, tmp_path: Path) -> None:
mocker.patch("dangerzone.settings.get_config_dir", return_value=tmp_path)
settings = Settings()
settings.set(
"container_runtime", "/opt/somewhere/new-kid-on-the-block", autosave=True
)
with pytest.raises(errors.UnsupportedContainerRuntime):
assert Runtime().name == "new-kid-on-the-block"
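The new test module above fully determines the runtime resolution order: an explicit `container_runtime` path in the settings wins, otherwise `podman` is looked up on Linux and `docker` everywhere else, and any other basename raises `errors.UnsupportedContainerRuntime`. A condensed sketch of that logic follows; the real `Runtime` class in `dangerzone.container_utils` also verifies that the binary exists, and the exception's constructor signature and the settings key's default are assumptions here:

```python
# A condensed sketch of the resolution rules the tests above pin down.
import platform
import shutil
from pathlib import Path

from dangerzone import errors
from dangerzone.settings import Settings


class Runtime:
    def __init__(self) -> None:
        configured = Settings().get("container_runtime")  # assumed to default to None
        if configured:
            self.path = Path(configured)
        else:
            default = "podman" if platform.system() == "Linux" else "docker"
            self.path = Path(shutil.which(default))  # tests mock which() to a path
        self.name = self.path.name
        if self.name not in ("podman", "docker"):
            raise errors.UnsupportedContainerRuntime(self.name)
```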
