Compare commits

...

1153 commits
v0.3.1 ... main

Author SHA1 Message Date
Alex Pyrgiotis
d9efcd8a26
Retain Grype ignore list from current branch
When security scanning our poetry.lock file for the **released**
Dangerzone version, retain the Grype ignore
list (.grype.yaml) of the current branch, which would otherwise be
overwritten by a git checkout of the latest released tag (v0.9.0 as of
this writing). This way, we can instruct Grype to ignore vulnerabilities
in the latest Dangerzone release.
2025-04-28 15:24:41 +03:00
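A minimal sketch of the retention step described above, in shell; the tag,
branch, and Grype invocation are illustrative assumptions rather than the
exact commands of the scanning workflow:

```bash
# Keep a copy of the ignore list from the branch that triggered the scan.
cp .grype.yaml /tmp/grype-ignore.yaml

# Check out the released tag so we scan its poetry.lock...
git checkout v0.9.0

# ...but restore the current branch's ignore list before scanning,
# so newly added ignore entries still apply to the released version.
cp /tmp/grype-ignore.yaml .grype.yaml
grype dir:.  # illustrative invocation; the real job may scan differently
```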
Alex Pyrgiotis
a127eef9db
Ignore CVE-2025-43859 / GHSA-vqfr-h8mv-ghfj
Ignore an h11 vulnerability that is present in the Dangerzone
application released from the `v0.9.0` tag. This vulnerability
reportedly affects web servers behind reverse proxies, which does not
apply to Dangerzone.
2025-04-28 15:22:23 +03:00
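As a hedged illustration, an ignore entry along the following lines in
`.grype.yaml` would achieve this; the field names follow Grype's documented
ignore-rule schema, but the exact entry in the repository may differ:

```bash
cat >> .grype.yaml <<'EOF'
ignore:
  # h11 vulnerability; reportedly only relevant to web servers behind
  # reverse proxies, which does not apply to Dangerzone.
  - vulnerability: GHSA-vqfr-h8mv-ghfj
EOF
```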
dependabot[bot]
847926f59a
build(deps-dev): bump h11 from 0.14.0 to 0.16.0
Bumps [h11](https://github.com/python-hyper/h11) from 0.14.0 to 0.16.0.
- [Commits](https://github.com/python-hyper/h11/compare/v0.14.0...v0.16.0)

---
updated-dependencies:
- dependency-name: h11
  dependency-version: 0.16.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-28 14:29:10 +03:00
Alexis Métaireau
ec7f6b7321
Fix Debian-derivatives installation instructions
The way trust is handled for a PGP key has changed in recent versions
of `apt-secure`, which now requires PGP keys to be stored in a format
other than the internal GPG keybox database.

While updating the CI checks, I found a difference between them and the
instructions provided in the INSTALL.md file, which were using the
armored version of the key.

The instructions now require unarmored keys, stored in a `.gpg` file,
and installation of these keys differs depending on the system, using
`sq` on newer distributions.
2025-04-28 10:05:18 +02:00
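For context, a common shape of such instructions is sketched below; the URL,
keyring name, and suite are placeholders, and the actual INSTALL.md steps
(including the `sq`-based variant for newer distributions) may differ:

```bash
# Download the signing key and store it unarmored as a .gpg file,
# outside the legacy GPG keybox database.
curl -fsSL https://example.org/dangerzone.asc \
  | gpg --dearmor | sudo tee /usr/share/keyrings/dangerzone.gpg >/dev/null

# Reference the keyring explicitly from the APT source entry.
echo "deb [signed-by=/usr/share/keyrings/dangerzone.gpg] https://example.org/apt bookworm main" \
  | sudo tee /etc/apt/sources.list.d/dangerzone.list
sudo apt update
```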
Alexis Métaireau
83be5fb151
Release container is now using the .tar format
Update the CI check to account for it.
2025-04-14 15:08:32 +02:00
Alex Pyrgiotis
04096380ff
Include Ubuntu Plucky and Fedora 42 in our nightly repo checks
2025-04-10 12:00:15 +02:00
Alexis Métaireau
21ca927b8b
Send release notes to editorial during the release process
2025-04-09 20:55:31 +02:00
Alexis Métaireau
05040de212
Point download links to the 0.9.0 release 2025-04-09 17:08:50 +02:00
Alexis Métaireau
4014c8591b
Docs: Update the Podman Desktop docs for macOS
In order to access our custom seccomp policy, we require it to be
mounted on the podman machine.

Co-authored-by: Alex Pyrgiotis <alex.p@freedom.press>
2025-04-09 17:04:42 +02:00
Alex Pyrgiotis
6cd706af10
windows: Minor change to uninstallation message
Refs #1026
2025-04-09 14:26:45 +02:00
Alex Pyrgiotis
634b171b97
windows: Detect Dangerzone 0.8.1 during install
Detect Dangerzone 0.8.1 versions during install, so that we can prompt
users to manually uninstall it.

Refs #929
2025-04-09 14:26:44 +02:00
Alexis Métaireau
c99c424f87
Document Podman Desktop experimental support for Windows and macOS
2025-04-08 16:08:55 +02:00
Alex Pyrgiotis
19fa11410b
Update reference template for Qubes to Fedora 41
Closes #1078
2025-04-08 16:37:28 +03:00
Alex Pyrgiotis
10be85b9f2
container: Add workarounds for Podman Desktop support on Windows
When running on Windows with Podman Desktop (for which we currently
offer experimental support), we must omit certain Podman flags in order
to avoid conversion errors.

Refs #1127
2025-04-08 16:36:08 +03:00
Alexis Métaireau
47d732e603
Document the Makefile targets
It now outputs the following:

```
build-linux                  Build linux packages (.rpm and .deb)
build-macos-arm              Build macOS Apple Silicon package (.dmg)
build-macos-intel            Build macOS intel package (.dmg)
Dockerfile                   Regenerate the Dockerfile from its template
fix                          apply all the suggestions from ruff
help                         Print this message and exit.
lint                         Check the code for linting, formatting, and typing issues with ruff and mypy
regenerate-reference-pdfs    Regenerate the reference PDFs
test                         Run the tests
test-large                   Run large test set
```
2025-04-08 16:34:34 +03:00
Alexis Métaireau
d6451290db
Move the multithreading patch up so that it works in the GUI 2025-04-08 16:34:34 +03:00
Alex Pyrgiotis
f0bb65cb4e
Bypass a cx-freeze issue for fitz._wxcolors
Bypass an issue with `cx-freeze` that fails to include the
`fitz._wxcolors` module in the final Windows artifact.

Refs #1128
2025-04-08 16:34:34 +03:00
Alex Pyrgiotis
0c741359cc
Make our build-image.py script runnable on Windows 2025-04-08 16:34:34 +03:00
Alex Pyrgiotis
8c61894e25
Handle the case where Docker is not installed
Refs #1132
2025-04-08 16:33:15 +03:00
Alex Pyrgiotis
57667a96be
Add a way to unset the container runtime
Add a way to set the container runtime that Dangerzone uses back to the
default.
2025-04-07 18:23:13 +03:00
Alex Pyrgiotis
1a644e2506
Do not install poetry-plugin-export
Do not unconditionally install the Poetry plugin for exporting
dependencies as a requirements.txt file, since it's used only when
building a Debian package. Keep it instead in the Linux instructions and
when building a Dangerzone environment.
2025-04-07 18:23:10 +03:00
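A sketch of the now-conditional step, using Poetry's standard plugin and
export commands; the exact options used by the Debian packaging scripts are
assumptions:

```bash
# Only needed when building the Debian package or a Dangerzone environment.
poetry self add poetry-plugin-export
poetry export -f requirements.txt -o requirements.txt
```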
Alex Pyrgiotis
843e68cdf7
Handle the case of empty tesseract dirs during download 2025-04-07 18:22:52 +03:00
Alex Pyrgiotis
33b2a183ce
docs: Improve doit docs 2025-04-07 18:22:52 +03:00
Alex Pyrgiotis
c7121b69a3
Prefer poetry sync to poetry install --sync
Use `poetry sync` instead of `poetry install --sync`, since the latter
is deprecated and will be removed after June 2025, as seen in the
following warning message:

  The `--sync` option is deprecated and slated for removal in the next
  minor release after June 2025, use the `poetry sync` command instead.
2025-04-07 18:22:50 +03:00
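In other words, the migration is a one-line change in how the environment is
synchronized:

```bash
# Deprecated spelling, slated for removal after June 2025:
poetry install --sync
# Preferred spelling:
poetry sync
```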
Alex Pyrgiotis
0b3bf89d5b
Implicitly run doit with poetry run
Implicitly run `doit` with `poetry run`, else `poetry env remove --all`
will remove the calling Python interpreter.
2025-04-02 12:01:14 +03:00
Alex Pyrgiotis
e0b10c5e40
doit: Remove tessdata dir from targets
Remove the tesseract data dir from the doit targets, else we encounter
the following error:

  Traceback (most recent call last):
    [...]
    File "[...]/Library/Caches/pypoetry/virtualenvs/dangerzone-52Yr5wv_-py3.11/lib/python3.11/site-packages/doit/dependency.py", line 39, in get_file_md5
      with open(path, 'rb') as file_data:
           ^^^^^^^^^^^^^^^^
  IsADirectoryError: [Errno 21] Is a directory: 'share/tessdata'
2025-04-02 11:46:20 +03:00
Alex Pyrgiotis
092eec55d1
doit: Remove unused 'DEBIAN_VERSIONS' variable 2025-04-02 11:45:47 +03:00
Alex Pyrgiotis
14a480c3a3
doit: Fix typo in Fedora targets
Fix a typo when building a Fedora target. Also, add Fedora 42 support.
2025-04-02 11:44:50 +03:00
Alex Pyrgiotis
9df825db5c
debian: Use abbreviated months in changelog
Use abbreviated months in the Debian changelog, else we'll have warnings
like the following:

  LINE:  -- Freedom of the Press Foundation   <info@freedom.press>  Mon, 31 March 2025 15:57:18 +0300
  dpkg-source: warning: dangerzone/debian/changelog(l5): cannot parse non-conformant date '31 March 20
2025-04-02 11:35:31 +03:00
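A conformant trailer line uses an RFC 2822 date with an abbreviated month
name, which `date -R` produces; the maintainer line below mirrors the example
from the warning and is illustrative:

```bash
date -R
# => Mon, 31 Mar 2025 15:57:18 +0300
# so the changelog trailer becomes:
#  -- Freedom of the Press Foundation <info@freedom.press>  Mon, 31 Mar 2025 15:57:18 +0300
```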
Alex Pyrgiotis
2ee22a497a
Reinstall deps after doit cleans everything
Make sure to reinstall the project dependencies once `doit clean` runs,
since it also removes itself.
2025-04-02 11:30:31 +03:00
Alex Pyrgiotis
b5c09e51d8
Update minimum Docker Desktop version
Update the minimum Docker Desktop version prior to the 0.9.0 release.
The new version should also fix a recent Docker bug whereby the container's
stdout was truncated, causing our conversions to fail.

Fixes #1101
2025-04-01 10:33:57 +03:00
Alex Pyrgiotis
37c7608c0f
Bump download links for 0.9.0 2025-04-01 10:33:57 +03:00
Alex Pyrgiotis
972b264236
Update the Dangerzone image and its dependencies
Bump the Debian container image, gVisor version, and H2Orestart plugin.
2025-04-01 10:33:55 +03:00
Alex Pyrgiotis
e38d8e5db0
Update changelog
Update our changelog with all the new changes that have been merged in
the 0.9.0 version.
2025-04-01 10:31:43 +03:00
Alex Pyrgiotis
f92833cdff
Bump version to 0.9.0 2025-04-01 10:26:27 +03:00
Alex Pyrgiotis
07aad5edba
Bump poetry.lock file
Bump the poetry.lock file using `poetry lock --regenerate`.
2025-04-01 10:26:26 +03:00
sudoforge
e8ca12eb11
Use a fully qualified URI for the debian image
This change adds the registry prefix to the `debian` image we pull
from `docker.io/library`. By adding this, we improve support for
non-interactive builds, as users who do not have a preferred default
registry defined in their local configuration will no longer be prompted
to select which registry to pull this from.
2025-03-31 09:26:25 -07:00
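The difference is easiest to see with a manual pull; the tag below is an
illustrative assumption, not necessarily the one used in the Dockerfile:

```bash
# Short name: Podman may prompt for a registry if none is configured locally.
podman pull debian:bookworm
# Fully qualified name: unambiguous, suitable for non-interactive builds.
podman pull docker.io/library/debian:bookworm
```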
sudoforge
491cca6341
Use a digest for the debian base image
66600f32dc introduced various improvements
to the determinism of the container image in this repository. This
change builds on this effort by ensuring that the base image is pulled
by digest. Image digests are immutable references, unlike tags, which
are mutable (except when optionally configured as immutable in certain
container registries, but not `docker.io`).
2025-03-31 08:04:05 -07:00
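A sketch of how a digest can be resolved and then pinned; the image tag is an
assumption and the digest shown is a placeholder:

```bash
docker pull docker.io/library/debian:bookworm
docker image inspect --format '{{index .RepoDigests 0}}' docker.io/library/debian:bookworm
# => docker.io/library/debian@sha256:<digest>
# The Dockerfile can then reference the immutable digest:
#   FROM docker.io/library/debian@sha256:<digest>
```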
Alexis Métaireau
0a7b79f61a
Add a set-container-runtime option to dangerzone-cli
This sets the container runtime in the settings, giving users an easy
way to change it without having to edit the JSON settings by hand.

When setting the container runtime, one can just pass "podman" and the
path to the executable will be stored in the settings.
2025-03-31 16:20:29 +02:00
Alexis Métaireau
86eab5d222
Ensure that only podman and docker container runtimes can be used 2025-03-31 16:20:29 +02:00
Alexis Métaireau
ed39c056bb
Reset terminal colors after printing the banner 2025-03-31 16:20:29 +02:00
Alexis Métaireau
983622fe59
Update CHANGELOG 2025-03-31 16:20:29 +02:00
Alexis Métaireau
8e99764952
Use a Runtime class to get information about container runtimes
This is useful to avoid parsing the settings too many times.
2025-03-31 16:20:28 +02:00
Alexis Métaireau
20cd9cfc5c
Allow defining a container_runtime_path 2025-03-31 16:20:28 +02:00
Alexis Métaireau
f082641b71
Only check Docker version if the container runtime is set to docker 2025-03-31 16:20:28 +02:00
Alexis Métaireau
c0215062bc
Allow reading the container runtime from the settings
Add a few tests for this along the way, and update the end-user messages
about Docker/Podman to account for this change.
2025-03-31 16:20:28 +02:00
Alexis Métaireau
b551a4dec4
Mock the settings rather than monkeypatching external modules 2025-03-31 16:20:28 +02:00
Alexis Métaireau
5a56a7f055
Decouple the Settings class from DangerzoneCore
There is no real reason to pass the whole object when all we really need
is the location of the configuration folder.
2025-03-31 16:20:28 +02:00
Alexis Métaireau
ab6dd9c01d
Use pathlib.Path to return path locations 2025-03-31 16:20:28 +02:00
Alex Pyrgiotis
dfcb74b427
Improve our release instructions regarding versioned links
Update our `RELEASE.md` so that we don't forget to bump the download
links in `INSTALL.md` prior to tagging a release. This way, we won't
have a versioned `INSTALL.md` page pointing to an older download link.

Note that this means that the latest version of the `INSTALL.md` page
will point to a broken link, in the short period of time between the
pre-release and the actual release. That's not an issue in our case,
because we don't point to the latest version of our `INSTALL.md` from
our `README.md`. We use versioned links instead, and thus we minimize
the chance that a user may encounter a broken link.

Fixes #1100
2025-03-28 15:04:05 +02:00
Alexis Métaireau
a910ccc273
Provide a way to opt-out from CHANGELOG check
Co-authored-by: Alex Pyrgiotis <alex.p@freedom.press>
2025-03-28 13:53:05 +01:00
dependabot[bot]
d868699bab
build(deps): bump slsa-framework/slsa-github-generator
Bumps [slsa-framework/slsa-github-generator](https://github.com/slsa-framework/slsa-github-generator) from 2.0.0 to 2.1.0.
- [Release notes](https://github.com/slsa-framework/slsa-github-generator/releases)
- [Changelog](https://github.com/slsa-framework/slsa-github-generator/blob/main/CHANGELOG.md)
- [Commits](https://github.com/slsa-framework/slsa-github-generator/compare/v2.0.0...v2.1.0)

---
updated-dependencies:
- dependency-name: slsa-framework/slsa-github-generator
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-26 14:54:50 +01:00
Alexis Métaireau
d6adfbc6c1
Skip PDF-diffing tests when using a dummy isolation provider. 2025-03-26 11:45:46 +01:00
Alexis Métaireau
687bd8585f
Update reference documents to their latest version 2025-03-26 11:45:46 +01:00
Alexis Métaireau
b212bfc47e
Add a makefile target to regenerate reference PDFs
This leverages a new flag that can be passed during the tests to
regenerate the PDFs if needed.
2025-03-26 11:45:45 +01:00
Alexis Métaireau
bbc90be217
Publish the resulting diffs as GitHub artifacts
This makes it easier to inspect them after CI run failures.
2025-03-26 11:45:45 +01:00
Alexis Métaireau
2d321bf257
Add a dependency to numpy for the tests
This is useful to reduce the computation time when creating PDF visual
diffs. Here is a comparison of the same operation using Python arrays
and NumPy arrays + lookups:

Python arrays:
```
diff took 5.094218431997433 seconds
diff took 3.1553626069980965 seconds
diff took 3.3721952960004273 seconds
diff took 3.2134646750018874 seconds
diff took 3.3410625500000606 seconds
diff took 3.2893160990024626 seconds
```

Numpy:
```
diff took 0.13705662599750212 seconds
diff took 0.05698924000171246 seconds
diff took 0.15319590600120137 seconds
diff took 0.06126453700198908 seconds
diff took 0.12916332699751365 seconds
diff took 0.05839455900058965 seconds
```
2025-03-26 11:45:44 +01:00
Alexis Métaireau
8bfeae4eed
tests: test for regressions when converting PDFs
This stores a reference version of the converted PDFs and diffs them
against the newly converted documents during the tests.
2025-03-26 11:45:43 +01:00
Alexis Métaireau
3ed71e8ee0
Document Operating System support
The goal is to have rules rather than specific versions, and a table to summarize everything.
2025-03-21 12:08:30 +01:00
Alexis Métaireau
fa8e8c6dbb
CI: Enforce updating the CHANGELOG in the CI
Currently, this only returns warnings, but we tend to just skip them.
Since it's possible to merge PRs when the CI is red, issuing an error
would help remind us to populate this file.
2025-03-21 11:10:56 +01:00
Alex Pyrgiotis
8d05b5779d
ci: Reproducibly build a container image
Create a reusable GitHub Actions workflow that does the following:

1. Create a multi-architecture container image for Dangerzone, instead
   of having two different tarballs (or no option at all)
2. Build the Dangerzone container image on our supported architectures
   (linux/amd64 and linux/arm64). It so happens that GitHub also offers
   ARM machine runners, which speeds up the build.
3. Combine the images from these two architectures into one, multi-arch
   image.
4. Generate provenance info for each manifest, and the root manifest
   list.
5. Check the image's reproducibility.

Also, remove an older CI job for checking the reproducibility of the
image, which is now obsolete.

Fixes #1035
2025-03-20 17:24:42 +02:00
Alex Pyrgiotis
e1dbdff1da
Completely overhaul the reproduce-image.py script
Make a major change to the `reproduce-image.py` script: drop `diffoci`,
build the container image, and ensure it has the exact same hash as the
source image.

We can drop the `diffoci` script for comparing the two images, because
we are now able to build bit-for-bit reproducible images.
2025-03-20 17:17:46 +02:00
Alex Pyrgiotis
a1402d5b6b
Fix a Podman regression regarding Buildkit images
Loading an image built with Buildkit in Podman 3.4 messes up its name.
The tag somehow becomes the name of the loaded image.

We know that older Podman versions are not generally affected, since
Podman v3.0.1 on Debian Bullseye works properly. Also, Podman v4.0 is
not affected, so it makes sense to target only Podman v3.4 for a fix.

The fix is simple: tag the image properly based on the expected tag from
`share/image-id.txt` and delete the incorrect one.

Refs containers/podman#16490
2025-03-20 17:17:40 +02:00
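A minimal sketch of the workaround; the mangled name variable and the image
repository are illustrative assumptions:

```bash
expected_tag="$(cat share/image-id.txt)"
# Podman 3.4 loads the image under the wrong name, so re-tag it...
podman tag "$mangled_name" "dangerzone.rocks/dangerzone:$expected_tag"
# ...and drop the incorrect reference.
podman rmi "$mangled_name"
```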
Alex Pyrgiotis
51f432be6b
Fix references to container.tar.gz
Find all references to the `container.tar.gz` file, and replace them
with references to `container.tar`. Moreover, remove the `--no-save`
argument of `build-image.py` since we now always save the image.

Finally, fix some stale references to Poetry, which are not necessary
anymore.
2025-03-20 17:15:15 +02:00
Alex Pyrgiotis
69234507c4
Build container image using repro-build
Invoke the `repro-build` script when building a container image, instead
of the underlying Docker/Podman commands. The `repro-build` script
handles the underlying complexity to call Docker/Podman in a manner that
makes the image reproducible.

Moreover, mirror some arguments from the `repro-build` script, so that
consumers of `build-image.py` can pass them to it.

Important: the resulting image will be in .tar format, not .tar.gz,
starting from this commit. This means that our tests will be broken for
the next few commits.

Fixes #1074
2025-03-20 17:15:15 +02:00
Alex Pyrgiotis
94fad78f94
Vendor repro-build script
Vendor the `repro-build` script in our codebase, which will be used to
build our container image in a reproducible manner. We prefer to copy it
verbatim for the time being, since its interface is not stable enough
and the repro-build repo is, after all, not reviewed.

In the future, we want to store this script in a separate place, and
pull it when necessary.

Refs #1085
2025-03-20 17:15:15 +02:00
Alex Pyrgiotis
66600f32dc
Remove sources of non-determinism from our image
Make our container image more reproducible, by changing the following in
our Dockerfile:
1. Touch `/etc/apt/sources.list` with a UTC timestamp (as sketched below).
   Otherwise, builds in different countries (!?) may result in different
   Unix epochs for the same date, and therefore a different modification
   time for the file.
2. Turn the third column of `/etc/shadow` (date of last password change)
   for the `dangerzone` user into a constant number.
3. Fix r-s file permissions in some copied files, due to inconsistent
   COPY behavior in containerized vs non-containerized Buildkit. This
   requires creating a full file hierarchy in a separate directory (see
   new_root/).
4. Set a specific modification time for the entrypoint script, because
   rewrite-timestamp=true does not overwrite it.
2025-03-20 17:15:15 +02:00
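For item 1, a hedged sketch of normalizing the file's timestamp against an
explicit UTC instant (the date shown is illustrative, not the one used in the
Dockerfile):

```bash
# Pick a fixed instant in UTC so the mtime does not depend on the build
# machine's local-timezone interpretation of the same calendar date.
touch --date='2025-01-01 00:00:00 UTC' /etc/apt/sources.list
```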
Alex Pyrgiotis
d41f604969
Bump container image parameters
Bump all the values in Dockerfile.env, since there are new releases out
for all of them.
2025-03-20 17:15:15 +02:00
Alex Pyrgiotis
6d269572ae
Add support for Ubuntu 25.04 (plucky)
Closes #1090
2025-03-20 16:56:58 +02:00
Alex Pyrgiotis
c7ba9ee75c
Add support for Fedora 42
Closes #1091
2025-03-20 16:53:37 +02:00
Alexis Métaireau
418b68d4ca
Avoid passing wrong options -B to subprocesses
This is a common pitfall of PyInstaller when using multiprocessing.

In our case, the spawned process is passed the -B option, as if it were
python (but it's dangerzone).

> -B     Don't write .pyc files on import. See also PYTHONDONTWRITEBYTECODE.

As a result, dangerzone is spawned with the -B option, which doesn't
mean anything for it.

> In the frozen application, sys.executable points to your application
> executable. So when the multiprocessing module in your main process
> attempts to spawn a subprocess (a worker or the resource tracker), it
> runs another instance of your program, with the following arguments for
> resource tracker:
>
> my_program -B -S -I -c "from multiprocessing.resource_tracker import main;main(5)"

https://pyinstaller.org/en/stable/common-issues-and-pitfalls.html#multi-processing
2025-03-17 17:47:42 +01:00
Alex Pyrgiotis
9ba95b5c20
Use correct Ubuntu version for conmon notice
2025-03-17 15:40:25 +02:00
Alex Pyrgiotis
b043c97c41
Unpin the Debian-vendored PyMuPDF package
Unpin the PyMuPDF package that we vendor in our Debian packages. We
originally pinned it to version 1.24.11, because it was the last version
that supported Ubuntu Focal, but we can now unpin it, since we have
dropped Ubuntu Focal support.

Fixes #1018
2025-03-17 15:40:25 +02:00
Alex Pyrgiotis
4a48a2551b
Drop Ubuntu 20.04 (Focal) support
Drop Ubuntu 20.04 (Focal) support, because it's nearing its end-of-life
date. By doing so, we can remove several workarounds and notices we had
in place for this version, and most importantly, remove the pin to our
vendored PyMuPDF package.

Refs #1018
Refs #965
2025-03-17 15:40:25 +02:00
Alex Pyrgiotis
56663023f5
ci: Security scan ARM images
Scan ARM images using Anchore's scan action, by utilizing the Ubuntu ARM
runners provided by GitHub. While our ARM images are used only on Apple
silicon macOS platforms, we can use the Ubuntu ARM runners just for scanning.

Closes #1008
2025-03-10 18:45:26 +02:00
Alex Pyrgiotis
53a952235c
Specify version when installing WiX
Update our CI job and build instructions with the latest WiX version, so
that we don't encounter any installation issues when new WiX versions
are released.

Also, add a reminder in our release instructions to bump the WiX version
before we start a new release.

Fixes #1087
2025-03-10 18:03:24 +02:00
Erik Moeller
d2652ef6cd
Add reference to funding.json (required by floss.fund application)
2025-03-06 15:54:36 +01:00
Alex Pyrgiotis
a6aa66f925
Remove a stale Shiboken6 pin
Remove the Shiboken6 pin for our Linux and macOS platforms, since a new
upstream package has been released that has wheels for every platform.

Also, remove the `sed` command from our dangerzone.spec, whose purpose
was to nullify this pin for our Fedora packages.

Fixes #1061
2025-02-19 11:43:30 +02:00
Alex Pyrgiotis
856de3fd46
grype: Ignore CVE-2025-0665
Ignore the CVE-2025-0665 vulnerability, since it's a libcurl one, and
the Dangerzone container does not make network calls. Also, it seems
that Debian Bookworm is not affected.
2025-02-10 12:31:08 +02:00
Alex Pyrgiotis
88a6b37770
Add support for Python 3.13
Bump our max supported Python version to 3.13, now that PySide6 supports
it.

Fixes #992
2025-01-27 21:40:27 +02:00
Alex Pyrgiotis
fb90243668
Symlink /usr in Debian container image
Update our Dockerfile and entrypoint script in order to reuse the /usr
dir in the inner and outer container image.

Refs #1048
2025-01-27 21:40:27 +02:00
Alex Pyrgiotis
9724a16d81
Mask some extra paths in gVisor's OCI config
Mask some paths of the outer container in the OCI config of the inner
container. This is done to avoid leaking any sensitive information from
Podman / Docker / gVisor, since we reuse the same rootfs.

Refs #1048
2025-01-27 21:40:27 +02:00
Alex Pyrgiotis
cf43a7a0c4
docs: Add design document for artifact reproducibility
Refs #1047
2025-01-27 21:40:27 +02:00
Alex Pyrgiotis
cae4187550
Update RELEASE.md
Co-authored-by: Alexis Métaireau <alexis@freedom.press>
2025-01-27 21:40:27 +02:00
Alex Pyrgiotis
cfa4478ace
ci: Add a CI job that enforces image reproducibility
Add a CI job that uses the `reproduce.py` dev script to enforce image
reproducibility, for every PR that we send to the repo.

Fixes #1047
2025-01-27 21:40:27 +02:00
Alex Pyrgiotis
2557be9bc0
dev_scripts: Add script for enforcing image reproducibility
Add a dev script for Linux platforms that verifies that a source image
can be reproducibly built from the current Git commit. The
reproducibility check is enforced by the `diffoci` tool, which is
downloaded as part of running the script.
2025-01-27 21:40:27 +02:00
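As a loose illustration of the check this script enforces, a stripped-down version could shell out to `diffoci` as below. The image tags and the `podman://` references are assumptions, and the real script also downloads `diffoci` and builds the image first.

    import subprocess
    import sys

    def images_are_identical(source_image: str, rebuilt_image: str) -> bool:
        """Compare two container images with diffoci; a non-zero exit means they differ."""
        res = subprocess.run(
            ["diffoci", "diff", f"podman://{source_image}", f"podman://{rebuilt_image}"]
        )
        return res.returncode == 0

    if __name__ == "__main__":
        # Hypothetical tags; the real script derives the tag from the Git commit.
        source = "dangerzone.rocks/dangerzone:source"
        rebuilt = "dangerzone.rocks/dangerzone:rebuilt"
        if not images_are_identical(source, rebuilt):
            sys.exit("The rebuilt image does not match the source image")
        print("Image is reproducible")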
Alex Pyrgiotis
235d71354a
Allow setting a tag for the container image
Allow setting a tag for the container image when building it with the
`build-image.py` script. This should be used for development purposes
only, since the proper image name should be dictated by the script.
2025-01-27 21:40:27 +02:00
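A minimal sketch of such an optional `--tag` override in a build script, with hypothetical defaults (not the actual `build-image.py` code):

    import argparse
    import subprocess

    parser = argparse.ArgumentParser(description="Build the Dangerzone container image")
    parser.add_argument(
        "--tag",
        default=None,
        help="Override the image tag (for development purposes only)",
    )
    args = parser.parse_args()

    # Hypothetical fallback; normally the script dictates the tag itself.
    tag = args.tag or "dev"
    subprocess.run(
        ["podman", "build", "-t", f"dangerzone.rocks/dangerzone:{tag}", "."],
        check=True,
    )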
Alex Pyrgiotis
5d49f5abdb
ci: Scan the latest image for CVEs
Update the Debian snapshot date to the current one, so that we always
scan the latest image for CVEs.

Refs #1057
2025-01-27 21:40:27 +02:00
Alex Pyrgiotis
0ce7773ca1
Render the Dockerfile from a template and some params
Allow updating the Dockerfile from a template and some envs, so that
it's easier to bump the dates in it.
2025-01-27 21:40:27 +02:00
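For illustration, rendering a Dockerfile from a Jinja2 template with a couple of date parameters could look roughly like this; the file names and variable names are assumptions, not the project's exact ones.

    from pathlib import Path
    from jinja2 import Template

    # Hypothetical parameters; the point of the templating is to make these
    # dates easy to bump.
    params = {
        "DEBIAN_IMAGE_DATE": "20250113",
        "GVISOR_ARCHIVE_DATE": "20250113",
    }

    template = Template(Path("Dockerfile.in").read_text())
    Path("Dockerfile").write_text(template.render(**params))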
Alex Pyrgiotis
fa27f4b063
Add jinja2-cli package dependency
Add jinja2-cli as a package dependency, since it will be used to create
the Dockerfile from some user parameters and a template.
2025-01-23 23:26:56 +02:00
Alex Pyrgiotis
8e8a515b64
Allow using the container engine cache when building our image
Remove our suggestions for not using the container cache, which stemmed
from the fact that our Dangerzone image was not reproducible. Now that
we have switched to Debian Stable and the Dockerfile is all we need to
reproducibly build the exact same container image, we can just use the
cache to speed up builds.
2025-01-23 23:25:43 +02:00
Alex Pyrgiotis
270cae1bc0
Rename vendor-pymupdf.py to debian-vendor-pymupdf.py
Rename the `vendor-pymupdf.py` script to `debian-vendor-pymupdf.py`,
since it's used only when building Debian packages.
2025-01-23 23:25:43 +02:00
Alex Pyrgiotis
14bb6c0e39
Do not use poetry.lock when building the container image
Remove all the scaffolding in our `build-image.py` script for using the
`poetry.lock` file, now that we install PyMuPDF from the Debian repos.
2025-01-23 23:25:39 +02:00
Alex Pyrgiotis
033ce0986d
Switch base image to Debian Stable
Switch base image from Alpine Linux to Debian Stable, in order to reduce
our image footprint, improve our security posture, and build our
container image reproducibly.

Fixes #1046
Refs #1047
2025-01-23 23:24:48 +02:00
Alex Pyrgiotis
935396565c
Reuse the same rootfs for the inner and outer container
Remove the need to copy the Dangerzone container image (used by the
inner container) within a wrapper gVisor image (used by the outer
container). Instead, use the root of the container filesystem for both
containers. We can do this safely because we don't mount any secrets to
the container, and because gVisor offers a read-only view of the
underlying filesystem.

Fixes #1048
2025-01-23 23:24:48 +02:00
Alex Pyrgiotis
e29837cb43
Copy gVisor public key and a helper script in container helpers
Download and copy the following artifacts that will be used for building
a Debian-based Dangerzone container image in the subsequent commits:
* The APT key for the gVisor repo [1]
* A helper script for building reproducible Debian images [2]

[1] https://gvisor.dev/archive.key
[2] d15cf12b26/repro-sources-list.sh
2025-01-23 23:24:48 +02:00
Alex Pyrgiotis
8568b4bb9d
Move container-only build context to dangerzone/container
Move container-only build context (currently just the entrypoint script)
from `dangerzone/gvisor_wrapper` to `dangerzone/container_helpers`.
Update the rest of the scripts to use this location as well.
2025-01-23 23:24:48 +02:00
Alex Pyrgiotis
be1fa7a395
Whitespace fixes 2025-01-23 23:24:47 +02:00
Alexis Métaireau
b2f4e2d523
Bump poetry.lock
2025-01-23 16:26:07 +01:00
Alexis Métaireau
7409966253
Remove ${Python3:Depends} as it's not used at the moment. 2025-01-23 16:26:06 +01:00
Alexis Métaireau
40fb6579f6
Alternatives for debian/control 2025-01-23 16:26:06 +01:00
Alexis Métaireau
6ae91b024e
Use platformdirs to find user configuration files
The previous library we were using for this (`appdirs`) is dead upstream
and not supported anymore in Debian testing.

Fixes #1058
2025-01-23 16:26:06 +01:00
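For reference, typical platformdirs usage for locating a per-user configuration directory looks like this; the application name below is only an example.

    from pathlib import Path
    from platformdirs import user_config_dir

    # e.g. ~/.config/dangerzone on Linux, with OS-appropriate paths elsewhere.
    config_dir = Path(user_config_dir("dangerzone"))
    settings_path = config_dir / "settings.json"
    print(settings_path)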
Alexis Métaireau
c2841dcc08
Run ruff format
2025-01-23 14:48:33 +01:00
Alexis Métaireau
df5ccb3f75
Fedora: bypass the shiboken specific version. 2025-01-23 14:39:50 +01:00
Alexis Métaireau
9c6c2e1051
build: pin shiboken6 to specific versions 2025-01-23 12:52:48 +01:00
Alexis Métaireau
23f3ad1f46
doc: bump the Docker Desktop version as part of the RELEASE procedure
2025-01-21 10:21:24 +01:00
Alexis Métaireau
970a82f432
Bind Alert instances to the main window alert property 2025-01-21 10:21:24 +01:00
Alexis Métaireau
3d5cacfffb
Warn users if the minimum version of Docker Desktop is not met
This only happens on Windows and macOS.

Fixes #693
2025-01-21 10:21:24 +01:00
Alexis Métaireau
c407e2ff84
doc: update Debian Trixie installation instructions
Starting with Debian Trixie, `apt secure` relies on `sqv` to do its verification, which doesn't support the GPG keybox database format.

At the same time, using the standard PGP base64 format makes the verification fail for versions of `apt secure` which rely on `gpg`, as the subkey isn't detected there.

Fixes #1055
2025-01-20 14:10:15 +01:00
Alexis Métaireau
7f418118e6
CI: Drop Fedora 39 from the CI checks
2025-01-16 11:51:22 +01:00
Alexis Métaireau
02602b072a
Remove intermediate variables for conversion start/end logs
Also, state in the header that the logs are incomplete.
2025-01-16 11:35:07 +01:00
Alexis Métaireau
acf20ef700
Add a --debug flag to the CLI to help retrieve more logs
When the flag is set, the `RUNSC_DEBUG=1` environment variable is added
to the outer container, and stderr is captured in a separate thread before its output is printed.
2025-01-16 11:35:06 +01:00
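A simplified sketch of the capture-stderr-in-a-thread idea, using a generic command invocation; the actual container command line and environment plumbing in Dangerzone differ.

    import os
    import subprocess
    import threading

    def run_with_debug(cmd: list[str]) -> int:
        """Run a command with RUNSC_DEBUG=1 and drain stderr in a separate thread."""
        proc = subprocess.Popen(
            cmd,
            env={**os.environ, "RUNSC_DEBUG": "1"},
            stderr=subprocess.PIPE,
            text=True,
        )
        captured: list[str] = []

        def drain() -> None:
            # Read continuously so the child never blocks on a full stderr pipe.
            for line in proc.stderr:
                captured.append(line)

        t = threading.Thread(target=drain, daemon=True)
        t.start()
        returncode = proc.wait()
        t.join()
        print("".join(captured))
        return returncode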
Alexis Métaireau
3499010d8e
docs(install): store GPG keys in the base64 format 2025-01-15 19:48:00 +01:00
Alexis Métaireau
2423fc18c5
CI: Store the signature key using the base64 format
The GPG binary format used until now doesn't seem to please `sqv`, which
is now used by default on Debian Trixie.

Fixes #1052
2025-01-15 19:39:02 +01:00
Alexis Métaireau
1298e9c398
build: add build_scripts/env.py to the hashed files
It contains information that defines the build environments, and as such, modifying it should result in a new release of the dev containers.
2025-01-08 06:18:30 +01:00
Alexis Métaireau
00e58a8707
build: add poetry-plugin-export to the dependencies
Since Poetry 2.0.0, the `export` command has been removed and it's
advised to use the "poetry-plugin-export" package instead.

This commit adds this dependency to the different places it's needed
(debian environments, CI, build instructions, etc).
2025-01-08 06:18:01 +01:00
Alexis Métaireau
77975a8e50
Update links to the 0.8.1 release
2024-12-24 18:11:17 +01:00
Alexis Métaireau
5b9e9c82fc
Add a security advisory for gst-plugins-base 2024-12-24 18:11:17 +01:00
Alexis Métaireau
f4fa1f87eb
Bump version to 0.8.1 2024-12-24 18:11:17 +01:00
Alexis Métaireau
eb345562da
Lint: Add click to the dependencies used by mypy
2024-12-17 17:44:51 +01:00
jkarasti
d080d03f5a
Lint: Enable isort (I) rules 2024-12-17 17:44:32 +01:00
jkarasti
767bfa7e48
Lint: Fix unused-variable (F841) 2024-12-17 17:44:32 +01:00
jkarasti
37ec91aae2
Lint: Fix f-string-missing-placeholders (F541) 2024-12-17 17:44:32 +01:00
jkarasti
cecfe63338
Lint: Fix unused-import (F401) 2024-12-17 17:44:32 +01:00
jkarasti
4da6b92e12
Format: Run ruff format over the source code 2024-12-17 17:44:31 +01:00
jkarasti
b06d1aebed
Lint: Remove unused black and isort dependencies 2024-12-17 17:44:30 +01:00
jkarasti
da5490a5a1
Lint: Merge mypy makefile targets into the lint target 2024-12-17 17:44:09 +01:00
jkarasti
e96b44e10a
Lint: adapt Makefile targets for ruff
- Use `ruff` instead of `black` and `isort` in the `lint` target for linting and code formatting.

- Add a new target `fix` which applies all suggestions from `ruff check` and `ruff format`.
2024-12-17 17:44:09 +01:00
jkarasti
7624624471
Lint: add ruff for linting and formatting 2024-12-17 17:44:07 +01:00
Alex Pyrgiotis
fb7c2088e2
grype: Ignore CVE-2024-11053
Ignore the CVE-2024-11053 vulnerability, since it's a libcurl one, and
the Dangerzone container does not make network calls.

Also, clear the previous vulnerabilities, now that we have a new image
out.
2024-12-17 17:41:07 +01:00
Alexis Métaireau
1ea2f109cb
Run apt update before running apt get install 2024-12-17 17:24:46 +01:00
dependabot[bot]
df3063a825
build(deps): bump anchore/scan-action from 5 to 6
Bumps [anchore/scan-action](https://github.com/anchore/scan-action) from 5 to 6.
- [Release notes](https://github.com/anchore/scan-action/releases)
- [Changelog](https://github.com/anchore/scan-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/anchore/scan-action/compare/v5...v6)

---
updated-dependencies:
- dependency-name: anchore/scan-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-16 19:49:37 +02:00
jkarasti
57bb7286ef
Install more type stubs wanted by mypy 2024-12-16 19:49:03 +02:00
Alex Pyrgiotis
fbe05065c9
docs: Update release instructions
Update our release instructions with a way to run manual tasks via
`doit`. Also, add developer documentation on how to use `doit`, and some
tips and tricks.
2024-12-10 15:28:16 +02:00
Alex Pyrgiotis
54ffc63c4f
Add build-* targets in Makefile based on doit
Add Make targets that build release artifacts with doit.
2024-12-10 15:28:16 +02:00
Alex Pyrgiotis
bdc4cf13c4
Add doit configuration options 2024-12-10 15:28:16 +02:00
Alex Pyrgiotis
92d7bd6bee
Automate a large portion of our release tasks
Create a `dodo.py` file where we define the dependencies and targets of
each release task, as well as how to run it. Currently, we have
automated all of our Linux and macOS tasks, except for adding Linux
packages to the respective APT/YUM repos.

The tasks we have automated follow below:

    build_image               Build the container image using ./install/common/build-image.py
    check_container_runtime   Test that the container runtime is ready.
    clean_container_runtime   Clean the storage space of the container runtime.
    clean_prompt              Make sure that the user really wants to run the clean tasks.
    debian_deb                Build a Debian package for Debian Bookworm.
    debian_env                Build a Debian Bookworm dev environment.
    download_tessdata         Download the Tesseract data using ./install/common/download-tessdata.py
    fedora_env                Build Fedora dev environments.
    fedora_env:40             Build Fedora 40 dev environments
    fedora_env:41             Build Fedora 41 dev environments
    fedora_rpm                Build Fedora packages for every supported version.
    fedora_rpm:40             Build a Fedora 40 package
    fedora_rpm:40-qubes       Build a Fedora 40 package for Qubes
    fedora_rpm:41             Build a Fedora 41 package
    fedora_rpm:41-qubes       Build a Fedora 41 package for Qubes
    git_archive               Build a Git archive of the repo.
    init_release_dir          Create a directory for release artifacts.
    macos_build_dmg           Build the macOS .dmg file for Dangerzone.
    macos_check_cert          Test that the Apple developer certificate can be used.
    macos_check_system        Run macOS specific system checks, as well as the generic ones.
    poetry_install            Setup the Poetry environment

Closes #1016
2024-12-10 15:27:20 +02:00
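For readers unfamiliar with doit, a `dodo.py` task is just a `task_*()` function returning a dict of actions, dependencies, and targets. A heavily simplified example, with illustrative paths and commands rather than the real task definitions:

    # dodo.py -- doit discovers functions named task_*()

    def task_download_tessdata():
        """Download the Tesseract data using ./install/common/download-tessdata.py"""
        return {
            "actions": ["python install/common/download-tessdata.py"],
            "targets": ["share/tessdata"],
        }

    def task_build_image():
        """Build the container image using ./install/common/build-image.py"""
        return {
            "actions": ["python install/common/build-image.py"],
            "file_dep": ["Dockerfile"],
            "targets": ["share/container.tar"],
        }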
Alex Pyrgiotis
7c5a191a5c
Add doit in Poetry as package dependency
Add the doit automation tool in our `pyproject.toml` and `poetry.lock`
file as a package-related dependency, since we don't want to ship it to
our end users.
2024-12-10 11:34:25 +02:00
Alex Pyrgiotis
4bd794dbd1
Allow passing true/false to --use-cache build arg 2024-12-10 11:34:25 +02:00
Alex Pyrgiotis
3eac00b873
ci: Work with image tarballs that are not tagged as 'latest'
Now that our image tarball is not tagged as 'latest', we must first grab
the image tag, and then refer to it. We can grab the tag either
from `share/image-id.txt` (if available) or with:

    docker images dangerzone.rocks/dangerzone --format {{ .Tag }}
2024-12-10 11:31:39 +02:00
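The "grab the tag first, then refer to it" flow could be sketched like this; the fallback to querying the runtime is an assumption about what the CI does.

    import subprocess
    from pathlib import Path

    def get_expected_tag() -> str:
        """Read the image tag from share/image-id.txt, or ask the container runtime."""
        tag_file = Path("share/image-id.txt")
        if tag_file.exists():
            return tag_file.read_text().strip()
        out = subprocess.check_output(
            ["docker", "images", "dangerzone.rocks/dangerzone", "--format", "{{ .Tag }}"],
            text=True,
        )
        return out.split()[0]

    print("dangerzone.rocks/dangerzone:" + get_expected_tag())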
Alex Pyrgiotis
ec9f8835e0
Move container security arg to proper place
Now that #748 has been merged, we can move the `--userns nomap` argument
to the list with the rest of our security arguments.
2024-12-10 11:31:39 +02:00
Alex Pyrgiotis
0383081394
Factor out container utilities to separate module 2024-12-10 11:31:39 +02:00
Alex Pyrgiotis
25fba42022
Extend the interface of the isolation provider
Add the following two methods in the isolation provider:
1. `.is_available()`: Mainly used for the Container isolation provider,
   it specifies whether the container runtime is up and running. May be
   used in the future by other similar providers.
2. `.should_wait_install()`: Whether the isolation provider takes a
   while to be installed. Should be `True` only for the Container
   isolation provider, for the time being.
2024-12-10 11:29:00 +02:00
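In interface terms, the two methods could be sketched roughly as below; this is a simplification, and the availability check shown for the Container provider is only a plausible stand-in.

    import subprocess
    from abc import ABC, abstractmethod

    class IsolationProvider(ABC):
        @abstractmethod
        def is_available(self) -> bool:
            """Is the backing runtime (e.g. a container runtime) up and running?"""

        @abstractmethod
        def should_wait_install(self) -> bool:
            """Does installing this provider take long enough that the UI should wait?"""

    class Container(IsolationProvider):
        def is_available(self) -> bool:
            # Plausible stand-in: ping the container runtime.
            res = subprocess.run(["podman", "info"], capture_output=True)
            return res.returncode == 0

        def should_wait_install(self) -> bool:
            return True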
Alex Pyrgiotis
e54567b7d4
Fix minor typos in our docs 2024-12-10 11:29:00 +02:00
Alex Pyrgiotis
2a8355fb88
Update our release instructions 2024-12-10 11:29:00 +02:00
Alex Pyrgiotis
e22c795cb7
container: Revamp container image installation
Revamp the container image installation process in a way that does not
involve using image IDs. We don't want to rely on image IDs anymore,
since they are brittle (see
https://github.com/freedomofpress/dangerzone/issues/933). Instead, we
use image tags, as provided in the `image-id.txt` file.  This allows us
to check fast if an image is up to date, and we no longer need to
maintain multiple image IDs from various container runtimes.

Refs #933
Refs #988
Fixes #1020
2024-12-10 11:29:00 +02:00
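Conceptually, the fast up-to-date check reduces to asking the runtime whether it already has the tag named in `image-id.txt`; a hedged sketch, assuming a Podman runtime:

    import subprocess
    from pathlib import Path

    def is_image_installed(share_dir: Path) -> bool:
        """Fast check: does the runtime already have the expected image tag?"""
        tag = (share_dir / "image-id.txt").read_text().strip()
        res = subprocess.run(
            ["podman", "image", "exists", f"dangerzone.rocks/dangerzone:{tag}"]
        )
        return res.returncode == 0

    print(is_image_installed(Path("share")))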
Alex Pyrgiotis
909560353d
Build and tag Dangerzone images
Build Dangerzone images and tag them with a unique ID that stems from
the Git repo. Note that using tags as image IDs instead of regular image
IDs breaks the current Dangerzone expectations, but this will be
addressed in subsequent commits.
2024-12-10 11:18:23 +02:00
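The unique, Git-derived tag could be produced roughly as follows; the exact tag format used by the project is an assumption here.

    import subprocess

    def git_image_tag() -> str:
        """Derive an image tag from the current Git commit, e.g. '0.8.1-42-gabcdef1'."""
        desc = subprocess.check_output(
            ["git", "describe", "--long", "--first-parent"], text=True
        ).strip()
        return desc.lstrip("v")

    tag = git_image_tag()
    subprocess.run(
        ["podman", "build", "-t", f"dangerzone.rocks/dangerzone:{tag}", "."],
        check=True,
    )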
Alex Pyrgiotis
6a5e76f2b4
Build and tag Dangerzone images
Build Dangerzone images and tag them with a unique ID that stems from
the Git repo. Note that using tags as image IDs instead of regular image
IDs breaks the current Dangerzone expectations, but this will be
addressed in subsequent commits.
2024-12-10 11:18:23 +02:00
Alex Pyrgiotis
20152fac13
container: Factor out loading an image tarball 2024-12-10 11:18:23 +02:00
Alex Pyrgiotis
6b51d56e9f
container: Manipulate Dangerzone image tags
Add the following methods that allow the `Container` isolation provider
to work with tags for the Dangerzone image:
* `list_image_tag()`
* `delete_image_tag()`
* `add_image_tag()`
2024-12-10 11:18:23 +02:00
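A hedged sketch of what these helpers might look like as thin wrappers over the container runtime CLI; the real methods live on the Container provider and add error handling and logging.

    import subprocess

    RUNTIME = "podman"  # or "docker"
    IMAGE_NAME = "dangerzone.rocks/dangerzone"

    def list_image_tag() -> list[str]:
        out = subprocess.check_output(
            [RUNTIME, "image", "ls", IMAGE_NAME, "--format", "{{ .Tag }}"], text=True
        )
        return out.split()

    def add_image_tag(cur_tag: str, new_tag: str) -> None:
        subprocess.check_call(
            [RUNTIME, "tag", f"{IMAGE_NAME}:{cur_tag}", f"{IMAGE_NAME}:{new_tag}"]
        )

    def delete_image_tag(tag: str) -> None:
        subprocess.check_call([RUNTIME, "rmi", f"{IMAGE_NAME}:{tag}"])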
Alex Pyrgiotis
309bd12423
Move container-specific method from base class
Move the `is_runtime_available()` method from the base
`IsolationProvider` class, and into the `Dummy` provider class. This
method was originally defined in the base class, in order to be mocked
in our tests for the `Dummy` provider. There's no reason for the `Qubes`
class to have it though, so we can just move it to the `Dummy` provider.
2024-12-09 19:19:21 +02:00
Alex Pyrgiotis
1c0a99fcd2
Update changelog
2024-12-09 18:46:25 +02:00
jkarasti
4b5f4b27d7
Fix: Dangerzone installed using an msi built with WiX Toolset v3 is not uninstalled by an msi built with WiX Toolset v5
Work around an issue, after upgrading from WiX Toolset v3 to v5, where the previous
version of Dangerzone is not uninstalled during the upgrade, by checking if the older installation
exists in "C:\Program Files (x86)\Dangerzone".

Also handle a special case for Dangerzone 0.8.0, which allows choosing the install location
during install, by checking if the registry key for it exists.

Note that this seems to allow installing Dangerzone 0.8.0 after installing Dangerzone from this branch.
In this case, the installer errors until Dangerzone 0.8.0 is uninstalled again.
2024-12-09 18:42:12 +02:00
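A rough, Windows-only sketch of the kind of detection described above: look for the old 32-bit install folder, and for a registry key left by the 0.8.0 installer. The registry key path used here is a placeholder, not the actual one.

    import sys
    from pathlib import Path

    OLD_INSTALL_DIR = Path(r"C:\Program Files (x86)\Dangerzone")

    def old_dangerzone_present() -> bool:
        """Detect a Dangerzone installed by the old WiX v3 (32-bit) installer."""
        if OLD_INSTALL_DIR.exists():
            return True
        if sys.platform == "win32":
            import winreg
            try:
                # Placeholder key; the real check targets the 0.8.0 installer's key.
                with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\Dangerzone"):
                    return True
            except OSError:
                return False
        return False

    print(old_dangerzone_present())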
JKarasti
f537d54ed2
Change: Build a 64-bit installer 2024-12-09 18:42:12 +02:00
JKarasti
32641603ee
Docs: Update documentation for WiX Toolset 5 2024-12-09 18:42:12 +02:00
JKarasti
a915ae8442
Change: Update the build-app.bat script to work with WiX Toolset v5
- WiX Toolset v3 used to validate the msi package by default. In v5 that has moved to a new command, so add a new validation step to the script.

- Also remove the step that uses `insignia.exe` to sign the Dangerzone.msi with the digital signatures from its external cab archives.

  In WiX Toolset v4 and newer, insignia is replaced with a new command `wix msi inscribe`, but we tell wix to embed the cabinets into the .msi
  (That's what `EmbedCab="yes"` in the Media / MediaTemplate element does) so signing them separately is not necessary. [0]

  [0] https://wixtoolset.org/docs/tools/signing/
2024-12-09 18:42:12 +02:00
JKarasti
38a803085f
CI: Use WiX Toolset v5 to build the msi 2024-12-09 18:42:11 +02:00
JKarasti
2053c98c09
Change: Write Dangerzone.wxs inside the script directly
Also reduce duplication slightly by defining `build_dir`, `cx_freeze_dir` and `dist_dir`.
2024-12-09 18:42:11 +02:00
JKarasti
3db1ca1fbb
Fix: Make GUIDs uppercase
See [1]

[1] https://learn.microsoft.com/en-us/windows/win32/msi/guid
2024-12-09 18:42:11 +02:00
JKarasti
3fff16cc7e
Change: Write dangerzone version and upgradecode into Package and SummaryInformation elements directly 2024-12-09 18:42:11 +02:00
JKarasti
8bd9c05832
Refactor: build_dir_xml() function
- rename for clarity
- remove unnecessary checks
2024-12-09 18:42:11 +02:00
JKarasti
41e78c907f
Change: Wrap all files to be included in the .msi in a ComponentGroupRef
With this, all the files are organised into Components,
each of which points to a Directory defined in the StandardDirectory element.
This simplifies the Feature element considerably, as the only thing it needs
to include everything in the built msi is a reference to `ApplicationComponents`.
2024-12-09 18:42:11 +02:00
JKarasti
265c1dde97
Refactor: Simplify build_data() function
- Rename variables to be clearer about what they do
- reorganise code
- simplify a few checks
2024-12-09 18:42:11 +02:00
JKarasti
ccb302462d
Change: Swap Media element with MediaTemplate
This is a new default and makes authoring slightly simpler without any functional changes.
2024-12-09 18:42:11 +02:00
JKarasti
4eadc30605
Change: Convert Wix UI extension authoring to WiX Toolset v5
Due to limitations of the xml.etree.ElementTree library, add the items in the root element as a dictionary
2024-12-09 18:42:11 +02:00
JKarasti
abb71e0fe5
Change: Wrap ProgramFilesFolder component with a StandardDirectory component 2024-12-09 18:42:11 +02:00
JKarasti
4638444290
Change: Wrap ProgramMenuFolder component with a StandardDirectory component 2024-12-09 18:42:11 +02:00
jkarasti
68da50a6b2
Change: Disable AllowSameVersionUpgrades
Since running `wix msi validate` with it set to `yes` causes an error.
2024-12-09 18:42:11 +02:00
JKarasti
cc5ba29455
Change: Merge Product into Package element
- The Keywords and Description items move under a new SummaryInformation element.
- Shuffle things around so that elements previously under the product element are now under the Package element.
- Rename SummaryCodepage in SummaryInformation to Codepage and remove a duplicate Manufacturer item.
- Remove InstallerVersion and let WiX set it to default value. (500 a.k.a Windows 7)
2024-12-09 18:42:11 +02:00
JKarasti
180b9442ab
Change: Rename INSTALLDIR to INSTALLFOLDER
It's the new default name for it
2024-12-09 18:42:11 +02:00
JKarasti
f349e16523
Change: Update WiX schema namespace
Also rename `root_el` to `wix_el`.

WiX version 5 uses the same namespace.
2024-12-09 18:42:11 +02:00
JKarasti
adddb1ecb7
Change: Stop generating an XML declaration at the top of the WiX authoring
It's not needed anymore.
2024-12-09 18:42:11 +02:00
JKarasti
8e57d81a74
Fix: Make generated WiX authoring pass WixCop checks
WixCop.exe is a built-in formatting tool that comes with WiX Toolset v3. This fixes the `wix convert` command not being able to run.
2024-12-09 18:42:11 +02:00
JKarasti
3bcf5fc147
Fix: SyntaxWarning while generating Dangerzone.wxs 2024-12-09 18:42:10 +02:00
Alexis Métaireau
60df4f7e35
docs: Update the release instructions
This commit makes changes to the release instructions, preferring bash
examples when possible. As a result, the QA.md and RELEASE.md
files have been separated, and a new `generate-release-tasks.py` script
has been introduced.
2024-12-03 15:07:50 +01:00
Alexis Métaireau
9fa3c80404
Update QA script to support Fedora 41 2024-12-03 15:07:50 +01:00
Alexis Métaireau
4bf7f9cbb4
docs: Add a step to download tesseract data in the RELEASE notes 2024-12-03 15:07:49 +01:00
Alexis Métaireau
fdc27c4d3b
CI: check that the changelog is populated on each pull request
2024-12-02 11:57:30 +01:00
Alexis Métaireau
23f5f96220
build: Publish the built artifacts
- Fedora `.rpm` files
- Windows `.msi`
- macOS `.app`

Are now published as part of the CI pipelines.
2024-12-02 11:35:54 +01:00
Alexis Métaireau
5744215d99
Issue templates: rephrase how we ask docker info to the users
People might not know if the issue is related to Docker or not, and we've
had to ask them for additional information after they opened the issue.

This makes it clearer that this information might be useful.
2024-11-28 18:04:24 +01:00
Alex Pyrgiotis
c89988654c
Drop checks for the FPF-maintained PySide6 package
There are various places in our release process
(build/installation/release instructions and CI checks) where we make
sure that the FPF-maintained PySide6 package works in Fedora 39. Now
that Fedora 39 is nearing its EOL date, we can remove those.
2024-11-26 16:06:38 +01:00
Alex Pyrgiotis
7eaa0cfe50
Drop Fedora 39 support
Drop Fedora 39 support by removing it from our CI and installation
instructions.

Closes #999
2024-11-26 16:06:35 +01:00
Alexis Métaireau
9d69e3b261
CI: Do not scan release assets for mac silicon for now. 2024-11-25 18:54:46 +01:00
Alex Pyrgiotis
1d2a91e8c5
FIXUP: Small fixes
Some checks failed
Tests / windows (push) Has been cancelled
Tests / macOS (arch64) (push) Has been cancelled
Tests / macOS (x86_64) (push) Has been cancelled
Tests / build-deb (debian bookworm) (push) Has been cancelled
Tests / build-deb (debian bullseye) (push) Has been cancelled
Tests / build-deb (debian trixie) (push) Has been cancelled
Tests / build-deb (ubuntu 20.04) (push) Has been cancelled
Tests / build-deb (ubuntu 22.04) (push) Has been cancelled
Tests / build-deb (ubuntu 24.04) (push) Has been cancelled
Tests / build-deb (ubuntu 24.10) (push) Has been cancelled
Tests / install-deb (debian bookworm) (push) Has been cancelled
Tests / install-deb (debian bullseye) (push) Has been cancelled
Tests / install-deb (debian trixie) (push) Has been cancelled
Tests / install-deb (ubuntu 20.04) (push) Has been cancelled
Tests / install-deb (ubuntu 22.04) (push) Has been cancelled
Tests / install-deb (ubuntu 24.04) (push) Has been cancelled
Tests / install-deb (ubuntu 24.10) (push) Has been cancelled
Tests / build-install-rpm (fedora 39) (push) Has been cancelled
Tests / build-install-rpm (fedora 40) (push) Has been cancelled
Tests / build-install-rpm (fedora 41) (push) Has been cancelled
Tests / run tests (debian bookworm) (push) Has been cancelled
Tests / run tests (debian bullseye) (push) Has been cancelled
Tests / run tests (debian trixie) (push) Has been cancelled
Tests / run tests (fedora 39) (push) Has been cancelled
Tests / run tests (fedora 40) (push) Has been cancelled
Tests / run tests (fedora 41) (push) Has been cancelled
Tests / run tests (ubuntu 20.04) (push) Has been cancelled
Tests / run tests (ubuntu 22.04) (push) Has been cancelled
Tests / run tests (ubuntu 24.04) (push) Has been cancelled
Tests / run tests (ubuntu 24.10) (push) Has been cancelled
2024-11-21 18:55:33 +02:00
Alex Pyrgiotis
82c29b2098
Make README.md point to INSTALL.md for instructions
Our repo's README.md should point to our INSTALL.md for installation
instructions, and not the other way around. This fixes an issue with
INSTALL.md pointing to a stale README.md version. Updating our README
before tagging is not possible, since the latest version is the one that
our users visit, and it can't point to download links that do not exist.

Fixes #1003
2024-11-21 18:55:33 +02:00
Alex Pyrgiotis
ce5aca4ba1
dev_scripts: Implement two more steps
Implement the following steps from the QA docs:

1. Check if the latest Python version that we support is installed. For
   example, we currently support Python 3.12, so we add code to check
   that the latest Python 3.12.x version is installed.
2. Download the Tesseract data using our script, both on Windows and
   Linux.
2024-11-21 18:29:43 +02:00
Alex Pyrgiotis
13f38cc8a9
Update our description 2024-11-21 18:29:43 +02:00
Alex Pyrgiotis
57df6fdfe5
Increase the size of the dz qube to 5GiB
Increase the size of the `dz` qube in our build instructions, from the
default 2GiB to a suggested 5GiB, to account for the extra space that
the instructions need (e.g., downloading the Tesseract data).
2024-11-21 18:29:43 +02:00
Alexis Métaireau
20354e7c11
CI: Use grep + cut rather than jq to get the version number
Some checks are pending
Tests / macOS (x86_64) (push) Blocked by required conditions
Tests / build-deb (debian bookworm) (push) Blocked by required conditions
Tests / build-deb (debian bullseye) (push) Blocked by required conditions
Tests / build-deb (debian trixie) (push) Blocked by required conditions
Tests / build-deb (ubuntu 20.04) (push) Blocked by required conditions
Tests / build-deb (ubuntu 22.04) (push) Blocked by required conditions
Tests / build-deb (ubuntu 24.04) (push) Blocked by required conditions
Tests / build-deb (ubuntu 24.10) (push) Blocked by required conditions
Tests / install-deb (debian bookworm) (push) Blocked by required conditions
Tests / install-deb (debian bullseye) (push) Blocked by required conditions
Tests / install-deb (debian trixie) (push) Blocked by required conditions
Tests / install-deb (ubuntu 20.04) (push) Blocked by required conditions
Tests / install-deb (ubuntu 22.04) (push) Blocked by required conditions
Tests / install-deb (ubuntu 24.04) (push) Blocked by required conditions
Tests / install-deb (ubuntu 24.10) (push) Blocked by required conditions
Tests / build-install-rpm (fedora 39) (push) Blocked by required conditions
Tests / build-install-rpm (fedora 40) (push) Blocked by required conditions
Tests / build-install-rpm (fedora 41) (push) Blocked by required conditions
Tests / run tests (debian bookworm) (push) Blocked by required conditions
Tests / run tests (debian bullseye) (push) Blocked by required conditions
Tests / run tests (debian trixie) (push) Blocked by required conditions
Tests / run tests (fedora 39) (push) Blocked by required conditions
Tests / run tests (fedora 40) (push) Blocked by required conditions
Tests / run tests (fedora 41) (push) Blocked by required conditions
Tests / run tests (ubuntu 20.04) (push) Blocked by required conditions
Tests / run tests (ubuntu 22.04) (push) Blocked by required conditions
Tests / run tests (ubuntu 24.04) (push) Blocked by required conditions
Tests / run tests (ubuntu 24.10) (push) Blocked by required conditions
Scan latest app and container / security-scan-container (push) Waiting to run
Scan latest app and container / security-scan-app (push) Waiting to run
GitHub macOS runners don't come with `jq` pre-installed.
2024-11-21 12:34:15 +01:00
Alexis Métaireau
d722800a4b
Update Lock file
Some checks are pending
Tests / macOS (x86_64) (push) Blocked by required conditions
Tests / build-deb (debian bookworm) (push) Blocked by required conditions
Tests / build-deb (debian bullseye) (push) Blocked by required conditions
Tests / build-deb (debian trixie) (push) Blocked by required conditions
Tests / build-deb (ubuntu 20.04) (push) Blocked by required conditions
Tests / build-deb (ubuntu 22.04) (push) Blocked by required conditions
Tests / build-deb (ubuntu 24.04) (push) Blocked by required conditions
Tests / build-deb (ubuntu 24.10) (push) Blocked by required conditions
Tests / install-deb (debian bookworm) (push) Blocked by required conditions
Tests / install-deb (debian bullseye) (push) Blocked by required conditions
Tests / install-deb (debian trixie) (push) Blocked by required conditions
Tests / install-deb (ubuntu 20.04) (push) Blocked by required conditions
Tests / install-deb (ubuntu 22.04) (push) Blocked by required conditions
Tests / install-deb (ubuntu 24.04) (push) Blocked by required conditions
Tests / install-deb (ubuntu 24.10) (push) Blocked by required conditions
Tests / build-install-rpm (fedora 39) (push) Blocked by required conditions
Tests / build-install-rpm (fedora 40) (push) Blocked by required conditions
Tests / build-install-rpm (fedora 41) (push) Blocked by required conditions
Tests / run tests (debian bookworm) (push) Blocked by required conditions
Tests / run tests (debian bullseye) (push) Blocked by required conditions
Tests / run tests (debian trixie) (push) Blocked by required conditions
Tests / run tests (fedora 39) (push) Blocked by required conditions
Tests / run tests (fedora 40) (push) Blocked by required conditions
Tests / run tests (fedora 41) (push) Blocked by required conditions
Tests / run tests (ubuntu 20.04) (push) Blocked by required conditions
Tests / run tests (ubuntu 22.04) (push) Blocked by required conditions
Tests / run tests (ubuntu 24.04) (push) Blocked by required conditions
Tests / run tests (ubuntu 24.10) (push) Blocked by required conditions
Scan latest app and container / security-scan-container (push) Waiting to run
Scan latest app and container / security-scan-app (push) Waiting to run
2024-11-20 17:42:59 +01:00
Alexis Métaireau
4cfc633cdb
Add a script to help generate release notes from merged pull requests 2024-11-20 17:42:59 +01:00
Alexis Métaireau
944d58dd8d
CI: Update container scanning to account for the arm64 architecture. 2024-11-20 17:12:20 +01:00
Alexis Métaireau
f3806b96af
Reapply "Disable gVisor's DirectFS feature.""
This reverts commit 68f8338d20.

Fixes #982
2024-11-20 16:41:56 +01:00
Alexis Métaireau
c4bb7c28c8
Unpin gVisor, now that upstream is able to support Linux Yama Mode 2
Fixes #298
2024-11-20 16:41:55 +01:00
Alexis Métaireau
630083bdea
CI: Only run the CI on pull requests, and on the "main" branch
Previously, the actions ran twice, because when developing we often
create feature branches and open pull requests.

This new setup requires us to open pull requests to trigger the CI.
2024-11-20 15:56:28 +01:00
Alexis Métaireau
504a9e1df2
tests: mark the hancom office suite tests for rerun on failures
Some checks failed
Tests / run tests (fedora 41) (push) Has been cancelled
Tests / run tests (ubuntu 20.04) (push) Has been cancelled
Tests / run tests (ubuntu 22.04) (push) Has been cancelled
Tests / run tests (ubuntu 24.04) (push) Has been cancelled
Tests / run tests (ubuntu 24.10) (push) Has been cancelled
Tests / windows (push) Has been cancelled
Tests / macOS (arch64) (push) Has been cancelled
Tests / macOS (x86_64) (push) Has been cancelled
Tests / build-deb (debian bookworm) (push) Has been cancelled
Tests / build-deb (debian bullseye) (push) Has been cancelled
Tests / build-deb (debian trixie) (push) Has been cancelled
Tests / build-deb (ubuntu 20.04) (push) Has been cancelled
Tests / build-deb (ubuntu 22.04) (push) Has been cancelled
Tests / build-deb (ubuntu 24.04) (push) Has been cancelled
Tests / build-deb (ubuntu 24.10) (push) Has been cancelled
Tests / install-deb (debian bookworm) (push) Has been cancelled
Tests / install-deb (debian bullseye) (push) Has been cancelled
Tests / install-deb (debian trixie) (push) Has been cancelled
Tests / install-deb (ubuntu 20.04) (push) Has been cancelled
Tests / install-deb (ubuntu 22.04) (push) Has been cancelled
Tests / install-deb (ubuntu 24.04) (push) Has been cancelled
Tests / install-deb (ubuntu 24.10) (push) Has been cancelled
Tests / build-install-rpm (fedora 39) (push) Has been cancelled
Tests / build-install-rpm (fedora 40) (push) Has been cancelled
Tests / build-install-rpm (fedora 41) (push) Has been cancelled
Tests / run tests (debian bookworm) (push) Has been cancelled
Tests / run tests (debian bullseye) (push) Has been cancelled
Tests / run tests (debian trixie) (push) Has been cancelled
Tests / run tests (fedora 39) (push) Has been cancelled
Tests / run tests (fedora 40) (push) Has been cancelled
It seems that these tests are flaky, and as a result our CI pipeline
fails from time to time. This will rerun them automatically when there
is an error.

See https://github.com/freedomofpress/dangerzone/issues/968 for more
information
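
For reference, with pytest and the pytest-rerunfailures plugin such a marker can look roughly like the sketch below; the test body is a stand-in that fails randomly, purely so the rerun marker has something to retry.

```python
import random

import pytest


# Requires the pytest-rerunfailures plugin. The random failure below only
# simulates a flaky test; the real tests exercise the Hancom office suite.
@pytest.mark.flaky(reruns=2)
def test_simulated_flaky_conversion():
    assert random.random() > 0.3
```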
2024-11-19 18:00:47 +01:00
jkarasti
a54a8f2057
Chore: Refresh lock file
Some checks failed
Tests / install-deb (ubuntu 22.04) (push) Has been cancelled
Tests / install-deb (ubuntu 24.04) (push) Has been cancelled
Tests / install-deb (ubuntu 24.10) (push) Has been cancelled
Tests / build-install-rpm (fedora 39) (push) Has been cancelled
Tests / build-install-rpm (fedora 40) (push) Has been cancelled
Tests / build-install-rpm (fedora 41) (push) Has been cancelled
Tests / run tests (debian bookworm) (push) Has been cancelled
Tests / run tests (debian bullseye) (push) Has been cancelled
Tests / run tests (debian trixie) (push) Has been cancelled
Tests / run tests (fedora 39) (push) Has been cancelled
Tests / run tests (fedora 40) (push) Has been cancelled
Tests / run tests (fedora 41) (push) Has been cancelled
Tests / run tests (ubuntu 20.04) (push) Has been cancelled
Tests / run tests (ubuntu 22.04) (push) Has been cancelled
Tests / run tests (ubuntu 24.04) (push) Has been cancelled
Tests / run tests (ubuntu 24.10) (push) Has been cancelled
Tests / windows (push) Has been cancelled
Tests / macOS (arch64) (push) Has been cancelled
Tests / macOS (x86_64) (push) Has been cancelled
Tests / build-deb (debian bookworm) (push) Has been cancelled
Tests / build-deb (debian bullseye) (push) Has been cancelled
Tests / build-deb (debian trixie) (push) Has been cancelled
Tests / build-deb (ubuntu 20.04) (push) Has been cancelled
Tests / build-deb (ubuntu 22.04) (push) Has been cancelled
Tests / build-deb (ubuntu 24.04) (push) Has been cancelled
Tests / build-deb (ubuntu 24.10) (push) Has been cancelled
Tests / install-deb (debian bookworm) (push) Has been cancelled
Tests / install-deb (debian bullseye) (push) Has been cancelled
Tests / install-deb (debian trixie) (push) Has been cancelled
Tests / install-deb (ubuntu 20.04) (push) Has been cancelled
2024-11-13 17:49:53 +02:00
jkarasti
35abd14f5f
Fix: Executables built with cx_freeze broken after On-Host Pixels to PDF conversion
On-host pixels to PDF conversion uncovered an incompatibility between PyMuPDF and cx_freeze. This bumps cx_freeze to 7.2.5, which includes the fix.
2024-11-13 17:49:53 +02:00
jkarasti
1bd18a175b
Revert "Fix: Error with cx_freeze when building the windows executables"
This reverts commit 95d7d8a4d9.
2024-11-13 17:49:52 +02:00
Alex Pyrgiotis
96aa56a6dc
Remove version prefix v from container filename
Some checks failed
Tests / run tests (fedora 41) (push) Has been cancelled
Tests / run tests (ubuntu 20.04) (push) Has been cancelled
Tests / run tests (ubuntu 22.04) (push) Has been cancelled
Tests / run tests (ubuntu 24.04) (push) Has been cancelled
Tests / run tests (ubuntu 24.10) (push) Has been cancelled
Tests / windows (push) Has been cancelled
Tests / macOS (arch64) (push) Has been cancelled
Tests / macOS (x86_64) (push) Has been cancelled
Tests / build-deb (debian bookworm) (push) Has been cancelled
Tests / build-deb (debian bullseye) (push) Has been cancelled
Tests / build-deb (debian trixie) (push) Has been cancelled
Tests / build-deb (ubuntu 20.04) (push) Has been cancelled
Tests / build-deb (ubuntu 22.04) (push) Has been cancelled
Tests / build-deb (ubuntu 24.04) (push) Has been cancelled
Tests / build-deb (ubuntu 24.10) (push) Has been cancelled
Tests / install-deb (debian bookworm) (push) Has been cancelled
Tests / install-deb (debian bullseye) (push) Has been cancelled
Tests / install-deb (debian trixie) (push) Has been cancelled
Tests / install-deb (ubuntu 20.04) (push) Has been cancelled
Tests / install-deb (ubuntu 22.04) (push) Has been cancelled
Tests / install-deb (ubuntu 24.04) (push) Has been cancelled
Tests / install-deb (ubuntu 24.10) (push) Has been cancelled
Tests / build-install-rpm (fedora 39) (push) Has been cancelled
Tests / build-install-rpm (fedora 40) (push) Has been cancelled
Tests / build-install-rpm (fedora 41) (push) Has been cancelled
Tests / run tests (debian bookworm) (push) Has been cancelled
Tests / run tests (debian bullseye) (push) Has been cancelled
Tests / run tests (debian trixie) (push) Has been cancelled
Tests / run tests (fedora 39) (push) Has been cancelled
Tests / run tests (fedora 40) (push) Has been cancelled
2024-11-06 13:53:52 +02:00
Alex Pyrgiotis
91932046f5
ci: Use the new container filename in our assets
The filename of the container image tarball that we published in our
release assets has changed from `container.tar.gz` to
`container-0.8.0-i686.tar.gz`. Change the scan action accordingly.
2024-11-06 13:41:37 +02:00
Alex Pyrgiotis
c8411de433
Update download links for 0.8.0 assets 2024-11-06 13:36:30 +02:00
Alex Pyrgiotis
95150bcfc1
Minor changelog fixes
Some checks failed
Tests / windows (push) Has been cancelled
Tests / macOS (arch64) (push) Has been cancelled
Tests / macOS (x86_64) (push) Has been cancelled
Tests / build-deb (debian bookworm) (push) Has been cancelled
Tests / build-deb (debian bullseye) (push) Has been cancelled
Tests / build-deb (debian trixie) (push) Has been cancelled
Tests / build-deb (ubuntu 20.04) (push) Has been cancelled
Tests / build-deb (ubuntu 22.04) (push) Has been cancelled
Tests / build-deb (ubuntu 24.04) (push) Has been cancelled
Tests / build-deb (ubuntu 24.10) (push) Has been cancelled
Tests / install-deb (debian bookworm) (push) Has been cancelled
Tests / install-deb (debian bullseye) (push) Has been cancelled
Tests / install-deb (debian trixie) (push) Has been cancelled
Tests / install-deb (ubuntu 20.04) (push) Has been cancelled
Tests / install-deb (ubuntu 22.04) (push) Has been cancelled
Tests / install-deb (ubuntu 24.04) (push) Has been cancelled
Tests / install-deb (ubuntu 24.10) (push) Has been cancelled
Tests / build-install-rpm (fedora 39) (push) Has been cancelled
Tests / build-install-rpm (fedora 40) (push) Has been cancelled
Tests / build-install-rpm (fedora 41) (push) Has been cancelled
Tests / run tests (debian bookworm) (push) Has been cancelled
Tests / run tests (debian bullseye) (push) Has been cancelled
Tests / run tests (debian trixie) (push) Has been cancelled
Tests / run tests (fedora 39) (push) Has been cancelled
Tests / run tests (fedora 40) (push) Has been cancelled
Tests / run tests (fedora 41) (push) Has been cancelled
Tests / run tests (ubuntu 20.04) (push) Has been cancelled
Tests / run tests (ubuntu 22.04) (push) Has been cancelled
Tests / run tests (ubuntu 24.04) (push) Has been cancelled
Tests / run tests (ubuntu 24.10) (push) Has been cancelled
2024-11-04 16:12:05 +02:00
Alexis Métaireau
bae109717c
Prepare the CHANGELOG for 0.8.0 2024-11-04 14:49:18 +01:00
Alexis Métaireau
00480551ca
build: use the version-less container released-asset for now
Some checks failed
Tests / macOS (x86_64) (push) Has been cancelled
Tests / build-deb (debian bookworm) (push) Has been cancelled
Tests / build-deb (debian bullseye) (push) Has been cancelled
Tests / build-deb (debian trixie) (push) Has been cancelled
Tests / build-deb (ubuntu 20.04) (push) Has been cancelled
Tests / build-deb (ubuntu 22.04) (push) Has been cancelled
Tests / build-deb (ubuntu 24.04) (push) Has been cancelled
Tests / build-deb (ubuntu 24.10) (push) Has been cancelled
Tests / install-deb (debian bookworm) (push) Has been cancelled
Tests / install-deb (debian bullseye) (push) Has been cancelled
Tests / install-deb (debian trixie) (push) Has been cancelled
Tests / install-deb (ubuntu 20.04) (push) Has been cancelled
Tests / install-deb (ubuntu 22.04) (push) Has been cancelled
Tests / install-deb (ubuntu 24.04) (push) Has been cancelled
Tests / install-deb (ubuntu 24.10) (push) Has been cancelled
Tests / build-install-rpm (fedora 39) (push) Has been cancelled
Tests / build-install-rpm (fedora 40) (push) Has been cancelled
Tests / build-install-rpm (fedora 41) (push) Has been cancelled
Tests / run tests (debian bookworm) (push) Has been cancelled
Tests / run tests (debian bullseye) (push) Has been cancelled
Tests / run tests (debian trixie) (push) Has been cancelled
Tests / run tests (fedora 39) (push) Has been cancelled
Tests / run tests (fedora 40) (push) Has been cancelled
Tests / run tests (fedora 41) (push) Has been cancelled
Tests / run tests (ubuntu 20.04) (push) Has been cancelled
Tests / run tests (ubuntu 22.04) (push) Has been cancelled
Tests / run tests (ubuntu 24.04) (push) Has been cancelled
Tests / run tests (ubuntu 24.10) (push) Has been cancelled
Scan latest app and container / security-scan-container (push) Has been cancelled
Scan latest app and container / security-scan-app (push) Has been cancelled
2024-10-31 18:27:53 +01:00
Alexis Métaireau
32deea10c4
Bump version to 0.8.0
Some checks are pending
Tests / macOS (x86_64) (push) Blocked by required conditions
Tests / build-deb (debian bookworm) (push) Blocked by required conditions
Tests / build-deb (debian bullseye) (push) Blocked by required conditions
Tests / build-deb (debian trixie) (push) Blocked by required conditions
Tests / build-deb (ubuntu 20.04) (push) Blocked by required conditions
Tests / build-deb (ubuntu 22.04) (push) Blocked by required conditions
Tests / build-deb (ubuntu 24.04) (push) Blocked by required conditions
Tests / build-deb (ubuntu 24.10) (push) Blocked by required conditions
Tests / install-deb (debian bookworm) (push) Blocked by required conditions
Tests / install-deb (debian bullseye) (push) Blocked by required conditions
Tests / install-deb (debian trixie) (push) Blocked by required conditions
Tests / install-deb (ubuntu 20.04) (push) Blocked by required conditions
Tests / install-deb (ubuntu 22.04) (push) Blocked by required conditions
Tests / install-deb (ubuntu 24.04) (push) Blocked by required conditions
Tests / install-deb (ubuntu 24.10) (push) Blocked by required conditions
Tests / build-install-rpm (fedora 39) (push) Blocked by required conditions
Tests / build-install-rpm (fedora 40) (push) Blocked by required conditions
Tests / build-install-rpm (fedora 41) (push) Blocked by required conditions
Tests / run tests (debian bookworm) (push) Blocked by required conditions
Tests / run tests (debian bullseye) (push) Blocked by required conditions
Tests / run tests (debian trixie) (push) Blocked by required conditions
Tests / run tests (fedora 39) (push) Blocked by required conditions
Tests / run tests (fedora 40) (push) Blocked by required conditions
Tests / run tests (fedora 41) (push) Blocked by required conditions
Tests / run tests (ubuntu 20.04) (push) Blocked by required conditions
Tests / run tests (ubuntu 22.04) (push) Blocked by required conditions
Tests / run tests (ubuntu 24.04) (push) Blocked by required conditions
Tests / run tests (ubuntu 24.10) (push) Blocked by required conditions
Scan latest app and container / security-scan-container (push) Waiting to run
Scan latest app and container / security-scan-app (push) Waiting to run
2024-10-31 14:22:13 +01:00
Alexis Métaireau
f540a67d06
Update RELEASE.md to upload container.tar.gz for both i686 and arm64 architectures.
Some checks are pending
Tests / macOS (x86_64) (push) Blocked by required conditions
Tests / build-deb (debian bookworm) (push) Blocked by required conditions
Tests / build-deb (debian bullseye) (push) Blocked by required conditions
Tests / build-deb (debian trixie) (push) Blocked by required conditions
Tests / build-deb (ubuntu 20.04) (push) Blocked by required conditions
Tests / build-deb (ubuntu 22.04) (push) Blocked by required conditions
Tests / build-deb (ubuntu 24.04) (push) Blocked by required conditions
Tests / build-deb (ubuntu 24.10) (push) Blocked by required conditions
Tests / install-deb (debian bookworm) (push) Blocked by required conditions
Tests / install-deb (debian bullseye) (push) Blocked by required conditions
Tests / install-deb (debian trixie) (push) Blocked by required conditions
Tests / install-deb (ubuntu 20.04) (push) Blocked by required conditions
Tests / install-deb (ubuntu 22.04) (push) Blocked by required conditions
Tests / install-deb (ubuntu 24.04) (push) Blocked by required conditions
Tests / install-deb (ubuntu 24.10) (push) Blocked by required conditions
Tests / build-install-rpm (fedora 39) (push) Blocked by required conditions
Tests / build-install-rpm (fedora 40) (push) Blocked by required conditions
Tests / build-install-rpm (fedora 41) (push) Blocked by required conditions
Tests / run tests (debian bookworm) (push) Blocked by required conditions
Tests / run tests (debian bullseye) (push) Blocked by required conditions
Tests / run tests (debian trixie) (push) Blocked by required conditions
Tests / run tests (fedora 39) (push) Blocked by required conditions
Tests / run tests (fedora 40) (push) Blocked by required conditions
Tests / run tests (fedora 41) (push) Blocked by required conditions
Tests / run tests (ubuntu 20.04) (push) Blocked by required conditions
Tests / run tests (ubuntu 22.04) (push) Blocked by required conditions
Tests / run tests (ubuntu 24.04) (push) Blocked by required conditions
Tests / run tests (ubuntu 24.10) (push) Blocked by required conditions
Scan latest app and container / security-scan-container (push) Waiting to run
Scan latest app and container / security-scan-app (push) Waiting to run
2024-10-30 19:11:24 +01:00
Alex Pyrgiotis
68f8338d20
Revert "Disable gVisor's DirectFS feature."
This reverts commit 73b0f8b7d4.
Unfortunately, disabling DirectFS causes a problem on Linux systems that
enable Yama mode 2. It turns out that Tails is such a system, so we have
to revert this change if we want to support it.

Refs #982
2024-10-30 19:10:26 +01:00
Alex Pyrgiotis
d561878e03
tests: Restore previously mocked function
Restore the `isolation_provider.base.kill_process_group()` function,
which was previously mocked, at the end of the
`test_linger_unkillable()` test. The function is initially mocked in
order to simulate a hung process. After the mocking completes, the test
needs the original function once more, in order to actually kill the
spawned process.
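
A self-contained sketch of that mock-then-restore pattern, using only the standard library (`os.killpg` stands in for Dangerzone's own `kill_process_group()` helper); POSIX-only and purely illustrative.

```python
import os
import signal
import subprocess
from unittest import mock


def test_unkillable_process_sketch():
    proc = subprocess.Popen(["sleep", "60"], start_new_session=True)

    # Temporarily make os.killpg a no-op, to simulate a process that hangs
    # and cannot be killed.
    with mock.patch("os.killpg"):
        os.killpg(os.getpgid(proc.pid), signal.SIGKILL)  # does nothing here
        assert proc.poll() is None  # the process is still alive

    # Outside the with-block the original os.killpg is restored, so the test
    # can really terminate the spawned process during cleanup.
    os.killpg(os.getpgid(proc.pid), signal.SIGKILL)
    proc.wait()
```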
2024-10-30 16:45:45 +01:00
Alexis Métaireau
59e1666c28
Drop support for Ubuntu Mantic (23.10), which is EOL since 11 Jul 2024. 2024-10-30 16:43:50 +01:00
jkarasti
95d7d8a4d9
Fix: Error with cx_freeze when building the windows executables 2024-10-30 17:41:15 +02:00
jkarasti
ed2791bbbc
Revert: "fix win build failure due to package autodiscovery"
This reverts commit 4d9f729654.

The error described in #178 doesn't happen anymore, so this workaround is not needed.
2024-10-30 17:41:15 +02:00
Alexis Métaireau
c1cf16a705
chore: remove unused imports
Some checks are pending
Tests / build-deb (debian trixie) (push) Blocked by required conditions
Tests / build-deb (ubuntu 20.04) (push) Blocked by required conditions
Tests / build-deb (ubuntu 22.04) (push) Blocked by required conditions
Tests / build-deb (ubuntu 23.10) (push) Blocked by required conditions
Tests / build-deb (ubuntu 24.04) (push) Blocked by required conditions
Tests / build-deb (ubuntu 24.10) (push) Blocked by required conditions
Tests / install-deb (debian bookworm) (push) Blocked by required conditions
Tests / install-deb (debian bullseye) (push) Blocked by required conditions
Tests / install-deb (debian trixie) (push) Blocked by required conditions
Tests / install-deb (ubuntu 20.04) (push) Blocked by required conditions
Tests / install-deb (ubuntu 22.04) (push) Blocked by required conditions
Tests / install-deb (ubuntu 23.10) (push) Blocked by required conditions
Tests / install-deb (ubuntu 24.04) (push) Blocked by required conditions
Tests / install-deb (ubuntu 24.10) (push) Blocked by required conditions
Tests / build-install-rpm (fedora 39) (push) Blocked by required conditions
Tests / build-install-rpm (fedora 40) (push) Blocked by required conditions
Tests / build-install-rpm (fedora 41) (push) Blocked by required conditions
Tests / run tests (debian bookworm) (push) Blocked by required conditions
Tests / run tests (debian bullseye) (push) Blocked by required conditions
Tests / run tests (debian trixie) (push) Blocked by required conditions
Tests / run tests (fedora 39) (push) Blocked by required conditions
Tests / run tests (fedora 40) (push) Blocked by required conditions
Tests / run tests (fedora 41) (push) Blocked by required conditions
Tests / run tests (ubuntu 20.04) (push) Blocked by required conditions
Tests / run tests (ubuntu 22.04) (push) Blocked by required conditions
Tests / run tests (ubuntu 23.10) (push) Blocked by required conditions
Tests / run tests (ubuntu 24.04) (push) Blocked by required conditions
Tests / run tests (ubuntu 24.10) (push) Blocked by required conditions
Scan latest app and container / security-scan-container (push) Waiting to run
Scan latest app and container / security-scan-app (push) Waiting to run
2024-10-30 01:21:39 +01:00
Alexis Métaireau
281432fcaa
build: pin the PyMuPDF version to 1.24.11
This is the last PyMuPDF version that supports Python 3.8, which is
required for Ubuntu Focal (20.04).
2024-10-30 01:21:39 +01:00
Alexis Métaireau
71cc4b37e5
feat: show a deprecation warning for Ubuntu Focal (20.04) 2024-10-30 01:21:38 +01:00
Alex Pyrgiotis
5ed4a048a0
qubes: Do not close stderr
Some checks are pending
Tests / build-deb (debian trixie) (push) Blocked by required conditions
Tests / build-deb (ubuntu 20.04) (push) Blocked by required conditions
Tests / build-deb (ubuntu 22.04) (push) Blocked by required conditions
Tests / build-deb (ubuntu 23.10) (push) Blocked by required conditions
Tests / build-deb (ubuntu 24.04) (push) Blocked by required conditions
Tests / build-deb (ubuntu 24.10) (push) Blocked by required conditions
Tests / install-deb (debian bookworm) (push) Blocked by required conditions
Tests / install-deb (debian bullseye) (push) Blocked by required conditions
Tests / install-deb (debian trixie) (push) Blocked by required conditions
Tests / install-deb (ubuntu 20.04) (push) Blocked by required conditions
Tests / install-deb (ubuntu 22.04) (push) Blocked by required conditions
Tests / install-deb (ubuntu 23.10) (push) Blocked by required conditions
Tests / install-deb (ubuntu 24.04) (push) Blocked by required conditions
Tests / install-deb (ubuntu 24.10) (push) Blocked by required conditions
Tests / build-install-rpm (fedora 39) (push) Blocked by required conditions
Tests / build-install-rpm (fedora 40) (push) Blocked by required conditions
Tests / build-install-rpm (fedora 41) (push) Blocked by required conditions
Tests / run tests (debian bookworm) (push) Blocked by required conditions
Tests / run tests (debian bullseye) (push) Blocked by required conditions
Tests / run tests (debian trixie) (push) Blocked by required conditions
Tests / run tests (fedora 39) (push) Blocked by required conditions
Tests / run tests (fedora 40) (push) Blocked by required conditions
Tests / run tests (fedora 41) (push) Blocked by required conditions
Tests / run tests (ubuntu 20.04) (push) Blocked by required conditions
Tests / run tests (ubuntu 22.04) (push) Blocked by required conditions
Tests / run tests (ubuntu 23.10) (push) Blocked by required conditions
Tests / run tests (ubuntu 24.04) (push) Blocked by required conditions
Tests / run tests (ubuntu 24.10) (push) Blocked by required conditions
Scan latest app and container / security-scan-container (push) Waiting to run
Scan latest app and container / security-scan-app (push) Waiting to run
Do not close stderr as part of the Qubes termination logic, since we
need to read the debug logs. This shouldn't affect typical termination
scenarios, since we expect our disposable qube to be either busy reading
from stdin, or writing to stdout. If this is not the case, then
forcefully killing the `qrexec-client-vm` process should unblock the
qube.
2024-10-22 20:33:29 +03:00
Alex Pyrgiotis
50627d375c
Fix a small typo 2024-10-22 19:07:09 +03:00
Alex Pyrgiotis
8172195f95
tests: Add a doc with multimedia elements
Add a doc that contains an MP4 video with an audio and a video stream.
This type of document could not be converted with the latest
Dangerzone releases, because PyMuPDF threw this error in the container's
stdout:

    MuPDF error: unsupported error: cannot create appearance stream for
    Screen annotations

This error message was treated literally by our client code, which
parsed the first few bytes in order to find out the page height/width.
This resulted in a misleading Dangerzone error, e.g.:

    A page exceeded the maximum height

This issue first appeared in 0.6.0, which added streaming support,
and was fixed by commit 3f86e7b465. That
fix was not accompanied by a test document that would ensure we would
not have this regression from now on, so we add it in this
commit.

Refs #877
Closes #917
2024-10-22 17:31:39 +03:00
Alex Pyrgiotis
f5242078a9
macos: Remove some stale entitlements
Remove some macOS entitlements that are not necessary for the current
iteration of Dangerzone. Those are the ability to run as a hypervisor,
and the ability to accept network connections. They are a relic from
when we were experimenting with VMs, instead of relying on Docker
Desktop.
2024-10-21 19:16:03 +03:00
dependabot[bot]
e68a43bbbf
build(deps): bump actions/stale from 5 to 9
Bumps [actions/stale](https://github.com/actions/stale) from 5 to 9.
- [Release notes](https://github.com/actions/stale/releases)
- [Changelog](https://github.com/actions/stale/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/stale/compare/v5...v9)

---
updated-dependencies:
- dependency-name: actions/stale
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-21 14:19:30 +03:00
dependabot[bot]
10fb631b8e
build(deps): bump actions/setup-python from 4 to 5
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 4 to 5.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](https://github.com/actions/setup-python/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-21 14:16:38 +03:00
Alexis Métaireau
796ca79289
Automate the closing of stale issues 2024-10-17 19:28:07 +02:00
Alexis Métaireau
a95b612e78
Catch installation errors and display them.
Fixes #193
2024-10-17 16:20:56 +02:00
Alex Pyrgiotis
03b3c9eba8
debian: Add Tesseract languages as a dependency 2024-10-17 15:50:12 +03:00
Alex Pyrgiotis
0ea8e71f15
ci: Check OCR in Debian/Fedora tests 2024-10-17 15:50:12 +03:00
Alex Pyrgiotis
4398986970
tests: Improve test for top-level conversion errors 2024-10-17 15:50:12 +03:00
Alex Pyrgiotis
1ca867c295
tests: Remove provider_wait fixtures 2024-10-17 15:50:12 +03:00
Alex Pyrgiotis
6e55e43fef
Make Dummy isolation provider more realistic
Make the Dummy isolation provider follow the rest of the isolation
providers and perform the second part of the conversion on the host. The
first part of the conversion is just a dummy script that reads a file
from stdin and prints pixels to stdout.
2024-10-17 15:50:12 +03:00
Alex Pyrgiotis
703bb0e42a
Remove dead docs 2024-10-17 15:50:12 +03:00
Alex Pyrgiotis
7ea7c8a0cc
Remove dead code 2024-10-17 15:50:12 +03:00
Alex Pyrgiotis
f42bb23229
Update the way we get debug logs
Move the logic for grabbing debug logs to a new place, now that we have
merged the two conversion stages (doc to pixels, pixels to PDF).
2024-10-17 15:50:12 +03:00
Alex Pyrgiotis
e34c36f7bc
Perform on-host pixels to PDF conversion
Extend the base isolation provider to immediately convert each page to
a PDF, and optionally use OCR. In contrast with the way we did things
previously, there are no longer two separate stages (document to pixels,
pixels to PDF). We now handle each page individually, for two main
reasons:

1. We don't want to buffer pixel data, either on disk or in memory,
   since they take a lot of space, and can potentially leave traces.
2. We can perform these operations in parallel, saving time. This is
   more evident when OCR is not used, where the time to convert a page
   to pixels and the time to convert it back to a PDF are comparable.
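
As a rough, hypothetical sketch of what such a per-page conversion can look like with PyMuPDF (the function name and parameters are illustrative, not Dangerzone's actual code):

```python
from typing import Optional

import fitz  # PyMuPDF


def pixels_to_pdf_page(rgb: bytes, width: int, height: int,
                       ocr_lang: Optional[str] = None) -> bytes:
    """Turn one page of raw RGB pixel data into the bytes of a one-page PDF."""
    pixmap = fitz.Pixmap(fitz.csRGB, width, height, rgb, 0)

    if ocr_lang:
        # Let PyMuPDF drive Tesseract and return a searchable one-page PDF
        # (may need the tessdata= argument to locate the language data).
        return pixmap.pdfocr_tobytes(compress=True, language=ocr_lang)

    # Without OCR, simply wrap the pixmap in a fresh one-page PDF.
    doc = fitz.open()
    page = doc.new_page(width=width, height=height)
    page.insert_image(page.rect, pixmap=pixmap)
    return doc.tobytes()
```

Pages produced this way can then be appended to the final document one at a time, which is what makes the page-by-page, optionally parallel approach described above possible.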
2024-10-17 15:50:12 +03:00
Alex Pyrgiotis
08f5ef6558
Update .deb/.rpm dependencies
Update .deb/.rpm specs to include PyMuPDF as a required package.
2024-10-17 15:50:11 +03:00
Alex Pyrgiotis
57475b369f
Make PyMuPDF a main Dangerzone dependency
The PyMuPDF package was previously mainly used within the Dangerzone
container, as well as on Qubes. With on-host conversion, PyMuPDF will be
used in all supported platforms by default. For this reason, we can
promote it to a main dependency.
2024-10-17 15:50:11 +03:00
Alex Pyrgiotis
28b7249a6a
Add new way to detect tessdata dir
Add a new way to detect where the Tesseract data are stored in a user's
system. On Linux, the Tesseract data should be installed via the package
manager. On macOS and Windows, they should be bundled with the
Dangerzone application.

There is also the exception of running Dangerzone locally, where even
on Linux, we should get the Tesseract data from the Dangerzone share/
folder.
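
A simplified sketch of that lookup order; the concrete paths below are assumptions for illustration, not necessarily the exact ones Dangerzone uses.

```python
import platform
import sys
from pathlib import Path


def get_tessdata_dir() -> Path:
    # Running from the source tree: use the local share/ folder.
    local_share = Path(__file__).parent.parent / "share" / "tessdata"
    if local_share.is_dir():
        return local_share

    if platform.system() == "Linux":
        # Language data installed by the distribution's package manager.
        return Path("/usr/share/tessdata")

    # On macOS and Windows the data is bundled with the application.
    return Path(sys.prefix) / "share" / "tessdata"
```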
2024-10-17 15:50:11 +03:00
Alex Pyrgiotis
d1e119452e
Ignore tesseract data when building DEB/RPM packages 2024-10-17 15:50:11 +03:00
Alex Pyrgiotis
477bdfcc2e
ci: Add GitHub action for tessdata 2024-10-17 15:50:11 +03:00
Alex Pyrgiotis
ffcf664a48
Update build instructions 2024-10-17 15:50:10 +03:00
Alex Pyrgiotis
cd8812a85a
Add script for downloading Tesseract data
Add a Python script that can run in all supported platforms, and can
download and extract the Tesseract language data from GitHub, while
also:

1. Checking that the expected hash matches.
2. Informing the user if the language data have already been downloaded.
3. Extracting only the subset of language data that Dangerzone needs.
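
A condensed sketch of what such a helper can look like; the URL, checksum, and language set below are placeholders, not the values the real script pins.

```python
import hashlib
import tarfile
import urllib.request
from pathlib import Path

TESSDATA_URL = "https://example.com/tessdata_fast.tar.gz"  # placeholder
EXPECTED_SHA256 = "<pinned checksum goes here>"            # placeholder
WANTED_LANGS = {"eng", "fra", "deu"}                       # placeholder subset


def download_tessdata(dest: Path) -> None:
    dest.mkdir(parents=True, exist_ok=True)
    if any(dest.glob("*.traineddata")):
        print("Tesseract language data already downloaded, skipping.")
        return

    archive = dest / "tessdata.tar.gz"
    urllib.request.urlretrieve(TESSDATA_URL, str(archive))

    digest = hashlib.sha256(archive.read_bytes()).hexdigest()
    if digest != EXPECTED_SHA256:
        raise RuntimeError(f"Unexpected hash for {archive}: {digest}")

    with tarfile.open(archive) as tar:
        for member in tar.getmembers():
            # Extract only the languages we actually need.
            if member.name.endswith(".traineddata") and Path(member.name).stem in WANTED_LANGS:
                tar.extract(member, dest)
```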
2024-10-17 15:50:10 +03:00
Alex Pyrgiotis
5bba249c87
Provide sanitized version of output filename 2024-10-17 15:33:58 +03:00
Alex Pyrgiotis
bc58b78db7
Better way to collect tests 2024-10-17 15:33:58 +03:00
Alex Pyrgiotis
fba009a7f0
ci: Be explicit about the Debian package we install in end-user envs 2024-10-17 15:33:58 +03:00
Alex Pyrgiotis
dd3ab71065
ci: Explicitly use Ubuntu 24.04 for our runner images
GitHub actions somehow managed to downgrade our runners from Ubuntu
24.04 to Ubuntu 22.04, even though we use `ubuntu-latest`. Make the
Ubuntu 24.04 requirement more explicit, until GitHub migrates fully to
this version for the `ubuntu-latest` tag.

Fixes #957
2024-10-17 14:40:45 +03:00
JKarasti
4abd4720be
Change: Verify the signatures of the signed files with signtool verify 2024-10-16 18:04:47 +03:00
JKarasti
b79113c1c5
Change: Switch to using SHA256 signature algorithm to sign the Dangerzone executables and installer. 2024-10-16 18:04:47 +03:00
dependabot[bot]
941131f7a9
build(deps): bump anchore/scan-action from 4 to 5
Bumps [anchore/scan-action](https://github.com/anchore/scan-action) from 4 to 5.
- [Release notes](https://github.com/anchore/scan-action/releases)
- [Changelog](https://github.com/anchore/scan-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/anchore/scan-action/compare/v4...v5)

---
updated-dependencies:
- dependency-name: anchore/scan-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-16 17:52:33 +03:00
Alex Pyrgiotis
b6bb9a1216
ci: Make repo checking work for unreleased Fedora versions
Unreleased Fedora versions may refer to themselves as "rawhide", instead
of their version (e.g., "41"). For this reason, we should try and
replace the "rawhide" string with the proper Fedora version.
2024-10-16 17:37:40 +03:00
Alex Pyrgiotis
eaef95b774
Call 'dnf config-manager' via the dnf-3 interface
Fedora 41 has a newer dnf interface (dnf v5), and the config-manager
plugin that we use is not compatible with it. Suggest running it with
`dnf-3` instead, which is present in all Fedora versions.
2024-10-16 15:58:44 +03:00
Alex Pyrgiotis
13f5658947
Improve instructions for Fedora 41
Update our changelog and release instructions, and add a note for
Fedora 41 users in our build instructions to install Python 3.12.

Fixes #947
2024-10-15 19:43:28 +03:00
Alex Pyrgiotis
d832881452
Build RPM package for Python 3.13.
Add a hacky line in pyproject.toml that bumps the Python requirement to
3.14, so that we can build a Dangerzone RPM.
2024-10-15 19:43:14 +03:00
Alex Pyrgiotis
f3fbc33fcd
dev_scripts: Allow building a Fedora 41 dev env
Use Python 3.12 in Fedora 41 dev environments, since Python 3.13
(default in Fedora 41) does not work with PySide6 from PyPI yet.
2024-10-15 19:43:14 +03:00
Alex Pyrgiotis
5a97182979
ci: Add Fedora 41 CI jobs 2024-10-15 19:43:14 +03:00
Alexis Métaireau
49c3c2c6bb
Add support for 24.10 (oracular)
Refs #947
2024-10-15 19:41:49 +03:00
Alex Pyrgiotis
8ad95981ea
dev_scripts: Add user fix for Ubuntu 24.10
It seems that the container image for Ubuntu 24.10 also ships with a
default Ubuntu user with UID 1000, so we need to remove it when creating
our dev environment.
2024-10-15 19:41:49 +03:00
Alex Pyrgiotis
8f5ae9d6ad
dev_scripts: Make user networking work in an Ubuntu 24.10 dev environment
Try installing `passt`, which is responsible for user networking in
later Podman releases. If not installed, building the container image
within an Ubuntu 24.10 environment fails with:

    setup network: could not find pasta, the network namespace can't be
    configured: exec: "pasta": executable file not found in $PATH

Note that this package is not available in older Ubuntu versions. In
these cases, we should swallow installation failures and continue.
2024-10-15 15:47:58 +03:00
Alex Pyrgiotis
1eff14539f
debian: Vendor PyMuPDF when building Debian package
Install PyMuPDF under ./dangerzone/vendor, right before we build the
.deb package. We vendor PyMuPDF just for Debian, since the provided
versions don't have OCR support enabled.

Currently, we don't use PyMuPDF on the host, but this will change once
we fully implement the on-host conversion feature.

Refs #625
2024-10-15 14:58:06 +03:00
Alex Pyrgiotis
91fbc466c5
Add an import preference for vendored packages
Prefer importing packages from ./dangerzone/vendor, if present, instead
of using the system ones.
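
One common way to express such a preference, sketched here as a hypothetical snippet (the exact mechanism Dangerzone uses may differ):

```python
# Hypothetical snippet for dangerzone/__init__.py: prefer packages vendored
# under ./dangerzone/vendor over any system-wide installation.
import sys
from pathlib import Path

VENDOR_DIR = Path(__file__).parent / "vendor"
if VENDOR_DIR.is_dir():
    sys.path.insert(0, str(VENDOR_DIR))
```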
2024-10-15 14:58:06 +03:00
Alex Pyrgiotis
266d6c70a7
install: Add script for vendoring PyMuPDF
Add a script that installs PyMuPDF under ./dangerzone/vendor. This will
be useful in subsequent commits, for vendoring PyMuPDF when building
Debian packages.
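
The vendoring itself can boil down to a pip install into a target directory; a minimal sketch (the script location and options are illustrative):

```python
# Illustrative vendoring step: install PyMuPDF into ./dangerzone/vendor so
# the Debian package can ship its own copy.
import subprocess
import sys

subprocess.run(
    [sys.executable, "-m", "pip", "install",
     "--target", "dangerzone/vendor", "PyMuPDF"],
    check=True,
)
```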
2024-10-15 13:24:17 +03:00
Alex Pyrgiotis
44a6cc0017
dev_scripts: Install pip in dev environments
Install pip in dev environments, so that we can use it to vendor
PyMuPDF in subsequent commits.
2024-10-15 13:09:52 +03:00
Alex Pyrgiotis
8f71df56d9
Handle PyMuPDF 1.24.11 wheels in our Dockerfile
The PyMuPDF wheels for version 1.24.11 have changed the way they are
being built, which means we have to adapt our Dockerfile in order to
install them properly.
2024-10-15 13:08:33 +03:00
Alex Pyrgiotis
eebf10ca3d
Bump our Poetry dependencies 2024-10-15 13:04:09 +03:00
Alex Pyrgiotis
fed5e35e97
Add missing .pybuild dir in .gitignore 2024-10-15 13:04:09 +03:00
Alex Pyrgiotis
fd5aafdde9
ci: Start an Xvfb server in our CI tests
Remove the installation steps for Xvfb, since it's already included in
GitHub actions, and fire up an Xvfb server with disabled host-based
access control.

Initially, we tried to wrap our CI tests with `xvfb-run`, but any
X11 client within our Podman container failed with the following error
message:

    Authorization required, but no authorization protocol specified.

This error message is usually thrown when the X11 client does not
provide the magic cookie in the Xauthority file back to the X11 server.
In our case though, we can verify that commands in our Podman container
read the Xauthority file successfully:

    socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC, 0) = 3
    connect(3, {sa_family=AF_UNIX, sun_path=@"/tmp/.X11-unix/X99"}, 21) = -1 ECONNREFUSED (Connection refused)
    close(3)                                = 0
    socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC, 0) = 3
    getsockopt(3, SOL_SOCKET, SO_SNDBUF, [212992], [4]) = 0
    connect(3, {sa_family=AF_UNIX, sun_path="/tmp/.X11-unix/X99"}, 110) = 0
    getpeername(3, {sa_family=AF_UNIX, sun_path="/tmp/.X11-unix/X99"}, [124->21]) = 0
    uname({sysname="Linux", nodename="dangerzone-dev", ...}) = 0
    access("/home/runner/work/dangerzone/dangerzone/cookie", R_OK) = 0
    openat(AT_FDCWD, "/home/runner/work/dangerzone/dangerzone/cookie", O_RDONLY) = 4
    fstat(4, {st_mode=S_IFREG|0600, st_size=59, ...}) = 0
    read(4, "\1\0\0\rfv-az1915-957\0\299\0\22MIT-MAGIC"..., 4096) = 59
    read(4, "", 4096)                       = 0
    close(4)                                = 0
    fcntl(3, F_GETFL)                       = 0x2 (flags O_RDWR)
    fcntl(3, F_SETFL, O_RDWR|O_NONBLOCK)    = 0
    fcntl(3, F_SETFD, FD_CLOEXEC)           = 0
    poll([{fd=3, events=POLLIN|POLLOUT}], 1, -1) = 1 ([{fd=3, revents=POLLOUT}])
    writev(3, [{iov_base="l\0\v\0\0\0\0\0\0\0\0\0", iov_len=12}, {iov_base="", iov_len=0}], 2) = 12
    recvfrom(3, 0x55a5635c0050, 8, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
    poll([{fd=3, events=POLLIN}], 1, -1)    = 1 ([{fd=3, revents=POLLIN}])
    recvfrom(3, "\0@\v\0\0\0\20\0", 8, 0, NULL, NULL) = 8
    recvfrom(3, "Authorization required, but no a"..., 64, 0, NULL, NULL) = 64
    write(2, "Authorization required, but no a"..., 64Authorization required, but no authorization protocol specified
    ) = 64

The line with the magic cookie is:

    read(4, "\1\0\0\rfv-az1915-957\0\299\0\22MIT-MAGIC"..., 4096) = 59

Since we are not sure why we are not allowed access to the X11 server
from the Podman container, we decided to disable host-based access
controls altogether. This is not a security concern, since this X11
session is a remote one. However, we shouldn't run tests this way in dev
machines.

Fixes #949
2024-10-14 17:02:43 +03:00
Alexis Métaireau
ee991cab6b
Use github issue templates
Fixes #920
2024-10-10 09:57:38 +02:00
Alexis Métaireau
5d98f802ea
CI: Replace set-output by environment variables
Fixes #944
2024-10-09 18:16:28 +02:00
Alex Pyrgiotis
93b960cd23
Bump H2ORestart to version 0.6.6
Follow Debian's lead [1] and bump this version to 0.6.6. This change
should bring some stability improvements to our CI tests as well.

[1]: https://packages.debian.org/unstable/text/libreoffice-h2orestart
2024-10-07 18:36:06 +03:00
bnewc
752eff02d8
Prevent user from using illegal characters in output filename
Add some checks in the Dangerzone GUI and CLI that will prevent a user
from mistakenly adding illegal characters in the output filename.
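
A minimal sketch of such a check; the commit does not spell out the exact character set, so the one below is an assumption.

```python
# Assumed set of characters to reject in output filenames (illustrative only).
ILLEGAL_CHARS = set('<>:"|?*')


def validate_output_filename(filename: str) -> None:
    bad = ILLEGAL_CHARS.intersection(filename)
    if bad:
        raise ValueError(
            f"Output filename contains illegal characters: {''.join(sorted(bad))}"
        )
```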
2024-10-07 18:04:47 +03:00
Alex Pyrgiotis
275189587e
tests: Test termination logic under default conditions
Do not use the `provider_wait` fixture in our termination logic tests,
and switch instead to the `provider` fixture, which instantiates a
typical isolation provider.

The `provider_wait` fixture's goal was to emulate how the process would
behave if it had fully spawned. In practice, this masked some
termination logic issues that became apparent in the WIP on-host
conversion PR. Now that we kill the spawned process via its process
group, we can just use the default isolation provider in our tests.

In practice, in this PR we just do `s/provider_wait/provider`, and
remove some stale code.
2024-10-07 17:37:57 +03:00
Alex Pyrgiotis
b5130b08b6
tests: Improve Dummy provider tests
Add a fixture that returns our stock Dummy provider. Also, explicitly
use a blocking Dummy provider (`DummyWait`) for a specific test case.
This will prove useful when we stop using the `provider_wait` variant of
our isolation providers in the next commits.
2024-10-07 17:37:42 +03:00
Alex Pyrgiotis
dc8a22c8e7
Fix the dummy provider
Make the dummy provider behave a bit more like the other providers, with
a proper function and termination logic. This will be helpful soon in
the tests.
2024-10-07 17:37:42 +03:00
Alex Pyrgiotis
d6410652cb
Kill the process group when conversion terminates
Instead of killing just the invoked Podman/Docker/qrexec process, kill
the whole process group, to make sure that other components that have
been spawned die as well. In the case of Podman, conmon is one of the
processes that lingers, so that's one way to kill it.
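
In standard-library terms, the idea is roughly the following (a sketch, not Dangerzone's exact helper):

```python
import os
import signal
import subprocess


def kill_conversion_process(proc: subprocess.Popen) -> None:
    # Kill the whole process group, so helpers spawned alongside the main
    # Podman/Docker/qrexec process (e.g. conmon) die with it.
    try:
        os.killpg(os.getpgid(proc.pid), signal.SIGKILL)
    except ProcessLookupError:
        pass  # the process group is already gone
```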
2024-10-07 17:37:39 +03:00
Alex Pyrgiotis
b9a3dd63ad
Always start conversion process in new session
Start the conversion process in a new session, so that we can later
kill the process group without killing the controlling script (i.e.,
the Dangerzone UI). This should not affect the conversion process in any
other way.
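
Sketched with subprocess, the spawning side of this looks roughly as follows (the command is a placeholder):

```python
import subprocess

# start_new_session=True runs setsid() in the child, giving the conversion
# its own process group; killing that group later never touches the UI.
proc = subprocess.Popen(
    ["podman", "run", "..."],  # placeholder command
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    start_new_session=True,
)
```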
2024-10-07 17:27:38 +03:00
Alex Pyrgiotis
8d856ff4c3
ci: Add Intel macOS runner
GitHub provides an Intel macOS runner as `macos-13`. Add it alongside
our M1 macOS runner (`macos-latest`), in order to cover all of our
target environments.
2024-10-07 12:48:03 +03:00
Alex Pyrgiotis
95660c3ec7
Make dummy tests faster
Remove the unnecessary sleep command in our dummy tests, which made them
run much slower.
2024-10-07 12:48:03 +03:00
Alex Pyrgiotis
58b4659ffd
Improve .gitattributes
It seems that we need to specify that Python files have LF line endings
on Windows environments, else they will get converted to CRLF. If this
happens, then the container image we build in this environment will have
Python files with wrong endings, and tests will break.

Refs #838 for previous attempt.
2024-10-07 12:48:02 +03:00
Alex Pyrgiotis
a001b5497c
Add release note for Debian packages 2024-10-02 16:49:46 +02:00
Alex Pyrgiotis
eb2d114ea7
install: Catch version errors when building DEBs
Make sure that the Debian package we build conforms to the expected
naming scheme; otherwise, it's possible that something is off. A scenario
we've encountered is bumping `share/version.txt`, but not
`debian/changelog`, which would create a Debian package with an older
version.
2024-10-02 16:49:46 +02:00
Alex Pyrgiotis
a32522f6c8
debian: Bump version to 0.7.1
Add a dummy entry in debian/changelog, to signal that the latest
Dangerzone version is 0.7.1.
2024-10-02 16:49:46 +02:00
Alexis Métaireau
025e5dda51
Switch from CircleCI runners to Github actions.
As part of this change, the dev (build) and end-user test image names
changed from `dangerzone.rocks/*` to `ghcr.io`.

A new `--sync` option is provided in the `env.py` command, in order to
retrieve the images from the registry, or build and upload otherwise.
2024-10-02 16:47:58 +02:00
Alexis Métaireau
3e434d08d1
Always use our own seccomp policy as a default.
As per Etienne Perot's comment on #908:

> Then it seems to me like it would be easy to simply apply this seccomp
profile under all container runtimes (since there's no reason why the
same image and the same command-line would call different syscalls under
different container runtimes).
2024-10-02 14:12:48 +02:00
Alexis Métaireau
eb10082a62
Merge branch 'hotfix-0.7.1' into main 2024-10-01 15:16:25 +02:00
Alexis Métaireau
eee405e29e
Update download links to use 0.7.1 2024-10-01 12:58:11 +02:00
Alex Pyrgiotis
2371d1c23c
Add release note for containerd graph driver
Fixes #933
2024-09-30 15:45:15 +03:00
Alexis Métaireau
9117ba5d6c
Bump version to 0.7.1 2024-09-30 12:40:06 +02:00
Alexis Métaireau
fb2f4ce695
Add 0.7.1 to the CHANGELOG 2024-09-30 12:38:41 +02:00
Alex Pyrgiotis
4423fc6232
Handle multiple image IDs in the image-ids.txt file.
Docker Desktop 4.30.0 uses the containerd image store by default, which
generates different IDs for the images, and as a result breaks the logic
we use to verify that the image IDs are present.

Now, multiple IDs can be stored in the `image-id.txt` file.

Fixes #933
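
A sketch of the adjusted check, assuming the file simply lists one image ID per line (the file location is illustrative):

```python
from pathlib import Path


def image_id_is_known(expected_id: str, id_file: Path) -> bool:
    # The file may contain several IDs (the classic and containerd image
    # stores produce different ones); accept a match against any of them.
    known_ids = {line.strip() for line in id_file.read_text().splitlines() if line.strip()}
    return expected_id in known_ids
```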
2024-09-30 12:34:34 +02:00
Alex Pyrgiotis
bd2dc0ea3c
Pin gVisor to the last working release
Temporarily pin gVisor to the latest working version
(`release-20240826.0`), since the latest one breaks our container image.

Refs #928
2024-09-27 12:55:59 +03:00
Alex Pyrgiotis
27d201a95b
container: Avoid pop-ups on Windows
Avoid window pop-ups on Windows systems, by using the `startupinfo`
argument of `subprocess.run`.
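
A sketch of the standard subprocess incantation for this; Dangerzone's own helper may differ in detail.

```python
import platform
import subprocess


def get_subprocess_startupinfo():
    # On Windows, hide the console window that would otherwise pop up for
    # every container command; on other platforms there is nothing to do.
    if platform.system() == "Windows":
        startupinfo = subprocess.STARTUPINFO()
        startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
        return startupinfo
    return None


# Usage sketch:
# subprocess.run(["docker", "version"], startupinfo=get_subprocess_startupinfo())
```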
2024-09-27 12:55:46 +03:00
JKarasti
791444cd5d
Windows installer: Allow choosing installation directory during install
Switch to using the `WixUI_InstallDir` dialog set in the Windows installer and add the `WIXUI_INSTALLDIR` property it needs, to let the user choose where Dangerzone is installed.

resolves #148
2024-09-24 15:04:43 +03:00
Dustin Alandzes
830e551567
Fix broken link in the README.md (/about.html is now /about/) 2024-09-24 15:01:54 +03:00
Alex Pyrgiotis
1e30767278
docs: Update gVisor design doc
Update the gVisor design doc, to better reflect the current state of the
gVisor integration. More specifically, the following have changed since
this design doc was merged:

* We have dropped the need for the `SETFCAP` capability.
* We have added the SELinux label `container_engine_t` to the outer
  container.
2024-09-23 12:15:28 +03:00
Alexis Métaireau
c3c7fbbc20
Fix wrong container-runtime detection on Linux
Use "podman" when on Linux, and "docker" otherwise.

This commit also adds a text widget to the interface, showing the actual
content of the error that happened, to help debug further if needed.

Fixes #212
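
The detection itself is essentially a one-liner; sketched:

```python
import platform


def get_container_runtime() -> str:
    # Podman is the expected engine on Linux; Docker Desktop everywhere else.
    return "podman" if platform.system() == "Linux" else "docker"
```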
2024-09-18 15:04:57 +02:00
amnak613
9b9e265b11
Added try excepts for unhandled exceptions
Fixes #776
2024-09-17 16:26:46 +03:00
Alexis Métaireau
d7f80965b1
Remove useless imports and fstrings from build-rpm.py 2024-09-11 16:20:28 +02:00
Alexis Métaireau
b375a7e96e
dev_scripts: store env data in the user's data dir.
Previously, these files were stored inside the repository (under
`dev_scripts/env/`), which could lead to conflicts with some tooling
(black, debian-helper).

(Linux only): as a convenience, here is how to move data to the new
location:

```bash
mkdir -p ~/.local/share/dangerzone-dev
mv dev_scripts/envs/ ~/.local/share/dangerzone-dev/.
```
2024-09-11 16:20:27 +02:00
Alexis Métaireau
396c3b56c8
packaging: replace stdeb by pybuild
As a result, a new `debian` folder is now living in the repository.
Debian packaging is now done manually rather than using tools that do
the heavy-lifting for us.

The `build-deb.py` script has also been updated to use `dpkg-buildpackage`.
2024-09-11 16:20:27 +02:00
Alex Pyrgiotis
3002849b7f
Install Thunar in our Dangerzone environments
Install Thunar in our Dangerzone Linux environments, so that we can use
it for our drag-and-drop QA test.
2024-09-10 22:28:31 +03:00
Alex Pyrgiotis
d90f81e772
Ensure that the expected Python version is used 2024-09-10 22:28:31 +03:00
Alex Pyrgiotis
2e3ec0cece
Always bust builder cache building the container image
Do not use the builder cache by default when we build the Dangerzone
container image. This way, we always get the freshest result when we
run the `./install/common/build-image.py` command.

If a dev wants to speed up non-release builds, they can pass the
`--use-cache` flag to use the builder cache.
2024-09-10 22:28:31 +03:00
Etienne Perot
73b0f8b7d4
Disable gVisor's DirectFS feature.
DirectFS is enabled by default in gVisor to improve I/O performance,
but comes at the cost of enabling the `openat(2)` syscall (with severe
restrictions, but still). As Dangerzone is not performance-sensitive,
and it is desirable to guarantee that the document conversion process
does not open any files (to mimic some of what SELinux provides), we
might as well disable it by default.

See #226.
2024-09-10 17:32:31 +03:00
Alexis Métaireau
2237f76219
Rename make lint-apply to make format 2024-09-10 15:55:16 +02:00
Alexis Métaireau
0c9f426b68
Do not throw on malformed Desktop Entries on Linux.
This just skips the malformed entry when it's found.

Fixes #899
2024-09-10 15:25:45 +02:00
Alexis Métaireau
df3b26583e
Bump pymupdf and poetry lockfile 2024-09-10 14:47:58 +02:00
Alexis Métaireau
e4af44c220
Use PyMuPDF wheels for non-ARM architectures.
This removes the need to build the PyMuPDF project ourselves, but only
on non-ARM architectures, since wheels are not yet provided for ARM.

Changes the `Dockerfile` and `build-image.py` script, introducing a new
`ARCH` flag to conditionally build the wheels.
2024-09-10 14:47:57 +02:00
Alex Pyrgiotis
2bd09e994f
Ignore the recent libexpat CVEs
Ignore the recent libexpat CVEs, as they don't affect Dangerzone.

Closes #913
2024-09-10 12:10:44 +02:00
Alex Pyrgiotis
c8642cc59d
ci: Update our CircleCI machines to Ubuntu 22.04
Update our CircleCI machines for specific tests (Debian Bookworm and
Fedora 40). It seems that the newest Podman version (5.2.1+), when
creating a container with the `--userns nomap` option, triggers a
permission-denied error on older kernels. E.g.:

    Error: crun: cannot stat `/tmp/storage-run-1000/containers/overlay-containers/d00932f2600df7b0d8f4cc78e2346487ec92bfd17307127f3ae8d4e5bbc7887b/userdata/hosts`: Permission denied: OCI permission denied

The solution that works for us is to update the machine image from
Ubuntu 20.04 to Ubuntu 22.04.
2024-09-09 20:40:39 +03:00
Alex Pyrgiotis
f739761405
dev_scripts: Download FPF's PySide6 RPM only for Fedora 39
Download the FPF-maintained python3-pyside6 RPM [1] only when we build
an end-user environment for Fedora 39. Else, from Fedora 40 onwards, we
can use the official `python3-pyside6` RPM.

Refs freedomofpress/maint-dangerzone-pyside6#5

[1]: https://packages.freedom.press/yum-tools-prod/dangerzone/f39/python3-pyside6-6.7.1-1.fc39.x86_64.rpm
2024-08-09 14:40:12 +03:00
Alex Pyrgiotis
168f0e53a8
Add link to Tails website
Point users to the installation instructions of Dangerzone in the Tails
website. These instructions were recently added to Tails, and we have
worked with the Tails developers to make this integration happen.

See also:
* https://tails.net/news/dangerzone/index.en.html
* https://gitlab.tails.boum.org/tails/tails/-/issues/20355
2024-08-09 14:37:42 +03:00
Alex Pyrgiotis
cfb5e75be9
tests: Do not let LibreOffice hang on the large test set
Some of the files in our large test set can make LibreOffice hang. We
do not have a proper solution for this yet, but we can at least make
the tests time out quickly, so that they can finish at some point.

Refs #878
2024-08-09 14:32:19 +03:00
Alex Pyrgiotis
3f86e7b465
Make PyMuPDF always log to stderr
PyMuPDF logs to stdout by default, which is problematic because we use
the stdout of the conversion process to read the pixel stream of a
document.

Make PyMuPDF always log to stderr, by setting the following environment
variables: PYMUPDF_MESSAGE and PYMUPDF_LOG.

Fixes #877
2024-08-09 14:32:19 +03:00
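A hedged sketch of the redirection described above; the environment variable names come from the commit, but the `fd:2` ("write to stderr") value syntax is an assumption:

    import os

    # Set these before importing fitz, so PyMuPDF picks them up at import time.
    os.environ.setdefault("PYMUPDF_MESSAGE", "fd:2")  # assumed syntax for "write to stderr"
    os.environ.setdefault("PYMUPDF_LOG", "fd:2")

    import fitz  # noqa: E402  (imported after the environment variables are set)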
Alex Pyrgiotis
08f03b4bb4
Remove some stale CVE entries from .grype.yaml
Our security scans no longer pick up some CVEs we have ignored in the
past, so we can safely remove them now.
2024-08-08 20:56:53 +03:00
Alex Pyrgiotis
141c1e8a23
Ignore CVE-2024-5175 from our security scans
Ignore CVE-2024-5175 from our security scans, because Dangerzone is not
affected by it. Our assessment follows:

The affected library, `libaom.so`, is linked by GStreamer's
`libgstaom.so` library. The vulnerable `aom_img_alloc` function is only
used when **encoding** a video to AV1. LibreOffice uses the **decode**
path instead when generating thumbnails.

Closes #895
2024-08-08 20:53:06 +03:00
Alex Pyrgiotis
c1dbe9c3e3
dev_scripts: Handle Dangerzone packages with patch level != 1
Update our `env.py` script to auto-detect the correct Dangerzone package
name. This is useful when building an end-user environment, i.e., a
container image where we copy the respective Dangerzone .deb/.rpm
package and install it via a package manager.

To achieve this, we replace the hardcoded patch level (`-1`) in the
package name with a glob character (`*`). Then, we check in the
respective build directory if there's exactly one match for this
pattern. If yes, we return the full path. If not, we raise an exception.

Note that this limitation was triggered when we were building RPM
packages for the 0.7.0 hotfix release.

Refs #880
2024-07-30 18:36:53 +03:00
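A minimal sketch of the glob-based lookup described above; the directory and package naming are illustrative, not the real `env.py` paths:

    from pathlib import Path

    def find_package(build_dir: Path, pattern: str) -> Path:
        """Return the single package matching a pattern like 'dangerzone_0.7.0-*_all.deb'."""
        matches = list(build_dir.glob(pattern))
        if len(matches) != 1:
            raise RuntimeError(f"Expected exactly one match for {pattern!r}, found {len(matches)}")
        return matches[0]

    # e.g. find_package(Path("deb_dist"), "dangerzone_0.7.0-*_all.deb")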
Alex Pyrgiotis
61e04d42ef
Bump the RPM patch level to 2
Bump the RPM patch level to 2, so that the rebuilt RPM package for
0.7.0 hotfix release can be installed over the existing 0.7.0-1 package.
2024-07-30 16:43:45 +03:00
Alex Pyrgiotis
0a181a3342
container: Set container_engine_t SELinux label
Set the `container_engine_t` SELinux label on the **outer** Podman container,
so that gVisor does not break on systems where SELinux is enforcing.
This label is provided for container engines running within a container,
which fits our `runsc` within `crun` situation.

We have considered using the more permissive `label=disable` option, to
disable SELinux labels altogether, but we want to take advantage of as
many SELinux protections as we can, even for the **outer** container.

Cherry-picked from e1e63d14f8

Fixes #880
2024-07-30 16:41:13 +03:00
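A hedged sketch of how the label above can be applied when spawning the outer container; the image name is illustrative:

    import subprocess

    cmd = [
        "podman", "run", "--rm",
        "--security-opt", "label=type:container_engine_t",  # SELinux type for nested engines
        "dangerzone.rocks/dangerzone",  # illustrative image name
    ]
    subprocess.run(cmd, check=True)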
Alex Pyrgiotis
e1e63d14f8
container: Set container_engine_t SELinux label
Set the `container_engine_t` SELinux label on the **outer** Podman container,
so that gVisor does not break on systems where SELinux is enforcing.
This label is provided for container engines running within a container,
which fits our `runsc` within `crun` situation.

We have considered using the more permissive `label=disable` option, to
disable SELinux labels altogether, but we want to take advantage of as
many SELinux protections as we can, even for the **outer** container.

Fixes #880
2024-07-26 16:34:19 +03:00
dependabot[bot]
069359ef15
build(deps): bump anchore/scan-action from 3 to 4
Bumps [anchore/scan-action](https://github.com/anchore/scan-action) from 3 to 4.
- [Release notes](https://github.com/anchore/scan-action/releases)
- [Changelog](https://github.com/anchore/scan-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/anchore/scan-action/compare/v3...v4)

---
updated-dependencies:
- dependency-name: anchore/scan-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-24 15:10:51 +03:00
Alexis Métaireau
df3f8f7cb5
Do not allow uploading the token as an asset 2024-07-24 15:04:09 +03:00
Alexis Métaireau
e87547d3a6
Docs: update the release instructions
Changes on the release instructions to ease the lives of readers.
2024-07-24 02:08:54 +03:00
Alex Pyrgiotis
2da0e993a2
Add a manual way to trigger GitHub Actions workflows 2024-07-10 18:23:17 +03:00
Alex Pyrgiotis
2300cdef20
Bump download links in README from 0.6.1 to 0.7.0 2024-07-10 17:57:40 +03:00
Alex Pyrgiotis
162ded6a75
ci: Disable Debian Trixie builds
Disable building packages in Debian Trixie, since its Python version
has changed to 3.12, which is not compatible with `stdeb`.

Refs #773
2024-07-08 12:11:03 +03:00
dependabot[bot]
210c30eb87
build(deps): bump certifi from 2024.6.2 to 2024.7.4
Bumps [certifi](https://github.com/certifi/python-certifi) from 2024.6.2 to 2024.7.4.
- [Commits](https://github.com/certifi/python-certifi/compare/2024.06.02...2024.07.04)

---
updated-dependencies:
- dependency-name: certifi
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-08 11:55:17 +03:00
Alex Pyrgiotis
add95a0d53
Ignore CVE-2024-5535 from our security scans
We believe that Dangerzone is not affected by CVE-2024-5535 for the
following reasons:

1. This CVE affects applications that make network calls. The Dangerzone
    container does not perform any such calls, and has no access to the
    internet.
2. The OpenSSL devs have marked this issue as low severity.
2024-07-05 17:20:03 +03:00
Alex Pyrgiotis
b6f399be6e
container: Avoid pop-ups on Windows
Avoid window pop-ups on Windows systems, by using the `startupinfo`
argument of `subprocess.run`.
2024-07-02 20:41:58 +03:00
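A minimal sketch of the `startupinfo` approach mentioned above (runnable on any platform; the Windows-only flags are touched only when applicable):

    import platform
    import subprocess

    def get_startupinfo():
        """Hide the console window that would otherwise flash on Windows."""
        if platform.system() != "Windows":
            return None
        si = subprocess.STARTUPINFO()
        si.dwFlags |= subprocess.STARTF_USESHOWWINDOW
        si.wShowWindow = subprocess.SW_HIDE
        return si

    subprocess.run(["docker", "version"], startupinfo=get_startupinfo(), check=False)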
Alex Pyrgiotis
756945931f
container: Handle case where docker kill hangs
We have encountered several conversions where the `docker kill` command
hangs. Handle this case by specifying a timeout for this command. If the
timeout expires, log a warning and proceed with the rest of the
termination logic (i.e., kill the conversion process).

Fixes #854
2024-07-01 17:56:21 +03:00
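A hedged sketch of the bounded-kill behaviour described above; the function name and timeout value are illustrative:

    import subprocess

    def kill_container(runtime: str, name: str, proc: subprocess.Popen, timeout: int = 5) -> None:
        try:
            subprocess.run([runtime, "kill", name], timeout=timeout, check=False)
        except subprocess.TimeoutExpired:
            print(f"Warning: '{runtime} kill {name}' timed out after {timeout}s")
        proc.kill()  # proceed with the rest of the termination logic regardless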
Alex Pyrgiotis
4ea0650f42
tests: Skip a test for missing OCR files on Qubes
We have a container-specific test that deals with missing OCR files in
the container image. This test _can_ be run under Qubes, and it may
fail since it requires Podman.

Make the pytest guard more strict and don't allow running this test on
Qubes.

Also, fix a typo in the word "omission".
2024-06-27 22:11:50 +03:00
Alex Pyrgiotis
c89ef580e0
tests: Properly skip tests for isolation providers
The platform where we run our tests directly affects the isolation
providers we can choose. For instance, we cannot run Qubes tests on a
Windows/macOS platform, nor can we spawn containers in a Qubes platform,
if the `QUBES_CONVERSION` envvar has been specified.

This platform incompatibility was never an issue before, because
Dangerzone is capable of selecting the proper isolation provider under
the hood. However, with the addition of tests that target specific
isolation providers, it's possible that we may run a test by mistake
that does not apply to our platform.

To counter this, we employed `pytest.mark.skipif()` guards around classes,
but we may omit those by mistake. Case in point, the `TestContainer`
class does not have such a guard, which means that we attempt to run
this test case on Qubes and it fails.

Add module-level guards in our isolation provider tests using pytest's
`pytest.skip("...", allow_module_level=True)` function, so that we make
such restrictions more explicit, and less easy to forget when we add a
new class.
2024-06-27 22:11:37 +03:00
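A hedged sketch of such a module-level guard; the exact condition used in the real test modules may differ:

    import os
    import platform

    import pytest

    # Container tests need Podman, which is unavailable on Qubes and non-Linux hosts.
    if platform.system() != "Linux" or os.environ.get("QUBES_CONVERSION") == "1":
        pytest.skip("container tests require a non-Qubes Linux host", allow_module_level=True)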
Alex Pyrgiotis
3e37bbc5e9
Add the Dangerzone repo in our Qubes build instructions
Ask the user to add the Dangerzone repo, when following the build
instructions for Qubes. The reason is that on Fedora 39 and 40, there's
no other way to install PySide6 than to use our repo.
2024-06-27 21:50:35 +03:00
Alex Pyrgiotis
f476102ee9
dev_scripts: Properly skip QA scenarios on Linux
With the addition of the drag-and-drop QA scenario, the numbering of the
QA steps has changed. Mirror this numbering change in the qa.py script
as well, which tracks which QA scenarios do not apply to Linux
platforms.
2024-06-27 21:47:51 +03:00
Alex Pyrgiotis
58bc9950c5
Remove an errant whitespace character 2024-06-27 21:47:16 +03:00
deeplow
d0e1df5546
Add drag and drop support for document selection 2024-06-27 11:51:41 +02:00
Alexis Métaireau
7744cd55ec
Pin pymupdf to 1.24.5 2024-06-26 19:42:55 +02:00
Alexis Métaireau
92ae942661
Use python 3.12 for Windows and macOS builds
Fixes #848
2024-06-26 19:42:54 +02:00
Alex Pyrgiotis
e7e3430ca1
Use a custom seccomp policy for older Docker Desktop releases
We are aware that some Docker Desktop releases before 25.0.0 ship with a
seccomp policy which disables the `ptrace(2)` system call. In such
cases, we opt to use our own seccomp policy which allows this system
call. This seccomp policy is the default one in the latest releases of
Podman, and we use it in Linux distributions where Podman version is <
4.0.

Fixes #846
2024-06-26 18:49:03 +03:00
Alexis Métaireau
19ab0cb615
Update the CHANGELOG for 0.7.0 2024-06-26 16:24:18 +02:00
Alexis Métaireau
c2a47ec46b
Drop support for Fedora 38
Fedora 38 is EOL since 21 May 2024, so this removes the specific branches
we had checking for it, and updates the related instructions.
2024-06-20 17:08:27 +02:00
Alexis Métaireau
431719e1d2
Update poetry.lock file with latest dependencies. 2024-06-20 16:38:42 +02:00
Alexis Métaireau
83061eae4f
Update version to 0.7.0 2024-06-20 15:56:34 +02:00
Alexis Métaireau
44d999e96d
Use LF line-ending for all content except images
This was mostly done to fix an issue where `gvisor_wrapper/entrypoint.py`
didn't have the correct line-ending on Windows, leading to a situation
where the containers couldn't start.
2024-06-20 12:12:22 +02:00
Alexis Métaireau
e81ecbc288
Revert "tests: run all the tests with one command"
This reverts commit 3ba9181888, and
reintroduces the pytest runs as separate processes.
2024-06-12 22:41:05 +02:00
Ro
fb66946694
Add __future__ annotations for backwards-compatible typehint 2024-06-12 22:41:05 +02:00
Ro
54ab9ce98f
Order list of PDF viewers and return default application first (Linux). 2024-06-12 22:41:04 +02:00
Etienne Perot
f03bc71855
Sandbox all Dangerzone document processing within gVisor.
This wraps the existing container image inside a gVisor-based sandbox.

gVisor is an open-source OCI-compliant container runtime.
It is a userspace reimplementation of the Linux kernel in a
memory-safe language.

It works by creating a sandboxed environment in which regular Linux
applications run, but their system calls are intercepted by gVisor.
gVisor then redirects these system calls and reinterprets them in
its own kernel. This means the host Linux kernel is isolated
from the sandboxed application, thereby providing protection against
Linux container escape attacks.

It also uses `seccomp-bpf` to provide a secondary layer of defense
against container escapes. Even if its userspace kernel gets
compromised, attackers would have to additionally have a Linux
container escape vector, and that exploit would have to fit within
the restricted `seccomp-bpf` rules that gVisor adds on itself.

Fixes #126
Fixes #224
Fixes #225
Fixes #228
2024-06-12 13:40:04 +03:00
Alex Pyrgiotis
e005ea33ea
Add Podman's default seccomp policy
Add Podman's default seccomp policy as of 2024-06-10 [1]. This policy
will be used in subsequent commits in platforms with Podman version 3,
whose seccomp policy does not allow the `ptrace()` syscall.

[1] d3283f8401/pkg/seccomp/seccomp.json
2024-06-12 13:40:04 +03:00
Alex Pyrgiotis
7179d6f734
Get container runtime version
Get the (major, minor) parts of the Docker/Podman version, to check if
some specific features can be used, or if we need a fallback. These
features are related with the upcoming gVisor integration, and will be
added in subsequent commits.
2024-06-12 13:40:04 +03:00
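A hedged sketch of extracting the (major, minor) version pair; the `--format` template is an assumption that holds for Docker and recent Podman:

    import subprocess

    def runtime_version(runtime: str = "podman") -> tuple[int, int]:
        out = subprocess.run(
            [runtime, "version", "--format", "{{.Client.Version}}"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        major, minor = out.split(".")[:2]
        return int(major), int(minor)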
Alex Pyrgiotis
cf9a545c1a
Use TESSDATA_PREFIX if explicitly passed
Our logic for detecting the appropriate Tesseract data directory should
also take into account the canonical envvar, if explicitly passed.
2024-06-12 13:40:03 +03:00
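A minimal sketch of the detection order described above; the fallback paths are illustrative:

    import os
    from pathlib import Path

    def get_tessdata_dir() -> Path:
        env = os.environ.get("TESSDATA_PREFIX")
        if env:
            return Path(env)  # honor the canonical envvar when explicitly passed
        for candidate in ("/usr/share/tessdata", "/usr/share/tesseract-ocr/5/tessdata"):
            if Path(candidate).is_dir():
                return Path(candidate)
        raise RuntimeError("Could not locate the Tesseract data directory")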
Alex Pyrgiotis
277b1675ca
doc: Add design document for the gVisor integration
Add a design document for the gVisor integration, which is currently
under review. The associated pull request has lots of architectural
discussions about integrating gVisor, so in this document we collect
them all in one place.

Refs #590
2024-06-12 13:22:45 +03:00
Alex Pyrgiotis
5b00f56a1f
doc: Add design doc for the update notifications
Add a design document for the update notifications mechanism, adapted
from the write-up in the original GitHub issue.

Refs #189
2024-06-12 13:22:45 +03:00
Alex Pyrgiotis
0019f0d3d3
docs: Move dev_scripts docs under docs/ dir
Move the documentation on how to create and use containerized Dangerzone
environments under `docs/developer`, which seems like a more natural
place than a README under `dev_scripts/`.
2024-06-12 13:22:45 +03:00
3ba9181888
tests: run all the tests with one command
This is mainly to check if the CI makes it work properly, especially
on Ubuntu Focal, as described in #493
2024-06-05 17:13:32 +02:00
81ad3a65c2
tests: use qt_updater fixture rather than updater
I'm actually unsure how the previous version was working, but since we
are now loading the pytest fixtures automatically, it uncovered a misuse
in the tests.

The `updater` fixture sets `updater.dangerzone.app` to a magic mock
instance, whereas `qt_updater` returns the real Qt app, which is what we
want in our tests.
2024-06-05 17:13:31 +02:00
9bad001c04
chore: remove fixture imports in the tests
They ideally should find their way by themselves.

> You don’t need to import the fixture you want to use in a test,
> it automatically gets discovered by pytest. The discovery of fixture
> functions starts at test classes, then test modules, then conftest.py
> files and finally builtin and third party plugins.
>
> — [pytest docs](https://docs.pytest.org/en/4.6.x/fixture.html#conftest-py-sharing-fixture-functions)
2024-06-05 15:56:09 +02:00
Alexis Métaireau
d9d9ab91a3
docs: document why get_tmp_dir is required in the imports 2024-06-05 14:19:32 +02:00
Alexis Métaireau
697b1e0d03
chore: mark some lines as unreachable for mypy 2024-06-05 14:19:31 +02:00
Alexis Métaireau
55850bfe2f
refactor: use pathlib / separator rather than .joinpath
Mainly to help readability
2024-06-05 14:19:31 +02:00
Alexis Métaireau
eba30f3c17
fix: do not catch bare exceptions
Bare excepts will catch keyboard-interrupt exceptions, system-exit
exceptions, etc., which is probably not what we want.
2024-06-05 14:19:31 +02:00
Alexis Métaireau
65a8827daa
chore: minor linting
A few minor changes about when to use `==` and when to use `is`.
Basically, this uses `is` for booleans, and `==` for other values.

With a few other changes about coding style which was enforced by
`ruff`.
2024-06-05 14:19:31 +02:00
Alexis Métaireau
cbbd6afcc1
chore: remove unused code
This commit removes code that's not being used: exception handlers
with `as e` where the exception itself is not used, the same for `with`
statements, and some other parts where there was duplicated code.
2024-06-05 14:19:31 +02:00
Alexis Métaireau
99f1e15fd2
chore: Do not use fstrings without placeholders
> f-strings are a convenient way to format strings, but they are not
> necessary if there are no placeholder expressions to format. In this
> case, a regular string should be used instead, as an f-string without
> placeholders can be confusing for readers, who may expect such a
> placeholder to be present.
>
> — [ruff docs](https://docs.astral.sh/ruff/rules/f-string-missing-placeholders/)
2024-06-05 14:19:31 +02:00
Alexis Métaireau
5aa4863b52
chore(imports): remove useless imports
As detected by [ruff](https://github.com/astral-sh/ruff)

Related to #254, although it doesn't provide the command to lint the
codebase itself.
2024-06-05 14:19:30 +02:00
Alexis Métaireau
850199c2a3
chore: update poetry.lock with latest versions 2024-06-04 19:57:40 +02:00
Alexis Métaireau
c01515b775
Bump the minimum python version to 3.9
The minimum python version when installing from source is now python
3.9, as Pyside6 6.7.1 dropped support for python 3.8 (see #780 for more
information).

On Debian-derivative distributions, the minimum Python version is now
set to 3.8. In practice, because Pyside6 is not packaged for Debian, we
use Pyside2 [0], which is why we can relax the Python version requirement.

When installing from source in an environment where python3.9 is not the
default python, poetry will look for it and use it if available:

> For various reasons, this Python version might not be compatible with
> the python range supported by the project. In this case, Poetry will
> try to find one that is and use it.
>
> [Poetry docs](https://python-poetry.org/docs/managing-environments/)

On Ubuntu Focal (20.04) where Python 3.9 is not installed by default,
it is possible to install it using the `python3.9` package.

Additionally, in version 1.24.3, PyMuPDF changed its package name from `fitz`
to `pymupdf` [2], resulting in breakage in how it is installed in our
container. This is now fixed.

[0] More information on how Pyside6 packaging affects dangerzone on #221
[1] See [the current status of Pyside6 packaging](https://repology.org/project/python:pyside6/packages)
[2] PyMuPDF changelog: https://pymupdf.readthedocs.io/en/latest/changes.html#change-log
2024-06-04 19:57:40 +02:00
Alex Pyrgiotis
2aee6f4ad2
Fix some minor lint issues 2024-06-04 13:16:06 +03:00
Alex Pyrgiotis
aebc091400
Explain how to create, sign, and verify source tarballs
Update our docs and scripts to be able to create a source tarball for a
Dangerzone version, sign it, and explain how can users verify it.

Closes #822
2024-06-03 12:59:22 +03:00
Alex Pyrgiotis
5320b33d17
dev_scripts: Bump PySide6 version to 6.7.1
Bump the PySide6 version used in our user environments to 6.7.1, to
mirror the one we ship to our users, and also fix a segfault issue in
our CI tests.

Refs #801
2024-05-29 19:28:59 +03:00
Alex Pyrgiotis
1e1d9274f0
Handle complaints about shebangs during RPM build
When building the Dangerzone RPMs, we were seeing the following shebang
warnings:

    + /usr/lib/rpm/redhat/brp-mangle-shebangs
    mangling shebang in /usr/lib/python3.12/site-packages/dangerzone/conversion/doc_to_pixels.py from /usr/bin/env python3 to #!/usr/bin/python3
    mangling shebang in /usr/lib/python3.12/site-packages/dangerzone/conversion/common.py from /usr/bin/env python3 to #!/usr/bin/python3
    mangling shebang in /usr/lib/python3.12/site-packages/dangerzone/conversion/pixels_to_pdf.py from /usr/bin/env python3 to #!/usr/bin/python3
    mangling shebang in /etc/qubes-rpc/dz.ConvertDev from /usr/bin/env python3 to #!/usr/bin/python3
    mangling shebang in /etc/qubes-rpc/dz.Convert from /bin/sh to #!/usr/bin/sh

These warnings are benign in nature, but coupled with #727, they could
lead to incorrect file permissions.

Remove shebangs from the following files, since they are not executed
directly, but are imported instead:

    dangerzone/conversion/common.py
    dangerzone/conversion/doc_to_pixels.py
    dangerzone/conversion/pixels_to_pdf.py

Also, accept the suggestions by Fedora (/bin/sh -> /usr/bin/sh,
/usr/bin/env python3 -> /usr/bin/python3) for the following files:

    qubes/dz.Convert
    qubes/dz.ConvertDev

Refs #727
2024-05-28 18:06:34 +03:00
Alex Pyrgiotis
797b28e191
install: Build RPM in different directory
Switch build directory for the `rpmbuild` command from
`./install/linux/rpm-build` to `~/rpmbuild`. The main reason for this is
that we want a build directory that will not be mounted in the
container, since we've experienced issues with file permissions.

Regarding the choice of directories, we went with `~/rpmbuild` because
it's outside the Dangerzone source, and also because it's the default
choice in Fedora [1].

[1]: 3ae1eeafee/rpmdev-setuptree (L60)

Closes #727
2024-05-28 18:06:33 +03:00
Alex Pyrgiotis
a22f12ab6a
install: Detect bad file permissions in RPMs
When building the Dangerzone RPM package, detect if the files bundled in
it have any incorrect permissions. We have seen in the past that
building RPMs from the Dangerzone source, mounted to a macOS Docker
container, can lead to files readable only by the root user (600 /
rw-------).

Refs #727
2024-05-28 13:15:05 +03:00
Alex Pyrgiotis
d97d04b911
Inform readers about Dangerzone's security audit
Dangerzone received a security audit in December 2023, which was
published in February 2024. It would be nice for people discovering this
project to learn about this audit.
2024-05-24 15:59:11 +03:00
Alex Pyrgiotis
b5d1681225
Add some articles about the Dangerzone project
Add some articles about the Dangerzone project that may be useful for
those evaluating this tool. This article list is not complete, and has
been sampled from various links we have encountered in the past.
2024-05-24 15:59:11 +03:00
178f94e612
docs: fix a typo, it's dev_scripts 2024-05-24 11:54:44 +02:00
Alex Pyrgiotis
76898471e7
Bump Python system path to 3.12 in Dockerfile
Alpine Linux 3.20 was released recently [1]. As a result, the
`alpine:latest` image ref, which our Dockerfile uses, switched from the
3.19 to the 3.20 Alpine Linux release. This release has Python 3.12,
meaning that the following line in our Dockerfile now fails:

    COPY --from=pymupdf-build /usr/lib/python3.11/site-packages/fitz/ /usr/lib/python3.11/site-packages/fitz

Bump the Python version in the Python system path to 3.12, so that we
can successfully build the container image.

[1]: https://alpinelinux.org/posts/Alpine-3.20.0-released.html
2024-05-23 12:14:00 +03:00
Alex Pyrgiotis
65776d8c05
Quote command in installation instructions
Zsh users that attempt to run the following command in our Ubuntu/Debian
installation instructions:

    echo deb [signed-by=/etc/apt/keyrings/fpf-apt-tools-archive-keyring.gpg] \
        https://packages.freedom.press/apt-tools-prod ${VERSION_CODENAME?} main \
            | sudo tee /etc/apt/sources.list.d/fpf-apt-tools.list

encounter the following error:

    zsh: no matches found:
    [signed-by=/etc/apt/keyrings/fpf-apt-tools-archive-keyring.gpg]

Quote this command to ensure compatibility with other shells, and update
our CI checks.

Fixes #805
2024-05-22 15:00:39 +03:00
Naglis Jonaitis
210405b9fd
Fix Qt QAction import
In PySide2 QAction is available under `PySide2.QtWidgets`[1] whereas in
PySide6 it resides under `PySide6.QtGui`[2].

Closes #788

[1]: https://doc.qt.io/qtforpython-5/PySide2/QtWidgets/QAction.html#PySide2.QtWidgets.PySide2.QtWidgets.QAction
[2]: https://doc.qt.io/qtforpython-6/PySide6/QtGui/QAction.html
2024-05-14 16:27:44 +03:00
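A small sketch of handling the module move described above, for codebases that support both bindings:

    try:
        from PySide6.QtGui import QAction  # Qt 6: QAction moved to QtGui
    except ImportError:
        from PySide2.QtWidgets import QAction  # Qt 5 fallback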
Naglis Jonaitis
8694fb21ec
Use exec instead of exec_ for Qt dialogs
`exec_` is being deprecated in favor of `exec`.

Also use `launch()` helper method for `Dialog` subclasses.

Fixes #595
2024-05-14 16:23:20 +03:00
Alex Pyrgiotis
5dcccd1ced
ci: Test Fedora 40 and Ubuntu 24.04 installation instructions 2024-05-14 16:16:24 +03:00
Alex Pyrgiotis
aa8d00b328
Bump download links to 0.6.1 2024-05-13 19:25:59 +03:00
Alex Pyrgiotis
88a2d151ab
Update changelog entries 2024-05-09 17:36:05 +03:00
Alex Pyrgiotis
a8e51c17d9
Install Python from python.org
Add a note in our release instructions to install Python from
python.org. This should fix some incompatibilities with older macOS
versions.

Refs #471
2024-05-09 17:36:04 +03:00
Alex Pyrgiotis
8c59589be1
Inform users about Pyside6 and conmon packages
Inform users that for specific distros and versions, we install some
extra packages (PySide6, conmon), in order to fix some incompatibilities
between Dangerzone and the base system. Provide also a link to the
source / build instructions for the package, as well as any relevant
issues.

Fixes #767
2024-05-09 17:36:04 +03:00
Alex Pyrgiotis
341e29c0e3
Make our collapsible blocks more noticeable
Make our collapsible blocks in our instructions more noticeable, by
enclosing them in an HTML table (<table>).
2024-05-09 17:36:04 +03:00
Alex Pyrgiotis
d55dee2f37
Add user instructions for verifying our signatures
Add a section for our end-users in INSTALL.md, that explains how to
verify that our Dangerzone assets have been signed by our advertised
signing key.

This section explains what the .asc files are that users see next to our
release assets, and how they can verify each asset individually using
GPG. It is heavily inspired by a similar section for OnionShare.

Closes #761
2024-05-09 17:36:04 +03:00
Alex Pyrgiotis
83c165ae33
dev_scripts: Sign our assets and calculate their hashes
Add a new script called `sign-assets.py`, which produces the hash of all
the Dangerzone assets for a release (Windows/macOS installers, container
image), and signs them individually.

Also update our RELEASE.md document, to incorporate this script into our
release workflow.
2024-05-09 17:32:07 +03:00
Alex Pyrgiotis
f6a39ec140
Add some extra entries to the 0.6.1 changelog 2024-05-09 16:46:16 +03:00
Alex Pyrgiotis
549ed23193
dev_scripts: Fix bug during env build
Create the build directory first, and then add the PySide6 package in
it.
2024-05-09 16:46:16 +03:00
Alex Pyrgiotis
b97e9540c1
Fix minor typos in RELEASE.md 2024-05-09 16:46:16 +03:00
Alex Pyrgiotis
ff25fa3045
Fix stuck conversion processes
Gracefully terminate certain conversion processes that may get stuck
when writing lots of data to stdout. Also, handle a race condition when
a conversion process terminates slightly after the associated container.

Fixes #791
2024-05-09 16:46:15 +03:00
Alex Pyrgiotis
0557e34429
Exclude Dangerzone from the discovered PDF viewers
We have recently [1] changed the name of the Dangerzone application to
capital-case "Dangerzone", but this breaks our PDF viewer detection
logic. Adjust our check to exclude Dangerzone from the list.

Fixes #790

[1]: See commit 3d426ed36b
2024-05-09 15:57:42 +03:00
Alex Pyrgiotis
37bf9badf4
Remove extraneous log sanitization
Remove an extra call to `replace_control_chars()`, as well as an
unnecessary method.
2024-05-09 15:57:42 +03:00
Alex Pyrgiotis
0b45360384
Keep newlines when reading debug logs
In d632908a44 we improved our
`replace_control_chars()` function, by replacing every control or
invalid Unicode character with a placeholder one. This change, however,
made our debug logs harder to read, since newlines were not preserved.

There are indeed various cases in which replacing newlines is wise
(e.g., in filenames), so we should keep this behavior by default.
However, specifically for reading debug logs, we add an option to keep
newlines to improve readability, at no expense to security.
2024-05-09 15:57:42 +03:00
Alex Pyrgiotis
e11aaec3ac
Always use sys.exit when exiting the application
The `exit()` [1] function is not necessarily present in every Python
environment, as it's added by the `site` module. Also, this function is
"[...] useful for the interactive interpreter shell and should not be
used in programs"

For this reason, we replace all such occurrences with `sys.exit()` [2],
which is the canonical function to exit Python programs.

[1]: https://docs.python.org/3/library/constants.html#exit
[2]: https://docs.python.org/3/library/sys.html#sys.exit
2024-05-09 15:57:42 +03:00
Alex Pyrgiotis
d6202cd028
Invoke external command on Windows properly
On Windows, if we don't use the `startupinfo=` argument of
subprocess.Popen, then a terminal window will flash while running the
command.

Use `startupinfo=` when killing a container, as we do for every other
command.
2024-05-09 15:57:42 +03:00
Alex Pyrgiotis
1c70ee6771
Fix archiving the same doc twice on Windows
On Windows, if we somehow attempt to archive the same document twice
(e.g, because it got archived once, and then we copy it back), we will
get an error, because Windows does not overwrite the target path, if it
already exists.

Fix this issue by always removing the previously archived version, when
performing the next archival action, and update our tests.
2024-05-09 15:57:42 +03:00
Alex Pyrgiotis
63b12abbdf
tests: Fix text sanitization tests on Windows
Fix a failing test case on Windows, due to a character that cannot
exist in a filename.
2024-05-09 15:57:42 +03:00
Naglis Jonaitis
ff1677672e
Bump pytest-cov package version
pytest-cov 3.0.0 running under pytest 7.2 produces warnings during CI
(see also pytest-dev/pytest-cov#561 [1]):

> PytestDeprecationWarning: The hookimpl CovPlugin.pytest_configure_node
> uses old-style configuration options (marks or attributes).

and

> PytestDeprecationWarning: The hookimpl CovPlugin.pytest_testnodedown
> uses old-style configuration options (marks or attributes).

The warnings were fixed in pytest-cov 4.0.0 [2].

[1]: https://github.com/pytest-dev/pytest-cov/issues/561
[2]: https://github.com/pytest-dev/pytest-cov/issues/561#issuecomment-1297143745
2024-05-08 15:40:14 +03:00
Naglis Jonaitis
5b6cc861d8
Don't use pytest-mock mocker.patch as context manager
Quote from `pytest-mock` docs [1]:

> The purpose of this plugin is to make the use of context managers and
> function decorators for mocking unnecessary, so it will emit a warning
> when used as such.

Thus using it as a context manager currently produces a warning during
test runs in CI which is extra noise that could make new (possibly more
important) warnings harder to spot.

[1]: https://pytest-mock.readthedocs.io/en/latest/usage.html#usage-as-context-manager
2024-05-08 15:40:14 +03:00
Naglis Jonaitis
c3a570eb7d
Use %F field code in .desktop entry
On Linux, the `%u` field code results in multiple Dangerzone instances
being launched when opening multiple documents with Dangerzone from
e.g. Nautilus, as `%u` signifies that the application (in this case -
Dangerzone) can only open a single file/URL at once.

This changes the field code to `%F` as Dangerzone (now) supports
converting multiple files at once. We use `%F` (multiple local files)
instead of `%U` (multiple files and/or URLs) since Dangerzone does not
support opening URLs.

See also the Desktop Entry Specification [1] for more information on the
field codes.

Fixes #797

[1]: https://specifications.freedesktop.org/desktop-entry-spec/latest/ar01s07.html
2024-05-08 14:17:35 +03:00
Naglis Jonaitis
8cdb2d5720
Set the desktop filename and app name of the Qt application
Currently, the app ID of the Dangerzone GUI application when running
under Wayland is `python3`, which is not very useful if one wants to
automate some action related to the Dangerzone application window (e.g.
to always start Dangerzone window in floating mode under Sway WM).

Setting the desktop filename property also sets the app ID of the
application under Wayland. According to Qt documentation[1], the
property value should be the name of the application's .desktop file but
without the extension.

Qt documentation also states:

> This property gives a precise indication of what desktop entry
> represents the application and it is needed by the windowing system to
> retrieve such information without resorting to imprecise heuristics.

Therefore I also think that setting this property is needed to display
the correct application name and icon (taken from the .desktop entry)
when running under certain windowing systems (like Wayland)
(see also #402).

Note that this property is not enough, as we've encountered systems
where setting just the desktop file name does not alter the detected
application name by the window manager. For this reason, we also set
the application name [2] to `dangerzone`, to remove any ambiguity.

[1]: https://doc.qt.io/qt-6/qguiapplication.html#desktopFileName-prop
[2]: https://doc.qt.io/qt-6/qcoreapplication.html#applicationName-prop

Fixes #402
2024-04-25 17:23:02 +03:00
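A hedged PySide6 sketch of the two properties mentioned above; the `.desktop` basename is illustrative:

    import sys

    from PySide6.QtWidgets import QApplication

    app = QApplication(sys.argv)
    app.setApplicationName("dangerzone")
    app.setDesktopFileName("press.freedom.dangerzone")  # .desktop file name, without the extension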
Alex Pyrgiotis
307ecd495c
tests: Ignore a lint error found by mypy 1.9.0
Ignore a lint error that has started showing up since mypy 1.9.0. The
official docs show that the `.instance()` method does not accept a `cls`
argument [1], so either the stubs or mypy are wrong here.

[1]: https://doc.qt.io/qtforpython-6.5/PySide6/QtCore/QCoreApplication.html#PySide6.QtCore.PySide6.QtCore.QCoreApplication.instance
2024-04-25 16:23:39 +03:00
Alex Pyrgiotis
53062a9c36
Bump Poetry dependencies
Bump our poetry.lock file, to get the latest versions of our
dependencies. Note that we are aware that this bump does not bring in
the latest PySide6 version.

Refs #773
2024-04-25 16:23:39 +03:00
Alex Pyrgiotis
d54a152eeb
Update pre-release task for PySide6
Update the description in the pre-release task for PySide6, since a lot
has changed after writing this section. Now that `python3-pyside6` is in
the Fedora Rawhide repo, and will soon get backported to the stable
repos, we no longer check for newer upstream versions, but rather whether
Fedora has finally done the backport.

Refs freedomofpress/maint-dangerzone-pyside6#5
2024-04-25 16:23:39 +03:00
Alex Pyrgiotis
83265009a3
Small revamp in our release instructions
Reword, revise, and remove release procedure steps, to better reflect
the proper time to perform each step.
2024-04-25 16:23:39 +03:00
Alex Pyrgiotis
2e3e3842df
Add entries to changelog for 0.6.1 2024-04-25 16:23:39 +03:00
Alex Pyrgiotis
bc36c97840
Bump version to 0.6.1 2024-04-25 16:23:39 +03:00
Naglis Jonaitis
d632908a44
Fix printing of filenames with surrogate escapes
On Unix systems a filename can be a sequence of bytes that is not valid
UTF-8. Python uses[1] surrogate escapes to allow to decode such
filenames to Unicode (bytes that cannot be decoded are replaced by a
surrogate; upon encoding the surrogate is converted to the original
byte).

From `click` docs[2]:

> Invalid bytes or surrogate escapes will raise an error when written
> to a stream with `errors="strict"`. This will typically happen with
> `stdout` when the locale is something like `en_GB.UTF-8`.

To fix that, we use `utils.replace_control_chars()` before printing the
filenames to `stdout` so that surrogate escapes are replaced by �.

Fixes #768
2024-04-25 14:11:25 +03:00
Naglis Jonaitis
52ced04507
Relax the restrictions of util.replace_control_chars
The `util.replace_control_chars()` function was overly strict, and
would replace every non-ASCII character with "_". This included both
control characters and normal characters in non-English alphabets.

Relax these restrictions by checking each character and deciding if it's
a Unicode control character, using the `unicodedata` Python package.
With this change, emojis and non-English letters are now allowed.
2024-04-25 14:11:16 +03:00
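A minimal sketch of the relaxed check, assuming a helper shaped roughly like the one these commits describe (the real signature may differ):

    import unicodedata

    def replace_control_chars(text: str, keep_newlines: bool = False) -> str:
        out = []
        for ch in text:
            if keep_newlines and ch == "\n":
                out.append(ch)  # optionally preserved, e.g. when printing debug logs
            elif unicodedata.category(ch) in ("Cc", "Cs"):  # control chars and lone surrogates
                out.append("\N{REPLACEMENT CHARACTER}")
            else:
                out.append(ch)  # emojis and non-English letters pass through
        return "".join(out)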
Alex Pyrgiotis
2fa592eb69
tests: Add a fixture for uncommon filenames
Add a pytest fixture that crafts a filename with Unicode characters that
are not considered common for this use. By default, this fixture uses
an invalid Unicode character as well, but we strip it in case of macOS
(APFS) since filenames must be UTF-8 encoded.

[1]: https://en.wikipedia.org/wiki/Filename#Comparison_of_filename_limitations
2024-04-25 13:23:22 +03:00
Alex Pyrgiotis
94179a1d91
ci: Include test dependencies when linting
Include the test dependencies when linting, especially `pytest`. We need
this because `mypy` cannot understand the `pytest.raises` context manager,
and specifically the fact that it catches exceptions. It assumes that the
next code block is unreachable, since it doesn't see any try ... except.
2024-04-24 16:27:14 +03:00
Alex Pyrgiotis
d4974b1229
tests: Add termination tests for Dummy provider
Add termination tests for the Dummy provider, so that we can have
cross-platform coverage in our Windows/macOS CI runners, which can't use
the Container isolation providers.
2024-04-24 15:06:01 +03:00
Alex Pyrgiotis
abc66840a8
tests: Add termination tests for Qubes
Add termination-related tests for Qubes. To achieve this, we need
to make a change to the Qubes isolation provider. More specifically,
we need to make the isolation provider yield control to the caller only
when the disposable qube is up and running.

Qubes does not provide us a solid guarantee to do so, but we've found a
hacky workaround, whereby we wait until the `qrexec-client-vm` process
opens a `/dev/xen` character device. This should happen, in theory, once
the disposable qube is ready, and has sent a `MSG_SERVICE_CONNECT` RPC
message to the caller.
2024-04-24 14:39:15 +03:00
Alex Pyrgiotis
875d49fe10
tests: Add termination tests for containers
Add termination-related tests for containers. To achieve this, we need
to make a change to the container isolation provider. More specifically,
we need to make the isolation provider yield control to the caller only
when the container is up and running. Failure to do so may lead to
lingering processes.
2024-04-24 14:39:15 +03:00
Alex Pyrgiotis
fec7609547
tests: Add some termination-related test cases
Add some test cases in the isolation provider tests, that check how it
behaves when a process completes successfully, lingers, or cannot
terminate.

These tests cannot run yet, since they must be imported by a concrete
isolation provider test class. In subsequent commits, we will start
enabling them.
2024-04-24 14:39:15 +03:00
Alex Pyrgiotis
f57d2f7191
isolation_provider: Always terminate spawned process
Previously, we always assumed that the spawned process would quit
within 3 seconds. This was an arbitrary call, and did not work in
practice.

We can improve our standing here by doing the following:

1. Make `Popen.wait()` calls take a generous amount of time (since they
   are usually on the sad path), and handle any timeout errors that they
   throw. This way, a slow conversion process cleanup does not take too
   much of our users' time, nor is it reported as an error.
2. Always make sure that once the conversion of doc to pixels is over,
   the corresponding process will finish within a reasonable amount of
   time as well.

Fixes #749
2024-04-24 14:39:15 +03:00
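A hedged sketch of the cleanup pattern described in point 1 above; the helper name and timeout are illustrative:

    import subprocess

    def wait_or_kill(proc: subprocess.Popen, timeout: int = 15) -> None:
        try:
            proc.wait(timeout=timeout)  # generous wait, since this is usually the sad path
        except subprocess.TimeoutExpired:
            proc.kill()  # make sure the conversion process does not linger
            proc.wait(timeout=timeout)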
Alex Pyrgiotis
cd4cbdb00a
isolation_provider: Get exit code without timing out
Get the exit code of the spawned process for the doc-to-pixels phase,
without timing out. More specifically, if the spawned process has not
finished within a generous amount of time (hardcode to 15 seconds),
return UnexpectedConversionError, with a custom message.

This way, the happy path is not affected, and we still make our best to
learn the underlying cause of the I/O error.
2024-04-24 14:36:14 +03:00
Alex Pyrgiotis
171a7eca52
isolation_provider: Terminate doc-to-pixels proc
Extend the IsolationProvider class with a
`terminate_doc_to_pixels_proc()` method, which must be implemented by
the Qubes/Container providers and gracefully terminate a process started
for the doc to pixels phase.

Refs #563
2024-04-24 14:36:14 +03:00
Alex Pyrgiotis
a63f4b85eb
isolation_provider: Set a unique name for spawned containers
Set a unique name for spawned containers, based on the ID of the
provided document. This ID is not globally unique, as it has few bits of
entropy.  However, since we only want to avoid collisions within a
single Dangerzone invocation, and since we can't support multiple
containers running in parallel, this ID will suffice.
2024-04-24 14:33:33 +03:00
Alex Pyrgiotis
6850d31edc
isolation_provider: Pass doc when creating doc-to-pixels proc
Pass the Document instance that will be converted to the
`IsolationProvider.start_doc_to_pixels_proc()` method. Concrete classes
can then associate this document with the started process, so that they
can later kill it.
2024-04-24 14:33:33 +03:00
Alex Pyrgiotis
b920de36d1
Announce our Ubuntu Noble / Fedora 40 support
Closes #762
2024-04-24 14:30:40 +03:00
Alex Pyrgiotis
7a9facb3c1
dev_scripts: Add Ubuntu Noble / Fedora 40 in our QA scripts 2024-04-23 18:00:48 +03:00
Alex Pyrgiotis
88c39a4fd5
ci: Add Ubuntu Noble / Fedora 40 support in GitHub actions
Extend our GitHub actions job to build an end-user environment for
Ubuntu Noble / Fedora 40, and then run a simple test in it.
2024-04-23 18:00:48 +03:00
Alex Pyrgiotis
0cd3241556
ci: Build and test Dangerzone in Fedora 40 on CircleCI
Extend our CircleCI jobs to:
* Build an .rpm package for Dangerzone in Fedora 40
* Run CI tests in Fedora 40
2024-04-23 18:00:48 +03:00
Alex Pyrgiotis
ad1b866dbb
ci: Test Dangerzone in Ubuntu Noble on CircleCI
Extend our CircleCI jobs to run CI tests in Ubuntu Noble.

This commit also adds support for building the Dangerzone .deb package
in Ubuntu Noble, but does not actually enable it. The reason is that
stdeb, which produces our Debian packages, does not work with Python
3.12, which ships with Ubuntu Noble.

Refs #773
2024-04-23 18:00:48 +03:00
Archit Sharma
114881c291
Added Dependabot for Github actions
Signed-off-by: Archit Sharma <74408634+iArchitSharma@users.noreply.github.com>

Fixes #782
2024-04-22 22:02:15 +03:00
Alex Pyrgiotis
7cd73cab0e
ci: Bump PySide6 version in Fedora end-user envs
Our end-user Fedora environments, that we create for testing how
Dangerzone would operate on a clean Fedora system, require PySide6 to be
installed. This package is not available from the official Fedora repos
yet.

We have a way instead to check the poetry.lock file, grab the latest
PySide6 version from there, and install it from a URL. This is no longer
necessary, now that PySide6 6.7.0 will soon be available in all stable
Fedora releases. Since the last release maintained by FPF will be
6.6.3.1, we should pin this version in our env.py script. This way, we
can bump poetry.lock independently, and let Windows/macOS users get
different versions.

Refs freedomofpress/maint-dangerzone-pyside6#5
2024-04-19 00:54:07 +03:00
Naglis Jonaitis
7c4e62954f
Update GitHub actions
The `checkout`, `setup-python`, `upload-artifact` and `download-artifact`
actions produce warnings about deprecated Node.js 16.

Update the actions to use Node.js 20.
2024-04-09 14:39:26 +03:00
Naglis Jonaitis
fc503d0a96
Fix test-large phony target name 2024-04-08 17:38:18 +03:00
Naglis Jonaitis
a4b20ae101
Avoid DUMMY_CONVERSION env var treated as bool in CI
`DUMMY_CONVERSION: True` is treated as a boolean value in YAML[1]. As a
result, during GitHub CI the environment variable setup during tests is
formatted as `DUMMY_CONVERSION=true`.

The value is used[2] in tests and passed as the `condition` to the
`pytest.mark.skipif`[3] decorator. The `skipif` `condition` can be
either a `bool` or `str`. When it is a `str` (our case, as we use
`os.environ.get()`), it is treated as a condition string[4] by pytest.

Since the condition string is `eval()`ed[5] by pytest, trying to
evaluate `true` results in:

> Failed: Error evaluating 'skipif' condition
>     true
> NameError: name 'true' is not defined

To avoid the implicit conversion to a YAML boolean, or having to mark
the "True" value as a string literal, use the value `1` instead.

[1]: https://yaml.org/type/bool.html
[2]: 9bb1993e77/tests/isolation_provider/base.py (L25)
[3]: https://docs.pytest.org/en/stable/reference/reference.html#pytest-mark-skipif-ref
[4]: https://docs.pytest.org/en/stable/historical-notes.html#string-conditions
[5]: f75dd87eb7/src/_pytest/skipping.py (L117)
2024-04-08 15:24:19 +03:00
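A hedged sketch of why the value matters: pytest eval()s a string `skipif` condition, so "true" raises a NameError while "1" evaluates to a truthy value. The test name is illustrative:

    import os

    import pytest

    @pytest.mark.skipif(
        os.environ.get("DUMMY_CONVERSION", "False"),  # a string condition, eval()ed by pytest
        reason="dummy conversion is enabled",
    )
    def test_needs_real_conversion() -> None:
        ...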
deeplow
9bb1993e77
Create tests/test_settings.py with extra coverage
Previously, settings were implicitly tested in tests/gui/test_updater.py.
However, that file was concerned with updater-related tests only, which
incidentally covered almost all of settings.py; a few tests were still
missing. This commit increases the test coverage and also tests
additional conditions.

The goal is to help us increase the test coverage of the previous
scenario, which tested for the persistence of user data (settings). This
way we can drop the requirement to test this on Linux hosts, which is
slightly harder (more cumbersome) to do.
2024-04-01 18:18:41 +03:00
deeplow
dfcb10c494
Move settings.json into constant
Move settings.json into a constant so that it can later be referred to
by the testing module.
2024-04-01 18:18:41 +03:00
deeplow
ad16a0e471
Fix Settings().set() when setting new setting
Settings().set() would fail if we were trying to set a setting that did
not exist before. The reason is that, before setting, it would try to
get the previous value, but through direct key access, which would lead
to an exception.
2024-04-01 18:18:41 +03:00
deeplow
5c86927269
Change "external state" QA scenario to only win/mac
The previous scenario 10 tested the handling of state upon Dangerzone
updates. This, however, was particularly difficult to do on Linux due to
the need to add a repository and install, especially in our
semi-automated QA environment.

For this reason, this commit removes Linux from this scenario and moves
it closer to the top of the scenarios list to reduce the chance of
state "contamination". In other words, before testing the new version,
the tester now installs a previous version and then the new one, thus
guaranteeing that there is no inconsistent state due to installing an
earlier version later in QA.

Fixes #719
2024-04-01 18:18:40 +03:00
Naglis Jonaitis
b284a55dc6
Fix typos 2024-03-28 13:23:36 +02:00
Alex Pyrgiotis
29d6854eca
Minor Wix-related fixes
Fix an outdated instruction for installing WiX, and point to the correct
executable for Windows, which was rebuilt for the new WiX version.
2024-03-23 15:06:21 +02:00
Štěpán Němec
c98bd358ac
Bump PyMuPDF dependency to unbreak Dangerzone image build
The problem (MuPDF C++ bindings generation breakage) was
apparently caused by a recent libclang update on pypi, and
fixed in the 1.24.0 release[1].

Fixes #750
[1]: https://github.com/pymupdf/PyMuPDF/issues/3279
2024-03-22 17:14:42 +02:00
Alex Pyrgiotis
ab1772b9af
ci: Update WiX Toolset path
Update the WiX Toolset from 3.11 to 3.14, since the former is no longer
available in GitHub CI runners.
2024-03-13 21:04:39 +02:00
Alex Pyrgiotis
c40338a13c
Unpin PyMuPDF dependency
Unpin the PyMuPDF dependency, now that we have a way to silence its
debug logs that have been added in its new `fitz` implementation.

Refs #700
2024-03-13 21:03:15 +02:00
Alex Pyrgiotis
ce5adb33fd
Bump poetry dependencies 2024-03-13 21:03:15 +02:00
Alex Pyrgiotis
74c467eaf7
conversion: Do not let PyMuPDF print to stdout
PyMuPDF has some hardcoded log messages that print to stdout [1]. We don't
have a way to silence them, because they don't use the Python logging
infrastructure.

What we can do here is silence a particular call that's been creating
debug messages. For a long term solution, we have sent a PR to the
PyMuPDF team, and we will follow up there [2].

Fixes #700

[1]: https://github.com/freedomofpress/dangerzone/issues/700
[2]: https://github.com/pymupdf/PyMuPDF/pull/3137
2024-03-13 21:03:15 +02:00
Alex Pyrgiotis
be8e2aa36b
Allow setting the compression level of the image
There are times when we may want to build the container image for
testing, but compression takes too much time. If we don't plan to use
this image for production builds, we can instead specify a compression
level so low that the image is compressed instantly.

In this commit, we allow the user to specify the Gzip compression level,
and even set it to 0. The default will always be 9, so that we don't
make a mistake during release.
2024-03-13 21:03:13 +02:00
Alex Pyrgiotis
a31f3370d0
Capture missing logs in second-stage conversion
For a while now, we haven't been getting logs for the second-stage conversion
when using containers. Extend the code to log any captured output from
the second stage conversion, only if we run Dangerzone via our dev
entrypoint.

Note that the Qubes isolation provider was always logging output from
the second stage of the conversion.
2024-03-13 20:59:50 +02:00
deeplow
0449840ec3
dz.ConvertDev: do not teleport .pyc files
On Qubes the conversion in dev mode would fail when converting from a
Fedora 38 development qube via a Fedora 39 disposable qube. The reason
was that dz.ConvertDev was receiving `.pyc` files, which were compiled
for python 3.11 but running on python 3.12.

Unfortunately PyZipFile objects cannot send source python files, even
though the documentation is a little bit unclear on this [1].

Fixes #723

[1]: https://docs.python.org/3/library/zipfile.html#pyzipfile-objects
2024-03-13 07:13:39 +00:00
deeplow
297feab63d
Ctx mgr to ensure destruction of container-pip-deps.txt
The file container-pip-dependencies.txt was being left in a directory
when building the Docker image. This meant that it was being packaged
when it wasn't supposed to be.

To avoid this, we remove the file with the help of a context manager.

The change is minimal and the biggest part of the diff are indentation
changes.

Fixes #739
2024-03-05 17:54:34 +00:00
deeplow
4f08f99e93
Add release notes template
Simplifies the release announcement drafting by providing some
templates. It would have been preferable for this to be a .github config
file, but GitHub does not yet support content templates for release
notes.
2024-03-05 14:48:37 +00:00
deeplow
41c48106fb
RELEASE.md: add check for verifying last-minute criticals 2024-03-05 14:46:05 +00:00
Alex Pyrgiotis
f75d471ec8
Fix OCR bug in Qubes Fedora 38 templates
Provide a fix for an OCR bug that affected Fedora 38 templates of Qubes
OS. In that specific configuration, the PyMuPDF version accepts the
Tesseract data directory only from the `TESSDATA_PREFIX` environment
variable. Our mistake was that we were setting this environment variable
in a dev script, instead of setting it for all configurations.

In this commit, we set an attribute in the fitz.fitz module, so that
both dev scripts and end-user installations can work. This is hacky, but
it targets an old PyMuPDF release after all, so we don't expect things
to break in the long run.

Fixes #737
2024-03-04 16:53:04 +02:00
Alex Pyrgiotis
d35eb56b4b
ci: Test Fedora 39 build instructions 2024-02-26 23:26:24 +02:00
deeplow
a5eb0a5f9d
README.md bump version to 0.6.0 2024-02-26 21:00:00 +02:00
Alex Pyrgiotis
f8984e4b49
Revert "README.md bump version to 0.6.0"
This reverts commit 2784260812.
2024-02-21 17:10:33 +02:00
Alex Pyrgiotis
5b6911af84
Properly add new file extensions
Accept `.svg` and `.bmp` files when browsing via the Dangerzone GUI.
Support for these extensions has already been added in the converter
code that runs in the sandbox (cd99122385)
but they were erroneously left out from the filter in the Dangerzone
main window.
2024-02-20 16:02:38 +02:00
Alex Pyrgiotis
e73f10f99b
Handle gracefully unknown error codes
Do not throw exceptions for unknown error codes. If
`get_proc_exception()` gets called from within an exception context and
raises an exception itself, then this exception will not get caught, and
it will get lost.

Prefer instead to return an exception class that we have for this
purpose, and show the user the unknown error code of the conversion
process.
2024-02-20 16:00:35 +02:00
Alex Pyrgiotis
aeb8c33b6e
Update expected output for a QA scenario
Inform testers that the container code no longer returns "UNTRUSTED >"
strings in its output. Every string is trusted now, and the output will
be similar for container and Qubes isolation providers alike.
2024-02-20 16:00:35 +02:00
Alex Pyrgiotis
d376e1da00
tests: Adapt Qubes tests
Adapt Qubes tests to the addition of the conversion process in
doc_to_pixels() call.
2024-02-20 15:58:42 +02:00
Alex Pyrgiotis
bc55a64864
Appease lint checker 2024-02-20 15:55:46 +02:00
Alex Pyrgiotis
96cf5d0b4b
ci: Improve commit message lint
Improve the commit message check, by logging only the commit title, and
doing away with the extra spaces.
2024-02-20 15:55:45 +02:00
Alex Pyrgiotis
634523dac9
Get underlying error when conversion fails
When we get an early EOF from the converter process, we should
immediately get the exit code of that process, to find out the actual
underlying error. Currently, the exception we raise masks the underlying
error.

Raise a ConverterProcException, which in turn makes our error-handling
code read the exit code of the spawned process and convert it to a
helpful error message.

Fixes #714
2024-02-20 15:55:45 +02:00
Alex Pyrgiotis
6ee1d14c9a
Start conversion process earlier
Start the conversion process earlier, so that we have a reference to the
Popen object in case of an exception.
2024-02-20 15:55:45 +02:00
deeplow
e4a5dbce46
Don't show 50% duplicated progress info
50% would show twice in the conversion progress due to an overlap in
conversion progress values. The doc_to_pixels would be from 0-50% and
the pixels_to_pdf from 50%-100%.

This commit makes the first part go from 0 to 49% instead.

Fixes #715
2024-02-20 13:47:15 +00:00
deeplow
eb19926f9c
Update screenshots (hamburger menu + capitalization) 2024-02-20 13:45:38 +00:00
deeplow
2784260812
README.md bump version to 0.6.0 2024-02-20 13:45:38 +00:00
Alex Pyrgiotis
531a5bc96f
qa: Add extra actions in the Windows QA script 2024-02-19 17:13:57 +02:00
Alex Pyrgiotis
fd241e5964
qa: Consume stdin on Windows platforms
On Windows platforms, we can't consume the stdin using select(), because
it's not available for pipes [1]. We can instead consume it using some
native Windows calls.

[1]: From https://docs.python.org/3/library/select.html#select.select:

     "File objects on Windows are not acceptable, but sockets are. On
     Windows, the underlying select() function is provided by the
     WinSock library, and does not handle file descriptors that don’t
     originate from WinSock."
2024-02-19 17:13:57 +02:00
Etienne Perot
04508d9694
Check that image build was successful. 2024-02-19 15:37:50 +02:00
deeplow
e375624fdc
Bump Qubes Fedora on RELEASE.md
Fixes #712
2024-02-15 14:42:01 +00:00
deeplow
22ab6f65bf
Bump CodeQL upload action to V3 due to deprecation
The following warning was showing up in our conversion logs [1]:

| Warning: CodeQL Action v2 will be deprecated on December 5th, 2024.
| Please update all occurrences of the CodeQL Action in your workflow
| files to v3. For more information, see https://github.blog/changelog/2024-01-12-code-scanning-deprecation-of-codeql-action-v2/

[1]: https://github.com/freedomofpress/dangerzone/actions/runs/7916735564/job/21611227503?pr=718
2024-02-15 14:40:33 +00:00
deeplow
f569695bb0
CI: Prevent fixup / wip commits 2024-02-14 13:15:27 +00:00
deeplow
75f8d76c5b
Appease new version of black lint tool 2024-02-13 11:36:10 +00:00
deeplow
7168a4078a
Bump poetry dependencies 2024-02-13 11:36:09 +00:00
deeplow
d2065ea76e
FIXUP: add clang-dev contribution 2024-02-13 11:12:19 +00:00
deeplow
9ddb9734ea
Update changelog for v0.6.0 2024-02-13 11:12:19 +00:00
deeplow
832775f34e
Bump version to 0.6.0 2024-02-13 11:12:19 +00:00
deeplow
8f11156ce4
Deprecate Ubuntu Lunar Lobster (EOL)
Fixes #705
2024-02-13 11:07:11 +00:00
Alex Pyrgiotis
2703448d60
Update Jammy build instructions regarding conmon
Update the build instructions for Ubuntu Jammy regarding conmon, now
that oldstable-proposed-updates no longer offers a patched conmon
package. Propose instead to install conmon from our apt-tools-prod repo.
2024-02-13 12:33:57 +02:00
Alex Pyrgiotis
42c64569af
dev_scripts: Install conmon from our apt-tools-prod repo
Instead of installing a patched conmon version from the
oldstable-proposed-updates repo, install it from our apt-tools-prod
repo. This applies to just Ubuntu Jammy, since the rest of the platforms
don't have this problem.
2024-02-13 11:55:32 +02:00
Alex Pyrgiotis
0d7b6e8533
dev_scripts: Do not backport conmon in Bullseye
Now that the conmon package with version 2.0.25+ds1-1.1+deb11u1 has been
released [1] for Debian Bullseye, there is no need to install it from
the oldstable-proposed-updates repo any more.

[1]: https://tracker.debian.org/pkg/conmon
2024-02-13 11:26:15 +02:00
deeplow
3fb797cdd1
Temporarily pin PyMuPDF==1.23.8 in container
PyMuPDF 1.23.9 swapped the new fitz implementation (fitz_new)
with the fitz module. In the new module there are prints in the code
that interfere with our stdout for sending JSON from the container.
Pinning the version seems to have no adverse consequences [1], since
fitz_old hasn't had significant changes and it gives breathing room for
the print-related issue to be tackled in PR [2].

Fixes temporarily #700

[1]: https://github.com/freedomofpress/dangerzone/issues/700#issuecomment-1938357651
[2]: https://github.com/pymupdf/PyMuPDF/pull/3137
2024-02-12 11:37:46 +00:00
deeplow
879fca6f9f
Remove unneeded TESSDATA_PREFIX setting in container
The container image does not need the TESSDATA_PREFIX env variable since
its PyMuPDF version is new enough to support `tessdata` as an argument
when calling the PyMuPDF tesseract method.
2024-02-07 13:14:08 +00:00
deeplow
6006beeb03
Fix OCR on Qubes: PyMuPDF required TESSDATA_PREFIX
PyMuPDF versions 1.22.5 and later accept the Tesseract data path as
an argument to `pixmap.pdfocr_tobytes()` [1], but lower versions require
setting the TESSDATA_PREFIX environment variable instead [2].

Because on Qubes the pixels-to-pdf conversion happens on the host, and
Qubes has a lower PyMuPDF package version, we need to pass the path via
the environment variable instead.

NOTE: the TESSDATA_PREFIX env. variable was set in dangerzone-cli
instead of closer to the calling method in `doc_to_pixels.py` since
PyMuPDF reads this variable as soon as the fitz module is imported
[3][4].

[1]: https://pymupdf.readthedocs.io/en/latest/pixmap.html#Pixmap.pdfocr_tobytes
[2]: https://pymupdf.readthedocs.io/en/latest/installation.html#enabling-integrated-ocr-support
[3]: https://github.com/pymupdf/PyMuPDF/discussions/2439
[4]: https://github.com/pymupdf/PyMuPDF/blob/5d6a7db/src/__init__.py#L159

Fixes #682
2024-02-07 13:13:10 +00:00
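A minimal sketch of the two ways to point PyMuPDF at the Tesseract language data; the file name and tessdata path are placeholders, and only the environment-variable route works on older PyMuPDF versions:

    import os

    # Older PyMuPDF (< 1.22.5) reads TESSDATA_PREFIX as soon as fitz is
    # imported, so it must be set before the import.
    os.environ["TESSDATA_PREFIX"] = "/usr/share/tessdata"  # placeholder path

    import fitz

    doc = fitz.open("input.pdf")  # placeholder input file
    pix = doc[0].get_pixmap(dpi=150)
    ocr_pdf = pix.pdfocr_tobytes(language="eng")

    # Newer PyMuPDF (>= 1.22.5) can take the path as an argument instead:
    #     pix.pdfocr_tobytes(language="eng", tessdata="/usr/share/tessdata")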
Alex Pyrgiotis
d1afe4c30a
Fix Podman crashes due to old conmon version
Switching from mounting files to writing to stdout has introduced some
Podman crashes in specific environments (Ubuntu Jammy / Debian Bullseye)
due to a conmon bug that affects version 2.0.25.

Fixing it for various permutations of the environments we support
requires the following:

1. CI tests: Install conmon from the oldstable-proposed-updates repo in
   our Debian Bullseye / Ubuntu Jammy dev/end-user environments.
2. Developers: Add a line in BUILD.md that suggests users install
   conmon from the oldstable-proposed-updates repo, or some other repo
   they prefer.
3. End-user installations: We will build conmon for Ubuntu Jammy, and
   wait until the proposed updates repo gets merged in Debian Bullseye.

Fixes #685
2024-02-07 12:53:15 +00:00
deeplow
8a32d80762
Remove leftover progress variable in pixels_to_pdf
Since the progress information is now inferred on host based on the
number of pages obtained, progress-tracking variables should be removed
from the server.
2024-02-06 20:11:52 +00:00
deeplow
69c2a02d81
Remove timeouts
Remove timeouts for several reasons:

1. Lost purpose: after implementing page streaming in containers, the
   only subprocess we have left is LibreOffice. So we don't have such a
   big risk of commands hanging (the original reason for timeouts).

2. Little benefit: predicting execution time is a generally unsolvable
   computer-science problem. Ultimately we were guessing an arbitrary
   time based on the number of pages and the document size. As a guess
   we made it pretty lax (30s per page or MB). A document hanging for
   this long will probably lead to user frustration in any case and the
   user may be compelled to abort the conversion.

3. Technical challenges with non-blocking timeouts: there have been
   several technical challenges in keeping timeouts that we've made an
   effort to accommodate. A significant one was having to do
   non-blocking reads to ensure we could time out when reading the
   conversion stream (and then used here)

Fixes #687
2024-02-06 20:11:43 +00:00
deeplow
4d3f2b32c7
Revert "Add Stopwatch implementation"
This reverts commit 344d6f7bfa.
Stopwatch is no longer needed now that we're removing timeouts.
2024-02-06 19:42:42 +00:00
deeplow
f31374e33c
Revert "Add non-blocking read utility"
This reverts commit fea193e935.

This is part of the purge of timeout-related code since we no longer
need it [1]. Non-blocking reads were introduced in the reverted commit
in order to be able to cut a stream mid-way due to a timeout. This is
no longer needed now that we're getting rid of timeouts.

[1]: https://github.com/freedomofpress/dangerzone/issues/687
2024-02-06 19:42:41 +00:00
deeplow
07dd54cd13
Fix hanging: disable container logging
The conversion was hanging arbitrarily [1] on some systems. Sometimes it
would send the full page, other times it would stop half-way.

Originally found by @apyrgio.

Co-authored-by: @apyrgio

[1]: https://github.com/freedomofpress/dangerzone/pull/627#issuecomment-1892491968
2024-02-06 19:42:41 +00:00
deeplow
f3032a7142
Make big endian explicit in int to bytes
Fix issues in older distros that don't yet support Python 3.11, where
endianness was not a default argument [1]. This is in response to CI
failures [2].

[1]: https://docs.python.org/3/library/stdtypes.html#int.to_bytes
[2]: https://app.circleci.com/pipelines/github/freedomofpress/dangerzone/2186/workflows/e340ca21-85ce-42b6-9bc3-09e66f96684a/jobs/27380y
2024-02-06 19:42:41 +00:00
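For illustration, the portable spelling looks like this (the variable name is just an example):

    # Python 3.11 made byteorder optional (defaulting to "big"), so this only
    # works on newer interpreters:
    #     (1024).to_bytes(2)
    # Spelling out the endianness works everywhere and is unambiguous:
    page_count = (1024).to_bytes(2, "big")           # b'\x04\x00'
    assert int.from_bytes(page_count, "big") == 1024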
deeplow
5e169a832b
Bump CI macOS python version to 3.11
Attempt to fix an issue when installing poetry [1].

[1]: https://github.com/freedomofpress/dangerzone/actions/runs/7487413482/job/20379748604?pr=627
2024-02-06 19:42:41 +00:00
deeplow
1835756b45
Allow each conversion to have its own proc
If we increased the number of parallel conversions, we'd run into an
issue where the streams were getting mixed together. This was because
Converter.proc was a single attribute. This commit turns it into a
local variable so that this mixup doesn't happen.
2024-02-06 19:42:41 +00:00
deeplow
943bab2def
Move Qubes-specific tests also to containers
Now that Qubes and Containers essentially share the same code, we can
have both run the same tests.
2024-02-06 19:42:41 +00:00
deeplow
61e7a3c107
Fix isolation provider tests
Conversion methods had changed and that was part of the reason why
the tests were failing. Furthermore, due to the `provider.proc`, which
stores the associated qrexec / container process, "server" exceptions
raise an InterruptedConversion error (now ConverterProcException), which
then requires interpretation of the process exit code to obtain the
"real" exception.
2024-02-06 19:42:41 +00:00
deeplow
0a54f6461a
Speed up container image building (pull + build)
Avoids downloading the container image 4 times in the multi-stage build
by first pulling the alpine image once and then building without any
pulls.

Implemented following a suggestion of @apyrgio.
2024-02-06 19:42:41 +00:00
deeplow
550786adfe
Remove untrusted progress parsing (stderr instead)
Now that only the second container can send JSON-encoded progress
information, we can remove the untrusted JSON parsing. The parse_progress
method was also renamed to `parse_progress_trusted` to ensure future
developers don't mistake it for a safe method.

The old methods for sending untrusted JSON were repurposed to send the
progress instead to stderr for troubleshooting in development mode.

Fixes #456
2024-02-06 19:42:40 +00:00
deeplow
c991e530d0
Fix IsolationProvider.percentage variable reuse
If one converted more than one document, the state of
IsolationProvider.percentage, being stored in the IsolationProvider
instance, would get reused for the second document. The fix is to
keep it as a local variable, but we could explore having progress stored
on the document itself, for example, or having one IsolationProvider per
conversion.
2024-02-06 19:42:40 +00:00
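A minimal sketch of the bug pattern and the fix; the names and logic are illustrative, not the actual Dangerzone code:

    class IsolationProvider:
        def __init__(self) -> None:
            self.percentage = 0.0  # buggy: this state survives across documents

        def convert_buggy(self, num_pages: int) -> None:
            for _ in range(num_pages):
                self.percentage += 100.0 / num_pages  # 2nd document starts at 100%

        def convert_fixed(self, num_pages: int) -> None:
            percentage = 0.0  # local variable: every conversion starts from 0%
            for _ in range(num_pages):
                percentage += 100.0 / num_pages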
deeplow
0a099540c8
Stream pages in containers: merge isolation providers
Merge the core code of the Qubes and Containers isolation providers into
the parent IsolationProvider abstract class.

This is done by streaming pages in containers exclusively in the first
conversion process. The commit is rather large due to the multiple
interdependencies of the code, making it difficult to split into various
commits.

The main conversion method (_convert), now in the superclass, simply calls
two methods:
  - doc_to_pixels()
  - pixels_to_pdf()

Critically, doc_to_pixels is implemented in the superclass, diverging
only in a specialized method called "start_doc_to_pixels_proc()". This
method obtains the process responsible for communicating with the
isolation provider (container / disp VM), via `podman`/`docker` and qrexec
on Containers and Qubes respectively.

Known regressions:
  - progress reports stopped working on containers

Fixes #443
2024-02-06 19:42:33 +00:00
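A rough sketch of the structure described above, with method names taken from the commit message; everything else (signatures, bodies, the podman invocation) is illustrative:

    import subprocess
    from abc import ABC, abstractmethod


    class IsolationProvider(ABC):
        """Shared conversion skeleton; subclasses only say how to spawn the sandbox."""

        def _convert(self, input_path: str, output_path: str) -> None:
            pixels = self.doc_to_pixels(input_path)
            self.pixels_to_pdf(pixels, output_path)

        def doc_to_pixels(self, input_path: str) -> bytes:
            # Common logic: stream the document into the sandboxed process
            # and read the page pixels back from its stdout.
            proc = self.start_doc_to_pixels_proc()
            with open(input_path, "rb") as f:
                out, _ = proc.communicate(f.read())
            return out

        def pixels_to_pdf(self, pixels: bytes, output_path: str) -> None:
            ...  # second conversion stage, omitted here

        @abstractmethod
        def start_doc_to_pixels_proc(self) -> subprocess.Popen:
            """Spawn the container (podman/docker) or the disposable qube (qrexec)."""


    class Container(IsolationProvider):
        def start_doc_to_pixels_proc(self) -> subprocess.Popen:
            return subprocess.Popen(  # illustrative command only
                ["podman", "run", "-i", "dangerzone.rocks/dangerzone"],
                stdin=subprocess.PIPE,
                stdout=subprocess.PIPE,
            )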
deeplow
331b6514e8
Containers: remove debug messages (via files)
Remove container_log messages ahead of debug info being sent over
standard streams.
2024-02-06 18:54:39 +00:00
deeplow
dca46d0a6b
Homogenize qubes and containers inner convert method
Simple rename of the __convert() method in the Qubes conversion to make
the code structurally similar.
2024-02-06 18:54:31 +00:00
Alex Pyrgiotis
93bf0af348
ci: Reclaim some of the used space
Reclaim some storage space in the middle of the CI job that builds and
installs Dangerzone in Fedora. The reason is that previously, we
encountered an issue with CI runners running out of space.
2024-02-05 15:35:12 +02:00
deeplow
7f0346686d
Add Dangerzone logo to Fedora build
Fixes #645
2024-02-01 13:53:49 +00:00
deeplow
cd99122385
Adds file formats: epub svg bmp pnm bpm ppm
Partial fix for #660. Some files are missing due to limitations [1]:
- PSD - only available from PyMuPDF>=1.23.0 (qubes-fedora is lower)
- TXT - only available from PyMuPDF>=1.23.7 (qubes-fedora is lower)
- JXR - PyMuPDF was refusing to open it due to a missing codec [1]
- JPX - Generated test file was rejected by PyMuPDF [2]
- FB2 - Most often cannot be detected by mime type alone [3]
- CBZ - (idem)
- XPS - (idem)
- MOBI - (idem)
- PAM - General version of other file format already included, so I
  decided not to include this extension [0]

New test files were generated locally:
 - epub - generated with calibre's convert-ebook from another
   sample file
 - svg - generated with inkscape from a mix of a default template
   (hexagons) and a logo's PNG file
 - bmp, pnm, bpm, ppm - generated with ImageMagick's 'convert' from
   tests/test_docs/sample-png.png

[0]: https://github.com/freedomofpress/dangerzone/issues/660#issuecomment-1914681487
[1]: https://github.com/freedomofpress/dangerzone/issues/660#issuecomment-1916803201
[2]: https://github.com/freedomofpress/dangerzone/issues/660#issuecomment-1916870347
[3]: https://github.com/freedomofpress/dangerzone/issues/688
2024-01-31 19:58:48 +00:00
deeplow
4e720aa6e2
Replace 'None' conversion type with "PyMuPDF"
Replaced for clarity, since this conversion is in fact
handled by PyMuPDF.
2024-01-31 19:58:36 +00:00
Alex Pyrgiotis
3e10fd1df4
Explain what happens when PySide6 gets updated
Explain what happens when we bump our `poetry.lock` and a new
PySide6 version comes out. Also, have a step-by-step guide on how the maintainer
should create a new PySide6 RPM and update FPF's repo, so that
Dangerzone can be released.
2024-01-31 17:11:31 +02:00
Alex Pyrgiotis
46d5827772
Elaborate on how to add/remove Linux platforms
Explain the process behind adding/removing Linux platforms, prior
to a release.
2024-01-31 17:11:30 +02:00
Alex Pyrgiotis
3bc3c6c120
ci: Build and install Dangerzone RPMs
Add some Fedora CI jobs that build RPMs, install them in an end-user
environment, and make a simple conversion and GUI import check. These
are basically smoke tests for Fedora, similar to the ones we have for
Debian.
2024-01-31 17:11:30 +02:00
Alex Pyrgiotis
d54ef875a6
Add official support for Fedora 39
Now that we can create a Dangerzone RPM that depends on PySide6, we can
officially support Fedora 39 as a platform. Add this platform in our CI
tests, as well as our install/release notes.

Fixes #606
2024-01-31 17:11:30 +02:00
Alex Pyrgiotis
b0da1dde5f
dev_scripts: Build end-user Fedora env with PySide6
Extend the env.py script to build an end-user, Fedora 39+ environment
with PySide6 installed, as a regular RPM package. Previously, this was
only possible for development environments with PySide6 downloaded from
PyPI.

As a way to simplify builds, the env.py script offers the option to
download the RPM package itself from FPF's RPM repo [1], if the package
has been uploaded.

[1]: https://packages.freedom.press/yum-tools-prod
2024-01-31 17:11:30 +02:00
Alex Pyrgiotis
84037d4ffb
dev_scripts: Return exit code for failures
The env.py dev script does not return an exit code for failures, so we
add the necessary 'return' statements to do so.
2024-01-31 17:07:32 +02:00
Alex Pyrgiotis
3684b7ff61
Build Dangerzone RPM with PySide6 dependency
Update our RPM spec file to include PySide6 as a dependency, for Fedora
39 onward.
2024-01-31 17:07:32 +02:00
Alex Pyrgiotis
d7ee162852
Add support for Python 3.12
Fedora 39 ships with Python 3.12 by default, which Dangerzone previously
did not support due to limitations from the PySide6 package. Now that
the PySide6 package has been updated to 6.6.1 and the limitation has
been lifted, we should reflect this in pyproject.toml.
2024-01-31 17:07:32 +02:00
Alex Pyrgiotis
741c8311ee
Bump python dependencies via poetry lock 2024-01-31 17:07:32 +02:00
Alex Pyrgiotis
72ddbfd55a
dev_scripts: Install a subset of Podman deps
Install a subset of Podman dependencies, so that we don't also install
Systemd. Doing so can introduce some subtle issues of its own, which is
why we prefer cherry-picking the Podman packages we really need.

Fixes #689
2024-01-30 14:24:45 +02:00
Alex Pyrgiotis
d854657883
Include data files only in source distribution
Make Poetry include data files only in the source distribution, and not
on our wheels. This mainly makes RPM packaging a bit easier, but does
not solve the problem of how to install files to
`/usr/share/dangerzone`.

Also, include files using globs, which is the way Poetry prefers.

Fixes #678
Refs #677
2024-01-23 16:19:45 +02:00
Alex Pyrgiotis
067e787a3d
install: Remove .gitignore for rpm-build
Remove the .gitignore file for rpm-build, because it makes
Poetry ignore the Dangerzone module when building the Python wheel.

Refs #678
2024-01-23 16:19:44 +02:00
deeplow
629278ae4a
Add capitalization to the changelog 2024-01-23 09:10:47 +00:00
sudwhiwdh
3d426ed36b
Linux desktop entry capitalisation 2024-01-22 11:49:42 +00:00
sudwhiwdh
b4ef47e101
GUI header capitalisation 2024-01-22 11:38:54 +00:00
Prateek Jain
699b116d4d
Add clang-dev to Dockerfile 2024-01-15 16:54:00 +00:00
Alex Pyrgiotis
a6755080ad
Ignore CVE-2023-7104 from our security scans
Our security scans for the released container image have flagged
CVE-2023-7104. Our assessment is that this CVE doesn't affect
Dangerzone, mainly because our understanding is that attackers cannot
embed SQLite dbs within LibreOffice spreadsheets.
2024-01-09 20:28:01 +02:00
Alex Pyrgiotis
2f318f1633
Remove stale ignored CVEs
Remove some CVEs from our ignore list of Grype, which affected previous
Dangerzone images.
2024-01-09 20:18:11 +02:00
deeplow
f27296cd45
Replace MIT license with AGPLv3
License change required due to the inclusion of the AGPL-licensed
PyMuPDF. This library greatly benefited Dangerzone in many aspects
detailed in [1].

Fixes #658

[1]: https://github.com/freedomofpress/dangerzone/issues/658
2024-01-04 09:57:49 +00:00
Alex Pyrgiotis
7e21d5e8c4
ci: Use Docker for building images, instead of Podman 2024-01-03 15:57:49 +00:00
Alex Pyrgiotis
f254575cb4
install: Make build image script more flexible
Add the following functionality to the build image script:

1. Let the user choose the container runtime of their choice. In some
   systems, both Docker and Podman may be available, so we need to let
   the user choose which runtime they want.
2. Let users choose if they want to save the image. For non-production
   builds, we may want to simply build the container image, without
   the time penalty of compression.
2024-01-03 15:57:41 +00:00
deeplow
f1d90c6fa9
Compress per page when not using OCR
Make the compression happen per page when OCR is not enabled [1].

[1]: https://github.com/freedomofpress/dangerzone/pull/622#discussion_r1410986342
2024-01-03 12:58:36 +00:00
deeplow
e2531279c0
FIXUP Revert "Disable image compression when saving PDF"
This reverts commit f074db0beaa50389634203657f9b46307164a353.
2024-01-03 12:58:36 +00:00
deeplow
f676891482
Remove Dockerfile dependencies replaced by PyMuPDF
PyMuPDF replaced the need for almost all dependencies, which this commit
now removes.

We are also removing tesseract-ocr as a dependency since
(to our surprise) PyMuPDF ships directly with tesseract binaries [1].
However, now that tesseract-ocr is not available directly as a binary
tool, the `test_ocr.py` needed to be changed.

Fixes #658

[1]: https://github.com/freedomofpress/dangerzone/issues/658#issuecomment-1861033149
2024-01-03 12:58:36 +00:00
deeplow
ee35e28aa6
Disable image compression when saving PDF
Some tests [1] led to the conclusion that ocr_compression does the same
to the file (performance- and size-wise) as deflating images when saving
it. However, having both methods active does add a bit of extra time.
For this reason we're disabling image deflation (the default option).

[1]: https://github.com/freedomofpress/dangerzone/pull/622#discussion_r1434042296
2024-01-03 12:58:36 +00:00
deeplow
6f61e44502
Solve import errors by lazy-loading fitz module
Qubes does the pixels-to-pdf conversion on the host, whereas the
containers version doesn't. This leads to an issue where the containers
version tries to load fitz, which isn't installed there, just because
it's checking whether it should run the Qubes version.

The error it was showing was something like this:

    ImportError while loading conftest '/home/user/dangerzone/tests/conftest.py'.
        tests/__init__.py:8: in <module>
            from dangerzone.document import SAFE_EXTENSION
        dangerzone/__init__.py:16: in <module>
            from .gui import gui_main as main
        dangerzone/gui/__init__.py:28: in <module>
            from ..isolation_provider.qubes import Qubes, is_qubes_native_conversion
        dangerzone/isolation_provider/qubes.py:15: in <module>
            from ..conversion.pixels_to_pdf import PixelsToPDF
        dangerzone/conversion/pixels_to_pdf.py:16: in <module>
            import fitz
        E   ModuleNotFoundError: No module named 'fitz'

For context see discussion in [1].

[1]: https://github.com/freedomofpress/dangerzone/pull/622#issuecomment-1839164885
2024-01-03 12:58:36 +00:00
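The fix boils down to the usual lazy-import pattern; a minimal sketch, where the environment check is a made-up stand-in for the real one:

    def is_qubes_native_conversion() -> bool:
        import os
        return os.environ.get("QUBES_CONVERSION", "0") == "1"  # stand-in check


    def pixels_to_pdf(pixels_dir: str) -> None:
        # Importing fitz here (instead of at module level) means the containers
        # flavor, which never calls this function, doesn't need PyMuPDF at all.
        import fitz
        ...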
deeplow
773fcfa75b
Add poetry as CI container build dependency
Due to the new build-image.py, which now uses `poetry export`, we need to
explicitly install poetry in CI before building the container image.
2024-01-03 12:58:36 +00:00
deeplow
80db7bb02e
Remove pre-pymupdf exceptions and detect pymupdf ones 2024-01-03 12:58:35 +00:00
deeplow
e0b092692d
Multi-stage Dockerfile build
Breaks down the container build into multiple stages in order to speed
up build times. Building PyMuPDF was taking too long and this way it can
be cached.

The original version was made by @apyrgio
2024-01-03 12:58:35 +00:00
deeplow
1cd87f73a8
Bump pymupdf to 1.23.8 2024-01-03 12:58:35 +00:00
deeplow
2b082913a0
Bump pymupdf version 1.23.7
The build was failing due to missing kernel libraries. Adding the
linux-headers dependency solves the issue.
2024-01-03 12:58:35 +00:00
deeplow
250d8356cd
Hash-verify container pip install & merge build-image
Ensure that the container image installs pymupdf (unavailable
in the repos) with verified hashes. To do so, it has the pymupdf
dependency declared in a "container" group in `pyproject.toml`, which
then gets exported into a requirements.txt, which is then used for
hash verification when building the container.

Because this required modifying the container image build scripts, they
were all merged to avoid duplicate code. This was an overdue change
anyway.
2024-01-03 12:58:35 +00:00
deeplow
7b57cb209e
PIP force --break-system-packages
We're intentionally bypassing PEP 668 [1], which prevents the
installation of non-distro python wheels alongside system packages to
avoid incompatibilities at distro-level.

We are doing this since our container image is a
controlled environment (we only ship a version after rigorous testing).

[1]: https://peps.python.org/pep-0668/
2024-01-03 12:58:35 +00:00
deeplow
b75417bbec
Remove all server-side timeouts from doc to pixels
Now we're using client-side timeouts, so the server-side ones are not
needed. Implemented following the suggestion from @apyrgio [1].

[1]: https://github.com/freedomofpress/dangerzone/pull/622#discussion_r1413906514
2024-01-03 12:58:35 +00:00
deeplow
576cbd3382
Fix DPI mismatch between doc2pixels and pixels2pdf
The resulting document was larger in dimensions than the original one due
to a mismatch in DPI settings. When converting documents to pixels we
were setting the DPI to 150 pixels per inch. Then when converting back
into a PDF we were using 70 DPI. This difference would result in an
overall larger document in dimensions (though not necessarily in file
size).

Fixes #626
2024-01-03 12:58:34 +00:00
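The arithmetic behind the bug, as a small sketch with one shared constant (the US-Letter page size is used only for illustration):

    DPI = 150  # use the same value in both conversion stages

    # A US-Letter page is 8.5 inches wide. Rendering at 150 DPI but rebuilding
    # the PDF as if the pixels were 70 DPI scales every dimension by 150/70:
    pixels_wide = 8.5 * 150              # 1275 px
    rebuilt_inches = pixels_wide / 70    # ~18.2 in instead of 8.5 in

    # Reusing the same DPI restores the original size:
    assert pixels_wide / DPI == 8.5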
deeplow
e5dbe25abb
Replace 'convert' with PyMuPDF for images
PyMuPDF can also convert images of the types we already support so we
don't need ImageMagick's 'convert'.
2024-01-03 12:58:34 +00:00
deeplow
a3a64882a3
Add PyMuPDF to dev env in Qubes
Since PyMuPDF is now used in Pixels to PDF we needed to add it to the
qubes development environment.
2024-01-03 12:58:32 +00:00
deeplow
77d5ea5940
Add PyMuPDF in pixels_to_pdf replacing old logic
Adding PyMuPDF essentially makes the code much simpler since it can do
everything that we'd need multiple programs for. It also includes
tesseract-OCR integration, which this commit makes use of.
2024-01-03 12:56:33 +00:00
deeplow
ba17016643
Doc_to_pixels: remove unneeded timeout
Timeout can no longer be used since we're not calling a subprocess. We
could still implement it, but it's more worthwhile to rely on
yet-to-be-implemented client-side timeouts (in containers).
2024-01-03 12:40:45 +00:00
deeplow
317deadbe4
Replace pdfinfo logic (get # pages) with PyMuPDF 2024-01-03 12:40:45 +00:00
deeplow
327ab8791f
Replace pdftoppm logic with PyMuPDF (native python)
Use PyMuPDF (AGPL-licensed) within the container conversion to replace
the pdf conversion to RGB. This massively simplifies the code since
PyMuPDF is a native python library.
2024-01-03 12:40:45 +00:00
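A minimal sketch of the PyMuPDF calls that replace `pdfinfo` and `pdftoppm` in the two commits above; the file name is a placeholder:

    import fitz  # PyMuPDF

    doc = fitz.open("untrusted.pdf")    # placeholder input
    num_pages = doc.page_count          # replaces parsing `pdfinfo` output

    for page in doc:
        pix = page.get_pixmap(dpi=150)  # replaces `pdftoppm`; RGB by default
        width, height = pix.width, pix.height
        rgb_bytes = pix.samples         # width * height * 3 raw bytes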
deeplow
e923ac0788
Remove whitespace
Remove whitespace accidentally added in [1].

[1]: commit d6c162ea080f0df27f3109bf4aab84788704272c
2024-01-03 10:52:47 +00:00
deeplow
555cd33eb6
Simplify Qubes install instructions
Many instructions relied on the fact that the developer would have to
copy over the RPC policies and install the dependencies manually on the
template. This is no longer needed since a Qubes-built package ships
the necessary RPC policies and dependencies.

Removing the dependencies installation also helps with documentation
maintenance since it would be yet another place where we would need to
keep the dependency list up to date.
2024-01-03 10:52:47 +00:00
deeplow
5849800606
Improve "Developing Dangerzone" docs section
Make it clearer that we are talking about the two main
development-workflow differences when developing on Qubes.
2024-01-03 10:52:46 +00:00
deeplow
d1eb4ec76c
Remove duplicate "cd dangerzone" instruction 2024-01-03 10:52:46 +00:00
deeplow
3f6437cf66
Remove poetry install part from Qubes instructions
Make the first part of the Dangerzone development instructions just
about installing the Qubes RPC policies. Poetry install and other
development-related tasks should be covered in the Fedora part of the
instructions to avoid duplication.
2024-01-03 10:52:46 +00:00
deeplow
6597b57452
Apply 2023-10-25 advisory in BUILD instructions
In the security advisory published on 2023-10-25 [1] we updated the
instructions in INSTALL.md, but missed the ones in BUILD.md, leaving
developers with a network path. This is not too critical since it's for
development, but it should be fixed in any case.

[1]: https://github.com/freedomofpress/dangerzone/blob/5acb968/docs/advisories/2023-10-25.md
2024-01-03 10:52:46 +00:00
deeplow
0ae7f89dea
Add note that Qubes instr. are on dom0 terminal
It was not entirely clear that what we showed should be run in a
terminal.
2024-01-03 10:52:46 +00:00
deeplow
5121b4f702
Qubes: clarify instructions for skipping step 1
Make it clearer that step 1 should be skipped entirely when the user
wants to install it on their default template.
2024-01-03 10:52:46 +00:00
deeplow
cac06caf82
Correct Qubes Instructions: dz-dvm is not disposable
The qube dz-dvm is not a disposable qube but rather a disposable
template qube (aka. app qube).
2024-01-03 10:52:46 +00:00
Alex Pyrgiotis
5bf7549b55
Fix typo 2023-12-29 18:30:48 +02:00
Alex Pyrgiotis
9f713ebb8b
ci: Test official installation instructions
Create a new GitHub Actions workflow which aims to continuously test our
official installation instructions. The way we do it is the following:

1. Create two jobs, one for the Debian-based distros, and one for Fedora
   ones.
2. Copy the instructions from INSTALL.md into each job.
3. Create a matrix that runs the installation jobs in parallel, for each
   supported distro and version.

The jobs will run only on 00:00 UTC, and not on every PR, since it
wouldn't make sense otherwise.

Fix #653
2023-12-21 21:51:07 +02:00
Alex Pyrgiotis
12eda5d73c
dev_scripts: Add missing git dependency
Add missing git dependency, which is required to run the `isort` command
on the development environment.
2023-12-21 21:38:39 +02:00
Alex Pyrgiotis
e137976581
dev_scripts: Upload release assets to GitHub
Add a script to upload release assets to GitHub. This script can take
either a release ID, a Git tag, or the latest draft release.

Note that while GitHub's official client can upload assets to releases,
it cannot upload them to draft releases [1], hence why we created this
script.

[1]: https://cli.github.com/manual/gh_release_upload
2023-12-21 21:38:39 +02:00
deeplow
42228647e0
Fix lint due to inconsistent qa.py and RELEASE.md
Missed during the merge of PR #654 [1].

[1]: https://github.com/freedomofpress/dangerzone/pull/654
2023-12-19 08:10:18 +00:00
deeplow
2c5f04c2c3
Add instructions for adding release tag
Instructions only stated how to verify the release tag but not how
to make it.
2023-12-19 08:06:14 +00:00
deeplow
184abfd5fc
Fix Qubes indentation 2023-12-18 08:19:26 +00:00
deeplow
418e388535
Add note that Windows 11 is in a VM 2023-12-18 08:18:27 +00:00
deeplow
2594dab31d
Simplify initial setup section titles 2023-12-18 08:18:27 +00:00
deeplow
bb653b3425
Right-click (scenario 8) can be tested under Qubes
Fixes #641
2023-12-18 08:18:27 +00:00
deeplow
d0e9eea55c
"Checklist-ize" RELEASE.md 2023-12-18 08:18:27 +00:00
deeplow
24ddda4070
Add point about creating an issue for QA & Release 2023-12-18 08:18:27 +00:00
deeplow
b3fed27178
Move container building notice to release instructions 2023-12-18 08:18:27 +00:00
deeplow
65afdc68cd
Add 'Release' section and indent subsections 2023-12-18 08:18:27 +00:00
deeplow
01b107ced9
Title-case various sections for consistency 2023-12-18 08:18:26 +00:00
deeplow
05b8e59d67
Make RELEASE Windows structure similar to macOS 2023-12-18 08:18:26 +00:00
deeplow
3d21e17e3b
Reorganize macOS release into setup and building 2023-12-18 08:18:26 +00:00
deeplow
a936780266
Move pre-release instructions to top of RELEASE
The instructions to cut a release were after all the scenarios, which
made them easy to miss.
2023-12-18 08:18:26 +00:00
Moon Sungjoon
63aea4cb45
Enable HWP conversion on MacOS (Apple silicon CPU)
This PR reverts the patch that disables HWP / HWPX conversion on MacOS M1.
It does not fix conversion on Qubes OS (#494).

Previously, HWP / HWPX conversion didn't work on MacOS (Apple silicon CPU) (#498)
because libreoffice wasn't built with Java support on Alpine Linux for ARM (aarch64).

Thankfully, the Alpine team has enabled Java support on the aarch64
system [1], so we can enable it again for ARM architectures.
This patch is included in Alpine 3.19.

This commit was included in #541 and reverted on #562 due to a stability issue.

Fixes #498

[1]: 74d443f479
2023-12-13 12:57:22 +02:00
Alex Pyrgiotis
bd5b3792e2
Bump README links to v0.5.1 artifacts 2023-12-08 21:20:09 +02:00
deeplow
dd22946c0d
Add issue #647 to CHANGELOG (qubes deps. missing) 2023-12-08 11:43:49 +00:00
deeplow
780ea18d22
Remove support for Fedora 37 (EOL)
Fixes #637
2023-12-08 11:08:25 +00:00
Alex Pyrgiotis
1ea21e52a5
Add security advisory 2023-12-07 2023-12-08 11:06:58 +00:00
deeplow
06b68f2572
Update CHANGELOG for v0.5.1 release 2023-12-08 10:41:47 +00:00
deeplow
6c59b1f41d
Adds missing client-side packages to Qubes-Dangerzone
Dangerzone was failing to convert documents in Qubes due to missing
client-side dependencies. In particular poppler-utils, ghostscript and
graphicsmagick.

Fixes #647
2023-12-08 10:35:15 +00:00
Alex Pyrgiotis
9bad7ab3bb
Improve the instructions for QA step 10
Clarify how a tester can install the previous version of Dangerzone in
step 10 of the QA.

Closes #597
2023-12-07 20:45:29 +02:00
Alex Pyrgiotis
7f50ad2e48
ci: Make our security scans stricter
Our security scans previously alerted us on critical CVEs that have a
fix. In this commit, we ask to be alerted on CVEs that don't have a fix
yet, so that we can have them on our radar.

Since the introduction of these security checks, we have only once
encountered a case where our container was vulnerable to a CVE that
Alpine Linux had not fixed yet. This means that the maintenance burden
of this change will probably be minimal.
2023-12-06 17:57:19 +02:00
Alex Pyrgiotis
7fc797f913
Bump version to 0.5.1 2023-12-06 17:54:25 +02:00
deeplow
612ac061de
Bump python dependencies via poetry lock 2023-12-06 09:59:30 +00:00
dependabot[bot]
6876fa569d
Bump urllib3 from 2.0.6 to 2.0.7
Bumps [urllib3](https://github.com/urllib3/urllib3) from 2.0.6 to 2.0.7.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/2.0.6...2.0.7)

---
updated-dependencies:
- dependency-name: urllib3
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-11-13 20:40:53 +02:00
Garrett Robinson
53115b3ffa
Use more descriptive button labels in update check prompt 2023-10-31 12:52:34 +00:00
deeplow
5acb96884a
Security advisory 2023-10-25: prevent dz-dvm network via dispVMs
In Qubes the disposable netVM is internet-connected. For this reason,
on Qubes we chose to create our own disposable VM (dz-dvm). However, in
reality this could still be bypassed since dz-dvm had the default
disposable dispvm.

By setting the default_dispvm to '' we prevent this bypass. For
users who have already followed the setup instructions, the following
command (to be run in dom0) will fix this issue:

   qvm-prefs dz-dvm default_dispvm ''
2023-10-25 18:26:36 +01:00
deeplow
0aeef1c2d0
CHANGELOG: Fix issue #513 description 2023-10-19 20:43:38 +01:00
Alex Pyrgiotis
bd01facaf1
Bump README refs to v0.5.0 2023-10-19 21:58:17 +03:00
deeplow
8d167382a3
v0.5.0 changelog: add missing fixes 2023-10-17 20:52:17 +01:00
Alex Pyrgiotis
44a73007a8
Drop last mention to Fedora 36 2023-10-17 15:22:20 +03:00
Erik Moeller
822f5bcd4c
Minor tweaks to Qubes build docs
- `keyring` command will only work if `python3-keyring` is installed
- fix `cp` command (`qubes` directory not included in prior command)
2023-10-17 11:45:02 +03:00
Alex Pyrgiotis
a2dafdb505
Add ubuntu 23.10 (mantic) support
Fixes #601
2023-10-17 11:31:30 +03:00
deeplow
2f98135f5a
Skip scenario 9 on linux (Qubes-specific) 2023-10-16 08:43:26 +01:00
Alex Pyrgiotis
f02597aa4f
Make isort use .gitignore properly
By using `--skip / --extend-skip .gitignore`, we actually never read the
.gitignore file. We have to use `--skip-gitignore` instead.

This requires Git in the development environment, so we need to install
Git in our CI runners as well.
2023-10-13 22:45:37 +03:00
Alex Pyrgiotis
ba5adb33c0
Fix a bug in "Change Selection"
Fix a bug in the "Change Selection" action, whereby changing your
selection and picking files from another directory results in:

    "Dangerzone does not support adding documents from multiple
    locations. The newly added documents were ignored."

To fix this, change the output directory when we change selection as
well.
2023-10-13 22:45:11 +03:00
Alex Pyrgiotis
edfba0c783
Qubes: Fix progress in first stage of Qubes conversion 2023-10-13 22:44:37 +03:00
deeplow
186ddd6b1e
Allow user to override update checking on Linux
The original intention of leaving the update checkbox in the hamburger
menu was to let users on non-supported Linux distros (e.g. running a
version compiled from source) check for updates. However, on Linux it
ended up being disabled forcefully by default on startup.

This takes into account an overridden update checkbox.

Fixes #596
2023-10-13 17:01:53 +01:00
deeplow
18898992f1
BUILD.md: Add instructions to clone the git repo 2023-10-13 07:47:27 +01:00
Alex Pyrgiotis
b11920a3af
Add a note in build instructions for dev environments 2023-10-11 15:54:10 +01:00
Alex Pyrgiotis
2256f9fb4e
ci: Test building Qubes package in CircleCI 2023-10-11 15:54:09 +01:00
Alex Pyrgiotis
c4c46a0a8d
Small fixes for Qubes RPM
This commit fixes 3 small issues with the way we produce our Qubes RPM:

1. The `.exists()` method follows symlinks by default, whereas we want
   to check if a symlink exists. This functionality has been added in
   Python 3.12.

   Instead of checking if a symlink exists and then removing it, simply
   remove it and don't throw an error if it doesn't exist in the first
   place.

2. The `dz.Convert*` policies were not installed with the executable bit
   set, therefore the qube could not start.

3. The `dz.ConvertDev` policy in particular had an ambiguous shebang,
   thus we change it to explicitly call Python3
2023-10-11 15:54:06 +01:00
deeplow
39fe539b2e
Mirror RELEASE.md text in qa.py
qa.py should be in sync with RELEASE.md, or else it fails with

    $ ./dev_scripts/qa.py --check-refs

This was accidentally introduced in
https://github.com/freedomofpress/dangerzone/pull/583/files
2023-10-11 15:31:45 +01:00
Alex Pyrgiotis
8dc8372998
Add extra Qubes QA scenarios
Add some QA scenarios that target QA testing on Qubes.
2023-10-11 10:33:31 +01:00
Alex Pyrgiotis
3daf0e2cb7
Do not show file previews in case of exceptions
If a Qubes conversion encounters an exception that is not a subclass of
ConversionException, it will still show a preview of a file that does
not exist.

Send an error progress report in that case, so that the GUI code can
detect that an error occurred and not open a file preview

Fixes #581
2023-10-05 11:11:42 +03:00
Alex Pyrgiotis
bdf3f8babc
qubes: Clean up temporary files
Create a temporary dir before the conversion begins, and store every
file necessary for the conversion there. We are mostly concerned about
the second stage of the conversion, which runs in the host. The first
stage runs in a disposable qube and cleanup is implicit.

Fixes #575
Fixes #436
2023-10-04 14:05:23 +03:00
Alex Pyrgiotis
f37d89f042
conversion: Allow using a temp dir other than /tmp
Extend the PixelsToPDF converter by adding an additional `tempdir`
argument. This argument can be used to make the conversion use a
different temporary directory other than `/tmp`.

For containers, this extra argument makes no difference, as it won't be
used. For Qubes, this argument will allow storing files in a temporary
dir that will be cleaned up once the conversion completes. Previously,
these files would linger in the user's `/tmp`.

Refs #575
2023-10-04 14:00:53 +03:00
deeplow
c4fdebc80d
Update Poetry lock file
Run `poetry lock` and update the existing dependencies again, due to
a urllib3 vulnerability that was announced a bit after our last
dependency bump.
2023-10-03 09:56:30 +01:00
Alex Pyrgiotis
2a0ef78d91
Update our changelog for 0.5.0 2023-10-03 11:32:38 +03:00
Alex Pyrgiotis
1961899bed
Bump version to 0.5.0 2023-10-03 11:32:38 +03:00
Alex Pyrgiotis
89a36efe89
tests: Fix typo 2023-10-03 11:32:37 +03:00
deeplow
049fa7d484
Update notarization process (altool deprecated)
Following the deprecation notice of the Apple notarization tool 'altool',
we're updating the instructions to reflect the change to the new tool
'notarytool'.

The migration process essentially required updating the commands and
migrating credentials. It is documented in [1].

Fixes #506

[1]: https://developer.apple.com/documentation/technotes/tn3147-migrating-to-the-latest-notarization-tool
2023-10-02 16:03:32 +01:00
Alex Pyrgiotis
a8ee8cdd4a
Update Poetry lock file
Run `poetry lock` and update the existing dependencies.
2023-10-02 17:58:52 +03:00
Alex Pyrgiotis
4f66353639
Add dark mode logic in our dialogs
Make our dialogs set the OSColorMode CSS property, so that we can
properly style them.

Refs #528
2023-10-02 16:34:56 +03:00
Alex Pyrgiotis
6232062146
Add missing newline char 2023-10-02 15:41:29 +03:00
Alex Pyrgiotis
b7b76174ab
qubes: Log captured output for the second stage
Log the captured command output during the second stage, only in dev
environments. This follows what we have already done for the first
stage.
2023-10-02 15:41:29 +03:00
Alex Pyrgiotis
16603875d6
qubes: Display all errors in second stage
If a command encounters an error or times out during the second stage of
the conversion in Qubes, handle it the same way as we would have handled
it in the first stage:

1. Get its error message.
2. Throw an UnexpectedConversionError exception, with the original
   message.

Note that, because the second stage takes place locally, users will see
the original content of the error.

Refs #567
Closes #430
2023-10-02 15:41:17 +03:00
Alex Pyrgiotis
2016965c84
Revert "Enable HWP conversion on MacOS M1"
This reverts commit 214ce9720d. The
rationale is that we want to wait until the LibreOffice package that
allows HWP conversion in Alpine Linux lands in `alpine:latest`.

For more info, read
https://github.com/freedomofpress/dangerzone/issues/498#issuecomment-1739894100
2023-10-02 14:22:47 +03:00
Alex Pyrgiotis
6973845ec9
Revert "Switch to the edge repo of Alpine Linux"
This reverts commit acd615e0e1. The
rationale is that we want to wait until the LibreOffice package that
allows HWP conversion in Alpine Linux lands in `alpine:latest`.

For more info, read
https://github.com/freedomofpress/dangerzone/issues/498#issuecomment-1739894100
2023-10-02 14:22:46 +03:00
deeplow
7daeccdfea
Prevent PDF from overwriting num_pages in Qubes
This should only affect the alpha version of Qubes OS (in containers
it only allows the attacker to control the timeout). In short, an
attacker could craft PDF metadata that would show up before "Pages:" in
the `pdfinfo` command output, and this would essentially override the
number of pages measured on the server. This could enable the attacker
to shorten the number of pages of a document, for example.

Fixes #565
2023-10-02 12:18:12 +01:00
deeplow
dabdf6c286
FIXUP: rename to QubesQrexecFailed instead 2023-10-02 12:06:18 +01:00
deeplow
eb488b16c5
FIXUP: rename QubesNotEnoughRAMError to QubesConversionStartFailed 2023-10-02 11:51:55 +01:00
deeplow
9cfac7ac2a
Generalize "out of RAM" error to reflect other issues
When qrexec-client-vm fails, it could be a symptom of various issues:
  - the system being out of RAM
  - dz-dvm not existing

The exit code is the same in all cases (126), which makes it
particularly tricky to solve in the client application. For this reason
the approach is now to tell the user to see the qubes error notification
on the top right of their screen.
2023-10-02 11:06:17 +01:00
Alex Pyrgiotis
ccf4132ea0
conversion: Add sanity check for page count
Add a sanity check at the end of the conversion from doc to pixels, to
ensure that the resulting document will have the same number of pages as
the original one.

Refs #560
2023-09-28 22:50:54 +03:00
Alex Pyrgiotis
b4e5cf5be7
qubes: Stream page data in real time
Stream page data back to the caller, immediately after we read them from
pdftoppm. This way, we have more accurate progress reports and timeouts.

Fixes #557
2023-09-28 22:50:54 +03:00
Alex Pyrgiotis
4bb959f220
conversion: Add anchor points for streaming page data/metadata
Introduce 4 new methods that can be overloaded by the Qubes isolation
provider to stream page data/metadata back to the caller. For the time
being, these methods do what they did before, i.e., write this info in
files within the pixels directory.
2023-09-28 22:50:53 +03:00
Alex Pyrgiotis
6012cd1491
Improve EOF detection when reading command output
Do not read a line from the command output and then check if
we are at EOF, because it's possible that the writer immediately exited
after writing the last line of output. Instead, switch the order of
actions.

This is a very serious bug that can lead to Dangerzone excluding the
last page of the document. It should have bit us right from the start
(see aeeed411a0), but it seems that the
small period of time it takes the kernel to close the file descriptors
was hiding this bug.

Fixes #560
2023-09-28 22:50:53 +03:00
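The safe pattern is to treat only an empty read as EOF, checked after the read rather than before; a small self-contained sketch of that idea:

    import subprocess

    proc = subprocess.Popen(["cat", "/etc/hostname"], stdout=subprocess.PIPE)

    chunks = []
    while True:
        chunk = proc.stdout.read(4096)
        if not chunk:    # b"" means EOF; checking *before* reading could drop
            break        # output from a writer that exits immediately
        chunks.append(chunk)
    proc.wait()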
Garrett Robinson
79c1d6db0f
Use extend_skip to avoid overriding isort's skip default
This preserves isort's default behavior of ignoring virtualenvs with
common names like `venv` or `.venv`, which is helpful when running
`isort` in a local development environment that uses such a
virtualenv.
2023-09-28 17:21:00 +03:00
Garrett Robinson
eab768f950
Style safe_extension_filename consistently in Dark Mode
To be consistent with Light Mode, the background of the
safe_extension_filename QLabel should match the adjacent QTextField,
but the text should be "grayed out"/disabled to indicate that it's not
supposed to be editable.
2023-09-28 17:20:54 +03:00
Garrett Robinson
40b6240097
Only set certain colors in light mode 2023-09-28 17:20:50 +03:00
Garrett Robinson
46f978e6f0
Detect OS color mode and set as property for stylesheets
Sets the detected OS color mode (dark/light) as a property on the
QApplication so it can be referenced in stylesheets to select style
rules suited to the OS color mode.
2023-09-28 17:20:34 +03:00
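A hedged sketch of the idea: the OSColorMode property name comes from the commits above, while the detection heuristic and widget usage are only illustrative:

    from PySide6.QtWidgets import QApplication, QLabel

    def detect_color_mode(app: QApplication) -> str:
        # Heuristic: a window background darker than mid-gray means dark mode.
        return "dark" if app.palette().window().color().lightness() < 128 else "light"

    app = QApplication([])
    app.setProperty("OSColorMode", detect_color_mode(app))

    label = QLabel("Sample")
    label.setProperty("OSColorMode", app.property("OSColorMode"))
    label.setStyleSheet('QLabel[OSColorMode="dark"] { color: #eee; background: #333; }')
    label.show()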
deeplow
23bee23d81
Disable isolation_provider tests on dummy conversion
Windows and macOS CI runners don't support nested virtualization, and
thus Docker, so they aren't really candidates for isolation_provider tests.
2023-09-28 11:08:53 +01:00
deeplow
0a6b33ebed
Qubes: detect qube failing to start (missing RAM)
In Qubes OS it's often the case that the user doesn't have enough
RAM to start the conversion. In this case it raises BrokenPipeException
and exits with code 126.

It didn't seem possible to distinguish this kind of failure from one
where the user has misconfigured qrexec policies.

NOTE: this approach is not ideal UX-wise. After the first doc fails,
the next one will also try and fail. Upon first failure we should
inform the user that they need to close some programs or qubes.
2023-09-28 11:08:50 +01:00
deeplow
63f03d5bcd
Add limit and test to max width and height of docs 2023-09-28 11:08:47 +01:00
deeplow
6f26fc6303
Qubes: add test if MAX_PAGES is enforced in client
Because the server also checks the MAX_PAGES limit, the test in base
would hide the fact that the client is not enforcing the limit. This
ensures that's not the case.

When the pages in containers are streamed (#443), then this test should
be in base.py.
2023-09-28 11:06:36 +01:00
deeplow
54b8ffbf96
Add page limit of 10000
Theoretically the max pages would be 65536 (2-byte unsigned int).
However this limit is much higher than practical documents have,
and larger ones can lead to unforeseen problems, for example RAM
limitations.

We thus opted to use a lower limit of 10K. The limit must be
detected client-side, given that the server is distrusted. However
we also check it in the server, just as a fail-early mechanism.
2023-09-28 11:01:14 +01:00
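Illustrative constants and check, not the exact Dangerzone code:

    MAX_PAGES = 10_000  # well below the 65,536 that fits in 2 bytes

    def parse_page_count(untrusted_value: bytes) -> int:
        num_pages = int.from_bytes(untrusted_value, "big")  # 2-byte, big-endian
        if not 1 <= num_pages <= MAX_PAGES:
            raise ValueError(f"Page count out of bounds: {num_pages}")
        return num_pages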
deeplow
afba362d22
Tests: split isolation provider tests per provider
Isolation provider tests were done in tests/test_base.py and had
pytest.mark.parameterize() for each isolation provider. This logic
would not work well when we had tests that diverge. We could have marked
each one as compatible with one provider or another, but in the end it
turned out to be better to have the common ones in a base class and
the divergent ones in each subclass.

NOTE: this has a strange side-effect: inherited test classes need to
have imports for all of the fixtures even if they are not explicitly used.
2023-09-28 09:53:29 +01:00
Alex Pyrgiotis
18b73d94b0
qubes: Find out reason of interrupted conversions
If a conversion has been interrupted (usually due to an EOF), figure out
why this happened by checking the exit code of the spawned process.
2023-09-26 17:35:26 +03:00
Alex Pyrgiotis
30196ff35b
errors: Add error for interrupted conversions
Add an error for interrupted conversions, in order to better
differentiate this scenario from other ValueErrors that may be raised
throughout the code's lifetime.
2023-09-26 17:35:26 +03:00
Alex Pyrgiotis
0273522fb1
qubes: Store the process for the spawned qube
Store, in an instance attribute, the process that we have started for
the spawned disposable qube. In subsequent commits, we will use it from
other places as well, aside from the `_convert` method.

Note that this commit does not alter the conversion logic, and only does
the following:
1. Renames `p.` to `self.proc.`
2. Adds an `__init__` method to the Qubes isolation provider, and
   initializes the `self.proc` attribute to `None`.
3. Adds an assert that `self.proc` is not `None` after it's spawned, to
   placate Mypy.
2023-09-26 17:35:25 +03:00
deeplow
e08b6defc3
Round conversion progress from float to int
Fixes #553
2023-09-26 15:20:41 +01:00
deeplow
8d37ff15e0
Remove duplicated Qubes message: "Safe PDF Created"
Fixes #555.  This is a leftover from when we didn't have progress
reports from the second stage conversion (AKA. pixels to PDF) in #429.
2023-09-26 12:16:48 +01:00
Alex Pyrgiotis
a67c080898
Add changelog entry for Qubes beta integration 2023-09-25 12:51:41 +03:00
Alex Pyrgiotis
af7087af65
Update our release/QA instructions for Qubes
Update the release/QA instructions for Qubes, so that they take into
account the fact that we can now publish a Qubes RPM through our
official repos.
2023-09-25 12:51:41 +03:00
Alex Pyrgiotis
c94c8c8ba5
Add installation instructions for Qubes
Add instructions for installing Dangerzone on Qubes from our official
repos. These instructions are adapted from the build instructions, but
have been greatly simplified because we don't need some of the qubes
that the development environment needs.

Closes #431
2023-09-25 12:51:40 +03:00
Alex Pyrgiotis
22a58d83df
install: Add Tesseract models as package reqs
Add Tesseract models for the 10 most spoken languages as package
requirements for Qubes. For containers, this problem is already solved
since we install all Tesseract models.

If a user is not covered by the installed models, they can install
extras on their own. We will add a note for this in subsequent commits.

Refs #431
2023-09-25 12:51:40 +03:00
Alex Pyrgiotis
215fa8b558
install: Add conflict if Dangerzone is installed
Add a "Conflicts:" entry in the RPM spec, in case another version of
Dangerzone is already installed.
2023-09-25 12:49:58 +03:00
Alex Pyrgiotis
81b4a8deb5
Minor fixes in Fedora installation section 2023-09-25 12:49:58 +03:00
Alex Pyrgiotis
cbca9110ca
Switch to tessdata-fast Tesseract model
Switch to the tessdata-fast Tesseract model, instead of the tessdata
one. The tessdata-fast Tesseract model is much smaller, and a bit faster
than the other one. Also, it's the model that Debian/Fedora ship by
default.

Closes #545
2023-09-25 12:48:05 +03:00
Alex Pyrgiotis
e64d1da61f
qubes: Pass OCR parameters properly
Pass OCR parameters to conversion functions as arguments, instead of
setting environment variables.

Fixes #455
2023-09-20 18:04:40 +03:00
Alex Pyrgiotis
8a0c0a4673
Make parameter actually optional 2023-09-20 17:58:39 +03:00
Alex Pyrgiotis
20157bef58
Fix typo 2023-09-20 17:45:44 +03:00
Alex Pyrgiotis
99dd5f5139
qubes: Add client-side timeouts
Extend the client-side capabilities of the Qubes isolation provider, by
adding client-side timeout logic.

This implementation brings the same logic that we used server-side to
the client, by taking into account the original file size and the number
of pages that the server returns.

Since the code does not have the exact same insight as the server has,
the calculated timeouts are in two places:

1. The timeout for getting the number of pages. This timeout takes into
   account:
   * the disposable qube startup time, and
   * the time it takes to convert a file type to PDF
2. The total timeout for converting the PDF into pixels, in the same way
   that we do it on the server-side.

Besides these changes, we also ensure that partial reads (e.g., due to
EOF) are detected (see exact=... argument)

Some things that are not resolved in this commit are:
* We have both client-side and server-side timeouts for the first phase
  of the conversion. Once containers can stream data back to the
  application (see #443), these server-side timeouts can be removed.
* We do not show a proper error message when a timeout occurs. This will
  be part of the error handling PR (see #430)

Fixes #446
Refs #443
Refs #430
2023-09-20 17:32:42 +03:00
Alex Pyrgiotis
55a4491ced
Consolidate import statements 2023-09-20 17:14:24 +03:00
Alex Pyrgiotis
c547ffc3b4
conversion: Factor out calculate_timeout
Factor out the logic behind the calculate_timeout() method, used in
Dangerzone conversions, so that isolation providers can call it
directly.
2023-09-20 17:14:24 +03:00
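A rough sketch of the heuristic (the "30s per page or MB" figure mentioned elsewhere in this log); the real constants, formula, and signature may differ:

    TIMEOUT_PER_PAGE = 30.0   # seconds
    TIMEOUT_PER_MB = 30.0     # seconds
    TIMEOUT_MIN = 60.0        # floor, in seconds

    def calculate_timeout(size_bytes: int, pages: int = 0) -> float:
        size_mb = size_bytes / (1024 * 1024)
        return max(TIMEOUT_MIN, pages * TIMEOUT_PER_PAGE + size_mb * TIMEOUT_PER_MB)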
Alex Pyrgiotis
fea193e935
Add non-blocking read utility
Add a function that can read data from non-blocking fds, which we will
use later on to read from standard streams with a timeout.
2023-09-20 17:14:24 +03:00
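The general shape of such a helper, as a hedged sketch rather than the actual implementation:

    import os
    import select

    def nonblocking_read(fd: int, size: int, timeout: float) -> bytes:
        os.set_blocking(fd, False)
        ready, _, _ = select.select([fd], [], [], timeout)
        if not ready:
            raise TimeoutError("no data arrived within the allotted time")
        return os.read(fd, size)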
Alex Pyrgiotis
344d6f7bfa
Add Stopwatch implementation
Add a simple stopwatch implementation to track the elapsed time since an
event, or the remaining time until a timeout.
2023-09-20 17:14:23 +03:00
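The implementation isn't shown in this log; a minimal sketch of what such a helper typically looks like:

    import time
    from typing import Optional

    class Stopwatch:
        def __init__(self, timeout: Optional[float] = None) -> None:
            self.timeout = timeout
            self.start = time.monotonic()

        @property
        def elapsed(self) -> float:
            return time.monotonic() - self.start

        @property
        def remaining(self) -> float:
            if self.timeout is None:
                raise ValueError("no timeout was set")
            return max(0.0, self.timeout - self.elapsed)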
Alex Pyrgiotis
fbe13bb114
Refer to Qubes in the project's description 2023-09-20 16:48:53 +03:00
Alex Pyrgiotis
a3bb740b19
Remove some stale Qubes refs in setup.py 2023-09-20 16:48:53 +03:00
Alex Pyrgiotis
01d63e4eda
install: Build Dangerzone RPMs using our SPEC file
Replace the deprecated `bdist_rpm` method of creating RPMs for
Dangerzone. Instead, update our `install/linux/build-rpm.py` script, to
build Dangerzone RPMs using our SPEC file under
`install/linux/dangerzone.spec`. The script now essentially creates a
source distribution (sdist) using `poetry build`, and then uses
`rpmbuild` to create binary and source RPMs.

Fixes #298
2023-09-20 16:48:53 +03:00
Alex Pyrgiotis
6cc2a953ff
install: Add directory for building Dangerzone RPMs
Add an `rpm-build` directory under `install/linux`, which will be used
for building Dangerzone RPMs. For the time being, it only has a
.gitignore file there, but in the future, invoking
`install/linux/build-rpm.py` will populate it.
2023-09-20 16:48:53 +03:00
Alex Pyrgiotis
f5abe0abd0
Update RPM dependencies
Update the dependencies required to build RPM packages. More
specifically, remove the older python3-setuptools dependency, and depend
instead on python3-devel and python3-poetry-core.

Note that this commit may break our CI, but it will be resolved in
subsequent commits.
2023-09-20 16:48:53 +03:00
Alex Pyrgiotis
33197f26b7
install: Introduce a SPEC file for creating RPMs
Introduce a SPEC file that can be used to create an RPM from a Python
source distribution. Some notable features of this SPEC file follow:

1. We can use this SPEC file to create both regular RPM packages and
   ones targeted for Qubes.
2. It has a post-installation script that removes stale .egg-info
   directories, which previously caused issues for our users.
3. It automatically creates a changelog from our Git logs, which differs
   from the actual CHANGELOG.md.
4. It follows the latest Fedora guidelines (as of writing this) for
   packaging Python projects.

Fixes #514
2023-09-20 16:48:52 +03:00
Alex Pyrgiotis
3dea16bcd2
Include non-Python data files into Python package
Update our pyproject.toml file to include some non-Python data files,
e.g., our container image and assets. This way, we can use `poetry
build` to create a source distribution / Python wheel from our source
repository.

Note that this list of data files is already defined in our `setup.py`
script. In that script, one can find some extra goodies:

1. We can conditionally include data files in our Python package. We use
   this to include Qubes data only in our Qubes packages.
2. We can specify where the data files will be installed on the end-user
   system.

The above are non-goals for Poetry [1], especially (2), because modern
Python wheels are not supposed to install files in arbitrary places
within the user's host, nor should the install invocation use sudo.
Instead, this is a task that's better suited for the .deb / .rpm
packages.

So, why do we bother updating our `pyproject.toml` and not use
`setup.py` instead? Because `setup.py` is deprecated [2,3], and the
latest Python packaging RFCs [4], as well as most recent Fedora
guidelines [5] use `pyproject.toml` as the source of truth, instead of
`setup.py`.

In subsequent commits, we will also use just `pyproject.toml` for RPM
packaging.

[1]: https://github.com/python-poetry/poetry/issues/890
[2]: https://peps.python.org/pep-0517/#source-trees
[3]: https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html
[4]: https://peps.python.org/pep-0517/
[5]: https://docs.fedoraproject.org/en-US/packaging-guidelines/Python/
2023-09-20 16:38:55 +03:00
Alex Pyrgiotis
5431e059bf
Update build-system entry in pyproject.toml
Update the `build-backend` attribute, in accordance with the Python
Poetry docs [1]. Also, bump the minimum required poetry-core version to
1.2.0, since this is the version that introduced the Poetry dependency
groups [2], i.e., the [tool.poetry.group] sections in pyproject.toml.

[1]: https://python-poetry.org/docs/pyproject/#poetry-and-pep-517
[2]: https://python-poetry.org/docs/managing-dependencies/#dependency-groups
2023-09-20 16:38:55 +03:00
Alex Pyrgiotis
b83d2495eb
Remove stale dangerzone-container entrypoint
The dangerzone-container entrypoint, as specified in pyproject.toml, is
stale, for the following reasons:

1. It's not mentioned in the setup.py script, so it was never included
   in our Linux distributions.
2. The code in `dangerzone.__init__.py` that decides if it will invoke
   the GUI or CLI backend, just takes `dangerzone-cli` into account for
   this decision, and does not mention dangerzone-container anywhere.
2023-09-20 16:38:55 +03:00
Alex Pyrgiotis
7bc0129f94
Let black and isort respect .gitignore
In order to let isort respect .gitignore, we need to specify this in the
tool.isort entry, in pyproject.toml.

For black, we don't need any extra tweaks. This is weird, since until a
few months ago black did not respect .gitignore. Maybe something has
changed in the meantime but if not, we should revert this change.
2023-09-20 16:38:55 +03:00
Alex Pyrgiotis
29c0181b4d
Add test_docs_large in our .gitignore 2023-09-20 16:38:54 +03:00
deeplow
94f569cdf5
Add error code for unexpected errors in conversion 2023-09-19 15:52:47 +01:00
deeplow
8e4f04a52e
Shift to conversion exit codes by 128
Distinguish from podman or other errors in called binaries by shifting
the error codes by 128.
2023-09-19 15:34:00 +01:00
deeplow
b4c3e07d36
Remove attacker-controlled error messages
Creates exceptions in the server code to be shared with the client via an
identifying exit code. These exceptions are then reconstructed in the
client.

Refs #456 but does not completely fix it. Unexpected exceptions and
progress descriptions are still passed in Containers.
2023-09-19 15:33:20 +01:00
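A minimal sketch of the mechanism the two commits above describe; class names and codes are illustrative, and the 128 offset mirrors the exit-code shift mentioned above:

    class ConversionException(Exception):
        error_code = 128 + 1   # shifted by 128 to distinguish from podman errors
        error_message = "Unspecified conversion error"

    class MaxPagesException(ConversionException):
        error_code = 128 + 2
        error_message = "Document exceeds the maximum number of pages"

    def exception_from_error_code(code: int) -> ConversionException:
        # The client reconstructs the exception from the exit code, so no
        # attacker-controlled text ever reaches the user.
        for cls in ConversionException.__subclasses__():
            if cls.error_code == code:
                return cls(cls.error_message)
        return ConversionException(ConversionException.error_message)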
Moon Sungjoon
214ce9720d
Enable HWP conversion on MacOS M1
This PR reverts the patch that disables HWP / HWPX conversion on MacOS
M1. It does not fix conversion on Qubes OS (#494)

Previously, HWP / HWPX conversion didn't work on MacOS M1 systems (#498)
because libreoffice wasn't built with Java support on Alpine Linux for
ARM (aarch64).

Thankfully, the Alpine team has enabled Java support on the aarch64
system [1], so we can enable it again for ARM architectures.

Fixes #498

[1]: 74d443f479
2023-09-06 13:10:18 +03:00
Moon Sungjoon
acd615e0e1
Switch to the edge repo of Alpine Linux
The Alpine Linux team has enabled Java support for LibreOffice on ARM
architecture:

    74d443f479

This commit is included in 7.5.5.2-r2, so the installed LibreOffice
package should be 7.5.5.2-r2 or higher to fix this issue.

However 3.18 doesn't have the 7.5.5.2-r2 package:

    https://pkgs.alpinelinux.org/package/v3.18/community/aarch64/libreoffice

The Dangerzone image uses the alpine:latest image which is 3.18 as of
writing this.

For this reason, we switch to the edge repo of Alpine Linux, which
includes this fix.

Refs #498
Refs #540
Refs #542
2023-09-06 13:09:34 +03:00
deeplow
ed298ec5b0
BUILD.md fix typo: dz-dvm is not a template 2023-08-29 19:29:43 +01:00
deeplow
ab3293ff70
BUILD.md replace deprecated cmd qvm-copy-to-vm
For a long time now, qvm-copy-to-vm hasn't respected the qube name
provided. Instead, it is enforced by the dom0 policy prompt. This is
probably a leftover from a command run in dom0, where this command
actually works.
2023-08-29 19:29:41 +01:00
deeplow
688bfe056b
BUILD.md: cd into dangerzone/ after cloning 2023-08-29 19:29:31 +01:00
deeplow
831c3250c2
Add overview table of qubes 2023-08-29 19:20:36 +01:00
deeplow
4f2de90f93
Add overview table of qubes 2023-08-24 14:50:53 +01:00
deeplow
c3cdca977f
Qubes alpha: bump fedora version (37 -> 38) 2023-08-24 14:42:54 +01:00
deeplow
8ae88eb10a
Ensure updates checkbox updated after updates accepted
Ensure the status of the "toggle updates" checkbox is updated after the
user is prompted to enable updates.
2023-08-23 16:46:45 +01:00
deeplow
8221a56c7d
Revert "Propagate "update check" prompt to UI checkbox"
This reverts commit 3915a86642502b673aa0e47931823acbe66f1043.
2023-08-23 16:46:44 +01:00
deeplow
1695cc7a6c
Propagate "update check" prompt to UI checkbox
The "check for updates" button wasn't showing up immediately as checked
as soon as the user is prompted for checking updates. This fixes that.

Fixes #513
2023-08-23 16:46:33 +01:00
deeplow
89365b585c
Add tests documentation 2023-08-22 16:11:44 +01:00
deeplow
9ec9cc5f87
Replace armor guards that indicate isolated output 2023-08-22 16:11:41 +01:00
deeplow
a0bcd12635
Large test run: hide traceback to avoid spam
Some tests are expected to fail. To avoid having potentially thousands
of tracebacks of the failed docs at the end, we're deactivating that
reporting.
2023-08-22 16:11:39 +01:00
deeplow
fa215063ee
Add logging for second container 2023-08-22 16:11:38 +01:00
deeplow
75369cf621
Adapt code so it works for reporting script
The reporting script now parses JunitXML instead of a series of
".container_log" files. The script is in the changed submodule.

Additionally, it makes failed tests actually fail so that this is
recorded in the JunitXML report.
2023-08-22 16:11:36 +01:00
deeplow
eb16285790
Replace container output command prefix ">>>"
In the junitxml output this prefix would look ugly ("&gt;&gt;&gt;"),
because the ">" characters have to be XML-escaped.
2023-08-22 16:11:35 +01:00
deeplow
48b2e7bc3c
Log command to debug log for traceback purposes
Log commands so we can trace back which errors / outputs are from each
command.
2023-08-22 16:11:34 +01:00
deeplow
b73ce5bf6a
Add large test logic and documentation
Adds a large pool of documents that can and should be used prior to a
release, to understand the effects of the new release in a real-world
scenario.

Documents are stored in an external git LFS repo under
`tests/test_docs_large`; currently it's about 11K documents gathered
from multiple PDF readers' and office suites' test sets.

Documentation on how to run the tests is under
`docs/developer/TESTING.md`
2023-08-22 16:11:31 +01:00
deeplow
f41cefde1d
Add "armor" around conversion log
Add GPG-styled "armor" around conversion logs

    -----CONVERSION LOG START-----
    Creator:         Writer
    Producer:        LibreOffice 6.4
    [...]
    -----CONVERSION LOG END-----
2023-08-22 16:11:28 +01:00
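A minimal sketch of wrapping the untrusted log with such markers (assumed helper, not the actual code):

    ARMOR_START = "-----CONVERSION LOG START-----"
    ARMOR_END = "-----CONVERSION LOG END-----"

    def armor(untrusted_log: str) -> str:
        # Visually separate untrusted conversion output from trusted messages.
        return f"{ARMOR_START}\n{untrusted_log}\n{ARMOR_END}"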
deeplow
9f1abe2836
Replace non-printable ascii in conversion log
Certain characters may be abused. Particularly ANSI escape codes.
Solution inspired by Qubes OS's hardening of their RPC mechanism [1]:

> Terminal control characters are a security issue, which in worst case
> amount to arbitrary command execution. In the simplest case this
> requires two often found codes: terminal title setting (which puts
> arbitrary string in the window title) and title repo reporting (which
> puts that string on the shell's standard input. [sic]
>
>  -- qvm-run.rst [2]

[1]: e005836286
[2]: c70da44702/doc/manpages/qvm-run.rst (L126)
2023-08-22 16:11:27 +01:00
deeplow
95cef8cf0a
Containers: capture conversion logs
Store the conversion log in a file (captured-output.txt) in the
container and, when in development mode, display its output on the
terminal.
2023-08-22 16:11:26 +01:00
deeplow
e2accc2da1
Ignore large tests when doing "make test" 2023-08-22 16:11:24 +01:00
deeplow
d6bce4dec5
Qubes: close qrexec stdin and stdout
Ensure a server cannot keep the client hanging if more data than
necessary is sent. This applies to both the container and the Qubes
implementations.
2023-08-22 16:11:23 +01:00
deeplow
874b8865e2
Qubes: strategy for capturing conversion logs
Use qrexec stdout to send conversion data (pixels) and stderr to send
conversion progress at the end of the conversion. This happens
regardless of whether the conversion is in developer mode or not.

It's the client that decides if it reads the debug data from stderr or
not. In this case, it only reads it if developer mode is enabled.
2023-08-22 16:11:20 +01:00
Alex Pyrgiotis
00adf223a5
Add release requirements for Apple account 2023-08-22 12:05:40 +03:00
Alex Pyrgiotis
404c49874b
Prefer grabbing the altool password from the keychain
Closes #522
2023-08-22 12:05:40 +03:00
Alex Pyrgiotis
098e532bd2
dev_scripts: Ditch sudo requirement for Docker
We don't tend to use Docker for development tasks in Linux, since we
have Podman for that. In MacOS and Windows, we do use Docker, but
typically without sudo.

Make our MacOS / Windows dev tasks non-interactive, by ditching the
`sudo` invocation.

Closes #519
2023-08-22 12:05:40 +03:00
deeplow
e512ba2b6a
Updater dialog: make "yes" the default button
Fixes #507
2023-08-21 13:07:05 +01:00
Erik Moeller
5143103b96
Minor tweaks: grammar, fragment links 2023-08-21 13:05:52 +01:00
deeplow
f5b5751546
Adds minimal install advice on README.md
Makes it clear that one needs to install Docker Desktop to use Dangerzone
on Mac or Windows, and Podman on Linux. The app itself will warn the user about
this, but we should state the prerequisites more clearly upfront.

Mentions mac and windows in INSTALL.md so that anyone reading this page does
not wrongly assume that Dangerzone is a Linux-only app.

Fixes #475
2023-08-21 13:05:50 +01:00
deeplow
8d05bcc10f
Update windows certificate in build-app.bat 2023-08-21 13:04:14 +01:00
Alex Pyrgiotis
2fa56282a6
Update references to 0.4.2 artifacts 2023-08-08 18:40:06 +03:00
deeplow
1837c826a6
Fix Qubes section from INSTALL.md
In Qubes, by default, the conversion happens in containers just like on
other systems. This removes the mention that it uses VMs by default.
2023-08-07 19:02:58 +01:00
deeplow
e8b28d6f87
Explicitly import html.parser for Cx_Freeze to build
The markdown dependency uses importlib to monkeypatch 'html.parser'
[1]. Due to this approach 'html.parser' is never explicitly stated
as a dependency. This works fine in most cases, since it's part of
the python standard lib. But on Windows the build tool (CxFreeze)
ships only the needed modules in the .exe. And because html.parser
is never mentioned, it fails with an error (see issue #501).

Fixes #501

[1]: https://github.com/Python-Markdown/markdown/blob/master/markdown/htmlparser.py#L29
2023-08-05 17:09:42 +01:00
deeplow
b7e212efd9
Add python3-pyside2.qtsvg dependency to debian builds
When the updater was added in commit 5b17f75047 [1], it missed the
dependencies on Debian.

[1]: 5b17f75047
2023-08-05 17:02:54 +01:00
deeplow
356f835d32
env.py: make env run in GUI mode (--no-gui otherwise)
Now that we have GUI tests, it makes more sense to run with the X11
socket mounted in the environment than without it.
2023-08-05 17:02:26 +01:00
Alex Pyrgiotis
e3a8a651f1
Disable HWP / HWPX conversion on MacOS M1 / Qubes
The HWP / HWPX conversion feature does not work on the following
platforms:

* MacOS with Apple Silicon CPU
* Native Qubes OS

For this reason, we need to:

1. Disable it on the GUI side, by not allowing the user to select these
   files.
2. Throw an error on the isolation provider side, in case the user
   directly attempts to convert the file (either through CLI or via
   "Open With").

Refs #494
Refs #498
2023-08-05 16:50:49 +01:00
Alex Pyrgiotis
bc83341d2a
conversion: Detect when LibreOffice silently fails
Sometimes, LibreOffice returns with status code 0, but in reality, it
fails. It doesn't create a file, and Dangerzone does not detect this.
What happens next is that it fails in the next command, and throws an
unrelated error.

Detect that LibreOffice fails, by checking if the output file exists,
after the PDF conversion.
2023-08-05 16:50:47 +01:00
Alex Pyrgiotis
6736fb0153
Factor out MIME type detection
Factor out the MIME type detection logic, so that we can use it both in
Qubes and containers.
2023-08-05 16:50:35 +01:00
Alex Pyrgiotis
03df60db5f
Always pull base image when building ours
Always pull the base container image (alpine:latest) before building our
own container image. Else, in environments that we haven't touched
for a while, an older image may be used.
2023-08-02 13:47:59 +03:00
Alex Pyrgiotis
0296844e36
Bump version to 0.4.2 2023-08-02 13:43:05 +03:00
Alex Pyrgiotis
4828299c99
Update changelog 2023-08-02 13:43:04 +03:00
Alex Pyrgiotis
664e0c1477
Update our release instructions
Update our release instructions in the following ways:

1. Make sure to check the Python dependencies / version before the
   release.
2. Make sure to upload the final container.tar.gz image as a release
   artifact.
2023-08-02 13:43:03 +03:00
deeplow
e2718c6f64
Update changelog with HWP support 2023-08-01 14:37:15 +01:00
Moon Sungjoon
075475c306
Add test files for hwp/hwpx (base64 encoded)
Add extra files and base64 encode externally contributed docs. This
prevents the accidental opening of such documents, since they couldn't
be rebuilt by the Dangerzone developers to ensure their safety.
2023-08-01 14:37:14 +01:00
Moon Sungjoon
fa22e96af7
Clean up HWP/HWPX MIME types
Use the MIME types actually used by the `file` command, which was
recently changed for the detection of the HWPX format [1].

application/hwp+zip -> application/x-hwp+zip

But the HWPX format includes a 'mimetype' file, which contains the
MIME type string "application/hwp+zip", so that one was left as-is,
because it may still be possible to detect it as "application/hwp+zip".

[1]: ceef7ead3a
2023-08-01 14:35:28 +01:00
Moon Sungjoon
a453c890a0
Fix dynamic loading of LibreOffice extensions
The HWPX MIME type is recognized as 'application/zip' with the current version of the file command (file-5.44).
It will be recognized as 'application/hwp+zip' when a new version of file is released.

As a temporary fix, when the MIME type of the file is 'application/zip',
check the file type again (without the MIME option),
and then check whether it's 'Zip data (MIME type "application/hwp+zip"?)' or not.
2023-08-01 14:28:36 +01:00
deeplow
d16961bed6
Security: Dynamically load libreoffice extension (PoC)
Only load the LibreOffice extension for opening hwp/hwpx when it is
actually needed. Adding an extension to libreoffice may allow for it to
run arbitrary code. This makes trust more scalable, by trusting
LibreOffice extensions only for the filetypes which they target.

Reasoning
---------

Assuming a malicious `.oxt` extension this means that the extension has
arbitrary code execution in the container. While this is not an
existential threat in itself, we should not expose every Dangerzone user
to it. This is achieved by dynamically loading the extension at runtime
only when needed.

This ensures that a compromised extension will in its least malicious
form be able to modify the visual content of any Hancom office files but
not *every file*. In the more malicious version, if the code execution
manages to do a container escape, this will only affect users that have
converted a Hancom office file.
2023-08-01 14:28:34 +01:00
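A minimal sketch of the dynamic-loading idea described above, assuming the extension is registered with LibreOffice's `unopkg` tool only when an HWP/HWPX document is detected (the path and helper name are hypothetical):

    import subprocess

    H2ORESTART_OXT = "/libreoffice_ext/h2orestart.oxt"  # hypothetical path

    def install_hwp_extension_if_needed(mime_type: str) -> None:
        # Only trust the extension for the file types it actually targets.
        if mime_type in ("application/x-hwp", "application/hwp+zip"):
            subprocess.run(["unopkg", "add", H2ORESTART_OXT], check=True)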
Moon Sungjoon
3e895adbab
Add hwp hwpx support
hwp/hwpx has several custom MIME types

.hwp:
 - application/x-hwp
 - application/haansofthwp
 - application/vnd.hancom.hwp

.hwpx:
 - application/haansofthwpx
 - application/vnd.hancom.hwpx,
 - application/hwp+zip

Fixes #243
2023-08-01 14:27:18 +01:00
Moon Sungjoon
d8cc24cebe
container: Add H2ORestart for HWP/HWPX support
H2ORestart is a LibreOffice extension which adds Hancom HWP/HWPX (Hangul Word Processor)
support to LibreOffice. This format is widely used in South Korea.

Version: v0.5.7
Extension Repository: https://github.com/ebandal/H2Orestart/releases
2023-08-01 14:07:51 +01:00
Alex Pyrgiotis
76a1a885f5
Force Podman to use the overlay storage driver
Force Podman to use the overlay storage driver in our Dangerzone
environments. We have seen that in certain cases, Podman may opt to use
the vfs storage driver instead, which is more space-intensive.

Closes #489
2023-08-01 15:18:24 +03:00
Alex Pyrgiotis
6c374d8a7e
qubes: Mark Dangerzone messages as trusted
Mark the messages that Dangerzone creates once a conversion step
finishes as trusted, since they do not contain any string not controlled
by us.
2023-08-01 14:43:49 +03:00
deeplow
72536a05ac
container: Improve parsing of progress reports
Improve the `parse_progress()` method of the container isolation
provider in the following ways:

1. Make sure that the fields of the progress report have the expected
   type.
2. In case of a JSON parsing error, sanitize the invalid string so that
   it doesn't contain escape sequences and is not treated by the user as
   trusted.
2023-08-01 14:43:49 +03:00
Alex Pyrgiotis
9410b68c1d
Sanitize progress reports in a provider-agnostic way
Update the common `print_progress()` method in the base
`IsolationProvider` class, with two extra features:

1. Always sanitize the provided text argument.
2. Mark the sanitized text argument as untrusted.

This is default behavior from now on, since this function is commonly
used to parse progress reports from the conversion sandbox.
2023-08-01 14:43:48 +03:00
Alex Pyrgiotis
cfa0c01d8f
Sanitize filenames before logging them
Sanitize filenames in various places in the code, before we write them
to the user's terminal. Filenames, especially in Linux, can contain
virtually any character except for '\0' and '/', so it's important to
sanitize them.
2023-08-01 14:43:48 +03:00
deeplow
3788139d26
Add utility for sanitizing strings
Add `replace_control_chars()` function in `util.py`, which can be used
to sanitize strings from ANSI escape sequences or weird Unicode symbols.
2023-08-01 14:43:48 +03:00
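A minimal sketch of what such a utility could look like (not the exact implementation):

    def replace_control_chars(untrusted_str: str) -> str:
        # Replace anything that is not printable (ANSI escapes, odd Unicode
        # control symbols) with an underscore before it reaches the terminal.
        return "".join(ch if ch.isprintable() else "_" for ch in untrusted_str)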
Alex Pyrgiotis
cb08c198ad
Force rendering of error messages as plain text
Make the `error_label` widget always render messages as plain text,
instead of auto discovering if the text is rich. We need this because
the error message may contain input from the sandbox, which we consider
untrusted.
2023-08-01 14:43:48 +03:00
Alex Pyrgiotis
a72a31980d
Run GUI tests on separate processes
Run our GUI tests on separate processes, because the combination of
Ubuntu Focal, Qt5, PySide6, and pytest-qt somehow leads to segfaults,
probably due to stale global state.

Closes #493
2023-08-01 14:43:42 +03:00
Alex Pyrgiotis
77f4b8115c
Add missing reset ANSI sequence
Do not forget to reset the red text once we print an error string to the
terminal
2023-08-01 14:38:32 +03:00
Alex Pyrgiotis
9768714b4a
Make isort compatible with Black
The isort tool is not compatible with Black by default. This leads to a
tug of war between these tools, when we run `make lint-apply` -> `make
lint`. Fix this by forcing isort to be compatible with Black.
2023-08-01 14:38:32 +03:00
Alex Pyrgiotis
81811e0aac
Add collapsible dialog for errors
Move the error message from a text browser to a collapsible widget.
2023-08-01 14:29:27 +03:00
deeplow
53ec1cad63
Add update error red dot to hamburger menu 2023-08-01 14:29:11 +03:00
Alex Pyrgiotis
c9eac42855
Improve updater messages
Improve the wording of updater messages for better UX.
2023-08-01 14:29:10 +03:00
Alex Pyrgiotis
d5ca6bb422
updater: Move "Ok" button to the right
Move the "Ok" button in the prompt that asks users if they want to
enable update checks to the right, to further reinforce that this is
the default action.
2023-07-28 19:57:46 +03:00
Alex Pyrgiotis
bc4bba4fa1
tests: Add full test coverage for updater checks
Fully test the update check logic, by introducing several Qt tests.
Also, improve the `UpdaterThread.get_latest_info()` method, which gets
the latest version and changelog from GitHub, with several checks.
These checks are also tested in our newly added tests.
2023-07-28 12:18:59 +03:00
Alex Pyrgiotis
fdc53efc35
tests: Test our own custom QApplication
By default, `pytest-qt` initializes the default QApplication class that
PySide offers. Dangerzone, however, defines its own QApplication
subclass.

Create a `qapp_cls` fixture that will force `pytest-qt` to use this
subclass. For more info, see:
https://pytest-qt.readthedocs.io/en/latest/qapplication.html#testing-custom-qapplications
2023-07-28 12:18:58 +03:00
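A minimal sketch of such a fixture override in conftest.py (the import path of the custom QApplication subclass is an assumption):

    import pytest

    from dangerzone.gui import Application  # assumed import path

    @pytest.fixture(scope="session")
    def qapp_cls():
        # Tell pytest-qt to instantiate our QApplication subclass
        # instead of the default one.
        return Application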
Alex Pyrgiotis
24ba914cc8
updater: Differentiate between "X" and "Cancel"
We want to differentiate between the user clicking on "Cancel" and
clicking on "X", since in the second case, we want to remind them again
on the next run.
2023-07-28 11:50:44 +03:00
Alex Pyrgiotis
f6b5e1293d
gui: Add references to dialog buttons
Add references to dialog buttons, so that we can click on them from our
GUI tests.
2023-07-28 11:50:44 +03:00
Alex Pyrgiotis
a2177bfd34
Remove some stale FIXMEs 2023-07-28 11:50:44 +03:00
Alex Pyrgiotis
8d86b0a15f
Rename "Changelog" to "What's new" 2023-07-28 11:50:43 +03:00
Alex Pyrgiotis
b4bcd833e6
Force text color to be black 2023-07-28 10:41:01 +03:00
Alex Pyrgiotis
c541227dd3
Drop Ubuntu 22.10 (Kinetic Kudu) support
Drop support for Ubuntu 22.10 (Kinetic Kudu), because it's past its EOL
date [1].

Closes #485

[1]: https://endoflife.date/ubuntu
2023-07-28 10:40:04 +03:00
deeplow
f66375bd44
Add QA instructions for Qubes alpha support 2023-07-26 14:03:15 +01:00
deeplow
1ab14dbd86
Use containers in Qubes until Beta
Reverse the logic in Qubes to run in containers by default and only
perform the conversion with VMs when explicitly set by the env var
QUBES_CONVERSION=1. This will avoid surprises when someone installs
Dangerzone on Qubes expecting it to work out of the box just like any
other Linux.

Fixes #451
2023-07-26 14:02:06 +01:00
deeplow
8b8f2a207c
Remove remains of parallel tests completely
Parallel tests had given us issues in the past [1]. This time, they
weren't playing well with pytest-qt. One hypothesis is that Qt
application components run as singletons and don't play well when there
are two instances.

The symptom we were experiencing was infinite recursion and removing
pytest-xdist solved the issue.

[1]: https://github.com/freedomofpress/dangerzone/issues/217
2023-07-25 15:17:24 +01:00
deeplow
8254844724
Pass sample_pdf as fixture instead of via class
Now that sample_doc was renamed to sample_pdf, it could cause some
confusion that the TestBase class had an attribute called sample_doc
which referenced the sample PDF.

By removing this attribute and passing the fixture instead we are
following a more pytest-native approach of passing arguments explicitly.
2023-07-25 15:00:32 +01:00
deeplow
6216761058
remove number from test_doc2 variable
create a pytest fixture for a .doc file and .pdf file
2023-07-25 15:00:31 +01:00
deeplow
9ca27fd6fe
Add unit test to document change button
Fixes #428
2023-07-25 15:00:29 +01:00
deeplow
250a481f31
Store ref file_selection dialog
Allow an outside module (e.g. tests) to be able to "grab" the document
selection dialog.
2023-07-25 15:00:27 +01:00
deeplow
2bd97a036a
Add logic to handle documents removal
This implements the backend part of changing documents.
2023-07-25 15:00:12 +01:00
deeplow
d0c86fbbe2
Add change docs button to settings window
Implements the GUI logic necessary to change the selected document. When
"Change Selection" is clicked, it opens a File Dialog on the directory
of the previously selected files (if any)

Fixes #428
2023-07-25 13:44:26 +01:00
Alex Pyrgiotis
a478d14025
Update Poetry lock file
Run `poetry lock` and update the existing dependencies.

Closes #480
Closes #482
2023-07-25 15:02:44 +03:00
Alex Pyrgiotis
26cf3db4b4
Install Qt6 in CI runners and dev environments
Upgrade from Qt5 to Qt6 in our CI runners and dev environments, since
the latest PySide6 versions do not support Qt5. This leaves only our
Debian / Fedora packages relying on Qt5, since there's no PySide6
package for them yet.

There are some caveats to the Qt6 upgrade:

1. Debian Bullseye has a missing dependency to `libgl1`, so we need to
   install it separately.
2. Ubuntu Jammy has a missing dependency to `libxkbcommon-x11-0`, which
   we have to install separately.
3. Ubuntu Focal does not have Qt6, but surprisingly PySide6 works with
   Qt5.
4. All Debian-based distros require `libxcb-cursor0`.

As a side effect, we have to make our `env.py` a bit more complicated,
to cater to these exceptions.

Refs #482
2023-07-25 14:53:17 +03:00
Alex Pyrgiotis
77b380e7df
Fix proper signal type for UpdateReport
Change the signal type in `UpdaterThread.check_for_updates()` from
`dict` to `UpdateReport`. The `dict` parameter is stale and should have
never been used.
2023-07-25 14:52:49 +03:00
Alex Pyrgiotis
17ecde3173
dev_scripts: Fix wrong usage of Dockerfile snippet
When building the *end-user* environment for Ubuntu Lunar using
`./dev_scripts/env.py ... build`, we erroneously used a Dockerfile
snippet that is actually reserved for the *development* environment.

This pairing worked by chance, but we should use the proper Dockerfile
snippet, so that we don't mix these two environments.
2023-07-25 14:52:49 +03:00
Alex Pyrgiotis
52e5da52b1
Add Debian Trixie to list of supported platforms
Add the Debian Trixie distro to the list of supported platforms in our
INSTALL.md file. This was an omission from when we merged #462.
2023-07-25 14:52:48 +03:00
deeplow
74a4e80ba1
Fix comment about docker used on Ubuntu 2023-07-25 12:38:51 +01:00
Moon Sungjoon
494f498d17
Remove pipes module and use shlex instead
Thanks: https://github.com/tox-dev/tox/pull/2418/files

Closes #373
2023-07-24 18:13:00 +03:00
Alex Pyrgiotis
47b337143c
tests: Add Qt test for updates
Add a very rudimentary test for GUI update logic.

Refs #290
2023-07-24 16:54:16 +03:00
Alex Pyrgiotis
ca81b4a5f3
Add pytest-qt test dependency 2023-07-24 16:49:31 +03:00
Alex Pyrgiotis
5b17f75047
Inform the user for new updates
Add a hamburger button in the main window of Dangerzone, that will be
the entry point for update information. Whenever a new update is
released, users will see a green notification bubble. If an update error
happens, they will see a red notification bubble.

In the hamburger menu, users have the option to enable or disable update
checks. Depending on the update check status, users will see in a pop-up
dialog more info about the new update or the error.

Closes #189
2023-07-24 16:49:25 +03:00
Alex Pyrgiotis
58c5fc846a
gui: Add Update Dialog
Add a dialog that we will show for update-related tasks. This dialog has
a different layout than the Alert class: it has a message, followed by
a widget that the user chooses (can be a text box or collapsible
element), and then one last message.
2023-07-24 14:22:28 +03:00
Alex Pyrgiotis
64ca90c92f
Add a Qt widget for creating collapsible sections
Add a Qt widget called "CollapsibleBox", in order to build sections that
you can hide/show with a single click. There is no native widget for
this functionality, so we borrow some code from a StackOverflow user:
https://stackoverflow.com/a/52617714
2023-07-24 14:22:27 +03:00
Alex Pyrgiotis
20a25f1dd4
Allow more types of dialogs
Factor out some parts of the Alert class into a more generic dialog
class. This class will be used for a new type of dialog that we will
introduce in a subsequent commit.

Note that this commit does not alter the functionality of the Alert
class.
2023-07-24 14:22:27 +03:00
Alex Pyrgiotis
0e3255d091
share: Add hamburger menu status icons
Add three status icons for the hamburger menu:

* hamburger_menu.svg: The typical hamburger menu. Taken from
  https://commons.wikimedia.org/wiki/File:Hamburger_icon.svg, which is
  in the public domain.
* hamburger_menu_update_error.svg: A hamburger menu with a red
  notification bubble on the top right corner.
* hamburger_menu_update_success.svg: A hamburger menu with a green
  notification bubble on the top right corner.
2023-07-24 14:22:27 +03:00
Alex Pyrgiotis
47171a2722
tests: Add tests for update logic 2023-07-24 14:22:27 +03:00
Alex Pyrgiotis
f6e945805e
tests: Add a Pytest fixture for Updater
Add a Pytest fixture that returns an UpdaterThread instance which has
its own unique settings directory. Note that the UpdaterThread instance
needs to be slightly nerfed, so that it doesn't rely on Qt functionality
or any isolation providers.
2023-07-24 14:22:27 +03:00
Alex Pyrgiotis
5ae8b871b6
Add UpdaterThread class
Add a new Python module called "updater", which contains the logic for
prompting the user to enable updates, and checking our GitHub releases
for new updates.

This class has some light dependency to Qt functionality, since it needs
to:

* Show a prompt to the user,
* Run update checks asynchronously in a Qt thread,
* Provide the main window with the result of the update check

Refs #189
2023-07-24 14:22:27 +03:00
Alex Pyrgiotis
0ad489f80b
Get default settings without Settings instance
Get the default settings of Dangerzone for the current version, without
having to instantiate the Settings class. Note that instantiating the
Settings class also writes the settings to the underlying
`settings.json` file, and there are cases where we don't want this
behavior.
2023-07-24 14:22:26 +03:00
Alex Pyrgiotis
266addb5b7
Make it easier to get and save updater settings
Add the following two features in the Settings class:

1. Add a way to save the settings, if the contents of a key have
   changed.
2. Add a way to get all the updater settings, by fetching the
   keys that start with `"updater_"`.
2023-07-24 14:22:26 +03:00
Alex Pyrgiotis
2df459bcfc
Add default settings for Dangerzone updater
Add some settings prefixed with `"updater_"`, which will be used for
updates later on.
2023-07-24 14:22:26 +03:00
Alex Pyrgiotis
a1e3cb27a7
ci: Pass X11 socket to tests
Pass the X11 socket of the Linux CI runners to the container where our
CI tests run, with the `-g` flag of `dev_scripts/env.py`. By having a
working X11 socket, we can run GUI tests. Prior to this fix, we would
encounter this error:

    tests/gui/test_main_window.py::test_change_document_button qt.qpa.xcb: could not connect to display
    qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
    This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

    Available platform plugins are: xcb, offscreen, wayland-egl, wayland, eglfs, vnc, minimalegl, vkkhrdisplay, linuxfb, minimal.

Another alternative we considered was to use the
`QT_QPA_PLATFORM=offscreen` environment variable. This alternative
works, but it's less close to the end-user's environment, so we decided
in favor of the approach above.
2023-07-24 14:22:26 +03:00
Alex Pyrgiotis
f58e31efe6
Run tests sequentially
Run tests sequentially, because in subsequent commits we will add
Qt tests that do not play nice when `pytest` creates new processes [1].

Also, remove the pytest wrapper, whose main task was to decide if tests
can run in parallel [2].

[1]: https://bugreports.qt.io/projects/PYSIDE/issues/PYSIDE-2393
[2]: https://github.com/freedomofpress/dangerzone/issues/217
2023-07-24 14:22:26 +03:00
Manil Chowdhury
a1bbcdf2b6
Fix typos, clarify steps 2023-07-14 15:21:48 +03:00
Manil Chowdhury
680c57b8c5
Add to FAQ: suggest updating app
Maintainers have indicated a preference for encouraging users to update the app version rather than use workarounds to fix issues.
2023-07-14 15:21:48 +03:00
Manil Chowdhury
6926444c88
Add note for MacOS 11+ users blocked by SIP
A workaround to an issue related to SIP imposed on Docker has been identified in #371. Update README.md to include friendly instructions for MacOS 11+ users blocked by this issue.
2023-07-14 15:21:47 +03:00
deeplow
ef41cab76e
Add progress reports on Qubes (GUI)
Fixes #429
2023-07-13 12:57:23 +01:00
deeplow
bf38c24d99
Merge stdout_callback with print_progress
stdout_callback is used to flow progress information from the conversion
to some front-end. It was always used in tandem with printing to the
terminal (which is kind of a front-end). So it made sense to always put
them together.
2023-07-13 12:57:04 +01:00
deeplow
206c262554
Bump python version on Windows to 3.11
Python 3.10.12 fixes some CVEs by which Dangerzone does not appear to be
affected; however, its binaries are not made available by the Python
foundation. Moving to 3.11 should be trivial since this was already
deployed in Fedora 37+.
2023-07-06 14:32:31 +01:00
deeplow
e989069712
Add ubuntu 23.04 (lunar) support
The Ubuntu 23.04 docker image includes a user by default (ubuntu) which
takes over UID 1000, so our user becomes 1001, which makes the user
directory unwritable. The solution, as suggested in [1], was to remove
that user.

[1]: https://bugs.launchpad.net/cloud-images/+bug/2005129

Fixes #452
2023-06-28 11:07:59 +01:00
deeplow
e773add68e
Adds support for Debian Trixie (13)
Fixes #452
2023-06-28 11:05:47 +01:00
Moon Sungjoon
5b58576854
Add --no-cache on the apk install option 2023-06-26 20:02:58 +03:00
Alex Pyrgiotis
20b24a6c71
Add development instructions for Qubes integration
Add instructions aimed at developers who want to try out Qubes
integration.

Fixes #411
2023-06-21 15:06:22 +03:00
deeplow
a1d40fde78
Create an RPM for Qubes
Allow creating an RPM package that is to be installed specifically on
Qubes. This package has the following extra properties compared to our
regular RPM packages:

1. Make `python3-magic`, `libreoffice` and `tesseract` requirements
   for installing Dangerzone, since the conversion takes place in a
   disposable qube that needs these packages.
2. Ignore the container.tar.gz file, if it exists.
3. Add our RPC calls under `/etc/qubes-rpc`
2023-06-21 11:46:43 +03:00
deeplow
5191556dcd
Use the Qubes isolation provider from CLI/GUI
Autodetect in the CLI/GUI if we should run the conversion in disposable
qubes.
2023-06-21 11:46:43 +03:00
deeplow
baeab9d7eb
Add Qubes isolation provider
Add an isolation provider for Qubes, that performs the document
conversion as follows:

Document to pixels phase
------------------------

1. Starts a disposable qube by calling either the dz.Convert or the
   dz.ConvertDev RPC call, depending on the execution context.
2. Sends the file to the disposable qube through its stdin.
   * If we call the conversion from the development environment, also
     pass the conversion module as a Python zipfile, before the
     suspicious document.
3. Reads the number of pages, their dimensions, and the page data.

Pixels to PDF phase
-------------------

1. Writes the page data under /tmp/dangerzone, so that the
   `pixels_to_pdf` module can read them.
2. Pass OCR parameters as envvars.
3. Call the `pixels_to_pdf` main function, as if it was running within a
   container. Wait until the PDF gets created.
4. Move the resulting PDF to the proper directory.

Fixes #414
2023-06-21 11:46:34 +03:00
Alex Pyrgiotis
c194606550
Add Qubes RPC calls
Add two RPC calls that can run on disposable VMs:

* dz.Convert: This call simply imports the dangerzone package and runs
  the Qubes wrapper for the "document to pixels" code. This call is
  similar to the way we run the conversion part in a container.
* dz.ConvertDev: This call is for development purposes, and does the
  following:
  - First it receives the `dangerzone.conversion` module as Python
    zipfile. This way, we can quickly iterate on changes on the
    server-side part of Qubes, without altering the templates.
  - Second, it calls the Qubes wrapper for the "document to pixels"
    code, as dz.Convert does.
2023-06-21 11:45:08 +03:00
deeplow
a83f5dfc7a
Add Qubes-specific code for disposable VMs
The "document to pixels" code assumes that the client has called it with
some mount points in which it can write files. This is true for the
container isolation provider, but not for Qubes, which can communicate
with the client only via stdin/stdout.

Add a Qubes wrapper for this code that reads the suspicious document
from stdin and writes the pages to stdout. The on-wire format is the
same as the one that TrustedPDF uses.
2023-06-21 11:45:04 +03:00
Alex Pyrgiotis
cfdaec23c5
Support multiple Python libraries for libmagic
It seems that there are at least two Python libraries with libmagic
support:

* PyPI: python-magic (https://pypi.org/project/python-magic/)
  On Fedora it's `python3-magic`
* PyPI: filemagic (https://pypi.org/project/filemagic/)
  On Fedora it's `python3-file-magic`

The first package corresponds to the `py3-magic` package on Alpine
Linux, and it's the one we install in the container. The second package
uses a different API, and it's the only one we can use on Qubes.

To make matters worse, we:

* Can't install the first package on Fedora, because it installs the
  second under the hood:
  https://bugzilla.redhat.com/show_bug.cgi?id=1899279
* Can't install the second package on Alpine Linux (untested), due to
  musl being used instead of glibc:
  https://stackoverflow.com/a/53936722

Ultimately, we need to support both, by trying the first API, and on
failure using the other API.
2023-06-21 11:45:00 +03:00
deeplow
9410da762c
Check if conversion code runs on Qubes
Add a way to check if the code runs (or should run) on Qubes.

Refs #451
2023-06-21 11:44:58 +03:00
deeplow
a0d1a68302
Use /tmp/dangerzone for Qubes compatibility
For use in containers, creating a /dangerzone directory is fine, but it
is more standard to do this in /tmp.
2023-06-21 11:44:53 +03:00
deeplow
814d533c3b
Restructure container code
The files in `container/` no longer make sense to have that name since
the "document to pixels" part will run in Qubes OS in its own virtual
machine.

To adapt to this, this PR does the following:
- Moves all the files in `container` to `dangerzone/conversion`
- Splits the old `container/dangerzone.py` into its two components
  `dangerzone/conversion/{doc_to_pixels,pixels_to_pdf}.py` with a
  `common.py` file for shared functions
- Moves the Dockerfile to the project root and adapts it to the new
  container code location
- Updates the CircleCI config to properly cache Docker images.
- Updates our install scripts to properly build Docker images.
- Adds the new conversion module to the container image, so that it can
  be imported as a package.
- Adapts the container isolation provider to use the new way of calling
  the code.

NOTE: We have made zero changes to the conversion code in this commit,
except for necessary imports in order to factor out some common parts.
Any changes necessary for Qubes integration follow in the subsequent
commits.
2023-06-21 11:44:47 +03:00
Alex Pyrgiotis
9a45bc12c5
ci: Fix CI races in Debian Bullseye tests 2023-06-07 10:54:37 +03:00
Alex Pyrgiotis
a2506e6968
ci: Ignore CVE-2023-28322 from security scans
Ignore CVE-2023-28322 from our security scans, because it targets
`libcurl`, which is not used/exploitable in our offline container.
2023-06-06 12:15:34 +03:00
Alex Pyrgiotis
3f3d0be2b4
ci: Test building a .deb and installing it
Update our GitHub Actions workflow with the following tests:

1. Build a .deb for Dangerzone on Debian Bookworm.
2. Install this .deb on every Debian-based platform that we support.
3. Test that the installed version runs successfully.

This way, we can be sure that .deb that we create on a single Debian
version (here we choose Debian Bookworm) works on all platforms.

Refs #358
2023-05-25 07:55:19 +03:00
Alex Pyrgiotis
517d3b58f8
dev_scripts: Map host user UID to container UID 1000
When we run our Dangerzone environments through dev_scripts/env.py, we
use the Podman flag `--userns keep-id`. This option maps the UID in the
host to the *same* UID in the container. This way, the container can
access mounted files from the host.

The reason this works is that the user within the container has UID
1000, and the user on the host *typically* has UID 1000 as well. This
setup can break, though, if the user on the host has a different UID.
For instance, the UID of the GitHub actions user that runs our CI
command is 1001.

To fix this, we need to always map the host user UID (whatever that is)
to container UID 1000. We can achieve this with the following mapping:

  1000:0:1         # Map container UID 1000 to subordinate UID 0
                   # (sub UID 0 = owner of the user ns = host user UID)
  0:1:1000         # Map container UIDs 0-999 to subordinate UIDs 1-1000
  1001:1001:64536  # Map container UIDs 1001-65535 to subordinate UIDs 1001-65535

Refs #228
2023-05-25 07:55:19 +03:00
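A minimal sketch of passing that mapping to Podman (the flag values come from the mapping above; the surrounding code is hypothetical):

    import subprocess

    # Each --uidmap entry is <container UID>:<subordinate UID>:<count>.
    uidmaps = ["1000:0:1", "0:1:1000", "1001:1001:64536"]

    cmd = ["podman", "run", "--rm"]
    for mapping in uidmaps:
        cmd += ["--uidmap", mapping]
    cmd.append("dangerzone.rocks/dangerzone")
    subprocess.run(cmd, check=True)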
Alex Pyrgiotis
91f8f8b387
ci: Install recommended Podman packages
In Debian-based images, there are some Podman dependencies that are
marked as recommended, but are essential for rootless containers. These
dependencies will not be installed in our Dangerzone environments, due
to the `--no-install-recommends` flag.

Our approach was to find these dependencies through trial and error,
and hardcode them in our image. Turns out though that there are some
dependencies (e.g., `netavark`) that may be necessary in some Debian
flavors, and not others.

In order to not impact the readability of the env.py file, we prefer
installing Podman with all of its recommended packages. On one hand,
this will make the image size of our Debian-based Dangerzone
environments slightly larger, but on the other hand, it will make CI
tests less flaky.
2023-05-25 07:51:02 +03:00
Alex Pyrgiotis
14063349bb
ci: Fix transient errors in Debian Bullseye
Fix transient errors in Debian Bullseye CI tests by using a different
machine image (Ubuntu 22.04 vs Ubuntu 20.04), and solving some Podman
config issues along the way.

Fixes #388
2023-05-24 13:45:56 +03:00
Alex Pyrgiotis
641aa131c9
ci: Add test for OCR languages
Test that the languages that we provide to users for OCR match the
languages that are installed in the container image

Fixes #417
2023-05-24 13:43:29 +03:00
Alex Pyrgiotis
5bd609781d
Remove Kurdish (Arabic) language
Remove the Kurdish (Arabic) language ("kur_ara") from the list of
languages that we offer for OCR, since it's not included in the
installed languages.

Interestingly, it is not present in the Alpine Linux repos either, so
this was probably an omission in the first place.
2023-05-24 13:43:29 +03:00
Alex Pyrgiotis
35e439f9e8
Restore the OCR languages
Restore the OCR languages to the state they were in
66d3c40163, with some minor changes. We
can now do so because we download all the trained models, not just the
ones that Alpine Linux offers.
2023-05-24 13:43:29 +03:00
Alex Pyrgiotis
a0d6f0d719
container: Grab trained OCR models from GitHub
Grab Tesseract's trained models from GitHub, instead of from the Alpine
Linux repos. Over the past few months, the models in the Alpine Linux
repos did not remain stable, leading to CI issues.

Since the models are already pre-trained and available through
Tesseract's repo on GitHub, we can use the release tarball that they
offer to install them in the container image, which is basically what
the upstream packages are doing as well.

In order to make sure that we have no regressions, at the time of this
commit we ensured that the hashes of the models offered through the
Alpine Linux repos and the models offered from the GitHub release are
the same. Also, in order to detect future regressions or foul play, we
check the downloaded models against a known checksum. Given that these
models change every few years, updating the checksum should not be an
issue.

Fix #357
2023-05-23 16:27:40 +03:00
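A minimal sketch of downloading a release tarball and verifying it against a known checksum (the URL and checksum below are placeholders, not the real values):

    import hashlib
    import urllib.request

    TESSDATA_URL = "https://github.com/tesseract-ocr/..."  # placeholder URL
    EXPECTED_SHA256 = "<known checksum>"  # placeholder

    def download_and_verify(url: str, expected_sha256: str) -> bytes:
        with urllib.request.urlopen(url) as resp:
            data = resp.read()
        digest = hashlib.sha256(data).hexdigest()
        if digest != expected_sha256:
            raise RuntimeError(f"Checksum mismatch for {url}: {digest}")
        return data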
deeplow
8059c8e1f1
Deprecate Fedora 36 support
Fixes #420
2023-05-23 09:22:59 +01:00
deeplow
09a0e51c3f
Sync qa.py and README (missed on PR#416) 2023-05-18 12:38:52 +01:00
Alex Pyrgiotis
3d822e1aa3
container: Install a renamed package
The tesseract-ocr-data-ell package for the Greek language has been
renamed to tesseract-ocr-data-grc. Use the new name in our Dockerfile.
2023-05-17 20:29:13 +03:00
Alex Pyrgiotis
8b2c5bba75
ci: Ignore two CVEs from our security scans
Ignore two CVEs from our security scans, which were triggered when
scanning the Dangerzone container image for v0.4.1. These CVEs do not
affect our users, and we offer an explanation why.
2023-05-17 20:29:13 +03:00
Alex Pyrgiotis
75be9b5c00
ci: Add security scanning
Add two GitHub Actions workflows, that perform the following checks:

* Security scan the Python dependencies of the Dangerzone application
  (`poetry.lock`), for the current/main branch.
* Build and security scan the Dangerzone container image for the
  current/main branch.
* Security scan the Python dependencies of the Dangerzone application
  (`poetry.lock`), for the latest release of Dangerzone (currently
  v0.4.1).
* Download and security scan the Dangerzone container image for the
  latest release of Dangerzone (currently v0.4.1).

The first two checks will run on branch pushes, PRs, and nightly. The
last two checks will run only nightly, since the code in the current
branch cannot affect already released artifacts.

Also, besides the security scans, these workflows will also update the
Security alerts in the GitHub page for the Dangerzone project, and print
the SARIF report to the stdout, for debugging purposes.

Closes #222
2023-05-17 20:29:13 +03:00
Chris Kerr
1a82962224
Fix typo
"keying" -> "keyring"

Signed-off-by: deeplow <deeplower@protonmail.com>
2023-05-17 08:52:34 +01:00
Alex Pyrgiotis
558b4bffea
Update changelog for Fedora 38 2023-05-16 16:20:32 +03:00
Alex Pyrgiotis
f4b29b72fc
Add support for Fedora 38 in the QA script
Update the release instructions and the QA script to support Fedora 38.
2023-05-16 16:20:32 +03:00
Alex Pyrgiotis
739ef87d6c
ci: Add checks for Fedora 38
Update our CircleCI config with checks for Fedora 38:

* Build RPMs
* Run tests
2023-05-16 16:20:32 +03:00
sudwhiwdh
f8f9cf304e
fix gui typo 2023-05-08 12:53:09 +01:00
Erik Moeller
8bdafce660
Appease linter 2023-04-24 11:50:58 +03:00
Alex Pyrgiotis
1ae7581df6
Use a different certificate for MacOS
Replace our reference to an Apple development certificate with a
Developer ID Application certificate. The former is not accepted during
the code notarization phase, whereas the latter is.
2023-04-24 11:50:58 +03:00
Alex Pyrgiotis
4c346154b2
Minor fixes in the Fedora release instructions 2023-04-24 11:50:57 +03:00
Alex Pyrgiotis
7f7d8bc2cc
Update notarization instructions 2023-04-24 11:50:57 +03:00
Erik Moeller
cdd0d3a647
Minor changelog tweaks 2023-04-18 13:19:26 -07:00
Alex Pyrgiotis
70a2e710d6
Bump version to 0.4.1
This release brings a split in the MacOS binaries, since we now have
separate ones for Intel and Apple Silicon architectures, so we must
reflect this in the README as well.
2023-04-18 23:01:00 +03:00
Alex Pyrgiotis
d6ffa0ea2e
CHANGELOG: Point to the correct issue 2023-04-18 23:01:00 +03:00
Alex Pyrgiotis
e36213c0c8
CHANGELOG: add entry about change in release keys 2023-04-18 23:01:00 +03:00
Alex Pyrgiotis
bb5a709250
CHANGELOG: fix issue number 2023-04-18 23:01:00 +03:00
Alex Pyrgiotis
b5c1c1192e
Add user instructions for installing Debian packages 2023-04-14 10:54:43 +03:00
Alex Pyrgiotis
dce516b4e8
Add dev instructions for building Debian packages 2023-04-14 10:54:43 +03:00
deeplow
49f72320d9
Update macOS release building instructions
Make the instructions consistent with the release building changes.
2023-04-14 08:50:58 +01:00
deeplow
592009d4d1
Fix build_app_bundle() (missing arguments) 2023-04-14 08:50:48 +01:00
deeplow
18557f88fc
Allow "create-dmg" to be in other places
If installed with homebrew, create-dmg will be installed at a different
location. It makes more sense to use the 'which' utility to find where
it is.
2023-04-14 08:48:07 +01:00
deeplow
21875714b8
Update apple development key ID 2023-04-14 08:48:05 +01:00
deeplow
78959100a8
Update fedora installation / release instructions
Changes instructions from the packagecloud setup to
packages.freedom.press

Delegates the key import to .repo configuration, following the example
of docker's install instructions [1].

[1]: https://docs.docker.com/engine/install/fedora/#install-docker-engine
2023-04-14 08:41:55 +01:00
deeplow
1c0dfb45f5
Update Apple account to FPF's Developer ID 2023-04-10 10:41:03 +01:00
deeplow
3f23010394
Redo macOS build-app.py and add --codesign-only opt
Redoes the build-app.py script to add an option to sign only an already-
produced app bundle.
2023-04-10 10:40:01 +01:00
Alex Pyrgiotis
bb0de52b01
Bump version to 0.4.1-rc3 2023-04-03 19:36:48 +03:00
Alex Pyrgiotis
7fe01d6470
install/windows: Remove -rc identifiers from version
Remove any -rc identifiers (e.g., 0.4.1-rc3) from the Dangerzone
version, if it includes them. If we don't remove them, then building
the MSI for Windows will fail as follows:

    error CNDL0108: The Product/@Version attribute's value, '0.4.1-rc3',
    is not a valid version. Legal version values should look like
    'x.x.x.x' where x is an integer from 0 to 65534.
2023-04-03 19:35:19 +03:00
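A minimal sketch of stripping the -rc identifier before handing the version to the MSI builder (hypothetical helper):

    import re

    def msi_version(version: str) -> str:
        # "0.4.1-rc3" -> "0.4.1"; WiX only accepts x.x.x.x with integer parts.
        return re.sub(r"-rc\d+$", "", version)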
Alex Pyrgiotis
6c7c0b615f
dev_scripts: Add missing packages in Dangerzone envs
Install the following packages in Dangerzone envs:

* python3-setuptools: We've seen that this package is necessary to build
  the RPM package for Dangerzone. The error that we encountered was the
  following:

      * Deleting old build and dist
      * Building RPM package
      Traceback (most recent call last):
        File "/home/user/dangerzone/setup.py", line 5, in <module>
          import setuptools
      ModuleNotFoundError: No module named 'setuptools'
      Traceback (most recent call last):
        File "/home/user/./dangerzone/install/linux/build-rpm.py", line 43, in <module>
          main()
        File "/home/user/./dangerzone/install/linux/build-rpm.py", line 30, in main
          subprocess.run(
        File "/usr/lib64/python3.11/subprocess.py", line 571, in run
          raise CalledProcessError(retcode, process.args,
      subprocess.CalledProcessError: Command 'python3 setup.py bdist_rpm --requires='podman,python3-pyside2,python3-appdirs,python3-click,python3-pyxdg,python3-colorama'' returned non-zero exit status 1.

* fuse-overlayfs: In Ubuntu 22.10 (at least), we encountered the
  following error when running Podman:

      ERRO[0000] User-selected graph driver "overlay" overwritten by
      graph driver "vfs" from database - delete libpod local files to
      resolve

  The `vfs` driver is much slower than the `overlayfs` storage driver,
  so we need to fix this. The reason why we encounter this error is
  explained in the Podman docs [1]:

      [...] and is vfs for non-root users when fuse-overlayfs is not
      available.

  Normally, the `fuse-overlayfs` package would have been installed, but
  we don't install it due to the `--no-install-recommends` flag, so we
  install it manually.

[1]: https://docs.podman.io/en/latest/markdown/podman.1.html#storage-driver-value
2023-04-03 18:58:56 +03:00
Alex Pyrgiotis
33f81f5064
tests: Add sample files for extra MIME types
In PR #378 ("container: Allow converting more document formats"), we
added support for the following MIME types:

* application/zip
* application/octet-stream
* application/x-ole-storage
* application/vnd.oasis.opendocument.spreadsheet-template
* application/vnd.oasis.opendocument.text-template

However, we forgot to add some tests for these MIME types in the repo.
In this commit, we add a file for each of these MIME types, to make sure
we have no regressions in the future.
2023-04-03 18:58:56 +03:00
Alex Pyrgiotis
58a8241844
container: Run LibreOffice in safe mode
The main use of safe mode [1] in LibreOffice is to run with a fresh user
profile, in case the default one got borked somehow. This is actually
not a concern of ours, since the user's profile is in the container and
is not persistent.

The main reason we want to preemptively run LibreOffice in safe mode is
to remove hardware acceleration capabilities. Whether hardware
acceleration actually works in a container is another question, but we
want to be extra sure.

[1]: https://help.libreoffice.org/latest/en-US/text/shared/01/profile_safe_mode.html
2023-03-28 14:47:07 +03:00
Alex Pyrgiotis
a1c87a207a
container: Allow converting more document formats
Remove the association between MIME types and export filters, because
LibreOffice is able to auto-detect them on its own. Instead, ask
LibreOffice to simply convert the document to a .pdf.

This association was cumbersome for yet another reason; there are MIME
types that may be associated with more than one file type. That's why
it's better to let LibreOffice decide the proper filter for the
conversion.

Our current understanding is that this change won't widen our attack
surface for the following reasons:

* The output filters for PDF documents are pretty specific, and we don't
  affect the input filters somehow.
* The default behavior of LibreOffice on Alpine Linux is to disable
  macros.

Closes #369
2023-03-28 14:46:47 +03:00
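A minimal sketch of the simplified invocation, letting LibreOffice auto-detect the input format (the argument list is an assumption, not copied from the conversion code):

    import subprocess

    def office_doc_to_pdf(input_path: str, output_dir: str) -> None:
        # No per-MIME-type export filter: just ask for a PDF and let
        # LibreOffice pick the proper input filter on its own.
        subprocess.run(
            ["libreoffice", "--headless", "--convert-to", "pdf",
             "--outdir", output_dir, input_path],
            check=True,
        )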
Alex Pyrgiotis
8b846820d2
Update typing hints for Mypy 1.1.1
Due to a bump in our Python dependencies, we now install Mypy 1.1.1
instead of 0.982. This change triggered the following errors:

* Incompatible default for argument <a> (default has type
  None, argument has type <t>):

  Mypy further explains here that PEP 484 prohibits implicit Optional,
  so we need to make these types explicit Optional.

* Unused "type: ignore" comment, use narrower [method-assign] instead of
  [assignment]:

  Mypy has specialized some of its lints, meaning that we should switch
  to the newer variants.

Also, it detected several other small inconsistencies. We fix all of
these errors in this commit.
2023-03-27 15:19:43 +03:00
Alex Pyrgiotis
1f308e9cc5
Reformat code with Black 23
Due to a bump in our Python dependencies, we now install Black 23
instead of 22, which detects some of our files as badly formatted.
2023-03-27 15:17:23 +03:00
Alex Pyrgiotis
b102b2bd49
Update Poetry lock file
Run `poetry lock` and allow updating the existing dependencies. This
fixes a CI regression that was introduced by Poetry 1.4.1, which added
stricter Python wheels validation

Fixes #376
2023-03-27 15:15:26 +03:00
Alex Pyrgiotis
7613941e1f
ci: Do not deploy to PackageCloud
Pave the way for deploying .deb and .rpm packages to
packages.freedom.press. Remove the code that deploys to PackageCloud
once we tag a commit with `v<semver>`.

Refs #291
2023-03-27 13:41:08 +03:00
Alex Pyrgiotis
8a7d52b471
Update Changelog for 0.4.1 2023-03-27 12:32:36 +03:00
deeplow
bc50917362
Sort OCR languages when loading them from json
Because now the ocr-languages.json is sorted by tesseract language arg
name, we'll want to sort the languages the user sees alphabetically.
2023-03-16 14:23:31 +00:00
deeplow
58332fdd6e
tesseract: add new languages and others
Tagalog was replaced with Filipino [1] in newer tesseract versions, so it
doesn't make sense for us to use the new name and map it to the old
"tgl" name (Tagalog) under the hood.

Language names obtained from tesseract's man page [2].

[1]: 58f7a72f00
[2]: https://github.com/tesseract-ocr/tesseract/blob/main/doc/tesseract.1.asc
2023-03-16 14:23:30 +00:00
deeplow
d8d83ff036
Remove languages not supported
When the ocr languages list was originally introduced (commit b527776),
the container was running on Ubuntu 18.04 [1]. Later it changed to
Alpine Linux. Unfortunately, it has fewer languages than Ubuntu.

This commit removes those languages. Fixes #355

[1]: b527776e28 (diff-ec032b25a6c2af24eaf4128c85090c5ce0dcbab64e64eace10be9f4e4683a71bR1)
2023-03-16 14:23:28 +00:00
deeplow
66d3c40163
Sort OCR languages by tesseract arg name
Make it easier to compare the list of languages with the output of
`tesseract --list-langs`.
2023-03-16 14:23:25 +00:00
Alex Pyrgiotis
d768099912
Grab just the image ID
When building the image, grab the image id using `-q`, which removes all
the decorations in the output and just keeps the image ID.
2023-03-09 19:04:59 +02:00
Alex Pyrgiotis
a33dcfbb51
Replace First Look Media references
Update several references to First Look Media in the code, to better
reflect the current status, where Freedom of the Press Foundation has
taken over the stewardship of the project.

Fixes #343
2023-03-08 18:40:55 +02:00
Alex Pyrgiotis
330766665d
Update instructions in qa.py 2023-03-08 17:56:25 +02:00
Alex Pyrgiotis
b74258b6d2
Remove stale QA requirement
Remove a stale QA requirement for running the tests manually in the rest
of our Linux distros. Our CI jobs take care of that, so we don't need to
do it.
2023-03-08 17:40:26 +02:00
Alex Pyrgiotis
4668443be6
install: Use the full image tag
Use the full image tag (dangerzone.rocks/dangerzone:latest) when
building the image. Else, we risk creating a `share/image-id.txt` file
with multiple IDs in it, if we have another
`dangerzone.rocks/dangerzone` image (with a different tag) in our dev
environment.
2023-03-08 17:40:26 +02:00
Alex Pyrgiotis
c719fc4f54
Update our MacOS QA instructions
Update our QA instructions for ARM-based MacOS systems. The main change
in 0.4.1 is that we can build an ARM container image for Dangerzone,
which is different from Intel Macs. So, we need to build and test it
during release.
2023-03-08 17:40:26 +02:00
Alex Pyrgiotis
5a0c4d0a03
Bump timeouts
Perform the following timeout bumps:

1. Increase the minimum timeout per page/MiB by x3. The rationale is that
   10 seconds is a reasonable timeout, but to be on the safe side, it's
   best if we multiply it by a safety factor.
2. Increase the minimum timeout from 10 seconds to 60 seconds. 10
   seconds may be too little if the application runtime (e.g.,
   LibreOffice) is slow to start due to background CPU thrashing.
2023-03-08 17:38:59 +02:00
Alex Pyrgiotis
a2049349b1
ci: Add missing CI tests for Ubuntu Focal / Debian Bullseye 2023-03-08 17:36:42 +02:00
Alex Pyrgiotis
b32f215c7c
dev_scripts: Handle alt name for Ubuntu Focal 2023-03-08 17:36:42 +02:00
Alex Pyrgiotis
aaecfdb63e
dev_scripts: Imitate mkdir -p when creating state dirs
The first time we run the env.py script, we may not have the necessary
dirs under envs. It's best to create them with `parents=True`.
2023-03-08 17:36:42 +02:00
Alex Pyrgiotis
96d8cdef94
Suggest users to install Poetry via pipx
Replace the command to install Poetry globally via `pip` in our build
instructions, with a command that installs Poetry under ~/.local/bin
via `pipx`. The rationale is the same as in the previous commit, i.e.,
PEP 668 does not allow it.

Note that in this case, we don't have any CI restrictions, so we could
use the official installer instead. However, for security reasons, we
prefer suggesting `pipx` to the users, and of course give them a list of
alternatives.

Note that for Windows and MacOS we leave the command as is, until we
figure out how PEP 668 applies in there.
2023-03-08 17:36:42 +02:00
Alex Pyrgiotis
7310977343
dev_scripts: Install Poetry via pipx
We can no longer install Poetry via `pip`, since Debian Bookworm now
enforces PEP 668, meaning that both `pip install poetry` and `pip
install --user poetry` cannot work [1]. Since we use the same
installation steps for all of our dev environments, we need to find a
common way to install Poetry.

Poetry's website provides several ways to install Poetry [2]. Moreover,
it also has a special section with CI recommendations [3]. In this
section, it strongly suggests to install Poetry via `pipx`, instead of
the installer script that you download from the Internet.

Follow Poetry's suggestion to install it via `pipx` in CI environments,
with one minor change. Do not use `pipx ensurepath`, as that will
affect the `.bashrc` of the dev environment, which at some point in the
future may be mounted by the dev. Instead, set a PATH environment
variable that includes `~/.local/bin`.

[1]: https://github.com/freedomofpress/dangerzone/issues/351
[2]: https://python-poetry.org/docs/#installation
[3]: https://python-poetry.org/docs/#ci-recommendations

Fixes #351
2023-03-08 17:36:42 +02:00
Alex Pyrgiotis
7979dbd653
ci: Install Poetry via APT on Debian Bookworm
We no longer need to install Poetry via PyPI, since the upstream Debian
issues have been fixed. Moreover, PEP 668 [1] is now enforced in Debian
Bookworm, so we can't install Poetry globally via `pip` in any case.

For these reasons, prefer installing Poetry via APT.

[1]: https://peps.python.org/pep-0668/

Refs #351
2023-03-08 17:23:06 +02:00
deeplow
e840c7a18c
Fix "Choose..." dialog not opening on Qt6
When clicking on the "Choose..." button nothing would happen visually
and it would show the error:

  Traceback (most recent call last):
    File "/home/user/dangerzone/dangerzone/gui/main_window.py", line 614, in select_output_directory
      dialog.setFileMode(QtWidgets.QFileDialog.DirectoryOnly)

According to the PySide docs, QFileDialog.DirectoryOnly has been
deprecated in Qt 4.6 [1]. This was probably not an issue on PySide2
because it must have used an earlier Qt version.

Fixes #360

[1]: https://doc.qt.io/qtforpython-5/PySide2/QtWidgets/QFileDialog.html#PySide2.QtWidgets.PySide2.QtWidgets.QFileDialog.FileMode
2023-03-01 12:49:46 +00:00
Alex Pyrgiotis
56c5d77afd
Build Windows MSI/.exe in GitHub actions
Update our GitHub Actions manifest to also build a dummy Windows MSI
installer for Dangerzone, so that we don't discover issues only at
release time.
2023-02-23 09:12:06 +00:00
deeplow
f307e03215
Windows build: link to adding Wix to PATH 2023-02-23 09:12:04 +00:00
deeplow
fb85421db8
Fix Windows build for PySide6 (illegal file names)
Building the `.msi` on Windows was failing in the `candle.exe` step due
to some file paths in the PySide6 library being too long
(PySide6/examples) or having an illegal character (`+`) in their file
names (PySide6/qml/QtQuick).

Skipping copying these files to the `.msi` fixes the issue. Skipping
`examples/` should be of no impact since they're just examples and
skipping `qml/QtQuick` shouldn't cause issues because we don't use QML.

Reverts commit `bbbf822` and adapts it from PySide2 to PySide6.
2023-02-23 09:12:02 +00:00
deeplow
541fe7f382
Container: ignore non-progress pdftoppm output
pdftoppm reports Syntax Errors and other errors on a variety of
documents, but it still produces usable results despite the failures.
From the user's perspective, it's better to have a document, even if
imperfect, than none at all. For this reason, we ignore non-relevant
output.
2023-02-21 19:05:21 +00:00
deeplow
dbd0450542
Add poppler-data package due to missing fonts
Some documents were reporting the following error when running them
over pdftoppm:

    Syntax Error: Missing language pack for 'Adobe-Japan1' mapping

This did not necessarily make the conversion fail, but some fonts may
not have been properly rendered due to the missing package.
2023-02-21 18:39:14 +00:00
Alex Pyrgiotis
9bf65bc829
dev_scripts: Add extra distros in QA script
Add some distros in the QA script that were missing from the list of our
supported ones.
2023-02-21 20:20:04 +02:00
Alex Pyrgiotis
ce86c1b126
dev_scripts: Enable building envs on Ubuntu Focal
Enable installing Podman in Ubuntu Focal, by re-using the instructions
we have in our installation section. This enables us to build a dev
environment for Ubuntu Focal, which we couldn't do previously.
2023-02-21 20:20:04 +02:00
Alex Pyrgiotis
5100e15213
Add missing build dependencies for Ubuntu Focal
Add some missing build dependencies that we encountered for Ubuntu
Focal, but they apply to the rest of the Debian-based distros as well.
2023-02-21 20:20:03 +02:00
Alex Pyrgiotis
79ccd14d5d
Fix PySide2 issue for Ubuntu Focal
Provide a fallback for QRegularExpressionValidator specifically for
Ubuntu Focal, because it's not present in PySide2 5.14. Instead, fall
back to QRegExpValidator if it doesn't exist.

Fixes #339
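
A minimal sketch of such a fallback, assuming the usual PySide2 module layout (the alias name is hypothetical):

```python
try:
    # Available in newer PySide2 releases
    from PySide2.QtGui import QRegularExpressionValidator as RegexValidator
except ImportError:
    # PySide2 5.14 (Ubuntu Focal) lacks it; fall back to the older class
    from PySide2.QtGui import QRegExpValidator as RegexValidator
```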
2023-02-21 20:17:05 +02:00
Alex Pyrgiotis
b94d0712c8
Minor corrections in test code 2023-02-17 01:15:08 +02:00
Alex Pyrgiotis
2042591964
container: Copy files before mounting them
Copy input files in a temporary dir before mounting them, thereby
changing their permissions, without affecting the original files. This
way, we can avoid cases where a file is accessible to the user only due
to a supplemental user group, which does not work for containers.

Fixes #157
Fixes #260
Fixes #335
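
A rough sketch of the copy-before-mount idea (function name, permission bits, and paths are illustrative, not the project's code):

```python
import os
import shutil
import tempfile

def prepare_for_mount(input_path: str) -> str:
    """Copy the input file into a fresh temporary dir and relax the copy's
    permissions, leaving the original file untouched."""
    tmp_dir = tempfile.mkdtemp()
    copy_path = os.path.join(tmp_dir, os.path.basename(input_path))
    shutil.copy(input_path, copy_path)
    os.chmod(copy_path, 0o644)  # readable inside the container regardless of host groups
    return copy_path
```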
2023-02-17 01:15:08 +02:00
Alex Pyrgiotis
ea73f5d820
container: Take SELinux labels into account
Take SELinux labels into account when mounting a file to the Dangerzone
container. Use the `:Z` flag (which is a no-op in non-SELinux systems)
to clear the existing SELinux label for a file, and apply one that
matches the container's.

Refs #335
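
For illustration, the kind of volume argument this implies (paths below are hypothetical):

```python
host_path = "/tmp/dz-input/document.pdf"   # hypothetical host-side copy
container_path = "/tmp/input_file"         # hypothetical mount point
# ":Z" relabels the mounted path for SELinux; it is a no-op on non-SELinux systems.
volume_args = ["-v", f"{host_path}:{container_path}:Z"]
```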
2023-02-17 01:15:08 +02:00
Alex Pyrgiotis
d733890ca0
container: Do not leave stale temporary dirs
Do not leave stale temporary directories when conversion fails
unexpectedly. Instead, wrap the conversion operation in a context
manager that wipes the temporary dir afterwards.

Fixes #317
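
A minimal sketch of such a context manager (not necessarily the shape of the actual code):

```python
import contextlib
import shutil
import tempfile
from typing import Iterator

@contextlib.contextmanager
def temp_workdir() -> Iterator[str]:
    """Yield a temporary dir and always wipe it, even if the conversion raises."""
    tmp_dir = tempfile.mkdtemp()
    try:
        yield tmp_dir
    finally:
        shutil.rmtree(tmp_dir, ignore_errors=True)
```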
2023-02-17 01:15:08 +02:00
Alex Pyrgiotis
18bc77332d
tests: Run each test in separate config/cache dirs
Run each CLI command in a separate config/cache dir, to avoid leaks
between tests. Moreover, this way we are able to check the contents of
the config/cache dirs for a single CLI run.
2023-02-17 01:15:07 +02:00
Alex Pyrgiotis
44c324f9ac
Separate config dirs from temp dirs
Do not store temporary directories in Dangerzone's config directory.
There are two reasons for that:

1. They are ephemeral, and they need a temporary place to be stored,
   preferably RAM-backed.
2. We need to set them while running our CI tests.
2023-02-17 01:06:44 +02:00
deeplow
9b3d98b20b
Build arm64 docker image for arm-based Macs
Remove --platform args completely so that by default we build natively
on each platform.

Partial fix for #50
2023-02-16 10:59:00 +00:00
Alex Pyrgiotis
93a06d72f0
Allow users to disable timeouts
Allow users to disable timeouts via the CLI, with the
`--disable-timeouts` argument. By default, the timeouts are always
enabled.

This option applies both to the CLI version of Dangerzone, and the GUI
one. For the latter, the user must start the GUI from their CLI (i.e.,
`dangerzone --disable-timeouts ...`)
2023-02-15 23:48:36 +02:00
Alex Pyrgiotis
f2a4f29cff
container: Introduce proportional timeouts
Introduce proportional timeouts in the container code, where the
conversion logic runs.

Previously, we had a single timeout for each command (120 seconds),
which didn't scale well either with the number of pages in a document,
or with the size of the document.

In this commit, we look into each operation, and we're trying to figure
out the following:

1. What's the number of pages we will operate on?
2. How large is the document?

Knowing the above, we can break down a command into multiple operations,
at least conceptually. Having a number of operations and a sane timeout
value per operation (10 seconds), we can multiply those and reach to a
timeout that fits the command better.

Fixes #306
Fixes #314
Refs #327
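
A hedged sketch of the proportional idea; the per-page and per-MiB weights below are illustrative, not the project's actual values:

```python
TIMEOUT_PER_OP = 10  # seconds per conceptual operation

def calculate_timeout(size_bytes: int, num_pages: int = 0) -> float:
    """Scale the timeout with document size (MiB) and page count."""
    size_mib = size_bytes / (1024 * 1024)
    num_operations = num_pages + size_mib  # conceptual units of work
    return max(TIMEOUT_PER_OP, num_operations * TIMEOUT_PER_OP)
```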
2023-02-15 23:46:53 +02:00
Maeve Andrews
c26326450b
Add a --distro option to build-deb.py
Add an optional --distro argument to build-deb.py, to specify the Debian
version in the package name, which currently is "1". This option may
prove useful when publishing packages to freedomofpress/apt-tools-prod,
where packages from different distros with the same names but different
contents are not accepted.
2023-02-14 15:49:51 +02:00
deeplow
b49d6de6bd
Sample PDFs: rename to include file format in name
Make it so that samples, when converted, don't all map to the same
output file. This makes it easier to manually inspect the results.
2023-02-09 09:02:33 +00:00
deeplow
275df80484
GUI: exit with 1 when some conversion failed
Fixes: #318
2023-02-08 17:24:55 +00:00
Alex Pyrgiotis
23ee60d3f3
Add missing Dangerzone module in setup.py
While creating a Debian package for Dangerzone, we found out that the
`dangerzone.isolation_provider` submodule was not copied to the final
package. Turns out that it was missing from the packages list that we
define in `setup.py`.

Include this package in the proper section in `setup.py`.
2023-02-07 20:34:24 +02:00
Alex Pyrgiotis
aeeed411a0
container: Run commands asynchronously
Convert the Dangerzone script that runs in the container to run
commands asynchronously, via the asyncio module.

The main advantage of this approach is that it's fast, easy, and safe to
consume the command's streams, while the command is running in the
background.

Previously, we had implemented an approach that used non-blocking
sockets, but those are easy to get wrong. For instance, timeouts were
not exact, and capturing output was brittle.

Fixes #325
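
A minimal sketch of running a command and consuming its streams with asyncio (not the actual container script):

```python
import asyncio

async def run_command(args: list[str], timeout: float) -> bytes:
    """Run a command and capture its output while it runs in the background."""
    proc = await asyncio.create_subprocess_exec(
        *args,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    stdout, stderr = await asyncio.wait_for(proc.communicate(), timeout=timeout)
    if proc.returncode != 0:
        raise RuntimeError(stderr.decode(errors="replace"))
    return stdout
```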
2023-02-07 18:52:49 +02:00
Alex Pyrgiotis
24975fabd5
container: Reinstate OpenJDK 8 dependency
Commit d7be28ec2a assumed that OpenJDK was
required only by the PDFtk package; since PDFtk is no longer installed
in the Dangerzone image, OpenJDK was removed as well.

Turns out that while LibreOffice does not depend on OpenJDK, it may
produce corrupted PDFs if installed without it, and will not abort the
operation.

Reinstate OpenJDK to fix the issue of corrupted PDFs.

Fixes #315
2023-02-07 18:52:49 +02:00
Alex Pyrgiotis
e5368b1ea0
ci: Run CI tests for Fedora 37
Run CI tests for Fedora 37 environments, now that we no longer require
PySide2 as a dev dependency.

Fixes #294
2023-02-07 18:52:09 +02:00
Alex Pyrgiotis
16375bfdf9
Use PySide6 in our dev environments
Drop PySide2 from our dependencies (previously used only on Linux
environments) and use PySide6 in all dev environments. The reason is
that PySide2 (from PyPI) does not support Python 3.11, and the variants
that do (Fedora/Debian packages) need to backport fixes from PySide6.

Our original attempt was to build PySide2 wheels for Python 3.11 but
it was not simple, nor maintainable. So, we were left with two options:

1. Install Python 3.10 in dev environments that have Python 3.11 by
   default.
2. Use PySide6 in all of our environments.

In both cases, we break package parity with the user's system, since we
are not testing Dangerzone under the same conditions. However, since
option (2) is forwards-compatible with where we want to move the
project (use Qt6 and PySide6), we chose that one.

Fixes #330
2023-02-07 18:52:09 +02:00
Alex Pyrgiotis
081c68c27f
dev_scripts: Alter the shadow-utils fix
Instead of reinstalling shadow-utils, use the actual fix that the Fedora
devs have suggested (rpm --restore shadow-utils). The previous method
does not seem to work on Fedora 37, and it threw the following error
when building the development environment:

    Installed package shadow-utils-2:4.12.3-3.fc37.x86_64 (from koji-override-0) not available.
    Error: No packages marked for reinstall.
    Error: building at STEP "RUN dnf reinstall -y shadow-utils && dnf clean all": while running runtime: exit status 1
2023-02-07 18:52:08 +02:00
Alex Pyrgiotis
e7eb3bf18b
dev_scripts: Fix a recursion issue in our PyTest wrapper
Fix an issue in our PyTest wrapper, that caused this recursion error:

```
  File "shibokensupport/signature/loader.py", line 61, in feature_importedgc
  File "shibokensupport/feature.py", line 137, in feature_importedgc
  File "shibokensupport/feature.py", line 148, in _mod_uses_pysidegc
  File "/usr/lib/python3.10/inspect.py", line 1147, in getsourcegc
    lines, lnum = getsourcelines(object)gc
  File "/usr/lib/python3.10/inspect.py", line 1129, in getsourcelinesgc
    lines, lnum = findsource(object)gc
  File "/usr/lib/python3.10/inspect.py", line 954, in findsourcegc
    lines = linecache.getlines(file, module.__dict__)gc
  File "/home/user/.cache/pypoetry/virtualenvs/dangerzone-hQU0mwlP-py3.10/lib/python3.10/site-packages/py/_vendored_packages/apipkg/__init__.py", line 177, in __dict__gc
    self.__makeattr(name)gc
  File "/home/user/.cache/pypoetry/virtualenvs/dangerzone-hQU0mwlP-py3.10/lib/python3.10/site-packages/py/_vendored_packages/apipkg/__init__.py", line 157, in __makeattrgc
    result = importobj(modpath, attrname)gc
  File "/home/user/.cache/pypoetry/virtualenvs/dangerzone-hQU0mwlP-py3.10/lib/python3.10/site-packages/py/_vendored_packages/apipkg/__init__.py", line 75, in importobjgc
    module = __import__(modpath, None, None, ["__doc__"])gc
  File "shibokensupport/signature/loader.py", line 54, in feature_importgc
RecursionError: maximum recursion depth exceededgc
```

This error seems to be related to
https://github.com/pytest-dev/pytest/issues/1794. By not importing
`pytest` in our test wrapper, and instead executing directly, we can
avoid it.

Note that this seems to be triggered only by Shiboken6, which is why we
hadn't previously encountered it.
2023-02-07 18:52:08 +02:00
Alex Pyrgiotis
89e8b998d6
ci: Add a test dependency
Add libqt5gui5 as a test dependency in the 'convert-test-docs' step.
This package brings several other Qt and graphics libraries, which are
the ones that we actually require to run the tests *with PySide6*. Else,
we encounter this error:

```
Traceback (most recent call last):
  File "/home/circleci/project/dangerzone/gui/__init__.py", line 19, in <module>
    from PySide6 import QtCore, QtGui, QtWidgets
ImportError: libEGL.so.1: cannot open shared object file: No such file or directory
```

Note that the same package is not required when importing PySide2.QtGui,
which is why we hadn't encountered this issue before. Also, in the rest
of our environments, we explicitly install libqt5gui5, in order to run
the Dangerzone GUI.
2023-02-07 17:14:01 +02:00
Alex Pyrgiotis
63a8748423
ci: Remove Poetry version pin
Remove a Poetry version pin to 1.2.2, which causes installation issues
on systems with Python 3.11.

The pin was originally introduced because Poetry 1.3 was deemed
unstable, due to the following bugs:

* https://github.com/freedomofpress/dangerzone/issues/292#issuecomment-1351368122
* https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1029156

The first problem still stands, but we can circumvent it with the
`--no-ansi` flag, at no functionality cost. The second problem has been
resolved, but it never affected Ubuntu Focal in the first place.

Refs #292
2023-02-07 17:14:01 +02:00
deeplow
bbbf8224f1
install: Remove PySide2-related code for Windows 2023-01-30 11:42:24 +00:00
deeplow
81e9ccf30a
Add PySide6 dependency for Windows and MacOS
We're not yet adding it on Linux, since PySide6 is not yet available
in Linux distros' packages, whereas for Windows and macOS our packaging
process ships the binaries.

Fixes #211
2023-01-30 11:42:18 +00:00
deeplow
ab2f9ead9a
Replace PySide2-stubs with types-PySide2
Replace PySide2-stubs with types-PySide2, both of which are projects
that provide PySide2 typing hints, for the following reasons:

1. types-PySide2 is more complete and allows us to ditch some 'type:
   ignore' comments for Mypy.
2. PySide2-stubs also brings PySide2 as a dependency, which cannot be
   installed in MacOS M1 machines.

Refs #177
2023-01-30 11:42:09 +00:00
deeplow
56b5b98f1e
Report exceptions raised in document conversion
Exceptions raised during the document conversion process would be
silently hidden. This was because ThreadPoolExecutor in logic.py created
various contexts and hid any exceptions raised.

Fixes #309
2023-01-26 18:53:20 +00:00
deeplow
06fe53b0d6
Make 'make test' use the Python interpreter
On Windows this was failing [1] because it did not know to run
./dev_scripts/pytest-wrapper.py in the Python interpreter. The forward
slashes didn't seem to cause an issue.

[1]: https://github.com/freedomofpress/dangerzone/actions/runs/3967654249/jobs/6799870096
2023-01-25 16:36:31 +00:00
deeplow
bf6eacccf7
Run windows/mac tests daily 2023-01-25 16:35:46 +00:00
deeplow
a565d9e580
CI: add macOS and Windows tests via GitHub Actions
Adds tests for macOS and Windows with the dummy converter. Tests won't
actually perform the conversion. But it should be enough for us to test
the remainder of the codebase.

Fixes #229
2023-01-25 16:34:46 +00:00
deeplow
724dd2a71f
Make container-specific methods static
Make these methods callable without having to create an instance of the
Container class. This was needed to make pytest-wrapper.py cleaner.
2023-01-25 14:55:43 +00:00
deeplow
f5c4847af2
De-duplicate print_progress() logic 2023-01-25 14:53:28 +00:00
deeplow
a339eff648
Add dummy conversion to GUI 2023-01-25 14:53:26 +00:00
deeplow
da0cb6b3c5
Add dummy isolation provider to CLI
When enabled, the conversion part does nothing but print some simulated
output. This can be useful for testing non-conversion code (e.g. GUI).

Activated with the hidden flag --unsafe-dummy-conversion.
2023-01-25 14:51:50 +00:00
deeplow
538df18709
Split isolation providers into their own .py files
Provides clearer code organization by having each provider in its own
Python file rather than a single one.
2023-01-25 14:19:05 +00:00
deeplow
7ed1fd6b59
Isolation-provider-specific methods in _convert()
All isolation providers share some common steps when convert() is
called. For this reason, the common parts are captured in convert(),
and each isolation provider implements its own specific conversion
process in _convert() (which is called from the convert() method).
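
A rough sketch of that template-method split (the document helpers below are hypothetical, not the project's actual API):

```python
from abc import ABC, abstractmethod

class IsolationProvider(ABC):
    def convert(self, document) -> None:
        # Common steps shared by every provider live here.
        document.mark_as_converting()   # hypothetical helper
        self._convert(document)          # provider-specific logic
        document.mark_as_safe()          # hypothetical helper

    @abstractmethod
    def _convert(self, document) -> None:
        """Provider-specific conversion (container, dummy, ...)."""
```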
2023-01-25 13:10:39 +00:00
deeplow
a4f27afdc6
Abstract container into an IsolationProvider
Encapsulate container logic into an implementation of
AbstractIsolationProvider. This flexibility will allow for other types
of isolation managers, such as a Dummy one.
2023-01-24 11:03:39 +00:00
deeplow
1114a0dfa1
Rename container.py to isolation_provider.py
First step in encapsulating the isolation provider.
2023-01-24 11:03:36 +00:00
deeplow
2da973232b
Remove sudo: no longer needed
Fixes #232
2023-01-23 14:13:56 +00:00
deeplow
d7be28ec2a
Remove openjdk-8 as a dependency.
The default-jre and other Java dependencies had been added initially
[1] because of libreoffice-java-common, which is no longer present.
Then, when the image was changed from Ubuntu to Alpine [2], default-jre
was replaced with openjdk-8.

If Java is still a dependency of LibreOffice, then it should be pulled
in automatically.

[1] 9ecdb9e995
[2] 650ae6eee1
2023-01-23 14:13:48 +00:00
deeplow
272d25aee0
Make pdf to ppm conversion dependent on num pages 2023-01-23 14:01:32 +00:00
deeplow
d28aa5a25b
Remove PDFtk dependency (replace w/ pdftoppm)
PDFtk actually isn't needed. It was being used for breaking a PDF
into pages, but this is something that can be replaced by the already
present 'pdftoppm'. Furthermore, by removing this dependency we
contribute to reproducible builds and overall supply-chain security,
because it was obtained from GitLab with no signature verification or
version pinning.

The replacement 'pdftoppm' enabled us to do a shortcut:
 - before: PDF -> PDF pages -> PNG images -> RGB images
 - after:  PDF -> PPM images -> RGB images

And this last conversion step is trivial since the RGB format we were
using is just a PPM file without the metadata in its header.
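
A hedged sketch of that last step, assuming the common binary (P6) PPM layout with no comment lines (names are illustrative):

```python
def ppm_to_rgb(ppm_path: str, rgb_path: str) -> None:
    """Strip the small text header of a P6 PPM; the rest is raw RGB data."""
    with open(ppm_path, "rb") as src:
        assert src.readline().strip() == b"P6"   # magic number
        src.readline()                            # width and height
        src.readline()                            # maxval (e.g. 255)
        rgb_data = src.read()                     # raw RGB pixel bytes
    with open(rgb_path, "wb") as dst:
        dst.write(rgb_data)
```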
2023-01-23 14:00:57 +00:00
deeplow
08937239a5
Fix qa.py following BUILD.md update in 3b2544a
This BUILD.md was merged into main without updating qa.py to reflect it
because our linters were down due to the now-fixed poetry bug (see prev
commit).
2023-01-20 09:58:37 +00:00
deeplow
affc0ca2a8
Unpin PIP in CI; replace w/ --no-ansi fix same bug
Alternative solution to commit 0ebfe45169,
but without pinning the pip version.
2023-01-20 09:52:39 +00:00
Alex Pyrgiotis
0ebfe45169
Fix a failing lint check
Fix a failing lint check that got introduced due to an upstream Debian
bug: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1029156
2023-01-19 17:27:11 +02:00
Alex Pyrgiotis
a8421bcdb7
Fix exclusion of dev_scripts/envs from isort
The previous way of excluding files under `dev_scripts/envs` does not
seem to work. Ditching the glob and excluding the whole path works, so
we can go with that.
2023-01-19 17:27:11 +02:00
deeplow
3b2544a2cd
Add comment about poetry install keyring prompt
Running `poetry install` would show a keyring prompt asking the user for
a password or to create a new keyring. This should not be needed for a
successful install.

discussion context: https://github.com/freedomofpress/dangerzone/pull/284#issue-1477773398
2023-01-18 14:17:59 +00:00
Alex Pyrgiotis
7d0b6d44ba
ci: Remove Fedora 35 support
Fedora 35 has reached its end of life [1], so we remove it from our CI
builds.

Closes #308

[1]: https://endoflife.date/fedora
2023-01-16 18:48:09 +02:00
Alex Pyrgiotis
586240ec22
ci: Add CI tests for missing platforms
Use the `dev_scripts/env.py` script to run CI tests for some platforms
we couldn't run before.
2023-01-16 18:48:09 +02:00
Alex Pyrgiotis
ea99b1e1dd
Narrow down installed system packages
Narrow down the system packages that we install in dev environments. The
rationale is that we get most of the Python dependencies from Poetry, so
we don't need to install them from the system as well.

The packages that we do need to install are non-Python ones, and this
commit adds some that were missing: make, python3-stdeb. Also, we
explicitly install the base Qt5 libraries, in order to get the graphics
and C++ libraries that we can't get from PyPI.
2023-01-16 18:48:09 +02:00
Alex Pyrgiotis
f16b42bb18
Ignore dev_scripts/envs for tests/lints
Ignore the `dev_scripts/envs` folder when running tests or linting code,
as it may contain files that are not owned by the current user. In this
case, we've seen that pytest/black etc. fail.

This typically happens when the user has run Dangerzone in a
containerized environment (see #286), and Podman created a directory
with files owned by the user in the nested container.
2023-01-16 18:48:09 +02:00
Alex Pyrgiotis
e3431c7ac2
dev_scripts: Add documentation for the QA script
Add a short explanation of the purpose of the QA script, and what it
uses underneath.

Refs #287
2023-01-16 18:48:09 +02:00
Alex Pyrgiotis
14a7ca1ae5
dev_scripts: Add QA script
Add a script that makes the user go through the QA steps for a supported
Dangerzone platform, and may optionally run them automatically, if the
user agrees.

Closes #287
2023-01-16 18:48:09 +02:00
Alex Pyrgiotis
feec73c60c
dev_scripts: Add design document for env.py
Add a design document for `dev_scripts/env.py`, which is a script that
creates Dangerzone environments for various Linux distros. In this
design document, we explain various architectural decisions that we have
taken for this script, as well as how it works under the hood, what
its shortcomings are, etc.

Refs #286
2023-01-16 18:48:09 +02:00
Alex Pyrgiotis
b51691416f
dev_scripts: Introduce script for Dangerzone envs
Introduce `dev_scripts/env.py`, which is a script for building
Dangerzone environments for various Linux distros, and running commands
in them.

Closes #286
2023-01-16 18:48:09 +02:00
Alex Pyrgiotis
4eead90c00
install: Fail early when image build fails 2023-01-16 18:48:09 +02:00
Alex Pyrgiotis
624d480cca
install: Do not create intermediate tarfile for container
Skip the creation of the `share/container.tar` file, since it's not used
anywhere. Instead, pipe our `docker/podman save` invocations to `gzip`
directly, which will compress the tarfile on the fly. This saves both
time and disk space.
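
For illustration, a sketch of streaming the image straight into a gzip file (function and argument names are hypothetical):

```python
import gzip
import subprocess

def save_compressed_image(image: str, dest: str) -> None:
    """Pipe `podman save` into a gzip file, with no intermediate tar on disk."""
    with gzip.open(dest, "wb") as out:
        proc = subprocess.Popen(["podman", "save", image], stdout=subprocess.PIPE)
        assert proc.stdout is not None
        for chunk in iter(lambda: proc.stdout.read(1024 * 1024), b""):
            out.write(chunk)
        if proc.wait() != 0:
            raise RuntimeError("podman save failed")
```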
2023-01-16 18:48:08 +02:00
Alex Pyrgiotis
a0503c8c40
install: Do not create Debian source package twice 2023-01-16 18:48:08 +02:00
Alex Pyrgiotis
c36443b01e
Add note for python-all 2023-01-16 18:48:08 +02:00
deeplow
b9dc882663
CLI: prefix non-INFO logs with log type
In non-development mode, the CLI shows the user information via the INFO
log level. The message is shown directly without [INFO] as a prefix.
Otherwise, it would quickly get annoying for the user to see [INFO] on
every line of a CLI application.

However, if something goes wrong, it's important for the user to
recognize whether it's an error or a warning. This commit prints the
log level in these cases.
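
A hedged sketch of that behavior with the standard logging module (not necessarily the project's actual formatter):

```python
import logging

class CliFormatter(logging.Formatter):
    """Show INFO messages verbatim; prefix everything else with its level."""
    def format(self, record: logging.LogRecord) -> str:
        msg = record.getMessage()
        if record.levelno == logging.INFO:
            return msg
        return f"[{record.levelname}] {msg}"
```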
2023-01-16 14:58:13 +00:00
deeplow
c442c443df
CLI: add missing logging format to non-dev env 2023-01-16 14:50:10 +00:00
deeplow
ad908f5d16
CLI: increase logging from ERROR to INFO level
ERROR level would only show errors and criticals and miss out on all
info-level logging.
2023-01-16 14:50:08 +00:00
deeplow
eb3fd5ae16
CLI: don't print DEBUG logs
The CLI version was mistakenly printing debug logs.
2023-01-16 14:50:06 +00:00
deeplow
c406c95cec
GUI: Add version to header bar
Fixes #219
2023-01-16 14:39:27 +00:00
deeplow
c08fddb443
Add unit test for --version 2023-01-16 14:39:25 +00:00
deeplow
fb3cb98793
Add --version flag 2023-01-16 14:39:24 +00:00
deeplow
0ab9f42dd9
Windows: fix "Open with" dialog showing dz description
The "open with" dialog on windows was showing the description of
Dangerzone instead of its app name. The issue was that on windows it
shows the description there.

Fixes #283
2023-01-16 11:38:08 +00:00
deeplow
84b8212e5d
Fix test instability: pytest in seq. podman<4.3.0
The automated tests would sometimes fail when running
"podman images --format {{.ID}}". It turns out that in versions
prior to Podman 4.3.0, Podman volumes (stored in
~/.local/share/containers) would get corrupted when multiple tests were
run in parallel.

The current solution is to wrap the test command to run sequentially on
versions prior to the fix, and in parallel for versions after that.

Fixes #217
2023-01-09 11:54:24 +00:00
Ro
ffdc1425bb
Update Debian, Fedora instructions, add QubesOS instructions. 2023-01-09 11:13:12 +00:00
Alex Pyrgiotis
fc313d8744
ci: Fix convert-test-docs step
Fix the failing convert-test-docs step, by pinning Poetry to version
1.2.2. This way, we avoid a bug in Poetry 1.3 [1], which was recently
released on PyPI.

[1]: https://github.com/python-poetry/poetry/issues/7184

Closes #292
2022-12-15 18:32:48 +02:00
Alex Pyrgiotis
147caca524
ci: Fix failing build-debian-bookworm step
Debian has removed the python-all package from its Bookworm repos, which
breaks our CI tests. Looking into why python-all is required in the
first place, we found that it's an artificial stdeb requirement [1] in
versions prior to 0.9.1.

The only platform affected by this issue is Ubuntu Focal, so our
solution is to install python-all specifically for that platform.

Finally, we further simplify our build tasks [2] (on Debian-like
distros) by not letting dh-python run tests when building the packages.
Running the tests has some issues after all:

1. It requires installing all the runtime dependencies of Dangerzone,
   since it uses `python -m unittest discover` underneath.
2. It doesn't aid in the stability of the package, since unittest cannot
   run test cases for PyTest.

[1]: https://github.com/astraw/stdeb/issues/153
[2]: https://github.com/freedomofpress/dangerzone/issues/292#issuecomment-1349967888
2022-12-15 18:30:19 +02:00
Alex Pyrgiotis
06f92747ab
ci: Fix the failing run-lint step
Fix the failing run-lint test by switching to Debian Bookworm for this
step, and installing Poetry 1.2.2 from the official repos. This way, we
circumvent a bug [1] in Poetry 1.3 (released on PyPI) and we greatly
simplify this step [2].

[1]: https://github.com/python-poetry/poetry/issues/7184
[2]: https://github.com/freedomofpress/dangerzone/issues/292#issuecomment-1351368122
2022-12-15 18:29:35 +02:00
Alex Pyrgiotis
e5ec5a279c
Separate Poetry dependencies into groups
Create two separate groups for Poetry dependencies:

1. test: Dependencies required for testing Dangerzone.
2. lint: Dependencies required for linting the code with `make lint`.
2022-12-15 18:28:10 +02:00
deeplow
b82808016a
README: make screenshots smaller and side-by-side 2022-12-07 10:51:04 +00:00
deeplow
c8707e8d4a
Update README screenshots for 0.4.0 release 2022-12-02 11:26:21 +00:00
Erik Moeller
fc5edb42be
Merge pull request #280 from freedomofpress/prepare-0.4.0
Prepare artifact links for 0.4.0
2022-12-01 16:50:56 -08:00
Alex Pyrgiotis
6517c4bc5f
Replace references to github.com/firstlookmedia
Replace references to github.com/firstlookmedia with
github.com/freedomofpress, since the ownership of these repos has been
transferred to the Freedom of the Press Foundation.
2022-12-01 22:31:42 +02:00
Erik Moeller
ed41dd7646
Merge pull request #281 from freedomofpress/fix-kudu
Use the proper codename for Ubuntu Kinetic Kudu
2022-12-01 11:24:53 -08:00
Alex Pyrgiotis
8658753d57
Use the proper codename for Ubuntu Kinetic Kudu
In a previous commit, we used the wrong codename for Ubuntu 22.10
"Kinetic Kudu". Instead of "kudu", we should use "kinetic".
2022-12-01 21:18:40 +02:00
deeplow
361001579e
Bump version to v0.4.0 2022-12-01 15:58:01 +00:00
Alex Pyrgiotis
03823bbd29
Update the QA section in RELEASE.md
Update the QA section in the RELEASE.md, based on the latest changes on
our main branch.
2022-12-01 17:53:48 +02:00
Alex Pyrgiotis
31402e0b97
Prepare artifact links for 0.4.0 2022-12-01 17:42:18 +02:00
deeplow
eb38c39557
Changelog: add exit confirmation feature 2022-12-01 15:24:19 +00:00
deeplow
aa1476d59b
Replace exit() with sys.exit() to work on Windows
Windows was complaining that 'exit' is not defined.
2022-12-01 15:03:34 +00:00
deeplow
766c455929
Windows: persist "Open safe documents after converting" setting
Now that safe PDFs can open on Windows right after conversion
(implemented in commit 5b2fefd), we need to save/load the "Open safe
documents after converting" setting.
2022-12-01 15:02:31 +00:00
deeplow
99f23216d6
Fix limited PATH in produced .exe and .msi
cx_Freeze 6.13.0 limited the PATH of the built executables, making it so
Dangerzone couldn't find Docker through shutil.which().

More information on the issue is available at:
https://github.com/marcelotduarte/cx_Freeze/issues/1674
2022-12-01 14:58:30 +00:00
deeplow
5761255b56
Fix Python version in Windows build scripts
The Windows Python build scripts were still referencing the old Python
3.9 version, whereas 3.10 is the one currently used in the dev
environment.
2022-12-01 14:58:29 +00:00
deeplow
642d86899b
Fix timeout message: replace pdfseparate with pdftk 2022-12-01 14:51:52 +00:00
Alex Pyrgiotis
1ad6b59bb1
Support Ubuntu 22.10 "Kinetic Kudu"
Add support for the newly released Ubuntu 22.10 "Kinetic Kudu".

Closes #265
2022-12-01 01:05:00 +02:00
deeplow
cb75cfd958
Update changelog with 'open with' functionality 2022-11-30 12:51:02 +00:00
deeplow
7e42994f81
Prevent user from adding files from multiple dirs
Allowing this would lead to several UI edge-cases related to where the
files would be saved. Avoiding this is the easiest solution at the
moment.

In the future we should consider other options.
2022-11-30 12:49:20 +00:00
deeplow
06797ab626
Prevent adding duplicate documents
It was possible that users would add duplicate documents via 'open with
Dangerzone'. This would lead to unexpected situations, and preventing
it both in the CLI and the GUI solves those issues.
2022-11-30 12:49:18 +00:00
deeplow
65d0b7a0d0
Allow adding more docs via 'open_with' while in settings
Handle the case where a user has already added some documents (either
through 'open with' or via Dangerzone 'select documents' button) and
then they want to add some more via the 'open_with' dialog.

It updates the settings to reflect the newly added documents and blocks
the user from adding them if a conversion is already in progress.
2022-11-30 12:49:17 +00:00
deeplow
cb68ba7d1c
Centralize 'document adding' in ContentWidget
Makes the ContentWidget a choke-point, where we can allow or prevent
adding more documents and where we can ensure that newly selected
documents are added immediately to the DangerzoneGui class.

Logically, the application flow should not change in any way.
2022-11-30 12:49:16 +00:00
deeplow
ce5558b5a2
Fix "open with" on macOS for single files
Fixes partially #268
2022-11-30 12:49:14 +00:00
deeplow
af5f7c70d3
Quit dangerzone on macOS when window is closed
Closing windows on macOS would not actually close Dangerzone. Now that
it is a single-window program, it makes sense for it to close
immediately.

Fixes #271
2022-11-29 16:01:27 +00:00
deeplow
466d83129e
Increase minimum window width for macOS
The save group box would get partially trimmed when running on macOS;
this appears to be due to differences in font rendering and widget
sizes.

Refs #270
2022-11-29 15:56:09 +00:00
deeplow
d582e25606
Changelog: update for 0.4.0 release 2022-11-25 08:27:37 +00:00
deeplow
49b7736cb4
GUI: disable option if archive dir is not writable
Disable the option to move original documents to 'unsafe' subdirectory
when said directory is not writable.
2022-11-24 11:16:38 +00:00
deeplow
c36f73ac8d
Tests: add cli --archive param test 2022-11-24 11:16:37 +00:00
deeplow
b4849995e3
Add CLI support for archiving original / unsafe PDFs 2022-11-24 11:16:35 +00:00
deeplow
c6a0b59379
Add unit tests to cover archive-related methods
Additionally this adds the pytest-mock dev dependency to be able to mock
certain methods.
2022-11-24 11:16:34 +00:00
deeplow
f54446f2fd
Ensure archive directory can be created
Verifies that the archive directory can be created as soon as the
document is set to be archived.
2022-11-24 11:16:31 +00:00
deeplow
bbd0d98f50
Implement 'move to subdir' logic & store in settings
Fixes #251 by implementing the logic for archiving a document after
conversion into a default sub-directory.
2022-11-24 11:16:30 +00:00
deeplow
d3e125de55
Remove mypy ignore comments
For some reason, mypy used to complain that these statements were
unreachable, but it no longer does, so the ignore comments can be
removed.
2022-11-24 11:16:28 +00:00
deeplow
8a31b085ee
Adjust window / settings widget proportions to fit
With the newly added widgets, not all widgets in the settings fit
perfectly.
2022-11-24 11:15:02 +00:00
deeplow
994e70c17a
Switch save widgets order
Move the 'safe_extension' widget to the top of the settings and the save
location widget to the bottom.
2022-11-24 09:32:15 +00:00
deeplow
a88f8cc44b
Release: add tooting the release to instructions 2022-11-24 09:28:08 +00:00
deeplow
bc82163bc4
Inform user # of selected docs when in settings
Reminds the users of the number of documents selected when they are in
the settings.
2022-11-24 09:05:24 +00:00
Alex Pyrgiotis
0a993a682f
Add QA section in our release notes
Add a QA section in our release notes, which describes the list of
manual checks a developer needs to make before a release, to ensure that
we have no regressions.

Closes #246
2022-11-23 20:17:03 +02:00
Alex Pyrgiotis
cc4e39b3fc
Add a bad PDF file in our test samples
Add a bad PDF file in our test samples, which we can use for testing
purposes.
2022-11-23 20:16:50 +02:00
Alex Pyrgiotis
57fdf06f0f
Bump global timeout to two minutes
Bump the global timeout used for various steps from 1 minute to 2
minutes. The reason is that we've seen several reports of operations
failing due to timeouts, even though they were otherwise legitimately
running.

Also, bump the timeout used for compression, which has been reported as
problematic as well.

Refs #146
Refs #149
2022-11-23 18:13:41 +02:00
deeplow
1f18f77b64
Disable parallel conversions
Temporarily limit conversions to one at a time until timeout limitations
are resolved: https://github.com/freedomofpress/dangerzone/issues/257
2022-11-23 15:20:28 +00:00
deeplow
5b2fefd150
Open PDFs on Windows (instead of explorer.exe)
Homogenize the GUI by offering on Windows, too, the option of opening
documents after conversion. This removes the need for Windows-specific
GTK widgets.
2022-11-21 12:39:29 +00:00
Alex Pyrgiotis
21dc5b29df
Remove duplicate doc ID logs 2022-11-21 12:39:27 +00:00
Alex Pyrgiotis
699258543a
Fail if a provided suffix cannot be applied
If a user has provided an output filename for a document, then we should
no longer accept suffixes. The reason is that we can't do something
meaningful with it, as we can't alter the provided output filename.

The proper behavior is to reject this action with an exception. Note
that this acts more as a safeguard, since (currently) there is no path
where a user may add a suffix to a document that already has an output
filename.
2022-11-21 12:39:25 +00:00
deeplow
8b3739707d
Rename document_selected to documents_selected 2022-11-21 12:39:24 +00:00
deeplow
aba699a238
Pass Documents instead of file list in document_selected
In the various UI widgets we need to know which documents were just
added. Previously, we passed the filenames around via a PySide signal.
2022-11-21 12:39:23 +00:00
deeplow
2aa329d524
Changelog: add multi-document support
Fixes #77
2022-11-21 12:39:21 +00:00
deeplow
39621fe51d
Limit n. parallel conversions in GUI
Limit the number of simultaneous document conversions to prevent
consuming too much CPU.
2022-11-21 12:39:20 +00:00
deeplow
45a865aae3
Prompt on exit: abort conversion?
Prevent foot-shooting by asking the user to confirm that they really
want to quit Dangerzone while conversions are still in progress.
2022-11-21 12:39:16 +00:00
deeplow
3c1e8a232d
Get OCR settings before conversion starts
In preparation for adding a limit on how many conversion threads exist,
we are simplifying the logic. Getting ocr_lang doesn't seem to belong
in the thread.
2022-11-21 12:38:42 +00:00
deeplow
95a0536c61
Change start button text to plural when multiple docs 2022-11-21 12:38:27 +00:00
deeplow
06bd117d52
Align document labels
Aligns document labels following the design specified in issue #117.
It did not specify how it would change with window resize, so it
currently expands the progress bar / error message width and keeps the
document name fixed in size.
2022-11-21 12:38:25 +00:00
deeplow
bbc70df43b
Match styling of document-labels to design reference
- removes bold
- removes font size (default works)
- removes 'suspicious: ' label before the document name
2022-11-21 12:38:24 +00:00
deeplow
6707cbbc4a
Add conversion status icon next to each document
Allows the user to see the status of each document at a glance.
2022-11-21 12:38:23 +00:00
deeplow
9641a61bb3
Typing: ignore 'unreachable' lint warning
Mypy complains about a line being unreachable. This is probably a false
positive. It must assume the code is not using a framework, and thus it
can't know what happens when a PySide 'connect()' is called.
2022-11-21 12:38:21 +00:00
deeplow
ce4efc0c25
Lint mypy: ignore type inconsistency w/ official docs
The official docs [1] state that the setProperty() method takes
(str, Any), but mypy-pyside says it is (bytes, Any), so we ignore it.

[1]: https://doc.qt.io/qtforpython-5/PySide2/QtCore/QObject.html#PySide2.QtCore.PySide2.QtCore.QObject.setProperty
2022-11-21 12:38:20 +00:00
deeplow
df8e2f1b8b
Remove window management logic
Since everything now happens in a single window, there is no need for
a way to keep track of other windows. They simply won't exist.

But on Windows and Linux it will still be possible to open multiple
windows by starting several Dangerzone processes. On macOS this doesn't
seem to be as easy from the launcher, but it should not be critical,
as multiple documents can be converted at the same time in the one
window.
2022-11-21 12:38:19 +00:00
deeplow
6f8eb96b35
Remove systray
Having the application in the systray is no longer needed, since
the new_window() logic no longer applies.
2022-11-21 12:38:17 +00:00
deeplow
814b8b9d0f
Unwrap ApplicationWrapper in GUI
Reverts commit b8e8c74, as the conditions that led the
ApplicationWrapper to crash if not wrapped no longer
seem to apply.
2022-11-21 12:38:16 +00:00
deeplow
c40502fb46
Don't close MainWindow when first conversion ends
First step in removing the multi-window approach, which got replaced
by multi-document single-window.

Fixes #205.
2022-11-21 12:38:14 +00:00
deeplow
f791dc70ab
Hide widgets: select docs -> settings -> conversion
To help with debugging and visualizing what was happening, we had set
all widgets to be visible at the same time. Now that this is no longer
needed, we can hide them.

This keeps the original program flow:
  1. select the documents
  2. set the settings
  3. see the conversion progress

This diverges from the proposed design in issue #117 for simplification
and consistency (with past program flow) purposes.
2022-11-21 12:38:13 +00:00
deeplow
41017745ec
Add greyed out document name right before '-safe.pdf'
The user is supposed to only be able to select the safe PDF extension.
In a multi-file scenario, the extension will be the same for all files.

We follow here the design document [1]. To achieve this, we needed a
QLabel right next to a QLineEdit, to give the user the illusion that
it is the same graphical object.

[1]: https://github.com/firstlookmedia/dangerzone/files/6657536/DangerZone_NA02a.pdf
2022-11-21 12:38:12 +00:00
deeplow
0e36f8d2eb
Set application stylesheet (.css)
Sets the style for LineEdit boxes similarly to the specified design
in issue #117.
2022-11-21 12:38:10 +00:00
deeplow
e64954acfa
Validate safe-extension (-safe.pdf) before converting
Avoid conversion issues when the output filename is set wrongly.
Inform the user with a red box saying "must end in .pdf" and prevent
them from clicking "Convert" before that is fixed.

Combines the validation logic with the already-existing 'update_ui()'.
2022-11-21 12:38:09 +00:00
deeplow
1790231db0
Set default output dir and allow users changing it
Set the default directory for saving the file as the one from
the first document. This one will show just the directory name.
If the user changes it by choosing another directory, then show the new
directory name and its full path.
2022-11-21 12:37:57 +00:00
deeplow
4a42627f45
GUI settings: enable & adapt to muti-document
- shows settings again
- removes documents arg from settings widget - this is now stored
  under DangerzoneGui instance.
- removes widget 'dangerous_doc_label' - the doc label is already
  shown next to each document.
- 'Save as' button now serves the purpose of selecting where all
  output files should be saved. Before, it was for selecting where
  the file would be saved.
- 'save_lineedit' widget, which was read-only and showed the path
  where the file would be saved, is now called 'safe_suffix' and is
  writable. It is where the user can type the safe file extension
  (e.g. '-safe.pdf'). Validation is not yet implemented.
- when 'start_button' is clicked it now changes the output_filename
  of all the documents to set their output directory to the one the
  user has selected (if 'save_checkbox' enabled) and to set their
  new 'safe_suffix'
- change to plural text for selection of multiple documents
2022-11-21 12:37:49 +00:00
deeplow
5a6c72f09e
Add output_dir manipulation methods to DocumentHolder
These will be needed for the GUI's settings. This also adds test
cases for them. The methods are the following:

  - set_output_dir()
    For changing the output directory of the safe file

  - suffix setter and getter - for changing the suffix of the file
2022-11-21 12:37:47 +00:00
deeplow
fc3cfba450
Security: GUI (via CLI) wildcard injection mitigation
Similar to the mitigation implemented in the CLI version of dangerzone
(commit f9b564be)
2022-11-21 12:37:46 +00:00
deeplow
2e477b8a12
Initial refactor: GUI one-window multi-document support
Allows the user to:
  a) specify filenames via the terminal (for the GUI)
  b) select multiple documents via the GUI

The conversion process can't yet be started since the settings are
broken and disabled (expect mypy complaints).
2022-11-21 12:37:45 +00:00
deeplow
a8001d4f3e
Comment out settings_widget temporarily
The settings widget will be broken when we add multiple document
support in the GUI, at first, at least.
2022-11-21 12:37:43 +00:00
deeplow
bf8ca96a44
Rename 'convert_widget' to 'documents_list' 2022-11-21 12:37:42 +00:00
deeplow
0444fc56ec
Temporarily show all dangerzone widgets (for debugging) 2022-11-21 12:37:41 +00:00
deeplow
89f5e99b0c
Initial GUI multi-window opening via terminal
Allow opening multiple documents at the same time from the terminal
by calling

  $ dangerzone document1.pdf document2.pdf

It will open each document in its own window, making use of the
already existing 'multi-document multi-window' parallel conversion
implementation.
2022-11-21 12:37:39 +00:00
deeplow
1e16eca392
remove unneeded imports: plistlib, grp, getpass
plistlib:
  - originally added in commit 3be1d63330
  - no longer needed

grp, getpass:
  - originally added in commit ae7c919d8e
  - used for finding the 'docker' executable. No longer needed.
2022-11-18 13:09:01 +00:00
deeplow
0b738ba490
Do not create outfile files when checking if writeable
Checking if files were writeable created files in the process. In the
case where someone adds a list of N files to dangerzone but exits before
converting, they would be left with N 0-byte files for the -safe
version. Now they don't.

Fixes #214
2022-11-14 09:04:54 +00:00
deeplow
93f17b3166
Specialize DocumentFilenameException() for disambiguation
All filename-related exceptions were of class DocumentFilenameException.
This made it difficult to disambiguate them. Specializing them makes it
it easier for tests to detect which exception in particular we want to
verify.
2022-11-14 09:04:23 +00:00
deeplow
1bdbb1959c
Changelog: add cli multi-doc support 2022-11-14 09:04:19 +00:00
deeplow
d71e230173
Update document state exclusively in convert()
The document's state update is better done in the convert() function.
This is because this function is always called for the conversion,
regardless of the frontend.
2022-11-14 09:03:50 +00:00
deeplow
65ac0d19c3
Add an identifier to each document
With multiple input documents it is possible only one of them has
issues. Mentioning the document id can help debug.
2022-11-14 09:03:36 +00:00
deeplow
6d2fdf0afe
Deduplicate container output parsing (stdout_callback)
The container output logging logic was in both the CLI and the GUI.
This change moves the core parsing logic to container.py.

Since the code was largely the same, the CLI no longer needs to specify
a stdout_callback, since all the necessary logging already happens.

The GUI now only adds an stdout_callback to detect if there was an
error during the conversion process.
2022-11-14 08:54:02 +00:00
deeplow
2d587f4082
Parallel cli bulk conversions via threading
Initial parallel document conversion: creates a pool of N threads
defined by the setting 'parallel_conversions'. Each thread calls
convert() on a document.
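
A minimal sketch of that thread pool (names are illustrative, not the project's actual code):

```python
from concurrent.futures import ThreadPoolExecutor

def convert_all(documents, convert, parallel_conversions: int) -> None:
    """Convert documents in a pool of worker threads, one document per call."""
    with ThreadPoolExecutor(max_workers=parallel_conversions) as executor:
        list(executor.map(convert, documents))
```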
2022-11-14 08:54:00 +00:00
deeplow
e17912888a
Add test cases for bulk document conversion 2022-11-14 08:53:59 +00:00
deeplow
f9b564be03
Security: cli wildcard injection mitigation
Wildcard arguments like `*` can lead to security vulnerabilities
if files are maliciously named as would-be parameters. In the following
scenario, if a file in the current directory was named '--help', running
the following command would show the help.

  $ dangerzone-cli *

By checking if parameters also happen to be files, we mitigate this
risk and have a chance to warn the user.
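
A hedged sketch of that check (function name and message are illustrative):

```python
import os

def warn_on_ambiguous_args(args: list[str]) -> None:
    """Warn when an option-looking argument is also an existing file,
    which suggests a shell wildcard expanded a maliciously named file."""
    for arg in args:
        if arg.startswith("-") and os.path.exists(arg):
            print(f"Warning: argument '{arg}' is also a file in the current directory")
```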
2022-11-14 08:53:38 +00:00
deeplow
981716ccff
Sequential bulk document support in cli
Basic implementation of bulk document support in dangerzone-cli.

Usage: dangerzone-cli [OPTIONS] doc1.pdf doc2.pdf
2022-11-14 08:51:00 +00:00
Alex Pyrgiotis
1147698287
Update changelog wrt Ubuntu Focal
Signed-off-by: Alex Pyrgiotis <alex.p@freedom.press>
2022-11-10 16:35:48 +02:00
Alex Pyrgiotis
e7a8ea8e9f
Add extra installation steps for Ubuntu Focal
Add extra installation steps for installing Podman in Ubuntu Focal,
since it's not present in the official Ubuntu repos. This is the final
requirement to reinstate Ubuntu Focal support.

Closes #206
2022-11-10 16:35:48 +02:00
Alex Pyrgiotis
badafaaf15
ci: Reinstate Ubuntu Focal support
Reinstate support for Ubuntu Focal, which was previously removed in
commit 229ebbda14.

Refs #206
2022-11-10 16:35:48 +02:00
Alex Pyrgiotis
1daaafe2a3
install: Introduce a script for installing Podman
Introduce a script for installing Podman in Ubuntu Focal, in
environments that may, or may not, have sudo installed.

Also, update our CircleCI configuration to use this script when
installing Podman.
2022-11-10 16:35:48 +02:00
Alex Pyrgiotis
5a3a46cd46
Support Click 7.x callback handling
Support Click version 7.x and below, which inspect the number of
arguments a callback handler supports.

Refs #206
2022-11-10 16:35:48 +02:00
Alex Pyrgiotis
ef5abe1419
Report missing supported versions
Report some Linux versions that were recently supported (Debian 12 /
Fedora 37) in the installation instructions. These instructions were
copied from the Dangerzone wiki, which is why the recently supported
versions were missing.
2022-11-10 16:35:48 +02:00
Alex Pyrgiotis
b9fdafe5cc
Copy installation instructions to source
Copy installation instructions from the Dangerzone wiki [1] into the
Dangerzone source. This has several benefits:

1. Devs can update installation instructions as part of a PR.
2. Users can see installation instructions for previous releases.

The last point is important, because we can update our instructions in
the main branch, without affecting the instructions a user follows from
the website (currently pointing to the Dangerzone Wiki).

Refs #240

[1]: https://github.com/freedomofpress/dangerzone/wiki/Installing-Dangerzone
2022-11-10 16:35:43 +02:00
Guthrie McAfee Armstrong
2085405d05
Remove redundant f-strings 2022-11-10 09:59:09 +00:00
deeplow
968fd20ac7
fix comma typo 2022-11-10 09:59:08 +00:00
deeplow
e4ff9801ee
make lint happy 2022-11-10 09:59:05 +00:00
Guthrie McAfee Armstrong
1bd8354228
simplify setting percentage to 0.0 2022-11-10 09:59:04 +00:00
Guthrie McAfee Armstrong
9989ffea37
catch ValueError, simplify try/except on top-level job runs
See https://github.com/freedomofpress/dangerzone/pull/167#discussion_r915757189
2022-11-10 09:59:02 +00:00
Guthrie McAfee Armstrong
6b44db9043
Update container/dangerzone.py
Co-authored-by: deeplow <47065258+deeplow@users.noreply.github.com>
2022-11-10 09:59:01 +00:00
Guthrie McAfee Armstrong
3ef8b183e2
Update container/dangerzone.py
Co-authored-by: deeplow <47065258+deeplow@users.noreply.github.com>
2022-11-10 09:58:59 +00:00
Guthrie McAfee Armstrong
2533eac4be
Rename ConversionJob back to DangerzoneConverter
Co-authored-by: deeplow <47065258+deeplow@users.noreply.github.com>
2022-11-10 09:58:57 +00:00
Guthrie McAfee Armstrong
5a4bf99211
Remove another "END OF FOR LOOP" comment 2022-11-10 09:58:54 +00:00
Guthrie McAfee Armstrong
c18f170caf
Remove "END OF FOR LOOP" comment
Co-authored-by: deeplow <47065258+deeplow@users.noreply.github.com>
2022-11-10 09:58:53 +00:00
Guthrie McAfee Armstrong
17939cb70c
Wrap dangerzone.py back into a class to keep track of percentage 2022-11-10 09:58:51 +00:00
Guthrie McAfee Armstrong
eaa08c9c3d
refactor dangerzone.py, raise exceptions instead of returning int
Standardize calls to subprocess.run to shrink file by about 100 lines
2022-11-10 09:58:50 +00:00
Guthrie McAfee Armstrong
7a84b89410
(container functions): Replace int return codes with raised exceptions 2022-11-10 09:58:48 +00:00
Guthrie McAfee Armstrong
c78b1ea71b
Flatten DangerzoneConverter methods into functions 2022-11-10 09:58:45 +00:00
Alex Pyrgiotis
82fc69655e
Align Poetry instructions across OSes
Align build instructions about Python Poetry, which were previously
present only on MacOS and Windows. With this commit we:

1. Add Poetry instructions on Linux.
2. Add missing Poetry instructions on Windows, when running Dangerzone
   from source.
2022-11-07 12:03:24 +02:00
Alex Pyrgiotis
1ea015bb68
Bump changelog 2022-11-07 12:03:24 +02:00
Alex Pyrgiotis
43617366a5
Update poetry.lock
Run `poetry update` to update the `poetry.lock` file to the latest
version.
2022-11-07 11:46:41 +02:00
deeplow
b1892077fa
Add fedora 37 support in CI
Fedora 37 had been removed (commit d7cbe41) due to lack of support by
packagecloud (our package hosting solution at the time). This is no
longer true, and thus we can add this distro to the list of supported
ones.
2022-10-27 14:53:17 +01:00
deeplow
52bd7b3033
Add long description to setup.py
Building stdeb on bookworm is failing [1] on a missing long_description:

    File "/usr/lib/python3/dist-packages/stdeb/util.py", line 934, in __init__
        for line in long_description.split('\n'):
    AttributeError: 'NoneType' object has no attribute 'split'

[1]: https://app.circleci.com/pipelines/github/freedomofpress/dangerzone/484/workflows/38c579d5-b335-49ab-b56d-9539d93ef16e/jobs/2110
2022-10-27 14:49:25 +01:00
deeplow
77c7cba563
Add support for Debian Bookworm
Fixes #172
2022-10-27 14:49:23 +01:00
Alex Pyrgiotis
a14b4e9620
Fix a minor typo 2022-10-27 13:44:18 +01:00
deeplow
649e427486
Make DangerzoneGui a subclass of DangerzoneCore
Simplify state sharing by having all dangerzone core logic in one
single class instead of two.
2022-10-27 13:44:16 +01:00
deeplow
dca290fb6b
Rename gui.common.GuiCommon class to gui.logic.DangerzoneGui
Rename the `gui.common` module and `gui.common.GuiCommon` class
to `gui.logic` and `gui.logic.DangerzoneGui` respectively. We keep as is
the original names of the variables that hold instances of this class,
since they will change in subsequent commits.

This change is part of the initial refactor to make the DangerzoneGui
class handle the GUI logic of the Dangerzone project.
2022-10-27 13:44:15 +01:00
deeplow
cb8130042e
Rename global_common.GlobalCommon class to logic.Dangerzone
Rename the `global_common` module and `global_common.GlobalCommon` class
to `logic` and `logic.Dangerzone` respectively. Also rename variables
that hold instances of this class.

This change is part of the initial refactor to make the Dangerzone class
handle the core logic of the Dangerzone project.
2022-10-27 13:44:13 +01:00
deeplow
2bed3c10e4
Move safe PDF naming logic to document.py
Let the Document class suggest the default filename for the safe PDF,
based on the provided input filename, appended with the extension
`-safe.pdf`.

Previously, this logic was copy-pasted throughout the code, which made
it difficult to maintain.
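
For illustration, a hedged sketch of that naming rule (function name and signature are hypothetical):

```python
from pathlib import Path

def default_output_filename(input_filename: str) -> str:
    """Derive the default safe-PDF name, e.g. report.pdf -> report-safe.pdf."""
    path = Path(input_filename)
    return str(path.with_name(path.stem + "-safe.pdf"))
```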
2022-10-27 13:44:12 +01:00
deeplow
7aa08457bd
Always resolve relative paths in Document class
Make the Document class always resolve relative input/output file paths,
which are usually passed as arguments by users.

Previously, resolving relative filepaths was a job left to the
instantiators of the Document class. This was error-prone since this
conversion must happen in all the places where we instantiated the
Document class.
2022-10-27 13:44:11 +01:00
deeplow
be5a942a73
Add unit tests for document.py 2022-10-27 13:44:09 +01:00
Alex Pyrgiotis
a068770ab4
Validate filename arguments through Click
Implement Click's callback interface and create validators for the
input/output filenames, using the logic from the Document class. This
way, we can catch user errors as early as possible.
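
A minimal sketch of a Click validation callback; the existence check below is only a stand-in for the Document-based validation described above:

```python
import os
import click

def validate_input_filename(ctx: click.Context, param: click.Parameter, value: str) -> str:
    # Stand-in check: reject filenames that don't point to an existing file.
    if not os.path.isfile(value):
        raise click.BadParameter(f"File '{value}' does not exist")
    return value

@click.command()
@click.argument("filename", callback=validate_input_filename)
def cli(filename: str) -> None:
    click.echo(f"Converting {filename}")

if __name__ == "__main__":
    cli()
```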
2022-10-27 13:44:08 +01:00
deeplow
db17bd0915
Validate I/O filenames in Document class
Factor out the filename validation logic and move it into the Document
class. Previously, the filename validation logic was scattered across
the CLI and GUI code.

Also, introduce a new errors.py module whose purpose is to handle
document-related errors, by providing:

* A special exception for them (DocumentFilenameException)
* A decorator that handles DocumentFilenameException, logs it and the
  underlying cause, and exits the program gracefully.
2022-10-27 13:44:06 +01:00
deeplow
e8b56627c9
Rename select_document() function to new_window()
Rename select_document() to new_window() to better encapsulate the fact
that this function is opening a new Dangerzone window.
2022-10-27 13:44:04 +01:00
deeplow
e487b7f0a9
Instantiate documents with a filename
Avoid setting document's filename via document.filename and instead
do it via object instantiation where possible.

Incidentally, this required changing some window logic. When
select_document() is called, it no longer checks if there is already an
open window with no document selected yet. The user can open as many
windows with unselected documents as they want.
2022-10-27 13:44:03 +01:00
deeplow
0493aca036
Rename common.Common class to document.Document
Rename the `common` module and `common.Common` class to `document` and
`document.Document` respectively. Also, rename the variables that hold
instances of this class.

This change reflects the fact that the class is responsible for tracking
the state of the document. When we add bulk document conversion,
allowing us to keep track of a document's state will be key. This name
change is a step towards that.
2022-10-27 13:44:01 +01:00
Alex Pyrgiotis
03c3541bdc
tests: Run Mypy against tests
Run Mypy static checks against our tests. This brings them inline with
the rest of the codebase, and we have an extra level of certainty that
the tests (and unit tests in particular) will not significantly diverge
from the code they are testing.
2022-10-25 19:09:23 +03:00
Alex Pyrgiotis
2279d48807
tests: Fix a Windows-only test 2022-10-25 19:09:23 +03:00
Alex Pyrgiotis
7d218e5522
tests: Fix path separator issues on Windows
Concatenate directories and filenames in a platform-independent way, by
using pathlib.Path. This fixes issues in the tests where the "/" path
separator made the tests fail on Windows.
2022-10-25 19:09:22 +03:00
Alex Pyrgiotis
ae67dfa5a9
tests: Test filenames with spaces in them
Add two tests that check if Dangerzone properly handles input and output
filenames with spaces in them. Previously this was not straightforward
because we didn't tokenize arguments, which led to Click splitting
filenames with spaces in two.
2022-10-25 19:09:22 +03:00
Alex Pyrgiotis
51d4fb04c8
tests: Tokenize CLI arguments
Pass tokenized arguments (i.e., arguments as lists of strings) to CLI
invocations, else Click will attempt to tokenize them internally. The
problem with leaving tokenization to Click is that it uses
`shlex.split()`, which is Unix-oriented, and may miss some cases in
Windows.
2022-10-25 19:09:22 +03:00
Alex Pyrgiotis
6b7797639c
tests: Wrap Click results with extra functionality
Wrap Click results (`Result`) with a new class (`CLIResult`), which
includes:

1. Assertion statements.
2. Logic for formatting and printing a Click result.
3. Invocation arguments, which are missing from the original `Result`
   class.
2022-10-25 19:09:17 +03:00
deeplow
a6c2b943f4
document new windows dev dep.: MS Visual C++ >= 14
On a Windows system, when running `pip install`, it fails to install
`cx_Logging-3.0` with the error:

    error: Microsoft Visual C++ 14.0 or greater is required. Get it
    with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/

Installing this dependency solves the issue.
2022-10-25 10:23:02 +01:00
Guthrie McAfee Armstrong
e552411db2
Support Python 3.10
PySide2 5.15.2.1 added support for Python 3.10
2022-10-25 10:23:00 +01:00
deeplow
225cb2b1d2
Merge pull request #203 from origin/166-static-methods
Reduce "global_common" coupling by moving methods that could be
static onto "semantically-closer" py files.

Based on work initially made by @gmarmstrong on PR #166:

  - moves container-specific code out of global_common.py and into
    container.py
  - creates a util.py for static methods used through the whole app
  - move banner code from global_common onto cli.py given that it's
    only displayed there
  - updates tests to reflect these changes
  - move ocr_languages from global_common onto its own json file in
    share/ocr-languages.json to simplify global_common logic
2022-09-15 15:19:10 +01:00
deeplow
aecacee315
fix return type for container.install()
Note: the container installation failure is not addressed here. See
https://github.com/freedomofpress/dangerzone/issues/193
2022-09-15 13:26:05 +01:00
deeplow
82ac22e837
remove hardcoded 'docker' logging reference
Closes #122 as this was the last remaining hardcoded docker
reference where the code also applied to podman.
2022-09-15 12:17:22 +01:00
deeplow
57e455bbf1
remove "container" from container.py method names
Container-related methods recently moved to container.py no longer
need to have 'container' in their name as they are within the
container scope already.

Additionally, it made it awkward to call from another module:

    from .. import container
    container.get_container_runtime()
2022-09-15 12:09:38 +01:00
deeplow
6202c0dba9
deduplicate container-tech-checking logic
The logic for detecting whether we are running on Docker or Podman,
and for identifying the respective binary, was scattered across the
codebase. This centralizes it all in container.py
2022-09-15 12:09:37 +01:00
deeplow
a822870853
move global_common container logic to container.py
Container-specific methods in global_common class were basically
static methods. So it made sense to move these to container.py
2022-09-15 12:09:34 +01:00
deeplow
272281a29e
move to util: get_subprocess_startupinfo 2022-09-15 10:40:36 +01:00
deeplow
2d6826afa9
move ocr_languages from global_common to share/
ocr_languages can be treated as just a json file instead of being
in global_common. This way it is easier to maintain and makes
global_common cleaner.
2022-09-15 10:40:34 +01:00
deeplow
c0f0e7bf6a
move banner() code to cli & version() to util
- display_banner() was only displayed in CLI mode, so it makes sense
for it to be in the CLI.
- get_version() was moved to util since it is a static function
that is needed in multiple parts of the application.
2022-09-15 10:40:31 +01:00
deeplow
ce57fc0449
move get_resource_path to util.py
static methods that are used application-wide should belong to
the utilities python file.

inspired by @gmarmstrong's PR #166 on refactoring global_common
methods to be static and have a dzutil.py
2022-09-15 09:24:11 +01:00
deeplow
0f4e6e9156
create non-ascii filename test cases dynamically instead of static PDF
originally PDF files were included for these edge cases but in
reality all we want to test is the filename itself. So it reduces
repo size if we generate them dynamically.
2022-09-13 13:38:33 +01:00
deeplow
d5eefeab3d
report non-containerized code coverage 2022-09-13 13:17:22 +01:00
deeplow
01a5e3b7ca
fix type hints for gui-common (CI would fail)
CI fails: https://app.circleci.com/pipelines/github/freedomofpress/dangerzone/397/workflows/cba836ed-98df-41f8-8f34-abcde5a8c015/jobs/1538
2022-09-13 13:17:20 +01:00
deeplow
75fe45cfb6
replace automated test code in CI 2022-09-13 13:17:15 +01:00
deeplow
9e3d404ed8
make test: add to Makefile & enabled parallel runs
parallel pytest via pytest-xdist
2022-09-13 13:08:01 +01:00
deeplow
d3f478b17f
migrate to pytest & test_docs -> tests/test_docs
Use pytest instead of unittest to have greater flexibility in test
parametrization.
2022-09-13 13:07:58 +01:00
deeplow
4251d824ab
add pytest dep. for testing
The parameterization features of pytest over the default unittest
will be useful to reduce test code. Furthermore, pytest is already
used by folks at FPF so there won't be any learning curve if folks
want to work on it.
2022-09-13 13:07:15 +01:00
deeplow
f10446c309
make dz-cli exit(1) when it fails
Otherwise the failure cannot be detected easily by the calling
tests.
2022-09-13 13:07:13 +01:00
deeplow
84acf116c7
remove non-implemented tests
Since we're not doing test-driven development, we should not have
tests for unimplemented features.
2022-09-13 13:07:12 +01:00
deeplow
377665c459
move tests to project root 2022-09-13 13:07:10 +01:00
Guthrie McAfee Armstrong
36d96ccb5c
Add unit tests 2022-09-13 13:06:59 +01:00
deeplow
e923c5a803
Merge pull request #200 from freedomofpress/missing-0.3.2-changelog 2022-09-12 08:41:21 +01:00
deeplow
20679c3159
Add missing entry in 0.3.2 changelog
The change in https://github.com/freedomofpress/dangerzone/pull/197 ended up being included in the release but had not been added to the changelog.
2022-09-07 05:46:48 -04:00
Micah Lee
d7cbe419cc
Comment out deploying to fedora 37, because packagecloud.io does not support it yet 2022-09-06 10:43:18 -07:00
Micah Lee
5dd23d13f4
Update download links to 0.3.2 in readme 2022-09-06 10:27:40 -07:00
Micah Lee
b5249284a4
Merge pull request #197 from freedomofpress/196-container-leakage
remove container after use
2022-09-06 10:20:03 -07:00
deeplow
42b4d164cb
Merge pull request #198 from origin/0.3.2-release-updates
Updates to the macOS and Windows build scripts and documentation:
  - Switches from hardcoding the exact minor release of Python 3.9
    to just using Python 3.9
  - Switches from 32-bit Windows Python binaries to 64-bit
  - Installs poetry on Windows using pip, which is much simpler and
    less error-prone than the PowerShell way
  - Includes instructions for making the Windows release in a
    Windows 11 VM, and building the container image on the host
  - Updates the fingerprint of the Windows signing key
  - Fixes a small bug with the .wxs file used to build the MSI
    package
2022-08-29 08:41:56 +01:00
Micah Lee
383bd5dbed
Enforce code style 2022-08-26 14:12:01 -07:00
Micah Lee
6713cce503
Updates to the macOS and Windows build scripts and documentation 2022-08-26 14:06:06 -07:00
deeplow
1fa1b90c30
remove container after use
The containers and their respective volumes were not being deleted.
By adding `--rm` to the `podman run` it now removes the containers
after use along with anonymous (unnamed) volumes [1]. The same
happens in docker [2].

Fixes #196

[1]: https://docs.podman.io/en/latest/markdown/podman-run.1.html#volume-v-source-volume-host-dir-container-dir-options
[2]: https://docs.docker.com/storage/volumes/#remove-volumes
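In terms of the runtime invocation, the change boils down to adding one flag to the argument list (a simplified sketch, not the exact Dangerzone command):

```python
import subprocess

container_runtime = "podman"  # or "docker"
args = [
    container_runtime,
    "run",
    "--rm",  # remove the container and its anonymous volumes once it exits
    "dangerzone.rocks/dangerzone",
]
subprocess.run(args, check=True)
```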
2022-08-26 10:14:43 +01:00
deeplow
eabf7a9c18
bump version (0.3.2) & append to CHANGELOG.md 2022-08-25 09:23:40 +01:00
deeplow
6b385abfef
fix regression: --output-filename fails
--output-filename failed with the message:

   Safe PDF filename is not writable

Bug introduced in commit 95ed346.
2022-08-25 09:03:43 +01:00
deeplow
83e6b0475f
add to RELEASE instructions to bump brew cask
fixes #190
2022-08-22 13:20:30 +01:00
deeplow
ec3b92a008
install_container return true when already installed 2022-08-22 12:28:50 +01:00
deeplow
092456434b
don't type check dev scripts 2022-08-22 12:28:48 +01:00
deeplow
23e30ae40a
check that OCR_LANGUAGE has also been set 2022-08-22 12:28:46 +01:00
deeplow
463ff97b97
add type hints to container dz py code 2022-08-22 12:28:44 +01:00
deeplow
f44e6521b6
better handle QFileDialog.getOpenFileName filename 2022-08-22 12:28:39 +01:00
deeplow
e0b3c5b599
resolve naming conflict: QWidget.update()
QWidget.update() is already a built-in Qt method [1]. This method
was unintentionally being overridden. Renamed it to update_progress
to fix it.

[1]: https://doc.qt.io/qtforpython-5/PySide2/QtWidgets/QWidget.html#PySide2.QtWidgets.PySide2.QtWidgets.QWidget.update
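A reduced example of the clash and the rename (the widget class and attribute are illustrative):

```python
from PySide2 import QtWidgets


class ConversionWidget(QtWidgets.QWidget):
    # def update(self, percentage): ...  # would shadow QWidget.update() (repaint)

    def update_progress(self, percentage: int) -> None:
        """Renamed so Qt's own update() keeps its repaint semantics."""
        self.percentage = percentage
```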
2022-08-22 11:13:37 +01:00
deeplow
75ce244195
type hint application wrapper monkeypatch
ignore method assignment. Currently mypy cannot check this.
Related upstream issues:
  - https://github.com/python/mypy/issues/2427
  - https://github.com/python/mypy/issues/708
2022-08-22 11:13:35 +01:00
deeplow
bc7188eb4d
add dev dependency: PySide2-stubs
Mypy was returning many errors relating to PySide2, which didn't
make much sense. This is apparently because there are missing type
hinting stubs for PySide2.

The temporary solution is to add this devel dependency.

Upstream issue: (remove dep. when solved)
  - https://bugreports.qt.io/browse/PYSIDE-1675
2022-08-22 11:13:29 +01:00
deeplow
ec8bafa27c
add mypy lint check 2022-08-22 11:12:24 +01:00
deeplow
392c4bddb5
add blank line at end of file (black lint)
Satisfy the black lint tool
2022-08-22 11:12:22 +01:00
deeplow
201bf5ec03
simplify ansi disabling on mac (removing type issues) 2022-08-22 11:12:20 +01:00
deeplow
95ed34626d
fix type hint in checking if output files exist 2022-08-22 11:12:18 +01:00
deeplow
46a62c1669
fix type hints with common input/output filename
input_filename and output_filename could be None or str. This led
to typing issues where the static analysis type hint tool could not
check that type collisions would not happen at runtime.

So the logic was replaced by throwing a runtime exception if either
of these variables is ever used without first having been set.
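A sketch of that pattern (the property name and message are illustrative):

```python
from typing import Optional


class Document:
    def __init__(self, input_filename: Optional[str] = None) -> None:
        self._input_filename = input_filename

    @property
    def input_filename(self) -> str:
        if self._input_filename is None:
            # Fail loudly instead of letting a None leak into path handling.
            raise RuntimeError("Input filename has not been set yet")
        return self._input_filename
```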
2022-08-22 11:12:16 +01:00
deeplow
7b46d1e3cf
fix spacing (black lint tool) 2022-08-22 11:12:14 +01:00
deeplow
f67c1c3656
fix TypeErrors "object is not subscriptable"
The type hint should be List[] instead of list[] [1], and the TypeError
'ABCMeta' object is not subscriptable is fixed by using typing.Callable instead.

[1]: https://mail.python.org/pipermail/python-dev/2017-April/147818.html
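For example, on the Python versions supported at the time (pre-3.9), the subscriptable forms come from typing:

```python
from typing import Callable, List

# list[str] and collections.abc.Callable[...] only become subscriptable at
# runtime from Python 3.9 on; the typing aliases also work on 3.7/3.8.
def convert_all(filenames: List[str], progress: Callable[[int], None]) -> None:
    for index, _filename in enumerate(filenames):
        progress(index)
```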
2022-08-22 11:12:10 +01:00
deeplow
dcc0b269cd
fix typing for filename in gui_main (is optional) 2022-08-22 11:10:04 +01:00
deeplow
e76132a2f0
add typed hints to Settings dictionary
Originally tried to implement PEP 589 [1] (TypedDict: Type Hints for
Dictionaries with a Fixed Set of Keys) for the Settings dict.

But this quickly turned out to be very challenging without redoing the
code. So we opted instead for using the Any keyword.

[1]: https://peps.python.org/pep-0589/
2022-08-22 11:09:13 +01:00
deeplow
b1c039c4a4
add type hinting to systray (avoid circular imports) 2022-08-22 11:09:11 +01:00
deeplow
b34f7381b4
fix GlobalCommon ref. that was supposed to be Common
The type hints actually warned about this inconsistency.
2022-08-22 11:09:09 +01:00
deeplow
ccacf50db5
simplify resources_path logic to resolve type hint
The following logic was leading to type hint issues:

>	inspect.getfile(inspect.currentframe())

But this code is overly complex; what it does is the same as simply
using __file__. So we kill two birds with one stone, so to speak.
2022-08-22 11:09:03 +01:00
deeplow
c69f228261
handle case for no Popen.stdin
Similar to the previous commit (cb0f828)
2022-08-22 10:52:39 +01:00
deeplow
f99131e30c
type hints for container.py & handle no stdout
We added the following check as well:

+        if stdout_callback and p.stdout is not None:

Because, according to the subprocess docs[1]:

>  If the stdout argument was not PIPE, this attribute is None.

In this case, mypy's static analysis should not need us to confirm
that p.stdout is not None. However, it still complained, so we did
mypy the favor and confirmed this was the case.

[1]: https://docs.python.org/3/library/subprocess.html#subprocess.Popen.stdout
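A reduced version of the pattern (the callback and arguments are hypothetical):

```python
import subprocess
from typing import Callable, List, Optional


def run_container(args: List[str],
                  stdout_callback: Optional[Callable[[str], None]] = None) -> int:
    p = subprocess.Popen(args, stdout=subprocess.PIPE, text=True)
    # Popen.stdout is Optional: it is None whenever stdout was not PIPE, so
    # mypy wants the explicit check even though PIPE is always passed here.
    if stdout_callback and p.stdout is not None:
        for line in p.stdout:
            stdout_callback(line)
    return p.wait()
```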
2022-08-22 10:52:17 +01:00
deeplow
78daf75638
add type hint to GuiCommon app argument 2022-08-22 10:49:04 +01:00
deeplow
4aab47af38
ignore type hint to windows-only subprocess command
`subprocess.STARTUPINFO()` only exists on Windows systems. Because
of this, on Linux-based systems it was raising type hint issues,
as mypy didn't recognize the returned value.
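A common workaround is to gate the call on the platform and silence mypy for the Windows-only attributes; a sketch of that shape (not necessarily the exact Dangerzone helper):

```python
import platform
import subprocess


def get_subprocess_startupinfo():
    """Return a STARTUPINFO that keeps subprocess console windows hidden on Windows."""
    if platform.system() == "Windows":
        # These attributes only exist on Windows, hence the ignores for mypy
        # running on a Linux host.
        info = subprocess.STARTUPINFO()  # type: ignore [attr-defined]
        info.dwFlags |= subprocess.STARTF_USESHOWWINDOW  # type: ignore [attr-defined]
        return info
    return None
```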
2022-08-22 10:49:02 +01:00
deeplow
6ddd411be8
add type get_container_runtime & handle no runtime
There was no code to handle whether the runtime existed at this stage.
This caused issues with type hints since `shutil.which()` can
return None, which had not previously been accounted for.

We did not use the opportunity to consolidate all the code for
detecting the runtime, to make this review easier.
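A minimal sketch of the added handling (the exception type and message are illustrative):

```python
import platform
import shutil


def get_container_runtime() -> str:
    runtime_name = "podman" if platform.system() == "Linux" else "docker"
    runtime = shutil.which(runtime_name)
    if runtime is None:
        # shutil.which() returns None when the binary is not on PATH;
        # mypy forces this case to be handled before returning a str.
        raise RuntimeError(f"{runtime_name} is not installed")
    return runtime
```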
2022-08-22 10:48:57 +01:00
deeplow
665e4d54f7
add type hints (1st pass: non problematic cases) 2022-08-22 10:33:28 +01:00
deeplow
d579a47a84
add type hints (1st pass: non problematic cases) 2022-08-22 10:33:23 +01:00
deeplow
1f8e23f164
make mypy more pedantic
Borrow the mypy configuration from the securedrop-client Makefile
2022-08-22 10:30:40 +01:00
deeplow
75c4ee3d2b
add mypy lint to makefile 2022-08-22 10:30:38 +01:00
deeplow
93392f8206
add mypy as dev dependency (type checking lint) 2022-08-22 10:30:35 +01:00
deeplow
7bac3eb6b1
remove get_resource_path() comments (too long)
The black lint tool complained.
2022-08-22 10:15:32 +01:00
deeplow
ece36dc287
add lint checks on CI 2022-08-22 10:15:30 +01:00
deeplow
4d8e4c53e3
sort imports with isort linter 2022-08-22 10:15:26 +01:00
deeplow
90a51a0004
apply black lint tool's suggestions 2022-08-22 10:03:59 +01:00
deeplow
6fc0e2c15f
add Makefile with linters (black & isort)
- borrowed makefile self-help code from SecureDrop
- considered windows dev env case: GNU make available via Cygwin
2022-08-22 10:03:57 +01:00
deeplow
b73efb30ae
add isort as dev dependency 2022-08-22 10:03:49 +01:00
deeplow
bd51947fca
deduplicate container_args
The container arguments was duplicated. This could potentially lead
to refactor errors. For example security arg could be added in one
container call but forgotten to be added in a second one.
2022-08-22 09:24:40 +01:00
deeplow
345ac8a396
podman run with --userns=keep-id to mount volumes
Moving to /dangerzone was failing with insufficient permissions:

    Invalid JSON returned from container: PermissionError: [Errno
    13] Permission denied: '/dangerzone/page-3.rgb'

A previous approach was removed in commit 805222. It started as
root in a wrapper script and then dropped those privileges while
running the script.

`--userns=keep-id` solves the mountpoint issues, as the user
starting the container is mapped into the container [1].

[1]: https://www.redhat.com/sysadmin/user-flag-rootless-containers
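Concretely, the flag is appended to the podman arguments, along these lines (the bind mount shown is illustrative):

```python
container_runtime = "podman"
args = [
    container_runtime,
    "run",
    "--userns=keep-id",  # map the invoking user to the same UID inside the container
    "-v", "/tmp/dangerzone-work:/dangerzone",  # hypothetical bind mount
    "dangerzone.rocks/dangerzone",
]
```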
2022-08-22 08:44:00 +01:00
deeplow
21a9a6c98c
running dangerzone without root in container
There was previously a user created in the container but it was not
used via the dockerfile RUN directive (as pointed out by
gmarmstrong[1]).

Fixes #169

[1]: https://github.com/freedomofpress/dangerzone/issues/169#issue-1268399245
2022-08-22 08:43:58 +01:00
deeplow
2d4bad680e
drop all linux kernel capabilities from containers
These are not needed in order to convert documents in the
dangerzone containers.
2022-08-22 08:43:56 +01:00
deeplow
a02801cc2d
add again the --security-opt flag
Had previously been added but removed in a refactor (see commit
488dca).
2022-08-22 08:43:32 +01:00
Guthrie McAfee Armstrong
e63c931800
Remove psutil, termcolor, and wmi dependencies 2022-08-19 15:16:19 +01:00
Guthrie McAfee Armstrong
575c4b2302
Remove macholib dependency (fix #145) 2022-08-19 15:16:16 +01:00
Guthrie McAfee Armstrong
395eba0a74
Remove requests dependency 2022-08-19 15:16:14 +01:00
Guthrie McAfee Armstrong
0b9e91434d
Update poetry.lock 2022-08-19 15:15:00 +01:00
deeplow
f2f2e6f143
in cli-mode banner should be printed instead
Logging the banner instead of printing it was causing color to spill
over into the adjacent text. Since this is the CLI version, it makes
more sense to have it printed.
2022-08-18 12:20:26 +01:00
deeplow
67d91be81a
replace prints with logging
fixes #144: printing non-ascii characters in a macOS application
opened directly from finder would sometimes lead to an error
message in /var/log/system.log similar to this:

  Failed to execute script 'dangerzone' due to unhandled exception:
  'ascii' codec can't encode character '\u201c' in position 1:
  ordinal not in range(128)
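The change boils down to routing messages through the logging module instead of print() (the message shown is illustrative):

```python
import logging

log = logging.getLogger(__name__)
logging.basicConfig(format="%(message)s", level=logging.INFO)

# Before: print(f"Converting “{filename}”")
log.info("Converting “%s”", "résumé.pdf")  # after: routed through logging
```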
2022-08-18 12:07:23 +01:00
deeplow
c2a140807f
simplify get_resource_path logic
Simplifying the logic for obtaining resource paths by using pathlib
instead of inspect.

Co-authored-by: Guthrie McAfee Armstrong <git@gmarmstrong.dev>
Based on commit bbce13d
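In spirit, the simplification is this (the directory layout is an assumption):

```python
from pathlib import Path


def get_resource_path(filename: str) -> str:
    # __file__ already identifies this module; no need for
    # inspect.getfile(inspect.currentframe()).
    share_dir = Path(__file__).resolve().parent / "share"
    return str(share_dir / filename)
```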
2022-08-16 17:06:43 +01:00
deeplow
4d9f729654
fix win build failure due to package autodiscovery
Setuptools was trying to autodiscover packages with an error
described in #178 [1]. Adding the packages arg to setup() solves
it. In the future we may want to centralize the package list in
a pyproject.toml, once it goes out of beta in setuptools [2].

Fixes #178

[1]: https://github.com/freedomofpress/dangerzone/issues/178
[2]: https://setuptools.pypa.io/en/latest/userguide/package_discovery.html?highlight=package%20discovery#package-discovery-and-namespace-packages
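The fix amounts to passing an explicit package list to setup(), roughly as follows (the list contents are illustrative):

```python
from setuptools import setup

setup(
    name="dangerzone",
    # An explicit list disables setuptools' auto-discovery, which was
    # failing on Windows builds.
    packages=["dangerzone", "dangerzone.gui"],
)
```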
2022-08-16 14:29:11 +01:00
deeplow
80a3543202
Merge branch 'update-ci' 2022-08-05 11:38:20 +01:00
deeplow
c713801e77
remove EOL ubuntu versions 2022-08-04 19:23:41 +01:00
deeplow
47364c200c
disable debian 12 while waiting on upstream fix
More details at https://github.com/freedomofpress/dangerzone/issues/172
2022-07-20 10:23:58 +01:00
deeplow
72f5200de5
Merge pull request #171 from montoyamoraga/patch-1
delete repetition of word "of"
2022-07-15 05:13:09 -04:00
deeplow
a04ed076cb
update distros in CI (deprecate old & add new ver.) 2022-07-11 11:01:19 +01:00
aarón montoya-moraga
9733e562f9
delete repetition of word "of" 2022-06-26 01:59:54 -04:00
223 changed files with 27544 additions and 3444 deletions


@ -1,388 +0,0 @@
version: 2.1
aliases:
- &install-dependencies-deb
name: Install dependencies (deb)
command: |
export DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true
apt-get update
apt-get install -y git ssh podman python-all dh-python python3 python3-stdeb python3-pyside2.qtcore python3-pyside2.qtgui python3-pyside2.qtwidgets python3-appdirs python3-click python3-xdg python3-requests python3-colorama
- &install-dependencies-rpm
name: Install dependencies (rpm)
command: |
dnf install -y podman git openssh make automake gcc gcc-c++ rpm-build python3-setuptools python3-pyside2 python3-appdirs python3-click python3-pyxdg python3-requests python3-colorama
- &build-deb
name: Build the .deb package
command: |
./install/linux/build-deb.py
ls -lh deb_dist/
- &build-rpm
name: Build the .rpm package
command: |
./install/linux/build-rpm.py
ls -lh dist/
- &restore-cache
key: v1-{{ checksum "container/Dockerfile" }}-{{ checksum "container/dangerzone.py" }}
paths:
- /caches/container.tar.gz
- /caches/image-id.txt
- &copy-image
name: Copy container image into package
command: |
cp /caches/container.tar.gz share/
cp /caches/image-id.txt share/
- &deploy-packagecloud
command: |
VERSION=$(cat share/version.txt)
echo "PACKAGE_TYPE is ${PACKAGE_TYPE}"
echo "PACKAGECLOUD_DISTRO is ${PACKAGECLOUD_DISTRO}"
echo "VERSION is ${VERSION}"
echo ""
if [[ "${PACKAGE_TYPE}" == "deb" ]]; then
echo "pushing: deb_dist/dangerzone_${VERSION}-1_all.deb"
package_cloud push "firstlookmedia/code/${PACKAGECLOUD_DISTRO}" "deb_dist/dangerzone_${VERSION}-1_all.deb"
echo ""
echo "pushing: deb_dist/dangerzone_${VERSION}-1.dsc"
package_cloud push "firstlookmedia/code/${PACKAGECLOUD_DISTRO}" "deb_dist/dangerzone_${VERSION}-1.dsc"
elif [[ "${PACKAGE_TYPE}" == "rpm" ]]; then
echo "pushing: dist/dangerzone-${VERSION}-1.noarch.rpm"
package_cloud push "firstlookmedia/code/${PACKAGECLOUD_DISTRO}" "dist/dangerzone-${VERSION}-1.noarch.rpm"
echo ""
echo "pushing: dist/dangerzone-${VERSION}-1.src.rpm"
package_cloud push "firstlookmedia/code/${PACKAGECLOUD_DISTRO}" "dist/dangerzone-${VERSION}-1.src.rpm"
fi
jobs:
build-container-image:
working_directory: /app
docker:
- image: docker:dind
steps:
- checkout
- restore_cache:
keys:
- v1-{{ checksum "container/Dockerfile" }}-{{ checksum "container/dangerzone.py" }}
- setup_remote_docker
- run:
name: Build Dangerzone image
command: |
if [ -f "/caches/container.tar.gz" ]; then
echo "Already cached, skipping"
else
docker build --cache-from=dangerzone.rocks/dangerzone --tag dangerzone.rocks/dangerzone container
fi
- run:
name: Save Dangerzone image and image-id.txt to cache
command: |
if [ -f "/caches/container.tar.gz" ]; then
echo "Already cached, skipping"
else
mkdir -p /caches
docker save -o /caches/container.tar dangerzone.rocks/dangerzone
gzip -f /caches/container.tar
docker image ls dangerzone.rocks/dangerzone | grep "dangerzone.rocks/dangerzone" | tr -s ' ' | cut -d' ' -f3 > /caches/image-id.txt
fi
- save_cache:
key: v1-{{ checksum "container/Dockerfile" }}-{{ checksum "container/dangerzone.py" }}
paths:
- /caches/container.tar.gz
- /caches/image-id.txt
convert-test-docs:
machine:
image: ubuntu-2004:202111-01
steps:
# https://www.atlantic.net/dedicated-server-hosting/how-to-install-and-use-podman-on-ubuntu-20-04/
- run:
name: Install podman on Ubuntu 20.04
command: |
export DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true
source /etc/os-release
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list"
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/xUbuntu_${VERSION_ID}/Release.key -O- | sudo apt-key add -
sudo apt-get update -qq -y
sudo apt-get -qq --yes install podman
podman --version
- checkout
- run:
name: Install poetry dependencies
command: |
sudo pip3 install poetry
poetry install
- run:
name: Prepare cache directory
command: |
sudo mkdir -p /caches
sudo chown -R $USER:$USER /caches
- restore_cache: *restore-cache
- run: *copy-image
- run:
name: Convert each test document
command: |
for FILE in $(ls test_docs); do
echo Converting $FILE
poetry run ./dev_scripts/dangerzone-cli test_docs/$FILE
echo
done
build-ubuntu-impish:
docker:
- image: ubuntu:21.10
resource_class: medium+
steps:
- run: *install-dependencies-deb
- checkout
- restore_cache: *restore-cache
- run: *copy-image
- run: *build-deb
build-ubuntu-hirsute:
docker:
- image: ubuntu:21.04
resource_class: medium+
steps:
- run: *install-dependencies-deb
- checkout
- restore_cache: *restore-cache
- run: *copy-image
- run: *build-deb
build-ubuntu-groovy:
docker:
- image: ubuntu:20.10
resource_class: medium+
steps:
- run: *install-dependencies-deb
- checkout
- restore_cache: *restore-cache
- run: *copy-image
- run: *build-deb
build-debian-bookworm:
docker:
- image: debian:bookworm
resource_class: medium+
steps:
- run: *install-dependencies-deb
- checkout
- restore_cache: *restore-cache
- run: *copy-image
- run: *build-deb
build-debian-bullseye:
docker:
- image: debian:bullseye
resource_class: medium+
steps:
- run: *install-dependencies-deb
- checkout
- restore_cache: *restore-cache
- run: *copy-image
- run: *build-deb
build-fedora-35:
docker:
- image: fedora:35
resource_class: medium+
steps:
- run: *install-dependencies-rpm
- checkout
- restore_cache: *restore-cache
- run: *copy-image
- run: *build-rpm
build-fedora-34:
docker:
- image: fedora:34
resource_class: medium+
steps:
- run: *install-dependencies-rpm
- checkout
- restore_cache: *restore-cache
- run: *copy-image
- run: *build-rpm
build-fedora-33:
docker:
- image: fedora:33
resource_class: medium+
steps:
- run: *install-dependencies-rpm
- checkout
- restore_cache: *restore-cache
- run: *copy-image
- run: *build-rpm
deploy-fedora:
docker:
- image: fedora:33
resource_class: medium+
steps:
- run: *install-dependencies-rpm
- checkout
- restore_cache: *restore-cache
- run: *copy-image
- run: *build-rpm
- run:
name: Install packagecloud.io
command: |
dnf module install -y ruby:2.7 # requires ruby 2.7
dnf --allowerasing -y distro-sync
dnf install -y ruby-devel
gem install package_cloud
- run:
name: Deploy fedora/33
environment:
PACKAGE_TYPE: "rpm"
PACKAGECLOUD_DISTRO: "fedora/33"
<<: *deploy-packagecloud
- run:
name: Deploy fedora/34
environment:
PACKAGE_TYPE: "rpm"
PACKAGECLOUD_DISTRO: "fedora/34"
<<: *deploy-packagecloud
- run:
name: Deploy fedora/35
environment:
PACKAGE_TYPE: "rpm"
PACKAGECLOUD_DISTRO: "fedora/35"
<<: *deploy-packagecloud
deploy-debian:
docker:
- image: debian:bullseye
resource_class: medium+
steps:
- run: *install-dependencies-deb
- checkout
- restore_cache: *restore-cache
- run: *copy-image
- run: *build-deb
- run:
name: Install packagecloud.io
command: |
apt-get install -y ruby-dev rubygems
gem install -N rake
gem install -N package_cloud
- run:
name: Deploy debian/bullseye
environment:
PACKAGE_TYPE: "deb"
PACKAGECLOUD_DISTRO: "debian/bullseye"
<<: *deploy-packagecloud
- run:
name: Deploy debian/bookworm
environment:
PACKAGE_TYPE: "deb"
PACKAGECLOUD_DISTRO: "debian/bookworm"
<<: *deploy-packagecloud
deploy-ubuntu:
docker:
- image: ubuntu:21.04
resource_class: medium+
steps:
- run: *install-dependencies-deb
- checkout
- restore_cache: *restore-cache
- run: *copy-image
- run: *build-deb
- run:
name: Install packagecloud.io
command: |
apt-get install -y ruby-dev rubygems
gem install -N rake
gem install -N package_cloud
- run:
name: Deploy ubuntu/impish
environment:
PACKAGE_TYPE: "deb"
PACKAGECLOUD_DISTRO: "ubuntu/impish"
<<: *deploy-packagecloud
- run:
name: Deploy ubuntu/hirsute
environment:
PACKAGE_TYPE: "deb"
PACKAGECLOUD_DISTRO: "ubuntu/hirsute"
<<: *deploy-packagecloud
- run:
name: Deploy ubuntu/groovy
environment:
PACKAGE_TYPE: "deb"
PACKAGECLOUD_DISTRO: "ubuntu/groovy"
<<: *deploy-packagecloud
workflows:
version: 2
build:
jobs:
- build-container-image
- convert-test-docs:
requires:
- build-container-image
- build-ubuntu-impish:
requires:
- build-container-image
- build-ubuntu-hirsute:
requires:
- build-container-image
- build-ubuntu-groovy:
requires:
- build-container-image
- build-debian-bullseye:
requires:
- build-container-image
- build-debian-bookworm:
requires:
- build-container-image
- build-fedora-35:
requires:
- build-container-image
- build-fedora-34:
requires:
- build-container-image
- build-fedora-33:
requires:
- build-container-image
build-and-deploy:
jobs:
- build-container-image:
filters:
tags:
only: /^v.*/
branches:
ignore: /.*/
- deploy-ubuntu:
requires:
- build-container-image
filters:
tags:
only: /^v.*/
branches:
ignore: /.*/
- deploy-debian:
requires:
- build-container-image
filters:
tags:
only: /^v.*/
branches:
ignore: /.*/
- deploy-fedora:
requires:
- build-container-image
filters:
tags:
only: /^v.*/
branches:
ignore: /.*/

5
.gitattributes vendored Normal file

@ -0,0 +1,5 @@
* text=auto
*.py text eol=lf
*.jpg -text
*.gif -text
*.png -text


@ -0,0 +1,67 @@
name: Bug Report (Linux)
description: File a bug report for Linux.
labels: ["bug", "triage"]
projects: ["freedomofpress/dangerzone"]
body:
- type: markdown
attributes:
value: |
Hi, and thanks for taking the time to open this bug report.
- type: textarea
id: what-happened
attributes:
label: What happened?
description: What was the expected behaviour, and what was the actual behaviour? Can you specify the steps you followed, so that we can reproduce?
placeholder: "A bug happened!"
validations:
required: true
- type: textarea
id: os-version
attributes:
label: Linux distribution
description: |
What is the name and version of your Linux distribution? You can find it out with `cat /etc/os-release`
placeholder: Ubuntu 22.04.5 LTS
validations:
required: true
- type: textarea
id: dangerzone-version
attributes:
label: Dangerzone version
description: Which version of Dangerzone are you using?
validations:
required: true
- type: textarea
id: podman-info
attributes:
label: Podman info
description: |
Please copy and paste the following commands in your terminal, and provide us with the output:
```shell
podman version
podman info -f 'json'
podman images
podman run hello-world
```
This will be automatically formatted into code, so no need for backticks.
render: shell
- type: textarea
id: logs
attributes:
label: Document conversion logs
description: |
If the bug occurs during document conversion, we'd like some logs from this process. Please copy and paste the following commands in your terminal, and provide us with the output (replace `/path/to/file` with the path to your document):
```bash
dangerzone-cli /path/to/file
```
render: shell
- type: textarea
id: additional-info
attributes:
label: Additional info
description: |
Please provide us with any additional info, such as logs, extra content, that may help us debug this issue.


@ -0,0 +1,82 @@
name: Bug Report (MacOS)
description: File a bug report for MacOS.
labels: ["bug", "triage"]
projects: ["freedomofpress/dangerzone"]
body:
- type: markdown
attributes:
value: |
Hi, and thanks for taking the time to open this bug report.
- type: textarea
id: what-happened
attributes:
label: What happened?
description: What was the expected behaviour, and what was the actual behaviour? Can you specify the steps you followed, so that we can reproduce?
placeholder: "A bug happened!"
validations:
required: true
- type: textarea
id: os-version
attributes:
label: operating system version
description: Which version of MacOS do you use? You can follow [this link](https://support.apple.com/en-us/109033) to find out more.
placeholder: macOS Sequoia 15
validations:
required: true
- type: dropdown
id: proc-architecture
attributes:
label: Processor type
description: |
Which kind of processor do you use?
You can follow [this link](https://support.apple.com/en-us/109033) to find out more.
options:
- Intel
- Apple Silicon
validations:
required: true
- type: textarea
id: dangerzone-version
attributes:
label: Dangerzone version
description: Which version of Dangerzone are you using?
validations:
required: true
- type: textarea
id: docker-info
attributes:
label: Docker info
description: |
Please copy and paste the following commands in your
terminal, and provide us with the output:
```shell
docker version
docker info -f 'json'
docker images
docker run hello-world
```
This will be automatically formatted into code, so no need for backticks.
render: shell
- type: textarea
id: logs
attributes:
label: Document conversion logs
description: |
If the bug occurs during document conversion, we'd like some logs from this process. Please copy and paste the following commands in your terminal, and provide us with the output (replace `/path/to/file` with the path to your document):
```bash
/Applications/Dangerzone.app/Contents/MacOS/dangerzone-cli /path/to/file
```
render: shell
- type: textarea
id: additional-info
attributes:
label: Additional info
description: |
Please provide us with any additional info, such as logs, extra content, that may help us debug this issue.


@ -0,0 +1,67 @@
name: Bug Report (Windows)
description: File a bug report for Windows.
labels: ["bug", "triage"]
projects: ["freedomofpress/dangerzone"]
body:
- type: markdown
attributes:
value: |
Hi, and thanks for taking the time to open this bug report.
- type: textarea
id: what-happened
attributes:
label: What happened?
description: What was the expected behaviour, and what was the actual behaviour? Can you specify the steps you followed, so that we can reproduce?
placeholder: "A bug happened!"
validations:
required: true
- type: textarea
id: os-version
attributes:
label: operating system version
description: |
Which version of Windows do you use? Follow [this link](https://learn.microsoft.com/en-us/windows/client-management/client-tools/windows-version-search) to find out.
validations:
required: true
- type: textarea
id: dangerzone-version
attributes:
label: Dangerzone version
description: Which version of Dangerzone are you using?
validations:
required: true
- type: textarea
id: docker-info
attributes:
label: Docker info
description: |
Please copy and paste the following commands in your
terminal, and provide us with the output:
```shell
docker version
docker info -f 'json'
docker images
docker run hello-world
```
This will be automatically formatted into code, so no need for backticks.
render: shell
- type: textarea
id: logs
attributes:
label: Document conversion logs
description: |
If the bug occurs during document conversion, we'd like some logs from this process. Please copy and paste the following commands in your terminal, and provide us with the output (replace `\path\to\file` with the path to your document):
```bash
'C:\Program Files (x86)\Dangerzone\dangerzone-cli.exe' \path\to\file
```
render: shell
- type: textarea
id: additional-info
attributes:
label: Additional info
description: |
Please provide us with any additional info, such as logs, extra content, that may help us debug this issue.

1
.github/ISSUE_TEMPLATE/config.yml vendored Normal file

@ -0,0 +1 @@
blank_issues_enabled: true


@ -0,0 +1,21 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: ''
---
**What is the feature you think should be a good addition to Dangerzone?**
?
**Is your feature request related to a problem? Please describe.**
It's always useful for us to know more about your context, and why you think
this would be a great addition. Don't hesitate to put some details about your
current workflow and how this could be useful to you.
**Additional context**
Add any other context or screenshots about the feature request here.

6
.github/dependabot.yml vendored Normal file

@ -0,0 +1,6 @@
version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"

248
.github/workflows/build-push-image.yml vendored Normal file

@ -0,0 +1,248 @@
name: Build and push multi-arch container image
on:
workflow_call:
inputs:
registry:
required: true
type: string
registry_user:
required: true
type: string
image_name:
required: true
type: string
reproduce:
required: true
type: boolean
secrets:
registry_token:
required: true
jobs:
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install dev. dependencies
run: |-
sudo apt-get update
sudo apt-get install -y git python3-poetry --no-install-recommends
poetry install --only package
- name: Verify that the Dockerfile matches the committed template and params
run: |-
cp Dockerfile Dockerfile.orig
make Dockerfile
diff Dockerfile.orig Dockerfile
prepare:
runs-on: ubuntu-latest
outputs:
debian_archive_date: ${{ steps.params.outputs.debian_archive_date }}
source_date_epoch: ${{ steps.params.outputs.source_date_epoch }}
image: ${{ steps.params.outputs.full_image_name }}
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Compute image parameters
id: params
run: |
source Dockerfile.env
DEBIAN_ARCHIVE_DATE=$(date -u +'%Y%m%d')
SOURCE_DATE_EPOCH=$(date -u -d ${DEBIAN_ARCHIVE_DATE} +"%s")
TAG=${DEBIAN_ARCHIVE_DATE}-$(git describe --long --first-parent | tail -c +2)
FULL_IMAGE_NAME=${{ inputs.registry }}/${{ inputs.image_name }}:${TAG}
echo "debian_archive_date=${DEBIAN_ARCHIVE_DATE}" >> $GITHUB_OUTPUT
echo "source_date_epoch=${SOURCE_DATE_EPOCH}" >> $GITHUB_OUTPUT
echo "tag=${DEBIAN_ARCHIVE_DATE}-${TAG}" >> $GITHUB_OUTPUT
echo "full_image_name=${FULL_IMAGE_NAME}" >> $GITHUB_OUTPUT
echo "buildkit_image=${BUILDKIT_IMAGE}" >> $GITHUB_OUTPUT
build:
name: Build ${{ matrix.platform.name }} image
runs-on: ${{ matrix.platform.runs-on }}
needs:
- prepare
outputs:
debian_archive_date: ${{ needs.prepare.outputs.debian_archive_date }}
source_date_epoch: ${{ needs.prepare.outputs.source_date_epoch }}
image: ${{ needs.prepare.outputs.image }}
strategy:
fail-fast: false
matrix:
platform:
- runs-on: "ubuntu-24.04"
name: "linux/amd64"
- runs-on: "ubuntu-24.04-arm"
name: "linux/arm64"
steps:
- uses: actions/checkout@v4
- name: Prepare
run: |
platform=${{ matrix.platform.name }}
echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
- name: Login to GHCR
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ inputs.registry_user }}
password: ${{ secrets.registry_token }}
# Instructions for reproducibly building a container image are taken from:
# https://github.com/freedomofpress/repro-build?tab=readme-ov-file#build-and-push-a-container-image-on-github-actions
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
driver-opts: image=${{ needs.prepare.outputs.buildkit_image }}
- name: Build and push by digest
id: build
uses: docker/build-push-action@v6
with:
context: ./dangerzone/
file: Dockerfile
build-args: |
DEBIAN_ARCHIVE_DATE=${{ needs.prepare.outputs.debian_archive_date }}
SOURCE_DATE_EPOCH=${{ needs.prepare.outputs.source_date_epoch }}
provenance: false
outputs: type=image,"name=${{ inputs.registry }}/${{ inputs.image_name }}",push-by-digest=true,push=true,rewrite-timestamp=true,name-canonical=true
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Export digest
run: |
mkdir -p ${{ runner.temp }}/digests
digest="${{ steps.build.outputs.digest }}"
touch "${{ runner.temp }}/digests/${digest#sha256:}"
echo "Image digest is: ${digest}"
- name: Upload digest
uses: actions/upload-artifact@v4
with:
name: digests-${{ env.PLATFORM_PAIR }}
path: ${{ runner.temp }}/digests/*
if-no-files-found: error
retention-days: 1
merge:
runs-on: ubuntu-latest
needs:
- build
outputs:
debian_archive_date: ${{ needs.build.outputs.debian_archive_date }}
source_date_epoch: ${{ needs.build.outputs.source_date_epoch }}
image: ${{ needs.build.outputs.image }}
digest_root: ${{ steps.image.outputs.digest_root }}
digest_amd64: ${{ steps.image.outputs.digest_amd64 }}
digest_arm64: ${{ steps.image.outputs.digest_arm64 }}
steps:
- uses: actions/checkout@v4
- name: Download digests
uses: actions/download-artifact@v4
with:
path: ${{ runner.temp }}/digests
pattern: digests-*
merge-multiple: true
- name: Login to GHCR
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ inputs.registry_user }}
password: ${{ secrets.registry_token }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
driver-opts: image=${{ env.BUILDKIT_IMAGE }}
- name: Create manifest list and push
working-directory: ${{ runner.temp }}/digests
run: |
DIGESTS=$(printf '${{ needs.build.outputs.image }}@sha256:%s ' *)
docker buildx imagetools create -t ${{ needs.build.outputs.image }} ${DIGESTS}
- name: Inspect image
id: image
run: |
# Inspect the image
docker buildx imagetools inspect ${{ needs.build.outputs.image }}
docker buildx imagetools inspect ${{ needs.build.outputs.image }} --format "{{json .Manifest}}" > manifest
# Calculate and print the digests
digest_root=$(jq -r .digest manifest)
digest_amd64=$(jq -r '.manifests[] | select(.platform.architecture=="amd64") | .digest' manifest)
digest_arm64=$(jq -r '.manifests[] | select(.platform.architecture=="arm64") | .digest' manifest)
echo "The image digests are:"
echo " Root: $digest_root"
echo " linux/amd64: $digest_amd64"
echo " linux/arm64: $digest_arm64"
# NOTE: Set the digests as an output because the `env` context is not
# available to the inputs of a reusable workflow call.
echo "digest_root=$digest_root" >> "$GITHUB_OUTPUT"
echo "digest_amd64=$digest_amd64" >> "$GITHUB_OUTPUT"
echo "digest_arm64=$digest_arm64" >> "$GITHUB_OUTPUT"
# This step calls the container workflow to generate provenance and push it to
# the container registry.
provenance:
needs:
- merge
strategy:
matrix:
manifest_type:
- root
- amd64
- arm64
permissions:
actions: read # for detecting the Github Actions environment.
id-token: write # for creating OIDC tokens for signing.
packages: write # for uploading attestations.
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_container_slsa3.yml@v2.1.0
with:
digest: ${{ needs.merge.outputs[format('digest_{0}', matrix.manifest_type)] }}
image: ${{ needs.merge.outputs.image }}
registry-username: ${{ inputs.registry_user }}
secrets:
registry-password: ${{ secrets.registry_token }}
# This step ensures that the image is reproducible
check-reproducibility:
if: ${{ inputs.reproduce }}
needs:
- merge
runs-on: ${{ matrix.platform.runs-on }}
strategy:
fail-fast: false
matrix:
platform:
- runs-on: "ubuntu-24.04"
name: "amd64"
- runs-on: "ubuntu-24.04-arm"
name: "arm64"
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Reproduce the same container image
run: |
./dev_scripts/reproduce-image.py \
--runtime \
docker \
--debian-archive-date \
${{ needs.merge.outputs.debian_archive_date }} \
--platform \
linux/${{ matrix.platform.name }} \
${{ needs.merge.outputs[format('digest_{0}', matrix.platform.name)] }}

98
.github/workflows/build.yml vendored Normal file

@ -0,0 +1,98 @@
name: Build dev environments
on:
pull_request:
push:
branches:
- main
- "test/**"
schedule:
- cron: "0 0 * * *" # Run every day at 00:00 UTC.
permissions:
packages: write
env:
IMAGE_REGISTRY: ghcr.io/${{ github.repository_owner }}
REGISTRY_USER: ${{ github.actor }}
REGISTRY_PASSWORD: ${{ github.token }}
# Each day, build and publish to ghcr.io:
#
# - the dangerzone/dangerzone container image
# - the dangerzone/build/{debian,ubuntu,fedora}:version
# dev environments used to run the tests
#
# End-user environments are not published to the GHCR because
# they need .rpm or .deb files to be built, which is what we
# want to test.
jobs:
build-dev-environment:
name: "Build dev-env (${{ matrix.distro }}-${{ matrix.version }})"
runs-on: ubuntu-latest
strategy:
matrix:
include:
- distro: ubuntu
version: "22.04"
- distro: ubuntu
version: "24.04"
- distro: ubuntu
version: "24.10"
- distro: ubuntu
version: "25.04"
- distro: debian
version: bullseye
- distro: debian
version: bookworm
- distro: debian
version: trixie
- distro: fedora
version: "40"
- distro: fedora
version: "41"
- distro: fedora
version: "42"
steps:
- name: Checkout
uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "3.10"
- name: Login to GHCR
run: |
echo ${{ github.token }} | podman login ghcr.io -u USERNAME --password-stdin
- name: Build dev environment
run: |
./dev_scripts/env.py --distro ${{ matrix.distro }} \
--version ${{ matrix.version }} \
build-dev --sync
build-container-image:
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Get current date
id: date
run: echo "date=$(date +'%Y-%m-%d')" >> $GITHUB_OUTPUT
- name: Cache container image
id: cache-container-image
uses: actions/cache@v4
with:
key: v5-${{ steps.date.outputs.date }}-${{ hashFiles('Dockerfile', 'dangerzone/conversion/*.py', 'dangerzone/container_helpers/*', 'install/common/build-image.py') }}
path: |
share/container.tar
share/image-id.txt
- name: Build Dangerzone image
if: ${{ steps.cache-container-image.outputs.cache-hit != 'true' }}
run: |
python3 ./install/common/build-image.py

30
.github/workflows/check_pr.yml vendored Normal file

@ -0,0 +1,30 @@
name: Check branch conformity
on:
pull_request:
types: ["opened", "labeled", "unlabeled", "reopened", "synchronize"]
jobs:
prevent-fixup-commits:
runs-on: ubuntu-latest
env:
target: debian-bookworm
distro: debian
version: bookworm
steps:
- name: Checkout
uses: actions/checkout@v4
- name: prevent fixup commits
run: |
git fetch origin
git status
git log --pretty=format:%s origin/main..HEAD | grep -ie '^fixup\|^wip' && exit 1 || true
check-changelog:
runs-on: ubuntu-latest
name: Ensure CHANGELOG.md is populated for user-visible changes
steps:
# Pin the GitHub action to a specific commit that we have audited and know
# how it works.
- uses: tarides/changelog-check-action@509965da3b8ac786a5e2da30c2ccf9661189121f
with:
changelog: CHANGELOG.md

109
.github/workflows/check_repos.yml vendored Normal file

@ -0,0 +1,109 @@
# Test official instructions for installing Dangerzone
# ====================================================
#
# The installation instructions have been copied from our INSTALL.md file.
# NOTE: When you change either place, please make sure to keep the two files in
# sync.
# NOTE: Because the commands run as root, the use of sudo is not necessary.
name: Test official instructions for installing Dangerzone
on:
schedule:
- cron: '0 0 * * *' # Run every day at 00:00 UTC.
workflow_dispatch:
jobs:
install-from-apt-repo:
name: "Install Dangerzone on ${{ matrix.distro}} ${{ matrix.version }}"
runs-on: ubuntu-latest
container: ${{ matrix.distro }}:${{ matrix.version }}
strategy:
matrix:
include:
- distro: ubuntu
version: "25.04" # plucky
- distro: ubuntu
version: "24.10" # oracular
- distro: ubuntu
version: "24.04" # noble
- distro: ubuntu
version: "22.04" # jammy
- distro: debian
version: "trixie" # 13
- distro: debian
version: "12" # bookworm
- distro: debian
version: "11" # bullseye
steps:
- name: Add packages.freedom.press PGP key (gpg --keyring)
if: matrix.version != 'trixie' && matrix.version != '25.04'
run: |
apt-get update && apt-get install -y gnupg2 ca-certificates
dirmngr # NOTE: This is a command that's necessary only in containers
# The key needs to be in the GPG keybox database format so the
# signing subkey is detected by apt-secure.
gpg --keyserver hkps://keys.openpgp.org \
--no-default-keyring --keyring ./fpf-apt-tools-archive-keyring.gpg \
--recv-keys "DE28 AB24 1FA4 8260 FAC9 B8BA A7C9 B385 2260 4281"
mkdir -p /etc/apt/keyrings/
mv ./fpf-apt-tools-archive-keyring.gpg /etc/apt/keyrings/.
- name: Add packages.freedom.press PGP key (sq)
if: matrix.version == 'trixie' || matrix.version == '25.04'
run: |
apt-get update && apt-get install -y ca-certificates sq
mkdir -p /etc/apt/keyrings/
# On debian trixie, apt-secure uses `sqv` to verify the signatures
# so we need to retrieve PGP keys and store them using the base64 format.
sq network keyserver \
--server hkps://keys.openpgp.org \
search "DE28 AB24 1FA4 8260 FAC9 B8BA A7C9 B385 2260 4281" \
--output - \
| sq packet dearmor \
> /etc/apt/keyrings/fpf-apt-tools-archive-keyring.gpg
- name: Add packages.freedom.press to our APT sources
run: |
. /etc/os-release
echo "deb [signed-by=/etc/apt/keyrings/fpf-apt-tools-archive-keyring.gpg] \
https://packages.freedom.press/apt-tools-prod ${VERSION_CODENAME?} main" \
| tee /etc/apt/sources.list.d/fpf-apt-tools.list
- name: Install Dangerzone
run: |
apt update
apt install -y dangerzone
install-from-yum-repo:
name: "Install Dangerzone on ${{ matrix.distro}} ${{ matrix.version }}"
runs-on: ubuntu-latest
container: ${{ matrix.distro }}:${{ matrix.version }}
strategy:
matrix:
include:
- distro: fedora
version: 40
- distro: fedora
version: 41
- distro: fedora
version: 42
steps:
- name: Add packages.freedom.press to our YUM sources
run: |
dnf install -y 'dnf-command(config-manager)'
dnf-3 config-manager --add-repo=https://packages.freedom.press/yum-tools-prod/dangerzone/dangerzone.repo
- name: Replace 'rawhide' string with Fedora version
# The previous command has created a `dangerzone.repo` file. The
# config-manager plugin should have substituted the $releasever variable
# with the Fedora version number. However, for unreleased Fedora
# versions, this gets translated to "rawhide", even though they do have
# a number. To fix this, we need to substitute the "rawhide" string
# with the proper Fedora version.
run: |
source /etc/os-release
sed -i "s/rawhide/${VERSION_ID}/g" /etc/yum.repos.d/dangerzone.repo
- name: Install Dangerzone
# FIXME: We add the `-y` flag here, in lieu of a better way to check the
# Dangerzone signature.
run: dnf install -y dangerzone

483
.github/workflows/ci.yml vendored Normal file

@ -0,0 +1,483 @@
name: Tests
on:
pull_request:
push:
branches:
- main
- "test/**"
schedule:
- cron: "2 0 * * *" # Run every day at 02:00 UTC.
workflow_dispatch:
permissions:
packages: write
env:
REGISTRY_USER: ${{ github.actor }}
REGISTRY_PASSWORD: ${{ github.token }}
IMAGE_REGISTRY: ghcr.io/${{ github.repository_owner }}
QT_SELECT: "qt6"
# Disable multiple concurrent runs on the same branch
# When a new CI build is triggered, it will cancel the
# other in-progress ones (for the same branch)
concurrency:
group: ${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
run-lint:
runs-on: ubuntu-latest
container:
image: debian:bookworm
steps:
- uses: actions/checkout@v4
- name: Install dev. dependencies
run: |-
apt-get update
apt-get install -y git make python3 python3-poetry --no-install-recommends
poetry install --only lint,test
- name: Run linters to enforce code style
run: poetry run make lint
- name: Check that the QA script is up to date with the docs
run: "./dev_scripts/qa.py --check-refs"
# This is already built daily by the "build.yml" file
# But we also want to include this in the checks that run on each push.
build-container-image:
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Get current date
id: date
run: echo "date=$(date +'%Y-%m-%d')" >> $GITHUB_OUTPUT
- name: Cache container image
id: cache-container-image
uses: actions/cache@v4
with:
key: v5-${{ steps.date.outputs.date }}-${{ hashFiles('Dockerfile', 'dangerzone/conversion/*.py', 'dangerzone/container_helpers/*', 'install/common/build-image.py') }}
path: |-
share/container.tar
share/image-id.txt
- name: Build Dangerzone container image
if: ${{ steps.cache-container-image.outputs.cache-hit != 'true' }}
run: |
python3 ./install/common/build-image.py
- name: Upload container image
uses: actions/upload-artifact@v4
with:
name: container.tar
path: share/container.tar
download-tessdata:
name: Download and cache Tesseract data
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Cache Tessdata
id: cache-tessdata
uses: actions/cache@v4
with:
path: share/tessdata/
key: v1-tessdata-${{ hashFiles('./install/common/download-tessdata.py') }}
enableCrossOsArchive: true
- uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Download Tessdata
run: |-
if [ -f "share/tessdata" ]; then
echo "Already cached, skipping"
else
python3 ./install/common/download-tessdata.py
fi
windows:
runs-on: windows-latest
needs:
- download-tessdata
env:
DUMMY_CONVERSION: 1
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "3.12"
- run: pip install poetry
- run: poetry install
- name: Restore cached tessdata
uses: actions/cache/restore@v4
with:
path: share/tessdata/
enableCrossOsArchive: true
fail-on-cache-miss: true
key: v1-tessdata-${{ hashFiles('./install/common/download-tessdata.py') }}
- name: Run CLI tests
run: poetry run make test
- name: Set up .NET CLI environment
uses: actions/setup-dotnet@v4
with:
dotnet-version: "8.x"
- name: Install WiX Toolset
run: dotnet tool install --global wix --version 5.0.2
- name: Add WiX UI extension
run: wix extension add --global WixToolset.UI.wixext/5.0.2
- name: Build the MSI installer
# NOTE: This also builds the .exe internally.
run: poetry run .\install\windows\build-app.bat
- name: Upload MSI installer
uses: actions/upload-artifact@v4
with:
name: Dangerzone.msi
path: "dist/Dangerzone.msi"
if-no-files-found: error
compression-level: 0
macOS:
name: "macOS (${{ matrix.arch }})"
runs-on: ${{ matrix.runner }}
needs:
- download-tessdata
strategy:
matrix:
include:
- runner: macos-latest # CPU type: Apple Silicon (M1)
arch: arch64
- runner: macos-13 # CPU type: Intel x86_64
arch: x86_64
env:
DUMMY_CONVERSION: 1
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "3.12"
- name: Restore cached tessdata
uses: actions/cache/restore@v4
with:
path: share/tessdata/
enableCrossOsArchive: true
fail-on-cache-miss: true
key: v1-tessdata-${{ hashFiles('./install/common/download-tessdata.py') }}
- run: pip install poetry
- run: poetry install
- name: Run CLI tests
run: poetry run make test
- name: Build macOS app
run: poetry run python ./install/macos/build-app.py
- name: Upload macOS app
uses: actions/upload-artifact@v4
with:
name: Dangerzone-${{ matrix.arch }}.app
path: "dist/Dangerzone.app"
if-no-files-found: error
compression-level: 0
build-deb:
needs:
- build-container-image
name: "build-deb (${{ matrix.distro }} ${{ matrix.version }})"
runs-on: ubuntu-latest
strategy:
matrix:
include:
- distro: ubuntu
version: "22.04"
- distro: ubuntu
version: "24.04"
- distro: ubuntu
version: "24.10"
- distro: ubuntu
version: "25.04"
- distro: debian
version: bullseye
- distro: debian
version: bookworm
- distro: debian
version: trixie
steps:
- name: Checkout
uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "3.10"
- name: Login to GHCR
run: |
echo ${{ github.token }} | podman login ghcr.io -u USERNAME --password-stdin
- name: Get the dev environment
run: |
./dev_scripts/env.py \
--distro ${{ matrix.distro }} \
--version ${{ matrix.version }} \
build-dev --sync
- name: Get current date
id: date
run: echo "date=$(date +'%Y-%m-%d')" >> $GITHUB_OUTPUT
- name: Restore container cache
uses: actions/cache/restore@v4
with:
key: v5-${{ steps.date.outputs.date }}-${{ hashFiles('Dockerfile', 'dangerzone/conversion/*.py', 'dangerzone/container_helpers/*', 'install/common/build-image.py') }}
path: |-
share/container.tar
share/image-id.txt
fail-on-cache-miss: true
- name: Build Dangerzone .deb
run: |
./dev_scripts/env.py --distro ${{ matrix.distro }} \
--version ${{ matrix.version }} \
run --dev --no-gui ./dangerzone/install/linux/build-deb.py
- name: Upload Dangerzone .deb
if: matrix.distro == 'debian' && matrix.version == 'bookworm'
uses: actions/upload-artifact@v4
with:
name: dangerzone.deb
path: "deb_dist/dangerzone_*_*.deb"
if-no-files-found: error
compression-level: 0
install-deb:
name: "install-deb (${{ matrix.distro }} ${{ matrix.version }})"
runs-on: ubuntu-latest
needs:
- build-deb
strategy:
matrix:
include:
- distro: ubuntu
version: "22.04"
- distro: ubuntu
version: "24.04"
- distro: ubuntu
version: "24.10"
- distro: ubuntu
version: "25.04"
- distro: debian
version: bullseye
- distro: debian
version: bookworm
- distro: debian
version: trixie
steps:
- name: Checkout
uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "3.10"
- name: Download Dangerzone .deb
uses: actions/download-artifact@v4
with:
name: dangerzone.deb
path: "deb_dist/"
- name: Build end-user environment
run: |
./dev_scripts/env.py --distro ${{ matrix.distro }} \
--version ${{ matrix.version }} \
build
- name: Run a test command
run: |
./dev_scripts/env.py --distro ${{ matrix.distro }} \
--version ${{ matrix.version }} \
run dangerzone-cli dangerzone/tests/test_docs/sample-pdf.pdf --ocr-lang eng
- name: Check that the Dangerzone GUI imports work
run: |
./dev_scripts/env.py --distro ${{ matrix.distro }} \
--version ${{ matrix.version }} \
run dangerzone --help
build-install-rpm:
name: "build-install-rpm (${{ matrix.distro }} ${{matrix.version}})"
runs-on: ubuntu-latest
needs:
- build-container-image
strategy:
matrix:
distro: ["fedora"]
version: ["40", "41", "42"]
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Login to GHCR
run: |
echo ${{ github.token }} | podman login ghcr.io -u USERNAME --password-stdin
- name: Get the dev environment
run: |
./dev_scripts/env.py \
--distro ${{ matrix.distro }} \
--version ${{ matrix.version }} \
build-dev --sync
- name: Get current date
id: date
run: echo "date=$(date +'%Y-%m-%d')" >> $GITHUB_OUTPUT
- name: Restore container image
uses: actions/cache/restore@v4
with:
key: v5-${{ steps.date.outputs.date }}-${{ hashFiles('Dockerfile', 'dangerzone/conversion/*.py', 'dangerzone/container_helpers/*', 'install/common/build-image.py') }}
path: |-
share/container.tar
share/image-id.txt
fail-on-cache-miss: true
- name: Build Dangerzone .rpm
run: |
./dev_scripts/env.py --distro ${{ matrix.distro }} --version ${{ matrix.version }} \
run --dev --no-gui ./dangerzone/install/linux/build-rpm.py
- name: Upload Dangerzone .rpm
uses: actions/upload-artifact@v4
with:
name: dangerzone-${{ matrix.distro }}-${{ matrix.version }}.rpm
path: "dist/dangerzone-*.x86_64.rpm"
if-no-files-found: error
compression-level: 0
# Reclaim some space in this step, now that the dev environment is no
# longer necessary. Previously, we encountered out-of-space issues while
# running this CI job.
- name: Reclaim some storage space
run: podman system reset -f
- name: Build end-user environment
run: |
./dev_scripts/env.py --distro ${{ matrix.distro }} \
--version ${{ matrix.version }} \
build
- name: Run a test command
run: |
./dev_scripts/env.py --distro ${{ matrix.distro }} --version ${{ matrix.version }} \
run dangerzone-cli dangerzone/tests/test_docs/sample-pdf.pdf --ocr-lang eng
- name: Check that the Dangerzone GUI imports work
run: |
./dev_scripts/env.py --distro ${{ matrix.distro }} --version ${{ matrix.version }} \
run dangerzone --help
run-tests:
name: "run tests (${{ matrix.distro }} ${{ matrix.version }})"
runs-on: ubuntu-latest
needs:
- build-container-image
- download-tessdata
strategy:
matrix:
include:
- distro: ubuntu
version: "22.04"
- distro: ubuntu
version: "24.04"
- distro: ubuntu
version: "24.10"
- distro: ubuntu
version: "25.04"
- distro: debian
version: bullseye
- distro: debian
version: bookworm
- distro: debian
version: trixie
- distro: fedora
version: "40"
- distro: fedora
version: "41"
- distro: fedora
version: "42"
steps:
- name: Checkout
uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "3.10"
- name: Login to GHCR
run: |
echo ${{ github.token }} | podman login ghcr.io -u USERNAME --password-stdin
- name: Get current date
id: date
run: echo "date=$(date +'%Y-%m-%d')" >> $GITHUB_OUTPUT
- name: Get the dev environment
run: |
./dev_scripts/env.py \
--distro ${{ matrix.distro }} \
--version ${{ matrix.version }} \
build-dev --sync
- name: Restore container image
uses: actions/cache/restore@v4
with:
key: v5-${{ steps.date.outputs.date }}-${{ hashFiles('Dockerfile', 'dangerzone/conversion/*.py', 'dangerzone/container_helpers/*', 'install/common/build-image.py') }}
path: |-
share/container.tar
share/image-id.txt
fail-on-cache-miss: true
- name: Restore cached tessdata
uses: actions/cache/restore@v4
with:
path: share/tessdata/
enableCrossOsArchive: true
fail-on-cache-miss: true
key: v1-tessdata-${{ hashFiles('./install/common/download-tessdata.py') }}
- name: Setup xvfb (Linux)
run: |
sudo apt update
# Stuff copied wildly from several stackoverflow posts
sudo apt-get install -y xvfb libxkbcommon-x11-0 libxcb-icccm4 libxcb-image0 libxcb-keysyms1 libxcb-randr0 libxcb-render-util0 libxcb-xinerama0 libxcb-xinput0 libxcb-xfixes0 libxcb-shape0 libglib2.0-0 libgl1-mesa-dev '^libxcb.*-dev' libx11-xcb-dev libglu1-mesa-dev libxrender-dev libxi-dev libxkbcommon-dev libxkbcommon-x11-dev
# start xvfb in the background
sudo /usr/bin/Xvfb $DISPLAY -screen 0 1280x1024x24 &
- name: Run CI tests
run: |-
# Pass the -ac Xserver flag, to disable host-based access controls.
# This should be used ONLY for testing [1]. If we don't pass this
# flag, the Podman container is not authorized [2] to access the Xvfb
# server.
#
# [1] From https://www.x.org/releases/X11R6.7.0/doc/Xserver.1.html#sect4:
#
# disables host-based access control mechanisms. Enables access by
# any host, and permits any host to modify the access control
# list. Use with extreme caution. This option exists primarily for
# running test suites remotely.
#
# [2] Fails with "Authorization required, but no authorization
# protocol specified". However, we have verified with strace(1)
# that the command in the Podman container can read the Xauthority
# file successfully.
xvfb-run -s '-ac' ./dev_scripts/env.py --distro ${{ matrix.distro }} --version ${{ matrix.version }} run --dev \
bash -c 'cd dangerzone; poetry run make test'
- name: Upload PDF diffs
uses: actions/upload-artifact@v4
with:
name: pdf-diffs-${{ matrix.distro }}-${{ matrix.version }}
path: tests/test_docs/diffs/*.jpeg
# Always run this step to publish test results, even on failures
if: ${{ always() }}

.github/workflows/close-issues.yml vendored Normal file

@@ -0,0 +1,22 @@
name: Close inactive issues
on:
schedule:
- cron: "30 1 * * *"
jobs:
close-issues:
runs-on: ubuntu-latest
permissions:
issues: write
steps:
- uses: actions/stale@v9
with:
days-before-issue-stale: 30
days-before-issue-close: 14
stale-issue-label: "stale"
stale-issue-message: "Marking this issue as stale because it has been open for 30 days with no activity. It will be closed in 14 days if there's no activity, or if the `stale` label is not removed. Does anyone want to add something?"
close-issue-message: "Closing this issue now. Don't hesitate to reopen if you have anything to add :-)"
days-before-pr-stale: -1
days-before-pr-close: -1
repo-token: ${{ secrets.GITHUB_TOKEN }}
any-of-labels: needs info


@@ -0,0 +1,22 @@
name: Release multi-arch container image
on:
workflow_dispatch:
push:
branches:
- main
- "test/**"
schedule:
- cron: "0 0 * * *" # Run every day at 00:00 UTC.
jobs:
build-push-image:
uses: ./.github/workflows/build-push-image.yml
with:
registry: ghcr.io/${{ github.repository_owner }}
registry_user: ${{ github.actor }}
image_name: dangerzone/dangerzone
reproduce: true
secrets:
registry_token: ${{ secrets.GITHUB_TOKEN }}

.github/workflows/scan.yml vendored Normal file

@@ -0,0 +1,91 @@
name: Scan latest app and container
on:
push:
branches:
- main
pull_request:
schedule:
- cron: '0 0 * * *' # Run every day at 00:00 UTC.
workflow_dispatch:
jobs:
security-scan-container:
strategy:
matrix:
runs-on:
- ubuntu-24.04
- ubuntu-24.04-arm
runs-on: ${{ matrix.runs-on }}
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Build container image
run: |
python3 ./install/common/build-image.py \
--debian-archive-date $(date "+%Y%m%d") \
--runtime docker
docker load -i share/container.tar
- name: Get image tag
id: tag
run: echo "tag=$(cat share/image-id.txt)" >> $GITHUB_OUTPUT
# NOTE: Scan first without failing, else we won't be able to read the scan
# report.
- name: Scan container image (no fail)
uses: anchore/scan-action@v6
id: scan_container
with:
image: "dangerzone.rocks/dangerzone:${{ steps.tag.outputs.tag }}"
fail-build: false
only-fixed: false
severity-cutoff: critical
- name: Upload container scan report
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: ${{ steps.scan_container.outputs.sarif }}
category: container
- name: Inspect container scan report
run: cat ${{ steps.scan_container.outputs.sarif }}
- name: Scan container image
uses: anchore/scan-action@v6
with:
image: "dangerzone.rocks/dangerzone:${{ steps.tag.outputs.tag }}"
fail-build: true
only-fixed: false
severity-cutoff: critical
security-scan-app:
strategy:
matrix:
runs-on:
- ubuntu-24.04
- ubuntu-24.04-arm
runs-on: ${{ matrix.runs-on }}
steps:
- name: Checkout
uses: actions/checkout@v4
# NOTE: Scan first without failing, else we won't be able to read the scan
# report.
- name: Scan application (no fail)
uses: anchore/scan-action@v6
id: scan_app
with:
path: "."
fail-build: false
only-fixed: false
severity-cutoff: critical
- name: Upload application scan report
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: ${{ steps.scan_app.outputs.sarif }}
category: app
- name: Inspect application scan report
run: cat ${{ steps.scan_app.outputs.sarif }}
- name: Scan application
uses: anchore/scan-action@v6
with:
path: "."
fail-build: true
only-fixed: false
severity-cutoff: critical

.github/workflows/scan_released.yml vendored Normal file

@@ -0,0 +1,99 @@
name: Scan released app and container
on:
schedule:
- cron: '0 0 * * *' # Run every day at 00:00 UTC.
workflow_dispatch:
jobs:
security-scan-container:
strategy:
matrix:
include:
- runs-on: ubuntu-24.04
arch: i686
- runs-on: ubuntu-24.04-arm
arch: arm64
runs-on: ${{ matrix.runs-on }}
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Download container image for the latest release and load it
run: |
VERSION=$(curl https://api.github.com/repos/freedomofpress/dangerzone/releases/latest | grep "tag_name" | cut -d '"' -f 4)
CONTAINER_FILENAME=container-${VERSION:1}-${{ matrix.arch }}.tar
wget https://github.com/freedomofpress/dangerzone/releases/download/${VERSION}/${CONTAINER_FILENAME} -O ${CONTAINER_FILENAME}
docker load -i ${CONTAINER_FILENAME}
- name: Get image tag
id: tag
run: |
tag=$(docker images dangerzone.rocks/dangerzone --format '{{ .Tag }}')
echo "tag=$tag" >> $GITHUB_OUTPUT
# NOTE: Scan first without failing, else we won't be able to read the scan
# report.
- name: Scan container image (no fail)
uses: anchore/scan-action@v6
id: scan_container
with:
image: "dangerzone.rocks/dangerzone:${{ steps.tag.outputs.tag }}"
fail-build: false
only-fixed: false
severity-cutoff: critical
- name: Upload container scan report
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: ${{ steps.scan_container.outputs.sarif }}
category: container-${{ matrix.arch }}
- name: Inspect container scan report
run: cat ${{ steps.scan_container.outputs.sarif }}
- name: Scan container image
uses: anchore/scan-action@v6
with:
image: "dangerzone.rocks/dangerzone:${{ steps.tag.outputs.tag }}"
fail-build: true
only-fixed: false
severity-cutoff: critical
security-scan-app:
strategy:
matrix:
runs-on:
- ubuntu-24.04
- ubuntu-24.04-arm
runs-on: ${{ matrix.runs-on }}
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Checkout the latest released tag
run: |
# Grab the latest Grype ignore list before git checkout overwrites it.
cp .grype.yaml .grype.yaml.new
VERSION=$(curl https://api.github.com/repos/freedomofpress/dangerzone/releases/latest | jq -r '.tag_name')
git checkout $VERSION
# Restore the newest Grype ignore list.
mv .grype.yaml.new .grype.yaml
# NOTE: Scan first without failing, else we won't be able to read the scan
# report.
- name: Scan application (no fail)
uses: anchore/scan-action@v6
id: scan_app
with:
path: "."
fail-build: false
only-fixed: false
severity-cutoff: critical
- name: Upload application scan report
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: ${{ steps.scan_app.outputs.sarif }}
category: app
- name: Inspect application scan report
run: cat ${{ steps.scan_app.outputs.sarif }}
- name: Scan application
uses: anchore/scan-action@v6
with:
path: "."
fail-build: true
only-fixed: false
severity-cutoff: critical

.gitignore vendored

@@ -22,6 +22,7 @@ var/
wheels/
pip-wheel-metadata/
share/python-wheels/
share/tessdata/
*.egg-info/
.installed.cfg
*.egg
@@ -127,13 +128,25 @@ dmypy.json
# Pyre type checker
.pyre/
# Debian packaging
debian/.debhelper
debian/dangerzone
debian/files
debian/debhelper-build-stamp
debian/dangerzone.*
.pybuild/
# Other
.vscode
*.tar.gz
deb_dist
.DS_Store
tests/test_docs/**/*-safe.pdf
tests/test_docs_large/
install/windows/Dangerzone.wxs
test_docs/sample-safe.pdf
share/container.tar
share/container.tar.gz
share/image-id.txt
container/container-pip-requirements.txt
.doit.db.db

.gitmodules vendored Normal file

@@ -0,0 +1,3 @@
[submodule "tests/test_docs_large"]
path = tests/test_docs_large
url = https://github.com/freedomofpress/dangerzone-test-set

.grype.yaml Normal file

@@ -0,0 +1,56 @@
# This configuration file will be used to track CVEs that we can ignore for the
# latest release of Dangerzone, and offer our analysis.
ignore:
# CVE-2023-45853
# ==============
#
# Debian tracker: https://security-tracker.debian.org/tracker/CVE-2023-45853
# Verdict: Dangerzone is not affected because the zlib library in Debian is
# built in a way that is not vulnerable.
- vulnerability: CVE-2023-45853
# CVE-2024-38428
# ==============
#
# Debian tracker: https://security-tracker.debian.org/tracker/CVE-2024-38428
# Verdict: Dangerzone is not affected because it doesn't use wget in the
# container image (which also has no network connectivity).
- vulnerability: CVE-2024-38428
# CVE-2024-57823
# ==============
#
# Debian tracker: https://security-tracker.debian.org/tracker/CVE-2024-57823
# Verdict: Dangerzone is not affected. First things first, LibreOffice is
# using this library for parsing RDF metadata in a document [1], and has
# issued a fix for the vendored raptor2 package they have for other distros
# [2].
#
# On the other hand, the Debian security team has stated that this is a minor
# issue [3], and there's no fix from the developers yet. It seems that the
# Debian package is not affected somehow by this CVE, probably due to the way
# it's packaged.
#
# [1] https://wiki.documentfoundation.org/Documentation/DevGuide/Office_Development#RDF_metadata
# [2] https://cgit.freedesktop.org/libreoffice/core/commit/?id=2b50dc0e4482ac0ad27d69147b4175e05af4fba4
# [3] From https://security-tracker.debian.org/tracker/CVE-2024-57823:
#
# [bookworm] - raptor2 <postponed> (Minor issue, revisit when fixed upstream)
#
- vulnerability: CVE-2024-57823
# CVE-2025-0665
# ==============
#
# Debian tracker: https://security-tracker.debian.org/tracker/CVE-2025-0665
# Verdict: Dangerzone is not affected because the vulnerable code is not
# present in Debian Bookworm. Also, libcurl is an HTTP client, and the
# Dangerzone container does not make any network calls.
- vulnerability: CVE-2025-0665
# CVE-2025-43859
# ==============
#
# GitHub advisory: https://github.com/advisories/GHSA-vqfr-h8mv-ghfj
# Verdict: Dangerzone is not affected because the vulnerable code is triggered
# when parsing HTTP requests, e.g., by web **servers**. Dangerzone on the
# other hand performs HTTP requests, i.e., it operates as **client**.
- vulnerability: CVE-2025-43859
- vulnerability: GHSA-vqfr-h8mv-ghfj
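For a local check of this ignore list (not something the CI requires), note that the `grype` CLI reads `.grype.yaml` from the working directory. Assuming `grype` is installed, a scan from the repository root that honors the entries above could look like this sketch, mirroring the `critical` cutoff used in our scan workflows:

```sh
# Scan the repository from its root; the CVEs ignored in .grype.yaml above are
# filtered out of the findings.
grype dir:. --fail-on critical
```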


@@ -0,0 +1 @@
https://dangerzone.rocks/assets/json/funding.json

BUILD.md

@@ -4,19 +4,93 @@
Install dependencies:
<table>
<tr>
<td>
<details>
<summary><i>:memo: Expand this section if you are on Ubuntu 22.04 (Jammy).</i></summary>
</br>
The `conmon` version that Podman uses and Ubuntu Jammy ships has a bug
that gets triggered by Dangerzone
(more details in https://github.com/freedomofpress/dangerzone/issues/685).
If you want to run Dangerzone from source, you are advised to install a
patched `conmon` version. A simple way to do so is to enable our
apt-tools-prod repo, just for the `conmon` package:
```bash
sudo cp ./dev_scripts/apt-tools-prod.sources /etc/apt/sources.list.d/
sudo cp ./dev_scripts/apt-tools-prod.pref /etc/apt/preferences.d/
```
The `conmon` package provided in the above repo was built with the
following [instructions](https://github.com/freedomofpress/maint-dangerzone-conmon/tree/ubuntu/jammy/fpf).
Alternatively, you can install a `conmon` version higher than `v2.0.25` from
any repo you prefer.
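As a quick sanity check (a suggested sequence, not part of the upstream instructions), you can confirm that the patched package is the one APT will install:

```sh
# Assumes the apt-tools-prod repo above has been enabled
sudo apt update
apt policy conmon      # the candidate version should come from the apt-tools-prod repo
sudo apt install -y conmon
conmon --version       # expect a version newer than 2.0.25
```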
</details>
</td>
</tr>
</table>
```sh
sudo apt install -y podman dh-python python3 python3-stdeb python3-pyside2.qtcore python3-pyside2.qtgui python3-pyside2.qtwidgets python3-appdirs python3-click python3-xdg python3-requests python3-colorama python3-psutil
sudo apt install -y podman dh-python build-essential make libqt6gui6 \
pipx python3 python3-dev
```
Install Poetry using `pipx` (recommended) and add it to your `$PATH`:
_(See also a list of [alternative installation
methods](https://python-poetry.org/docs/#installation))_
```sh
pipx ensurepath
pipx install poetry
pipx inject poetry poetry-plugin-export
```
After this, restart the terminal window so that the `poetry` command is in your
`$PATH`.
Clone this repository:
```
git clone https://github.com/freedomofpress/dangerzone/
```
Change to the `dangerzone` folder, and install the poetry dependencies:
> **Note**: due to an issue with [poetry](https://github.com/python-poetry/poetry/issues/1917), if it prompts for your keyring, disable the keyring with `keyring --disable` and run the command again.
```
cd dangerzone
poetry install
```
Build the latest container:
```sh
./install/linux/build-image.sh
python3 ./install/common/build-image.py
```
Download the OCR language data:
```sh
python3 ./install/common/download-tessdata.py
```
Run from source tree:
```sh
# start a shell in the virtual environment
poetry shell
# run the CLI
./dev_scripts/dangerzone-cli --help
# run the GUI
./dev_scripts/dangerzone
```
@@ -31,39 +105,228 @@ Create a .deb:
Install dependencies:
```sh
sudo dnf install -y rpm-build podman python3 python3-setuptools python3-pyside2 python3-appdirs python3-click python3-pyxdg python3-requests python3-colorama python3-psutil
sudo dnf install -y rpm-build podman python3 python3-devel python3-poetry-core \
pipx qt6-qtbase-gui
```
Install Poetry using `pipx`:
```sh
pipx install poetry
pipx inject poetry poetry-plugin-export
```
Clone this repository:
```
git clone https://github.com/freedomofpress/dangerzone/
```
Change to the `dangerzone` folder, and install the poetry dependencies:
> **Note**: due to an issue with [poetry](https://github.com/python-poetry/poetry/issues/1917), if it prompts for your keyring, disable the keyring with `keyring --disable` and run the command again.
```
cd dangerzone
poetry install
```
Build the latest container:
```sh
./install/linux/build-image.sh
python3 ./install/common/build-image.py
```
Download the OCR language data:
```sh
python3 ./install/common/download-tessdata.py
```
Run from source tree:
```sh
# start a shell in the virtual environment
poetry shell
# run the CLI
./dev_scripts/dangerzone-cli --help
# run the GUI
./dev_scripts/dangerzone
```
> [!NOTE]
> Prefer running the following command in a Fedora development environment,
> created by `./dev_scripts/env.py`.
Create a .rpm:
```sh
./install/linux/build-rpm.py
```
## macOS
## Qubes OS
Install Xcode from the App Store.
> :warning: Native Qubes support is in beta stage, so the instructions below
> require switching between qubes, and are subject to change.
>
> If you want to build Dangerzone on Qubes and use containers instead of disposable
> qubes, please follow the instructions of Fedora / Debian instead.
### Initial Setup
The following steps must be completed once. Make sure you run them in the
specified qubes.
Overview of the qubes you'll create:
| qube | type | purpose |
|--------------|----------|---------|
| dz | app qube | Dangerzone development |
| dz-dvm | app qube | offline disposable template for performing conversions |
| fedora-41-dz | template | template for the other two qubes |
#### In `dom0`:
The following instructions require typing commands in a terminal in dom0.
1. Create a new Fedora **template** (`fedora-41-dz`) for Dangerzone development:
```
qvm-clone fedora-41 fedora-41-dz
```
> :bulb: Alternatively, you can use your base Fedora 41 template in the
> following instructions. In that case, skip this step and replace
> `fedora-41-dz` with `fedora-41` in the steps below.
2. Create an offline disposable template (app qube) called `dz-dvm`, based on the `fedora-41-dz`
template. This will be the qube where the documents will be sanitized:
```
qvm-create --class AppVM --label red --template fedora-41-dz \
--prop netvm="" --prop template_for_dispvms=True \
--prop default_dispvm='' dz-dvm
```
3. Create an **app** qube (`dz`) that will be used for Dangerzone development
and initiating the sanitization process:
```
qvm-create --class AppVM --label red --template fedora-41-dz dz
qvm-volume resize dz:private $(numfmt --from=auto 20Gi)
```
> :bulb: Alternatively, you can use a different app qube for Dangerzone
> development. In that case, replace `dz` with the qube of your choice in the
> steps below.
>
> In the commands above, we also resize the private volume of the `dz` qube
> to 20GiB, since you may need some extra storage space when developing on
> Dangerzone (e.g., for container images, Tesseract data, and Python
> virtualenvs).
4. Add an RPC policy (`/etc/qubes/policy.d/50-dangerzone.policy`) that will
allow launching a disposable qube (`dz-dvm`) when Dangerzone converts a
document, with the following contents:
```
dz.Convert * @anyvm @dispvm:dz-dvm allow
dz.ConvertDev * @anyvm @dispvm:dz-dvm allow
```
#### In the `dz` app qube
In the following steps you'll set up the development environment and
install a Dangerzone build. This makes development faster, since the
server code is loaded dynamically each time it runs, instead of having
to build and install a server package each time the developer wants to
test it.
1. Clone the Dangerzone project:
```
git clone https://github.com/freedomofpress/dangerzone
cd dangerzone
```
2. Follow the Fedora instructions for setting up the development environment.
3. Build a dangerzone `.rpm` for qubes with the command
```sh
./install/linux/build-rpm.py --qubes
```
4. Copy the produced `.rpm` file into `fedora-41-dz`
```sh
qvm-copy dist/*.x86_64.rpm
```
#### In the `fedora-41-dz` template
1. Install the `.rpm` package you just copied
```sh
sudo dnf install ~/QubesIncoming/dz/*.rpm
```
2. Shutdown the `fedora-41-dz` template
### Developing Dangerzone
From here on, developing Dangerzone is similar to Fedora. The only differences
are that you need to set the environment variable `QUBES_CONVERSION=1` when
you wish to test the Qubes conversion. Run the following commands on the `dz`
development qube:
```sh
# run the CLI
QUBES_CONVERSION=1 poetry run ./dev_scripts/dangerzone-cli --help
# run the GUI
QUBES_CONVERSION=1 poetry run ./dev_scripts/dangerzone
```
And when creating a `.rpm` you'll need to enable the `--qubes` flag.
> [!NOTE]
> Prefer running the following command in a Fedora development environment,
> created by `./dev_scripts/env.py`.
```sh
./install/linux/build-rpm.py --qubes
```
For changes in the server side components, you can simply edit them locally,
and they will be mirrored to the disposable qube through the `dz.ConvertDev`
RPC call.
The only reason to build a new Qubes RPM and install it in the `fedora-41-dz`
template for development is if:
1. The project requires new server-side components.
2. The code for `qubes/dz.ConvertDev` needs to be updated.
## macOS
Install [Docker Desktop](https://www.docker.com/products/docker-desktop). Make sure to choose your correct CPU, either Intel Chip or Apple Chip.
Install Python 3.9.9 [from python.org](https://www.python.org/downloads/release/python-399/).
Install the latest version of Python 3.12 [from python.org](https://www.python.org/downloads/macos/), and make sure `/Library/Frameworks/Python.framework/Versions/3.12/bin` is in your `PATH`.
Clone this repository:
```
git clone https://github.com/freedomofpress/dangerzone/
cd dangerzone
```
Install Python dependencies:
```sh
pip3 install --user poetry
python3 -m pip install poetry
poetry install
```
@@ -76,7 +339,13 @@ brew install create-dmg
Build the dangerzone container image:
```sh
./install/macos/build-image.sh
python3 ./install/common/build-image.py
```
Download the OCR language data:
```sh
python3 ./install/common/download-tessdata.py
```
Run from source tree:
@@ -110,38 +379,73 @@ The output is in the `dist` folder.
Install [Docker Desktop](https://www.docker.com/products/docker-desktop).
Install Python 3.9.9 (x86) [from python.org](https://www.python.org/downloads/release/python-399/). When installing it, make sure to check the "Add Python 3.9 to PATH" checkbox on the first page of the installer.
Install the latest version of Python 3.12 (64-bit) [from python.org](https://www.python.org/downloads/windows/). Make sure to check the "Add Python 3.12 to PATH" checkbox on the first page of the installer.
Install Microsoft Visual C++ 14.0 or greater. Get it with ["Microsoft C++ Build Tools"](https://visualstudio.microsoft.com/visual-cpp-build-tools/) and make sure to select "Desktop development with C++" when installing.
Install [poetry](https://python-poetry.org/). Open PowerShell, and run:
```
(Invoke-WebRequest -Uri https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py -UseBasicParsing).Content | python
python -m pip install poetry
```
Install git from [here](https://git-scm.com/download/win), open a Windows terminal (`cmd.exe`) and clone this repository:
```
git clone https://github.com/freedomofpress/dangerzone/
```
Change to the `dangerzone` folder, and install the poetry dependencies:
```
cd dangerzone
poetry install
```
Build the dangerzone container image:
```sh
python .\install\windows\build-image.py
python3 .\install\common\build-image.py
```
Download the OCR language data:
```sh
python3 .\install\common\download-tessdata.py
```
After that you can launch dangerzone during development with:
```
.\dev_scripts\dangerzone.bat
# start a shell in the virtual environment
poetry shell
# run the CLI
.\dev_scripts\dangerzone-cli.bat --help
# run the GUI
.\dev_scripts\dangerzone.bat
```
### If you want to build the installer
### If you want to build the Windows installer
* Go to https://dotnet.microsoft.com/download/dotnet-framework and download and install .NET Framework 3.5 SP1 Runtime. I downloaded `dotnetfx35.exe`.
* Go to https://wixtoolset.org/releases/ and download and install WiX toolset. I downloaded `wix311.exe`.
* Add `C:\Program Files (x86)\WiX Toolset v3.11\bin` to the path.
Install [.NET SDK](https://dotnet.microsoft.com/en-us/download) version 6 or later. Then, open a terminal and install the latest version of [WiX Toolset .NET tool](https://wixtoolset.org/) **v5** with:
```sh
dotnet tool install --global wix --version 5.0.2
```
Install the WiX UI extension. You may need to open a new terminal in order to use the newly installed `wix` .NET tool:
```sh
wix extension add --global WixToolset.UI.wixext/5.0.2
```
> [!IMPORTANT]
> To avoid compatibility issues, ensure the WiX UI extension version matches the version of the WiX Toolset.
>
> Run `wix --version` to check the version of WiX Toolset you have installed and replace `5.x.y` with the full version number without the Git revision.
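For instance (an illustrative sequence; the exact version string on your machine will differ), checking the toolset version and installing the matching UI extension could look like:

```sh
wix --version
# e.g. prints 5.0.2+<git revision>  ->  use "5.0.2" below
wix extension add --global WixToolset.UI.wixext/5.0.2
```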
### If you want to sign binaries with Authenticode
@@ -155,7 +459,7 @@ Open a command prompt, cd into the dangerzone directory, and run:
poetry run python .\setup-windows.py build
```
In `build\exe.win32-3.9\` you will find `dangerzone.exe`, `dangerzone-cli.exe`, and all supporting files.
In `build\exe.win32-3.12\` you will find `dangerzone.exe`, `dangerzone-cli.exe`, and all supporting files.
### To build the installer
@@ -166,3 +470,9 @@ poetry run .\install\windows\build-app.bat
```
When you're done you will have `dist\Dangerzone.msi`.
## Updating the container image
The Dangerzone container image is reproducible. This means that every time we
build it, the result will be bit-for-bit the same, with some minor exceptions.
Read more on how you can update it in `docs/developer/reproducibility.md`.
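As an illustration only (the authoritative procedure lives in `docs/developer/reproducibility.md`), the basic idea behind "bit-for-bit the same" is that two builds with the same pinned inputs produce identical output. A sketch of such a check, reusing the build command from our CI workflows and the Debian snapshot date pinned in the Dockerfile, could look like:

```sh
# Build the container image twice with the same pinned Debian snapshot date,
# then compare the resulting tarballs; matching checksums indicate that both
# builds produced identical bits.
python3 ./install/common/build-image.py --debian-archive-date 20250331 --runtime docker
sha256sum share/container.tar > /tmp/first-build.sha256
python3 ./install/common/build-image.py --debian-archive-date 20250331 --runtime docker
sha256sum -c /tmp/first-build.sha256
```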


@@ -1,4 +1,420 @@
# Change Log
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
since 0.4.1, and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased](https://github.com/freedomofpress/dangerzone/compare/v0.9.0...HEAD)
### Changed
- Update installation instructions (and CI checks) for Debian derivatives ([#1141](https://github.com/freedomofpress/dangerzone/pull/1141))
## [0.9.0](https://github.com/freedomofpress/dangerzone/compare/v0.9.0...0.8.1)
### Added
- Platform support: Add support for Fedora 42 ([#1091](https://github.com/freedomofpress/dangerzone/issues/1091))
- Platform support: Add support for Ubuntu 25.04 (Plucky Puffin) ([#1090](https://github.com/freedomofpress/dangerzone/issues/1090))
- (experimental): It is now possible to specify a custom container runtime in
the settings, by using the `container_runtime` key. It should contain the path
to the container runtime you want to use. Please note that this doesn't mean
we support more container runtimes than Podman and Docker for the time being,
but enables you to choose which one you want to use, independently of your
platform. ([#925](https://github.com/freedomofpress/dangerzone/issues/925))
- Document Operating System support [#986](https://github.com/freedomofpress/dangerzone/issues/986)
- Tests: Look for regressions when converting PDFs [#321](https://github.com/freedomofpress/dangerzone/issues/321)
- Ensure container image reproducibility across different container runtimes and versions ([#1074](https://github.com/freedomofpress/dangerzone/issues/1074))
- Implement container image attestations ([#1035](https://github.com/freedomofpress/dangerzone/issues/1035))
- Inform user of outdated Docker Desktop Version ([#693](https://github.com/freedomofpress/dangerzone/issues/693))
- Add support for Python 3.13 ([#992](https://github.com/freedomofpress/dangerzone/issues/992))
- Publish the built artifacts in our CI pipelines ([#972](https://github.com/freedomofpress/dangerzone/pull/972))
### Fixed
- Fix our Debian Trixie installation instructions using Sequoia PGP ([#1052](https://github.com/freedomofpress/dangerzone/issues/1052))
- Fix the way multiprocessing works on macOS ([#873](https://github.com/freedomofpress/dangerzone/issues/873))
- Update minimum Docker Desktop version to fix an stdout truncation issue ([#1101](https://github.com/freedomofpress/dangerzone/issues/1101))
### Removed
- Platform support: Drop support for Ubuntu Focal, since it's nearing end-of-life ([#1018](https://github.com/freedomofpress/dangerzone/issues/1018))
- Platform support: Drop support for Fedora 39 ([#999](https://github.com/freedomofpress/dangerzone/issues/999))
### Changed
- Switch base image to Debian Stable ([#1046](https://github.com/freedomofpress/dangerzone/issues/1046))
- Track image tags instead of image IDs in `image-id.txt` ([#1020](https://github.com/freedomofpress/dangerzone/issues/1020))
- Migrate to Wix 4 (windows building tool) ([#602](https://github.com/freedomofpress/dangerzone/issues/602)).
Thanks [@jkarasti](https://github.com/jkarasti) for the contribution.
- Add a `--debug` flag to the CLI to help retrieve more logs ([#941](https://github.com/freedomofpress/dangerzone/pull/941))
- The `debian` base image is now fetched by digest. As a result, your local
container storage will no longer show a tag for this dependency
([#1116](https://github.com/freedomofpress/dangerzone/pull/1116)).
Thanks [@sudoforge](https://github.com/sudoforge) for the contribution.
- The `debian` base image is now referenced with a fully qualified URI,
including the registry hostname ([#1118](https://github.com/freedomofpress/dangerzone/pull/1118)).
Thanks [@sudoforge](https://github.com/sudoforge) for the contribution.
- Update the Dangerzone container image and its dependencies (gVisor, Debian base image, H2Orestart) to the latest versions:
* Debian image release: `bookworm-20250317-slim@sha256:1209d8fd77def86ceb6663deef7956481cc6c14a25e1e64daec12c0ceffcc19d`
* Debian snapshots date: `2025-03-31`
* gVisor release date: `2025-03-26`
* H2Orestart plugin: `v0.7.2` (`d09bc5c93fe2483a7e4a57985d2a8d0e4efae2efb04375fe4b59a68afd7241e2`)
### Development changes
- Make container image scanning work for Silicon macOS ([#1008](https://github.com/freedomofpress/dangerzone/issues/1008))
- Automate the main bulk of our release tasks ([#1016](https://github.com/freedomofpress/dangerzone/issues/1016))
- CI: Enforce updating the CHANGELOG in the CI ([#1108](https://github.com/freedomofpress/dangerzone/pull/1108))
- Add reference to funding.json (required by floss.fund application) ([#1092](https://github.com/freedomofpress/dangerzone/pull/1092))
- Lint: add ruff for linting and formatting ([#1029](https://github.com/freedomofpress/dangerzone/pull/1029)).
Thanks [@jkarasti](https://github.com/jkarasti) for the contribution.
- Work around a `cx_freeze` build issue ([#974](https://github.com/freedomofpress/dangerzone/issues/974))
- tests: mark the hancom office suite tests for rerun on failures ([#991](https://github.com/freedomofpress/dangerzone/pull/991))
- Update reference template for Qubes to Fedora 41 ([#1078](https://github.com/freedomofpress/dangerzone/issues/1078))
## [0.8.1](https://github.com/freedomofpress/dangerzone/compare/v0.8.1...0.8.0)
- Update the container image
### Added
- Disable gVisor's DirectFS feature ([#226](https://github.com/freedomofpress/dangerzone/issues/226)).
Thanks [EtiennePerot](https://github.com/EtiennePerot) for the contribution.
### Removed
- Platform support: Drop support for Fedora 39, since it's end-of-life ([#999](https://github.com/freedomofpress/dangerzone/pull/999))
### Updated
- Bump `slsa-framework/slsa-github-generator` from 2.0.0 to 2.1.0 ([#1109](https://github.com/freedomofpress/dangerzone/pull/1109))
### Development changes
Thanks [@jkarasti](https://github.com/jkarasti) for the contribution.
- Automate a large portion of our release tasks with `doit` ([#1016](https://github.com/freedomofpress/dangerzone/issues/1016))
## [0.8.0](https://github.com/freedomofpress/dangerzone/compare/v0.8.0...0.7.1)
### Added
- Point to the installation instructions that the Tails team maintains for Dangerzone ([announcement](https://tails.net/news/dangerzone/index.en.html))
- Installation and execution errors are now caught and displayed in the interface ([#193](https://github.com/freedomofpress/dangerzone/issues/193))
- Prevent users from using illegal characters in output filename ([#362](https://github.com/freedomofpress/dangerzone/issues/362)). Thanks [@bnewc](https://github.com/bnewc) for the contribution!
- Add support for Fedora 41 ([#947](https://github.com/freedomofpress/dangerzone/issues/947))
- Add support for Ubuntu Oracular (24.10) ([#954](https://github.com/freedomofpress/dangerzone/pull/954))
### Fixed
- Update our macOS entitlements, removing now unneeded privileges ([#638](https://github.com/freedomofpress/dangerzone/issues/638))
- Make Dangerzone work on Linux systems with SELinux in enforcing mode ([#880](https://github.com/freedomofpress/dangerzone/issues/880))
- Process documents with embedded multimedia files without crashing ([#877](https://github.com/freedomofpress/dangerzone/issues/877))
- Search for applications that can read PDF files in a more reliable way on Linux ([#899](https://github.com/freedomofpress/dangerzone/issues/899))
- Handle and report some stray conversion errors ([#776](https://github.com/freedomofpress/dangerzone/issues/776)). Thanks [@amnak613](https://github.com/amnak613) for the contribution!
- Replace occurrences of the word "Docker" in Podman-related error messages in Linux ([#212](https://github.com/freedomofpress/dangerzone/issues/212))
### Changed
- The second phase of the conversion (pixels to PDF) now happens on the host. Instead of first grabbing all of the pixel data from the first container, storing them on disk, and then reconstructing the PDF on a second container, Dangerzone now immediately reconstructs the PDF **on the host**, while the doc to pixels conversion is still running on the first container. The sanitation is no less safe, since the boundaries between the sandbox and the host are still respected ([#625](https://github.com/freedomofpress/dangerzone/issues/625))
- PyMuPDF is now vendorized for Debian packages. This is done because the PyMuPDF package from the Debian repos lacks OCR support ([#940](https://github.com/freedomofpress/dangerzone/pull/940))
- Always use our own seccomp policy as a default ([#908](https://github.com/freedomofpress/dangerzone/issues/908))
- Debian packages are now amd64 only, which removes some warnings in Linux distros with 32-bit repos enabled ([#394](https://github.com/freedomofpress/dangerzone/issues/394))
- Allow choosing installation directory on Windows platforms ([#148](https://github.com/freedomofpress/dangerzone/issues/148)). Thanks [@jkarasti](https://github.com/jkarasti) for the contribution!
- Bumped H2ORestart LibreOffice extension to version 0.6.6 ([#943](https://github.com/freedomofpress/dangerzone/issues/943))
- Platform support: Ubuntu Focal (20.04) is now deprecated, and support will be dropped with the next release ([#965](https://github.com/freedomofpress/dangerzone/issues/965))
### Removed
- Platform support: Drop Ubuntu Mantic (23.10), since it's end-of-life ([#977](https://github.com/freedomofpress/dangerzone/pull/977))
### Development changes
- Build Debian packages with pybuild ([#773](https://github.com/freedomofpress/dangerzone/issues/773))
- Test Dangerzone on Intel macOS machines as well ([#932](https://github.com/freedomofpress/dangerzone/issues/932))
- Switch from CircleCI runners to Github actions ([#674](https://github.com/freedomofpress/dangerzone/issues/674))
- Sign Windows executables and installer with SHA256 rather than SHA1 ([#931](https://github.com/freedomofpress/dangerzone/pull/931)). Thanks [@jkarasti](https://github.com/jkarasti) for the contribution!
## [0.7.1](https://github.com/freedomofpress/dangerzone/compare/v0.7.1...v0.7.0)
### Fixed
- Fix an `image-id.txt` mismatch happening on Docker Desktop >= 4.30.0 ([#933](https://github.com/freedomofpress/dangerzone/issues/933))
## [0.7.0](https://github.com/freedomofpress/dangerzone/compare/v0.7.0...v0.6.1)
### Added
- Integrate Dangerzone with gVisor, a memory-safe application kernel, thanks to [@EtiennePerot](https://github.com/EtiennePerot) ([#126](https://github.com/freedomofpress/dangerzone/issues/126)).
As a result of this integration, we have also improved Dangerzone's security in the following ways:
* Prevent attacker from becoming root within the container ([#224](https://github.com/freedomofpress/dangerzone/issues/224))
* Use a restricted seccomp profile ([#225](https://github.com/freedomofpress/dangerzone/issues/225))
* Make use of user namespaces ([#228](https://github.com/freedomofpress/dangerzone/issues/228))
- Files can now be drag-n-dropped to Dangerzone ([issue #409](https://github.com/freedomofpress/dangerzone/issues/409))
### Fixed
- Fix a deprecation warning in PySide6, thanks to [@naglis](https://github.com/naglis) ([issue #595](https://github.com/freedomofpress/dangerzone/issues/595))
- Make update notifications work in systems with PySide2, thanks to [@naglis](https://github.com/naglis) ([issue #788](https://github.com/freedomofpress/dangerzone/issues/788))
- Updated the Dangerzone container image to use Alpine Linux 3.20 ([#812](https://github.com/freedomofpress/dangerzone/pull/812))
- Fix wrong file permissions in Fedora packages ([issue #727](https://github.com/freedomofpress/dangerzone/pull/727))
- Quote commands in installation instructions, making it compatible with `zsh` based shells. (issue [#805](https://github.com/freedomofpress/dangerzone/issues/805))
- Order the list of PDF viewers and return the default application first on Linux, thanks to [@rocodes](https://github.com/rocodes) (issue [#814](https://github.com/freedomofpress/dangerzone/pull/814))
### Removed
- Platform support: Drop Fedora 38, since it's end-of-life ([issue #840](https://github.com/freedomofpress/dangerzone/pull/840))
### Development changes
- Bumped the minimum python version to 3.9, due to Pyside6 dropping support for python 3.8 ([#780](https://github.com/freedomofpress/dangerzone/pull/780))
- Minor amendments to the codebase (in [#811](https://github.com/freedomofpress/dangerzone/pull/811))
- Use the original line ending (usually `LF`) for all content except images ([#838](https://github.com/freedomofpress/dangerzone/pull/838))
- Explained how to create, sign, and verify source tarballs ([#823](https://github.com/freedomofpress/dangerzone/pull/823))
- Added a design doc for the update notifications
- Added a design doc for the gVisor integration ([#815](https://github.com/freedomofpress/dangerzone/pull/815))
- Removed the python shebang from some files
## Dangerzone 0.6.1
### Added
- Platform support: Ubuntu 24.04 and Fedora 40 ([issue #762](https://github.com/freedomofpress/dangerzone/issues/762))
### Fixed
- Handle timeout errors (`"Timeout after 3 seconds"`) more gracefully ([issue #749](https://github.com/freedomofpress/dangerzone/issues/749))
- Make Dangerzone work in macOS versions prior to Ventura (13), thanks to [@maltfield](https://github.com/maltfield) ([issue #471](https://github.com/freedomofpress/dangerzone/issues/471))
- Make OCR work again in Qubes Fedora 38 templates ([issue #737](https://github.com/freedomofpress/dangerzone/issues/737))
- Make .svg / .bmp files selectable when browsing files via the Dangerzone GUI ([#722](https://github.com/freedomofpress/dangerzone/pull/722))
- Linux: Show the proper application name and icon for Dangerzone, in the user's window manager, thanks to [@naglis](https://github.com/naglis) ([issue #402](https://github.com/freedomofpress/dangerzone/issues/402))
- Linux: Allow opening multiple files at once, when selecting them from the user's file manager, thanks to [@naglis](https://github.com/naglis) ([issue #797](https://github.com/freedomofpress/dangerzone/issues/797))
- Linux: Do not include Dangerzone in the list of available PDF viewers, thanks to [@naglis](https://github.com/naglis) ([issue #790](https://github.com/freedomofpress/dangerzone/issues/790))
- Linux: Handle filenames with invalid Unicode characters in the Dangerzone CLI, thanks to [@naglis](https://github.com/naglis) ([issue #768](https://github.com/freedomofpress/dangerzone/issues/768))
### Changed
- Sign our release assets with the Dangerzone signing key, and provide
instructions to end-users ([issue #761](https://github.com/freedomofpress/dangerzone/issues/761))
- Use the newest reimplementation of the PyMuPDF rendering engine (`fitz`) ([issue #700](https://github.com/freedomofpress/dangerzone/issues/700))
- Development: Build Dangerzone using the latest Wix 3.14 release ([#746](https://github.com/freedomofpress/dangerzone/pull/746))
## Dangerzone 0.6.0
### Added
- Platform support: Fedora 39 ([issue #606](https://github.com/freedomofpress/dangerzone/issues/606))
- Add new file formats: epub svg and several image formats (BMP, PNM, BPM, PPM) ([issue #697](https://github.com/freedomofpress/dangerzone/issues/697))
### Fixed
- Fix mismatch between the original document and the converted one ([issue #626](https://github.com/freedomofpress/dangerzone/issues/)). This does not affect the quality of the final document.
- Capitalize "dangerzone" on the application as well as on the Linux desktop shortcut, thanks to [@sudwhiwdh](https://github.com/sudwhiwdh) [#676](https://github.com/freedomofpress/dangerzone/pull/676)
- Fedora (Linux): Add missing Dangerzone logo on application launcher ([issue #645](https://github.com/freedomofpress/dangerzone/issues/645))
- Prevent document conversion from failing due to lack of space in the converter. This affected mainly systems with low computing resources such as Qubes OS ([issue #574](https://github.com/freedomofpress/dangerzone/issues/574))
- Add a missing dependency to our Apple Silicon container image, which affected dev environments only, thanks to [@prateekj117](https://github.com/prateekj117) ([#671](https://github.com/freedomofpress/dangerzone/pull/671))
- Development: Add missing check when building container image, thanks to [@EtiennePerot](https://github.com/EtiennePerot) ([#721](https://github.com/freedomofpress/dangerzone/pull/721))
### Changed
- Feature: Add support for HWP/HWPX files (Hancom Office) for macOS Apple Silicon devices ([issue #498](https://github.com/freedomofpress/dangerzone/issues/498), thanks to [@OctopusET](https://github.com/OctopusET))
- Replace Dangerzone document rendering engine from pdftoppm to PyMuPDF, essentially replacing a variety of tools (gm / tesseract / pdfunite / ps2pdf) ([issue #658](https://github.com/freedomofpress/dangerzone/issues/658))
- Changed project license from MIT to AGPLv3 (related to [issue #658](https://github.com/freedomofpress/dangerzone/issues/658))
- Containers: stream pages instead of mounting directories. In practice this doesn't change much for users, but it opens up technical possibilities ranging from security to usability. ([issue #443](https://github.com/freedomofpress/dangerzone/issues/443))
- Ubuntu Jammy (Linux): add an external dependency (provided by the Dangerzone repository) which fixes podman crashing during standard stream I/O ([issue #685](https://github.com/freedomofpress/dangerzone/issues/685))
### Removed
- Removed timeouts ([issue #687](https://github.com/freedomofpress/dangerzone/issues/687))
- Platform support: Drop Ubuntu 23.04 (Lunar Lobster), since it's end-of-life ([issue #705](https://github.com/freedomofpress/dangerzone/issues/705))
## Dangerzone 0.5.1
### Fixed
- Our Qubes RPM package was missing critical dependencies for the conversion of a document from pixels to PDF ([issue #647](https://github.com/freedomofpress/dangerzone/issues/647))
### Changed
- Use more descriptive button labels in update check prompt ([issue #527](https://github.com/freedomofpress/dangerzone/issues/527), thanks to [@garrettr](https://github.com/garrettr))
### Removed
- Platform support: Drop Fedora 37, since it reached end-of-life ([issue #637](https://github.com/freedomofpress/dangerzone/issues/637))
### Security
- [Security advisory 2023-12-07](https://github.com/freedomofpress/dangerzone/blob/main/docs/advisories/2023-12-07.md): Protect our container image against
CVE-2023-43115, by updating GhostScript to version 10.02.0.
- [Security advisory 2023-10-25](https://github.com/freedomofpress/dangerzone/blob/main/docs/advisories/2023-10-25.md): prevent dz-dvm network via dispVMs. This was
officially communicated on the advisory date and is only included here since
this is the first release since it was announced.
## Dangerzone 0.5.0
### Added
- Platform support: Beta integration with Qubes OS ([issue #412](https://github.com/freedomofpress/dangerzone/issues/412))
- Platform support: Ubuntu 23.10 (Mantic Minotaur) ([issue #601](https://github.com/freedomofpress/dangerzone/issues/601))
- Add client-side timeouts in Qubes ([issue #446](https://github.com/freedomofpress/dangerzone/issues/446))
- Add installation instructions for Qubes ([issue #431](https://github.com/freedomofpress/dangerzone/issues/431))
- Development: Add tests that run Dangerzone against a pool of roughly 11K documents ([PR #386](https://github.com/freedomofpress/dangerzone/pull/386))
- Development: Grab the output of commands when in development mode ([issue #319](https://github.com/freedomofpress/dangerzone/issues/319))
### Fixed
- Fix a bug that was introduced in version 0.4.1 and could potentially lead to
excluding the last page of the sanitized document ([issue #560](https://github.com/freedomofpress/dangerzone/issues/560))
- Fix the parsing of a document's page count ([issue #565](https://github.com/freedomofpress/dangerzone/issues/565))
- Platform support: Fix broken Dangerzone upgrades in Fedora ([issue #514](https://github.com/freedomofpress/dangerzone/issues/514))
- Make progress reports in Qubes real-time ([issue #557](https://github.com/freedomofpress/dangerzone/issues/557))
- Improve the handling of various runtime errors in Qubes ([issue #430](https://github.com/freedomofpress/dangerzone/issues/430))
- Pass OCR parameters properly in Qubes ([issue #455](https://github.com/freedomofpress/dangerzone/issues/455))
- Fix dark mode support ([issue #550](https://github.com/freedomofpress/dangerzone/issues/550),
thanks to [@garrettr](https://github.com/garrettr))
- MacOS/Windows: Sync "Check for updates" checkbox with the user's choice ([issue #513](https://github.com/freedomofpress/dangerzone/issues/513))
- Fix issue where changing document selection to a file from a different directory would lead to an error ([issue #581](https://github.com/freedomofpress/dangerzone/issues/581))
- Qubes: in the cli version "Safe PDF created" would be shown twice ([issue #555](https://github.com/freedomofpress/dangerzone/issues/555))
- Qubes: in the cli version the percentage is now rounded to the unit ([issue #553](https://github.com/freedomofpress/dangerzone/issues/553))
- Qubes: clean up temporary files ([issue #575](https://github.com/freedomofpress/dangerzone/issues/575))
- Qubes: do not open document if the conversion failed ([issue #581](https://github.com/freedomofpress/dangerzone/issues/581))
- Development: Switch from the deprecated `bdist_rpm` toolchain to the more
modern RPM SPEC files, when building Fedora packages ([issue #298](https://github.com/freedomofpress/dangerzone/issues/298))
- Development: Make our dev scripts properly invoke Docker in MacOS / windows
([issue #519](https://github.com/freedomofpress/dangerzone/issues/519))
### Changed
- Shave off ~300MiB from our container image, using the fast variant of the
Tesseract OCR language models ([issue #545](https://github.com/freedomofpress/dangerzone/issues/545))
- When a user is asked to enable updates, make "Yes" the default option ([issue #507](https://github.com/freedomofpress/dangerzone/issues/507))
- Use Fedora 38 as a template in our Qubes build instructions ([PR #533](https://github.com/freedomofpress/dangerzone/issues/533))
- Improve the installation docs for newcomers ([issue #475](https://github.com/freedomofpress/dangerzone/issues/475))
- Development: Explain how to get the application password from the MacOS keychain ([issue #522](https://github.com/freedomofpress/dangerzone/issues/522))
### Removed
- Remove the `dangerzone-container` executable, since it was not used in practice by any user ([PR #538](https://github.com/freedomofpress/dangerzone/issues/538))
### Security
- Do not allow attackers to show error or log messages to Qubes users ([issue #456](https://github.com/freedomofpress/dangerzone/issues/456))
## Dangerzone 0.4.2
### Added
- Inform about new updates on MacOS/Windows platforms, by periodically checking
our GitHub releases page ([issue #189](https://github.com/freedomofpress/dangerzone/issues/189))
- Feature: Add support for HWP/HWPX files (Hancom Office) ([issue #243](https://github.com/freedomofpress/dangerzone/issues/243), thanks to [@OctopusET](https://github.com/OctopusET))
* **NOTE:** This feature is not yet supported on MacOS with Apple Silicon CPU
or Qubes OS ([issue #494](https://github.com/freedomofpress/dangerzone/issues/494),
[issue #498](https://github.com/freedomofpress/dangerzone/issues/498))
- Allow users to change their document selection from the UI ([issue #428](https://github.com/freedomofpress/dangerzone/issues/428))
- Add a note in our README for MacOS 11+ users blocked by SIP ([PR #401](https://github.com/freedomofpress/dangerzone/pull/401), thanks to [@keywordnew](https://github.com/keywordnew))
- Platform support: Alpha integration with Qubes OS ([issue #411](https://github.com/freedomofpress/dangerzone/issues/411))
- Platform support: Debian Trixie (13) ([issue #452](https://github.com/freedomofpress/dangerzone/issues/452))
- Platform support: Ubuntu 23.04 (Lunar Lobster) ([issue #453](https://github.com/freedomofpress/dangerzone/issues/453))
- Development: Use Qt6 in our CI runners and dev environments ([issue #482](https://github.com/freedomofpress/dangerzone/issues/482))
### Removed
- Platform support: Drop Fedora 36, since it's end-of-life ([issue #420](https://github.com/freedomofpress/dangerzone/issues/420))
- Platform support: Drop Ubuntu 22.10 (Kinetic Kudu), since it's end-of-life ([issue #485](https://github.com/freedomofpress/dangerzone/issues/485))
### Fixed
- Add missing language detection (OCR) models ([issue #357](https://github.com/freedomofpress/dangerzone/issues/357))
- Replace deprecated `pipes` module with `shlex` ([issue #373](https://github.com/freedomofpress/dangerzone/issues/373), thanks to [@OctopusET](https://github.com/OctopusET))
- Shrink container image with `--no-cache` option on `apk` ([issue #459](https://github.com/freedomofpress/dangerzone/issues/459), thanks to [@OctopusET](https://github.com/OctopusET))
### Security
- Continuously scan our Python dependencies and container image for
vulnerabilities ([issue #222](https://github.com/freedomofpress/dangerzone/issues/222))
- Sanitize potentially unsafe characters from strings that are shown in the
GUI/terminal ([PR #491](https://github.com/freedomofpress/dangerzone/pull/491))
## Dangerzone 0.4.1
### Added
- Feature: Add version info in the CLI and GUI ([issue #219](https://github.com/freedomofpress/dangerzone/issues/219))
- Development: Improve CI stability and coverage
([issue #292](https://github.com/freedomofpress/dangerzone/issues/292),
[issue #217](https://github.com/freedomofpress/dangerzone/issues/217),
[issue #229](https://github.com/freedomofpress/dangerzone/issues/229))
- Development: Provide dev scripts for testing Dangerzone in a container and
running our QA pipeline
([issue #286](https://github.com/freedomofpress/dangerzone/issues/286),
[issue #287](https://github.com/freedomofpress/dangerzone/issues/287))
- Development: Support Dangerzone development on Fedora 37
([issue #294](https://github.com/freedomofpress/dangerzone/issues/294))
- Development: Allow running Mypy on MacOS M1 machines ([issue #177](https://github.com/freedomofpress/dangerzone/issues/177))
- Development: Add dummy isolation provider for testing non-conversion-related
issues in virtualized Windows and MacOS, where Docker can't run, due to the
lack of nested virtualization ([issue #229](https://github.com/freedomofpress/dangerzone/issues/229))
- Add support for more MIME types that were previously disregarded ([issue #369](https://github.com/freedomofpress/dangerzone/issues/369))
- Platform support: Add support for Fedora 38
### Changed
- Full release under Freedom of the Press Foundation: signing keys have changed from the original developer Micah Lee / First Look Media to FPF's signing keys. Linux packages moved from Packagecloud to FPF's servers
- [Installation instructions updated](https://github.com/freedomofpress/dangerzone/blob/v0.4.1/INSTALL.md) to reflect change in key ownership to FPF
- Platform support: MacOS (Apple Silicon) native application with significant
performance boost ([issue #50](https://github.com/freedomofpress/dangerzone/issues/50))
- Feature: Introduce PySide6 / Qt6 support on Windows, MacOS, and Linux (dev-only) ([issue #219](https://github.com/freedomofpress/dangerzone/issues/219))
- Feature: Adjust conversion timeouts based on the document's pages/size, and
allow users to disable them with `--disable-timeouts` (available when you run
the Dangerzone from the terminal) ([issue #327](https://github.com/freedomofpress/dangerzone/issues/327))
- Development: Update Linux instructions for development on Qubes
### Removed
- Platform support: Drop Fedora 35, since it's end-of-life ([issue #308](https://github.com/freedomofpress/dangerzone/issues/308))
- Bug fix: Remove unused PDFtk and sudo libraries from the container image, to
lower its attack surface and reduce its size ([issue #232](https://github.com/freedomofpress/dangerzone/issues/232))
### Fixed
- Feature: Convert documents with non-standard permissions or SELinux labels ([issue #335](https://github.com/freedomofpress/dangerzone/issues/335))
- Bug fix: Report exceptions during conversions ([issue #309](https://github.com/freedomofpress/dangerzone/issues/309))
- Bug fix: (Windows) Fix Dangerzone description on "Open With" ([issue #283](https://github.com/freedomofpress/dangerzone/issues/283))
- Bug fix: Remove document conversion artifacts when conversion fails and store
them on volatile memory instead of on a disk directory
([issue #317](https://github.com/freedomofpress/dangerzone/issues/317))
### Security
- Bug fix: Do not print debug logs in end-user executables ([issue #316](https://github.com/freedomofpress/dangerzone/issues/316))
## Dangerzone 0.4.0
- Platform support: Re-add Fedora 37 support
- Platform support: Add Debian Bookworm (12) support ([issue #172](https://github.com/freedomofpress/dangerzone/issues/172))
- Platform support: Reinstate Ubuntu Focal support ([issue #206](https://github.com/freedomofpress/dangerzone/issues/206))
- Platform support: Add Ubuntu 22.10 "Kinetic Kudu" support ([issue #265](https://github.com/freedomofpress/dangerzone/issues/265))
- Feature: Support bulk conversion to safe PDFs ([issue #77](https://github.com/freedomofpress/dangerzone/issues/77))
- Feature: Option to archive unsafe directories ([issue #255](https://github.com/freedomofpress/dangerzone/pull/255))
- Feature: Support python 3.10
- Feature: When quitting while still converting, confirm if user is sure
- Bug fix: Fix unit tests on Windows
- Bug fix: Do not hardcode "docker" in help messages, now that Podman is also used ([issue #122](https://github.com/freedomofpress/dangerzone/issues/122))
- Bug fix: Failed execution no longer produces empty "safe" documents ([issue #214](https://github.com/freedomofpress/dangerzone/issues/214))
- Bug fix: Malfunctioning "New window" logic was replaced with multi-doc support ([issue #204](https://github.com/freedomofpress/dangerzone/issues/204))
- Bug fix: re-adds support for 'open with Dangerzone' from finder on macOS ([issue #268](https://github.com/freedomofpress/dangerzone/issues/268))
- Bug fix: (macOS) quit Dangerzone when main window is closed ([issue #271](https://github.com/freedomofpress/dangerzone/issues/271))
## Dangerzone 0.3.2
- Bug fix: some non-ascii characters like “ would prevent Dangerzone from working ([issue #144](https://github.com/freedomofpress/dangerzone/issues/144))
- Bug fix: error where Dangerzone would show "permission denied: '/tmp/input_file'" ([issue #157](https://github.com/freedomofpress/dangerzone/issues/157))
- Bug fix: remove containers after use, enabling Dangerzone to run after 1000+ converted docs ([issue #197](https://github.com/freedomofpress/dangerzone/pull/197))
- Security: limit container capabilities, run in container as non-root and limit privilege escalation ([issue #169](https://github.com/freedomofpress/dangerzone/issues/169))
## Dangerzone 0.3.1
@@ -58,4 +474,4 @@
## Dangerzone 0.1
- First release

Dockerfile Normal file

@@ -0,0 +1,228 @@
# NOTE: Updating the packages to their latest versions requires bumping the
# Dockerfile args below. For more info about this file, read
# docs/developer/reproducibility.md.
ARG DEBIAN_IMAGE_DIGEST=sha256:1209d8fd77def86ceb6663deef7956481cc6c14a25e1e64daec12c0ceffcc19d
FROM docker.io/library/debian@${DEBIAN_IMAGE_DIGEST} AS dangerzone-image
ARG GVISOR_ARCHIVE_DATE=20250326
ARG DEBIAN_ARCHIVE_DATE=20250331
ARG H2ORESTART_CHECKSUM=935e68671bde4ca63a364128077f1c733349bbcc90b7e6973bc7a2306494ec54
ARG H2ORESTART_VERSION=v0.7.2
ENV DEBIAN_FRONTEND=noninteractive
# The following way of installing packages is taken from
# https://github.com/reproducible-containers/repro-sources-list.sh/blob/master/Dockerfile.debian-12,
# and adapted to allow installing gVisor from its own repo as well.
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
--mount=type=bind,source=./container_helpers/repro-sources-list.sh,target=/usr/local/bin/repro-sources-list.sh \
--mount=type=bind,source=./container_helpers/gvisor.key,target=/tmp/gvisor.key \
: "Hacky way to set a date for the Debian snapshot repos" && \
touch -d ${DEBIAN_ARCHIVE_DATE}Z /etc/apt/sources.list.d/debian.sources && \
touch -d ${DEBIAN_ARCHIVE_DATE}Z /etc/apt/sources.list && \
repro-sources-list.sh && \
: "Setup APT to install gVisor from its separate APT repo" && \
apt-get update && \
apt-get upgrade -y && \
apt-get install -y --no-install-recommends apt-transport-https ca-certificates gnupg && \
gpg -o /usr/share/keyrings/gvisor-archive-keyring.gpg --dearmor /tmp/gvisor.key && \
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/gvisor-archive-keyring.gpg] https://storage.googleapis.com/gvisor/releases ${GVISOR_ARCHIVE_DATE} main" > /etc/apt/sources.list.d/gvisor.list && \
: "Install the necessary gVisor and Dangerzone dependencies" && \
apt-get update && \
apt-get install -y --no-install-recommends \
python3 python3-fitz libreoffice-nogui libreoffice-java-common \
python3 python3-magic default-jre-headless fonts-noto-cjk fonts-dejavu \
runsc unzip wget && \
: "Clean up for improving reproducibility (optional)" && \
rm -rf /var/cache/fontconfig/ && \
rm -rf /etc/ssl/certs/java/cacerts && \
rm -rf /var/log/* /var/cache/ldconfig/aux-cache
# Download H2ORestart from GitHub using a pinned version and hash. Note that
# it's available in Debian repos, but not in Bookworm yet.
RUN mkdir /opt/libreoffice_ext && cd /opt/libreoffice_ext \
&& H2ORESTART_FILENAME=h2orestart.oxt \
&& wget https://github.com/ebandal/H2Orestart/releases/download/$H2ORESTART_VERSION/$H2ORESTART_FILENAME \
&& echo "$H2ORESTART_CHECKSUM $H2ORESTART_FILENAME" | sha256sum -c \
&& install -dm777 "/usr/lib/libreoffice/share/extensions/" \
&& rm /root/.wget-hsts
# Create an unprivileged user both for gVisor and for running Dangerzone.
# XXX: Make the shadow field "date of last password change" a constant
# number.
RUN addgroup --gid 1000 dangerzone
RUN adduser --uid 1000 --ingroup dangerzone --shell /bin/true \
--disabled-password --home /home/dangerzone dangerzone \
&& chage -d 99999 dangerzone \
&& rm /etc/shadow-
# Copy Dangerzone's conversion logic under /opt/dangerzone, and allow Python to
# import it.
RUN mkdir -p /opt/dangerzone/dangerzone
RUN touch /opt/dangerzone/dangerzone/__init__.py
# Copy only the Python code, and not any produced .pyc files.
COPY conversion/*.py /opt/dangerzone/dangerzone/conversion/
# Create a directory that will be used by gVisor as the place where it will
# store the state of its containers.
RUN mkdir /home/dangerzone/.containers
###############################################################################
#
# REUSING CONTAINER IMAGES:
# Anatomy of a hack
# ========================
#
# The rest of the Dockerfile aims to do one thing: allow the final container
# image to actually contain two container images; one for the outer container
# (spawned by Podman/Docker Desktop), and one for the inner container (spawned
# by gVisor).
#
# This has already been done in the past, and we explain why and how in the
# design document for gVisor integration (should be in
# `docs/developer/gvisor.md`). In this iteration, we want to also
# achieve the following:
#
# 1. Have a small final image, by sharing some system paths between the inner
# and outer container image using symlinks.
# 2. Allow our security scanning tool to see the contents of the inner
# container image.
# 3. Make the outer container image operational, in the sense that you can use
# `apt` commands and perform a conversion with Dangerzone, outside the
# gVisor sandbox. This is helpful for debugging purposes.
#
# Below we'll explain how our design choices are informed by the above
# sub-goals.
#
# First, to achieve a small container image, we basically need to copy `/etc`,
# `/usr` and `/opt` from the original Dangerzone image to the **inner**
# container image (under `/home/dangerzone/dangerzone-image/rootfs/`)
#
# That's all we need. The rest of the files play no role, and we can actually
# mask them in gVisor's OCI config.
#
# Second, in order to let our security scanner find the installed packages,
# we need to copy the following dirs to the root of the **outer** container
# image:
# * `/etc`, so that the security scanner can detect the image type and its
# sources
# * `/var`, so that the security scanner can have access to the APT database.
#
# IMPORTANT: We don't symlink the `/etc` of the **outer** container image to
# the **inner** one, in order to avoid leaking files like
# `/etc/{hostname,hosts,resolv.conf}` that Podman/Docker mounts when running
# the **outer** container image.
#
# Third, in order to have an operational Debian image, we are _mostly_ covered
# by the dirs we have copied. There's a _rare_ case where during debugging, we
# may want to install a system package that has components in `/etc` and
# `/var`, which will not be available in the **inner** container image. In that
# case, the developer can do the necessary symlinks in the live container.
#
# FILESYSTEM HIERARCHY
# ====================
#
# The above plan leads to the following filesystem hierarchy:
#
# Outer container image:
#
# # ls -l /
# lrwxrwxrwx 1 root root 7 Jan 27 10:46 bin -> usr/bin
# -rwxr-xr-x 1 root root 7764 Jan 24 08:14 entrypoint.py
# drwxr-xr-x 1 root root 4096 Jan 27 10:47 etc
# drwxr-xr-x 1 root root 4096 Jan 27 10:46 home
# lrwxrwxrwx 1 root root 7 Jan 27 10:46 lib -> usr/lib
# lrwxrwxrwx 1 root root 9 Jan 27 10:46 lib64 -> usr/lib64
# drwxr-xr-x 2 root root 4096 Jan 27 10:46 root
# drwxr-xr-x 1 root root 4096 Jan 27 10:47 run
# lrwxrwxrwx 1 root root 8 Jan 27 10:46 sbin -> usr/sbin
# drwxrwxrwx 2 root root 4096 Jan 27 10:46 tmp
# lrwxrwxrwx 1 root root 44 Jan 27 10:46 usr -> /home/dangerzone/dangerzone-image/rootfs/usr
# drwxr-xr-x 11 root root 4096 Jan 27 10:47 var
#
# Inner container image:
#
# # ls -l /home/dangerzone/dangerzone-image/rootfs/
# total 12
# lrwxrwxrwx 1 root root 7 Jan 27 10:47 bin -> usr/bin
# drwxr-xr-x 43 root root 4096 Jan 27 10:46 etc
# lrwxrwxrwx 1 root root 7 Jan 27 10:47 lib -> usr/lib
# lrwxrwxrwx 1 root root 9 Jan 27 10:47 lib64 -> usr/lib64
# drwxr-xr-x 4 root root 4096 Jan 27 10:47 opt
# drwxr-xr-x 12 root root 4096 Jan 27 10:47 usr
#
# SYMLINKING /USR
# ===============
#
# It's surprisingly difficult (maybe even borderline impossible) to symlink
# `/usr` to a different path during image build. The problem is that /usr
# is very sensitive, and you can't manipulate it in a live system. That is, I
# haven't found a way to do the following, or something equivalent:
#
# rm -r /usr && ln -s /home/dangerzone/dangerzone-image/rootfs/usr/ /usr
#
# The `ln` binary, even if you specify it by its full path, cannot run
# (probably because `ld-linux.so` can't be found). For this reason, we have
# to create the symlinks beforehand, in a previous build stage. Then, in an
# empty container image (scratch image), we can copy these symlinks and the
# /usr, and stitch everything together.
###############################################################################
# Create the filesystem hierarchy that will be used to symlink /usr.
RUN mkdir -p \
/new_root \
/new_root/root \
/new_root/run \
/new_root/tmp \
/new_root/home/dangerzone/dangerzone-image/rootfs
# Copy the /etc and /var directories under the new root directory. Also,
# copy /etc/, /opt, and /usr to the Dangerzone image rootfs.
#
# NOTE: We also have to remove the resolv.conf file, in order to not leak any
# DNS servers added there during image build time.
RUN cp -r /etc /var /new_root/ \
&& rm /new_root/etc/resolv.conf
RUN cp -r /etc /opt /usr /new_root/home/dangerzone/dangerzone-image/rootfs \
&& rm /new_root/home/dangerzone/dangerzone-image/rootfs/etc/resolv.conf
RUN ln -s /home/dangerzone/dangerzone-image/rootfs/usr /new_root/usr
RUN ln -s usr/bin /new_root/bin
RUN ln -s usr/lib /new_root/lib
RUN ln -s usr/lib64 /new_root/lib64
RUN ln -s usr/sbin /new_root/sbin
RUN ln -s usr/bin /new_root/home/dangerzone/dangerzone-image/rootfs/bin
RUN ln -s usr/lib /new_root/home/dangerzone/dangerzone-image/rootfs/lib
RUN ln -s usr/lib64 /new_root/home/dangerzone/dangerzone-image/rootfs/lib64
# Fix permissions in /home/dangerzone, so that our entrypoint script can make
# changes in the following folders.
RUN chown dangerzone:dangerzone \
/new_root/home/dangerzone \
/new_root/home/dangerzone/dangerzone-image/
# Fix permissions in /tmp, so that it can be used by unprivileged users.
RUN chmod 777 /new_root/tmp
COPY container_helpers/entrypoint.py /new_root
# HACK: For reasons that we are not yet sure of, we need to explicitly specify the
# modification time of this file.
RUN touch -d ${DEBIAN_ARCHIVE_DATE}Z /new_root/entrypoint.py
## Final image
FROM scratch
# Copy the filesystem hierarchy that we created in the previous stage, so that
# /usr can be a symlink.
COPY --from=dangerzone-image /new_root/ /
# Switch to the dangerzone user for the rest of the script.
USER dangerzone
ENTRYPOINT ["/entrypoint.py"]

16
Dockerfile.env Normal file

@ -0,0 +1,16 @@
# Should be the INDEX DIGEST from an image tagged `bookworm-<DATE>-slim`:
# https://hub.docker.com/_/debian/tags?name=bookworm-
#
# Tag for this digest: bookworm-20250317-slim
DEBIAN_IMAGE_DIGEST=sha256:1209d8fd77def86ceb6663deef7956481cc6c14a25e1e64daec12c0ceffcc19d
# Can be bumped to today's date
DEBIAN_ARCHIVE_DATE=20250331
# Can be bumped to the latest date in https://github.com/google/gvisor/tags
GVISOR_ARCHIVE_DATE=20250326
# Can be bumped to the latest version and checksum from https://github.com/ebandal/H2Orestart/releases
H2ORESTART_CHECKSUM=935e68671bde4ca63a364128077f1c733349bbcc90b7e6973bc7a2306494ec54
H2ORESTART_VERSION=v0.7.2
# Buildkit image (taken from freedomofpress/repro-build)
BUILDKIT_IMAGE="docker.io/moby/buildkit:v19.0@sha256:14aa1b4dd92ea0a4cd03a54d0c6079046ea98cd0c0ae6176bdd7036ba370cbbe"
BUILDKIT_IMAGE_ROOTLESS="docker.io/moby/buildkit:v0.19.0-rootless@sha256:e901cffdad753892a7c3afb8b9972549fca02c73888cf340c91ed801fdd96d71"
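The values above are meant to be templated into `Dockerfile.in` (below), whose `{{...}}` placeholders match these variable names. The repository's own build tooling performs that rendering; purely as an illustrative sketch (not the actual script), a small POSIX-shell snippet could do the same substitution:

```sh
# Hypothetical helper, not part of the repo: render Dockerfile.in into
# Dockerfile by replacing each {{PLACEHOLDER}} with the value from Dockerfile.env.
set -eu
. ./Dockerfile.env
sed \
  -e "s|{{DEBIAN_IMAGE_DIGEST}}|${DEBIAN_IMAGE_DIGEST}|g" \
  -e "s|{{DEBIAN_ARCHIVE_DATE}}|${DEBIAN_ARCHIVE_DATE}|g" \
  -e "s|{{GVISOR_ARCHIVE_DATE}}|${GVISOR_ARCHIVE_DATE}|g" \
  -e "s|{{H2ORESTART_CHECKSUM}}|${H2ORESTART_CHECKSUM}|g" \
  -e "s|{{H2ORESTART_VERSION}}|${H2ORESTART_VERSION}|g" \
  Dockerfile.in > Dockerfile
```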

228
Dockerfile.in Normal file

@ -0,0 +1,228 @@
# NOTE: Updating the packages to their latest versions requires bumping the
# Dockerfile args below. For more info about this file, read
# docs/developer/reproducibility.md.
ARG DEBIAN_IMAGE_DIGEST={{DEBIAN_IMAGE_DIGEST}}
FROM docker.io/library/debian@${DEBIAN_IMAGE_DIGEST} AS dangerzone-image
ARG GVISOR_ARCHIVE_DATE={{GVISOR_ARCHIVE_DATE}}
ARG DEBIAN_ARCHIVE_DATE={{DEBIAN_ARCHIVE_DATE}}
ARG H2ORESTART_CHECKSUM={{H2ORESTART_CHECKSUM}}
ARG H2ORESTART_VERSION={{H2ORESTART_VERSION}}
ENV DEBIAN_FRONTEND=noninteractive
# The following way of installing packages is taken from
# https://github.com/reproducible-containers/repro-sources-list.sh/blob/master/Dockerfile.debian-12,
# and adapted to allow installing gVisor from its own repo as well.
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
--mount=type=bind,source=./container_helpers/repro-sources-list.sh,target=/usr/local/bin/repro-sources-list.sh \
--mount=type=bind,source=./container_helpers/gvisor.key,target=/tmp/gvisor.key \
: "Hacky way to set a date for the Debian snapshot repos" && \
touch -d ${DEBIAN_ARCHIVE_DATE}Z /etc/apt/sources.list.d/debian.sources && \
touch -d ${DEBIAN_ARCHIVE_DATE}Z /etc/apt/sources.list && \
repro-sources-list.sh && \
: "Setup APT to install gVisor from its separate APT repo" && \
apt-get update && \
apt-get upgrade -y && \
apt-get install -y --no-install-recommends apt-transport-https ca-certificates gnupg && \
gpg -o /usr/share/keyrings/gvisor-archive-keyring.gpg --dearmor /tmp/gvisor.key && \
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/gvisor-archive-keyring.gpg] https://storage.googleapis.com/gvisor/releases ${GVISOR_ARCHIVE_DATE} main" > /etc/apt/sources.list.d/gvisor.list && \
: "Install the necessary gVisor and Dangerzone dependencies" && \
apt-get update && \
apt-get install -y --no-install-recommends \
python3 python3-fitz libreoffice-nogui libreoffice-java-common \
python3 python3-magic default-jre-headless fonts-noto-cjk fonts-dejavu \
runsc unzip wget && \
: "Clean up for improving reproducibility (optional)" && \
rm -rf /var/cache/fontconfig/ && \
rm -rf /etc/ssl/certs/java/cacerts && \
rm -rf /var/log/* /var/cache/ldconfig/aux-cache
# Download H2ORestart from GitHub using a pinned version and hash. Note that
# it's available in Debian repos, but not in Bookworm yet.
RUN mkdir /opt/libreoffice_ext && cd /opt/libreoffice_ext \
&& H2ORESTART_FILENAME=h2orestart.oxt \
&& wget https://github.com/ebandal/H2Orestart/releases/download/$H2ORESTART_VERSION/$H2ORESTART_FILENAME \
&& echo "$H2ORESTART_CHECKSUM $H2ORESTART_FILENAME" | sha256sum -c \
&& install -dm777 "/usr/lib/libreoffice/share/extensions/" \
&& rm /root/.wget-hsts
# Create an unprivileged user both for gVisor and for running Dangerzone.
# XXX: Make the shadow field "date of last password change" a constant
# number.
RUN addgroup --gid 1000 dangerzone
RUN adduser --uid 1000 --ingroup dangerzone --shell /bin/true \
--disabled-password --home /home/dangerzone dangerzone \
&& chage -d 99999 dangerzone \
&& rm /etc/shadow-
# Copy Dangerzone's conversion logic under /opt/dangerzone, and allow Python to
# import it.
RUN mkdir -p /opt/dangerzone/dangerzone
RUN touch /opt/dangerzone/dangerzone/__init__.py
# Copy only the Python code, and not any produced .pyc files.
COPY conversion/*.py /opt/dangerzone/dangerzone/conversion/
# Create a directory that will be used by gVisor as the place where it will
# store the state of its containers.
RUN mkdir /home/dangerzone/.containers
###############################################################################
#
# REUSING CONTAINER IMAGES:
# Anatomy of a hack
# ========================
#
# The rest of the Dockerfile aims to do one thing: allow the final container
# image to actually contain two container images; one for the outer container
# (spawned by Podman/Docker Desktop), and one for the inner container (spawned
# by gVisor).
#
# This has already been done in the past, and we explain why and how in the
# design document for gVisor integration (should be in
# `docs/developer/gvisor.md`). In this iteration, we want to also
# achieve the following:
#
# 1. Have a small final image, by sharing some system paths between the inner
# and outer container image using symlinks.
# 2. Allow our security scanning tool to see the contents of the inner
# container image.
# 3. Make the outer container image operational, in the sense that you can use
# `apt` commands and perform a conversion with Dangerzone, outside the
# gVisor sandbox. This is helpful for debugging purposes.
#
# Below we'll explain how our design choices are informed by the above
# sub-goals.
#
# First, to achieve a small container image, we basically need to copy `/etc`,
# `/usr` and `/opt` from the original Dangerzone image to the **inner**
# container image (under `/home/dangerzone/dangerzone-image/rootfs/`)
#
# That's all we need. The rest of the files play no role, and we can actually
# mask them in gVisor's OCI config.
#
# Second, in order to let our security scanner find the installed packages,
# we need to copy the following dirs to the root of the **outer** container
# image:
# * `/etc`, so that the security scanner can detect the image type and its
# sources
# * `/var`, so that the security scanner can have access to the APT database.
#
# IMPORTANT: We don't symlink the `/etc` of the **outer** container image to
# the **inner** one, in order to avoid leaking files like
# `/etc/{hostname,hosts,resolv.conf}` that Podman/Docker mounts when running
# the **outer** container image.
#
# Third, in order to have an operational Debian image, we are _mostly_ covered
# by the dirs we have copied. There's a _rare_ case where during debugging, we
# may want to install a system package that has components in `/etc` and
# `/var`, which will not be available in the **inner** container image. In that
# case, the developer can do the necessary symlinks in the live container.
#
# FILESYSTEM HIERARCHY
# ====================
#
# The above plan leads to the following filesystem hierarchy:
#
# Outer container image:
#
# # ls -l /
# lrwxrwxrwx 1 root root 7 Jan 27 10:46 bin -> usr/bin
# -rwxr-xr-x 1 root root 7764 Jan 24 08:14 entrypoint.py
# drwxr-xr-x 1 root root 4096 Jan 27 10:47 etc
# drwxr-xr-x 1 root root 4096 Jan 27 10:46 home
# lrwxrwxrwx 1 root root 7 Jan 27 10:46 lib -> usr/lib
# lrwxrwxrwx 1 root root 9 Jan 27 10:46 lib64 -> usr/lib64
# drwxr-xr-x 2 root root 4096 Jan 27 10:46 root
# drwxr-xr-x 1 root root 4096 Jan 27 10:47 run
# lrwxrwxrwx 1 root root 8 Jan 27 10:46 sbin -> usr/sbin
# drwxrwxrwx 2 root root 4096 Jan 27 10:46 tmp
# lrwxrwxrwx 1 root root 44 Jan 27 10:46 usr -> /home/dangerzone/dangerzone-image/rootfs/usr
# drwxr-xr-x 11 root root 4096 Jan 27 10:47 var
#
# Inner container image:
#
# # ls -l /home/dangerzone/dangerzone-image/rootfs/
# total 12
# lrwxrwxrwx 1 root root 7 Jan 27 10:47 bin -> usr/bin
# drwxr-xr-x 43 root root 4096 Jan 27 10:46 etc
# lrwxrwxrwx 1 root root 7 Jan 27 10:47 lib -> usr/lib
# lrwxrwxrwx 1 root root 9 Jan 27 10:47 lib64 -> usr/lib64
# drwxr-xr-x 4 root root 4096 Jan 27 10:47 opt
# drwxr-xr-x 12 root root 4096 Jan 27 10:47 usr
#
# SYMLINKING /USR
# ===============
#
# It's surprisingly difficult (maybe even borderline impossible) to symlink
# `/usr` to a different path during image build. The problem is that /usr
# is very sensitive, and you can't manipulate it in a live system. That is, I
# haven't found a way to do the following, or something equivalent:
#
# rm -r /usr && ln -s /home/dangerzone/dangerzone-image/rootfs/usr/ /usr
#
# The `ln` binary, even if you specify it by its full path, cannot run
# (probably because `ld-linux.so` can't be found). For this reason, we have
# to create the symlinks beforehand, in a previous build stage. Then, in an
# empty container image (scratch image), we can copy these symlinks and the
# /usr, and stitch everything together.
###############################################################################
# Create the filesystem hierarchy that will be used to symlink /usr.
RUN mkdir -p \
/new_root \
/new_root/root \
/new_root/run \
/new_root/tmp \
/new_root/home/dangerzone/dangerzone-image/rootfs
# Copy the /etc and /var directories under the new root directory. Also,
# copy /etc/, /opt, and /usr to the Dangerzone image rootfs.
#
# NOTE: We also have to remove the resolv.conf file, in order to not leak any
# DNS servers added there during image build time.
RUN cp -r /etc /var /new_root/ \
&& rm /new_root/etc/resolv.conf
RUN cp -r /etc /opt /usr /new_root/home/dangerzone/dangerzone-image/rootfs \
&& rm /new_root/home/dangerzone/dangerzone-image/rootfs/etc/resolv.conf
RUN ln -s /home/dangerzone/dangerzone-image/rootfs/usr /new_root/usr
RUN ln -s usr/bin /new_root/bin
RUN ln -s usr/lib /new_root/lib
RUN ln -s usr/lib64 /new_root/lib64
RUN ln -s usr/sbin /new_root/sbin
RUN ln -s usr/bin /new_root/home/dangerzone/dangerzone-image/rootfs/bin
RUN ln -s usr/lib /new_root/home/dangerzone/dangerzone-image/rootfs/lib
RUN ln -s usr/lib64 /new_root/home/dangerzone/dangerzone-image/rootfs/lib64
# Fix permissions in /home/dangerzone, so that our entrypoint script can make
# changes in the following folders.
RUN chown dangerzone:dangerzone \
/new_root/home/dangerzone \
/new_root/home/dangerzone/dangerzone-image/
# Fix permissions in /tmp, so that it can be used by unprivileged users.
RUN chmod 777 /new_root/tmp
COPY container_helpers/entrypoint.py /new_root
# HACK: For reasons that we are not yet sure of, we need to explicitly specify the
# modification time of this file.
RUN touch -d ${DEBIAN_ARCHIVE_DATE}Z /new_root/entrypoint.py
## Final image
FROM scratch
# Copy the filesystem hierarchy that we created in the previous stage, so that
# /usr can be a symlink.
COPY --from=dangerzone-image /new_root/ /
# Switch to the dangerzone user for the rest of the script.
USER dangerzone
ENTRYPOINT ["/entrypoint.py"]

394
INSTALL.md Normal file

@ -0,0 +1,394 @@
## Operating System support
Dangerzone can run on various Operating Systems (OS), and has automated tests
for most of them.
This section explains which OSes we support, how long we support each version, and
how we test Dangerzone against them.
You can find general support information in this table, and more details in the
following sections.
(Unless specified otherwise, the architecture of the OS is AMD64.)
| Distribution | Supported releases | Automated tests | Manual QA |
| ------------ | ------------------------- | ---------------------- | --------- |
| Windows | 2 last releases | 🗹 (`windows-latest`) ◎ | 🗹 |
| macOS intel | 3 last releases | 🗹 (`macos-13`) ◎ | 🗹 |
| macOS silicon | 3 last releases | 🗹 (`macos-latest`) ◎ | 🗹 |
| Ubuntu | Follow upstream support ✰ | 🗹 | 🗹 |
| Debian | Current stable, Oldstable and LTS releases | 🗹 | 🗹 |
| Fedora | Follow upstream support | 🗹 | 🗹 |
| Qubes OS | [Beta support](https://github.com/freedomofpress/dangerzone/issues/413) ✢ | 🗷 | Latest Fedora template |
| Tails | Only the last release | 🗷 | Last release only |
Notes:
✰ Support for Ubuntu Focal [was dropped](https://github.com/freedomofpress/dangerzone/issues/1018)
✢ Qubes OS support assumes the use of a Fedora template. The supported releases follow our general support for Fedora.
◎ More information about where that points [in the runner-images repository](https://github.com/actions/runner-images/tree/main)
## macOS
- Download [Dangerzone 0.9.0 for Mac (Apple Silicon CPU)](https://github.com/freedomofpress/dangerzone/releases/download/v0.9.0/Dangerzone-0.9.0-arm64.dmg)
- Download [Dangerzone 0.9.0 for Mac (Intel CPU)](https://github.com/freedomofpress/dangerzone/releases/download/v0.9.0/Dangerzone-0.9.0-i686.dmg)
> [!TIP]
> We support the releases of macOS that are still within Apple's servicing timeline. Apple usually provides security updates for the latest 3 releases, but this isn't consistently applied and security fixes aren't guaranteed for the non-latest releases. We are also dependent on [Docker Desktop macOS support](https://docs.docker.com/desktop/setup/install/mac-install/).
You can also install Dangerzone for Mac using [Homebrew](https://brew.sh/): `brew install --cask dangerzone`
> **Note**: you will also need to install [Docker Desktop](https://www.docker.com/products/docker-desktop/).
> This program needs to run alongside Dangerzone at all times, since it is what allows Dangerzone to
> create the secure environment.
## Windows
- Download [Dangerzone 0.9.0 for Windows](https://github.com/freedomofpress/dangerzone/releases/download/v0.9.0/Dangerzone-0.9.0.msi)
> **Note**: you will also need to install [Docker Desktop](https://www.docker.com/products/docker-desktop/).
> This program needs to run alongside Dangerzone at all times, since it is what allows Dangerzone to
> create the secure environment.
> [!TIP]
> We generally support Windows releases that are still within [Microsoft's servicing timeline](https://support.microsoft.com/en-us/help/13853/windows-lifecycle-fact-sheet).
>
> Docker sets the bottom line:
>
> > Docker only supports Docker Desktop on Windows for those versions of Windows that are still within [Microsoft's servicing timeline](https://support.microsoft.com/en-us/help/13853/windows-lifecycle-fact-sheet). Docker Desktop is not supported on server versions of Windows, such as Windows Server 2019 or Windows Server 2022.
## Linux
On Linux, Dangerzone uses [Podman](https://podman.io/) instead of Docker Desktop for creating
an isolated environment. It will be installed automatically when installing Dangerzone.
> [!TIP]
> We support Ubuntu, Debian, and Fedora releases that are still within
> their respective servicing timelines, with a few twists:
>
> - Ubuntu: We follow upstream support with an extra cutoff date. No support for
> versions prior to the second oldest LTS release.
> - Fedora: We follow upstream support
> - Debian: current stable, oldstable and LTS releases.
Dangerzone is available for:
- Ubuntu 25.04 (plucky)
- Ubuntu 24.10 (oracular)
- Ubuntu 24.04 (noble)
- Ubuntu 22.04 (jammy)
- Debian 13 (trixie)
- Debian 12 (bookworm)
- Debian 11 (bullseye)
- Fedora 42
- Fedora 41
- Fedora 40
- Tails
- Qubes OS (beta support)
### Ubuntu, Debian
<table>
<tr>
<td>
<details>
<summary><i>:information_source: Backport notice for Ubuntu 22.04 (Jammy) users regarding the <code>conmon</code> package</i></summary>
</br>
The `conmon` version that Podman uses and Ubuntu Jammy ships has a bug
that gets triggered by Dangerzone
(more details in https://github.com/freedomofpress/dangerzone/issues/685).
To fix this, we provide our own `conmon` package through our APT repo, which
was built with the following [instructions](https://github.com/freedomofpress/maint-dangerzone-conmon/tree/ubuntu/jammy/fpf).
This package is essentially a backport of the `conmon` package
[provided](https://packages.debian.org/source/oldstable/conmon) by Debian
Bullseye.
</details>
</td>
</tr>
</table>
First, retrieve the PGP keys. The instructions differ depending on the specific
distribution you are using:
For Debian Trixie and Ubuntu Plucky (25.04), follow these instructions to
download the PGP keys:
```bash
sudo apt-get update && sudo apt-get install sq ca-certificates -y
sq network keyserver \
--server hkps://keys.openpgp.org \
search "DE28 AB24 1FA4 8260 FAC9 B8BA A7C9 B385 2260 4281" \
--output - | sq packet dearmor fpfdz.gpg
sudo mkdir -p /etc/apt/keyrings/
sudo mv fpfdz.gpg /etc/apt/keyrings/fpf-apt-tools-archive-keyring.gpg
```
On other Debian-derivatives:
```sh
sudo apt-get update && sudo apt-get install gnupg2 ca-certificates -y
sudo mkdir -p /etc/apt/keyrings/
sudo gpg --keyserver hkps://keys.openpgp.org \
--no-default-keyring --keyring /etc/apt/keyrings/fpf-apt-tools-archive-keyring.gpg \
--recv-keys "DE28 AB24 1FA4 8260 FAC9 B8BA A7C9 B385 2260 4281"
```
Then, on all distributions, add the URL of the repo in your APT sources:
```sh
. /etc/os-release
echo "deb [signed-by=/etc/apt/keyrings/fpf-apt-tools-archive-keyring.gpg] \
https://packages.freedom.press/apt-tools-prod ${VERSION_CODENAME?} main" \
| sudo tee /etc/apt/sources.list.d/fpf-apt-tools.list
```
Install Dangerzone:
```
sudo apt update
sudo apt install -y dangerzone
```
<table>
<tr>
<td>
<details>
<summary><i>:memo: Expand this section for a security notice on third-party Debian repos</i></summary>
</br>
This section follows the official instructions on configuring [third-party
Debian repos](https://wiki.debian.org/DebianRepository/UseThirdParty).
To mitigate a class of attacks against our APT repo (e.g., injecting packages
signed with an attacker key), we add an additional step in our instructions to
verify the downloaded GPG key against its fingerprint.
Aside from these protections, the user needs to be aware that Debian packages
run as `root` during the installation phase, so they need to place some trust
in our signed Debian packages. This holds for any third-party Debian repo.
</details>
</td>
</tr>
</table>
### Fedora
Type the following commands in a terminal:
```
sudo dnf install 'dnf-command(config-manager)'
sudo dnf-3 config-manager --add-repo=https://packages.freedom.press/yum-tools-prod/dangerzone/dangerzone.repo
sudo dnf install dangerzone
```
##### Verifying Dangerzone GPG key
<table>
<tr>
<td>
<details>
<summary>Importing GPG key 0x22604281: ... Is this ok [y/N]:</summary>
</br>
After the above command has run for a few minutes (depending on your internet speed), you'll be asked to confirm the fingerprint of our signing key. This is to make sure that, in case our servers are compromised, your computer stays safe. It should look like this:
```console
--------------------------------------------------------------------------------
Total 389 kB/s | 732 MB 32:07
Dangerzone repository 3.8 MB/s | 3.8 kB 00:00
Importing GPG key 0x22604281:
Userid : "Dangerzone Release Key <dangerzone-release-key@freedom.press>"
Fingerprint: DE28 AB24 1FA4 8260 FAC9 B8BA A7C9 B385 2260 4281
From : /etc/pki/rpm-gpg/RPM-GPG-dangerzone.pub
Is this ok [y/N]:
```
> **Note**: If it does not show this fingerprint confirmation or the fingerprint does not match, it is possible that our servers were compromised. Be distrustful and reach out to us.
The `Fingerprint` should be `DE28 AB24 1FA4 8260 FAC9 B8BA A7C9 B385 2260 4281`. For extra security, you should confirm it matches the one at the bottom of our website ([dangerzone.rocks](https://dangerzone.rocks)) and our [Mastodon account](https://fosstodon.org/@dangerzone) bio.
After confirming that it matches, type `y` (for yes) and the installation should proceed.
</details>
</td>
</tr>
</table>
### Qubes OS
> [!WARNING]
> This section is for the beta version of native Qubes support. If you
> want to try out the stable Dangerzone version (which uses containers instead
> of virtual machines for isolation), please follow the Fedora or Debian
> instructions and adapt them as needed.
>
> **If you followed these instructions before October 25, 2023, please read [this security advisory](docs/advisories/2023-10-25.md).**
> This notice will be removed with the 1.0.0 release of Dangerzone.
> [!IMPORTANT]
> This section will install Dangerzone in your **default template**
> (`fedora-41` as of writing this). If you want to install it in a different
> one, make sure to replace `fedora-41` with the template of your choice.
The following steps must be completed once. Make sure you run them in the
specified qubes.
Overview of the qubes you'll create:
| qube | type | purpose |
|--------------|----------|---------|
| dz-dvm | app qube | offline disposable template for performing conversions |
#### In `dom0`:
Create a **disposable**, offline app qube (`dz-dvm`), based on your default
template. This will be the qube where the documents will be sanitized:
```
qvm-create --class AppVM --label red --template fedora-41 \
--prop netvm="" --prop template_for_dispvms=True \
--prop default_dispvm='' dz-dvm
```
Add an RPC policy (`/etc/qubes/policy.d/50-dangerzone.policy`) that will
allow launching a disposable qube (`dz-dvm`) when Dangerzone converts a
document, with the following contents:
```
dz.Convert * @anyvm @dispvm:dz-dvm allow
```
#### In the `fedora-41` template
Install Dangerzone:
```
sudo dnf-3 config-manager --add-repo=https://packages.freedom.press/yum-tools-prod/dangerzone/dangerzone.repo
sudo dnf install dangerzone-qubes
```
While Dangerzone is being installed, you will be prompted to accept a signing key.
Expand the instructions in the [Verifying Dangerzone GPG key](#verifying-dangerzone-gpg-key)
section to verify the key.
Finally, shut down the template and restart the qubes in which you want to use
Dangerzone. Go to "Qube Settings" -> choose the "Applications" tab,
click on "Refresh applications", and then move "Dangerzone" from the "Available"
column to "Selected".
You can now launch Dangerzone from the list of applications for your qube, and
pass it a file to sanitize.
## Tails
Dangerzone is not yet available by default in Tails, but we have collaborated
with the Tails team to offer manual
[installation instructions](https://tails.net/doc/persistent_storage/additional_software/dangerzone/index.en.html)
for Tails users.
## Build from source
If you'd like to build from source, follow the [build instructions](BUILD.md).
## Verifying PGP signatures
You can verify that the package you download is legitimate and hasn't been
tampered with by verifying its PGP signature. For Windows and macOS, this step
is optional and provides defense in depth: the Dangerzone binaries include
operating system-specific signatures, and you can just rely on those alone if
you'd like.
### Obtaining signing key
Our binaries are signed with a PGP key owned by Freedom of the Press Foundation:
* Name: Dangerzone Release Key
* PGP public key fingerprint `DE28 AB24 1FA4 8260 FAC9 B8BA A7C9 B385 2260 4281`
- You can download this key [from the keys.openpgp.org keyserver](https://keys.openpgp.org/vks/v1/by-fingerprint/DE28AB241FA48260FAC9B8BAA7C9B38522604281).
_(You can also cross-check this fingerprint with the fingerprint in our
[Mastodon page](https://fosstodon.org/@dangerzone) and the fingerprint in the
footer of our [official site](https://dangerzone.rocks))_
You must have GnuPG installed to verify signatures. For macOS you probably want
[GPGTools](https://gpgtools.org/), and for Windows you probably want
[Gpg4win](https://www.gpg4win.org/).
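If you prefer to fetch the key from the command line instead of downloading it in a browser, one way (assuming GnuPG is already installed) is to pull it from the keyserver by its fingerprint, as is done in the Debian instructions above:

```
gpg --keyserver hkps://keys.openpgp.org \
    --recv-keys "DE28 AB24 1FA4 8260 FAC9 B8BA A7C9 B385 2260 4281"
```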
### Signatures
Our [GitHub Releases page](https://github.com/freedomofpress/dangerzone/releases)
hosts the following files:
* Windows installer (`Dangerzone-<version>.msi`)
* macOS archives (`Dangerzone-<version>-<arch>.dmg`)
* Container images (`container-<version>-<arch>.tar`)
* Source package (`dangerzone-<version>.tar.gz`)
All these files are accompanied by signatures (as `.asc` files). We'll explain
how to verify them below, using `0.6.1` as an example.
### Verifying
Once you have imported the Dangerzone release key into your GnuPG keychain and
downloaded the binary and its ``.asc`` signature, you can verify the binary in a
terminal like this:
For the Windows binary:
```
gpg --verify Dangerzone-0.6.1.msi.asc Dangerzone-0.6.1.msi
```
For the macOS binaries (depending on your architecture):
```
gpg --verify Dangerzone-0.6.1-arm64.dmg.asc Dangerzone-0.6.1-arm64.dmg
gpg --verify Dangerzone-0.6.1-i686.dmg.asc Dangerzone-0.6.1-i686.dmg
```
For the container images:
```
gpg --verify container-0.6.1-i686.tar.asc container-0.6.1-i686.tar
```
For the source package:
```
gpg --verify dangerzone-0.6.1.tar.gz.asc dangerzone-0.6.1.tar.gz
```
We also hash all the above files with SHA-256, and provide a list of these
hashes as a separate file (`checksums-0.6.1.txt`). This file is signed as well,
and the signature is embedded within it. You can download this file and verify
it with:
```
gpg --verify checksums-0.6.1.txt
```
The expected output looks like this:
```
gpg: Signature made Mon Apr 22 09:29:22 2024 PDT
gpg: using RSA key 04CABEB5DD76BACF2BD43D2FF3ACC60F62EA51CB
gpg: Good signature from "Dangerzone Release Key <dangerzone-release-key@freedom.press>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: DE28 AB24 1FA4 8260 FAC9 B8BA A7C9 B385 2260 4281
Subkey fingerprint: 04CA BEB5 DD76 BACF 2BD4 3D2F F3AC C60F 62EA 51CB
```
If you don't see `Good signature from`, there might be a problem with the
integrity of the file (malicious or otherwise), and you should not install the
package.
The `WARNING:` shown above is not a problem with the package; it only means you
haven't defined a level of "trust" for Dangerzone's PGP key.
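After the signature on the checksums file verifies, you can also check the individual downloads against the hashes it lists. As a rough sketch, assuming the downloads sit in the same directory as the checksums file (the embedded signature lines will be reported as improperly formatted and can be ignored):

```
sha256sum --ignore-missing -c checksums-0.6.1.txt
```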
If you want to learn more about verifying PGP signatures, the guides for
[Qubes OS](https://www.qubes-os.org/security/verifying-signatures/) and the
[Tor Project](https://support.torproject.org/tbb/how-to-verify-signature/) may
be useful.

674
LICENSE

@ -1,21 +1,661 @@
MIT License
Copyright (c) 2020-2021 First Look Media
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published
by the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.

90
Makefile Normal file
View file

@ -0,0 +1,90 @@
LARGE_TEST_REPO_DIR:=tests/test_docs_large
GIT_DESC=$$(git describe)
JUNIT_FLAGS := --capture=sys -o junit_logging=all
MYPY_ARGS := --ignore-missing-imports \
--disallow-incomplete-defs \
--disallow-untyped-defs \
--show-error-codes \
--warn-unreachable \
--warn-unused-ignores \
--exclude $(LARGE_TEST_REPO_DIR)/*.py
.PHONY: lint
lint: ## Check the code for linting, formatting, and typing issues with ruff and mypy
ruff check
ruff format --check
mypy $(MYPY_ARGS) dangerzone
mypy $(MYPY_ARGS) tests
.PHONY: fix
fix: ## apply all the suggestions from ruff
ruff check --fix
ruff format
.PHONY: test
test: ## Run the tests
# Make each GUI test run as a separate process, to avoid segfaults due to
# shared state.
# See more in https://github.com/freedomofpress/dangerzone/issues/493
pytest --co -q tests/gui | grep -e '^tests/' | xargs -n 1 pytest -v
pytest -v --cov --ignore dev_scripts --ignore tests/gui --ignore tests/test_large_set.py
.PHONY: test-large-requirements
test-large-requirements:
@git-lfs --version || (echo "ERROR: you need to install 'git-lfs'" && false)
@xmllint --version || (echo "ERROR: you need to install 'xmllint'" && false)
test-large-init: test-large-requirements
@echo "initializing 'test_docs_large' submodule"
git submodule init $(LARGE_TEST_REPO_DIR)
git submodule update $(LARGE_TEST_REPO_DIR)
cd $(LARGE_TEST_REPO_DIR) && $(MAKE) clone-docs
TEST_LARGE_RESULTS:=$(LARGE_TEST_REPO_DIR)/results/junit/commit_$(GIT_DESC).junit.xml
.PHONY: test-large
test-large: test-large-init ## Run large test set
python -m pytest --tb=no tests/test_large_set.py::TestLargeSet -v $(JUNIT_FLAGS) --junitxml=$(TEST_LARGE_RESULTS)
python $(TEST_LARGE_RESULTS)/report.py $(TEST_LARGE_RESULTS)
Dockerfile: Dockerfile.env Dockerfile.in ## Regenerate the Dockerfile from its template
poetry run jinja2 Dockerfile.in Dockerfile.env > Dockerfile
.PHONY: poetry-install
poetry-install: ## Install project dependencies
poetry install
.PHONY: build-clean
build-clean:
poetry run doit clean
.PHONY: build-macos-intel
build-macos-intel: build-clean poetry-install ## Build macOS intel package (.dmg)
poetry run doit -n 8
.PHONY: build-macos-arm
build-macos-arm: build-clean poetry-install ## Build macOS Apple Silicon package (.dmg)
poetry run doit -n 8 macos_build_dmg
.PHONY: build-linux
build-linux: build-clean poetry-install ## Build linux packages (.rpm and .deb)
poetry run doit -n 8 fedora_rpm debian_deb
.PHONY: regenerate-reference-pdfs
regenerate-reference-pdfs: ## Regenerate the reference PDFs
pytest tests/test_cli.py -k regenerate --generate-reference-pdfs
# Makefile self-help borrowed from the securedrop-client project
# Explanation of the shell command below, should it ever break.
# 1. Set the field separator to ": ##" and any make targets that might appear between : and ##
# 2. Use sed-like syntax to remove the make targets
# 3. Format the split fields into $$1) the target name (in blue) and $$2) the target description
# 4. Pass this file as an arg to awk
# 5. Sort it alphabetically
# 6. Format columns with colon as delimiter.
.PHONY: help
help: ## Print this message and exit.
@printf "Makefile for developing and testing dangerzone.\n"
@printf "Subcommands:\n\n"
@awk 'BEGIN {FS = ":.*?## "} /^[0-9a-zA-Z_-]+:.*?## / {printf "\033[36m%s\033[0m : %s\n", $$1, $$2}' $(MAKEFILE_LIST) \
| sort \
| column -s ':' -t

197
QA.md Normal file
View file

@ -0,0 +1,197 @@
## QA
To ensure that new releases do not introduce regressions, and that they support
both existing and newer platforms, we have to test that the produced packages work as expected.
Check the following:
- [ ] Make sure that the tip of the `main` branch passes the CI tests.
- [ ] Make sure that the Apple account has a valid application password and has
agreed to the latest Apple terms (see [macOS release](#macos-release)
section).
Because it is repetitive, we wrote a script to help with the QA.
It can run the tasks for you, pausing when it needs manual intervention.
You can run it with a command like:
```bash
poetry run ./dev_scripts/qa.py {distro}-{version}
```
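For example, to run the QA tasks for the most recent Fedora platform mentioned below, the invocation would look like:

```bash
poetry run ./dev_scripts/qa.py fedora-41
```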
### The checklist
- [ ] Create a test build in Windows and make sure it works:
- [ ] Check if the suggested Python version is still supported.
- [ ] Create a new development environment with Poetry.
- [ ] Build the container image and ensure the development environment uses
the new image.
- [ ] Download the OCR language data using `./install/common/download-tessdata.py`
- [ ] Run the Dangerzone tests.
- [ ] Build and run the Dangerzone .exe
- [ ] Test some QA scenarios (see [Scenarios](#Scenarios) below).
- [ ] Create a test build in macOS (Intel CPU) and make sure it works:
- [ ] Check if the suggested Python version is still supported.
- [ ] Create a new development environment with Poetry.
- [ ] Build the container image and ensure the development environment uses
the new image.
- [ ] Download the OCR language data using `./install/common/download-tessdata.py`
- [ ] Run the Dangerzone tests.
- [ ] Create and run an app bundle.
- [ ] Test some QA scenarios (see [Scenarios](#Scenarios) below).
- [ ] Create a test build in macOS (M1/2 CPU) and make sure it works:
- [ ] Check if the suggested Python version is still supported.
- [ ] Create a new development environment with Poetry.
- [ ] Build the container image and ensure the development environment uses
the new image.
- [ ] Download the OCR language data using `./install/common/download-tessdata.py`
- [ ] Run the Dangerzone tests.
- [ ] Create and run an app bundle.
- [ ] Test some QA scenarios (see [Scenarios](#Scenarios) below).
- [ ] Create a test build in the most recent Ubuntu LTS platform (Ubuntu 24.04
as of writing this) and make sure it works:
- [ ] Create a new development environment with Poetry.
- [ ] Build the container image and ensure the development environment uses
the new image.
- [ ] Download the OCR language data using `./install/common/download-tessdata.py`
- [ ] Run the Dangerzone tests.
- [ ] Create a .deb package and install it system-wide.
- [ ] Test some QA scenarios (see [Scenarios](#Scenarios) below).
- [ ] Create a test build in the most recent Fedora platform (Fedora 41 as of
writing this) and make sure it works:
- [ ] Create a new development environment with Poetry.
- [ ] Build the container image and ensure the development environment uses
the new image.
- [ ] Download the OCR language data using `./install/common/download-tessdata.py`
- [ ] Run the Dangerzone tests.
- [ ] Create an .rpm package and install it system-wide.
- [ ] Test some QA scenarios (see [Scenarios](#Scenarios) below).
- [ ] Create a test build in the most recent Qubes Fedora template (Fedora 40 as
of writing this) and make sure it works:
- [ ] Create a new development environment with Poetry.
- [ ] Run the Dangerzone tests.
- [ ] Create a Qubes .rpm package and install it system-wide.
- [ ] Ensure that the Dangerzone application appears in the "Applications"
tab.
- [ ] Test some QA scenarios (see [Scenarios](#Scenarios) below) and make sure
they spawn disposable qubes.
### Scenarios
#### 1. Dangerzone correctly identifies that Docker/Podman is not installed
_(Only for macOS / Windows)_
Temporarily hide the Docker/Podman binaries, e.g., rename the `docker` /
`podman` binaries to something else. Then run Dangerzone. Dangerzone should
prompt the user to install Docker/Podman.
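A minimal sketch of how to do this on macOS, assuming the Docker client was installed under `/usr/local/bin` (the exact path may differ per installation):

```bash
# Temporarily hide the Docker client binary (illustrative path)
sudo mv /usr/local/bin/docker /usr/local/bin/docker.hidden
# ...run Dangerzone and confirm that it prompts you to install Docker...
# Restore the binary afterwards
sudo mv /usr/local/bin/docker.hidden /usr/local/bin/docker
```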
#### 2. Dangerzone correctly identifies that Docker is not running
_(Only for macOS / Windows)_
Stop the Docker Desktop application. Then run Dangerzone. Dangerzone should
prompt the user to start Docker Desktop.
#### 3. Updating Dangerzone handles external state correctly.
_(Applies to Windows/macOS)_
Install the previous version of Dangerzone, downloaded from the website.
Open the Dangerzone application and enable some non-default settings.
**If there are new settings, make sure to change those as well**.
Close the Dangerzone application and get the container image for that
version. For example:
```
$ docker images dangerzone.rocks/dangerzone
REPOSITORY TAG IMAGE ID CREATED SIZE
dangerzone.rocks/dangerzone <tag> <image ID> <date> <size>
```
Then run the version under QA and ensure that the settings remain changed.
Afterwards, check that the new Docker image was installed by running the same command
and observing the following differences:
```
$ docker images dangerzone.rocks/dangerzone
REPOSITORY TAG IMAGE ID CREATED SIZE
dangerzone.rocks/dangerzone <other tag> <different ID> <newer date> <different size>
```
#### 4. Dangerzone successfully installs the container image
_(Only for Linux)_
Remove the Dangerzone container image from Docker/Podman. Then run Dangerzone.
Dangerzone should install the container image successfully.
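For example, with Podman the image can be removed like this (the repository name is the one shown in scenario 3; use `docker` instead of `podman` where applicable):

```bash
# Remove all local copies of the Dangerzone container image
podman rmi --force $(podman images -q dangerzone.rocks/dangerzone)
```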
#### 5. Dangerzone retains the settings of previous runs
Run Dangerzone and make some changes in the settings (e.g., change the OCR
language, toggle whether to open the document after conversion, etc.). Restart
Dangerzone. Dangerzone should show the settings that the user chose.
#### 6. Dangerzone reports failed conversions
Run Dangerzone and convert the `tests/test_docs/sample_bad_pdf.pdf` document.
Dangerzone should fail gracefully, by reporting that the operation failed, and
showing the following error message:
> The document format is not supported
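The same failure can also be reproduced from a terminal, assuming the `dangerzone-cli` entry point is available in the environment under test:

```bash
dangerzone-cli tests/test_docs/sample_bad_pdf.pdf
```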
#### 7. Dangerzone succeeds in converting multiple documents
Run Dangerzone against a list of documents, and tick all options. Ensure that:
* Conversions take place sequentially.
* Attempting to close the window while converting asks the user if they want to
abort the conversions.
* Conversions are completed successfully.
* Conversions show individual progress in real-time (double-check for Qubes).
* _(Only for Linux)_ The resulting files open with the PDF viewer of our choice.
* OCR seems to have detected characters in the PDF files.
* The resulting files have been saved with the proper suffix, in the proper
location.
* The original files have been saved in the `unsafe/` directory.
#### 8. Dangerzone is able to handle drag-n-drop
Run Dangerzone against a set of documents that you drag-n-drop. Files should be
added and conversion should run without issue.
> [!TIP]
> On our end-user container environments for Linux, we can start a file manager
> with `thunar &`.
#### 9. Dangerzone CLI succeeds in converting multiple documents
_(Only for Windows and Linux)_
Run Dangerzone CLI against a list of documents. Ensure that conversions happen
sequentially, are completed successfully, and we see their progress.
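A rough sketch of such a run, assuming `dangerzone-cli` is on the PATH and that OCR is enabled with the `--ocr-lang` option:

```bash
dangerzone-cli --ocr-lang eng report.docx scan.pdf photo.png
```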
#### 10. Dangerzone can open a document for conversion via right-click -> "Open With"
_(Only for Windows, macOS and Qubes)_
Go to a directory with office documents, right-click on one, and click on "Open
With". We should be able to open the file with Dangerzone, and then convert it.
#### 11. Dangerzone shows helpful errors for setup issues on Qubes
_(Only for Qubes)_
Check what errors Dangerzone throws in the following scenarios. The errors
should point the user to the Qubes notifications in the top-right corner:
1. The `dz-dvm` template does not exist. We can trigger this scenario by
temporarily renaming this template.
2. The Dangerzone RPC policy does not exist. We can trigger this scenario by
temporarily renaming the `dz.Convert` policy.
3. The `dz-dvm` disposable Qube cannot start due to insufficient resources. We
can trigger this scenario by temporarily increasing the minimum required RAM
of the `dz-dvm` template to more than the available amount.

View file

@ -2,24 +2,32 @@
Take potentially dangerous PDFs, office documents, or images and convert them to a safe PDF.
![Settings](./assets/screenshot1.png)
![Converting](./assets/screenshot2.png)
Dangerzone works like this: You give it a document that you don't know if you can trust (for example, an email attachment). Inside of a sandbox, Dangerzone converts the document to a PDF (if it isn't already one), and then converts the PDF into raw pixel data: a huge list of of RGB color values for each page. Then, in a separate sandbox, Dangerzone takes this pixel data and converts it back into a PDF.
| ![Settings](./assets/screenshot1.png) | ![Converting](./assets/screenshot2.png)
|--|--|
_Read more about Dangerzone in the blog post [Dangerzone: Working With Suspicious Documents Without Getting Hacked](https://tech.firstlook.media/dangerzone-working-with-suspicious-documents-without-getting-hacked)._
Dangerzone works like this: You give it a document that you don't know if you can trust (for example, an email attachment). Inside of a sandbox, Dangerzone converts the document to a PDF (if it isn't already one), and then converts the PDF into raw pixel data: a huge list of RGB color values for each page. Then, outside of the sandbox, Dangerzone takes this pixel data and converts it back into a PDF.
_Read more about Dangerzone in the [official site](https://dangerzone.rocks/about/)._
## Getting started
- Download [Dangerzone 0.3.1 for Mac](https://github.com/firstlookmedia/dangerzone/releases/download/v0.3.1/Dangerzone-0.3.1.dmg)
- Download [Dangerzone 0.3.1 for Windows](https://github.com/firstlookmedia/dangerzone/releases/download/v0.3.1/Dangerzone-0.3.1.msi)
- See [installing Dangerzone](https://github.com/firstlookmedia/dangerzone/wiki/Installing-Dangerzone) on the wiki for Linux repositories
Follow the instructions for each platform:
You can also install Dangerzone for Mac using [Homebrew](https://brew.sh/): `brew install --cask dangerzone`
* [macOS](https://github.com/freedomofpress/dangerzone/blob/v0.9.0/INSTALL.md#macos)
* [Windows](https://github.com/freedomofpress/dangerzone/blob/v0.9.0//INSTALL.md#windows)
* [Ubuntu Linux](https://github.com/freedomofpress/dangerzone/blob/v0.9.0/INSTALL.md#ubuntu-debian)
* [Debian Linux](https://github.com/freedomofpress/dangerzone/blob/v0.9.0/INSTALL.md#ubuntu-debian)
* [Fedora Linux](https://github.com/freedomofpress/dangerzone/blob/v0.9.0/INSTALL.md#fedora)
* [Qubes OS (beta)](https://github.com/freedomofpress/dangerzone/blob/v0.9.0/INSTALL.md#qubes-os)
* [Tails](https://github.com/freedomofpress/dangerzone/blob/v0.9.0/INSTALL.md#tails)
You can read more about our operating system support [here](https://github.com/freedomofpress/dangerzone/blob/v0.9.0/INSTALL.md#operating-system-support).
## Some features
- Sandboxes don't have network access, so if a malicious document can compromise one, it can't phone home
- Sandboxes use [gVisor](https://gvisor.dev/), an application kernel written in Go, that implements a substantial portion of the Linux system call interface.
- Dangerzone can optionally OCR the safe PDFs it creates, so it will have a text layer again
- Dangerzone compresses the safe PDF to reduce file size
- After converting, Dangerzone lets you open the safe PDF in the PDF viewer of your choice, which allows you to open PDFs and office docs in Dangerzone by default so you never accidentally open a dangerous document
@ -34,10 +42,47 @@ Dangerzone can convert these types of document into safe PDFs:
- ODF Spreadsheet (`.ods`)
- ODF Presentation (`.odp`)
- ODF Graphics (`.odg`)
- Hancom HWP (Hangul Word Processor) (`.hwp`, `.hwpx`)
* Not supported on
[Qubes OS](https://github.com/freedomofpress/dangerzone/issues/494)
- EPUB (`.epub`)
- Jpeg (`.jpg`, `.jpeg`)
- GIF (`.gif`)
- PNG (`.png`)
- SVG (`.svg`)
- other image formats (`.bmp`, `.pnm`, `.pbm`, `.ppm`)
Dangerzone was inspired by [Qubes trusted PDF](https://blog.invisiblethings.org/2013/02/21/converting-untrusted-pdfs-into-trusted.html), but it works in non-Qubes operating systems. It uses containers as sandboxes instead of virtual machines (using Docker for macOS, Windows, and Debian/Ubuntu, and [podman](https://podman.io/) for Fedora).
Dangerzone was inspired by [Qubes trusted PDF](https://blog.invisiblethings.org/2013/02/21/converting-untrusted-pdfs-into-trusted.html), but it works in non-Qubes operating systems. It uses containers as sandboxes instead of virtual machines (using Docker for macOS and Windows, and [podman](https://podman.io/) on Linux).
Set up a development environment by following [these instructions](/BUILD.md).
Set up a development environment by following [these instructions](/BUILD.md).
# License and Copyright
Licensed under the AGPLv3: [https://opensource.org/licenses/agpl-3.0](https://opensource.org/licenses/agpl-3.0)
Copyright (c) 2022-2024 Freedom of the Press Foundation and Dangerzone contributors
Copyright (c) 2020-2021 First Look Media
## See also
* [GIJN Toolbox: Cutting-Edge — and Free — Online Investigative Tools You Can Try Right Now](https://gijn.org/stories/cutting-edge-free-online-investigative-tools/)
* [When security matters: working with Qubes OS at the Guardian](https://www.theguardian.com/info/2024/apr/04/when-security-matters-working-with-qubes-os-at-the-guardian)
## FAQ
### Has Dangerzone received a security audit?
Yes, Dangerzone received its [first security audit](https://freedom.press/news/dangerzone-receives-favorable-audit/) by [Include Security](https://includesecurity.com/) in December 2023. The audit was generally favorable, as it didn't identify any high-risk findings, except for 3 low-risk and 7 informational findings.
### "I'm experiencing an issue while using Dangerzone."
Dangerzone gets updates to improve its features _and_ to fix problems. So, updating may be the simplest path to resolving the issue which brought you here. Here is how to update:
1. Check which version of Dangerzone you are currently using: run Dangerzone, then look for a series of numbers to the right of the logo within the app. The format of the numbers will look similar to `0.4.1`
2. Now find the latest available version of Dangerzone: go to the [download page](https://dangerzone.rocks/#downloads). Look for the version number displayed. The number will be using the same format as in Step 1.
3. Is the version on the Dangerzone download page higher than the version of your installed app? Go ahead and update.
### Can I use Podman Desktop?
Yes! We've introduced [experimental support for Podman Desktop](https://github.com/freedomofpress/dangerzone/blob/main/docs/podman-desktop.md) on Windows and macOS.

View file

@ -1,80 +1,350 @@
# Release instructions
This section documents the release process. Unless you're a dangerzone developer making a release, you'll probably never need to follow it.
This section documents how we currently release Dangerzone for the different distributions we support.
## Changelog, version, and signed git tag
## Pre-release
Before making a release, all of these should be complete:
Here is a list of tasks that should be done before issuing the release:
- [ ] Create a new issue named **QA and Release for version \<VERSION\>**, to track the general progress.
You can generate its content with the `poetry run ./dev_scripts/generate-release-tasks.py` command.
- [ ] [Add new Linux platforms and remove obsolete ones](https://github.com/freedomofpress/dangerzone/blob/main/RELEASE.md#add-new-linux-platforms-and-remove-obsolete-ones)
- [ ] Bump the Python dependencies using `poetry lock`
- [ ] Check for new [WiX releases](https://github.com/wixtoolset/wix/releases) and update it if needed
- [ ] Update `version` in `pyproject.toml`
- [ ] Update `share/version.txt`
- [ ] Update version and download links in `README.md`, and screenshot if necessary
- [ ] Update the "Version" field in `install/linux/dangerzone.spec`
- [ ] Bump the Debian version by adding a new changelog entry in `debian/changelog`
- [ ] [Bump the minimum Docker Desktop versions](https://github.com/freedomofpress/dangerzone/blob/main/RELEASE.md#bump-the-minimum-docker-desktop-version) in `isolation_provider/container.py`
- [ ] Bump the dates and versions in the `Dockerfile`
- [ ] Update the download links in our `INSTALL.md` page to point to the new version (the download links will be populated after the release)
- [ ] Update screenshot in `README.md`, if necessary
- [ ] CHANGELOG.md should be updated to include a list of all major changes since the last release
- [ ] In `.circleci/config.yml`, add new platforms and remove obsolete platforms
- [ ] Create a test build in Windows and make sure it works
- [ ] Create a test build in macOS and make sure it works
- [ ] There must be a PGP-signed git tag for the version, e.g. for dangerzone 0.1.0, the tag must be `v0.1.0`
- [ ] A draft release should be created. Copy the release notes text from the template at [`docs/templates/release-notes`](https://github.com/freedomofpress/dangerzone/tree/main/docs/templates/)
- [ ] Send the release notes to editorial for review
- [ ] Do the QA tasks
Before making a release, verify the release git tag:
## Add new Linux platforms and remove obsolete ones
```
git fetch
git tag -v v$VERSION
```
Our currently supported Linux OSes are Debian, Ubuntu, Fedora (we treat Qubes OS
as a special case of Fedora, release-wise). For each of these platforms, we need
to check if a new version has been added, or if an existing one is now EOL
(https://endoflife.date/ is handy for this purpose).
If the tag verifies successfully, check it out:
In case of a new version (beta, RC, or official release):
```
git checkout v$VERSION
```
1. Add it in our CI workflows, to test if that version works.
* See `.circleci/config.yml` and `.github/workflows/ci.yml`, as well as
`dev_scripts/env.py` and `dev_scripts/qa.py`.
2. Do a test of this version locally with `dev_scripts/qa.py`. Focus on the
GUI part, since the basic functionality is already tested by our CI
workflows.
3. Add the new version in our `INSTALL.md` document, and drop a line in our
`CHANGELOG.md`.
4. If that version is a new stable release, update the `RELEASE.md` and
`BUILD.md` files where necessary.
5. Send a PR with the above changes.
## macOS release
In case of the removal of a version:
To make a macOS release, go to macOS build machine:
1. Remove any mention to this version from our repo.
* Consult the previous paragraph, but also `grep` your way around.
2. Add a notice in our `CHANGELOG.md` about the version removal.
## Bump the minimum Docker Desktop version
We embed the minimum Docker Desktop versions inside Dangerzone, as an incentive for our macOS and Windows users to upgrade to the latest version.
You can find the latest version at the time of the release by looking at [their release notes](https://docs.docker.com/desktop/release-notes/).
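A quick way to locate the embedded versions before bumping them (a sketch; this assumes the module lives at `dangerzone/isolation_provider/container.py` in the repository checkout):

```bash
grep -n -i "docker" dangerzone/isolation_provider/container.py
```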
## Large Document Testing
Parallel to the QA process, the release candidate should be put through the large document tests in a dedicated machine to run overnight.
Follow the instructions in `docs/developer/TESTING.md` to run the tests.
These tests will identify any regressions or progression in terms of document coverage.
## Release
Once we are confident that the release will be out shortly, and doesn't need any more changes:
- [ ] Create a PGP-signed git tag for the version, e.g., for dangerzone `v0.1.0`:
```bash
git tag -s v0.1.0
git push origin v0.1.0
```
**Note**: release candidates are suffixed by `-rcX`.
> [!IMPORTANT]
> Because we don't have [reproducible builds](https://github.com/freedomofpress/dangerzone/issues/188)
> yet, building the Dangerzone container image in various platforms would lead
> to different container image IDs / hashes, due to different timestamps. To
> avoid this issue, we should build the final container image for x86_64
> architectures on **one** platform, and then copy it to the rest of the
> platforms, before creating our .deb / .rpm / .msi / app bundles.
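A rough sketch of copying a pre-built x86_64 image from the build machine to the other platforms, assuming the image is exported to `share/container.tar` as in the steps below (the hostname is illustrative):

```bash
# On the machine that built the image: export it as a tarball
docker save dangerzone.rocks/dangerzone -o share/container.tar
# Copy it into the same location in the repo checkout on another build machine
scp share/container.tar windows-build-vm:dangerzone/share/container.tar
```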
### macOS Release
> [!TIP]
> You can automate these steps from your macOS terminal app with:
>
> ```
> export APPLE_ID=<email>
> make build-macos-intel # for Intel macOS
> make build-macos-arm # for Apple Silicon macOS
> ```
The following needs to happen for both Apple Silicon and Intel chipsets.
#### Initial Setup
- Build machine must have:
- macOS 10.14
- Apple-trusted `Developer ID Application: FIRST LOOK PRODUCTIONS, INC.` and `Developer ID Installer: FIRST LOOK PRODUCTIONS, INC.` code-signing certificates installed
- An app-specific Apple ID password saved in the login keychain called `flockagent-notarize`
- Verify and checkout the git tag for this release
- Run `poetry install`
- Run `poetry run ./install/macos/build_app.py --with-codesign`; this will make `dist/Dangerzone.dmg`
- Notarize it: `xcrun altool --notarize-app --primary-bundle-id "media.firstlook.dangerzone" -u "micah@firstlook.org" -p "@keychain:dangerzone-notarize" --file dist/Dangerzone.dmg`
- Wait for it to get approved, check status with: `xcrun altool --notarization-history 0 -u "micah@firstlook.org" -p "@keychain:dangerzone-notarize"`
- (If it gets rejected, you can see why with: `xcrun altool --notarization-info [RequestUUID] -u "micah@firstlook.org" -p "@keychain:dangerzone-notarize"`)
- After it's approved, staple the ticket: `xcrun stapler staple dist/Dangerzone.dmg`
- Apple-trusted `Developer ID Application: Freedom of the Press Foundation (94ZZGGGJ3W)` code-signing certificates installed
- Apple account must have:
- A valid application password for `notarytool` in the Keychain. You can verify
this by running: `xcrun notarytool history --apple-id "<email>" --keychain-profile "dz-notarytool-release-key"`. If you don't find it, you can add it to the Keychain by running
`xcrun notarytool store-credentials dz-notarytool-release-key --apple-id <email> --team-id <team ID>`
with the respective `email` and `team ID` (the latter can be obtained [here](https://developer.apple.com/help/account/manage-your-team/locate-your-team-id))
- Agreed to any new terms and conditions. You can find those if you visit
https://developer.apple.com and login with the proper Apple ID.
This process ends up with the final file:
#### Releasing and Signing
```
dist/Dangerzone.dmg
```
Here is what you need to do:
Rename `Dangerzone.dmg` to `Dangerzone-$VERSION.dmg`.
- [ ] Verify and install the latest supported Python version from
[python.org](https://www.python.org/downloads/macos/) (do not use the one from
brew as it is known to [cause issues](https://github.com/freedomofpress/dangerzone/issues/471))
## Windows release
- [ ] Checkout the dependencies, and clean your local copy:
To make a Windows release, go to the Windows build machine:
```bash
- Build machine should be running Windows 10, and have the Windows codesigning certificate installed
- Verify and checkout the git tag for this release
- Run `poetry install`
- Run `poetry shell`, then `cd ..\pyinstaller`, `python setup.py install`, `exit`
- Run `poetry run install\windows\step1-build-exe.bat`
- Open a second command prompt _as an administrator_, cd to the dangerzone directory, and run: `install\windows\step2-make-symlink.bat`
- Back in the first command prompt, run: `poetry run install\windows\step3-build-installer.bat`
- When you're done you will have `dist\Dangerzone.msi`
# In case of a new Python installation or minor version upgrade, e.g., from
# 3.11 to 3.12, reinstall Poetry
python3 -m pip install poetry
# You can verify the correct Python version is used
poetry debug info
# Export the version of this release
export VERSION=$(cat share/version.txt)
# Verify and checkout the git tag for this release:
git checkout -f v$VERSION
# Clean the git repository
git clean -df
# Clean up the environment
poetry env remove --all
# Install the dependencies
poetry sync
```
- [ ] Build the container image and the OCR language data
```bash
poetry run ./install/common/build-image.py
poetry run ./install/common/download-tessdata.py
# Copy the container image to the assets folder
cp share/container.tar ~dz/release-assets/$VERSION/dangerzone-$VERSION-arm64.tar
cp share/image-id.txt ~dz/release-assets/$VERSION/.
```
- [ ] Build the app bundle
```bash
poetry run ./install/macos/build-app.py
```
- [ ] Sign the application bundle, and notarize it
You need to run this command as the account that has access to the code signing certificate.
This command assumes that you have created, and stored in the Keychain, an
application password associated with your Apple Developer ID, which will be
used specifically for `notarytool`.
```bash
# Sign the .App and make it a .dmg
poetry run ./install/macos/build-app.py --only-codesign
# Notarize it. You must run this command from a terminal application
# within the macOS UI session.
xcrun notarytool submit ./dist/Dangerzone.dmg --apple-id $APPLE_ID --keychain-profile "dz-notarytool-release-key" --wait && xcrun stapler staple dist/Dangerzone.dmg
# Copy the .dmg to the assets folder
ARCH=$(uname -m)
if [ "$ARCH" = "x86_64" ]; then
ARCH="i686"
fi
cp dist/Dangerzone.dmg ~dz/release-assets/$VERSION/Dangerzone-$VERSION-$ARCH.dmg
```
### Windows Release
The Windows release is performed in a Windows 11 virtual machine (as opposed to a physical one).
#### Initial Setup
- Download a VirtualBox VM image for Windows from here: https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/ and import it into VirtualBox. Also install the Oracle VM VirtualBox Extension Pack.
- Install updates
- Install git for Windows from https://git-scm.com/download/win, and clone the dangerzone repo
- Follow the Windows build instructions in `BUILD.md`, except:
- Don't install Docker Desktop (it won't work without nested virtualization)
- Install the Windows SDK from here: https://developer.microsoft.com/en-us/windows/downloads/windows-sdk/ and add `C:\Program Files (x86)\Microsoft SDKs\ClickOnce\SignTool` to the path (you'll need it for `signtool.exe`)
- You'll also need the Windows codesigning certificate installed on the VM
#### Releasing and Signing
- [ ] Checkout the dependencies, and clean your local copy:
```bash
# In case of a new Python installation or minor version upgrade, e.g., from
# 3.11 to 3.12, reinstall Poetry
python3 -m pip install poetry
# You can verify the correct Python version is used
poetry debug info
# Export the version of this release
export VERSION=$(cat share/version.txt)
# Verify and checkout the git tag for this release:
git checkout -f v$VERSION
# Clean the git repository
git clean -df
# Clean up the environment
poetry env remove --all
# Install the dependencies
poetry sync
```
- [ ] Copy the container image into the VM
> [!IMPORTANT]
> Instead of running `python .\install\windows\build-image.py` in the VM, run the build image script on the host (making sure to build for `linux/amd64`). Copy `share/container.tar` and `share/image-id.txt` from the host into the `share` folder in the VM.
- [ ] Run `poetry run .\install\windows\build-app.bat`
- [ ] When you're done you will have `dist\Dangerzone.msi`
Rename `Dangerzone.msi` to `Dangerzone-$VERSION.msi`.
## Linux release
### Linux release
Linux binaries are automatically built and deployed to repositories when a new tag is pushed.
> [!TIP]
> You can automate these steps from any Linux distribution with:
>
> ```
> make build-linux
> ```
>
> You can then add the created artifacts to the appropriate APT/YUM repo.
## Publishing the release
Below we explain how we build packages for each Linux distribution we support.
To publish the release:
#### Debian/Ubuntu
- Create a new release on GitHub, put the changelog in the description of the release, and upload the macOS and Windows installers
- Update the [Installing Dangerzone](https://github.com/firstlookmedia/dangerzone/wiki/Installing-Dangerzone) wiki page
- Update the [Dangerzone website](https://github.com/firstlookmedia/dangerzone.rocks) to link to the new installers
Because the Debian packages do not contain compiled Python code for a specific
Python version, we can create a single Debian package and use it for all of our
Debian-based distros.
Create a Debian Bookworm development environment. You can [follow the
instructions in our build section](https://github.com/freedomofpress/dangerzone/blob/main/BUILD.md#debianubuntu),
or create your own locally with:
```sh
# Create and run debian bookworm development environment
./dev_scripts/env.py --distro debian --version bookworm build-dev
./dev_scripts/env.py --distro debian --version bookworm run --dev bash
# Build the latest container
./dev_scripts/env.py --distro debian --version bookworm run --dev bash -c "cd dangerzone && poetry run ./install/common/build-image.py"
# Create a .deb
./dev_scripts/env.py --distro debian --version bookworm run --dev bash -c "cd dangerzone && ./install/linux/build-deb.py"
```
Publish the .deb under `./deb_dist` to the
[`freedomofpress/apt-tools-prod`](https://github.com/freedomofpress/apt-tools-prod)
repo, by sending a PR. Follow the instructions in that repo on how to do so.
#### Fedora
> **NOTE**: This procedure will have to be done for every supported Fedora version.
>
> In this section, we'll use Fedora 41 as an example.
Create a Fedora development environment. You can [follow the
instructions in our build section](https://github.com/freedomofpress/dangerzone/blob/main/BUILD.md#fedora),
or create your own locally with:
```sh
./dev_scripts/env.py --distro fedora --version 41 build-dev
# Build the latest container (skip if already built):
./dev_scripts/env.py --distro fedora --version 41 run --dev bash -c "cd dangerzone && poetry run ./install/common/build-image.py"
# Create a .rpm:
./dev_scripts/env.py --distro fedora --version 41 run --dev bash -c "cd dangerzone && ./install/linux/build-rpm.py"
```
Publish the .rpm under `./dist` to the
[`freedomofpress/yum-tools-prod`](https://github.com/freedomofpress/yum-tools-prod) repo, by sending a PR. Follow the instructions in that repo on how to do so.
#### Qubes
Create a .rpm for Qubes:
```sh
./dev_scripts/env.py --distro fedora --version 41 run --dev bash -c "cd dangerzone && ./install/linux/build-rpm.py --qubes"
```
and similarly publish it to the [`freedomofpress/yum-tools-prod`](https://github.com/freedomofpress/yum-tools-prod)
repo.
## Publishing the Release
To publish the release, you can follow these steps:
- [ ] Create an archive of the Dangerzone source in `tar.gz` format:
```bash
export VERSION=$(cat share/version.txt)
git archive --format=tar.gz -o dangerzone-${VERSION:?}.tar.gz --prefix=dangerzone/ v${VERSION:?}
```
- [ ] Run a container scan on the produced container images (some time may have passed since the artifacts were built)
```bash
docker pull anchore/grype:latest
docker run --rm -v ./share/container.tar:/container.tar anchore/grype:latest /container.tar
```
- [ ] Collect the assets in a single directory, calculate their SHA-256 hashes, and sign them.
There is a `./dev_scripts/sign-assets.py` script to automate this task.
**Important:** Before running the script, make sure that the container images are the same as
the ones shipped for the other platforms (see our [Pre-release](#Pre-release) section).
```bash
# Sign all the assets
./dev_scripts/sign-assets.py ~/release-assets/$VERSION/github --version $VERSION
```
- [ ] Upload all the assets to the draft release on GitHub.
```bash
find ~/release-assets/$VERSION/github | xargs -n1 ./dev_scripts/upload-asset.py --token ~/token --draft
```
- [ ] Update the [Dangerzone website](https://github.com/freedomofpress/dangerzone.rocks) to link to the new installers.
- [ ] Update the brew cask release of Dangerzone with a [PR like this one](https://github.com/Homebrew/homebrew-cask/pull/116319)
- [ ] Update version and links to our installation instructions (`INSTALL.md`) in `README.md`
## Post-release
- [ ] Toot release announcement on our mastodon account @dangerzone@fosstodon.org
- [ ] Extend the `check_repos.yml` CI test for the newly added platforms

14
THIRD_PARTY_NOTICE Normal file
View file

@ -0,0 +1,14 @@
This project includes third-party components as follows:
1. gVisor APT Key
- URL: https://gvisor.dev/archive.key
- Last updated: 2025-01-21
- Description: This is the public key used for verifying packages from the gVisor repository.
2. Reproducible Containers Helper Script
- URL: https://github.com/reproducible-containers/repro-sources-list.sh/blob/d15cf12b26395b857b24fba223b108aff1c91b26/repro-sources-list.sh
- Last updated: 2025-01-21
- Description: This script is used for building reproducible Debian images.
Please refer to the respective sources for licensing information and further details regarding the use of these components.

Binary file not shown.

Before

Width:  |  Height:  |  Size: 68 KiB

After

Width:  |  Height:  |  Size: 97 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 53 KiB

After

Width:  |  Height:  |  Size: 88 KiB

View file

@ -1,102 +0,0 @@
FROM alpine:latest
# Install dependencies
RUN apk -U upgrade && \
apk add \
ghostscript \
graphicsmagick \
libreoffice \
openjdk8 \
poppler-utils \
python3 \
py3-magic \
py3-pillow \
sudo \
tesseract-ocr \
tesseract-ocr-data-afr \
tesseract-ocr-data-ara \
tesseract-ocr-data-aze \
tesseract-ocr-data-bel \
tesseract-ocr-data-ben \
tesseract-ocr-data-bul \
tesseract-ocr-data-cat \
tesseract-ocr-data-ces \
tesseract-ocr-data-chi_sim \
tesseract-ocr-data-chi_tra \
tesseract-ocr-data-chr \
tesseract-ocr-data-dan \
tesseract-ocr-data-deu \
tesseract-ocr-data-ell \
tesseract-ocr-data-enm \
tesseract-ocr-data-epo \
tesseract-ocr-data-equ \
tesseract-ocr-data-est \
tesseract-ocr-data-eus \
tesseract-ocr-data-fin \
tesseract-ocr-data-fra \
tesseract-ocr-data-frk \
tesseract-ocr-data-frm \
tesseract-ocr-data-glg \
tesseract-ocr-data-grc \
tesseract-ocr-data-heb \
tesseract-ocr-data-hin \
tesseract-ocr-data-hrv \
tesseract-ocr-data-hun \
tesseract-ocr-data-ind \
tesseract-ocr-data-isl \
tesseract-ocr-data-ita \
tesseract-ocr-data-ita_old \
tesseract-ocr-data-jpn \
tesseract-ocr-data-kan \
tesseract-ocr-data-kat \
tesseract-ocr-data-kor \
tesseract-ocr-data-lav \
tesseract-ocr-data-lit \
tesseract-ocr-data-mal \
tesseract-ocr-data-mkd \
tesseract-ocr-data-mlt \
tesseract-ocr-data-msa \
tesseract-ocr-data-nld \
tesseract-ocr-data-nor \
tesseract-ocr-data-pol \
tesseract-ocr-data-por \
tesseract-ocr-data-ron \
tesseract-ocr-data-rus \
tesseract-ocr-data-slk \
tesseract-ocr-data-slv \
tesseract-ocr-data-spa \
tesseract-ocr-data-spa_old \
tesseract-ocr-data-sqi \
tesseract-ocr-data-srp \
tesseract-ocr-data-swa \
tesseract-ocr-data-swe \
tesseract-ocr-data-tam \
tesseract-ocr-data-tel \
tesseract-ocr-data-tgl \
tesseract-ocr-data-tha \
tesseract-ocr-data-tur \
tesseract-ocr-data-ukr \
tesseract-ocr-data-vie
# Install pdftk
RUN \
wget https://gitlab.com/pdftk-java/pdftk/-/jobs/924565145/artifacts/raw/build/libs/pdftk-all.jar && \
mv pdftk-all.jar /usr/local/bin && \
chmod +x /usr/local/bin/pdftk-all.jar && \
echo '#!/bin/sh' > /usr/local/bin/pdftk && \
echo '/usr/bin/java -jar "/usr/local/bin/pdftk-all.jar" "$@"' >> /usr/local/bin/pdftk && \
chmod +x /usr/local/bin/pdftk
COPY dangerzone.py /usr/local/bin/
RUN chmod +x /usr/local/bin/dangerzone.py
# Add the unprivileged user
RUN adduser -h /home/user -s /bin/sh -D user
# /tmp/input_file is where the first convert expects the input file to be, and
# /tmp where it will write the pixel files
#
# /dangerzone is where the second script expects files to be put by the first one
#
# /safezone is where the wrapper eventually moves the sanitized files.
VOLUME /dangerzone /tmp/input_file /safezone

View file

@ -1,541 +0,0 @@
#!/usr/bin/env python3
"""
Here are the steps, with progress bar percentages for each step:
document_to_pixels
- 0%-3%: Convert document into a PDF (skipped if the input file is a PDF)
- 3%-5%: Split PDF into individual pages, and count those pages
- 5%-50%: Convert each page into pixels (each page takes 45/n%, where n is the number of pages)
pixels_to_pdf:
- 50%-95%: Convert each page of pixels into a PDF (each page takes 45/n%, where n is the number of pages)
- 95%-100%: Compress the final PDF
"""
import sys
import subprocess
import glob
import os
import json
import shutil
import magic
from PIL import Image
class DangerzoneConverter:
def __init__(self):
pass
def document_to_pixels(self):
percentage = 0.0
conversions = {
# .pdf
"application/pdf": {"type": None},
# .docx
"application/vnd.openxmlformats-officedocument.wordprocessingml.document": {
"type": "libreoffice",
"libreoffice_output_filter": "writer_pdf_Export",
},
# .doc
"application/msword": {
"type": "libreoffice",
"libreoffice_output_filter": "writer_pdf_Export",
},
# .docm
"application/vnd.ms-word.document.macroEnabled.12": {
"type": "libreoffice",
"libreoffice_output_filter": "writer_pdf_Export",
},
# .xlsx
"application/vnd.openxmlformats-officedocument.spreadsheetml.sheet": {
"type": "libreoffice",
"libreoffice_output_filter": "calc_pdf_Export",
},
# .xls
"application/vnd.ms-excel": {
"type": "libreoffice",
"libreoffice_output_filter": "calc_pdf_Export",
},
# .pptx
"application/vnd.openxmlformats-officedocument.presentationml.presentation": {
"type": "libreoffice",
"libreoffice_output_filter": "impress_pdf_Export",
},
# .ppt
"application/vnd.ms-powerpoint": {
"type": "libreoffice",
"libreoffice_output_filter": "impress_pdf_Export",
},
# .odt
"application/vnd.oasis.opendocument.text": {
"type": "libreoffice",
"libreoffice_output_filter": "writer_pdf_Export",
},
# .odg
"application/vnd.oasis.opendocument.graphics": {
"type": "libreoffice",
"libreoffice_output_filter": "impress_pdf_Export",
},
# .odp
"application/vnd.oasis.opendocument.presentation": {
"type": "libreoffice",
"libreoffice_output_filter": "impress_pdf_Export",
},
# .ods
"application/vnd.oasis.opendocument.spreadsheet": {
"type": "libreoffice",
"libreoffice_output_filter": "calc_pdf_Export",
},
# .jpg
"image/jpeg": {"type": "convert"},
# .gif
"image/gif": {"type": "convert"},
# .png
"image/png": {"type": "convert"},
# .tif
"image/tiff": {"type": "convert"},
"image/x-tiff": {"type": "convert"},
}
# Detect MIME type
mime = magic.Magic(mime=True)
mime_type = mime.from_file("/tmp/input_file")
# Validate MIME type
if mime_type not in conversions:
self.output(True, "The document format is not supported", percentage)
return 1
# Convert input document to PDF
conversion = conversions[mime_type]
if conversion["type"] is None:
pdf_filename = "/tmp/input_file"
elif conversion["type"] == "libreoffice":
self.output(False, "Converting to PDF using LibreOffice", percentage)
args = [
"libreoffice",
"--headless",
"--convert-to",
f"pdf:{conversion['libreoffice_output_filter']}",
"--outdir",
"/tmp",
"/tmp/input_file",
]
try:
p = subprocess.run(
args,
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
timeout=60,
)
except subprocess.TimeoutExpired:
self.output(
True,
"Error converting document to PDF, LibreOffice timed out after 60 seconds",
percentage,
)
return 1
if p.returncode != 0:
self.output(
True,
f"Conversion to PDF with LibreOffice failed",
percentage,
)
return 1
pdf_filename = "/tmp/input_file.pdf"
elif conversion["type"] == "convert":
self.output(False, "Converting to PDF using GraphicsMagick", percentage)
args = [
"gm",
"convert",
"/tmp/input_file",
"/tmp/input_file.pdf",
]
try:
p = subprocess.run(
args,
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
timeout=60,
)
except subprocess.TimeoutExpired:
self.output(
True,
"Error converting document to PDF, GraphicsMagick timed out after 60 seconds",
percentage,
)
return 1
if p.returncode != 0:
self.output(
True,
"Conversion to PDF with GraphicsMagick failed",
percentage,
)
return 1
pdf_filename = "/tmp/input_file.pdf"
else:
self.output(
True,
"Invalid conversion type",
percentage,
)
return 1
percentage += 3
# Separate PDF into pages
self.output(
False,
"Separating document into pages",
percentage,
)
args = ["pdftk", pdf_filename, "burst", "output", "/tmp/page-%d.pdf"]
try:
p = subprocess.run(
args, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, timeout=60
)
except subprocess.TimeoutExpired:
self.output(
True,
"Error separating document into pages, pdfseparate timed out after 60 seconds",
percentage,
)
return 1
if p.returncode != 0:
self.output(
True,
"Separating document into pages failed",
percentage,
)
return 1
page_filenames = glob.glob("/tmp/page-*.pdf")
percentage += 2
# Convert to RGB pixel data
percentage_per_page = 45.0 / len(page_filenames)
for page in range(1, len(page_filenames) + 1):
pdf_filename = f"/tmp/page-{page}.pdf"
png_filename = f"/tmp/page-{page}.png"
rgb_filename = f"/tmp/page-{page}.rgb"
width_filename = f"/tmp/page-{page}.width"
height_filename = f"/tmp/page-{page}.height"
filename_base = f"/tmp/page-{page}"
self.output(
False,
f"Converting page {page}/{len(page_filenames)} to pixels",
percentage,
)
# Convert to png
try:
p = subprocess.run(
["pdftocairo", pdf_filename, "-png", "-singlefile", filename_base],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
timeout=60,
)
except subprocess.TimeoutExpired:
self.output(
True,
"Error converting from PDF to PNG, pdftocairo timed out after 60 seconds",
percentage,
)
return 1
if p.returncode != 0:
self.output(
True,
"Conversion from PDF to PNG failed",
percentage,
)
return 1
# Save the width and height
im = Image.open(png_filename)
width, height = im.size
with open(width_filename, "w") as f:
f.write(str(width))
with open(height_filename, "w") as f:
f.write(str(height))
# Convert to RGB pixels
try:
p = subprocess.run(
[
"gm",
"convert",
png_filename,
"-depth",
"8",
f"rgb:{rgb_filename}",
],
timeout=60,
)
except subprocess.TimeoutExpired:
self.output(
True,
"Error converting from PNG to pixels, convert timed out after 60 seconds",
percentage,
)
return 1
if p.returncode != 0:
self.output(
True,
"Conversion from PNG to RGB failed",
percentage,
)
return 1
# Delete the png
os.remove(png_filename)
percentage += percentage_per_page
self.output(
False,
"Converted document to pixels",
percentage,
)
# Move converted files into /dangerzone
for filename in (
glob.glob("/tmp/page-*.rgb")
+ glob.glob("/tmp/page-*.width")
+ glob.glob("/tmp/page-*.height")
):
shutil.move(filename, "/dangerzone")
return 0
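The intermediate format written above is deliberately simple: each page is a raw 8-bit RGB dump plus two sidecar files with its width and height. As a hedged sketch (not part of this file), one page can be read back with Pillow like this, assuming the files produced above exist:

from PIL import Image  # Pillow, the same library this script already uses

# Sketch: rebuild page 1 from the raw pixel intermediate format.
with open("/dangerzone/page-1.width") as f:
    width = int(f.read().strip())
with open("/dangerzone/page-1.height") as f:
    height = int(f.read().strip())
with open("/dangerzone/page-1.rgb", "rb") as f:
    data = f.read()  # exactly width * height * 3 bytes of 8-bit RGB samples
page_image = Image.frombytes("RGB", (width, height), data)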
def pixels_to_pdf(self):
percentage = 50.0
num_pages = len(glob.glob("/dangerzone/page-*.rgb"))
# Convert RGB files to PDF files
percentage_per_page = 45.0 / num_pages
for page in range(1, num_pages + 1):
filename_base = f"/dangerzone/page-{page}"
rgb_filename = f"{filename_base}.rgb"
width_filename = f"{filename_base}.width"
height_filename = f"{filename_base}.height"
png_filename = f"/tmp/page-{page}.png"
ocr_filename = f"/tmp/page-{page}"
pdf_filename = f"/tmp/page-{page}.pdf"
with open(width_filename) as f:
width = f.read().strip()
with open(height_filename) as f:
height = f.read().strip()
if os.environ.get("OCR") == "1":
# OCR the document
self.output(
False,
f"Converting page {page}/{num_pages} from pixels to searchable PDF",
percentage,
)
args = [
"gm",
"convert",
"-size",
f"{width}x{height}",
"-depth",
"8",
f"rgb:{rgb_filename}",
f"png:{png_filename}",
]
try:
p = subprocess.run(
args,
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
timeout=60,
)
except subprocess.TimeoutExpired:
self.output(
True,
"Error converting pixels to PNG, convert timed out after 60 seconds",
percentage,
)
return 1
if p.returncode != 0:
self.output(
True,
f"Page {page}/{num_pages} conversion to PNG failed",
percentage,
)
return 1
args = [
"tesseract",
png_filename,
ocr_filename,
"-l",
os.environ.get("OCR_LANGUAGE"),
"--dpi",
"70",
"pdf",
]
try:
p = subprocess.run(
args,
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
timeout=60,
)
except subprocess.TimeoutExpired:
self.output(
True,
"Error converting PNG to searchable PDF, tesseract timed out after 60 seconds",
percentage,
)
return 1
if p.returncode != 0:
self.output(
True,
f"Page {page}/{num_pages} OCR failed",
percentage,
)
return 1
else:
# Don't OCR
self.output(
False,
f"Converting page {page}/{num_pages} from pixels to PDF",
percentage,
)
args = [
"gm",
"convert",
"-size",
f"{width}x{height}",
"-depth",
"8",
f"rgb:{rgb_filename}",
f"pdf:{pdf_filename}",
]
try:
p = subprocess.run(
args,
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
timeout=60,
)
except subprocess.TimeoutExpired:
self.output(
True,
"Error converting RGB to PDF, convert timed out after 60 seconds",
percentage,
)
return 1
if p.returncode != 0:
self.output(
True,
f"Page {page}/{num_pages} conversion to PDF failed",
percentage,
)
return 1
percentage += percentage_per_page
# Merge pages into a single PDF
self.output(
False,
f"Merging {num_pages} pages into a single PDF",
percentage,
)
args = ["pdfunite"]
for page in range(1, num_pages + 1):
args.append(f"/tmp/page-{page}.pdf")
args.append(f"/tmp/safe-output.pdf")
try:
p = subprocess.run(
args, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, timeout=60
)
except subprocess.TimeoutExpired:
self.output(
True,
"Error merging pages into a single PDF, pdfunite timed out after 60 seconds",
percentage,
)
return 1
if p.returncode != 0:
self.output(
True,
"Merging pages into a single PDF failed",
percentage,
)
return 1
percentage += 2
# Compress
self.output(
False,
f"Compressing PDF",
percentage,
)
compress_timeout = num_pages * 3
try:
p = subprocess.run(
["ps2pdf", "/tmp/safe-output.pdf", "/tmp/safe-output-compressed.pdf"],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
timeout=compress_timeout,
)
except subprocess.TimeoutExpired:
self.output(
True,
f"Error compressing PDF, ps2pdf timed out after {compress_timeout} seconds",
percentage,
)
return 1
if p.returncode != 0:
self.output(
True,
f"Compressing PDF failed",
percentage,
)
return 1
percentage = 100.0
self.output(False, "Safe PDF created", percentage)
# Move converted files into /safezone
shutil.move("/tmp/safe-output.pdf", "/safezone")
shutil.move("/tmp/safe-output-compressed.pdf", "/safezone")
return 0
def output(self, error, text, percentage):
print(json.dumps({"error": error, "text": text, "percentage": int(percentage)}))
sys.stdout.flush()
def main():
if len(sys.argv) != 2:
print(f"Usage: {sys.argv[0]} [document-to-pixels]|[pixels-to-pdf]")
return -1
converter = DangerzoneConverter()
if sys.argv[1] == "document-to-pixels":
return converter.document_to_pixels()
if sys.argv[1] == "pixels-to-pdf":
return converter.pixels_to_pdf()
return -1
if __name__ == "__main__":
sys.exit(main())
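Every progress update from this script is a single JSON object printed to stdout by output() above, and the host parses it line by line. A small sketch of that protocol, with an example line matching the fields used in output():

import json

# Example line as emitted by output(False, "Separating document into pages", 7):
line = '{"error": false, "text": "Separating document into pages", "percentage": 7}'
status = json.loads(line)
if status["error"]:
    print("Conversion failed:", status["text"])
else:
    print(f"{status['percentage']}% {status['text']}")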


@@ -1,6 +1,25 @@
import logging
import os
import sys
logger = logging.getLogger(__name__)
# Call freeze_support() to avoid passing unknown options to the subprocess.
# See https://github.com/freedomofpress/dangerzone/issues/873
import multiprocessing
multiprocessing.freeze_support()
try:
from . import vendor # type: ignore [attr-defined]
vendor_path: str = vendor.__path__[0]
logger.debug(f"Using vendored PyMuPDF libraries from '{vendor_path}'")
sys.path.insert(0, vendor_path)
except ImportError:
pass
if "DANGERZONE_MODE" in os.environ:
mode = os.environ["DANGERZONE_MODE"]
else:
@@ -13,4 +32,4 @@ else:
if mode == "cli":
from .cli import cli_main as main
else:
from .gui import gui_main as main
from .gui import gui_main as main # noqa: F401

109 dangerzone/args.py Normal file

@@ -0,0 +1,109 @@
import functools
import os
import sys
from typing import List, Optional, Tuple
import click
from . import errors
from .document import Document
@errors.handle_document_errors
def _validate_input_filename(
ctx: click.Context, param: str, value: Optional[str]
) -> Optional[str]:
if value is None:
return None
filename = Document.normalize_filename(value)
Document.validate_input_filename(filename)
return filename
@errors.handle_document_errors
def _validate_input_filenames(
ctx: click.Context, param: List[str], value: Tuple[str]
) -> List[str]:
normalized_filenames = []
for filename in value:
filename = Document.normalize_filename(filename)
Document.validate_input_filename(filename)
normalized_filenames.append(filename)
return normalized_filenames
@errors.handle_document_errors
def _validate_output_filename(
ctx: click.Context, param: str, value: Optional[str]
) -> Optional[str]:
if value is None:
return None
filename = Document.normalize_filename(value)
Document.validate_output_filename(filename)
return filename
# XXX: Click versions 7.x and below inspect the number of arguments that the
# callback handler supports. Unfortunately, common Python decorators (such as
# `handle_document_errors()`) mask this number, so we need to reinstate it
# somehow [1]. The simplest way to do so is using a wrapper function.
#
# Once we stop supporting Click 7.x, we can remove the wrappers below.
#
# [1]: https://github.com/freedomofpress/dangerzone/issues/206#issuecomment-1297336863
def validate_input_filename(
ctx: click.Context, param: str, value: Optional[str]
) -> Optional[str]:
return _validate_input_filename(ctx, param, value)
def validate_input_filenames(
ctx: click.Context, param: List[str], value: Tuple[str]
) -> List[str]:
return _validate_input_filenames(ctx, param, value)
def validate_output_filename(
ctx: click.Context, param: str, value: Optional[str]
) -> Optional[str]:
return _validate_output_filename(ctx, param, value)
def check_suspicious_options(args: List[str]) -> None:
options = set([arg for arg in args if arg.startswith("-")])
try:
files = set(os.listdir())
except Exception:
# If we can't list files in the current working directory, this means that
# we're probably in an unlinked directory. Dangerzone should still work in
# this case, so we should return here.
return
intersection = options & files
if intersection:
filenames_str = ", ".join(intersection)
msg = (
f"Security: Detected CLI options that are also present as files in the"
f" current working directory: {filenames_str}"
)
click.echo(msg)
sys.exit(1)
def override_parser_and_check_suspicious_options(click_main: click.Command) -> None:
"""Override the argument parsing logic of Click.
Click does not give us access to the raw arguments it receives (either from
sys.argv or from its testing module). To work around this, we override its
`Command.parse_args()` method, which is public and therefore safe to override.
We can use it to check for any suspicious options prior to arg parsing.
"""
orig_parse_fn = click_main.parse_args
@functools.wraps(orig_parse_fn)
def custom_parse_fn(ctx: click.Context, args: List[str]) -> List[str]:
check_suspicious_options(args)
return orig_parse_fn(ctx, args)
click_main.parse_args = custom_parse_fn # type: ignore [method-assign]
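To illustrate what the override above guards against: if the working directory contains a file whose name looks like a CLI option, argument parsing is aborted before Click ever sees it. A hedged sketch (the filename is only an example):

# Sketch: a file literally named "--ocr-lang" exists in the working directory.
# check_suspicious_options() runs before Click parses the arguments, so a call
# like `dangerzone-cli --ocr-lang eng document.pdf` prints a security warning
# and exits with status 1 instead of parsing the ambiguous option.
open("--ocr-lang", "w").close()
check_suspicious_options(["--ocr-lang", "eng", "document.pdf"])  # calls sys.exit(1)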


@@ -1,115 +1,355 @@
import os
import logging
import sys
import json
from typing import List, Optional
import click
from colorama import Fore, Style
from colorama import Back, Fore, Style
from .global_common import GlobalCommon
from .common import Common
from .container import convert
from . import args, errors
from .document import ARCHIVE_SUBDIR, SAFE_EXTENSION
from .isolation_provider.container import Container
from .isolation_provider.dummy import Dummy
from .isolation_provider.qubes import Qubes, is_qubes_native_conversion
from .logic import DangerzoneCore
from .settings import Settings
from .util import get_version, replace_control_chars
def print_header(s):
def print_header(s: str) -> None:
click.echo("")
click.echo(Style.BRIGHT + s)
@click.command()
@click.option("--output-filename", help="Default is filename ending with -safe.pdf")
@click.option(
"--output-filename",
callback=args.validate_output_filename,
help=f"Default is filename ending with {SAFE_EXTENSION}",
)
@click.option("--ocr-lang", help="Language to OCR, defaults to none")
@click.argument("filename", required=True)
def cli_main(output_filename, ocr_lang, filename):
global_common = GlobalCommon()
common = Common()
global_common.display_banner()
# Validate filename
valid = True
try:
with open(os.path.abspath(filename), "rb") as f:
pass
except:
valid = False
if not valid:
click.echo("Invalid filename")
return
common.input_filename = os.path.abspath(filename)
# Validate safe PDF output filename
if output_filename:
valid = True
if not output_filename.endswith(".pdf"):
click.echo("Safe PDF filename must end in '.pdf'")
return
try:
with open(os.path.abspath(output_filename), "wb") as f:
pass
except:
valid = False
if not valid:
click.echo("Safe PDF filename is not writable")
return
common.output_filename = os.path.abspath(output_filename)
else:
common.output_filename = (
f"{os.path.splitext(common.input_filename)[0]}-safe.pdf"
)
try:
with open(common.output_filename, "wb") as f:
pass
except:
@click.option(
"--archive",
"archive",
flag_value=True,
help=f"Archives the unsafe version in a subdirectory named '{ARCHIVE_SUBDIR}'",
)
@click.option(
"--unsafe-dummy-conversion", "dummy_conversion", flag_value=True, hidden=True
)
@click.argument(
"filenames",
required=False,
nargs=-1,
type=click.UNPROCESSED,
callback=args.validate_input_filenames,
)
@click.option(
"--debug",
"debug",
flag_value=True,
help="Run Dangerzone in debug mode, to get logs from gVisor.",
)
@click.option(
"--set-container-runtime",
required=False,
help=(
"The name or full path of the container runtime you want Dangerzone to use."
" You can specify the value 'default' if you want to take back your choice, and"
" let Dangerzone use the default runtime for this OS"
),
)
@click.version_option(version=get_version(), message="%(version)s")
@errors.handle_document_errors
def cli_main(
output_filename: Optional[str],
ocr_lang: Optional[str],
filenames: Optional[List[str]],
archive: bool,
dummy_conversion: bool,
debug: bool,
set_container_runtime: Optional[str] = None,
) -> None:
setup_logging()
display_banner()
if set_container_runtime:
settings = Settings()
if set_container_runtime == "default":
settings.unset_custom_runtime()
click.echo(
f"Output filename {common.output_filename} is not writable, use --output-filename"
"Instructed Dangerzone to use the default container runtime for this OS"
)
return
else:
container_runtime = settings.set_custom_runtime(
set_container_runtime, autosave=True
)
click.echo(f"Set the settings container_runtime to {container_runtime}")
sys.exit(0)
elif not filenames:
raise click.UsageError("Missing argument 'FILENAMES...'")
if getattr(sys, "dangerzone_dev", False) and dummy_conversion:
dangerzone = DangerzoneCore(Dummy())
elif is_qubes_native_conversion():
dangerzone = DangerzoneCore(Qubes())
else:
dangerzone = DangerzoneCore(Container(debug=debug))
if len(filenames) == 1 and output_filename:
dangerzone.add_document_from_filename(filenames[0], output_filename, archive)
elif len(filenames) > 1 and output_filename:
click.echo("--output-filename can only be used with one input file.")
sys.exit(1)
else:
for filename in filenames:
dangerzone.add_document_from_filename(filename, archive=archive)
# Validate OCR language
if ocr_lang:
valid = False
for lang in global_common.ocr_languages:
if global_common.ocr_languages[lang] == ocr_lang:
for lang in dangerzone.ocr_languages:
if dangerzone.ocr_languages[lang] == ocr_lang:
valid = True
break
if not valid:
click.echo("Invalid OCR language code. Valid language codes:")
for lang in global_common.ocr_languages:
click.echo(f"{global_common.ocr_languages[lang]}: {lang}")
return
for lang in dangerzone.ocr_languages:
click.echo(f"{dangerzone.ocr_languages[lang]}: {lang}")
sys.exit(1)
# Ensure container is installed
global_common.install_container()
dangerzone.isolation_provider.install()
# Convert the document
print_header("Converting document to safe PDF")
def stdout_callback(line):
try:
status = json.loads(line)
s = Style.BRIGHT + Fore.CYAN + f"{status['percentage']}% "
if status["error"]:
s += Style.RESET_ALL + Fore.RED + status["text"]
else:
s += Style.RESET_ALL + status["text"]
click.echo(s)
except:
click.echo(f"Invalid JSON returned from container: {line}")
dangerzone.convert_documents(ocr_lang)
documents_safe = dangerzone.get_safe_documents()
documents_failed = dangerzone.get_failed_documents()
if convert(
common.input_filename,
common.output_filename,
ocr_lang,
stdout_callback,
):
print_header("Safe PDF created successfully")
click.echo(common.output_filename)
sys.exit(0)
if documents_safe != []:
print_header("Safe PDF(s) created successfully")
for document in documents_safe:
click.echo(replace_control_chars(document.output_filename))
if archive:
print_header(
f"Unsafe (original) documents moved to '{ARCHIVE_SUBDIR}' subdirectory"
)
if documents_failed != []:
print_header("Failed to convert document(s)")
for document in documents_failed:
click.echo(replace_control_chars(document.input_filename))
sys.exit(1)
else:
print_header("Failed to convert document")
sys.exit(-1)
sys.exit(0)
args.override_parser_and_check_suspicious_options(cli_main)
def setup_logging() -> None:
class EndUserLoggingFormatter(logging.Formatter):
"""Prefixes any non-INFO log line with the log level"""
def format(self, record: logging.LogRecord) -> str:
if record.levelno == logging.INFO:
# Bypass formatter: print line directly
return record.getMessage()
else:
return super().format(record)
if getattr(sys, "dangerzone_dev", False):
fmt = "[%(levelname)-5s] %(message)s"
logging.basicConfig(level=logging.DEBUG, format=fmt)
else:
# prefix non-INFO log lines with the respective log type
fmt = "%(levelname)s %(message)s"
formatter = EndUserLoggingFormatter(fmt=fmt)
ch = logging.StreamHandler()
ch.setFormatter(formatter)
logger = logging.getLogger()
logger.setLevel(logging.INFO)
logger.addHandler(ch)
def display_banner() -> None:
"""
Raw ASCII art example:
Dangerzone v0.1.5
https://dangerzone.rocks
"""
print(Back.BLACK + Fore.YELLOW + Style.DIM + "╭──────────────────────────╮")
print(
Back.BLACK
+ Fore.YELLOW
+ Style.DIM
+ ""
+ Fore.LIGHTYELLOW_EX
+ Style.NORMAL
+ " ▄██▄ "
+ Fore.YELLOW
+ Style.DIM
+ ""
)
print(
Back.BLACK
+ Fore.YELLOW
+ Style.DIM
+ ""
+ Fore.LIGHTYELLOW_EX
+ Style.NORMAL
+ " ██████ "
+ Fore.YELLOW
+ Style.DIM
+ ""
)
print(
Back.BLACK
+ Fore.YELLOW
+ Style.DIM
+ ""
+ Fore.LIGHTYELLOW_EX
+ Style.NORMAL
+ " ███▀▀▀██ "
+ Fore.YELLOW
+ Style.DIM
+ ""
)
print(
Back.BLACK
+ Fore.YELLOW
+ Style.DIM
+ ""
+ Fore.LIGHTYELLOW_EX
+ Style.NORMAL
+ " ███ ████ "
+ Fore.YELLOW
+ Style.DIM
+ ""
)
print(
Back.BLACK
+ Fore.YELLOW
+ Style.DIM
+ ""
+ Fore.LIGHTYELLOW_EX
+ Style.NORMAL
+ " ███ ██████ "
+ Fore.YELLOW
+ Style.DIM
+ ""
)
print(
Back.BLACK
+ Fore.YELLOW
+ Style.DIM
+ ""
+ Fore.LIGHTYELLOW_EX
+ Style.NORMAL
+ " ███ ▀▀▀▀████ "
+ Fore.YELLOW
+ Style.DIM
+ ""
)
print(
Back.BLACK
+ Fore.YELLOW
+ Style.DIM
+ ""
+ Fore.LIGHTYELLOW_EX
+ Style.NORMAL
+ " ███████ ▄██████ "
+ Fore.YELLOW
+ Style.DIM
+ ""
)
print(
Back.BLACK
+ Fore.YELLOW
+ Style.DIM
+ ""
+ Fore.LIGHTYELLOW_EX
+ Style.NORMAL
+ " ███████ ▄█████████ "
+ Fore.YELLOW
+ Style.DIM
+ ""
)
print(
Back.BLACK
+ Fore.YELLOW
+ Style.DIM
+ ""
+ Fore.LIGHTYELLOW_EX
+ Style.NORMAL
+ " ████████████████████ "
+ Fore.YELLOW
+ Style.DIM
+ ""
)
print(
Back.BLACK
+ Fore.YELLOW
+ Style.DIM
+ ""
+ Fore.LIGHTYELLOW_EX
+ Style.NORMAL
+ " ▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀ "
+ Fore.YELLOW
+ Style.DIM
+ ""
)
print(Back.BLACK + Fore.YELLOW + Style.DIM + "│ │")
left_spaces = (15 - len(get_version()) - 1) // 2
right_spaces = left_spaces
if left_spaces + len(get_version()) + 1 + right_spaces < 15:
right_spaces += 1
print(
Back.BLACK
+ Fore.YELLOW
+ Style.DIM
+ ""
+ Style.RESET_ALL
+ Back.BLACK
+ Fore.LIGHTWHITE_EX
+ Style.BRIGHT
+ f"{' ' * left_spaces}Dangerzone v{get_version()}{' ' * right_spaces}"
+ Fore.YELLOW
+ Style.DIM
+ ""
)
print(
Back.BLACK
+ Fore.YELLOW
+ Style.DIM
+ ""
+ Style.RESET_ALL
+ Back.BLACK
+ Fore.LIGHTWHITE_EX
+ " https://dangerzone.rocks "
+ Fore.YELLOW
+ Style.DIM
+ ""
)
print(
Back.BLACK
+ Fore.YELLOW
+ Style.DIM
+ "╰──────────────────────────╯"
+ Style.RESET_ALL
)


@@ -1,16 +0,0 @@
import os
import stat
import platform
import tempfile
import appdirs
class Common(object):
"""
The Common class is a singleton of shared functionality throughout an open dangerzone window
"""
def __init__(self):
# Name of input and out files
self.input_filename = None
self.output_filename = None


@@ -1,212 +0,0 @@
import platform
import subprocess
import pipes
import shutil
import os
import tempfile
import appdirs
# What container tech is used for this platform?
if platform.system() == "Linux":
container_tech = "podman"
else:
# Windows, Darwin, and unknown use docker for now, dangerzone-vm eventually
container_tech = "docker"
# Define startupinfo for subprocesses
if platform.system() == "Windows":
startupinfo = subprocess.STARTUPINFO()
startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
else:
startupinfo = None
# Name of the dangerzone container
container_name = "dangerzone.rocks/dangerzone"
def exec(args, stdout_callback=None):
args_str = " ".join(pipes.quote(s) for s in args)
print("> " + args_str)
with subprocess.Popen(
args,
stdin=None,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
bufsize=1,
universal_newlines=True,
startupinfo=startupinfo,
) as p:
if stdout_callback:
for line in p.stdout:
stdout_callback(line)
p.communicate()
return p.returncode
def exec_container(args, stdout_callback=None):
if container_tech == "podman":
container_runtime = shutil.which("podman")
else:
container_runtime = shutil.which("docker")
args = [container_runtime] + args
return exec(args, stdout_callback)
def convert(input_filename, output_filename, ocr_lang, stdout_callback):
success = False
if ocr_lang:
ocr = "1"
else:
ocr = "0"
dz_tmp = os.path.join(appdirs.user_config_dir("dangerzone"), "tmp")
os.makedirs(dz_tmp, exist_ok=True)
tmpdir = tempfile.TemporaryDirectory(dir=dz_tmp)
pixel_dir = os.path.join(tmpdir.name, "pixels")
safe_dir = os.path.join(tmpdir.name, "safe")
os.makedirs(pixel_dir, exist_ok=True)
os.makedirs(safe_dir, exist_ok=True)
if container_tech == "docker":
platform_args = ["--platform", "linux/amd64"]
else:
platform_args = []
# Convert document to pixels
args = (
["run", "--network", "none"]
+ platform_args
+ [
"-v",
f"{input_filename}:/tmp/input_file",
"-v",
f"{pixel_dir}:/dangerzone",
container_name,
"/usr/bin/python3",
"/usr/local/bin/dangerzone.py",
"document-to-pixels",
]
)
ret = exec_container(args, stdout_callback)
if ret != 0:
print("documents-to-pixels failed")
else:
# TODO: validate convert to pixels output
# Convert pixels to safe PDF
args = (
["run", "--network", "none"]
+ platform_args
+ [
"-v",
f"{pixel_dir}:/dangerzone",
"-v",
f"{safe_dir}:/safezone",
"-e",
f"OCR={ocr}",
"-e",
f"OCR_LANGUAGE={ocr_lang}",
container_name,
"/usr/bin/python3",
"/usr/local/bin/dangerzone.py",
"pixels-to-pdf",
]
)
ret = exec_container(args, stdout_callback)
if ret != 0:
print("pixels-to-pdf failed")
else:
# Move the final file to the right place
if os.path.exists(output_filename):
os.remove(output_filename)
container_output_filename = os.path.join(
safe_dir, "safe-output-compressed.pdf"
)
shutil.move(container_output_filename, output_filename)
# We did it
success = True
# Clean up
tmpdir.cleanup()
return success
# From global_common:
# def validate_convert_to_pixel_output(self, common, output):
# """
# Take the output from the convert to pixels tasks and validate it. Returns
# a tuple like: (success (boolean), error_message (str))
# """
# max_image_width = 10000
# max_image_height = 10000
# # Did we hit an error?
# for line in output.split("\n"):
# if (
# "failed:" in line
# or "The document format is not supported" in line
# or "Error" in line
# ):
# return False, output
# # How many pages was that?
# num_pages = None
# for line in output.split("\n"):
# if line.startswith("Document has "):
# num_pages = line.split(" ")[2]
# break
# if not num_pages or not num_pages.isdigit() or int(num_pages) <= 0:
# return False, "Invalid number of pages returned"
# num_pages = int(num_pages)
# # Make sure we have the files we expect
# expected_filenames = []
# for i in range(1, num_pages + 1):
# expected_filenames += [
# f"page-{i}.rgb",
# f"page-{i}.width",
# f"page-{i}.height",
# ]
# expected_filenames.sort()
# actual_filenames = os.listdir(common.pixel_dir.name)
# actual_filenames.sort()
# if expected_filenames != actual_filenames:
# return (
# False,
# f"We expected these files:\n{expected_filenames}\n\nBut we got these files:\n{actual_filenames}",
# )
# # Make sure the files are the correct sizes
# for i in range(1, num_pages + 1):
# with open(f"{common.pixel_dir.name}/page-{i}.width") as f:
# w_str = f.read().strip()
# with open(f"{common.pixel_dir.name}/page-{i}.height") as f:
# h_str = f.read().strip()
# w = int(w_str)
# h = int(h_str)
# if (
# not w_str.isdigit()
# or not h_str.isdigit()
# or w <= 0
# or w > max_image_width
# or h <= 0
# or h > max_image_height
# ):
# return False, f"Page {i} has invalid geometry"
# # Make sure the RGB file is the correct size
# if os.path.getsize(f"{common.pixel_dir.name}/page-{i}.rgb") != w * h * 3:
# return False, f"Page {i} has an invalid RGB file size"
# return True, True


@@ -0,0 +1,235 @@
#!/usr/bin/python3
import json
import os
import shlex
import subprocess
import sys
import typing
# This script wraps the command-line arguments passed to it to run as an
# unprivileged user in a gVisor sandbox.
# Its behavior can be modified with the following environment variables:
# RUNSC_DEBUG: If set, print debug messages to stderr, and log all gVisor
# output to stderr.
# RUNSC_FLAGS: If set, pass these flags to the `runsc` invocation.
# These environment variables are not passed on to the sandboxed process.
def log(message: str, *values: typing.Any) -> None:
"""Helper function to log messages if RUNSC_DEBUG is set."""
if os.environ.get("RUNSC_DEBUG"):
print(message.format(*values), file=sys.stderr)
command = sys.argv[1:]
if len(command) == 0:
log("Invoked without a command; will execute 'sh'.")
command = ["sh"]
else:
log("Invoked with command: {}", " ".join(shlex.quote(s) for s in command))
# Build and write container OCI config.
oci_config: dict[str, typing.Any] = {
"ociVersion": "1.0.0",
"process": {
"user": {
# Hardcode the UID/GID of the container image to 1000, since we're in
# control of the image creation, and we don't expect it to change.
"uid": 1000,
"gid": 1000,
},
"args": command,
"env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"PYTHONPATH=/opt/dangerzone",
"TERM=xterm",
],
"cwd": "/",
"capabilities": {
"bounding": [],
"effective": [],
"inheritable": [],
"permitted": [],
},
"rlimits": [
{"type": "RLIMIT_NOFILE", "hard": 4096, "soft": 4096},
],
},
"root": {"path": "rootfs", "readonly": True},
"hostname": "dangerzone",
"mounts": [
# Mask almost every system directory of the outer container, by mounting tmpfs
# on top of them. This is done to avoid leaking any sensitive information,
# either mounted by Podman/Docker, or when gVisor runs, since we reuse the same
# rootfs. We basically mask everything except for `/usr`, `/bin`, `/lib`,
# `/etc`, and `/opt`.
#
# Note that we set `--root /home/dangerzone/.containers` for the directory where
# gVisor will create files at runtime, which means that in principle, we are
# covered by the masking of `/home/dangerzone` that follows below.
#
# Finally, note that the following list has been taken from the dirs in our
# container image, and double-checked against the top-level dirs listed in the
# Filesystem Hierarchy Standard (FHS) [1]. It would be nice to have an allowlist
# approach instead of a denylist, but FHS is such an old standard that we don't
# expect any new top-level dirs to pop up any time soon.
#
# [1] https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard
{
"destination": "/boot",
"type": "tmpfs",
"source": "tmpfs",
"options": ["nosuid", "noexec", "nodev", "ro"],
},
{
"destination": "/dev",
"type": "tmpfs",
"source": "tmpfs",
"options": ["nosuid", "noexec", "nodev"],
},
{
"destination": "/home",
"type": "tmpfs",
"source": "tmpfs",
"options": ["nosuid", "noexec", "nodev", "ro"],
},
{
"destination": "/media",
"type": "tmpfs",
"source": "tmpfs",
"options": ["nosuid", "noexec", "nodev", "ro"],
},
{
"destination": "/mnt",
"type": "tmpfs",
"source": "tmpfs",
"options": ["nosuid", "noexec", "nodev", "ro"],
},
{
"destination": "/proc",
"type": "proc",
"source": "proc",
},
{
"destination": "/root",
"type": "tmpfs",
"source": "tmpfs",
"options": ["nosuid", "noexec", "nodev", "ro"],
},
{
"destination": "/run",
"type": "tmpfs",
"source": "tmpfs",
"options": ["nosuid", "noexec", "nodev"],
},
{
"destination": "/sbin",
"type": "tmpfs",
"source": "tmpfs",
"options": ["nosuid", "noexec", "nodev", "ro"],
},
{
"destination": "/srv",
"type": "tmpfs",
"source": "tmpfs",
"options": ["nosuid", "noexec", "nodev", "ro"],
},
{
"destination": "/sys",
"type": "tmpfs",
"source": "tmpfs",
"options": ["nosuid", "noexec", "nodev", "ro"],
},
{
"destination": "/tmp",
"type": "tmpfs",
"source": "tmpfs",
"options": ["nosuid", "noexec", "nodev"],
},
{
"destination": "/var",
"type": "tmpfs",
"source": "tmpfs",
"options": ["nosuid", "noexec", "nodev"],
},
# LibreOffice needs a writable home directory, so just mount a tmpfs
# over it.
{
"destination": "/home/dangerzone",
"type": "tmpfs",
"source": "tmpfs",
"options": ["nosuid", "noexec", "nodev"],
},
# Used for LibreOffice extensions, which are only conditionally
# installed depending on which file is being converted.
{
"destination": "/usr/lib/libreoffice/share/extensions/",
"type": "tmpfs",
"source": "tmpfs",
"options": ["nosuid", "noexec", "nodev"],
},
],
"linux": {
"namespaces": [
{"type": "pid"},
{"type": "network"},
{"type": "ipc"},
{"type": "uts"},
{"type": "mount"},
],
},
}
not_forwarded_env = set(
(
"PATH",
"HOME",
"SHLVL",
"HOSTNAME",
"TERM",
"PWD",
"RUNSC_FLAGS",
"RUNSC_DEBUG",
)
)
for key_val in oci_config["process"]["env"]:
not_forwarded_env.add(key_val[: key_val.index("=")])
for key, val in os.environ.items():
if key in not_forwarded_env:
continue
oci_config["process"]["env"].append("%s=%s" % (key, val))
if os.environ.get("RUNSC_DEBUG"):
log("Command inside gVisor sandbox: {}", command)
log("OCI config:")
json.dump(oci_config, sys.stderr, indent=2, sort_keys=True)
# json.dump doesn't print a trailing newline, so print one here:
log("")
with open("/home/dangerzone/dangerzone-image/config.json", "w") as oci_config_out:
json.dump(oci_config, oci_config_out, indent=2, sort_keys=True)
# Run gVisor.
runsc_argv = [
"/usr/bin/runsc",
"--rootless=true",
"--network=none",
"--root=/home/dangerzone/.containers",
# Disable DirectFS to make the seccomp filter even stricter,
# at some performance cost.
"--directfs=false",
]
if os.environ.get("RUNSC_DEBUG"):
runsc_argv += ["--debug=true", "--alsologtostderr=true"]
if os.environ.get("RUNSC_FLAGS"):
runsc_argv += [x for x in shlex.split(os.environ.get("RUNSC_FLAGS", "")) if x]
runsc_argv += ["run", "--bundle=/home/dangerzone/dangerzone-image", "dangerzone"]
log(
"Running gVisor with command line: {}", " ".join(shlex.quote(s) for s in runsc_argv)
)
runsc_process = subprocess.run(
runsc_argv,
check=False,
)
log("gVisor quit with exit code: {}", runsc_process.returncode)
# We're done.
sys.exit(runsc_process.returncode)
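The RUNSC_DEBUG and RUNSC_FLAGS environment variables documented at the top of this wrapper are meant to be set on the outer container. A hedged sketch of how that might look from the host side; the podman invocation is illustrative and not taken from this diff:

import subprocess

# Sketch: enable gVisor debug logging for one sandboxed run.
# RUNSC_DEBUG makes the wrapper print its OCI config and runsc logs to stderr;
# RUNSC_FLAGS is shlex-split and appended to the runsc command line.
subprocess.run([
    "podman", "run", "--rm",
    "-e", "RUNSC_DEBUG=1",
    "-e", "RUNSC_FLAGS=--strace",  # example flag only
    "dangerzone.rocks/dangerzone",  # image name used elsewhere in this diff
])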


@@ -0,0 +1,29 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBF0meAYBEACcBYPOSBiKtid+qTQlbgKGPxUYt0cNZiQqWXylhYUT4PuNlNx5
s+sBLFvNTpdTrXMmZ8NkekyjD1HardWvebvJT4u+Ho/9jUr4rP71cNwNtocz/w8G
DsUXSLgH8SDkq6xw0L+5eGc78BBg9cOeBeFBm3UPgxTBXS9Zevoi2w1lzSxkXvjx
cGzltzMZfPXERljgLzp9AAfhg/2ouqVQm37fY+P/NDzFMJ1XHPIIp9KJl/prBVud
jJJteFZ5sgL6MwjBQq2kw+q2Jb8Zfjl0BeXDgGMN5M5lGhX2wTfiMbfo7KWyzRnB
RpSP3BxlLqYeQUuLG5Yx8z3oA3uBkuKaFOKvXtiScxmGM/+Ri2YM3m66imwDhtmP
AKwTPI3Re4gWWOffglMVSv2sUAY32XZ74yXjY1VhK3bN3WFUPGrgQx4X7GP0A1Te
lzqkT3VSMXieImTASosK5L5Q8rryvgCeI9tQLn9EpYFCtU3LXvVgTreGNEEjMOnL
dR7yOU+Fs775stn6ucqmdYarx7CvKUrNAhgEeHMonLe1cjYScF7NfLO1GIrQKJR2
DE0f+uJZ52inOkO8ufh3WVQJSYszuS3HCY7w5oj1aP38k/y9zZdZvVvwAWZaiqBQ
iwjVs6Kub76VVZZhRDf4iYs8k1Zh64nXdfQt250d8U5yMPF3wIJ+c1yhxwARAQAB
tCpUaGUgZ1Zpc29yIEF1dGhvcnMgPGd2aXNvci1ib3RAZ29vZ2xlLmNvbT6JAk4E
EwEKADgCGwMFCwkIBwIGFQoJCAsCBBYCAwECHgECF4AWIQRvHfheOnHCSRjnJ9Vv
xtVU4yvZQwUCYO4TxQAKCRBvxtVU4yvZQ9UoEACLPV7CnEA2bjCPi0NCWB/Mo1WL
evqv7Wv7vmXzI1K9DrqOhxuamQW75SVXg1df0hTJWbKFmDAip6NEC2Rg5P+A8hHj
nW/VG+q4ZFT662jDhnXQiO9L7EZzjyqNF4yWYzzgnqEu/SmGkDLDYiUCcGBqS2oE
EQfk7RHJSLMJXAnNDH7OUDgrirSssg/dlQ5uAHA9Au80VvC5fsTKza8b3Aydw3SV
iB8/Yuikbl8wKbpSGiXtR4viElXjNips0+mBqaUk2xpqSBrsfN+FezcInVXaXFeq
xtpq2/3M3DYbqCRjqeyd9wNi92FHdOusNrK4MYe0pAYbGjc65BwH+F0T4oJ8ZSJV
lIt+FZ0MqM1T97XadybYFsJh8qvajQpZEPL+zzNncc4f1d80e7+lwIZV/al0FZWW
Zlp7TpbeO/uW+lHs5W14YKwaQVh1whapKXTrATipNOOSCw2hnfrT8V7Hy55QWaGZ
f4/kfy929EeCP16d/LqOClv0j0RBr6NhRBQ0l/BE/mXjJwIk6nKwi+Yi4ek1ARi6
AlCMLn9AZF7aTGpvCiftzIrlyDfVZT5IX03TayxRHZ4b1Rj8eyJaHcjI49u83gkr
4LGX08lEawn9nxFSx4RCg2swGiYw5F436wwwAIozqJuDASeTa3QND3au5v0oYWnl
umDySUl5wPaAaALgzA==
=5/8T
-----END PGP PUBLIC KEY BLOCK-----


@@ -0,0 +1,103 @@
#!/bin/bash
#
# Copyright The repro-sources-list.sh Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# -----------------------------------------------------------------------------
# repro-sources-list.sh:
# configures /etc/apt/sources.list and similar files for installing packages from a snapshot.
#
# This script is expected to be executed inside Dockerfile.
#
# The following distributions are supported:
# - debian:11 (/etc/apt/sources.list)
# - debian:12 (/etc/apt/sources.list.d/debian.sources)
# - ubuntu:22.04 (/etc/apt/sources.list)
# - ubuntu:24.04 (/etc/apt/sources.list.d/ubuntu.sources)
# - archlinux (/etc/pacman.d/mirrorlist)
#
# For further information, see https://github.com/reproducible-containers/repro-sources-list.sh
# -----------------------------------------------------------------------------
set -eux -o pipefail
. /etc/os-release
: "${KEEP_CACHE:=1}"
keep_apt_cache() {
rm -f /etc/apt/apt.conf.d/docker-clean
echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' >/etc/apt/apt.conf.d/keep-cache
}
case "${ID}" in
"debian")
: "${SNAPSHOT_ARCHIVE_BASE:=http://snapshot.debian.org/archive/}"
: "${BACKPORTS:=}"
if [ -e /etc/apt/sources.list.d/debian.sources ]; then
: "${SOURCE_DATE_EPOCH:=$(stat --format=%Y /etc/apt/sources.list.d/debian.sources)}"
rm -f /etc/apt/sources.list.d/debian.sources
else
: "${SOURCE_DATE_EPOCH:=$(stat --format=%Y /etc/apt/sources.list)}"
fi
snapshot="$(printf "%(%Y%m%dT%H%M%SZ)T\n" "${SOURCE_DATE_EPOCH}")"
# TODO: use the new format for Debian >= 12
echo "deb [check-valid-until=no] ${SNAPSHOT_ARCHIVE_BASE}debian/${snapshot} ${VERSION_CODENAME} main" >/etc/apt/sources.list
echo "deb [check-valid-until=no] ${SNAPSHOT_ARCHIVE_BASE}debian-security/${snapshot} ${VERSION_CODENAME}-security main" >>/etc/apt/sources.list
echo "deb [check-valid-until=no] ${SNAPSHOT_ARCHIVE_BASE}debian/${snapshot} ${VERSION_CODENAME}-updates main" >>/etc/apt/sources.list
if [ "${BACKPORTS}" = 1 ]; then echo "deb [check-valid-until=no] ${SNAPSHOT_ARCHIVE_BASE}debian/${snapshot} ${VERSION_CODENAME}-backports main" >>/etc/apt/sources.list; fi
if [ "${KEEP_CACHE}" = 1 ]; then keep_apt_cache; fi
;;
"ubuntu")
: "${SNAPSHOT_ARCHIVE_BASE:=http://snapshot.ubuntu.com/}"
if [ -e /etc/apt/sources.list.d/ubuntu.sources ]; then
: "${SOURCE_DATE_EPOCH:=$(stat --format=%Y /etc/apt/sources.list.d/ubuntu.sources)}"
rm -f /etc/apt/sources.list.d/ubuntu.sources
else
: "${SOURCE_DATE_EPOCH:=$(stat --format=%Y /etc/apt/sources.list)}"
fi
snapshot="$(printf "%(%Y%m%dT%H%M%SZ)T\n" "${SOURCE_DATE_EPOCH}")"
# TODO: use the new format for Ubuntu >= 24.04
echo "deb [check-valid-until=no] ${SNAPSHOT_ARCHIVE_BASE}ubuntu/${snapshot} ${VERSION_CODENAME} main restricted" >/etc/apt/sources.list
echo "deb [check-valid-until=no] ${SNAPSHOT_ARCHIVE_BASE}ubuntu/${snapshot} ${VERSION_CODENAME}-updates main restricted" >>/etc/apt/sources.list
echo "deb [check-valid-until=no] ${SNAPSHOT_ARCHIVE_BASE}ubuntu/${snapshot} ${VERSION_CODENAME} universe" >>/etc/apt/sources.list
echo "deb [check-valid-until=no] ${SNAPSHOT_ARCHIVE_BASE}ubuntu/${snapshot} ${VERSION_CODENAME}-updates universe" >>/etc/apt/sources.list
echo "deb [check-valid-until=no] ${SNAPSHOT_ARCHIVE_BASE}ubuntu/${snapshot} ${VERSION_CODENAME} multiverse" >>/etc/apt/sources.list
echo "deb [check-valid-until=no] ${SNAPSHOT_ARCHIVE_BASE}ubuntu/${snapshot} ${VERSION_CODENAME}-updates multiverse" >>/etc/apt/sources.list
echo "deb [check-valid-until=no] ${SNAPSHOT_ARCHIVE_BASE}ubuntu/${snapshot} ${VERSION_CODENAME}-backports main restricted universe multiverse" >>/etc/apt/sources.list
echo "deb [check-valid-until=no] ${SNAPSHOT_ARCHIVE_BASE}ubuntu/${snapshot} ${VERSION_CODENAME}-security main restricted" >>/etc/apt/sources.list
echo "deb [check-valid-until=no] ${SNAPSHOT_ARCHIVE_BASE}ubuntu/${snapshot} ${VERSION_CODENAME}-security universe" >>/etc/apt/sources.list
echo "deb [check-valid-until=no] ${SNAPSHOT_ARCHIVE_BASE}ubuntu/${snapshot} ${VERSION_CODENAME}-security multiverse" >>/etc/apt/sources.list
if [ "${KEEP_CACHE}" = 1 ]; then keep_apt_cache; fi
# http://snapshot.ubuntu.com is redirected to https, so we have to install ca-certificates
export DEBIAN_FRONTEND=noninteractive
apt-get -o Acquire::https::Verify-Peer=false update >&2
apt-get -o Acquire::https::Verify-Peer=false install -y ca-certificates >&2
;;
"arch")
: "${SNAPSHOT_ARCHIVE_BASE:=http://archive.archlinux.org/}"
: "${SOURCE_DATE_EPOCH:=$(stat --format=%Y /var/log/pacman.log)}"
export SOURCE_DATE_EPOCH
# shellcheck disable=SC2016
date -d "@${SOURCE_DATE_EPOCH}" "+Server = ${SNAPSHOT_ARCHIVE_BASE}repos/%Y/%m/%d/\$repo/os/\$arch" >/etc/pacman.d/mirrorlist
;;
*)
echo >&2 "Unsupported distribution: ${ID}"
exit 1
;;
esac
: "${WRITE_SOURCE_DATE_EPOCH:=/dev/null}"
echo "${SOURCE_DATE_EPOCH}" >"${WRITE_SOURCE_DATE_EPOCH}"
echo "SOURCE_DATE_EPOCH=${SOURCE_DATE_EPOCH}"


@@ -0,0 +1,201 @@
import logging
import os
import platform
import shutil
import subprocess
from pathlib import Path
from typing import List, Optional, Tuple
from . import errors
from .settings import Settings
from .util import get_resource_path, get_subprocess_startupinfo
CONTAINER_NAME = "dangerzone.rocks/dangerzone"
log = logging.getLogger(__name__)
class Runtime(object):
"""Represents the container runtime to use.
- It can be specified via the settings, using the "container_runtime" key,
which should point to the full path of the runtime;
- If the runtime is not specified via the settings, it defaults
to "podman" on Linux and "docker" on macOS and Windows.
"""
def __init__(self) -> None:
settings = Settings()
if settings.custom_runtime_specified():
self.path = Path(settings.get("container_runtime"))
if not self.path.exists():
raise errors.UnsupportedContainerRuntime(self.path)
self.name = self.path.stem
else:
self.name = self.get_default_runtime_name()
self.path = Runtime.path_from_name(self.name)
if self.name not in ("podman", "docker"):
raise errors.UnsupportedContainerRuntime(self.name)
@staticmethod
def path_from_name(name: str) -> Path:
name_path = Path(name)
if name_path.is_file():
return name_path
else:
runtime = shutil.which(name_path)
if runtime is None:
raise errors.NoContainerTechException(name)
return Path(runtime)
@staticmethod
def get_default_runtime_name() -> str:
return "podman" if platform.system() == "Linux" else "docker"
def get_runtime_version(runtime: Optional[Runtime] = None) -> Tuple[int, int]:
"""Get the major/minor parts of the Docker/Podman version.
Some of the operations we perform in this module rely on some Podman features
that are not available across all of our platforms. In order to have a proper
fallback, we need to know the Podman version. More specifically, we're fine with
just knowing the major and minor version, since writing/installing a full-blown
semver parser would be overkill.
"""
runtime = runtime or Runtime()
# Get the Docker/Podman version, using a Go template.
if runtime.name == "podman":
query = "{{.Client.Version}}"
else:
query = "{{.Server.Version}}"
cmd = [str(runtime.path), "version", "-f", query]
try:
version = subprocess.run(
cmd,
startupinfo=get_subprocess_startupinfo(),
capture_output=True,
check=True,
).stdout.decode()
except Exception as e:
msg = f"Could not get the version of the {runtime.name.capitalize()} tool: {e}"
raise RuntimeError(msg) from e
# Parse this version and return the major/minor parts, since we don't need the
# rest.
try:
major, minor, _ = version.split(".", 3)
return (int(major), int(minor))
except Exception as e:
msg = (
f"Could not parse the version of the {runtime.name.capitalize()} tool"
f" (found: '{version}') due to the following error: {e}"
)
raise RuntimeError(msg)
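A short usage sketch for the helper above, mirroring the Podman 3.4 workaround implemented further down in this file:

# Sketch: branch on the detected runtime version.
runtime = Runtime()
major, minor = get_runtime_version(runtime)
if runtime.name == "podman" and (major, minor) == (3, 4):
    print("Podman 3.4 detected; loaded image tags will need to be fixed up")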
def list_image_tags() -> List[str]:
"""Get the tags of all loaded Dangerzone images.
This can be useful when we want to find which image tags exist locally, for
example to check whether the expected tag is already installed.
"""
runtime = Runtime()
return (
subprocess.check_output(
[
str(runtime.path),
"image",
"list",
"--format",
"{{ .Tag }}",
CONTAINER_NAME,
],
text=True,
startupinfo=get_subprocess_startupinfo(),
)
.strip()
.split()
)
def add_image_tag(image_id: str, new_tag: str) -> None:
"""Add a tag to the Dangerzone image."""
runtime = Runtime()
log.debug(f"Adding tag '{new_tag}' to image '{image_id}'")
subprocess.check_output(
[str(runtime.path), "tag", image_id, new_tag],
startupinfo=get_subprocess_startupinfo(),
)
def delete_image_tag(tag: str) -> None:
"""Delete a Dangerzone image tag."""
runtime = Runtime()
log.warning(f"Deleting old container image: {tag}")
try:
subprocess.check_output(
[str(runtime.name), "rmi", "--force", tag],
startupinfo=get_subprocess_startupinfo(),
)
except Exception as e:
log.warning(
f"Couldn't delete old container image '{tag}', so leaving it there."
f" Original error: {e}"
)
def get_expected_tag() -> str:
"""Get the tag of the Dangerzone image tarball from the image-id.txt file."""
with get_resource_path("image-id.txt").open() as f:
return f.read().strip()
def load_image_tarball() -> None:
runtime = Runtime()
log.info("Installing Dangerzone container image...")
tarball_path = get_resource_path("container.tar")
try:
res = subprocess.run(
[str(runtime.path), "load", "-i", str(tarball_path)],
startupinfo=get_subprocess_startupinfo(),
capture_output=True,
check=True,
)
except subprocess.CalledProcessError as e:
if e.stderr:
error = e.stderr.decode()
else:
error = "No output"
raise errors.ImageInstallationException(
f"Could not install container image: {error}"
)
# Loading an image built with Buildkit in Podman 3.4 messes up its name. The tag
# somehow becomes the name of the loaded image [1].
#
# We know that older Podman versions are not generally affected, since Podman v3.0.1
# on Debian Bullseye works properly. Also, Podman v4.0 is not affected, so it makes
# sense to target only Podman v3.4 for a fix.
#
# The fix is simple, tag the image properly based on the expected tag from
# `share/image-id.txt` and delete the incorrect tag.
#
# [1] https://github.com/containers/podman/issues/16490
if runtime.name == "podman" and get_runtime_version(runtime) == (3, 4):
expected_tag = get_expected_tag()
bad_tag = f"localhost/{expected_tag}:latest"
good_tag = f"{CONTAINER_NAME}:{expected_tag}"
log.debug(
f"Dangerzone images loaded in Podman v3.4 usually have an invalid tag."
" Fixing it..."
)
add_image_tag(bad_tag, good_tag)
delete_image_tag(bad_tag)
log.info("Successfully installed container image")



@@ -0,0 +1,143 @@
import asyncio
import os
import sys
from abc import abstractmethod
from typing import Callable, List, Optional, TextIO, Tuple
DEFAULT_DPI = 150 # Pixels per inch
INT_BYTES = 2
def running_on_qubes() -> bool:
# https://www.qubes-os.org/faq/#what-is-the-canonical-way-to-detect-qubes-vm
return os.path.exists("/usr/share/qubes/marker-vm")
class DangerzoneConverter:
def __init__(self, progress_callback: Optional[Callable] = None) -> None:
self.percentage: float = 0.0
self.progress_callback = progress_callback
self.captured_output: bytes = b""
@classmethod
def _read_bytes(cls) -> bytes:
"""Read bytes from the stdin."""
data = sys.stdin.buffer.read()
if data is None:
raise EOFError
return data
@classmethod
def _write_bytes(cls, data: bytes, file: TextIO = sys.stdout) -> None:
file.buffer.write(data)
@classmethod
def _write_text(cls, text: str, file: TextIO = sys.stdout) -> None:
cls._write_bytes(text.encode(), file=file)
@classmethod
def _write_int(cls, num: int, file: TextIO = sys.stdout) -> None:
cls._write_bytes(num.to_bytes(INT_BYTES, "big", signed=False), file=file)
# ==== ASYNC METHODS ====
# We run sync methods in async wrappers, because pure async methods are more difficult:
# https://stackoverflow.com/a/52702646
#
# In practice, because they are I/O bound and we don't have many running concurrently,
# they shouldn't cause a problem.
@classmethod
async def read_bytes(cls) -> bytes:
return await asyncio.to_thread(cls._read_bytes)
@classmethod
async def write_bytes(cls, data: bytes, file: TextIO = sys.stdout) -> None:
return await asyncio.to_thread(cls._write_bytes, data, file=file)
@classmethod
async def write_text(cls, text: str, file: TextIO = sys.stdout) -> None:
return await asyncio.to_thread(cls._write_text, text, file=file)
@classmethod
async def write_int(cls, num: int, file: TextIO = sys.stdout) -> None:
return await asyncio.to_thread(cls._write_int, num, file=file)
async def read_stream(
self, sr: asyncio.StreamReader, callback: Optional[Callable] = None
) -> bytes:
"""Consume a byte stream line-by-line.
Read all lines in a stream until EOF. If a user has passed a callback, call it for
each line.
Note that the lines are in bytes, since we can't assume that all command output will
be UTF-8 encoded. Higher level commands are advised to decode the output to Unicode,
if they know its encoding.
"""
buf = b""
while not sr.at_eof():
line = await sr.readline()
self.captured_output += line
if callback is not None:
await callback(line)
buf += line
return buf
async def run_command(
self,
args: List[str],
*,
error_message: str,
stdout_callback: Optional[Callable] = None,
stderr_callback: Optional[Callable] = None,
) -> Tuple[bytes, bytes]:
"""Run a command and get its output.
Run a command using asyncio.subprocess, consume its standard streams, and return its
output in bytes.
:raises RuntimeError: if the process returns a non-zero exit status
"""
# Start the provided command, and return a handle. The command will run in the
# background.
proc = await asyncio.subprocess.create_subprocess_exec(
*args,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
# Log command to debug log so we can trace back which errors
# are from each command
self.captured_output += f"[COMMAND] {' '.join(args)}\n".encode()
assert proc.stdout is not None
assert proc.stderr is not None
# Create asynchronous tasks that will consume the standard streams of the command,
# and call callbacks if necessary.
stdout_task = asyncio.create_task(
self.read_stream(proc.stdout, stdout_callback)
)
stderr_task = asyncio.create_task(
self.read_stream(proc.stderr, stderr_callback)
)
# Wait until the command has finished. Then, verify that the command
# has completed successfully. In any other case, raise an exception.
ret = await proc.wait()
if ret != 0:
raise RuntimeError(error_message)
# Wait until the tasks that consume the command's standard streams have exited as
# well, and return their output.
stdout = await stdout_task
stderr = await stderr_task
return (stdout, stderr)
@abstractmethod
async def convert(self) -> None:
pass
@abstractmethod
def update_progress(self, text: str) -> None:
pass
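A hedged sketch of how a concrete converter might call run_command(); the command and callback are illustrative, but the keyword arguments match the signature defined above:

# Sketch: run a command from a DangerzoneConverter subclass and stream stderr.
async def print_stderr_line(line: bytes) -> None:
    # Lines arrive as bytes; decode defensively since the encoding is unknown.
    print(line.decode(errors="replace"), end="")

async def example(converter: DangerzoneConverter) -> None:
    stdout, stderr = await converter.run_command(
        ["pdftoppm", "-v"],  # example command only
        error_message="pdftoppm failed",
        stderr_callback=print_stderr_line,
    )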


@@ -0,0 +1,302 @@
import asyncio
import os
import sys
from typing import Dict, Optional
# XXX: PyMuPDF logs to stdout by default [1]. The PyMuPDF devs provide a way [2] to log to
# stderr, but it's based on environment variables. These envvars are consulted at import
# time [3], so we have to set them here, before we import `fitz`.
#
# [1] https://github.com/freedomofpress/dangerzone/issues/877
# [2] https://github.com/pymupdf/PyMuPDF/issues/3135#issuecomment-1992625724
# [3] https://github.com/pymupdf/PyMuPDF/blob/9717935eeb2d50d15440d62575878214226795f9/src/__init__.py#L62-L63
os.environ["PYMUPDF_MESSAGE"] = "fd:2"
os.environ["PYMUPDF_LOG"] = "fd:2"
import fitz
import magic
from . import errors
from .common import DEFAULT_DPI, DangerzoneConverter, running_on_qubes
class DocumentToPixels(DangerzoneConverter):
async def write_page_count(self, count: int) -> None:
return await self.write_int(count)
async def write_page_width(self, width: int) -> None:
return await self.write_int(width)
async def write_page_height(self, height: int) -> None:
return await self.write_int(height)
async def write_page_data(self, data: bytes) -> None:
return await self.write_bytes(data)
def update_progress(self, text: str, *, error: bool = False) -> None:
print(text, file=sys.stderr)
async def convert(self) -> None:
conversions: Dict[str, Dict[str, Optional[str]]] = {
# .pdf
"application/pdf": {"type": "PyMuPDF"},
# .docx
"application/vnd.openxmlformats-officedocument.wordprocessingml.document": {
"type": "libreoffice",
},
# .doc
"application/msword": {
"type": "libreoffice",
},
# .docm
"application/vnd.ms-word.document.macroEnabled.12": {
"type": "libreoffice",
},
# .xlsx
"application/vnd.openxmlformats-officedocument.spreadsheetml.sheet": {
"type": "libreoffice",
},
# .xls
"application/vnd.ms-excel": {
"type": "libreoffice",
},
# .pptx
"application/vnd.openxmlformats-officedocument.presentationml.presentation": {
"type": "libreoffice",
},
# .ppt
"application/vnd.ms-powerpoint": {
"type": "libreoffice",
},
# .odt
"application/vnd.oasis.opendocument.text": {
"type": "libreoffice",
},
# .odg
"application/vnd.oasis.opendocument.graphics": {
"type": "libreoffice",
},
# .odp
"application/vnd.oasis.opendocument.presentation": {
"type": "libreoffice",
},
# .ods
"application/vnd.oasis.opendocument.spreadsheet": {
"type": "libreoffice",
},
# .ods / .ots
"application/vnd.oasis.opendocument.spreadsheet-template": {
"type": "libreoffice",
},
# .odt / .ott
"application/vnd.oasis.opendocument.text-template": {
"type": "libreoffice",
},
# .hwp
# Commented MIMEs are not used in `file` and don't conform to the rules.
# Left them in, just in case.
# PR: https://github.com/freedomofpress/dangerzone/pull/460
# "application/haansofthwp": {
# "type": "libreoffice",
# "libreoffice_ext": "h2orestart.oxt",
# },
# "application/vnd.hancom.hwp": {
# "type": "libreoffice",
# "libreoffice_ext": "h2orestart.oxt",
# },
"application/x-hwp": {
"type": "libreoffice",
"libreoffice_ext": "h2orestart.oxt",
},
# .hwpx
# "application/haansofthwpx": {
# "type": "libreoffice",
# "libreoffice_ext": "h2orestart.oxt",
# },
# "application/vnd.hancom.hwpx": {
# "type": "libreoffice",
# "libreoffice_ext": "h2orestart.oxt",
# },
"application/x-hwp+zip": {
"type": "libreoffice",
"libreoffice_ext": "h2orestart.oxt",
},
"application/hwp+zip": {
"type": "libreoffice",
"libreoffice_ext": "h2orestart.oxt",
},
# At least .odt, .docx, .odg, .odp, .ods, and .pptx
"application/zip": {
"type": "libreoffice",
# NOTE: `file` command < 5.45 cannot detect hwpx files properly, so we
# enable the extension in any case. See also:
# https://github.com/freedomofpress/dangerzone/pull/460#issuecomment-1654166465
"libreoffice_ext": "h2orestart.oxt",
},
# At least .doc, .docx, .odg, .odp, .odt, .pdf, .ppt, .pptx, .xls, and .xlsx
"application/octet-stream": {
"type": "libreoffice",
},
# At least .doc, .ppt, and .xls
"application/x-ole-storage": {
"type": "libreoffice",
},
# .epub
"application/epub+zip": {"type": "PyMuPDF"},
# .svg
"image/svg+xml": {"type": "PyMuPDF"},
# .bmp
"image/bmp": {"type": "PyMuPDF"},
# .pnm
"image/x-portable-anymap": {"type": "PyMuPDF"},
# .pbm
"image/x-portable-bitmap": {"type": "PyMuPDF"},
# .ppm
"image/x-portable-pixmap": {"type": "PyMuPDF"},
# .jpg
"image/jpeg": {"type": "PyMuPDF"},
# .gif
"image/gif": {"type": "PyMuPDF"},
# .png
"image/png": {"type": "PyMuPDF"},
# .tif
"image/tiff": {"type": "PyMuPDF"},
"image/x-tiff": {"type": "PyMuPDF"},
}
# Detect MIME type
mime_type = self.detect_mime_type("/tmp/input_file")
# Validate MIME type
if mime_type not in conversions:
raise errors.DocFormatUnsupported()
# Temporary fix for the HWPX format
# Should be removed after new release of `file' (current release 5.44)
if mime_type == "application/zip":
file_type = self.detect_mime_type("/tmp/input_file")
hwpx_file_type = 'Zip data (MIME type "application/hwp+zip"?)'
if file_type == hwpx_file_type:
mime_type = "application/x-hwp+zip"
# Convert input document to PDF
conversion = conversions[mime_type]
if conversion["type"] == "PyMuPDF":
try:
doc = fitz.open("/tmp/input_file", filetype=mime_type)
except (ValueError, fitz.FileDataError):
raise errors.DocCorruptedException()
elif conversion["type"] == "libreoffice":
libreoffice_ext = conversion.get("libreoffice_ext", None)
# Disable conversion for HWP/HWPX on specific platforms. See:
#
# https://github.com/freedomofpress/dangerzone/issues/494
# https://github.com/freedomofpress/dangerzone/issues/498
if libreoffice_ext == "h2orestart.oxt" and running_on_qubes():
raise errors.DocFormatUnsupportedHWPQubes()
if libreoffice_ext:
await self.install_libreoffice_ext(libreoffice_ext)
self.update_progress("Converting to PDF using LibreOffice")
args = [
"libreoffice",
"--headless",
"--safe-mode",
"--convert-to",
"pdf",
"--outdir",
"/tmp",
"/tmp/input_file",
]
await self.run_command(
args,
error_message="Conversion to PDF with LibreOffice failed",
)
pdf_filename = "/tmp/input_file.pdf"
# XXX: Sometimes, LibreOffice can fail with status code 0. So, we need to
# always check if the file exists. See:
#
# https://github.com/freedomofpress/dangerzone/issues/494
if not os.path.exists(pdf_filename):
raise errors.LibreofficeFailure()
try:
doc = fitz.open(pdf_filename)
except (ValueError, fitz.FileDataError):
raise errors.DocCorruptedException()
else:
# NOTE: This should never be reached
raise errors.DocFormatUnsupported()
# Obtain number of pages
if doc.page_count > errors.MAX_PAGES:
raise errors.MaxPagesException()
await self.write_page_count(doc.page_count)
for page in doc.pages():
# TODO check if page.number is doc-controlled
page_num = page.number + 1  # pages start at 1
self.update_progress(
f"Converting page {page_num}/{doc.page_count} to pixels"
)
pix = page.get_pixmap(dpi=DEFAULT_DPI)
rgb_buf = pix.samples_mv
await self.write_page_width(pix.width)
await self.write_page_height(pix.height)
await self.write_page_data(rgb_buf)
self.update_progress("Converted document to pixels")
async def install_libreoffice_ext(self, libreoffice_ext: str) -> None:
self.update_progress(f"Installing LibreOffice extension '{libreoffice_ext}'")
unzip_args = [
"unzip",
"-d",
f"/usr/lib/libreoffice/share/extensions/{libreoffice_ext}/",
f"/opt/libreoffice_ext/{libreoffice_ext}",
]
await self.run_command(
unzip_args,
error_message="LibreOffice extension installation failed (unzipping)",
)
def detect_mime_type(self, path: str) -> str:
"""Detect MIME types in a platform-agnostic type.
Detect the MIME type of a file, either on Qubes or container platforms.
"""
try:
mime = magic.Magic(mime=True)
mime_type = mime.from_file("/tmp/input_file")
except TypeError:
mime_type = magic.detect_from_filename("/tmp/input_file").mime_type
return mime_type
async def main() -> None:
try:
data = await DocumentToPixels.read_bytes()
except EOFError:
sys.exit(1)
with open("/tmp/input_file", "wb") as f:
f.write(data)
try:
converter = DocumentToPixels()
await converter.convert()
except errors.ConversionException as e:
await DocumentToPixels.write_bytes(str(e).encode(), file=sys.stderr)
sys.exit(e.error_code)
except Exception as e:
await DocumentToPixels.write_bytes(str(e).encode(), file=sys.stderr)
error_code = errors.UnexpectedConversionError.error_code
sys.exit(error_code)
# Write debug information
await DocumentToPixels.write_bytes(converter.captured_output, file=sys.stderr)
if __name__ == "__main__":
sys.exit(asyncio.run(main()))
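The stream written by write_page_count(), write_page_width(), write_page_height() and write_page_data() above has a fixed layout: a 2-byte big-endian unsigned page count, then for each page its width and height in the same encoding, followed by width x height x 3 bytes of raw RGB samples. A hedged sketch of a receiving side (purely illustrative; the real consumer lives on the host side of the sandbox):

import struct

def read_pages(stream):
    """Sketch: parse the page stream produced by DocumentToPixels.convert()."""
    (num_pages,) = struct.unpack(">H", stream.read(2))  # INT_BYTES == 2, big-endian
    for _ in range(num_pages):
        (width,) = struct.unpack(">H", stream.read(2))
        (height,) = struct.unpack(">H", stream.read(2))
        rgb = stream.read(width * height * 3)  # 8-bit RGB, 3 bytes per pixel
        yield width, height, rgb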


@@ -0,0 +1,108 @@
from typing import List, Optional, Type, Union
# XXX: errors start at 128 for conversion-related issues
ERROR_SHIFT = 128
MAX_PAGES = 10000
MAX_PAGE_WIDTH = 10000
MAX_PAGE_HEIGHT = 10000
class ConverterProcException(Exception):
"""Some exception occurred in the converter"""
def __init__(self) -> None:
super().__init__("The process spawned for the conversion has exited early")
class ConversionException(Exception):
error_message = "Unspecified error"
error_code = ERROR_SHIFT
def __init__(self, error_message: Optional[str] = None) -> None:
if error_message:
self.error_message = error_message
super().__init__(self.error_message)
@classmethod
def get_subclasses(cls) -> List[Type["ConversionException"]]:
subclasses = [cls]
for subclass in cls.__subclasses__():
subclasses += subclass.get_subclasses()
return subclasses
class QubesQrexecFailed(ConversionException):
error_code = 126 # No ERROR_SHIFT since this is a qrexec error
error_message = (
"Could not start a disposable qube for the file conversion. "
"More information should have shown up on the top-right corner of your screen."
)
class DocFormatUnsupported(ConversionException):
error_code = ERROR_SHIFT + 10
error_message = "The document format is not supported"
class DocFormatUnsupportedHWPQubes(DocFormatUnsupported):
error_code = ERROR_SHIFT + 16
error_message = "HWP / HWPX formats are not supported in Qubes"
class LibreofficeFailure(ConversionException):
error_code = ERROR_SHIFT + 20
error_message = "Conversion to PDF with LibreOffice failed"
class DocCorruptedException(ConversionException):
error_code = ERROR_SHIFT + 30
error_message = "The document appears to be corrupted and could not be opened"
class PagesException(ConversionException):
error_code = ERROR_SHIFT + 40
class NoPageCountException(PagesException):
error_code = ERROR_SHIFT + 41
error_message = "Number of pages could not be extracted from PDF"
class MaxPagesException(PagesException):
"""Max number of pages enforced by the client (to fail early) but also the
server, which distrusts the client"""
error_code = ERROR_SHIFT + 42
error_message = f"Number of pages exceeds maximum ({MAX_PAGES})"
class MaxPageWidthException(PagesException):
error_code = ERROR_SHIFT + 44
error_message = "A page exceeded the maximum width."
class MaxPageHeightException(PagesException):
error_code = ERROR_SHIFT + 45
error_message = "A page exceeded the maximum height."
class PageCountMismatch(PagesException):
error_code = ERROR_SHIFT + 46
error_message = (
"The final document does not have the same page count as the original one"
)
class UnexpectedConversionError(ConversionException):
error_code = ERROR_SHIFT + 100
error_message = "Some unexpected error occurred while converting the document"
def exception_from_error_code(
error_code: int,
) -> Union[ConversionException, ValueError]:
"""returns the conversion exception corresponding to the error code"""
for cls in ConversionException.get_subclasses():
if cls.error_code == error_code:
return cls()
return UnexpectedConversionError(f"Unknown error code '{error_code}'")
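These error codes are what the sandboxed process exits with (sys.exit(e.error_code) in the converter's main() above), and exception_from_error_code is the inverse mapping on the host side. A quick sketch of the round trip, assuming the module is importable as dangerzone.conversion.errors (the import path is not visible in this hunk):

from dangerzone.conversion import errors

exc = errors.MaxPagesException()
assert exc.error_code == errors.ERROR_SHIFT + 42

# Map the numeric exit status back to the matching exception class.
roundtrip = errors.exception_from_error_code(exc.error_code)
assert isinstance(roundtrip, errors.MaxPagesException)

# Unknown codes fall back to UnexpectedConversionError.
unknown = errors.exception_from_error_code(1)
assert isinstance(unknown, errors.UnexpectedConversionError)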

dangerzone/document.py Normal file

@@ -0,0 +1,229 @@
import enum
import logging
import os
import platform
import re
import secrets
from pathlib import Path, PurePosixPath, PureWindowsPath
from typing import Optional
from . import errors, util
SAFE_EXTENSION = "-safe.pdf"
ARCHIVE_SUBDIR = "unsafe"
log = logging.getLogger(__name__)
class Document:
"""Track the state of a single document.
The Document class is responsible for holding the state of a single
document, and validating its info.
"""
# document conversion state
STATE_UNCONVERTED = enum.auto()
STATE_CONVERTING = enum.auto()
STATE_SAFE = enum.auto()
STATE_FAILED = enum.auto()
def __init__(
self,
input_filename: Optional[str] = None,
output_filename: Optional[str] = None,
suffix: str = SAFE_EXTENSION,
archive: bool = False,
) -> None:
# NOTE: See https://github.com/freedomofpress/dangerzone/pull/216#discussion_r1015449418
self.id = secrets.token_urlsafe(6)[0:6]
self._input_filename: Optional[str] = None
self._output_filename: Optional[str] = None
self._archive = False
self._suffix = suffix
if input_filename:
self.input_filename = input_filename
if output_filename:
self.output_filename = output_filename
self.state = Document.STATE_UNCONVERTED
self.archive_after_conversion = archive
@staticmethod
def normalize_filename(filename: str) -> str:
return os.path.abspath(filename)
@staticmethod
def validate_input_filename(filename: str) -> None:
try:
open(filename, "rb")
except FileNotFoundError as e:
raise errors.InputFileNotFoundException() from e
except PermissionError as e:
raise errors.InputFileNotReadableException() from e
@staticmethod
def validate_output_filename(filename: str) -> None:
if not filename.endswith(".pdf"):
raise errors.NonPDFOutputFileException()
if platform.system() == "Windows":
final_filename = PureWindowsPath(filename).name
illegal_chars_regex = re.compile(r"[\"*/:<>?\\|]")
else:
final_filename = PurePosixPath(filename).name
illegal_chars_regex = re.compile(r"[\\]")
if platform.system() in ("Windows", "Darwin"):
match = illegal_chars_regex.search(final_filename)
if match:
# filename contains illegal characters
raise errors.IllegalOutputFilenameException(match.group(0))
if not os.access(Path(filename).parent, os.W_OK):
# in unwriteable directory
raise errors.UnwriteableOutputDirException()
def validate_default_archive_dir(self) -> None:
"""Checks if archive dir can be created"""
if not os.access(self.default_archive_dir.parent, os.W_OK):
raise errors.UnwriteableArchiveDirException()
@property
def input_filename(self) -> str:
if self._input_filename is None:
raise errors.NotSetInputFilenameException()
else:
return self._input_filename
@input_filename.setter
def input_filename(self, filename: str) -> None:
filename = self.normalize_filename(filename)
self.validate_input_filename(filename)
self._input_filename = filename
self.announce_id()
@property
def output_filename(self) -> str:
if self._output_filename is None:
if self._input_filename is not None:
return self.default_output_filename
else:
raise errors.NotSetOutputFilenameException()
else:
return self._output_filename
@output_filename.setter
def output_filename(self, filename: str) -> None:
filename = self.normalize_filename(filename)
self.validate_output_filename(filename)
self._output_filename = filename
@property
def sanitized_output_filename(self) -> str:
return util.replace_control_chars(self.output_filename)
@property
def suffix(self) -> str:
return self._suffix
@suffix.setter
def suffix(self, suf: str) -> None:
if self._output_filename is None:
self._suffix = suf
else:
raise errors.SuffixNotApplicableException()
@property
def archive_after_conversion(self) -> bool:
return self._archive
@archive_after_conversion.setter
def archive_after_conversion(self, enabled: bool) -> None:
if enabled:
self.validate_default_archive_dir()
self._archive = True
else:
self._archive = False
def archive(self) -> None:
"""
Moves the original document to a subdirectory. Prevents the user from
mistakenly opening the unsafe (original) document.
"""
archive_dir = self.default_archive_dir
old_file_path = Path(self.input_filename)
new_file_path = archive_dir / old_file_path.name
log.debug(f"Archiving doc {self.id} to {new_file_path}")
Path.mkdir(archive_dir, exist_ok=True)
# On Windows, moving the file will fail if it already exists.
new_file_path.unlink(missing_ok=True)
old_file_path.rename(new_file_path)
@property
def default_archive_dir(self) -> Path:
return Path(self.input_filename).parent / ARCHIVE_SUBDIR
@property
def default_output_filename(self) -> str:
return f"{os.path.splitext(self.input_filename)[0]}{self.suffix}"
def announce_id(self) -> None:
sanitized_filename = util.replace_control_chars(self.input_filename)
log.info(f"Assigning ID '{self.id}' to doc '{sanitized_filename}'")
def set_output_dir(self, path: str) -> None:
# keep the same name
old_filename = os.path.basename(self.output_filename)
new_path = os.path.abspath(path)
if not os.path.exists(new_path):
raise errors.NonExistantOutputDirException()
if not os.path.isdir(new_path):
raise errors.OutputDirIsNotDirException()
if not os.access(new_path, os.W_OK):
raise errors.UnwriteableOutputDirException()
self._output_filename = os.path.join(new_path, old_filename)
def is_unconverted(self) -> bool:
return self.state is Document.STATE_UNCONVERTED
def is_converting(self) -> bool:
return self.state is Document.STATE_CONVERTING
def is_failed(self) -> bool:
return self.state is Document.STATE_FAILED
def is_safe(self) -> bool:
return self.state is Document.STATE_SAFE
def mark_as_converting(self) -> None:
log.debug(f"Marking doc {self.id} as 'converting'")
self.state = Document.STATE_CONVERTING
def mark_as_failed(self) -> None:
log.debug(f"Marking doc {self.id} as 'failed'")
self.state = Document.STATE_FAILED
def mark_as_safe(self) -> None:
log.debug(f"Marking doc {self.id} as 'safe'")
self.state = Document.STATE_SAFE
def __eq__(self, other: object) -> bool:
if not isinstance(other, Document):
return False
return (
Path(self.input_filename).absolute()
== Path(other.input_filename).absolute()
)
def __hash__(self) -> int:
return hash(str(Path(self.input_filename).absolute()))
def __str__(self) -> str:
return self.input_filename
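Taken together, the properties above mean a Document derives its defaults from the input path: the output name is the input name plus SAFE_EXTENSION, and the archive dir is an "unsafe/" subdirectory next to the input. A usage sketch with hypothetical paths (the input file must exist and be readable, and the output directory must be writable, otherwise the setters raise):

from dangerzone.document import SAFE_EXTENSION, Document

doc = Document("/home/user/report.docx")        # hypothetical existing, readable file
assert doc.output_filename == "/home/user/report-safe.pdf"
assert doc.output_filename.endswith(SAFE_EXTENSION)

doc.set_output_dir("/home/user/safe")           # keeps the basename, swaps the directory
doc.archive_after_conversion = True             # original will later move to .../unsafe/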

dangerzone/errors.py Normal file

@@ -0,0 +1,146 @@
import functools
import logging
import sys
from typing import Any, Callable, TypeVar, cast
import click
F = TypeVar("F", bound=Callable[..., Any])
log = logging.getLogger(__name__)
class DocumentFilenameException(Exception):
"""Exception for document-related filename errors."""
class AddedDuplicateDocumentException(DocumentFilenameException):
"""Exception for a document is added twice."""
def __init__(self) -> None:
super().__init__("A document was added twice")
class InputFileNotFoundException(DocumentFilenameException):
"""Exception for when an input file does not exist."""
def __init__(self) -> None:
super().__init__("Input file not found: make sure you typed it correctly.")
class InputFileNotReadableException(DocumentFilenameException):
"""Exception for when an input file exists but is not readable."""
def __init__(self) -> None:
super().__init__("You don't have permission to open the input file.")
class NonPDFOutputFileException(DocumentFilenameException):
"""Exception for when the output file is not a PDF."""
def __init__(self) -> None:
super().__init__("Safe PDF filename must end in '.pdf'")
class IllegalOutputFilenameException(DocumentFilenameException):
"""Exception for when the output file contains illegal characters."""
def __init__(self, char: str) -> None:
super().__init__(f"Illegal character: {char}")
class UnwriteableOutputDirException(DocumentFilenameException):
"""Exception for when the output file is not writeable."""
def __init__(self) -> None:
super().__init__("Safe PDF filename is not writable")
class NotSetInputFilenameException(DocumentFilenameException):
"""Exception for when the output filename is set before having an
associated input file."""
def __init__(self) -> None:
super().__init__("Input filename has not been set yet.")
class NotSetOutputFilenameException(DocumentFilenameException):
"""Exception for when the output filename is read before it has been set."""
def __init__(self) -> None:
super().__init__("Output filename has not been set yet.")
class NonExistantOutputDirException(DocumentFilenameException):
"""Exception for when the output dir does not exist."""
def __init__(self) -> None:
super().__init__("Output directory does not exist")
class OutputDirIsNotDirException(DocumentFilenameException):
"""Exception for when the specified output dir is not actually a dir."""
def __init__(self) -> None:
super().__init__("Specified output directory is actually not a directory")
class UnwriteableArchiveDirException(DocumentFilenameException):
"""Exception for when the archive directory cannot be created."""
def __init__(self) -> None:
super().__init__(
"Archive directory for storing unsafe documents cannot be created."
)
class SuffixNotApplicableException(DocumentFilenameException):
"""Exception for when the suffix cannot be applied to the output filename."""
def __init__(self) -> None:
super().__init__("Cannot set a suffix after setting an output filename")
def handle_document_errors(func: F) -> F:
"""Decorator to log document-related errors and exit gracefully."""
@functools.wraps(func)
def wrapper(*args, **kwargs): # type: ignore
try:
return func(*args, **kwargs)
except DocumentFilenameException as e:
if getattr(sys, "dangerzone_dev", False):
# Show the full traceback only on dev environments.
msg = "An exception occured while validating a document"
log.exception(msg)
click.echo(str(e))
sys.exit(1)
return cast(F, wrapper)
#### Container-related errors
class ImageNotPresentException(Exception):
pass
class ImageInstallationException(Exception):
pass
class NoContainerTechException(Exception):
def __init__(self, container_tech: str) -> None:
super().__init__(f"{container_tech} is not installed")
class NotAvailableContainerTechException(Exception):
def __init__(self, container_tech: str, error: str) -> None:
self.error = error
self.container_tech = container_tech
super().__init__(f"{container_tech} is not available")
class UnsupportedContainerRuntime(Exception):
pass
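handle_document_errors is how entry points turn the filename exceptions above into a short message and a clean exit status 1 instead of a traceback; the same decorator is applied to gui_main later in this diff. A minimal sketch with a hypothetical cli_main command:

import click

from dangerzone import errors
from dangerzone.document import Document

@click.command()
@click.argument("filename")
@errors.handle_document_errors
def cli_main(filename: str) -> None:
    # Document() may raise InputFileNotFoundException etc.; the decorator
    # echoes the message and exits with status 1 instead of crashing.
    doc = Document(filename)
    click.echo(f"Would convert {doc.input_filename} -> {doc.output_filename}")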


@@ -1,495 +0,0 @@
import sys
import os
import inspect
import appdirs
import platform
import subprocess
import shutil
import json
import gzip
import colorama
from colorama import Fore, Back, Style
from .settings import Settings
from .container import convert
class GlobalCommon(object):
"""
The GlobalCommon class is a singleton of shared functionality throughout the app
"""
def __init__(self):
# Version
try:
with open(self.get_resource_path("version.txt")) as f:
self.version = f.read().strip()
except FileNotFoundError:
# In dev mode, in Windows, get_resource_path doesn't work properly for the container, but luckily
# it doesn't need to know the version
self.version = "unknown"
# Initialize terminal colors
colorama.init(autoreset=True)
# App data folder
self.appdata_path = appdirs.user_config_dir("dangerzone")
# Container
self.container_name = "dangerzone.rocks/dangerzone"
# Languages supported by tesseract
self.ocr_languages = {
"Afrikaans": "ar",
"Albanian": "sqi",
"Amharic": "amh",
"Arabic": "ara",
"Arabic script": "Arabic",
"Armenian": "hye",
"Armenian script": "Armenian",
"Assamese": "asm",
"Azerbaijani": "aze",
"Azerbaijani (Cyrillic)": "aze_cyrl",
"Basque": "eus",
"Belarusian": "bel",
"Bengali": "ben",
"Bengali script": "Bengali",
"Bosnian": "bos",
"Breton": "bre",
"Bulgarian": "bul",
"Burmese": "mya",
"Canadian Aboriginal script": "Canadian_Aboriginal",
"Catalan": "cat",
"Cebuano": "ceb",
"Cherokee": "chr",
"Cherokee script": "Cherokee",
"Chinese - Simplified": "chi_sim",
"Chinese - Simplified (vertical)": "chi_sim_vert",
"Chinese - Traditional": "chi_tra",
"Chinese - Traditional (vertical)": "chi_tra_vert",
"Corsican": "cos",
"Croatian": "hrv",
"Cyrillic script": "Cyrillic",
"Czech": "ces",
"Danish": "dan",
"Devanagari script": "Devanagari",
"Divehi": "div",
"Dutch": "nld",
"Dzongkha": "dzo",
"English": "eng",
"English, Middle (1100-1500)": "enm",
"Esperanto": "epo",
"Estonian": "est",
"Ethiopic script": "Ethiopic",
"Faroese": "fao",
"Filipino": "fil",
"Finnish": "fin",
"Fraktur script": "Fraktur",
"Frankish": "frk",
"French": "fra",
"French, Middle (ca.1400-1600)": "frm",
"Frisian (Western)": "fry",
"Gaelic (Scots)": "gla",
"Galician": "glg",
"Georgian": "kat",
"Georgian script": "Georgian",
"German": "deu",
"Greek": "ell",
"Greek script": "Greek",
"Gujarati": "guj",
"Gujarati script": "Gujarati",
"Gurmukhi script": "Gurmukhi",
"Hangul script": "Hangul",
"Hangul (vertical) script": "Hangul_vert",
"Han - Simplified script": "HanS",
"Han - Simplified (vertical) script": "HanS_vert",
"Han - Traditional script": "HanT",
"Han - Traditional (vertical) script": "HanT_vert",
"Hatian": "hat",
"Hebrew": "heb",
"Hebrew script": "Hebrew",
"Hindi": "hin",
"Hungarian": "hun",
"Icelandic": "isl",
"Indonesian": "ind",
"Inuktitut": "iku",
"Irish": "gle",
"Italian": "ita",
"Italian - Old": "ita_old",
"Japanese": "jpn",
"Japanese script": "Japanese",
"Japanese (vertical)": "jpn_vert",
"Japanese (vertical) script": "Japanese_vert",
"Javanese": "jav",
"Kannada": "kan",
"Kannada script": "Kannada",
"Kazakh": "kaz",
"Khmer": "khm",
"Khmer script": "Khmer",
"Korean": "kor",
"Korean (vertical)": "kor_vert",
"Kurdish (Arabic)": "kur_ara",
"Kyrgyz": "kir",
"Lao": "lao",
"Lao script": "Lao",
"Latin": "lat",
"Latin script": "Latin",
"Latvian": "lav",
"Lithuanian": "lit",
"Luxembourgish": "ltz",
"Macedonian": "mkd",
"Malayalam": "mal",
"Malayalam script": "Malayalam",
"Malay": "msa",
"Maltese": "mlt",
"Maori": "mri",
"Marathi": "mar",
"Mongolian": "mon",
"Myanmar script": "Myanmar",
"Nepali": "nep",
"Norwegian": "nor",
"Occitan (post 1500)": "oci",
"Old Georgian": "kat_old",
"Oriya (Odia) script": "Oriya",
"Oriya": "ori",
"Pashto": "pus",
"Persian": "fas",
"Polish": "pol",
"Portuguese": "por",
"Punjabi": "pan",
"Quechua": "que",
"Romanian": "ron",
"Russian": "rus",
"Sanskrit": "san",
"script and orientation": "osd",
"Serbian (Latin)": "srp_latn",
"Serbian": "srp",
"Sindhi": "snd",
"Sinhala script": "Sinhala",
"Sinhala": "sin",
"Slovakian": "slk",
"Slovenian": "slv",
"Spanish, Castilian - Old": "spa_old",
"Spanish": "spa",
"Sundanese": "sun",
"Swahili": "swa",
"Swedish": "swe",
"Syriac script": "Syriac",
"Syriac": "syr",
"Tajik": "tgk",
"Tamil script": "Tamil",
"Tamil": "tam",
"Tatar": "tat",
"Telugu script": "Telugu",
"Telugu": "tel",
"Thaana script": "Thaana",
"Thai script": "Thai",
"Thai": "tha",
"Tibetan script": "Tibetan",
"Tibetan Standard": "bod",
"Tigrinya": "tir",
"Tonga": "ton",
"Turkish": "tur",
"Ukrainian": "ukr",
"Urdu": "urd",
"Uyghur": "uig",
"Uzbek (Cyrillic)": "uzb_cyrl",
"Uzbek": "uzb",
"Vietnamese script": "Vietnamese",
"Vietnamese": "vie",
"Welsh": "cym",
"Yiddish": "yid",
"Yoruba": "yor",
}
# Load settings
self.settings = Settings(self)
def display_banner(self):
"""
Raw ASCII art example:
Dangerzone v0.1.5
https://dangerzone.rocks
"""
print(Back.BLACK + Fore.YELLOW + Style.DIM + "╭──────────────────────────╮")
print(
Back.BLACK
+ Fore.YELLOW
+ Style.DIM
+ ""
+ Fore.LIGHTYELLOW_EX
+ Style.NORMAL
+ " ▄██▄ "
+ Fore.YELLOW
+ Style.DIM
+ ""
)
print(
Back.BLACK
+ Fore.YELLOW
+ Style.DIM
+ ""
+ Fore.LIGHTYELLOW_EX
+ Style.NORMAL
+ " ██████ "
+ Fore.YELLOW
+ Style.DIM
+ ""
)
print(
Back.BLACK
+ Fore.YELLOW
+ Style.DIM
+ ""
+ Fore.LIGHTYELLOW_EX
+ Style.NORMAL
+ " ███▀▀▀██ "
+ Fore.YELLOW
+ Style.DIM
+ ""
)
print(
Back.BLACK
+ Fore.YELLOW
+ Style.DIM
+ ""
+ Fore.LIGHTYELLOW_EX
+ Style.NORMAL
+ " ███ ████ "
+ Fore.YELLOW
+ Style.DIM
+ ""
)
print(
Back.BLACK
+ Fore.YELLOW
+ Style.DIM
+ ""
+ Fore.LIGHTYELLOW_EX
+ Style.NORMAL
+ " ███ ██████ "
+ Fore.YELLOW
+ Style.DIM
+ ""
)
print(
Back.BLACK
+ Fore.YELLOW
+ Style.DIM
+ ""
+ Fore.LIGHTYELLOW_EX
+ Style.NORMAL
+ " ███ ▀▀▀▀████ "
+ Fore.YELLOW
+ Style.DIM
+ ""
)
print(
Back.BLACK
+ Fore.YELLOW
+ Style.DIM
+ ""
+ Fore.LIGHTYELLOW_EX
+ Style.NORMAL
+ " ███████ ▄██████ "
+ Fore.YELLOW
+ Style.DIM
+ ""
)
print(
Back.BLACK
+ Fore.YELLOW
+ Style.DIM
+ ""
+ Fore.LIGHTYELLOW_EX
+ Style.NORMAL
+ " ███████ ▄█████████ "
+ Fore.YELLOW
+ Style.DIM
+ ""
)
print(
Back.BLACK
+ Fore.YELLOW
+ Style.DIM
+ ""
+ Fore.LIGHTYELLOW_EX
+ Style.NORMAL
+ " ████████████████████ "
+ Fore.YELLOW
+ Style.DIM
+ ""
)
print(
Back.BLACK
+ Fore.YELLOW
+ Style.DIM
+ ""
+ Fore.LIGHTYELLOW_EX
+ Style.NORMAL
+ " ▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀ "
+ Fore.YELLOW
+ Style.DIM
+ ""
)
print(Back.BLACK + Fore.YELLOW + Style.DIM + "│ │")
left_spaces = (15 - len(self.version) - 1) // 2
right_spaces = left_spaces
if left_spaces + len(self.version) + 1 + right_spaces < 15:
right_spaces += 1
print(
Back.BLACK
+ Fore.YELLOW
+ Style.DIM
+ ""
+ Style.RESET_ALL
+ Back.BLACK
+ Fore.LIGHTWHITE_EX
+ Style.BRIGHT
+ f"{' '*left_spaces}Dangerzone v{self.version}{' '*right_spaces}"
+ Fore.YELLOW
+ Style.DIM
+ ""
)
print(
Back.BLACK
+ Fore.YELLOW
+ Style.DIM
+ ""
+ Style.RESET_ALL
+ Back.BLACK
+ Fore.LIGHTWHITE_EX
+ " https://dangerzone.rocks "
+ Fore.YELLOW
+ Style.DIM
+ ""
)
print(Back.BLACK + Fore.YELLOW + Style.DIM + "╰──────────────────────────╯")
def get_container_runtime(self):
if platform.system() == "Linux":
return shutil.which("podman")
else:
return shutil.which("docker")
def get_resource_path(self, filename):
if getattr(sys, "dangerzone_dev", False):
# Look for resources directory relative to python file
prefix = os.path.join(
os.path.dirname(
os.path.dirname(
os.path.abspath(inspect.getfile(inspect.currentframe()))
)
),
"share",
)
else:
if platform.system() == "Darwin":
prefix = os.path.join(
os.path.dirname(os.path.dirname(sys.executable)), "Resources/share"
)
elif platform.system() == "Linux":
prefix = os.path.join(sys.prefix, "share", "dangerzone")
else:
# Windows
prefix = os.path.join(os.path.dirname(sys.executable), "share")
resource_path = os.path.join(prefix, filename)
return resource_path
def get_subprocess_startupinfo(self):
if platform.system() == "Windows":
startupinfo = subprocess.STARTUPINFO()
startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
return startupinfo
else:
return None
def install_container(self):
"""
Make sure the podman container is installed. Linux only.
"""
if self.is_container_installed():
return
# Load the container into podman
print("Installing Dangerzone container image...")
p = subprocess.Popen(
[self.get_container_runtime(), "load"],
stdin=subprocess.PIPE,
startupinfo=self.get_subprocess_startupinfo(),
)
chunk_size = 10240
compressed_container_path = self.get_resource_path("container.tar.gz")
with gzip.open(compressed_container_path) as f:
while True:
chunk = f.read(chunk_size)
if len(chunk) > 0:
p.stdin.write(chunk)
else:
break
p.communicate()
if not self.is_container_installed():
print("Failed to install the container image")
return False
print("Container image installed")
return True
def is_container_installed(self):
"""
See if the podman container is installed. Linux only.
"""
# Get the image id
with open(self.get_resource_path("image-id.txt")) as f:
expected_image_id = f.read().strip()
# See if this image is already installed
installed = False
found_image_id = subprocess.check_output(
[
self.get_container_runtime(),
"image",
"list",
"--format",
"{{.ID}}",
self.container_name,
],
text=True,
startupinfo=self.get_subprocess_startupinfo(),
)
found_image_id = found_image_id.strip()
if found_image_id == expected_image_id:
installed = True
elif found_image_id == "":
pass
else:
print("Deleting old dangerzone container image")
try:
subprocess.check_output(
[self.get_container_runtime(), "rmi", "--force", found_image_id],
startupinfo=self.get_subprocess_startupinfo(),
)
except:
print("Couldn't delete old container image, so leaving it there")
return installed


@@ -1,37 +1,83 @@
import enum
import logging
import os
import sys
import signal
import platform
import signal
import sys
import typing
from typing import List, Optional
import click
import uuid
from PySide2 import QtCore, QtWidgets
import colorama
from .common import GuiCommon
# FIXME: See https://github.com/freedomofpress/dangerzone/issues/320 for more details.
if typing.TYPE_CHECKING:
from PySide2 import QtCore, QtGui, QtWidgets
else:
try:
from PySide6 import QtCore, QtGui, QtWidgets
except ImportError:
from PySide2 import QtCore, QtGui, QtWidgets
from .. import args, errors
from ..document import Document
from ..isolation_provider.container import Container
from ..isolation_provider.dummy import Dummy
from ..isolation_provider.qubes import Qubes, is_qubes_native_conversion
from ..util import get_resource_path, get_version
from .logic import DangerzoneGui
from .main_window import MainWindow
from .systray import SysTray
from ..global_common import GlobalCommon
from .updater import UpdaterThread
log = logging.getLogger(__name__)
# For some reason, Dangerzone segfaults if I inherit from QApplication directly, so instead
# this is a class whose job is to hold a QApplication object and customize it
class ApplicationWrapper(QtCore.QObject):
document_selected = QtCore.Signal(str)
new_window = QtCore.Signal()
class OSColorMode(enum.Enum):
"""
Operating system color mode, e.g. Light or Dark Mode on macOS 10.14+ or Windows 10+.
The enum values are used as the names of Qt properties that will be selected by QSS
property selectors to set color-mode-specific style rules.
"""
LIGHT = "light"
DARK = "dark"
class Application(QtWidgets.QApplication):
document_selected = QtCore.Signal(list)
application_activated = QtCore.Signal()
def __init__(self):
super(ApplicationWrapper, self).__init__()
self.app = QtWidgets.QApplication()
self.app.setQuitOnLastWindowClosed(False)
def __init__(self, *args: typing.Any, **kwargs: typing.Any) -> None:
super(Application, self).__init__(*args, **kwargs)
self.setQuitOnLastWindowClosed(False)
with get_resource_path("dangerzone.css").open("r") as f:
style = f.read()
self.setStyleSheet(style)
self.original_event = self.app.event
# Needed under certain windowing systems to match the application to the
# desktop entry in order to display the correct application name and icon
# and to allow identifying windows that belong to the application (e.g.
# under Wayland it sets the correct app ID). The value is the name of the
# Dangerzone .desktop file.
self.setDesktopFileName("press.freedom.dangerzone")
def monkeypatch_event(event):
# In some combinations of window managers and OSes, if we don't set an
# application name, then the window manager may report it as `python3` or
# `__init__.py`. Always set this to `dangerzone`, which corresponds to the
# executable name as well.
# See: https://github.com/freedomofpress/dangerzone/issues/402
self.setApplicationName("dangerzone")
self.original_event = self.event
def monkeypatch_event(arg__1: QtCore.QEvent) -> bool:
event = arg__1 # oddly, Qt internally names this parameter "arg__1"
# In macOS, handle the file open event
if event.type() == QtCore.QEvent.FileOpen:
if isinstance(event, QtGui.QFileOpenEvent):
# Skip file open events in dev mode
if not hasattr(sys, "dangerzone_dev"):
self.document_selected.emit(event.file())
self.document_selected.emit([event.file()])
return True
elif event.type() == QtCore.QEvent.ApplicationActivate:
self.application_activated.emit()
@@ -39,12 +85,42 @@ class ApplicationWrapper(QtCore.QObject):
return self.original_event(event)
self.app.event = monkeypatch_event
self.event = monkeypatch_event # type: ignore [method-assign]
self.os_color_mode = self.infer_os_color_mode()
log.debug(f"Inferred system color scheme as {self.os_color_mode}")
def infer_os_color_mode(self) -> OSColorMode:
"""
Qt 6.5+ explicitly provides the OS color scheme via QStyleHints.colorScheme(),
but we still need to support PySide2/Qt 5, so instead we infer the OS color
scheme from the default palette.
"""
text_color, window_color = (
self.palette().color(role)
for role in (QtGui.QPalette.WindowText, QtGui.QPalette.Window)
)
if text_color.lightness() > window_color.lightness():
return OSColorMode.DARK
return OSColorMode.LIGHT
@click.command()
@click.argument("filename", required=False)
def gui_main(filename):
@click.option(
"--unsafe-dummy-conversion", "dummy_conversion", flag_value=True, hidden=True
)
@click.argument(
"filenames",
required=False,
nargs=-1,
type=click.UNPROCESSED,
callback=args.validate_input_filenames,
)
@click.version_option(version=get_version(), message="%(version)s")
@errors.handle_document_errors
def gui_main(dummy_conversion: bool, filenames: Optional[List[str]]) -> bool:
setup_logging()
if platform.system() == "Darwin":
# Required for macOS Big Sur: https://stackoverflow.com/a/64878899
os.environ["QT_MAC_WANTS_LAYER"] = "1"
@@ -52,97 +128,70 @@ def gui_main(filename):
# Make sure /usr/local/bin is in the path
os.environ["PATH"] = "/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin"
# Strip ANSI colors from stdout output, to prevent terminal colors from breaking
# the macOS GUI app
from strip_ansi import strip_ansi
class StdoutFilter:
def __init__(self, stream):
self.stream = stream
def __getattr__(self, attr_name):
return getattr(self.stream, attr_name)
def write(self, data):
self.stream.write(strip_ansi(data))
def flush(self):
self.stream.flush()
sys.stdout = StdoutFilter(sys.stdout)
sys.stderr = StdoutFilter(sys.stderr)
# Don't show ANSI colors from stdout output, to prevent terminal
# colors from breaking the macOS GUI app
colorama.deinit()
# Create the Qt app
app_wrapper = ApplicationWrapper()
app = app_wrapper.app
app = Application()
# Common objects
global_common = GlobalCommon()
gui_common = GuiCommon(app, global_common)
if getattr(sys, "dangerzone_dev", False) and dummy_conversion:
dummy = Dummy()
dangerzone = DangerzoneGui(app, isolation_provider=dummy)
elif is_qubes_native_conversion():
qubes = Qubes()
dangerzone = DangerzoneGui(app, isolation_provider=qubes)
else:
container = Container()
dangerzone = DangerzoneGui(app, isolation_provider=container)
# Allow Ctrl-C to smoothly quit the program instead of throwing an exception
signal.signal(signal.SIGINT, signal.SIG_DFL)
# Create the system tray
systray = SysTray(global_common, gui_common, app, app_wrapper)
def open_files(filenames: List[str] = []) -> None:
documents = [Document(filename) for filename in filenames]
window.content_widget.doc_selection_widget.documents_selected.emit(documents)
closed_windows = {}
windows = {}
window = MainWindow(dangerzone)
def delete_window(window_id):
closed_windows[window_id] = windows[window_id]
del windows[window_id]
# Check for updates
log.debug("Setting up Dangerzone updater")
updater = UpdaterThread(dangerzone)
window.register_update_handler(updater.finished)
# Open a document in a window
def select_document(filename=None):
if (
len(windows) == 1
and windows[list(windows.keys())[0]].common.input_filename == None
):
window = windows[list(windows.keys())[0]]
else:
window_id = uuid.uuid4().hex
window = MainWindow(global_common, gui_common, window_id)
window.delete_window.connect(delete_window)
windows[window_id] = window
if filename:
# Validate filename
filename = os.path.abspath(os.path.expanduser(filename))
try:
open(filename, "rb")
except FileNotFoundError:
click.echo("File not found")
return False
except PermissionError:
click.echo("Permission denied")
return False
window.common.input_filename = filename
window.content_widget.doc_selection_widget.document_selected.emit()
return True
# Open a new window if no filename is passed
if filename is None:
select_document()
log.debug("Consulting updater settings before checking for updates")
if updater.should_check_for_updates():
log.debug("Checking for updates")
updater.start()
else:
# If filename is passed as an argument, open it
if not select_document(filename):
return True
log.debug("Will not check for updates, based on updater settings")
# Open a new window, if all windows are closed
def application_activated():
if len(windows) == 0:
select_document()
# Ensure the status of the toggle updates checkbox is updated, after the user is
# prompted to enable updates.
window.toggle_updates_action.setChecked(bool(updater.check))
if filenames:
open_files(filenames)
# MacOS: Open a new window, if all windows are closed
def application_activated() -> None:
window.show()
# If we get a file open event, open it
app_wrapper.document_selected.connect(select_document)
app_wrapper.new_window.connect(select_document)
app.document_selected.connect(open_files)
# If the application is activated and all windows are closed, open a new one
app_wrapper.application_activated.connect(application_activated)
app.application_activated.connect(application_activated)
# Launch the GUI
ret = app.exec_()
sys.exit(ret)
def setup_logging() -> None:
logging.basicConfig(level=logging.DEBUG, format="[%(levelname)s] %(message)s")
args.override_parser_and_check_suspicious_options(gui_main)
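The OSColorMode value inferred above is later attached to widgets with setProperty("OSColorMode", ...) (see Dialog.__init__ in gui/logic.py below), so the dangerzone.css stylesheet can style light and dark mode through QSS property selectors. A hypothetical example of such a rule (the real selectors and colors in dangerzone.css are not part of this diff):

# Hypothetical QSS keyed on the OSColorMode property; the actual rules live
# in the dangerzone.css resource, which this diff does not include.
OS_COLOR_MODE_QSS = """
QLabel[OSColorMode="dark"]  { color: #ffffff; background-color: #2d2d2d; }
QLabel[OSColorMode="light"] { color: #000000; background-color: #ffffff; }
"""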


@@ -1,176 +0,0 @@
import os
import platform
import subprocess
import shlex
import pipes
from PySide2 import QtCore, QtGui, QtWidgets
from colorama import Fore
if platform.system() == "Darwin":
import plistlib
elif platform.system() == "Linux":
import grp
import getpass
from xdg.DesktopEntry import DesktopEntry
from ..settings import Settings
class GuiCommon(object):
"""
The GuiCommon class is a singleton of shared functionality for the GUI
"""
def __init__(self, app, global_common):
# Qt app
self.app = app
# Global common singleton
self.global_common = global_common
# Preload font
self.fixed_font = QtGui.QFontDatabase.systemFont(QtGui.QFontDatabase.FixedFont)
# Preload list of PDF viewers on computer
self.pdf_viewers = self._find_pdf_viewers()
# Are we done waiting (for Docker Desktop to be installed, or for container to install)
self.is_waiting_finished = False
def get_window_icon(self):
if platform.system() == "Windows":
path = self.global_common.get_resource_path("dangerzone.ico")
else:
path = self.global_common.get_resource_path("icon.png")
return QtGui.QIcon(path)
def open_pdf_viewer(self, filename):
if platform.system() == "Darwin":
# Open in Preview
args = ["open", "-a", "Preview.app", filename]
# Run
args_str = " ".join(pipes.quote(s) for s in args)
print(Fore.YELLOW + "> " + Fore.CYAN + args_str)
subprocess.run(args)
elif platform.system() == "Linux":
# Get the PDF reader command
args = shlex.split(
self.pdf_viewers[self.global_common.settings.get("open_app")]
)
# %f, %F, %u, and %U are filenames or URLS -- so replace with the file to open
for i in range(len(args)):
if (
args[i] == "%f"
or args[i] == "%F"
or args[i] == "%u"
or args[i] == "%U"
):
args[i] = filename
# Open as a background process
args_str = " ".join(pipes.quote(s) for s in args)
print(Fore.YELLOW + "> " + Fore.CYAN + args_str)
subprocess.Popen(args)
def _find_pdf_viewers(self):
pdf_viewers = {}
if platform.system() == "Linux":
# Find all .desktop files
for search_path in [
"/usr/share/applications",
"/usr/local/share/applications",
os.path.expanduser("~/.local/share/applications"),
]:
try:
for filename in os.listdir(search_path):
full_filename = os.path.join(search_path, filename)
if os.path.splitext(filename)[1] == ".desktop":
# See which ones can open PDFs
desktop_entry = DesktopEntry(full_filename)
if (
"application/pdf" in desktop_entry.getMimeTypes()
and desktop_entry.getName() != "dangerzone"
):
pdf_viewers[
desktop_entry.getName()
] = desktop_entry.getExec()
except FileNotFoundError:
pass
return pdf_viewers
class Alert(QtWidgets.QDialog):
def __init__(
self, gui_common, global_common, message, ok_text="Ok", extra_button_text=None
):
super(Alert, self).__init__()
self.global_common = global_common
self.gui_common = gui_common
self.setWindowTitle("dangerzone")
self.setWindowIcon(self.gui_common.get_window_icon())
self.setModal(True)
flags = (
QtCore.Qt.CustomizeWindowHint
| QtCore.Qt.WindowTitleHint
| QtCore.Qt.WindowSystemMenuHint
| QtCore.Qt.WindowCloseButtonHint
| QtCore.Qt.WindowStaysOnTopHint
)
self.setWindowFlags(flags)
logo = QtWidgets.QLabel()
logo.setPixmap(
QtGui.QPixmap.fromImage(
QtGui.QImage(self.global_common.get_resource_path("icon.png"))
)
)
label = QtWidgets.QLabel()
label.setText(message)
label.setWordWrap(True)
message_layout = QtWidgets.QHBoxLayout()
message_layout.addWidget(logo)
message_layout.addSpacing(10)
message_layout.addWidget(label, stretch=1)
ok_button = QtWidgets.QPushButton(ok_text)
ok_button.clicked.connect(self.clicked_ok)
if extra_button_text:
extra_button = QtWidgets.QPushButton(extra_button_text)
extra_button.clicked.connect(self.clicked_extra)
cancel_button = QtWidgets.QPushButton("Cancel")
cancel_button.clicked.connect(self.clicked_cancel)
buttons_layout = QtWidgets.QHBoxLayout()
buttons_layout.addStretch()
buttons_layout.addWidget(ok_button)
if extra_button_text:
buttons_layout.addWidget(extra_button)
buttons_layout.addWidget(cancel_button)
layout = QtWidgets.QVBoxLayout()
layout.addLayout(message_layout)
layout.addSpacing(10)
layout.addLayout(buttons_layout)
self.setLayout(layout)
def clicked_ok(self):
self.done(QtWidgets.QDialog.Accepted)
def clicked_extra(self):
self.done(2)
def clicked_cancel(self):
self.done(QtWidgets.QDialog.Rejected)
def launch(self):
return self.exec_()

dangerzone/gui/logic.py Normal file

@@ -0,0 +1,401 @@
from __future__ import annotations
import logging
import os
import platform
import shlex
import subprocess
import typing
from collections import OrderedDict
from pathlib import Path
from typing import Optional
from colorama import Fore
# FIXME: See https://github.com/freedomofpress/dangerzone/issues/320 for more details.
if typing.TYPE_CHECKING:
from PySide2 import QtCore, QtGui, QtWidgets
from . import Application
else:
try:
from PySide6 import QtCore, QtGui, QtWidgets
except ImportError:
from PySide2 import QtCore, QtGui, QtWidgets
if platform.system() == "Linux":
from xdg.DesktopEntry import DesktopEntry, ParsingError
from ..isolation_provider.base import IsolationProvider
from ..logic import DangerzoneCore
from ..util import get_resource_path, replace_control_chars
log = logging.getLogger(__name__)
class DangerzoneGui(DangerzoneCore):
"""
Singleton of shared state / functionality for the GUI and core app logic
"""
def __init__(
self, app: "Application", isolation_provider: IsolationProvider
) -> None:
super().__init__(isolation_provider)
# Qt app
self.app = app
# Only one output dir is supported in the GUI
self.output_dir: str = ""
# Preload font
self.fixed_font = QtGui.QFontDatabase.systemFont(QtGui.QFontDatabase.FixedFont)
# Preload ordered list of PDF viewers on computer, starting with default
self.pdf_viewers = self._find_pdf_viewers()
# Are we done waiting (for Docker Desktop to be installed, or for container to install)
self.is_waiting_finished = False
def get_window_icon(self) -> QtGui.QIcon:
if platform.system() == "Windows":
path = get_resource_path("dangerzone.ico")
else:
path = get_resource_path("icon.png")
return QtGui.QIcon(str(path))
def open_pdf_viewer(self, filename: str) -> None:
if platform.system() == "Darwin":
# Open in Preview
args = ["open", "-a", "Preview.app", filename]
# Run
args_str = replace_control_chars(" ".join(shlex.quote(s) for s in args))
log.info(Fore.YELLOW + "> " + Fore.CYAN + args_str)
subprocess.run(args)
elif platform.system() == "Windows":
os.startfile(Path(filename)) # type: ignore [attr-defined]
elif platform.system() == "Linux":
# Get the PDF reader command
args = shlex.split(self.pdf_viewers[self.settings.get("open_app")])
# %f, %F, %u, and %U are filenames or URLS -- so replace with the file to open
for i in range(len(args)):
if (
args[i] == "%f"
or args[i] == "%F"
or args[i] == "%u"
or args[i] == "%U"
):
args[i] = filename
# Open as a background process
args_str = replace_control_chars(" ".join(shlex.quote(s) for s in args))
log.info(Fore.YELLOW + "> " + Fore.CYAN + args_str)
subprocess.Popen(args)
def _find_pdf_viewers(self) -> OrderedDict[str, str]:
pdf_viewers: OrderedDict[str, str] = OrderedDict()
if platform.system() == "Linux":
# Opportunistically query for default pdf handler
default_pdf_viewer = None
try:
default_pdf_viewer = subprocess.check_output(
["xdg-mime", "query", "default", "application/pdf"]
).decode()
except (FileNotFoundError, subprocess.CalledProcessError) as e:
# Log it and continue
log.info(
"xdg-mime query failed, default PDF handler could not be found."
)
log.debug(f"xdg-mime query failed: {e}")
# Find all .desktop files
for search_path in [
"/usr/share/applications",
"/usr/local/share/applications",
os.path.expanduser("~/.local/share/applications"),
]:
try:
for filename in os.listdir(search_path):
full_filename = os.path.join(search_path, filename)
if os.path.splitext(filename)[1] == ".desktop":
# See which ones can open PDFs
try:
desktop_entry = DesktopEntry(full_filename)
except ParsingError:
# Do not stop when encountering malformed desktop entries
continue
except Exception:
log.exception(
"Encountered the following exception while processing desktop entry %s",
full_filename,
)
else:
desktop_entry_name = desktop_entry.getName()
if (
"application/pdf" in desktop_entry.getMimeTypes()
and "dangerzone" not in desktop_entry_name.lower()
):
pdf_viewers[desktop_entry_name] = (
desktop_entry.getExec()
)
# Put the default entry first
if filename == default_pdf_viewer:
try:
pdf_viewers.move_to_end(
desktop_entry_name, last=False
)
except KeyError as e:
# Should be unreachable
log.error(
f"Problem reordering applications: {e}"
)
except FileNotFoundError:
pass
return pdf_viewers
class Dialog(QtWidgets.QDialog):
def __init__(
self,
dangerzone: DangerzoneGui,
title: str,
ok_text: str = "Ok",
has_cancel: bool = True,
cancel_text: str = "Cancel",
extra_button_text: Optional[str] = None,
) -> None:
super().__init__()
self.dangerzone = dangerzone
self.setProperty("OSColorMode", self.dangerzone.app.os_color_mode.value)
self.setWindowTitle(title)
self.setWindowIcon(self.dangerzone.get_window_icon())
self.setModal(True)
flags = (
QtCore.Qt.CustomizeWindowHint
| QtCore.Qt.WindowTitleHint
| QtCore.Qt.WindowSystemMenuHint
| QtCore.Qt.WindowCloseButtonHint
| QtCore.Qt.WindowStaysOnTopHint
)
self.setWindowFlags(flags)
message_layout = self.create_layout()
self.ok_button = QtWidgets.QPushButton(ok_text)
self.ok_button.clicked.connect(self.clicked_ok)
self.extra_button: Optional[QtWidgets.QPushButton] = None
if extra_button_text:
self.extra_button = QtWidgets.QPushButton(extra_button_text)
self.extra_button.clicked.connect(self.clicked_extra)
self.cancel_button: Optional[QtWidgets.QPushButton] = None
if has_cancel:
self.cancel_button = QtWidgets.QPushButton(cancel_text)
self.cancel_button.clicked.connect(self.clicked_cancel)
buttons_layout = self.create_buttons_layout()
layout = QtWidgets.QVBoxLayout()
layout.addLayout(message_layout)
layout.addSpacing(10)
layout.addLayout(buttons_layout)
self.setLayout(layout)
def create_buttons_layout(self) -> QtWidgets.QHBoxLayout:
buttons_layout = QtWidgets.QHBoxLayout()
buttons_layout.addStretch()
buttons_layout.addWidget(self.ok_button)
if self.extra_button:
buttons_layout.addWidget(self.extra_button)
if self.cancel_button:
buttons_layout.addWidget(self.cancel_button)
return buttons_layout
def create_layout(self) -> QtWidgets.QBoxLayout:
raise NotImplementedError("Dangerzone dialogs must implement this method")
def clicked_ok(self) -> None:
self.done(int(QtWidgets.QDialog.Accepted))
def clicked_extra(self) -> None:
self.done(2)
def clicked_cancel(self) -> None:
self.done(int(QtWidgets.QDialog.Rejected))
def launch(self) -> int:
return self.exec()
class Alert(Dialog):
def __init__( # type: ignore [no-untyped-def]
self,
*args,
message: str = "",
**kwargs,
) -> None:
self.message = message
kwargs.setdefault("title", "dangerzone")
super().__init__(*args, **kwargs)
def create_layout(self) -> QtWidgets.QBoxLayout:
logo = QtWidgets.QLabel()
logo.setPixmap(
QtGui.QPixmap.fromImage(QtGui.QImage(str(get_resource_path("icon.png"))))
)
label = QtWidgets.QLabel()
label.setText(self.message)
label.setWordWrap(True)
label.setOpenExternalLinks(True)
message_layout = QtWidgets.QHBoxLayout()
message_layout.addWidget(logo)
message_layout.addSpacing(10)
message_layout.addWidget(label, stretch=1)
return message_layout
class UpdateDialog(Dialog):
def __init__( # type: ignore [no-untyped-def]
self,
*args,
intro_msg: Optional[str] = None,
middle_widget: Optional[QtWidgets.QWidget] = None,
epilogue_msg: Optional[str] = None,
**kwargs,
) -> None:
self.intro_msg = intro_msg
self.middle_widget = middle_widget
self.epilogue_msg = epilogue_msg
super().__init__(*args, **kwargs)
def create_layout(self) -> QtWidgets.QBoxLayout:
self.setMinimumWidth(500)
message_layout = QtWidgets.QVBoxLayout()
if self.intro_msg is not None:
intro = QtWidgets.QLabel()
intro.setText(self.intro_msg)
intro.setWordWrap(True)
intro.setAlignment(QtCore.Qt.AlignCenter)
intro.setOpenExternalLinks(True)
message_layout.addWidget(intro)
message_layout.addSpacing(10)
if self.middle_widget is not None:
self.middle_widget.setParent(self)
message_layout.addWidget(self.middle_widget)
message_layout.addSpacing(10)
if self.epilogue_msg is not None:
epilogue = QtWidgets.QLabel()
epilogue.setText(self.epilogue_msg)
epilogue.setWordWrap(True)
epilogue.setOpenExternalLinks(True)
message_layout.addWidget(epilogue)
message_layout.addSpacing(10)
return message_layout
class CollapsibleBox(QtWidgets.QWidget):
"""Create a widget that can show/hide its contents when you click on it.
The credits for this code go to eyllanesc's answer in StackOverflow:
https://stackoverflow.com/a/52617714. We have made the following improvements:
1. Adapt the code to PySide.
2. Resize the window once the box uncollapses.
3. Add type hints.
"""
def __init__(self, title: str, parent: Optional[QtWidgets.QWidget] = None):
super(CollapsibleBox, self).__init__(parent)
self.toggle_button = QtWidgets.QToolButton(
text=title,
checkable=True,
checked=False,
)
self.toggle_button.setStyleSheet("QToolButton { border: none; }")
self.toggle_button.setToolButtonStyle(QtCore.Qt.ToolButtonTextBesideIcon)
self.toggle_button.setArrowType(QtCore.Qt.RightArrow)
self.toggle_button.clicked.connect(self.on_click)
self.toggle_animation = QtCore.QParallelAnimationGroup(self)
self.content_area = QtWidgets.QScrollArea(maximumHeight=0, minimumHeight=0)
self.content_area.setSizePolicy(
QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Fixed
)
self.content_area.setFrameShape(QtWidgets.QFrame.NoFrame)
lay = QtWidgets.QVBoxLayout(self)
lay.setSpacing(0)
lay.setContentsMargins(0, 0, 0, 0)
lay.addWidget(self.toggle_button)
lay.addWidget(self.content_area)
self.toggle_animation.addAnimation(
QtCore.QPropertyAnimation(self, b"minimumHeight")
)
self.toggle_animation.addAnimation(
QtCore.QPropertyAnimation(self, b"maximumHeight")
)
self.toggle_animation.addAnimation(
QtCore.QPropertyAnimation(self.content_area, b"maximumHeight")
)
self.toggle_animation.finished.connect(self.on_animation_finished)
def on_click(self) -> None:
checked = self.toggle_button.isChecked()
self.toggle_button.setArrowType(
QtCore.Qt.DownArrow if checked else QtCore.Qt.RightArrow
)
self.toggle_animation.setDirection(
QtCore.QAbstractAnimation.Forward
if checked
else QtCore.QAbstractAnimation.Backward
)
self.toggle_animation.start()
def on_animation_finished(self) -> None:
if not self.toggle_button.isChecked():
content_height = self.content_area.layout().sizeHint().height()
parent = self.parent()
assert isinstance(parent, QtWidgets.QWidget)
parent.resize(parent.width(), parent.height() - content_height)
def setContentLayout(self, layout: QtWidgets.QBoxLayout) -> None:
lay = self.content_area.layout()
del lay
self.content_area.setLayout(layout)
collapsed_height = self.sizeHint().height() - self.content_area.maximumHeight()
content_height = layout.sizeHint().height()
for i in range(self.toggle_animation.animationCount()):
animation = self.toggle_animation.animationAt(i)
assert isinstance(animation, QtCore.QPropertyAnimation)
animation.setDuration(60)
animation.setStartValue(collapsed_height)
animation.setEndValue(collapsed_height + content_height)
content_animation = self.toggle_animation.animationAt(
self.toggle_animation.animationCount() - 1
)
assert isinstance(content_animation, QtCore.QPropertyAnimation)
content_animation.setDuration(60)
content_animation.setStartValue(0)
content_animation.setEndValue(content_height)
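For reference, Dialog.launch() returns whatever integer was passed to done(): QDialog.Accepted (1) for OK, 2 for the extra button, and QDialog.Rejected (0) for Cancel. A small sketch of driving an Alert, assuming a DangerzoneGui instance named dangerzone has already been built as in gui/__init__.py above:

from dangerzone.gui.logic import Alert

alert = Alert(
    dangerzone,                          # assumed DangerzoneGui instance
    message="Replace the existing safe PDF?",
    ok_text="Replace",
    extra_button_text="Keep both",
)
choice = alert.launch()
if choice == 2:
    pass        # extra button clicked
elif choice:    # QDialog.Accepted == 1
    pass        # OK clicked
else:           # QDialog.Rejected == 0
    pass        # Cancel or window closed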

File diff suppressed because it is too large


@@ -1,30 +0,0 @@
import platform
from PySide2 import QtWidgets
class SysTray(QtWidgets.QSystemTrayIcon):
def __init__(self, global_common, gui_common, app, app_wrapper):
super(SysTray, self).__init__()
self.global_common = global_common
self.gui_common = gui_common
self.app = app
self.app_wrapper = app_wrapper
self.setIcon(self.gui_common.get_window_icon())
menu = QtWidgets.QMenu()
self.new_action = menu.addAction("New window")
self.new_action.triggered.connect(self.new_window)
self.quit_action = menu.addAction("Quit")
self.quit_action.triggered.connect(self.quit_clicked)
self.setContextMenu(menu)
self.show()
def new_window(self):
self.app_wrapper.new_window.emit()
def quit_clicked(self):
self.app.quit()

dangerzone/gui/updater.py Normal file

@@ -0,0 +1,315 @@
"""A module that contains the logic for checking for updates."""
import json
import logging
import platform
import sys
import time
import typing
from typing import Optional
from packaging import version
if typing.TYPE_CHECKING:
from PySide2 import QtCore, QtWidgets
else:
try:
from PySide6 import QtCore, QtWidgets
except ImportError:
from PySide2 import QtCore, QtWidgets
# XXX implicit import of the "markdown" module required for cx_Freeze to build on Windows
# See https://github.com/freedomofpress/dangerzone/issues/501
import html.parser # noqa: F401
import markdown
import requests
from ..util import get_version
from .logic import Alert, DangerzoneGui
log = logging.getLogger(__name__)
MSG_CONFIRM_UPDATE_CHECKS = """\
<p><b>Do you want Dangerzone to automatically check for updates?</b></p>
<p>If you accept, Dangerzone will check the
<a href="https://github.com/freedomofpress/dangerzone/releases">latest releases page</a>
in github.com on startup. Otherwise it will make no network requests and
won't inform you about new releases.</p>
<p>If you prefer another way of getting notified about new releases, we suggest adding
to your RSS reader our
<a href="https://fosstodon.org/@dangerzone.rss">Mastodon feed</a>. For more information
about updates, check
<a href="https://github.com/freedomofpress/dangerzone/wiki/Updates">this webpage</a>.</p>
"""
UPDATE_CHECK_COOLDOWN_SECS = 60 * 60 * 12 # Check for updates at most every 12 hours.
class UpdateCheckPrompt(Alert):
"""The prompt that asks the users if they want to enable update checks."""
x_pressed = False
def closeEvent(self, event: QtCore.QEvent) -> None:
"""Detect when a user has pressed "X" in the title bar.
This function is called when a user clicks on "X" in the title bar. We want to
differentiate between the user clicking on "Cancel" and clicking on "X", since
in the second case, we want to remind them again on the next run.
See: https://stackoverflow.com/questions/70851063/pyqt-differentiate-between-close-function-and-title-bar-close-x
"""
self.x_pressed = True
event.accept()
def create_buttons_layout(self) -> QtWidgets.QHBoxLayout:
buttons_layout = QtWidgets.QHBoxLayout()
buttons_layout.addStretch()
assert self.cancel_button is not None
buttons_layout.addWidget(self.cancel_button)
buttons_layout.addWidget(self.ok_button)
self.ok_button.setDefault(True)
return buttons_layout
class UpdateReport:
"""A report for an update check."""
def __init__(
self,
version: Optional[str] = None,
changelog: Optional[str] = None,
error: Optional[str] = None,
):
self.version = version
self.changelog = changelog
self.error = error
def empty(self) -> bool:
return self.version is None and self.changelog is None and self.error is None
class UpdaterThread(QtCore.QThread):
"""Check asynchronously for Dangerzone updates.
The Updater class is mainly responsible for the following:
1. Asking the user if they want to enable update checks or not.
2. Determining when it's the right time to check for updates.
3. Hitting the GitHub releases API and learning about updates.
Since checking for updates is a task that may take some time, we perform it
asynchronously, in a Qt thread. This thread then triggers a signal, and informs
whoever has connected to it.
"""
finished = QtCore.Signal(UpdateReport)
GH_RELEASE_URL = (
"https://api.github.com/repos/freedomofpress/dangerzone/releases/latest"
)
REQ_TIMEOUT = 15
def __init__(self, dangerzone: DangerzoneGui):
super().__init__()
self.dangerzone = dangerzone
###########
# Helpers for updater settings
#
# These helpers make it easy to retrieve specific updater-related settings, as well
# as save the settings file, only when necessary.
@property
def check(self) -> Optional[bool]:
return self.dangerzone.settings.get("updater_check")
@check.setter
def check(self, val: bool) -> None:
self.dangerzone.settings.set("updater_check", val, autosave=True)
def prompt_for_checks(self) -> Optional[bool]:
"""Ask the user if they want to be informed about Dangerzone updates."""
log.debug("Prompting the user for update checks")
# FIXME: Handle the case where a user clicks on "X", instead of explicitly
# making a choice. We should probably ask them again on the next run.
prompt = UpdateCheckPrompt(
self.dangerzone,
message=MSG_CONFIRM_UPDATE_CHECKS,
ok_text="Check Automatically",
cancel_text="Don't Check",
)
check = prompt.launch()
if not check and prompt.x_pressed:
return None
return bool(check)
def should_check_for_updates(self) -> bool:
"""Determine if we can check for updates based on settings and user prefs.
Note that this method only checks if the user has expressed an interest in
learning about new updates, and not whether we should actually make an update
check. Those two things are distinct. For example:
* A user may have expressed that they want to learn about new updates.
* A previous update check may have found out that there's a new version out.
* Thus we will always show to the user the cached info about the new version,
and won't make a new update check.
"""
log.debug("Checking platform type")
# TODO: Disable updates for Homebrew installations.
if platform.system() == "Linux" and not getattr(sys, "dangerzone_dev", False):
log.debug("Running on Linux, disabling updates")
if not self.check: # if not overridden by user
self.check = False
return False
log.debug("Checking if first run of Dangerzone")
if self.dangerzone.settings.get("updater_last_check") is None:
log.debug("Dangerzone is running for the first time, updates are stalled")
self.dangerzone.settings.set("updater_last_check", 0, autosave=True)
return False
log.debug("Checking if user has already expressed their preference")
if self.check is None:
log.debug("User has not been asked yet for update checks")
self.check = self.prompt_for_checks()
return bool(self.check)
elif not self.check:
log.debug("User has expressed that they don't want to check for updates")
return False
return True
def can_update(self, cur_version: str, latest_version: str) -> bool:
if version.parse(cur_version) == version.parse(latest_version):
return False
elif version.parse(cur_version) > version.parse(latest_version):
# FIXME: This is a sanity check, but we should improve its wording.
raise Exception("Received version is older than the latest version")
else:
return True
def _get_now_timestamp(self) -> int:
return int(time.time())
def _should_postpone_update_check(self) -> bool:
"""Consult and update cooldown timer.
If the previous check happened before the cooldown period expires, do not check
again.
"""
current_time = self._get_now_timestamp()
last_check = self.dangerzone.settings.get("updater_last_check")
if current_time < last_check + UPDATE_CHECK_COOLDOWN_SECS:
log.debug("Cooling down update checks")
return True
else:
return False
def get_latest_info(self) -> UpdateReport:
"""Get the latest release info from GitHub.
Also, render the changelog from Markdown format to HTML, so that we can show it
to the users.
"""
try:
res = requests.get(self.GH_RELEASE_URL, timeout=self.REQ_TIMEOUT)
except Exception as e:
raise RuntimeError(
f"Encountered an exception while checking {self.GH_RELEASE_URL}: {e}"
)
if res.status_code != 200:
raise RuntimeError(
f"Encountered an HTTP {res.status_code} error while checking"
f" {self.GH_RELEASE_URL}"
)
try:
info = res.json()
except json.JSONDecodeError:
raise ValueError(f"Received a non-JSON response from {self.GH_RELEASE_URL}")
try:
version = info["tag_name"].lstrip("v")
changelog = markdown.markdown(info["body"])
except KeyError:
raise ValueError(
f"Missing required fields in JSON response from {self.GH_RELEASE_URL}"
)
return UpdateReport(version=version, changelog=changelog)
# XXX: This happens in parallel with other tasks. DO NOT alter global state!
def _check_for_updates(self) -> UpdateReport:
"""Check for updates locally and remotely.
Check for updates in two places:
1. In our settings, in case we have cached the latest version/changelog from a
previous run.
2. In GitHub, by hitting the latest releases API.
"""
log.debug("Checking for Dangerzone updates")
latest_version = self.dangerzone.settings.get("updater_latest_version")
if version.parse(get_version()) < version.parse(latest_version):
log.debug("Determined that there is an update due to cached results")
return UpdateReport(
version=latest_version,
changelog=self.dangerzone.settings.get("updater_latest_changelog"),
)
# If the previous check happened before the cooldown period expires, do not
# check again. Else, bump the last check timestamp, before making the actual
# check. This is to ensure that even failed update checks respect the cooldown
# period.
if self._should_postpone_update_check():
return UpdateReport()
else:
self.dangerzone.settings.set(
"updater_last_check", self._get_now_timestamp(), autosave=True
)
log.debug("Checking the latest GitHub release")
report = self.get_latest_info()
log.debug(f"Latest version in GitHub is {report.version}")
if report.version and self.can_update(latest_version, report.version):
log.debug(
f"Determined that there is an update due to a new GitHub version:"
f" {latest_version} < {report.version}"
)
return report
log.debug("No need to update")
return UpdateReport()
##################
# Logic for running update checks asynchronously
def check_for_updates(self) -> UpdateReport:
"""Check for updates and return a report with the findings:
There are three scenarios when we check for updates, and each scenario returns a
slightly different answer:
1. No new updates: Return an empty update report.
2. Updates are available: Return an update report with the latest version and
changelog, in HTML format.
3. Update check failed: Return an update report that holds just the error
message.
"""
try:
res = self._check_for_updates()
except Exception as e:
log.exception("Encountered an error while checking for upgrades")
res = UpdateReport(error=str(e))
return res
def run(self) -> None:
self.finished.emit(self.check_for_updates())
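To make the check-and-bump pattern above concrete, here is a minimal, self-contained sketch of the same cooldown logic that _check_for_updates() follows. The constant value and the dictionary-based settings store are placeholders for illustration, not the real Dangerzone settings:

import time

UPDATE_CHECK_COOLDOWN_SECS = 12 * 60 * 60  # placeholder value, not the real constant

# Stand-in for the persisted settings store that the updater consults.
settings = {"updater_last_check": 0}

def should_postpone_update_check() -> bool:
    """Return True while we are still inside the cooldown window."""
    return int(time.time()) < settings["updater_last_check"] + UPDATE_CHECK_COOLDOWN_SECS

def maybe_check_for_updates(do_check) -> None:
    if should_postpone_update_check():
        return
    # Bump the timestamp *before* the actual check, so that even failed
    # checks respect the cooldown period.
    settings["updater_last_check"] = int(time.time())
    do_check()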


@@ -0,0 +1,388 @@
import contextlib
import logging
import os
import platform
import signal
import subprocess
import sys
import threading
from abc import ABC, abstractmethod
from io import BytesIO
from typing import IO, Callable, Iterator, Optional
import fitz
from colorama import Fore, Style
from ..conversion import errors
from ..conversion.common import DEFAULT_DPI, INT_BYTES
from ..document import Document
from ..util import get_tessdata_dir, replace_control_chars
log = logging.getLogger(__name__)
TIMEOUT_EXCEPTION = 15
TIMEOUT_GRACE = 15
TIMEOUT_FORCE = 5
def _signal_process_group(p: subprocess.Popen, signo: int) -> None:
"""Send a signal to a process group."""
try:
os.killpg(os.getpgid(p.pid), signo)
except (ProcessLookupError, PermissionError):
# If the process no longer exists, we may encounter the above errors, either
# when looking for the process group (ProcessLookupError), or when trying to
# kill a process group that no longer exists (PermissionError)
return
except Exception:
log.exception(
f"Unexpected error while sending signal {signo} to the"
f"document-to-pixels process group (PID: {p.pid})"
)
def terminate_process_group(p: subprocess.Popen) -> None:
"""Terminate a process group."""
if platform.system() == "Windows":
p.terminate()
else:
_signal_process_group(p, signal.SIGTERM)
def kill_process_group(p: subprocess.Popen) -> None:
"""Forcefully kill a process group."""
if platform.system() == "Windows":
p.kill()
else:
_signal_process_group(p, signal.SIGKILL)
def read_bytes(f: IO[bytes], size: int, exact: bool = True) -> bytes:
"""Read bytes from a file-like object."""
buf = f.read(size)
if exact and len(buf) != size:
raise errors.ConverterProcException()
return buf
def read_int(f: IO[bytes]) -> int:
"""Read 2 bytes from a file-like object, and decode them as int."""
untrusted_int = f.read(INT_BYTES)
if len(untrusted_int) != INT_BYTES:
raise errors.ConverterProcException()
return int.from_bytes(untrusted_int, "big", signed=False)
def sanitize_debug_text(text: bytes) -> str:
"""Read all the buffer and return a sanitized version"""
untrusted_text = text.decode("ascii", errors="replace")
return replace_control_chars(untrusted_text, keep_newlines=True)
class IsolationProvider(ABC):
"""
Abstracts an isolation provider
"""
def __init__(self, debug: bool = False) -> None:
self.debug = debug
if self.should_capture_stderr():
self.proc_stderr = subprocess.PIPE
else:
self.proc_stderr = subprocess.DEVNULL
def should_capture_stderr(self) -> bool:
return self.debug or getattr(sys, "dangerzone_dev", False)
@abstractmethod
def install(self) -> bool:
pass
def convert(
self,
document: Document,
ocr_lang: Optional[str],
progress_callback: Optional[Callable] = None,
) -> None:
self.progress_callback = progress_callback
document.mark_as_converting()
try:
with self.doc_to_pixels_proc(document) as conversion_proc:
self.convert_with_proc(document, ocr_lang, conversion_proc)
document.mark_as_safe()
if document.archive_after_conversion:
document.archive()
except errors.ConversionException as e:
self.print_progress(document, True, str(e), 0)
document.mark_as_failed()
except Exception as e:
log.exception(
f"An exception occurred while converting document '{document.id}'"
)
self.print_progress(document, True, str(e), 0)
document.mark_as_failed()
def ocr_page(self, pixmap: fitz.Pixmap, ocr_lang: str) -> bytes:
"""Get a single page as pixels, OCR it, and return a PDF as bytes."""
return pixmap.pdfocr_tobytes(
compress=True,
language=ocr_lang,
tessdata=str(get_tessdata_dir()),
)
def pixels_to_pdf_page(
self,
untrusted_data: bytes,
untrusted_width: int,
untrusted_height: int,
ocr_lang: Optional[str],
) -> fitz.Document:
"""Convert a byte array of RGB pixels into a PDF page, optionally with OCR."""
pixmap = fitz.Pixmap(
fitz.Colorspace(fitz.CS_RGB),
untrusted_width,
untrusted_height,
untrusted_data,
False,
)
pixmap.set_dpi(DEFAULT_DPI, DEFAULT_DPI)
if ocr_lang: # OCR the document
page_pdf_bytes = self.ocr_page(pixmap, ocr_lang)
else: # Don't OCR
page_doc = fitz.Document()
page_doc.insert_file(pixmap)
page_pdf_bytes = page_doc.tobytes(deflate_images=True)
return fitz.open("pdf", page_pdf_bytes)
def convert_with_proc(
self,
document: Document,
ocr_lang: Optional[str],
p: subprocess.Popen,
) -> None:
percentage = 0.0
with open(document.input_filename, "rb") as f:
try:
assert p.stdin is not None
p.stdin.write(f.read())
p.stdin.close()
except BrokenPipeError:
raise errors.ConverterProcException()
assert p.stdout
n_pages = read_int(p.stdout)
if n_pages == 0 or n_pages > errors.MAX_PAGES:
raise errors.MaxPagesException()
step = 100 / n_pages
safe_doc = fitz.Document()
for page in range(1, n_pages + 1):
searchable = "searchable " if ocr_lang else ""
text = (
f"Converting page {page}/{n_pages} from pixels to {searchable}PDF"
)
self.print_progress(document, False, text, percentage)
width = read_int(p.stdout)
height = read_int(p.stdout)
if not (1 <= width <= errors.MAX_PAGE_WIDTH):
raise errors.MaxPageWidthException()
if not (1 <= height <= errors.MAX_PAGE_HEIGHT):
raise errors.MaxPageHeightException()
num_pixels = width * height * 3 # three color channels
untrusted_pixels = read_bytes(
p.stdout,
num_pixels,
)
page_pdf = self.pixels_to_pdf_page(
untrusted_pixels,
width,
height,
ocr_lang,
)
safe_doc.insert_pdf(page_pdf)
percentage += step
# Ensure nothing else is read after all bitmaps are obtained
p.stdout.close()
# Saving it with a different name first, because PyMuPDF cannot handle
# non-Unicode chars.
safe_doc.save(document.sanitized_output_filename)
os.replace(document.sanitized_output_filename, document.output_filename)
# TODO handle leftover code input
text = "Successfully converted document"
self.print_progress(document, False, text, 100)
def print_progress(
self, document: Document, error: bool, text: str, percentage: float
) -> None:
s = Style.BRIGHT + Fore.YELLOW + f"[doc {document.id}] "
s += Fore.CYAN + f"{int(percentage)}% " + Style.RESET_ALL
if error:
s += Fore.RED + text + Style.RESET_ALL
log.error(s)
else:
s += text
log.info(s)
if self.progress_callback:
self.progress_callback(error, text, percentage)
def get_proc_exception(
self, p: subprocess.Popen, timeout: int = TIMEOUT_EXCEPTION
) -> Exception:
"""Returns an exception associated with a process exit code"""
try:
error_code = p.wait(timeout)
except subprocess.TimeoutExpired:
return errors.UnexpectedConversionError(
"Encountered an I/O error during document to pixels conversion,"
f" but the conversion process is still running after {timeout} seconds"
f" (PID: {p.pid})"
)
except Exception:
return errors.UnexpectedConversionError(
"Encountered an I/O error during document to pixels conversion,"
f" but the status of the conversion process is unknown (PID: {p.pid})"
)
return errors.exception_from_error_code(error_code)
@abstractmethod
def should_wait_install(self) -> bool:
"""Whether this isolation provider takes a lot of time to install."""
pass
@abstractmethod
def is_available(self) -> bool:
"""Whether the backing implementation of the isolation provider is available."""
pass
@abstractmethod
def get_max_parallel_conversions(self) -> int:
pass
@abstractmethod
def start_doc_to_pixels_proc(self, document: Document) -> subprocess.Popen:
pass
@abstractmethod
def terminate_doc_to_pixels_proc(
self, document: Document, p: subprocess.Popen
) -> None:
"""Terminate gracefully the process started for the doc-to-pixels phase."""
pass
def ensure_stop_doc_to_pixels_proc(
self,
document: Document,
p: subprocess.Popen,
timeout_grace: int = TIMEOUT_GRACE,
timeout_force: int = TIMEOUT_FORCE,
) -> None:
"""Stop the conversion process, or ensure it has exited.
This method should be called when we want to verify that the doc-to-pixels
process has exited, or terminate it ourselves. The termination should happen as
gracefully as possible, and we should not block indefinitely until the process
has exited.
"""
# Check if the process completed.
ret = p.poll()
if ret is not None:
return
# At this point, the process is still running. This may be benign, as we haven't
# waited for it yet. Terminate it gracefully.
self.terminate_doc_to_pixels_proc(document, p)
try:
p.wait(timeout_grace)
except subprocess.TimeoutExpired:
log.warning(
f"Conversion process did not terminate gracefully after {timeout_grace}"
" seconds. Killing it forcefully..."
)
# Forcefully kill the running process.
kill_process_group(p)
try:
p.wait(timeout_force)
except subprocess.TimeoutExpired:
log.warning(
"Conversion process did not terminate forcefully after"
f" {timeout_force} seconds. Resources may linger..."
)
@contextlib.contextmanager
def doc_to_pixels_proc(
self,
document: Document,
timeout_exception: int = TIMEOUT_EXCEPTION,
timeout_grace: int = TIMEOUT_GRACE,
timeout_force: int = TIMEOUT_FORCE,
) -> Iterator[subprocess.Popen]:
"""Start a conversion process, pass it to the caller, and then clean it up."""
# Store the proc stderr in memory
stderr = BytesIO()
p = self.start_doc_to_pixels_proc(document)
stderr_thread = self.start_stderr_thread(p, stderr)
if platform.system() != "Windows":
assert os.getpgid(p.pid) != os.getpgid(os.getpid()), (
"Parent shares same PGID with child"
)
try:
yield p
except errors.ConverterProcException as e:
exception = self.get_proc_exception(p, timeout_exception)
raise exception from e
finally:
self.ensure_stop_doc_to_pixels_proc(
document, p, timeout_grace=timeout_grace, timeout_force=timeout_force
)
if stderr_thread:
# Wait for the thread to complete. If it's still alive, mention it in the debug log.
stderr_thread.join(timeout=1)
debug_bytes = stderr.getvalue()
debug_log = sanitize_debug_text(debug_bytes)
incomplete = "(incomplete) " if stderr_thread.is_alive() else ""
log.info(
"Conversion output (doc to pixels)\n"
f"----- DOC TO PIXELS LOG START {incomplete}-----\n"
f"{debug_log}" # no need for an extra newline here
"----- DOC TO PIXELS LOG END -----"
)
def start_stderr_thread(
self, process: subprocess.Popen, stderr: IO[bytes]
) -> Optional[threading.Thread]:
"""Start a thread to read stderr from the process"""
def _stream_stderr(process_stderr: IO[bytes]) -> None:
try:
for line in process_stderr:
stderr.write(line)
except (ValueError, IOError) as e:
log.debug(f"Stderr stream closed: {e}")
if process.stderr:
stderr_thread = threading.Thread(
target=_stream_stderr,
args=(process.stderr,),
daemon=True,
)
stderr_thread.start()
return stderr_thread
return None
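For reference, convert_with_proc() above expects the untrusted conversion process to emit a small framed stream on its stdout: a page count, then a width, a height, and width * height * 3 raw RGB bytes for every page, with each integer read through the read_int() helper. Below is a standalone sketch that builds such a stream in memory; the integer width INT_BYTES is assumed to be 2 here, since the imported constant is not part of this diff:

import io

INT_BYTES = 2  # assumption; the real value comes from dangerzone.conversion.common

def write_int(buf: io.BytesIO, value: int) -> None:
    buf.write(value.to_bytes(INT_BYTES, "big", signed=False))

buf = io.BytesIO()
write_int(buf, 1)                  # one page
write_int(buf, 4)                  # page width in pixels
write_int(buf, 4)                  # page height in pixels
buf.write(b"\xff" * (4 * 4 * 3))   # width * height * 3 bytes of RGB data
stream = buf.getvalue()            # what the trusted side consumes, page by page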


@@ -0,0 +1,340 @@
import logging
import os
import platform
import shlex
import subprocess
from typing import List, Tuple
from .. import container_utils, errors
from ..container_utils import Runtime
from ..document import Document
from ..util import get_resource_path, get_subprocess_startupinfo
from .base import IsolationProvider, terminate_process_group
TIMEOUT_KILL = 5 # Timeout in seconds until the kill command returns.
MINIMUM_DOCKER_DESKTOP = {
"Darwin": "4.40.0",
"Windows": "4.40.0",
}
# Define startupinfo for subprocesses
if platform.system() == "Windows":
startupinfo = subprocess.STARTUPINFO() # type: ignore [attr-defined]
startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW # type: ignore [attr-defined]
else:
startupinfo = None
log = logging.getLogger(__name__)
class Container(IsolationProvider):
# Name of the dangerzone container
@staticmethod
def get_runtime_security_args() -> List[str]:
"""Security options applicable to the outer Dangerzone container.
Our security precautions for the outer Dangerzone container are the following:
* Do not let the container assume new privileges.
* Drop all capabilities, except for CAP_SYS_CHROOT, which is necessary for
running gVisor.
* Do not allow access to the network stack.
* Run the container as the unprivileged `dangerzone` user.
* Set the `container_engine_t` SELinux label, which allows gVisor to work on
SELinux-enforcing systems
(see https://github.com/freedomofpress/dangerzone/issues/880).
* Set a custom seccomp policy for every container engine, since the `ptrace(2)`
system call is forbidden by some.
For Podman specifically, where applicable, we also add the following:
* Do not log the container's output.
* Do not map the host user to the container, with `--userns nomap` (available
from Podman 4.1 onwards)
"""
runtime = Runtime()
if runtime.name == "podman":
security_args = ["--log-driver", "none"]
security_args += ["--security-opt", "no-new-privileges"]
if container_utils.get_runtime_version() >= (4, 1):
# We perform a platform check to avoid the following Podman Desktop
# error on Windows:
#
# Error: nomap is only supported in rootless mode
#
# See also: https://github.com/freedomofpress/dangerzone/issues/1127
if platform.system() != "Windows":
security_args += ["--userns", "nomap"]
else:
security_args = ["--security-opt=no-new-privileges:true"]
# We specify a custom seccomp policy uniformly, because on certain container
# engines the default policy might not allow the `ptrace(2)` syscall [1]. Our
# custom seccomp policy has been copied as is [2] from the official Podman repo.
#
# [1] https://github.com/freedomofpress/dangerzone/issues/846
# [2] https://github.com/containers/common/blob/d3283f8401eeeb21f3c59a425b5461f069e199a7/pkg/seccomp/seccomp.json
seccomp_json_path = str(get_resource_path("seccomp.gvisor.json"))
# We perform a platform check to avoid the following Podman Desktop
# error on Windows:
#
# Error: opening seccomp profile failed: open
# C:\[...]\dangerzone\share\seccomp.gvisor.json: no such file or directory
#
# See also: https://github.com/freedomofpress/dangerzone/issues/1127
if runtime.name == "podman" and platform.system() != "Windows":
security_args += ["--security-opt", f"seccomp={seccomp_json_path}"]
security_args += ["--cap-drop", "all"]
security_args += ["--cap-add", "SYS_CHROOT"]
security_args += ["--security-opt", "label=type:container_engine_t"]
security_args += ["--network=none"]
security_args += ["-u", "dangerzone"]
return security_args
@staticmethod
def install() -> bool:
"""Install the container image tarball, or verify that it's already installed.
Perform the following actions:
1. Get the tags of any locally available images that match Dangerzone's image
name.
2. Get the expected image tag from the image-id.txt file.
- If this tag is present in the local images, then we can return.
- Else, prune the older container images and continue.
3. Load the image tarball and make sure it matches the expected tag.
"""
old_tags = container_utils.list_image_tags()
expected_tag = container_utils.get_expected_tag()
if expected_tag not in old_tags:
# Prune older container images.
log.info(
f"Could not find a Dangerzone container image with tag '{expected_tag}'"
)
for tag in old_tags:
tag = container_utils.CONTAINER_NAME + ":" + tag
container_utils.delete_image_tag(tag)
else:
return True
# Load the image tarball into the container runtime.
container_utils.load_image_tarball()
# Check that the container image has the expected image tag.
# See https://github.com/freedomofpress/dangerzone/issues/988 for an example
# where this was not the case.
new_tags = container_utils.list_image_tags()
if expected_tag not in new_tags:
raise errors.ImageNotPresentException(
f"Could not find expected tag '{expected_tag}' after loading the"
" container image tarball"
)
return True
@staticmethod
def should_wait_install() -> bool:
return True
@staticmethod
def is_available() -> bool:
runtime = Runtime()
# Can we run `docker/podman image ls` without an error
with subprocess.Popen(
[str(runtime.path), "image", "ls"],
stdout=subprocess.DEVNULL,
stderr=subprocess.PIPE,
startupinfo=get_subprocess_startupinfo(),
) as p:
_, stderr = p.communicate()
if p.returncode != 0:
raise errors.NotAvailableContainerTechException(
runtime.name, stderr.decode()
)
return True
def check_docker_desktop_version(self) -> Tuple[bool, str]:
# On windows and darwin, check that the minimum version is met
version = ""
runtime = Runtime()
runtime_is_docker = runtime.name == "docker"
platform_is_not_linux = platform.system() != "Linux"
if runtime_is_docker and platform_is_not_linux:
with subprocess.Popen(
["docker", "version", "--format", "{{.Server.Platform.Name}}"],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
startupinfo=get_subprocess_startupinfo(),
) as p:
stdout, stderr = p.communicate()
if p.returncode != 0:
# When an error occurs, consider that the check went
# through, as we're checking for installation compatibility
# somewhere else already
return True, version
# The output is like "Docker Desktop 4.35.1 (173168)"
version = stdout.decode().replace("Docker Desktop", "").split()[0]
if version < MINIMUM_DOCKER_DESKTOP[platform.system()]:
return False, version
return True, version
def doc_to_pixels_container_name(self, document: Document) -> str:
"""Unique container name for the doc-to-pixels phase."""
return f"dangerzone-doc-to-pixels-{document.id}"
def pixels_to_pdf_container_name(self, document: Document) -> str:
"""Unique container name for the pixels-to-pdf phase."""
return f"dangerzone-pixels-to-pdf-{document.id}"
def exec(
self,
args: List[str],
) -> subprocess.Popen:
args_str = " ".join(shlex.quote(s) for s in args)
log.info("> " + args_str)
return subprocess.Popen(
args,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=self.proc_stderr,
startupinfo=startupinfo,
# Start the conversion process in a new session, so that we can later on
# kill the process group, without killing the controlling script.
start_new_session=True,
)
def exec_container(
self,
command: List[str],
name: str,
) -> subprocess.Popen:
runtime = Runtime()
security_args = self.get_runtime_security_args()
debug_args = []
if self.debug:
debug_args += ["-e", "RUNSC_DEBUG=1"]
enable_stdin = ["-i"]
set_name = ["--name", name]
prevent_leakage_args = ["--rm"]
image_name = [
container_utils.CONTAINER_NAME + ":" + container_utils.get_expected_tag()
]
args = (
["run"]
+ security_args
+ debug_args
+ prevent_leakage_args
+ enable_stdin
+ set_name
+ image_name
+ command
)
return self.exec([str(runtime.path)] + args)
def kill_container(self, name: str) -> None:
"""Terminate a spawned container.
We choose to terminate spawned containers using the `kill` action that the
container runtime provides, instead of terminating the process that spawned
them. The reason is that this process is not always tied to the underlying
container. For instance, in Docker containers, this process is actually
connected to the Docker daemon, and killing it will just close the associated
standard streams.
"""
runtime = Runtime()
cmd = [str(runtime.path), "kill", name]
try:
# We do not check the exit code of the process here, since the container may
# have stopped right before invoking this command. In that case, the
# command's output will contain some error messages, so we capture them in
# order to silence them.
#
# NOTE: We specify a timeout for this command, since we've seen it hang
# indefinitely for specific files. See:
# https://github.com/freedomofpress/dangerzone/issues/854
subprocess.run(
cmd,
capture_output=True,
startupinfo=get_subprocess_startupinfo(),
timeout=TIMEOUT_KILL,
)
except subprocess.TimeoutExpired:
log.warning(
f"Could not kill container '{name}' within {TIMEOUT_KILL} seconds"
)
except Exception as e:
log.exception(
f"Unexpected error occurred while killing container '{name}': {str(e)}"
)
def start_doc_to_pixels_proc(self, document: Document) -> subprocess.Popen:
# Convert document to pixels
command = [
"/usr/bin/python3",
"-m",
"dangerzone.conversion.doc_to_pixels",
]
name = self.doc_to_pixels_container_name(document)
return self.exec_container(command, name=name)
def terminate_doc_to_pixels_proc(
self, document: Document, p: subprocess.Popen
) -> None:
# There are two steps to gracefully terminate a conversion process:
# 1. Kill the container, and check that it has exited.
# 2. Gracefully terminate the conversion process, in case it's stuck on I/O
#
# See also https://github.com/freedomofpress/dangerzone/issues/791
self.kill_container(self.doc_to_pixels_container_name(document))
terminate_process_group(p)
def ensure_stop_doc_to_pixels_proc( # type: ignore [no-untyped-def]
self, document: Document, *args, **kwargs
) -> None:
super().ensure_stop_doc_to_pixels_proc(document, *args, **kwargs)
# Check if the container no longer exists, either because we successfully killed
# it, or because it exited on its own. We operate under the assumption that
# after a podman kill / docker kill invocation, this will likely be the case,
# else the container runtime (Docker/Podman) has experienced a problem, and we
# should report it.
runtime = Runtime()
name = self.doc_to_pixels_container_name(document)
all_containers = subprocess.run(
[str(runtime.path), "ps", "-a"],
capture_output=True,
startupinfo=get_subprocess_startupinfo(),
)
if name in all_containers.stdout.decode():
log.warning(f"Container '{name}' did not stop gracefully")
def get_max_parallel_conversions(self) -> int:
# FIXME hardcoded 1 until lengthy conversions are better handled
# https://github.com/freedomofpress/dangerzone/issues/257
return 1
runtime = Runtime() # type: ignore [unreachable]
n_cpu = 1
if platform.system() == "Linux":
# if on linux containers run natively
cpu_count = os.cpu_count()
if cpu_count is not None:
n_cpu = cpu_count
elif runtime.name == "docker":
# For Windows and MacOS containers run in VM
# So we obtain the CPU count for the VM
n_cpu_str = subprocess.check_output(
[str(runtime.path), "info", "--format", "{{.NCPU}}"],
text=True,
startupinfo=get_subprocess_startupinfo(),
)
n_cpu = int(n_cpu_str.strip())
return 2 * n_cpu + 1
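Putting exec_container() together with get_runtime_security_args(): on a Linux host with Podman, the assembled invocation looks roughly like the list below. The image tag, container name, and seccomp path are illustrative placeholders; the real values come from container_utils and Runtime():

# Illustrative only; argument order follows exec_container() above.
args = (
    ["podman", "run"]
    + ["--log-driver", "none",
       "--security-opt", "no-new-privileges",
       "--userns", "nomap",
       "--security-opt", "seccomp=/path/to/share/seccomp.gvisor.json",
       "--cap-drop", "all",
       "--cap-add", "SYS_CHROOT",
       "--security-opt", "label=type:container_engine_t",
       "--network=none",
       "-u", "dangerzone"]
    + ["--rm", "-i", "--name", "dangerzone-doc-to-pixels-1"]
    + ["<container-image>:<expected-tag>"]
    + ["/usr/bin/python3", "-m", "dangerzone.conversion.doc_to_pixels"]
)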


@@ -0,0 +1,71 @@
import logging
import subprocess
import sys
from ..conversion.common import DangerzoneConverter
from ..document import Document
from .base import IsolationProvider, terminate_process_group
log = logging.getLogger(__name__)
def dummy_script() -> None:
sys.stdin.buffer.read()
pages = 2
width = height = 9
DangerzoneConverter._write_int(pages)
for page in range(pages):
DangerzoneConverter._write_int(width)
DangerzoneConverter._write_int(height)
DangerzoneConverter._write_bytes(width * height * 3 * b"A")
class Dummy(IsolationProvider):
"""Dummy Isolation Provider (FOR TESTING ONLY)
"Do-nothing" converter - the sanitized files are the same as the input files.
Useful for testing without the need to use docker.
"""
def __init__(self) -> None:
# Sanity check
if not getattr(sys, "dangerzone_dev", False):
raise Exception(
"Dummy isolation provider is UNSAFE and should never be "
+ "called in a non-testing system."
)
super().__init__()
def install(self) -> bool:
return True
@staticmethod
def is_available() -> bool:
return True
@staticmethod
def should_wait_install() -> bool:
return False
def start_doc_to_pixels_proc(self, document: Document) -> subprocess.Popen:
cmd = [
sys.executable,
"-c",
"from dangerzone.isolation_provider.dummy import dummy_script;"
" dummy_script()",
]
return subprocess.Popen(
cmd,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=self.proc_stderr,
start_new_session=True,
)
def terminate_doc_to_pixels_proc(
self, document: Document, p: subprocess.Popen
) -> None:
terminate_process_group(p)
def get_max_parallel_conversions(self) -> int:
return 1


@@ -0,0 +1,135 @@
import io
import logging
import os
import subprocess
import sys
import zipfile
from pathlib import Path
from typing import IO
from ..conversion.common import running_on_qubes
from ..document import Document
from ..util import get_resource_path
from .base import IsolationProvider
log = logging.getLogger(__name__)
class Qubes(IsolationProvider):
"""Uses a disposable qube for performing the conversion"""
def install(self) -> bool:
return True
@staticmethod
def is_available() -> bool:
return True
@staticmethod
def should_wait_install() -> bool:
return False
def get_max_parallel_conversions(self) -> int:
return 1
def start_doc_to_pixels_proc(self, document: Document) -> subprocess.Popen:
dev_mode = getattr(sys, "dangerzone_dev", False) is True
if dev_mode:
# Use dz.ConvertDev RPC call instead, if we are in development mode.
# Basically, the change is that we also transfer the necessary Python
# code as a zipfile, before sending the doc that the user requested.
qrexec_policy = "dz.ConvertDev"
stderr = subprocess.PIPE
else:
qrexec_policy = "dz.Convert"
stderr = subprocess.DEVNULL
p = subprocess.Popen(
["/usr/bin/qrexec-client-vm", "@dispvm:dz-dvm", qrexec_policy],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=stderr,
# Start the conversion process in a new session, so that we can later on
# kill the process group, without killing the controlling script.
start_new_session=True,
)
if dev_mode:
assert p.stdin is not None
# Send the dangerzone module first.
self.teleport_dz_module(p.stdin)
return p
def terminate_doc_to_pixels_proc(
self, document: Document, p: subprocess.Popen
) -> None:
"""Terminate a spawned disposable qube.
Qubes does not offer a way out of the box to terminate disposable Qubes from
domU [1]. Our best bet is to close the standard streams of the process, and hope
that the disposable qube will attempt to read/write to them, and thus receive an
EOF.
There are two ways we can do the above; close the standard streams explicitly,
or terminate the process. The problem with the latter is that terminating
`qrexec-client-vm` happens immediately, and we no longer have a way to learn if
the disposable qube actually terminated. That's why we prefer closing the
standard streams explicitly, so that we can afterwards use `Popen.wait()` to
learn if the qube terminated.
Note that we don't close the stderr stream because we want to read debug logs
from it. In the rare case where a qube cannot terminate because it's stuck
writing at stderr (this is not the expected behavior), we expect that the
process will still be forcefully killed after the soft termination timeout
expires.
[1]: https://github.com/freedomofpress/dangerzone/issues/563#issuecomment-2034803232
"""
if p.stdin:
p.stdin.close()
if p.stdout:
p.stdout.close()
def teleport_dz_module(self, wpipe: IO[bytes]) -> None:
"""Send the dangerzone module to another qube, as a zipfile."""
# Grab the absolute file path of the dangerzone module.
import dangerzone as _dz
_conv_path = Path(_dz.conversion.__file__).parent
_src_root = Path(_dz.__file__).parent.parent
temp_file = io.BytesIO()
with zipfile.ZipFile(temp_file, "w") as z:
z.mkdir("dangerzone/")
z.writestr("dangerzone/__init__.py", "")
for root, _, files in os.walk(_conv_path):
for file in files:
if file.endswith(".py"):
file_path = os.path.join(root, file)
relative_path = os.path.relpath(file_path, _src_root)
z.write(file_path, relative_path)
# Send the following data:
# 1. The size of the Python zipfile, so that the server can know when to
# stop.
# 2. The Python zipfile itself.
bufsize_bytes = len(temp_file.getvalue()).to_bytes(4, "big")
wpipe.write(bufsize_bytes)
wpipe.write(temp_file.getvalue())
def is_qubes_native_conversion() -> bool:
"""Returns True if the conversion should be run using Qubes OS's diposable
VMs and False if not."""
if running_on_qubes():
if getattr(sys, "dangerzone_dev", False):
return os.environ.get("QUBES_CONVERSION", "0") == "1"
# XXX If Dangerzone is installed, check if the container image was shipped
# This disambiguates if it is running a Qubes-targeted build or not
# (Qubes-specific builds don't ship the container image)
return not get_resource_path("container.tar").exists()
else:
return False
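The dev-mode transfer in teleport_dz_module() is a simple size-prefixed handshake: four big-endian bytes holding the zipfile length, followed by the zipfile itself. The server side lives in the disposable qube and is not part of this changeset, but a hypothetical receiver would look roughly like this:

import io
import zipfile

def receive_dz_module(rpipe) -> zipfile.ZipFile:
    # Hypothetical counterpart of teleport_dz_module(), for illustration only.
    size = int.from_bytes(rpipe.read(4), "big")
    payload = rpipe.read(size)
    return zipfile.ZipFile(io.BytesIO(payload))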

92
dangerzone/logic.py Normal file

@@ -0,0 +1,92 @@
import concurrent.futures
import json
import logging
from typing import Callable, List, Optional
import colorama
from . import errors, util
from .document import Document
from .isolation_provider.base import IsolationProvider
from .settings import Settings
from .util import get_resource_path
log = logging.getLogger(__name__)
class DangerzoneCore(object):
"""
Singleton of shared state / functionality throughout the app
"""
def __init__(self, isolation_provider: IsolationProvider) -> None:
# Initialize terminal colors
colorama.init(autoreset=True)
# Languages supported by tesseract
with get_resource_path("ocr-languages.json").open("r") as f:
unsorted_ocr_languages = json.load(f)
self.ocr_languages = dict(sorted(unsorted_ocr_languages.items()))
# Load settings
self.settings = Settings()
self.documents: List[Document] = []
self.isolation_provider = isolation_provider
def add_document_from_filename(
self,
input_filename: str,
output_filename: Optional[str] = None,
archive: bool = False,
) -> None:
doc = Document(input_filename, output_filename, archive=archive)
self.add_document(doc)
def add_document(self, doc: Document) -> None:
if doc in self.documents:
raise errors.AddedDuplicateDocumentException()
self.documents.append(doc)
def remove_document(self, doc: Document) -> None:
if doc not in self.documents:
# Sanity check: should not have reached
return
log.debug(f"Removing document {doc.input_filename}")
self.documents.remove(doc)
def clear_documents(self) -> None:
log.debug("Removing all documents")
self.documents = []
def convert_documents(
self, ocr_lang: Optional[str], stdout_callback: Optional[Callable] = None
) -> None:
def convert_doc(document: Document) -> None:
try:
self.isolation_provider.convert(
document,
ocr_lang,
stdout_callback,
)
except Exception:
log.exception(
f"Unexpected error occurred while converting '{document}'"
)
document.mark_as_failed()
max_jobs = self.isolation_provider.get_max_parallel_conversions()
with concurrent.futures.ThreadPoolExecutor(max_workers=max_jobs) as executor:
executor.map(convert_doc, self.documents)
def get_unconverted_documents(self) -> List[Document]:
return [doc for doc in self.documents if doc.is_unconverted()]
def get_safe_documents(self) -> List[Document]:
return [doc for doc in self.documents if doc.is_safe()]
def get_failed_documents(self) -> List[Document]:
return [doc for doc in self.documents if doc.is_failed()]
def get_converting_documents(self) -> List[Document]:
return [doc for doc in self.documents if doc.is_converting()]
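A minimal usage sketch of DangerzoneCore, paired here with the testing-only Dummy provider (which requires the dangerzone_dev flag that dev_scripts/dangerzone sets). The input filename is a placeholder, and the resources and settings must be reachable as in a development checkout:

import sys

from dangerzone.logic import DangerzoneCore
from dangerzone.isolation_provider.dummy import Dummy

sys.dangerzone_dev = True  # Dummy refuses to run outside a dev/testing setup

dz = DangerzoneCore(Dummy())
dz.add_document_from_filename("untrusted.pdf")  # placeholder path
dz.convert_documents(ocr_lang=None)
print([str(doc) for doc in dz.get_safe_documents()])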


@@ -1,30 +1,82 @@
import os
import json
import appdirs
import logging
import os
from pathlib import Path
from typing import TYPE_CHECKING, Any, Dict
from packaging import version
from .document import SAFE_EXTENSION
from .util import get_config_dir, get_version
log = logging.getLogger(__name__)
SETTINGS_FILENAME: str = "settings.json"
class Settings:
def __init__(self, common):
self.common = common
self.settings_filename = os.path.join(self.common.appdata_path, "settings.json")
self.default_settings = {
settings: Dict[str, Any]
def __init__(self) -> None:
self.settings_filename = get_config_dir() / SETTINGS_FILENAME
self.default_settings: Dict[str, Any] = self.generate_default_settings()
self.load()
@classmethod
def generate_default_settings(cls) -> Dict[str, Any]:
return {
"save": True,
"archive": True,
"ocr": True,
"ocr_language": "English",
"open": True,
"open_app": None,
"safe_extension": SAFE_EXTENSION,
"updater_check": None,
"updater_last_check": None, # last check in UNIX epoch (secs since 1970)
# FIXME: How to invalidate those if they change upstream?
"updater_latest_version": get_version(),
"updater_latest_changelog": "",
"updater_errors": 0,
}
self.load()
def custom_runtime_specified(self) -> bool:
return "container_runtime" in self.settings
def get(self, key):
def set_custom_runtime(self, runtime: str, autosave: bool = False) -> Path:
from .container_utils import Runtime # Avoid circular import
container_runtime = Runtime.path_from_name(runtime)
self.settings["container_runtime"] = str(container_runtime)
if autosave:
self.save()
return container_runtime
def unset_custom_runtime(self) -> None:
self.settings.pop("container_runtime")
self.save()
def get(self, key: str) -> Any:
return self.settings[key]
def set(self, key, val):
def set(self, key: str, val: Any, autosave: bool = False) -> None:
try:
old_val = self.get(key)
except KeyError:
old_val = None
self.settings[key] = val
if autosave and val != old_val:
self.save()
def load(self):
def get_updater_settings(self) -> Dict[str, Any]:
return {
key: val for key, val in self.settings.items() if key.startswith("updater_")
}
def load(self) -> None:
if os.path.isfile(self.settings_filename):
self.settings = self.default_settings
# If the settings file exists, load it
try:
with open(self.settings_filename, "r") as settings_file:
@@ -34,19 +86,22 @@ class Settings:
for key in self.default_settings:
if key not in self.settings:
self.settings[key] = self.default_settings[key]
elif key == "updater_latest_version":
if version.parse(get_version()) > version.parse(self.get(key)):
self.set(key, get_version())
except:
print("Error loading settings, falling back to default")
except Exception:
log.error("Error loading settings, falling back to default")
self.settings = self.default_settings
else:
# Save with default settings
print("Settings file doesn't exist, starting with default")
log.info("Settings file doesn't exist, starting with default")
self.settings = self.default_settings
self.save()
def save(self):
os.makedirs(self.common.appdata_path, exist_ok=True)
with open(self.settings_filename, "w") as settings_file:
def save(self) -> None:
self.settings_filename.parent.mkdir(parents=True, exist_ok=True)
with self.settings_filename.open("w") as settings_file:
json.dump(self.settings, settings_file, indent=4)
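In day-to-day use the Settings class behaves like a small persisted dictionary. A short sketch, using key names from the defaults above; the config path in the comment assumes platformdirs' default location on Linux:

from dangerzone.settings import Settings

settings = Settings()  # loads (or creates) e.g. ~/.config/dangerzone/settings.json
settings.set("ocr_language", "English")             # in-memory only
settings.set("updater_check", True, autosave=True)  # persisted immediately
print(settings.get("safe_extension"))
print(settings.get_updater_settings())              # every "updater_*" key/value pair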

132
dangerzone/util.py Normal file

@@ -0,0 +1,132 @@
import platform
import subprocess
import sys
import traceback
import unicodedata
from pathlib import Path
try:
import platformdirs
except ImportError:
import appdirs as platformdirs
def get_config_dir() -> Path:
return Path(platformdirs.user_config_dir("dangerzone"))
def get_resource_path(filename: str) -> Path:
if getattr(sys, "dangerzone_dev", False):
# Look for resources directory relative to python file
project_root = Path(__file__).parent.parent
prefix = project_root / "share"
else:
if platform.system() == "Darwin":
bin_path = Path(sys.executable)
app_path = bin_path.parent.parent
prefix = app_path / "Resources" / "share"
elif platform.system() == "Linux":
prefix = Path(sys.prefix) / "share" / "dangerzone"
elif platform.system() == "Windows":
exe_path = Path(sys.executable)
dz_install_path = exe_path.parent
prefix = dz_install_path / "share"
else:
raise NotImplementedError(f"Unsupported system {platform.system()}")
return prefix / filename
def get_tessdata_dir() -> Path:
if getattr(sys, "dangerzone_dev", False) or platform.system() in (
"Windows",
"Darwin",
):
# Always use the tessdata path from the Dangerzone ./share directory, for
# development builds, or in Windows/macOS platforms.
return get_resource_path("tessdata")
# In case of Linux systems, grab the Tesseract data from any of the following
# locations. We have found some of the locations through trial and error, whereas
# others are taken from the docs:
#
# [...] Possibilities are /usr/share/tesseract-ocr/tessdata or
# /usr/share/tessdata or /usr/share/tesseract-ocr/4.00/tessdata. [1]
#
# [1] https://tesseract-ocr.github.io/tessdoc/Installation.html
tessdata_dirs = [
Path("/usr/share/tessdata/"), # on some Debian
Path("/usr/share/tesseract/tessdata/"), # on Fedora
Path("/usr/share/tesseract-ocr/tessdata/"), # ? (documented)
Path("/usr/share/tesseract-ocr/4.00/tessdata/"), # on Debian Bullseye
Path("/usr/share/tesseract-ocr/5/tessdata/"), # on Debian Trixie
]
for dir in tessdata_dirs:
if dir.is_dir():
return dir
raise RuntimeError("Tesseract language data are not installed in the system")
def get_version() -> str:
try:
with get_resource_path("version.txt").open() as f:
version = f.read().strip()
except FileNotFoundError:
# In dev mode on Windows, get_resource_path doesn't work properly for the container,
# but luckily it doesn't need to know the version
version = "unknown"
return version
def get_subprocess_startupinfo(): # type: ignore [no-untyped-def]
if platform.system() == "Windows":
startupinfo = subprocess.STARTUPINFO()
startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
return startupinfo
else:
return None
def replace_control_chars(untrusted_str: str, keep_newlines: bool = False) -> str:
"""Remove control characters from string. Protects a terminal emulator
from obscure control characters.
Control characters are replaced by � U+FFFD Replacement Character.
If a user wants to keep the newline character (e.g., because they are sanitizing a
multi-line text), they must pass `keep_newlines=True`.
"""
def is_safe(chr: str) -> bool:
"""Return whether Unicode character is safe to print in a terminal
emulator, based on its General Category.
The following General Category values are considered unsafe:
* C* - all control character categories (Cc, Cf, Cs, Co, Cn)
* Zl - U+2028 LINE SEPARATOR only
* Zp - U+2029 PARAGRAPH SEPARATOR only
"""
categ = unicodedata.category(chr)
if categ.startswith("C") or categ in ("Zl", "Zp"):
return False
return True
sanitized_str = ""
for char in untrusted_str:
if (keep_newlines and char == "\n") or is_safe(char):
sanitized_str += char
else:
sanitized_str += "<EFBFBD>"
return sanitized_str
def format_exception(e: Exception) -> str:
# The signature of traceback.format_exception has changed in python 3.10
if sys.version_info < (3, 10):
output = traceback.format_exception(*sys.exc_info())
else:
output = traceback.format_exception(e)
return "".join(output)

29
debian/changelog vendored Normal file

@@ -0,0 +1,29 @@
dangerzone (0.9.0) unstable; urgency=low
* Released Dangerzone 0.9.0
-- Freedom of the Press Foundation <info@freedom.press> Mon, 31 Mar 2025 15:57:18 +0300
dangerzone (0.8.1) unstable; urgency=low
* Released Dangerzone 0.8.1
-- Freedom of the Press Foundation <info@freedom.press> Tue, 22 Dec 2024 22:03:28 +0300
dangerzone (0.8.0) unstable; urgency=low
* Released Dangerzone 0.8.0
-- Freedom of the Press Foundation <info@freedom.press> Tue, 30 Oct 2024 01:56:28 +0300
dangerzone (0.7.1) unstable; urgency=low
* Released Dangerzone 0.7.1
-- Freedom of the Press Foundation <info@freedom.press> Tue, 1 Oct 2024 17:02:28 +0300
dangerzone (0.7.0) unstable; urgency=low
* Removed stdeb in favor of direct debian packaging tools
-- Freedom of the Press Foundation <info@freedom.press> Tue, 27 Aug 2024 14:39:28 +0200

1
debian/compat vendored Normal file

@@ -0,0 +1 @@
10

15
debian/control vendored Normal file

@@ -0,0 +1,15 @@
Source: dangerzone
Maintainer: Freedom of the Press Foundation <info@freedom.press>
Section: python
Priority: optional
Build-Depends: dh-python, python3-setuptools, python3, dpkg-dev, debhelper (>= 9)
Standards-Version: 4.5.1
Homepage: https://github.com/freedomofpress/dangerzone
Rules-Requires-Root: no
Package: dangerzone
Architecture: any
Depends: ${misc:Depends}, podman, python3, python3-pyside2.qtcore, python3-pyside2.qtgui, python3-pyside2.qtwidgets, python3-pyside2.qtsvg, python3-platformdirs | python3-appdirs, python3-click, python3-xdg, python3-colorama, python3-requests, python3-markdown, python3-packaging, tesseract-ocr-all
Description: Take potentially dangerous PDFs, office documents, or images
Dangerzone is an open source desktop application that takes potentially dangerous PDFs, office documents, or images and converts them to safe PDFs. It uses disposable VMs on Qubes OS, or container technology in other OSes, to convert the documents within a secure sandbox.
.

8
debian/copyright vendored Normal file

@@ -0,0 +1,8 @@
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: dangerzone
Source: https://github.com/freedomofpress/dangerzone
Files: *
Copyright: 2020-2021 First Look Media
2022- Freedom of the Press Foundation, and Dangerzone contributors
License: AGPL-3.0-or-later

13
debian/rules vendored Executable file

@@ -0,0 +1,13 @@
#!/usr/bin/make -f
export PYBUILD_NAME=dangerzone
export DEB_BUILD_OPTIONS=nocheck
export PYBUILD_INSTALL_ARGS=--install-lib=/usr/lib/python3/dist-packages
export PYTHONDONTWRITEBYTECODE=1
export DH_VERBOSE=1
%:
dh $@ --with python3 --buildsystem=pybuild
override_dh_builddeb:
./install/linux/debian-vendor-pymupdf.py --dest debian/dangerzone/usr/lib/python3/dist-packages/dangerzone/vendor/
dh_builddeb $@

1
debian/source/format vendored Normal file

@@ -0,0 +1 @@
3.0 (native)

7
debian/source/options vendored Normal file

@@ -0,0 +1,7 @@
compression = "gzip"
tar-ignore = "dev_scripts"
tar-ignore = ".*"
tar-ignore = "__pycache__"
# Ignore the 'share/tessdata' dir, since it slows down the process, and we
# install Tesseract data via Debian packages anyway.
tar-ignore = "share/tessdata"

7
dev_scripts/README.md Normal file

@@ -0,0 +1,7 @@
# Developer scripts
This directory holds some scripts that are helpful for developing on Dangerzone.
Read the respective documentation for more details on some of the scripts.
* [`env.py`](../docs/developer/environments.md)
* [`qa.py`](../docs/developer/qa.md)


@@ -0,0 +1,7 @@
Package: *
Pin: origin "packages.freedom.press/apt-tools-prod"
Pin-Priority: 100
Package: conmon
Pin: origin "packages.freedom.press/apt-tools-prod"
Pin-Priority: 500


@@ -0,0 +1,71 @@
Types: deb
URIs: https://packages.freedom.press/apt-tools-prod
Suites: jammy
Components: main
Signed-By:
-----BEGIN PGP PUBLIC KEY BLOCK-----
Comment: DE28 AB24 1FA4 8260 FAC9 B8BA A7C9 B385 2260 4281
Comment: Dangerzone Release Key <dangerzone-release-key@freedom.
.
xsFNBGP2Ey4BEADSudGS33NCAeuUcHqrgNAet4bX6jAPVTNXgOGLK7DYuBNJ0aSR
1wv+PHaM8la/U2YGD31nKsEsLulzyzrdod8AlbyYPkygaYAJIa7NK7IuOtO6/52R
unpGkPFzA1oDhmOjUbkNthe3GTqDq6a5U04GRhtbY0U9j0+OREzy18IiTQVTnBRX
kP8h2H+MaQwI/J2hB7ZF4rRHbzrzfZByWZmxpjxRR56Zfdl8xgp35WDt4ohyo9n6
LrQZ6Va70EwWRFW2c+SuAF1V3HDdyfkk/NHbD7FWhNByExNCoHiFP1ZMUgqYlQVs
gAsoW9IEMGayPowMlB1IjhYa65ihyH511nsKP6l8M2cRAfVJTtV3P84JqrWJd9TE
3koSUa/yi8AO36omUWHHX9rjnj8CHLhk0GEf0c53E7Izad/DTcsVnIs6NT2e7t5u
O1FUQPFL5gHi0iWI+2jzRGNa/YvGF6amVav5dtus0bObfgPBdBo9UhqgimidX1+N
T/g25CTKhoUMz4dDycBcJQgeOOpbyPI7W7tpyBNyD3/JCga1+Kj5E+gw+MAFl+KB
gqaapbXfuIa69WSutOTKINjXVV+Wqn04uMjgcPZP+LCQWN49oS3dCflKEd/mmg63
8WMuzweDm5gYNCuGHprj3rNUjruVqHzSDwhfDuKtbDjgZ48V9CnwzNe2/wARAQAB
zT1EYW5nZXJ6b25lIFJlbGVhc2UgS2V5IDxkYW5nZXJ6b25lLXJlbGVhc2Uta2V5
QGZyZWVkb20ucHJlc3M+wsGRBBMBCAA7FiEE3iirJB+kgmD6ybi6p8mzhSJgQoEF
AmP2Ey4CGwMFCwkIBwICIgIGFQoJCAsCBBYCAwECHgcCF4AACgkQp8mzhSJgQoH6
BhAAkpD8fb4rJVAZuOpxWSNM0EzTjm08aFt9XZqyCxN+j+2737okhjzX5hkTbNgR
Opn2pAtBEBMXGYzyw/XIg8PKEhY9L0dE5+i0rZwodvdPWMPQ6kHizJQokSc2rWV4
bjFJrMyS3zXioh8W1zgoHLxvKTBncmdCPRX53Wf4TwZHqgimn03CHHHF9BJBix9d
8OXJbVBdjB3zRT3VuLxeq73YzNSgUYBjrNf1IYJQKnZzXO9FhfHtaxUnSde2wv9E
fhTa7fYDQz42zE2GPCcSVFUXmYB/FyHDswJAp7YtTNr8xYZ7ZkV6W1QIEWK9FVC3
ABcTrMnAGG5OFeNE5C2w5yVM+5f6sJ8H47y9fO8AvEyKRMzsD3c+AOpOgdy+4/6U
qN0lDB9mP7CXqU5lp7z42QofJtXvUAbzQquZoRZCdEK4N9G0TzUwTkWeeZ+tYgnB
FkM464DoaF1aZCGIaw3+J7XyweV2TkG9kS95khyszps1iHRUOGYzy0n8twFNWsbe
/qLZkDtYI28cnKkedHKNAysmtyNIzab+Elc3vni0BEz8d+rn5rG20kjLBDQCZnf9
UyrXTQ6ahl5UexaJC/3I46g6obHRU6F9+C+JmpRxJxIqJw3jduVB0acTttg8n3F4
DCUE5Lj/vsE8nlhFi4inU2SMtHBspR5UkKNN5sRDuqac3ovOwU0EY/YT8AEQAM1N
O3WVV41Ac1SI/fc/NjtKX2wD7GfZERbkPEWFG5n9cY/yoPbYiALKyWhRv7BWFg8e
j7eEKKuZV6U4zqsnQNTC41LPG3Umq1oWsheeOXS/q7d26nwveL0b2OekUMpjAUkX
Xboq73UxPyfq9AEzrN2P8AJ6+KBfUx1Croa9Sy49z/94IG1DuJbq5X/9WjNJ6n2d
zWF0rEnsEKP4bcympi2+hJFOJm3h+GrevOPm5nf3/6N7pRNS3BdW7UsgPfAYOhYL
2vswNGQu9rDyum11TLzEnTctWDfcnTmvU/cmMup6jx6j8PLotfhT82Ua3DhBiTfC
RWEEL/vvoiFVmcqOxCX5d7KrSlPUcJR3/438/Rw8W0PlrSW1DT+eMD86uynH4kMs
FIXGQMZ5OyrPkSQf9RMj93fp6zuh+eqAwOmR460wA/S6q74i0ZD+hTDjz3X178nH
4CLsl3mGTGu3qlM6wY+gaObzdJFhQsQ014lZ1DuLGRaQY9GVkFftyIUJiDnIJVP2
Zw8i/3j197cBh4NxAyHRU1m1gP2rhIAh2vT1iA1cXSpf7TZNMFZvYR4IgdWac78Y
pFuIzFkuFLRYl2oCRp9Q4eS4J/VkoKSe4jGNiPBl+brubsUye7PsWIrUm0ZT2rXa
cIy/W59zJT6pP2J2yM4TjCsXCyxyrwD3ultOhpNnABEBAAHCw6wEGAEIACAWIQTe
KKskH6SCYPrJuLqnybOFImBCgQUCY/YT8AIbAgJACRCnybOFImBCgcF0IAQZAQgA
HRYhBATKvrXddrrPK9Q9L/Osxg9i6lHLBQJj9hPwAAoJEPOsxg9i6lHLrokP/2Mf
M+z3xy7eT06saDMvJ4X/jkP7c8OwjHcJW6nfwVrxDua/MwZrnNbBvs0uolBbljuG
B873tCiE5Hx19lUr+o1FUxcMFb6zisSZNZv7FleSYfuyZ2jMMysNrCdATqMJojpj
XURgr/6LAG/ZEV6egOL7SWQdAw33JBiPLXsRgUeR9QhZjYV+dNDYYzkVlctK7g1h
mgjKbz7Qu0K4d/nDIUpJlWnNGsmZEObcEQq79GzP6QoUGF86jHurTtJyJYhliyJz
J+5Ph8hzHSK5H0Z36i+H4LwOL/p4kVJWMRZLLCFXU2IeJk6oDQMkH9fC74b6RB1C
BbM4AvzxxvgnLdKHBLQ10yum3Iztty7uAg5OMRK+OoZfZpZrcK0uZK1UWjAoHnrr
h8SqceXwc195ujdnYW+G4Pk3F6OHN7N0Rfp2LN23F/Hbwsz+RKTucCXb8E4cgS14
4l2O8vaM0VnhKprO5GDk7IY6Q+A7eRnveIMlBitX1Snzgd1+oA5bkCHzEUmQMNUs
Cqm6iQEqGjEVyRxbCyC+xv+YbpawPm64QSBcR6t0pR3Py9l4HVzVqtN5AHgSN5yp
NEXEyvhDJT4WEOkJlGwxfGtVzMx01qiWMOjsTzg21qWWvg0+acnU6yMpVxHbG9hb
Lfz9p42Cc91dUULDg2kVHpOA0+VBkmtKUgNV59rLko8P/jm6JgnJIzNS3a3Zt4HM
2zGy6r6OpLvgTB4WMRPT/tcxMhky+m/RqNXA3J9U83potE28bc10ZOpCwL/RBt+S
631Bq2ISFmKCMqVSvlOCGmW4DA52kO82V20E+ijuTMe/bAiarTxeXWkFAptq8Xo1
6TJdcy4uluhTz23iXeRF09S6Cs6v+RmOTXcRR9FbeLtJhp8h+vhHlqN9JS/k+1dn
54dRk6ioQO+rrneWSMIqf/Vz9W8YnKMuSP8pCzagGnBsUcZCgDKol01QETgwbUvk
QTxMsJPT+q/JDBKYuPr073iblJl5S+/so+Ia8NHDJfp5I4q4hKCRMYns4Aur3csU
kGJiSa+YVx5dkh6FSYBri1yWdu2BMIPVBwR/q4lGv80c64U04PWiVp6Z8SR56PS0
PJOZYKtgpZ0GJ7ghrv+74HitYYBXBsYc8uP1mfKtuN7AQk7iO8+sttFAAdx5QhQJ
nn2wsXzbeFDtis3kSH5rjrjRsunTcPzbcf/YfQeBw+rCgAARNKSQTQCu7la161cA
OJ0bdOpSwHRZdMu4sqpjSUnim54i+6WQi10J3EFiULTRWkT1QLsXL+y/QO7jKWJa
MaBowvhD9Z4dqreMzFLCpBioJjX5acJhNWeReop3OFjiO2DI/T6sK57Lacqi5PBK
cFA7AhXOOSdymipReu4BHt/y
=bwyT
-----END PGP PUBLIC KEY BLOCK-----


@@ -0,0 +1,3 @@
[engine]
cgroup_manager="cgroupfs"
events_logger="file"


@@ -1,9 +1,10 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Load dangerzone module and resources from the source code tree
import os, sys
import os
import sys
# Load dangerzone module and resources from the source code tree
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
sys.dangerzone_dev = True

782
dev_scripts/env.py Executable file

@@ -0,0 +1,782 @@
#!/usr/bin/env python3
import argparse
import hashlib
import os
import pathlib
import platform
import shutil
import subprocess
import sys
from datetime import date
DEFAULT_GUI = True
DEFAULT_USER = "user"
DEFAULT_DRY = False
DEFAULT_DEV = False
DEFAULT_SHOW_DOCKERFILE = False
# The Linux distributions that we currently support.
# FIXME: Add a version mapping to avoid mistakes.
# FIXME: Maybe create an enum for these values.
DISTROS = ["debian", "fedora", "ubuntu"]
CONTAINER_RUNTIMES = ["podman", "docker"]
IMAGES_REGISTRY = "ghcr.io/freedomofpress/"
IMAGE_NAME_BUILD_DEV_FMT = (
IMAGES_REGISTRY + "v2/dangerzone/build-dev/{distro}-{version}:{date}-{hash}"
)
IMAGE_NAME_BUILD_ENDUSER_FMT = (
IMAGES_REGISTRY + "v2/dangerzone/end-user/{distro}-{version}:{date}-{hash}"
)
EPILOG = """\
Examples:
Build a Dangerzone environment for development (build-dev) or testing (build) based on
Ubuntu 22.04:
env.py --distro ubuntu --version 22.04 build-dev
env.py --distro ubuntu --version 22.04 build
Inspect the Dockerfile for the environments:
env.py --distro ubuntu --version 22.04 build-dev --show-dockerfile
env.py --distro ubuntu --version 22.04 build --show-dockerfile
Run an interactive shell in the development or end-user environment:
env.py --distro ubuntu --version 22.04 run --dev bash
env.py --distro ubuntu --version 22.04 run bash
Run Dangerzone in the development environment:
env.py --distro ubuntu --version 22.04 run --dev bash
user@dangerzone-dev:~$ cd dangerzone/
user@dangerzone-dev:~$ poetry run ./dev_scripts/dangerzone
Run Dangerzone in the end-user environment:
env.py --distro ubuntu --version 22.04 run dangerzone
"""
# XXX: overcome the fact that ubuntu images (starting on 23.04) ship with the 'ubuntu'
# user by default https://bugs.launchpad.net/cloud-images/+bug/2005129
# Related issue https://github.com/freedomofpress/dangerzone/pull/461
DOCKERFILE_UBUNTU_REM_USER = r"""
RUN touch /var/mail/ubuntu && chown ubuntu /var/mail/ubuntu && userdel -r ubuntu
"""
# On Ubuntu Jammy, use a different conmon version, as acquired from our apt-tools-prod
# repo. For more details, read:
# https://github.com/freedomofpress/dangerzone/issues/685
DOCKERFILE_CONMON_UPDATE = r"""
RUN apt-get update \
&& apt-get install -y ca-certificates \
&& rm -rf /var/lib/apt/lists/*
COPY apt-tools-prod.sources /etc/apt/sources.list.d/
COPY apt-tools-prod.pref /etc/apt/preferences.d/
"""
# FIXME: Do we really need the python3-venv packages?
DOCKERFILE_BUILD_DEV_DEBIAN_DEPS = r"""
ARG DEBIAN_FRONTEND=noninteractive
# NOTE: Podman has several recommended packages that are actually essential for rootless
# containers. However, certain Podman versions (e.g., in Debian Trixie) bring Systemd in
# as a recommended dependency. The latter is a cause for problems, so we prefer to
# install only a subset of the recommended Podman packages. See also:
# https://github.com/freedomofpress/dangerzone/issues/689
RUN apt-get update \
&& apt-get install -y --no-install-recommends podman uidmap slirp4netns \
&& rm -rf /var/lib/apt/lists/*
RUN apt-get update \
&& apt-get install -y passt || echo "Skipping installation of passt package" \
&& rm -rf /var/lib/apt/lists/*
RUN apt-get update \
&& apt-get install -y --no-install-recommends dh-python make build-essential \
git {qt_deps} pipx python3 python3-pip python3-venv dpkg-dev debhelper python3-setuptools \
python3-dev \
&& rm -rf /var/lib/apt/lists/*
RUN pipx install poetry
RUN apt-get update \
&& apt-get install -y --no-install-recommends mupdf thunar \
&& rm -rf /var/lib/apt/lists/*
"""
# FIXME: Install Poetry on Fedora via package manager.
DOCKERFILE_BUILD_DEV_FEDORA_DEPS = r"""
RUN dnf install -y git rpm-build podman python3 python3-devel python3-poetry-core \
pipx make qt6-qtbase-gui gcc gcc-c++\
&& dnf clean all
# FIXME: Drop this fix after it's resolved upstream.
# See https://github.com/freedomofpress/dangerzone/issues/286#issuecomment-1347149783
RUN rpm --restore shadow-utils
RUN dnf install -y mupdf thunar && dnf clean all
"""
# The Dockerfile for building a development environment for Dangerzone. Parts of the
# Dockerfile will be populated during runtime.
DOCKERFILE_BUILD_DEV = r"""FROM {distro}:{version}
{install_deps}
#########################################
# Create a non-root user to run Dangerzone
RUN adduser user
# See https://github.com/freedomofpress/dangerzone/issues/286#issuecomment-1347149783
RUN echo user:2000:2000 > /etc/subuid
RUN echo user:2000:2000 > /etc/subgid
# XXX: We need the empty source folder, so that we can trick Poetry to create a
# link to the project's path. This way, we should be able to do `import
# dangerzone` from within the container.
RUN mkdir -p /home/user/dangerzone/dangerzone
RUN touch /home/user/dangerzone/dangerzone/__init__.py
USER user
WORKDIR /home/user
VOLUME /home/user/dangerzone
# Force Podman to use a specific configuration.
# See https://github.com/freedomofpress/dangerzone/issues/489
RUN mkdir -p /home/user/.config/containers
COPY storage.conf /home/user/.config/containers
# Install Poetry under ~/.local/bin.
# See https://github.com/freedomofpress/dangerzone/issues/351
# FIXME: pipx install poetry does not work for Ubuntu Focal.
ENV PATH="$PATH:/home/user/.local/bin"
RUN pipx install poetry
RUN pipx inject poetry poetry-plugin-export
COPY pyproject.toml poetry.lock /home/user/dangerzone/
RUN cd /home/user/dangerzone && poetry --no-ansi install
"""
DOCKERFILE_BUILD_DEBIAN_DEPS = r"""
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update \
&& apt-get install -y --no-install-recommends mupdf thunar \
&& rm -rf /var/lib/apt/lists/*
"""
DOCKERFILE_BUILD_FEDORA_DEPS = r"""
RUN dnf install -y mupdf thunar && dnf clean all
# FIXME: Drop this fix after it's resolved upstream.
# See https://github.com/freedomofpress/dangerzone/issues/286#issuecomment-1347149783
RUN rpm --restore shadow-utils
"""
# The Dockerfile for building an environment with Dangerzone installed in it. Parts of
# the Dockerfile will be populated during runtime.
#
# FIXME: The fact that we remove the package does not reduce the image size. We need to
# flatten the image layers as well.
DOCKERFILE_BUILD = r"""FROM {distro}:{version}
{install_deps}
COPY {package} /tmp/{package}
RUN {install_cmd} /tmp/{package}
RUN rm /tmp/{package}
#########################################
# Create a non-root user to run Dangerzone
RUN adduser user
# See https://github.com/freedomofpress/dangerzone/issues/286#issuecomment-1347149783
RUN echo user:2000:2000 > /etc/subuid
RUN echo user:2000:2000 > /etc/subgid
USER user
WORKDIR /home/user
########################################
# Force Podman to use a specific configuration.
# See https://github.com/freedomofpress/dangerzone/issues/489
RUN mkdir -p /home/user/.config/containers
COPY storage.conf /home/user/.config/containers
"""
def run(*args):
"""Simple function that runs a command, validates it, and returns the output"""
return subprocess.run(
args, check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE
).stdout
def git_root():
"""Get the root directory of the Git repo."""
# FIXME: Use a Git Python binding for this.
# FIXME: Make this work if called outside the repo.
path = run("git", "rev-parse", "--show-toplevel").decode().strip("\n")
return pathlib.Path(path)
def user_data():
"""Get the user data dir in (which differs on different OSes)"""
home = pathlib.Path.home()
system = platform.system()
if system == "Windows":
return home / "AppData" / "Local"
elif system == "Linux":
return home / ".local" / "share"
elif system == "Darwin":
return home / "Library" / "Application Support"
def dz_dev_root():
"""Get the directory where we will store dangerzone-dev related files"""
return user_data() / "dangerzone-dev"
def distro_root(distro, version):
"""Get the root directory for the specific Linux environment."""
return dz_dev_root() / "envs" / distro / version
def distro_state(distro, version):
"""Get the directory where we will store the state for the distro."""
return distro_root(distro, version) / "state"
def distro_build(distro, version):
"""Get the directory where we will store the build files for the distro."""
return distro_root(distro, version) / "build"
def get_current_date():
return date.today().strftime("%Y-%m-%d")
def get_build_dir_sources(distro, version):
"""Return the files needed to build an image."""
sources = [
git_root() / "pyproject.toml",
git_root() / "poetry.lock",
git_root() / "dev_scripts" / "env.py",
git_root() / "dev_scripts" / "storage.conf",
git_root() / "dev_scripts" / "containers.conf",
]
if distro == "ubuntu" and version in ("22.04", "jammy"):
sources.extend(
[
git_root() / "dev_scripts" / "apt-tools-prod.pref",
git_root() / "dev_scripts" / "apt-tools-prod.sources",
]
)
return sources
def image_name_build_dev(distro, version):
"""Get the container image for the dev variant of a Dangerzone environment."""
hash = hash_files(get_build_dir_sources(distro, version))
return IMAGE_NAME_BUILD_DEV_FMT.format(
distro=distro, version=version, hash=hash, date=get_current_date()
)
def image_name_build_enduser(distro, version):
"""Get the container image for the Dangerzone end-user environment."""
hash = hash_files(get_files_in("install/linux", "debian"))
return IMAGE_NAME_BUILD_ENDUSER_FMT.format(
distro=distro, version=version, hash=hash, date=get_current_date()
)
def dz_version():
"""Get the current Dangerzone version."""
with open(git_root() / "share/version.txt") as f:
return f.read().strip()
def hash_files(file_paths: list[pathlib.Path]) -> str:
"""Returns the hash value of a list of files using the sha256 hashing algorithm."""
hash_obj = hashlib.new("sha256")
for path in file_paths:
with open(path, "rb") as file:
file_data = file.read()
hash_obj.update(file_data)
return hash_obj.hexdigest()
def get_files_in(*folders: list[str]) -> list[pathlib.Path]:
"""Return the list of all files present in the given folders"""
files = []
for folder in folders:
files.extend([p for p in (git_root() / folder).glob("**") if p.is_file()])
return files
class Env:
"""A class that implements actions on Dangerzone environments"""
def __init__(self, distro, version, runtime):
"""Initialize an Env class based on some common parameters."""
self.distro = distro
self.version = version
# NOTE: We change "bullseye" to "bullseye-backports", since it contains `pipx`,
# which is not available through the original repos.
if self.distro == "debian" and self.version in ("bullseye", "11"):
self.version = "bullseye-backports"
# Try to autodetect the runtime, if the user has not provided it.
podman_cmd = ["podman"]
docker_cmd = ["docker"]
if not runtime:
if shutil.which("podman"):
self.runtime = "podman"
self.runtime_cmd = podman_cmd
elif shutil.which("docker"):
self.runtime = "docker"
self.runtime_cmd = docker_cmd
else:
raise SystemError(
"You need either Podman or Docker installed to continue"
)
elif runtime == "podman":
self.runtime = "podman"
self.runtime_cmd = podman_cmd
elif runtime == "docker":
self.runtime = "docker"
self.runtime_cmd = docker_cmd
else:
raise RuntimeError(f"Unexpected runtime: {runtime}")
@classmethod
def from_args(cls, args):
"""Create an Env class from CLI arguments"""
return cls(distro=args.distro, version=args.version, runtime=args.runtime)
def find_dz_package(self, path, pattern):
"""Get the full path of the Dangerzone package in the specified dir.
There are times when we don't know the exact name of the Dangerzone package that we've built, e.g., because its patch level may have changed.
Auto-detect the Dangerzone package based on a pattern that a user has provided,
and fail if there are none, or multiple matches. If there's a single match, then
return the full path for the package.
"""
matches = list(path.glob(pattern))
if len(matches) == 0:
raise RuntimeError(
f"Could not find Dangerzone package '{pattern}' in '{path}'"
)
elif len(matches) > 1:
raise RuntimeError(
f"Found more than one matches for Dangerzone package '{pattern}' in"
f" '{path}'"
)
return matches[0]
def runtime_run(self, *args):
"""Run a command for a specific container runtime.
A user's environment may have more than one container runtime [1], e.g., Podman
or Docker. These two runtimes have the same interface, so we can use them
interchangeably.
This method expects a command to run, minus the "docker" / "podman" part. Since
the command can be any valid command, such as "run" or "build", we can't assume
anything about the standard streams, so we don't affect them at all.
[1]: Technically, a container runtime is a program that implements the Container
Runtime Interface. We overload this term here, in lieu of a better one.
"""
subprocess.run(self.runtime_cmd + list(args), check=True)
def run(
self, cmd, gui=DEFAULT_GUI, user=DEFAULT_USER, dry=DEFAULT_DRY, dev=DEFAULT_DEV
):
"""Run a command in a Dangerzone environment."""
# FIXME: Allow wiping the state of the distro before running the environment, to
# ensure reproducibility.
run_cmd = [
"run",
"--rm",
"-it",
"-v",
"/etc/localtime:/etc/localtime:ro",
# FIXME: Find a more secure invocation.
"--security-opt",
"seccomp=unconfined",
"--privileged",
]
# We need to retain our UID, because we are mounting the Dangerzone source to
# the container.
if self.runtime == "podman":
uidmaps = [
"--uidmap",
"1000:0:1",
"--uidmap",
"0:1:1000",
"--uidmap",
"1001:1001:64536",
]
gidmaps = [
"--gidmap",
"1000:0:1",
"--gidmap",
"0:1:1000",
"--gidmap",
"1001:1001:64536",
]
run_cmd += uidmaps + gidmaps
# Compute container runtime arguments for GUI purposes.
if gui:
# Detect X11 display server connection settings.
env_display = os.environ.get("DISPLAY")
env_xauthority = os.environ.get("XAUTHORITY")
run_cmd += [
"-e",
f"DISPLAY={env_display}",
"-v",
"/tmp/.X11-unix:/tmp/.X11-unix:ro",
]
if env_xauthority:
run_cmd += [
"-e",
f"XAUTHORITY={env_xauthority}",
"-v",
f"{env_xauthority}:{env_xauthority}:ro",
]
# FIXME: Detect Wayland connection settings. This requires some extra
# settings, as we can see in this link:
#
# https://github.com/mviereck/x11docker/wiki/How-to-provide-Wayland-socket-to-docker-container
# Mount the source and the state of the distro into the container
dz_src = git_root()
dist_state = distro_state(self.distro, self.version)
run_cmd += [
"-v",
f"{dz_src}:/home/user/dangerzone",
"-v",
f"{dist_state}/containers:/home/user/.local/share/containers",
"-v",
f"{dist_state}/.bash_history:/home/user/.bash_history",
]
run_cmd += ["-u", user]
# Select the proper container image based on whether the user wants to run the
# command in a dev or end-user environment.
if dev:
run_cmd += [
"--hostname",
"dangerzone-dev",
image_name_build_dev(self.distro, self.version),
]
else:
run_cmd += [
"--hostname",
"dangerzone",
image_name_build_enduser(self.distro, self.version),
]
run_cmd += cmd
# If the user has asked to perform a dry-run, then print the command that the
# script would use internally.
if dry:
print(" ".join(self.runtime_cmd + list(run_cmd)))
return
dist_state.mkdir(parents=True, exist_ok=True)
(dist_state / "containers").mkdir(exist_ok=True)
(dist_state / ".bash_history").touch(exist_ok=True)
self.runtime_run(*run_cmd)
def pull_image_from_registry(self, image):
try:
subprocess.run(self.runtime_cmd + ["pull", image], check=True)
return True
except subprocess.CalledProcessError:
# Do not log an error here, we are just checking if the image exists
# on the registry.
return False
def push_image_to_registry(self, image):
try:
subprocess.run(self.runtime_cmd + ["push", image], check=True)
return True
except subprocess.CalledProcessError as e:
print("An error occured when pulling the image: ", e)
return False
def build_dev(self, show_dockerfile=DEFAULT_SHOW_DOCKERFILE, sync=False):
"""Build a Linux environment and install tools for Dangerzone development."""
image = image_name_build_dev(self.distro, self.version)
if sync and self.pull_image_from_registry(image):
print("Image has been pulled from the registry, no need to build it.")
return
elif sync:
print("Image label not in registry, building it")
if self.distro == "fedora":
install_deps = DOCKERFILE_BUILD_DEV_FEDORA_DEPS
else:
# Use Qt6 in all of our Linux dev environments, and add a missing
# libxcb-cursor0 dependency
#
# See https://github.com/freedomofpress/dangerzone/issues/482
qt_deps = "libqt6gui6 libxcb-cursor0"
install_deps = DOCKERFILE_BUILD_DEV_DEBIAN_DEPS
if self.distro == "ubuntu" and self.version in ("22.04", "jammy"):
# Ubuntu Jammy misses a dependency to `libxkbcommon-x11-0`, which we can
# install indirectly via `qt6-qpa-plugins`.
qt_deps += " qt6-qpa-plugins"
# Ubuntu Jammy requires a more up-to-date conmon package
# (see https://github.com/freedomofpress/dangerzone/issues/685)
install_deps = (
DOCKERFILE_CONMON_UPDATE + DOCKERFILE_BUILD_DEV_DEBIAN_DEPS
)
elif self.distro == "ubuntu" and self.version in (
"24.04",
"noble",
"24.10",
"ocular",
"25.04",
"plucky",
):
install_deps = (
DOCKERFILE_UBUNTU_REM_USER + DOCKERFILE_BUILD_DEV_DEBIAN_DEPS
)
elif self.distro == "debian" and self.version in ("bullseye-backports",):
# Debian Bullseye misses a dependency to libgl1.
qt_deps += " libgl1"
install_deps = install_deps.format(qt_deps=qt_deps)
dockerfile = DOCKERFILE_BUILD_DEV.format(
distro=self.distro, version=self.version, install_deps=install_deps
)
if show_dockerfile:
print(dockerfile)
return
build_dir = distro_build(self.distro, self.version)
os.makedirs(build_dir, exist_ok=True)
# Populate the build context.
for source in get_build_dir_sources(self.distro, self.version):
shutil.copy(source, build_dir)
with open(build_dir / "Dockerfile", mode="w") as f:
f.write(dockerfile)
self.runtime_run("build", "-t", image, build_dir)
if sync:
if not self.push_image_to_registry(image):
print("An error occured while trying to push to the container registry")
def build(
self,
show_dockerfile=DEFAULT_SHOW_DOCKERFILE,
):
"""Build a Linux environment and install Dangerzone in it."""
build_dir = distro_build(self.distro, self.version)
os.makedirs(build_dir, exist_ok=True)
version = dz_version()
if self.distro == "fedora":
install_deps = DOCKERFILE_BUILD_FEDORA_DEPS
package_pattern = f"dangerzone-{version}-*.fc{self.version}.x86_64.rpm"
package_src = self.find_dz_package(git_root() / "dist", package_pattern)
package = package_src.name
package_dst = build_dir / package
install_cmd = "dnf install -y"
else:
install_deps = DOCKERFILE_BUILD_DEBIAN_DEPS
if self.distro == "ubuntu" and self.version in ("22.04", "jammy"):
# Ubuntu Jammy requires a more up-to-date conmon
# package (see https://github.com/freedomofpress/dangerzone/issues/685)
install_deps = DOCKERFILE_CONMON_UPDATE + DOCKERFILE_BUILD_DEBIAN_DEPS
elif self.distro == "ubuntu" and self.version in (
"24.04",
"noble",
"24.10",
"ocular",
"25.04",
"plucky",
):
install_deps = DOCKERFILE_UBUNTU_REM_USER + DOCKERFILE_BUILD_DEBIAN_DEPS
package_pattern = f"dangerzone_{version}-*_*.deb"
package_src = self.find_dz_package(git_root() / "deb_dist", package_pattern)
package = package_src.name
package_dst = build_dir / package
install_cmd = "apt-get update && apt-get install -y"
dockerfile = DOCKERFILE_BUILD.format(
distro=self.distro,
version=self.version,
install_cmd=install_cmd,
package=package,
install_deps=install_deps,
)
if show_dockerfile:
print(dockerfile)
return
# Populate the build context.
shutil.copy(package_src, package_dst)
shutil.copy(git_root() / "dev_scripts" / "storage.conf", build_dir)
shutil.copy(git_root() / "dev_scripts" / "containers.conf", build_dir)
if self.distro == "ubuntu" and self.version in ("22.04", "jammy"):
shutil.copy(git_root() / "dev_scripts" / "apt-tools-prod.pref", build_dir)
shutil.copy(
git_root() / "dev_scripts" / "apt-tools-prod.sources", build_dir
)
with open(build_dir / "Dockerfile", mode="w") as f:
f.write(dockerfile)
image = image_name_build_enduser(self.distro, self.version)
self.runtime_run("build", "-t", image, build_dir)
def env_run(args):
"""Invoke the 'run' command based on the CLI args."""
if not args.command:
print("Please provide a command for the environment")
sys.exit(1)
env = Env.from_args(args)
return env.run(
args.command, gui=args.gui, user=args.user, dry=args.dry, dev=args.dev
)
def env_build_dev(args):
"""Invoke the 'build-dev' command based on the CLI args."""
env = Env.from_args(args)
return env.build_dev(show_dockerfile=args.show_dockerfile, sync=args.sync)
def env_build(args):
"""Invoke the 'build' command based on the CLI args."""
env = Env.from_args(args)
return env.build(
show_dockerfile=args.show_dockerfile,
)
def parse_args():
parser = argparse.ArgumentParser(
prog=sys.argv[0],
description="Dev script for handling Dangerzone environments",
epilog=EPILOG,
formatter_class=argparse.RawTextHelpFormatter,
)
parser.add_argument(
"--distro",
choices=DISTROS,
required=True,
help="The name of the Linux distro",
)
parser.add_argument(
"--version",
required=True,
help="The version of the Linux distro",
)
parser.add_argument(
"--runtime",
choices=CONTAINER_RUNTIMES,
help="The name of the container runtime",
)
subparsers = parser.add_subparsers(help="Available subcommands")
subparsers.required = True
# Run a command in an environment.
parser_run = subparsers.add_parser(
"run", help="Run a command in a Dangerzone environment"
)
parser_run.set_defaults(func=env_run)
parser_run.add_argument(
"--no-gui",
default=DEFAULT_GUI,
action="store_false",
dest="gui",
help="Run command with GUI support",
)
parser_run.add_argument(
"--user",
"-u",
default=DEFAULT_USER,
help="Run command as user USER",
)
parser_run.add_argument(
"--dry",
default=DEFAULT_DRY,
action="store_true",
help="Do not run the command, just print it with the container invocation",
)
parser_run.add_argument(
"--dev",
default=DEFAULT_DEV,
action="store_true",
help="Run the command into the dev variant of the Dangerzone environment",
)
parser_run.add_argument(
"command",
nargs=argparse.REMAINDER,
help="Run command COMMAND in the Dangerzone environment",
)
# Build a development variant of a Dangerzone environment.
parser_build_dev = subparsers.add_parser(
"build-dev",
help="Build a Linux environment and install tools for Dangerzone development",
)
parser_build_dev.set_defaults(func=env_build_dev)
parser_build_dev.add_argument(
"--show-dockerfile",
default=DEFAULT_SHOW_DOCKERFILE,
action="store_true",
help="Do not build, only show the Dockerfile",
)
parser_build_dev.add_argument(
"--sync",
default=False,
action="store_true",
help="Attempt to pull the image, build it if not found and push it to the container registry",
)
# Build an end-user variant of a Dangerzone environment.
parser_build = subparsers.add_parser(
"build",
help="Build a Linux environment and install Dangerzone",
)
parser_build.set_defaults(func=env_build)
parser_build.add_argument(
"--show-dockerfile",
default=DEFAULT_SHOW_DOCKERFILE,
action="store_true",
help="Do not build, only show the Dockerfile",
)
return parser.parse_args()
def main():
args = parse_args()
return args.func(args)
if __name__ == "__main__":
sys.exit(main())


@ -0,0 +1,254 @@
#!/usr/bin/env python3
import argparse
import asyncio
import re
import sys
from datetime import datetime
from typing import Dict, List, Optional, Tuple
import httpx
REPOSITORY = "https://github.com/freedomofpress/dangerzone/"
TEMPLATE = "- {title} ([#{number}]({url}))"
def parse_version(version: str) -> Tuple[int, int]:
"""Extract major.minor from version string, ignoring patch"""
match = re.match(r"v?(\d+)\.(\d+)", version)
if not match:
raise ValueError(f"Invalid version format: {version}")
return (int(match.group(1)), int(match.group(2)))
async def get_last_minor_release(
client: httpx.AsyncClient, owner: str, repo: str
) -> Optional[str]:
"""Get the latest minor release date (ignoring patches)"""
response = await client.get(f"https://api.github.com/repos/{owner}/{repo}/releases")
response.raise_for_status()
releases = response.json()
if not releases:
return None
# Get the latest minor version by comparing major.minor numbers
current_version = parse_version(releases[0]["tag_name"])
latest_date = None
for release in releases:
try:
version = parse_version(release["tag_name"])
if version < current_version:
latest_date = release["published_at"]
break
except ValueError:
continue
return latest_date
async def get_issue_details(
client: httpx.AsyncClient, owner: str, repo: str, issue_number: int
) -> Optional[dict]:
"""Get issue title and number if it exists"""
response = await client.get(
f"https://api.github.com/repos/{owner}/{repo}/issues/{issue_number}"
)
if response.is_success:
data = response.json()
return {
"title": data["title"],
"number": data["number"],
"url": data["html_url"],
}
return None
def extract_issue_number(pr_body: Optional[str]) -> Optional[int]:
"""Extract issue number from PR body looking for common formats like 'Fixes #123' or 'Closes #123'"""
if not pr_body:
return None
patterns = [
r"(?:closes|fixes|resolves)\s*#(\d+)",
r"(?:close|fix|resolve)\s*#(\d+)",
]
for pattern in patterns:
match = re.search(pattern, pr_body.lower())
if match:
return int(match.group(1))
return None
async def verify_commit_in_master(
client: httpx.AsyncClient, owner: str, repo: str, commit_id: str
) -> bool:
"""Verify if a commit exists in master"""
response = await client.get(
f"https://api.github.com/repos/{owner}/{repo}/commits/{commit_id}"
)
return response.is_success and response.json().get("commit") is not None
async def process_issue_events(
client: httpx.AsyncClient, owner: str, repo: str, issue: Dict
) -> Optional[Dict]:
"""Process events for a single issue"""
events_response = await client.get(f"{issue['url']}/events")
if not events_response.is_success:
return None
for event in events_response.json():
if event["event"] == "closed" and event.get("commit_id"):
if await verify_commit_in_master(client, owner, repo, event["commit_id"]):
return {
"title": issue["title"],
"number": issue["number"],
"url": issue["html_url"],
}
return None
async def get_closed_issues(
client: httpx.AsyncClient, owner: str, repo: str, since: str
) -> List[Dict]:
"""Get issues closed by commits to master since the given date"""
response = await client.get(
f"https://api.github.com/repos/{owner}/{repo}/issues",
params={
"state": "closed",
"sort": "updated",
"direction": "desc",
"since": since,
"per_page": 100,
},
)
response.raise_for_status()
tasks = []
since_date = datetime.strptime(since, "%Y-%m-%dT%H:%M:%SZ")
for issue in response.json():
if "pull_request" in issue:
continue
closed_at = datetime.strptime(issue["closed_at"], "%Y-%m-%dT%H:%M:%SZ")
if closed_at <= since_date:
continue
tasks.append(process_issue_events(client, owner, repo, issue))
results = await asyncio.gather(*tasks)
return [r for r in results if r is not None]
async def process_pull_request(
client: httpx.AsyncClient,
owner: str,
repo: str,
pr: Dict,
closed_issues: List[Dict],
) -> Optional[str]:
"""Process a single pull request"""
issue_number = extract_issue_number(pr.get("body"))
if issue_number:
issue = await get_issue_details(client, owner, repo, issue_number)
if issue:
if not any(i["number"] == issue["number"] for i in closed_issues):
return TEMPLATE.format(**issue)
return None
return TEMPLATE.format(title=pr["title"], number=pr["number"], url=pr["html_url"])
async def get_changes_since_last_release(
owner: str, repo: str, token: Optional[str] = None
) -> List[str]:
headers = {
"Accept": "application/vnd.github.v3+json",
}
if token:
headers["Authorization"] = f"token {token}"
else:
print(
"Warning: No token provided. API rate limiting may occur.", file=sys.stderr
)
async with httpx.AsyncClient(headers=headers, timeout=30.0) as client:
# Get the date of last minor release
since = await get_last_minor_release(client, owner, repo)
if not since:
return []
changes = []
# Get issues closed by commits to master
closed_issues = await get_closed_issues(client, owner, repo, since)
changes.extend([TEMPLATE.format(**issue) for issue in closed_issues])
# Get merged PRs
response = await client.get(
f"https://api.github.com/repos/{owner}/{repo}/pulls",
params={
"state": "closed",
"sort": "updated",
"direction": "desc",
"per_page": 100,
},
)
response.raise_for_status()
# Process PRs in parallel
pr_tasks = []
for pr in response.json():
if not pr["merged_at"]:
continue
if since and pr["merged_at"] <= since:
break
pr_tasks.append(
process_pull_request(client, owner, repo, pr, closed_issues)
)
pr_results = await asyncio.gather(*pr_tasks)
changes.extend([r for r in pr_results if r is not None])
return changes
async def main_async():
parser = argparse.ArgumentParser(description="Generate release notes from GitHub")
parser.add_argument("--token", "-t", help="the file path to the GitHub API token")
args = parser.parse_args()
token = None
if args.token:
with open(args.token) as f:
token = f.read().strip()
try:
url_path = REPOSITORY.rstrip("/").split("github.com/")[1]
owner, repo = url_path.split("/")[-2:]
except (ValueError, IndexError):
print("Error: Invalid GitHub URL", file=sys.stderr)
sys.exit(1)
try:
notes = await get_changes_since_last_release(owner, repo, token)
print("\n".join(notes))
except httpx.HTTPError as e:
print(f"Error: {e}", file=sys.stderr)
sys.exit(1)
except Exception as e:
print(f"Error: {e}", file=sys.stderr)
sys.exit(1)
def main():
asyncio.run(main_async())
if __name__ == "__main__":
main()


@ -0,0 +1,67 @@
#!/usr/bin/env python3
import pathlib
import subprocess
RELEASE_FILE = "RELEASE.md"
QA_FILE = "QA.md"
def git_root():
"""Get the root directory of the Git repo."""
# FIXME: Use a Git Python binding for this.
# FIXME: Make this work if called outside the repo.
path = (
subprocess.run(
["git", "rev-parse", "--show-toplevel"],
check=True,
stdout=subprocess.PIPE,
)
.stdout.decode()
.strip("\n")
)
return pathlib.Path(path)
def extract_checkboxes(filename):
headers = []
result = []
with open(filename, "r") as f:
lines = f.readlines()
current_level = 0
for line in lines:
line = line.rstrip()
# If it's a header, store it
if line.startswith("#"):
# Count number of # to determine header level
level = len(line) - len(line.lstrip("#"))
if level < current_level or not current_level:
headers.extend(["", line, ""])
current_level = level
elif level > current_level:
continue
else:
headers = ["", line, ""]
# If it's a checkbox
elif "- [ ]" in line or "- [x]" in line or "- [X]" in line:
# Print the last header if we haven't already
if headers:
result.extend(headers)
headers = []
current_level = 0
# If this is the "Do the QA tasks" line, recursively get QA tasks
if "Do the QA tasks" in line:
result.append(line)
qa_tasks = extract_checkboxes(git_root() / QA_FILE)
result.append(qa_tasks)
else:
result.append(line)
return "\n".join(result)
if __name__ == "__main__":
print(extract_checkboxes(git_root() / RELEASE_FILE))

1127
dev_scripts/qa.py Executable file

File diff suppressed because it is too large

680
dev_scripts/repro-build.py Executable file

@ -0,0 +1,680 @@
#!/usr/bin/env python3
import argparse
import datetime
import hashlib
import json
import logging
import os
import pprint
import shlex
import shutil
import subprocess
import sys
import tarfile
from pathlib import Path
logger = logging.getLogger(__name__)
MEDIA_TYPE_INDEX_V1_JSON = "application/vnd.oci.image.index.v1+json"
MEDIA_TYPE_MANIFEST_V1_JSON = "application/vnd.oci.image.manifest.v1+json"
ENV_RUNTIME = "REPRO_RUNTIME"
ENV_DATETIME = "REPRO_DATETIME"
ENV_SDE = "REPRO_SOURCE_DATE_EPOCH"
ENV_CACHE = "REPRO_CACHE"
ENV_BUILDKIT = "REPRO_BUILDKIT_IMAGE"
ENV_ROOTLESS = "REPRO_ROOTLESS"
DEFAULT_BUILDKIT_IMAGE = "moby/buildkit:v0.19.0@sha256:14aa1b4dd92ea0a4cd03a54d0c6079046ea98cd0c0ae6176bdd7036ba370cbbe"
DEFAULT_BUILDKIT_IMAGE_ROOTLESS = "moby/buildkit:v0.19.0-rootless@sha256:e901cffdad753892a7c3afb8b9972549fca02c73888cf340c91ed801fdd96d71"
MSG_BUILD_CTX = """Build environment:
- Container runtime: {runtime}
- BuildKit image: {buildkit_image}
- Rootless support: {rootless}
- Caching enabled: {use_cache}
- Build context: {context}
- Dockerfile: {dockerfile}
- Output: {output}
Build parameters:
- SOURCE_DATE_EPOCH: {sde}
- Build args: {build_args}
- Tag: {tag}
- Platform: {platform}
Podman-only arguments:
- BuildKit arguments: {buildkit_args}
Docker-only arguments:
- Docker Buildx arguments: {buildx_args}
"""
def pretty_error(obj: dict, msg: str):
raise Exception(f"{msg}\n{pprint.pprint(obj)}")
def get_key(obj: dict, key: str) -> object:
if key not in obj:
pretty_error(f"Could not find key '{key}' in the dictionary:", obj)
return obj[key]
def run(cmd, dry=False, check=True):
action = "Would have run" if dry else "Running"
logger.debug(f"{action}: {shlex.join(cmd)}")
if not dry:
subprocess.run(cmd, check=check)
def snip_contents(contents: str, num: int) -> str:
contents = contents.replace("\n", "")
if len(contents) > num:
return (
contents[:num]
+ f" [... {len(contents) - num} characters omitted."
+ " Pass --show-contents to print them in their entirety]"
)
return contents
def detect_container_runtime() -> str | None:
"""Auto-detect the installed container runtime in the system."""
if shutil.which("docker"):
return "docker"
elif shutil.which("podman"):
return "podman"
else:
return None
def parse_runtime(args) -> str:
if args.runtime is not None:
return args.runtime
runtime = os.environ.get(ENV_RUNTIME)
if runtime is None:
raise RuntimeError("No container runtime detected in your system")
if runtime not in ("docker", "podman"):
raise RuntimeError(
"Only 'docker' or 'podman' container runtimes"
" are currently supported by this script"
)
return runtime
def parse_use_cache(args) -> bool:
if args.no_cache:
return False
return bool(int(os.environ.get(ENV_CACHE, "1")))
def parse_rootless(args, runtime: str) -> bool:
rootless = args.rootless or bool(int(os.environ.get(ENV_ROOTLESS, "0")))
if runtime != "podman" and rootless:
raise RuntimeError("Rootless mode is only supported with Podman runtime")
return rootless
def parse_sde(args) -> str:
sde = os.environ.get(ENV_SDE, args.source_date_epoch)
dt = os.environ.get(ENV_DATETIME, args.datetime)
if (sde is not None and dt is not None) or (sde is None and dt is None):
raise RuntimeError("You need to pass either a source date epoch or a datetime")
if sde is not None:
return str(sde)
if dt is not None:
d = datetime.datetime.fromisoformat(dt)
# If the datetime is naive, assume its timezone is UTC. The check is
# taken from:
# https://docs.python.org/3/library/datetime.html#determining-if-an-object-is-aware-or-naive
if d.tzinfo is None or d.tzinfo.utcoffset(d) is None:
d = d.replace(tzinfo=datetime.timezone.utc)
return int(d.timestamp())
def parse_buildkit_image(args, rootless: bool, runtime: str) -> str:
default = DEFAULT_BUILDKIT_IMAGE_ROOTLESS if rootless else DEFAULT_BUILDKIT_IMAGE
img = args.buildkit_image or os.environ.get(ENV_BUILDKIT, default)
if runtime == "podman" and not img.startswith("docker.io/"):
img = "docker.io/" + img
return img
def parse_build_args(args) -> list:
return args.build_arg or []
def parse_buildkit_args(args, runtime: str) -> list:
if not args.buildkit_args:
return []
if runtime != "podman":
raise RuntimeError("Cannot specify BuildKit arguments using the Podman runtime")
return shlex.split(args.buildkit_args)
def parse_buildx_args(args, runtime: str) -> list:
if not args.buildx_args:
return []
if runtime != "docker":
raise RuntimeError(
"Cannot specify Docker Buildx arguments using the Podman runtime"
)
return shlex.split(args.buildx_args)
def parse_image_digest(args) -> str | None:
if not args.expected_image_digest:
return None
parsed = args.expected_image_digest.split(":", 1)
if len(parsed) == 1:
return parsed[0]
else:
return parsed[1]
def parse_path(path: str | None) -> str | None:
return path and str(Path(path).absolute())
##########################
# OCI parsing logic
#
# Compatible with:
# * https://github.com/opencontainers/image-spec/blob/main/image-layout.md
def oci_print_info(parsed: dict, full: bool) -> None:
print(f"The OCI tarball contains an index and {len(parsed) - 1} manifest(s):")
print()
print(f"Image digest: {parsed[1]['digest']}")
for i, info in enumerate(parsed):
print()
if i == 0:
print(f"Index ({info['path']}):")
else:
print(f"Manifest {i} ({info['path']}):")
print(f" Digest: {info['digest']}")
print(f" Media type: {info['media_type']}")
print(f" Platform: {info['platform'] or '-'}")
contents = info["contents"] if full else snip_contents(info["contents"], 600)
print(f" Contents: {contents}")
print()
def oci_normalize_path(path):
if path.startswith("sha256:"):
hash_algo, checksum = path.split(":")
path = f"blobs/{hash_algo}/{checksum}"
return path
def oci_get_file_from_tarball(tar: tarfile.TarFile, path: str) -> str:
"""Get file from an OCI tarball.
If the filename cannot be found, search again by prefixing it with "./", since we
have encountered path names in OCI tarballs prefixed with "./".
"""
try:
return tar.extractfile(path).read().decode()
except KeyError:
if not path.startswith("./") and not path.startswith("/"):
path = "./" + path
try:
return tar.extractfile(path).read().decode()
except KeyError:
# Do not raise here, so that we can raise the original exception below.
pass
raise
def oci_parse_manifest(tar: tarfile.TarFile, path: str, platform: dict | None) -> dict:
"""Parse manifest information in JSON format.
Interestingly, the platform info for a manifest is not included in the
manifest itself, but in the descriptor that points to it. So, we have to
carry it from the previous manifest and include in the info here.
"""
path = oci_normalize_path(path)
contents = oci_get_file_from_tarball(tar, path)
digest = "sha256:" + hashlib.sha256(contents.encode()).hexdigest()
contents_dict = json.loads(contents)
media_type = get_key(contents_dict, "mediaType")
manifests = contents_dict.get("manifests", [])
if platform:
os = get_key(platform, "os")
arch = get_key(platform, "architecture")
platform = f"{os}/{arch}"
return {
"path": path,
"contents": contents,
"digest": digest,
"media_type": media_type,
"platform": platform,
"manifests": manifests,
}
def oci_parse_manifests_dfs(
tar: tarfile.TarFile, path: str, parsed: list, platform: dict | None = None
) -> None:
info = oci_parse_manifest(tar, path, platform)
parsed.append(info)
for m in info["manifests"]:
oci_parse_manifests_dfs(tar, m["digest"], parsed, m.get("platform"))
def oci_parse_tarball(path: Path) -> list:
parsed = []
with tarfile.TarFile.open(path) as tar:
oci_parse_manifests_dfs(tar, "index.json", parsed)
return parsed
##########################
# Image building logic
def podman_build(
context: str,
dockerfile: str | None,
tag: str | None,
buildkit_image: str,
sde: int,
rootless: bool,
use_cache: bool,
output: Path,
build_args: list,
platform: str,
buildkit_args: list,
dry: bool,
):
rootless_args = []
rootful_args = []
if rootless:
rootless_args = [
"--userns",
"keep-id:uid=1000,gid=1000",
"--security-opt",
"seccomp=unconfined",
"--security-opt",
"apparmor=unconfined",
"-e",
"BUILDKITD_FLAGS=--oci-worker-no-process-sandbox",
]
else:
rootful_args = ["--privileged"]
dockerfile_args_podman = []
dockerfile_args_buildkit = []
if dockerfile:
dockerfile_args_podman = ["-v", f"{dockerfile}:/tmp/Dockerfile"]
dockerfile_args_buildkit = ["--local", "dockerfile=/tmp"]
else:
dockerfile_args_buildkit = ["--local", "dockerfile=/tmp/work"]
tag_args = f",name={tag}" if tag else ""
cache_args = []
if use_cache:
cache_args = [
"--export-cache",
"type=local,mode=max,dest=/tmp/cache",
"--import-cache",
"type=local,src=/tmp/cache",
]
_build_args = []
for arg in build_args:
_build_args.append("--opt")
_build_args.append(f"build-arg:{arg}")
platform_args = ["--opt", f"platform={platform}"] if platform else []
cmd = [
"podman",
"run",
"-it",
"--rm",
"-v",
"buildkit_cache:/tmp/cache",
"-v",
f"{output.parent}:/tmp/image",
"-v",
f"{context}:/tmp/work",
"--entrypoint",
"buildctl-daemonless.sh",
*rootless_args,
*rootful_args,
*dockerfile_args_podman,
buildkit_image,
"build",
"--frontend",
"dockerfile.v0",
"--local",
"context=/tmp/work",
"--opt",
f"build-arg:SOURCE_DATE_EPOCH={sde}",
*_build_args,
"--output",
f"type=docker,dest=/tmp/image/{output.name},rewrite-timestamp=true{tag_args}",
*cache_args,
*dockerfile_args_buildkit,
*platform_args,
*buildkit_args,
]
run(cmd, dry)
def docker_build(
context: str,
dockerfile: str | None,
tag: str | None,
buildkit_image: str,
sde: int,
use_cache: bool,
output: Path,
build_args: list,
platform: str,
buildx_args: list,
dry: bool,
):
builder_id = hashlib.sha256(buildkit_image.encode()).hexdigest()
builder_name = f"repro-build-{builder_id}"
tag_args = ["-t", tag] if tag else []
cache_args = [] if use_cache else ["--no-cache", "--pull"]
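# Create a dedicated Buildx builder backed by the pinned BuildKit image; the
# command below runs with check=False, since the builder may already exist.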
cmd = [
"docker",
"buildx",
"create",
"--name",
builder_name,
"--driver-opt",
f"image={buildkit_image}",
]
run(cmd, dry, check=False)
dockerfile_args = ["-f", dockerfile] if dockerfile else []
_build_args = []
for arg in build_args:
_build_args.append("--build-arg")
_build_args.append(arg)
platform_args = ["--platform", platform] if platform else []
cmd = [
"docker",
"buildx",
"--builder",
builder_name,
"build",
"--build-arg",
f"SOURCE_DATE_EPOCH={sde}",
*_build_args,
"--provenance",
"false",
"--output",
f"type=docker,dest={output},rewrite-timestamp=true",
*cache_args,
*tag_args,
*dockerfile_args,
*platform_args,
*buildx_args,
context,
]
run(cmd, dry)
##########################
# Command logic
def build(args):
runtime = parse_runtime(args)
use_cache = parse_use_cache(args)
sde = parse_sde(args)
rootless = parse_rootless(args, runtime)
buildkit_image = parse_buildkit_image(args, rootless, runtime)
build_args = parse_build_args(args)
platform = args.platform
buildkit_args = parse_buildkit_args(args, runtime)
buildx_args = parse_buildx_args(args, runtime)
tag = args.tag
dockerfile = parse_path(args.file)
output = Path(parse_path(args.output))
dry = args.dry
context = parse_path(args.context)
logger.info(
MSG_BUILD_CTX.format(
runtime=runtime,
buildkit_image=buildkit_image,
sde=sde,
rootless=rootless,
use_cache=use_cache,
context=context,
dockerfile=dockerfile or "(not provided)",
tag=tag or "(not provided)",
output=output,
build_args=",".join(build_args) or "(not provided)",
platform=platform or "(default)",
buildkit_args=" ".join(buildkit_args) or "(not provided)",
buildx_args=" ".join(buildx_args) or "(not provided)",
)
)
try:
if runtime == "docker":
docker_build(
context,
dockerfile,
tag,
buildkit_image,
sde,
use_cache,
output,
build_args,
platform,
buildx_args,
dry,
)
else:
podman_build(
context,
dockerfile,
tag,
buildkit_image,
sde,
rootless,
use_cache,
output,
build_args,
platform,
buildkit_args,
dry,
)
except subprocess.CalledProcessError as e:
logger.error(f"Failed with {e.returncode}")
sys.exit(e.returncode)
def analyze(args) -> None:
expected_image_digest = parse_image_digest(args)
tarball_path = Path(args.tarball)
parsed = oci_parse_tarball(tarball_path)
oci_print_info(parsed, args.show_contents)
if expected_image_digest:
cur_digest = parsed[1]["digest"].split(":")[1]
if cur_digest != expected_image_digest:
raise Exception(
f"The image does not have the expected digest: {cur_digest} != {expected_image_digest}"
)
print(f"✅ Image digest matches {expected_image_digest}")
def define_build_cmd_args(parser: argparse.ArgumentParser) -> None:
parser.add_argument(
"--runtime",
choices=["docker", "podman"],
default=detect_container_runtime(),
help="The container runtime for building the image (default: %(default)s)",
)
parser.add_argument(
"--datetime",
metavar="YYYY-MM-DD",
default=None,
help=(
"Provide a date and (optionally) a time in ISO format, which will"
" be used as the timestamp of the image layers"
),
)
parser.add_argument(
"--buildkit-image",
metavar="NAME:TAG@DIGEST",
default=None,
help=(
"The BuildKit container image which will be used for building the"
" reproducible container image. Make sure to pass the '-rootless'"
" variant if you are using rootless Podman"
" (default: docker.io/moby/buildkit:v0.19.0)"
),
)
parser.add_argument(
"--source-date-epoch",
"--sde",
metavar="SECONDS",
type=int,
default=None,
help="Provide a Unix timestamp for the image layers",
)
parser.add_argument(
"--no-cache",
default=False,
action="store_true",
help="Do not use existing cached images for the container build. Build from the start with a new set of cached layers.",
)
parser.add_argument(
"--rootless",
default=False,
action="store_true",
help="Run BuildKit in rootless mode (Podman only)",
)
parser.add_argument(
"-f",
"--file",
metavar="FILE",
default=None,
help="Pathname of a Dockerfile",
)
parser.add_argument(
"-o",
"--output",
metavar="FILE",
default=Path.cwd() / "image.tar",
help="Path to save OCI tarball (default: %(default)s)",
)
parser.add_argument(
"-t",
"--tag",
metavar="TAG",
default=None,
help="Tag the built image with the name %(metavar)s",
)
parser.add_argument(
"--build-arg",
metavar="ARG=VALUE",
action="append",
default=None,
help="Set build-time variables",
)
parser.add_argument(
"--platform",
metavar="PLAT1,PLAT2",
default=None,
help="Set platform for the image",
)
parser.add_argument(
"--buildkit-args",
metavar="'ARG1 ARG2'",
default=None,
help="Extra arguments for BuildKit (Podman only)",
)
parser.add_argument(
"--buildx-args",
metavar="'ARG1 ARG2'",
default=None,
help="Extra arguments for Docker Buildx (Docker only)",
)
parser.add_argument(
"--dry",
default=False,
action="store_true",
help="Do not run any commands, just print what would happen",
)
parser.add_argument(
"context",
metavar="CONTEXT",
help="Path to the build context",
)
def parse_args() -> dict:
parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers(dest="command", help="Available commands")
build_parser = subparsers.add_parser("build", help="Perform a build operation")
build_parser.set_defaults(func=build)
define_build_cmd_args(build_parser)
analyze_parser = subparsers.add_parser("analyze", help="Analyze an OCI tarball")
analyze_parser.set_defaults(func=analyze)
analyze_parser.add_argument(
"tarball",
metavar="FILE",
help="Path to OCI image in .tar format",
)
analyze_parser.add_argument(
"--expected-image-digest",
metavar="DIGEST",
default=None,
help="The expected digest for the provided image",
)
analyze_parser.add_argument(
"--show-contents",
default=False,
action="store_true",
help="Show full file contents",
)
return parser.parse_args()
def main() -> None:
logging.basicConfig(
level=logging.DEBUG,
format="%(asctime)s - %(levelname)s - %(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
)
args = parse_args()
if not hasattr(args, "func"):
args.func = build
args.func(args)
if __name__ == "__main__":
sys.exit(main())

115
dev_scripts/reproduce-image.py Executable file

@ -0,0 +1,115 @@
#!/usr/bin/env python3
import argparse
import hashlib
import logging
import pathlib
import platform
import stat
import subprocess
import sys
import urllib.request
logger = logging.getLogger(__name__)
if platform.system() in ["Darwin", "Windows"]:
CONTAINER_RUNTIME = "docker"
elif platform.system() == "Linux":
CONTAINER_RUNTIME = "podman"
def run(*args):
"""Simple function that runs a command and checks the result."""
logger.debug(f"Running command: {' '.join(args)}")
return subprocess.run(args, check=True)
def build_image(
platform=None,
runtime=None,
cache=True,
date=None,
):
"""Build the Dangerzone container image with a special tag."""
platform_args = [] if not platform else ["--platform", platform]
runtime_args = [] if not runtime else ["--runtime", runtime]
cache_args = [] if cache else ["--use-cache", "no"]
date_args = [] if not date else ["--debian-archive-date", date]
run(
"python3",
"./install/common/build-image.py",
*platform_args,
*runtime_args,
*cache_args,
*date_args,
)
def parse_args():
parser = argparse.ArgumentParser(
prog=sys.argv[0],
description="Dev script for verifying container image reproducibility",
)
parser.add_argument(
"--platform",
default=None,
help=f"The platform for building the image (default: current platform)",
)
parser.add_argument(
"--runtime",
choices=["docker", "podman"],
default=CONTAINER_RUNTIME,
help=f"The container runtime for building the image (default: {CONTAINER_RUNTIME})",
)
parser.add_argument(
"--no-cache",
default=False,
action="store_true",
help=(
"Do not use existing cached images for the container build."
" Build from the start with a new set of cached layers."
),
)
parser.add_argument(
"--debian-archive-date",
default=None,
help="Use a specific Debian snapshot archive, by its date",
)
parser.add_argument(
"digest",
help="The digest of the image that you want to reproduce",
)
return parser.parse_args()
def main():
logging.basicConfig(
level=logging.DEBUG,
format="%(asctime)s - %(levelname)s - %(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
)
args = parse_args()
logger.info(f"Building container image")
build_image(
args.platform,
args.runtime,
not args.no_cache,
args.debian_archive_date,
)
logger.info(
f"Check that the reproduced image has the expected digest: {args.digest}"
)
run(
"./dev_scripts/repro-build.py",
"analyze",
"--show-contents",
"share/container.tar",
"--expected-image-digest",
args.digest,
)
if __name__ == "__main__":
sys.exit(main())

131
dev_scripts/sign-assets.py Executable file

@ -0,0 +1,131 @@
#!/usr/bin/env python3
import argparse
import hashlib
import logging
import pathlib
import subprocess
import sys
log = logging.getLogger(__name__)
DZ_ASSETS = [
"container-{version}-i686.tar",
"container-{version}-arm64.tar",
"Dangerzone-{version}.msi",
"Dangerzone-{version}-arm64.dmg",
"Dangerzone-{version}-i686.dmg",
"dangerzone-{version}.tar.gz",
]
DZ_SIGNING_PUBKEY = "DE28AB241FA48260FAC9B8BAA7C9B38522604281"
def setup_logging():
logging.basicConfig(
level=logging.DEBUG,
format="%(asctime)s - %(levelname)s - %(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
)
def sign_asset(asset, detached=True):
"""Sign a single Dangerzone asset using GPG.
By default, ask GPG to create a detached signature. Alternatively, ask it to include
the signature with the contents of the file.
"""
_sign_opt = "--detach-sig" if detached else "--clearsign"
cmd = [
"gpg",
"--batch",
"--yes",
"--armor",
_sign_opt,
"-u",
DZ_SIGNING_PUBKEY,
str(asset),
]
log.info(f"Signing '{asset}'")
log.debug(f"GPG command: {' '.join(cmd)}")
subprocess.run(cmd, check=True)
def hash_assets(assets):
"""Create a list of hashes for all the assets, mimicking the output of `sha256sum`.
Compute the SHA-256 hash of every asset, and create a line for each asset that
follows the format of `sha256sum`. From `man sha256sum`:
The sums are computed as described in FIPS-180-2. When checking, the input
should be a former output of this program. The default mode is to print a
line with: checksum, a space, a character indicating input mode ('*' for
binary, ' ' for text or where binary is insignificant), and name for each
FILE.
"""
checksums = []
for asset in assets:
log.info(f"Hashing '{asset}'")
with open(asset, "rb") as f:
hexdigest = hashlib.file_digest(f, "sha256").hexdigest()
checksums.append(f"{hexdigest} {asset.name}")
return "\n".join(checksums)
def ensure_assets_exist(assets):
"""Ensure that assets dir exists, and that the assets are all there."""
dir = assets[0].parent
if not dir.exists():
raise ValueError(f"Path '{dir}' does not exist")
if not dir.is_dir():
raise ValueError(f"Path '{dir}' is not a directory")
for asset in assets:
if not asset.exists():
raise ValueError(
f"Expected asset with name '{asset}', but it does not exist"
)
def main():
parser = argparse.ArgumentParser(
prog=sys.argv[0],
description="Dev script for signing Dangerzone assets",
)
parser.add_argument(
"--version",
required=True,
help="look for assets with this Dangerzone version",
)
parser.add_argument(
"dir",
help="look for assets in this directory",
)
args = parser.parse_args()
setup_logging()
# Ensure that all the necessary assets exist in the provided directory.
log.info("> Ensuring that the required assets exist")
dir = pathlib.Path(args.dir)
assets = [dir / asset.format(version=args.version) for asset in DZ_ASSETS]
ensure_assets_exist(assets)
# Create a file that holds the SHA-256 hashes of the assets.
log.info("> Create a checksums file for our assets")
checksums = hash_assets(assets)
checksums_file = dir / f"checksums-{args.version}.txt"
with open(checksums_file, "w+") as f:
f.write(checksums)
# Sign every asset and create a detached signature (.asc) for each one of them. The
# sole exception is the checksums file, which embeds its signature within the
# file, and retains its original name.
log.info("> Sign all of our assets")
for asset in assets:
sign_asset(asset)
sign_asset(checksums_file, detached=False)
(dir / f"checksums-{args.version}.txt.asc").rename(checksums_file)
if __name__ == "__main__":
sys.exit(main())

2
dev_scripts/storage.conf Normal file

@ -0,0 +1,2 @@
[storage]
driver = "overlay"

133
dev_scripts/upload-asset.py Executable file

@ -0,0 +1,133 @@
#!/usr/bin/env python3
import argparse
import getpass
import logging
import os
import sys
import requests
log = logging.getLogger(__name__)
DEFAULT_HEADERS = {
"Accept": "application/vnd.github+json",
"X-GitHub-Api-Version": "2022-11-28",
}
def get_auth_header(token):
return {"Authorization": f"Bearer {token}"}
def get_latest_draft_release(token):
url = "https://api.github.com/repos/freedomofpress/dangerzone/releases"
headers = DEFAULT_HEADERS.copy()
headers.update(get_auth_header(token))
r = requests.get(url, headers=headers)
r.raise_for_status()
draft_releases = [release["id"] for release in r.json() if release["draft"]]
if len(draft_releases) > 1:
raise RuntimeError("Found more than one draft releases")
elif len(draft_releases) == 0:
raise RuntimeError("No draft releases have been found")
return draft_releases[0]
def get_release_from_tag(token, tag):
url = f"https://api.github.com/repos/freedomofpress/dangerzone/releases/tags/v{tag}"
headers = DEFAULT_HEADERS.copy()
headers.update(get_auth_header(token))
r = requests.get(url, headers=headers)
r.raise_for_status()
return r.json()["id"]
def upload_asset(token, release_id, path):
filename = os.path.basename(path)
url = f"https://uploads.github.com/repos/freedomofpress/dangerzone/releases/{release_id}/assets?name={filename}"
headers = DEFAULT_HEADERS.copy()
headers.update(get_auth_header(token))
headers["Content-Type"] = "application/octet-stream"
with open(path, "rb") as f:
data = f.read()
# XXX: We have to load the data in-memory. Another solution is to use multipart
# encoding, but this doesn't work for GitHub.
r = requests.post(url, headers=headers, data=data)
r.raise_for_status()
def setup_logging():
logging.basicConfig(
level=logging.DEBUG,
format="%(asctime)s - %(levelname)s - %(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
)
def main():
parser = argparse.ArgumentParser(
prog=sys.argv[0],
description="Dev script for uploading assets to a GitHub release",
)
parser.add_argument(
"--token",
help="the file path to the GitHub token we will use for uploading assets",
)
parser.add_argument(
"--tag",
help="use the release with this tag",
)
parser.add_argument(
"--release-id",
help="use the release with this ID",
)
parser.add_argument(
"--draft",
action="store_true",
help="use the latest draft release",
)
parser.add_argument(
"file",
help="the file path to the asset we want to upload",
)
args = parser.parse_args()
setup_logging()
if args.token:
log.debug(f"Reading token from {args.token}")
# Ensure we are not uploading the token as an asset
assert args.file != args.token
with open(args.token) as f:
token = f.read().strip()
else:
token = getpass.getpass("Token: ")
if args.tag:
log.debug(f"Getting the ID of the {args.tag} release")
release_id = get_release_from_tag(token, args.tag)
log.debug(f"The {args.tag} release has ID '{release_id}'")
elif args.release_id:
release_id = args.release_id
else:
log.debug("Getting the ID of the latest draft release")
release_id = get_latest_draft_release(token)
log.debug(f"The latest draft release has ID '{release_id}'")
log.info(f"Uploading file '{args.file}' to GitHub release '{release_id}'")
upload_asset(token, release_id, args.file)
log.info(
f"Successfully uploaded file '{args.file}' to GitHub release '{release_id}'"
)
if __name__ == "__main__":
sys.exit(main())


@ -0,0 +1,13 @@
# Security Advisory 2023-10-25
For users testing our [new Qubes integration (beta)](https://github.com/freedomofpress/dangerzone/blob/main/INSTALL.md#qubes-os), please note that our instructions were missing a configuration detail for disposable VMs which is necessary to fully harden the configuration.
These instructions apply to users who followed the setup instructions **before October 25, 2023**.
**What you need to do:** run the following command in dom0:
```bash
qvm-prefs dz-dvm default_dispvm ''
```
**Explanation**: In Qubes OS, the default template for disposable VMs is network-connected. For this reason, we instruct users to create their own disposable VM (`dz-dvm`). However, adversaries with the ability to execute commands on `dz-dvm` would also be able to open new disposable VMs with the default settings. By setting the `default_dispvm` to "none", we prevent this bypass.


@ -0,0 +1,32 @@
Security Advisory 2023-12-07
In Dangerzone, a security vulnerability was detected in the quarantined
environment where documents are opened. Vulnerabilities like this are expected
and do not compromise the security of Dangerzone. However, in combination with
another more serious vulnerability (also called container escape), a malicious
document may be able to breach the security of Dangerzone. We are not aware of
any container escapes that affect Dangerzone. **To reduce that risk, you are
strongly advised to update Dangerzone to the latest version**.
# Summary
A security vulnerability in GhostScript (CVE-2023-43115) affects the
**contained** environment where the document rendering takes place. If one
attempts to convert a malicious file with an embedded PostScript image,
arbitrary code may run within that environment. Such files look like regular Office documents, which means that you cannot protect yourself by avoiding files with a specific extension. Other
programs that open Office documents, such as LibreOffice, are also affected,
unless the system has been upgraded in the meantime.
# How does this impact me?
The expectation is that malicious code will run in a container without Internet
access, meaning that it won't be able to infect the rest of the system.
# What do I need to do?
You are **strongly** advised to update your Dangerzone installation to 0.5.1 as
soon as possible.
Please note that we have recently enabled security scans for our software, and
we aim to alert people even sooner about vulnerabilities like these.


@ -0,0 +1,33 @@
Security Advisory 2024-12-24
In Dangerzone, a security vulnerability was detected in the quarantined
environment where documents are opened. Vulnerabilities like this are expected
and do not compromise the security of Dangerzone. However, in combination with
another more serious vulnerability (also called container escape), a malicious
document may be able to breach the security of Dangerzone. We are not aware of
any container escapes that affect Dangerzone. **To reduce that risk, you are
strongly advised to update Dangerzone to the latest version**.
# Summary
A series of vulnerabilities in gst-plugins-base (CVE-2024-47538, CVE-2024-47607
and CVE-2024-47615) affects the **contained** environment where the document
rendering takes place.
If one attempts to convert a malicious file with embedded Vorbis or Opus media elements, arbitrary code may run within that environment. Such files look like regular Office documents, which means that you cannot protect yourself by avoiding files with a specific extension. Other programs that open Office documents, such as LibreOffice, are
also affected, unless the system has been upgraded in the meantime.
# How does this impact me?
The expectation is that malicious code will run in a container without Internet
access, meaning that it won't be able to infect the rest of the system.
If you are running Dangerzone via Qubes OS, you are not impacted.
# What do I need to do?
You are **strongly** advised to update your Dangerzone installation to 0.8.1 as
soon as possible.

38
docs/developer/TESTING.md Normal file

@ -0,0 +1,38 @@
# Dangerzone Testing
Dangerzone has some automated testing under `tests/`.
The following assumes that you have already set up the development environment.
## Run tests
Unit / integration tests are run with:
```bash
poetry run make test
```
## Run large tests
We also have a larger set of tests that can take a day or more to run, where we evaluate the completeness of Dangerzone conversions.
```bash
poetry run make test-large
```
### Test report generation
After running the large tests, a report is stored under `tests/test_docs_large/results/junit/`, as a JUnit XML file describing the pytest run.
This report can be analysed for errors by running:
```bash
cd tests/test_docs_large
make report
```
If you want to run the report on some historical test result, you can call:
```bash
cd tests/test_docs_large
python report.py tests/test_docs_large/results/junit/commit_<COMMIT_ID>.junit.xml
```

54
docs/developer/doit.md Normal file

@ -0,0 +1,54 @@
# Using the Doit Automation Tool
Developers can use the [Doit](https://pydoit.org/) automation tool to create
release artifacts. The purpose of the tool is to automate the manual release
instructions in the `RELEASE.md` file. Not everything is automated yet, since we're
still experimenting with this tool. You can find our task definitions in this
repo's `dodo.py` file.
## Why Doit?
We picked Doit out of the various tools out there for the following reasons:
* **Pythonic:** The configuration file and tasks can be written in Python. Where
applicable, it's easy to issue shell commands as well.
* **File targets:** Doit borrows the file target concept from Makefiles. Tasks
can have file dependencies, and targets that they build. This makes it easy to
define a dependency graph for tasks (a minimal sketch follows this list).
* **Hash-based caching:** Unlike Makefiles, doit does not look at the
modification timestamps of source/target files to figure out whether it needs to
run a task. Instead, it hashes those files, and will run a task only if the
hash of a file dependency has changed.
* **Parallelization:** Tasks can be run in parallel with the `-n` argument,
which is similar to `make`'s `-j` argument.
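To make the file-target idea above concrete, here is a minimal, hypothetical task of the kind that could live in `dodo.py` (the task name, file paths, and action are made up for illustration; see the repo's actual `dodo.py` for the real definitions):
```python
def task_build_container():
    """Build the container image only when its inputs change."""
    return {
        # Doit hashes these file dependencies and re-runs the task only if a hash changes.
        "file_dep": ["Dockerfile", "poetry.lock", "pyproject.toml"],
        # The file target that this task produces.
        "targets": ["share/container.tar"],
        "actions": ["python3 ./install/common/build-image.py"],
    }
```
With such a definition, running the task twice in a row does nothing the second time, since the hashes of its file dependencies have not changed.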
## How to Doit?
First, enter your Poetry shell. Then, make sure that your environment is clean,
and you have ample disk space. You can run:
```bash
doit clean --dry-run # if you want to see what would happen
doit clean # you'll be asked to confirm that you want to clean everything
```
Finally, you can build all the release artifacts with `doit`, or a specific task
with:
```
doit <task>
```
## Tips and tricks
* You can run `doit list --all -s` to see the full list of tasks, their
dependencies, and whether they are up to date (U) or will run (R). Note that
certain small tasks are always configured to run.
* You can run `doit info <task>` to see which dependencies are missing.
* You can pass the following environment variables to the script, in order to
affect some global parameters:
- `CONTAINER_RUNTIME`: The container runtime to use. Either `podman` (default)
or `docker`.
- `RELEASE_DIR`: Where to store the release artifacts. Default path is
`~/release-assets/<version>`
- `APPLE_ID`: The Apple ID to use when signing/notarizing the macOS DMG.


@ -0,0 +1,133 @@
# Create Dangerzone environments
The `dev_scripts/env.py` script creates environments in which a user can run
Dangerzone. It allows the user to run arbitrary commands in these environments,
as well as Dangerzone itself (nested containerization).
It supports two types of environments:
1. Dev environment. This environment has developer tools, necessary for
Dangerzone, baked in. Also, it mounts the Dangerzone source under
`/home/user/dangerzone` in the container. The developer can then run
Dangerzone from source, with `poetry run ./dev_scripts/dangerzone`.
2. End-user environment. This environment has only Dangerzone installed in it,
from the .deb/.rpm package that we have created. For convenience, it also has
the Dangerzone source mounted under `/home/user/dangerzone`, but it lacks
Poetry and other build tools. The developer can run Dangerzone there with
`dangerzone`. This environment is the most vanilla Dangerzone environment,
and should be closer to the end user's environment than the development
environment is.
Each environment corresponds to a Dockerfile, which is generated on the fly. The
developer can see this Dockerfile by passing `--show-dockerfile`.
For usage information, run `./dev_scripts/env.py --help`.
## Nested containerization
Since the Dangerzone environments are containers, the Podman containers that
Dangerzone creates have to be nested containers. This has some challenges, which
we highlight below:
1. Containers typically only have a subset of syscalls allowed, and sometimes
only for specific arguments. This happens with the use of
[seccomp filters](https://docs.docker.com/engine/security/seccomp/). For
instance, in Docker, the `clone` syscall is limited in containers and cannot
create new namespaces
(https://docs.docker.com/engine/security/seccomp/#significant-syscalls-blocked-by-the-default-profile). For testing/development purposes, we can get around this limitation
by disabling the seccomp filters for the external container with
`--security-opt seccomp=unconfined`. This has the same effect as developing
Dangerzone locally, so it should probably be sufficient for now.
2. While Linux supports nested namespaces, we need extra handling for nested
user namespaces. By default, the configuration for each user namespace (see
[`man login.defs`](https://man7.org/linux/man-pages/man5/login.defs.5.html))
is to reserve 65536 UIDs/GIDs, starting from UID/GID 100000. This works fine
for the first container, but can't work for the nested container, since it
doesn't have enough UIDs/GIDs to refer to UID 100000. Our solution to this is
to restrict the number of UIDs/GIDs allowed in the nested container to 2000,
which should be enough to run `podman` in it.
3. Containers also restrict the capabilities (see
[`man capabilities`](https://man7.org/linux/man-pages/man7/capabilities.7.html))
of the processes that run in them. By default, containers do not have mount
capabilities, since it requires `CAP_SYS_ADMIN`, which effectively
[makes the process root](https://lwn.net/Articles/486306/) in the specific
user namespace. In our case, we have to give the Dangerzone environment this
capability, since it will have to mount directories in Podman containers. For
this reason, as well as some extra things we bumped into during development,
we pass `--privileged` when creating the Dangerzone environment, which
includes the `CAP_SYS_ADMIN` capability.
## GUI containerization
Running a GUI app in a container is a tricky subject for multi-platform apps. In
our case, we deal specifically with Linux environments, so we can target just
this platform.
To understand how a GUI app can draw in the user's screen from within a
container, we must first understand how it does so outside the container. In
Unix-like systems, GUI apps act like
[clients to a display server](https://wayland.freedesktop.org/architecture.html).
The most common display server implementation is X11, and the runner-up is
Wayland. Both of these display servers share some common traits, mainly that
they use Unix domain sockets as a way of letting clients communicate with them.
So, this gives us the answer to how one can run a containerized GUI app: simply
mount the Unix domain socket in the container. In practice this is
more nuanced, for two reasons:
1. Wayland support is not that mature on Linux, so we need to
[set some extra environment variables](https://github.com/mviereck/x11docker/wiki/How-to-provide-Wayland-socket-to-docker-container). To simplify things, we will target
X11 / XWayland hosts, which are the majority of the Linux OSes out there.
2. Sharing the Unix domain socket alone does not allow the client to talk to the
display server, for security reasons. In order to authorize the client, we need
to mount a magic cookie stored in a file pointed at by the `$XAUTHORITY`
envvar. Alternatively, we can use `xhost`, which is considered slightly more
dangerous in multi-user environments (see the sketch after this list).
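As a rough sketch of the X11 case, assuming the standard socket and cookie locations (the paths, environment handling, and image tag below are illustrative, not the exact flags that `env.py` passes):
```python
import os
import subprocess

# Rough sketch for X11 / XWayland hosts; paths and image tag are assumptions.
xauthority = os.environ.get("XAUTHORITY", os.path.expanduser("~/.Xauthority"))
cmd = [
    "podman", "run", "--rm", "-it",
    "-e", f"DISPLAY={os.environ.get('DISPLAY', ':0')}",
    # Share the X11 Unix domain socket with the container.
    "-v", "/tmp/.X11-unix:/tmp/.X11-unix",
    # Share the magic cookie so the display server authorizes the client.
    "-v", f"{xauthority}:/home/user/.Xauthority:ro",
    "-e", "XAUTHORITY=/home/user/.Xauthority",
    "dangerzone.rocks/build/debian:bookworm",
    "dangerzone",
]
subprocess.run(cmd, check=True)
```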
## Caching and Reproducibility
In order to build Dangerzone environments, the script uses the following inputs:
* Dev environment:
- Distro name and version. Together, these comprise the base container image.
- `poetry.lock` and `pyproject.toml`. Together, these comprise the build
context.
* End-user environment:
- Distro name and version. Together, these comprise the base container image.
- `.deb` / `.rpm` Dangerzone package, as found under `deb_dist/` or `dist/`
respectively.
Any change in these inputs busts the cache for the corresponding image. In
theory, this means that the Dangerzone environment for each commit can be built
reproducibly. In practice, there are some issues that we haven't covered yet:
1. The output images are:
* Dev: `dangerzone.rocks/build/{distro_name}:{distro_version}`
* End-user: `dangerzone.rocks/{distro_name}:{distro_version}`
These images do not contain the commit/version of the Dangerzone source they
were created from, so each build overwrites the previous one.
2. The end-user environment expects a `.deb` / `.rpm` package tagged with the version
of Dangerzone, but it doesn't insist that it was built from the current Dangerzone
commit. This means that stale packages may be installed in the end-user
environment.
3. The base images may be different in various environments, depending on when
they were pulled.
## State
The main goal behind these Dangerzone environments is to make them immutable,
so that they do not need to be stored anywhere, but can be recreated from
their images. Any change to these environments should therefore be reflected in
their Dockerfile.
To enforce immutability, we delete the containers every time we run a command or
an interactive shell exits. This means that these environments are suitable only
for running Dangerzone commands, and not for doing actual development in them
(installing an editor, configuring bash prompts, etc.).
The only point where we allow mutability is the directory where Podman stores
the images and stopped containers, which may be useful for developers. If this
proves to be an issue, we will reconsider.
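For example, running a single command in an ephemeral environment could look like the sketch below; the named volume and its mount point are assumptions, used here only to show where mutability is allowed:
```python
import subprocess

# Sketch: `--rm` enforces the "no stored state" rule, while a named volume
# persists Podman's image/container storage between runs. Volume name and
# mount point are illustrative.
cmd = [
    "podman", "run", "--rm", "-it",
    "-v", "dz-dev-storage:/home/user/.local/share/containers",
    "dangerzone.rocks/build/debian:bookworm",
    "dangerzone-cli", "--help",
]
subprocess.run(cmd, check=True)
```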

295
docs/developer/gvisor.md Normal file
View file

@ -0,0 +1,295 @@
# gVisor integration
> [!NOTE]
> **Update on 2025-01-13:** There is no longer a copied container image under
> `/home/dangerzone/dangerzone-image/rootfs`. We now reuse the same container
> image both for the inner and outer container. See
> [#1048](https://github.com/freedomofpress/dangerzone/issues/1048).
Dangerzone has relied on the container runtime available in each supported
operating system (Docker Desktop on Windows / macOS, Podman on Linux) to isolate
the host from the sanitization process. The problem with this type of isolation
is that it exposes a rather large attack surface: the Linux kernel.
[gVisor](https://gvisor.dev/) is an application kernel that emulates a
substantial portion of the Linux Kernel API in Go. What's more interesting to
Dangerzone is that it also offers an OCI runtime (`runsc`) that enables
containers to transparently run this application kernel.
As of writing this, Dangerzone uses two containers to sanitize a document:
* The first container reads a document from stdin, converts each page to pixels,
and writes them to stdout.
* The second container reads the pixels from a mounted volume (the host has
taken care of this), and saves the final PDF to another mounted volume.
Our threat model considers the computation and output of the first container
as **untrusted**, and the computation and output of the second container as
trusted. For this reason, and because we are about to remove the need for the
second container, our integration plan will focus on the first container.
## Design overview
Our integration goals are to:
* Make gVisor available to all of our supported platforms.
* Not require users to run any commands on their system to do so.
Because gVisor does not support Windows and macOS systems out of the box,
Dangerzone will be responsible for "shipping" gVisor to those users. It will do
so using nested containers:
* The **outer** container is the Docker/Podman container that Dangerzone uses
already. This container acts as our **portability** layer. Its main purpose
is to bundle all the necessary configuration files and programs to run gVisor
on all of our platforms.
* The **inner** container is the gVisor container, created with `runsc`. This
container acts as our **isolation layer**. It is responsible for running the
Python code that rasterizes a document, in a way that will be fully isolated
from the host.
### Building the container image
This nested container approach directly affects the container image as well,
which will also have two layers:
* The **outer** container image will contain just Python3 and `runsc`, the
latter downloaded from the official gVisor website. It will also contain an
entrypoint that will launch `runsc`. Finally, it will contain the **inner**
container image (see below) as a filesystem clone under
`/dangerzone-image/rootfs`.
* The **inner** container image is practically the original Dangerzone image, as
we've always built it, which contains the necessary tooling to rasterize a
document.
### Spawning the container
Spawning the container now becomes a multi-stage process:
The `Container` isolation provider spawns the container as before, with the
following changes (a sketch of the resulting invocation follows this list):
* It adds the `SYS_CHROOT` Linux capability, which was previously dropped, to
the **outer** container. This capability is necessary to run `runsc`
rootless, and is not inherited by the **inner** container.
* It removes the `--userns keep-id` argument, which mapped the user outside the
container to the same UID (normally `1000`) within the container. This was
originally required when we were mounting host directories within the
container, but this no longer applies to the gVisor integration. By removing
this flag, the host user maps to the root user within the container (UID `0`).
- In distributions that offer Podman version 4 or greater, we use the
`--userns nomap` flag. This flag greatly minimizes the attack surface,
since the host user is not mapped within the container at all.
* We use our custom seccomp policy across container engines, since some do not
allow the `ptrace` syscall (see
[#846](https://github.com/freedomofpress/dangerzone/issues/846)).
* It labels the **outer** container with the `container_engine_t` SELinux label.
This label is reserved for running a container engine within a container, and
is necessary in environments where SELinux is enabled in enforcing mode (see
[#880](https://github.com/freedomofpress/dangerzone/issues/880)).
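Put together, the outer-container invocation could look roughly like the sketch below; the seccomp policy path and image tag are assumptions, and the actual command is built by the `Container` isolation provider:
```python
import subprocess

# Illustrative sketch of the outer-container invocation described above.
cmd = [
    "podman", "run", "--rm", "-i",
    "--cap-drop", "all",
    "--cap-add", "SYS_CHROOT",                # needed to run runsc rootless
    "--userns", "nomap",                      # Podman >= 4: don't map the host user at all
    "--security-opt", "seccomp=./share/seccomp.gvisor.json",   # custom policy that allows ptrace
    "--security-opt", "label=type:container_engine_t",         # SELinux label for nested engines
    "dangerzone.rocks/dangerzone",
    "/usr/bin/python3", "-m", "dangerzone.conversion.doc_to_pixels",
]
subprocess.run(cmd, check=True)
```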
Then, the following happens when Podman/Docker spawns the container:
1. _(outer container)_ The entrypoint code finds from `sys.argv` the command
that Dangerzone passed to the `docker run` / `podman run` invocation.
Typically, this command is:
```
/usr/bin/python3 -m dangerzone.conversion.doc_to_pixels
```
2. _(outer container)_ The entrypoint code then creates an OCI config for
`runsc` with the following properties:
* Use UID/GID 1000 in the **inner** container image.
* Run the command we detected on step 1.
* Drop all Linux capabilities.
* Limit the number of open files to 4096.
* Use the `/dangerzone-image/rootfs` directory as the root path for the
**inner** container.
* Mount a gVisor view of the `procfs` hierarchy under `/proc`, and then
mount `tmpfs` in the `/dev`, `/sys` and `/tmp` mount points. This way, no
host-specific info may leak to the **inner** container.
- Mount `tmpfs` on some more mountpoints where we want write access.
3. _(outer container)_ If `RUNSC_DEBUG` has been specified, add some debug
arguments to `runsc` (applies to development environments only).
4. _(outer container)_ If `RUNSC_FLAGS` has been specified, pass some
user-specified flags to `runsc` (applies to development environments only).
5. _(outer container)_ Spawn `runsc` as a Python subprocess, and wait for it to
complete.
6. _(inner container)_ Read the document from stdin and write pixels to stdout.
- In practice, nothing changes here, as far as the document conversion is
concerned. The Python process transparently uses the emulated Linux Kernel
API that gVisor provides.
7. _(outer container)_ Exit the container with the same exit code as the inner
container.
## Implementation details
### Creating the outer container image
In order to achieve the above, we add one more build stage in our Dockerfile
(see [multi-stage builds](https://docs.docker.com/build/building/multi-stage/))
that copies the result of the previous stages under `/dangerzone-image/rootfs`.
Also, we install `runsc` and Python, and copy our entrypoint to that layer.
Here's how it looks:
```dockerfile
# NOTE: The following lines are appended to the end of our original Dockerfile.
# Install some commands required by the entrypoint.
FROM alpine:latest
RUN apk --no-cache -U upgrade && \
apk --no-cache add \
python3 \
su-exec
# Add the previous build stage (`dangerzone-image`) as a filesystem clone under
# the /dangerzone-image/rootfs directory.
RUN mkdir --mode=0755 -p /dangerzone-image/rootfs
COPY --from=dangerzone-image / /dangerzone-image/rootfs
# Download and install gVisor, based on the official instructions.
RUN GVISOR_URL="https://storage.googleapis.com/gvisor/releases/release/latest/$(uname -m)"; \
wget "${GVISOR_URL}/runsc" "${GVISOR_URL}/runsc.sha512" && \
sha512sum -c runsc.sha512 && \
rm -f runsc.sha512 && \
chmod 555 runsc && \
mv runsc /usr/bin/
COPY gvisor_wrapper/entrypoint.py /
RUN chmod 555 /entrypoint.py
ENTRYPOINT ["/entrypoint.py"]
```
### OCI config
The OCI config that gets produced is similar to this:
```json
{
"ociVersion": "1.0.0",
"process": {
"user": {
"uid": 1000,
"gid": 1000
},
"args": [
"/usr/bin/python3",
"-m",
"dangerzone.conversion.doc_to_pixels"
],
"env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"PYTHONPATH=/opt/dangerzone",
"TERM=xterm"
],
"cwd": "/",
"capabilities": {
"bounding": [],
"effective": [],
"inheritable": [],
"permitted": [],
},
"rlimits": [
{
"type": "RLIMIT_NOFILE",
"hard": 4096,
"soft": 4096
}
]
},
"root": {
"path": "rootfs",
"readonly": true
},
"hostname": "dangerzone",
"mounts": [
{
"destination": "/proc",
"type": "proc",
"source": "proc"
},
{
"destination": "/dev",
"type": "tmpfs",
"source": "tmpfs",
"options": [
"nosuid",
"noexec",
"nodev"
]
},
{
"destination": "/sys",
"type": "tmpfs",
"source": "tmpfs",
"options": [
"nosuid",
"noexec",
"nodev",
"ro"
]
},
{
"destination": "/tmp",
"type": "tmpfs",
"source": "tmpfs",
"options": [
"nosuid",
"noexec",
"nodev"
]
},
{
"destination": "/home/dangerzone",
"type": "tmpfs",
"source": "tmpfs",
"options": [
"nosuid",
"noexec",
"nodev"
]
},
{
"destination": "/usr/lib/libreoffice/share/extensions/",
"type": "tmpfs",
"source": "tmpfs",
"options": [
"nosuid",
"noexec",
"nodev"
]
}
],
"linux": {
"namespaces": [
{
"type": "pid"
},
{
"type": "network"
},
{
"type": "ipc"
},
{
"type": "uts"
},
{
"type": "mount"
}
]
}
}
```
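Glossing over details, the entrypoint's job boils down to writing such a config and handing it to `runsc`. The following is a simplified, illustrative sketch, not the actual `entrypoint.py` code; the bundle path and container name are assumptions:
```python
import json
import subprocess
import sys

# Simplified sketch of the outer-container entrypoint flow.
def main():
    # Step 1: the command Dangerzone passed to `docker run` / `podman run`.
    command = sys.argv[1:]
    # Step 2: build an OCI config like the one shown above.
    oci_config = {
        "ociVersion": "1.0.0",
        "process": {
            "user": {"uid": 1000, "gid": 1000},
            "args": command,
            "cwd": "/",
            # ... capabilities, rlimits, env as shown above ...
        },
        "root": {"path": "rootfs", "readonly": True},
        # ... mounts and namespaces as shown above ...
    }
    with open("/dangerzone-image/config.json", "w") as f:
        json.dump(oci_config, f)
    # Steps 3-5: add debug/user flags if requested, then spawn runsc and wait.
    proc = subprocess.run(
        ["runsc", "run", "--bundle", "/dangerzone-image", "dangerzone"]
    )
    # Step 7: exit with the same code as the inner container.
    sys.exit(proc.returncode)

if __name__ == "__main__":
    main()
```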
## Security considerations
* gVisor does not have an official release on Alpine Linux. The developers
provide gVisor binaries from a GCS bucket. In order to verify the integrity of
these binaries, they also provide a SHA-512 hash of the files.
- If we choose to pin the hash, then we essentially pin gVisor, and we may
lose security updates (see the sketch below).
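For reference, pinning would amount to comparing the downloaded binary against a hard-coded digest, along the lines of the sketch below; the digest value is a placeholder, not gVisor's real one:
```python
import hashlib

# Hypothetical pin: hard-coding the SHA-512 of a specific runsc release trades
# automatic gVisor updates for immutability.
PINNED_RUNSC_SHA512 = "0" * 128  # placeholder, not a real digest

def runsc_matches_pin(path: str) -> bool:
    with open(path, "rb") as f:
        return hashlib.sha512(f.read()).hexdigest() == PINNED_RUNSC_SHA512
```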
## Alternatives
gVisor can be integrated with Podman/Docker, but this is the case only on Linux.
Because we want gVisor on Windows and macOS as well, we decided to not move
forward with this approach.

14
docs/developer/qa.md Normal file
View file

@ -0,0 +1,14 @@
# Scripted QA
The `dev_scripts/qa.py` script runs the QA steps for a supported platform, in
order to make sure that the dev does not skip something. These steps are taken
from our [release instructions](../../RELEASE.md#qa).
The idea behind this script is that it will present each step to the user and
ask them to perform it manually and confirm that it passes, before continuing to
the next one. For specific steps, it allows the user to run them automatically.
In steps that require a Dangerzone dev environment, this script uses the
`env.py` script to create one.
Including all the supported platforms in this script is still a work in
progress.
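Conceptually, the prompt loop looks like the sketch below; the step descriptions and commands are hypothetical, not the actual `qa.py` internals:
```python
import subprocess

# Hypothetical sketch of the QA prompt loop; not the actual qa.py code.
STEPS = [
    # (description, optional command that can run the step automatically)
    ("Create a Dangerzone dev environment", "./dev_scripts/env.py --distro debian --version bookworm build-dev"),
    ("Convert a sample document and check the output PDF", None),
]

for description, command in STEPS:
    print(f"QA step: {description}")
    if command and input("Run this step automatically? [y/N] ").lower() == "y":
        subprocess.run(command, shell=True, check=True)
    while input("Did this step pass? [y/N] ").lower() != "y":
        print("Please complete the step before continuing.")
```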

View file

@ -0,0 +1,67 @@
# Reproducible builds
We want to improve the transparency and auditability of our build artifacts, and
a way to achieve this is via reproducible builds. For a broader understanding of
what reproducible builds entail, check out https://reproducible-builds.org/.
Our build artifacts consist of:
* Container images (`amd64` and `arm64` architectures)
* macOS installers (for Intel and Apple Silicon CPUs)
* Windows installer
* Fedora packages (for regular Fedora distros and Qubes)
* Debian packages (for Debian and Ubuntu)
As of writing this, only the following artifacts are reproducible:
* Container images (see [#1047](https://github.com/freedomofpress/dangerzone/issues/1047))
In the following sections, we'll mention some specifics about enforcing
reproducibility for each artifact type.
## Container image
### Updating the image
The fact that our image is reproducible also means that it's frozen in time.
This means that if you rebuild the image without updating our Dockerfile, it
will **not** receive security updates.
Here are the necessary variables that make up our image in the `Dockerfile.env`
file:
* `DEBIAN_IMAGE_DIGEST`: The index digest for the Debian container image
* `DEBIAN_ARCHIVE_DATE`: The Debian snapshot repo that we want to use
* `GVISOR_ARCHIVE_DATE`: The gVisor APT repo that we want to use
* `H2ORESTART_CHECKSUM`: The SHA-256 checksum of the H2ORestart plugin
* `H2ORESTART_VERSION`: The version of the H2ORestart plugin
If you update these values in `Dockerfile.env`, you must also create a new
Dockerfile with:
```
make Dockerfile
```
Updating `Dockerfile` without bumping `Dockerfile.in` is detected and should
trigger a CI error.
### Reproducing the image
For a simple way to reproduce a Dangerzone container image, you can check out the
commit this image was built from (you can find it in the `g<commit>` portion of
the image tag), retrieve the date it was built (also included in the image
tag), and run the following command in any environment:
```
./dev_scripts/reproduce-image.py \
--debian-archive-date <date> \
<digest>
```
where:
* `<date>` should be given in YYYYMMDD format, e.g., 20250226
* `<digest>` is the SHA-256 hash of the image for the **current platform**, with
or without the `sha256:` prefix.
This command will build a container image from the current Git commit and the
provided date for the Debian archives. Then, it will compare the digest of the
manifest against the provided one. This is a simple way to ensure that the
created image is bit-for-bit reproducible.
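Conceptually, the final check boils down to comparing two digests, e.g. (illustrative only; the real logic lives in `reproduce-image.py`, and the tag below is an assumption):
```python
import subprocess

# Illustrative: compare the digest of the locally rebuilt image against the
# digest provided on the command line.
def image_digest(image: str) -> str:
    result = subprocess.run(
        ["podman", "image", "inspect", "--format", "{{.Digest}}", image],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

provided = "sha256:aaaa..."  # placeholder for the digest passed to the script
rebuilt = image_digest("dangerzone.rocks/dangerzone:latest")
print("Image is reproducible" if rebuilt == provided else "Digest mismatch")
```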

222
docs/developer/updates.md Normal file
View file

@ -0,0 +1,222 @@
# Update notifications
This design document explains how the notification mechanism for Dangerzone
updates works, what its benefits and limitations are, and what other
alternatives we have considered. It has been adapted from discussions on GitHub
issue [#189](https://github.com/freedomofpress/dangerzone/issues/189), and has
been updated to reflect the current design.
A user-facing document on how update notifications work can be found in
https://github.com/freedomofpress/dangerzone/wiki/Updates
## Design overview
This feature introduces a hamburger icon that will be visible across almost all
of the Dangerzone windows. This will be used to notify the users about updates.
### First run
_We detect it's the first time Dangerzone runs because the
`settings["updater_last_check"] is None`._
Add the following keys to our `settings.json` file (a sketch of the defaults follows below).
* `"updater_check": None`: Whether to check for updates or not. `None` means
that the user has not decided yet, and is the default.
* `"updater_last_check": None`: The last time we checked for updates (in seconds
from Unix epoch). None means that we haven't checked yet.
* `"updater_latest_version": "0.4.2"`: The latest version that the Dangerzone
updater has detected. By default it's the current version.
* `"updater_latest_changelog": ""`: The latest changelog that the Dangerzone
updater has detected. By default it's empty.
* `"updater_errors: 0`: The number of update check errors that we have
encountered in a row.
Note:
* If on Linux, set `"updater_check": False`, since we normally have
other update channels for these platforms.
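In code, these defaults could be represented roughly as follows (a sketch, not the actual settings module):
```python
import platform

# Sketch of the default updater settings described above.
def default_updater_settings(current_version: str) -> dict:
    return {
        # None: the user hasn't decided yet; False by default on Linux.
        "updater_check": False if platform.system() == "Linux" else None,
        "updater_last_check": None,
        "updater_latest_version": current_version,
        "updater_latest_changelog": "",
        "updater_errors": 0,
    }
```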
### Second run
_We detect it's the second time Dangerzone runs because
`settings["updater_check"] is not None and settings["updater_last_check"] is
None`._
Before starting up the main window, show this window:
* Title: Dangerzone Updater
* Body:
> Do you want Dangerzone to automatically check for updates?
>
> If you accept, Dangerzone will check the latest releases page in github.com
> on startup. Otherwise it will make no network requests and won't inform you
> about new releases.
>
> If you prefer another way of getting notified about new releases, we suggest adding
> our [Mastodon feed](https://fosstodon.org/@dangerzone.rss) to your RSS reader. For more information
> about updates, check [this webpage](https://github.com/freedomofpress/dangerzone/wiki/Updates).
* Buttons:
- Check Automatically: Store `settings["updater_check"] = True`
- Don't Check: Store `settings["updater_check"] = False`
Note:
* Users will be able to change their choice from the hamburger menu, which will
contain an entry called "Check for updates", that users can check and uncheck.
### Subsequent runs
_We perform the following only if `settings["updater_check"] == True`._
1. Spawn a new thread so that we don't block the main window.
2. Check if we have cached information about a release (version and changelog).
If yes, return those immediately.
3. Check if the last time we checked for new releases was less than 12 hours
ago. In that case, skip this update check so that we don't leak telemetry
stats to GitHub.
4. Hit the GitHub releases API and get the [latest release](https://api.github.com/repos/freedomofpress/dangerzone/releases/latest).
Store the current time as the last check time, even if the call fails.
5. Check if the latest release matches `settings["updater_latest_version"]`. If
yes, return an empty update report.
6. If a new update has been detected, return the version number and the
changelog.
7. Add a green bubble in the notification icon, and a menu entry called "New
version available".
8. Users who click on this entry will see a dialog with more info:
* Title: "Dangerzone v0.5.0 has been released"
* Body:
> A new Dangerzone version has been released. Please visit our [downloads page](https://dangerzone.rocks#downloads) to install this update.
>
> (Show changelog rendered from Markdown in a collapsible text box)
* Buttons:
- OK: Return
Notes:
* Any successful attempt to fetch info from GitHub will result in clearing the
`settings["updater_errors"]` key.
### Error handling
_We trigger error handling when the updater thread encounters an error (either
due to an HTTPS failure or a Python exception) and does not complete
successfully._
1. Bump the number of errors we've encountered in a row
(`settings["updater_errors"] += 1`)
2. Return an update report with the error we've encountered.
3. Update the hamburger menu with a red notification bubble, and add a menu
entry called "Update error".
4. If a user clicks on this menu entry, show a dialog window:
* Title: "Update check error"
* Body:
> Something went wrong while checking for Dangerzone updates:
>
> You are strongly advised to visit our [downloads page](https://dangerzone.rocks#downloads) and check for new updates manually, or consult [this page](https://github.com/freedomofpress/dangerzone/wiki/Updates) for common causes of errors. Alternatively, you can uncheck "Check for updates" if you are in an air-gapped environment and have another way of learning about updates.
>
> (Show the latest error message in a scrollable, copyable text box)
* Buttons:
- Close: Return
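A condensed sketch of the check-and-error-handling flow from the last two subsections (the function name and the way the settings object is accessed are illustrative; the real updater runs in a background thread and reports back to the GUI):
```python
import time
import requests

RELEASES_URL = "https://api.github.com/repos/freedomofpress/dangerzone/releases/latest"
CHECK_INTERVAL = 12 * 60 * 60  # seconds

# Illustrative sketch only, not the actual updater code.
def check_for_updates(settings: dict) -> dict:
    if not settings.get("updater_check"):
        return {}
    last_check = settings.get("updater_last_check") or 0
    if time.time() - last_check < CHECK_INTERVAL:
        return {}  # checked recently; don't hit GitHub again
    settings["updater_last_check"] = time.time()  # stored even if the call fails
    try:
        release = requests.get(RELEASES_URL, timeout=10).json()
        settings["updater_errors"] = 0
    except Exception as e:
        settings["updater_errors"] += 1
        return {"error": str(e)}
    version = release["tag_name"].lstrip("v")
    if version == settings["updater_latest_version"]:
        return {}  # no new release
    settings["updater_latest_version"] = version
    settings["updater_latest_changelog"] = release.get("body", "")
    return {"version": version, "changelog": release.get("body", "")}
```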
## Key Benefits
1. The above approach future-proofs Dangerzone against API changes or bugs in
the update check process, by asking users to manually visit
https://dangerzone.rocks.
2. If we want to draw the attention of users to immediately install a release,
we can do so in the release body, which we will show in a pop-up window.
3. If we are aware of issues that prevent updates, we can add them in the wiki
page that we show in the error popup. Wiki pages are not versioned, so we can
add useful info even after a release.
## Security Considerations
Because this approach does not download binaries / auto-updates, it **does not
add any more security issues** than the existing, manual way of installing
updates. These issues have to do with a compromised/malicious GitHub service, and
are the following:
1. GitHub pages can alter the contents of our main site
(https://dangerzone.rocks)
2. GitHub releases can serve an older, vulnerable version of Dangerzone, instead
of a new update.
3. GitHub releases can serve a malicious binary (requires a joint operation from
a malicious CA as well, for extra legitimacy).
4. GitHub releases can silently drop updates.
5. GitHub releases can know which users download Dangerzone updates.
6. Network attackers can know that a user has Dangerzone installed (because we ask the user to visit https://dangerzone.rocks)
A good update framework would probably defend against 1, 2, and 3. This is not to say
that our users are currently unprotected, since 1-4 can be detected by the
general public and the developers (unless GitHub specifically targets an
individual, but that's another story).
## Usability Considerations
1. We do not have an update story for users that only use the Dangerzone CLI. A
good assumption is that they are on Linux, so they get updates through their distribution's package manager.
## Alternatives
We researched a bit on this subject and found out that there are update
frameworks that do this job for us. While working on this issue, we decided that
integrating with one framework would certainly take a bit of work, especially
given that we target both Windows and macOS systems. In the meantime though, we
didn't want to have releases out without including at least a notification
channel, since staying behind on updates has a huge negative impact on the
users' safety.
The update frameworks that we learned about are:
## Sparkle Project
[Sparkle project](https://sparkle-project.org) seems to be the de-facto update
framework on macOS. In practice, integrators need to care about two things:
creating a proper `Appcast.xml` file on the server-side, and calling the Sparkle
code from the client-side. These are covered in the project's
[documentation](https://sparkle-project.org/documentation/).
The client-side part is not very straight-forward, since Sparkle is written in
Objective-C. Thankfully, there are others who have ventured into this before:
https://fman.io/blog/codesigning-and-automatic-updates-for-pyqt-apps/
The server-side part is also not very straight-forward. For integrators that use
GitHub releases (like us), this issue may be of help:
https://github.com/sparkle-project/Sparkle/issues/648
The Windows platform is not covered by Sparkle itself, but there are other
projects, such as [WinSparkle](https://winsparkle.org/), that follow a similar
approach. I see that there's a [Python library (`pywinsparkle`)](https://pypi.org/project/pywinsparkle/)
for interacting with WinSparkle, so this may alleviate some pains.
Note that the Sparkle project is not a silver bullet. Development missteps can
happen, and users can be left without updates. Here's an [example issue](https://github.com/sparkle-project/Sparkle/issues/345) that showcases this.
## The Update Framework
[The Update Framework](https://theupdateframework.io/) is a graduated CNCF
project hosted by the Linux Foundation. It's based on the Thandy
updater for Tor. It's not widely adopted, but some of its
adopters are high-profile, and it has passed security audits.
It's more of a [specification](https://theupdateframework.github.io/specification/latest/)
and less of a software project, although a well-maintained
[reference implementation (`python-tuf`)](https://github.com/theupdateframework/python-tuf)
in Python exists. Also, a [Python project (`tufup`)](https://github.com/dennisvang/tufup)
that builds upon this implementation makes it even easier to generate the
required keys and files.
Regardless of whether we use it, knowing about the [threat vectors](https://theupdateframework.io/security/) that it's protecting against is very important.
## Other Projects
* Qt has some updater framework as well: https://doc.qt.io/qtinstallerframework/ifw-updates.html
* Google Chrome has its own updater framework: https://chromium.googlesource.com/chromium/src.git/+/master/docs/updater/protocol_3_1.md
* Keepass rolls out its own way to update: https://github.com/keepassxreboot/keepassxc/blob/develop/src/updatecheck/UpdateChecker.cpp
* [PyUpdater](https://github.com/Digital-Sapphire/PyUpdater) was another popular updater project for Python, but is now archived.

53
docs/podman-desktop.md Normal file
View file

@ -0,0 +1,53 @@
# Podman Desktop support
Starting with Dangerzone 0.9.0, it is possible to use Podman Desktop on
Windows and macOS. Support for this container runtime is currently
experimental. If you try it out and encounter issues, please reach out to us;
we'll be glad to help.
With [Podman Desktop](https://podman-desktop.io/) installed on your machine,
here are the required steps to change the Dangerzone container runtime.
You will need to open a terminal and follow these steps:
## On macOS
You will need to configure podman to access the shared Dangerzone resources:
```bash
podman machine stop
podman machine rm
cat > ~/.config/containers/containers.conf <<EOF
[machine]
volumes = ["/Users:/Users", "/private:/private", "/var/folders:/var/folders", "/Applications/Dangerzone.app:/Applications/Dangerzone.app"]
EOF
podman machine init
podman machine set --rootful=false
podman machine start
```
Then, set the container runtime to podman using this command:
```bash
/Applications/Dangerzone.app/Contents/MacOS/dangerzone-cli --set-container-runtime podman
```
In order to get back to the default behaviour (Docker Desktop on macOS), pass
the `default` value instead:
```bash
/Applications/Dangerzone.app/Contents/MacOS/dangerzone-cli --set-container-runtime default
```
## On Windows
To set the container runtime to podman, use this command:
```bash
'C:\Program Files\Dangerzone\dangerzone-cli.exe' --set-container-runtime podman
```
To revert back to the default behavior, pass the `default` value:
```bash
'C:\Program Files\Dangerzone\dangerzone-cli.exe' --set-container-runtime default
```

11
docs/templates/release-notes-regular.md vendored Normal file
View file

@ -0,0 +1,11 @@
This release includes various new features, stability improvements, and security fixes **(adjust accordingly)**. If you are on a Mac or PC, please also update Docker Desktop to the latest version to get the latest security fixes.
The highlights for this release are:
- **Important accomplishment**
We used to do [this](https://github.com/freedomofpress/dangerzone/issues/1), but now we do [that](https://github.com/freedomofpress/dangerzone/issues/2).
- **Support for a new platform**
We added support for a new platform ([#3](https://github.com/freedomofpress/dangerzone/issues/3))
- **Community contributions**
<!-- Acknowledge all contributions and talk about highlights -->
For a full list of the changes, see our [changelog](https://github.com/freedomofpress/dangerzone/blob/<RELEASE_TAG>/CHANGELOG.md#<RELEASE_ANCHOR>).

Some files were not shown because too many files have changed in this diff Show more