Compare commits


27 commits

Author SHA1 Message Date
Alex Pyrgiotis
ac6931f863
FIXUP: Strip 'v' from image tag 2025-01-22 00:28:22 +02:00
Alex Pyrgiotis
005d742113
Mask some extra paths in gVisor's OCI config
Mask some paths of the outer container in the OCI config of the inner
container. This is done to avoid leaking any sensitive information from
Podman / Docker / gVisor, since we reuse the same rootfs.

Refs #1048
2025-01-22 00:28:22 +02:00
Alex Pyrgiotis
e377e1858e
docs: Add design document for artifact reproducibility
Refs #1047
2025-01-21 22:25:40 +02:00
Alex Pyrgiotis
aa24b8b919
Update RELEASE.md
Co-authored-by: Alexis Métaireau <alexis@freedom.press>
2025-01-21 22:25:40 +02:00
Alex Pyrgiotis
4044635dc9
ci: Add a CI job that enforces image reproducibility
Add a CI job that uses the `reproduce.py` dev script to enforce image
reproducibility, for every PR that we send to the repo.

Fixes #1047
2025-01-21 22:25:40 +02:00
Alex Pyrgiotis
de364926d5
dev_scripts: Add script for enforcing image reproducibility
Add a dev script for Linux platforms that verifies that a source image
can be reproducibly built from the current Git commit. The
reproducibility check is enforced by the `diffoci` tool, which is
downloaded as part of running the script.
2025-01-21 22:25:39 +02:00
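For reference, a minimal usage sketch of the script above (the script path and example tag are assumptions; the `--source` flag and its `podman://` default come from the `reproduce.py` diff further down this page):

```bash
# Sketch only: the script path and the tag value are assumed, not confirmed here.
# --source defaults to podman://dangerzone.rocks/dangerzone:<git describe tag>.
python3 ./dev_scripts/reproduce.py \
    --source podman://dangerzone.rocks/dangerzone:0.8.0-125-gac6931f
```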
Alex Pyrgiotis
8d357d2d48
Allow setting a tag for the container image
Allow setting a tag for the container image, when building it with the
`build-image.py` script. This should be used for development purposes
only, since the proper image name should be dictated by the script.
2025-01-21 22:13:30 +02:00
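A hypothetical invocation for the commit above (both the script location and the flag name are assumptions, not shown in this diff):

```bash
# Development builds only: tag the image explicitly instead of using the
# name dictated by the script. Path and --tag flag name are assumptions.
python3 ./install/common/build-image.py --tag my-dev-tag
```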
Alex Pyrgiotis
e3acd21506
ci: Scan the latest image for CVEs
Update the Debian snapshot date to the current one, so that we always
scan the latest image for CVEs.

Refs #1057
2025-01-21 22:13:30 +02:00
Alex Pyrgiotis
200a8181d7
Render the Dockerfile from a template and some params
Allow updating the Dockerfile from a template and some environment variables,
so that it's easier to bump the dates in it.
2025-01-21 22:13:29 +02:00
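As a rough illustration of the templating flow above (file and variable names are assumptions; `jinja2` is the command provided by the `jinja2-cli` dependency added in the commit listed next):

```bash
# Illustrative only: the template file name and the -D variables are assumed.
jinja2 Dockerfile.in \
    -D DEBIAN_IMAGE_DATE=20250120 \
    -D GVISOR_ARCHIVE_DATE=20250120 \
    > Dockerfile
```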
Alex Pyrgiotis
74e01af835
Add jinja2-cli package dependency
Add jinja2-cli as a package dependency, since it will be used to create
the Dockerfile from some user parameters and a template.
2025-01-21 22:13:29 +02:00
Alex Pyrgiotis
ddaf7fef12
Allow using the container engine cache when building our image
Remove our suggestions for not using the container cache, which stemmed
from the fact that our Dangerzone image was not reproducible. Now that
we have switched to Debian Stable and the Dockerfile is all we need to
reproducibly build the exact same container image, we can just use the
cache to speed up builds.
2025-01-21 22:13:29 +02:00
Alex Pyrgiotis
5590563ef8
Rename vendor-pymupdf.py to debian-vendor-pymupdf.py
Rename the `vendor-pymupdf.py` script to `debian-vendor-pymupdf.py`,
since it's used only when building Debian packages.
2025-01-21 22:13:29 +02:00
Alex Pyrgiotis
294e7bdbdc
Do not use poetry.lock when building the container image
Remove all the scaffolding in our `build-image.py` script for using the
`poetry.lock` file, now that we install PyMuPDF from the Debian repos.
2025-01-21 22:13:29 +02:00
Alex Pyrgiotis
2f0733f36a
Switch base image to Debian Stable
Switch base image from Alpine Linux to Debian Stable, in order to reduce
our image footprint, improve our security posture, and build our
container image reproducibly.

Fixes #1046
Refs #1047
2025-01-21 22:13:29 +02:00
Alex Pyrgiotis
464abedf59
Reuse the same rootfs for the inner and outer container
Remove the need to copy the Dangerzone container image (used by the
inner container) within a wrapper gVisor image (used by the outer
container). Instead, use the root of the container filesystem for both
containers. We can do this safely because we don't mount any secrets to
the container, and because gVisor offers a read-only view of the
underlying filesystem.

Fixes #1048
2025-01-21 22:13:29 +02:00
Alex Pyrgiotis
4595b3ad4a
Copy gVisor public key and a helper script in container helpers
Download and copy the following artifacts that will be used for building
a Debian-based Dangerzone container image in the subsequent commits:
* The APT key for the gVisor repo [1]
* A helper script for building reproducible Debian images [2]

[1] https://gvisor.dev/archive.key
[2] d15cf12b26/repro-sources-list.sh
2025-01-21 22:13:26 +02:00
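A sketch of how these two artifacts could be fetched (destination file names are assumptions; the URLs are the ones referenced above, with the pinned helper script rewritten as a raw download):

```bash
# Destination file names are assumed; the commit pin matches the reference above.
curl -fsSL https://gvisor.dev/archive.key \
    -o dangerzone/container_helpers/gvisor.key
curl -fsSL https://raw.githubusercontent.com/reproducible-containers/repro-sources-list.sh/d15cf12b26395b857b24fba223b108aff1c91b26/repro-sources-list.sh \
    -o dangerzone/container_helpers/repro-sources-list.sh
```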
Alex Pyrgiotis
2c32a11159
Move container-only build context to dangerzone/container
Move container-only build context (currently just the entrypoint script)
from `dangerzone/gvisor_wrapper` to `dangerzone/container_helpers`.
Update the rest of the scripts to use this location as well.
2025-01-21 21:37:49 +02:00
Alex Pyrgiotis
aaa140629c
Whitespace fixes 2025-01-21 21:22:06 +02:00
Alexis Métaireau
23f3ad1f46
doc: bump the Docker Desktop version as part of the RELEASE procedure
2025-01-21 10:21:24 +01:00
Alexis Métaireau
970a82f432
Bind Alert instances to the main window alert property 2025-01-21 10:21:24 +01:00
Alexis Métaireau
3d5cacfffb
Warn users if the minimum version of Docker Desktop is not met
This only happens on Windows and macOS.

Fixes #693
2025-01-21 10:21:24 +01:00
Alexis Métaireau
c407e2ff84
doc: update Debian Trixie installation instructions
Starting with Debian Trixie, `apt secure` relies on `sqv` to do its verification, which doesn't support the GPG keybox database format.

At the same time, using the standard PGP base64 format makes the verification fail for versions of `apt secure` that rely on `gpg`, as the subkey isn't detected there.

Fixes #1055
2025-01-20 14:10:15 +01:00
Alexis Métaireau
7f418118e6
CI: Drop Fedora 39 from the CI checks
2025-01-16 11:51:22 +01:00
Alexis Métaireau
02602b072a
Remove intermediate variables for conversion start/end logs
Also, state that the logs are incomplete in the header.
2025-01-16 11:35:07 +01:00
Alexis Métaireau
acf20ef700
Add a --debug flag to the CLI to help retrieve more logs
When the flag is set, the `RUNSC_DEBUG=1` environment variable is added
to the outer container, and stderr is captured in a separate thread, before printing its output.
2025-01-16 11:35:06 +01:00
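A usage sketch for the flag above (the CLI entry-point name is an assumption; the `--debug` flag itself appears in the `cli.py` diff below):

```bash
# Entry-point name assumed. --debug sets RUNSC_DEBUG=1 in the outer container
# and streams its stderr into the conversion log.
dangerzone-cli --debug untrusted-document.pdf
```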
Alexis Métaireau
3499010d8e
docs(install): store GPG keys in the base64 format 2025-01-15 19:48:00 +01:00
Alexis Métaireau
2423fc18c5
CI: Store the signature key using the base64 format
The GPG binary format used until now doesn't seem to please `sqv`, which is
now used by default on Debian Trixie.

Fixes #1052
2025-01-15 19:39:02 +01:00
14 changed files with 433 additions and 44 deletions


@@ -46,16 +46,30 @@ jobs:
apt update
apt-get install python-all -y
- name: Add GPG key for the packages.freedom.press
- name: Add packages.freedom.press PGP key (gpg)
if: matrix.version != 'trixie'
run: |
apt-get update && apt-get install -y gnupg2 ca-certificates
dirmngr # NOTE: This is a command that's necessary only in containers
# The key needs to be in the GPG keybox database format so the
# signing subkey is detected by apt-secure.
gpg --keyserver hkps://keys.openpgp.org \
--no-default-keyring --keyring ./fpf-apt-tools-archive-keyring.gpg \
--recv-keys "DE28 AB24 1FA4 8260 FAC9 B8BA A7C9 B385 2260 4281"
mkdir -p /etc/apt/keyrings/
mv fpf-apt-tools-archive-keyring.gpg /etc/apt/keyrings
mv ./fpf-apt-tools-archive-keyring.gpg /etc/apt/keyrings/.
- name: Add packages.freedom.press PGP key (sq)
if: matrix.version == 'trixie'
run: |
apt-get update && apt-get install -y ca-certificates sq
mkdir -p /etc/apt/keyrings/
# On debian trixie, apt-secure uses `sqv` to verify the signatures
# so we need to retrieve PGP keys and store them using the base64 format.
sq network keyserver \
--server hkps://keys.openpgp.org \
search "DE28 AB24 1FA4 8260 FAC9 B8BA A7C9 B385 2260 4281" \
--output /etc/apt/keyrings/fpf-apt-tools-archive-keyring.gpg
- name: Add packages.freedom.press to our APT sources
run: |
. /etc/os-release
@@ -75,8 +89,6 @@ jobs:
strategy:
matrix:
include:
- distro: fedora
version: 39
- distro: fedora
version: 40
- distro: fedora


@@ -84,9 +84,20 @@ Dangerzone is available for:
</tr>
</table>
Add our repository following these instructions:
First, retrieve the PGP keys.
Download the GPG key for the repo:
Starting with Trixie, follow these instructions to download the PGP keys:
```bash
sudo apt-get update && sudo apt-get install sq -y
mkdir -p /etc/apt/keyrings/
sq network keyserver \
--server hkps://keys.openpgp.org \
search "DE28 AB24 1FA4 8260 FAC9 B8BA A7C9 B385 2260 4281" \
--output /etc/apt/keyrings/fpf-apt-tools-archive-keyring.gpg
```
On other Debian-derivatives:
```sh
sudo apt-get update && sudo apt-get install gnupg2 ca-certificates -y
@@ -94,10 +105,12 @@ gpg --keyserver hkps://keys.openpgp.org \
--no-default-keyring --keyring ./fpf-apt-tools-archive-keyring.gpg \
--recv-keys "DE28 AB24 1FA4 8260 FAC9 B8BA A7C9 B385 2260 4281"
sudo mkdir -p /etc/apt/keyrings/
sudo mv fpf-apt-tools-archive-keyring.gpg /etc/apt/keyrings
sudo gpg --no-default-keyring --keyring ./fpf-apt-tools-archive-keyring.gpg \
--armor --export "DE28 AB24 1FA4 8260 FAC9 B8BA A7C9 B385 2260 4281" \
> /etc/apt/keyrings/fpf-apt-tools-archive-keyring.gpg
```
Add the URL of the repo in your APT sources:
Then, on all distributions, add the URL of the repo in your APT sources:
```sh
. /etc/os-release


@@ -8,12 +8,13 @@ Here is a list of tasks that should be done before issuing the release:
- [ ] Create a new issue named **QA and Release for version \<VERSION\>**, to track the general progress.
You can generate its content with the `poetry run ./dev_scripts/generate-release-tasks.py` command.
- [ ] [Add new Linux platforms and remove obsolete ones](https://github.com/freedomofpress/dangerzone/blob/main/RELEASE.md#add-new-platforms-and-remove-obsolete-ones)
- [ ] [Add new Linux platforms and remove obsolete ones](https://github.com/freedomofpress/dangerzone/blob/main/RELEASE.md#add-new-linux-platforms-and-remove-obsolete-ones)
- [ ] Bump the Python dependencies using `poetry lock`
- [ ] Update `version` in `pyproject.toml`
- [ ] Update `share/version.txt`
- [ ] Update the "Version" field in `install/linux/dangerzone.spec`
- [ ] Bump the Debian version by adding a new changelog entry in `debian/changelog`
- [ ] [Bump the minimum Docker Desktop versions](https://github.com/freedomofpress/dangerzone/blob/main/RELEASE.md#bump-the-minimum-docker-desktop-version) in `isolation_provider/container.py`
- [ ] Bump the dates and versions in the `Dockerfile`
- [ ] Update screenshot in `README.md`, if necessary
- [ ] CHANGELOG.md should be updated to include a list of all major changes since the last release
@@ -47,6 +48,12 @@ In case of the removal of a version:
* Consult the previous paragraph, but also `grep` your way around.
2. Add a notice in our `CHANGELOG.md` about the version removal.
## Bump the minimum Docker Desktop version
We embed the minimum Docker Desktop versions inside Dangerzone, as an incentive for our macOS and Windows users to upgrade to the latest version.
You can find the latest version at the time of the release by looking at [their release notes](https://docs.docker.com/desktop/release-notes/).
## Large Document Testing
Parallel to the QA process, the release candidate should be put through the large document tests in a dedicated machine to run overnight.

THIRD_PARTY_NOTICE (new file, 14 lines added)

@@ -0,0 +1,14 @@
This project includes third-party components as follows:
1. gVisor APT Key
- URL: https://gvisor.dev/archive.key
- Last updated: 2025-01-21
- Description: This is the public key used for verifying packages from the gVisor repository.
2. Reproducible Containers Helper Script
- URL: https://github.com/reproducible-containers/repro-sources-list.sh/blob/d15cf12b26395b857b24fba223b108aff1c91b26/repro-sources-list.sh
- Last updated: 2025-01-21
- Description: This script is used for building reproducible Debian images.
Please refer to the respective sources for licensing information and further details regarding the use of these components.


@@ -42,6 +42,12 @@ def print_header(s: str) -> None:
type=click.UNPROCESSED,
callback=args.validate_input_filenames,
)
@click.option(
"--debug",
"debug",
flag_value=True,
help="Run Dangerzone in debug mode, to get logs from gVisor.",
)
@click.version_option(version=get_version(), message="%(version)s")
@errors.handle_document_errors
def cli_main(
@@ -50,6 +56,7 @@ def cli_main(
filenames: List[str],
archive: bool,
dummy_conversion: bool,
debug: bool,
) -> None:
setup_logging()
@@ -58,7 +65,7 @@
elif is_qubes_native_conversion():
dangerzone = DangerzoneCore(Qubes())
else:
dangerzone = DangerzoneCore(Container())
dangerzone = DangerzoneCore(Container(debug=debug))
display_banner()
if len(filenames) == 1 and output_filename:


@@ -59,10 +59,28 @@ oci_config: dict[str, typing.Any] = {
"root": {"path": "/", "readonly": True},
"hostname": "dangerzone",
"mounts": [
# Mask almost every system directory of the outer container, by mounting tmpfs
# on top of them. This is done to avoid leaking any sensitive information,
# either mounted by Podman/Docker, or when gVisor runs, since we reuse the same
# rootfs. We basically mask everything except for `/usr`, `/bin`, `/lib`,
# and `/etc`.
#
# Note that we set `--root /home/dangerzone/.containers` for the directory where
# gVisor will create files at runtime, which means that in principle, we are
# covered by the masking of `/home/dangerzone` that follows below.
#
# Finally, note that the following list has been taken from the dirs in our
# container image, and double-checked against the top-level dirs listed in the
# Filesystem Hierarchy Standard (FHS) [1]. It would be nice to have an allowlist
# approach instead of a denylist, but FHS is such an old standard that we don't
# expect any new top-level dirs to pop up any time soon.
#
# [1] https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard
{
"destination": "/proc",
"type": "proc",
"source": "proc",
"destination": "/boot",
"type": "tmpfs",
"source": "tmpfs",
"options": ["nosuid", "noexec", "nodev", "ro"],
},
{
"destination": "/dev",
@@ -70,6 +88,53 @@ oci_config: dict[str, typing.Any] = {
"source": "tmpfs",
"options": ["nosuid", "noexec", "nodev"],
},
{
"destination": "/home",
"type": "tmpfs",
"source": "tmpfs",
"options": ["nosuid", "noexec", "nodev", "ro"],
},
{
"destination": "/media",
"type": "tmpfs",
"source": "tmpfs",
"options": ["nosuid", "noexec", "nodev", "ro"],
},
{
"destination": "/mnt",
"type": "tmpfs",
"source": "tmpfs",
"options": ["nosuid", "noexec", "nodev", "ro"],
},
{
"destination": "/proc",
"type": "proc",
"source": "proc",
},
{
"destination": "/root",
"type": "tmpfs",
"source": "tmpfs",
"options": ["nosuid", "noexec", "nodev", "ro"],
},
{
"destination": "/run",
"type": "tmpfs",
"source": "tmpfs",
"options": ["nosuid", "noexec", "nodev"],
},
{
"destination": "/sbin",
"type": "tmpfs",
"source": "tmpfs",
"options": ["nosuid", "noexec", "nodev", "ro"],
},
{
"destination": "/srv",
"type": "tmpfs",
"source": "tmpfs",
"options": ["nosuid", "noexec", "nodev", "ro"],
},
{
"destination": "/sys",
"type": "tmpfs",
@@ -82,6 +147,27 @@ oci_config: dict[str, typing.Any] = {
"source": "tmpfs",
"options": ["nosuid", "noexec", "nodev"],
},
{
"destination": "/var",
"type": "tmpfs",
"source": "tmpfs",
"options": ["nosuid", "noexec", "nodev"],
},
# Also mask some files that are usually mounted by Docker / Podman. These files
# should not contain any sensitive information, since we use the `--network
# none` flag, but we want to make sure in any case.
{
"destination": "/etc/hostname",
"type": "bind",
"source": "/dev/null",
"options": ["rbind", "ro"],
},
{
"destination": "/etc/hosts",
"type": "bind",
"source": "/dev/null",
"options": ["rbind", "ro"],
},
# LibreOffice needs a writable home directory, so just mount a tmpfs
# over it.
{


@@ -124,6 +124,7 @@ class MainWindow(QtWidgets.QMainWindow):
self.setWindowTitle("Dangerzone")
self.setWindowIcon(self.dangerzone.get_window_icon())
self.alert: Optional[Alert] = None
self.setMinimumWidth(600)
if platform.system() == "Darwin":
@@ -226,6 +227,13 @@
# This allows us to make QSS rules conditional on the OS color mode.
self.setProperty("OSColorMode", self.dangerzone.app.os_color_mode.value)
if hasattr(self.dangerzone.isolation_provider, "check_docker_desktop_version"):
is_version_valid, version = (
self.dangerzone.isolation_provider.check_docker_desktop_version()
)
if not is_version_valid:
self.handle_docker_desktop_version_check(is_version_valid, version)
self.show()
def show_update_success(self) -> None:
@@ -279,6 +287,46 @@
self.dangerzone.settings.set("updater_check", check)
self.dangerzone.settings.save()
def handle_docker_desktop_version_check(
self, is_version_valid: bool, version: str
) -> None:
hamburger_menu = self.hamburger_button.menu()
sep = hamburger_menu.insertSeparator(hamburger_menu.actions()[0])
upgrade_action = QAction("Docker Desktop should be upgraded", hamburger_menu)
upgrade_action.setIcon(
QtGui.QIcon(
load_svg_image(
"hamburger_menu_update_dot_error.svg", width=64, height=64
)
)
)
message = """
<p>A new version of Docker Desktop is available. Please upgrade your system.</p>
<p>Visit the <a href="https://www.docker.com/products/docker-desktop">Docker Desktop website</a> to download the latest version.</p>
<em>Keeping Docker Desktop up to date allows you to have more confidence that your documents are processed safely.</em>
"""
self.alert = Alert(
self.dangerzone,
title="Upgrade Docker Desktop",
message=message,
ok_text="Ok",
has_cancel=False,
)
def _launch_alert() -> None:
if self.alert:
self.alert.launch()
upgrade_action.triggered.connect(_launch_alert)
hamburger_menu.insertAction(sep, upgrade_action)
self.hamburger_button.setIcon(
QtGui.QIcon(
load_svg_image("hamburger_menu_update_error.svg", width=64, height=64)
)
)
def handle_updates(self, report: UpdateReport) -> None:
"""Handle update reports from the update checker thread.
@@ -365,7 +413,7 @@ class MainWindow(QtWidgets.QMainWindow):
self.content_widget.show()
def closeEvent(self, e: QtGui.QCloseEvent) -> None:
alert_widget = Alert(
self.alert = Alert(
self.dangerzone,
message="Some documents are still being converted.\n Are you sure you want to quit?",
ok_text="Abort conversions",
@@ -379,7 +427,7 @@ class MainWindow(QtWidgets.QMainWindow):
else:
self.dangerzone.app.exit(0)
else:
accept_exit = alert_widget.launch()
accept_exit = self.alert.launch()
if not accept_exit:
e.ignore()
return
@@ -623,7 +671,7 @@ class ContentWidget(QtWidgets.QWidget):
def documents_selected(self, docs: List[Document]) -> None:
if self.conversion_started:
Alert(
self.alert = Alert(
self.dangerzone,
message="Dangerzone does not support adding documents after the conversion has started.",
has_cancel=False,
@@ -633,7 +681,7 @@ class ContentWidget(QtWidgets.QWidget):
# Ensure all files in batch are in the same directory
dirnames = {os.path.dirname(doc.input_filename) for doc in docs}
if len(dirnames) > 1:
Alert(
self.alert = Alert(
self.dangerzone,
message="Dangerzone does not support adding documents from multiple locations.\n\n The newly added documents were ignored.",
has_cancel=False,
@@ -802,14 +850,14 @@ class DocSelectionDropFrame(QtWidgets.QFrame):
text = f"{num_unsupported_docs} files are not supported."
ok_text = "Continue without these files"
alert_widget = Alert(
self.alert = Alert(
self.dangerzone,
message=f"{text}\nThe supported extensions are: "
+ ", ".join(get_supported_extensions()),
ok_text=ok_text,
)
return alert_widget.exec_()
return self.alert.exec_()
class SettingsWidget(QtWidgets.QWidget):


@@ -5,7 +5,9 @@ import platform
import signal
import subprocess
import sys
import threading
from abc import ABC, abstractmethod
from io import BytesIO
from typing import IO, Callable, Iterator, Optional
import fitz
@@ -18,10 +20,6 @@ from ..util import get_tessdata_dir, replace_control_chars
log = logging.getLogger(__name__)
MAX_CONVERSION_LOG_CHARS = 150 * 50 # up to ~150 lines of 50 characters
DOC_TO_PIXELS_LOG_START = "----- DOC TO PIXELS LOG START -----"
DOC_TO_PIXELS_LOG_END = "----- DOC TO PIXELS LOG END -----"
TIMEOUT_EXCEPTION = 15
TIMEOUT_GRACE = 15
TIMEOUT_FORCE = 5
@@ -75,9 +73,9 @@ def read_int(f: IO[bytes]) -> int:
return int.from_bytes(untrusted_int, "big", signed=False)
def read_debug_text(f: IO[bytes], size: int) -> str:
"""Read arbitrarily long text (for debug purposes), and sanitize it."""
untrusted_text = f.read(size).decode("ascii", errors="replace")
def sanitize_debug_text(text: bytes) -> str:
"""Read all the buffer and return a sanitized version"""
untrusted_text = text.decode("ascii", errors="replace")
return replace_control_chars(untrusted_text, keep_newlines=True)
@@ -86,12 +84,16 @@ class IsolationProvider(ABC):
Abstracts an isolation provider
"""
def __init__(self) -> None:
if getattr(sys, "dangerzone_dev", False) is True:
def __init__(self, debug: bool = False) -> None:
self.debug = debug
if self.should_capture_stderr():
self.proc_stderr = subprocess.PIPE
else:
self.proc_stderr = subprocess.DEVNULL
def should_capture_stderr(self) -> bool:
return self.debug or getattr(sys, "dangerzone_dev", False)
@abstractmethod
def install(self) -> bool:
pass
@@ -327,7 +329,11 @@
timeout_force: int = TIMEOUT_FORCE,
) -> Iterator[subprocess.Popen]:
"""Start a conversion process, pass it to the caller, and then clean it up."""
# Store the proc stderr in memory
stderr = BytesIO()
p = self.start_doc_to_pixels_proc(document)
stderr_thread = self.start_stderr_thread(p, stderr)
if platform.system() != "Windows":
assert os.getpgid(p.pid) != os.getpgid(
os.getpid()
@@ -343,15 +349,40 @@
document, p, timeout_grace=timeout_grace, timeout_force=timeout_force
)
# Read the stderr of the process only if:
# * Dev mode is enabled.
# * The process has exited (else we risk hanging).
if getattr(sys, "dangerzone_dev", False) and p.poll() is not None:
assert p.stderr
debug_log = read_debug_text(p.stderr, MAX_CONVERSION_LOG_CHARS)
if stderr_thread:
# Wait for the thread to complete. If it's still alive, mention it in the debug log.
stderr_thread.join(timeout=1)
debug_bytes = stderr.getvalue()
debug_log = sanitize_debug_text(debug_bytes)
incomplete = "(incomplete) " if stderr_thread.is_alive() else ""
log.info(
"Conversion output (doc to pixels)\n"
f"{DOC_TO_PIXELS_LOG_START}\n"
f"----- DOC TO PIXELS LOG START {incomplete}-----\n"
f"{debug_log}" # no need for an extra newline here
f"{DOC_TO_PIXELS_LOG_END}"
"----- DOC TO PIXELS LOG END -----"
)
def start_stderr_thread(
self, process: subprocess.Popen, stderr: IO[bytes]
) -> Optional[threading.Thread]:
"""Start a thread to read stderr from the process"""
def _stream_stderr(process_stderr: IO[bytes]) -> None:
try:
for line in process_stderr:
stderr.write(line)
except (ValueError, IOError) as e:
log.debug(f"Stderr stream closed: {e}")
if process.stderr:
stderr_thread = threading.Thread(
target=_stream_stderr,
args=(process.stderr,),
daemon=True,
)
stderr_thread.start()
return stderr_thread
return None


@@ -3,7 +3,7 @@ import os
import platform
import shlex
import subprocess
from typing import List
from typing import List, Tuple
from .. import container_utils, errors
from ..document import Document
@@ -11,7 +11,10 @@ from ..util import get_resource_path, get_subprocess_startupinfo
from .base import IsolationProvider, terminate_process_group
TIMEOUT_KILL = 5 # Timeout in seconds until the kill command returns.
MINIMUM_DOCKER_DESKTOP = {
"Darwin": "4.36.0",
"Windows": "4.36.0",
}
# Define startupinfo for subprocesses
if platform.system() == "Windows":
@@ -121,6 +124,7 @@ class Container(IsolationProvider):
def is_available() -> bool:
container_runtime = container_utils.get_runtime()
runtime_name = container_utils.get_runtime_name()
# Can we run `docker/podman image ls` without an error
with subprocess.Popen(
[container_runtime, "image", "ls"],
@@ -135,6 +139,28 @@
)
return True
def check_docker_desktop_version(self) -> Tuple[bool, str]:
# On windows and darwin, check that the minimum version is met
version = ""
if platform.system() != "Linux":
with subprocess.Popen(
["docker", "version", "--format", "{{.Server.Platform.Name}}"],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
startupinfo=get_subprocess_startupinfo(),
) as p:
stdout, stderr = p.communicate()
if p.returncode != 0:
# When an error occurs, consider that the check went
# through, as we're checking for installation compatibility
# somewhere else already
return True, version
# The output is like "Docker Desktop 4.35.1 (173168)"
version = stdout.decode().replace("Docker Desktop", "").split()[0]
if version < MINIMUM_DOCKER_DESKTOP[platform.system()]:
return False, version
return True, version
def doc_to_pixels_container_name(self, document: Document) -> str:
"""Unique container name for the doc-to-pixels phase."""
return f"dangerzone-doc-to-pixels-{document.id}"
@@ -168,6 +194,10 @@
) -> subprocess.Popen:
container_runtime = container_utils.get_runtime()
security_args = self.get_runtime_security_args()
debug_args = []
if self.debug:
debug_args += ["-e", "RUNSC_DEBUG=1"]
enable_stdin = ["-i"]
set_name = ["--name", name]
prevent_leakage_args = ["--rm"]
@@ -177,14 +207,14 @@
args = (
["run"]
+ security_args
+ debug_args
+ prevent_leakage_args
+ enable_stdin
+ set_name
+ image_name
+ command
)
args = [container_runtime] + args
return self.exec(args)
return self.exec([container_runtime] + args)
def kill_container(self, name: str) -> None:
"""Terminate a spawned container.


@@ -71,6 +71,7 @@ class DangerzoneCore(object):
ocr_lang,
stdout_callback,
)
except Exception:
log.exception(
f"Unexpected error occurred while converting '{document}'"


@@ -34,7 +34,7 @@ def git_commit_get():
def git_determine_tag():
return run("git", "describe", "--long", "--first-parent").decode().strip()
return run("git", "describe", "--long", "--first-parent").decode().strip()[1:]
def git_verify(commit, source):
@@ -116,7 +116,7 @@ def parse_args():
image_tag = git_determine_tag()
# TODO: Remove the local "podman://" prefix once we have started pushing images to a
# remote.
default_image_name = "podman://" + IMAGE_NAME + ":" + image_tag
default_image_name = f"podman://{IMAGE_NAME}:{image_tag}"
parser = argparse.ArgumentParser(
prog=sys.argv[0],
@@ -124,7 +124,6 @@
)
parser.add_argument(
"--source",
required=True,
default=default_image_name,
help=(
"The name of the image that you want to reproduce. If the image resides in"
@@ -158,7 +157,7 @@ def main():
diffoci_download()
tag = f"reproduce-{commit}"
target = f"dangerzone.rocks/dangerzone:{tag}"
target = f"{IMAGE_NAME}:{tag}"
logger.info(f"Building container image and tagging it as '{target}'")
build_image(tag, args.use_cache)

test (new file, empty)


@@ -587,3 +587,57 @@ def test_installation_failure_return_false(qtbot: QtBot, mocker: MockerFixture)
assert "the following error occured" in widget.label.text()
assert "The image cannot be found" in widget.traceback.toPlainText()
def test_up_to_date_docker_desktop_does_nothing(
qtbot: QtBot, mocker: MockerFixture
) -> None:
# Setup install to return False
mock_app = mocker.MagicMock()
dummy = mocker.MagicMock(spec=Container)
dummy.check_docker_desktop_version.return_value = (True, "1.0.0")
dz = DangerzoneGui(mock_app, dummy)
window = MainWindow(dz)
qtbot.addWidget(window)
menu_actions = window.hamburger_button.menu().actions()
assert "Docker Desktop should be upgraded" not in [
a.toolTip() for a in menu_actions
]
def test_outdated_docker_desktop_displays_warning(
qtbot: QtBot, mocker: MockerFixture
) -> None:
# Setup install to return False
mock_app = mocker.MagicMock()
dummy = mocker.MagicMock(spec=Container)
dummy.check_docker_desktop_version.return_value = (False, "1.0.0")
dz = DangerzoneGui(mock_app, dummy)
load_svg_spy = mocker.spy(main_window_module, "load_svg_image")
window = MainWindow(dz)
qtbot.addWidget(window)
menu_actions = window.hamburger_button.menu().actions()
assert menu_actions[0].toolTip() == "Docker Desktop should be upgraded"
# Check that the hamburger icon has changed with the expected SVG image.
assert load_svg_spy.call_count == 4
assert (
load_svg_spy.call_args_list[2].args[0] == "hamburger_menu_update_dot_error.svg"
)
alert_spy = mocker.spy(window.alert, "launch")
# Clicking the menu item should open a warning message
def _check_alert_displayed() -> None:
alert_spy.assert_any_call()
if window.alert:
window.alert.close()
QtCore.QTimer.singleShot(0, _check_alert_displayed)
menu_actions[0].trigger()


@@ -1,4 +1,5 @@
import os
import platform
import pytest
from pytest_mock import MockerFixture
@@ -108,6 +109,92 @@ class TestContainer(IsolationProviderTest):
with pytest.raises(errors.ImageNotPresentException):
provider.install()
@pytest.mark.skipif(
platform.system() not in ("Windows", "Darwin"),
reason="macOS and Windows specific",
)
def test_old_docker_desktop_version_is_detected(
self, mocker: MockerFixture, provider: Container, fp: FakeProcess
) -> None:
fp.register_subprocess(
[
"docker",
"version",
"--format",
"{{.Server.Platform.Name}}",
],
stdout="Docker Desktop 1.0.0 (173100)",
)
mocker.patch(
"dangerzone.isolation_provider.container.MINIMUM_DOCKER_DESKTOP",
{"Darwin": "1.0.1", "Windows": "1.0.1"},
)
assert (False, "1.0.0") == provider.check_docker_desktop_version()
@pytest.mark.skipif(
platform.system() not in ("Windows", "Darwin"),
reason="macOS and Windows specific",
)
def test_up_to_date_docker_desktop_version_is_detected(
self, mocker: MockerFixture, provider: Container, fp: FakeProcess
) -> None:
fp.register_subprocess(
[
"docker",
"version",
"--format",
"{{.Server.Platform.Name}}",
],
stdout="Docker Desktop 1.0.1 (173100)",
)
# Require version 1.0.1
mocker.patch(
"dangerzone.isolation_provider.container.MINIMUM_DOCKER_DESKTOP",
{"Darwin": "1.0.1", "Windows": "1.0.1"},
)
assert (True, "1.0.1") == provider.check_docker_desktop_version()
fp.register_subprocess(
[
"docker",
"version",
"--format",
"{{.Server.Platform.Name}}",
],
stdout="Docker Desktop 2.0.0 (173100)",
)
assert (True, "2.0.0") == provider.check_docker_desktop_version()
@pytest.mark.skipif(
platform.system() not in ("Windows", "Darwin"),
reason="macOS and Windows specific",
)
def test_docker_desktop_version_failure_returns_true(
self, mocker: MockerFixture, provider: Container, fp: FakeProcess
) -> None:
fp.register_subprocess(
[
"docker",
"version",
"--format",
"{{.Server.Platform.Name}}",
],
stderr="Oopsie",
returncode=1,
)
assert provider.check_docker_desktop_version() == (True, "")
@pytest.mark.skipif(
platform.system() != "Linux",
reason="Linux specific",
)
def test_linux_skips_desktop_version_check_returns_true(
self, mocker: MockerFixture, provider: Container
) -> None:
assert (True, "") == provider.check_docker_desktop_version()
class TestContainerTermination(IsolationProviderTermination):
pass