Mirror of https://github.com/freedomofpress/dangerzone.git, synced 2025-04-28 18:02:38 +02:00.
Compare commits
202 commits
Commit SHAs (the author, avatar, and date columns were empty in this capture), newest first:

d9efcd8a26, a127eef9db, 847926f59a, ec7f6b7321, 83be5fb151, 04096380ff, 21ca927b8b, 05040de212,
4014c8591b, 6cd706af10, 634b171b97, c99c424f87, 19fa11410b, 10be85b9f2, 47d732e603, d6451290db,
f0bb65cb4e, 0c741359cc, 8c61894e25, 57667a96be, 1a644e2506, 843e68cdf7, 33b2a183ce, c7121b69a3,
0b3bf89d5b, e0b10c5e40, 092eec55d1, 14a480c3a3, 9df825db5c, 2ee22a497a, b5c09e51d8, 37c7608c0f,
972b264236, e38d8e5db0, f92833cdff, 07aad5edba, e8ca12eb11, 491cca6341, 0a7b79f61a, 86eab5d222,
ed39c056bb, 983622fe59, 8e99764952, 20cd9cfc5c, f082641b71, c0215062bc, b551a4dec4, 5a56a7f055,
ab6dd9c01d, dfcb74b427, a910ccc273, d868699bab, d6adfbc6c1, 687bd8585f, b212bfc47e, bbc90be217,
2d321bf257, 8bfeae4eed, 3ed71e8ee0, fa8e8c6dbb, 8d05b5779d, e1dbdff1da, a1402d5b6b, 51f432be6b,
69234507c4, 94fad78f94, 66600f32dc, d41f604969, 6d269572ae, c7ba9ee75c, 418b68d4ca, 9ba95b5c20,
b043c97c41, 4a48a2551b, 56663023f5, 53a952235c, d2652ef6cd, a6aa66f925, 856de3fd46, 88a6b37770,
fb90243668, 9724a16d81, cf43a7a0c4, cae4187550, cfa4478ace, 2557be9bc0, 235d71354a, 5d49f5abdb,
0ce7773ca1, fa27f4b063, 8e8a515b64, 270cae1bc0, 14bb6c0e39, 033ce0986d, 935396565c, e29837cb43,
8568b4bb9d, be1fa7a395, b2f4e2d523, 7409966253, 40fb6579f6, 6ae91b024e, c2841dcc08, df5ccb3f75,
9c6c2e1051, 23f3ad1f46, 970a82f432, 3d5cacfffb, c407e2ff84, 7f418118e6, 02602b072a, acf20ef700,
3499010d8e, 2423fc18c5, 1298e9c398, 00e58a8707, 77975a8e50, 5b9e9c82fc, f4fa1f87eb, eb345562da,
d080d03f5a, 767bfa7e48, 37ec91aae2, cecfe63338, 4da6b92e12, b06d1aebed, da5490a5a1, e96b44e10a,
7624624471, fb7c2088e2, 1ea2f109cb, df3063a825, 57bb7286ef, fbe05065c9, 54ffc63c4f, bdc4cf13c4,
92d7bd6bee, 7c5a191a5c, 4bd794dbd1, 3eac00b873, ec9f8835e0, 0383081394, 25fba42022, e54567b7d4,
2a8355fb88, e22c795cb7, 909560353d, 6a5e76f2b4, 20152fac13, 6b51d56e9f, 309bd12423, 1c0a99fcd2,
4b5f4b27d7, f537d54ed2, 32641603ee, a915ae8442, 38a803085f, 2053c98c09, 3db1ca1fbb, 3fff16cc7e,
8bd9c05832, 41e78c907f, 265c1dde97, ccb302462d, 4eadc30605, abb71e0fe5, 4638444290, 68da50a6b2,
cc5ba29455, 180b9442ab, f349e16523, adddb1ecb7, 8e57d81a74, 3bcf5fc147, 60df4f7e35, 9fa3c80404,
4bf7f9cbb4, fdc27c4d3b, 23f5f96220, 5744215d99, c89988654c, 7eaa0cfe50, 9d69e3b261, 1d2a91e8c5,
82c29b2098, ce5aca4ba1, 13f38cc8a9, 57df6fdfe5, 20354e7c11, d722800a4b, 4cfc633cdb, 944d58dd8d,
f3806b96af, c4bb7c28c8, 630083bdea, 504a9e1df2, a54a8f2057, 35abd14f5f, 1bd18a175b, 96aa56a6dc,
91932046f5, c8411de433
108 changed files with 5946 additions and 2291 deletions
.github/ISSUE_TEMPLATE/bug_report_linux.yml (4 changes, vendored)

````diff
@@ -21,7 +21,7 @@ body:
       label: Linux distribution
       description: |
         What is the name and version of your Linux distribution? You can find it out with `cat /etc/os-release`
-      placeholder: Ubuntu 20.04.6 LTS
+      placeholder: Ubuntu 22.04.5 LTS
     validations:
       required: true
   - type: textarea
@@ -36,7 +36,7 @@ body:
     attributes:
       label: Podman info
       description: |
-        If the bug occurs during document conversion, or is otherwise related with Podman, please copy and paste the following commands in your terminal, and provide us with the output:
+        Please copy and paste the following commands in your terminal, and provide us with the output:

         ```shell
         podman version
````
.github/ISSUE_TEMPLATE/bug_report_macos.yml (3 changes, vendored)

````diff
@@ -48,8 +48,7 @@ body:
     attributes:
       label: Docker info
       description: |
-        If the bug occurs during document conversion, or is otherwise related
-        with Docker, please copy and paste the following commands in your
+        Please copy and paste the following commands in your
         terminal, and provide us with the output:

         ```shell
````

The same wording change appears in a second issue template (its filename was not captured):

````diff
@@ -35,8 +35,7 @@ body:
     attributes:
       label: Docker info
       description: |
-        If the bug occurs during document conversion, or is otherwise related
-        with Docker, please copy and paste the following commands in your
+        Please copy and paste the following commands in your
         terminal, and provide us with the output:

         ```shell
````
.github/workflows/build-push-image.yml (new file, 248 additions, vendored)

```yaml
name: Build and push multi-arch container image

on:
  workflow_call:
    inputs:
      registry:
        required: true
        type: string
      registry_user:
        required: true
        type: string
      image_name:
        required: true
        type: string
      reproduce:
        required: true
        type: boolean
    secrets:
      registry_token:
        required: true

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install dev. dependencies
        run: |-
          sudo apt-get update
          sudo apt-get install -y git python3-poetry --no-install-recommends
          poetry install --only package

      - name: Verify that the Dockerfile matches the committed template and params
        run: |-
          cp Dockerfile Dockerfile.orig
          make Dockerfile
          diff Dockerfile.orig Dockerfile

  prepare:
    runs-on: ubuntu-latest
    outputs:
      debian_archive_date: ${{ steps.params.outputs.debian_archive_date }}
      source_date_epoch: ${{ steps.params.outputs.source_date_epoch }}
      image: ${{ steps.params.outputs.full_image_name }}
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Compute image parameters
        id: params
        run: |
          source Dockerfile.env
          DEBIAN_ARCHIVE_DATE=$(date -u +'%Y%m%d')
          SOURCE_DATE_EPOCH=$(date -u -d ${DEBIAN_ARCHIVE_DATE} +"%s")
          TAG=${DEBIAN_ARCHIVE_DATE}-$(git describe --long --first-parent | tail -c +2)
          FULL_IMAGE_NAME=${{ inputs.registry }}/${{ inputs.image_name }}:${TAG}

          echo "debian_archive_date=${DEBIAN_ARCHIVE_DATE}" >> $GITHUB_OUTPUT
          echo "source_date_epoch=${SOURCE_DATE_EPOCH}" >> $GITHUB_OUTPUT
          echo "tag=${DEBIAN_ARCHIVE_DATE}-${TAG}" >> $GITHUB_OUTPUT
          echo "full_image_name=${FULL_IMAGE_NAME}" >> $GITHUB_OUTPUT
          echo "buildkit_image=${BUILDKIT_IMAGE}" >> $GITHUB_OUTPUT

  build:
    name: Build ${{ matrix.platform.name }} image
    runs-on: ${{ matrix.platform.runs-on }}
    needs:
      - prepare
    outputs:
      debian_archive_date: ${{ needs.prepare.outputs.debian_archive_date }}
      source_date_epoch: ${{ needs.prepare.outputs.source_date_epoch }}
      image: ${{ needs.prepare.outputs.image }}
    strategy:
      fail-fast: false
      matrix:
        platform:
          - runs-on: "ubuntu-24.04"
            name: "linux/amd64"
          - runs-on: "ubuntu-24.04-arm"
            name: "linux/arm64"
    steps:
      - uses: actions/checkout@v4

      - name: Prepare
        run: |
          platform=${{ matrix.platform.name }}
          echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV

      - name: Login to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ inputs.registry_user }}
          password: ${{ secrets.registry_token }}

      # Instructions for reproducibly building a container image are taken from:
      # https://github.com/freedomofpress/repro-build?tab=readme-ov-file#build-and-push-a-container-image-on-github-actions
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
        with:
          driver-opts: image=${{ needs.prepare.outputs.buildkit_image }}

      - name: Build and push by digest
        id: build
        uses: docker/build-push-action@v6
        with:
          context: ./dangerzone/
          file: Dockerfile
          build-args: |
            DEBIAN_ARCHIVE_DATE=${{ needs.prepare.outputs.debian_archive_date }}
            SOURCE_DATE_EPOCH=${{ needs.prepare.outputs.source_date_epoch }}
          provenance: false
          outputs: type=image,"name=${{ inputs.registry }}/${{ inputs.image_name }}",push-by-digest=true,push=true,rewrite-timestamp=true,name-canonical=true
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: Export digest
        run: |
          mkdir -p ${{ runner.temp }}/digests
          digest="${{ steps.build.outputs.digest }}"
          touch "${{ runner.temp }}/digests/${digest#sha256:}"
          echo "Image digest is: ${digest}"

      - name: Upload digest
        uses: actions/upload-artifact@v4
        with:
          name: digests-${{ env.PLATFORM_PAIR }}
          path: ${{ runner.temp }}/digests/*
          if-no-files-found: error
          retention-days: 1

  merge:
    runs-on: ubuntu-latest
    needs:
      - build
    outputs:
      debian_archive_date: ${{ needs.build.outputs.debian_archive_date }}
      source_date_epoch: ${{ needs.build.outputs.source_date_epoch }}
      image: ${{ needs.build.outputs.image }}
      digest_root: ${{ steps.image.outputs.digest_root }}
      digest_amd64: ${{ steps.image.outputs.digest_amd64 }}
      digest_arm64: ${{ steps.image.outputs.digest_arm64 }}
    steps:
      - uses: actions/checkout@v4

      - name: Download digests
        uses: actions/download-artifact@v4
        with:
          path: ${{ runner.temp }}/digests
          pattern: digests-*
          merge-multiple: true

      - name: Login to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ inputs.registry_user }}
          password: ${{ secrets.registry_token }}

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
        with:
          driver-opts: image=${{ env.BUILDKIT_IMAGE }}

      - name: Create manifest list and push
        working-directory: ${{ runner.temp }}/digests
        run: |
          DIGESTS=$(printf '${{ needs.build.outputs.image }}@sha256:%s ' *)
          docker buildx imagetools create -t ${{ needs.build.outputs.image }} ${DIGESTS}

      - name: Inspect image
        id: image
        run: |
          # Inspect the image
          docker buildx imagetools inspect ${{ needs.build.outputs.image }}
          docker buildx imagetools inspect ${{ needs.build.outputs.image }} --format "{{json .Manifest}}" > manifest

          # Calculate and print the digests
          digest_root=$(jq -r .digest manifest)
          digest_amd64=$(jq -r '.manifests[] | select(.platform.architecture=="amd64") | .digest' manifest)
          digest_arm64=$(jq -r '.manifests[] | select(.platform.architecture=="arm64") | .digest' manifest)

          echo "The image digests are:"
          echo "  Root: $digest_root"
          echo "  linux/amd64: $digest_amd64"
          echo "  linux/arm64: $digest_arm64"

          # NOTE: Set the digests as an output because the `env` context is not
          # available to the inputs of a reusable workflow call.
          echo "digest_root=$digest_root" >> "$GITHUB_OUTPUT"
          echo "digest_amd64=$digest_amd64" >> "$GITHUB_OUTPUT"
          echo "digest_arm64=$digest_arm64" >> "$GITHUB_OUTPUT"

  # This step calls the container workflow to generate provenance and push it to
  # the container registry.
  provenance:
    needs:
      - merge
    strategy:
      matrix:
        manifest_type:
          - root
          - amd64
          - arm64
    permissions:
      actions: read # for detecting the Github Actions environment.
      id-token: write # for creating OIDC tokens for signing.
      packages: write # for uploading attestations.
    uses: slsa-framework/slsa-github-generator/.github/workflows/generator_container_slsa3.yml@v2.1.0
    with:
      digest: ${{ needs.merge.outputs[format('digest_{0}', matrix.manifest_type)] }}
      image: ${{ needs.merge.outputs.image }}
      registry-username: ${{ inputs.registry_user }}
    secrets:
      registry-password: ${{ secrets.registry_token }}

  # This step ensures that the image is reproducible
  check-reproducibility:
    if: ${{ inputs.reproduce }}
    needs:
      - merge
    runs-on: ${{ matrix.platform.runs-on }}
    strategy:
      fail-fast: false
      matrix:
        platform:
          - runs-on: "ubuntu-24.04"
            name: "amd64"
          - runs-on: "ubuntu-24.04-arm"
            name: "arm64"
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Reproduce the same container image
        run: |
          ./dev_scripts/reproduce-image.py \
            --runtime \
            docker \
            --debian-archive-date \
            ${{ needs.merge.outputs.debian_archive_date }} \
            --platform \
            linux/${{ matrix.platform.name }} \
            ${{ needs.merge.outputs[format('digest_{0}', matrix.platform.name)] }}
```
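Two small shell mechanisms do the heavy lifting in the `prepare` and `Export digest` steps above: deriving a reproducible `SOURCE_DATE_EPOCH` from the `YYYYMMDD` archive date, and stripping the `sha256:` prefix with POSIX `${var#pattern}` expansion. A minimal sketch (GNU `date` assumed; the date and digest values are made-up examples, not from a real run):

```shell
#!/bin/sh
set -eu

# Derive the epoch from a YYYYMMDD archive date, as the prepare job does.
DEBIAN_ARCHIVE_DATE=20250101                      # example value
SOURCE_DATE_EPOCH=$(date -u -d "${DEBIAN_ARCHIVE_DATE}" +%s)
echo "epoch: ${SOURCE_DATE_EPOCH}"                # 2025-01-01 00:00 UTC -> 1735689600

# Strip the "sha256:" prefix from a digest, as the Export digest step does
# when it names the per-platform digest files.
digest="sha256:0123456789abcdef"                  # made-up digest, for illustration
echo "digest file name: ${digest#sha256:}"        # -> 0123456789abcdef
```

`${digest#sha256:}` removes the shortest matching prefix, so the empty file created under `${{ runner.temp }}/digests/` is named by the bare hash, which the `merge` job later re-expands with `printf '...@sha256:%s' *`.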
.github/workflows/build.yml (26 changes, vendored)

```diff
@@ -1,6 +1,10 @@
 name: Build dev environments
 on:
+  pull_request:
   push:
+    branches:
+      - main
+      - "test/**"
   schedule:
     - cron: "0 0 * * *" # Run every day at 00:00 UTC.

@@ -29,26 +33,26 @@ jobs:
     strategy:
       matrix:
         include:
-          - distro: ubuntu
-            version: "20.04"
           - distro: ubuntu
             version: "22.04"
           - distro: ubuntu
             version: "24.04"
           - distro: ubuntu
             version: "24.10"
+          - distro: ubuntu
+            version: "25.04"
           - distro: debian
             version: bullseye
           - distro: debian
             version: bookworm
           - distro: debian
             version: trixie
-          - distro: fedora
-            version: "39"
           - distro: fedora
             version: "40"
           - distro: fedora
             version: "41"
+          - distro: fedora
+            version: "42"

     steps:
       - name: Checkout
@@ -72,6 +76,8 @@ jobs:
     runs-on: ubuntu-24.04
     steps:
       - uses: actions/checkout@v4
+        with:
+          fetch-depth: 0

       - name: Get current date
         id: date
@@ -81,18 +87,12 @@ jobs:
         id: cache-container-image
        uses: actions/cache@v4
         with:
-          key: v2-${{ steps.date.outputs.date }}-${{ hashFiles('Dockerfile', 'dangerzone/conversion/common.py', 'dangerzone/conversion/doc_to_pixels.py', 'dangerzone/conversion/pixels_to_pdf.py', 'poetry.lock', 'gvisor_wrapper/entrypoint.py') }}
+          key: v5-${{ steps.date.outputs.date }}-${{ hashFiles('Dockerfile', 'dangerzone/conversion/*.py', 'dangerzone/container_helpers/*', 'install/common/build-image.py') }}
           path: |
-            share/container.tar.gz
+            share/container.tar
             share/image-id.txt

-      - name: Build and push Dangerzone image
+      - name: Build Dangerzone image
         if: ${{ steps.cache-container-image.outputs.cache-hit != 'true' }}
         run: |
-          sudo apt-get install -y python3-poetry
           python3 ./install/common/build-image.py
-          echo ${{ github.token }} | podman login ghcr.io -u USERNAME --password-stdin
-          gunzip -c share/container.tar.gz | podman load
-          podman push \
-            dangerzone.rocks/dangerzone \
-            ${{ env.IMAGE_REGISTRY }}/dangerzone/dangerzone
```
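Both this workflow and `ci.yml` build the container cache key as `v5-<UTC date>-<hashFiles(...)>`, so the cache rolls over daily and is busted whenever any hashed build input changes. A rough local analogue, for intuition only (`hashFiles()` is a GitHub Actions builtin with its own SHA-256 scheme, and the file below is a stand-in, not a real build input):

```shell
#!/bin/sh
set -eu
tmp=$(mktemp -d)

# Stand-in for one of the hashed build inputs (e.g. the Dockerfile).
printf 'FROM debian:bookworm\n' > "$tmp/Dockerfile"

# Key shaped like "v5-<date>-<hash>": identical inputs on the same UTC day
# produce the same key; any change to an input produces a different key.
HASH=$(sha256sum "$tmp/Dockerfile" | cut -d' ' -f1)
KEY="v5-$(date -u +%Y%m%d)-${HASH}"
echo "$KEY"
```

Bumping the `v2`/`v5` prefix is the manual escape hatch: it invalidates every existing cache entry at once, which is why the prefix changed alongside the switch from `container.tar.gz` to `container.tar`.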
A branch-conformity workflow (its filename was not captured):

```diff
@@ -1,6 +1,7 @@
 name: Check branch conformity
 on:
-  push:
+  pull_request:
+    types: ["opened", "labeled", "unlabeled", "reopened", "synchronize"]

 jobs:
   prevent-fixup-commits:
@@ -17,3 +18,13 @@ jobs:
           git fetch origin
           git status
           git log --pretty=format:%s origin/main..HEAD | grep -ie '^fixup\|^wip' && exit 1 || true
+
+  check-changelog:
+    runs-on: ubuntu-latest
+    name: Ensure CHANGELOG.md is populated for user-visible changes
+    steps:
+      # Pin the GitHub action to a specific commit that we have audited and know
+      # how it works.
+      - uses: tarides/changelog-check-action@509965da3b8ac786a5e2da30c2ccf9661189121f
+        with:
+          changelog: CHANGELOG.md
```
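The `prevent-fixup-commits` job above keys off a single `git log | grep` pipeline: it fails the build when any commit subject between `origin/main` and `HEAD` starts with `fixup` or `wip`. The matching logic can be exercised on its own with stand-in subject lines (the subjects below are invented, not real commits):

```shell
#!/bin/sh
# grep -i: case-insensitive; -e '^fixup\|^wip': GNU BRE alternation anchored
# at the start of each subject line. A match means a forbidden commit exists.
subjects='Add container image verification
fixup! Add container image verification'

if echo "$subjects" | grep -qie '^fixup\|^wip'; then
  echo "found fixup/wip commits"   # the workflow exits 1 at this point
fi
```

The `&& exit 1 || true` tail in the workflow inverts grep's exit status, so the job passes exactly when no subject matches.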
.github/workflows/check_repos.yml (41 changes, vendored)

```diff
@@ -19,14 +19,14 @@ jobs:
     strategy:
       matrix:
         include:
+          - distro: ubuntu
+            version: "25.04" # plucky
           - distro: ubuntu
             version: "24.10" # oracular
           - distro: ubuntu
             version: "24.04" # noble
           - distro: ubuntu
             version: "22.04" # jammy
-          - distro: ubuntu
-            version: "20.04" # focal
           - distro: debian
             version: "trixie" # 13
           - distro: debian
@@ -34,27 +34,32 @@ jobs:
           - distro: debian
             version: "11" # bullseye
     steps:
-      - name: Add Podman repo for Ubuntu Focal
-        if: matrix.distro == 'ubuntu' && matrix.version == 20.04
-        run: |
-          apt-get update && apt-get -y install curl wget gnupg2
-          . /etc/os-release
-          sh -c "echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/ /' \
-            > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list"
-          wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/xUbuntu_${VERSION_ID}/Release.key -O- \
-            | apt-key add -
-          apt update
-          apt-get install python-all -y
-
-      - name: Add GPG key for the packages.freedom.press
+      - name: Add packages.freedom.press PGP key (gpg --keyring)
+        if: matrix.version != 'trixie' && matrix.version != '25.04'
         run: |
           apt-get update && apt-get install -y gnupg2 ca-certificates
           dirmngr # NOTE: This is a command that's necessary only in containers
+          # The key needs to be in the GPG keybox database format so the
+          # signing subkey is detected by apt-secure.
           gpg --keyserver hkps://keys.openpgp.org \
             --no-default-keyring --keyring ./fpf-apt-tools-archive-keyring.gpg \
             --recv-keys "DE28 AB24 1FA4 8260 FAC9 B8BA A7C9 B385 2260 4281"
           mkdir -p /etc/apt/keyrings/
-          mv fpf-apt-tools-archive-keyring.gpg /etc/apt/keyrings
+          mv ./fpf-apt-tools-archive-keyring.gpg /etc/apt/keyrings/.
+
+      - name: Add packages.freedom.press PGP key (sq)
+        if: matrix.version == 'trixie' || matrix.version == '25.04'
+        run: |
+          apt-get update && apt-get install -y ca-certificates sq
+          mkdir -p /etc/apt/keyrings/
+          # On debian trixie, apt-secure uses `sqv` to verify the signatures
+          # so we need to retrieve PGP keys and store them using the base64 format.
+          sq network keyserver \
+            --server hkps://keys.openpgp.org \
+            search "DE28 AB24 1FA4 8260 FAC9 B8BA A7C9 B385 2260 4281" \
+            --output - \
+            | sq packet dearmor \
+            > /etc/apt/keyrings/fpf-apt-tools-archive-keyring.gpg

       - name: Add packages.freedom.press to our APT sources
         run: |
@@ -75,12 +80,12 @@ jobs:
     strategy:
       matrix:
         include:
-          - distro: fedora
-            version: 39
           - distro: fedora
             version: 40
           - distro: fedora
             version: 41
+          - distro: fedora
+            version: 42
     steps:
       - name: Add packages.freedom.press to our YUM sources
         run: |
```
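The two key-import paths above produce the same key in different on-disk formats: `gpg --keyring` writes a binary keybox, while `sq packet dearmor` converts the keyserver's ASCII-armored output into raw binary OpenPGP packets, which is what Trixie's `sqv`-based apt-secure expects. A quick format check, sketched with stand-in files rather than real keys:

```shell
#!/bin/sh
set -eu
tmp=$(mktemp -d)

# ASCII armor always opens with a "-----BEGIN PGP ..." header line;
# binary key material starts with raw packet bytes instead.
printf -- '-----BEGIN PGP PUBLIC KEY BLOCK-----\n' > "$tmp/armored.asc"  # stand-in, not a real key
printf '\231\001\015' > "$tmp/binary.gpg"                                # stand-in packet bytes

for f in "$tmp/armored.asc" "$tmp/binary.gpg"; do
  if head -c 14 "$f" | grep -q -e '-----BEGIN PGP'; then
    echo "$f: armored"
  else
    echo "$f: binary"
  fi
done
```

Checking the first bytes like this is a useful sanity test when an apt `NO_PUBKEY` error leaves you unsure which format actually landed in `/etc/apt/keyrings/`.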
.github/workflows/ci.yml (116 changes, vendored)

```diff
@@ -1,8 +1,10 @@
 name: Tests
 on:
-  push:
   pull_request:
-    branches: [main]
+  push:
+    branches:
+      - main
+      - "test/**"
   schedule:
     - cron: "2 0 * * *" # Run every day at 02:00 UTC.
   workflow_dispatch:
@@ -46,6 +48,8 @@ jobs:
     runs-on: ubuntu-24.04
     steps:
      - uses: actions/checkout@v4
+        with:
+          fetch-depth: 0

       - name: Get current date
         id: date
@@ -55,17 +59,22 @@ jobs:
         id: cache-container-image
         uses: actions/cache@v4
         with:
-          key: v2-${{ steps.date.outputs.date }}-${{ hashFiles('Dockerfile', 'dangerzone/conversion/common.py', 'dangerzone/conversion/doc_to_pixels.py', 'dangerzone/conversion/pixels_to_pdf.py', 'poetry.lock', 'gvisor_wrapper/entrypoint.py') }}
+          key: v5-${{ steps.date.outputs.date }}-${{ hashFiles('Dockerfile', 'dangerzone/conversion/*.py', 'dangerzone/container_helpers/*', 'install/common/build-image.py') }}
           path: |-
-            share/container.tar.gz
+            share/container.tar
             share/image-id.txt

       - name: Build Dangerzone container image
         if: ${{ steps.cache-container-image.outputs.cache-hit != 'true' }}
         run: |
-          sudo apt-get install -y python3-poetry
           python3 ./install/common/build-image.py

+      - name: Upload container image
+        uses: actions/upload-artifact@v4
+        with:
+          name: container.tar
+          path: share/container.tar
+
   download-tessdata:
     name: Download and cache Tesseract data
     runs-on: ubuntu-latest
@@ -91,7 +100,8 @@ jobs:

   windows:
     runs-on: windows-latest
-    needs: download-tessdata
+    needs:
+      - download-tessdata
     env:
       DUMMY_CONVERSION: 1
     steps:
@@ -110,18 +120,30 @@ jobs:
           key: v1-tessdata-${{ hashFiles('./install/common/download-tessdata.py') }}
       - name: Run CLI tests
         run: poetry run make test
-      # Taken from: https://github.com/orgs/community/discussions/27149#discussioncomment-3254829
-      - name: Set path for candle and light
-        run: echo "C:\Program Files (x86)\WiX Toolset v3.14\bin" >> $GITHUB_PATH
-        shell: bash
+      - name: Set up .NET CLI environment
+        uses: actions/setup-dotnet@v4
+        with:
+          dotnet-version: "8.x"
+      - name: Install WiX Toolset
+        run: dotnet tool install --global wix --version 5.0.2
+      - name: Add WiX UI extension
+        run: wix extension add --global WixToolset.UI.wixext/5.0.2
       - name: Build the MSI installer
         # NOTE: This also builds the .exe internally.
         run: poetry run .\install\windows\build-app.bat
+      - name: Upload MSI installer
+        uses: actions/upload-artifact@v4
+        with:
+          name: Dangerzone.msi
+          path: "dist/Dangerzone.msi"
+          if-no-files-found: error
+          compression-level: 0

   macOS:
     name: "macOS (${{ matrix.arch }})"
     runs-on: ${{ matrix.runner }}
-    needs: download-tessdata
+    needs:
+      - download-tessdata
     strategy:
       matrix:
         include:
@@ -147,22 +169,31 @@ jobs:
       - run: poetry install
       - name: Run CLI tests
         run: poetry run make test
+      - name: Build macOS app
+        run: poetry run python ./install/macos/build-app.py
+      - name: Upload macOS app
+        uses: actions/upload-artifact@v4
+        with:
+          name: Dangerzone-${{ matrix.arch }}.app
+          path: "dist/Dangerzone.app"
+          if-no-files-found: error
+          compression-level: 0

   build-deb:
+    needs:
+      - build-container-image
     name: "build-deb (${{ matrix.distro }} ${{ matrix.version }})"
     runs-on: ubuntu-latest
-    needs: build-container-image
     strategy:
       matrix:
         include:
-          - distro: ubuntu
-            version: "20.04"
           - distro: ubuntu
             version: "22.04"
           - distro: ubuntu
             version: "24.04"
           - distro: ubuntu
             version: "24.10"
+          - distro: ubuntu
+            version: "25.04"
           - distro: debian
             version: bullseye
           - distro: debian
@@ -195,9 +226,9 @@ jobs:
       - name: Restore container cache
         uses: actions/cache/restore@v4
         with:
-          key: v2-${{ steps.date.outputs.date }}-${{ hashFiles('Dockerfile', 'dangerzone/conversion/common.py', 'dangerzone/conversion/doc_to_pixels.py', 'dangerzone/conversion/pixels_to_pdf.py', 'poetry.lock', 'gvisor_wrapper/entrypoint.py') }}
+          key: v5-${{ steps.date.outputs.date }}-${{ hashFiles('Dockerfile', 'dangerzone/conversion/*.py', 'dangerzone/container_helpers/*', 'install/common/build-image.py') }}
           path: |-
-            share/container.tar.gz
+            share/container.tar
             share/image-id.txt
           fail-on-cache-miss: true

@@ -211,7 +242,7 @@ jobs:
         if: matrix.distro == 'debian' && matrix.version == 'bookworm'
         uses: actions/upload-artifact@v4
         with:
-          name: dangerzone-${{ matrix.distro }}-${{ matrix.version }}.deb
+          name: dangerzone.deb
           path: "deb_dist/dangerzone_*_*.deb"
           if-no-files-found: error
           compression-level: 0
@@ -219,18 +250,19 @@ jobs:
   install-deb:
     name: "install-deb (${{ matrix.distro }} ${{ matrix.version }})"
     runs-on: ubuntu-latest
-    needs: build-deb
+    needs:
+      - build-deb
     strategy:
       matrix:
         include:
-          - distro: ubuntu
-            version: "20.04"
           - distro: ubuntu
             version: "22.04"
           - distro: ubuntu
             version: "24.04"
           - distro: ubuntu
             version: "24.10"
+          - distro: ubuntu
+            version: "25.04"
           - distro: debian
             version: bullseye
           - distro: debian
@@ -249,7 +281,7 @@ jobs:
       - name: Download Dangerzone .deb
         uses: actions/download-artifact@v4
```
|
uses: actions/download-artifact@v4
|
||||||
with:
|
with:
|
||||||
name: dangerzone-debian-bookworm.deb
|
name: dangerzone.deb
|
||||||
path: "deb_dist/"
|
path: "deb_dist/"
|
||||||
|
|
||||||
- name: Build end-user environment
|
- name: Build end-user environment
|
||||||
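The switch from per-distro artifact names to a single `dangerzone.deb` works only because the upload step is guarded by `if: matrix.distro == 'debian' && matrix.version == 'bookworm'`, so exactly one matrix job ever uploads. A rough local sketch of that guard (the `should_upload` helper is hypothetical, not part of the repo):

```shell
# Hypothetical helper mirroring the upload step's `if:` condition:
# only the Debian Bookworm matrix job produces the shared artifact.
should_upload() {
  [ "$1" = "debian" ] && [ "$2" = "bookworm" ]
}

should_upload debian bookworm && echo "debian/bookworm: upload"
should_upload ubuntu 24.04    || echo "ubuntu/24.04: skip"
should_upload debian bullseye || echo "debian/bullseye: skip"
```

Because only one job passes the guard, the fixed artifact name cannot collide across the matrix under `actions/upload-artifact@v4`, which forbids duplicate artifact names.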
```diff
@@ -273,11 +305,12 @@ jobs:
   build-install-rpm:
     name: "build-install-rpm (${{ matrix.distro }} ${{matrix.version}})"
     runs-on: ubuntu-latest
-    needs: build-container-image
+    needs:
+      - build-container-image
     strategy:
       matrix:
         distro: ["fedora"]
-        version: ["39", "40", "41"]
+        version: ["40", "41", "42"]
     steps:
       - name: Checkout
         uses: actions/checkout@v4
@@ -300,9 +333,9 @@ jobs:
       - name: Restore container image
         uses: actions/cache/restore@v4
         with:
-          key: v2-${{ steps.date.outputs.date }}-${{ hashFiles('Dockerfile', 'dangerzone/conversion/common.py', 'dangerzone/conversion/doc_to_pixels.py', 'dangerzone/conversion/pixels_to_pdf.py', 'poetry.lock', 'gvisor_wrapper/entrypoint.py') }}
+          key: v5-${{ steps.date.outputs.date }}-${{ hashFiles('Dockerfile', 'dangerzone/conversion/*.py', 'dangerzone/container_helpers/*', 'install/common/build-image.py') }}
           path: |-
-            share/container.tar.gz
+            share/container.tar
             share/image-id.txt
           fail-on-cache-miss: true

@@ -311,6 +344,14 @@ jobs:
           ./dev_scripts/env.py --distro ${{ matrix.distro }} --version ${{ matrix.version }} \
             run --dev --no-gui ./dangerzone/install/linux/build-rpm.py

+      - name: Upload Dangerzone .rpm
+        uses: actions/upload-artifact@v4
+        with:
+          name: dangerzone-${{ matrix.distro }}-${{ matrix.version }}.rpm
+          path: "dist/dangerzone-*.x86_64.rpm"
+          if-no-files-found: error
+          compression-level: 0
+
       # Reclaim some space in this step, now that the dev environment is no
       # longer necessary. Previously, we encountered out-of-space issues while
       # running this CI job.
@@ -321,7 +362,7 @@ jobs:
         run: |
           ./dev_scripts/env.py --distro ${{ matrix.distro }} \
             --version ${{ matrix.version }} \
-            build --download-pyside6
+            build

       - name: Run a test command
         run: |
@@ -342,26 +383,26 @@ jobs:
     strategy:
       matrix:
         include:
-          - distro: ubuntu
-            version: "20.04"
           - distro: ubuntu
             version: "22.04"
           - distro: ubuntu
             version: "24.04"
           - distro: ubuntu
             version: "24.10"
+          - distro: ubuntu
+            version: "25.04"
           - distro: debian
             version: bullseye
           - distro: debian
             version: bookworm
           - distro: debian
             version: trixie
-          - distro: fedora
-            version: "39"
           - distro: fedora
             version: "40"
           - distro: fedora
             version: "41"
+          - distro: fedora
+            version: "42"

     steps:
       - name: Checkout
@@ -389,9 +430,9 @@ jobs:
       - name: Restore container image
         uses: actions/cache/restore@v4
         with:
-          key: v2-${{ steps.date.outputs.date }}-${{ hashFiles('Dockerfile', 'dangerzone/conversion/common.py', 'dangerzone/conversion/doc_to_pixels.py', 'dangerzone/conversion/pixels_to_pdf.py', 'poetry.lock', 'gvisor_wrapper/entrypoint.py') }}
+          key: v5-${{ steps.date.outputs.date }}-${{ hashFiles('Dockerfile', 'dangerzone/conversion/*.py', 'dangerzone/container_helpers/*', 'install/common/build-image.py') }}
           path: |-
-            share/container.tar.gz
+            share/container.tar
             share/image-id.txt
           fail-on-cache-miss: true

@@ -405,6 +446,7 @@ jobs:

       - name: Setup xvfb (Linux)
         run: |
+          sudo apt update
          # Stuff copied wildly from several stackoverflow posts
           sudo apt-get install -y xvfb libxkbcommon-x11-0 libxcb-icccm4 libxcb-image0 libxcb-keysyms1 libxcb-randr0 libxcb-render-util0 libxcb-xinerama0 libxcb-xinput0 libxcb-xfixes0 libxcb-shape0 libglib2.0-0 libgl1-mesa-dev '^libxcb.*-dev' libx11-xcb-dev libglu1-mesa-dev libxrender-dev libxi-dev libxkbcommon-dev libxkbcommon-x11-dev

@@ -431,3 +473,11 @@ jobs:
           # file successfully.
           xvfb-run -s '-ac' ./dev_scripts/env.py --distro ${{ matrix.distro }} --version ${{ matrix.version }} run --dev \
             bash -c 'cd dangerzone; poetry run make test'
+
+      - name: Upload PDF diffs
+        uses: actions/upload-artifact@v4
+        with:
+          name: pdf-diffs-${{ matrix.distro }}-${{ matrix.version }}
+          path: tests/test_docs/diffs/*.jpeg
+        # Always run this step to publish test results, even on failures
+        if: ${{ always() }}
```
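The cache key bumped from `v2` to `v5` and now hashes globs (`dangerzone/conversion/*.py`, `dangerzone/container_helpers/*`) plus the build script, so any change to the image inputs invalidates the cache. A rough local sketch of how such a key is composed, under the assumption that `hashFiles()` is roughly a digest over the matched files (the helper and sample files below are made up, not repo code):

```shell
# Sketch (hypothetical): compose a cache key like v5-<date>-<hash of inputs>.
set -eu
repo=$(mktemp -d)
mkdir -p "$repo/dangerzone/conversion" "$repo/dangerzone/container_helpers"
echo 'FROM debian:bookworm'  > "$repo/Dockerfile"
echo 'print("convert")'      > "$repo/dangerzone/conversion/common.py"
echo 'entrypoint'            > "$repo/dangerzone/container_helpers/entrypoint.py"
cd "$repo"

date_part=$(date "+%Y%m%d")
# hashFiles() digests every matched file; sha256sum over a sorted file
# list is a rough local stand-in.
hash_part=$(find Dockerfile dangerzone -type f | sort | xargs sha256sum | sha256sum | awk '{print $1}')
key="v5-${date_part}-${hash_part}"
echo "$key"
```

Touching any of the hashed files changes `hash_part` and therefore the key, which is exactly why the three `Restore container image/cache` steps above use the same expression: they must all land on the cache entry written by the build job that day.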
.github/workflows/release-container-image.yml (new file, 22 lines, vendored)

```diff
@@ -0,0 +1,22 @@
+name: Release multi-arch container image
+
+on:
+  workflow_dispatch:
+  push:
+    branches:
+      - main
+      - "test/**"
+  schedule:
+    - cron: "0 0 * * *" # Run every day at 00:00 UTC.
+
+
+jobs:
+  build-push-image:
+    uses: ./.github/workflows/build-push-image.yml
+    with:
+      registry: ghcr.io/${{ github.repository_owner }}
+      registry_user: ${{ github.actor }}
+      image_name: dangerzone/dangerzone
+      reproduce: true
+    secrets:
+      registry_token: ${{ secrets.GITHUB_TOKEN }}
```
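The new workflow's push trigger fires for `main` and any branch under `test/`. A rough local approximation of GitHub's branch-filter globbing (assumption: `"test/**"` behaves like a `test/` prefix match, which shell `case` patterns can mimic; the helper is hypothetical):

```shell
# Hypothetical helper approximating the workflow's branch filters.
branch_triggers() {
  case "$1" in
    main|test/*) echo "yes" ;;   # `*` in a case pattern also matches `/`
    *)           echo "no"  ;;
  esac
}

branch_triggers main
branch_triggers test/multiarch-fix
branch_triggers feature/new-ui
```

This makes it cheap to exercise the multi-arch release pipeline from a `test/...` branch without touching `main`, while the nightly `schedule` entry keeps the image fresh regardless.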
.github/workflows/scan.yml (vendored, 42 lines changed)

```diff
@@ -1,29 +1,42 @@
 name: Scan latest app and container
 on:
   push:
+    branches:
+      - main
   pull_request:
-    branches: [ main ]
   schedule:
     - cron: '0 0 * * *' # Run every day at 00:00 UTC.
   workflow_dispatch:

 jobs:
   security-scan-container:
-    runs-on: ubuntu-latest
+    strategy:
+      matrix:
+        runs-on:
+          - ubuntu-24.04
+          - ubuntu-24.04-arm
+    runs-on: ${{ matrix.runs-on }}
     steps:
       - name: Checkout
         uses: actions/checkout@v4
-      - name: Install container build dependencies
-        run: sudo apt install pipx && pipx install poetry
+        with:
+          fetch-depth: 0
       - name: Build container image
-        run: python3 ./install/common/build-image.py --runtime docker --no-save
+        run: |
+          python3 ./install/common/build-image.py \
+            --debian-archive-date $(date "+%Y%m%d") \
+            --runtime docker
+          docker load -i share/container.tar
+      - name: Get image tag
+        id: tag
+        run: echo "tag=$(cat share/image-id.txt)" >> $GITHUB_OUTPUT
       # NOTE: Scan first without failing, else we won't be able to read the scan
       # report.
       - name: Scan container image (no fail)
-        uses: anchore/scan-action@v5
+        uses: anchore/scan-action@v6
         id: scan_container
         with:
-          image: "dangerzone.rocks/dangerzone:latest"
+          image: "dangerzone.rocks/dangerzone:${{ steps.tag.outputs.tag }}"
           fail-build: false
           only-fixed: false
           severity-cutoff: critical
@@ -35,22 +48,27 @@ jobs:
       - name: Inspect container scan report
         run: cat ${{ steps.scan_container.outputs.sarif }}
       - name: Scan container image
-        uses: anchore/scan-action@v5
+        uses: anchore/scan-action@v6
         with:
-          image: "dangerzone.rocks/dangerzone:latest"
+          image: "dangerzone.rocks/dangerzone:${{ steps.tag.outputs.tag }}"
           fail-build: true
           only-fixed: false
           severity-cutoff: critical

   security-scan-app:
-    runs-on: ubuntu-latest
+    strategy:
+      matrix:
+        runs-on:
+          - ubuntu-24.04
+          - ubuntu-24.04-arm
+    runs-on: ${{ matrix.runs-on }}
     steps:
       - name: Checkout
         uses: actions/checkout@v4
       # NOTE: Scan first without failing, else we won't be able to read the scan
       # report.
       - name: Scan application (no fail)
-        uses: anchore/scan-action@v5
+        uses: anchore/scan-action@v6
         id: scan_app
         with:
           path: "."
@@ -65,7 +83,7 @@ jobs:
       - name: Inspect application scan report
         run: cat ${{ steps.scan_app.outputs.sarif }}
       - name: Scan application
-        uses: anchore/scan-action@v5
+        uses: anchore/scan-action@v6
         with:
           path: "."
           fail-build: true
```
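The added "Get image tag" step passes the freshly built tag to the scan steps via the step-output mechanism: GitHub Actions collects `key=value` pairs appended to the file named by `$GITHUB_OUTPUT`. A sketch of the same flow outside of CI (the image-ID value and temp file below are made up for illustration):

```shell
# Sketch: simulate the "Get image tag" step and how a later step reads it.
set -eu
work=$(mktemp -d); cd "$work"
GITHUB_OUTPUT="$work/gh_output"; : > "$GITHUB_OUTPUT"

mkdir -p share
echo "20250201-0.9.0-abc1234" > share/image-id.txt   # hypothetical image ID

# What the workflow step runs:
echo "tag=$(cat share/image-id.txt)" >> "$GITHUB_OUTPUT"

# Later steps see the value as ${{ steps.tag.outputs.tag }}:
tag=$(grep '^tag=' "$GITHUB_OUTPUT" | cut -d= -f2-)
echo "dangerzone.rocks/dangerzone:${tag}"
```

Scanning `dangerzone.rocks/dangerzone:${{ steps.tag.outputs.tag }}` instead of `:latest` makes the scan deterministic: it always targets the image built earlier in the same job.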
.github/workflows/scan_released.yml (vendored, 49 lines changed)

```diff
@@ -6,23 +6,35 @@ on:

 jobs:
   security-scan-container:
-    runs-on: ubuntu-latest
+    strategy:
+      matrix:
+        include:
+          - runs-on: ubuntu-24.04
+            arch: i686
+          - runs-on: ubuntu-24.04-arm
+            arch: arm64
+    runs-on: ${{ matrix.runs-on }}
     steps:
       - name: Checkout
         uses: actions/checkout@v4
-      - name: Download container image for the latest release
-        run: |
-          VERSION=$(curl https://api.github.com/repos/freedomofpress/dangerzone/releases/latest | jq -r '.tag_name')
-          wget https://github.com/freedomofpress/dangerzone/releases/download/${VERSION}/container.tar.gz -O container.tar.gz
-      - name: Load container image
-        run: docker load -i container.tar.gz
+      - name: Download container image for the latest release and load it
+        run: |
+          VERSION=$(curl https://api.github.com/repos/freedomofpress/dangerzone/releases/latest | grep "tag_name" | cut -d '"' -f 4)
+          CONTAINER_FILENAME=container-${VERSION:1}-${{ matrix.arch }}.tar
+          wget https://github.com/freedomofpress/dangerzone/releases/download/${VERSION}/${CONTAINER_FILENAME} -O ${CONTAINER_FILENAME}
+          docker load -i ${CONTAINER_FILENAME}
+      - name: Get image tag
+        id: tag
+        run: |
+          tag=$(docker images dangerzone.rocks/dangerzone --format '{{ .Tag }}')
+          echo "tag=$tag" >> $GITHUB_OUTPUT
       # NOTE: Scan first without failing, else we won't be able to read the scan
       # report.
       - name: Scan container image (no fail)
-        uses: anchore/scan-action@v5
+        uses: anchore/scan-action@v6
         id: scan_container
         with:
-          image: "dangerzone.rocks/dangerzone:latest"
+          image: "dangerzone.rocks/dangerzone:${{ steps.tag.outputs.tag }}"
           fail-build: false
           only-fixed: false
           severity-cutoff: critical
@@ -30,19 +42,24 @@ jobs:
         uses: github/codeql-action/upload-sarif@v3
         with:
           sarif_file: ${{ steps.scan_container.outputs.sarif }}
-          category: container
+          category: container-${{ matrix.arch }}
       - name: Inspect container scan report
         run: cat ${{ steps.scan_container.outputs.sarif }}
       - name: Scan container image
-        uses: anchore/scan-action@v5
+        uses: anchore/scan-action@v6
         with:
-          image: "dangerzone.rocks/dangerzone:latest"
+          image: "dangerzone.rocks/dangerzone:${{ steps.tag.outputs.tag }}"
           fail-build: true
           only-fixed: false
           severity-cutoff: critical

   security-scan-app:
-    runs-on: ubuntu-latest
+    strategy:
+      matrix:
+        runs-on:
+          - ubuntu-24.04
+          - ubuntu-24.04-arm
+    runs-on: ${{ matrix.runs-on }}
     steps:
       - name: Checkout
         uses: actions/checkout@v4
@@ -50,12 +67,16 @@ jobs:
           fetch-depth: 0
       - name: Checkout the latest released tag
         run: |
+          # Grab the latest Grype ignore list before git checkout overwrites it.
+          cp .grype.yaml .grype.yaml.new
           VERSION=$(curl https://api.github.com/repos/freedomofpress/dangerzone/releases/latest | jq -r '.tag_name')
           git checkout $VERSION
+          # Restore the newest Grype ignore list.
+          mv .grype.yaml.new .grype.yaml
       # NOTE: Scan first without failing, else we won't be able to read the scan
       # report.
       - name: Scan application (no fail)
-        uses: anchore/scan-action@v5
+        uses: anchore/scan-action@v6
         id: scan_app
         with:
           path: "."
@@ -70,7 +91,7 @@ jobs:
       - name: Inspect application scan report
         run: cat ${{ steps.scan_app.outputs.sarif }}
       - name: Scan application
-        uses: anchore/scan-action@v5
+        uses: anchore/scan-action@v6
         with:
           path: "."
           fail-build: true
```
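The download step above now derives a per-arch filename from the release tag, swapping `jq` for `grep`/`cut` and stripping the leading `v` from the tag. A sketch of that derivation against a trimmed, made-up stand-in for GitHub's `/releases/latest` response (the workflow's `${VERSION:1}` is the bash equivalent of the POSIX `${VERSION#v}` used here):

```shell
# Sketch: derive the container filename from the latest release tag.
json='{"tag_name": "v0.8.1", "prerelease": false}'

# grep/cut replaces jq so the runner needs no extra tooling:
VERSION=$(echo "$json" | grep "tag_name" | cut -d '"' -f 4)

arch=arm64   # supplied by the job matrix in CI
CONTAINER_FILENAME=container-${VERSION#v}-${arch}.tar   # drop the leading "v"
echo "$CONTAINER_FILENAME"   # container-0.8.1-arm64.tar
```

Each matrix job (`i686`, `arm64`) therefore fetches and scans its own release tarball, and the SARIF uploads are kept apart by the matching `category: container-${{ matrix.arch }}` change.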
.gitignore (vendored, 1 line changed)

```diff
@@ -149,3 +149,4 @@ share/container.tar
 share/container.tar.gz
 share/image-id.txt
 container/container-pip-requirements.txt
+.doit.db.db
```
.grype.yaml (88 lines changed)

```diff
@@ -2,47 +2,55 @@
 # latest release of Dangerzone, and offer our analysis.

 ignore:
-  - vulnerability: CVE-2024-5535
-  # CVE-2024-5171
-  # =============
-  #
-  # NVD Entry: https://nvd.nist.gov/vuln/detail/CVE-2024-5171
-  # Verdict: Dangerzone is not affected. The rationale is the following:
-  #
-  # The affected library, `libaom.so`, is linked by GStreamer's `libgstaom.so`
-  # library. The vulnerable `aom_img_alloc` function is only used when
-  # **encoding** a video to AV1. LibreOffce uses the **decode** path instead,
-  # when generating thumbnails.
-  #
-  # See also: https://github.com/freedomofpress/dangerzone/issues/895
-  - vulnerability: CVE-2024-5171
-
-  # CVE-2024-45491, CVE-2024-45492
-  # ===============================
-  #
-  # NVD Entries:
-  # * https://nvd.nist.gov/vuln/detail/CVE-2024-45491
-  # * https://nvd.nist.gov/vuln/detail/CVE-2024-45492
-  #
-  # Verdict: Dangerzone is not affected. The rationale is the following:
-  #
-  # The vulnerabilities that have been assigned to these CVEs affect only 32-bit
-  # architectures. Dangerzone ships only 64-bit images to users.
-  #
-  # See also: https://github.com/freedomofpress/dangerzone/issues/913
-  - vulnerability: CVE-2024-45491
-  - vulnerability: CVE-2024-45492
-
-  # CVE-2024-45490
+  # CVE-2023-45853
   # ==============
   #
-  # NVD Entry: https://nvd.nist.gov/vuln/detail/CVE-2024-45490
-  # Verdict: Dangerzone is not affected. The rationale is the following:
+  # Debian tracker: https://security-tracker.debian.org/tracker/CVE-2023-45853
+  # Verdict: Dangerzone is not affected because the zlib library in Debian is
+  # built in a way that is not vulnerable.
+  - vulnerability: CVE-2023-45853
+  # CVE-2024-38428
+  # ==============
   #
-  # In order to exploit this bug, the caller must pass a negative length to the
-  # `XML_ParseBuffer` function. This function is not directly used by
-  # LibreOffice, which instead uses a higher-level wrapper. Therefore, our
-  # understanding is that this path cannot be exploited by attackers.
+  # Debian tracker: https://security-tracker.debian.org/tracker/CVE-2024-38428
+  # Verdict: Dangerzone is not affected because it doesn't use wget in the
+  # container image (which also has no network connectivity).
+  - vulnerability: CVE-2024-38428
+  # CVE-2024-57823
+  # ==============
   #
-  # See also: https://github.com/freedomofpress/dangerzone/issues/913
-  - vulnerability: CVE-2024-45490
+  # Debian tracker: https://security-tracker.debian.org/tracker/CVE-2024-57823
+  # Verdict: Dangerzone is not affected. First things first, LibreOffice is
+  # using this library for parsing RDF metadata in a document [1], and has
+  # issued a fix for the vendored raptor2 package they have for other distros
+  # [2].
+  #
+  # On the other hand, the Debian security team has stated that this is a minor
+  # issue [3], and there's no fix from the developers yet. It seems that the
+  # Debian package is not affected somehow by this CVE, probably due to the way
+  # it's packaged.
+  #
+  # [1] https://wiki.documentfoundation.org/Documentation/DevGuide/Office_Development#RDF_metadata
+  # [2] https://cgit.freedesktop.org/libreoffice/core/commit/?id=2b50dc0e4482ac0ad27d69147b4175e05af4fba4
+  # [3] From https://security-tracker.debian.org/tracker/CVE-2024-57823:
+  #
+  #     [bookworm] - raptor2 <postponed> (Minor issue, revisit when fixed upstream)
+  #
+  - vulnerability: CVE-2024-57823
+  # CVE-2025-0665
+  # ==============
+  #
+  # Debian tracker: https://security-tracker.debian.org/tracker/CVE-2025-0665
+  # Verdict: Dangerzone is not affected because the vulnerable code is not
+  # present in Debian Bookworm. Also, libcurl is an HTTP client, and the
+  # Dangerzone container does not make any network calls.
+  - vulnerability: CVE-2025-0665
+  # CVE-2025-43859
+  # ==============
+  #
+  # GitHub advisory: https://github.com/advisories/GHSA-vqfr-h8mv-ghfj
+  # Verdict: Dangerzone is not affected because the vulnerable code is triggered
+  # when parsing HTTP requests, e.g., by web **servers**. Dangerzone on the
+  # other hand performs HTTP requests, i.e., it operates as **client**.
+  - vulnerability: CVE-2025-43859
+  - vulnerability: GHSA-vqfr-h8mv-ghfj
```
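Each `.grype.yaml` entry pairs an analysis comment with a `- vulnerability: <ID>` line under `ignore:`, which is the list Grype consults when deciding which findings to suppress. A small sketch that pulls the waived IDs out of a file with the same shape (the sample file below is made up; the real list lives at the repo root):

```shell
# Sketch: list the vulnerability IDs waived by a Grype ignore file.
set -eu
sample=$(mktemp)
cat > "$sample" <<'EOF'
ignore:
  # CVE-2023-45853
  # Verdict: Dangerzone is not affected.
  - vulnerability: CVE-2023-45853
  - vulnerability: CVE-2024-38428
  - vulnerability: GHSA-vqfr-h8mv-ghfj
EOF

# Entries look like "  - vulnerability: <ID>", so the ID is the 3rd field.
grep 'vulnerability:' "$sample" | awk '{print $3}'
```

This is also why the `scan_released.yml` change above copies `.grype.yaml` forward before checking out the released tag: the scan of an old release should use the newest ignore list, not the one frozen in the tag.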
.well-known/funding-manifest-urls (new file, 1 line)

```diff
@@ -0,0 +1 @@
+https://dangerzone.rocks/assets/json/funding.json
```
BUILD.md (111 lines changed)

````diff
@@ -34,29 +34,6 @@ Install dependencies:
 </table>


-<table>
-<tr>
-<td>
-<details>
-<summary><i>:memo: Expand this section if you are on Ubuntu 20.04 (Focal).</i></summary>
-</br>
-
-The default Python version that ships with Ubuntu Focal (3.8) is not
-compatible with PySide6, which requires Python 3.9 or greater.
-
-You can install Python 3.9 using the `python3.9` package.
-
-```bash
-sudo apt install -y python3.9
-```
-
-Poetry will automatically pick up the correct version when running.
-</details>
-</td>
-</tr>
-</table>
-
-
 ```sh
 sudo apt install -y podman dh-python build-essential make libqt6gui6 \
     pipx python3 python3-dev
@@ -70,6 +47,7 @@ methods](https://python-poetry.org/docs/#installation))_
 ```sh
 pipx ensurepath
 pipx install poetry
+pipx inject poetry poetry-plugin-export
 ```

 After this, restart the terminal window, for the `poetry` command to be in your
@@ -131,32 +109,11 @@ sudo dnf install -y rpm-build podman python3 python3-devel python3-poetry-core \
     pipx qt6-qtbase-gui
 ```

-<table>
-<tr>
-<td>
-<details>
-<summary><i>:memo: Expand this section if you are on Fedora 41.</i></summary>
-</br>
-
-The default Python version that ships with Fedora 41 (3.13) is not
-compatible with PySide6, which requires Python 3.12 or earlier.
-
-You can install Python 3.12 using the `python3.12` package.
-
-```bash
-sudo dnf install -y python3.12
-```
-
-Poetry will automatically pick up the correct version when running.
-</details>
-</td>
-</tr>
-</table>
-
 Install Poetry using `pipx`:

 ```sh
 pipx install poetry
+pipx inject poetry
 ```

 Clone this repository:
````
````diff
@@ -230,27 +187,27 @@ Overview of the qubes you'll create:
 |--------------|----------|---------|
 | dz           | app qube | Dangerzone development |
 | dz-dvm       | app qube | offline disposable template for performing conversions |
-| fedora-40-dz | template | template for the other two qubes |
+| fedora-41-dz | template | template for the other two qubes |

 #### In `dom0`:

 The following instructions require typing commands in a terminal in dom0.

-1. Create a new Fedora **template** (`fedora-40-dz`) for Dangerzone development:
+1. Create a new Fedora **template** (`fedora-41-dz`) for Dangerzone development:

    ```
-   qvm-clone fedora-40 fedora-40-dz
+   qvm-clone fedora-41 fedora-41-dz
    ```

    > :bulb: Alternatively, you can use your base Fedora 40 template in the
    > following instructions. In that case, skip this step and replace
-   > `fedora-40-dz` with `fedora-40` in the steps below.
+   > `fedora-41-dz` with `fedora-41` in the steps below.

-2. Create an offline disposable template (app qube) called `dz-dvm`, based on the `fedora-40-dz`
+2. Create an offline disposable template (app qube) called `dz-dvm`, based on the `fedora-41-dz`
    template. This will be the qube where the documents will be sanitized:

    ```
-   qvm-create --class AppVM --label red --template fedora-40-dz \
+   qvm-create --class AppVM --label red --template fedora-41-dz \
        --prop netvm="" --prop template_for_dispvms=True \
        --prop default_dispvm='' dz-dvm
    ```
@@ -259,12 +216,18 @@ The following instructions require typing commands in a terminal in dom0.
    and initiating the sanitization process:

    ```
-   qvm-create --class AppVM --label red --template fedora-40-dz dz
+   qvm-create --class AppVM --label red --template fedora-41-dz dz
+   qvm-volume resize dz:private $(numfmt --from=auto 20Gi)
    ```

    > :bulb: Alternatively, you can use a different app qube for Dangerzone
    > development. In that case, replace `dz` with the qube of your choice in the
    > steps below.
+   >
+   > In the commands above, we also resize the private volume of the `dz` qube
+   > to 20GiB, since you may need some extra storage space when developing on
+   > Dangerzone (e.g., for container images, Tesseract data, and Python
+   > virtualenvs).

 4. Add an RPC policy (`/etc/qubes/policy.d/50-dangerzone.policy`) that will
    allow launching a disposable qube (`dz-dvm`) when Dangerzone converts a
````
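The `qvm-volume resize` line added to BUILD.md leans on `numfmt` (GNU coreutils) to turn the human-readable size into the byte count that `qvm-volume` expects. A quick sketch of what that command substitution evaluates to:

```shell
# Sketch: what $(numfmt --from=auto 20Gi) expands to before qvm-volume runs.
# --from=auto treats the "i" suffix as binary (1024-based), so
# 20Gi = 20 * 1024^3 bytes.
bytes=$(numfmt --from=auto 20Gi)
echo "$bytes"   # 21474836480
```

Using `--from=auto` also accepts SI suffixes (`20G` would be 20 * 1000^3), so the `Gi` spelling in the doc is deliberate.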
|
@ -298,29 +261,20 @@ test it.
|
||||||
./install/linux/build-rpm.py --qubes
|
./install/linux/build-rpm.py --qubes
|
||||||
```
|
```
|
||||||
|
|
||||||
4. Copy the produced `.rpm` file into `fedora-40-dz`
|
4. Copy the produced `.rpm` file into `fedora-41-dz`
|
||||||
```sh
|
```sh
|
||||||
qvm-copy dist/*.x86_64.rpm
|
qvm-copy dist/*.x86_64.rpm
|
||||||
```
|
```
|
||||||
|
|
||||||
#### In the `fedora-40-dz` template
|
#### In the `fedora-41-dz` template
|
||||||
|
|
||||||
1. Install the `.rpm` package you just copied

   ```sh
   sudo dnf install ~/QubesIncoming/dz/*.rpm
   ```

2. Shutdown the `fedora-41-dz` template

### Developing Dangerzone

@@ -351,7 +305,7 @@ For changes in the server side components, you can simply edit them locally,
and they will be mirrored to the disposable qube through the `dz.ConvertDev`
RPC call.

The only reason to build a new Qubes RPM and install it in the `fedora-41-dz`
template for development is if:
1. The project requires new server-side components.
2. The code for `qubes/dz.ConvertDev` needs to be updated.

@@ -474,11 +428,24 @@ poetry shell
.\dev_scripts\dangerzone.bat
```

### If you want to build the Windows installer

Install the [.NET SDK](https://dotnet.microsoft.com/en-us/download) version 6 or later. Then, open a terminal and install the latest version of the [WiX Toolset .NET tool](https://wixtoolset.org/) **v5** with:

```sh
dotnet tool install --global wix --version 5.0.2
```

Install the WiX UI extension. You may need to open a new terminal in order to use the newly installed `wix` .NET tool:

```sh
wix extension add --global WixToolset.UI.wixext/5.0.2
```

> [!IMPORTANT]
> To avoid compatibility issues, ensure the WiX UI extension version matches the version of the WiX Toolset.
>
> Run `wix --version` to check the version of WiX Toolset you have installed, and replace `5.x.y` with the full version number without the Git revision.

### If you want to sign binaries with Authenticode

@@ -503,3 +470,9 @@ poetry run .\install\windows\build-app.bat
```

When you're done you will have `dist\Dangerzone.msi`.

## Updating the container image

The Dangerzone container image is reproducible. This means that every time we
build it, the result will be bit-for-bit the same, with some minor exceptions.
Read more on how you can update it in `docs/developer/reproducibility.md`.
|
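A bit-for-bit identical artifact can be verified with plain checksums, which is also how the container image pins its downloads via `sha256sum -c`. A minimal, self-contained demonstration of that check (the file and hash below are illustrative, not part of Dangerzone):

```shell
# Create a file with known contents, then verify it against its pinned
# SHA-256 hash, mirroring how a pinned download would be checked.
printf 'hello\n' > /tmp/sample.txt
echo "5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03  /tmp/sample.txt" \
  | sha256sum -c
# prints: /tmp/sample.txt: OK
```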
|
91 CHANGELOG.md

@@ -5,7 +5,96 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
since 0.4.1, and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [Unreleased](https://github.com/freedomofpress/dangerzone/compare/v0.9.0...HEAD)

### Changed

- Update installation instructions (and CI checks) for Debian derivatives ([#1141](https://github.com/freedomofpress/dangerzone/pull/1141))

## [0.9.0](https://github.com/freedomofpress/dangerzone/compare/v0.9.0...0.8.1)

### Added

- Platform support: Add support for Fedora 42 ([#1091](https://github.com/freedomofpress/dangerzone/issues/1091))
- Platform support: Add support for Ubuntu 25.04 (Plucky Puffin) ([#1090](https://github.com/freedomofpress/dangerzone/issues/1090))
- (experimental): It is now possible to specify a custom container runtime in
  the settings, by using the `container_runtime` key. It should contain the path
  to the container runtime you want to use. Please note that this doesn't mean
  we support more container runtimes than Podman and Docker for the time being,
  but enables you to choose which one you want to use, independently of your
  platform. ([#925](https://github.com/freedomofpress/dangerzone/issues/925))
- Document operating system support ([#986](https://github.com/freedomofpress/dangerzone/issues/986))
- Tests: Look for regressions when converting PDFs ([#321](https://github.com/freedomofpress/dangerzone/issues/321))
- Ensure container image reproducibility across different container runtimes and versions ([#1074](https://github.com/freedomofpress/dangerzone/issues/1074))
- Implement container image attestations ([#1035](https://github.com/freedomofpress/dangerzone/issues/1035))
- Inform user of outdated Docker Desktop version ([#693](https://github.com/freedomofpress/dangerzone/issues/693))
- Add support for Python 3.13 ([#992](https://github.com/freedomofpress/dangerzone/issues/992))
- Publish the built artifacts in our CI pipelines ([#972](https://github.com/freedomofpress/dangerzone/pull/972))

### Fixed

- Fix our Debian Trixie installation instructions using Sequoia PGP ([#1052](https://github.com/freedomofpress/dangerzone/issues/1052))
- Fix the way multiprocessing works on macOS ([#873](https://github.com/freedomofpress/dangerzone/issues/873))
- Update minimum Docker Desktop version to fix a stdout truncation issue ([#1101](https://github.com/freedomofpress/dangerzone/issues/1101))

### Removed

- Platform support: Drop support for Ubuntu Focal, since it's nearing end-of-life ([#1018](https://github.com/freedomofpress/dangerzone/issues/1018))
- Platform support: Drop support for Fedora 39 ([#999](https://github.com/freedomofpress/dangerzone/issues/999))

### Changed

- Switch base image to Debian Stable ([#1046](https://github.com/freedomofpress/dangerzone/issues/1046))
- Track image tags instead of image IDs in `image-id.txt` ([#1020](https://github.com/freedomofpress/dangerzone/issues/1020))
- Migrate to WiX 4 (Windows building tool) ([#602](https://github.com/freedomofpress/dangerzone/issues/602)).
  Thanks [@jkarasti](https://github.com/jkarasti) for the contribution.
- Add a `--debug` flag to the CLI to help retrieve more logs ([#941](https://github.com/freedomofpress/dangerzone/pull/941))
- The `debian` base image is now fetched by digest. As a result, your local
  container storage will no longer show a tag for this dependency
  ([#1116](https://github.com/freedomofpress/dangerzone/pull/1116)).
  Thanks [@sudoforge](https://github.com/sudoforge) for the contribution.
- The `debian` base image is now referenced with a fully qualified URI,
  including the registry hostname ([#1118](https://github.com/freedomofpress/dangerzone/pull/1118)).
  Thanks [@sudoforge](https://github.com/sudoforge) for the contribution.
- Update the Dangerzone container image and its dependencies (gVisor, Debian base image, H2Orestart) to the latest versions:
  * Debian image release: `bookworm-20250317-slim@sha256:1209d8fd77def86ceb6663deef7956481cc6c14a25e1e64daec12c0ceffcc19d`
  * Debian snapshots date: `2025-03-31`
  * gVisor release date: `2025-03-26`
  * H2Orestart plugin: `v0.7.2` (`d09bc5c93fe2483a7e4a57985d2a8d0e4efae2efb04375fe4b59a68afd7241e2`)

### Development changes

- Make container image scanning work for Silicon macOS ([#1008](https://github.com/freedomofpress/dangerzone/issues/1008))
- Automate the main bulk of our release tasks ([#1016](https://github.com/freedomofpress/dangerzone/issues/1016))
- CI: Enforce updating the CHANGELOG in the CI ([#1108](https://github.com/freedomofpress/dangerzone/pull/1108))
- Add reference to funding.json (required by floss.fund application) ([#1092](https://github.com/freedomofpress/dangerzone/pull/1092))
- Lint: add ruff for linting and formatting ([#1029](https://github.com/freedomofpress/dangerzone/pull/1029)).
  Thanks [@jkarasti](https://github.com/jkarasti) for the contribution.
- Work around a `cx_freeze` build issue ([#974](https://github.com/freedomofpress/dangerzone/issues/974))
- Tests: mark the Hancom Office suite tests for rerun on failures ([#991](https://github.com/freedomofpress/dangerzone/pull/991))
- Update reference template for Qubes to Fedora 41 ([#1078](https://github.com/freedomofpress/dangerzone/issues/1078))

## [0.8.1](https://github.com/freedomofpress/dangerzone/compare/v0.8.1...0.8.0)

- Update the container image

### Added

- Disable gVisor's DirectFS feature ([#226](https://github.com/freedomofpress/dangerzone/issues/226)).
  Thanks [EtiennePerot](https://github.com/EtiennePerot) for the contribution.

### Removed

- Platform support: Drop support for Fedora 39, since it's end-of-life ([#999](https://github.com/freedomofpress/dangerzone/pull/999))

### Updated

- Bump `slsa-framework/slsa-github-generator` from 2.0.0 to 2.1.0 ([#1109](https://github.com/freedomofpress/dangerzone/pull/1109))

### Development changes

- Automate a large portion of our release tasks with `doit` ([#1016](https://github.com/freedomofpress/dangerzone/issues/1016)).
  Thanks [@jkarasti](https://github.com/jkarasti) for the contribution.

## [0.8.0](https://github.com/freedomofpress/dangerzone/compare/v0.8.0...0.7.1)

292 Dockerfile

@@ -1,104 +1,228 @@
# NOTE: Updating the packages to their latest versions requires bumping the
# Dockerfile args below. For more info about this file, read
# docs/developer/reproducibility.md.

ARG DEBIAN_IMAGE_DIGEST=sha256:1209d8fd77def86ceb6663deef7956481cc6c14a25e1e64daec12c0ceffcc19d

FROM docker.io/library/debian@${DEBIAN_IMAGE_DIGEST} AS dangerzone-image

ARG GVISOR_ARCHIVE_DATE=20250326
ARG DEBIAN_ARCHIVE_DATE=20250331
ARG H2ORESTART_CHECKSUM=935e68671bde4ca63a364128077f1c733349bbcc90b7e6973bc7a2306494ec54
ARG H2ORESTART_VERSION=v0.7.2

ENV DEBIAN_FRONTEND=noninteractive

# The following way of installing packages is taken from
# https://github.com/reproducible-containers/repro-sources-list.sh/blob/master/Dockerfile.debian-12,
# and adapted to allow installing gVisor from its own repo as well.
RUN \
    --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    --mount=type=bind,source=./container_helpers/repro-sources-list.sh,target=/usr/local/bin/repro-sources-list.sh \
    --mount=type=bind,source=./container_helpers/gvisor.key,target=/tmp/gvisor.key \
    : "Hacky way to set a date for the Debian snapshot repos" && \
    touch -d ${DEBIAN_ARCHIVE_DATE}Z /etc/apt/sources.list.d/debian.sources && \
    touch -d ${DEBIAN_ARCHIVE_DATE}Z /etc/apt/sources.list && \
    repro-sources-list.sh && \
    : "Setup APT to install gVisor from its separate APT repo" && \
    apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y --no-install-recommends apt-transport-https ca-certificates gnupg && \
    gpg -o /usr/share/keyrings/gvisor-archive-keyring.gpg --dearmor /tmp/gvisor.key && \
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/gvisor-archive-keyring.gpg] https://storage.googleapis.com/gvisor/releases ${GVISOR_ARCHIVE_DATE} main" > /etc/apt/sources.list.d/gvisor.list && \
    : "Install the necessary gVisor and Dangerzone dependencies" && \
    apt-get update && \
    apt-get install -y --no-install-recommends \
        python3 python3-fitz libreoffice-nogui libreoffice-java-common \
        python3 python3-magic default-jre-headless fonts-noto-cjk fonts-dejavu \
        runsc unzip wget && \
    : "Clean up for improving reproducibility (optional)" && \
    rm -rf /var/cache/fontconfig/ && \
    rm -rf /etc/ssl/certs/java/cacerts && \
    rm -rf /var/log/* /var/cache/ldconfig/aux-cache

# Download H2ORestart from GitHub using a pinned version and hash. Note that
# it's available in Debian repos, but not in Bookworm yet.
RUN mkdir /opt/libreoffice_ext && cd /opt/libreoffice_ext \
    && H2ORESTART_FILENAME=h2orestart.oxt \
    && wget https://github.com/ebandal/H2Orestart/releases/download/$H2ORESTART_VERSION/$H2ORESTART_FILENAME \
    && echo "$H2ORESTART_CHECKSUM  $H2ORESTART_FILENAME" | sha256sum -c \
    && install -dm777 "/usr/lib/libreoffice/share/extensions/" \
    && rm /root/.wget-hsts

# Create an unprivileged user both for gVisor and for running Dangerzone.
# XXX: Make the shadow field "date of last password change" a constant
# number.
RUN addgroup --gid 1000 dangerzone
RUN adduser --uid 1000 --ingroup dangerzone --shell /bin/true \
    --disabled-password --home /home/dangerzone dangerzone \
    && chage -d 99999 dangerzone \
    && rm /etc/shadow-

# Copy Dangerzone's conversion logic under /opt/dangerzone, and allow Python to
# import it.
RUN mkdir -p /opt/dangerzone/dangerzone
RUN touch /opt/dangerzone/dangerzone/__init__.py

# Copy only the Python code, and not any produced .pyc files.
COPY conversion/*.py /opt/dangerzone/dangerzone/conversion/

# Create a directory that will be used by gVisor as the place where it will
# store the state of its containers.
RUN mkdir /home/dangerzone/.containers

###############################################################################
#
# REUSING CONTAINER IMAGES:
# Anatomy of a hack
# ========================
#
# The rest of the Dockerfile aims to do one thing: allow the final container
# image to actually contain two container images; one for the outer container
# (spawned by Podman/Docker Desktop), and one for the inner container (spawned
# by gVisor).
#
# This has already been done in the past, and we explain why and how in the
# design document for gVisor integration (should be in
# `docs/developer/gvisor.md`). In this iteration, we want to also
# achieve the following:
#
# 1. Have a small final image, by sharing some system paths between the inner
#    and outer container image using symlinks.
# 2. Allow our security scanning tool to see the contents of the inner
#    container image.
# 3. Make the outer container image operational, in the sense that you can use
#    `apt` commands and perform a conversion with Dangerzone, outside the
#    gVisor sandbox. This is helpful for debugging purposes.
#
# Below we'll explain how our design choices are informed by the above
# sub-goals.
#
# First, to achieve a small container image, we basically need to copy `/etc`,
# `/usr` and `/opt` from the original Dangerzone image to the **inner**
# container image (under `/home/dangerzone/dangerzone-image/rootfs/`)
#
# That's all we need. The rest of the files play no role, and we can actually
# mask them in gVisor's OCI config.
#
# Second, in order to let our security scanner find the installed packages,
# we need to copy the following dirs to the root of the **outer** container
# image:
# * `/etc`, so that the security scanner can detect the image type and its
#   sources
# * `/var`, so that the security scanner can have access to the APT database.
#
# IMPORTANT: We don't symlink the `/etc` of the **outer** container image to
# the **inner** one, in order to avoid leaking files like
# `/etc/{hostname,hosts,resolv.conf}` that Podman/Docker mounts when running
# the **outer** container image.
#
# Third, in order to have an operational Debian image, we are _mostly_ covered
# by the dirs we have copied. There's a _rare_ case where during debugging, we
# may want to install a system package that has components in `/etc` and
# `/var`, which will not be available in the **inner** container image. In that
# case, the developer can do the necessary symlinks in the live container.
#
# FILESYSTEM HIERARCHY
# ====================
#
# The above plan leads to the following filesystem hierarchy:
#
# Outer container image:
#
#     # ls -l /
#     lrwxrwxrwx   1 root root    7 Jan 27 10:46 bin -> usr/bin
#     -rwxr-xr-x   1 root root 7764 Jan 24 08:14 entrypoint.py
#     drwxr-xr-x   1 root root 4096 Jan 27 10:47 etc
#     drwxr-xr-x   1 root root 4096 Jan 27 10:46 home
#     lrwxrwxrwx   1 root root    7 Jan 27 10:46 lib -> usr/lib
#     lrwxrwxrwx   1 root root    9 Jan 27 10:46 lib64 -> usr/lib64
#     drwxr-xr-x   2 root root 4096 Jan 27 10:46 root
#     drwxr-xr-x   1 root root 4096 Jan 27 10:47 run
#     lrwxrwxrwx   1 root root    8 Jan 27 10:46 sbin -> usr/sbin
#     drwxrwxrwx   2 root root 4096 Jan 27 10:46 tmp
#     lrwxrwxrwx   1 root root   44 Jan 27 10:46 usr -> /home/dangerzone/dangerzone-image/rootfs/usr
#     drwxr-xr-x  11 root root 4096 Jan 27 10:47 var
#
# Inner container image:
#
#     # ls -l /home/dangerzone/dangerzone-image/rootfs/
#     total 12
#     lrwxrwxrwx  1 root root    7 Jan 27 10:47 bin -> usr/bin
#     drwxr-xr-x 43 root root 4096 Jan 27 10:46 etc
#     lrwxrwxrwx  1 root root    7 Jan 27 10:47 lib -> usr/lib
#     lrwxrwxrwx  1 root root    9 Jan 27 10:47 lib64 -> usr/lib64
#     drwxr-xr-x  4 root root 4096 Jan 27 10:47 opt
#     drwxr-xr-x 12 root root 4096 Jan 27 10:47 usr
#
# SYMLINKING /USR
# ===============
#
# It's surprisingly difficult (maybe even borderline impossible), to symlink
# `/usr` to a different path during image build. The problem is that /usr
# is very sensitive, and you can't manipulate it in a live system. That is, I
# haven't found a way to do the following, or something equivalent:
#
#     rm -r /usr && ln -s /home/dangerzone/dangerzone-image/rootfs/usr/ /usr
#
# The `ln` binary, even if you specify it by its full path, cannot run
# (probably because `ld-linux.so` can't be found). For this reason, we have
# to create the symlinks beforehand, in a previous build stage. Then, in an
# empty container image (scratch images), we can copy these symlinks and the
# /usr, and stitch everything together.
###############################################################################

# Create the filesystem hierarchy that will be used to symlink /usr.

RUN mkdir -p \
    /new_root \
    /new_root/root \
    /new_root/run \
    /new_root/tmp \
    /new_root/home/dangerzone/dangerzone-image/rootfs

# Copy the /etc and /var directories under the new root directory. Also,
# copy /etc/, /opt, and /usr to the Dangerzone image rootfs.
#
# NOTE: We also have to remove the resolv.conf file, in order to not leak any
# DNS servers added there during image build time.
RUN cp -r /etc /var /new_root/ \
    && rm /new_root/etc/resolv.conf
RUN cp -r /etc /opt /usr /new_root/home/dangerzone/dangerzone-image/rootfs \
    && rm /new_root/home/dangerzone/dangerzone-image/rootfs/etc/resolv.conf

RUN ln -s /home/dangerzone/dangerzone-image/rootfs/usr /new_root/usr
RUN ln -s usr/bin /new_root/bin
RUN ln -s usr/lib /new_root/lib
RUN ln -s usr/lib64 /new_root/lib64
RUN ln -s usr/sbin /new_root/sbin
RUN ln -s usr/bin /new_root/home/dangerzone/dangerzone-image/rootfs/bin
RUN ln -s usr/lib /new_root/home/dangerzone/dangerzone-image/rootfs/lib
RUN ln -s usr/lib64 /new_root/home/dangerzone/dangerzone-image/rootfs/lib64

# Fix permissions in /home/dangerzone, so that our entrypoint script can make
# changes in the following folders.
RUN chown dangerzone:dangerzone \
    /new_root/home/dangerzone \
    /new_root/home/dangerzone/dangerzone-image/
# Fix permissions in /tmp, so that it can be used by unprivileged users.
RUN chmod 777 /new_root/tmp

COPY container_helpers/entrypoint.py /new_root
# HACK: For reasons that we are not sure yet, we need to explicitly specify the
# modification time of this file.
RUN touch -d ${DEBIAN_ARCHIVE_DATE}Z /new_root/entrypoint.py

## Final image

FROM scratch

# Copy the filesystem hierarchy that we created in the previous stage, so that
# /usr can be a symlink.
COPY --from=dangerzone-image /new_root/ /

# Switch to the dangerzone user for the rest of the script.
USER dangerzone

ENTRYPOINT ["/entrypoint.py"]
|
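The `touch -d ${DEBIAN_ARCHIVE_DATE}Z` calls in the Dockerfile clamp file modification times to the snapshot date, one of the main levers for reproducible output. The trick can be seen in isolation with GNU coreutils (the temp file below is illustrative):

```shell
# Set a file's mtime to a fixed UTC date, then read it back with
# date -r (which reports a file's last modification time):
touch -d 20250331Z /tmp/pinned
date -u -r /tmp/pinned +%Y%m%d
# prints: 20250331
```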
|
16 Dockerfile.env (new file)

@@ -0,0 +1,16 @@
# Should be the INDEX DIGEST from an image tagged `bookworm-<DATE>-slim`:
# https://hub.docker.com/_/debian/tags?name=bookworm-
#
# Tag for this digest: bookworm-20250317-slim
DEBIAN_IMAGE_DIGEST=sha256:1209d8fd77def86ceb6663deef7956481cc6c14a25e1e64daec12c0ceffcc19d
# Can be bumped to today's date
DEBIAN_ARCHIVE_DATE=20250331
# Can be bumped to the latest date in https://github.com/google/gvisor/tags
GVISOR_ARCHIVE_DATE=20250326
# Can be bumped to the latest version and checksum from https://github.com/ebandal/H2Orestart/releases
H2ORESTART_CHECKSUM=935e68671bde4ca63a364128077f1c733349bbcc90b7e6973bc7a2306494ec54
H2ORESTART_VERSION=v0.7.2

# Buildkit image (taken from freedomofpress/repro-build)
BUILDKIT_IMAGE="docker.io/moby/buildkit:v0.19.0@sha256:14aa1b4dd92ea0a4cd03a54d0c6079046ea98cd0c0ae6176bdd7036ba370cbbe"
BUILDKIT_IMAGE_ROOTLESS="docker.io/moby/buildkit:v0.19.0-rootless@sha256:e901cffdad753892a7c3afb8b9972549fca02c73888cf340c91ed801fdd96d71"
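Since `Dockerfile.env` is plain `KEY=VALUE` lines, build scripts can consume it directly. One hypothetical way is to source it from a shell (the temp copy below stands in for the real file):

```shell
# Write a small KEY=VALUE env file, source it, and use one of its values.
cat > /tmp/Dockerfile.env <<'EOF'
DEBIAN_ARCHIVE_DATE=20250331
GVISOR_ARCHIVE_DATE=20250326
EOF
. /tmp/Dockerfile.env
echo "$DEBIAN_ARCHIVE_DATE"
# prints: 20250331
```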
228 Dockerfile.in (new file)

@@ -0,0 +1,228 @@
|
||||||
|
# NOTE: Updating the packages to their latest versions requires bumping the
|
||||||
|
# Dockerfile args below. For more info about this file, read
|
||||||
|
# docs/developer/reproducibility.md.
|
||||||
|
|
||||||
|
ARG DEBIAN_IMAGE_DIGEST={{DEBIAN_IMAGE_DIGEST}}
|
||||||
|
|
||||||
|
FROM docker.io/library/debian@${DEBIAN_IMAGE_DIGEST} AS dangerzone-image
|
||||||
|
|
||||||
|
ARG GVISOR_ARCHIVE_DATE={{GVISOR_ARCHIVE_DATE}}
ARG DEBIAN_ARCHIVE_DATE={{DEBIAN_ARCHIVE_DATE}}
ARG H2ORESTART_CHECKSUM={{H2ORESTART_CHECKSUM}}
ARG H2ORESTART_VERSION={{H2ORESTART_VERSION}}

ENV DEBIAN_FRONTEND=noninteractive

# The following way of installing packages is taken from
# https://github.com/reproducible-containers/repro-sources-list.sh/blob/master/Dockerfile.debian-12,
# and adapted to allow installing gVisor from its own repo as well.
RUN \
    --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    --mount=type=bind,source=./container_helpers/repro-sources-list.sh,target=/usr/local/bin/repro-sources-list.sh \
    --mount=type=bind,source=./container_helpers/gvisor.key,target=/tmp/gvisor.key \
    : "Hacky way to set a date for the Debian snapshot repos" && \
    touch -d ${DEBIAN_ARCHIVE_DATE}Z /etc/apt/sources.list.d/debian.sources && \
    touch -d ${DEBIAN_ARCHIVE_DATE}Z /etc/apt/sources.list && \
    repro-sources-list.sh && \
    : "Setup APT to install gVisor from its separate APT repo" && \
    apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y --no-install-recommends apt-transport-https ca-certificates gnupg && \
    gpg -o /usr/share/keyrings/gvisor-archive-keyring.gpg --dearmor /tmp/gvisor.key && \
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/gvisor-archive-keyring.gpg] https://storage.googleapis.com/gvisor/releases ${GVISOR_ARCHIVE_DATE} main" > /etc/apt/sources.list.d/gvisor.list && \
    : "Install the necessary gVisor and Dangerzone dependencies" && \
    apt-get update && \
    apt-get install -y --no-install-recommends \
        python3 python3-fitz libreoffice-nogui libreoffice-java-common \
        python3-magic default-jre-headless fonts-noto-cjk fonts-dejavu \
        runsc unzip wget && \
    : "Clean up for improving reproducibility (optional)" && \
    rm -rf /var/cache/fontconfig/ && \
    rm -rf /etc/ssl/certs/java/cacerts && \
    rm -rf /var/log/* /var/cache/ldconfig/aux-cache

# Download H2ORestart from GitHub using a pinned version and hash. Note that
# it's available in Debian repos, but not in Bookworm yet.
RUN mkdir /opt/libreoffice_ext && cd /opt/libreoffice_ext \
    && H2ORESTART_FILENAME=h2orestart.oxt \
    && wget https://github.com/ebandal/H2Orestart/releases/download/$H2ORESTART_VERSION/$H2ORESTART_FILENAME \
    && echo "$H2ORESTART_CHECKSUM  $H2ORESTART_FILENAME" | sha256sum -c \
    && install -dm777 "/usr/lib/libreoffice/share/extensions/" \
    && rm /root/.wget-hsts
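The `sha256sum -c` line above is what enforces the pinned hash. A minimal sketch of that verification pattern, using a throwaway file in place of the real `.oxt` download:

```shell
# Stand-in for the downloaded extension; in the Dockerfile the file comes
# from wget and the expected hash from the H2ORESTART_CHECKSUM build argument.
cd "$(mktemp -d)"
echo "fake extension payload" > h2orestart.oxt
checksum=$(sha256sum h2orestart.oxt | cut -d' ' -f1)

# Same check as in the Dockerfile: exits non-zero (failing the build) on any
# mismatch. Note the two spaces between hash and filename, which the
# checksum-file format requires.
echo "$checksum  h2orestart.oxt" | sha256sum -c
# prints: h2orestart.oxt: OK
```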
# Create an unprivileged user both for gVisor and for running Dangerzone.
# XXX: Make the shadow field "date of last password change" a constant
# number.
RUN addgroup --gid 1000 dangerzone
RUN adduser --uid 1000 --ingroup dangerzone --shell /bin/true \
    --disabled-password --home /home/dangerzone dangerzone \
    && chage -d 99999 dangerzone \
    && rm /etc/shadow-

# Copy Dangerzone's conversion logic under /opt/dangerzone, and allow Python to
# import it.
RUN mkdir -p /opt/dangerzone/dangerzone
RUN touch /opt/dangerzone/dangerzone/__init__.py

# Copy only the Python code, and not any produced .pyc files.
COPY conversion/*.py /opt/dangerzone/dangerzone/conversion/

# Create a directory that will be used by gVisor as the place where it will
# store the state of its containers.
RUN mkdir /home/dangerzone/.containers

###############################################################################
#
# REUSING CONTAINER IMAGES:
# Anatomy of a hack
# ========================
#
# The rest of the Dockerfile aims to do one thing: allow the final container
# image to actually contain two container images; one for the outer container
# (spawned by Podman/Docker Desktop), and one for the inner container (spawned
# by gVisor).
#
# This has already been done in the past, and we explain why and how in the
# design document for gVisor integration (should be in
# `docs/developer/gvisor.md`). In this iteration, we want to also
# achieve the following:
#
# 1. Have a small final image, by sharing some system paths between the inner
#    and outer container image using symlinks.
# 2. Allow our security scanning tool to see the contents of the inner
#    container image.
# 3. Make the outer container image operational, in the sense that you can use
#    `apt` commands and perform a conversion with Dangerzone, outside the
#    gVisor sandbox. This is helpful for debugging purposes.
#
# Below we'll explain how our design choices are informed by the above
# sub-goals.
#
# First, to achieve a small container image, we basically need to copy `/etc`,
# `/usr` and `/opt` from the original Dangerzone image to the **inner**
# container image (under `/home/dangerzone/dangerzone-image/rootfs/`).
#
# That's all we need. The rest of the files play no role, and we can actually
# mask them in gVisor's OCI config.
#
# Second, in order to let our security scanner find the installed packages,
# we need to copy the following dirs to the root of the **outer** container
# image:
# * `/etc`, so that the security scanner can detect the image type and its
#   sources
# * `/var`, so that the security scanner can have access to the APT database.
#
# IMPORTANT: We don't symlink the `/etc` of the **outer** container image to
# the **inner** one, in order to avoid leaking files like
# `/etc/{hostname,hosts,resolv.conf}` that Podman/Docker mounts when running
# the **outer** container image.
#
# Third, in order to have an operational Debian image, we are _mostly_ covered
# by the dirs we have copied. There's a _rare_ case where, during debugging, we
# may want to install a system package that has components in `/etc` and
# `/var`, which will not be available in the **inner** container image. In that
# case, the developer can create the necessary symlinks in the live container.
#
# FILESYSTEM HIERARCHY
# ====================
#
# The above plan leads to the following filesystem hierarchy:
#
# Outer container image:
#
#     # ls -l /
#     lrwxrwxrwx   1 root root    7 Jan 27 10:46 bin -> usr/bin
#     -rwxr-xr-x   1 root root 7764 Jan 24 08:14 entrypoint.py
#     drwxr-xr-x   1 root root 4096 Jan 27 10:47 etc
#     drwxr-xr-x   1 root root 4096 Jan 27 10:46 home
#     lrwxrwxrwx   1 root root    7 Jan 27 10:46 lib -> usr/lib
#     lrwxrwxrwx   1 root root    9 Jan 27 10:46 lib64 -> usr/lib64
#     drwxr-xr-x   2 root root 4096 Jan 27 10:46 root
#     drwxr-xr-x   1 root root 4096 Jan 27 10:47 run
#     lrwxrwxrwx   1 root root    8 Jan 27 10:46 sbin -> usr/sbin
#     drwxrwxrwx   2 root root 4096 Jan 27 10:46 tmp
#     lrwxrwxrwx   1 root root   44 Jan 27 10:46 usr -> /home/dangerzone/dangerzone-image/rootfs/usr
#     drwxr-xr-x  11 root root 4096 Jan 27 10:47 var
#
# Inner container image:
#
#     # ls -l /home/dangerzone/dangerzone-image/rootfs/
#     total 12
#     lrwxrwxrwx  1 root root    7 Jan 27 10:47 bin -> usr/bin
#     drwxr-xr-x 43 root root 4096 Jan 27 10:46 etc
#     lrwxrwxrwx  1 root root    7 Jan 27 10:47 lib -> usr/lib
#     lrwxrwxrwx  1 root root    9 Jan 27 10:47 lib64 -> usr/lib64
#     drwxr-xr-x  4 root root 4096 Jan 27 10:47 opt
#     drwxr-xr-x 12 root root 4096 Jan 27 10:47 usr
#
# SYMLINKING /USR
# ===============
#
# It's surprisingly difficult (maybe even borderline impossible) to symlink
# `/usr` to a different path during image build. The problem is that /usr
# is very sensitive, and you can't manipulate it in a live system. That is, I
# haven't found a way to do the following, or something equivalent:
#
#     rm -r /usr && ln -s /home/dangerzone/dangerzone-image/rootfs/usr/ /usr
#
# The `ln` binary, even if you specify it by its full path, cannot run
# (probably because `ld-linux.so` can't be found). For this reason, we have
# to create the symlinks beforehand, in a previous build stage. Then, in an
# empty container image (a scratch image), we can copy these symlinks and the
# /usr, and stitch everything together.
###############################################################################

# Create the filesystem hierarchy that will be used to symlink /usr.
RUN mkdir -p \
    /new_root \
    /new_root/root \
    /new_root/run \
    /new_root/tmp \
    /new_root/home/dangerzone/dangerzone-image/rootfs

# Copy the /etc and /var directories under the new root directory. Also,
# copy /etc, /opt, and /usr to the Dangerzone image rootfs.
#
# NOTE: We also have to remove the resolv.conf file, in order to not leak any
# DNS servers added there during image build time.
RUN cp -r /etc /var /new_root/ \
    && rm /new_root/etc/resolv.conf
RUN cp -r /etc /opt /usr /new_root/home/dangerzone/dangerzone-image/rootfs \
    && rm /new_root/home/dangerzone/dangerzone-image/rootfs/etc/resolv.conf

RUN ln -s /home/dangerzone/dangerzone-image/rootfs/usr /new_root/usr
RUN ln -s usr/bin /new_root/bin
RUN ln -s usr/lib /new_root/lib
RUN ln -s usr/lib64 /new_root/lib64
RUN ln -s usr/sbin /new_root/sbin
RUN ln -s usr/bin /new_root/home/dangerzone/dangerzone-image/rootfs/bin
RUN ln -s usr/lib /new_root/home/dangerzone/dangerzone-image/rootfs/lib
RUN ln -s usr/lib64 /new_root/home/dangerzone/dangerzone-image/rootfs/lib64

# Fix permissions in /home/dangerzone, so that our entrypoint script can make
# changes in the following folders.
RUN chown dangerzone:dangerzone \
    /new_root/home/dangerzone \
    /new_root/home/dangerzone/dangerzone-image/
# Fix permissions in /tmp, so that it can be used by unprivileged users.
RUN chmod 777 /new_root/tmp

COPY container_helpers/entrypoint.py /new_root
# HACK: For reasons that we are not sure of yet, we need to explicitly specify
# the modification time of this file.
RUN touch -d ${DEBIAN_ARCHIVE_DATE}Z /new_root/entrypoint.py

## Final image

FROM scratch

# Copy the filesystem hierarchy that we created in the previous stage, so that
# /usr can be a symlink.
COPY --from=dangerzone-image /new_root/ /

# Switch to the dangerzone user for the rest of the script.
USER dangerzone

ENTRYPOINT ["/entrypoint.py"]
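The /usr stitching described in the comments above can be reproduced outside an image build. The sketch below uses a throwaway directory and a relative symlink target (the real image uses the absolute `/home/dangerzone/...` path, which only resolves correctly inside the container):

```shell
root=$(mktemp -d)

# Inner image rootfs: the single real copy of /usr lives here.
mkdir -p "$root/home/dangerzone/dangerzone-image/rootfs/usr/bin"
echo "inner copy" > "$root/home/dangerzone/dangerzone-image/rootfs/usr/bin/tool"

# Outer image: /usr points into the inner rootfs, and /bin piggybacks on it,
# mirroring the `RUN ln -s ...` lines of the build stage above.
ln -s home/dangerzone/dangerzone-image/rootfs/usr "$root/usr"
ln -s usr/bin "$root/bin"

# A lookup under the outer /bin resolves to the inner image's file.
cat "$root/bin/tool"
# prints: inner copy
```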
INSTALL.md (160 changed lines)
@@ -1,24 +1,90 @@

## Operating System support

Dangerzone can run on various Operating Systems (OS), and has automated tests
for most of them.
This section explains which OSes we support, how long we support each version,
and how we test Dangerzone against them.

You can find general support information in this table, and more details in the
following sections.

(Unless specified otherwise, the architecture of the OS is AMD64.)

| Distribution  | Supported releases                         | Automated tests         | Manual QA              |
| ------------- | ------------------------------------------ | ----------------------- | ---------------------- |
| Windows       | 2 last releases                            | 🗹 (`windows-latest`) ◎ | 🗹                     |
| macOS intel   | 3 last releases                            | 🗹 (`macos-13`) ◎       | 🗹                     |
| macOS silicon | 3 last releases                            | 🗹 (`macos-latest`) ◎   | 🗹                     |
| Ubuntu        | Follow upstream support ✰                  | 🗹                      | 🗹                     |
| Debian        | Current stable, oldstable and LTS releases | 🗹                      | 🗹                     |
| Fedora        | Follow upstream support                    | 🗹                      | 🗹                     |
| Qubes OS      | [Beta support](https://github.com/freedomofpress/dangerzone/issues/413) ✢ | 🗷 | Latest Fedora template |
| Tails         | Only the last release                      | 🗷                      | Last release only      |

Notes:

✰ Support for Ubuntu Focal [was dropped](https://github.com/freedomofpress/dangerzone/issues/1018)

✢ Qubes OS support assumes the use of a Fedora template. The supported releases follow our general support for Fedora.

◎ More information about where that points [in the runner-images repository](https://github.com/actions/runner-images/tree/main)

## MacOS

- Download [Dangerzone 0.9.0 for Mac (Apple Silicon CPU)](https://github.com/freedomofpress/dangerzone/releases/download/v0.9.0/Dangerzone-0.9.0-arm64.dmg)
- Download [Dangerzone 0.9.0 for Mac (Intel CPU)](https://github.com/freedomofpress/dangerzone/releases/download/v0.9.0/Dangerzone-0.9.0-i686.dmg)

> [!TIP]
> We support the releases of macOS that are still within Apple's servicing
> timeline. Apple usually provides security updates for the latest 3 releases,
> but this isn't consistently applied and security fixes aren't guaranteed for
> the non-latest releases. We are also dependent on
> [Docker Desktop macOS support](https://docs.docker.com/desktop/setup/install/mac-install/).

You can also install Dangerzone for Mac using [Homebrew](https://brew.sh/): `brew install --cask dangerzone`

> **Note**: you will also need to install [Docker Desktop](https://www.docker.com/products/docker-desktop/).
> This program needs to run alongside Dangerzone at all times, since it is what allows Dangerzone to
> create the secure environment.

## Windows

- Download [Dangerzone 0.9.0 for Windows](https://github.com/freedomofpress/dangerzone/releases/download/v0.9.0/Dangerzone-0.9.0.msi)

> **Note**: you will also need to install [Docker Desktop](https://www.docker.com/products/docker-desktop/).
> This program needs to run alongside Dangerzone at all times, since it is what allows Dangerzone to
> create the secure environment.

> [!TIP]
> We generally support Windows releases that are still within [Microsoft's servicing timeline](https://support.microsoft.com/en-us/help/13853/windows-lifecycle-fact-sheet).
>
> Docker sets the bottom line:
>
> > Docker only supports Docker Desktop on Windows for those versions of Windows that are still within [Microsoft's servicing timeline](https://support.microsoft.com/en-us/help/13853/windows-lifecycle-fact-sheet). Docker Desktop is not supported on server versions of Windows, such as Windows Server 2019 or Windows Server 2022.

## Linux

On Linux, Dangerzone uses [Podman](https://podman.io/) instead of Docker Desktop for creating
an isolated environment. It will be installed automatically when installing Dangerzone.

> [!TIP]
> We support Ubuntu, Debian, and Fedora releases that are still within
> their respective servicing timelines, with a few twists:
>
> - Ubuntu: We follow upstream support with an extra cutoff date. No support for
>   versions prior to the second oldest LTS release.
> - Fedora: We follow upstream support.
> - Debian: Current stable, oldstable and LTS releases.

Dangerzone is available for:

- Ubuntu 25.04 (plucky)
- Ubuntu 24.10 (oracular)
- Ubuntu 24.04 (noble)
- Ubuntu 22.04 (jammy)
- Debian 13 (trixie)
- Debian 12 (bookworm)
- Debian 11 (bullseye)
- Fedora 42
- Fedora 41
- Fedora 40
- Tails
- Qubes OS (beta support)

@@ -28,35 +94,7 @@ Dangerzone is available for:

<tr>
<td>
<details>
<summary><i>:information_source: Backport notice for Ubuntu 22.04 (Jammy) users regarding the <code>conmon</code> package</i></summary>
</br>

The `conmon` version that Podman uses and Ubuntu Jammy ships has a bug

@@ -72,20 +110,33 @@ Dangerzone is available for:

</tr>
</table>

First, retrieve the PGP keys. The instructions differ depending on the specific
distribution you are using:

For Debian Trixie and Ubuntu Plucky (25.04), follow these instructions to
download the PGP keys:

```bash
sudo apt-get update && sudo apt-get install sq ca-certificates -y
sq network keyserver \
    --server hkps://keys.openpgp.org \
    search "DE28 AB24 1FA4 8260 FAC9 B8BA A7C9 B385 2260 4281" \
    --output - | sq packet dearmor fpfdz.gpg
sudo mkdir -p /etc/apt/keyrings/
sudo mv fpfdz.gpg /etc/apt/keyrings/fpf-apt-tools-archive-keyring.gpg
```

On other Debian-derivatives:

```sh
sudo apt-get update && sudo apt-get install gnupg2 ca-certificates -y
sudo mkdir -p /etc/apt/keyrings/
sudo gpg --keyserver hkps://keys.openpgp.org \
    --no-default-keyring --keyring /etc/apt/keyrings/fpf-apt-tools-archive-keyring.gpg \
    --recv-keys "DE28 AB24 1FA4 8260 FAC9 B8BA A7C9 B385 2260 4281"
```
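The APT-sources step that follows begins with `. /etc/os-release`. That line sources the distribution's identification variables into the shell; a minimal sketch of what it provides, using a throwaway file in place of the real `/etc/os-release`:

```shell
# Stand-in for /etc/os-release; the real file ships with the distribution.
cat > /tmp/os-release.example <<'EOF'
ID=debian
VERSION_CODENAME=bookworm
EOF

# Sourcing it exposes ID, VERSION_CODENAME, etc. as plain shell variables,
# which the sources-list line can then interpolate.
. /tmp/os-release.example
echo "$ID $VERSION_CODENAME"
# prints: debian bookworm
```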
Then, on all distributions, add the URL of the repo in your APT sources:

```sh
. /etc/os-release

@@ -125,23 +176,6 @@ sudo apt install -y dangerzone

### Fedora

Type the following commands in a terminal:

```

@@ -197,8 +231,8 @@ After confirming that it matches, type `y` (for yes) and the installation should

> [!IMPORTANT]
> This section will install Dangerzone in your **default template**
> (`fedora-41` as of writing this). If you want to install it in a different
> one, make sure to replace `fedora-41` with the template of your choice.

The following steps must be completed once. Make sure you run them in the
specified qubes.

@@ -215,7 +249,7 @@ Create a **disposable**, offline app qube (`dz-dvm`), based on your default

template. This will be the qube where the documents will be sanitized:

```
qvm-create --class AppVM --label red --template fedora-41 \
    --prop netvm="" --prop template_for_dispvms=True \
    --prop default_dispvm='' dz-dvm
```

@@ -228,7 +262,7 @@ document, with the following contents:

dz.Convert * @anyvm @dispvm:dz-dvm allow
```

#### In the `fedora-41` template

Install Dangerzone:

@@ -289,7 +323,7 @@ Our [GitHub Releases page](https://github.com/freedomofpress/dangerzone/releases

hosts the following files:
* Windows installer (`Dangerzone-<version>.msi`)
* macOS archives (`Dangerzone-<version>-<arch>.dmg`)
* Container images (`container-<version>-<arch>.tar`)
* Source package (`dangerzone-<version>.tar.gz`)

All these files are accompanied by signatures (as `.asc` files). We'll explain

@@ -317,7 +351,7 @@ gpg --verify Dangerzone-0.6.1-i686.dmg.asc Dangerzone-0.6.1-i686.dmg

For the container images:

```
gpg --verify container-0.6.1-i686.tar.asc container-0.6.1-i686.tar
```

For the source package:
Makefile (63 changed lines)
@@ -1,23 +1,6 @@

LARGE_TEST_REPO_DIR:=tests/test_docs_large
GIT_DESC=$$(git describe)
JUNIT_FLAGS := --capture=sys -o junit_logging=all

MYPY_ARGS := --ignore-missing-imports \
	--disallow-incomplete-defs \
	--disallow-untyped-defs \

@@ -26,22 +9,20 @@ MYPY_ARGS := --ignore-missing-imports \

	--warn-unused-ignores \
	--exclude $(LARGE_TEST_REPO_DIR)/*.py

.PHONY: lint
lint: ## Check the code for linting, formatting, and typing issues with ruff and mypy
	ruff check
	ruff format --check
	mypy $(MYPY_ARGS) dangerzone
	mypy $(MYPY_ARGS) tests

.PHONY: fix
fix: ## apply all the suggestions from ruff
	ruff check --fix
	ruff format

.PHONY: test
test: ## Run the tests
	# Make each GUI test run as a separate process, to avoid segfaults due to
	# shared state.
	# See more in https://github.com/freedomofpress/dangerzone/issues/493

@@ -66,6 +47,32 @@ test-large: test-large-init ## Run large test set

	python -m pytest --tb=no tests/test_large_set.py::TestLargeSet -v $(JUNIT_FLAGS) --junitxml=$(TEST_LARGE_RESULTS)
	python $(TEST_LARGE_RESULTS)/report.py $(TEST_LARGE_RESULTS)

Dockerfile: Dockerfile.env Dockerfile.in ## Regenerate the Dockerfile from its template
	poetry run jinja2 Dockerfile.in Dockerfile.env > Dockerfile

.PHONY: poetry-install
poetry-install: ## Install project dependencies
	poetry install

.PHONY: build-clean
build-clean:
	poetry run doit clean

.PHONY: build-macos-intel
build-macos-intel: build-clean poetry-install ## Build macOS intel package (.dmg)
	poetry run doit -n 8

.PHONY: build-macos-arm
build-macos-arm: build-clean poetry-install ## Build macOS Apple Silicon package (.dmg)
	poetry run doit -n 8 macos_build_dmg

.PHONY: build-linux
build-linux: build-clean poetry-install ## Build linux packages (.rpm and .deb)
	poetry run doit -n 8 fedora_rpm debian_deb

.PHONY: regenerate-reference-pdfs
regenerate-reference-pdfs: ## Regenerate the reference PDFs
	pytest tests/test_cli.py -k regenerate --generate-reference-pdfs

# Makefile self-help borrowed from the securedrop-client project
# Explanation of the below shell command, should it ever break:
# 1. Set the field separator to ": ##" and any make targets that might appear between : and ##
QA.md (new file, 197 lines)
@ -0,0 +1,197 @@
|
||||||
|
## QA
|
||||||
|
|
||||||
|
To ensure that new releases do not introduce regressions, and support existing
|
||||||
|
and newer platforms, we have to test that the produced packages work as expected.
|
||||||
|
|
||||||
|
Check the following:
|
||||||
|
|
||||||
|
- [ ] Make sure that the tip of the `main` branch passes the CI tests.
|
||||||
|
- [ ] Make sure that the Apple account has a valid application password and has
|
||||||
|
agreed to the latest Apple terms (see [macOS release](#macos-release)
|
||||||
|
section).
|
||||||
|
|
||||||
|
Because it is repetitive, we wrote a script to help with the QA.
|
||||||
|
It can run the tasks for you, pausing when it needs manual intervention.
|
||||||
|
|
||||||
|
You can run it with a command like:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
poetry run ./dev_scripts/qa.py {distro}-{version}
|
||||||
|
```
|
||||||
|
|
||||||
|
### The checklist
|
||||||
|
|
||||||
|
- [ ] Create a test build in Windows and make sure it works:
|
||||||
|
- [ ] Check if the suggested Python version is still supported.
|
||||||
|
- [ ] Create a new development environment with Poetry.
|
||||||
|
- [ ] Build the container image and ensure the development environment uses
|
||||||
|
the new image.
|
||||||
|
- [ ] Download the OCR language data using `./install/common/download-tessdata.py`
|
||||||
|
- [ ] Run the Dangerzone tests.
|
||||||
|
  - [ ] Build and run the Dangerzone .exe
  - [ ] Test some QA scenarios (see [Scenarios](#Scenarios) below).
- [ ] Create a test build in macOS (Intel CPU) and make sure it works:
  - [ ] Check if the suggested Python version is still supported.
  - [ ] Create a new development environment with Poetry.
  - [ ] Build the container image and ensure the development environment uses
    the new image.
  - [ ] Download the OCR language data using `./install/common/download-tessdata.py`
  - [ ] Run the Dangerzone tests.
  - [ ] Create and run an app bundle.
  - [ ] Test some QA scenarios (see [Scenarios](#Scenarios) below).
- [ ] Create a test build in macOS (M1/2 CPU) and make sure it works:
  - [ ] Check if the suggested Python version is still supported.
  - [ ] Create a new development environment with Poetry.
  - [ ] Build the container image and ensure the development environment uses
    the new image.
  - [ ] Download the OCR language data using `./install/common/download-tessdata.py`
  - [ ] Run the Dangerzone tests.
  - [ ] Create and run an app bundle.
  - [ ] Test some QA scenarios (see [Scenarios](#Scenarios) below).
- [ ] Create a test build in the most recent Ubuntu LTS platform (Ubuntu 24.04
  as of writing this) and make sure it works:
  - [ ] Create a new development environment with Poetry.
  - [ ] Build the container image and ensure the development environment uses
    the new image.
  - [ ] Download the OCR language data using `./install/common/download-tessdata.py`
  - [ ] Run the Dangerzone tests.
  - [ ] Create a .deb package and install it system-wide.
  - [ ] Test some QA scenarios (see [Scenarios](#Scenarios) below).
- [ ] Create a test build in the most recent Fedora platform (Fedora 41 as of
  writing this) and make sure it works:
  - [ ] Create a new development environment with Poetry.
  - [ ] Build the container image and ensure the development environment uses
    the new image.
  - [ ] Download the OCR language data using `./install/common/download-tessdata.py`
  - [ ] Run the Dangerzone tests.
  - [ ] Create an .rpm package and install it system-wide.
  - [ ] Test some QA scenarios (see [Scenarios](#Scenarios) below).
- [ ] Create a test build in the most recent Qubes Fedora template (Fedora 40 as
  of writing this) and make sure it works:
  - [ ] Create a new development environment with Poetry.
  - [ ] Run the Dangerzone tests.
  - [ ] Create a Qubes .rpm package and install it system-wide.
  - [ ] Ensure that the Dangerzone application appears in the "Applications"
    tab.
  - [ ] Test some QA scenarios (see [Scenarios](#Scenarios) below) and make sure
    they spawn disposable qubes.
### Scenarios

#### 1. Dangerzone correctly identifies that Docker/Podman is not installed

_(Only for MacOS / Windows)_

Temporarily hide the Docker/Podman binaries, e.g., rename the `docker` /
`podman` binaries to something else. Then run Dangerzone. Dangerzone should
prompt the user to install Docker/Podman.
#### 2. Dangerzone correctly identifies that Docker is not running

_(Only for MacOS / Windows)_

Stop the Docker Desktop application. Then run Dangerzone. Dangerzone should
prompt the user to start Docker Desktop.
#### 3. Updating Dangerzone handles external state correctly

_(Applies to Windows/MacOS)_

Install the previous version of Dangerzone, downloaded from the website.

Open the Dangerzone application and enable some non-default settings.
**If there are new settings, make sure to change those as well**.

Close the Dangerzone application and get the container image for that
version. For example:

```
$ docker images dangerzone.rocks/dangerzone
REPOSITORY                    TAG      IMAGE ID     CREATED   SIZE
dangerzone.rocks/dangerzone   <tag>    <image ID>   <date>    <size>
```

Then run the version under QA and ensure that the settings remain changed.

Afterwards, check that the new container image was installed, by running the
same command and seeing the following differences:

```
$ docker images dangerzone.rocks/dangerzone
REPOSITORY                    TAG           IMAGE ID         CREATED        SIZE
dangerzone.rocks/dangerzone   <other tag>   <different ID>   <newer date>   <different size>
```
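The before/after comparison can also be scripted. Below is a sketch that parses the `docker images` table and compares the `IMAGE ID` column; the helper function and the sample listings are ours, not part of Dangerzone:

```python
def image_id(listing: str) -> str:
    """Return the IMAGE ID column of the first data row of `docker images` output."""
    rows = listing.strip().splitlines()
    # Columns are REPOSITORY, TAG, IMAGE ID, CREATED, SIZE. CREATED may contain
    # spaces ("3 months ago"), but IMAGE ID is the third whitespace field.
    return rows[1].split()[2]


# Sample captures of `docker images dangerzone.rocks/dangerzone`,
# taken before and after the upgrade (values are placeholders).
before = """\
REPOSITORY                    TAG     IMAGE ID       CREATED        SIZE
dangerzone.rocks/dangerzone   0.8.0   1a2b3c4d5e6f   3 months ago   700MB
"""
after = """\
REPOSITORY                    TAG     IMAGE ID       CREATED       SIZE
dangerzone.rocks/dangerzone   0.9.0   9f8e7d6c5b4a   2 hours ago   700MB
"""

assert image_id(before) != image_id(after)  # the upgrade installed a new image
```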
#### 4. Dangerzone successfully installs the container image

_(Only for Linux)_

Remove the Dangerzone container image from Docker/Podman. Then run Dangerzone.
Dangerzone should install the container image successfully.
#### 5. Dangerzone retains the settings of previous runs

Run Dangerzone and make some changes in the settings (e.g., change the OCR
language, toggle whether to open the document after conversion, etc.). Restart
Dangerzone. Dangerzone should show the settings that the user chose.
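This check can be made mechanical by inspecting the settings file directly. A sketch, assuming the settings are stored as a JSON file (the path varies per OS, e.g. under the user's config directory on Linux; the helper names are ours):

```python
import json
import pathlib


def load_settings(path: str) -> dict:
    """Return the saved settings, or an empty dict if none were saved yet."""
    p = pathlib.Path(path)
    return json.loads(p.read_text()) if p.exists() else {}


def settings_survived(path: str, expected: dict) -> bool:
    """True if every setting changed before the restart is still present after it."""
    saved = load_settings(path)
    return all(saved.get(key) == value for key, value in expected.items())
```

Capture the settings you changed before restarting, then assert `settings_survived(path, changed)` after the restart.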
#### 6. Dangerzone reports failed conversions

Run Dangerzone and convert the `tests/test_docs/sample_bad_pdf.pdf` document.
Dangerzone should fail gracefully, by reporting that the operation failed, and
showing the following error message:

> The document format is not supported
#### 7. Dangerzone succeeds in converting multiple documents

Run Dangerzone against a list of documents, and tick all options. Ensure that:

* Conversions take place sequentially.
* Attempting to close the window while converting asks the user if they want to
  abort the conversions.
* Conversions are completed successfully.
* Conversions show individual progress in real-time (double-check for Qubes).
* _(Only for Linux)_ The resulting files open with the PDF viewer of our choice.
* OCR seems to have detected characters in the PDF files.
* The resulting files have been saved with the proper suffix, in the proper
  location.
* The original files have been saved in the `unsafe/` directory.
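The suffix/location check can be expressed as a tiny helper. This is a sketch under the assumption that the default output suffix is `-safe` and the output lands next to the input (both are configurable in the settings):

```python
import pathlib


def safe_output_path(input_path: str, suffix: str = "-safe") -> pathlib.Path:
    """Expected output location for a converted document, next to the input file."""
    p = pathlib.Path(input_path)
    return p.with_name(p.stem + suffix + ".pdf")
```

For each input, assert that `safe_output_path(doc)` exists after the conversion finishes.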
#### 8. Dangerzone is able to handle drag-n-drop

Run Dangerzone against a set of documents that you drag-n-drop. Files should be
added and conversion should run without issue.

> [!TIP]
> On our end-user container environments for Linux, we can start a file manager
> with `thunar &`.
#### 9. Dangerzone CLI succeeds in converting multiple documents

_(Only for Windows and Linux)_

Run Dangerzone CLI against a list of documents. Ensure that conversions happen
sequentially, are completed successfully, and we see their progress.
#### 10. Dangerzone can open a document for conversion via right-click -> "Open With"

_(Only for Windows, MacOS and Qubes)_

Go to a directory with office documents, right-click on one, and click on "Open
With". We should be able to open the file with Dangerzone, and then convert it.
#### 11. Dangerzone shows helpful errors for setup issues on Qubes

_(Only for Qubes)_

Check what errors Dangerzone throws in the following scenarios. The errors
should point the user to the Qubes notifications in the top-right corner:

1. The `dz-dvm` template does not exist. We can trigger this scenario by
   temporarily renaming this template.
2. The Dangerzone RPC policy does not exist. We can trigger this scenario by
   temporarily renaming the `dz.Convert` policy.
3. The `dz-dvm` disposable Qube cannot start due to insufficient resources. We
   can trigger this scenario by temporarily increasing the minimum required RAM
   of the `dz-dvm` template to more than the available amount.
## README.md

Take potentially dangerous PDFs, office documents, or images and convert them to safe PDFs.
Dangerzone works like this: You give it a document that you don't know if you can trust (for example, an email attachment). Inside of a sandbox, Dangerzone converts the document to a PDF (if it isn't already one), and then converts the PDF into raw pixel data: a huge list of RGB color values for each page. Then, outside of the sandbox, Dangerzone takes this pixel data and converts it back into a PDF.

_Read more about Dangerzone on the [official site](https://dangerzone.rocks/about/)._

## Getting started

Follow the instructions for each platform:

* [macOS](https://github.com/freedomofpress/dangerzone/blob/v0.9.0/INSTALL.md#macos)
* [Windows](https://github.com/freedomofpress/dangerzone/blob/v0.9.0/INSTALL.md#windows)
* [Ubuntu Linux](https://github.com/freedomofpress/dangerzone/blob/v0.9.0/INSTALL.md#ubuntu-debian)
* [Debian Linux](https://github.com/freedomofpress/dangerzone/blob/v0.9.0/INSTALL.md#ubuntu-debian)
* [Fedora Linux](https://github.com/freedomofpress/dangerzone/blob/v0.9.0/INSTALL.md#fedora)
* [Qubes OS (beta)](https://github.com/freedomofpress/dangerzone/blob/v0.9.0/INSTALL.md#qubes-os)
* [Tails](https://github.com/freedomofpress/dangerzone/blob/v0.9.0/INSTALL.md#tails)

You can read more about our operating system support [here](https://github.com/freedomofpress/dangerzone/blob/v0.9.0/INSTALL.md#operating-system-support).

## Some features
Dangerzone gets updates to improve its features _and_ to fix problems.

1. Check which version of Dangerzone you are currently using: run Dangerzone, then look for a series of numbers to the right of the logo within the app. The format of the numbers will look similar to `0.4.1`.
2. Now find the latest available version of Dangerzone: go to the [download page](https://dangerzone.rocks/#downloads). Look for the version number displayed. The number will be using the same format as in Step 1.
3. Is the version on the Dangerzone download page higher than the version of your installed app? Go ahead and update.
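The comparison in the steps above must be numeric, not alphabetic (so that `0.10.0` ranks above `0.9.1`). A sketch of such a check; the helper names are ours, not part of Dangerzone:

```python
def parse_version(version: str) -> tuple:
    """Turn a dotted version string like "0.4.1" into the tuple (0, 4, 1)."""
    return tuple(int(part) for part in version.split("."))


def update_available(installed: str, latest: str) -> bool:
    """True if the version on the download page is higher than the installed one."""
    return parse_version(latest) > parse_version(installed)
```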
### Can I use Podman Desktop?

Yes! We've introduced [experimental support for Podman Desktop](https://github.com/freedomofpress/dangerzone/blob/main/docs/podman-desktop.md) on Windows and macOS.
## RELEASE.md
# Release instructions

This section documents how we currently release Dangerzone for the different distributions we support.

## Pre-release

Here is a list of tasks that should be done before issuing the release:

- [ ] Create a new issue named **QA and Release for version \<VERSION\>**, to track the general progress.
  You can generate its content with the `poetry run ./dev_scripts/generate-release-tasks.py` command.
- [ ] [Add new Linux platforms and remove obsolete ones](https://github.com/freedomofpress/dangerzone/blob/main/RELEASE.md#add-new-linux-platforms-and-remove-obsolete-ones)
- [ ] Bump the Python dependencies using `poetry lock`
- [ ] Check for new [WiX releases](https://github.com/wixtoolset/wix/releases) and update it if needed
- [ ] Update `version` in `pyproject.toml`
- [ ] Update `share/version.txt`
- [ ] Update the "Version" field in `install/linux/dangerzone.spec`
- [ ] Bump the Debian version by adding a new changelog entry in `debian/changelog`
- [ ] [Bump the minimum Docker Desktop versions](https://github.com/freedomofpress/dangerzone/blob/main/RELEASE.md#bump-the-minimum-docker-desktop-version) in `isolation_provider/container.py`
- [ ] Bump the dates and versions in the `Dockerfile`
- [ ] Update the download links in our `INSTALL.md` page to point to the new version (the download links will be populated after the release)
- [ ] Update screenshot in `README.md`, if necessary
- [ ] CHANGELOG.md should be updated to include a list of all major changes since the last release
- [ ] A draft release should be created. Copy the release notes text from the template at [`docs/templates/release-notes`](https://github.com/freedomofpress/dangerzone/tree/main/docs/templates/)
- [ ] Send the release notes to editorial for review
- [ ] Do the QA tasks
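Several of the items above bump the same version string in different files, so a missed file is an easy mistake. A sketch of a consistency check over the files named in the checklist (the helper itself is ours, not part of the release tooling):

```python
import pathlib
import re


def read_versions(repo_root: str) -> dict:
    """Collect the version string from (some of) the files the checklist bumps."""
    root = pathlib.Path(repo_root)
    pyproject = (root / "pyproject.toml").read_text()
    return {
        "pyproject.toml": re.search(
            r'^version\s*=\s*"([^"]+)"', pyproject, re.MULTILINE
        ).group(1),
        "share/version.txt": (root / "share" / "version.txt").read_text().strip(),
    }


def versions_consistent(repo_root: str) -> bool:
    """True if every tracked file agrees on a single version string."""
    return len(set(read_versions(repo_root).values())) == 1
```

The same idea extends to `install/linux/dangerzone.spec` and `debian/changelog`, whose formats need their own regexes.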
## Add new Linux platforms and remove obsolete ones

In case of a new version (beta, RC, or official release):

   `BUILD.md` files where necessary.
4. Send a PR with the above changes.

In case of the removal of a version:

1. Remove any mention of this version from our repo.
   * Consult the previous paragraph, but also `grep` your way around.
2. Add a notice in our `CHANGELOG.md` about the version removal.

## Bump the minimum Docker Desktop version

We embed the minimum Docker Desktop versions inside Dangerzone, as an incentive for our macOS and Windows users to upgrade to the latest version.

You can find the latest version at the time of the release by looking at [their release notes](https://docs.docker.com/desktop/release-notes/).

## Large Document Testing

Follow the instructions in `docs/developer/TESTING.md` to run the tests.

These tests will identify any regressions or progression in terms of document coverage.
## Release

Once we are confident that the release will be out shortly, and doesn't need any more changes:

- [ ] Create a PGP-signed git tag for the version, e.g., for dangerzone `v0.1.0`:

  ```bash
  git tag -s v0.1.0
  git push origin v0.1.0
  ```
### macOS Release

> [!TIP]
> You can automate these steps from your macOS terminal app with:
>
> ```
> export APPLE_ID=<email>
> make build-macos-intel # for Intel macOS
> make build-macos-arm # for Apple Silicon macOS
> ```

The following needs to happen for both Silicon and Intel chipsets.

#### Initial Setup

- Build machine must have:
#### Releasing and Signing
|
#### Releasing and Signing
|
||||||
|
|
||||||
|
Here is what you need to do:
|
||||||
|
|
||||||
- [ ] Verify and install the latest supported Python version from
|
- [ ] Verify and install the latest supported Python version from
|
||||||
[python.org](https://www.python.org/downloads/macos/) (do not use the one from
|
[python.org](https://www.python.org/downloads/macos/) (do not use the one from
|
||||||
brew as it is known to [cause issues](https://github.com/freedomofpress/dangerzone/issues/471))
|
brew as it is known to [cause issues](https://github.com/freedomofpress/dangerzone/issues/471))
|
||||||
* In case of a new Python installation or minor version upgrade, e.g., from
|
|
||||||
3.11 to 3.12 , reinstall Poetry with `python3 -m pip install poetry`
|
- [ ] Checkout the dependencies, and clean your local copy:
|
||||||
* You can verify the correct Python version is used with `poetry debug info`
|
|
||||||
- [ ] Verify and checkout the git tag for this release
|
```bash
|
||||||
- [ ] Run `poetry install --sync`
|
|
||||||
- [ ] On the silicon mac, build the container image:
|
# In case of a new Python installation or minor version upgrade, e.g., from
|
||||||
|
# 3.11 to 3.12, reinstall Poetry
|
||||||
|
python3 -m pip install poetry
|
||||||
|
|
||||||
|
# You can verify the correct Python version is used
|
||||||
|
poetry debug info
|
||||||
|
|
||||||
|
# Replace with the actual version
|
||||||
|
export DZ_VERSION=$(cat share/version.txt)
|
||||||
|
|
||||||
|
# Verify and checkout the git tag for this release:
|
||||||
|
git checkout -f v$VERSION
|
||||||
|
|
||||||
|
# Clean the git repository
|
||||||
|
git clean -df
|
||||||
|
|
||||||
|
# Clean up the environment
|
||||||
|
poetry env remove --all
|
||||||
|
|
||||||
|
# Install the dependencies
|
||||||
|
poetry sync
|
||||||
```
|
```
|
||||||
python3 ./install/common/build-image.py
|
|
||||||
|
- [ ] Build the container image and the OCR language data
|
||||||
|
|
||||||
|
```bash
|
||||||
|
poetry run ./install/common/build-image.py
|
||||||
|
poetry run ./install/common/download-tessdata.py
|
||||||
|
|
||||||
|
# Copy the container image to the assets folder
|
||||||
|
cp share/container.tar ~dz/release-assets/$VERSION/dangerzone-$VERSION-arm64.tar
|
||||||
|
cp share/image-id.txt ~dz/release-assets/$VERSION/.
|
||||||
```
|
```
|
||||||
Then copy the `share/container.tar.gz` to the assets folder on `dangerzone-$VERSION-arm64.tar.gz`, along with the `share/image-id.txt` file.
|
|
||||||
- [ ] Build the app bundle

  ```bash
  poetry run ./install/macos/build-app.py
  ```

- [ ] Make sure that the built application works with the containerd graph
  driver (see [#933](https://github.com/freedomofpress/dangerzone/issues/933))
- [ ] Sign the application bundle, and notarize it

  You need to run this command as the account that has access to the code signing certificate.

  This command assumes that you have created, and stored in the Keychain, an
  application password associated with your Apple Developer ID, which will be
  used specifically for `notarytool`.

  ```bash
  # Sign the .App and make it a .dmg
  poetry run ./install/macos/build-app.py --only-codesign

  # Notarize it. You must run this command from the MacOS UI,
  # from a terminal application.
  xcrun notarytool submit ./dist/Dangerzone.dmg --apple-id $APPLE_ID --keychain-profile "dz-notarytool-release-key" --wait && xcrun stapler staple dist/Dangerzone.dmg

  # Copy the .dmg to the assets folder
  ARCH=$(uname -m)
  if [ "$ARCH" = "x86_64" ]; then
      ARCH="i686"
  fi
  cp dist/Dangerzone.dmg ~dz/release-assets/$VERSION/Dangerzone-$VERSION-$ARCH.dmg
  ```
### Windows Release

The Windows release is performed in a Windows 11 virtual machine (as opposed to a physical one).

#### Initial Setup
#### Releasing and Signing

- [ ] Checkout the dependencies, and clean your local copy:

  ```bash
  # In case of a new Python installation or minor version upgrade, e.g., from
  # 3.11 to 3.12, reinstall Poetry
  python3 -m pip install poetry

  # You can verify the correct Python version is used
  poetry debug info

  # Replace with the actual version
  export DZ_VERSION=$(cat share/version.txt)

  # Verify and checkout the git tag for this release
  git checkout -f v$DZ_VERSION

  # Clean the git repository
  git clean -df

  # Clean up the environment
  poetry env remove --all

  # Install the dependencies
  poetry sync
  ```
- [ ] Copy the container image into the VM

  > [!IMPORTANT]
  > Instead of running `python .\install\windows\build-image.py` in the VM, run the build image script on the host (making sure to build for `linux/amd64`). Copy `share/container.tar` and `share/image-id.txt` from the host into the `share` folder in the VM.

- [ ] Run `poetry run .\install\windows\build-app.bat`
- [ ] When you're done you will have `dist\Dangerzone.msi`

Rename `Dangerzone.msi` to `Dangerzone-$VERSION.msi`.
### Linux release

> [!TIP]
> You can automate these steps from any Linux distribution with:
>
> ```
> make build-linux
> ```
>
> You can then add the created artifacts to the appropriate APT/YUM repo.

Below we explain how we build packages for each Linux distribution we support.

#### Debian/Ubuntu
or create your own locally with:

```sh
# Create and run debian bookworm development environment
./dev_scripts/env.py --distro debian --version bookworm build-dev
./dev_scripts/env.py --distro debian --version bookworm run --dev bash

# Build the latest container
./dev_scripts/env.py --distro debian --version bookworm run --dev bash -c "cd dangerzone && poetry run ./install/common/build-image.py"

# Create a .deb
./dev_scripts/env.py --distro debian --version bookworm run --dev bash -c "cd dangerzone && ./install/linux/build-deb.py"
```

Publish the .deb under `./deb_dist` to the
#### Fedora

or create your own locally with:

```sh
./dev_scripts/env.py --distro fedora --version 41 build-dev

# Build the latest container (skip if already built):
./dev_scripts/env.py --distro fedora --version 41 run --dev bash -c "cd dangerzone && poetry run ./install/common/build-image.py"

# Create a .rpm:
./dev_scripts/env.py --distro fedora --version 41 run --dev bash -c "cd dangerzone && ./install/linux/build-rpm.py"
```

Publish the .rpm under `./dist` to the
Create a .rpm for Qubes:

```sh
./dev_scripts/env.py --distro fedora --version 41 run --dev bash -c "cd dangerzone && ./install/linux/build-rpm.py --qubes"
```

and similarly publish it to the [`freedomofpress/yum-tools-prod`](https://github.com/freedomofpress/yum-tools-prod)
repo.
## Publishing the Release

To publish the release, you can follow these steps:

- [ ] Create an archive of the Dangerzone source in `tar.gz` format:

  ```bash
  export VERSION=$(cat share/version.txt)
  git archive --format=tar.gz -o dangerzone-${VERSION:?}.tar.gz --prefix=dangerzone/ v${VERSION:?}
  ```

- [ ] Run container scan on the produced container images (some time may have passed since the artifacts were built)

  ```bash
  docker pull anchore/grype:latest
  docker run --rm -v ./share/container.tar:/container.tar anchore/grype:latest /container.tar
  ```

- [ ] Collect the assets in a single directory, calculate their SHA-256 hashes, and sign them.

  There is an `./dev_scripts/sign-assets.py` script to automate this task.

  **Important:** Before running the script, make sure that it's the same container images as
  the ones that are shipped in other platforms (see our [Pre-release](#Pre-release) section)

  ```bash
  # Sign all the assets
  ./dev_scripts/sign-assets.py ~/release-assets/$VERSION/github --version $VERSION
  ```

- [ ] Upload all the assets to the draft release on GitHub.

  ```bash
  find ~/release-assets/$VERSION/github | xargs -n1 ./dev_scripts/upload-asset.py --token ~/token --draft
  ```

- [ ] Update the [Dangerzone website](https://github.com/freedomofpress/dangerzone.rocks) to link to the new installers.
- [ ] Update the brew cask release of Dangerzone with a [PR like this one](https://github.com/Homebrew/homebrew-cask/pull/116319)
- [ ] Update version and links to our installation instructions (`INSTALL.md`) in `README.md`

## Post-release
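The hashing half of the sign-assets step can be sketched in a few lines. This is illustrative only (the function name `sha256sums` is mine, not the project's); the actual `./dev_scripts/sign-assets.py` script additionally produces detached GPG signatures.

```python
# Compute SHA-256 checksums for every file in an assets directory.
# Illustrative sketch; not the actual sign-assets.py implementation.
import hashlib
from pathlib import Path
from typing import Dict


def sha256sums(assets_dir: str) -> Dict[str, str]:
    sums = {}
    for path in sorted(Path(assets_dir).iterdir()):
        if path.is_file():
            # read_bytes() loads the whole file; fine for release-sized assets
            sums[path.name] = hashlib.sha256(path.read_bytes()).hexdigest()
    return sums
```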
`THIRD_PARTY_NOTICE` (new file, 14 lines):

This project includes third-party components as follows:

1. gVisor APT Key
   - URL: https://gvisor.dev/archive.key
   - Last updated: 2025-01-21
   - Description: This is the public key used for verifying packages from the gVisor repository.

2. Reproducible Containers Helper Script
   - URL: https://github.com/reproducible-containers/repro-sources-list.sh/blob/d15cf12b26395b857b24fba223b108aff1c91b26/repro-sources-list.sh
   - Last updated: 2025-01-21
   - Description: This script is used for building reproducible Debian images.

Please refer to the respective sources for licensing information and further details regarding the use of these components.
@@ -4,6 +4,12 @@ import sys

```python
logger = logging.getLogger(__name__)

# Call freeze_support() to avoid passing unknown options to the subprocess.
# See https://github.com/freedomofpress/dangerzone/issues/873
import multiprocessing

multiprocessing.freeze_support()

try:
    from . import vendor  # type: ignore [attr-defined]
```
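The `freeze_support()` call above matters in frozen (e.g. PyInstaller-style) binaries: spawned children re-execute the same executable with bootstrap arguments, and `freeze_support()` intercepts that re-execution before the arguments can reach the CLI parser. A minimal standalone sketch (not Dangerzone's real entry point):

```python
# Minimal sketch of the freeze_support() pattern; hypothetical worker function.
import multiprocessing


def square(x: int) -> int:
    return x * x


if __name__ == "__main__":
    multiprocessing.freeze_support()  # no-op when running from source
    with multiprocessing.Pool(processes=2) as pool:
        results = pool.map(square, [1, 2, 3])
    print(results)  # [1, 4, 9]
```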
@@ -11,6 +11,7 @@ from .isolation_provider.container import Container

```python
from .isolation_provider.dummy import Dummy
from .isolation_provider.qubes import Qubes, is_qubes_native_conversion
from .logic import DangerzoneCore
from .settings import Settings
from .util import get_version, replace_control_chars
```

@@ -37,30 +38,62 @@ def print_header(s: str) -> None:

```python
)
@click.argument(
    "filenames",
    required=False,
    nargs=-1,
    type=click.UNPROCESSED,
    callback=args.validate_input_filenames,
)
@click.option(
    "--debug",
    "debug",
    flag_value=True,
    help="Run Dangerzone in debug mode, to get logs from gVisor.",
)
@click.option(
    "--set-container-runtime",
    required=False,
    help=(
        "The name or full path of the container runtime you want Dangerzone to use."
        " You can specify the value 'default' if you want to take back your choice, and"
        " let Dangerzone use the default runtime for this OS"
    ),
)
@click.version_option(version=get_version(), message="%(version)s")
@errors.handle_document_errors
def cli_main(
    output_filename: Optional[str],
    ocr_lang: Optional[str],
    filenames: Optional[List[str]],
    archive: bool,
    dummy_conversion: bool,
    debug: bool,
    set_container_runtime: Optional[str] = None,
) -> None:
    setup_logging()
    display_banner()
    if set_container_runtime:
        settings = Settings()
        if set_container_runtime == "default":
            settings.unset_custom_runtime()
            click.echo(
                "Instructed Dangerzone to use the default container runtime for this OS"
            )
        else:
            container_runtime = settings.set_custom_runtime(
                set_container_runtime, autosave=True
            )
            click.echo(f"Set the settings container_runtime to {container_runtime}")
        sys.exit(0)
    elif not filenames:
        raise click.UsageError("Missing argument 'FILENAMES...'")

    if getattr(sys, "dangerzone_dev", False) and dummy_conversion:
        dangerzone = DangerzoneCore(Dummy())
    elif is_qubes_native_conversion():
        dangerzone = DangerzoneCore(Qubes())
    else:
        dangerzone = DangerzoneCore(Container(debug=debug))

    if len(filenames) == 1 and output_filename:
        dangerzone.add_document_from_filename(filenames[0], output_filename, archive)
    elif len(filenames) > 1 and output_filename:
```

@@ -295,7 +328,7 @@ def display_banner() -> None:

```python
        + Back.BLACK
        + Fore.LIGHTWHITE_EX
        + Style.BRIGHT
        + f"{' ' * left_spaces}Dangerzone v{get_version()}{' ' * right_spaces}"
        + Fore.YELLOW
        + Style.DIM
        + "│"
```

@@ -313,4 +346,10 @@ def display_banner() -> None:

```python
        + Style.DIM
        + "│"
    )
    print(
        Back.BLACK
        + Fore.YELLOW
        + Style.DIM
        + "╰──────────────────────────╯"
        + Style.RESET_ALL
    )
```
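The runtime-selection rule behind `--set-container-runtime` can be summarized in a small standalone sketch (the authoritative logic lives in `dangerzone/container_utils.py`; the function name `resolve_runtime_name` here is mine): a custom value wins, the literal value `default` clears the choice, and the fallback is `podman` on Linux and `docker` elsewhere.

```python
# Hedged sketch of the runtime-selection rule, not the project's actual code.
import platform
from typing import Optional


def resolve_runtime_name(custom: Optional[str] = None) -> str:
    """Return the container runtime name that would be used."""
    if custom and custom != "default":
        return custom  # user-specified runtime wins
    return "podman" if platform.system() == "Linux" else "docker"
```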
@@ -59,10 +59,28 @@ oci_config: dict[str, typing.Any] = {

```python
    "root": {"path": "rootfs", "readonly": True},
    "hostname": "dangerzone",
    "mounts": [
        # Mask almost every system directory of the outer container, by mounting tmpfs
        # on top of them. This is done to avoid leaking any sensitive information,
        # either mounted by Podman/Docker, or when gVisor runs, since we reuse the same
        # rootfs. We basically mask everything except for `/usr`, `/bin`, `/lib`,
        # `/etc`, and `/opt`.
        #
        # Note that we set `--root /home/dangerzone/.containers` for the directory where
        # gVisor will create files at runtime, which means that in principle, we are
        # covered by the masking of `/home/dangerzone` that follows below.
        #
        # Finally, note that the following list has been taken from the dirs in our
        # container image, and double-checked against the top-level dirs listed in the
        # Filesystem Hierarchy Standard (FHS) [1]. It would be nice to have an allowlist
        # approach instead of a denylist, but FHS is such an old standard that we don't
        # expect any new top-level dirs to pop up any time soon.
        #
        # [1] https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard
        {
            "destination": "/boot",
            "type": "tmpfs",
            "source": "tmpfs",
            "options": ["nosuid", "noexec", "nodev", "ro"],
        },
        {
            "destination": "/dev",
```

@@ -70,6 +88,53 @@ oci_config: dict[str, typing.Any] = {

```python
            "source": "tmpfs",
            "options": ["nosuid", "noexec", "nodev"],
        },
        {
            "destination": "/home",
            "type": "tmpfs",
            "source": "tmpfs",
            "options": ["nosuid", "noexec", "nodev", "ro"],
        },
        {
            "destination": "/media",
            "type": "tmpfs",
            "source": "tmpfs",
            "options": ["nosuid", "noexec", "nodev", "ro"],
        },
        {
            "destination": "/mnt",
            "type": "tmpfs",
            "source": "tmpfs",
            "options": ["nosuid", "noexec", "nodev", "ro"],
        },
        {
            "destination": "/proc",
            "type": "proc",
            "source": "proc",
        },
        {
            "destination": "/root",
            "type": "tmpfs",
            "source": "tmpfs",
            "options": ["nosuid", "noexec", "nodev", "ro"],
        },
        {
            "destination": "/run",
            "type": "tmpfs",
            "source": "tmpfs",
            "options": ["nosuid", "noexec", "nodev"],
        },
        {
            "destination": "/sbin",
            "type": "tmpfs",
            "source": "tmpfs",
            "options": ["nosuid", "noexec", "nodev", "ro"],
        },
        {
            "destination": "/srv",
            "type": "tmpfs",
            "source": "tmpfs",
            "options": ["nosuid", "noexec", "nodev", "ro"],
        },
        {
            "destination": "/sys",
            "type": "tmpfs",
```

@@ -82,6 +147,12 @@ oci_config: dict[str, typing.Any] = {

```python
            "source": "tmpfs",
            "options": ["nosuid", "noexec", "nodev"],
        },
        {
            "destination": "/var",
            "type": "tmpfs",
            "source": "tmpfs",
            "options": ["nosuid", "noexec", "nodev"],
        },
        # LibreOffice needs a writable home directory, so just mount a tmpfs
        # over it.
        {
```

@@ -142,6 +213,9 @@ runsc_argv = [

```python
    "--rootless=true",
    "--network=none",
    "--root=/home/dangerzone/.containers",
    # Disable DirectFS to make the seccomp filter even stricter,
    # at some performance cost.
    "--directfs=false",
]
if os.environ.get("RUNSC_DEBUG"):
    runsc_argv += ["--debug=true", "--alsologtostderr=true"]
```
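The masking pattern above is mechanical: each denylisted top-level directory gets a tmpfs (read-only by default) mounted over it. A small generator sketch, using an illustrative subset of directories rather than the exact production denylist:

```python
# Illustrative sketch of generating OCI tmpfs mount entries for a denylist.
from typing import Any, Dict, List

# Subset of the FHS-derived denylist, for illustration only.
MASKED_DIRS = ["/boot", "/home", "/media", "/mnt", "/root", "/sbin", "/srv"]


def tmpfs_mount(dest: str, read_only: bool = True) -> Dict[str, Any]:
    """Build one OCI mount entry that masks `dest` with a tmpfs."""
    options = ["nosuid", "noexec", "nodev"] + (["ro"] if read_only else [])
    return {"destination": dest, "type": "tmpfs", "source": "tmpfs", "options": options}


mounts: List[Dict[str, Any]] = [tmpfs_mount(d) for d in MASKED_DIRS]
# Directories that must stay writable (e.g. /run, /var) drop the "ro" option.
mounts += [tmpfs_mount(d, read_only=False) for d in ("/run", "/var")]
```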
`dangerzone/container_helpers/gvisor.key` (new file, 29 lines):

```
-----BEGIN PGP PUBLIC KEY BLOCK-----

mQINBF0meAYBEACcBYPOSBiKtid+qTQlbgKGPxUYt0cNZiQqWXylhYUT4PuNlNx5
s+sBLFvNTpdTrXMmZ8NkekyjD1HardWvebvJT4u+Ho/9jUr4rP71cNwNtocz/w8G
DsUXSLgH8SDkq6xw0L+5eGc78BBg9cOeBeFBm3UPgxTBXS9Zevoi2w1lzSxkXvjx
cGzltzMZfPXERljgLzp9AAfhg/2ouqVQm37fY+P/NDzFMJ1XHPIIp9KJl/prBVud
jJJteFZ5sgL6MwjBQq2kw+q2Jb8Zfjl0BeXDgGMN5M5lGhX2wTfiMbfo7KWyzRnB
RpSP3BxlLqYeQUuLG5Yx8z3oA3uBkuKaFOKvXtiScxmGM/+Ri2YM3m66imwDhtmP
AKwTPI3Re4gWWOffglMVSv2sUAY32XZ74yXjY1VhK3bN3WFUPGrgQx4X7GP0A1Te
lzqkT3VSMXieImTASosK5L5Q8rryvgCeI9tQLn9EpYFCtU3LXvVgTreGNEEjMOnL
dR7yOU+Fs775stn6ucqmdYarx7CvKUrNAhgEeHMonLe1cjYScF7NfLO1GIrQKJR2
DE0f+uJZ52inOkO8ufh3WVQJSYszuS3HCY7w5oj1aP38k/y9zZdZvVvwAWZaiqBQ
iwjVs6Kub76VVZZhRDf4iYs8k1Zh64nXdfQt250d8U5yMPF3wIJ+c1yhxwARAQAB
tCpUaGUgZ1Zpc29yIEF1dGhvcnMgPGd2aXNvci1ib3RAZ29vZ2xlLmNvbT6JAk4E
EwEKADgCGwMFCwkIBwIGFQoJCAsCBBYCAwECHgECF4AWIQRvHfheOnHCSRjnJ9Vv
xtVU4yvZQwUCYO4TxQAKCRBvxtVU4yvZQ9UoEACLPV7CnEA2bjCPi0NCWB/Mo1WL
evqv7Wv7vmXzI1K9DrqOhxuamQW75SVXg1df0hTJWbKFmDAip6NEC2Rg5P+A8hHj
nW/VG+q4ZFT662jDhnXQiO9L7EZzjyqNF4yWYzzgnqEu/SmGkDLDYiUCcGBqS2oE
EQfk7RHJSLMJXAnNDH7OUDgrirSssg/dlQ5uAHA9Au80VvC5fsTKza8b3Aydw3SV
iB8/Yuikbl8wKbpSGiXtR4viElXjNips0+mBqaUk2xpqSBrsfN+FezcInVXaXFeq
xtpq2/3M3DYbqCRjqeyd9wNi92FHdOusNrK4MYe0pAYbGjc65BwH+F0T4oJ8ZSJV
lIt+FZ0MqM1T97XadybYFsJh8qvajQpZEPL+zzNncc4f1d80e7+lwIZV/al0FZWW
Zlp7TpbeO/uW+lHs5W14YKwaQVh1whapKXTrATipNOOSCw2hnfrT8V7Hy55QWaGZ
f4/kfy929EeCP16d/LqOClv0j0RBr6NhRBQ0l/BE/mXjJwIk6nKwi+Yi4ek1ARi6
AlCMLn9AZF7aTGpvCiftzIrlyDfVZT5IX03TayxRHZ4b1Rj8eyJaHcjI49u83gkr
4LGX08lEawn9nxFSx4RCg2swGiYw5F436wwwAIozqJuDASeTa3QND3au5v0oYWnl
umDySUl5wPaAaALgzA==
=5/8T
-----END PGP PUBLIC KEY BLOCK-----
```
`dangerzone/container_helpers/repro-sources-list.sh` (new executable file, 103 lines):

```bash
#!/bin/bash
#
# Copyright The repro-sources-list.sh Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# -----------------------------------------------------------------------------
# repro-sources-list.sh:
# configures /etc/apt/sources.list and similar files for installing packages from a snapshot.
#
# This script is expected to be executed inside Dockerfile.
#
# The following distributions are supported:
# - debian:11 (/etc/apt/sources.list)
# - debian:12 (/etc/apt/sources.list.d/debian.sources)
# - ubuntu:22.04 (/etc/apt/sources.list)
# - ubuntu:24.04 (/etc/apt/sources.listd/ubuntu.sources)
# - archlinux (/etc/pacman.d/mirrorlist)
#
# For the further information, see https://github.com/reproducible-containers/repro-sources-list.sh
# -----------------------------------------------------------------------------

set -eux -o pipefail

. /etc/os-release

: "${KEEP_CACHE:=1}"

keep_apt_cache() {
  rm -f /etc/apt/apt.conf.d/docker-clean
  echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' >/etc/apt/apt.conf.d/keep-cache
}

case "${ID}" in
"debian")
  : "${SNAPSHOT_ARCHIVE_BASE:=http://snapshot.debian.org/archive/}"
  : "${BACKPORTS:=}"
  if [ -e /etc/apt/sources.list.d/debian.sources ]; then
    : "${SOURCE_DATE_EPOCH:=$(stat --format=%Y /etc/apt/sources.list.d/debian.sources)}"
    rm -f /etc/apt/sources.list.d/debian.sources
  else
    : "${SOURCE_DATE_EPOCH:=$(stat --format=%Y /etc/apt/sources.list)}"
  fi
  snapshot="$(printf "%(%Y%m%dT%H%M%SZ)T\n" "${SOURCE_DATE_EPOCH}")"
  # TODO: use the new format for Debian >= 12
  echo "deb [check-valid-until=no] ${SNAPSHOT_ARCHIVE_BASE}debian/${snapshot} ${VERSION_CODENAME} main" >/etc/apt/sources.list
  echo "deb [check-valid-until=no] ${SNAPSHOT_ARCHIVE_BASE}debian-security/${snapshot} ${VERSION_CODENAME}-security main" >>/etc/apt/sources.list
  echo "deb [check-valid-until=no] ${SNAPSHOT_ARCHIVE_BASE}debian/${snapshot} ${VERSION_CODENAME}-updates main" >>/etc/apt/sources.list
  if [ "${BACKPORTS}" = 1 ]; then echo "deb [check-valid-until=no] ${SNAPSHOT_ARCHIVE_BASE}debian/${snapshot} ${VERSION_CODENAME}-backports main" >>/etc/apt/sources.list; fi
  if [ "${KEEP_CACHE}" = 1 ]; then keep_apt_cache; fi
  ;;
"ubuntu")
  : "${SNAPSHOT_ARCHIVE_BASE:=http://snapshot.ubuntu.com/}"
  if [ -e /etc/apt/sources.list.d/ubuntu.sources ]; then
    : "${SOURCE_DATE_EPOCH:=$(stat --format=%Y /etc/apt/sources.list.d/ubuntu.sources)}"
    rm -f /etc/apt/sources.list.d/ubuntu.sources
  else
    : "${SOURCE_DATE_EPOCH:=$(stat --format=%Y /etc/apt/sources.list)}"
  fi
  snapshot="$(printf "%(%Y%m%dT%H%M%SZ)T\n" "${SOURCE_DATE_EPOCH}")"
  # TODO: use the new format for Ubuntu >= 24.04
  echo "deb [check-valid-until=no] ${SNAPSHOT_ARCHIVE_BASE}ubuntu/${snapshot} ${VERSION_CODENAME} main restricted" >/etc/apt/sources.list
  echo "deb [check-valid-until=no] ${SNAPSHOT_ARCHIVE_BASE}ubuntu/${snapshot} ${VERSION_CODENAME}-updates main restricted" >>/etc/apt/sources.list
  echo "deb [check-valid-until=no] ${SNAPSHOT_ARCHIVE_BASE}ubuntu/${snapshot} ${VERSION_CODENAME} universe" >>/etc/apt/sources.list
  echo "deb [check-valid-until=no] ${SNAPSHOT_ARCHIVE_BASE}ubuntu/${snapshot} ${VERSION_CODENAME}-updates universe" >>/etc/apt/sources.list
  echo "deb [check-valid-until=no] ${SNAPSHOT_ARCHIVE_BASE}ubuntu/${snapshot} ${VERSION_CODENAME} multiverse" >>/etc/apt/sources.list
  echo "deb [check-valid-until=no] ${SNAPSHOT_ARCHIVE_BASE}ubuntu/${snapshot} ${VERSION_CODENAME}-updates multiverse" >>/etc/apt/sources.list
  echo "deb [check-valid-until=no] ${SNAPSHOT_ARCHIVE_BASE}ubuntu/${snapshot} ${VERSION_CODENAME}-backports main restricted universe multiverse" >>/etc/apt/sources.list
  echo "deb [check-valid-until=no] ${SNAPSHOT_ARCHIVE_BASE}ubuntu/${snapshot} ${VERSION_CODENAME}-security main restricted" >>/etc/apt/sources.list
  echo "deb [check-valid-until=no] ${SNAPSHOT_ARCHIVE_BASE}ubuntu/${snapshot} ${VERSION_CODENAME}-security universe" >>/etc/apt/sources.list
  echo "deb [check-valid-until=no] ${SNAPSHOT_ARCHIVE_BASE}ubuntu/${snapshot} ${VERSION_CODENAME}-security multiverse" >>/etc/apt/sources.list
  if [ "${KEEP_CACHE}" = 1 ]; then keep_apt_cache; fi
  # http://snapshot.ubuntu.com is redirected to https, so we have to install ca-certificates
  export DEBIAN_FRONTEND=noninteractive
  apt-get -o Acquire::https::Verify-Peer=false update >&2
  apt-get -o Acquire::https::Verify-Peer=false install -y ca-certificates >&2
  ;;
"arch")
  : "${SNAPSHOT_ARCHIVE_BASE:=http://archive.archlinux.org/}"
  : "${SOURCE_DATE_EPOCH:=$(stat --format=%Y /var/log/pacman.log)}"
  export SOURCE_DATE_EPOCH
  # shellcheck disable=SC2016
  date -d "@${SOURCE_DATE_EPOCH}" "+Server = ${SNAPSHOT_ARCHIVE_BASE}repos/%Y/%m/%d/\$repo/os/\$arch" >/etc/pacman.d/mirrorlist
  ;;
*)
  echo >&2 "Unsupported distribution: ${ID}"
  exit 1
  ;;
esac

: "${WRITE_SOURCE_DATE_EPOCH:=/dev/null}"
echo "${SOURCE_DATE_EPOCH}" >"${WRITE_SOURCE_DATE_EPOCH}"
echo "SOURCE_DATE_EPOCH=${SOURCE_DATE_EPOCH}"
```
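The snapshot selection above hinges on formatting `SOURCE_DATE_EPOCH` as a `%Y%m%dT%H%M%SZ` timestamp for snapshot.debian.org / snapshot.ubuntu.com URLs. The same conversion, as a Python sketch (the function name `snapshot_id` is mine):

```python
# Convert a SOURCE_DATE_EPOCH into the snapshot-archive timestamp format.
import datetime


def snapshot_id(source_date_epoch: int) -> str:
    dt = datetime.datetime.fromtimestamp(source_date_epoch, tz=datetime.timezone.utc)
    return dt.strftime("%Y%m%dT%H%M%SZ")


print(snapshot_id(0))  # 19700101T000000Z
```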
201
dangerzone/container_utils.py
Normal file
201
dangerzone/container_utils.py
Normal file
|
@ -0,0 +1,201 @@
|
||||||
|
import logging
import os
import platform
import shutil
import subprocess
from pathlib import Path
from typing import List, Optional, Tuple

from . import errors
from .settings import Settings
from .util import get_resource_path, get_subprocess_startupinfo

CONTAINER_NAME = "dangerzone.rocks/dangerzone"

log = logging.getLogger(__name__)


class Runtime(object):
    """Represents the container runtime to use.

    - It can be specified via the settings, using the "container_runtime" key,
      which should point to the full path of the runtime;
    - If the runtime is not specified via the settings, it defaults to "podman"
      on Linux and "docker" on macOS and Windows.
    """

    def __init__(self) -> None:
        settings = Settings()

        if settings.custom_runtime_specified():
            self.path = Path(settings.get("container_runtime"))
            if not self.path.exists():
                raise errors.UnsupportedContainerRuntime(self.path)
            self.name = self.path.stem
        else:
            self.name = self.get_default_runtime_name()
            self.path = Runtime.path_from_name(self.name)

        if self.name not in ("podman", "docker"):
            raise errors.UnsupportedContainerRuntime(self.name)

    @staticmethod
    def path_from_name(name: str) -> Path:
        name_path = Path(name)
        if name_path.is_file():
            return name_path
        else:
            runtime = shutil.which(name_path)
            if runtime is None:
                raise errors.NoContainerTechException(name)
            return Path(runtime)

    @staticmethod
    def get_default_runtime_name() -> str:
        return "podman" if platform.system() == "Linux" else "docker"


def get_runtime_version(runtime: Optional[Runtime] = None) -> Tuple[int, int]:
    """Get the major/minor parts of the Docker/Podman version.

    Some of the operations we perform in this module rely on some Podman features
    that are not available across all of our platforms. In order to have a proper
    fallback, we need to know the Podman version. More specifically, we're fine with
    just knowing the major and minor version, since writing/installing a full-blown
    semver parser is overkill.
    """
    runtime = runtime or Runtime()

    # Get the Docker/Podman version, using a Go template.
    if runtime.name == "podman":
        query = "{{.Client.Version}}"
    else:
        query = "{{.Server.Version}}"

    cmd = [str(runtime.path), "version", "-f", query]
    try:
        version = subprocess.run(
            cmd,
            startupinfo=get_subprocess_startupinfo(),
            capture_output=True,
            check=True,
        ).stdout.decode()
    except Exception as e:
        msg = f"Could not get the version of the {runtime.name.capitalize()} tool: {e}"
        raise RuntimeError(msg) from e

    # Parse this version and return the major/minor parts, since we don't need the
    # rest.
    try:
        major, minor, _ = version.split(".", 3)
        return (int(major), int(minor))
    except Exception as e:
        msg = (
            f"Could not parse the version of the {runtime.name.capitalize()} tool"
            f" (found: '{version}') due to the following error: {e}"
        )
        raise RuntimeError(msg)


def list_image_tags() -> List[str]:
    """Get the tags of all loaded Dangerzone images.

    This method returns the tags of all local Dangerzone images. It can be useful
    when we want to find which are the local image tags, and which image ID the
    "latest" tag points to.
    """
    runtime = Runtime()
    return (
        subprocess.check_output(
            [
                str(runtime.path),
                "image",
                "list",
                "--format",
                "{{ .Tag }}",
                CONTAINER_NAME,
            ],
            text=True,
            startupinfo=get_subprocess_startupinfo(),
        )
        .strip()
        .split()
    )


def add_image_tag(image_id: str, new_tag: str) -> None:
    """Add a tag to the Dangerzone image."""
    runtime = Runtime()
    log.debug(f"Adding tag '{new_tag}' to image '{image_id}'")
    subprocess.check_output(
        [str(runtime.path), "tag", image_id, new_tag],
        startupinfo=get_subprocess_startupinfo(),
    )


def delete_image_tag(tag: str) -> None:
    """Delete a Dangerzone image tag."""
    runtime = Runtime()
    log.warning(f"Deleting old container image: {tag}")
    try:
        subprocess.check_output(
            [str(runtime.name), "rmi", "--force", tag],
            startupinfo=get_subprocess_startupinfo(),
        )
    except Exception as e:
        log.warning(
            f"Couldn't delete old container image '{tag}', so leaving it there."
            f" Original error: {e}"
        )


def get_expected_tag() -> str:
    """Get the tag of the Dangerzone image tarball from the image-id.txt file."""
    with get_resource_path("image-id.txt").open() as f:
        return f.read().strip()


def load_image_tarball() -> None:
    runtime = Runtime()
    log.info("Installing Dangerzone container image...")
    tarball_path = get_resource_path("container.tar")
    try:
        res = subprocess.run(
            [str(runtime.path), "load", "-i", str(tarball_path)],
            startupinfo=get_subprocess_startupinfo(),
            capture_output=True,
            check=True,
        )
    except subprocess.CalledProcessError as e:
        if e.stderr:
            error = e.stderr.decode()
        else:
            error = "No output"
        raise errors.ImageInstallationException(
            f"Could not install container image: {error}"
        )

    # Loading an image built with Buildkit in Podman 3.4 messes up its name. The tag
    # somehow becomes the name of the loaded image [1].
    #
    # We know that older Podman versions are not generally affected, since Podman v3.0.1
    # on Debian Bullseye works properly. Also, Podman v4.0 is not affected, so it makes
    # sense to target only Podman v3.4 for a fix.
    #
    # The fix is simple: tag the image properly based on the expected tag from
    # `share/image-id.txt` and delete the incorrect tag.
    #
    # [1] https://github.com/containers/podman/issues/16490
    if runtime.name == "podman" and get_runtime_version(runtime) == (3, 4):
        expected_tag = get_expected_tag()
        bad_tag = f"localhost/{expected_tag}:latest"
        good_tag = f"{CONTAINER_NAME}:{expected_tag}"

        log.debug(
            "Dangerzone images loaded in Podman v3.4 usually have an invalid tag."
            " Fixing it..."
        )
        add_image_tag(bad_tag, good_tag)
        delete_image_tag(bad_tag)

    log.info("Successfully installed container image")
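`get_runtime_version()` above shells out to `podman`/`docker version -f '{{...}}'` and keeps only the major/minor parts. The parsing step can be sketched standalone; this version is slightly more permissive than the `version.split(".", 3)` unpacking above, which assumes exactly three dot-separated fields:

```python
from typing import Tuple


def parse_major_minor(version: str) -> Tuple[int, int]:
    """Parse 'X.Y[.Z...]' output from `docker/podman version` into (X, Y)."""
    parts = version.strip().split(".")
    if len(parts) < 2:
        raise ValueError(f"Unexpected version string: {version!r}")
    return (int(parts[0]), int(parts[1]))
```

This is enough for the feature gating the module needs (e.g. the Podman v3.4 check), without a full semver parser.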
@@ -129,6 +129,10 @@ class DocumentToPixels(DangerzoneConverter):
             # At least .odt, .docx, .odg, .odp, .ods, and .pptx
             "application/zip": {
                 "type": "libreoffice",
+                # NOTE: `file` command < 5.45 cannot detect hwpx files properly, so we
+                # enable the extension in any case. See also:
+                # https://github.com/freedomofpress/dangerzone/pull/460#issuecomment-1654166465
+                "libreoffice_ext": "h2orestart.oxt",
             },
             # At least .doc, .docx, .odg, .odp, .odt, .pdf, .ppt, .pptx, .xls, and .xlsx
             "application/octet-stream": {
@@ -249,7 +253,7 @@ class DocumentToPixels(DangerzoneConverter):
                 "unzip",
                 "-d",
                 f"/usr/lib/libreoffice/share/extensions/{libreoffice_ext}/",
-                f"/libreoffice_ext/{libreoffice_ext}",
+                f"/opt/libreoffice_ext/{libreoffice_ext}",
             ]
             await self.run_command(
                 unzip_args,
@@ -117,3 +117,30 @@ def handle_document_errors(func: F) -> F:
             sys.exit(1)

     return cast(F, wrapper)
+
+
+#### Container-related errors
+
+
+class ImageNotPresentException(Exception):
+    pass
+
+
+class ImageInstallationException(Exception):
+    pass
+
+
+class NoContainerTechException(Exception):
+    def __init__(self, container_tech: str) -> None:
+        super().__init__(f"{container_tech} is not installed")
+
+
+class NotAvailableContainerTechException(Exception):
+    def __init__(self, container_tech: str, error: str) -> None:
+        self.error = error
+        self.container_tech = container_tech
+        super().__init__(f"{container_tech} is not available")
+
+
+class UnsupportedContainerRuntime(Exception):
+    pass
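The new container errors let callers distinguish "runtime missing" from "runtime present but not working". A sketch of how a caller might map them to coarse UI states, mirroring the `WaitingWidgetContainer` logic elsewhere in this changeset (the exceptions are restated here for self-containment; the `check` callable is this sketch's assumption):

```python
from typing import Callable


class NoContainerTechException(Exception):
    def __init__(self, container_tech: str) -> None:
        super().__init__(f"{container_tech} is not installed")


class NotAvailableContainerTechException(Exception):
    def __init__(self, container_tech: str, error: str) -> None:
        self.error = error
        self.container_tech = container_tech
        super().__init__(f"{container_tech} is not available")


def probe_state(check: Callable[[], None]) -> str:
    """Map container-tech exceptions to a coarse UI state."""
    try:
        check()
    except NoContainerTechException:
        return "not_installed"
    except NotAvailableContainerTechException:
        return "not_running"
    return "ok"
```

Keeping the original stderr in `NotAvailableContainerTechException.error` lets the UI show the underlying failure alongside the generic message.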
@@ -51,7 +51,7 @@ class Application(QtWidgets.QApplication):
     def __init__(self, *args: typing.Any, **kwargs: typing.Any) -> None:
         super(Application, self).__init__(*args, **kwargs)
         self.setQuitOnLastWindowClosed(False)
-        with open(get_resource_path("dangerzone.css"), "r") as f:
+        with get_resource_path("dangerzone.css").open("r") as f:
             style = f.read()
         self.setStyleSheet(style)
@@ -63,7 +63,7 @@ class DangerzoneGui(DangerzoneCore):
             path = get_resource_path("dangerzone.ico")
         else:
             path = get_resource_path("icon.png")
-        return QtGui.QIcon(path)
+        return QtGui.QIcon(str(path))

     def open_pdf_viewer(self, filename: str) -> None:
         if platform.system() == "Darwin":
@@ -252,7 +252,7 @@ class Alert(Dialog):
     def create_layout(self) -> QtWidgets.QBoxLayout:
         logo = QtWidgets.QLabel()
         logo.setPixmap(
-            QtGui.QPixmap.fromImage(QtGui.QImage(get_resource_path("icon.png")))
+            QtGui.QPixmap.fromImage(QtGui.QImage(str(get_resource_path("icon.png"))))
         )

         label = QtWidgets.QLabel()
@@ -25,13 +25,7 @@ else:

 from .. import errors
 from ..document import SAFE_EXTENSION, Document
-from ..isolation_provider.container import (
-    Container,
-    NoContainerTechException,
-    NotAvailableContainerTechException,
-)
-from ..isolation_provider.dummy import Dummy
-from ..isolation_provider.qubes import Qubes, is_qubes_native_conversion
+from ..isolation_provider.qubes import is_qubes_native_conversion
 from ..util import format_exception, get_resource_path, get_version
 from .logic import Alert, CollapsibleBox, DangerzoneGui, UpdateDialog
 from .updater import UpdateReport
@@ -61,20 +55,13 @@ about updates.</p>
 HAMBURGER_MENU_SIZE = 30


-WARNING_MESSAGE = """\
-<p><b>Warning:</b> Ubuntu Focal systems and their derivatives will
-stop being supported in subsequent Dangerzone releases. We encourage you to upgrade to a
-more recent version of your operating system in order to get security updates.</p>
-"""
-
-
 def load_svg_image(filename: str, width: int, height: int) -> QtGui.QPixmap:
     """Load an SVG image from a filename.

     This answer is basically taken from: https://stackoverflow.com/a/25689790
     """
     path = get_resource_path(filename)
-    svg_renderer = QtSvg.QSvgRenderer(path)
+    svg_renderer = QtSvg.QSvgRenderer(str(path))
     image = QtGui.QImage(width, height, QtGui.QImage.Format_ARGB32)
     # Set the ARGB to 0 to prevent rendering artifacts
     image.fill(0x00000000)
@@ -130,6 +117,7 @@ class MainWindow(QtWidgets.QMainWindow):

         self.setWindowTitle("Dangerzone")
         self.setWindowIcon(self.dangerzone.get_window_icon())
+        self.alert: Optional[Alert] = None

         self.setMinimumWidth(600)
         if platform.system() == "Darwin":
@@ -141,9 +129,8 @@ class MainWindow(QtWidgets.QMainWindow):

         # Header
         logo = QtWidgets.QLabel()
-        logo.setPixmap(
-            QtGui.QPixmap.fromImage(QtGui.QImage(get_resource_path("icon.png")))
-        )
+        icon_path = str(get_resource_path("icon.png"))
+        logo.setPixmap(QtGui.QPixmap.fromImage(QtGui.QImage(icon_path)))
         header_label = QtWidgets.QLabel("Dangerzone")
         header_label.setFont(self.dangerzone.fixed_font)
         header_label.setStyleSheet("QLabel { font-weight: bold; font-size: 50px; }")
@@ -197,21 +184,18 @@ class MainWindow(QtWidgets.QMainWindow):
         header_layout.addWidget(self.hamburger_button)
         header_layout.addSpacing(15)

-        if isinstance(self.dangerzone.isolation_provider, Container):
+        # Content widget, contains all the window content except waiting widget
+        self.content_widget = ContentWidget(self.dangerzone)
+
+        if self.dangerzone.isolation_provider.should_wait_install():
             # Waiting widget replaces content widget while container runtime isn't available
             self.waiting_widget: WaitingWidget = WaitingWidgetContainer(self.dangerzone)
             self.waiting_widget.finished.connect(self.waiting_finished)
-        elif isinstance(self.dangerzone.isolation_provider, Dummy) or isinstance(
-            self.dangerzone.isolation_provider, Qubes
-        ):
+        else:
             # Don't wait with dummy converter and on Qubes.
             self.waiting_widget = WaitingWidget()
             self.dangerzone.is_waiting_finished = True

-        # Content widget, contains all the window content except waiting widget
-        self.content_widget = ContentWidget(self.dangerzone)
-
         # Only use the waiting widget if container runtime isn't available
         if self.dangerzone.is_waiting_finished:
             self.waiting_widget.hide()
@@ -235,6 +219,18 @@ class MainWindow(QtWidgets.QMainWindow):
         # This allows us to make QSS rules conditional on the OS color mode.
         self.setProperty("OSColorMode", self.dangerzone.app.os_color_mode.value)

+        if hasattr(self.dangerzone.isolation_provider, "check_docker_desktop_version"):
+            try:
+                is_version_valid, version = (
+                    self.dangerzone.isolation_provider.check_docker_desktop_version()
+                )
+                if not is_version_valid:
+                    self.handle_docker_desktop_version_check(is_version_valid, version)
+            except errors.UnsupportedContainerRuntime as e:
+                pass  # It's caught later in the flow.
+            except errors.NoContainerTechException as e:
+                pass  # It's caught later in the flow.
+
         self.show()

     def show_update_success(self) -> None:
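The `hasattr(...)` guard above is plain duck typing: only the Docker-based isolation provider gains a `check_docker_desktop_version()` method, so every other provider skips the check without any `isinstance` tests against concrete classes. A minimal sketch of the pattern (the class names and fixed return value here are illustrative, not Dangerzone's actual providers):

```python
class QubesProvider:
    """No Docker Desktop involved, so nothing to check."""


class DockerProvider:
    def check_docker_desktop_version(self):
        # A real implementation would query Docker Desktop; fixed values
        # are used here purely for the sketch.
        return (False, "4.11.0")


def maybe_check(provider):
    """Run the version check only on providers that support it."""
    if hasattr(provider, "check_docker_desktop_version"):
        return provider.check_docker_desktop_version()
    return None
```

This keeps the GUI decoupled from the provider hierarchy, which is also why the changeset could drop the `Container`/`Dummy`/`Qubes` imports from `main_window.py`.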
@@ -288,6 +284,46 @@ class MainWindow(QtWidgets.QMainWindow):
         self.dangerzone.settings.set("updater_check", check)
         self.dangerzone.settings.save()

+    def handle_docker_desktop_version_check(
+        self, is_version_valid: bool, version: str
+    ) -> None:
+        hamburger_menu = self.hamburger_button.menu()
+        sep = hamburger_menu.insertSeparator(hamburger_menu.actions()[0])
+        upgrade_action = QAction("Docker Desktop should be upgraded", hamburger_menu)
+        upgrade_action.setIcon(
+            QtGui.QIcon(
+                load_svg_image(
+                    "hamburger_menu_update_dot_error.svg", width=64, height=64
+                )
+            )
+        )
+
+        message = """
+        <p>A new version of Docker Desktop is available. Please upgrade your system.</p>
+        <p>Visit the <a href="https://www.docker.com/products/docker-desktop">Docker Desktop website</a> to download the latest version.</p>
+        <em>Keeping Docker Desktop up to date allows you to have more confidence that your documents are processed safely.</em>
+        """
+        self.alert = Alert(
+            self.dangerzone,
+            title="Upgrade Docker Desktop",
+            message=message,
+            ok_text="Ok",
+            has_cancel=False,
+        )
+
+        def _launch_alert() -> None:
+            if self.alert:
+                self.alert.launch()
+
+        upgrade_action.triggered.connect(_launch_alert)
+        hamburger_menu.insertAction(sep, upgrade_action)
+
+        self.hamburger_button.setIcon(
+            QtGui.QIcon(
+                load_svg_image("hamburger_menu_update_error.svg", width=64, height=64)
+            )
+        )
+
     def handle_updates(self, report: UpdateReport) -> None:
         """Handle update reports from the update checker thread.
@@ -374,7 +410,7 @@ class MainWindow(QtWidgets.QMainWindow):
         self.content_widget.show()

     def closeEvent(self, e: QtGui.QCloseEvent) -> None:
-        alert_widget = Alert(
+        self.alert = Alert(
             self.dangerzone,
             message="Some documents are still being converted.\n Are you sure you want to quit?",
             ok_text="Abort conversions",
@@ -388,7 +424,7 @@ class MainWindow(QtWidgets.QMainWindow):
             else:
                 self.dangerzone.app.exit(0)
         else:
-            accept_exit = alert_widget.launch()
+            accept_exit = self.alert.launch()
             if not accept_exit:
                 e.ignore()
                 return
@@ -500,11 +536,11 @@ class WaitingWidgetContainer(WaitingWidget):
         error: Optional[str] = None

         try:
-            self.dangerzone.isolation_provider.is_runtime_available()
-        except NoContainerTechException as e:
+            self.dangerzone.isolation_provider.is_available()
+        except errors.NoContainerTechException as e:
             log.error(str(e))
             state = "not_installed"
-        except NotAvailableContainerTechException as e:
+        except errors.NotAvailableContainerTechException as e:
             log.error(str(e))
             state = "not_running"
             error = e.error
@@ -542,8 +578,15 @@ class WaitingWidgetContainer(WaitingWidget):
         self.finished.emit()

     def state_change(self, state: str, error: Optional[str] = None) -> None:
+        custom_runtime = self.dangerzone.settings.custom_runtime_specified()
+
         if state == "not_installed":
-            if platform.system() == "Linux":
+            if custom_runtime:
+                self.show_error(
+                    "<strong>We could not find the container runtime defined in your settings</strong><br><br>"
+                    "Please check your settings, install it if needed, and retry."
+                )
+            elif platform.system() == "Linux":
                 self.show_error(
                     "<strong>Dangerzone requires Podman</strong><br><br>"
                     "Install it and retry."
@@ -556,19 +599,25 @@ class WaitingWidgetContainer(WaitingWidget):
                 )

         elif state == "not_running":
-            if platform.system() == "Linux":
+            if custom_runtime:
+                self.show_error(
+                    "<strong>We were unable to start the container runtime defined in your settings</strong><br><br>"
+                    "Please check your settings, install it if needed, and retry."
+                )
+            elif platform.system() == "Linux":
                 # "not_running" here means that the `podman image ls` command failed.
-                message = (
+                self.show_error(
                     "<strong>Dangerzone requires Podman</strong><br><br>"
-                    "Podman is installed but cannot run properly. See errors below"
+                    "Podman is installed but cannot run properly. See errors below",
+                    error,
                 )
             else:
-                message = (
+                self.show_error(
                     "<strong>Dangerzone requires Docker Desktop</strong><br><br>"
                     "Docker is installed but isn't running.<br><br>"
-                    "Open Docker and make sure it's running in the background."
+                    "Open Docker and make sure it's running in the background.",
+                    error,
                 )
-            self.show_error(message, error)
         else:
             self.show_message(
                 "Installing the Dangerzone container image.<br><br>"
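The rewritten `state_change` adds a `custom_runtime` branch ahead of the per-platform ones, so a misconfigured custom runtime is reported before Podman/Docker guidance. The selection order can be sketched as a pure function (the return strings are abbreviated stand-ins for the HTML messages above, purely illustrative):

```python
def pick_error(state: str, custom_runtime: bool, system: str) -> str:
    """Choose which error family to show, mirroring the branch order above."""
    if state == "not_installed":
        if custom_runtime:
            return "custom runtime not found"
        return "podman required" if system == "Linux" else "docker desktop required"
    if state == "not_running":
        if custom_runtime:
            return "custom runtime failed to start"
        return "podman cannot run" if system == "Linux" else "docker not running"
    # Any other state means installation of the container image is in progress.
    return "installing image"
```

Factoring the decision this way makes the precedence (custom runtime first, then platform) easy to unit-test without a GUI.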
@@ -587,17 +636,6 @@ class ContentWidget(QtWidgets.QWidget):
         self.dangerzone = dangerzone
         self.conversion_started = False

-        self.warning_label = None
-        if platform.system() == "Linux":
-            # Add the warning message only for ubuntu focal
-            os_release_path = Path("/etc/os-release")
-            if os_release_path.exists():
-                os_release = os_release_path.read_text()
-                if "Ubuntu 20.04" in os_release or "focal" in os_release:
-                    self.warning_label = QtWidgets.QLabel(WARNING_MESSAGE)
-                    self.warning_label.setWordWrap(True)
-                    self.warning_label.setProperty("style", "warning")
-
         # Doc selection widget
         self.doc_selection_widget = DocSelectionWidget(self.dangerzone)
         self.doc_selection_widget.documents_selected.connect(self.documents_selected)
@@ -623,8 +661,6 @@ class ContentWidget(QtWidgets.QWidget):

         # Layout
         layout = QtWidgets.QVBoxLayout()
-        if self.warning_label:
-            layout.addWidget(self.warning_label)  # Add warning at the top
         layout.addWidget(self.settings_widget, stretch=1)
         layout.addWidget(self.documents_list, stretch=1)
         layout.addWidget(self.doc_selection_wrapper, stretch=1)
@@ -632,7 +668,7 @@ class ContentWidget(QtWidgets.QWidget):

     def documents_selected(self, docs: List[Document]) -> None:
         if self.conversion_started:
-            Alert(
+            self.alert = Alert(
                 self.dangerzone,
                 message="Dangerzone does not support adding documents after the conversion has started.",
                 has_cancel=False,
@@ -642,7 +678,7 @@ class ContentWidget(QtWidgets.QWidget):
             # Ensure all files in batch are in the same directory
             dirnames = {os.path.dirname(doc.input_filename) for doc in docs}
             if len(dirnames) > 1:
-                Alert(
+                self.alert = Alert(
                     self.dangerzone,
                     message="Dangerzone does not support adding documents from multiple locations.\n\n The newly added documents were ignored.",
                     has_cancel=False,
@@ -811,14 +847,14 @@ class DocSelectionDropFrame(QtWidgets.QFrame):
         text = f"{num_unsupported_docs} files are not supported."
         ok_text = "Continue without these files"

-        alert_widget = Alert(
+        self.alert = Alert(
             self.dangerzone,
             message=f"{text}\nThe supported extensions are: "
             + ", ".join(get_supported_extensions()),
             ok_text=ok_text,
         )

-        return alert_widget.exec_()
+        return self.alert.exec_()


 class SettingsWidget(QtWidgets.QWidget):
@@ -855,22 +891,16 @@ class SettingsWidget(QtWidgets.QWidget):
         self.safe_extension_name_layout.setSpacing(0)
         self.safe_extension_name_layout.addWidget(self.safe_extension_filename)
         self.safe_extension_name_layout.addWidget(self.safe_extension)
-        # FIXME: Workaround for https://github.com/freedomofpress/dangerzone/issues/339.
-        # We should drop this once we drop Ubuntu Focal support.
-        if hasattr(QtGui, "QRegularExpressionValidator"):
-            QRegEx = QtCore.QRegularExpression
-            QRegExValidator = QtGui.QRegularExpressionValidator
-        else:
-            QRegEx = QtCore.QRegExp  # type: ignore [assignment]
-            QRegExValidator = QtGui.QRegExpValidator  # type: ignore [assignment]
-        self.dot_pdf_validator = QRegExValidator(QRegEx(r".*\.[Pp][Dd][Ff]"))
+        self.dot_pdf_validator = QtGui.QRegularExpressionValidator(
+            QtCore.QRegularExpression(r".*\.[Pp][Dd][Ff]")
+        )
         if platform.system() == "Linux":
             illegal_chars_regex = r"[/]"
         elif platform.system() == "Darwin":
             illegal_chars_regex = r"[\\]"
         else:
             illegal_chars_regex = r"[\"*/:<>?\\|]"
-        self.illegal_chars_regex = QRegEx(illegal_chars_regex)
+        self.illegal_chars_regex = QtCore.QRegularExpression(illegal_chars_regex)
         self.safe_extension_layout = QtWidgets.QHBoxLayout()
         self.safe_extension_layout.addWidget(self.save_checkbox)
         self.safe_extension_layout.addWidget(self.safe_extension_label)
@@ -1289,7 +1319,7 @@ class DocumentWidget(QtWidgets.QWidget):

     def load_status_image(self, filename: str) -> QtGui.QPixmap:
         path = get_resource_path(filename)
-        img = QtGui.QImage(path)
+        img = QtGui.QImage(str(path))
         image = QtGui.QPixmap.fromImage(img)
         return image.scaled(QtCore.QSize(15, 15))
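Dropping the Focal-era `QRegExp` fallback leaves a single validator built from `.*\.[Pp][Dd][Ff]`. Qt's `QRegularExpressionValidator` effectively requires the whole input to match; in plain Python terms (using `re` instead of Qt, purely for illustration):

```python
import re

# Same pattern as the dot_pdf_validator above: anything ending in .pdf,
# with the extension letters in any case.
DOT_PDF = re.compile(r".*\.[Pp][Dd][Ff]")


def accepts(filename: str) -> bool:
    """Validator-style check: the whole string must end in .pdf (any case)."""
    return DOT_PDF.fullmatch(filename) is not None
```

Note this sketch models only the final "acceptable" state; the Qt validator additionally reports intermediate states while the user is still typing.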
@ -5,8 +5,9 @@ import platform
|
||||||
import signal
|
import signal
|
||||||
import subprocess
|
import subprocess
|
||||||
import sys
|
import sys
|
||||||
|
import threading
|
||||||
from abc import ABC, abstractmethod
|
from abc import ABC, abstractmethod
|
||||||
from pathlib import Path
|
from io import BytesIO
|
||||||
from typing import IO, Callable, Iterator, Optional
|
from typing import IO, Callable, Iterator, Optional
|
||||||
|
|
||||||
import fitz
|
import fitz
|
||||||
|
@ -19,10 +20,6 @@ from ..util import get_tessdata_dir, replace_control_chars
|
||||||
|
|
||||||
log = logging.getLogger(__name__)
|
log = logging.getLogger(__name__)
|
||||||
|
|
||||||
MAX_CONVERSION_LOG_CHARS = 150 * 50 # up to ~150 lines of 50 characters
|
|
||||||
DOC_TO_PIXELS_LOG_START = "----- DOC TO PIXELS LOG START -----"
|
|
||||||
DOC_TO_PIXELS_LOG_END = "----- DOC TO PIXELS LOG END -----"
|
|
||||||
|
|
||||||
TIMEOUT_EXCEPTION = 15
|
TIMEOUT_EXCEPTION = 15
|
||||||
TIMEOUT_GRACE = 15
|
TIMEOUT_GRACE = 15
|
||||||
TIMEOUT_FORCE = 5
|
TIMEOUT_FORCE = 5
|
||||||
|
@ -76,9 +73,9 @@ def read_int(f: IO[bytes]) -> int:
|
||||||
return int.from_bytes(untrusted_int, "big", signed=False)
|
return int.from_bytes(untrusted_int, "big", signed=False)
|
||||||
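The `read_int` line above parses a big-endian unsigned integer from an untrusted stream. A self-contained sketch of the same idea, with an assumed 4-byte field width (the real protocol defines its own):

```python
from io import BytesIO
from typing import IO

INT_BYTES = 4  # assumed width for this sketch

def read_int(f: IO[bytes]) -> int:
    """Read a big-endian unsigned integer from an untrusted stream."""
    untrusted_int = f.read(INT_BYTES)
    if len(untrusted_int) != INT_BYTES:
        raise ValueError("truncated integer field")
    return int.from_bytes(untrusted_int, "big", signed=False)

# Example: 0x00000100 big-endian is 256
value = read_int(BytesIO(b"\x00\x00\x01\x00"))
```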
|
|
||||||
|
|
||||||
def read_debug_text(f: IO[bytes], size: int) -> str:
|
def sanitize_debug_text(text: bytes) -> str:
|
||||||
"""Read arbitrarily long text (for debug purposes), and sanitize it."""
|
"""Read all the buffer and return a sanitized version"""
|
||||||
untrusted_text = f.read(size).decode("ascii", errors="replace")
|
untrusted_text = text.decode("ascii", errors="replace")
|
||||||
return replace_control_chars(untrusted_text, keep_newlines=True)
|
return replace_control_chars(untrusted_text, keep_newlines=True)
|
||||||
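The new `sanitize_debug_text` decodes the captured bytes instead of reading from the pipe directly. Its helper `replace_control_chars` is defined in `dangerzone.util` and is not part of this hunk, so the version below is a stand-in that only approximates its behavior:

```python
def replace_control_chars(untrusted_text: str, keep_newlines: bool = False) -> str:
    # Stand-in for Dangerzone's helper (defined elsewhere, not shown in this
    # diff): map non-printable characters to a placeholder character.
    out = []
    for ch in untrusted_text:
        if ch == "\n" and keep_newlines:
            out.append(ch)
        elif ch.isprintable():
            out.append(ch)
        else:
            out.append("_")
    return "".join(out)

def sanitize_debug_text(text: bytes) -> str:
    """Decode an untrusted byte buffer and return a sanitized version."""
    untrusted_text = text.decode("ascii", errors="replace")
    return replace_control_chars(untrusted_text, keep_newlines=True)
```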
|
|
||||||
|
|
||||||
|
@ -87,15 +84,15 @@ class IsolationProvider(ABC):
|
||||||
Abstracts an isolation provider
|
Abstracts an isolation provider
|
||||||
"""
|
"""
|
||||||
|
|
||||||
def __init__(self) -> None:
|
def __init__(self, debug: bool = False) -> None:
|
||||||
if getattr(sys, "dangerzone_dev", False) is True:
|
self.debug = debug
|
||||||
|
if self.should_capture_stderr():
|
||||||
self.proc_stderr = subprocess.PIPE
|
self.proc_stderr = subprocess.PIPE
|
||||||
else:
|
else:
|
||||||
self.proc_stderr = subprocess.DEVNULL
|
self.proc_stderr = subprocess.DEVNULL
|
||||||
|
|
||||||
@staticmethod
|
def should_capture_stderr(self) -> bool:
|
||||||
def is_runtime_available() -> bool:
|
return self.debug or getattr(sys, "dangerzone_dev", False)
|
||||||
return True
|
|
||||||
|
|
||||||
@abstractmethod
|
@abstractmethod
|
||||||
def install(self) -> bool:
|
def install(self) -> bool:
|
||||||
|
@ -258,6 +255,16 @@ class IsolationProvider(ABC):
|
||||||
)
|
)
|
||||||
return errors.exception_from_error_code(error_code)
|
return errors.exception_from_error_code(error_code)
|
||||||
|
|
||||||
|
@abstractmethod
|
||||||
|
def should_wait_install(self) -> bool:
|
||||||
|
"""Whether this isolation provider takes a lot of time to install."""
|
||||||
|
pass
|
||||||
|
|
||||||
|
@abstractmethod
|
||||||
|
def is_available(self) -> bool:
|
||||||
|
"""Whether the backing implementation of the isolation provider is available."""
|
||||||
|
pass
|
||||||
|
|
||||||
@abstractmethod
|
@abstractmethod
|
||||||
def get_max_parallel_conversions(self) -> int:
|
def get_max_parallel_conversions(self) -> int:
|
||||||
pass
|
pass
|
||||||
|
@ -322,11 +329,15 @@ class IsolationProvider(ABC):
|
||||||
timeout_force: int = TIMEOUT_FORCE,
|
timeout_force: int = TIMEOUT_FORCE,
|
||||||
) -> Iterator[subprocess.Popen]:
|
) -> Iterator[subprocess.Popen]:
|
||||||
"""Start a conversion process, pass it to the caller, and then clean it up."""
|
"""Start a conversion process, pass it to the caller, and then clean it up."""
|
||||||
|
# Store the proc stderr in memory
|
||||||
|
stderr = BytesIO()
|
||||||
p = self.start_doc_to_pixels_proc(document)
|
p = self.start_doc_to_pixels_proc(document)
|
||||||
|
stderr_thread = self.start_stderr_thread(p, stderr)
|
||||||
|
|
||||||
if platform.system() != "Windows":
|
if platform.system() != "Windows":
|
||||||
assert os.getpgid(p.pid) != os.getpgid(
|
assert os.getpgid(p.pid) != os.getpgid(os.getpid()), (
|
||||||
os.getpid()
|
"Parent shares same PGID with child"
|
||||||
), "Parent shares same PGID with child"
|
)
|
||||||
|
|
||||||
try:
|
try:
|
||||||
yield p
|
yield p
|
||||||
|
@ -338,15 +349,40 @@ class IsolationProvider(ABC):
|
||||||
document, p, timeout_grace=timeout_grace, timeout_force=timeout_force
|
document, p, timeout_grace=timeout_grace, timeout_force=timeout_force
|
||||||
)
|
)
|
||||||
|
|
||||||
# Read the stderr of the process only if:
|
if stderr_thread:
|
||||||
# * Dev mode is enabled.
|
# Wait for the thread to complete. If it's still alive, mention it in the debug log.
|
||||||
# * The process has exited (else we risk hanging).
|
stderr_thread.join(timeout=1)
|
||||||
if getattr(sys, "dangerzone_dev", False) and p.poll() is not None:
|
|
||||||
assert p.stderr
|
debug_bytes = stderr.getvalue()
|
||||||
debug_log = read_debug_text(p.stderr, MAX_CONVERSION_LOG_CHARS)
|
debug_log = sanitize_debug_text(debug_bytes)
|
||||||
|
|
||||||
|
incomplete = "(incomplete) " if stderr_thread.is_alive() else ""
|
||||||
|
|
||||||
log.info(
|
log.info(
|
||||||
"Conversion output (doc to pixels)\n"
|
"Conversion output (doc to pixels)\n"
|
||||||
f"{DOC_TO_PIXELS_LOG_START}\n"
|
f"----- DOC TO PIXELS LOG START {incomplete}-----\n"
|
||||||
f"{debug_log}" # no need for an extra newline here
|
f"{debug_log}" # no need for an extra newline here
|
||||||
f"{DOC_TO_PIXELS_LOG_END}"
|
"----- DOC TO PIXELS LOG END -----"
|
||||||
)
|
)
|
||||||
|
|
||||||
|
def start_stderr_thread(
|
||||||
|
self, process: subprocess.Popen, stderr: IO[bytes]
|
||||||
|
) -> Optional[threading.Thread]:
|
||||||
|
"""Start a thread to read stderr from the process"""
|
||||||
|
|
||||||
|
def _stream_stderr(process_stderr: IO[bytes]) -> None:
|
||||||
|
try:
|
||||||
|
for line in process_stderr:
|
||||||
|
stderr.write(line)
|
||||||
|
except (ValueError, IOError) as e:
|
||||||
|
log.debug(f"Stderr stream closed: {e}")
|
||||||
|
|
||||||
|
if process.stderr:
|
||||||
|
stderr_thread = threading.Thread(
|
||||||
|
target=_stream_stderr,
|
||||||
|
args=(process.stderr,),
|
||||||
|
daemon=True,
|
||||||
|
)
|
||||||
|
stderr_thread.start()
|
||||||
|
return stderr_thread
|
||||||
|
return None
|
||||||
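The new `start_stderr_thread` drains the child's stderr into an in-memory buffer on a daemon thread, so the main thread can never block on a full pipe. The pattern can be sketched in isolation with a throwaway subprocess (not Dangerzone's converter):

```python
import subprocess
import sys
import threading
from io import BytesIO
from typing import IO, Optional

def start_stderr_thread(
    process: subprocess.Popen, stderr: IO[bytes]
) -> Optional[threading.Thread]:
    """Drain the child's stderr into `stderr` on a daemon thread."""

    def _stream_stderr(process_stderr: IO[bytes]) -> None:
        try:
            for line in process_stderr:
                stderr.write(line)
        except (ValueError, OSError):
            pass  # stream closed while we were reading

    if process.stderr:
        t = threading.Thread(
            target=_stream_stderr, args=(process.stderr,), daemon=True
        )
        t.start()
        return t
    return None

buf = BytesIO()
p = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stderr.write('boom\\n')"],
    stderr=subprocess.PIPE,
)
t = start_stderr_thread(p, buf)
p.wait()
if t:
    # Mirror the diff: bounded join, so a stuck reader can't hang cleanup.
    t.join(timeout=1)
```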
|
|
|
@ -1,18 +1,21 @@
|
||||||
import gzip
|
|
||||||
import logging
|
import logging
|
||||||
import os
|
import os
|
||||||
import platform
|
import platform
|
||||||
import shlex
|
import shlex
|
||||||
import shutil
|
|
||||||
import subprocess
|
import subprocess
|
||||||
from typing import List, Tuple
|
from typing import List, Tuple
|
||||||
|
|
||||||
|
from .. import container_utils, errors
|
||||||
|
from ..container_utils import Runtime
|
||||||
from ..document import Document
|
from ..document import Document
|
||||||
from ..util import get_resource_path, get_subprocess_startupinfo
|
from ..util import get_resource_path, get_subprocess_startupinfo
|
||||||
from .base import IsolationProvider, terminate_process_group
|
from .base import IsolationProvider, terminate_process_group
|
||||||
|
|
||||||
TIMEOUT_KILL = 5 # Timeout in seconds until the kill command returns.
|
TIMEOUT_KILL = 5 # Timeout in seconds until the kill command returns.
|
||||||
|
MINIMUM_DOCKER_DESKTOP = {
|
||||||
|
"Darwin": "4.40.0",
|
||||||
|
"Windows": "4.40.0",
|
||||||
|
}
|
||||||
|
|
||||||
# Define startupinfo for subprocesses
|
# Define startupinfo for subprocesses
|
||||||
if platform.system() == "Windows":
|
if platform.system() == "Windows":
|
||||||
|
@ -25,88 +28,8 @@ else:
|
||||||
log = logging.getLogger(__name__)
|
log = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
|
||||||
class NoContainerTechException(Exception):
|
|
||||||
def __init__(self, container_tech: str) -> None:
|
|
||||||
super().__init__(f"{container_tech} is not installed")
|
|
||||||
|
|
||||||
|
|
||||||
class NotAvailableContainerTechException(Exception):
|
|
||||||
def __init__(self, container_tech: str, error: str) -> None:
|
|
||||||
self.error = error
|
|
||||||
self.container_tech = container_tech
|
|
||||||
super().__init__(f"{container_tech} is not available")
|
|
||||||
|
|
||||||
|
|
||||||
class ImageNotPresentException(Exception):
|
|
||||||
pass
|
|
||||||
|
|
||||||
|
|
||||||
class ImageInstallationException(Exception):
|
|
||||||
pass
|
|
||||||
|
|
||||||
|
|
||||||
class Container(IsolationProvider):
|
class Container(IsolationProvider):
|
||||||
# Name of the dangerzone container
|
# Name of the dangerzone container
|
||||||
CONTAINER_NAME = "dangerzone.rocks/dangerzone"
|
|
||||||
|
|
||||||
@staticmethod
|
|
||||||
def get_runtime_name() -> str:
|
|
||||||
if platform.system() == "Linux":
|
|
||||||
runtime_name = "podman"
|
|
||||||
else:
|
|
||||||
# Windows, Darwin, and unknown use docker for now, dangerzone-vm eventually
|
|
||||||
runtime_name = "docker"
|
|
||||||
return runtime_name
|
|
||||||
|
|
||||||
@staticmethod
|
|
||||||
def get_runtime_version() -> Tuple[int, int]:
|
|
||||||
"""Get the major/minor parts of the Docker/Podman version.
|
|
||||||
|
|
||||||
Some of the operations we perform in this module rely on some Podman features
|
|
||||||
that are not available across all of our platforms. In order to have a proper
|
|
||||||
fallback, we need to know the Podman version. More specifically, we're fine with
|
|
||||||
just knowing the major and minor version, since writing/installing a full-blown
|
|
||||||
semver parser is overkill.
|
|
||||||
"""
|
|
||||||
# Get the Docker/Podman version, using a Go template.
|
|
||||||
runtime = Container.get_runtime_name()
|
|
||||||
if runtime == "podman":
|
|
||||||
query = "{{.Client.Version}}"
|
|
||||||
else:
|
|
||||||
query = "{{.Server.Version}}"
|
|
||||||
|
|
||||||
cmd = [runtime, "version", "-f", query]
|
|
||||||
try:
|
|
||||||
version = subprocess.run(
|
|
||||||
cmd,
|
|
||||||
startupinfo=get_subprocess_startupinfo(),
|
|
||||||
capture_output=True,
|
|
||||||
check=True,
|
|
||||||
).stdout.decode()
|
|
||||||
except Exception as e:
|
|
||||||
msg = f"Could not get the version of the {runtime.capitalize()} tool: {e}"
|
|
||||||
raise RuntimeError(msg) from e
|
|
||||||
|
|
||||||
# Parse this version and return the major/minor parts, since we don't need the
|
|
||||||
# rest.
|
|
||||||
try:
|
|
||||||
major, minor, _ = version.split(".", 3)
|
|
||||||
return (int(major), int(minor))
|
|
||||||
except Exception as e:
|
|
||||||
msg = (
|
|
||||||
f"Could not parse the version of the {runtime.capitalize()} tool"
|
|
||||||
f" (found: '{version}') due to the following error: {e}"
|
|
||||||
)
|
|
||||||
raise RuntimeError(msg)
|
|
||||||
|
|
||||||
@staticmethod
|
|
||||||
def get_runtime() -> str:
|
|
||||||
container_tech = Container.get_runtime_name()
|
|
||||||
runtime = shutil.which(container_tech)
|
|
||||||
if runtime is None:
|
|
||||||
raise NoContainerTechException(container_tech)
|
|
||||||
return runtime
|
|
||||||
|
|
||||||
@staticmethod
|
@staticmethod
|
||||||
def get_runtime_security_args() -> List[str]:
|
def get_runtime_security_args() -> List[str]:
|
||||||
"""Security options applicable to the outer Dangerzone container.
|
"""Security options applicable to the outer Dangerzone container.
|
||||||
|
@ -127,12 +50,20 @@ class Container(IsolationProvider):
|
||||||
* Do not log the container's output.
|
* Do not log the container's output.
|
||||||
* Do not map the host user to the container, with `--userns nomap` (available
|
* Do not map the host user to the container, with `--userns nomap` (available
|
||||||
from Podman 4.1 onwards)
|
from Podman 4.1 onwards)
|
||||||
- This particular argument is specified in `start_doc_to_pixels_proc()`, but
|
|
||||||
should move here once #748 is merged.
|
|
||||||
"""
|
"""
|
||||||
if Container.get_runtime_name() == "podman":
|
runtime = Runtime()
|
||||||
|
if runtime.name == "podman":
|
||||||
security_args = ["--log-driver", "none"]
|
security_args = ["--log-driver", "none"]
|
||||||
security_args += ["--security-opt", "no-new-privileges"]
|
security_args += ["--security-opt", "no-new-privileges"]
|
||||||
|
if container_utils.get_runtime_version() >= (4, 1):
|
||||||
|
# We perform a platform check to avoid the following Podman Desktop
|
||||||
|
# error on Windows:
|
||||||
|
#
|
||||||
|
# Error: nomap is only supported in rootless mode
|
||||||
|
#
|
||||||
|
# See also: https://github.com/freedomofpress/dangerzone/issues/1127
|
||||||
|
if platform.system() != "Windows":
|
||||||
|
security_args += ["--userns", "nomap"]
|
||||||
else:
|
else:
|
||||||
security_args = ["--security-opt=no-new-privileges:true"]
|
security_args = ["--security-opt=no-new-privileges:true"]
|
||||||
|
|
||||||
|
@ -142,7 +73,15 @@ class Container(IsolationProvider):
|
||||||
#
|
#
|
||||||
# [1] https://github.com/freedomofpress/dangerzone/issues/846
|
# [1] https://github.com/freedomofpress/dangerzone/issues/846
|
||||||
# [2] https://github.com/containers/common/blob/d3283f8401eeeb21f3c59a425b5461f069e199a7/pkg/seccomp/seccomp.json
|
# [2] https://github.com/containers/common/blob/d3283f8401eeeb21f3c59a425b5461f069e199a7/pkg/seccomp/seccomp.json
|
||||||
seccomp_json_path = get_resource_path("seccomp.gvisor.json")
|
seccomp_json_path = str(get_resource_path("seccomp.gvisor.json"))
|
||||||
|
# We perform a platform check to avoid the following Podman Desktop
|
||||||
|
# error on Windows:
|
||||||
|
#
|
||||||
|
# Error: opening seccomp profile failed: open
|
||||||
|
# C:\[...]\dangerzone\share\seccomp.gvisor.json: no such file or directory
|
||||||
|
#
|
||||||
|
# See also: https://github.com/freedomofpress/dangerzone/issues/1127
|
||||||
|
if runtime.name == "podman" and platform.system() != "Windows":
|
||||||
security_args += ["--security-opt", f"seccomp={seccomp_json_path}"]
|
security_args += ["--security-opt", f"seccomp={seccomp_json_path}"]
|
||||||
|
|
||||||
security_args += ["--cap-drop", "all"]
|
security_args += ["--cap-drop", "all"]
|
||||||
|
@ -156,114 +95,92 @@ class Container(IsolationProvider):
|
||||||
|
|
||||||
@staticmethod
|
@staticmethod
|
||||||
def install() -> bool:
|
def install() -> bool:
|
||||||
|
"""Install the container image tarball, or verify that it's already installed.
|
||||||
|
|
||||||
|
Perform the following actions:
|
||||||
|
1. Get the tags of any locally available images that match Dangerzone's image
|
||||||
|
name.
|
||||||
|
2. Get the expected image tag from the image-id.txt file.
|
||||||
|
- If this tag is present in the local images, then we can return.
|
||||||
|
- Else, prune the older container images and continue.
|
||||||
|
3. Load the image tarball and make sure it matches the expected tag.
|
||||||
"""
|
"""
|
||||||
Make sure the podman container is installed. Linux only.
|
old_tags = container_utils.list_image_tags()
|
||||||
"""
|
expected_tag = container_utils.get_expected_tag()
|
||||||
if Container.is_container_installed():
|
|
||||||
|
if expected_tag not in old_tags:
|
||||||
|
# Prune older container images.
|
||||||
|
log.info(
|
||||||
|
f"Could not find a Dangerzone container image with tag '{expected_tag}'"
|
||||||
|
)
|
||||||
|
for tag in old_tags:
|
||||||
|
tag = container_utils.CONTAINER_NAME + ":" + tag
|
||||||
|
container_utils.delete_image_tag(tag)
|
||||||
|
else:
|
||||||
return True
|
return True
|
||||||
|
|
||||||
# Load the container into podman
|
# Load the image tarball into the container runtime.
|
||||||
log.info("Installing Dangerzone container image...")
|
container_utils.load_image_tarball()
|
||||||
|
|
||||||
p = subprocess.Popen(
|
# Check that the container image has the expected image tag.
|
||||||
[Container.get_runtime(), "load"],
|
# See https://github.com/freedomofpress/dangerzone/issues/988 for an example
|
||||||
stdin=subprocess.PIPE,
|
# where this was not the case.
|
||||||
startupinfo=get_subprocess_startupinfo(),
|
new_tags = container_utils.list_image_tags()
|
||||||
|
if expected_tag not in new_tags:
|
||||||
|
raise errors.ImageNotPresentException(
|
||||||
|
f"Could not find expected tag '{expected_tag}' after loading the"
|
||||||
|
" container image tarball"
|
||||||
)
|
)
|
||||||
|
|
||||||
chunk_size = 4 << 20
|
|
||||||
compressed_container_path = get_resource_path("container.tar.gz")
|
|
||||||
with gzip.open(compressed_container_path) as f:
|
|
||||||
while True:
|
|
||||||
chunk = f.read(chunk_size)
|
|
||||||
if len(chunk) > 0:
|
|
||||||
if p.stdin:
|
|
||||||
p.stdin.write(chunk)
|
|
||||||
else:
|
|
||||||
break
|
|
||||||
_, err = p.communicate()
|
|
||||||
if p.returncode < 0:
|
|
||||||
if err:
|
|
||||||
error = err.decode()
|
|
||||||
else:
|
|
||||||
error = "No output"
|
|
||||||
raise ImageInstallationException(
|
|
||||||
f"Could not install container image: {error}"
|
|
||||||
)
|
|
||||||
|
|
||||||
if not Container.is_container_installed(raise_on_error=True):
|
|
||||||
return False
|
|
||||||
|
|
||||||
log.info("Container image installed")
|
|
||||||
return True
|
return True
|
||||||
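The rewritten `install()` compares locally available image tags against the expected tag from the image-id file, pruning stale tags before loading the tarball. That decision logic can be sketched on its own; the helper name below is illustrative, not the real `container_utils` API:

```python
from typing import List, Tuple

def reconcile_tags(local_tags: List[str], expected_tag: str) -> Tuple[List[str], bool]:
    """Return (tags to delete, whether the image tarball must be loaded)."""
    if expected_tag in local_tags:
        # The expected image is already present: nothing to prune or load.
        return [], False
    # Prune every stale tag, then load the shipped tarball.
    return list(local_tags), True
```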
|
|
||||||
@staticmethod
|
@staticmethod
|
||||||
def is_runtime_available() -> bool:
|
def should_wait_install() -> bool:
|
||||||
container_runtime = Container.get_runtime()
|
return True
|
||||||
runtime_name = Container.get_runtime_name()
|
|
||||||
|
@staticmethod
|
||||||
|
def is_available() -> bool:
|
||||||
|
runtime = Runtime()
|
||||||
|
|
||||||
# Can we run `docker/podman image ls` without an error
|
# Can we run `docker/podman image ls` without an error
|
||||||
with subprocess.Popen(
|
with subprocess.Popen(
|
||||||
[container_runtime, "image", "ls"],
|
[str(runtime.path), "image", "ls"],
|
||||||
stdout=subprocess.DEVNULL,
|
stdout=subprocess.DEVNULL,
|
||||||
stderr=subprocess.PIPE,
|
stderr=subprocess.PIPE,
|
||||||
startupinfo=get_subprocess_startupinfo(),
|
startupinfo=get_subprocess_startupinfo(),
|
||||||
) as p:
|
) as p:
|
||||||
_, stderr = p.communicate()
|
_, stderr = p.communicate()
|
||||||
if p.returncode != 0:
|
if p.returncode != 0:
|
||||||
raise NotAvailableContainerTechException(runtime_name, stderr.decode())
|
raise errors.NotAvailableContainerTechException(
|
||||||
|
runtime.name, stderr.decode()
|
||||||
|
)
|
||||||
return True
|
return True
|
||||||
|
|
||||||
@staticmethod
|
def check_docker_desktop_version(self) -> Tuple[bool, str]:
|
||||||
def is_container_installed(raise_on_error: bool = False) -> bool:
|
# On windows and darwin, check that the minimum version is met
|
||||||
"""
|
version = ""
|
||||||
See if the container is installed.
|
runtime = Runtime()
|
||||||
"""
|
runtime_is_docker = runtime.name == "docker"
|
||||||
# Get the image id
|
platform_is_not_linux = platform.system() != "Linux"
|
||||||
with open(get_resource_path("image-id.txt")) as f:
|
|
||||||
expected_image_ids = f.read().strip().split()
|
|
||||||
|
|
||||||
# See if this image is already installed
|
if runtime_is_docker and platform_is_not_linux:
|
||||||
installed = False
|
with subprocess.Popen(
|
||||||
found_image_id = subprocess.check_output(
|
["docker", "version", "--format", "{{.Server.Platform.Name}}"],
|
||||||
[
|
stdout=subprocess.PIPE,
|
||||||
Container.get_runtime(),
|
stderr=subprocess.PIPE,
|
||||||
"image",
|
|
||||||
"list",
|
|
||||||
"--format",
|
|
||||||
"{{.ID}}",
|
|
||||||
Container.CONTAINER_NAME,
|
|
||||||
],
|
|
||||||
text=True,
|
|
||||||
startupinfo=get_subprocess_startupinfo(),
|
startupinfo=get_subprocess_startupinfo(),
|
||||||
)
|
) as p:
|
||||||
found_image_id = found_image_id.strip()
|
stdout, stderr = p.communicate()
|
||||||
|
if p.returncode != 0:
|
||||||
if found_image_id in expected_image_ids:
|
# When an error occurs, consider that the check went
|
||||||
installed = True
|
# through, as we're checking for installation compatibility
|
||||||
elif found_image_id == "":
|
# somewhere else already
|
||||||
if raise_on_error:
|
return True, version
|
||||||
raise ImageNotPresentException(
|
# The output is like "Docker Desktop 4.35.1 (173168)"
|
||||||
"Image is not listed after installation. Bailing out."
|
version = stdout.decode().replace("Docker Desktop", "").split()[0]
|
||||||
)
|
if version < MINIMUM_DOCKER_DESKTOP[platform.system()]:
|
||||||
else:
|
return False, version
|
||||||
msg = (
|
return True, version
|
||||||
f"{Container.CONTAINER_NAME} images found, but IDs do not match."
|
|
||||||
f" Found: {found_image_id}, Expected: {','.join(expected_image_ids)}"
|
|
||||||
)
|
|
||||||
if raise_on_error:
|
|
||||||
raise ImageNotPresentException(msg)
|
|
||||||
log.info(msg)
|
|
||||||
log.info("Deleting old dangerzone container image")
|
|
||||||
|
|
||||||
try:
|
|
||||||
subprocess.check_output(
|
|
||||||
[Container.get_runtime(), "rmi", "--force", found_image_id],
|
|
||||||
startupinfo=get_subprocess_startupinfo(),
|
|
||||||
)
|
|
||||||
except Exception:
|
|
||||||
log.warning("Couldn't delete old container image, so leaving it there")
|
|
||||||
|
|
||||||
return installed
|
|
||||||
|
|
||||||
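`check_docker_desktop_version` above compares a version string such as "4.35.1" against a minimum. Note that a plain string comparison mis-orders versions once a component reaches two digits ("4.9.0" sorts after "4.40.0" lexicographically); a numeric tuple comparison avoids this. A stdlib-only sketch (real code could equally use `packaging.version`, which the repo already depends on):

```python
from typing import Tuple

def parse_version(version: str) -> Tuple[int, ...]:
    """Split a dotted version string into a tuple of ints for safe comparison."""
    return tuple(int(part) for part in version.split("."))

def meets_minimum(found: str, minimum: str) -> bool:
    return parse_version(found) >= parse_version(minimum)
```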
def doc_to_pixels_container_name(self, document: Document) -> str:
|
def doc_to_pixels_container_name(self, document: Document) -> str:
|
||||||
"""Unique container name for the doc-to-pixels phase."""
|
"""Unique container name for the doc-to-pixels phase."""
|
||||||
|
@ -295,25 +212,30 @@ class Container(IsolationProvider):
|
||||||
self,
|
self,
|
||||||
command: List[str],
|
command: List[str],
|
||||||
name: str,
|
name: str,
|
||||||
extra_args: List[str] = [],
|
|
||||||
) -> subprocess.Popen:
|
) -> subprocess.Popen:
|
||||||
container_runtime = self.get_runtime()
|
runtime = Runtime()
|
||||||
security_args = self.get_runtime_security_args()
|
security_args = self.get_runtime_security_args()
|
||||||
|
debug_args = []
|
||||||
|
if self.debug:
|
||||||
|
debug_args += ["-e", "RUNSC_DEBUG=1"]
|
||||||
|
|
||||||
enable_stdin = ["-i"]
|
enable_stdin = ["-i"]
|
||||||
set_name = ["--name", name]
|
set_name = ["--name", name]
|
||||||
prevent_leakage_args = ["--rm"]
|
prevent_leakage_args = ["--rm"]
|
||||||
|
image_name = [
|
||||||
|
container_utils.CONTAINER_NAME + ":" + container_utils.get_expected_tag()
|
||||||
|
]
|
||||||
args = (
|
args = (
|
||||||
["run"]
|
["run"]
|
||||||
+ security_args
|
+ security_args
|
||||||
|
+ debug_args
|
||||||
+ prevent_leakage_args
|
+ prevent_leakage_args
|
||||||
+ enable_stdin
|
+ enable_stdin
|
||||||
+ set_name
|
+ set_name
|
||||||
+ extra_args
|
+ image_name
|
||||||
+ [self.CONTAINER_NAME]
|
|
||||||
+ command
|
+ command
|
||||||
)
|
)
|
||||||
args = [container_runtime] + args
|
return self.exec([str(runtime.path)] + args)
|
||||||
return self.exec(args)
|
|
||||||
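`exec_container` now assembles the runtime argv from fixed segments plus the pinned image tag (instead of the old `extra_args` parameter). The assembly can be sketched as follows; the function name and image constant are illustrative:

```python
from typing import List

CONTAINER_NAME = "dangerzone.rocks/dangerzone"  # image name, as in the diff

def build_run_args(
    runtime_path: str,
    security_args: List[str],
    debug: bool,
    name: str,
    tag: str,
    command: List[str],
) -> List[str]:
    """Assemble a docker/podman `run` argv from its segments."""
    debug_args = ["-e", "RUNSC_DEBUG=1"] if debug else []
    image_name = [CONTAINER_NAME + ":" + tag]
    return (
        [runtime_path, "run"]
        + security_args
        + debug_args
        + ["--rm", "-i", "--name", name]  # prevent leakage, stdin, name
        + image_name
        + command
    )
```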
|
|
||||||
def kill_container(self, name: str) -> None:
|
def kill_container(self, name: str) -> None:
|
||||||
"""Terminate a spawned container.
|
"""Terminate a spawned container.
|
||||||
|
@ -325,8 +247,8 @@ class Container(IsolationProvider):
|
||||||
connected to the Docker daemon, and killing it will just close the associated
|
connected to the Docker daemon, and killing it will just close the associated
|
||||||
standard streams.
|
standard streams.
|
||||||
"""
|
"""
|
||||||
container_runtime = self.get_runtime()
|
runtime = Runtime()
|
||||||
cmd = [container_runtime, "kill", name]
|
cmd = [str(runtime.path), "kill", name]
|
||||||
try:
|
try:
|
||||||
# We do not check the exit code of the process here, since the container may
|
# We do not check the exit code of the process here, since the container may
|
||||||
# have stopped right before invoking this command. In that case, the
|
# have stopped right before invoking this command. In that case, the
|
||||||
|
@ -358,15 +280,8 @@ class Container(IsolationProvider):
|
||||||
"-m",
|
"-m",
|
||||||
"dangerzone.conversion.doc_to_pixels",
|
"dangerzone.conversion.doc_to_pixels",
|
||||||
]
|
]
|
||||||
# NOTE: Using `--userns nomap` is available only on Podman >= 4.1.0.
|
|
||||||
# XXX: Move this under `get_runtime_security_args()` once #748 is merged.
|
|
||||||
extra_args = []
|
|
||||||
if Container.get_runtime_name() == "podman":
|
|
||||||
if Container.get_runtime_version() >= (4, 1):
|
|
||||||
extra_args += ["--userns", "nomap"]
|
|
||||||
|
|
||||||
name = self.doc_to_pixels_container_name(document)
|
name = self.doc_to_pixels_container_name(document)
|
||||||
return self.exec_container(command, name=name, extra_args=extra_args)
|
return self.exec_container(command, name=name)
|
||||||
|
|
||||||
def terminate_doc_to_pixels_proc(
|
def terminate_doc_to_pixels_proc(
|
||||||
self, document: Document, p: subprocess.Popen
|
self, document: Document, p: subprocess.Popen
|
||||||
|
@ -389,10 +304,10 @@ class Container(IsolationProvider):
|
||||||
# after a podman kill / docker kill invocation, this will likely be the case,
|
# after a podman kill / docker kill invocation, this will likely be the case,
|
||||||
# else the container runtime (Docker/Podman) has experienced a problem, and we
|
# else the container runtime (Docker/Podman) has experienced a problem, and we
|
||||||
# should report it.
|
# should report it.
|
||||||
container_runtime = self.get_runtime()
|
runtime = Runtime()
|
||||||
name = self.doc_to_pixels_container_name(document)
|
name = self.doc_to_pixels_container_name(document)
|
||||||
all_containers = subprocess.run(
|
all_containers = subprocess.run(
|
||||||
[container_runtime, "ps", "-a"],
|
[str(runtime.path), "ps", "-a"],
|
||||||
capture_output=True,
|
capture_output=True,
|
||||||
startupinfo=get_subprocess_startupinfo(),
|
startupinfo=get_subprocess_startupinfo(),
|
||||||
)
|
)
|
||||||
|
@ -403,19 +318,20 @@ class Container(IsolationProvider):
|
||||||
# FIXME hardcoded 1 until length conversions are better handled
|
# FIXME hardcoded 1 until length conversions are better handled
|
||||||
# https://github.com/freedomofpress/dangerzone/issues/257
|
# https://github.com/freedomofpress/dangerzone/issues/257
|
||||||
return 1
|
return 1
|
||||||
|
runtime = Runtime() # type: ignore [unreachable]
|
||||||
|
|
||||||
n_cpu = 1 # type: ignore [unreachable]
|
n_cpu = 1
|
||||||
if platform.system() == "Linux":
|
if platform.system() == "Linux":
|
||||||
# if on linux containers run natively
|
# if on linux containers run natively
|
||||||
cpu_count = os.cpu_count()
|
cpu_count = os.cpu_count()
|
||||||
if cpu_count is not None:
|
if cpu_count is not None:
|
||||||
n_cpu = cpu_count
|
n_cpu = cpu_count
|
||||||
|
|
||||||
elif self.get_runtime_name() == "docker":
|
elif runtime.name == "docker":
|
||||||
# For Windows and macOS, containers run in a VM
|
# For Windows and macOS, containers run in a VM
|
||||||
# So we obtain the CPU count for the VM
|
# So we obtain the CPU count for the VM
|
||||||
n_cpu_str = subprocess.check_output(
|
n_cpu_str = subprocess.check_output(
|
||||||
[self.get_runtime(), "info", "--format", "{{.NCPU}}"],
|
[str(runtime.path), "info", "--format", "{{.NCPU}}"],
|
||||||
text=True,
|
text=True,
|
||||||
startupinfo=get_subprocess_startupinfo(),
|
startupinfo=get_subprocess_startupinfo(),
|
||||||
)
|
)
|
||||||
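`get_max_parallel_conversions` sizes the worker pool from the CPU count: natively on Linux, or by querying the runtime's VM via `{{.NCPU}}` with Docker on macOS/Windows. The Linux-side fallback from the diff can be sketched as:

```python
import os

def native_cpu_count() -> int:
    """CPU count with a safe fallback of 1, mirroring the diff's logic."""
    cpu_count = os.cpu_count()
    return cpu_count if cpu_count is not None else 1
```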
|
|
|
@ -39,6 +39,14 @@ class Dummy(IsolationProvider):
|
||||||
def install(self) -> bool:
|
def install(self) -> bool:
|
||||||
return True
|
return True
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def is_available() -> bool:
|
||||||
|
return True
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def should_wait_install() -> bool:
|
||||||
|
return False
|
||||||
|
|
||||||
def start_doc_to_pixels_proc(self, document: Document) -> subprocess.Popen:
|
def start_doc_to_pixels_proc(self, document: Document) -> subprocess.Popen:
|
||||||
cmd = [
|
cmd = [
|
||||||
sys.executable,
|
sys.executable,
|
||||||
|
|
|
@ -21,6 +21,14 @@ class Qubes(IsolationProvider):
|
||||||
def install(self) -> bool:
|
def install(self) -> bool:
|
||||||
return True
|
return True
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def is_available() -> bool:
|
||||||
|
return True
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def should_wait_install() -> bool:
|
||||||
|
return False
|
||||||
|
|
||||||
def get_max_parallel_conversions(self) -> int:
|
def get_max_parallel_conversions(self) -> int:
|
||||||
return 1
|
return 1
|
||||||
|
|
||||||
|
@ -122,7 +130,6 @@ def is_qubes_native_conversion() -> bool:
|
||||||
# This disambiguates if it is running a Qubes targeted build or not
|
# This disambiguates if it is running a Qubes targeted build or not
|
||||||
# (Qubes-specific builds don't ship the container image)
|
# (Qubes-specific builds don't ship the container image)
|
||||||
|
|
||||||
compressed_container_path = get_resource_path("container.tar.gz")
|
return not get_resource_path("container.tar").exists()
|
||||||
return not os.path.exists(compressed_container_path)
|
|
||||||
else:
|
else:
|
||||||
return False
|
return False
|
||||||
|
|
|
@ -23,16 +23,13 @@ class DangerzoneCore(object):
|
||||||
# Initialize terminal colors
|
# Initialize terminal colors
|
||||||
colorama.init(autoreset=True)
|
colorama.init(autoreset=True)
|
||||||
|
|
||||||
# App data folder
|
|
||||||
self.appdata_path = util.get_config_dir()
|
|
||||||
|
|
||||||
# Languages supported by tesseract
|
# Languages supported by tesseract
|
||||||
with open(get_resource_path("ocr-languages.json"), "r") as f:
|
with get_resource_path("ocr-languages.json").open("r") as f:
|
||||||
unsorted_ocr_languages = json.load(f)
|
unsorted_ocr_languages = json.load(f)
|
||||||
self.ocr_languages = dict(sorted(unsorted_ocr_languages.items()))
|
self.ocr_languages = dict(sorted(unsorted_ocr_languages.items()))
|
||||||
|
|
||||||
# Load settings
|
# Load settings
|
||||||
self.settings = Settings(self)
|
self.settings = Settings()
|
||||||
self.documents: List[Document] = []
|
self.documents: List[Document] = []
|
||||||
self.isolation_provider = isolation_provider
|
self.isolation_provider = isolation_provider
|
||||||
|
|
||||||
|
@ -71,7 +68,8 @@ class DangerzoneCore(object):
|
||||||
ocr_lang,
|
ocr_lang,
|
||||||
stdout_callback,
|
stdout_callback,
|
||||||
)
|
)
|
||||||
except Exception as e:
|
|
||||||
|
except Exception:
|
||||||
log.exception(
|
log.exception(
|
||||||
f"Unexpected error occurred while converting '{document}'"
|
f"Unexpected error occurred while converting '{document}'"
|
||||||
)
|
)
|
||||||
|
|
|
@ -1,29 +1,24 @@
|
||||||
import json
|
import json
|
||||||
import logging
|
import logging
|
||||||
import os
|
import os
|
||||||
|
from pathlib import Path
|
||||||
from typing import TYPE_CHECKING, Any, Dict
|
from typing import TYPE_CHECKING, Any, Dict
|
||||||
|
|
||||||
from packaging import version
|
from packaging import version
|
||||||
|
|
||||||
from .document import SAFE_EXTENSION
|
from .document import SAFE_EXTENSION
|
||||||
from .util import get_version
|
from .util import get_config_dir, get_version
|
||||||
|
|
||||||
log = logging.getLogger(__name__)
|
log = logging.getLogger(__name__)
|
||||||
|
|
||||||
if TYPE_CHECKING:
|
|
||||||
from .logic import DangerzoneCore
|
|
||||||
|
|
||||||
SETTINGS_FILENAME: str = "settings.json"
|
SETTINGS_FILENAME: str = "settings.json"
|
||||||
|
|
||||||
|
|
||||||
class Settings:
|
class Settings:
|
||||||
settings: Dict[str, Any]
|
settings: Dict[str, Any]
|
||||||
|
|
||||||
def __init__(self, dangerzone: "DangerzoneCore") -> None:
|
def __init__(self) -> None:
|
||||||
self.dangerzone = dangerzone
|
self.settings_filename = get_config_dir() / SETTINGS_FILENAME
|
||||||
self.settings_filename = os.path.join(
|
|
||||||
self.dangerzone.appdata_path, SETTINGS_FILENAME
|
|
||||||
)
|
|
||||||
self.default_settings: Dict[str, Any] = self.generate_default_settings()
|
self.default_settings: Dict[str, Any] = self.generate_default_settings()
|
||||||
self.load()
|
self.load()
|
||||||
|
|
||||||
|
@ -45,6 +40,22 @@ class Settings:
|
||||||
"updater_errors": 0,
|
"updater_errors": 0,
|
||||||
}
|
}
|
||||||
|
|
||||||
|
def custom_runtime_specified(self) -> bool:
|
||||||
|
return "container_runtime" in self.settings
|
||||||
|
|
||||||
|
def set_custom_runtime(self, runtime: str, autosave: bool = False) -> Path:
|
||||||
|
from .container_utils import Runtime # Avoid circular import
|
||||||
|
|
||||||
|
container_runtime = Runtime.path_from_name(runtime)
|
||||||
|
self.settings["container_runtime"] = str(container_runtime)
|
||||||
|
if autosave:
|
||||||
|
self.save()
|
||||||
|
return container_runtime
|
||||||
|
|
||||||
|
def unset_custom_runtime(self) -> None:
|
||||||
|
self.settings.pop("container_runtime")
|
||||||
|
self.save()
|
||||||
|
|
||||||
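The new `set_custom_runtime`/`unset_custom_runtime` helpers treat the presence of a `container_runtime` key as the signal that the user overrode the default runtime. A minimal stand-in, without the `Runtime.path_from_name` validation that lives in `container_utils` and without real file persistence:

```python
from typing import Any, Dict

class MiniSettings:
    """Illustrative subset of the Settings class from this diff."""

    def __init__(self) -> None:
        self.settings: Dict[str, Any] = {}
        self.saved = False  # stands in for writing settings.json

    def save(self) -> None:
        self.saved = True

    def custom_runtime_specified(self) -> bool:
        return "container_runtime" in self.settings

    def set_custom_runtime(self, path: str, autosave: bool = False) -> str:
        self.settings["container_runtime"] = path
        if autosave:
            self.save()
        return path

    def unset_custom_runtime(self) -> None:
        self.settings.pop("container_runtime")
        self.save()
```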
def get(self, key: str) -> Any:
|
def get(self, key: str) -> Any:
|
||||||
return self.settings[key]
|
return self.settings[key]
|
||||||
|
|
||||||
|
@ -91,6 +102,6 @@ class Settings:
|
||||||
self.save()
|
self.save()
|
||||||
|
|
||||||
def save(self) -> None:
|
def save(self) -> None:
|
||||||
os.makedirs(self.dangerzone.appdata_path, exist_ok=True)
|
self.settings_filename.parent.mkdir(parents=True, exist_ok=True)
|
||||||
with open(self.settings_filename, "w") as settings_file:
|
with self.settings_filename.open("w") as settings_file:
|
||||||
json.dump(self.settings, settings_file, indent=4)
|
json.dump(self.settings, settings_file, indent=4)
|
||||||
|
|
|
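The rewritten `save()` swaps `os.makedirs()`/`open()` for pathlib calls. A minimal standalone sketch of that behavior (a hypothetical helper, not the project's `Settings` class):

```python
import json
import tempfile
from pathlib import Path


def save_settings(settings_filename: Path, settings: dict) -> None:
    # Same approach as the diff: create parent directories with pathlib,
    # then write JSON through Path.open() instead of os.makedirs()/open().
    settings_filename.parent.mkdir(parents=True, exist_ok=True)
    with settings_filename.open("w") as settings_file:
        json.dump(settings, settings_file, indent=4)


with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "dangerzone" / "settings.json"
    save_settings(path, {"updater_errors": 0})
    print(json.loads(path.read_text()))  # {'updater_errors': 0}
```

`mkdir(parents=True, exist_ok=True)` matches the old `os.makedirs(..., exist_ok=True)` semantics, so repeated saves stay idempotent.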
dangerzone/util.py

@@ -1,47 +1,49 @@
-import pathlib
 import platform
 import subprocess
 import sys
 import traceback
 import unicodedata
+from pathlib import Path

-import appdirs
+try:
+    import platformdirs
+except ImportError:
+    import appdirs as platformdirs


-def get_config_dir() -> str:
-    return appdirs.user_config_dir("dangerzone")
+def get_config_dir() -> Path:
+    return Path(platformdirs.user_config_dir("dangerzone"))


-def get_resource_path(filename: str) -> str:
+def get_resource_path(filename: str) -> Path:
     if getattr(sys, "dangerzone_dev", False):
         # Look for resources directory relative to python file
-        project_root = pathlib.Path(__file__).parent.parent
+        project_root = Path(__file__).parent.parent
         prefix = project_root / "share"
     else:
         if platform.system() == "Darwin":
-            bin_path = pathlib.Path(sys.executable)
+            bin_path = Path(sys.executable)
             app_path = bin_path.parent.parent
             prefix = app_path / "Resources" / "share"
         elif platform.system() == "Linux":
-            prefix = pathlib.Path(sys.prefix) / "share" / "dangerzone"
+            prefix = Path(sys.prefix) / "share" / "dangerzone"
         elif platform.system() == "Windows":
-            exe_path = pathlib.Path(sys.executable)
+            exe_path = Path(sys.executable)
             dz_install_path = exe_path.parent
             prefix = dz_install_path / "share"
         else:
             raise NotImplementedError(f"Unsupported system {platform.system()}")
-    resource_path = prefix / filename
-    return str(resource_path)
+    return prefix / filename


-def get_tessdata_dir() -> pathlib.Path:
+def get_tessdata_dir() -> Path:
     if getattr(sys, "dangerzone_dev", False) or platform.system() in (
         "Windows",
         "Darwin",
     ):
         # Always use the tessdata path from the Dangerzone ./share directory, for
         # development builds, or in Windows/macOS platforms.
-        return pathlib.Path(get_resource_path("tessdata"))
+        return get_resource_path("tessdata")

     # In case of Linux systems, grab the Tesseract data from any of the following
     # locations. We have found some of the locations through trial and error, whereas

@@ -52,11 +54,11 @@ def get_tessdata_dir() -> Path:
     #
     # [1] https://tesseract-ocr.github.io/tessdoc/Installation.html
     tessdata_dirs = [
-        pathlib.Path("/usr/share/tessdata/"),  # on some Debian
-        pathlib.Path("/usr/share/tesseract/tessdata/"),  # on Fedora
-        pathlib.Path("/usr/share/tesseract-ocr/tessdata/"),  # ? (documented)
-        pathlib.Path("/usr/share/tesseract-ocr/4.00/tessdata/"),  # on Ubuntu Focal
-        pathlib.Path("/usr/share/tesseract-ocr/5/tessdata/"),  # on Debian Trixie
+        Path("/usr/share/tessdata/"),  # on some Debian
+        Path("/usr/share/tesseract/tessdata/"),  # on Fedora
+        Path("/usr/share/tesseract-ocr/tessdata/"),  # ? (documented)
+        Path("/usr/share/tesseract-ocr/4.00/tessdata/"),  # on Debian Bullseye
+        Path("/usr/share/tesseract-ocr/5/tessdata/"),  # on Debian Trixie
     ]

     for dir in tessdata_dirs:

@@ -68,7 +70,7 @@ def get_tessdata_dir() -> Path:

 def get_version() -> str:
     try:
-        with open(get_resource_path("version.txt")) as f:
+        with get_resource_path("version.txt").open() as f:
             version = f.read().strip()
     except FileNotFoundError:
         # In dev mode, in Windows, get_resource_path doesn't work properly for the container, but luckily
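The new import block in util.py prefers `platformdirs` and falls back to the legacy `appdirs`, which exposes a compatible `user_config_dir()` API. A sketch of that pattern in isolation (assumes one of the two packages is installed):

```python
from pathlib import Path

try:
    import platformdirs
except ImportError:
    # appdirs is unmaintained but API-compatible for this call, so the
    # fallback keeps older distro packages (python3-appdirs) working.
    import appdirs as platformdirs  # type: ignore[no-redef]


def get_config_dir(app_name: str = "dangerzone") -> Path:
    # Both libraries return a str; wrap it in Path so callers can use the
    # / operator, .mkdir(), and .open(), as the diff does elsewhere.
    return Path(platformdirs.user_config_dir(app_name))


print(get_config_dir().name)  # dangerzone
```

This mirrors the Debian packaging change below, where the dependency becomes `python3-platformdirs | python3-appdirs`.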
debian/changelog (vendored, 12 changes)

@@ -1,3 +1,15 @@
+dangerzone (0.9.0) unstable; urgency=low
+
+  * Released Dangerzone 0.9.0
+
+ -- Freedom of the Press Foundation <info@freedom.press>  Mon, 31 Mar 2025 15:57:18 +0300
+
+dangerzone (0.8.1) unstable; urgency=low
+
+  * Released Dangerzone 0.8.1
+
+ -- Freedom of the Press Foundation <info@freedom.press>  Tue, 22 Dec 2024 22:03:28 +0300
+
 dangerzone (0.8.0) unstable; urgency=low

   * Released Dangerzone 0.8.0
debian/control (vendored, 2 changes)

@@ -9,7 +9,7 @@ Rules-Requires-Root: no

 Package: dangerzone
 Architecture: any
-Depends: ${misc:Depends}, ${python3:Depends}, podman, python3, python3-pyside2.qtcore, python3-pyside2.qtgui, python3-pyside2.qtwidgets, python3-pyside2.qtsvg, python3-appdirs, python3-click, python3-xdg, python3-colorama, python3-requests, python3-markdown, python3-packaging, tesseract-ocr-all
+Depends: ${misc:Depends}, podman, python3, python3-pyside2.qtcore, python3-pyside2.qtgui, python3-pyside2.qtwidgets, python3-pyside2.qtsvg, python3-platformdirs | python3-appdirs, python3-click, python3-xdg, python3-colorama, python3-requests, python3-markdown, python3-packaging, tesseract-ocr-all
 Description: Take potentially dangerous PDFs, office documents, or images
  Dangerzone is an open source desktop application that takes potentially dangerous PDFs, office documents, or images and converts them to safe PDFs. It uses disposable VMs on Qubes OS, or container technology in other OSes, to convert the documents within a secure sandbox.
 .
debian/rules (vendored, 2 changes)

@@ -9,5 +9,5 @@ export DH_VERBOSE=1
	dh $@ --with python3 --buildsystem=pybuild

 override_dh_builddeb:
-	./install/linux/vendor-pymupdf.py --dest debian/dangerzone/usr/lib/python3/dist-packages/dangerzone/vendor/
+	./install/linux/debian-vendor-pymupdf.py --dest debian/dangerzone/usr/lib/python3/dist-packages/dangerzone/vendor/
	dh_builddeb $@
dev_scripts/env.py

@@ -8,7 +8,6 @@ import platform
 import shutil
 import subprocess
 import sys
-import urllib.request
 from datetime import date

 DEFAULT_GUI = True

@@ -16,42 +15,6 @@ DEFAULT_USER = "user"
 DEFAULT_DRY = False
 DEFAULT_DEV = False
 DEFAULT_SHOW_DOCKERFILE = False
-DEFAULT_DOWNLOAD_PYSIDE6 = False
-
-PYSIDE6_VERSION = "6.7.1"
-PYSIDE6_RPM = "python3-pyside6-{pyside6_version}-1.fc{fedora_version}.x86_64.rpm"
-PYSIDE6_URL = (
-    "https://packages.freedom.press/yum-tools-prod/dangerzone/f{fedora_version}/%s"
-    % PYSIDE6_RPM
-)
-
-PYSIDE6_DL_MESSAGE = """\
-Downloading PySide6 RPM from:
-
-    {pyside6_url}
-
-into the following local path:
-
-    {pyside6_local_path}
-
-The RPM is over 100 MB, so this operation may take a while...
-"""
-
-PYSIDE6_NOT_FOUND_ERROR = """\
-The following package is not present in your system:
-
-    {pyside6_local_path}
-
-You can build it locally and copy it in the expected path, following the instructions
-in:
-
-    https://github.com/freedomofpress/python3-pyside6-rpm
-
-Alternatively, you can rerun the command adding the '--download-pyside6' flag, which
-will download it from:
-
-    {pyside6_url}
-"""
-
 # The Linux distributions that we currently support.
 # FIXME: Add a version mapping to avoid mistakes.

@@ -97,24 +60,6 @@ Run Dangerzone in the end-user environment:
 """

-# NOTE: For Ubuntu 20.04 specifically, we need to install some extra deps, mainly for
-# Podman. This needs to take place both in our dev and end-user environment. See the
-# corresponding note in our Installation section:
-#
-# https://github.com/freedomofpress/dangerzone/blob/main/INSTALL.md#ubuntu-debian
-DOCKERFILE_UBUNTU_2004_DEPS = r"""
-ARG DEBIAN_FRONTEND=noninteractive
-
-RUN apt-get update \
-    && apt-get install -y python-all python3.9 curl wget gnupg2 \
-    && rm -rf /var/lib/apt/lists/*
-RUN . /etc/os-release \
-    && sh -c "echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_$VERSION_ID/ /' \
-    > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list" \
-    && wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/xUbuntu_$VERSION_ID/Release.key -O- \
-    | apt-key add -
-"""
-
 # XXX: overcome the fact that ubuntu images (starting on 23.04) ship with the 'ubuntu'
 # user by default https://bugs.launchpad.net/cloud-images/+bug/2005129
 # Related issue https://github.com/freedomofpress/dangerzone/pull/461

@@ -151,33 +96,18 @@ RUN apt-get update \
 RUN apt-get update \
     && apt-get install -y --no-install-recommends dh-python make build-essential \
     git {qt_deps} pipx python3 python3-pip python3-venv dpkg-dev debhelper python3-setuptools \
+    python3-dev \
     && rm -rf /var/lib/apt/lists/*
-# NOTE: `pipx install poetry` fails on Ubuntu Focal, when installed through APT. By
-# installing the latest version, we sidestep this issue.
-RUN bash -c 'if [[ "$(pipx --version)" < "1" ]]; then \
-    apt-get update \
-    && apt-get remove -y pipx \
-    && apt-get install -y --no-install-recommends python3-pip \
-    && pip install pipx \
-    && rm -rf /var/lib/apt/lists/*; \
-    else true; fi'
+RUN pipx install poetry
 RUN apt-get update \
     && apt-get install -y --no-install-recommends mupdf thunar \
     && rm -rf /var/lib/apt/lists/*
 """

-# NOTE: Fedora 41 comes with Python 3.13 installed. Our Python project is not compatible
-# yet with Python 3.13, because PySide6 cannot work with this Python version. To
-# sidestep this, install Python 3.12 *only* in dev environments.
-DOCKERFILE_BUILD_DEV_FEDORA_41_DEPS = r"""
-# Install Python 3.12 since our project is not compatible yet with Python 3.13.
-RUN dnf install -y python3.12
-"""
-
 # FIXME: Install Poetry on Fedora via package manager.
 DOCKERFILE_BUILD_DEV_FEDORA_DEPS = r"""
 RUN dnf install -y git rpm-build podman python3 python3-devel python3-poetry-core \
-    pipx make qt6-qtbase-gui \
+    pipx make qt6-qtbase-gui gcc gcc-c++\
     && dnf clean all

 # FIXME: Drop this fix after it's resolved upstream.

@@ -220,6 +150,7 @@ COPY storage.conf /home/user/.config/containers
 # FIXME: pipx install poetry does not work for Ubuntu Focal.
 ENV PATH="$PATH:/home/user/.local/bin"
 RUN pipx install poetry
+RUN pipx inject poetry poetry-plugin-export

 COPY pyproject.toml poetry.lock /home/user/dangerzone/
 RUN cd /home/user/dangerzone && poetry --no-ansi install

@@ -232,11 +163,6 @@ RUN apt-get update \
     && rm -rf /var/lib/apt/lists/*
 """

-DOCKERFILE_BUILD_FEDORA_39_DEPS = r"""
-COPY {pyside6_rpm} /tmp/pyside6.rpm
-RUN dnf install -y /tmp/pyside6.rpm
-"""
-
 DOCKERFILE_BUILD_FEDORA_DEPS = r"""
 RUN dnf install -y mupdf thunar && dnf clean all

@@ -333,6 +259,7 @@ def get_build_dir_sources(distro, version):
     sources = [
         git_root() / "pyproject.toml",
         git_root() / "poetry.lock",
+        git_root() / "dev_scripts" / "env.py",
         git_root() / "dev_scripts" / "storage.conf",
         git_root() / "dev_scripts" / "containers.conf",
     ]

@@ -390,74 +317,6 @@ def get_files_in(*folders: list[str]) -> list[pathlib.Path]:
     return files


-class PySide6Manager:
-    """Provision PySide6 RPMs in our Dangerzone environments.
-
-    This class holds all the logic around checking and downloading PySide RPMs. It can
-    check if the required RPM version is present under "/dist", and optionally download
-    it.
-    """
-
-    def __init__(self, distro_name, distro_version):
-        if distro_name != "fedora":
-            raise RuntimeError("Managing PySide6 RPMs is available only in Fedora")
-        self.distro_name = distro_name
-        self.distro_version = distro_version
-
-    @property
-    def version(self):
-        """The version of the PySide6 RPM."""
-        return PYSIDE6_VERSION
-
-    @property
-    def rpm_name(self):
-        """The name of the PySide6 RPM."""
-        return PYSIDE6_RPM.format(
-            pyside6_version=self.version, fedora_version=self.distro_version
-        )
-
-    @property
-    def rpm_url(self):
-        """The URL of the PySide6 RPM, as hosted in FPF's RPM repo."""
-        return PYSIDE6_URL.format(
-            pyside6_version=self.version,
-            fedora_version=self.distro_version,
-        )
-
-    @property
-    def rpm_local_path(self):
-        """The local path where this script will look for the PySide6 RPM."""
-        return git_root() / "dist" / self.rpm_name
-
-    @property
-    def is_rpm_present(self):
-        """Check if PySide6 RPM is present in the user's system."""
-        return self.rpm_local_path.exists()
-
-    def download_rpm(self):
-        """Download PySide6 from FPF's RPM repo."""
-        print(
-            PYSIDE6_DL_MESSAGE.format(
-                pyside6_url=self.rpm_url,
-                pyside6_local_path=self.rpm_local_path,
-            ),
-            file=sys.stderr,
-        )
-        try:
-            with (
-                urllib.request.urlopen(self.rpm_url) as r,
-                open(self.rpm_local_path, "wb") as f,
-            ):
-                shutil.copyfileobj(r, f)
-        except:
-            # NOTE: We purposefully catch all exceptions, since we want to catch Ctrl-C
-            # as well.
-            print("Download interrupted, removing file", file=sys.stderr)
-            self.rpm_local_path.unlink()
-            raise
-        print("PySide6 was downloaded successfully", file=sys.stderr)
-
-
 class Env:
     """A class that implements actions on Dangerzone environments"""

@@ -672,8 +531,6 @@ class Env:
         if self.distro == "fedora":
             install_deps = DOCKERFILE_BUILD_DEV_FEDORA_DEPS
-            if self.version == "41":
-                install_deps += DOCKERFILE_BUILD_DEV_FEDORA_41_DEPS
         else:
             # Use Qt6 in all of our Linux dev environments, and add a missing
             # libxcb-cursor0 dependency

@@ -681,12 +538,7 @@ class Env:
             # See https://github.com/freedomofpress/dangerzone/issues/482
             qt_deps = "libqt6gui6 libxcb-cursor0"
             install_deps = DOCKERFILE_BUILD_DEV_DEBIAN_DEPS
-            if self.distro == "ubuntu" and self.version in ("20.04", "focal"):
-                qt_deps = "libqt5gui5 libxcb-cursor0"  # Ubuntu Focal has only Qt5.
-                install_deps = (
-                    DOCKERFILE_UBUNTU_2004_DEPS + DOCKERFILE_BUILD_DEV_DEBIAN_DEPS
-                )
-            elif self.distro == "ubuntu" and self.version in ("22.04", "jammy"):
+            if self.distro == "ubuntu" and self.version in ("22.04", "jammy"):
                 # Ubuntu Jammy misses a dependency to `libxkbcommon-x11-0`, which we can
                 # install indirectly via `qt6-qpa-plugins`.
                 qt_deps += " qt6-qpa-plugins"

@@ -700,6 +552,8 @@ class Env:
                 "noble",
                 "24.10",
                 "ocular",
+                "25.04",
+                "plucky",
             ):
                 install_deps = (
                     DOCKERFILE_UBUNTU_REM_USER + DOCKERFILE_BUILD_DEV_DEBIAN_DEPS

@@ -736,7 +590,6 @@ class Env:
     def build(
         self,
         show_dockerfile=DEFAULT_SHOW_DOCKERFILE,
-        download_pyside6=DEFAULT_DOWNLOAD_PYSIDE6,
     ):
         """Build a Linux environment and install Dangerzone in it."""
         build_dir = distro_build(self.distro, self.version)

@@ -749,35 +602,9 @@ class Env:
             package = package_src.name
             package_dst = build_dir / package
             install_cmd = "dnf install -y"
-
-            # NOTE: For Fedora 39, we check if a PySide6 RPM package exists in
-            # the user's system. If not, we either throw an error or download it from
-            # FPF's repo, according to the user's choice.
-            if self.version == "39":
-                pyside6 = PySide6Manager(self.distro, self.version)
-                if not pyside6.is_rpm_present:
-                    if download_pyside6:
-                        pyside6.download_rpm()
-                    else:
-                        print(
-                            PYSIDE6_NOT_FOUND_ERROR.format(
-                                pyside6_local_path=pyside6.rpm_local_path,
-                                pyside6_url=pyside6.rpm_url,
-                            ),
-                            file=sys.stderr,
-                        )
-                        return 1
-                shutil.copy(pyside6.rpm_local_path, build_dir / pyside6.rpm_name)
-                install_deps = (
-                    DOCKERFILE_BUILD_FEDORA_DEPS + DOCKERFILE_BUILD_FEDORA_39_DEPS
-                ).format(pyside6_rpm=pyside6.rpm_name)
         else:
             install_deps = DOCKERFILE_BUILD_DEBIAN_DEPS
-            if self.distro == "ubuntu" and self.version in ("20.04", "focal"):
-                install_deps = (
-                    DOCKERFILE_UBUNTU_2004_DEPS + DOCKERFILE_BUILD_DEBIAN_DEPS
-                )
-            elif self.distro == "ubuntu" and self.version in ("22.04", "jammy"):
+            if self.distro == "ubuntu" and self.version in ("22.04", "jammy"):
                 # Ubuntu Jammy requires a more up-to-date conmon
                 # package (see https://github.com/freedomofpress/dangerzone/issues/685)
                 install_deps = DOCKERFILE_CONMON_UPDATE + DOCKERFILE_BUILD_DEBIAN_DEPS

@@ -786,6 +613,8 @@ class Env:
                 "noble",
                 "24.10",
                 "ocular",
+                "25.04",
+                "plucky",
             ):
                 install_deps = DOCKERFILE_UBUNTU_REM_USER + DOCKERFILE_BUILD_DEBIAN_DEPS
             package_pattern = f"dangerzone_{version}-*_*.deb"

@@ -844,7 +673,6 @@ def env_build(args):
     env = Env.from_args(args)
     return env.build(
         show_dockerfile=args.show_dockerfile,
-        download_pyside6=args.download_pyside6,
     )

@@ -941,12 +769,6 @@ def parse_args():
         action="store_true",
         help="Do not build, only show the Dockerfile",
     )
-    parser_build.add_argument(
-        "--download-pyside6",
-        default=DEFAULT_DOWNLOAD_PYSIDE6,
-        action="store_true",
-        help="Download PySide6 from FPF's RPM repo",
-    )

     return parser.parse_args()
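The version tuples in env.py now list each supported Ubuntu release twice, as a number and as a codename, and the file's own FIXME asks for "a version mapping to avoid mistakes". A hypothetical sketch of such a mapping, built from the pairs that appear in this diff ("ocular" is spelled as it appears in the source; Ubuntu 24.04 is noble):

```python
# Hypothetical alias table sketched from the tuples in env.py.
UBUNTU_ALIASES = {
    "24.04": "noble",
    "24.10": "ocular",
    "25.04": "plucky",
}
# Reverse lookup so either spelling can be normalized to the version number.
CODENAME_TO_VERSION = {codename: num for num, codename in UBUNTU_ALIASES.items()}


def normalize_version(version: str) -> str:
    """Accept either '25.04' or 'plucky' and return the numeric form."""
    return CODENAME_TO_VERSION.get(version, version)


print(normalize_version("plucky"))  # 25.04
print(normalize_version("24.10"))  # 24.10
```

With a single table like this, membership checks such as `self.version in ("25.04", "plucky")` collapse to one comparison against the normalized form.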
dev_scripts/generate-release-notes.py (new executable file, 254 lines)
@@ -0,0 +1,254 @@
+#!/usr/bin/env python3
+
+import argparse
+import asyncio
+import re
+import sys
+from datetime import datetime
+from typing import Dict, List, Optional, Tuple
+
+import httpx
+
+REPOSITORY = "https://github.com/freedomofpress/dangerzone/"
+TEMPLATE = "- {title} ([#{number}]({url}))"
+
+
+def parse_version(version: str) -> Tuple[int, int]:
+    """Extract major.minor from version string, ignoring patch"""
+    match = re.match(r"v?(\d+)\.(\d+)", version)
+    if not match:
+        raise ValueError(f"Invalid version format: {version}")
+    return (int(match.group(1)), int(match.group(2)))
+
+
+async def get_last_minor_release(
+    client: httpx.AsyncClient, owner: str, repo: str
+) -> Optional[str]:
+    """Get the latest minor release date (ignoring patches)"""
+    response = await client.get(f"https://api.github.com/repos/{owner}/{repo}/releases")
+    response.raise_for_status()
+    releases = response.json()
+
+    if not releases:
+        return None
+
+    # Get the latest minor version by comparing major.minor numbers
+    current_version = parse_version(releases[0]["tag_name"])
+    latest_date = None
+
+    for release in releases:
+        try:
+            version = parse_version(release["tag_name"])
+            if version < current_version:
+                latest_date = release["published_at"]
+                break
+        except ValueError:
+            continue
+
+    return latest_date
+
+
+async def get_issue_details(
+    client: httpx.AsyncClient, owner: str, repo: str, issue_number: int
+) -> Optional[dict]:
+    """Get issue title and number if it exists"""
+    response = await client.get(
+        f"https://api.github.com/repos/{owner}/{repo}/issues/{issue_number}"
+    )
+    if response.is_success:
+        data = response.json()
+        return {
+            "title": data["title"],
+            "number": data["number"],
+            "url": data["html_url"],
+        }
+    return None
+
+
+def extract_issue_number(pr_body: Optional[str]) -> Optional[int]:
+    """Extract issue number from PR body looking for common formats like 'Fixes #123' or 'Closes #123'"""
+    if not pr_body:
+        return None
+
+    patterns = [
+        r"(?:closes|fixes|resolves)\s*#(\d+)",
+        r"(?:close|fix|resolve)\s*#(\d+)",
+    ]
+
+    for pattern in patterns:
+        match = re.search(pattern, pr_body.lower())
+        if match:
+            return int(match.group(1))
+
+    return None
+
+
+async def verify_commit_in_master(
+    client: httpx.AsyncClient, owner: str, repo: str, commit_id: str
+) -> bool:
+    """Verify if a commit exists in master"""
+    response = await client.get(
+        f"https://api.github.com/repos/{owner}/{repo}/commits/{commit_id}"
+    )
+    return response.is_success and response.json().get("commit") is not None
+
+
+async def process_issue_events(
+    client: httpx.AsyncClient, owner: str, repo: str, issue: Dict
+) -> Optional[Dict]:
+    """Process events for a single issue"""
+    events_response = await client.get(f"{issue['url']}/events")
+    if not events_response.is_success:
+        return None
+
+    for event in events_response.json():
+        if event["event"] == "closed" and event.get("commit_id"):
+            if await verify_commit_in_master(client, owner, repo, event["commit_id"]):
+                return {
+                    "title": issue["title"],
+                    "number": issue["number"],
+                    "url": issue["html_url"],
+                }
+    return None
+
+
+async def get_closed_issues(
+    client: httpx.AsyncClient, owner: str, repo: str, since: str
+) -> List[Dict]:
+    """Get issues closed by commits to master since the given date"""
+    response = await client.get(
+        f"https://api.github.com/repos/{owner}/{repo}/issues",
+        params={
+            "state": "closed",
+            "sort": "updated",
+            "direction": "desc",
+            "since": since,
+            "per_page": 100,
+        },
+    )
+    response.raise_for_status()
+
+    tasks = []
+    since_date = datetime.strptime(since, "%Y-%m-%dT%H:%M:%SZ")
+
+    for issue in response.json():
+        if "pull_request" in issue:
+            continue
+
+        closed_at = datetime.strptime(issue["closed_at"], "%Y-%m-%dT%H:%M:%SZ")
+        if closed_at <= since_date:
+            continue
+
+        tasks.append(process_issue_events(client, owner, repo, issue))
+
+    results = await asyncio.gather(*tasks)
+    return [r for r in results if r is not None]
+
+
+async def process_pull_request(
+    client: httpx.AsyncClient,
+    owner: str,
+    repo: str,
+    pr: Dict,
+    closed_issues: List[Dict],
+) -> Optional[str]:
+    """Process a single pull request"""
+    issue_number = extract_issue_number(pr.get("body"))
+    if issue_number:
+        issue = await get_issue_details(client, owner, repo, issue_number)
+        if issue:
+            if not any(i["number"] == issue["number"] for i in closed_issues):
+                return TEMPLATE.format(**issue)
+            return None
+
+    return TEMPLATE.format(title=pr["title"], number=pr["number"], url=pr["html_url"])
+
+
+async def get_changes_since_last_release(
+    owner: str, repo: str, token: Optional[str] = None
+) -> List[str]:
+    headers = {
+        "Accept": "application/vnd.github.v3+json",
+    }
+    if token:
+        headers["Authorization"] = f"token {token}"
+    else:
+        print(
+            "Warning: No token provided. API rate limiting may occur.", file=sys.stderr
+        )
+
+    async with httpx.AsyncClient(headers=headers, timeout=30.0) as client:
+        # Get the date of last minor release
+        since = await get_last_minor_release(client, owner, repo)
+        if not since:
+            return []
+
+        changes = []
+
+        # Get issues closed by commits to master
+        closed_issues = await get_closed_issues(client, owner, repo, since)
+        changes.extend([TEMPLATE.format(**issue) for issue in closed_issues])
+
+        # Get merged PRs
+        response = await client.get(
+            f"https://api.github.com/repos/{owner}/{repo}/pulls",
+            params={
+                "state": "closed",
+                "sort": "updated",
+                "direction": "desc",
+                "per_page": 100,
+            },
+        )
+        response.raise_for_status()
+
+        # Process PRs in parallel
+        pr_tasks = []
+        for pr in response.json():
+            if not pr["merged_at"]:
+                continue
+            if since and pr["merged_at"] <= since:
+                break
+
+            pr_tasks.append(
+                process_pull_request(client, owner, repo, pr, closed_issues)
+            )
+
+        pr_results = await asyncio.gather(*pr_tasks)
+        changes.extend([r for r in pr_results if r is not None])
+
+        return changes
+
+
+async def main_async():
+    parser = argparse.ArgumentParser(description="Generate release notes from GitHub")
+    parser.add_argument("--token", "-t", help="the file path to the GitHub API token")
+    args = parser.parse_args()
+
+    token = None
+    if args.token:
+        with open(args.token) as f:
+            token = f.read().strip()
+    try:
+        url_path = REPOSITORY.rstrip("/").split("github.com/")[1]
+        owner, repo = url_path.split("/")[-2:]
+    except (ValueError, IndexError):
+        print("Error: Invalid GitHub URL", file=sys.stderr)
+        sys.exit(1)
+
+    try:
+        notes = await get_changes_since_last_release(owner, repo, token)
+        print("\n".join(notes))
+    except httpx.HTTPError as e:
+        print(f"Error: {e}", file=sys.stderr)
+        sys.exit(1)
+    except Exception as e:
+        print(f"Error: {e}", file=sys.stderr)
+        sys.exit(1)
+
+
+def main():
+    asyncio.run(main_async())
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
main()
|
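The script above collects coroutines into a list, awaits them all with `asyncio.gather`, and then drops the `None` results. That pattern can be sketched in isolation; the `fetch` coroutine below is a hypothetical stand-in for the per-issue/per-PR GitHub calls:

```python
import asyncio


async def fetch(n: int):
    # Hypothetical stand-in for a per-issue/per-PR API call.
    # Odd items return None, mimicking PRs/issues that produce no entry.
    await asyncio.sleep(0)
    return f"- item #{n}" if n % 2 == 0 else None


async def main():
    # Schedule all coroutines concurrently, then filter out the Nones,
    # exactly like the pr_results handling above.
    tasks = [fetch(n) for n in range(5)]
    results = await asyncio.gather(*tasks)
    return [r for r in results if r is not None]


print(asyncio.run(main()))  # ['- item #0', '- item #2', '- item #4']
```

`asyncio.gather` preserves the order of its arguments, so the filtered list comes back in submission order even though the coroutines run concurrently.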
67  dev_scripts/generate-release-tasks.py  (Executable file)
@@ -0,0 +1,67 @@
#!/usr/bin/env python3
import pathlib
import subprocess

RELEASE_FILE = "RELEASE.md"
QA_FILE = "QA.md"


def git_root():
    """Get the root directory of the Git repo."""
    # FIXME: Use a Git Python binding for this.
    # FIXME: Make this work if called outside the repo.
    path = (
        subprocess.run(
            ["git", "rev-parse", "--show-toplevel"],
            check=True,
            stdout=subprocess.PIPE,
        )
        .stdout.decode()
        .strip("\n")
    )
    return pathlib.Path(path)
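`git_root` shells out, captures stdout, decodes it, and strips the trailing newline. The same capture-and-strip pattern, shown with the Python interpreter itself as a portable stand-in command (so it runs outside a Git repo too):

```python
import subprocess
import sys

# Same capture pattern as git_root(), but with a command that works anywhere:
# run a child process, capture its stdout, decode, strip the trailing newline.
out = (
    subprocess.run(
        [sys.executable, "-c", "print('hello')"],
        check=True,
        stdout=subprocess.PIPE,
    )
    .stdout.decode()
    .strip("\n")
)
print(out)  # hello
```

`check=True` raises `CalledProcessError` on a nonzero exit status, which is why `git_root` does not need its own error handling for a failed `git` invocation.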

def extract_checkboxes(filename):
    headers = []
    result = []

    with open(filename, "r") as f:
        lines = f.readlines()

    current_level = 0
    for line in lines:
        line = line.rstrip()

        # If it's a header, store it
        if line.startswith("#"):
            # Count number of # to determine header level
            level = len(line) - len(line.lstrip("#"))
            if level < current_level or not current_level:
                headers.extend(["", line, ""])
                current_level = level
            elif level > current_level:
                continue
            else:
                headers = ["", line, ""]

        # If it's a checkbox
        elif "- [ ]" in line or "- [x]" in line or "- [X]" in line:
            # Print the last header if we haven't already
            if headers:
                result.extend(headers)
                headers = []
                current_level = 0

            # If this is the "Do the QA tasks" line, recursively get QA tasks
            if "Do the QA tasks" in line:
                result.append(line)
                qa_tasks = extract_checkboxes(git_root() / QA_FILE)
                result.append(qa_tasks)
            else:
                result.append(line)
    return "\n".join(result)


if __name__ == "__main__":
    print(extract_checkboxes(git_root() / RELEASE_FILE))
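The header bookkeeping in `extract_checkboxes` only emits a heading once a checkbox under it is actually found, so headings without checkboxes are dropped. A stripped-down sketch of that idea (ignoring the nesting-level and recursion logic):

```python
def checkboxes_with_headers(text: str) -> list:
    # Minimal sketch: remember the most recent header, and emit it only
    # when a checkbox appears beneath it. Headers with no checkboxes vanish.
    pending = None
    out = []
    for line in text.splitlines():
        if line.startswith("#"):
            pending = line
        elif "- [ ]" in line or "- [x]" in line:
            if pending:
                out.append(pending)
                pending = None
            out.append(line)
    return out


md = "# A\nprose\n- [ ] task 1\n# B\n# C\n- [x] task 2\n"
print(checkboxes_with_headers(md))
# ['# A', '- [ ] task 1', '# C', '- [x] task 2']
```

Note how `# B` is silently replaced by `# C` before any checkbox is seen, which mirrors how the full function overwrites `headers` for sibling headings.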
@@ -3,28 +3,49 @@
 import abc
 import argparse
 import difflib
+import json
 import logging
 import re
 import selectors
 import subprocess
 import sys
+import urllib.request
+from pathlib import Path

 logger = logging.getLogger(__name__)

+PYTHON_VERSION = "3.12"
+EOL_PYTHON_URL = "https://endoflife.date/api/python.json"
+
 CONTENT_QA = r"""## QA

 To ensure that new releases do not introduce regressions, and support existing
-and newer platforms, we have to do the following:
+and newer platforms, we have to test that the produced packages work as expected.
+
+Check the following:

 - [ ] Make sure that the tip of the `main` branch passes the CI tests.
 - [ ] Make sure that the Apple account has a valid application password and has
   agreed to the latest Apple terms (see [macOS release](#macos-release)
   section).
+
+Because it is repetitive, we wrote a script to help with the QA.
+It can run the tasks for you, pausing when it needs manual intervention.
+
+You can run it with a command like:
+
+```bash
+poetry run ./dev_scripts/qa.py {distro}-{version}
+```
+
+### The checklist
+
 - [ ] Create a test build in Windows and make sure it works:
   - [ ] Check if the suggested Python version is still supported.
   - [ ] Create a new development environment with Poetry.
   - [ ] Build the container image and ensure the development environment uses
     the new image.
+  - [ ] Download the OCR language data using `./install/common/download-tessdata.py`
   - [ ] Run the Dangerzone tests.
   - [ ] Build and run the Dangerzone .exe
   - [ ] Test some QA scenarios (see [Scenarios](#Scenarios) below).
@@ -33,6 +54,7 @@ and newer platforms, we have to do the following:
   - [ ] Create a new development environment with Poetry.
   - [ ] Build the container image and ensure the development environment uses
     the new image.
+  - [ ] Download the OCR language data using `./install/common/download-tessdata.py`
   - [ ] Run the Dangerzone tests.
   - [ ] Create and run an app bundle.
   - [ ] Test some QA scenarios (see [Scenarios](#Scenarios) below).
@@ -41,6 +63,7 @@ and newer platforms, we have to do the following:
   - [ ] Create a new development environment with Poetry.
   - [ ] Build the container image and ensure the development environment uses
     the new image.
+  - [ ] Download the OCR language data using `./install/common/download-tessdata.py`
   - [ ] Run the Dangerzone tests.
   - [ ] Create and run an app bundle.
   - [ ] Test some QA scenarios (see [Scenarios](#Scenarios) below).
@@ -49,6 +72,7 @@ and newer platforms, we have to do the following:
   - [ ] Create a new development environment with Poetry.
   - [ ] Build the container image and ensure the development environment uses
     the new image.
+  - [ ] Download the OCR language data using `./install/common/download-tessdata.py`
   - [ ] Run the Dangerzone tests.
   - [ ] Create a .deb package and install it system-wide.
   - [ ] Test some QA scenarios (see [Scenarios](#Scenarios) below).
@@ -57,6 +81,7 @@ and newer platforms, we have to do the following:
   - [ ] Create a new development environment with Poetry.
   - [ ] Build the container image and ensure the development environment uses
     the new image.
+  - [ ] Download the OCR language data using `./install/common/download-tessdata.py`
   - [ ] Run the Dangerzone tests.
   - [ ] Create an .rpm package and install it system-wide.
   - [ ] Test some QA scenarios (see [Scenarios](#Scenarios) below).
@@ -102,9 +127,9 @@ Close the Dangerzone application and get the container image for that
 version. For example:

 ```
-$ docker images dangerzone.rocks/dangerzone:latest
+$ docker images dangerzone.rocks/dangerzone
 REPOSITORY                   TAG      IMAGE ID      CREATED       SIZE
-dangerzone.rocks/dangerzone  latest   <image ID>    <date>        <size>
+dangerzone.rocks/dangerzone  <tag>    <image ID>    <date>        <size>
 ```

 Then run the version under QA and ensure that the settings remain changed.
@@ -113,9 +138,9 @@ Afterwards check that new docker image was installed by running the same command
 and seeing the following differences:

 ```
-$ docker images dangerzone.rocks/dangerzone:latest
+$ docker images dangerzone.rocks/dangerzone
 REPOSITORY                   TAG          IMAGE ID        CREATED       SIZE
-dangerzone.rocks/dangerzone  latest       <different ID>  <newer date>  <different size>
+dangerzone.rocks/dangerzone  <other tag>  <different ID>  <newer date>  <different size>
 ```

 #### 4. Dangerzone successfully installs the container image
@@ -226,29 +251,6 @@ Install dependencies:
 </table>


-<table>
-<tr>
-<td>
-<details>
-<summary><i>:memo: Expand this section if you are on Ubuntu 20.04 (Focal).</i></summary>
-</br>
-
-The default Python version that ships with Ubuntu Focal (3.8) is not
-compatible with PySide6, which requires Python 3.9 or greater.
-
-You can install Python 3.9 using the `python3.9` package.
-
-```bash
-sudo apt install -y python3.9
-```
-
-Poetry will automatically pick up the correct version when running.
-</details>
-</td>
-</tr>
-</table>
-
-
 ```sh
 sudo apt install -y podman dh-python build-essential make libqt6gui6 \
   pipx python3 python3-dev
@@ -262,6 +264,7 @@ methods](https://python-poetry.org/docs/#installation))_
 ```sh
 pipx ensurepath
 pipx install poetry
+pipx inject poetry poetry-plugin-export
 ```

 After this, restart the terminal window, for the `poetry` command to be in your
@@ -324,32 +327,11 @@ sudo dnf install -y rpm-build podman python3 python3-devel python3-poetry-core \
   pipx qt6-qtbase-gui
 ```

-<table>
-<tr>
-<td>
-<details>
-<summary><i>:memo: Expand this section if you are on Fedora 41.</i></summary>
-</br>
-
-The default Python version that ships with Fedora 41 (3.13) is not
-compatible with PySide6, which requires Python 3.12 or earlier.
-
-You can install Python 3.12 using the `python3.12` package.
-
-```bash
-sudo dnf install -y python3.12
-```
-
-Poetry will automatically pick up the correct version when running.
-</details>
-</td>
-</tr>
-</table>
-
 Install Poetry using `pipx`:

 ```sh
 pipx install poetry
+pipx inject poetry
 ```

 Clone this repository:
@@ -547,7 +529,7 @@ class Reference:
         # Convert spaces to dashes
         anchor = anchor.replace(" ", "-")
         # Remove non-alphanumeric (except dash and underscore)
-        anchor = re.sub("[^a-zA-Z\-_]", "", anchor)
+        anchor = re.sub("[^a-zA-Z-_]", "", anchor)

         return anchor
@@ -566,8 +548,8 @@ class QABase(abc.ABC):

     platforms = {}

-    REF_QA = Reference("RELEASE.md", content=CONTENT_QA)
-    REF_QA_SCENARIOS = Reference("RELEASE.md", content=CONTENT_QA_SCENARIOS)
+    REF_QA = Reference("QA.md", content=CONTENT_QA)
+    REF_QA_SCENARIOS = Reference("QA.md", content=CONTENT_QA_SCENARIOS)

     # The following class method is available since Python 3.6. For more details, see:
     # https://docs.python.org/3.6/whatsnew/3.6.html#pep-487-simpler-customization-of-class-creation
@@ -776,6 +758,10 @@ class QABase(abc.ABC):
         self.prompt("Does it pass?", choices=["y", "n"])
         logger.info("Successfully completed QA scenarios")

+    @task("Download Tesseract data", auto=True)
+    def download_tessdata(self):
+        self.run("python", str(Path("install", "common", "download-tessdata.py")))
+
     @classmethod
     @abc.abstractmethod
     def get_id(cls):
@@ -802,6 +788,40 @@ class QAWindows(QABase):
         while msvcrt.kbhit():
             msvcrt.getch()

+    def get_latest_python_release(self):
+        with urllib.request.urlopen(EOL_PYTHON_URL) as f:
+            resp = f.read()
+            releases = json.loads(resp)
+            for release in releases:
+                if release["cycle"] == PYTHON_VERSION:
+                    # Transform the Python version string (e.g., "3.12.7") into a list
+                    # (e.g., [3, 12, 7]), and return it
+                    return [int(num) for num in release["latest"].split(".")]
+
+        raise RuntimeError(
+            f"Could not find a Python release for version {PYTHON_VERSION}"
+        )
+
+    @QABase.task(
+        f"Install the latest version of Python {PYTHON_VERSION}", ref=REF_BUILD
+    )
+    def install_python(self):
+        logger.info("Getting latest Python release")
+        try:
+            latest_version = self.get_latest_python_release()
+        except Exception:
+            logger.error("Could not verify that the latest Python version is installed")
+
+        cur_version = list(sys.version_info[:3])
+        if latest_version > cur_version:
+            self.prompt(
+                f"You need to install the latest Python version ({latest_version})"
+            )
+        elif latest_version == cur_version:
+            logger.info(
+                f"Verified that the latest Python version ({latest_version}) is installed"
+            )
+
     @QABase.task("Install and Run Docker Desktop", ref=REF_BUILD)
     def install_docker(self):
         logger.info("Checking if Docker Desktop is installed and running")
@@ -816,7 +836,7 @@ class QAWindows(QABase):
     )
     def install_poetry(self):
         self.run("python", "-m", "pip", "install", "poetry")
-        self.run("poetry", "install")
+        self.run("poetry", "sync")

     @QABase.task("Build Dangerzone container image", ref=REF_BUILD, auto=True)
     def build_image(self):
@@ -838,9 +858,11 @@ class QAWindows(QABase):
         return "windows"

     def start(self):
+        self.install_python()
         self.install_docker()
         self.install_poetry()
         self.build_image()
+        self.download_tessdata()
         self.run_tests()
         self.build_dangerzone_exe()
@@ -915,7 +937,6 @@ class QALinux(QABase):
             "--version",
             self.VERSION,
             "build",
-            "--download-pyside6",
         )

     @classmethod
@@ -933,6 +954,7 @@ class QALinux(QABase):
     def start(self):
         self.build_dev_image()
         self.build_container_image()
+        self.download_tessdata()
         self.run_tests()
         self.build_package()
         self.build_qa_image()
@@ -968,11 +990,6 @@ class QADebianTrixie(QADebianBased):
     VERSION = "trixie"


-class QAUbuntu2004(QADebianBased):
-    DISTRO = "ubuntu"
-    VERSION = "20.04"
-
-
 class QAUbuntu2204(QADebianBased):
     DISTRO = "ubuntu"
     VERSION = "22.04"
@@ -988,6 +1005,11 @@ class QAUbuntu2410(QADebianBased):
     VERSION = "24.10"


+class QAUbuntu2504(QADebianBased):
+    DISTRO = "ubuntu"
+    VERSION = "25.04"
+
+
 class QAFedora(QALinux):
     """Base class for Fedora distros.
@@ -1005,14 +1027,18 @@ class QAFedora(QALinux):
     )


+class QAFedora42(QAFedora):
+    VERSION = "42"
+
+
+class QAFedora41(QAFedora):
+    VERSION = "41"
+
+
 class QAFedora40(QAFedora):
     VERSION = "40"


-class QAFedora39(QAFedora):
-    VERSION = "39"
-
-
 def parse_args():
     parser = argparse.ArgumentParser(
         prog=sys.argv[0],
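The `Reference` hunk above converts a Markdown heading into a link anchor by swapping spaces for dashes and stripping everything outside a small character class. A minimal sketch of that conversion (the `to_anchor` helper is hypothetical; it mirrors the new, non-escaped character class from the diff, which keeps only letters, dashes, and underscores):

```python
import re


def to_anchor(header: str) -> str:
    # Drop the leading '#' markers and surrounding whitespace
    anchor = header.strip("#").strip()
    # Convert spaces to dashes
    anchor = anchor.replace(" ", "-")
    # Keep only letters, dashes, and underscores (note: digits are dropped too,
    # as in the character class used in the diff above)
    return re.sub("[^a-zA-Z-_]", "", anchor)


print(to_anchor("### The checklist"))  # The-checklist
```

This is only the anchor-normalization step; the surrounding `Reference` class also locates the heading in the target file before building the link.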
680  dev_scripts/repro-build.py  (Executable file)
@@ -0,0 +1,680 @@
#!/usr/bin/env python3

import argparse
import datetime
import hashlib
import json
import logging
import os
import pprint
import shlex
import shutil
import subprocess
import sys
import tarfile
from pathlib import Path

logger = logging.getLogger(__name__)

MEDIA_TYPE_INDEX_V1_JSON = "application/vnd.oci.image.index.v1+json"
MEDIA_TYPE_MANIFEST_V1_JSON = "application/vnd.oci.image.manifest.v1+json"

ENV_RUNTIME = "REPRO_RUNTIME"
ENV_DATETIME = "REPRO_DATETIME"
ENV_SDE = "REPRO_SOURCE_DATE_EPOCH"
ENV_CACHE = "REPRO_CACHE"
ENV_BUILDKIT = "REPRO_BUILDKIT_IMAGE"
ENV_ROOTLESS = "REPRO_ROOTLESS"

DEFAULT_BUILDKIT_IMAGE = "moby/buildkit:v0.19.0@sha256:14aa1b4dd92ea0a4cd03a54d0c6079046ea98cd0c0ae6176bdd7036ba370cbbe"
DEFAULT_BUILDKIT_IMAGE_ROOTLESS = "moby/buildkit:v0.19.0-rootless@sha256:e901cffdad753892a7c3afb8b9972549fca02c73888cf340c91ed801fdd96d71"

MSG_BUILD_CTX = """Build environment:
- Container runtime: {runtime}
- BuildKit image: {buildkit_image}
- Rootless support: {rootless}
- Caching enabled: {use_cache}
- Build context: {context}
- Dockerfile: {dockerfile}
- Output: {output}

Build parameters:
- SOURCE_DATE_EPOCH: {sde}
- Build args: {build_args}
- Tag: {tag}
- Platform: {platform}

Podman-only arguments:
- BuildKit arguments: {buildkit_args}

Docker-only arguments:
- Docker Buildx arguments: {buildx_args}
"""


def pretty_error(obj: dict, msg: str):
    # pprint.pformat() returns the formatted string (pprint.pprint() would
    # print it and return None)
    raise Exception(f"{msg}\n{pprint.pformat(obj)}")


def get_key(obj: dict, key: str) -> object:
    if key not in obj:
        pretty_error(obj, f"Could not find key '{key}' in the dictionary:")
    return obj[key]


def run(cmd, dry=False, check=True):
    action = "Would have run" if dry else "Running"
    logger.debug(f"{action}: {shlex.join(cmd)}")
    if not dry:
        subprocess.run(cmd, check=check)


def snip_contents(contents: str, num: int) -> str:
    contents = contents.replace("\n", "")
    if len(contents) > num:
        return (
            contents[:num]
            + f" [... {len(contents) - num} characters omitted."
            + " Pass --show-contents to print them in their entirety]"
        )
    return contents


def detect_container_runtime() -> str:
    """Auto-detect the installed container runtime in the system."""
    if shutil.which("docker"):
        return "docker"
    elif shutil.which("podman"):
        return "podman"
    else:
        return None


def parse_runtime(args) -> str:
    if args.runtime is not None:
        return args.runtime

    runtime = os.environ.get(ENV_RUNTIME)
    if runtime is None:
        raise RuntimeError("No container runtime detected in your system")
    if runtime not in ("docker", "podman"):
        raise RuntimeError(
            "Only 'docker' or 'podman' container runtimes"
            " are currently supported by this script"
        )
    return runtime


def parse_use_cache(args) -> bool:
    if args.no_cache:
        return False
    return bool(int(os.environ.get(ENV_CACHE, "1")))


def parse_rootless(args, runtime: str) -> bool:
    rootless = args.rootless or bool(int(os.environ.get(ENV_ROOTLESS, "0")))
    if runtime != "podman" and rootless:
        raise RuntimeError("Rootless mode is only supported with Podman runtime")
    return rootless


def parse_sde(args) -> str:
    sde = os.environ.get(ENV_SDE, args.source_date_epoch)
    dt = os.environ.get(ENV_DATETIME, args.datetime)

    if (sde is not None and dt is not None) or (sde is None and dt is None):
        raise RuntimeError("You need to pass either a source date epoch or a datetime")

    if sde is not None:
        return str(sde)

    if dt is not None:
        d = datetime.datetime.fromisoformat(dt)
        # If the datetime is naive, assume its timezone is UTC. The check is
        # taken from:
        # https://docs.python.org/3/library/datetime.html#determining-if-an-object-is-aware-or-naive
        if d.tzinfo is None or d.tzinfo.utcoffset(d) is None:
            d = d.replace(tzinfo=datetime.timezone.utc)
        return int(d.timestamp())
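The datetime branch of `parse_sde` converts an ISO-8601 timestamp to a `SOURCE_DATE_EPOCH` value, treating naive timestamps as UTC. Extracted as a standalone sketch:

```python
import datetime


def to_source_date_epoch(dt: str) -> int:
    """Convert an ISO-8601 datetime string to a SOURCE_DATE_EPOCH value,
    assuming UTC when the timestamp is naive (the same rule as parse_sde)."""
    d = datetime.datetime.fromisoformat(dt)
    # Naive datetimes get UTC attached, so the epoch value is deterministic
    # regardless of the machine's local timezone.
    if d.tzinfo is None or d.tzinfo.utcoffset(d) is None:
        d = d.replace(tzinfo=datetime.timezone.utc)
    return int(d.timestamp())


print(to_source_date_epoch("1970-01-01T00:01:00"))        # 60
print(to_source_date_epoch("1970-01-01T01:00:00+01:00"))  # 0
```

Pinning `SOURCE_DATE_EPOCH` this way is what lets the build embed a fixed timestamp instead of "now", which is a precondition for bit-for-bit reproducible images.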

def parse_buildkit_image(args, rootless: bool, runtime: str) -> str:
    default = DEFAULT_BUILDKIT_IMAGE_ROOTLESS if rootless else DEFAULT_BUILDKIT_IMAGE
    img = args.buildkit_image or os.environ.get(ENV_BUILDKIT, default)

    if runtime == "podman" and not img.startswith("docker.io/"):
        img = "docker.io/" + img

    return img


def parse_build_args(args) -> str:
    return args.build_arg or []


def parse_buildkit_args(args, runtime: str) -> str:
    if not args.buildkit_args:
        return []

    if runtime != "podman":
        raise RuntimeError(
            "Cannot specify BuildKit arguments when not using the Podman runtime"
        )

    return shlex.split(args.buildkit_args)


def parse_buildx_args(args, runtime: str) -> str:
    if not args.buildx_args:
        return []

    if runtime != "docker":
        raise RuntimeError(
            "Cannot specify Docker Buildx arguments using the Podman runtime"
        )

    return shlex.split(args.buildx_args)


def parse_image_digest(args) -> str | None:
    if not args.expected_image_digest:
        return None
    parsed = args.expected_image_digest.split(":", 1)
    if len(parsed) == 1:
        return parsed[0]
    else:
        return parsed[1]


def parse_path(path: str | None) -> str | None:
    return path and str(Path(path).absolute())


##########################
# OCI parsing logic
#
# Compatible with:
# * https://github.com/opencontainers/image-spec/blob/main/image-layout.md


def oci_print_info(parsed: dict, full: bool) -> None:
    print(f"The OCI tarball contains an index and {len(parsed) - 1} manifest(s):")
    print()
    print(f"Image digest: {parsed[1]['digest']}")
    for i, info in enumerate(parsed):
        print()
        if i == 0:
            print(f"Index ({info['path']}):")
        else:
            print(f"Manifest {i} ({info['path']}):")
        print(f"  Digest: {info['digest']}")
        print(f"  Media type: {info['media_type']}")
        print(f"  Platform: {info['platform'] or '-'}")
        contents = info["contents"] if full else snip_contents(info["contents"], 600)
        print(f"  Contents: {contents}")
    print()


def oci_normalize_path(path):
    if path.startswith("sha256:"):
        hash_algo, checksum = path.split(":")
        path = f"blobs/{hash_algo}/{checksum}"
    return path


def oci_get_file_from_tarball(tar: tarfile.TarFile, path: str) -> dict:
    """Get file from an OCI tarball.

    If the filename cannot be found, search again by prefixing it with "./", since we
    have encountered path names in OCI tarballs prefixed with "./".
    """
    try:
        return tar.extractfile(path).read().decode()
    except KeyError:
        if not path.startswith("./") and not path.startswith("/"):
            path = "./" + path
        try:
            return tar.extractfile(path).read().decode()
        except KeyError:
            # Do not raise here, so that we can raise the original exception below.
            pass
        raise


def oci_parse_manifest(tar: tarfile.TarFile, path: str, platform: dict | None) -> dict:
    """Parse manifest information in JSON format.

    Interestingly, the platform info for a manifest is not included in the
    manifest itself, but in the descriptor that points to it. So, we have to
    carry it from the previous manifest and include in the info here.
    """
    path = oci_normalize_path(path)
    contents = oci_get_file_from_tarball(tar, path)
    digest = "sha256:" + hashlib.sha256(contents.encode()).hexdigest()
    contents_dict = json.loads(contents)
    media_type = get_key(contents_dict, "mediaType")
    manifests = contents_dict.get("manifests", [])

    if platform:
        os = get_key(platform, "os")
        arch = get_key(platform, "architecture")
        platform = f"{os}/{arch}"

    return {
        "path": path,
        "contents": contents,
        "digest": digest,
        "media_type": media_type,
        "platform": platform,
        "manifests": manifests,
    }
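Two small facts about the OCI image layout drive `oci_parse_manifest`: a descriptor's digest (`sha256:<hex>`) doubles as the blob's path (`blobs/sha256/<hex>`), and a blob's digest is simply the SHA-256 of its raw bytes. Both can be checked in a few lines (the `oci_blob_path` helper is a hypothetical rename of `oci_normalize_path`):

```python
import hashlib


def oci_blob_path(digest: str) -> str:
    # "sha256:abc..." -> "blobs/sha256/abc..." (OCI image layout convention)
    if digest.startswith("sha256:"):
        hash_algo, checksum = digest.split(":")
        return f"blobs/{hash_algo}/{checksum}"
    return digest


# The digest of a manifest is the SHA-256 of its raw bytes, prefixed with the
# algorithm name -- the same computation oci_parse_manifest performs.
contents = '{"mediaType": "application/vnd.oci.image.manifest.v1+json"}'
digest = "sha256:" + hashlib.sha256(contents.encode()).hexdigest()
print(oci_blob_path(digest))
```

Because the digest is content-addressed, recomputing it from the extracted file and comparing it against an expected value is all the integrity check a reproducibility verifier needs.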

def oci_parse_manifests_dfs(
    tar: tarfile.TarFile, path: str, parsed: list, platform: dict | None = None
) -> None:
    info = oci_parse_manifest(tar, path, platform)
    parsed.append(info)
    for m in info["manifests"]:
        oci_parse_manifests_dfs(tar, m["digest"], parsed, m.get("platform"))


def oci_parse_tarball(path: Path) -> dict:
    parsed = []
    with tarfile.TarFile.open(path) as tar:
        oci_parse_manifests_dfs(tar, "index.json", parsed)
    return parsed
|
||||||
|
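
# Illustrative sketch (not part of the original script): every manifest in the
# tree walked above is referenced by the SHA-256 digest of its raw JSON bytes,
# which is also how the "digest" field in oci_parse_manifest() is derived. The
# index content below is hypothetical.
def _digest_example() -> str:
    import hashlib
    import json

    index = {
        "schemaVersion": 2,
        "manifests": [
            {
                "mediaType": "application/vnd.oci.image.manifest.v1+json",
                "digest": "sha256:...",
                "platform": {"os": "linux", "architecture": "amd64"},
            },
        ],
    }
    raw = json.dumps(index).encode()
    # A parent descriptor would carry this value, and the blob itself would
    # live in the tarball under blobs/sha256/<hex>.
    return "sha256:" + hashlib.sha256(raw).hexdigest()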

##########################
# Image building logic


def podman_build(
    context: str,
    dockerfile: str | None,
    tag: str | None,
    buildkit_image: str,
    sde: int,
    rootless: bool,
    use_cache: bool,
    output: Path,
    build_args: list,
    platform: str,
    buildkit_args: list,
    dry: bool,
):
    rootless_args = []
    rootful_args = []
    if rootless:
        rootless_args = [
            "--userns",
            "keep-id:uid=1000,gid=1000",
            "--security-opt",
            "seccomp=unconfined",
            "--security-opt",
            "apparmor=unconfined",
            "-e",
            "BUILDKITD_FLAGS=--oci-worker-no-process-sandbox",
        ]
    else:
        rootful_args = ["--privileged"]

    dockerfile_args_podman = []
    dockerfile_args_buildkit = []
    if dockerfile:
        dockerfile_args_podman = ["-v", f"{dockerfile}:/tmp/Dockerfile"]
        dockerfile_args_buildkit = ["--local", "dockerfile=/tmp"]
    else:
        dockerfile_args_buildkit = ["--local", "dockerfile=/tmp/work"]

    tag_args = f",name={tag}" if tag else ""

    cache_args = []
    if use_cache:
        cache_args = [
            "--export-cache",
            "type=local,mode=max,dest=/tmp/cache",
            "--import-cache",
            "type=local,src=/tmp/cache",
        ]

    _build_args = []
    for arg in build_args:
        _build_args.append("--opt")
        _build_args.append(f"build-arg:{arg}")
    platform_args = ["--opt", f"platform={platform}"] if platform else []

    cmd = [
        "podman",
        "run",
        "-it",
        "--rm",
        "-v",
        "buildkit_cache:/tmp/cache",
        "-v",
        f"{output.parent}:/tmp/image",
        "-v",
        f"{context}:/tmp/work",
        "--entrypoint",
        "buildctl-daemonless.sh",
        *rootless_args,
        *rootful_args,
        *dockerfile_args_podman,
        buildkit_image,
        "build",
        "--frontend",
        "dockerfile.v0",
        "--local",
        "context=/tmp/work",
        "--opt",
        f"build-arg:SOURCE_DATE_EPOCH={sde}",
        *_build_args,
        "--output",
        f"type=docker,dest=/tmp/image/{output.name},rewrite-timestamp=true{tag_args}",
        *cache_args,
        *dockerfile_args_buildkit,
        *platform_args,
        *buildkit_args,
    ]

    run(cmd, dry)


def docker_build(
    context: str,
    dockerfile: str | None,
    tag: str | None,
    buildkit_image: str,
    sde: int,
    use_cache: bool,
    output: Path,
    build_args: list,
    platform: str,
    buildx_args: list,
    dry: bool,
):
    builder_id = hashlib.sha256(buildkit_image.encode()).hexdigest()
    builder_name = f"repro-build-{builder_id}"
    tag_args = ["-t", tag] if tag else []
    cache_args = [] if use_cache else ["--no-cache", "--pull"]

    cmd = [
        "docker",
        "buildx",
        "create",
        "--name",
        builder_name,
        "--driver-opt",
        f"image={buildkit_image}",
    ]
    run(cmd, dry, check=False)

    dockerfile_args = ["-f", dockerfile] if dockerfile else []
    _build_args = []
    for arg in build_args:
        _build_args.append("--build-arg")
        _build_args.append(arg)
    platform_args = ["--platform", platform] if platform else []

    cmd = [
        "docker",
        "buildx",
        "--builder",
        builder_name,
        "build",
        "--build-arg",
        f"SOURCE_DATE_EPOCH={sde}",
        *_build_args,
        "--provenance",
        "false",
        "--output",
        f"type=docker,dest={output},rewrite-timestamp=true",
        *cache_args,
        *tag_args,
        *dockerfile_args,
        *platform_args,
        *buildx_args,
        context,
    ]
    run(cmd, dry)


##########################
# Command logic


def build(args):
    runtime = parse_runtime(args)
    use_cache = parse_use_cache(args)
    sde = parse_sde(args)
    rootless = parse_rootless(args, runtime)
    buildkit_image = parse_buildkit_image(args, rootless, runtime)
    build_args = parse_build_args(args)
    platform = args.platform
    buildkit_args = parse_buildkit_args(args, runtime)
    buildx_args = parse_buildx_args(args, runtime)
    tag = args.tag
    dockerfile = parse_path(args.file)
    output = Path(parse_path(args.output))
    dry = args.dry
    context = parse_path(args.context)

    logger.info(
        MSG_BUILD_CTX.format(
            runtime=runtime,
            buildkit_image=buildkit_image,
            sde=sde,
            rootless=rootless,
            use_cache=use_cache,
            context=context,
            dockerfile=dockerfile or "(not provided)",
            tag=tag or "(not provided)",
            output=output,
            build_args=",".join(build_args) or "(not provided)",
            platform=platform or "(default)",
            buildkit_args=" ".join(buildkit_args) or "(not provided)",
            buildx_args=" ".join(buildx_args) or "(not provided)",
        )
    )

    try:
        if runtime == "docker":
            docker_build(
                context,
                dockerfile,
                tag,
                buildkit_image,
                sde,
                use_cache,
                output,
                build_args,
                platform,
                buildx_args,
                dry,
            )
        else:
            podman_build(
                context,
                dockerfile,
                tag,
                buildkit_image,
                sde,
                rootless,
                use_cache,
                output,
                build_args,
                platform,
                buildkit_args,
                dry,
            )
    except subprocess.CalledProcessError as e:
        logger.error(f"Failed with {e.returncode}")
        sys.exit(e.returncode)

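
# Illustrative sketch (not part of the original script): the --datetime and
# --source-date-epoch options express the same timestamp in two forms. A
# conversion from an ISO date to a SOURCE_DATE_EPOCH value (UTC midnight)
# could look like this; the helper name is ours, not the script's.
def _iso_to_sde_example(date_str: str) -> int:
    from datetime import datetime, timezone

    dt = datetime.fromisoformat(date_str).replace(tzinfo=timezone.utc)
    return int(dt.timestamp())  # e.g. "2025-01-01" -> 1735689600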
def analyze(args) -> None:
    expected_image_digest = parse_image_digest(args)
    tarball_path = Path(args.tarball)

    parsed = oci_parse_tarball(tarball_path)
    oci_print_info(parsed, args.show_contents)

    if expected_image_digest:
        cur_digest = parsed[1]["digest"].split(":")[1]
        if cur_digest != expected_image_digest:
            raise Exception(
                f"The image does not have the expected digest: {cur_digest} != {expected_image_digest}"
            )
        print(f"✅ Image digest matches {expected_image_digest}")


def define_build_cmd_args(parser: argparse.ArgumentParser) -> None:
    parser.add_argument(
        "--runtime",
        choices=["docker", "podman"],
        default=detect_container_runtime(),
        help="The container runtime for building the image (default: %(default)s)",
    )
    parser.add_argument(
        "--datetime",
        metavar="YYYY-MM-DD",
        default=None,
        help=(
            "Provide a date and (optionally) a time in ISO format, which will"
            " be used as the timestamp of the image layers"
        ),
    )
    parser.add_argument(
        "--buildkit-image",
        metavar="NAME:TAG@DIGEST",
        default=None,
        help=(
            "The BuildKit container image which will be used for building the"
            " reproducible container image. Make sure to pass the '-rootless'"
            " variant if you are using rootless Podman"
            " (default: docker.io/moby/buildkit:v0.19.0)"
        ),
    )
    parser.add_argument(
        "--source-date-epoch",
        "--sde",
        metavar="SECONDS",
        type=int,
        default=None,
        help="Provide a Unix timestamp for the image layers",
    )
    parser.add_argument(
        "--no-cache",
        default=False,
        action="store_true",
        help="Do not use existing cached images for the container build. Build from the start with a new set of cached layers.",
    )
    parser.add_argument(
        "--rootless",
        default=False,
        action="store_true",
        help="Run BuildKit in rootless mode (Podman only)",
    )
    parser.add_argument(
        "-f",
        "--file",
        metavar="FILE",
        default=None,
        help="Pathname of a Dockerfile",
    )
    parser.add_argument(
        "-o",
        "--output",
        metavar="FILE",
        default=Path.cwd() / "image.tar",
        help="Path to save OCI tarball (default: %(default)s)",
    )
    parser.add_argument(
        "-t",
        "--tag",
        metavar="TAG",
        default=None,
        help="Tag the built image with the name %(metavar)s",
    )
    parser.add_argument(
        "--build-arg",
        metavar="ARG=VALUE",
        action="append",
        default=None,
        help="Set build-time variables",
    )
    parser.add_argument(
        "--platform",
        metavar="PLAT1,PLAT2",
        default=None,
        help="Set platform for the image",
    )
    parser.add_argument(
        "--buildkit-args",
        metavar="'ARG1 ARG2'",
        default=None,
        help="Extra arguments for BuildKit (Podman only)",
    )
    parser.add_argument(
        "--buildx-args",
        metavar="'ARG1 ARG2'",
        default=None,
        help="Extra arguments for Docker Buildx (Docker only)",
    )
    parser.add_argument(
        "--dry",
        default=False,
        action="store_true",
        help="Do not run any commands, just print what would happen",
    )
    parser.add_argument(
        "context",
        metavar="CONTEXT",
        help="Path to the build context",
    )


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser()
    subparsers = parser.add_subparsers(dest="command", help="Available commands")

    build_parser = subparsers.add_parser("build", help="Perform a build operation")
    build_parser.set_defaults(func=build)
    define_build_cmd_args(build_parser)

    analyze_parser = subparsers.add_parser("analyze", help="Analyze an OCI tarball")
    analyze_parser.set_defaults(func=analyze)
    analyze_parser.add_argument(
        "tarball",
        metavar="FILE",
        help="Path to OCI image in .tar format",
    )
    analyze_parser.add_argument(
        "--expected-image-digest",
        metavar="DIGEST",
        default=None,
        help="The expected digest for the provided image",
    )
    analyze_parser.add_argument(
        "--show-contents",
        default=False,
        action="store_true",
        help="Show full file contents",
    )

    return parser.parse_args()


def main() -> None:
    logging.basicConfig(
        level=logging.DEBUG,
        format="%(asctime)s - %(levelname)s - %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S",
    )
    args = parse_args()

    if not hasattr(args, "func"):
        args.func = build
    args.func(args)


if __name__ == "__main__":
    sys.exit(main())

dev_scripts/reproduce-image.py (new executable file, 115 lines)
@@ -0,0 +1,115 @@
#!/usr/bin/env python3

import argparse
import hashlib
import logging
import pathlib
import platform
import stat
import subprocess
import sys
import urllib.request

logger = logging.getLogger(__name__)

if platform.system() in ["Darwin", "Windows"]:
    CONTAINER_RUNTIME = "docker"
elif platform.system() == "Linux":
    CONTAINER_RUNTIME = "podman"


def run(*args):
    """Simple function that runs a command and checks the result."""
    logger.debug(f"Running command: {' '.join(args)}")
    return subprocess.run(args, check=True)


def build_image(
    platform=None,
    runtime=None,
    cache=True,
    date=None,
):
    """Build the Dangerzone container image with a special tag."""
    platform_args = [] if not platform else ["--platform", platform]
    runtime_args = [] if not runtime else ["--runtime", runtime]
    cache_args = [] if cache else ["--use-cache", "no"]
    date_args = [] if not date else ["--debian-archive-date", date]
    run(
        "python3",
        "./install/common/build-image.py",
        *platform_args,
        *runtime_args,
        *cache_args,
        *date_args,
    )


def parse_args():
    parser = argparse.ArgumentParser(
        prog=sys.argv[0],
        description="Dev script for verifying container image reproducibility",
    )
    parser.add_argument(
        "--platform",
        default=None,
        help="The platform for building the image (default: current platform)",
    )
    parser.add_argument(
        "--runtime",
        choices=["docker", "podman"],
        default=CONTAINER_RUNTIME,
        help=f"The container runtime for building the image (default: {CONTAINER_RUNTIME})",
    )
    parser.add_argument(
        "--no-cache",
        default=False,
        action="store_true",
        help=(
            "Do not use existing cached images for the container build."
            " Build from the start with a new set of cached layers."
        ),
    )
    parser.add_argument(
        "--debian-archive-date",
        default=None,
        help="Use a specific Debian snapshot archive, by its date",
    )
    parser.add_argument(
        "digest",
        help="The digest of the image that you want to reproduce",
    )
    return parser.parse_args()


def main():
    logging.basicConfig(
        level=logging.DEBUG,
        format="%(asctime)s - %(levelname)s - %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S",
    )
    args = parse_args()

    logger.info("Building container image")
    build_image(
        args.platform,
        args.runtime,
        not args.no_cache,
        args.debian_archive_date,
    )

    logger.info(
        f"Check that the reproduced image has the expected digest: {args.digest}"
    )
    run(
        "./dev_scripts/repro-build.py",
        "analyze",
        "--show-contents",
        "share/container.tar",
        "--expected-image-digest",
        args.digest,
    )


if __name__ == "__main__":
    sys.exit(main())

@@ -11,8 +11,8 @@ log = logging.getLogger(__name__)


 DZ_ASSETS = [
-    "container-{version}-i686.tar.gz",
-    "container-{version}-arm64.tar.gz",
+    "container-{version}-i686.tar",
+    "container-{version}-arm64.tar",
     "Dangerzone-{version}.msi",
     "Dangerzone-{version}-arm64.dmg",
     "Dangerzone-{version}-i686.dmg",
@@ -95,11 +95,11 @@ def main():
     parser.add_argument(
         "--version",
         required=True,
-        help=f"look for assets with this Dangerzone version",
+        help="look for assets with this Dangerzone version",
     )
     parser.add_argument(
         "dir",
-        help=f"look for assets in this directory",
+        help="look for assets in this directory",
     )
     args = parser.parse_args()
     setup_logging()

docs/advisories/2024-12-24.md (new file, 33 lines)
@@ -0,0 +1,33 @@
# Security Advisory 2024-12-24

In Dangerzone, a security vulnerability was detected in the quarantined
environment where documents are opened. Vulnerabilities like this are expected
and do not compromise the security of Dangerzone. However, in combination with
another, more serious vulnerability (also known as a container escape), a
malicious document may be able to breach the security of Dangerzone. We are not
aware of any container escapes that affect Dangerzone. **To reduce that risk,
you are strongly advised to update Dangerzone to the latest version**.

# Summary

A series of vulnerabilities in gst-plugins-base (CVE-2024-47538, CVE-2024-47607
and CVE-2024-47615) affects the **contained** environment where the document
rendering takes place.

If one attempts to convert a malicious file with embedded Vorbis or Opus media
elements, arbitrary code may run within that environment. Such files look like
regular Office documents, which means that you cannot avoid them by filtering
for a specific extension. Other programs that open Office documents, such as
LibreOffice, are also affected, unless the system has been upgraded in the
meantime.

# How does this impact me?

The expectation is that malicious code will run in a container without Internet
access, meaning that it won't be able to infect the rest of the system.

If you are running Dangerzone on Qubes OS, you are not impacted.

# What do I need to do?

You are **strongly** advised to update your Dangerzone installation to 0.8.1 as
soon as possible.

docs/developer/doit.md (new file, 54 lines)
@@ -0,0 +1,54 @@
# Using the Doit Automation Tool

Developers can use the [Doit](https://pydoit.org/) automation tool to create
release artifacts. The purpose of the tool is to automate the manual release
instructions in the `RELEASE.md` file. Not everything is automated yet, since
we're still experimenting with this tool. You can find our task definitions in
this repo's `dodo.py` file.

## Why Doit?

We picked Doit out of the various tools out there for the following reasons:

* **Pythonic:** The configuration file and tasks can be written in Python. Where
  applicable, it's easy to issue shell commands as well.
* **File targets:** Doit borrows the file target concept from Makefiles. Tasks
  can have file dependencies, and targets they build. This makes it easy to
  define a dependency graph for tasks.
* **Hash-based caching:** Unlike Makefiles, Doit does not look at the
  modification timestamp of source/target files to figure out if it needs to
  run them. Instead, it hashes those files, and will run a task only if the
  hash of a file dependency has changed.
* **Parallelization:** Tasks can be run in parallel with the `-n` argument,
  which is similar to `make`'s `-j` argument.
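
The hash-based caching behavior can be pictured with plain `hashlib`. This is a
conceptual sketch of the idea, not Doit's actual API or storage format:

```python
import hashlib

# Doit persists a checksum per file dependency after a successful run.
db: dict[str, str] = {}

def needs_run(task: str, dep_content: bytes) -> bool:
    """Re-run the task only if a dependency's content hash changed."""
    new = hashlib.md5(dep_content).hexdigest()
    if db.get(task) == new:
        return False  # up to date, even if the file's mtime changed
    db[task] = new
    return True

print(needs_run("build_image", b"v1"))  # True: first run
print(needs_run("build_image", b"v1"))  # False: content unchanged
print(needs_run("build_image", b"v2"))  # True: content changed
```

This is why merely `touch`-ing a dependency does not invalidate a Doit task,
while a one-byte edit does.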

## How to Doit?

First, enter your Poetry shell. Then, make sure that your environment is clean,
and you have ample disk space. You can run:

```bash
doit clean --dry-run  # if you want to see what would happen
doit clean  # you'll be asked to confirm that you want to clean everything
```

Finally, you can build all the release artifacts with `doit`, or a specific task
with:

```
doit <task>
```

## Tips and tricks

* You can run `doit list --all -s` to see the full list of tasks, their
  dependencies, and whether they are up to date (U) or will run (R). Note that
  certain small tasks are always configured to run.
* You can run `doit info <task>` to see which dependencies are missing.
* You can pass the following environment variables to the script, in order to
  affect some global parameters:
  - `CONTAINER_RUNTIME`: The container runtime to use. Either `podman` (default)
    or `docker`.
  - `RELEASE_DIR`: Where to store the release artifacts. Default path is
    `~/release-assets/<version>`.
  - `APPLE_ID`: The Apple ID to use when signing/notarizing the macOS DMG.

@@ -1,5 +1,11 @@
 # gVisor integration

+> [!NOTE]
+> **Update on 2025-01-13:** There is no longer a copied container image under
+> `/home/dangerzone/dangerzone-image/rootfs`. We now reuse the same container
+> image both for the inner and outer container. See
+> [#1048](https://github.com/freedomofpress/dangerzone/issues/1048).
+
 Dangerzone has relied on the container runtime available in each supported
 operating system (Docker Desktop on Windows / macOS, Podman on Linux) to isolate
 the host from the sanitization process. The problem with this type of isolation

docs/developer/reproducibility.md (new file, 67 lines)
@@ -0,0 +1,67 @@
# Reproducible builds

We want to improve the transparency and auditability of our build artifacts, and
a way to achieve this is via reproducible builds. For a broader understanding of
what reproducible builds entail, check out https://reproducible-builds.org/.

Our build artifacts consist of:
* Container images (`amd64` and `arm64` architectures)
* macOS installers (for Intel and Apple Silicon CPUs)
* Windows installer
* Fedora packages (for regular Fedora distros and Qubes)
* Debian packages (for Debian and Ubuntu)

At the time of writing, only the following artifacts are reproducible:
* Container images (see [#1047](https://github.com/freedomofpress/dangerzone/issues/1047))

In the following sections, we'll mention some specifics about enforcing
reproducibility for each artifact type.

## Container image

### Updating the image

The fact that our image is reproducible also means that it's frozen in time.
This means that rebuilding the image without updating our Dockerfile will
**not** receive security updates.

Here are the necessary variables that make up our image in the `Dockerfile.env`
file:
* `DEBIAN_IMAGE_DIGEST`: The index digest for the Debian container image
* `DEBIAN_ARCHIVE_DATE`: The Debian snapshot repo that we want to use
* `GVISOR_ARCHIVE_DATE`: The gVisor APT repo that we want to use
* `H2ORESTART_CHECKSUM`: The SHA-256 checksum of the H2ORestart plugin
* `H2ORESTART_VERSION`: The version of the H2ORestart plugin
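
For illustration, `Dockerfile.env` holds one `KEY=value` line per variable; the
values below are placeholders, not the pinned ones from the repo:

```
DEBIAN_IMAGE_DIGEST=sha256:<index digest of the pinned Debian image>
DEBIAN_ARCHIVE_DATE=<YYYYMMDD snapshot date>
GVISOR_ARCHIVE_DATE=<YYYYMMDD snapshot date>
H2ORESTART_CHECKSUM=<sha256 checksum of the plugin>
H2ORESTART_VERSION=<plugin version>
```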

If you update these values in `Dockerfile.env`, you must also create a new
Dockerfile with:

```
make Dockerfile
```

Updating `Dockerfile` without bumping `Dockerfile.in` is detected and should
trigger a CI error.

### Reproducing the image

For a simple way to reproduce a Dangerzone container image, you can check out
the commit this image was built from (you can find it from the image tag in its
`g<commit>` portion), retrieve the date it was built (also included in the image
tag), and run the following command in any environment:

```
./dev_scripts/reproduce-image.py \
    --debian-archive-date <date> \
    <digest>
```

where:
* `<date>` should be given in YYYYMMDD format, e.g., 20250226
* `<digest>` is the SHA-256 hash of the image for the **current platform**, with
  or without the `sha256:` prefix.

This command will build a container image from the current Git commit and the
provided date for the Debian archives. Then, it will compare the digest of the
manifest against the provided one. This is a simple way to ensure that the
created image is bit-for-bit reproducible.
docs/podman-desktop.md (new file, 53 lines)
@@ -0,0 +1,53 @@
# Podman Desktop support

Starting with Dangerzone 0.9.0, it is possible to use Podman Desktop on
Windows and macOS. The support for this container runtime is currently only
experimental. If you try it out and encounter issues, please reach out to us;
we'll be glad to help.

With [Podman Desktop](https://podman-desktop.io/) installed on your machine,
here are the required steps to change the Dangerzone container runtime.

You will be required to open a terminal and follow these steps:

## On macOS

You will need to configure podman to access the shared Dangerzone resources:

```bash
podman machine stop
podman machine rm
cat > ~/.config/containers/containers.conf <<EOF
[machine]
volumes = ["/Users:/Users", "/private:/private", "/var/folders:/var/folders", "/Applications/Dangerzone.app:/Applications/Dangerzone.app"]
EOF
podman machine init
podman machine set --rootful=false
podman machine start
```

Then, set the container runtime to podman using this command:

```bash
/Applications/Dangerzone.app/Contents/MacOS/dangerzone-cli --set-container-runtime podman
```

In order to get back to the default behaviour (Docker Desktop on macOS), pass
the `default` value instead:

```bash
/Applications/Dangerzone.app/Contents/MacOS/dangerzone-cli --set-container-runtime default
```

## On Windows

To set the container runtime to podman, use this command:

```bash
'C:\Program Files\Dangerzone\dangerzone-cli.exe' --set-container-runtime podman
```

To revert to the default behavior, pass the `default` value:

```bash
'C:\Program Files\Dangerzone\dangerzone-cli.exe' --set-container-runtime default
```
dodo.py (new file, 379 lines)
@@ -0,0 +1,379 @@
import json
|
||||||
|
import os
|
||||||
|
import platform
|
||||||
|
import shutil
|
||||||
|
from pathlib import Path
|
||||||
|
|
||||||
|
from doit.action import CmdAction
|
||||||
|
|
||||||
|
ARCH = "arm64" if platform.machine() == "arm64" else "i686"
|
||||||
|
VERSION = open("share/version.txt").read().strip()
|
||||||
|
FEDORA_VERSIONS = ["40", "41", "42"]
|
||||||
|
|
||||||
|
### Global parameters
|
||||||
|
|
||||||
|
CONTAINER_RUNTIME = os.environ.get("CONTAINER_RUNTIME", "podman")
|
||||||
|
DEFAULT_RELEASE_DIR = Path.home() / "release-assets" / VERSION
|
||||||
|
RELEASE_DIR = Path(os.environ.get("RELEASE_DIR", DEFAULT_RELEASE_DIR))
APPLE_ID = os.environ.get("APPLE_ID", None)

### Task Parameters

PARAM_APPLE_ID = {
    "name": "apple_id",
    "long": "apple-id",
    "default": APPLE_ID,
    "help": "The Apple developer ID that will be used to sign the .dmg",
}

### File dependencies
#
# Define all the file dependencies for our tasks in a single place, since some file
# dependencies are shared between tasks.


def list_files(path, recursive=False):
    """List files in a directory, and optionally traverse into subdirectories."""
    glob_fn = Path(path).rglob if recursive else Path(path).glob
    return [f for f in glob_fn("*") if f.is_file() and not f.suffix == ".pyc"]


def list_language_data():
    """List the expected language data that Dangerzone downloads and stores locally."""
    tessdata_dir = Path("share") / "tessdata"
    langs = json.loads(open(tessdata_dir.parent / "ocr-languages.json").read()).values()
    targets = [tessdata_dir / f"{lang}.traineddata" for lang in langs]
    return targets


TESSDATA_DEPS = ["install/common/download-tessdata.py", "share/ocr-languages.json"]
TESSDATA_TARGETS = list_language_data()

IMAGE_DEPS = [
    "Dockerfile",
    *list_files("dangerzone/conversion"),
    *list_files("dangerzone/container_helpers"),
    "install/common/build-image.py",
]
IMAGE_TARGETS = ["share/container.tar", "share/image-id.txt"]

SOURCE_DEPS = [
    *list_files("assets"),
    *list_files("share"),
    *list_files("dangerzone", recursive=True),
]

PYTHON_DEPS = ["poetry.lock", "pyproject.toml"]

DMG_DEPS = [
    *list_files("install/macos"),
    *TESSDATA_TARGETS,
    *IMAGE_TARGETS,
    *PYTHON_DEPS,
    *SOURCE_DEPS,
]

LINUX_DEPS = [
    *list_files("install/linux"),
    *IMAGE_TARGETS,
    *PYTHON_DEPS,
    *SOURCE_DEPS,
]

DEB_DEPS = [*LINUX_DEPS, *list_files("debian")]
RPM_DEPS = [*LINUX_DEPS, *list_files("qubes")]
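The dependency lists above are all built from `list_files`. As a self-contained illustration of the same glob-and-filter pattern, the sketch below runs the helper against a temporary directory instead of the repo tree (the directory layout here is invented for the example):

```python
import tempfile
from pathlib import Path


def list_files(path, recursive=False):
    """Mirror of the helper above: list files, skipping .pyc byte-code."""
    glob_fn = Path(path).rglob if recursive else Path(path).glob
    return [f for f in glob_fn("*") if f.is_file() and not f.suffix == ".pyc"]


with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "keep.py").touch()
    (root / "skip.pyc").touch()
    (root / "sub").mkdir()
    (root / "sub" / "nested.txt").touch()

    # Shallow listing ignores subdirectories; recursive descends into them.
    print({f.name for f in list_files(root)})                  # {'keep.py'}
    print({f.name for f in list_files(root, recursive=True)})  # {'keep.py', 'nested.txt'}
```

doit compares the modification times of these `file_dep` entries against the declared `targets` to decide whether a task needs to re-run.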
def copy_dir(src, dst):
    """Copy a directory to a destination dir, and overwrite it if it exists."""
    shutil.rmtree(dst, ignore_errors=True)
    shutil.copytree(src, dst)


def create_release_dir():
    RELEASE_DIR.mkdir(parents=True, exist_ok=True)
    (RELEASE_DIR / "tmp").mkdir(exist_ok=True)


def build_linux_pkg(distro, version, cwd, qubes=False):
    """Generic command for building a .deb/.rpm in a Dangerzone dev environment."""
    pkg = "rpm" if distro == "fedora" else "deb"
    cmd = [
        "python3",
        "./dev_scripts/env.py",
        "--distro",
        distro,
        "--version",
        version,
        "run",
        "--no-gui",
        "--dev",
        f"./dangerzone/install/linux/build-{pkg}.py",
    ]
    if qubes:
        cmd += ["--qubes"]
    return CmdAction(" ".join(cmd), cwd=cwd)


def build_deb(cwd):
    """Build a .deb package on Debian Bookworm."""
    return build_linux_pkg(distro="debian", version="bookworm", cwd=cwd)


def build_rpm(version, cwd, qubes=False):
    """Build an .rpm package on the requested Fedora distro."""
    return build_linux_pkg(distro="fedora", version=version, cwd=cwd, qubes=qubes)
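For illustration, here is the command line that `build_linux_pkg` assembles for a Fedora Qubes build. This is a standalone re-implementation of just the string-building step (the helper name `build_linux_pkg_cmd` is mine; the real function wraps the string in a doit `CmdAction`):

```python
def build_linux_pkg_cmd(distro, version, qubes=False):
    # Same argument-assembly logic as build_linux_pkg above, minus CmdAction.
    pkg = "rpm" if distro == "fedora" else "deb"
    cmd = [
        "python3",
        "./dev_scripts/env.py",
        "--distro",
        distro,
        "--version",
        version,
        "run",
        "--no-gui",
        "--dev",
        f"./dangerzone/install/linux/build-{pkg}.py",
    ]
    if qubes:
        cmd += ["--qubes"]
    return " ".join(cmd)


print(build_linux_pkg_cmd("fedora", "41", qubes=True))
# python3 ./dev_scripts/env.py --distro fedora --version 41 run --no-gui --dev ./dangerzone/install/linux/build-rpm.py --qubes
```

The build therefore always happens inside the containerized dev environment that `dev_scripts/env.py` provides, never on the host directly.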
### Tasks


def task_clean_container_runtime():
    """Clean the storage space of the container runtime."""
    return {
        "actions": None,
        "clean": [
            [CONTAINER_RUNTIME, "system", "prune", "-a", "-f"],
        ],
    }


def task_check_container_runtime():
    """Test that the container runtime is ready."""
    return {
        "actions": [
            ["which", CONTAINER_RUNTIME],
            [CONTAINER_RUNTIME, "ps"],
        ],
    }


def task_macos_check_cert():
    """Test that the Apple developer certificate can be used."""
    return {
        "actions": [
            "xcrun notarytool history --apple-id %(apple_id)s --keychain-profile dz-notarytool-release-key"
        ],
        "params": [PARAM_APPLE_ID],
    }


def task_macos_check_system():
    """Run macOS specific system checks, as well as the generic ones."""
    return {
        "actions": None,
        "task_dep": ["check_container_runtime", "macos_check_cert"],
    }


def task_init_release_dir():
    """Create a directory for release artifacts."""
    return {
        "actions": [create_release_dir],
        "clean": [f"rm -rf {RELEASE_DIR}"],
    }


def task_download_tessdata():
    """Download the Tesseract data using ./install/common/download-tessdata.py"""
    return {
        "actions": ["python install/common/download-tessdata.py"],
        "file_dep": TESSDATA_DEPS,
        "targets": TESSDATA_TARGETS,
        "clean": True,
    }
def task_build_image():
    """Build the container image using ./install/common/build-image.py"""
    img_src = "share/container.tar"
    img_dst = RELEASE_DIR / f"container-{VERSION}-{ARCH}.tar"  # FIXME: Add arch
    img_id_src = "share/image-id.txt"
    img_id_dst = RELEASE_DIR / "image-id.txt"  # FIXME: Add arch

    return {
        "actions": [
            f"python install/common/build-image.py --runtime={CONTAINER_RUNTIME}",
            ["cp", img_src, img_dst],
            ["cp", img_id_src, img_id_dst],
        ],
        "file_dep": IMAGE_DEPS,
        "targets": [img_src, img_dst, img_id_src, img_id_dst],
        "task_dep": ["init_release_dir", "check_container_runtime"],
        "clean": True,
    }


def task_poetry_install():
    """Setup the Poetry environment"""
    return {"actions": ["poetry sync"], "clean": ["poetry env remove --all"]}
def task_macos_build_dmg():
    """Build the macOS .dmg file for Dangerzone."""
    dz_dir = RELEASE_DIR / "tmp" / "macos"
    dmg_src = dz_dir / "dist" / "Dangerzone.dmg"
    dmg_dst = RELEASE_DIR / f"Dangerzone-{VERSION}-{ARCH}.dmg"  # FIXME: Add -arch

    return {
        "actions": [
            (copy_dir, [".", dz_dir]),
            f"cd {dz_dir} && poetry run install/macos/build-app.py --with-codesign",
            (
                "xcrun notarytool submit --wait --apple-id %(apple_id)s"
                f" --keychain-profile dz-notarytool-release-key {dmg_src}"
            ),
            f"xcrun stapler staple {dmg_src}",
            ["cp", dmg_src, dmg_dst],
            ["rm", "-rf", dz_dir],
        ],
        "params": [PARAM_APPLE_ID],
        "file_dep": DMG_DEPS,
        "task_dep": [
            "macos_check_system",
            "init_release_dir",
            "poetry_install",
            "download_tessdata",
        ],
        "targets": [dmg_src, dmg_dst],
        "clean": True,
    }
def task_debian_env():
    """Build a Debian Bookworm dev environment."""
    return {
        "actions": [
            [
                "python3",
                "./dev_scripts/env.py",
                "--distro",
                "debian",
                "--version",
                "bookworm",
                "build-dev",
            ]
        ],
        "task_dep": ["check_container_runtime"],
    }


def task_debian_deb():
    """Build a Debian package for Debian Bookworm."""
    dz_dir = RELEASE_DIR / "tmp" / "debian"
    deb_name = f"dangerzone_{VERSION}-1_amd64.deb"
    deb_src = dz_dir / "deb_dist" / deb_name
    deb_dst = RELEASE_DIR / deb_name

    return {
        "actions": [
            (copy_dir, [".", dz_dir]),
            build_deb(cwd=dz_dir),
            ["cp", deb_src, deb_dst],
            ["rm", "-rf", dz_dir],
        ],
        "file_dep": DEB_DEPS,
        "task_dep": ["init_release_dir", "debian_env"],
        "targets": [deb_dst],
        "clean": True,
    }
def task_fedora_env():
    """Build Fedora dev environments."""
    for version in FEDORA_VERSIONS:
        yield {
            "name": version,
            "doc": f"Build Fedora {version} dev environments",
            "actions": [
                [
                    "python3",
                    "./dev_scripts/env.py",
                    "--distro",
                    "fedora",
                    "--version",
                    version,
                    "build-dev",
                ],
            ],
            "task_dep": ["check_container_runtime"],
        }


def task_fedora_rpm():
    """Build Fedora packages for every supported version."""
    for version in FEDORA_VERSIONS:
        for qubes in (True, False):
            qubes_ident = "-qubes" if qubes else ""
            qubes_desc = " for Qubes" if qubes else ""
            dz_dir = RELEASE_DIR / "tmp" / f"f{version}{qubes_ident}"
            rpm_names = [
                f"dangerzone{qubes_ident}-{VERSION}-1.fc{version}.x86_64.rpm",
                f"dangerzone{qubes_ident}-{VERSION}-1.fc{version}.src.rpm",
            ]
            rpm_src = [dz_dir / "dist" / rpm_name for rpm_name in rpm_names]
            rpm_dst = [RELEASE_DIR / rpm_name for rpm_name in rpm_names]

            yield {
                "name": version + qubes_ident,
                "doc": f"Build a Fedora {version} package{qubes_desc}",
                "actions": [
                    (copy_dir, [".", dz_dir]),
                    build_rpm(version, cwd=dz_dir, qubes=qubes),
                    ["cp", *rpm_src, RELEASE_DIR],
                    ["rm", "-rf", dz_dir],
                ],
                "file_dep": RPM_DEPS,
                "task_dep": ["init_release_dir", f"fedora_env:{version}"],
                "targets": rpm_dst,
                "clean": True,
            }
def task_git_archive():
    """Build a Git archive of the repo."""
    target = f"{RELEASE_DIR}/dangerzone-{VERSION}.tar.gz"
    return {
        "actions": [
            f"git archive --format=tar.gz -o {target} --prefix=dangerzone/ v{VERSION}"
        ],
        "targets": [target],
        "task_dep": ["init_release_dir"],
    }


#######################################################################################
#
# END OF TASKS
#
# The following task should be the LAST one in the dodo file, so that it runs first when
# running `do clean`.


def clean_prompt():
    ans = input(
        f"""
You have not specified a target to clean.
This means that doit will clean the following targets:

* ALL the containers, images, and build cache in {CONTAINER_RUNTIME.capitalize()}
* ALL the built targets and directories

For a full list of the targets that doit will clean, run: doit clean --dry-run

Are you sure you want to clean everything (y/N): \
"""
    )
    if ans.lower() in ["yes", "y"]:
        return
    else:
        print("Exiting...")
        exit(1)


def task_clean_prompt():
    """Make sure that the user really wants to run the clean tasks."""
    return {
        "actions": None,
        "clean": [clean_prompt],
    }
@@ -1,20 +1,60 @@
 import argparse
-import gzip
-import os
 import platform
+import secrets
 import subprocess
 import sys
 from pathlib import Path
 
-BUILD_CONTEXT = "dangerzone/"
-TAG = "dangerzone.rocks/dangerzone:latest"
-REQUIREMENTS_TXT = "container-pip-requirements.txt"
+BUILD_CONTEXT = "dangerzone"
+IMAGE_NAME = "dangerzone.rocks/dangerzone"
 if platform.system() in ["Darwin", "Windows"]:
     CONTAINER_RUNTIME = "docker"
 elif platform.system() == "Linux":
     CONTAINER_RUNTIME = "podman"
 
-ARCH = platform.machine()
+
+def str2bool(v):
+    if isinstance(v, bool):
+        return v
+    if v.lower() in ("yes", "true", "t", "y", "1"):
+        return True
+    elif v.lower() in ("no", "false", "f", "n", "0"):
+        return False
+    else:
+        raise argparse.ArgumentTypeError("Boolean value expected.")
+
+
+def determine_git_tag():
+    # Designate a unique tag for this image, depending on the Git commit it was created
+    # from:
+    # 1. If created from a Git tag (e.g., 0.8.0), the image tag will be `0.8.0`.
+    # 2. If created from a commit, it will be something like `0.8.0-31-g6bdaa7a`.
+    # 3. If the contents of the Git repo are dirty, we will append a unique identifier
+    #    for this run, something like `0.8.0-31-g6bdaa7a-fdcb` or `0.8.0-fdcb`.
+    dirty_ident = secrets.token_hex(2)
+    return (
+        subprocess.check_output(
+            [
+                "git",
+                "describe",
+                "--long",
+                "--first-parent",
+                f"--dirty=-{dirty_ident}",
+            ],
+        )
+        .decode()
+        .strip()[1:]  # remove the "v" prefix of the tag.
+    )
+
+
+def determine_debian_archive_date():
+    """Get the date of the Debian archive from Dockerfile.env."""
+    for env in Path("Dockerfile.env").read_text().split("\n"):
+        if env.startswith("DEBIAN_ARCHIVE_DATE"):
+            return env.split("=")[1]
+    raise Exception(
+        "Could not find 'DEBIAN_ARCHIVE_DATE' build argument in Dockerfile.env"
+    )
+
 
 def main():
@@ -26,126 +66,86 @@ def main():
         help=f"The container runtime for building the image (default: {CONTAINER_RUNTIME})",
     )
     parser.add_argument(
-        "--no-save",
-        action="store_true",
-        help="Do not save the container image as a tarball in share/container.tar.gz",
+        "--platform",
+        default=None,
+        help=f"The platform for building the image (default: current platform)",
     )
     parser.add_argument(
-        "--compress-level",
-        type=int,
-        choices=range(0, 10),
-        default=9,
-        help="The Gzip compression level, from 0 (lowest) to 9 (highest, default)",
+        "--output",
+        "-o",
+        default=str(Path("share") / "container.tar"),
+        help="Path to store the container image",
     )
     parser.add_argument(
         "--use-cache",
+        type=str2bool,
+        nargs="?",
+        default=True,
+        const=True,
+        help="Use the builder's cache to speed up the builds",
+    )
+    parser.add_argument(
+        "--tag",
+        default=None,
+        help="Provide a custom tag for the image (for development only)",
+    )
+    parser.add_argument(
+        "--debian-archive-date",
+        "-d",
+        default=determine_debian_archive_date(),
+        help="Use a specific Debian snapshot archive, by its date (default %(default)s)",
+    )
+    parser.add_argument(
+        "--dry",
+        default=False,
         action="store_true",
-        help="Use the builder's cache to speed up the builds (not suitable for release builds)",
+        help="Do not run any commands, just print what would happen",
     )
     args = parser.parse_args()
 
-    print(f"Building for architecture '{ARCH}'")
+    tag = args.tag or f"{args.debian_archive_date}-{determine_git_tag()}"
+    image_name_tagged = f"{IMAGE_NAME}:{tag}"
 
-    print("Exporting container pip dependencies")
-    with ContainerPipDependencies():
-        if not args.use_cache:
-            print("Pulling base image")
-            subprocess.run(
-                [
-                    args.runtime,
-                    "pull",
-                    "alpine:latest",
-                ],
-                check=True,
-            )
-
-        print("Building container image")
-        cache_args = [] if args.use_cache else ["--no-cache"]
-        subprocess.run(
-            [
-                args.runtime,
-                "build",
-                BUILD_CONTEXT,
-                *cache_args,
-                "--build-arg",
-                f"REQUIREMENTS_TXT={REQUIREMENTS_TXT}",
-                "--build-arg",
-                f"ARCH={ARCH}",
-                "-f",
-                "Dockerfile",
-                "--tag",
-                TAG,
-            ],
-            check=True,
-        )
+    print(f"Will tag the container image as '{image_name_tagged}'")
+    image_id_path = Path("share") / "image-id.txt"
+    if not args.dry:
+        with open(image_id_path, "w") as f:
+            f.write(tag)
 
-        if not args.no_save:
-            print("Saving container image")
-            cmd = subprocess.Popen(
-                [
-                    CONTAINER_RUNTIME,
-                    "save",
-                    TAG,
-                ],
-                stdout=subprocess.PIPE,
-            )
-
-            print("Compressing container image")
-            chunk_size = 4 << 20
-            with gzip.open(
-                "share/container.tar.gz",
-                "wb",
-                compresslevel=args.compress_level,
-            ) as gzip_f:
-                while True:
-                    chunk = cmd.stdout.read(chunk_size)
-                    if len(chunk) > 0:
-                        gzip_f.write(chunk)
-                    else:
-                        break
-                cmd.wait(5)
-
-        print("Looking up the image id")
-        image_id = subprocess.check_output(
-            [
-                args.runtime,
-                "image",
-                "list",
-                "--format",
-                "{{.ID}}",
-                TAG,
-            ],
-            text=True,
-        )
-        with open("share/image-id.txt", "w") as f:
-            f.write(image_id)
-
-
-class ContainerPipDependencies:
-    """Generates PIP dependencies within container"""
-
-    def __enter__(self):
-        try:
-            container_requirements_txt = subprocess.check_output(
-                ["poetry", "export", "--only", "container"], universal_newlines=True
-            )
-        except subprocess.CalledProcessError as e:
-            print("FAILURE", e.returncode, e.output)
-        print(f"REQUIREMENTS: {container_requirements_txt}")
-        # XXX Export container dependencies and exclude pymupdfb since it is not needed in container
-        req_txt_pymupdfb_stripped = container_requirements_txt.split("pymupdfb")[0]
-        with open(Path(BUILD_CONTEXT) / REQUIREMENTS_TXT, "w") as f:
-            if ARCH == "arm64":
-                # PyMuPDF needs to be built on ARM64 machines
-                # But is already provided as a prebuilt-wheel on other architectures
-                f.write(req_txt_pymupdfb_stripped)
-            else:
-                f.write(container_requirements_txt)
-
-    def __exit__(self, exc_type, exc_value, exc_tb):
-        print("Leaving the context...")
-        os.remove(Path(BUILD_CONTEXT) / REQUIREMENTS_TXT)
+    # Build the container image, and tag it with the calculated tag
+    print("Building container image")
+    cache_args = [] if args.use_cache else ["--no-cache"]
+    platform_args = [] if not args.platform else ["--platform", args.platform]
+    rootless_args = [] if args.runtime == "docker" else ["--rootless"]
+    rootless_args = []
+    dry_args = [] if not args.dry else ["--dry"]
 
+    subprocess.run(
+        [
+            sys.executable,
+            str(Path("dev_scripts") / "repro-build.py"),
+            "build",
+            "--runtime",
+            args.runtime,
+            "--build-arg",
+            f"DEBIAN_ARCHIVE_DATE={args.debian_archive_date}",
+            "--datetime",
+            args.debian_archive_date,
+            *dry_args,
+            *cache_args,
+            *platform_args,
+            *rootless_args,
+            "--tag",
+            image_name_tagged,
+            "--output",
+            args.output,
+            "-f",
+            "Dockerfile",
+            BUILD_CONTEXT,
+        ],
+        check=True,
+    )
 
 if __name__ == "__main__":
     sys.exit(main())
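The new tagging scheme above combines the Debian snapshot archive date with the output of `git describe`, stripped of its leading `v`. A standalone sketch of that computation follows; the helper name `image_tag` is mine, and the `git describe` output is passed in as a string rather than shelled out, so it can run without a Git checkout:

```python
def image_tag(debian_archive_date, git_describe_output):
    # determine_git_tag() strips the leading "v" from `git describe`;
    # the final image tag prefixes the Debian snapshot date.
    git_tag = git_describe_output.strip()[1:]
    return f"{debian_archive_date}-{git_tag}"


print(image_tag("20250120", "v0.8.0-31-g6bdaa7a\n"))  # 20250120-0.8.0-31-g6bdaa7a
```

Pinning the Debian archive date into both the tag and the build (via `DEBIAN_ARCHIVE_DATE`) is what makes the image build reproducible.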
@@ -51,6 +51,8 @@ def main():
     if files == expected_files:
         logger.info("Skipping tessdata download, language data already exists")
         return
+    elif not files:
+        logger.info("Tesseract dir is empty, proceeding to download language data")
     else:
         logger.info(f"Found {tessdata_dir} but contents do not match")
         return 1
@@ -66,14 +66,14 @@ def build(build_dir, qubes=False):
     print("* Creating a Python sdist")
     tessdata = root / "share" / "tessdata"
     tessdata_bak = root / "tessdata.bak"
-    container_tar_gz = root / "share" / "container.tar.gz"
-    container_tar_gz_bak = root / "container.tar.gz.bak"
+    container_tar = root / "share" / "container.tar"
+    container_tar_bak = root / "container.tar.bak"
 
     if tessdata.exists():
         tessdata.rename(tessdata_bak)
-    stash_container = qubes and container_tar_gz.exists()
-    if stash_container and container_tar_gz.exists():
-        container_tar_gz.rename(container_tar_gz_bak)
+    stash_container = qubes and container_tar.exists()
+    if stash_container and container_tar.exists():
+        container_tar.rename(container_tar_bak)
     try:
         subprocess.run(["poetry", "build", "-f", "sdist"], cwd=root, check=True)
         # Copy and unlink the Dangerzone sdist, instead of just renaming it. If the
@@ -84,8 +84,8 @@ def build(build_dir, qubes=False):
     finally:
         if tessdata_bak.exists():
             tessdata_bak.rename(tessdata)
-        if stash_container and container_tar_gz_bak.exists():
-            container_tar_gz_bak.rename(container_tar_gz)
+        if stash_container and container_tar_bak.exists():
+            container_tar_bak.rename(container_tar)
 
     print("* Building RPM package")
     cmd = [
@@ -18,7 +18,7 @@
 #
 # * Qubes packages include some extra files under /etc/qubes-rpc, whereas
 #   regular RPM packages include the container image under
-#   /usr/share/container.tar.gz
+#   /usr/share/container.tar
 # * Qubes packages have some extra dependencies.
 # 3. It is best to consume this SPEC file using the `install/linux/build-rpm.py`
 #    script, which handles the necessary scaffolding for building the package.
@@ -32,7 +32,7 @@ Name: dangerzone-qubes
 Name: dangerzone
 %endif
 
-Version: 0.8.0
+Version: 0.9.0
 Release: 1%{?dist}
 Summary: Take potentially dangerous PDFs, office documents, or images and convert them to safe PDFs
 
@@ -216,17 +216,6 @@ convert the documents within a secure sandbox.
 %prep
 %autosetup -p1 -n dangerzone-%{version}
 
-# XXX: Bump the Python requirement in pyproject.toml from <3.13 to <3.14. Fedora
-# 41 comes with Python 3.13 installed, but our pyproject.toml does not support
-# it because PySide6 in PyPI works with Python 3.12 or earlier.
-#
-# This hack sidesteps this issue, and we haven't noticed any paticular problem
-# with the package that is built from that.
-%if 0%{?fedora} == 41
-sed -i 's/<3.13/<3.14/' pyproject.toml
-%endif
-
-
 %generate_buildrequires
 %pyproject_buildrequires -R
@@ -28,26 +28,9 @@ def main():
     )
 
     logger.info("Getting PyMuPDF deps as requirements.txt")
-    cmd = ["poetry", "export", "--only", "container"]
+    cmd = ["poetry", "export", "--only", "debian"]
     container_requirements_txt = subprocess.check_output(cmd)
 
-    # XXX: Hack for Ubuntu Focal.
-    #
-    # The `requirements.txt` file is generated from our `pyproject.toml` file, and thus
-    # specifies that the minimum Python version is 3.9. This was to accommodate to
-    # PySide6, which is installed in macOS / Windows via `poetry` and works with Python
-    # 3.9+. [1]
-    #
-    # The Python version in Ubuntu Focal though is 3.8. This generally was not much of
-    # an issue, since we used the package manager to install dependencies. However, it
-    # becomes an issue when we want to vendor the PyMuPDF package, using `pip`. In order
-    # to sidestep this virtual limitation, we can just change the Python version in the
-    # generated `requirements.txt` file in Ubuntu Focal from 3.9 to 3.8.
-    #
-    # [1] https://github.com/freedomofpress/dangerzone/pull/818
-    if sys.version.startswith("3.8"):
-        container_requirements_txt = container_requirements_txt.replace(b"3.9", b"3.8")
-
     logger.info(f"Vendoring PyMuPDF under '{args.dest}'")
     # We prefer to call the CLI version of `pip`, instead of importing it directly, as
     # instructed here:
@@ -1,40 +0,0 @@
-#!/bin/bash
-
-# Development script for installing Podman on Ubuntu Focal. Mainly to be used as
-# part of our CI pipelines, where we may install Podman on environments that
-# don't have sudo.
-
-set -e
-
-if [[ "$EUID" -ne 0 ]]; then
-    SUDO=sudo
-else
-    SUDO=
-fi
-
-provide() {
-    $SUDO apt-get update
-    $SUDO apt-get install curl wget gnupg2 -y
-    source /etc/os-release
-    $SUDO sh -c "echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/ /' \
-        > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list"
-    wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/xUbuntu_${VERSION_ID}/Release.key -O- \
-        | $SUDO apt-key add -
-    $SUDO apt-get update -qq -y
-}
-
-install() {
-    $SUDO apt-get -qq --yes install podman
-    podman --version
-}
-
-if [[ "$1" == "--repo-only" ]]; then
-    provide
-elif [[ "$1" == "" ]]; then
-    provide
-    install
-else
-    echo "Unexpected argument: $1"
-    echo "Usage: $0 [--repo-only]"
-    exit 1
-fi
@@ -17,22 +17,23 @@ signtool.exe sign /v /d "Dangerzone" /a /n "Freedom of the Press Foundation" /fd
 REM verify the signature of dangerzone-cli.exe
 signtool.exe verify /pa build\exe.win-amd64-3.12\dangerzone-cli.exe
 
-REM build the wix file
-python install\windows\build-wxs.py > build\Dangerzone.wxs
+REM build the wxs file
+python install\windows\build-wxs.py
 
 REM build the msi package
 cd build
-candle.exe Dangerzone.wxs
-light.exe -ext WixUIExtension Dangerzone.wixobj
+wix build -arch x64 -ext WixToolset.UI.wixext .\Dangerzone.wxs -out Dangerzone.msi
+
+REM validate Dangerzone.msi
+wix msi validate Dangerzone.msi
 
 REM code sign Dangerzone.msi
-insignia.exe -im Dangerzone.msi
 signtool.exe sign /v /d "Dangerzone" /a /n "Freedom of the Press Foundation" /fd sha256 /t http://time.certum.pl/ Dangerzone.msi
 
 REM verify the signature of Dangerzone.msi
 signtool.exe verify /pa Dangerzone.msi
 
-REM moving Dangerzone.msi to dist
+REM move Dangerzone.msi to dist
 cd ..
 mkdir dist
 move build\Dangerzone.msi dist
@@ -4,114 +4,75 @@ import uuid
 import xml.etree.ElementTree as ET
 
 
-def build_data(dirname, dir_prefix, id_, name):
+def build_data(base_path, path_prefix, dir_id, dir_name):
     data = {
-        "id": id_,
-        "name": name,
+        "directory_name": dir_name,
+        "directory_id": dir_id,
         "files": [],
         "dirs": [],
     }
 
-    for basename in os.listdir(dirname):
-        filename = os.path.join(dirname, basename)
-        if os.path.isfile(filename):
-            data["files"].append(os.path.join(dir_prefix, basename))
-        elif os.path.isdir(filename):
-            if id_ == "INSTALLDIR":
-                id_prefix = "Folder"
-            else:
-                id_prefix = id_
+    if dir_id == "INSTALLFOLDER":
+        data["component_id"] = "ApplicationFiles"
+    else:
+        data["component_id"] = "Component" + dir_id
+    data["component_guid"] = str(uuid.uuid4()).upper()
+
+    for entry in os.listdir(base_path):
+        entry_path = os.path.join(base_path, entry)
+        if os.path.isfile(entry_path):
+            data["files"].append(os.path.join(path_prefix, entry))
+        elif os.path.isdir(entry_path):
+            if dir_id == "INSTALLFOLDER":
+                next_dir_prefix = "Folder"
+            else:
+                next_dir_prefix = dir_id
 
             # Skip lib/PySide6/examples folder due to ilegal file names
-            if "\\build\\exe.win-amd64-3.12\\lib\\PySide6\\examples" in dirname:
+            if "\\build\\exe.win-amd64-3.12\\lib\\PySide6\\examples" in base_path:
                 continue
 
             # Skip lib/PySide6/qml/QtQuick folder due to ilegal file names
             # XXX Since we're not using Qml it should be no problem
-            if "\\build\\exe.win-amd64-3.12\\lib\\PySide6\\qml\\QtQuick" in dirname:
+            if "\\build\\exe.win-amd64-3.12\\lib\\PySide6\\qml\\QtQuick" in base_path:
                 continue
 
-            id_value = f"{id_prefix}{basename.capitalize().replace('-', '_')}"
-            data["dirs"].append(
-                build_data(
-                    os.path.join(dirname, basename),
-                    os.path.join(dir_prefix, basename),
-                    id_value,
-                    basename,
-                )
-            )
+            next_dir_id = next_dir_prefix + entry.capitalize().replace("-", "_")
+            subdata = build_data(
+                os.path.join(base_path, entry),
+                os.path.join(path_prefix, entry),
+                next_dir_id,
+                entry,
+            )
 
-    if len(data["files"]) > 0:
-        if id_ == "INSTALLDIR":
-            data["component_id"] = "ApplicationFiles"
-        else:
-            data["component_id"] = "FolderComponent" + id_[len("Folder") :]
-        data["component_guid"] = str(uuid.uuid4())
+            # Add the subdirectory only if it contains files or subdirectories
+            if subdata["files"] or subdata["dirs"]:
+                data["dirs"].append(subdata)
 
     return data
 
 
-def build_dir_xml(root, data):
+def build_directory_xml(root, data):
     attrs = {}
-    if "id" in data:
-        attrs["Id"] = data["id"]
-    if "name" in data:
-        attrs["Name"] = data["name"]
-    el = ET.SubElement(root, "Directory", attrs)
+    attrs["Id"] = data["directory_id"]
+    attrs["Name"] = data["directory_name"]
+    directory_el = ET.SubElement(root, "Directory", attrs)
     for subdata in data["dirs"]:
-        build_dir_xml(el, subdata)
+        build_directory_xml(directory_el, subdata)
 
-    # If this is the ProgramMenuFolder, add the menu component
-    if "id" in data and data["id"] == "ProgramMenuFolder":
-        component_el = ET.SubElement(
-            el,
-            "Component",
-            Id="ApplicationShortcuts",
-            Guid="539e7de8-a124-4c09-aa55-0dd516aad7bc",
-        )
-        ET.SubElement(
-            component_el,
-            "Shortcut",
-            Id="ApplicationShortcut1",
-            Name="Dangerzone",
-            Description="Dangerzone",
-            Target="[INSTALLDIR]dangerzone.exe",
-            WorkingDirectory="INSTALLDIR",
-        )
-        ET.SubElement(
-            component_el,
-            "RegistryValue",
-            Root="HKCU",
-            Key="Software\Freedom of the Press Foundation\Dangerzone",
-            Name="installed",
-            Type="integer",
-            Value="1",
-            KeyPath="yes",
-        )
-
 
 def build_components_xml(root, data):
|
||||||
component_ids = []
|
|
||||||
if "component_id" in data:
|
|
||||||
component_ids.append(data["component_id"])
|
|
||||||
|
|
||||||
for subdata in data["dirs"]:
|
|
||||||
if "component_guid" in subdata:
|
|
||||||
dir_ref_el = ET.SubElement(root, "DirectoryRef", Id=subdata["id"])
|
|
||||||
component_el = ET.SubElement(
|
component_el = ET.SubElement(
|
||||||
dir_ref_el,
|
root,
|
||||||
"Component",
|
"Component",
|
||||||
Id=subdata["component_id"],
|
Id=data["component_id"],
|
||||||
Guid=subdata["component_guid"],
|
Guid=data["component_guid"],
|
||||||
|
Directory=data["directory_id"],
|
||||||
)
|
)
|
||||||
for filename in subdata["files"]:
|
for filename in data["files"]:
|
||||||
file_el = ET.SubElement(
|
ET.SubElement(component_el, "File", Source=filename)
|
||||||
component_el, "File", Source=filename, Id="file_" + uuid.uuid4().hex
|
for subdata in data["dirs"]:
|
||||||
)
|
build_components_xml(root, subdata)
|
||||||
|
|
||||||
component_ids += build_components_xml(root, subdata)
|
|
||||||
|
|
||||||
return component_ids
|
|
||||||
|
|
||||||
|
|
||||||
def main():
|
def main():
|
||||||
|
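For reference, the new-side recursion in this hunk reads end-to-end as a small standalone harvester. The sketch below is condensed: the initialization of the `data` dict (the `directory_id`/`directory_name`/`files`/`dirs` keys) sits above the lines shown in the diff and is an assumption here, and the Windows-specific PySide6 skip checks are omitted.

```python
import os
import tempfile
import uuid


def build_data(base_path, path_prefix, dir_id, dir_name):
    # The dict layout is inferred from how build_directory_xml and
    # build_components_xml consume it; only the loop appears in the diff.
    data = {
        "directory_id": dir_id,
        "directory_name": dir_name,
        "files": [],
        "dirs": [],
        "component_id": "Component" + dir_id,
        "component_guid": str(uuid.uuid4()).upper(),
    }

    for entry in sorted(os.listdir(base_path)):
        entry_path = os.path.join(base_path, entry)
        if os.path.isfile(entry_path):
            data["files"].append(os.path.join(path_prefix, entry))
        elif os.path.isdir(entry_path):
            next_dir_prefix = "Folder" if dir_id == "INSTALLFOLDER" else dir_id
            next_dir_id = next_dir_prefix + entry.capitalize().replace("-", "_")
            subdata = build_data(
                entry_path, os.path.join(path_prefix, entry), next_dir_id, entry
            )
            # Add the subdirectory only if it contains files or subdirectories
            if subdata["files"] or subdata["dirs"]:
                data["dirs"].append(subdata)
    return data


# Exercise the harvester on a throwaway tree.
root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, "sub-dir"))
open(os.path.join(root, "a.txt"), "w").close()
open(os.path.join(root, "sub-dir", "b.txt"), "w").close()
result = build_data(root, "pfx", "INSTALLFOLDER", "Dangerzone")
```

Empty subdirectories never reach the output, which keeps the generated WiX component tree free of componentless `Directory` elements.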
@@ -125,120 +86,196 @@ def main():
     # -rc markers.
     version = f.read().strip().split("-")[0]
 
-    dist_dir = os.path.join(
+    build_dir = os.path.join(
         os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))),
         "build",
-        "exe.win-amd64-3.12",
     )
 
+    cx_freeze_dir = "exe.win-amd64-3.12"
+
+    dist_dir = os.path.join(build_dir, cx_freeze_dir)
+
     if not os.path.exists(dist_dir):
         print("You must build the dangerzone binary before running this")
         return
 
-    data = {
-        "id": "TARGETDIR",
-        "name": "SourceDir",
-        "dirs": [
-            {
-                "id": "ProgramFilesFolder",
-                "dirs": [],
-            },
-            {
-                "id": "ProgramMenuFolder",
-                "dirs": [],
-            },
-        ],
-    }
-
-    data["dirs"][0]["dirs"].append(
-        build_data(
-            dist_dir,
-            "exe.win-amd64-3.12",
-            "INSTALLDIR",
-            "Dangerzone",
-        )
-    )
-
-    root_el = ET.Element("Wix", xmlns="http://schemas.microsoft.com/wix/2006/wi")
-    product_el = ET.SubElement(
-        root_el,
-        "Product",
-        Name="Dangerzone",
-        Manufacturer="Freedom of the Press Foundation",
-        Id="*",
-        UpgradeCode="$(var.ProductUpgradeCode)",
-        Language="1033",
-        Codepage="1252",
-        Version="$(var.ProductVersion)",
-    )
-    ET.SubElement(
-        product_el,
-        "Package",
-        Id="*",
-        Keywords="Installer",
-        Description="Dangerzone $(var.ProductVersion) Installer",
-        Manufacturer="Freedom of the Press Foundation",
-        InstallerVersion="100",
-        Languages="1033",
-        Compressed="yes",
-        SummaryCodepage="1252",
-    )
-    ET.SubElement(product_el, "Media", Id="1", Cabinet="product.cab", EmbedCab="yes")
-    ET.SubElement(
-        product_el, "Icon", Id="ProductIcon", SourceFile="..\\share\\dangerzone.ico"
-    )
-    ET.SubElement(product_el, "Property", Id="ARPPRODUCTICON", Value="ProductIcon")
+    # Prepare data for WiX file harvesting from the output of cx_Freeze
+    data = build_data(
+        dist_dir,
+        cx_freeze_dir,
+        "INSTALLFOLDER",
+        "Dangerzone",
+    )
+
+    # Add the Wix root element
+    wix_el = ET.Element(
+        "Wix",
+        {
+            "xmlns": "http://wixtoolset.org/schemas/v4/wxs",
+            "xmlns:ui": "http://wixtoolset.org/schemas/v4/wxs/ui",
+        },
+    )
+
+    # Add the Package element
+    package_el = ET.SubElement(
+        wix_el,
+        "Package",
+        Name="Dangerzone",
+        Manufacturer="Freedom of the Press Foundation",
+        UpgradeCode="12B9695C-965B-4BE0-BC33-21274E809576",
+        Language="1033",
+        Compressed="yes",
+        Codepage="1252",
+        Version=version,
+    )
+    ET.SubElement(
+        package_el,
+        "SummaryInformation",
+        Keywords="Installer",
+        Description="Dangerzone " + version + " Installer",
+        Codepage="1252",
+    )
+    ET.SubElement(package_el, "MediaTemplate", EmbedCab="yes")
+    ET.SubElement(
+        package_el, "Icon", Id="ProductIcon", SourceFile="..\\share\\dangerzone.ico"
+    )
+    ET.SubElement(package_el, "Property", Id="ARPPRODUCTICON", Value="ProductIcon")
     ET.SubElement(
-        product_el,
+        package_el,
         "Property",
         Id="ARPHELPLINK",
         Value="https://dangerzone.rocks",
     )
     ET.SubElement(
-        product_el,
+        package_el,
         "Property",
         Id="ARPURLINFOABOUT",
         Value="https://freedom.press",
     )
     ET.SubElement(
-        product_el,
-        "Property",
-        Id="WIXUI_INSTALLDIR",
-        Value="INSTALLDIR",
+        package_el, "ui:WixUI", Id="WixUI_InstallDir", InstallDirectory="INSTALLFOLDER"
     )
-    ET.SubElement(product_el, "UIRef", Id="WixUI_InstallDir")
-    ET.SubElement(product_el, "UIRef", Id="WixUI_ErrorProgressText")
+    ET.SubElement(package_el, "UIRef", Id="WixUI_ErrorProgressText")
     ET.SubElement(
-        product_el,
+        package_el,
         "WixVariable",
         Id="WixUILicenseRtf",
         Value="..\\install\\windows\\license.rtf",
     )
     ET.SubElement(
-        product_el,
+        package_el,
         "WixVariable",
         Id="WixUIDialogBmp",
         Value="..\\install\\windows\\dialog.bmp",
     )
     ET.SubElement(
-        product_el,
+        package_el,
         "MajorUpgrade",
-        AllowSameVersionUpgrades="yes",
         DowngradeErrorMessage="A newer version of [ProductName] is already installed. If you are sure you want to downgrade, remove the existing installation via Programs and Features.",
     )
 
-    build_dir_xml(product_el, data)
-    component_ids = build_components_xml(product_el, data)
-
-    feature_el = ET.SubElement(product_el, "Feature", Id="DefaultFeature", Level="1")
-    for component_id in component_ids:
-        ET.SubElement(feature_el, "ComponentRef", Id=component_id)
+    # Workaround for an issue after upgrading from WiX Toolset v3 to v5 where the previous
+    # version of Dangerzone is not uninstalled during the upgrade by checking if the older installation
+    # exists in "C:\Program Files (x86)\Dangerzone".
+    #
+    # Also handle a special case for Dangerzone 0.8.0 which allows choosing the install location
+    # during install by checking if the registry key for it exists.
+    #
+    # Note that this seems to allow installing Dangerzone 0.8.0 after installing Dangerzone from this branch.
+    # In this case the installer errors until Dangerzone 0.8.0 is uninstalled again
+    #
+    # TODO: Revert this once we are reasonably certain there aren't too many affected Dangerzone installations.
+    find_old_el = ET.SubElement(package_el, "Property", Id="OLDDANGERZONEFOUND")
+    directory_search_el = ET.SubElement(
+        find_old_el,
+        "DirectorySearch",
+        Id="dangerzone_install_folder",
+        Path="C:\\Program Files (x86)\\Dangerzone",
+    )
+    ET.SubElement(directory_search_el, "FileSearch", Name="dangerzone.exe")
+    registry_search_el = ET.SubElement(package_el, "Property", Id="DANGERZONE08FOUND")
+    ET.SubElement(
+        registry_search_el,
+        "RegistrySearch",
+        Root="HKLM",
+        Key="SOFTWARE\\WOW6432Node\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\{03C2D2B2-9955-4AED-831F-DA4E67FC0FDB}",
+        Name="DisplayName",
+        Type="raw",
+    )
+    ET.SubElement(
+        registry_search_el,
+        "RegistrySearch",
+        Root="HKLM",
+        Key="SOFTWARE\\WOW6432Node\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\{8AAC0808-3556-4164-9D15-6EC1FB673AB2}",
+        Name="DisplayName",
+        Type="raw",
+    )
+    ET.SubElement(
+        package_el,
+        "Launch",
+        Condition="NOT OLDDANGERZONEFOUND AND NOT DANGERZONE08FOUND",
+        Message='A previous version of [ProductName] is already installed. Please uninstall it from "Apps & Features" before proceeding with the installation.',
+    )
+
+    # Add the ProgramMenuFolder StandardDirectory
+    programmenufolder_el = ET.SubElement(
+        package_el,
+        "StandardDirectory",
+        Id="ProgramMenuFolder",
+    )
+    # Add a shortcut for Dangerzone in the Start menu
+    shortcut_el = ET.SubElement(
+        programmenufolder_el,
+        "Component",
+        Id="ApplicationShortcuts",
+        Guid="539E7DE8-A124-4C09-AA55-0DD516AAD7BC",
+    )
+    ET.SubElement(
+        shortcut_el,
+        "Shortcut",
+        Id="DangerzoneStartMenuShortcut",
+        Name="Dangerzone",
+        Description="Dangerzone",
+        Target="[INSTALLFOLDER]dangerzone.exe",
+        WorkingDirectory="INSTALLFOLDER",
+    )
+    ET.SubElement(
+        shortcut_el,
+        "RegistryValue",
+        Root="HKCU",
+        Key="Software\\Freedom of the Press Foundation\\Dangerzone",
+        Name="installed",
+        Type="integer",
+        Value="1",
+        KeyPath="yes",
+    )
+
+    # Add the ProgramFilesFolder StandardDirectory
+    programfilesfolder_el = ET.SubElement(
+        package_el,
+        "StandardDirectory",
+        Id="ProgramFiles64Folder",
+    )
+
+    # Create the directory structure for the installed product
+    build_directory_xml(programfilesfolder_el, data)
+
+    # Create a component group for application components
+    applicationcomponents_el = ET.SubElement(
+        package_el, "ComponentGroup", Id="ApplicationComponents"
+    )
+    # Populate the application components group with components for the installed package
+    build_components_xml(applicationcomponents_el, data)
+
+    # Add the Feature element
+    feature_el = ET.SubElement(package_el, "Feature", Id="DefaultFeature", Level="1")
+    ET.SubElement(feature_el, "ComponentGroupRef", Id="ApplicationComponents")
     ET.SubElement(feature_el, "ComponentRef", Id="ApplicationShortcuts")
 
-    print('<?xml version="1.0" encoding="windows-1252"?>')
-    print(f'<?define ProductVersion = "{version}"?>')
-    print('<?define ProductUpgradeCode = "12b9695c-965b-4be0-bc33-21274e809576"?>')
-    ET.indent(root_el)
-    print(ET.tostring(root_el).decode())
+    ET.indent(wix_el, space="  ")
+
+    with open(os.path.join(build_dir, "Dangerzone.wxs"), "w") as wxs_file:
+        wxs_file.write(ET.tostring(wix_el).decode())
 
 
 if __name__ == "__main__":
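The WiX v4 assembly above can be reduced to a few lines to see the shape of the output. This sketch uses only the element names and attributes that appear in the diff (root `Wix` with the v4 namespace, a `Package` carrying the metadata, a `MediaTemplate`), with the version hard-coded instead of read from `share/version.txt`.

```python
import xml.etree.ElementTree as ET

version = "0.9.0"  # normally read from share/version.txt

# Root element with the WiX v4 namespace.
wix_el = ET.Element("Wix", {"xmlns": "http://wixtoolset.org/schemas/v4/wxs"})

# In v4 the product metadata lives directly on the Package element.
package_el = ET.SubElement(
    wix_el,
    "Package",
    Name="Dangerzone",
    Manufacturer="Freedom of the Press Foundation",
    UpgradeCode="12B9695C-965B-4BE0-BC33-21274E809576",
    Language="1033",
    Compressed="yes",
    Codepage="1252",
    Version=version,
)
ET.SubElement(package_el, "MediaTemplate", EmbedCab="yes")

# Pretty-print and serialize, as main() now does before writing Dangerzone.wxs.
ET.indent(wix_el, space="  ")
wxs = ET.tostring(wix_el).decode()
```

Writing the serialized tree to `build/Dangerzone.wxs` (instead of printing a v3 preamble to stdout) is what lets the packaging pipeline hand a complete file to the WiX build.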
poetry.lock (generated, 1415 lines changed): diff suppressed because it is too large.
@@ -1,6 +1,6 @@
 [tool.poetry]
 name = "dangerzone"
-version = "0.8.0"
+version = "0.9.0"
 description = "Take potentially dangerous PDFs, office documents, or images and convert them to safe PDFs"
 authors = ["Freedom of the Press Foundation <info@freedom.press>", "Micah Lee <micah.lee@theintercept.com>"]
 license = "AGPL-3.0"
@@ -13,9 +13,9 @@ include = [
 ]
 
 [tool.poetry.dependencies]
-python = ">=3.9,<3.13"
+python = ">=3.9,<3.14"
 click = "*"
-appdirs = "*"
+platformdirs = "*"
 PySide6 = "^6.7.1"
 PyMuPDF = "^1.23.3" # The version in Fedora 39
 colorama = "*"
@@ -31,17 +31,21 @@ dangerzone-cli = 'dangerzone:main'
 # Dependencies required for packaging the code on various platforms.
 [tool.poetry.group.package.dependencies]
 setuptools = "*"
-cx_freeze = {version = "^7.1.1", platform = "win32"}
+cx_freeze = {version = "^7.2.5", platform = "win32"}
 pywin32 = {version = "*", platform = "win32"}
 pyinstaller = {version = "*", platform = "darwin"}
+doit = "^0.36.0"
+jinja2-cli = "^0.8.2"
 
 # Dependencies required for linting the code.
 [tool.poetry.group.lint.dependencies]
-black = "*"
-isort = "*"
+click = "*" # Install click so mypy is able to reason about it.
 mypy = "*"
+ruff = "*"
+types-colorama = "*"
 types-PySide2 = "*"
 types-Markdown = "*"
+types-pygments = "*"
 types-requests = "*"
 
 # Dependencies required for testing the code.
@@ -52,15 +56,23 @@ pytest-qt = "^4.2.0"
 pytest-cov = "^5.0.0"
 strip-ansi = "*"
 pytest-subprocess = "^1.5.2"
+pytest-rerunfailures = "^14.0"
+numpy = "2.0" # bump when we remove python 3.9 support
 
-[tool.poetry.group.container.dependencies]
-pymupdf = "1.24.11" # Last version to support python 3.8 (needed for Ubuntu Focal support)
+[tool.poetry.group.debian.dependencies]
+pymupdf = "^1.24.11"
 
-[tool.isort]
-profile = "black"
-skip_gitignore = true
-# This is necessary due to https://github.com/PyCQA/isort/issues/1835
-follow_links = false
+[tool.poetry.group.dev.dependencies]
+httpx = "^0.27.2"
+
+[tool.doit]
+verbosity = 3
+
+[tool.ruff.lint]
+select = [
+    # isort
+    "I",
+]
 
 [build-system]
 requires = ["poetry-core>=1.2.0"]
@@ -13,11 +13,7 @@ setup(
     description="Dangerzone",
     options={
         "build_exe": {
-            # Explicitly specify pymupdf.util module to fix building the executables
-            # with cx_freeze. See https://github.com/marcelotduarte/cx_Freeze/issues/2653
-            # for more details.
-            # TODO: Upgrade to cx_freeze 7.3.0 which should include a fix.
-            "packages": ["dangerzone", "dangerzone.gui", "pymupdf.utils"],
+            "packages": ["dangerzone", "dangerzone.gui", "pymupdf._wxcolors"],
             "excludes": ["test", "tkinter"],
             "include_files": [("share", "share"), ("LICENSE", "LICENSE")],
             "include_msvcr": True,
@@ -1 +1 @@
-0.8.0
+0.9.0
@@ -122,7 +122,7 @@ test_docs_compressed_dir = Path(__file__).parent.joinpath(SAMPLE_COMPRESSED_DIRE
 
 test_docs = [
     p
-    for p in test_docs_dir.rglob("*")
+    for p in test_docs_dir.glob("*")
     if p.is_file()
    and not (p.name.endswith(SAFE_EXTENSION) or p.name.startswith("sample_bad"))
 ]
@@ -160,3 +160,31 @@ def for_each_external_doc(glob_pattern: str = "*") -> Callable:
 
 class TestBase:
     sample_doc = str(test_docs_dir.joinpath(BASIC_SAMPLE_PDF))
+
+
+def pytest_configure(config: pytest.Config) -> None:
+    config.addinivalue_line(
+        "markers",
+        "reference_generator: Used to mark the test cases that regenerate reference documents",
+    )
+
+
+def pytest_addoption(parser: pytest.Parser) -> None:
+    parser.addoption(
+        "--generate-reference-pdfs",
+        action="store_true",
+        default=False,
+        help="Regenerate reference PDFs",
+    )
+
+
+def pytest_collection_modifyitems(
+    config: pytest.Config, items: List[pytest.Item]
+) -> None:
+    if not config.getoption("--generate-reference-pdfs"):
+        skip_generator = pytest.mark.skip(
+            reason="Only run when --generate-reference-pdfs is provided"
+        )
+        for item in items:
+            if "reference_generator" in item.keywords:
+                item.add_marker(skip_generator)
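The new collection hook inverts the usual opt-out pattern: tests marked `reference_generator` are skipped *unless* the `--generate-reference-pdfs` flag is passed. The decision logic can be isolated from pytest and tested directly; the `Item` class below is a hypothetical stand-in for `pytest.Item`, used only to make the logic runnable here.

```python
class Item:
    """Minimal stand-in for pytest.Item: a keyword set and collected markers."""

    def __init__(self, keywords):
        self.keywords = set(keywords)
        self.markers = []

    def add_marker(self, marker):
        self.markers.append(marker)


def modify_items(generate_reference_pdfs, items):
    # Mirrors pytest_collection_modifyitems: without the flag, mark every
    # reference_generator test with a skip marker; with it, touch nothing.
    if not generate_reference_pdfs:
        for item in items:
            if "reference_generator" in item.keywords:
                item.add_marker("skip")


items = [Item(["reference_generator"]), Item(["other"])]
modify_items(False, items)
```

Registering the marker in `pytest_configure` keeps `--strict-markers` runs from rejecting the new mark.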
@@ -21,34 +21,25 @@ def get_qt_app() -> Application:
 
 def generate_isolated_updater(
     tmp_path: Path,
-    monkeypatch: MonkeyPatch,
-    app_mocker: Optional[MockerFixture] = None,
+    mocker: MockerFixture,
+    mock_app: bool = False,
 ) -> UpdaterThread:
     """Generate an Updater class with its own settings."""
-    if app_mocker:
-        app = app_mocker.MagicMock()
-    else:
-        app = get_qt_app()
+    app = mocker.MagicMock() if mock_app else get_qt_app()
 
     dummy = Dummy()
-    # XXX: We can monkey-patch global state without wrapping it in a context manager, or
-    # worrying that it will leak between tests, for two reasons:
-    #
-    # 1. Parallel tests in PyTest take place in different processes.
-    # 2. The monkeypatch fixture tears down the monkey-patch after each test ends.
-    monkeypatch.setattr(util, "get_config_dir", lambda: tmp_path)
+    mocker.patch("dangerzone.settings.get_config_dir", return_value=tmp_path)
     dangerzone = DangerzoneGui(app, isolation_provider=dummy)
     updater = UpdaterThread(dangerzone)
     return updater
 
 
 @pytest.fixture
-def updater(
-    tmp_path: Path, monkeypatch: MonkeyPatch, mocker: MockerFixture
-) -> UpdaterThread:
-    return generate_isolated_updater(tmp_path, monkeypatch, mocker)
+def updater(tmp_path: Path, mocker: MockerFixture) -> UpdaterThread:
+    return generate_isolated_updater(tmp_path, mocker, mock_app=True)
 
 
 @pytest.fixture
-def qt_updater(tmp_path: Path, monkeypatch: MonkeyPatch) -> UpdaterThread:
-    return generate_isolated_updater(tmp_path, monkeypatch)
+def qt_updater(tmp_path: Path, mocker: MockerFixture) -> UpdaterThread:
+    return generate_isolated_updater(tmp_path, mocker, mock_app=False)
@@ -33,17 +33,19 @@ def test_order_mime_handers() -> None:
         "LibreOffice",
     ]
 
-    with mock.patch(
-        "subprocess.check_output", return_value=b"libreoffice-draw.desktop"
-    ) as mock_default_mime_hander, mock.patch(
-        "os.listdir",
-        side_effect=[
-            ["org.gnome.Evince.desktop"],
-            ["org.pwmt.zathura-pdf-mupdf.desktop"],
-            ["libreoffice-draw.desktop"],
-        ],
-    ) as mock_list, mock.patch(
-        "dangerzone.gui.logic.DesktopEntry", return_value=mock_desktop
+    with (
+        mock.patch(
+            "subprocess.check_output", return_value=b"libreoffice-draw.desktop"
+        ) as mock_default_mime_hander,
+        mock.patch(
+            "os.listdir",
+            side_effect=[
+                ["org.gnome.Evince.desktop"],
+                ["org.pwmt.zathura-pdf-mupdf.desktop"],
+                ["libreoffice-draw.desktop"],
+            ],
+        ) as mock_list,
+        mock.patch("dangerzone.gui.logic.DesktopEntry", return_value=mock_desktop),
     ):
         dz = DangerzoneGui(mock_app, dummy)
 
@@ -77,18 +79,20 @@ def test_mime_handers_succeeds_no_default_found() -> None:
         "LibreOffice",
     ]
 
-    with mock.patch(
-        "subprocess.check_output",
-        side_effect=subprocess.CalledProcessError(1, "Oh no, xdg-mime error!)"),
-    ) as mock_default_mime_hander, mock.patch(
-        "os.listdir",
-        side_effect=[
-            ["org.gnome.Evince.desktop"],
-            ["org.pwmt.zathura-pdf-mupdf.desktop"],
-            ["libreoffice-draw.desktop"],
-        ],
-    ) as mock_list, mock.patch(
-        "dangerzone.gui.logic.DesktopEntry", return_value=mock_desktop
+    with (
+        mock.patch(
+            "subprocess.check_output",
+            side_effect=subprocess.CalledProcessError(1, "Oh no, xdg-mime error!)"),
+        ) as mock_default_mime_hander,
+        mock.patch(
+            "os.listdir",
+            side_effect=[
+                ["org.gnome.Evince.desktop"],
+                ["org.pwmt.zathura-pdf-mupdf.desktop"],
+                ["libreoffice-draw.desktop"],
+            ],
+        ) as mock_list,
+        mock.patch("dangerzone.gui.logic.DesktopEntry", return_value=mock_desktop),
     ):
         dz = DangerzoneGui(mock_app, dummy)
 
@@ -109,13 +113,16 @@ def test_malformed_desktop_entry_is_catched() -> None:
     mock_app = mock.MagicMock()
     dummy = mock.MagicMock()
 
-    with mock.patch("dangerzone.gui.logic.DesktopEntry") as mock_desktop, mock.patch(
-        "os.listdir",
-        side_effect=[
-            ["malformed.desktop", "another.desktop"],
-            [],
-            [],
-        ],
+    with (
+        mock.patch("dangerzone.gui.logic.DesktopEntry") as mock_desktop,
+        mock.patch(
+            "os.listdir",
+            side_effect=[
+                ["malformed.desktop", "another.desktop"],
+                [],
+                [],
+            ],
+        ),
     ):
         mock_desktop.side_effect = ParsingError("Oh noes!", "malformed.desktop")
         DangerzoneGui(mock_app, dummy)
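The test-suite refactor above swaps comma-chained `with a, b, c:` statements for the parenthesized multi-context form (Python 3.10+), which lets each `mock.patch` sit on its own line with its own `as` binding. A minimal self-contained demonstration of the same pattern, with illustrative return values rather than anything from the Dangerzone code:

```python
import os
import subprocess
from unittest import mock

# Parenthesized context managers: each patch is one entry in the tuple-like
# list, so adding or removing a patch is a one-line change.
with (
    mock.patch(
        "subprocess.check_output", return_value=b"libreoffice-draw.desktop"
    ) as mock_output,
    mock.patch("os.listdir", return_value=["a.desktop"]) as mock_list,
):
    # Both stdlib calls are intercepted while the block is active.
    out = subprocess.check_output(["xdg-mime", "query", "default", "application/pdf"])
    entries = os.listdir("/nonexistent")
```

Outside the block the patches are undone, but the captured values remain, which is exactly what the rewritten tests rely on for their assertions.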
@@ -7,9 +7,9 @@ from typing import List
 
 from pytest import MonkeyPatch, fixture
 from pytest_mock import MockerFixture
-from pytest_subprocess import FakeProcess
 from pytestqt.qtbot import QtBot
 
+from dangerzone import errors
 from dangerzone.document import Document
 from dangerzone.gui import MainWindow
 from dangerzone.gui import main_window as main_window_module
@@ -25,11 +25,8 @@ from dangerzone.gui.main_window import (
     WaitingWidgetContainer,
 )
 from dangerzone.gui.updater import UpdateReport, UpdaterThread
-from dangerzone.isolation_provider.container import (
-    Container,
-    NoContainerTechException,
-    NotAvailableContainerTechException,
-)
+from dangerzone.isolation_provider.container import Container
+from dangerzone.isolation_provider.dummy import Dummy
 
 from .test_updater import assert_report_equal, default_updater_settings
 
@@ -510,9 +507,9 @@ def test_not_available_container_tech_exception(
 ) -> None:
     # Setup
     mock_app = mocker.MagicMock()
-    dummy = mocker.MagicMock()
-    dummy.is_runtime_available.side_effect = NotAvailableContainerTechException(
+    dummy = Dummy()
+    fn = mocker.patch.object(dummy, "is_available")
+    fn.side_effect = errors.NotAvailableContainerTechException(
         "podman", "podman image ls logs"
     )
@@ -535,7 +532,7 @@ def test_no_container_tech_exception(qtbot: QtBot, mocker: MockerFixture) -> Non
     dummy = mocker.MagicMock()
 
     # Raise
-    dummy.is_runtime_available.side_effect = NoContainerTechException("podman")
+    dummy.is_available.side_effect = errors.NoContainerTechException("podman")
 
     dz = DangerzoneGui(mock_app, dummy)
     widget = WaitingWidgetContainer(dz)
@@ -590,3 +587,57 @@ def test_installation_failure_return_false(qtbot: QtBot, mocker: MockerFixture)
 
     assert "the following error occured" in widget.label.text()
     assert "The image cannot be found" in widget.traceback.toPlainText()
+
+
+def test_up_to_date_docker_desktop_does_nothing(
+    qtbot: QtBot, mocker: MockerFixture
+) -> None:
+    # Setup install to return False
+    mock_app = mocker.MagicMock()
+    dummy = mocker.MagicMock(spec=Container)
+    dummy.check_docker_desktop_version.return_value = (True, "1.0.0")
+    dz = DangerzoneGui(mock_app, dummy)
+
+    window = MainWindow(dz)
+    qtbot.addWidget(window)
+
+    menu_actions = window.hamburger_button.menu().actions()
+    assert "Docker Desktop should be upgraded" not in [
+        a.toolTip() for a in menu_actions
+    ]
+
+
+def test_outdated_docker_desktop_displays_warning(
+    qtbot: QtBot, mocker: MockerFixture
+) -> None:
+    # Setup install to return False
+    mock_app = mocker.MagicMock()
+    dummy = mocker.MagicMock(spec=Container)
+    dummy.check_docker_desktop_version.return_value = (False, "1.0.0")
+
+    dz = DangerzoneGui(mock_app, dummy)
+
+    load_svg_spy = mocker.spy(main_window_module, "load_svg_image")
+
+    window = MainWindow(dz)
+    qtbot.addWidget(window)
+
+    menu_actions = window.hamburger_button.menu().actions()
+    assert menu_actions[0].toolTip() == "Docker Desktop should be upgraded"
+
+    # Check that the hamburger icon has changed with the expected SVG image.
+    assert load_svg_spy.call_count == 4
+    assert (
+        load_svg_spy.call_args_list[2].args[0] == "hamburger_menu_update_dot_error.svg"
+    )
+
+    alert_spy = mocker.spy(window.alert, "launch")
+
+    # Clicking the menu item should open a warning message
+    def _check_alert_displayed() -> None:
+        alert_spy.assert_any_call()
+        if window.alert:
+            window.alert.close()
+
+    QtCore.QTimer.singleShot(0, _check_alert_displayed)
+    menu_actions[0].trigger()
@@ -48,9 +48,7 @@ def test_default_updater_settings(updater: UpdaterThread) -> None:
     )
 
 
-def test_pre_0_4_2_settings(
-    tmp_path: Path, monkeypatch: MonkeyPatch, mocker: MockerFixture
-) -> None:
+def test_pre_0_4_2_settings(tmp_path: Path, mocker: MockerFixture) -> None:
     """Check settings of installations prior to 0.4.2.
 
     Check that installations that have been upgraded from a version < 0.4.2 to >= 0.4.2
@@ -58,7 +56,7 @@ def test_pre_0_4_2_settings(
     in their settings.json file.
     """
     save_settings(tmp_path, default_settings_0_4_1())
-    updater = generate_isolated_updater(tmp_path, monkeypatch, mocker)
+    updater = generate_isolated_updater(tmp_path, mocker, mock_app=True)
     assert (
         updater.dangerzone.settings.get_updater_settings() == default_updater_settings()
     )
@@ -83,12 +81,10 @@ def test_post_0_4_2_settings(
     # version is 0.4.3.
     expected_settings = default_updater_settings()
     expected_settings["updater_latest_version"] = "0.4.3"
-    monkeypatch.setattr(
-        settings, "get_version", lambda: expected_settings["updater_latest_version"]
-    )
+    monkeypatch.setattr(settings, "get_version", lambda: "0.4.3")
 
     # Ensure that the Settings class will correct the latest version field to 0.4.3.
-    updater = generate_isolated_updater(tmp_path, monkeypatch, mocker)
+    updater = generate_isolated_updater(tmp_path, mocker, mock_app=True)
     assert updater.dangerzone.settings.get_updater_settings() == expected_settings
 
     # Simulate an updater check that found a newer Dangerzone version (e.g., 0.4.4).
@@ -118,9 +114,7 @@ def test_linux_no_check(updater: UpdaterThread, monkeypatch: MonkeyPatch) -> Non
     assert updater.dangerzone.settings.get_updater_settings() == expected_settings
 
 
-def test_user_prompts(
-    updater: UpdaterThread, monkeypatch: MonkeyPatch, mocker: MockerFixture
-) -> None:
+def test_user_prompts(updater: UpdaterThread, mocker: MockerFixture) -> None:
     """Test prompting users to ask them if they want to enable update checks."""
     # First run
     #
@@ -370,8 +364,6 @@ def test_update_errors(
 def test_update_check_prompt(
     qtbot: QtBot,
     qt_updater: UpdaterThread,
-    monkeypatch: MonkeyPatch,
-    mocker: MockerFixture,
 ) -> None:
     """Test that the prompt to enable update checks works properly."""
     # Force Dangerzone to check immediately for updates
@@ -1,16 +1,15 @@
 import os
+import platform
 
 import pytest
 from pytest_mock import MockerFixture
 from pytest_subprocess import FakeProcess
 
-from dangerzone.isolation_provider.container import (
-    Container,
-    ImageInstallationException,
-    ImageNotPresentException,
-    NotAvailableContainerTechException,
-)
+from dangerzone import errors
+from dangerzone.container_utils import Runtime
+from dangerzone.isolation_provider.container import Container
 from dangerzone.isolation_provider.qubes import is_qubes_native_conversion
+from dangerzone.util import get_resource_path
 
 from .base import IsolationProviderTermination, IsolationProviderTest
@@ -26,96 +25,196 @@ def provider() -> Container:
     return Container()
 
 
+@pytest.fixture
+def runtime_path() -> str:
+    return str(Runtime().path)
+
+
 class TestContainer(IsolationProviderTest):
-    def test_is_runtime_available_raises(
-        self, provider: Container, fp: FakeProcess
+    def test_is_available_raises(
+        self, provider: Container, fp: FakeProcess, runtime_path: str
     ) -> None:
         """
         NotAvailableContainerTechException should be raised when
         the "podman image ls" command fails.
         """
         fp.register_subprocess(
-            [provider.get_runtime(), "image", "ls"],
+            [runtime_path, "image", "ls"],
             returncode=-1,
             stderr="podman image ls logs",
         )
-        with pytest.raises(NotAvailableContainerTechException):
-            provider.is_runtime_available()
+        with pytest.raises(errors.NotAvailableContainerTechException):
+            provider.is_available()
 
-    def test_is_runtime_available_works(
-        self, provider: Container, fp: FakeProcess
+    def test_is_available_works(
+        self, provider: Container, fp: FakeProcess, runtime_path: str
     ) -> None:
         """
         No exception should be raised when the "podman image ls" can return properly.
         """
         fp.register_subprocess(
-            [provider.get_runtime(), "image", "ls"],
+            [runtime_path, "image", "ls"],
         )
-        provider.is_runtime_available()
+        provider.is_available()
 
     def test_install_raise_if_image_cant_be_installed(
-        self, mocker: MockerFixture, provider: Container, fp: FakeProcess
+        self, provider: Container, fp: FakeProcess, runtime_path: str
    ) -> None:
         """When an image installation fails, an exception should be raised"""
 
         fp.register_subprocess(
-            [provider.get_runtime(), "image", "ls"],
+            [runtime_path, "image", "ls"],
         )
 
         # First check should return nothing.
         fp.register_subprocess(
             [
-                provider.get_runtime(),
+                runtime_path,
                 "image",
                 "list",
                 "--format",
-                "{{.ID}}",
+                "{{ .Tag }}",
                 "dangerzone.rocks/dangerzone",
             ],
             occurrences=2,
         )
 
-        # Make podman load fail
-        mocker.patch("gzip.open", mocker.mock_open(read_data=""))
-
         fp.register_subprocess(
-            [provider.get_runtime(), "load"],
+            [
+                runtime_path,
+                "load",
+                "-i",
+                get_resource_path("container.tar").absolute(),
+            ],
             returncode=-1,
         )
 
-        with pytest.raises(ImageInstallationException):
+        with pytest.raises(errors.ImageInstallationException):
             provider.install()
 
     def test_install_raises_if_still_not_installed(
-        self, mocker: MockerFixture, provider: Container, fp: FakeProcess
+        self, provider: Container, fp: FakeProcess, runtime_path: str
     ) -> None:
         """When an image keep being not installed, it should return False"""
+        fp.register_subprocess(
+            [runtime_path, "version", "-f", "{{.Client.Version}}"],
+            stdout="4.0.0",
+        )
+
         fp.register_subprocess(
-            [provider.get_runtime(), "image", "ls"],
+            [runtime_path, "image", "ls"],
         )
 
         # First check should return nothing.
         fp.register_subprocess(
             [
-                provider.get_runtime(),
+                runtime_path,
                 "image",
                 "list",
                 "--format",
-                "{{.ID}}",
+                "{{ .Tag }}",
                 "dangerzone.rocks/dangerzone",
             ],
             occurrences=2,
         )
 
-        # Patch gzip.open and podman load so that it works
-        mocker.patch("gzip.open", mocker.mock_open(read_data=""))
         fp.register_subprocess(
-            [provider.get_runtime(), "load"],
+            [
+                runtime_path,
+                "load",
+                "-i",
+                get_resource_path("container.tar").absolute(),
+            ],
         )
-        with pytest.raises(ImageNotPresentException):
+        with pytest.raises(errors.ImageNotPresentException):
             provider.install()
 
+    @pytest.mark.skipif(
+        platform.system() not in ("Windows", "Darwin"),
+        reason="macOS and Windows specific",
+    )
+    def test_old_docker_desktop_version_is_detected(
+        self, mocker: MockerFixture, provider: Container, fp: FakeProcess
+    ) -> None:
+        fp.register_subprocess(
+            [
+                "docker",
+                "version",
+                "--format",
+                "{{.Server.Platform.Name}}",
+            ],
+            stdout="Docker Desktop 1.0.0 (173100)",
+        )
+
+        mocker.patch(
+            "dangerzone.isolation_provider.container.MINIMUM_DOCKER_DESKTOP",
+            {"Darwin": "1.0.1", "Windows": "1.0.1"},
+        )
+        assert (False, "1.0.0") == provider.check_docker_desktop_version()
+
+    @pytest.mark.skipif(
+        platform.system() not in ("Windows", "Darwin"),
+        reason="macOS and Windows specific",
+    )
+    def test_up_to_date_docker_desktop_version_is_detected(
+        self, mocker: MockerFixture, provider: Container, fp: FakeProcess
+    ) -> None:
+        fp.register_subprocess(
+            [
+                "docker",
+                "version",
+                "--format",
+                "{{.Server.Platform.Name}}",
+            ],
+            stdout="Docker Desktop 1.0.1 (173100)",
+        )
+
+        # Require version 1.0.1
+        mocker.patch(
+            "dangerzone.isolation_provider.container.MINIMUM_DOCKER_DESKTOP",
+            {"Darwin": "1.0.1", "Windows": "1.0.1"},
+        )
+        assert (True, "1.0.1") == provider.check_docker_desktop_version()
+
+        fp.register_subprocess(
+            [
+                "docker",
+                "version",
+                "--format",
+                "{{.Server.Platform.Name}}",
+            ],
+            stdout="Docker Desktop 2.0.0 (173100)",
+        )
+        assert (True, "2.0.0") == provider.check_docker_desktop_version()
+
+    @pytest.mark.skipif(
+        platform.system() not in ("Windows", "Darwin"),
+        reason="macOS and Windows specific",
+    )
+    def test_docker_desktop_version_failure_returns_true(
+        self, mocker: MockerFixture, provider: Container, fp: FakeProcess
+    ) -> None:
+        fp.register_subprocess(
+            [
+                "docker",
+                "version",
+                "--format",
+                "{{.Server.Platform.Name}}",
+            ],
+            stderr="Oopsie",
+            returncode=1,
+        )
+        assert provider.check_docker_desktop_version() == (True, "")
+
+    @pytest.mark.skipif(
+        platform.system() != "Linux",
+        reason="Linux specific",
+    )
+    def test_linux_skips_desktop_version_check_returns_true(
+        self, provider: Container
+    ) -> None:
+        assert (True, "") == provider.check_docker_desktop_version()
+
 
 class TestContainerTermination(IsolationProviderTermination):
     pass
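The Docker Desktop checks above compare a reported version string against a per-platform minimum. A hedged sketch of that dotted-version comparison (the helper name is ours, not Dangerzone's):

```python
def version_at_least(version: str, minimum: str) -> bool:
    """Compare dotted version strings numerically, so "10.0.0" > "9.0.0"."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(version) >= parse(minimum)


# Mirrors the expectations in the tests above, with minimum "1.0.1":
print(version_at_least("1.0.0", "1.0.1"))  # False -> the (False, "1.0.0") case
print(version_at_least("2.0.0", "1.0.1"))  # True  -> the (True, "2.0.0") case
```

Comparing parsed integer tuples avoids the classic lexicographic trap where `"10.0.0" < "9.0.0"` as plain strings.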
@@ -7,10 +7,13 @@ import platform
 import shutil
 import sys
 import tempfile
+import time
 import traceback
 from pathlib import Path
 from typing import Optional, Sequence
 
+import fitz
+import numpy as np
 import pytest
 from click.testing import CliRunner, Result
 from pytest_mock import MockerFixture
@@ -190,11 +193,68 @@ class TestCliConversion(TestCliBasic):
         result = self.run_cli([sample_pdf, "--ocr-lang", "piglatin"])
         result.assert_failure()
 
+    @pytest.mark.reference_generator
     @for_each_doc
-    def test_formats(self, doc: Path) -> None:
-        result = self.run_cli(str(doc))
+    def test_regenerate_reference(self, doc: Path) -> None:
+        reference = (doc.parent / "reference" / doc.stem).with_suffix(".pdf")
+
+        result = self.run_cli([str(doc), "--output-filename", str(reference)])
         result.assert_success()
 
+    @for_each_doc
+    def test_formats(self, doc: Path, tmp_path_factory: pytest.TempPathFactory) -> None:
+        reference = (doc.parent / "reference" / doc.stem).with_suffix(".pdf")
+        destination = tmp_path_factory.mktemp(doc.stem).with_suffix(".pdf")
+
+        result = self.run_cli([str(doc), "--output-filename", str(destination)])
+        result.assert_success()
+
+        # Do not check against reference versions when using a dummy isolation provider
+        if os.environ.get("DUMMY_CONVERSION", False):
+            return
+
+        converted = fitz.open(destination)
+        ref = fitz.open(reference)
+        errors = []
+        if len(converted) != len(ref):
+            errors.append("different number of pages")
+
+        diffs = doc.parent / "diffs"
+        diffs.mkdir(parents=True, exist_ok=True)
+        for page, ref_page in zip(converted, ref):
+            curr_pixmap = page.get_pixmap(dpi=150)
+            ref_pixmap = ref_page.get_pixmap(dpi=150)
+            if curr_pixmap.tobytes() != ref_pixmap.tobytes():
+                errors.append(f"page {page.number} differs")
+
+                t0 = time.perf_counter()
+
+                arr_ref = np.frombuffer(ref_pixmap.samples, dtype=np.uint8).reshape(
+                    ref_pixmap.height, ref_pixmap.width, ref_pixmap.n
+                )
+                arr_curr = np.frombuffer(curr_pixmap.samples, dtype=np.uint8).reshape(
+                    curr_pixmap.height, curr_pixmap.width, curr_pixmap.n
+                )
+
+                # Find differences (any channel differs)
+                diff = (arr_ref != arr_curr).any(axis=2)
+
+                # Get coordinates of differences
+                diff_coords = np.where(diff)
+                # Mark differences in red
+                for y, x in zip(diff_coords[0], diff_coords[1]):
+                    # Note: PyMuPDF's set_pixel takes (x, y) not (y, x)
+                    ref_pixmap.set_pixel(int(x), int(y), (255, 0, 0))  # Red
+
+                t1 = time.perf_counter()
+                print(f"diff took {t1 - t0} seconds")
+                ref_pixmap.save(diffs / f"{destination.stem}_{page.number}.jpeg")
+
+        if len(errors) > 0:
+            raise AssertionError(
+                f"The resulting document differs from the reference. See {str(diffs)} for a visual diff."
+            )
+
     def test_output_filename(self, sample_pdf: str) -> None:
         temp_dir = tempfile.mkdtemp(prefix="dangerzone-")
         output_filename = str(Path(temp_dir) / "safe.pdf")
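The red-marking logic in the new `test_formats` boils down to a boolean mask over two image arrays. A self-contained sketch with tiny synthetic images (the real test gets its arrays from PyMuPDF pixmaps instead):

```python
import numpy as np

# Two 2x2 RGB "pages"; exactly one pixel differs.
ref = np.zeros((2, 2, 3), dtype=np.uint8)
cur = ref.copy()
cur[0, 1] = (10, 20, 30)

# True wherever any colour channel differs.
diff = (ref != cur).any(axis=2)

# Paint the differing pixels red on a copy of the reference.
marked = ref.copy()
ys, xs = np.where(diff)
marked[ys, xs] = (255, 0, 0)

print(int(diff.sum()))        # 1 differing pixel
print(marked[0, 1].tolist())  # [255, 0, 0]
```

Vectorising the comparison with `.any(axis=2)` gives one boolean per pixel; only the per-pixel red marking needs a Python loop in the real test, because PyMuPDF's `set_pixel` is called pixel by pixel.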
@@ -335,6 +395,7 @@ class TestCliConversion(TestCliBasic):
 
 class TestExtraFormats(TestCli):
     @for_each_external_doc("*hwp*")
+    @pytest.mark.flaky(reruns=2)
     def test_hancom_office(self, doc: str) -> None:
         if is_qubes_native_conversion():
             pytest.skip("HWP / HWPX formats are not supported on this platform")
tests/test_container_utils.py (new file, 60 lines)
@@ -0,0 +1,60 @@
+from pathlib import Path
+
+import pytest
+from pytest_mock import MockerFixture
+
+from dangerzone import errors
+from dangerzone.container_utils import Runtime
+from dangerzone.settings import Settings
+
+
+def test_get_runtime_name_from_settings(mocker: MockerFixture, tmp_path: Path) -> None:
+    mocker.patch("dangerzone.settings.get_config_dir", return_value=tmp_path)
+    mocker.patch("dangerzone.container_utils.Path.exists", return_value=True)
+
+    settings = Settings()
+    settings.set("container_runtime", "/opt/somewhere/docker", autosave=True)
+
+    assert Runtime().name == "docker"
+
+
+def test_get_runtime_name_linux(mocker: MockerFixture, tmp_path: Path) -> None:
+    mocker.patch("dangerzone.settings.get_config_dir", return_value=tmp_path)
+    mocker.patch("platform.system", return_value="Linux")
+    mocker.patch(
+        "dangerzone.container_utils.shutil.which", return_value="/usr/bin/podman"
+    )
+    mocker.patch("dangerzone.container_utils.os.path.exists", return_value=True)
+    runtime = Runtime()
+    assert runtime.name == "podman"
+    assert runtime.path == Path("/usr/bin/podman")
+
+
+def test_get_runtime_name_non_linux(mocker: MockerFixture, tmp_path: Path) -> None:
+    mocker.patch("platform.system", return_value="Windows")
+    mocker.patch("dangerzone.settings.get_config_dir", return_value=tmp_path)
+    mocker.patch(
+        "dangerzone.container_utils.shutil.which", return_value="/usr/bin/docker"
+    )
+    mocker.patch("dangerzone.container_utils.os.path.exists", return_value=True)
+    runtime = Runtime()
+    assert runtime.name == "docker"
+    assert runtime.path == Path("/usr/bin/docker")
+
+    mocker.patch("platform.system", return_value="Something else")
+
+    runtime = Runtime()
+    assert runtime.name == "docker"
+    assert runtime.path == Path("/usr/bin/docker")
+    assert Runtime().name == "docker"
+
+
+def test_get_unsupported_runtime_name(mocker: MockerFixture, tmp_path: Path) -> None:
+    mocker.patch("dangerzone.settings.get_config_dir", return_value=tmp_path)
+    settings = Settings()
+    settings.set(
+        "container_runtime", "/opt/somewhere/new-kid-on-the-block", autosave=True
+    )
+
+    with pytest.raises(errors.UnsupportedContainerRuntime):
+        assert Runtime().name == "new-kid-on-the-block"
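These tests suggest the runtime name is derived from the basename of the configured binary path and validated against a known set. A hypothetical sketch of that derivation (the function name, constant, and error type here are ours, not Dangerzone's):

```python
from pathlib import Path

# Hypothetical allow-list mirroring what the tests above exercise.
SUPPORTED_RUNTIMES = ("docker", "podman")


def runtime_name_from_path(binary: str) -> str:
    # "/opt/somewhere/docker" -> "docker"; .stem also drops a ".exe"
    # suffix on Windows-style paths.
    name = Path(binary).stem
    if name not in SUPPORTED_RUNTIMES:
        raise ValueError(f"unsupported container runtime: {name}")
    return name


print(runtime_name_from_path("/opt/somewhere/docker"))  # docker
print(runtime_name_from_path("/usr/bin/podman"))        # podman
```

With this shape, "/opt/somewhere/new-kid-on-the-block" fails the allow-list check, matching the `UnsupportedContainerRuntime` expectation in the last test.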
BIN  tests/test_docs/reference/sample-bmp.pdf  (new file; binary not shown)
BIN  tests/test_docs/reference/sample-doc.pdf  (new file; binary not shown)
BIN  tests/test_docs/reference/sample-docm.pdf  (new file; binary not shown)
BIN  tests/test_docs/reference/sample-docx.pdf  (new file; binary not shown)
BIN  tests/test_docs/reference/sample-epub.pdf  (new file; binary not shown)
BIN  tests/test_docs/reference/sample-gif.pdf  (new file; binary not shown)
BIN  tests/test_docs/reference/sample-jpg.pdf  (new file; binary not shown)
BIN  tests/test_docs/reference/sample-mime-application-zip.pdf  (new file; binary not shown)
BIN  tests/test_docs/reference/sample-mime-octet-stream.pdf  (new file; binary not shown)
BIN  tests/test_docs/reference/sample-mime-spreadsheet-template.pdf  (new file; binary not shown)
BIN  tests/test_docs/reference/sample-mime-text-template.pdf  (new file; binary not shown)
BIN  tests/test_docs/reference/sample-mime-x-ole-storage.pdf  (new file; binary not shown)
BIN  tests/test_docs/reference/sample-odg.pdf  (new file; binary not shown)
BIN  tests/test_docs/reference/sample-odp.pdf  (new file; binary not shown)
BIN  tests/test_docs/reference/sample-ods.pdf  (new file; binary not shown)
BIN  tests/test_docs/reference/sample-odt-mp4.pdf  (new file; binary not shown)
BIN  tests/test_docs/reference/sample-odt.pdf  (new file; binary not shown)
BIN  tests/test_docs/reference/sample-pbm.pdf  (new file; binary not shown)
BIN  tests/test_docs/reference/sample-pdf.pdf  (new file; binary not shown)
BIN  tests/test_docs/reference/sample-png.pdf  (new file; binary not shown)
BIN  tests/test_docs/reference/sample-pnm.pdf  (new file; binary not shown)
Some files were not shown because too many files have changed in this diff.