It seems that the container image for Ubuntu 24.10 also ships with a
default Ubuntu user with UID 1000, so we need to remove it when creating
our dev environment.
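For illustration, the removal could look like the following in our
environment setup (a sketch; the `ubuntu` username and the guard are
assumptions based on the default image):

```bash
# Sketch: drop the stock UID-1000 user, if the base image ships one.
# The "ubuntu" username is an assumption about the default image.
if id ubuntu >/dev/null 2>&1; then
    userdel --remove ubuntu
fi
```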
Try installing `passt`, which is responsible for user networking in
later Podman releases. If not installed, building the container image
within an Ubuntu 24.10 environment fails with:
setup network: could not find pasta, the network namespace can't be
configured: exec: "pasta": executable file not found in $PATH
Note that this package is not available in older Ubuntu versions. In
these cases, we should swallow installation failures and continue.
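For instance, the install step can tolerate a missing package like so
(a sketch; the exact invocation in our scripts may differ):

```bash
# Sketch: install pasta where the package exists, but don't fail the
# build on older Ubuntu releases that don't ship it.
apt-get install -y passt || true
```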
Install PyMuPDF under ./dangerzone/vendor, right before we build the
.deb package. We vendor PyMuPDF just for Debian, since the versions it
provides don't have OCR support enabled.
Currently, we don't use PyMuPDF on the host, but this will change once
we fully implement the on-host conversion feature.
Refs #625
Add a script that installs PyMuPDF under ./dangerzone/vendor. This will
be useful in subsequent commits, for vendoring PyMuPDF when building
Debian packages.
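Conceptually, vendoring boils down to a targeted pip install (a
sketch; pinning and cleanup details are omitted):

```bash
# Sketch: install PyMuPDF into the vendored directory instead of
# site-packages, so that the .deb build can pick it up from there.
python3 -m pip install --target dangerzone/vendor PyMuPDF
```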
The PyMuPDF wheels for version 1.24.11 changed the way they are built,
which means we have to adapt our Dockerfile in order to install them
properly.
Remove the installation steps for Xvfb, since it's already included in
GitHub Actions runners, and fire up an Xvfb server with disabled
host-based access control.
Initially, we tried to wrap our CI tests with `xvfb-run`, but any
X11 client within our Podman container failed with the following error
message:
Authorization required, but no authorization protocol specified.
This error message is usually thrown when the X11 client does not
provide the magic cookie in the Xauthority file back to the X11 server.
In our case though, we can verify that commands in our Podman container
read the Xauthority file successfully:
socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC, 0) = 3
connect(3, {sa_family=AF_UNIX, sun_path=@"/tmp/.X11-unix/X99"}, 21) = -1 ECONNREFUSED (Connection refused)
close(3) = 0
socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC, 0) = 3
getsockopt(3, SOL_SOCKET, SO_SNDBUF, [212992], [4]) = 0
connect(3, {sa_family=AF_UNIX, sun_path="/tmp/.X11-unix/X99"}, 110) = 0
getpeername(3, {sa_family=AF_UNIX, sun_path="/tmp/.X11-unix/X99"}, [124->21]) = 0
uname({sysname="Linux", nodename="dangerzone-dev", ...}) = 0
access("/home/runner/work/dangerzone/dangerzone/cookie", R_OK) = 0
openat(AT_FDCWD, "/home/runner/work/dangerzone/dangerzone/cookie", O_RDONLY) = 4
fstat(4, {st_mode=S_IFREG|0600, st_size=59, ...}) = 0
read(4, "\1\0\0\rfv-az1915-957\0\299\0\22MIT-MAGIC"..., 4096) = 59
read(4, "", 4096) = 0
close(4) = 0
fcntl(3, F_GETFL) = 0x2 (flags O_RDWR)
fcntl(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0
fcntl(3, F_SETFD, FD_CLOEXEC) = 0
poll([{fd=3, events=POLLIN|POLLOUT}], 1, -1) = 1 ([{fd=3, revents=POLLOUT}])
writev(3, [{iov_base="l\0\v\0\0\0\0\0\0\0\0\0", iov_len=12}, {iov_base="", iov_len=0}], 2) = 12
recvfrom(3, 0x55a5635c0050, 8, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
poll([{fd=3, events=POLLIN}], 1, -1) = 1 ([{fd=3, revents=POLLIN}])
recvfrom(3, "\0@\v\0\0\0\20\0", 8, 0, NULL, NULL) = 8
recvfrom(3, "Authorization required, but no a"..., 64, 0, NULL, NULL) = 64
write(2, "Authorization required, but no a"..., 64Authorization required, but no authorization protocol specified
) = 64
The line with the magic cookie is:
read(4, "\1\0\0\rfv-az1915-957\0\299\0\22MIT-MAGIC"..., 4096) = 59
Since we are not sure why we are not allowed access to the X11 server
from the Podman container, we decided to disable host-based access
controls altogether. This is not a security concern, since this X11
session is a remote one. However, we shouldn't run tests this way on
dev machines.
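For reference, disabling host-based access control amounts to passing
`-ac` to Xvfb (a sketch; the display number and screen geometry are
arbitrary):

```bash
# Sketch: start Xvfb with access control disabled (-ac), so X11
# clients in the Podman container can connect without a cookie.
Xvfb :99 -screen 0 1024x768x24 -ac &
export DISPLAY=:99
```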
Fixes #949
Do not use the `provider_wait` fixture in our termination logic tests,
and switch instead to the `provider` fixture, which instantiates a
typical isolation provider.
The `provider_wait` fixture's goal was to emulate how the process
would behave if it had fully spawned. In practice, this masked some
termination logic issues that became apparent in the WIP on-host
conversion PR. Now that we kill the spawned process via its process
group, we can just use the default isolation provider in our tests.
Concretely, in this PR we just do `s/provider_wait/provider/` and
remove some stale code.
Add a fixture that returns our stock Dummy provider. Also, explicitly
use a blocking Dummy provider (`DummyWait`) for a specific test case.
This will prove useful when we stop using the `provider_wait` variant of
our isolation providers in the next commits.
Make the dummy provider behave a bit more like the other providers, with
a proper function and termination logic. This will be helpful soon in
the tests.
Instead of killing just the invoked Podman/Docker/qrexec process, kill
the whole process group, to make sure that other components that have
been spawned die as well. In the case of Podman, conmon is one of the
processes that lingers, so that's one way to kill it.
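In shell terms, killing the whole group rather than a single PID looks
roughly like this (a sketch; our code does the equivalent from
Python):

```bash
# Sketch: kill the entire process group of $PID (note the negative
# PGID), so that helpers like conmon die along with the main process.
PGID=$(ps -o pgid= -p "$PID" | tr -d ' ')
kill -KILL -- "-$PGID"
```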
Start the conversion process in a new session, so that we can later on
kill the process group, without killing the controlling script (i.e.,
the Dangerzone UI). This should not affect the conversion process in any
other way.
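The shell analogue of starting the process in a new session is
`setsid` (a sketch; the image name is a stand-in for the real
conversion command):

```bash
# Sketch: setsid detaches the child into its own session, so killing
# its process group later cannot take down the controlling process.
setsid podman run --rm conversion-image &
```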
GitHub provides an Intel macOS runner as `macos-13`. Add it alongside
our M1 macOS runner (`macos-latest`), in order to cover all of our
target environments.
It seems that we need to specify that Python files have LF line endings
on Windows environments, or else they will get converted to CRLF. If
this happens, the container image we build in this environment will
have Python files with the wrong line endings, and tests will break.
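A `.gitattributes` rule along these lines enforces that (a sketch of
the rule, not necessarily the exact file we ship):

```bash
# Sketch: pin LF endings for Python files, so that Windows checkouts
# don't convert them to CRLF and break the container image.
echo '*.py text eol=lf' >> .gitattributes
```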
Refs #838 for previous attempt.
Make sure that the Debian package we build conforms to the expected
naming scheme; otherwise, it's possible that something is off. A scenario
we've encountered is bumping `share/version.txt`, but not
`debian/changelog`, which would create a Debian package with an older
version.
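A minimal version of such a check could compare the two sources of
truth (a sketch; the output path and naming scheme are assumptions):

```bash
# Sketch: fail if the .deb filename doesn't embed the version from
# share/version.txt (e.g., due to a stale debian/changelog).
version=$(cat share/version.txt)
deb=$(ls deb_dist/dangerzone_*.deb)
case "$deb" in
    *"dangerzone_${version}"*) ;;
    *) echo "Unexpected package name: $deb" >&2; exit 1 ;;
esac
```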
As part of this change, the dev (build) and end-user test image names
changed from `dangerzone.rocks/*` to `ghcr.io`.
A new `--sync` option is provided in the `env.py` command, in order to
retrieve the images from the registry, or build and upload them
otherwise.
As per Etienne Perot's comment on #908:
> Then it seems to me like it would be easy to simply apply this seccomp
> profile under all container runtimes (since there's no reason why the
> same image and the same command-line would call different syscalls under
> different container runtimes).
Docker Desktop 4.30.0 uses the containerd image store by default, which
generates different IDs for the images, and as a result breaks the logic
we use to verify that the image IDs are present.
Now, multiple IDs can be stored in the `image-id.txt` file.
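Verification can then succeed if any stored ID matches (a sketch; the
file path and lookup command are assumptions):

```bash
# Sketch: accept the image if any ID listed in image-id.txt exists
# locally, instead of expecting a single canonical ID.
found=0
while read -r id; do
    if podman image exists "$id"; then found=1; break; fi
done < share/image-id.txt
[ "$found" -eq 1 ] || { echo "Image ID not found" >&2; exit 1; }
```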
Fixes #933
Switch to using the `WixUI_InstallDir` dialog set in the Windows
installer, and add the `WIXUI_INSTALLDIR` property it needs, to let the
user choose where Dangerzone is installed.
Resolves #148
Update the gVisor design doc, to better reflect the current state of the
gVisor integration. More specifically, the following have changed since
this design doc was merged:
* We have dropped the need for the `SETFCAP` capability.
* We have added the SELinux label `container_engine_t` to the outer
container.
Use "podman" when on Linux, and "docker" otherwise.
This commit also adds a text widget to the interface, showing the actual
content of the error that occurred, to help debug further if needed.
Fixes #212
Previously, these files were stored inside the repository (under
`dev_scripts/envs/`), which could lead to conflicts with some tooling
(black, debhelper).
(Linux only): as a convenience, here is how to move data to the new
location:
```bash
mkdir -p ~/.local/share/dangerzone-dev
mv dev_scripts/envs/ ~/.local/share/dangerzone-dev/.
```
As a result, a new `debian` folder now lives in the repository. Debian
packaging is now done manually, rather than with tools that do the
heavy lifting for us.
The `build-deb.py` script has also been updated to use
`dpkg-buildpackage`.
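A typical unsigned invocation looks like this (a sketch; the script's
exact flags may differ):

```bash
# Sketch: build the binary packages without signing the result.
dpkg-buildpackage -us -uc -b
```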
Do not use the builder cache by default when we build the Dangerzone
container image. This way, we always get the freshest result when we
run the `./install/common/build-image.py` command.
If a dev wants to speed up non-release builds, they can pass the
`--use-cache` flag to use the builder cache.
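Usage then looks like this (the flag name and script path are as
stated above):

```bash
# Fresh, uncached build (the new default):
./install/common/build-image.py

# Faster non-release build, reusing the builder cache:
./install/common/build-image.py --use-cache
```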
DirectFS is enabled by default in gVisor to improve I/O performance,
but comes at the cost of enabling the `openat(2)` syscall (with severe
restrictions, but still). Since Dangerzone is not performance-sensitive,
and it is desirable to guarantee that the document conversion process
cannot open any files (to mimic some of what SELinux provides), we
might as well disable it by default.
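With `runsc`, this corresponds to turning the flag off (a sketch;
`--directfs` being a boolean runsc flag is the assumption here):

```bash
# Sketch: disable DirectFS, so the sandboxed process falls back to
# gofer-mediated file access instead of direct host file handles.
runsc --directfs=false do /bin/true
```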
See #226.