
feat: Add support for x86 devices running balenaOS #2409

Closed
nicomiguelino wants to merge 13 commits into
Screenly:master from nicomiguelino:x11-for-x86

Conversation


@nicomiguelino nicomiguelino commented Jul 28, 2025

Issues Fixed

  • The connected display doesn't show the current active asset.
  • The display is stuck at displaying the balenaOS logo.

Description

Checklist (Detailed)

  • Playback of image assets works.
  • Playback of web assets works.
  • Playback of video assets works.

Checklist (General)

  • I have performed a self-review of my own code.
  • New and existing unit tests pass locally and on CI with my changes.
  • I have done an end-to-end test on Raspberry Pi devices.
  • I have tested my changes on x86 devices.
  • I have added documentation for my changes (when necessary).

Comment thread bin/start_viewer.sh Outdated

vpetersson added a commit that referenced this pull request May 11, 2026
* feat(x86): support balenaOS x86 fleets via Wayland (#2075)

Brings x86 to feature parity with Pi for balenaOS deployments.
balenaOS x86 doesn't expose /dev/fb0, so Qt's linuxfb plugin (used on
Pi) has nothing to draw to and there's no host display server. Run Qt
under Wayland via `cage`, a kiosk wlroots compositor that talks
directly to KMS — no X server, no DISPLAY juggling, single-app by
design.

- bin/deploy_to_balena.sh accepts -b x86 and strips /dev/vchiq from
  the rendered compose (same conditional that already covers pi5).
- docker/Dockerfile.viewer.j2 sets QT_QPA_PLATFORM=wayland on x86;
  every other board keeps linuxfb.
- tools/image_builder/utils.py adds cage + qt6-wayland to the x86
  viewer apt list.
- bin/start_viewer.sh wraps the viewer launch in `cage --` on x86;
  WAYLAND_DISPLAY is added to sudo's --preserve-env so it survives
  the env scrub when dropping to the viewer user.
- .github/workflows/build-balena-disk-image.yaml extends the
  release-driven preflight, balena-cloud-deploy, and
  balena-build-images jobs to include x86 (fleet anthias-x86, balena
  device type genericx86-64-ext). build-rpi-imager-json is
  unchanged: the .img.zst regex is Pi-only, so x86 ships on the
  release without polluting the Raspberry Pi Imager JSON.

Supersedes the stale draft PR #2409. The orphaned changes there
(home.tsx deviceModel fetch with no consumer, viewer/media_player.py
x86 audio table, silent removal of sha256sum -c on the webview
tarball) are intentionally not carried forward.

Closes #2075

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(x86): note x86 wayland exception in viewer apt comment

Address Copilot review on PR #2857. The earlier comment in
get_viewer_context claimed "nothing wayland-related here" — that's
no longer true once x86 pulls in cage + qt6-wayland a few lines
down. Rewrite to call out x86 as the one board that breaks the rule
so future cleanup doesn't try to drop the wayland deps thinking they
were a mistake.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
vpetersson added a commit that referenced this pull request May 11, 2026
refactor(ci): release flow per #2769 (master = testing, releases = stable) (#2854)

* refactor(ci): release flow per #2769 (master = testing, releases = stable)

Master push now publishes container images only. Balena cloud deploy
and disk-image build move to a release-triggered workflow so existing
fleet devices update on cut releases instead of every merge to master.
rpi-imager.json is generated once per release and shipped as a release
asset; the website fetches it at build time instead of regenerating
from the GitHub API on every deploy.

- docker-build.yaml: drop the balena: job
- build-balena-disk-image.yaml: trigger on release.published, add
  balena-cloud-deploy job (replaces deprecated deploy-to-balena-action),
  bump balena-cli 22.4.15 -> 25.1.3, install via bun, two-phase release
  upload so build_pi_imager_json sees per-board snippets
- deploy-website.yaml: drop rpi-imager.json regeneration + test job;
  fetch it from the latest release instead
- build_pi_imager_json.py: honour RELEASE_TAG env to bypass
  /releases/latest (which excludes prereleases by design)

Also strips third-party action dependencies from new code (manual
docker login, bun install, balena-cli install).

Refs #2769

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(ci): address Copilot review on PR #2854

- deploy-website: download rpi-imager.json by tag on release-triggered
  runs (previously: always default-latest, which can skip prereleases
  and may not match the just-published release)
- deploy-website: drop the now-stale prerelease comment
- build-balena-disk-image: pin Bun via BUN_VERSION env so disk-image
  builds and balena deploys are reproducible
- generate-openapi-schema: accept an optional `ref` input via
  workflow_call and check that out, so the schema attached to a
  release matches the release commit (not the default branch)
- python-lint: run rpi-imager generator tests so the package keeps a
  PR-time CI gate after the deploy-website test job was removed
- build_pi_imager_json: reword RELEASE_TAG-override comment

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(ci): address Copilot round-2 review on PR #2854

- build-balena-disk-image: capture BUILD_DATE once at the top of the
  packaging step so a midnight-spanning run can't reference different
  filenames produced earlier
- build-balena-disk-image: workflow_dispatch now fails loudly when
  the input tag has no existing GitHub release, matching the input
  contract; release event always satisfies it on its own trigger
- bun install: extract to .github/workflows/scripts/install-bun.sh,
  which downloads the pinned release archive + SHASUMS256.txt and
  verifies SHA-256 instead of piping a remote shell script to bash
- deploy-website: re-introduce the strong jq -e validations on
  rpi-imager.json (os_list array, required fields, numeric sizes,
  https URLs, no pi1) so a malformed release asset fails fast
- resolve-context: drop the unused `commit` output
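The checksum-verified installer step can be sketched as below. This is a hedged approximation of what install-bun.sh might do; the `verify_archive` helper and file names are illustrative, not taken from the actual script.

```shell
# Hedged sketch of the verification step in
# .github/workflows/scripts/install-bun.sh: check a downloaded archive
# against its entry in SHASUMS256.txt before extracting.
verify_archive() {
    local archive="$1" shasums="$2"
    # sha256sum -c reads "<hash>  <file>" lines and exits non-zero on
    # any mismatch, so a tampered download aborts the install.
    grep " ${archive}\$" "$shasums" | sha256sum -c -
}

# Usage (assuming both files were fetched from the pinned Bun release):
#   curl -fsSLO ".../bun-v${BUN_VERSION}/bun-linux-x64.zip"
#   curl -fsSLO ".../bun-v${BUN_VERSION}/SHASUMS256.txt"
#   verify_archive bun-linux-x64.zip SHASUMS256.txt && unzip bun-linux-x64.zip
```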

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(ci): address Copilot round-3 review on PR #2854

- install-bun.sh: append $HOME/.bun/bin to GITHUB_PATH so globally-
  installed CLIs (e.g. balena-cli via `bun install -g`) resolve in
  subsequent steps. Without this, the disk-image workflow's balena
  invocations would fail with command-not-found.
- deploy-website: distinguish "release exists but lacks
  rpi-imager.json" (transition fallback) from transient errors
  (auth/rate-limit/network). Probe via gh release view --json assets
  before download; only fall back when the asset is genuinely
  missing. Other gh failures now propagate instead of silently
  shipping an empty os_list.
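The missing-asset-vs-transient-error distinction can be sketched as follows. The `asset_present` helper is invented for illustration; it mirrors the JSON shape of `gh release view --json assets` but is not the workflow's actual code.

```shell
# Hedged sketch of the deploy-website probe: decide between "asset
# genuinely missing" (fall back) and "transient gh failure" (propagate).
asset_present() {
    local json="$1" name="$2"
    jq -e --arg n "$name" '.assets | map(.name) | index($n) != null' \
        <<<"$json" > /dev/null
}

# In the workflow, the probe would run before the download, roughly:
#   json="$(gh release view "$TAG" --json assets)" || exit 1  # transient: fail
#   if asset_present "$json" rpi-imager.json; then
#       gh release download "$TAG" --pattern rpi-imager.json
#   else
#       echo "release lacks rpi-imager.json; using fallback"
#   fi
```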

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(ci): address Copilot round-4 review + tighten path triggers

- build-balena-disk-image: pin git rev-parse to --short=7 so the
  resolved short hash always matches the 7-char tag format that
  docker-build.yaml writes (a longer abbreviation would silently
  reference image tags that never exist)
- deploy-website: drop the `release: published` trigger. The disk-
  image workflow now ends with `gh workflow run deploy-website.yaml`
  after rpi-imager.json has been uploaded to the release, so the
  deploy is guaranteed to see the asset and won't ship an empty
  os_list during the upload-step window
- deploy-website: add `.github/workflows/scripts/install-bun.sh` to
  the path triggers so changes to the bun installer also redeploy
  the site (it's a runtime dep)
- docker-build / generate-openapi-schema: exclude
  `tools/raspberry_pi_imager/**` and the bun installer script from
  triggers — neither workflow uses those files

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(ci): name release artefacts `anthias-<board>` so the imager regex matches

build_pi_imager_json.get_board_from_url's regex
`-(pi\d(?:-\d+)?)\.img\.zst$` only matches a hyphen before `piN`.
The disk-image workflow had been writing artefacts as
`raspberrypi3.img.zst` / `raspberrypi4-64.img.zst` (no hyphen
between `raspberry` and `pi`), so all boards except pi2 silently
failed to be picked up by the consolidation step — likely the root
of the broken rpi-imager.json the user flagged.

Renames the per-board release artefacts to
`<date>-anthias-<board>.img.zst` (and matching `.sha256` /
`.json`) so the existing regex picks them up. Tests already
covered the `anthias-piN` shape, so they pass without changes.
Updates the upload-artifact + attestation glob patterns
accordingly.
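The naming fix can be checked mechanically. The sketch below is illustrative: `make_artifact_name` is an invented helper, and the regex is a grep -E transliteration of get_board_from_url's Python pattern.

```shell
# Sketch of the renamed artefact shape: <date>-anthias-<board>.img.zst.
make_artifact_name() {
    local date="$1" board="$2"
    echo "${date}-anthias-${board}.img.zst"
}

# grep -E form of build_pi_imager_json.get_board_from_url's pattern:
board_regex='-(pi[0-9](-[0-9]+)?)\.img\.zst$'

make_artifact_name 2026-05-11 pi4-64 | grep -E "$board_regex"
# matches: the hyphen before "pi4-64" is exactly what the old
# "raspberrypi4-64.img.zst" names were missing
```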

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(ci): address Copilot round-6 review on PR #2854

- Move expression substitutions in resolve-context to env vars and
  switch the dispatch-tag read from `inputs.tag` to
  `github.event.inputs.tag`, so the `inputs` context is only consulted
  on workflow_dispatch where it's actually populated.
- Add `actions: write` permission to build-rpi-imager-json so its
  `gh workflow run deploy-website.yaml` fan-out has the Actions API
  scope it needs to dispatch the website deploy.
- Split the openapi-schema checkout ref resolution into a dedicated
  step that uses env vars + `if -n` rather than the inline
  `${{ inputs.ref || github.ref }}` expression, so the inputs lookup
  is co-located with its fallback in one readable shell block.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(ci): fix stale install-bun.sh header comment

The header described the runners as linux/amd64-only and asked
maintainers to extend the platform detection if that changed, but the
arch case below already covers both x86_64 and aarch64 Linux. Reword
the comment so it matches the script's actual behaviour.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(ci): drop hard-coded --repo from deploy-website gh calls

`gh release view/download` default to the runtime repository when
`--repo` is omitted, so explicitly pinning Screenly/Anthias was making
the workflow needlessly less portable to forks (or a future repo
rename) without buying anything. Match the rest of the workflow,
which already relies on the runtime repo context.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(ci): address Copilot round-9 review on PR #2854

- Gate build-balena-disk-image.yaml's release trigger to Anthias-core
  tags (`v<version>`). build-webview.yaml publishes its own
  `WebView-v<version>` GitHub releases on tag pushes; without this
  guard, every webview release would have spuriously fanned out to
  balena OTA deploys + disk-image builds. The filter is on resolve-context
  so the entire downstream pipeline skips in cascade via `needs:`.
- Cache sha256 + size of each multi-GB image once and reuse for both
  the .sha256 sidecar and the per-board JSON snippet, instead of
  re-hashing the same files inside jq's --arg expansions. Roughly
  halves the wall-clock of the package step.
- Add `tools/raspberry_pi_imager` to .dockerignore. The directory is
  build-time-only (CI generator for rpi-imager.json) but
  Dockerfile.{server,viewer}.j2 do `COPY . /usr/src/app/`, so without
  this entry it baked into runtime images. With docker-build.yaml's
  matching path-trigger exclusion in place, this keeps the two
  filters semantically honest: a tools-only commit truly cannot
  change image content, so skipping the container rebuild is correct
  rather than a footgun.
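The hash-once optimisation can be sketched as a single helper that feeds both outputs. This is a hedged illustration: `describe_image` and the JSON field names are assumptions, not the workflow's actual step.

```shell
# Illustrative sketch of hashing each image once and reusing the result
# for both the .sha256 sidecar and the per-board JSON snippet (the
# previous version re-hashed the multi-GB file inside jq --arg expansions).
describe_image() {
    local img="$1"
    local sha size
    sha="$(sha256sum "$img" | cut -d' ' -f1)"   # hash the file exactly once
    size="$(stat -c%s "$img")"
    printf '%s  %s\n' "$sha" "$img" > "${img}.sha256"
    # Field names here are hypothetical placeholders:
    jq -n --arg sha "$sha" --argjson size "$size" \
        '{image_download_sha256: $sha, image_download_size: $size}'
}
```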

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(ci): write the .sha256 sidecar against user-facing filenames

The uncompressed-image line previously referenced
`$BALENA_IMAGE.img` (e.g. `raspberrypi5.img`), the CI-local
intermediate name. That file never ships in the release asset, so
`sha256sum -c` against the downloaded sidecar fails to find it.
Switch to `$ARTIFACT.img` — the filename a user gets after
`zstd -d <ARTIFACT>.img.zst` — so both lines match files they
actually have on disk.
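The corrected sidecar can be sketched like this. `write_sidecar` is an invented helper for illustration; only the naming convention comes from the commit.

```shell
# Sketch of the fixed sidecar: both lines name files a user actually
# has (ARTIFACT.img.zst from the release, ARTIFACT.img after
# decompression), never the CI-local $BALENA_IMAGE.img name.
write_sidecar() {
    local artifact="$1"   # e.g. 2026-05-11-anthias-pi4-64
    {
        sha256sum "${artifact}.img.zst"
        sha256sum "${artifact}.img"
    } > "${artifact}.img.sha256"
}

# A user can then verify whichever form they have on disk:
#   zstd -d <artifact>.img.zst
#   sha256sum -c --ignore-missing <artifact>.img.sha256
```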

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(ci): call .venv/bin/pytest directly in python-lint job

`uv run --group website pytest …` implicitly syncs the project
venv with the default group set, which pulls in the `dev` group
(pytest-django==4.12.0). pytest-django then auto-activates as a
plugin, reads `DJANGO_SETTINGS_MODULE` from pyproject.toml, and
fails to bootstrap Django because the curated dev-host + website
install doesn't ship pytz / channels / the other transitive bits
the settings module imports.

Invoke the venv binary directly so the minimal hand-curated env
above is what the rpi-imager unit tests actually run against. The
tests don't need Django at all — this keeps the gate fast and the
dependency surface honest.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(ci): pass -p no:django to the rpi-imager pytest invocation

The previous attempt — calling `.venv/bin/pytest` directly instead
of `uv run` — assumed the dependency-installation step bounded the
venv contents. It doesn't: the earlier `uv run ruff check` step
implicitly syncs the project venv with the default `dev` group,
which ships pytest-django==4.12.0 + playwright + etc. By the time
the rpi-imager step runs, pytest-django is sitting in .venv as an
auto-loading pytest plugin, reads `DJANGO_SETTINGS_MODULE` from
pyproject.toml, and crashes trying to bootstrap Django (pytz,
channels, etc. are missing in this minimal env).

The rpi-imager unit tests don't need Django at all, so disable the
plugin with `-p no:django`. Verified locally: 22/22 pass with
pytest-django installed in the venv as long as the plugin is
disabled.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(x86): support balenaOS x86 fleets via Wayland (#2857)


---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@vpetersson
Contributor

Done

@vpetersson vpetersson closed this May 12, 2026
@github-project-automation github-project-automation Bot moved this from In progress to Done in Anthias May 12, 2026

Projects

Status: Done


2 participants