Design-partner access for GCP-first platform teams

GitHub Actions runners in your Google Cloud account

Run GitHub Actions on ephemeral, autoscaled GCP infrastructure you control, without building the runner platform yourself.

Built for GCP-first teams evaluating self-hosted runners, larger runners, ARC, or AWS-first runner products.

Example runnerdock job in a GitHub Actions workflow (workflow.yml label syntax pending confirmation)
jobs:
  build:
    runs-on: runnerdock-gcp
    steps:
      - uses: actions/checkout@v4
      - run: make build

Why teams evaluate runnerdock

When GitHub-hosted runners stop fitting your environment

Private network access

CI needs to reach GCP services, registries, or internal endpoints safely.

Runner control

Generic hosted machines do not match every image, region, hardware, or policy need.

Operations drag

Static self-hosted runners and custom autoscalers create maintenance work.

Cost uncertainty

Hosted minutes, idle VMs, cache behavior, and platform charges need a clearer model.

GCP-native use cases

Use runnerdock when the workload belongs in GCP

Deploy into Google Cloud

Route GitHub Actions jobs toward GCP-adjacent capacity without opening broad network paths.

Right-size build jobs

Evaluate build and test workflows against GCP runner profiles using real job data.

Reduce long-lived pools

Replace static runner fleets with job-scoped capacity where the lifecycle can return to zero.

Avoid a platform detour

Give platform teams a managed path before they build their own ARC or autoscaler stack.

Architecture

Keep GitHub Actions. Move the runner lifecycle.

Architecture details are being finalized with design partners. This model shows the intended operating shape without claiming final implementation specifics.

  1. Connect GitHub

     Configure runner access for selected repositories or organizations.

  2. Define runner pools

     Choose the GCP project, region, labels, and machine profiles for pilots.

  3. Route jobs

     Update runs-on labels for a focused workflow evaluation.

  4. Run and tear down

     Jobs execute on GCP capacity and return to zero when idle, where supported.
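Taken together, a pilot pool definition might look like the sketch below. This is illustrative only: every field name (pools, gcp_project, machine_type, labels, max_runners) is an assumption, since the final resource model is still being confirmed with design partners.

```yaml
# Hypothetical runnerdock pool definition.
# Field names are illustrative, not confirmed product syntax.
pools:
  - name: pilot-build
    gcp_project: my-ci-project    # assumed: customer-owned GCP project
    region: us-central1
    machine_type: n2-standard-8   # assumed GCE machine profile
    labels: [runnerdock-gcp]      # label referenced by runs-on in workflows
    max_runners: 10               # job-scoped capacity, returning to zero when idle
```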

Bring us one workflow to evaluate

Control and trust

Designed for teams that need infrastructure control

We will confirm runner lifecycle, permissions, log retention, and network boundaries during the design-partner review before asking you to move production workflows.

Customer-owned GCP boundary: Design-partner validation
Ephemeral runner lifecycle: Design-partner validation
Least-privilege service accounts: Planned confirmation
Log and audit visibility: Planned confirmation
Private networking and static egress: Support being confirmed

Cost model

Model the whole CI cost, not just the runner minute

runnerdock is intended for teams whose CI cost includes idle capacity, slow feedback loops, cloud locality, cache misses, and platform maintenance. In pilots, we compare a current workflow against a GCP runner profile using real job data.

| Dimension | Current setup | runnerdock pilot | Facts to measure |
| --- | --- | --- | --- |
| Queue time | Observed backlog or waiting | Pilot capacity profile | Median and tail wait |
| Cloud locality | Hosted or self-hosted network path | GCP-adjacent runner path | Data movement and access needs |
| Maintenance burden | Fleet, autoscaler, or platform work | Managed runner lifecycle target | Owner time and failure modes |

Design-partner access

Help shape the GCP-native runner path

We are looking for GCP-first teams with active GitHub Actions workloads, clear runner pain, and willingness to validate one production-like workflow. Bring CI usage data, security constraints, and a named technical owner.

Submitting does not imply guaranteed acceptance, free usage, or production readiness.

FAQ

Questions high-intent teams ask first

Is runnerdock a CI replacement?

No. runnerdock is intended to keep GitHub Actions as the workflow layer while moving runner execution into GCP-backed capacity.

Does it work with existing GitHub Actions workflows?

The intended migration path is to pilot selected workflows by changing runner labels. Final label syntax and supported workflow patterns are pending product confirmation.
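Under that intended path, a pilot migration is a one-line change per job. The labels below are illustrative; the final runnerdock label is pending product confirmation.

```yaml
jobs:
  build:
    # Before the pilot, a GitHub-hosted runner:
    #   runs-on: ubuntu-latest
    # During the pilot, an illustrative runnerdock label (not confirmed syntax):
    runs-on: runnerdock-gcp
```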

What GCP resources does runnerdock create or manage?

The exact resource model is being finalized with design partners. The evaluation will confirm project boundaries, IAM, networking, logs, and teardown behavior.

How is this different from GitHub larger runners?

GitHub larger runners can be a good fit when hosted capacity is enough. runnerdock is aimed at teams that want runner capacity and operating controls inside their Google Cloud account.

How is this different from ARC?

ARC is a valid Kubernetes-centered path for teams ready to operate that stack. runnerdock is intended to reduce custom runner-platform work for GCP-first teams.

How is this different from AWS-first runner products and other hosted or BYOC alternatives?

Those options can be strong depending on cloud strategy. runnerdock focuses on GitHub Actions teams whose runner requirements are tied to Google Cloud infrastructure control.

Does it support private networking, static IPs, custom images, ARM, GPU, Windows, or macOS?

Available support is being confirmed during design-partner pilots. Bring the capability list you need, and we will evaluate fit against the current roadmap.

How should we calculate expected cost?

Compare using your workflow data: queue time, job duration, idle capacity, cloud locality, cache behavior, and platform maintenance. We are not publishing hard savings claims before pilot data exists.