Zombie Zen

Tailscale on Google Container-Optimized OS

By Roxy Light
Google Cloud Compute Engine + Tailscale

I was hacking on a personal project over the weekend that I’m deploying using Google’s Container-Optimized OS. It’s quite convenient for hosting small services that don’t quite fit a web request/response workload: it is (mostly) stateless, it auto-updates, it has systemd, and (as the name implies) it runs Docker containers. It’s a nice fit for one-process programming.
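
(As an aside, the usual way to run a single container this way is gcloud’s create-with-container; this is just a sketch, not part of this post’s setup, and the instance and image names are placeholders.)

    # Boots a Container-Optimized OS VM that supervises a single container.
    gcloud compute instances create-with-container my-service \
      --container-image gcr.io/my-project/my-service:latest \
      --machine-type e2-micro \
      --zone us-central1-a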

For debugging, I want to SSH directly into the VM instance. But I don’t want to leave an SSH port open to the public internet continuously, especially after recently learning from a coworker how easy it is for blackhats to scan the internet for known vulnerabilities. Even with regular security updates, I’d rather avoid the attack surface. In the past, I would temporarily modify my Google Cloud project’s firewall to allow SSH traffic while debugging and then (hopefully) remember to remove the rule after I finished. This has been cumbersome, but there hasn’t been another solution that’s quite as simple.
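
For reference, that dance looked roughly like this (a sketch; the rule name is arbitrary, and 203.0.113.5 stands in for whatever my IP happened to be at the time):

    # Temporarily allow SSH, ideally only from my current IP.
    gcloud compute firewall-rules create temp-allow-ssh \
      --allow tcp:22 \
      --source-ranges 203.0.113.5/32
    # ... debug ...
    # The step I would sometimes forget:
    gcloud compute firewall-rules delete temp-allow-ssh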

Enter Tailscale! Tailscale creates a peer-to-peer Virtual Private Network (VPN) with very little fuss. While Container-Optimized OS is mostly designed for running containers, I found I can run the Tailscale static binary with a little kludging.

Here are the hurdles I ran into:

  • Container-Optimized OS mounts most of the filesystem with noexec. However, we can mount a tmpfs volume under /mnt/disks/, which gives us somewhere executable to put the binaries (see the sketch after this list).
  • Because this solution places the tailscaled binary in a non-standard location, we can’t use the systemd unit verbatim.
  • I’m using Tailscale’s ephemeral authentication keys, which use IPv6 addresses. The default sshd configuration for Container-Optimized OS only binds to IPv4. Since Google Cloud instances only use IPv4, I’m assuming the maintainers disabled IPv6 for “You Ain’t Gonna Need It” security reasons. Not a big deal: we just have to clear the AddressFamily setting.
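
Here’s the sketch promised above for the noexec hurdle. On a running instance you can list the noexec mounts for yourself, and confirm that a fresh tmpfs mount doesn’t carry the flag (run as root; the mount point matches what the cloud-config below uses):

    # Most of the filesystem is mounted noexec on Container-Optimized OS.
    mount | grep noexec
    # A tmpfs mounted without the noexec option is executable.
    mkdir -p /mnt/disks/tailscale
    mount -t tmpfs tmpfs /mnt/disks/tailscale
    mount | grep /mnt/disks/tailscale   # no noexec among the options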

If you want to set up your own Google Container-Optimized OS instance with Tailscale, here’s how you do it:

  1. Sign up for Tailscale, if you haven’t already. Make sure you install the client on your local machine.

  2. Visit https://login.tailscale.com/admin/settings/authkeys and generate a pre-authentication key. As noted above, I use Ephemeral Keys, but you can use One-off or Reusable keys depending on your needs.

  3. Save the cloud-init template shown below to a file called cloud-config.yml. Replace tskey-12345 with the Tailscale pre-authentication key you generated in the previous step and replace ssh-rsa xyzzy with your actual SSH public key. If you don’t have an SSH key, generate one.
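
     One way to generate a key (the ed25519 key type here is my choice; any type sshd accepts is fine):

    # Writes the key pair to ~/.ssh/id_ed25519 and ~/.ssh/id_ed25519.pub.
    ssh-keygen -t ed25519 -C "you@example.com"

     If you go with ed25519, the line you paste into ssh_authorized_keys will start with ssh-ed25519 instead of ssh-rsa.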

    #cloud-config
    # SPDX-License-Identifier: Unlicense
    
    users:
      - name: admin
        uid: 2000
        ssh_authorized_keys:
          # TODO: Replace with your SSH public key.
          - ssh-rsa xyzzy
        sudo: ALL=(ALL) NOPASSWD:ALL
    
    write_files:
      - path: /etc/systemd/system/tailscaled.service
        content: |
          [Unit]
          Description=Tailscale node agent
          Documentation=https://tailscale.com/kb/
          Wants=network-pre.target
          After=network-pre.target NetworkManager.service systemd-resolved.service
    
          [Service]
          ExecStartPre=/mnt/disks/tailscale/tailscaled --cleanup
          ExecStart=/mnt/disks/tailscale/tailscaled \
            --state=/var/lib/tailscale/tailscaled.state \
            --socket=/run/tailscale/tailscaled.sock \
            --port=41641
          ExecStopPost=/mnt/disks/tailscale/tailscaled --cleanup
    
          Restart=on-failure
    
          RuntimeDirectory=tailscale
          RuntimeDirectoryMode=0755
          StateDirectory=tailscale
          StateDirectoryMode=0750
          CacheDirectory=tailscale
          CacheDirectoryMode=0750
          Type=notify
    
          [Install]
          WantedBy=multi-user.target
    
      - path: /tmp/install-tailscale.sh
        permissions: 0644
        owner: root
        content: |
          #!/bin/bash
          set -euo pipefail
          VERSION="$1"
          DEST="$2"
          TMPDIR="${TMPDIR:-/tmp}"
    
          dirname="tailscale_${VERSION}_amd64"
          tarname="${dirname}.tgz"
          if [[ ! -e "$TMPDIR/$tarname" ]]; then
            mkdir -p "$TMPDIR"
            download_url="https://pkgs.tailscale.com/stable/$tarname"
            echo "Downloading $download_url" 1>&2
            curl -fsSLo "$TMPDIR/$tarname" "$download_url"
          fi
          mkdir -p "$DEST"
          tar \
            -xzf "$TMPDIR/$tarname" \
            -C "$DEST" \
            --strip-components=1 \
            "$dirname/tailscale" \
            "$dirname/tailscaled"      
    
    runcmd:
      # Read new systemd units
      - systemctl daemon-reload
      # Allow connecting to SSH over Tailscale ephemeral address
      - sed -i -e '/^AddressFamily/d' /etc/ssh/sshd_config
      - systemctl reload sshd.service
      # Install Tailscale.
      - mkdir /mnt/disks/tailscale
      - mount -t tmpfs tmpfs /mnt/disks/tailscale
      - TMPDIR=/var/tmp bash /tmp/install-tailscale.sh 1.14.0 /mnt/disks/tailscale
      - systemctl start tailscaled.service
      # TODO: Replace "tskey-12345" with your Tailscale auth key.
      - /mnt/disks/tailscale/tailscale up --authkey tskey-12345
    
  4. Run the following with the gcloud CLI, setting INSTANCE_NAME to whatever name you want to give this instance.

    # Arbitrary identifier
    INSTANCE_NAME=foo &&
    
    gcloud compute instances create \
      --image-family cos-stable \
      --image-project cos-cloud \
      --zone us-central1-a \
      --machine-type e2-micro \
      --metadata-from-file user-data=cloud-config.yml \
      ${INSTANCE_NAME?}
    
  5. Wait for your instance to show up in https://login.tailscale.com/admin/machines.
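
     If it never appears, the first place I’d look is the cloud-init output on the instance’s serial console, which you can read without any SSH access:

    gcloud compute instances get-serial-port-output \
      --zone us-central1-a \
      ${INSTANCE_NAME?}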

  6. Run the following SSH command, setting TAILSCALE_ADDRESS to the address shown in the Tailscale Machines tab.

    # Tailscale address of VM instance
    TAILSCALE_ADDRESS=fd7a:115c:a1e0:... &&
    
    ssh admin@${TAILSCALE_ADDRESS?}
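
     You can also grab the address from your local machine instead of the admin console (tailscale status ships with the client you installed in step 1):

    # Lists each machine on your tailnet along with its Tailscale address.
    tailscale status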
    

And that’s it! Hope this helps keep your VM instances secure.