Windows-inside-Docker: insane but surprisingly handy

The latest episode of Tweakers Podcast #385 got me thinking again. A few remarks made me, as an avid Arch Linux user, itch a little: "You can't do everything on Linux." I had to smile, because my daily practice repeatedly proves the opposite. For 99% of my digital life, from simple browsing and media consumption to complex development tasks and system administration, Arch Linux and the GNOME desktop are my rock. They offer me the control, speed, and flexibility I find nowhere else.

Yet, I have to admit, there's that one percent: those rare occasions when I run into software that stubbornly refuses to work outside Windows. Think of specific legacy applications, or tools for which there simply is no full-fledged Linux alternative. It's frustrating, but it's reality. The thought of a dual-boot installation, rebooting my machine every time I need that one application, is completely repugnant to me. I wanted a solution as seamless as the rest of my setup.

And then something struck me! My former colleague Gijs had once shown me something cool: running Windows within Docker. At the time, it seemed absurd. But now it was exactly the solution I was looking for. The idea of starting up a completely isolated Windows environment and shutting it down whenever I needed it, without polluting my host OS or having to install an entire VM suite, sounded like music to my ears.

My search led me to the Windows-inside-Docker project (the dockurr/windows image). This turned out to be the perfect foundation. The great thing is that with docker-compose you define the complete environment once and start it with a single command, shown right after the file below. After that, Windows is available straight from the browser via the container's built-in web viewer. Tip for those with a robust home server: run this container there, and you can use Windows from any device with a browser. This is what my compose.yaml for such a setup looks like:

services:
  windows:
    image: dockurr/windows
    container_name: windows
    devices:
      - /dev/kvm                # hardware virtualization; QEMU needs this
      - /dev/net/tun            # networking for the VM
      - /dev/bus/usb            # USB passthrough (see ARGUMENTS below)
      - /dev/dri:/dev/dri       # graphics render nodes
      - /dev/snd:/dev/snd       # sound
    cap_add:
      - NET_ADMIN
    ports:
      - 8610:8006               # web viewer: http://<host>:8610
      - 3389:3389/tcp           # RDP
      - 3389:3389/udp
    volumes:
      - ./windows:/storage      # persists the Windows installation between runs
    restart: "no"               # quoted; a bare no can be parsed as a YAML boolean
    stop_grace_period: 2m       # give Windows time to shut down cleanly
    environment:
      # pass a USB device through to the VM by vendor/product ID
      # (an alternative with a second vendor/product pair, kept for reference):
      #ARGUMENTS: "-device usb-host,vendorid=0x046d,productid=0x085c,vendorid=0x5986,productid=0x9101"
      ARGUMENTS: "-device usb-host,vendorid=0x046d,productid=0x085c"
      VERSION: "11l"            # Windows 11 LTSC (variable names are case-sensitive)
      RAM_SIZE: "8G"
      CPU_CORES: "5"
    privileged: true
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia    # requires the NVIDIA Container Toolkit on the host
              device_ids: ['0']
              capabilities: [gpu]
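
Starting and stopping it really is a single command each way. A minimal sketch of a session, assuming Docker with the compose plugin is installed and the file above sits in the current directory:

# check that hardware virtualization is available on the host
ls -l /dev/kvm

# bring the container up in the background; on the very first boot it
# downloads the Windows ISO and runs the installation unattended
docker compose up -d

# watch the installation progress, or just open the web viewer at
# http://localhost:8610 (host port 8610 maps to the viewer's port 8006)
docker compose logs -f windows

# done for the day? tear it down again
docker compose down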

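About that ARGUMENTS line: the vendor and product IDs for USB passthrough come straight from lsusb on the host. The pair below matches the one already used in the file; the device name in the example output is illustrative:

# list USB devices; the ID column reads vendorid:productid
lsusb
# example output line for the IDs used above:
#   Bus 001 Device 004: ID 046d:085c Logitech, Inc. C922 Pro Stream Webcam
# which becomes:
#   -device usb-host,vendorid=0x046d,productid=0x085c
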
And how does it work in practice? Surprisingly well! For tasks like testing websites in a specific Windows browser or just clicking around, it's perfect. Performance is "okay", certainly considering that a full OS is running in a KVM virtual machine inside a container. It boots quickly enough for exactly those short, essential sessions. But to be honest: I definitely wouldn't use it for gaming. The latency and the limited access to the graphics hardware make that a hopeless proposition.
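
The web viewer is fine for quick clicking, but since the file also publishes port 3389, a regular RDP client gives a noticeably smoother session. A sketch with FreeRDP; the credentials are the project's defaults as I understand them, so treat them as an assumption and check the dockurr/windows README (or the USERNAME/PASSWORD variables in your own compose file):

# connect over RDP from the host or any machine on the network;
# /u and /p are assumed defaults, adjust to your own setup
xfreerdp /v:localhost:3389 /u:Docker /p:admin /size:1920x1080 /cert:ignore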

So, in response to Tweakers Podcast #385: yes, there are always exceptions, but with the power of Linux and the flexibility of tools like Docker, you can handle those exceptions elegantly and efficiently. It proves to me once again that with a little creativity and the right tools, Linux can truly be the foundation for almost anything.