Full disk space on an illumos VM

TL;DR: If you're running illumos (particularly OpenIndiana) in a VM and a newly added storage disk isn't showing up in diskinfo, run devfsadm -v to register it.

Lately I've been researching the implementation details of different Unixes, so I've been running way too many VMs. My recent read has been the core internals of illumos-gate, a successor to Sun's OpenSolaris, explored through the OpenIndiana distribution.

Background on illumos-gate

Like the Linux kernel, illumos-gate has distros built on top of it that serve as an introduction to the project and provide a friendlier user space. However, illumos-gate isn't just a kernel: it also ships with libraries and command-line programs. It doesn't go to the extent of the FreeBSD project (which is intended to ship as an entire operating system), but it does encompass more than just the kernel. The distro I chose to explore was OpenIndiana, but others exist, such as OmniOS and SmartOS.

OpenIndiana was straightforward to boot in a VM. My physical host is a Linux station, so I've been running VMs in QEMU, using virt-manager for setup and virsh for management. Setting up the illumos-gate project within the VM was less straightforward because of outdated documentation, but enough Googling resolved any issues I had.

Given my experience with Rust, I wanted to write a few programs for illumos, but unfortunately the default install process didn't work right off the bat. Fortunately, I found out you can install Rust through pkgin (see update below). This was when I started running into disk space issues.

Update (08/12/2021): I recently sent a PR to clear up the confusion with rustup and illumos-based distros. The default curl command shown should work, but it might need to be piped into bash.

Full disk

Originally, I set up the VM with 50GB of disk, which I thought would be enough to explore the illumos-gate project and build a few Rust programs. After a day of use and installing what was apparently way too much, random processes started failing with No space left errors. Ruh roh.

illumos ships with the much-discussed ZFS, a filesystem I had only passing familiarity with before but had to learn much more about to wrap my head around fixing this virtual disk storage issue.

To verify my disk usage, I asked ZFS to clarify:

$ zpool list
rpool 49.8G 49.6G 0

Nice, I used it all up. I figured I'd just shut down the VM, add a new virtual disk, and reboot. virsh makes that easy enough.
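For reference, adding the disk from the Linux host looks roughly like this. This is a sketch: the domain name openindiana, the image path, and the vdb target are assumptions for illustration, not what I necessarily typed.

```shell
$ # Create a new 150G qcow2 image on the host (path and size are illustrative)
$ qemu-img create -f qcow2 /var/lib/libvirt/images/oi-extra.qcow2 150G

$ # With the guest shut down, attach the image persistently as a second disk
$ virsh shutdown openindiana
$ virsh attach-disk openindiana /var/lib/libvirt/images/oi-extra.qcow2 vdb \
    --driver qemu --subdriver qcow2 --persistent
$ virsh start openindiana
```

The --persistent flag writes the disk into the domain XML so it survives reboots, which is what you want here rather than a one-off hotplug.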

Wall of errors and a lot of questions

Full disks break things, and broken things are exactly what I got.

I was hoping that by just adding the new disk, illumos/OpenIndiana/ZFS would pick it up and do its thing, or at least ask me about the new disk (in retrospect, I understand I was asking for too much). Instead, the boot process crashed into a bash shell after a wall of No space left errors, so I couldn't even get back into the desktop environment. Also, for some reason, I couldn't see my own /export/home files? I was logged in as root since I couldn't log in as my user. Okay, whatever; let me at least see whether the virtual disk I just added was registered:

$ diskinfo
ATA c1d0 - - 49.8 GiB no no

My main disk is listed but what about the other virtual one I just added? Maybe I didn't understand what diskinfo was supposed to return? Maybe ZFS has a way of seeing the physical device? There has to be a device node file somewhere, right?
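For what it's worth, illumos keeps disk device nodes under /dev/dsk (block) and /dev/rdsk (raw), both of which are symlinks into /devices, so a couple of sanity checks look like this (a sketch; the output depends entirely on your hardware):

```shell
$ # Block-device nodes that ZFS would consume
$ ls /dev/dsk

$ # The classic Solaris trick: have format(1M) enumerate disks and exit
$ echo | format
```

If the new disk shows up in neither place, the device nodes simply haven't been created yet, which is exactly the situation devfsadm fixes below.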

Although I was optimistic in the beginning, I slowly came to the conclusion that, for some unknown reason, my VM couldn't see the new drive and that I would have to create a new VM and go through the song and dance of setting it up again. Was the new disk invisible because of a caching issue? An OS issue, or maybe even an underlying QEMU virtualization issue?

Tip: you can sometimes replace "illumos" with "Solaris" in your Google searches for better answers.

The solution was hidden on this old Oracle Solaris page: Virtual Disks Do Not Display on Solaris 10 Guests.

The instructions listed there worked for me:

$ devfsadm -v
$ diskinfo
ATA c1d0 - - 49.8 GiB no no
ATA c2s0 - - 149.8 GiB no no

Looks like devfsadm(1M), by default, refreshes the list of device nodes found in /dev and /devices. Adding the new disk to ZFS was easy enough:

$ zpool add rpool c2s0
$ reboot
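One caveat I learned along the way: zpool add attaches the disk as a new top-level vdev, so the pool is now striped across both disks, and such a vdev historically couldn't be removed again. Double-checking the result is worthwhile (a sketch; your pool layout will vary):

```shell
$ # Pool capacity should now reflect both disks
$ zpool list rpool

$ # status shows the two top-level vdevs backing rpool
$ zpool status rpool
```

If I'd wanted redundancy instead of extra capacity, zpool attach (which mirrors an existing vdev) would have been the command to reach for rather than zpool add.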

And voila! OpenIndiana fully loaded on reboot. No more wall of errors.


As I mentioned earlier, a nice thing about the illumos project is that it ships not just the kernel but also core user space tooling, so to see how devfsadm(1M) works, I could read the source code directly in usr/src/cmd/devfsadm. Looks like there's even a hidden flag to toggle verbosity levels!

If you see any inaccuracies, typos or have comments, please reach out @mdaverde.