Running OpenWRT in GCP
A tale about instability and custom images
Published: 2023-02-09T21:05Z
If you’re a bit like me and have run multiple different Linux distributions over the years, you sometimes wish to run a different distribution when using a hosting platform like Google Cloud.
In this post, I’ll be showing my journey to get OpenWRT running on a virtual machine within Google Cloud. I’ll be assuming you already have a Google Cloud project.
Why
While I respect what the Debian and Ubuntu distributions have done in the past, my experience with them in recent years hasn’t proven them to be the stable platforms they once were.
Recently, I set up a new jumphost for getting into my network when I’m not already in it. This consisted of an f1-micro instance within GCP running Debian 11, with Cockpit for web access, a properly configured SSH daemon, and a tinc daemon for the actual access into the network.
When I tried to install the build-essential package to build tinyfecVPN from scratch, for an experiment related to the unstable internet connection I’m on, mandb suddenly decided not to proceed when updating its database. Attempts at fixing this always resulted in the same symptoms when finally running dpkg --configure -a to reconfigure the machine, leaving the install essentially unable to install updates or new packages.
There are only three distros I haven’t had such issues with: Gentoo, Void Linux, and OpenWRT. Because the former two require manual intervention during installation and don’t provide base images that aren’t live installations, my choice for this project fell on OpenWRT.
Even if the issues were completely my own fault, I can’t help but feel that a simple apt-get install build-essential shouldn’t render the system broken.
Getting the OpenWRT image
The easiest way to get an image that should work is to download it from OpenWRT’s website: go to the downloads page, then to the stable releases page, and navigate to the most recent release, which at the time of writing is 22.03.3.
On the version’s page, go to targets, then x86, then 64 to get to the downloadable images compatible with the virtual machines used by Google Cloud. The easiest of these to get working is the ext4-combined-efi image, but you’re welcome to try it out with the non-EFI image as well.
Once you’ve downloaded the image, decompress it and name the result disk.raw (important later). To do this, you can execute a command like this:
gzip -d < IMAGE_FILE > disk.raw
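For example, with the 22.03.3 EFI image, the command would look something like the following; the file name reflects OpenWRT’s naming scheme at the time of writing, so adjust it to whatever you actually downloaded:
gzip -d < openwrt-22.03.3-x86-64-generic-ext4-combined-efi.img.gz > disk.raw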
Building the tarball for Google
We will be building a custom image that we can use to create a new virtual machine later. To be able to create that image, we need a tarball in a specific format that Google Cloud expects. The name of the tarball doesn’t really matter, but its contents do.
To create the tarball in the format Google Cloud expects, we take the uncompressed disk image from earlier and run the following command:
tar czvf image-name.tar.gz disk.raw
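Sidenote: Google’s image import documentation suggests creating the tarball with GNU tar in oldgnu format and with sparse-file handling, which keeps the tarball small since most of a fresh disk image is empty. If the plain command above gives you trouble, this variant is worth a try:
tar --format=oldgnu -Sczf image-name.tar.gz disk.raw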
Getting the tarball to Google
When going to your Google Cloud project, navigate to Cloud Storage / Buckets and locate or create a bucket to upload the file to. Because we will delete the file at the end of these instructions, having Standard or Autoclass selected will suffice.
Open the bucket, and upload your freshly created tarball into it. It’s that simple.
Sidenote: Yes, you could do it using the gcloud or gsutil command line interfaces, but if you know about them and know how to use them, you probably don’t really need these guidelines.
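For completeness, the upload with gsutil would look something like this, with the bucket name being a placeholder:
gsutil cp image-name.tar.gz gs://YOUR_BUCKET/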
Creating the custom image
Now that you’ve uploaded the tarball to a bucket within your project, navigate to Compute Engine / Storage / Images. On this page, click the CREATE IMAGE button to start the process of creating a custom image within your project.
Start by giving your custom image a name you will recognize later. In my own project, I’ve chosen to name it openwrt-2203-efi because I used the 22.03.3 EFI image from OpenWRT. You can name it almost anything you’d like, but I strongly recommend giving it a name that allows you to know what’s inside one or several years later.
In the source field, simply select Cloud Storage file, as that’s the type we built the tarball for earlier. This type expects a tarball containing a single file, namely disk.raw, which is why we renamed the image file before wrapping it earlier.
In the Cloud Storage file field (which appears after you set the source), browse to the bucket you uploaded your tarball to and select the tarball you uploaded earlier.
How you configure the region is entirely up to your use-case. In my case, I’m not expecting to start hosting things outside of europe-west4 any time soon, so I configured it to be regional in that area.
While the family isn’t a required field, I do suggest filling it in with the generic name of the operating system you packaged. Assuming you fully followed the above steps, that’d be openwrt.
Should you need more of a description for your image, for example if you built a custom image with a specific set of software pre-installed, add it to the description field.
After filling in the described fields, hit the create button at the bottom, and wait for the process to finish. You may need to refresh the page to see whether the creation of the image has finished yet.
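If you’d rather use the command line after all, the same image can be created with gcloud; the names below match my choices above, so substitute your own:
gcloud compute images create openwrt-2203-efi \
    --source-uri=gs://YOUR_BUCKET/image-name.tar.gz \
    --family=openwrt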
Creating an instance with the custom image
Assuming the above step completed without any issues, navigate to the VM instances page in your project and click on Create Instance.
On the instance configuration page, click on the change button under boot disk. In the overlay page that opens, select the custom images tab and select your freshly created custom image. Also take the time to select the right disk type and size here.
If you’re using OpenWRT like I am, it’s advisable to add some metadata to the machine to enable the serial port. Go to Management > Metadata and add an item there: use serial-port-enable as the key and TRUE as the value. Enabling the serial port allows you to keep access to the machine through the Google Cloud Console, even if the configuration doesn’t allow access to/from the internet.
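With that metadata in place, you can later reach the serial console from your own terminal as well; something like the following should work, with the instance name and zone being placeholders:
gcloud compute connect-to-serial-port INSTANCE_NAME --zone=ZONE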
Should you want to connect to the machine using http(s) later, enable the http(s) server checkboxes during the configuration now; that’s easier than looking for them later. The machine will start without internet access to begin with, so there’s no need to worry about someone hijacking the instance before you’ve configured a root password.
Configure the rest of the instance as per your needs. In my case it’ll be mostly doing nothing, so I selected an f1-micro machine, which is essentially the smallest/cheapest machine available.
Once the machine is created, you’re basically ready to go.
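For reference, the whole configuration described above can also be expressed as a single gcloud command. This is a sketch with placeholder names rather than a literal transcript of my setup; the http-server/https-server tags correspond to the http(s) checkboxes on the default network:
gcloud compute instances create openwrt-jumphost \
    --zone=europe-west4-a \
    --machine-type=f1-micro \
    --image=openwrt-2203-efi \
    --metadata=serial-port-enable=TRUE \
    --tags=http-server,https-server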
Making the instance accessible through the web
The above steps should render you a running instance within your project, but one that is not accessible from anywhere.
To start configuring the machine, open the serial console to your freshly created instance and hit enter to get the welcome message shown when activating a tty.
Start by configuring a password for your root user now that it’s not accessible yet outside of its serial console. Keep in mind to make this a strong password, even if you will disable password login for the root user later.
With a password set, navigate to the /etc/config directory and start editing the network file there. A safe bet is to remove the br-lan device, as we’ll not be creating bridges here. On my machine, configuring the lan interface to the following and rebooting the machine allowed it to retrieve an IP address from the VPC.
Keep in mind the name of the interface doesn’t need to be lan, but can be anything that makes sense in your environment.
config interface 'lan'
    option device 'eth0'
    option proto 'dhcp'
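If you prefer not to edit the file by hand, the same configuration can be applied through OpenWRT’s uci command line interface; this assumes the interface section is still named lan:
uci set network.lan.device='eth0'
uci set network.lan.proto='dhcp'
uci commit network
reboot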
Once the instance has restarted, you should be able to navigate to the machine using its IP address. I won’t cover setting a static IP on the instance here.
Resizing the rootfs to match the drive’s size
Honestly, this part had me redoing the whole process when preparing for this blog post, mainly due to a tutorial suggesting recreating the rootfs partition instead of resizing it.
What I found to be the easiest method of doing a live resize of the partition is following the steps posted in the GitHub issues by MarioT. These steps boil down to the following:
opkg update
opkg install tune2fs e2fsprogs resize2fs
# Remount the root filesystem read-only so its features can be changed
mount -o remount,ro /
# Remove the resize_inode feature, which blocks the resize
tune2fs -O^resize_inode /dev/sda2
fsck.ext4 /dev/sda2
reboot
# After the reboot, grow the filesystem to fill the partition
resize2fs /dev/sda2
Note that these steps are not guaranteed to work on the OpenWRT 22.03.3 image. Should you run into issues, open the link to MarioT’s solution above and scroll through the thread. The parted method should work if the partition doesn’t show up as larger when using lsblk.
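To check whether the resize actually took effect, compare the partition size reported by lsblk with the filesystem size reported by df:
lsblk /dev/sda
df -h /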
Feel free to contact me if these steps did not work for you but you’ve found a more generic workaround.
Profit
If you’ve followed along, you should now be the proud owner of an OpenWRT instance running in your Google Cloud Project.
From here, you can configure it as you like, whether that be as a reverse proxy for your at-home-hosted services, as a VPN host, or something else. One thing’s for sure: the package manager will give you less hassle than with a distribution using apt or apt-get (let alone snapd.. *shudder*).