5 Storage Projects to Supercharge Your Home Lab
Storage is one of the most rewarding and practical projects to tackle in a home lab. Nearly every app or service you self-host will require some form of storage. Beyond basic app data, you’ll likely need to store virtual machines, Docker containers, and other resources. The right storage setup can make provisioning a breeze or, conversely, create significant headaches if you’re not prepared. Choosing the right storage projects can open up exciting new opportunities for testing and learning. If you’re looking for a weekend project to boost your storage capabilities, here are five ideas that will take your home lab to the next level.
1. Build a ZFS storage pool with snapshots and replication
There’s no doubt that ZFS has become a dominant force in the world of home labs and open-source storage. Combining the functionality of both a file system and a volume manager, ZFS brings powerful features like snapshots, replication, compression, and robust data integrity checks.
If you haven’t yet explored ZFS, building a ZFS pool is one of the most rewarding and practical weekend storage projects to dive into.
ZFS snapshots deserve a closer look: they let you capture the state of your data instantly and with minimal overhead. This means you can easily roll back changes if something goes wrong, or replicate data to another machine for redundancy. Using ZFS as the underlying storage for VMs or containers, like those in Proxmox, gives you enterprise-level snapshot capabilities on consumer-grade hardware.
One of the easiest ways to get started with ZFS is by setting up TrueNAS SCALE or natively creating a storage pool within Proxmox VE. For example, you can create a simple mirror of two SSDs to deliver ultra-fast storage for testing or lab environments. Once your ZFS pool is set up, you can even experiment with data replication between two machines, like a mini PC in your home lab and a NAS elsewhere in the house or at another site entirely.
ZFS is natively integrated with Proxmox, making the process even smoother.
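If you want to try the command-line route first, here is a minimal sketch of the two-SSD mirror idea, plus a snapshot and replication to a second box. Pool, dataset, and host names and the device paths are all placeholders; adjust them for your own hardware:

```
# Create a mirrored pool from two SSDs (use stable /dev/disk/by-id/ paths)
zpool create tank mirror /dev/disk/by-id/ata-SSD1 /dev/disk/by-id/ata-SSD2

# Create a dataset for VM data and enable compression
zfs create tank/vmdata
zfs set compression=lz4 tank/vmdata

# Take an instant, low-overhead snapshot
zfs snapshot tank/vmdata@weekend-baseline

# Replicate the snapshot to another machine over SSH
# (the parent dataset "backup" must already exist on backup-host)
zfs send tank/vmdata@weekend-baseline | ssh backup-host zfs recv backup/vmdata
```

From there, rolling back is a single `zfs rollback tank/vmdata@weekend-baseline`, which is exactly the safety net that makes snapshots so useful under VMs and containers.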

Check out my post covering 5 pooled storage technologies with Proxmox and how these are configured:
2. Deploy Ceph or MicroCeph for scale-out storage
I’m a huge fan of Ceph storage—it’s an incredible distributed storage solution that powers some of the world’s largest organizations. Once you’ve had some hands-on experience with a system like ZFS in your home lab, deploying Ceph or MicroCeph is a fantastic next project.
A Ceph cluster enables you to pool disks across multiple servers, presenting them as a unified, resilient storage backend for various types of storage, including block, file, and object storage.
What’s great about Ceph is its fault tolerance—if you lose disks or even entire nodes in the cluster, the system can continue running depending on your configuration and the level of redundancy you’ve set up. The best part? It’s highly configurable and flexible, so you don’t need a dedicated NAS or SAN to manage your storage needs.
Similar to ZFS, Proxmox integrates natively with Ceph, making it easy to set up a storage backend for your virtual machines and containers. One feature I especially like about Ceph (or MicroCeph) is the ability to layer CephFS on top of the storage, giving you regular file storage for things like file shares or other uses.
One of the cool things I am doing with it is using CephFS as storage for my MicroK8s Kubernetes cluster as well as a Docker Swarm cluster I have running. This makes getting at your data easy: you don’t have to log in to a pod or container to transfer files, you just access the storage directly on the node and copy files around as needed.
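As a rough illustration of that workflow, here is what mounting an existing CephFS file system on a node and handing it to a container can look like. The monitor address, paths, and secret file are placeholders, and the exact mount options can vary by Ceph release:

```
# Mount CephFS on the node with the kernel client
sudo mkdir -p /mnt/cephfs
sudo mount -t ceph 192.168.1.21:6789:/ /mnt/cephfs \
  -o name=admin,secretfile=/etc/ceph/admin.secret

# Containers can then bind-mount the shared path, e.g. a Swarm service
docker service create --name web \
  --mount type=bind,source=/mnt/cephfs/web,target=/usr/share/nginx/html \
  nginx
```

Because the files live on the node’s mount point rather than inside a container volume, copying data in and out is just an ordinary `cp`.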

If ZFS is the gateway to advanced storage, Ceph is the next level up, and thanks to projects like MicroCeph, it has become accessible even for home labs.
To simplify things, MicroCeph removes much of the initial complexity involved in setting up a Ceph cluster. It can be up and running with just a few commands. A fun weekend project could be taking three small nodes (like mini PCs) and building a MicroCeph cluster. After that, you can easily mount the cluster to Proxmox or Kubernetes as shared storage. Not only is this a great learning experience, but it also gives you a solid understanding of distributed storage.
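To give a sense of how few commands that really is, a minimal sketch might look like the following. Node names and disk paths are examples; check the MicroCeph documentation for your release:

```
# On the first node: install MicroCeph and bootstrap the cluster
sudo snap install microceph
sudo microceph cluster bootstrap

# Still on the first node: generate a join token for each additional node
sudo microceph cluster add node2   # prints a token; repeat for node3

# On each additional node: install the snap and join with its token
sudo snap install microceph
sudo microceph cluster join <token-from-first-node>

# On every node: hand a spare disk to Ceph (device path is an example)
sudo microceph disk add /dev/sdb --wipe

# Check cluster health from any node
sudo microceph.ceph status
```

With three nodes and one disk each, you get the default three-way replication, which is what lets the cluster shrug off a lost disk or node.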
3. Set up a MinIO server or cluster for self-hosted S3-compatible storage
So many solutions these days support backups to S3-compatible storage. Object storage has become the standard in the cloud era we live in: services like Amazon S3, Azure Blob Storage, and Google Cloud Storage are the backends for modern app data. However, this type of storage isn’t just for the cloud. You can build your own S3-compatible storage in your home lab, and one of the best ways to do this is with MinIO.
If you haven’t come across MinIO yet, it’s a lightweight, high-performance object storage server that fully supports the S3 API. This means any app that’s built to interact with Amazon S3 can also integrate seamlessly with MinIO, treating it as though it were native S3 storage.
What’s really cool about MinIO is its flexibility. You can quickly spin it up as a container in Docker or run it in Kubernetes. You can also map it to either local or distributed storage. Personally, I use MinIO in my home lab for backups. I run it as a container on my Synology NAS, leveraging the local storage there as the backend. I back up configurations from Portainer and a few other apps to MinIO, as they all support S3 storage out of the box.
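To show how quick that spin-up is, a single-node MinIO container can be started with one command. The volume path and credentials below are placeholders; port 9000 serves the S3 API and 9001 the web console:

```
docker run -d --name minio \
  -p 9000:9000 -p 9001:9001 \
  -v /volume1/docker/minio:/data \
  -e MINIO_ROOT_USER=admin \
  -e MINIO_ROOT_PASSWORD=change-me-please \
  quay.io/minio/minio server /data --console-address ":9001"
```

Any S3-aware app can then be pointed at http://your-host:9000 with those credentials as its access and secret keys.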
Once it’s running, you can connect apps like Proxmox Backup Server 4 (which now supports S3 storage), Velero, or even Nextcloud to store backups or files. This project not only adds a powerful tool to your home lab but also gives you real-world skills that transfer directly to cloud environments.
4. Experiment with iSCSI and NFS shared storage
You might have already worked with iSCSI or NFS shared storage, but if you haven’t, these are fantastic storage protocols to experiment with in a weekend project. They’re also perfect for deploying robust services in your lab. Both iSCSI and NFS have been staples of traditional storage arrays, allowing hypervisors to connect to shared storage, run multiple VMs and containers, and support high availability features like live migration and more.
NFS is easy to set up since it’s a file-based protocol, allowing you to store things like your Proxmox or VMware VMs as files. On the other hand, iSCSI is a bit more complex, but it allows you to present block devices over the network, which hypervisors like Proxmox or VMware can then treat as local disks.
If you prefer a more traditional approach to shared storage for your virtualization stack instead of using distributed storage like Ceph, NFS or iSCSI are great choices. By setting up a TrueNAS SCALE server, you can serve both NFS and iSCSI targets, then connect your hypervisors to them. Not only is this a solid learning project, but it also helps you gain real-world skills that are highly applicable to managing storage in production data centers.
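TrueNAS handles all of this through its web UI, but if you want to see the moving parts, a bare-bones NFS export on a plain Linux box plus the Proxmox side looks roughly like this. IP addresses, paths, and the storage name are placeholders (iSCSI takes a few more steps, typically via targetcli):

```
# On the storage server: install the NFS server and export a directory
sudo apt install nfs-kernel-server
sudo mkdir -p /srv/nfs/vmstore
echo '/srv/nfs/vmstore 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra

# On a Proxmox node: register the export as shared VM storage
pvesm add nfs vmstore --server 192.168.1.50 --export /srv/nfs/vmstore \
  --content images,rootdir
```

Once every node in the cluster can reach the same export, features like live migration simply work, because the VM disks never have to move.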
TrueNAS CORE offers iSCSI and NFS capabilities as well:

5. Test tiered storage with SSDs and HDDs
Tiered storage is a great way to get the best of both worlds: speed and capacity. Most modern NAS devices use tiering, with spinning disks for capacity and NVMe drives for read/write caching.
Outside of an off-the-shelf NAS, however, you can take another machine running TrueNAS, as an example, and set up a hybrid ZFS pool that combines SSDs for caching and HDDs for bulk storage. Again, this gives you the best of both worlds. TrueNAS SCALE makes building these hybrid arrays very easy, with caching simply part of the wizard used to create your pools.
In Proxmox, you can even assign different storage classes to VMs or containers. This is super handy, as you can make sure VMs running critical databases or other performance-sensitive workloads land on SSD-backed volumes, while less critical workloads might be fine on HDD pools. This is a project that helps you learn about different storage types and how to balance performance and cost.
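On a Proxmox node, that assignment comes down to which storage ID a disk lives on. A rough sketch, where the VM ID and storage names are examples:

```
# Create a new 32 GB disk for VM 101 on the fast SSD-backed storage
qm set 101 --scsi1 ssd-pool:32

# Move an existing disk from the HDD pool to the SSD pool (works online);
# --delete 1 removes the old copy once the move completes
qm disk move 101 scsi0 ssd-pool --delete 1
```

Because the move works while the VM is running, you can promote a workload to the fast tier or demote it to the cheap one without any downtime.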
Wrapping up
I think project-based learning is one of the best ways to grow your skills in a very “hands-on” way and develop real-world experience. You can start simple with a ZFS mirror and graduate to building a Ceph cluster. Ceph isn’t really that intimidating any longer, since it’s built into Proxmox, and projects like MicroCeph make it even easier. Let me know what storage projects you are working on and what types of storage technologies make sense in your home lab. I would be very curious to know.