Uncloud
Uncloud defines itself as a non-cloud: an open-source take on Heroku or Render, aimed at self-hosting.
Comparing a self-hosted service to these vintage, pre-Docker SaaS platforms is odd. Still, the promise is the same: with all these services, you can push your container without any sysadmin involvement, without configuring (almost) anything.
Uncloud was built in the Docker era, embracing Compose, without bloat and without patronizing developers.
One more point, the elephant in the room: Kubernetes, the hegemonic solution for hosting containers (spoiler: the main developer has extensive experience running K8s, even for small projects). Uncloud sees itself as a Citroën 2CV next to the K8s semi-trailer.
Adminless
With Uncloud, you can manage your production environment like your development laptop. This sentence is terrifying.
But with a bit of rigor, the approach is credible; Uncloud handles a large part of the work with a minimum of voodoo magic.
Beware: if you bypass the sysadmin, you are the one fixing production incidents (and the updates, and the backups, and the fine-tuning, and a lot of surprises).
Scaleless
Uncloud is hosting-agnostic; you can even mix hosting types: VPS, bare metal, self-hosting, even mixed processor families (yikes!).
It creates one large encrypted virtual network (with WireGuard), in which every container can talk to every other container, across all servers.
Remember that the project is Australian, a country with neither a huge pool of local users nor stellar connectivity to the rest of the world. Having distributed nodes close to users matters more there than for a French developer who struggles to think beyond the metropolitan area (or Europe, for the most ambitious).
Self-hosting is perfectly acceptable, as long as you have enough bandwidth to serve your users. Remember, cheap hosting is, well, cheap, and doesn’t have fans blowing 24/7 in your living room (or in a closet).
Mixing local and remote hosting is weird; perhaps you want to benchmark your fancy Nvidia card on a vital AI feature? Sure, give it a try: tokens aren’t cheap, and local data can’t be used or sold somewhere in Silicon Valley.
Brainless
Uncloud is just one command, uc, which mimics the docker command.
You can even use docker-compose.yml files (with some additional information).
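To give the flavor, here is a minimal getting-started sketch; the commands follow the project README as I read it, the host addresses are placeholders, so double-check against the current docs:

```shell
# Bootstrap a cluster on a fresh server, then add a second machine
uc machine init root@203.0.113.10
uc machine add root@203.0.113.11

# Deploy the services described in the local compose file
uc deploy
```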
Container registries are boring to manage. Uncloud pushes images directly to the remote server, through SSH (layer by layer, not an ugly full image wrapped in a tgz archive).
The neologism is pussh, classy.
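In practice, the unregistry plugin makes it a one-liner (image name and host are placeholders):

```shell
# Push image layers straight to the remote Docker host over SSH
docker pussh myapp:latest root@203.0.113.10
```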
By default, a container is pushed somewhere in your cluster, at random (no bin-packing cleverness). If your containers are heterogeneous (CPU, RAM, bandwidth, storage), good luck. OK, you can target one or more specific servers (basic load balancing is provided). Using Docker Compose, you can set lots of options, such as the number of replicas; see the sketch below. It’s a cleaner deployment strategy.
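A hedged sketch of what such a compose file could look like; only deploy.replicas is standard Compose, and the service and image names are made up:

```yaml
services:
  web:
    image: myapp:latest
    deploy:
      replicas: 2   # run two instances, spread across the cluster
```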
Deployments are guaranteed to be uninterrupted, following the well-known blue/green or rolling strategies.
You can name your servers under the free *.cluster.uncloud.run domain.
Just add a DNS alias from your own domain.
The documentation for this part isn’t ready yet; you’ll have to wait.
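In the meantime, the educated guess is a plain CNAME; this is an assumption on my part, not documented behavior, and both names are placeholders:

```
; In your own DNS zone
app.example.com.    CNAME    mycluster.cluster.uncloud.run.
```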
Let’s Encrypt (via Caddy) provides free TLS certificates.
The firewall blocks everything, so you can’t oops a container and expose a naked service to the Internet. Well, not quite everything: HTTP, HTTPS, SSH, and WireGuard services remain public. uncloudd (the server side) is reached through SSH.
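Conceptually, this is a default-deny firewall with a short allowlist; here is a sketch in plain iptables, not Uncloud’s actual rules:

```shell
iptables -P INPUT DROP
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 22,80,443 -j ACCEPT  # SSH, HTTP, HTTPS
iptables -A INPUT -p udp --dport 51820 -j ACCEPT                    # WireGuard default port
```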
The deployment procedure goes something like this:
- Works for me
- Push to production (the fresh container is started, the old one is stopped)
- If something goes wrong, the logs are immediately at hand to explain the oops
- Quick fix
- Re-deployment
- Break for a game of foosball.
Dissection
It sounds great, but what’s the secret sauce? What are the technical choices?
The first rule of Uncloud is decentralization. All nodes provide the same uncloudd service, without specialization or an orchestrator.
Uncloud requires no configuration and promises to consume only 150 MB of RAM per node.
Remote API
All services use gRPC, with the official grpc-gateway providing the REST layer.
gRPC services accept several transport layers (and wrappers):
- SSH tunnel (native, or via the ssh command)
- WireGuard tunnel
- TCP
- UNIX socket
SSH is the default option (remember the admin motto: “in ssh we trust”).
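The SSH transport boils down to tunneling the daemon’s socket; conceptually, with plain OpenSSH (both socket paths are assumptions):

```shell
# Forward the remote uncloudd UNIX socket to a local one
ssh -N -L /tmp/uncloudd.sock:/run/uncloud/uncloudd.sock root@203.0.113.10
```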
Containerization
Uncloud is built on Docker: system services are containers, and so are user services. The API uses containerd for low-level features (which also speaks gRPC).
Images are not centralized; everything is pushed directly to the target via the unregistry subproject.
Bypassing the registry is cheating. GitHub and GitLab provide a free registry, one of the classic steps toward continuous deployment. This is not the way. Uncloud is an autonomist; no external service may be mandatory. The temptation to streamline the technical stack, removing moving parts, is the path to robustness.
Containers are configured through configuration files mounted from volumes.
Environment variables are not available yet, which is a crime according to the 12 Factors.
Private Network
Private network management in the cloud is a punishment.
Uncloud picks WireGuard (https://www.wireguard.com/) for linking all the servers, and iptables for routing, just like Tailscale (https://tailscale.com/). The obvious answer for knitting networks between scattered Linux servers.
IP addresses are neatly arranged into per-machine subnets.
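To picture it, here is a hand-written WireGuard peer entry for one machine; the addressing plan, key, and port are illustrative, not Uncloud’s actual layout:

```
[Peer]
PublicKey = <machine-2-public-key>
AllowedIPs = 10.210.1.0/24   # machine 2's container subnet
Endpoint = 203.0.113.11:51820
```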
The (eventual) consistency is handled by Corrosion (https://github.com/superfly/corrosion), a competitor to Consul created by Fly.io for its large clusters.
An internal DNS allows access to everything, following naming conventions.
Ingress
Caddy is the HTTP/HTTPS gateway. It provides load balancing, dead node eviction, TLS certificates with Let’s Encrypt, and HTTP/3.
Caddy is deployed on each node, leaving the public DNS to choose an entry point.
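To show the mechanism, here is what an equivalent hand-written Caddyfile would look like; hostnames, upstream addresses, and the health endpoint are placeholders, not Uncloud’s generated configuration:

```
app.example.com {
    reverse_proxy 10.210.0.2:8000 10.210.1.2:8000 {
        lb_policy round_robin
        health_uri /health   # evict dead upstreams
    }
}
```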
High Availability
Quorums (having enough electors within the cluster) terrorize Uncloud. Users must be able to use Uncloud even when the cluster is broken.
In the CAP theorem, Uncloud sacrifices consistency (for availability and performance), hoping that data will reconcile once all cluster members are reunited.
This is an opinionated choice. But Uncloud clusters are very small, service placement is explicit, local IPs are managed locally, and dead nodes are handled by the internal load balancer.
Without self-healing (respawning service instances on healthy nodes), handling a node crash can remain minimalistic.
Criticism
Uncloud openly admits that it is not production-ready, yet.
Beta software can’t be torn apart with criticism; it wouldn’t be fair.
Uncloud is an (almost) one-man project, even if the developer is talented and maintains exemplary documentation. For the longevity of his project, he’ll have to look both ways before crossing the road.
By design, Uncloud is not concerned with Infrastructure as Code or CI/CD. You’re a big boy; you can do IaC and CI/CD yourself, Uncloud doesn’t care. Remember, it’s a project targeting developers. IaC and CI/CD are great, but they look like wasted time to a freelancer on solo projects. This strategy is called “one-man deployment.”
Cluster management isn’t fully baked yet. Mixing Corrosion (a Rust project) with IPFS CRDT storage is weird. Consul would be the obvious answer for cluster management, but IBM is slowly strangling HashiCorp’s products, what a pity. Sharing state and membership through gossip is a first step, but what is the purpose of a cluster without self-healing? A federation?
Fault tolerance relies on a (free) DNS service, which is a single point of failure (SPOF) that will cause problems for everyone due to time-to-live (TTL) issues.
Like all distributed hosting offerings that promise fault tolerance, the persistence aspect is conveniently avoided.
Is persistence in the scope of a cluster application? Maybe not, but it’s mandatory for the user.
The documentation should give some leads:
- Redis Cluster for sessions
- Minio for file storage
These two services have the advantage of being largely self-sufficient. For databases, it’s a different story. You either have to compromise with exotic databases like LiteFS, ScyllaDB, or FoundationDB… or handle primary/secondary replication of traditional relational databases yourself (without any magic).
The user-centric approach to data persistence immediately shatters the “elegant simplicity” of Uncloud.
Before complaining, get a proper backup procedure in place, WITH RESTORE TESTING; that would be a good start.
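A minimal sketch of what that means for, say, a Postgres container; every name, path, and the sanity-check table are placeholders:

```shell
# Dump the database from the running container
docker exec db pg_dump -U app app > /backups/app-$(date +%F).sql

# Restore into a throwaway container and verify the data actually came back
docker run -d --name restore-test -e POSTGRES_PASSWORD=test postgres:16
sleep 10  # give Postgres time to start
docker exec restore-test psql -U postgres -c 'CREATE DATABASE app;'
docker exec -i restore-test psql -U postgres app < /backups/app-$(date +%F).sql
docker exec restore-test psql -U postgres app -c 'SELECT count(*) FROM users;'
docker rm -f restore-test
```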
Otherwise, claiming that Uncloud is virtuously sustainable simply because it’s less resource-intensive than Kubernetes is rather petty.
The Aftermath
Uncloud already has the wow effect (and visibility), which is a major achievement for a solo developer. Have a look at the Hacker News thread.
Given its current strategy, Uncloud will likely generate limited revenue, essentially selling its technical expertise.
The choice of targeting “solo developers, small projects, discount hosting” also won’t help monetization. Communication agencies will love it, without giving a dime.
Please, Uncloud, give us a more ambitious example than the infamous WordPress + MariaDB combo.
Therefore, one can only hope for some form of sponsorship.
In any case, anything that shakes things up and challenges admins, DevOps engineers, and other cloud worshippers is refreshing, even if the proposal ultimately falls flat (I hope not).
A project to watch and encourage!
