Incus is a next-generation system container, application container, and virtual machine manager.

It provides a user experience similar to that of a public cloud. With it, you can easily mix and match both containers and virtual machines, sharing the same underlying storage and network.

Incus is image based and provides images for a wide range of Linux distributions. It provides flexibility and scalability for various use cases, with support for different storage backends and network types, and the option to install on hardware ranging from an individual laptop or cloud instance to a full server rack.

When using Incus, you can manage your instances (containers and VMs) with a simple command line tool, directly through the REST API or by using third-party tools and integrations. Incus implements a single REST API for both local and remote access.
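
For example, a minimal session with the command line client, plus the equivalent query against the local REST API socket, might look roughly like this (the image alias, instance name and socket path are illustrative defaults, not mandated by anything above):

    # launch a container from the public image server, then list running instances
    incus launch images:debian/12 web1
    incus list

    # the CLI is a thin client over the same REST API, which you can also query directly
    curl --unix-socket /var/lib/incus/unix.socket http://incus/1.0/instances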

The Incus project was created by Aleksa Sarai as a community-driven alternative to Canonical’s LXD. Today, it’s led and maintained by many of the same people that once created LXD.

      • gerdesj@lemmy.ml · 13 days ago

        I don’t understand what you mean by “epic pile of hacks”. Proxmox is just a Linux distribution, with a particular focus. All the software is the usual stuff with integration scripts and binaries and a webby front end. They start off with stock Debian and work up from there, which is the way many distros work.

        I’m not sure what Proxmox switching to Incus would really mean. They are both Linux distributions that focus on providing a VM and container wrangling system.

        I happen to be porting rather a lot of VMware to Proxmox. My little company has a lot of VMware customers and I am rather busy moving them over. I picked Proxmox (Hyper-V? No thanks) about 18 months ago when the Broadcom thing came about and did my own home system first and then rather a lot of testing. I then sold the idea to the rest of my company and we made some plans and are now carrying those plans out.

        Now, if Proxmox becomes toxic, I still have projects like Incus to fall back on. I … WE … have choice, and that is important. You can be sure that if Proxmox drops the ball, Veeam will suddenly support Incus or whatever the world decides is the next best thing in Linux VMs and container land.

        I was a VMware consultant for 25-odd years. No longer (well, I still am, but only under mild protest!). I also have to wrangle a few Hyper-V clusters. All of these bloody monolithic monstrosities work at the whim of massive corporations who really don’t have your best interests at heart. They bleed you dry.

        I like to have choice. Proxmox and Incus are both examples of choice. You start off with “I’d like to run VMs and containers on my hardware with software that is ‘open’” and you have more than one option. You do not start off with “I’d like a Hyper-V or VMware”, nail your colours to the mast and live in a rather rubbish monoculture.

        Sorry, I seem to have gone on a bit 8)

        • TCB13@lemmy.world · 11 days ago

          Well… If you’re running a modern version of Proxmox then you’re already running LXC containers, so why not move to Incus, which is made by the same people?

          Proxmox (…) They start off with stock Debian and work up from there, which is the way many distros work.

          Proxmox has been using Ubuntu’s kernel for a while now.

          Now, if Proxmox becomes toxic

          Proxmox is already toxic: it requires a paid license for the stable repository and updates. Furthermore, the Proxmox developers have been found to withhold important security updates from non-subscription (non-paying) users for weeks.
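
          To make the split concrete: the stable updates live in an enterprise APT repository that needs a subscription key, while everyone else is pointed at the no-subscription repository. Roughly, the two entries look like this on a Debian 12 based Proxmox VE 8 install (suite and file names differ on other releases, so treat this as a sketch):

              # /etc/apt/sources.list.d/pve-enterprise.list – requires a valid subscription key
              deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise

              # the free alternative, which is where the delayed updates land
              deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription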

          My little company has a lot of VMware customers and I am rather busy moving them over. I picked Proxmox (Hyper-V? No thanks) about 18 months ago when the Broadcom thing came about and did my own home system first and then rather a lot of testing.

          If you’re expecting the same type of reliability from Proxmox that you’ve had from VMware, you’re going to have a very hard time soon. I hope not, but I also know how Proxmox works.

          I have run Proxmox since 2009 and, until very recently, professionally in datacenters, with multiple clusters of around 10-15 nodes each, which means that I’ve been around for all of Proxmox’s wins and failures. I saw the rise and fall of OpenVZ, the subsequent and painful move to LXC, and the SLES/RHEL compatibility issues.

          While Proxmox works most of the time and their paid support is decent, I would never recommend it to anyone since Incus became a thing. The Proxmox PVE kernel has a lot of quirks. For starters, it is built upon Ubuntu’s kernel – which is already a dumpster fire of hacks waiting for someone upstream to implement things properly so they can backport them and ditch their own implementations – and on top of that it is typically an older version, further mangled and twisted by the extra feature garbage added on top.

          I got burned countless times by Proxmox’s kernel: broken drivers, waiting months for fixes already available upstream, or for them to fix their own bugs. As practical examples: at some point OpenVPN was broken under Proxmox’s kernel, the Realtek networking driver has probably spent more time broken than working, and ZFS support was introduced only to bring kernel panics. Upgrading Proxmox is always a shot in the dark; half of the time you get a half-broken system that is able to boot and pass a few tests but will randomly fail a few days later.

          Proxmox’s startup is slow, slower than any other solution – it even includes management daemons that are just there to ensure that other daemons are running. Most of the built-in daemons are so poorly written and tied together that they don’t even start properly with the system on the first try.
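
          You can see some of this for yourself with the standard systemd tooling; a quick sketch (the pve* unit names are from a stock Proxmox VE install, your output will obviously vary):

              # where the boot time goes, slowest units first
              systemd-analyze
              systemd-analyze blame | head -n 20

              # state of the Proxmox management daemons (pvedaemon, pveproxy, pve-cluster, ...)
              systemctl list-units 'pve*' --no-pager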

          Why keep dragging along all of the Proxmox overhead and potential issues, if you can run a clean shop with Incus, actually made by the same people who make LXC?

          • gerdesj@lemmy.ml · 7 days ago

            If you’re expecting the same type of reliability from Proxmox that you’ve had from VMware, you’re going to have a very hard time soon.

            Try upgrading a v6.0 or even 6.5 ESXi from the command line. If there is no “enterprise” iLO or iDRAC or whatever with media redirection then you’ll be jumping in the car. Or what about if, back in the day, you went ESX instead of ESXi? lol!
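
            For anyone who hasn’t had the pleasure, the in-place route is roughly an esxcli image profile update pulled from VMware’s online depot – assuming the host is even allowed to reach it (the profile name below is illustrative, not taken from any particular environment):

                # open the outbound httpClient ruleset so the host can reach the depot
                esxcli network firewall ruleset set -e true -r httpClient

                # upgrade the host to a newer image profile straight from the online depot
                esxcli software profile update \
                  -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml \
                  -p ESXi-6.7.0-20201104001-standard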

            How often do you find yourself repairing a vCenter? Oh dear, the SSL certs are fucked again, despite being fixed a few years back. Yes, I can bring the bloody things back, but I’ve also got longer experience with Linux than with VMware. Those 14 virty discs were a daft idea, and let’s dump the logs to all sorts of random areas and then stir them around every few versions. … and it’s 400 GB in size – even thin-provisioned they are still huge for what they do.

            How about when the Dell customised .iso was the only way to install on Rx10 hardware and then made the box unupgradable years later? Or when the Intel NIC drivers got a bit confused – yay, PSoD?

            Reliability: don’t make me laugh!