I don’t know if an equivalent exists on the fediverse, but r/itsaunixsystem is available on $that_other_platform.
With Linux the scale alone makes it pretty difficult to maintain any kind of fork. A handful of individuals just can’t compete with a global effort, and it’s pretty well understood that the power Linux has comes from those globally spread devs working towards a common goal. So, should the Linux Foundation cease to exist tomorrow, I’d bet that something similar would rise to take its place.
For the respect/authority side, I don’t really know. Linux is important enough for governments too, so maybe some entity run by the United Nations or something similar could do?
I’ve worked with both kinds of companies. My current one doesn’t really care about the bus factor, but for me personally that’s just a bonus, as after every project it would be even more difficult to onboard someone into my position. And then I’ve worked with companies who actively hire people to improve the bus factor. When done correctly that’s a really, really good thing. And when it’s done badly it just grinds everything down to almost a halt, as people spend their time in nonsensical meetings and writing documentation no one really cares about.
Balancing that equation is not an easy task, and people who are good at it deserve every penny they’re paid for it. And, again just for me, if I get run over by a bus tomorrow, then it’s not my problem anymore, and as the company doesn’t really care about that, I won’t either.
Nothing is perfect but “fundamentally broken” is bullshit.
Compared to how things worked when Ubuntu came to life, it really is fundamentally broken. I’m not the oldest beard around, but I’ve personally updated both Debian and Ubuntu from an obsolete release to a current one with very few hiccups along the way. Apt/dpkg is just so good that you could literally bring a decade-old installation up to date with almost no effort. The updates ran whenever I chose them to and didn’t break production servers when unattended upgrades were enabled. This is very much not the case with Ubuntu today.
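For reference, a release upgrade on Debian-style systems is still roughly this; the codenames are just examples, adjust for whatever you’re actually running:

```
# Point apt at the next release (example codenames, edit to match your sources)
sudo sed -i 's/bookworm/trixie/g' /etc/apt/sources.list
sudo apt-get update
# Minimal upgrade first, then the full dist-upgrade that may add/remove packages
sudo apt-get upgrade
sudo apt-get dist-upgrade
sudo apt-get autoremove
```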
Hatred for a piece of tech simply because other people said it’s bad, therefore it must be.
I realize that this isn’t directly because of my comment, but there’s plenty of evidence even in this chain that the problems go way deeper than a few individuals ranting over the net that snap is bad. As I already said, it’s objectively worse than the alternatives we’ve had since the 90’s. And the way Canonical bundles snap with apt breaks the very long tradition where you could rely on the fact that, when running a stable distribution, ‘apt-get dist-upgrade’ wouldn’t break your system. And even if it did, you could always fix it manually and get the thing back up to speed. And this isn’t just an old guy ranting about how things were better in the past, as you can still get that very reliable experience today, just not with snapd.
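As an illustration of how the bundling works around apt rather than with it: on recent Ubuntu releases the firefox “deb” is a transitional package that pulls in the snap, and the usual workaround is an apt pin against a real deb source. The mozillateam PPA origin below is the commonly cited example, not something Canonical ships, so treat it as an assumption about your setup:

```
# /etc/apt/preferences.d/mozilla-firefox
# Prefer the (assumed) mozillateam PPA deb over the snap-transition package
Package: firefox*
Pin: release o=LP-PPA-mozillateam
Pin-Priority: 1001
```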
Auto updating is not inherently bad.
I’m not complaining about auto updates. They are very useful and nice to have, even for advanced users. The problem is that even when the snap notification says ‘software updates now’, it often really doesn’t. Restarting the software, and in some cases even running a manual update, still brings up the notification that the very same software I updated a second ago needs to restart again to update. Rinse and repeat, while losing your current session over and over again.
Also, there’s absolutely no indication of whether anything is actually being done. The notification just nags that I need to stop what I’m doing RIGHT NOW and let the system do whatever it wants, instead of the tools I’ve chosen working for me. I don’t want or need forced interruptions to my workflow, but when I do have a spare minute to stop working, I expect the update process to actually trigger at that very second and not after some random delay, and I also want a progress bar or something to indicate when things are complete so I can resume doing whatever I had in mind.
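For what it’s worth, snapd’s own CLI is the closest thing to an actual progress indicator I’ve found; a quick sketch (the change id is made up, grab the real one from ‘snap changes’):

```
snap refresh --list   # what snapd thinks is pending
snap changes          # recent and ongoing operations with their status
snap change 123       # detailed progress of one change (id taken from the line above)
snap refresh --time   # when the next automatic refresh window opens
```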
it just can’t be a problem to postpone snap updates with a simple command.
But it is. The “<your software> is updating now” message just interrupts pretty much everything I’ve been doing, and at that point there’s no way to stop it. And after the update process has finally finished I pretty much need to reboot to regain control of my system. This is a problem which applies to everybody, regardless of their technical skills.
My computer is a tool, and when I need to actively fight that tool to not interrupt whatever I’m doing, it rubs me the wrong way. No matter if it’s just browsing the web, or writing code for the next best thing ever, or watching YouTube, I expect the system to be stable for as long as I want it to be. Then there’s a separate time slot when the system can update and maybe break itself in the process, but I control when that time slot exists.
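For reference, these are the knobs that are supposed to give you that time slot; availability depends on your snapd version (‘--hold’ needs a fairly recent one), so take this as a sketch rather than a guarantee:

```
# Postpone all refreshes for a while, or indefinitely on newer snapd
sudo snap refresh --hold=72h
sudo snap refresh --hold
sudo snap refresh --unhold

# Or constrain automatic refreshes to a window you choose
sudo snap set system refresh.timer=sun,03:00-05:00
```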
There’s not a single case I’ve encountered where snap actually solved a problem I had, and there are plenty of times when it was either annoying or just straight up caused more problems. Systemd at least has some advantages over SysVinit, but snap doesn’t even have that.
As mentioned, I’m not the oldest Linux guy around, but I’ve been running Linux for 20+ years, ~15 of those keeping butter on my bread, and snapcraft is easily the most annoying thing I’ve encountered over that period.
You act as if Snap was bad in any way. Proprietary backend does not equal bad.
I don’t give a rat’s ass if the things I use are proprietary or not. FOSS is obviously nice to have, but if something else does the work better I’m all for it, and I have paid for several pieces of software. But Ubuntu and Snap (which are running on the thing I’m writing this with) are just objectively bad. Software updates are even more aggressive than on Windows today, and even if I try to work with the “<this software> updates in X days, restart now to update” notifications, it just doesn’t do what it says it would/should. And once the package is finally updated, the nagging notification returns in a day or two.
Additionally, snap and/or Ubuntu has bricked at least two of my installations in the last few years, Canonical’s solution has broken apt/dpkg in a very fundamental way, and it has most definitely caused way more issues with my Linux stuff over the years than anything else, systemd included.
Trying to twist that into an elitist FOSS point of view (of which there are plenty, obviously) is misleading and just straight up false. Snapcraft and its implementation are broken on so many levels and have pushed me away from Ubuntu (and derivatives). Way back when Ubuntu started to gain traction it was a really welcome distribution, and I was a happy user for at least a decade, but as things are now it’s either Debian (mostly for servers) or Mint (on desktops) for me. Whenever I have the choice I won’t even consider Ubuntu as an option, both commercially at work and for my personal things.
I did quickly check the files in update.zip, and it looks like they’re tarballs embedded in a shell script plus image files containing pretty much the whole operating system of the thing.
You can extract those even without a VM, do whatever you want with the files and package them back up. So you can override version checks, and you can inject init.d scripts, binaries and pretty much anything else onto the device, including changing passwords in /etc/shadow and so on.
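A rough sketch of how I’d poke at it, assuming it’s the usual “shell script with archives glued on the end” layout; the offset and filenames below are made up, check with binwalk first:

```
binwalk update.sh                                    # find where the embedded archives start
dd if=update.sh bs=1 skip=12345 of=payload.tar.gz    # carve from the offset binwalk reported (12345 is made up)
mkdir payload && tar -xzf payload.tar.gz -C payload

# ...edit files under payload/ (init scripts, /etc/shadow, version file, etc.)...

tar -czf payload-new.tar.gz -C payload .
head -c 12345 update.sh > update-new.sh              # keep the original script header
cat payload-new.tar.gz >> update-new.sh              # reattach the modified archive
```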
I don’t know how the thing actually operates, but unless it’s absolutely necessary I’d leave the bootloader (appears to be U-Boot) and the kernel untouched, as messing those up might leave you with a bricked device. At that point the easy options are gone and you’ll need to gain access via other means, like interfacing directly with the storage on the device (which most likely means opening the thing up and wiring something like an Arduino or a serial cable to it).
But beyond that, once you override the version checks, it should be possible to upload the same version number over and over again until you have what you need. After that you just need suitable binaries for the hardware/kernel, likely some libraries from the same package and an init script, and you should be good to go.
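If it comes to injecting an init script, something as dumb as this is usually enough for a foothold; it assumes a BusyBox-style userland with telnetd available on the image, which is an assumption, not something I verified:

```
#!/bin/sh
# /etc/init.d/S99shell - hypothetical name, drop it wherever the other init scripts live.
# Spawns an unauthenticated root shell on port 2323; for tinkering on an isolated network only.
case "$1" in
  start)
    telnetd -l /bin/sh -p 2323 &
    ;;
  stop)
    killall telnetd
    ;;
esac
```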
The other way you could approach this is to look at the web server configuration in the image and see if there are any vulnerabilities (like Apache running as root with an insecure script on top of it that lets you inject system files via HTTP), which might be the safest route, at least for a start.
I’m not really experienced in things like this, but I know a thing or two about Linux, so do your homework before attempting anything, good luck and have fun while tinkering!
The statement is correct: rsync by itself doesn’t use SSH if you run it as a daemon, and if you trigger rsync over SSH then it doesn’t use the daemon but instead starts rsync with the UID of the SSH user.
But you can run rsyncd, bind it only to localhost and connect to it over an SSH tunnel. That way you get the benefits of the rsync daemon and still have an encrypted connection via SSH.
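A rough sketch of that setup (module name, paths and the local port are just examples):

```
# /etc/rsyncd.conf on the server - listen on loopback only
address = 127.0.0.1
[backup]
    path = /srv/backup
    read only = no

# On the client: forward a local port to the server's rsyncd...
ssh -N -L 8730:127.0.0.1:873 user@server &
# ...and talk the rsync daemon protocol through the tunnel
rsync -av ./data/ rsync://127.0.0.1:8730/backup/
```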
Not that it’s really relevant for the discussion, but yes. You can do that, with or without chroot.
That’s obviously not the point, but we’re already comparing apples and oranges with chroot and containers.
I have absolutely zero insight into how the foundation and its financing work, but in general it tends to be easier to green-light a one-time expense than a recurring monthly payment. So it might be just that: a year’s salary at first to get the gears running again and some time to fit the ‘infinite’ running cost into plans/forecasts/everything.
I live in Europe. No unpaid overtime here, and productivity requirements are reasonable, so there’s no need to blame my tools for that. And even if my laptop OS broke itself completely, I’d still be productive while reinstalling, as keeping my tools in running shape is also in my job description. So, as long as I’m not just scratching my balls and scrolling Instagram reels all day long, that’s not a concern.
I’m currently more of a generic sysadmin than a Linux admin, as I do both. But the ‘other stuff’ at work revolves around Teams, Office, Outlook and things like that, so I’m running Win11 with WSL and it’s good enough for what I need from a workstation. There’s technically a policy in place that only Windows workstations are supported, but I suppose I could run Linux (and I have a separate laptop for Linux-only stuff). In the current environment it’s just not worth the hassle, specifically since I need to maintain Windows servers too.
So, I have my terminals, Firefox and whatever else I need, and I also have the mandated office suite and the by-the-book malware protection/IDR/IDS, and in my mindset I’m using company tools for company jobs. If they take longer, could be more efficient, or whatever, it’s not my problem. I’ll just browse my (personal) cellphone while the throbber spins on the screen, and I get paid to do that.
If I switched to Linux I’d need to personally take care of keeping my system up to spec, and I wouldn’t have any kind of helpdesk available should I ever need one. So it’s just simpler to stick with what the company provides, and if it’s slow, it’s not my headache; I’ve accepted that mindset.
A package file, no matter if it’s rpm, deb or something else, contains a few things: the files for the software itself (executables, libraries, documentation, default configuration), dependencies on other packages (as in: to install software A you also need to install library B) and installation scripts for the package. There’s also some metadata, info for uninstallation and things like that, but that’s mostly irrelevant for the end user.
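If you want to see all of that for yourself, dpkg-deb can peek inside a package without installing it (‘some-package.deb’ is just a placeholder):

```
dpkg-deb --info     some-package.deb             # metadata: dependencies, maintainer, description
dpkg-deb --contents some-package.deb             # files it would install
dpkg-deb --ctrl-tarfile some-package.deb | tar -t   # control archive: postinst/prerm scripts etc.
```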
And then you need a suitable package manager: dpkg for deb packages, rpm (the program) for rpm packages and so on. That’s why you mostly can’t install Debian packages on Fedora or the other way around. Derivative distributions, like Kubuntu and Lubuntu, use Ubuntu packages but have a different default package selection and default configuration. Technically it would be possible to build a Kubuntu package which depends on some library version that isn’t in Lubuntu, and thus the packages wouldn’t be compatible, but I’m almost certain that for those specific two that’s not the case.
And then there are things like Linux Mint, which was originally based on Ubuntu, but at least at some point they had builds based on both Debian and Ubuntu and thus different package selections. So there are a ton of nuances to this, but for the most part you can ignore them; just follow the documentation for your specific distribution and you’re good to go.
A phobia, by definition, is an uncontrollable, irrational and lasting fear of something. In the current geopolitical situation I’d say it’s not uncontrollable and very much not irrational. Fear, speaking as a fellow Finn, might be a bit strong a word, but it’s definitely a concern.
When I first read that I thought the response was a bit harsh, as Russian (and Soviet) individuals have traditionally been a big part of the open source community and their achievements in computing are pretty significant. But when you dig a bit deeper into it, a majority of the Soviet-era things were actually built by Ukrainians in Kyiv (obviously Ukraine as a country wasn’t a thing back then).
Also, based on my very limited view of the matter, Russians are not banned from contributing; this is more of a statement that anyone working for the Russian government can’t be part of the kernel development team. There are of course legal reasons for that, very much including the trade sanctions against Russia, but also the moral side of it, which Linus seems to take a stand on.
Personally I’ve seen individuals in Russia do quite amazing feats with both hardware and software, but as none of us exists in a void, free of external influence, I think that, while harsh, the “sanctions” (for lack of a better word) aren’t overshooting anything; they’re instead leveling the playing field. Any Joe Anonymous could write code which compromises the kernel as a whole, but should that Joe live in Russia, it might bring in a government-backed team which, with its resources, can hide its tracks on quite a different level than any individual could ever dream of.
So, while that decision might slow down some implementations, and it might affect some very capable developers, the fear that one of them might compromise the whole project isn’t unreasonable, and with the ongoing sanctions in place (and the legal requirements that follow), the core dev team might not even have a choice in this.
In the current global environment, I’d rather have slightly too careful management than one which doesn’t take things seriously enough. We already have Canonical and others breaking stuff way too often; we don’t need a malicious government expanding on that with nefarious purposes which, if left unattended, could compromise a shit ton of stuff on a very fundamental level.
I don’t have an answer for you, but Alec over at Technology Connections made a video a few days ago related to the topic. That might not have the answer for you either, but as his videos (and there are a ton of them, even about refrigerators) are among the best on YouTube, it’s worth checking out.
But as a rule of thumb, new materials and hardware are better on pretty much every metric. And if your current one doesn’t work properly anymore, it most likely uses way more power than it should, as the coolant flow/insulation/something isn’t in fully working condition and thus the compressor needs to run more often than on a new unit.
You can run Clonezilla from your shell session, just ‘apt install clonezilla’ (or whatever variant your distribution uses) and it can do the trick. dd will almost surely work too, but it leaves a ton of responsibility to you instead of making any sanity checks along the way. That makes dd a very powerful tool, and it has saved my ass multiple times, but if you already have a working partitioning scheme, Clonezilla has a ton of options to make your life a lot simpler and likely a bit faster than dd.
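If you do go the dd route, a sanity-preserving invocation looks roughly like this; the device names are examples, triple-check them before pressing enter:

```
# Clone the whole disk sda onto sdb; status=progress gives at least some feedback,
# conv=fsync makes sure everything is flushed before dd reports success
sudo dd if=/dev/sda of=/dev/sdb bs=4M status=progress conv=fsync
```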
Back in the day, with dial-up internet, man pages, READMEs and other included documentation were pretty much the only way to learn anything, as the web was in its very early stages. And ‘man <whatever>’ is still way faster than trying to search for the same information on the web. Today at work I needed the man page for setfacl (since I still don’t remember every command’s parameters) and found out that the WSL2 Debian on my office workstation doesn’t have the ‘man’ command out of the box, and I was more than mildly annoyed that I had to search the web for it.
Of course, today it was just an alt+tab to the browser, a new tab and a few seconds for results, which most likely consumed enough bandwidth that on dial-up it would’ve taken several hours to download, but it was annoying enough that I’ll spend some time on Monday fixing it on my laptop.
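The fix itself should be nothing more than this, assuming a stock Debian WSL image where the docs simply aren’t installed:

```
sudo apt update
sudo apt install man-db manpages
```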
I mean that the product being made here is not the website, and I can well understand that the developer has no interest in spending time on it, as it’s not beneficial to the actual project he’s been working on. And I can also understand that he doesn’t want to receive donations from individuals, as that would bring in even more work to manage, which is time spent away from the project. A single sponsor with clearly agreed boundaries is far simpler to manage.
You do realize that man pages don’t live on the internet? The kernel.org one is the official project website as far as I know, but the project itself is very much not about the web presence; it’s about the vastly useful documentation included in your distribution.
Jolla had a similar concept back in 2013. I had one, and at the time it was a really, really nice phone. Maybe not in the sense that the flagship models from the big vendors were, but I really enjoyed the UI, and the modular options were a huge selling point, at least for me. Then they started working on a tablet, which failed on pretty much all fronts, and the whole company practically disappeared.