Indeed. I couldn’t get a couple of old 3DO games working on windows 10/11 even though I bought them on Steam.
Work great on Linux w/ Proton (which is Wine-based).
Garuda.
I’d never used Arch or Arch derivatives but if this is the experience I understand the memes a little more.
The package management is easy and very up to date. I like the BTRFS snapshots, and it had everything game-related available right out of the box. My Nvidia graphics card, which was the thing I couldn’t get working on Ubuntu, performed as well or better than under windows.
The only thing that didn’t work for me was ZFS - but because everything else was working well, I just went another route.
Longtime every OS user. But have been using Linux since the days of Mandrake in ‘96. Switched to Debian shortly thereafter though mostly as a server/SDN device. Then a long spell on Ubuntu starting with 8.something. While I don’t use Linux on the desktop as my primary work OS, I do use it daily.
Recently, annoyed with Windows, which I only booted up for gaming, I gave gaming on Linux a try. It’s been mostly flawless even when the games aren’t Linux-native. Hilariously, Ubuntu was awful and I couldn’t get it working, so I’ve switched to something more gaming-specific and couldn’t be happier.
Found the other NixOS user. ;)
My advice: don’t change anything else right now.
The temptation is high to pack it all in at once.
2 hours a day is a lot. Not too much, just a lot. So, since you asked, don’t change your diet yet. Get into the groove of building this new thing into some level of consistency. Once you’re 90 days in, start modifying something else. Diet. Sleep. Intensity.
Work on one routine at a time.
Now, if you’re going too far into a calorie deficit, then you can think about what your energy needs are, but keep the other changes to the bare minimum.
You come to my door. You get candy.
Young, old, costume or not.
You get candy.
I found my people here.
…drew their swords and shot each other. A deaf policeman heard their cry.
If you don’t believe this lie is true, ask the blind man; he saw it too.
If it’s a backup server, why not build a system around a CPU with an integrated GPU? Some of the APUs from AMD aren’t half bad.
Particularly if it’s just your backup… and you can live without games/video/acceleration while you repair your primary?
Is there a reason you need a dual-boot instance instead of a VM or even WINE?
Unless you need direct access to hardware and if you have enough RAM, you can probably avoid dual booting altogether.
Good enough? I mean, it’s allowed. But it’s only good enough if a licensee’s goal is to make using the code they changed or added as hard as possible.
Usually, the code was obtained through a VCS like GitHub or Gitlab and could easily be re-contributed with comments and documentation in an easy-to-process manner (like a merge or pull request). I’d argue not completing the loop the same way the code was obtained is hostile. A code equivalent of taking the time (or not) to put their shopping carts in the designated spots.
Imagine the owner (of the original source code) making the source code available only via zip file, with no code comments, READMEs, or developer documentation. If the tables were turned, very few would actually use the product or software.
It’s a spirit vs. letter of the law thing. Unfortunately we don’t exist in a social construct that rewards good faith actors over bad ones at the moment.
As someone who worked at a business that transitioned to AGPL from a more permissive license, this is exactly right. Our software was almost always used in a SaaS setting, and so GPL provided little to no protection.
To take it further, even under the AGPL, businesses can simply zip up their code and send it to the AGPL’ed software owner, so companies are free to be as hostile as possible (and some are) while staying within the legal framework of the license.
I’ve been using self-hosted Ghost for a bit and it’s a pretty well designed piece of software.
That it requires Mailgun to really function well is a bit of a nuisance. But that’s a very minor nitpick that will likely change if adoption increases.
pfBlockerNG at the network edge and uBlock Origin on devices.
Hard to tell from first glance but my guess would be this is fallout from the ongoing xz
drama. Here: https://www.openwall.com/lists/oss-security/2024/03/29/4
DM’ed you the link.
Reason: personal GitHub account.
Reddit was aggressively rate limiting tools used to delete and edit content in a funny way when the API pricing was announced. The API wouldn’t return an error, the rate limiting was silent, and the tools would report successful deletion or edits even when the edit or deletion wasn’t made.
I had to modify an existing script to handle the 5-second rate limit and, in lieu of deleting, I just rewrote each comment with a farewell.
Even then, I did 3 passes (minor additional edits) in case Reddit was saving previous edits.
My content has stayed edited.
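The workaround described above can be sketched as an edit-and-verify loop (a minimal sketch with hypothetical helper names; the real script talked to Reddit’s API, and the fakes below just simulate the silent rate limiting):

```python
import time

def edit_with_verification(fetch, edit, comment_id, new_body,
                           passes=3, delay=5.0):
    """Edit a comment, then re-fetch it to confirm the edit stuck.

    Because the rate limiting was silent (the API reported success
    even when nothing changed), the only reliable check is to read
    the comment back and compare it to what was submitted.
    """
    for _ in range(passes):
        edit(comment_id, new_body)
        time.sleep(delay)  # respect the ~5-second rate limit
        if fetch(comment_id) == new_body:
            return True
    return False

# Simulated API for illustration: the first edit is silently dropped,
# mimicking the silent rate limiting described above.
store = {"c1": "original comment"}
calls = {"n": 0}

def fake_edit(cid, body):
    calls["n"] += 1
    if calls["n"] > 1:  # first call is silently rate limited
        store[cid] = body

def fake_fetch(cid):
    return store[cid]

ok = edit_with_verification(fake_fetch, fake_edit, "c1",
                            "So long, and thanks for all the fish.",
                            delay=0.0)
```

The key design choice is trusting only the re-fetched content, never the API’s success response.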
You’re conferring a level of agency where none exists.
It appears to “understand.” It appears to be “knowledgeable.”
But LLMs do neither of those things.
Take this note from an OpenAI dev:
It’s that these models have leveraged so much data that they’ve been able to map out relationships between words (or images) in such a way as to generate what seem like new versions of those things.
I grant you that an LLM has more base-level knowledge than any one human, but again, this is thanks to a terrifyingly large dataset and a design that means it can access this data reasonably reliably.
But it is still a prediction model. It just has more context, better design, and (most importantly) more data, letting it make predictions at a level never before seen.
If you’ve ever had a chance to play with a model at a level where you can control some of its basic parameters, it offers a glimpse into just how much of a prediction machine it is.
My favourite game for a while was to give Midjourney a wildly vague prompt but crank the chaos up to 100 (literally the chaos flag at its highest level) to see what kind of wild connections exist but are being filtered out during “normal” use.
The same with the GPT-3.5 API in the “early days”: you could return multiple versions of the response and see the sausage being made, to a very small degree.
It doesn’t take away from the sense of magic using these tools. It just helps frame what’s going on under the hood.
We really can vote with our dollars. The issue is that we don’t (I’m pointing right at myself here).
Don’t buy the things; we probably don’t need ’em.
I was so impressed with Garuda that I adopted it for my primary workstation OS even though I’m using the “gaming edition”.