I believe that the 67% number for the 2020 election is of eligible voters and not registered voters. While turnout is low, it’s not 25% low.
Since games don’t have to run with more than user privileges and steam runs in flatpak, you could run them as a different user account with very limited permissions.
That said, flatpak should be pretty secure as far as I’m aware, if you make sure that the permissions for the apps you run are restricted appropriately. I’m not sure how restricted you can make Steam and still have it work, though.
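For the restricting part, something like this is the usual starting point (a sketch: the Flathub app id is from memory, the ~/Games path is just an example, and cutting off home access this aggressively may well break Steam’s library):

```shell
# Revoke the Steam flatpak's access to your home directory...
flatpak override --user --nofilesystem=home com.valvesoftware.Steam

# ...and grant back only a dedicated games directory
flatpak override --user --filesystem=~/Games com.valvesoftware.Steam

# Check what the app can actually touch after the overrides
flatpak info --show-permissions com.valvesoftware.Steam
```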
You can use offline mode for steam if you’re okay with steam having internet but not games. But there’s no way to use steam entirely offline. Internet access is a fundamental part of the system they have.
There’s also a question of what your threat model is. Like, are you trying to prevent casual access to your files by games, or a sophisticated attempt to compromise the system delivered through a game? For the former, flatpak seems sufficient. For the latter, you probably need a dedicated machine. And there are varying levels in between.
Wait so the images in your post are the after images?
I think something that contributes to people talking past each other here is a difference in belief in how necessary/desirable revolution/overthrow of the U.S. government is. Like many of the people who I’ve talked to online, who advocate not voting and are also highly engaged, believe in revolution as the necessary alternative. Which does make sense. It’s hard to believe that the system is fundamentally genocidal and not worth working within (by voting for the lesser evil) without also believing that the solution is to overthrow that system.
And in that case, we’re discussing the wrong thing. Like the question isn’t whether you should vote or not; it’s whether the system is worth preserving (and, of course, what you do to change it, and how much violence in a revolution is necessary/acceptable). If you believe it is worth preserving, then clearly you should vote. And if you believe it isn’t, there’s a stronger case for not voting and instead working on a revolution.
Does anyone here believe that revolution isn’t necessary and also that voting for the lesser evil isn’t necessary?
The opposite is more plausible to me: believing in the necessity of revolution while also voting
Personally, I believe that revolution or its attempt is unlikely to be effective, and that voting plus activism is more effective, and also requires agreement from fewer people in order to make progress on its goals. Tragically, this likely means that thousands more people will be murdered, but I don’t know what can actually be effective at stopping that.
Cool!
I wouldn’t worry about making a second post. We can use all the content that we can get and this is neat
Beautiful
Thanks I’ll check it out! From a brief search it looks like at the moment I’ll still have to use the nvidia-libs repo to get cuda: https://github.com/bottlesdevs/Bottles/issues/3301
Huh?? I’m using Kubuntu 24.04 right now and didn’t have to jump through these hoops. That’s weird.
I compile them because I want to use them with my system wine, and not with proton. Proton does that stuff for you for steam games. This is for like CAD software that needs accelerated graphics. I could probably use like wine-ge and let GE compile it for me, but I’m not sure they include all the Nvapi/cuda stuff that’s needed for CAD and not gaming. If there’s an easier way to do it, I’d love to hear! Right now I’m using https://github.com/SveSop/nvidia-libs
I’m a developer that’s been using Ubuntu distros for 20 years and never ran into such issues.
If you’re a developer that’s comfortable with desktop software toolchains, that makes sense. (And checkinstall is wonderful for not polluting your system with random unmanaged files.) But I came at this knowing only embedded C++ and Python, and there were just a lot of tools I had to learn: what make was, how library files are linked/found, etc. And for someone who’s not a developer at all, I imagine this would be even harder.
I’ve learned a lot, especially because of everyone in this thread
I’m glad!
Re the flatpak issue, what you linked just says that flatpak won’t be an installed-by-default program and that packages provided by flatpaks won’t be officially supported by Ubuntu support as of 23.04. I don’t think this affects your use of Ubuntu in any way. If you want to use flatpaks, just install the program; it’s still packaged in the Ubuntu repositories. 23.04 was over a year ago, and I still use flatpak without a problem on my Kubuntu 24.04 system. It’s just a one-time sudo apt-get install flatpak (and maybe a second package for KDE’s flatpak PackageKit backend), and then it’s like Canonical never made that decision.
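From memory, the whole one-time setup on Kubuntu looks roughly like this (the Discover backend package name and the Flathub URL are what I remember from the standard setup instructions; double-check them):

```shell
# Install flatpak plus the backend that lets KDE's Discover browse flatpaks
sudo apt-get install flatpak plasma-discover-backend-flatpak

# Add the Flathub remote so there's somewhere to install apps from
flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
```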
The push of snaps instead of debs is a bit more concerning, because it removes the deb as an option in the official repositories. But as of right now I think only Mozilla software has this happening? If your timeline is 5-10 years, though, this may be more of an issue depending on how hard Canonical pushes snaps and how large their downsides remain.
All those patches seem like nice things to have, but they’re more focused on adding hardware support and working around bugs in software/other people’s implementations. If you have one of the affected GPUs/games/etc, then those patches probably make a huge difference, but I’d guess there won’t be noticeable frame rate differences on most systems. (I haven’t tested this claim, though, so maybe something in there makes a big difference.) What’s nice is all the packaging work they’ve done to make setting things up correctly easy, not necessarily most of the changes themselves. On my system, for example, I compile dxvk and various wine Nvidia libs myself since Ubuntu doesn’t package them, and that’s easy to screw up/requires some knowledge of compiling things.
Reading your update, I’d still choose whatever distro packages the software you want with the versions/freshness you need. If you’re willing to tweak things, the performance stuff can be done yourself pretty easily (unless you have broken hardware that isn’t well supported by the mainline kernel), but packaging things/compiling software that isn’t in the repositories is a huge pain. I think this is one of the reasons people choose Arch even with its need to stay on top of updates: the AUR means that you don’t have to figure out how to build software that the distribution maintainers didn’t package. Ubuntu’s PPAs aren’t great (though I don’t have personal Arch experience to compare with).
I’m not sure what performance improvements you’re talking about. As far as I’m aware, the performance difference between distros is extremely minimal. What does matter is how up to date the DE is in the distribution-provided package. For example, I wanted some Nvidia+Wayland improvements that were only in KWin 6.1, so I switched from Kubuntu to KDE Neon in order to get them (and definitely sacrificed some stability, since more broken packages/combinations get pushed to users than in base Ubuntu). It’s also possible that the kernel version might matter in some cases, but I haven’t run into this personally.
I think the main differences between distros are how apps are packaged and the defaults provided, and if you’re most comfortable with apt-based systems, I’m not sure what benefit there’s going to be to switching (other than the joy in tinkering and learning something new, which can be fun in its own right).
For some users less experienced with Linux, the initial effort required to set up Ubuntu for gaming (installing graphics drivers, possibly setting kernel options, etc.) might push them toward a distribution that removes that barrier, but the end state is going to be basically identical to whatever you’ve set up yourself.
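If I remember right, the driver part on Ubuntu is mostly one command (ubuntu-drivers picks the recommended proprietary driver; treat the exact behavior as an assumption, since it varies by release):

```shell
# List detected hardware and the recommended driver packages
ubuntu-drivers devices

# Install whatever driver Ubuntu recommends (e.g. an nvidia-driver-5xx package)
sudo ubuntu-drivers autoinstall
```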
The choice between distributions is probably more ‘what do I want the process of getting to my desired end state to be like’ and less ‘how do I want the computer to run’.
I’m sure they’d welcome a pull request improving the UX! https://invent.kde.org/network/kdeconnect-kde I think the implementation of the protocol is pretty well isolated from the UI, so pretty radical UI changes should be relatively easy.
I’ve set up Okular signing and it worked, but I believe it was with an S/MIME certificate tied to my email (and not PGP keys). If you want, I can try to figure out exactly what I did to make it work.
Briefly off the top of my head, I believe it was
I can’t remember if there was a way to do this with pgp certificates easily
I’d be surprised if it was significantly less. A comparable 70-billion-parameter model from Llama requires about 120GB to store. Supposedly the largest current ChatGPT goes up to 170 billion parameters, which would take a couple hundred GB to store. There are ways to trade off some accuracy in order to save a bunch of space, but you’re not going to get it under tens of GB.
These models really are going through that many GB of parameters once for every word in the output. GPUs and tensor processors are crazy fast. For comparison, think about how much data a GPU generates for 4K60 video display. It’s like 1GB per second. And the recommended memory speed required to generate that image is like 400GB per second. Crazy fast.
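The storage math is just parameter count times bytes per parameter; a quick sketch assuming 2-byte (fp16) weights, which lands in the same ballpark as the sizes above:

```shell
# storage ~= parameters x bytes per parameter
params=70000000000      # 70 billion parameters
bytes_per_param=2       # fp16 weights (assumed precision)
echo "$(( params * bytes_per_param / 1000000000 )) GB"   # -> 140 GB
```

Quantizing down to 8-bit or 4-bit weights is the accuracy-for-space tradeoff: halving or quartering that number, but still nowhere near tens of GB for the biggest models.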
ChatGPT is also probably around 50-100GB at most.
This is one reason I’m switching away from PLA+ back to normal PLA. The eSUN PLA+ really seems to get brittle when held under stress. This is an issue with printed parts as well: I’ve had parts suddenly crack in half where they were stressed over a few months.
Also it’s really annoying when little bits of filament get stuck in your filament guide tube :(
Is the bad side of the seam where it stops or where it starts printing the outer wall? I assume it’s where it stops and then crosses the wall to form the infill?
To add to the PA questions, are you sure that your PA settings are actually changing anything?
What printer is this and what firmware?
Does a spiral mode print work fine?
What if you print the part significantly slower (to rule out rigidity/acceleration issues)?
That makes sense. I would use tags like this:
Flickr Published
year roundup/2022
type/Landscapes
type/Portraits
events/trips/Zion 2022
content/food
content/animals
I actually do event level as my on-disk sorting. And then tag for stuff that’s not that. But I think it would work pretty well to do the event sorting under tags as well.
Then I rate my favorite photos, usually using the green approved, not stars. But stars would work too. Then if you want to find say, favorite landscapes, the digikam interface makes it really easy to do so.
I’m not sure if you can select which tags get written into the image, but if you can, you might be able to exclude certain parts of the hierarchy and only include the content/ or type/ subhierarchies.
One of the things I really like about digikam is the matching of the disk layout with the album structure. This makes it really easy to have other programs also interact with my photo library in a way that’s near impossible if you instead have an internal photo database.
Tags work great for me for multi-categorization. What feels clunky about them in your workflow? You’re even allowed to have a tag hierarchy.
This is definitely not possible with base KDE. If you’re using x11, you might be able to follow https://askubuntu.com/questions/1398508/split-a-widescreen-monitor-in-two but it’s pretty fragile, and I’m not sure if KDE will respect those monitors.
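If you do try it under X11, the core of that askubuntu answer is xrandr --setmonitor; a sketch for a 3840x1080 ultrawide, where the output name DP-1 and the physical millimetre sizes are assumptions you’d replace with your own values (from xrandr --query):

```shell
# Split one 3840x1080 output into two logical 1920x1080 "monitors".
# Geometry format: <width>/<mm-width>x<height>/<mm-height>+<x>+<y>
xrandr --setmonitor left  1920/300x1080/170+0+0    DP-1
xrandr --setmonitor right 1920/300x1080/170+1920+0 none

# Remove them again with:
#   xrandr --delmonitor left && xrandr --delmonitor right
```

Whether KWin treats those virtual monitors as real screens for tiling and fullscreen is exactly the fragile part, so test before relying on it.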