Not sure it counts but nostalgia kicked in and I replayed Chrono Trigger yesterday. Sadly I messed up at the fair in the beginning so I’m soon going to be found guilty in the game 😬
Yoko, Shinobu, and… uh… 🤔
The people of Israel live! Glory to Ukraine! 🇺🇦 ❤️ 🇮🇱
It’s actually a good thing that visual learners get a chance to learn useful stuff by watching videos. Not everyone has the attention span required to read through a Wikipedia page.
And you can’t really get a lot of words from “I waited 8 hours for Firefox to build.”
You actually can, if you describe the build process, how `emerge` works, how you can customize packages with Gentoo’s USE flags, etc.
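For example, a hypothetical `make.conf` fragment (the actual flags depend on what you want out of your system):

```
# /etc/portage/make.conf
# Global USE flags: build everything with Wayland and PulseAudio
# support, and drop GNOME integration across the board.
USE="wayland pulseaudio -gnome"
```

Per-package overrides then go in `/etc/portage/package.use`, and `emerge` rebuilds whatever is affected.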
It’s probably part of some homework and he has to describe the install process? Could have picked Gentoo for a higher word count IMO
Since you already know Java, you could jump straight to C++ with Bjarne’s book “Programming: Principles and Practice Using C++”: https://www.stroustrup.com/programming.html
You can then move to more modern C++ with his other book “A Tour of C++”: https://www.stroustrup.com/tour3.html
And then if you’re curious to know how software design is done in modern C++, even if you already know classical design patterns from your Java experience, you should get Klaus Iglberger’s book: https://www.oreilly.com/library/view/c-software-design/9781098113155/
In parallel also watch the “Back to Basics” video series by CppCon (see their YouTube channel: https://www.youtube.com/@CppCon , just type “back to basics” in that channel’s search bar).
Learning proper C++ should give you a much better understanding of the hardware while the syntax remains elegant, and you get to add a new skill that’s in very high demand.
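To give you a taste of the “elegant syntax” point, here’s a tiny snippet of modern C++ (my own illustration, not from the books), which should feel familiar yet noticeably different coming from Java:

```cpp
#include <iostream>
#include <string>
#include <vector>

// Coming from Java, the big shift is value semantics and RAII:
// no `new`, no garbage collector, containers clean up after themselves.
int main() {
    std::vector<std::string> langs{"Java", "C++"};
    langs.push_back("Rust");
    for (const auto& lang : langs)   // range-for + type deduction
        std::cout << lang << '\n';
}   // the vector and its strings release their memory here, deterministically
```

The books above walk you through why this style manages to be both safe and fast.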
The guidelines for freezer storage are for quality only—frozen foods stored continuously at 0°F (-18°C) or below can be kept indefinitely.
If you’re doing C++ then C++ Weekly by Jason Turner is an awesome must-watch.
Make flashcards of short questions + answers from your notes. You can use Anki for that (on Android it’s AnkiDroid), and you might want to watch this quick tutorial by Derek Banas: https://www.youtube.com/watch?v=5urUZUWoTLo
One way you can speed up the process of making flashcards is with an AI (not necessarily ChatGPT; I tried Mixtral 8x7B on a few Wikipedia pages and it works well for this too, it’s open source and there’s a free demo here: https://huggingface.co/chat/).
You could ask the AI:
Extract a precise and concise answer to the question “[INSERT QUESTION]” from the following paragraph: “…”
The reverse also works:
Formulate a few short questions that are answered by the following paragraph: “…”
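Once you have the question/answer pairs, you don’t have to type them into Anki one by one: Anki and AnkiDroid can import plain tab-separated text, one card per line, so you can ask the AI to format its output like this (made-up example cards):

```
What does the acronym RAII stand for?	Resource Acquisition Is Initialization
At what temperature does water freeze at sea level?	0 °C (32 °F)
```

The text before the tab becomes the front of the card and the text after it becomes the back.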
Something you’ll learn to live by once you enter the workplace: your coworkers are not your friends. There might be a one-in-a-thousand case that’s an exception to this rule, but most of the time you should not think of them as more than coworkers you have to work with to get your job done. I’ve witnessed too many cases of coworkers backstabbing each other for their own professional ambitions, or of a coworker dying and everyone just completely forgetting about him a few days later.
That doesn’t mean you should be overly pessimistic either. The idea is to be pragmatic. No one wants a toxic environment, so everyone will put in some effort to maintain a cozy facade, and you should too, as that minimizes tensions for everyone. The mistake, one most juniors make, is forgetting that it’s all a facade and starting to think of your coworkers as something like a family.
My motto is: smile at your coworkers but guard your damn ass when you turn your back to them.
For anyone wondering what Proton GE is, it’s Proton on steroids: https://github.com/GloriousEggroll/proton-ge-custom
For instance, even if you have an old Intel integrated GPU, chances are you can still benefit from AMD’s FSR just by setting a few environment variables for Proton GE, even if the game doesn’t officially support it, and you’ll literally get a free FPS boost (tested it for fun and can confirm on an Intel UHD Graphics 620).
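If you want to try it, to the best of my knowledge it boils down to setting the game’s Steam launch options to something like:

```
WINE_FULLSCREEN_FSR=1 WINE_FULLSCREEN_FSR_STRENGTH=2 %command%
```

then running the game fullscreen at a lower resolution than your display and letting FSR upscale it (IIRC the strength goes from 0 to 5, lower = sharper, 2 is the default).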
Congrats! Your laptop will be even happier with a lighter but still nice-looking desktop environment like Xfce, and there’s even an official Ubuntu flavor built around it: Xubuntu.
My bad, I’ll move there then
Hard to tell, as it really depends on your use. I’m mostly writing my own kernels (so basically the same situation as if you were doing CUDA), plus “scientific ML” (SciML) stuff that doesn’t need anything beyond backprop through matrix multiplications, elementwise nonlinearities, and some convolutions, and so far everything works. If you want specific simple examples from computer vision: ResNet18 and VGG19 work fine.
Works out of the box on my laptop (the `export` below is to force ROCm to accept my APU since it’s not officially supported yet, but the 7900XTX should have official support):
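Something along these lines; the exact GFX version to spoof depends on your APU, `11.0.0` here is just an example matching the RDNA3 generation:

```
# Make ROCm treat the unsupported APU as a supported gfx11 part
# (pick the version of the closest officially supported GPU).
export HSA_OVERRIDE_GFX_VERSION=11.0.0
# Quick sanity check with PyTorch's ROCm build: should print True.
python -c "import torch; print(torch.cuda.is_available())"
```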
Last year, only compiling and running your own kernels with `hipcc` worked on this same laptop; the AMD devs are really doing god’s work here.
Yup, it’s definitely about the “open-source” part. That’s in contrast with Nvidia’s ecosystem: CUDA and the drivers are proprietary, and the drivers’ EULA prohibits you from using your gaming GPU for datacenter use.
> The problem with ROCm is that it’s very unstable
That’s true, but ROCm does get better very quickly. Before last summer it was impossible for me to compile and run HIP code on my laptop, and then after one magic update everything worked. I can’t speak for rendering as that’s not my field, but I’ve done plenty of computational code with HIP and the performance was really good.
But my point was more about coding in HIP, not really about using stuff other people made with HIP. If you write your code with HIP in mind from the start, the results are usually good and you get good intuition about the hardware differences (warps for instance are of size 32 on NVidia but can be 32 or 64 on AMD, and that makes a difference if your code uses warp intrinsics). If however you just use AMD’s CUDA-to-HIP porting tool (HIPIFY), then yeah, chances are things won’t work on the first run and you’ll need to refine by hand, starting with all the implicit assumptions you made about how the NVidia hardware works.
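To make the warp-size point concrete, here’s a minimal sketch (my own illustration) of a warp-level sum reduction written the HIP way, using the built-in `warpSize` instead of a hardcoded 32:

```cpp
#include <hip/hip_runtime.h>

// Warp-level sum reduction. Looping down from warpSize / 2 instead of a
// hardcoded 16 keeps this correct whether a warp (wavefront) has
// 32 lanes (NVidia, AMD RDNA) or 64 lanes (AMD CDNA/GCN).
__device__ float warp_reduce_sum(float val) {
    for (int offset = warpSize / 2; offset > 0; offset /= 2)
        val += __shfl_down(val, offset);  // add the value held by lane (id + offset)
    return val;
}
```

A mechanically ported CUDA version calling `__shfl_down_sync` with a hardcoded 32-bit full mask would silently bake the 32-lane assumption right back in.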
HIP is amazing. For everyone saying “nah, it can’t be the same, CUDA rulez”: just try it, it works on NVidia GPUs too (there are basically macros and stuff that remap everything to CUDA API calls), so if you code for HIP you’re basically targeting at least two GPU vendors. ROCm is the only framework that allows me to do GPGPU programming in CUDA style on a thin laptop sporting an AMD APU while still enjoying 6 to 8 hours of battery life when I’m not doing GPU stuff. With CUDA, in terms of mobility, the only choices you get are a beefy and expensive gaming laptop with pathetic battery life and heating issues, or a light laptop plus SSHing into a server with an NVidia GPU.
They’re worse than us Arch users (btw)
Or even better, just get a vibrating cock ring.