Pretty much exactly this: Ghost - Call Me Little Sunshine
Not sure if this is what you’re referencing, but there’s a famous quantum computing researcher named Scott Aaronson who has this at the top of his blog:
If you take nothing else from this blog: quantum computers won’t solve hard problems instantly by just trying all solutions in parallel.
His blog is good; it covers a lot of quantum computing topics at an accessible level.
Cross-posted to !bestoflemmy@lemmy.world, which is probably the closest active community we’ve got
Does anyone here actually use awk for more than trivial operations? If I ever have to consider writing anything substantial with bash/awk/sed/etc, I just start writing a Python script. No hate to the classic tools, but Python is just really nice.
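To give a made-up example of where I’d switch: summing a column from a tab-separated file. The filename and column index here are just placeholders, not anything from a real project.

```python
# Roughly the same job as: awk -F'\t' '{ sum += $3 } END { print sum }' data.tsv
# written as the Python I'd reach for once it needs more logic or error handling.
import csv
import sys

path = sys.argv[1] if len(sys.argv) > 1 else "data.tsv"  # placeholder filename

total = 0.0
with open(path, newline="") as f:
    for row in csv.reader(f, delimiter="\t"):
        if len(row) > 2:
            try:
                total += float(row[2])
            except ValueError:
                pass  # non-numeric cell; awk would silently treat it as 0

print(total)
```

The awk one-liner is obviously shorter, but the moment I want argument handling, tests, or anything beyond a single pass over the file, the Python version grows a lot more gracefully.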
Sorry, mixed up the videos. It’s actually this one, from 2014:
https://www.destroyallsoftware.com/talks/the-birth-and-death-of-javascript
Edited link above
I’ve been wondering how much of that is the back-to-school season. I have the sense that Lemmy has a lot of younger users. I can’t judge, though, as I’ve been inactive for long stretches due to life. I’ve been trying to contribute more now.
Probably my favorite set of stories is by qntm, who writes lots of short fiction you can check out at his site. He wrote There Is No Antimemetics Division, which I think is best described by the intro he wrote for it:
An antimeme is an idea with self-censoring properties; an idea which, by its intrinsic nature, discourages or prevents people from spreading it.
Antimemes are real. Think of any piece of information which you wouldn’t share with anybody, like passwords, taboos and dirty secrets. Or any piece of information which would be difficult to share even if you tried: complex equations, very boring passages of text, large blocks of random numbers, and dreams…
But anomalous antimemes are another matter entirely. How do you contain something you can’t record or remember? How do you fight a war against an enemy with effortless, perfect camouflage, when you can never even know that you’re at war?
Welcome to the Antimemetics Division.
No, this is not your first day.
There are a lot of other good entries too. They generally take the form of a wiki entry at https://scp-wiki.wikidot.com/, written as a classified file describing some anomalous thing or event. They share a canon, but only loosely; individual stories can conflict with one another. Here are a couple of good ones:
I’ll post over in !scp@lemmy.world too, to see what other people recommend for getting into it
!scp@lemmy.world and !bluey@lemmy.world are both communities that are pretty low traffic atm, but it seems like there are a lot of Lemmings who would be into them
That’s a great line of thought. Take the algorithm “simulate a human brain”. That would obviously break the paper’s argument, so you’d have to explain why it doesn’t apply here before taking the paper’s claims at face value.
There are a number of major flaws with it:
IMO there are also flaws in the argument itself, but those are more relevant
Not in general, sorry. Your best bet is to make sure you’re using the most recent kernel, which Ubuntu tends to lag on. You can also try checking out the Arch Wiki entry for it. It’s for a different distro, but the wiki is good and often has tips that apply to any distro.
What kernel are you running? From what I understand, that should be the major differentiator if you’re not using S3.
Couldn’t tell you unfortunately. It looks like AMD is also on board with deprecating S3 sleep, so I would guess that it’s not significantly better. The kernel controls the newer standby modes, so it’s really going to depend on how well it’s supported there.
Sleep kind of sucks on the original 11th gen hardware. They pushed out a BIOS update that broke S3 sleep, so now all you’ve got is s2idle, which the kernel is only OK at. Your laptop bag might heat up. S3 breaking isn’t really their fault; Intel deprecated it. Still annoying though. I’ve heard the Chromebook version and other newer gens have better sleep support.
Other than that, it’s great. NixOS runs just fine, and even the fingerprint reader works, which has been rare on Linux.
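If you want to double-check what you’re actually getting, the kernel version and the active suspend mode are both easy to read. Rough sketch below; nothing Framework-specific about it, it’s just the standard /sys/power/mem_sleep interface:

```python
# Quick check of suspend support: kernel version plus which suspend modes
# the kernel exposes. The active mode is shown in brackets, e.g. "[s2idle]"
# on s2idle-only machines, or "s2idle [deep]" where S3 is still available.
import platform
from pathlib import Path

print("kernel:", platform.release())

mem_sleep = Path("/sys/power/mem_sleep")
if mem_sleep.exists():
    print("mem_sleep:", mem_sleep.read_text().strip())
else:
    print("no /sys/power/mem_sleep; suspend-to-RAM may not be supported")
```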
It is a bold claim, but based on their success with ruff, I’m optimistic that it might pan out.
This is a silly argument:
[…] But even if we give the AGI-engineer every advantage, every benefit of the doubt, there is no conceivable method of achieving what big tech companies promise.’
That’s because cognition, or the ability to observe, learn and gain new insight, is incredibly hard to replicate through AI on the scale that it occurs in the human brain. ‘If you have a conversation with someone, you might recall something you said fifteen minutes before. Or a year before. Or that someone else explained to you half your life ago. Any such knowledge might be crucial to advancing the conversation you’re having. People do that seamlessly’, explains van Rooij.
‘There will never be enough computing power to create AGI using machine learning that can do the same, because we’d run out of natural resources long before we’d even get close,’ Olivia Guest adds.
That’s as shortsighted as the “I think there is a world market for maybe five computers” quote, or the worry that NYC would be buried under mountains of horse poop before cars were invented. Maybe transformers aren’t the path to AGI, but there’s no reason to think we can’t achieve it in general unless you’re religious.
EDIT: From the paper:
The remainder of this paper will be an argument in ‘two acts’. In ACT 1: Releasing the Grip, we present a formalisation of the currently dominant approach to AI-as-engineering that claims that AGI is both inevitable and around the corner. We do this by introducing a thought experiment in which a fictive AI engineer, Dr. Ingenia, tries to construct an AGI under ideal conditions. For instance, Dr. Ingenia has perfect data, sampled from the true distribution, and they also have access to any conceivable ML method—including presently popular ‘deep learning’ based on artificial neural networks (ANNs) and any possible future methods—to train an algorithm (“an AI”). We then present a formal proof that the problem that Dr. Ingenia sets out to solve is intractable (formally, NP-hard; i.e. possible in principle but provably infeasible; see Section “Ingenia Theorem”). We also unpack how and why our proof is reconcilable with the apparent success of AI-as-engineering and show that the approach is a theoretical dead-end for cognitive science. In “ACT 2: Reclaiming the AI Vertex”, we explain how the original enthusiasm for using computers to understand the mind reflected many genuine benefits of AI for cognitive science, but also a fatal mistake. We conclude with ways in which ‘AI’ can be reclaimed for theory-building in cognitive science without falling into historical and present-day traps.
That’s a silly argument. It sets up a straw man and knocks it down. Just because you build a model and prove something within it doesn’t mean it has any relationship to the real world.
Canonical lives and dies by the BDFL model. It allowed them to do some great work early on in popularizing Linux with lots of polish. Canonical still does good work when forced to externally, like contributing upstream. The model falters when they have their own sandbox to play in, because under a BDFL any internal feedback like “actually this kind of sucks” just gets brushed aside. It doesn’t help that the BDFL in this case is the CEO, founder, and funder of the company, the one paying everyone working there. People generally don’t like to risk their job to say the emperor has no clothes and all that; it’s easier to just shrug your shoulders and let the internet do it for you.
Here are good examples of when the internal feedback failed and the whole internet had to chime in and say that the hiring process did indeed suck:
https://news.ycombinator.com/item?id=31426558
https://news.ycombinator.com/item?id=37059857
“markshuttle” in those threads is the owner/founder/CEO.
From what I understand, Ada does not have an equivalent to Rust’s borrow checker. There are efforts to replicate that for Ada, but it’s not there yet.