

Maybe https://en.wikipedia.org/wiki/A_True_Story from the 2nd century - although even that is a parody of existing stories. So the origin dates back a long time!
This slowly degrades the power of the union, and ultimately reduces the wages and benefits of the workers.
I’m not sure I buy into that. That said, I live in a country where unions are popular but are not allowed to force people to join (although they do have a right of access to workplaces to ask people to join and to hold meetings).
Firstly, it doesn’t take that big a percentage of an employer’s workforce striking for a strike to be effective - companies don’t have a lot of surplus staff capacity just sitting around doing nothing. And they can’t fire striking union workers for striking.
Secondly, if all employees have to belong to one particular union, the employees have no choice of union, and hence no leverage over it. Bad unions that just agree to whatever the employer asks and don’t look after their members then become entrenched, and the employees can’t do much about it. If several unions represent employees, they can still unite and work together when they agree on an issue - but there is much more incentive for unions to act in the interests of their members, instead of just their leadership.
A lack of guaranteed employee protections, on the other hand, is inexcusable - it’s just wealthy politicians looking out for the interests of their donors in big business.
By population, and not land area, certain more remote places are well known despite having quite low populations. ‘Everyone’ is a high bar, but most adults in Australia would know the following places (ordered from smaller population but slightly less known, to higher population):
Stargate SG-1, Season 4, Episode 6 has a variant of the loop trope, but no one (including most of the protagonists, and everyone else on Earth) remembers what happens, while two protagonists remember every loop until they are able to stop the looping.
They debrief the others who don’t remember at the end (leaving out the things they did when they took a loop off, anyway!) - but the others didn’t miss too much, since everyone else on Earth missed it too.
Another fictional work - a book, not a movie / TV show / anime - is Stephen Fry’s 1996 novel Making History. The time travel aspect is questionable - the protagonist sends objects back in time to stop Hitler being born, but no people travel through time. However, he remembers the past from before his change, and has to deal with the consequences of having the wrong memories relative to everyone else.
No point asking them to justify why they have to ask, they probably don’t even know. Just say “Sorry, I don’t give that out”. I’ve never had a store push back after that - they probably get it all the time.
TIOBE is meaningless - it is just search engine result counts, which for many search engines are likely wildly inaccurate estimates of how many pages match in their index. Many of those matches will not be about the relevant language, and the numbers probably have very little correlation with who actually uses it (especially for languages whose names are a single letter, include punctuation, or are a common English word).
Modems also make noise while connected, not just when connecting. The connection noise is more distinctive because the modems go through a handshake in which you can hear distinct tones; they then negotiate a higher baud rate involving modulation of many different frequencies, at which point the sound is indistinguishable to the human ear from white noise (a sort of loud hissing). If you pick up the phone while the modem is connected at the higher baud rate (after the handshake), you’ll hear that hissing. Picking up the phone also introduces noise on the line, eventually causing too many errors for the connection to be sustained, so both ends hang up - and you’ll then hear the normal tone for when the called party has hung up.
I believe it is the American term for what might be called an Owners Corporation / Body Corporate / Apartment Owners Association / Management Company in other parts of the English-speaking world.
When people say Local AI, they mean things like the Free / Open Source Ollama (https://github.com/ollama/ollama/): you can read the source code to check it doesn’t phone home, and you completely control when and if you upgrade it. If you don’t like something in the code base, you can also fork it and start your own version. The actual models used with Ollama (e.g. Mistral is a popular one) are commonly represented in GGML format, which doesn’t even carry executable code - only massive multi-dimensional arrays of numbers (tensors) that represent the parameters of the LLM.
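As a concrete illustration of the ‘local’ part, here is a minimal sketch (mine, not lifted from any project) of querying a local Ollama instance over its HTTP API from Rust. It assumes Ollama is running on its default port 11434 with the mistral model already pulled, and uses the reqwest (blocking + json features) and serde_json crates - nothing here leaves your machine:

```rust
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();
    // POST to the local Ollama server - the request never leaves localhost.
    let response = client
        .post("http://localhost:11434/api/generate")
        .json(&json!({
            "model": "mistral",       // assumes `ollama pull mistral` was run
            "prompt": "Why is the sky blue?",
            "stream": false           // return one JSON object, not a stream
        }))
        .send()?;
    let body: serde_json::Value = response.json()?;
    println!("{}", body["response"]);
    Ok(())
}
```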
Not trusting the output to be correct is reasonable. But in terms of trusting the software not to spy on you, when it is FOSS it is no different from trusting any other FOSS software not to spy on you (e.g. the Linux kernel, etc…). That is a risk to an extent if there is an xz-style attack on a code base, but I don’t think the risks are materially different for ‘AI’ compared to any other software.
Blockchain is great for when you need global consensus on the ordering of events (e.g. Alice gave all her 5 ETH to Bob first, so a later transaction to give 5 ETH to Charlie is invalid). It is an unnecessarily expensive solution just for archival, since it necessitates storing the data on every node forever.
Ethereum charges ‘gas’ fees per transaction, which helps ensure it doesn’t collapse under the weight of excess usage. Blocks have transaction limits, and transactions have size limits. It currently works out at about US$7,500 per MB of block data (which is stored forever, and replicated to every node in the network). The Internet Archive apparently holds ~50 PB of data, which would cost around US$371 trillion to put onto Ethereum (in practice, attempting this would push up the price of ETH further, and if they succeeded, most nodes would not be able to keep up with the network). Really, this just tells us that blockchain is not appropriate for that use case; the designers of real-world blockchains have created mechanisms to make it financially unviable to attempt at that scale, because it would effectively destroy the ability to operate nodes.
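As a rough back-of-the-envelope check of that figure (a sketch using the numbers above; the exact result shifts with the ETH price and gas costs):

```rust
fn main() {
    let cost_per_mb_usd = 7_500.0_f64;      // ~US$7,500 per MB of block data
    let archive_bytes = 50.0e15_f64;        // ~50 PB (decimal petabytes)
    let archive_mb = archive_bytes / 1.0e6; // bytes -> MB = 5e10 MB
    let total_usd = archive_mb * cost_per_mb_usd;
    // Prints roughly 375 - the same ballpark as the ~US$371 trillion above.
    println!("≈ US${:.0} trillion", total_usd / 1.0e12);
}
```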
The only real reason to use an existing blockchain anyway would be the theory that you could argue it is too big to fail, due to legitimate business use cases, and that censorship-resistant data is too hard to remove from it. However, if the majority of its use became censorship-resistant data sharing, with legitimate transactions in the minority, I doubt that would stop authorities going after node operators and so on.
The real problems that an archival project faces are:
This is absolutely because they pulled the emergency library stunt, and they were loud as hell about it. They literally broke the law and shouted about it.
I think that you are right as to why the publishers picked them specifically to go after in the first place. I don’t think they should have done the “emergency library”.
That said, the publishers arguments show they have an anti-library agenda that goes beyond just the emergency library.
Libraries are allowed to scan/digitize books they own physically. They are only allowed to lend out as many as they physically own though. Archive knew this and allowed infinite “lend outs”. They even openly acknowledged that this was against the law in their announcement post when they did this.
The trouble is that the publishers are not just going after them for infinite lend-outs. The publishers are arguing that they shouldn’t be allowed to lend out any digital copies of a book they’ve scanned from a physical copy, even if they lock away the corresponding number of physical copies.
Worse, they got a court to agree with them on that, which is where the appeal comes in.
The publishers want it to be that physical copies can only be lent out as physical copies, and for digital copies the libraries have to purchase a subscription for a set number of library patrons and concurrent borrows, specifically for digital lending, and with a finite life. This is all about growing publisher revenue. The publishers are not stopping at saying the number of digital copies lent must be less than or equal to the number of physical copies, and are going after archive.org for their entire digital library programme.
No
On economic policy I am quite far left - I support a low Gini coefficient, achieved through a mixed economy, but with state provided options (with no ‘think of the businesses’ pricing strategy) for the essentials and state owned options for natural monopolies / utilities / media.
But on social policy, I support social liberties and democracy. I believe the government should intervene, with force if needed, to protect people’s rights from interference by others (including rights to bodily safety and autonomy, the right not to be discriminated against, the right to a clean and healthy environment, and the right not to be exploited or misled by profiteers), and to redistribute wealth from those with a surplus to those in need / to fund the legitimate functions of the state. Outside of that, people should have social and political liberties.
I consider being a ‘tankie’ to require both the leftist aspect (✅) and the authoritarian aspect (❌), so I don’t meet the definition.
The fear that people who like to talk about the singularity tend to propose is that there will be one ‘rogue’ misaligned ASI that progressively takes over everything - i.e. all the AI in the world working against all the people.
My point is that it is more likely there will be lots of ASI or AGI systems, not aligned with each other, with most on the side of the humans.
I think any prediction based on a ‘singularity’ neglects to consider the physical limitations, and just how long the journey towards significant amounts of AGI would be.
The human brain has an estimated 100 trillion neuronal connections - probably a good order-of-magnitude estimate for the parameter count of an AGI model.
If we consider a current GPU, e.g. the 12 GB RTX 3060, it can hold about 24 billion parameters at 4-bit quantisation (in reality a fair few fewer), and uses 180 W of power. So an AGI might use 750 kW of power to operate. A super-intelligent machine might use more. That is a farm of 2,500 300 W solar panels, while the sun is shining, just for the equivalent of one person.
Now to pose a real threat against the billions of humans, you’d need more than one person’s worth of intelligence. Maybe an army equivalent to 1,000 people, powered by around 4,170,000 GPUs and 2,500,000 solar panels.
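A quick sketch of that arithmetic, using the assumptions above (100 trillion parameters, 12 GB GPUs at 4-bit quantisation, 180 W per GPU, 300 W panels):

```rust
fn main() {
    let params_needed = 100.0e12_f64;       // ~100 trillion connections ≈ parameters
    let params_per_gpu = 12.0e9_f64 * 2.0;  // 12 GB at 4 bits/param ≈ 24e9 params
    let gpus_per_agi = (params_needed / params_per_gpu).ceil(); // ≈ 4,167 GPUs
    let watts_per_agi = gpus_per_agi * 180.0;   // ≈ 750 kW
    let panels_per_agi = watts_per_agi / 300.0; // ≈ 2,500 solar panels
    println!(
        "per AGI: {} GPUs, {:.0} kW, {:.0} panels",
        gpus_per_agi,
        watts_per_agi / 1.0e3,
        panels_per_agi
    );
    // An 'army' equivalent to 1,000 such AGIs:
    println!(
        "army: {} GPUs, {:.0} panels",
        gpus_per_agi * 1000.0,
        panels_per_agi * 1000.0
    );
}
```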
That is not going to materialise out of the air too quickly.
In practice, as we get closer to an AGI or ASI, there will be multiple separate deployments of similar sizes (within an order of magnitude), and they won’t be aligned to each other - some systems will be adversaries of any system executing a plan to destroy humanity, and will be aligned to protect against harm (AI technologies are already widely used for threat analysis). So you’d have a bunch of malicious systems, and a bunch of defender systems, going head to head.
The real AI risks, which I think many of the people ranting about singularities want to obscure, are:
I’m looking into it using data from my instance to check it isn’t an abuse issue.
What I know so far:
I looked into this previously, and found that there is a major problem for most users in the Terms of Service at https://codeium.com/terms-of-service-individual.
Their agreement talks about “Autocomplete User Content” as meaning the context (i.e. the code you write, when you are using it to auto-complete, that the client sends to them) - so it is implied that this counts as “User Content”.
Then they have terms saying you license them all your User Content:
“By Posting User Content to or via the Service, you grant Exafunction a worldwide, non-exclusive, irrevocable, royalty-free, fully paid right and license (with the right to sublicense through multiple tiers) to host, store, reproduce, modify for the purpose of formatting for display and transfer User Content, as authorized in these Terms, in each instance whether now known or hereafter developed. You agree to pay all monies owing to any person or entity resulting from Posting your User Content and from Exafunction’s exercise of the license set forth in this Section.”
So in other words, let’s say you write a 1000 line piece of software, and release it under the GPL. Then you decide to trial Codeium, and autocomplete a few tiny things, sending your 1000 lines of code as context.
Then next week, a big corp wants to use your software in their closed-source product, and doesn’t want to comply with the GPL. Exafunction can sell them a licence (“sublicense through multiple tiers”) to allow them to use the software you wrote without complying with the GPL. If it turns out that you used some GPL’d code in your codebase (as the GPL allows), and that code’s developer sues Exafunction for violating the GPL, you have to pay any money owing.
I emailed them about this back in December, and they didn’t respond or change their terms - so they are aware that their terms allow this interpretation.
My grandparents had a lot of antiques, some of which they probably inherited. My grandfather was particularly proud of his clockwork wind-up clock (which was an antique even back then). I disassembled it to find out how it worked, but couldn’t figure out how to reassemble it (and my granddad couldn’t either).
If he wanted to kill it on purpose, he could have just shut it down. Maybe to keep the trademark he could have launched some other telecommunications service and used the brand for that.
Elon Musk is all about convincing people to act against their best interests to benefit him. For example, look at Tesla: it has a manufacturing capacity of ~2 million cars per year. Now look at Toyota: it has a manufacturing capacity of ~9 million vehicles per year. Now look at the market capitalisation of each company: for Tesla it is still about $535B, despite some fall from the peak in 2022. For Toyota, it is $416B (which is a record high).
So Toyota makes almost 5 times as many cars a year, but is worth 78% of Tesla? And the production capacity and value gap was even more extreme in the past? I think the question then is, what is going on?
The answer, of course, is Musk. He is very slick at convincing investors to act against their own best interests (usually by suggesting the possibility of things that happen to have the true objective along the way - like full self-driving cars by 2018, rather than competing with existing auto-makers; or 35-minute travel from San Francisco to Los Angeles; or a colony on Mars, rather than competing with existing satellite companies). This is the same skill-set as a confidence artist’s. I don’t mean to imply that Musk has necessarily done anything illegal, but due to the similarity in skill set, and the large scale at which he operates, it would be fair to call him the most successful con artist in history. Looking at it through this lens can help to identify his motive.
So what would a con artist want with a social network, and why would he want to alienate a whole lot of people, and get a lot of haters?
Well, the truth is that a con artist doesn’t need everyone to believe in them to make money - they just need the marks to believe in them. Con artists don’t want the people who see through the con (call them the haters for lack of a better word) to interfere with their marks though. At the small scale - e.g. a street con, the con artist might separate a couple where one partner is the mark, to prevent the other from alerting their partner to the scam. But in addition to separating the marks from the haters, con artists use brainwashing techniques to create a psychological barrier between the marks and the haters. A Nigerian Prince scammer might try to convince a mark that their accountant can’t be trusted. A religious cult con might brainwash followers to think their family are different from them, and if they try to provide external perspective, they are acting as the devil. They try to make the marks the in-group, and everyone else, even family and friends, the out-group who doesn’t care about the in-group.
So what would a con artist in control of a social network do? They would start by giving themselves the megaphone - amplifying everything they say to try to reach more marks. In parallel, they’d try to get rid of the haters. They could shadow-ban them so the marks never see what they have to say, or they could put up small barriers that the marks will happily jump over - feeling more invested in the platform for having done so - but which scare off the haters. However, the marks and the haters might still interact off the social network - so the con artist would also want to create a culture war, amplifying messages hostile to the haters to make the marks hate them and ignore anything they say.
So what can you do if you don’t want a world wrecked by divisions sown just so billionaires can be even richer? My suggestion: don’t buy into the divisions - work to find common ground with people, even when others say to just ignore them because they are different and will never get it - and get in early, before the divisions are too deep.
I suggest having a threat model about what attack(s) your security is protecting against.
I’d suggest this probably isn’t giving much extra security over a long unique password for your password manager:
That said, it might buy you more convenience at the expense of slightly less security on the convenience/security trade-off - particularly if your threat model is entirely about remote attackers. You would touch a button to decrypt instead of entering a long passphrase.
As an experiment / as a bit of a gag, I tried using Claude 3.7 Sonnet with Cline to write some simple cryptography code in Rust - use ECDHE to establish an ephemeral symmetric key, and then use AES256-GCM (with a counter in the nonce) to encrypt packets from client->server and server->client, using off-the-shelf RustCrypto libraries.
It got the interface right, but it got some details really wrong: it used `wrapping_add` to increment the 32-bit sequence number! For those who don’t know much Rust and/or much cryptography: the golden rule of using ciphers like GCM is that you must never, ever re-use the same nonce for the same key (otherwise you leak the XOR of the two messages). `wrapping_add` explicitly means that when you get up to the maximum number (and remember, it’s only 32 bits, so there’s only about 4.3 billion numbers) it silently wraps back to 0. The secure implementation would be to explicitly fail if you go past the maximum value of the integer before attempting to encrypt / decrypt - and the smart choice would be to use at least 64 bits.

To be fair, I didn’t really expect it to work well. Some kind of security auditor agent that does a pass over all the output might be able to find some of the issues, and pass them back to another agent to correct - which could make vibe coding more secure (remains to be proven).
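For illustration, here is a minimal sketch (my own, not the model’s output) of the kind of nonce handling I would have wanted: a 64-bit counter that refuses to wrap, so encryption fails before a nonce can ever repeat under the same key.

```rust
/// Sketch of a fail-closed nonce counter for AES-256-GCM (96-bit nonces).
/// Assumes the first 4 nonce bytes are fixed at zero and the last 8 carry
/// the big-endian counter.
struct NonceCounter {
    next: u64, // 64 bits - exhausting this is practically impossible
}

impl NonceCounter {
    fn new() -> Self {
        Self { next: 0 }
    }

    /// Returns the next unique nonce, or None once the counter would
    /// overflow - at which point the caller must rekey, not keep encrypting.
    fn next_nonce(&mut self) -> Option<[u8; 12]> {
        let current = self.next;
        // checked_add fails on overflow instead of silently wrapping back
        // to 0, which would repeat a nonce under the same key.
        self.next = self.next.checked_add(1)?;
        let mut nonce = [0u8; 12];
        nonce[4..].copy_from_slice(&current.to_be_bytes());
        Some(nonce)
    }
}

fn main() {
    let mut counter = NonceCounter::new();
    let first = counter.next_nonce().expect("counter exhausted - rekey");
    let second = counter.next_nonce().expect("counter exhausted - rekey");
    assert_ne!(first, second); // every nonce is unique until exhaustion
}
```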
But right now, I’d not put “vibe coded” output into production without someone going over it manually with a fine-toothed comb looking for security and stability issues.