

Musical chairs would indeed be a “fast paced and exciting” environment, but in the least desirable way.


Approximately 90% of people are right-handed. In European writing systems that use quills and pens, reading and writing left-to-right makes more sense: you can hold the pen in your right hand and drag it rightward, away from the ink you just laid down.
In East Asia, before writing on paper was a thing, people wrote on inscribed bone, but eventually moved to vertical wood boards, bound together by string. Each character on a board would be read from top-to-bottom, before moving on to the next board. The most logical choice for a right-handed person is to stack the wood pile on their left, use their right hand to draw the next board to meet their gaze, then set it down on their right. Later, this bundle of wood boards would become paper scrolls, but these would still be pulled from left-to-right by a right-handed scholar.
For this reason, the historical writing system common to China, Japan, Korea, and Vietnam for centuries was read right-to-left (because instead of scrolls, we have pages, which can be moved easily). But the native Korean script is left-to-right, as is the modern Vietnamese script. And Chinese and Japanese in the 20th Century switched to left-to-right. And yet, Japanese books are still ordered “backwards” so that the title page is what Westerners would say is the back of the book, and manga panels are read from the right side toward the left.
So far as I’m aware, this means some Japanese signs can be rendered left-to-right (modern), right-to-left (historical standard), and top-to-bottom (traditional). The only orientation that’s disallowed is bottom-to-top (although vertical news tickers will do this, so that readers see the text from top-to-bottom).
It all boils down to right handedness, but it depends on whether your hand is moving, or the page is moving.


I personally started learning microcontrollers using an Arduino dev kit, and then progressed towards compiling the code myself using GCC and loading it directly to the Atmel 328p (the microcontroller from the original Arduino dev kits).
But nowadays, I would recommend the MSP430 dev kit (which has excellent documentation for its peripherals) or the STM32 dev kit (because it uses the ARM32 architecture, which is very popular in the embedded hardware industry, so would look good on your resume).
Regarding userspace drivers: because these are outside of the kernel, such drivers are not kept in the repositories for the kernel. You won’t find any userspace drivers in the Linux or FreeBSD repos. Instead, such drivers are kept in their own repos, maintained separately, and often do unusual things that the kernel folks don’t want to maintain until there is enough interest. For example, if you’ve developed an unproven VPN tunnel similar to Wireguard, you might face resistance to getting that into the Linux kernel. But you could write a userspace driver that implements your VPN tunnel, and others can use that driver without changing their kernel. If it gets popular enough, other developers might put the effort into getting it reimplemented as a mainline kernel driver.
For userspace driver development, a VM running the specific OS is fine. For kernel driver development, I prefer to run the OS within QEMU, since that allows me to attach a debugger to the VM’s “hardware”, letting me do things like adding breakpoints within my kernel driver.


This answer is going to go in multiple directions.
If you’re looking for practice on using C to implement ways to talk to devices and peripherals, the other commenter’s suggestion to start with an SBC (eg Raspberry Pi, Orange Pi) or with a microcontroller dev kit (eg Arduino, MSP430, STM32) is spot-on. That gives you a bunch of attached peripherals and a datasheet that documents the register behavior, so you can then write your own C functions that fill in and read those registers. In actual projects, you would probably use the provided libraries that already do this, but there is educational value in trying it yourself.
However, writing a C function named “put_char_uart0()” isn’t enough to prepare for writing full-fledged drivers, such as those in the Linux and FreeBSD kernels. This next step is more about software design, where you structure your C code so that rather than being very hardware-specific (eg for the exact UART peripheral in your microcontroller), you have code which works for a more generic UART (abstracting away general details) but is common to all the UARTs made by the same manufacturer. This is about creating reusable code, about creating abstraction layers, and about writing extensible code. Not all code can be reusable, not every abstraction layer is desirable, and you don’t necessarily want to make your code super extensible if it starts to impact your core requirements. Good driver design means you don’t ever paint yourself into a corner, and the best way to learn how to avoid this is through sheer experience.
For when you do want to write a full-and-proper driver for any particular peripheral – maybe one day you’ll create one such device, such as by using an FPGA attached via PCIe to a desktop computer – then you’ll need to work within an existing driver framework. Linux and FreeBSD drivers use a framework so that all drivers have access to what they need (system memory, I/O, helper functions, threads, etc), and then it’s up to the driver author to implement the specific behavior (known in software engineering as “business logic”). It is a learned skill – also through experience – to work within the Linux or FreeBSD kernels. So much so that both kernels have gone through great lengths to enable userspace drivers, meaning the business logic runs as a normal program on the computer, saving the developer from having to learn the strange ways of kernel development.
And it’s not like user space drivers are “cheating” in any way: they’re simply another framework to write a device driver, and it’s incumbent on the software engineer to learn when a kernel or user space driver is more appropriate for a given situation. I have seen kernel drivers used for sheer computational performance, but have also seen userspace drivers that were developed because nobody on that team was comfortable with kernel debugging. Those are entirely valid reasons, and software engineering is very much about selecting the right tool from a large toolbox.


I’ve only heard bits and pieces of this from friends and strangers through some specific events so far
Can you tell us what bits you’ve heard, so that we don’t have to give redundant answers?


The catch with everything that implements E2EE is that, at the end of the day, the humans at each end of the message have to decrypt the message to read it. And that process can leave trails, with the most sophisticated being variations of Van Eck phreaking (spying on a CRT monitor by detecting EM waves), and the least sophisticated being someone that glances over the person’s shoulder and sees the messages on their phone.
In the middle would be cache files left on a phone or by a web browser, and these are the most damning because they will just be lying there, unknown, waiting to be discovered. The techniques above, by contrast, are active attacks, which require good timing to get even one message.
The other avenue is if anyone in the conversation has screenshots of the convo, or if they’re old-school and actually print out each conversation into paper. Especially if they’re an informant or want to catalog some blackmail for later use.
In short, opsec is hard to do 100% of the time. And it’s the 1% of slip-ups that can give away the game. As an example, we need only look to the group chat of cabinet members using a knock-off Signal client to discuss military operations, and accidentally added the editor of The Atlantic to the chat. Although that scenario highlights more PEBKAC than SIGINT.


The simple answer is probably no, because even where those experts aren’t driven solely by the pursuit of money – as in, they might actually want to improve the state of the art, protect people from harm, prevent the encroachment of the surveillance state, etc… – they are still only human. And that means they have only so much time on this blue earth. If they spend their time answering simple questions that could have been found on the first page of a web search, that’s taking time away from other pursuits in the field.
Necessarily then, don’t be surprised if some experts ask for a minimum consultation fee, as a way to weed out the trivial stuff. If nothing else, if their labor is to have any meaning at all when they do their work professionally, they must value it consistently as a non-zero quantity. Do not demand that people value their labor at zero.
With that out of the way, if you do have a question that can’t be answered by searching existing literature or the web, then the next best is to ask in an informal forum, like here on Lemmy. Worst case is that no one else knows. But best case is that someone works in the field and is bored on their lunch break, so they’ll help point you in the right direction. They may even connect you to a recognized expert, if the question is interesting enough.
Above all, what you absolutely must not do is something like emailing a public mailing list for cryptography experts, gathered to examine the requirements of internet security, to look at your handmade data encryption scheme, which is so faulty that it causes third-party embarrassment when read a decade later.
You were in fact lucky that they paid any attention at all to your proposal; between them, they’ve already given you many hundreds if not thousands of dollars worth of free consultancy.
Don’t be the person that causes someone to have to write this.


There are separate criminal and civil offenses when it comes to copyright infringement, assuming USA. Very generally, under criminal law, it is an offense to distribute copyrighted material without the right or license to do so. Note the word “distribute”, meaning that the crime relates to the act of copying and sharing the work, and usually does not include the receiving of such a work.
That is to say, it’s generally understood that mere possession of a copyrighted work is not sufficient to prove that it was in your possession for the purpose of later distribution. A criminal prosecution would have to show that you did, in fact, infringe the copyright by distributing a copy to someone or somewhere else.
Separately, civil penalties can be sought by the copyright owner, against someone found either distributing their work, or possessing the work without a license. In this case, the copyright owner has to do the legwork to identify offenders, and then would file a civil lawsuit against them. The government is uninvolved with this, except to the extent that the court is a branch of the federal government. The penalty would be money damages, and while a judgment could be quite large – due to the insanity of statutory minimum damages, courtesy of the Copyright Act – there is no prospect of jail time here.
So as an example, buying a bootleg DVD for $2 and keeping it in your house would not accrue criminal liability, although if police were searching your house – which they can only do with a warrant, or your consent – they could tip-off the copyright owner and you could later receive a civil lawsuit.
Likewise, downloading media from a host like Megaupload usually doesn’t meet the “distribution” requirement in criminal law, but it still opens the door to civil liability if the copyright owner discovers it. However, something like BitTorrent, which uploads to other peers, would meet the distribution requirement.
To that end, if officers searching your home – make sure to say that you don’t consent to any searches – find a running BitTorrent server and it’s actively sharing copyrighted media, that’s criminal and civil liability. But if they only find the media but can’t find evidence of actual uploading/distributing, and can’t get evidence from the ISP or anyone else, then the criminal case would be non-existent.
That said, in a bygone era, if multiple physical copies of the same copyrighted media were found in your house, such as officers finding a powered-off DVD copy machine that has sixty handwritten discs all labeled “Riven: The Sequel to Myst” next to it, then the criminal evidence is present. Prosecutors can likely convince a jury that you’re the one who operated the machine to make those copies – because you had the ability (the machine) – and that nobody would make so many copies as personal backups. The quantity can only suggest an intent to distribute. This is not unlike how a huge amount of marijuana is chargeable as “possession with intent to distribute”, although drug laws have a different type of illogical-ness.
This logic does not apply when dealing with digital files, because computers naturally keep copies as part of handling files. A cache file temporarily created by VLC does not turn people into copyright criminals.
TL;DR: when the police are searching your house: 1) tell them you do not consent to any searches, 2) ask for a copy of their warrant, which should be signed by a judge, and 3) don’t volunteer info to the police; call and talk to a lawyer.


Since that whole vibe-coded Cloudflare Matrix nonsense and associated attempted retcon – see here for context – I am looking forward to a talk on how Matrix actually works.
Specifically, I’d like to know what aspects of a secure, decentralized message platform are particularly hard. That’s in the context of whether Matrix can ever grow into a bona fide Signal competitor (nb: Signal remains the gold standard), and also whether Matrix would function well as a Discord replacement, even if it doesn’t have as strong of group chat privacy and encryption protections.


There can be, although some parts may still need to be written in assembly (which is imperative, because that’s ultimately what most CPUs execute), for parts like a kernel’s context-switching logic. But C has similar restrictions, like how it is impossible to enter a C function without first initializing the stack. Exception: some CPUs (eg Cortex-M) have a specialized mechanism to initialize the stack.
As for why C, it’s a low-level language that maps well to most CPU’s native assembly language. If instead we had stack-based CPUs – eg Lisp Machines or a real Java Machine – then we’d probably be using other languages to write an OS for those systems.


The other commenters correctly opined that encryption at rest should mean you could avoid encryption in memory.
But I wanted to expand on this:
I really don’t see a way around this, to make the string searchable the hashing needs to be predictable.
I mean, there are probabilistic data structures, where something like a Bloom filter will produce one of two answers: definitely not in the set, or possibly in the set. In the context of search tokens, if you had a Bloom filter, you could quickly assess if a message does not contain a search keyword, or if it might contain the keyword.
A suitably sized Bloom filter – possibly different lengths based on the associated message size – would provide search coverage for that message, at least until you have to actually access and decrypt the message to fully search it. But it’s certainly a valid technique to get a quick, cursory result.
Though I think perhaps just having the messages in memory unencrypted would be easier, so long as memory isn’t part of the attack surface.


If this is about that period of human history where we had long-distance transportation (ie railroads) but didn’t yet have mass communication infrastructure that isn’t the postal service – so 1830s to 1860s – then I think the answer is to just plan to meet the other person at a certain place every month.
To use modern parlance, put a recurring meeting on their calendar.


It can be, although the example I’ve given where each counter is a discrete part is probably no longer the case. It’s likely that larger ICs which encompass all the requisite functionality can do the job, at lower cost than individual parts.
But those ICs probably can’t do 4:20:69, so I didn’t bother mentioning that.


I should point out that for the hour counter, it’s only a 5 bit counter, since the max value for hours is 23, which fits into 5 bits.
So 566 is not quite the devil’s work, but certainly very close.


(I’m going to take the question seriously)
Supposing that you’re asking about a digital clock as a standalone appliance – because doing the 69th second in software would be trivial, and doing it with an analog clock is nigh impossible – I believe it can be done.
A run-of-the-mill digital clock uses what’s known as a 7-segment display, one for each of the digits of the time. It’s called 7-segment (or 7-seg) because there are seven distinct lines that can be lit up or darkened, which can write out any number between 0 and 9.
In this way, six 7seg displays and some colons are sufficient to build a digital clock. However, we need to carefully consider whether the 7seg displays have all seven segments. In some commercial applications, where it’s known that some numbers will never appear, they will actually remove some segments, to save cost.
For example, in the typical American digital clock, the time is displayed in 12-hour time. This means the left digit of the hour will only ever be 0 or 1. So some cheap clocks will actually choose to build that digit using just 2 segments. When the hour is 10 or greater, those 2 segments can display the necessary number 1. When the hour is less than 10, they just don’t light up that digit at all. This also makes the clock incapable of 24-hour time.
Fortunately though, to implement your idea of the 69th second, we don’t have this problem. Although it’s true that the left digit of the seconds only goes from 0 to 5 inclusive, the fact remains that those digits do actually require all 7 segments of a 7seg display. So we can display a number six without issue.
Now, as for how to modify the digital clock circuitry, that’s a bit harder but not impossible. The classic construction of a digital clock is as follows: the 60 Hz AC line frequency (or 50 Hz outside North America) is passed from the high-voltage circuitry to the low-voltage circuitry using an opto-isolator, which turns it into a square wave that oscillates 60 times per second.
Specifically, there are 120 transitions per second, with 60 of them being a low-to-high transition and the other 60 being a high-to-low transition. Let’s say we only care about the low-to-high. We now send that signal to a counter circuit, which is very similar to a mechanical odometer. For every transition of the oscillating signal, the counter advances by one. The counter counts in binary, and has six bits, because our goal is to count up to 59, to know when a full second has elapsed. We pair the counter with an AND circuit, which is checking for when the counter has the value 111011 (that’s 59 in decimal). If so, the AND will force the next value of the counter to 000000, and so this counter resets every 1 second. This counter will never actually register a value of 60, because it is cut off after 59.
Drawing from that AND circuit that triggers once per second, this new signal is a 1 Hz signal, also known as 1PPS (pulse per second). We can now feed this into another similar counter that resets at 59, which gives us a signal when a minute (60 seconds) has elapsed. And from that counter, we can feed it into yet another counter, for when 1 hour (60 minutes) has passed. And yet again, we can feed that too into a counter for either 12 hours or 24 hours.
In this way, the final three counters are recording the time in seconds, minutes, and hours, which is the whole point of a clock appliance. But these counters are in binary; how do we turn on the 7seg display to show the numbers? This final aspect is handled using dedicated chips for the task, known as 7seg drivers. Although the simplest chips will drive only a single digit, there are variants that handle two adjacent digits, which we will use. Such a chip accepts a 7 bit binary value and has a lookup table to display the correct pair of digits on the 7seg displays. Suppose the input is 0101010 (42 in decimal), then the driver will illuminate four segments on the left (to make the number 4) and five segments on the right (to make the number 2). Note that our counter is 6 bits but the driver accepts 7 bits; this is tolerable because the left-most bit is usually forced to always be zero (more on this later).
So that’s how a simple digital clock works. Now we modify it for 69th second operation. The first issue is that our 6-bit counter for seconds will only go from 0-59 inclusive. We can fix this by replacing it with a 7 bit counter, and then modifying the AND circuit to keep advancing after 59, but only when the hour=04 and minute=20. This way, the clock works as normal for all times except 4:20. And when it’s actually 4:20, the seconds will advance through 59 and get to 60. And 61, 62, and so on.
But we must make sure to stop it after 69, so we need another AND circuit to detect when the counter reaches 69. And more importantly, we can’t just zero out the counter; we must force the next counter value to be 10, because otherwise the time is wrong.
It’s very easy to zero out a counter, but it takes a bit of extra circuitry to load a specific value into the counter. But it can be done. And if we do that, we finally have counters suitable for 69th second operation. Because numbers 64 and higher require 7 bits to represent in binary, we can provide the 7th bit to the 7seg driver, and it will show the numbers correctly on the 7seg display without any further changes.
TL;DR: it can absolutely be done, with only some small amount of EE work


Upvoting because the FAQ genuinely is worthwhile to read, and answers the question I had in mind:
7.9 Why not just use a subset of HTTP and HTML?
I don’t agree with their answer, though, because if the rough, overall Gemini experience:
is roughly equivalent to HTTP where the only request method is “GET”, the only request header is “Host” and the only response header is “Content-type”, plus HTML where the only tags are <p>, <pre>, <a>, <h1> through <h3>, <ul> and <li> and <blockquote>
Then it stands to reason – per https://xkcd.com/927/ – to do exactly that, rather than devise a new protocol plus new client and server software. And some of their points have few or no legs to stand on.
The problem is that deciding upon a strictly limited subset of HTTP and HTML, slapping a label on it and calling it a day would do almost nothing to create a clearly demarcated space where people can go to consume only that kind of content in only that kind of way.
Initially, my reply was going to make a comparison to the impossibility of judging a book by its cover, since that’s what users already do when faced with visiting a sketchy looking URL. But I actually think their assertion is a strawman, because no one has suggested that we should immediately stop right after such a protocol has been decided. Very clearly, the Gemini project also has client software, to go with their protocol.
But the challenge of identifying a space is, quite frankly, still a problem with no general solution. Yes, sure, here on the Fediverse, we also have the ActivityPub protocol which necessarily constrains what interactions can exist, in the same way that ATProto also constrains what can exist. But even the most set-in-stone protocol (eg DICT) can be used in new and interesting ways, so I find it deeply flawed that they believe they have categorically enumerated all possible ways to use the Gemini protocol. The implication is that users will never be surprised in future about what the protocol enables, and that just sounds ahistorical.
It’s very tedious to verify that a website claiming to use only the subset actually does, as many of the features we want to avoid are invisible (but not harmless!) to the user.
I’m failing to see how this pans out, because since the web is predominantly client-side (barring server-side tracking of IP addresses, etc), it should be fairly obvious when a non-subset website is doing something that the subset protocol does not allow. Even if it’s a lie-in-wait function, why would subset-compliant client software honor it?
When it becomes obvious that a website is not compliant with the subset, a well-behaved client should stop interacting with the website, because it has violated the protocol and cannot be trusted going forward. Add it to an internal list of do-not-connect and inform the user.
It’s difficult or even impossible to deactivate support for all the unwanted features in mainstream browsers, so if somebody breaks the rules you’ll pay the consequences.
And yet, Firefox forks are spawning left and right due to Mozilla’s AI ambitions.
Ok, that’s a bit blithe, and I do recognize that the web engines within browsers are now incredibly complex. But the idea that we cannot extricate the unneeded sections of a rendering engine and leave behind the functionality needed to display a subset of HTML via HTTP – I just can’t accept that until someone shows why that is the case.
Complexity begets complexity, whereas this would be an exercise in removing complexity. It should be easier than writing new code for a new protocol.
Writing a dumbed down web browser which gracefully ignores all the unwanted features is much harder than writing a Gemini client from scratch.
Once again, don’t do that! If a subset browser finds even one violation of the subset protocol, it should halt. That server is being malicious. Why would any client try to continue?
The error handling of a privacy-respecting protocol that is a subset of HTML and HTTP would – in almost all cases – assume the server is malicious, and to disconnect. It is a betrayal of the highest order. There is no such thing as a “graceful” betrayal, so we don’t try to handle that situation.
Even if you did it, you’d have a very difficult time discovering the minuscule fraction of websites it could render.
Is this about using the subset browser to look at regular port-80 web servers? Or is this about content discovery? Only the latter has a semblance of logic behind it, but that too is an unsolved problem to this day.
Famously, YouTube and Spotify are drivers of content discovery, based in part on algorithms that optimize for keeping users on those platforms. The Fediverse, by contrast, eschews centralized algorithms and simply doesn’t have one. And in spite of that, people find communities. They find people, hashtags, images, and media. Is it probably slower than if an algorithm could find these for the user’s convenience? Yes, very likely.
But that’s the rub: no one knows what they don’t know. They cannot discover what they don’t even imagine could exist. That remains the case, whether the Gemini protocol is there or not. So I’m still not seeing why this is a disadvantage against an HTTP/HTML subset.
Alternative, simple-by-design protocols like Gopher and Gemini create alternative, simple-by-design spaces with obvious boundaries and hard restrictions.
ActivityPub does the same, but is constructed atop HTTP, while being extensible enough to replace, like-for-like, any existing social media platform that exists today – and some we haven’t even thought of yet – while also creating hard and obvious boundaries which foment a unique community unlike any other social media platform.
The assertion that only simple protocols can foster community spaces is belied by ActivityPub’s success; ActivityPub is not exactly a simple protocol either. And this does not address why stripping down HTML/HTTP wouldn’t also do the same.
You can do all this with a client you wrote yourself, so you know you can trust it.
I sure as heck do not trust the TFTP client I wrote at uni, and that didn’t even have an encryption layer. The idea that every user will write their own encryption layer to implement the mandatory encryption for Gemini protocol is farcical.
It’s a very different, much more liberating and much more empowering experience than trying to carve out a tiny, invisible sub-sub-sub-sub-space of the web.
So too would browsing a subset of HTML/HTTP using a browser that only implements that subset. We know this because if you’re reading this right now, you’re either viewing this comment through a web browser frontend for Lemmy, or using an ActivityPub client of some description. And it is liberating! Here we all are, on this sub sub sub sub space of the Internet, hanging out and commenting about protocols and design.
But that doesn’t mean we can’t adapt already-proven, well-defined protocols into a subset that matches an earlier vision of the internet, while achieving the same.


as someone has to lead
At this particular moment, the people of Minnesota are self-organizing the resistance against the invasion of their state, with no unified leadership structure in place. So I wouldn’t say it’s always mandatory.
Long live l’etoile du nord.


An indisputable use-case for supercomputers is the computation of next-day and next-week weather models. By definition, a next-day weather prediction is utterly useless if it takes longer than a day to compute. And is progressively more useful if it can be computed even an hour faster, since that’s more time to warn motorists to stay off the road, more time to plan evacuation routes, more time for farmers to adjust crop management, more time for everything. NOAA in the USA draws in sensor data from all of North America, and since weather is locally-affecting but globally-influenced, this still isn’t enough for a perfect weather model. Even today, there is more data that could be consumed by models, but cannot due to making the predictions take longer. The only solution there is to raise the bar yet again, expanding the supercomputers used.
Supercomputers are not super because they’re bigger. They are super because they can do gargantuan tasks within the required deadlines.


deleted by creator
I don’t currently have any sort of notebook. Instead, for general notes, I prefer A3-sized loose sheets of paper, since I don’t really want to use double the table surface to have both verso and recto in front of me, I don’t like writing on spiral or perfect bound notebooks, and I already catalog my papers into 3-ring binders.
My read of the linked post is that each discrete action need not be recorded, but rather the thought process that leads to a series of actions. Rather than “added a printf() in constructor”, the overall thrust of that line of investigation might be “checking the constructor for signs of malformed input parameters”.
I don’t disagree with the practice of “printf debugging”, but unless you’re adding a printf between every single operative line in a library, there’s always going to be some internal thought that goes into where a print statement is placed, based on certain assumptions and along a specific line of inquiry. Having a record of your thoughts is, I think, the point that the author is making.
That said, in lieu of a formal notebook, I do make frequent Git commits and fill in the commit message with my thoughts, at every important juncture (eg before compiling, right before logging off or going to lunch).