I’m the administrator of kbin.life, a general-purpose, tech-orientated kbin instance.

  • 1 Post
  • 268 Comments
Joined 2 years ago
Cake day: June 29th, 2023


  • I think a lot of people are mostly on the money here. It’s to do with resistance. Now, I’m not a qualified electrician, but I’m an amateur radio license holder and a lot of what you learn for that is applicable here.

    The main problem, as many have said, is resistance. It comes from the length of the conductors, but also from every plug/socket connection, each of which adds a little resistance. In the case of the non-extension socket multipliers, stacking more of them also adds weight bearing down on the connections, likely making them less secure, which increases the resistance further and can even introduce arcing.

    Now, the resistance alone likely wouldn’t be a huge problem on small loads. But with a large enough load (specifically at the end of the stacked connectors/extensions), or a fault that draws more current than expected, that current flowing through the added resistance will generate heat (a rough worked example follows at the end of this comment).

    There’s a lot of ifs and maybes involved, but why take the risk? There’s no real-world situation that needs a dangerous stack of extensions like this.

    For larger loads here in the UK there are some very specific additional concerns around ring mains, but you’d need to do something really weird/unusual for those to become a problem.
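
    To put rough numbers on the heating point, here’s a back-of-the-envelope sketch in Python. The per-junction resistance is an assumed figure for illustration, not a measured one.

    ```python
    # Back-of-the-envelope: power wasted across stacked plug/socket junctions.
    # P = I^2 * R. The resistance figure is an illustrative assumption.
    current_a = 13.0              # a fully loaded UK plug (13 A fuse)
    junctions = 6                 # stacked multipliers/extensions in the chain
    r_per_junction_ohm = 0.05     # assumed resistance of one worn connection

    r_total_ohm = junctions * r_per_junction_ohm
    power_w = current_a ** 2 * r_total_ohm    # P = I^2 * R
    print(f"~{power_w:.0f} W dissipated in the connectors")  # ~51 W of heat
    ```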



  • Yeah, but they’re a cheat. They’re lithium cells regulated down to 1.5 V. Good ones are rare, and when you do find them they’re generally expensive. And because they’re regulated down, you generally see 100% battery showing until just before they fail.

    I used them for some voltage-sensitive stuff, but finding a brand that held a good charge for more than even 50-100 charge cycles was hard.

    NiMH is much better for anything that isn’t too fussy about the voltage.





  • r00ty@kbin.life to Linux@lemmy.ml · Intel or AMD CPUs for new Laptops? · 22 days ago

    Well, for a gamer, no real comment. But there is one metric where Intel’s APUs still trash AMD’s: hardware video acceleration/encoding. The quality is objectively better with Intel Quick Sync.

    When I was getting a home box that also needed to do transcoding, an Intel CPU was a requirement. My desktop development/gaming system? Ryzen + Nvidia.
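
    For anyone curious, the transcoding side boils down to something like the sketch below, calling ffmpeg’s QSV encoder from Python. The filenames are placeholders, and it assumes an ffmpeg build with Quick Sync support and an Intel iGPU visible to the process.

    ```python
    import subprocess

    # Minimal Quick Sync transcode via ffmpeg's QSV encoder (h264_qsv).
    # Assumes ffmpeg was built with QSV support and the Intel iGPU is
    # accessible (e.g. /dev/dri/renderD128 on Linux). Filenames are placeholders.
    subprocess.run(
        [
            "ffmpeg",
            "-hwaccel", "qsv",        # hardware decode on the iGPU where possible
            "-i", "input.mkv",
            "-c:v", "h264_qsv",       # Quick Sync H.264 encode
            "-global_quality", "23",  # ICQ quality mode, roughly comparable to CRF
            "-c:a", "copy",           # pass the audio through untouched
            "output.mkv",
        ],
        check=True,
    )
    ```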


  • It’s not how ActivityPub works (at least as Lemmy/*bin servers implement it). As far as I’ve ever seen, there is no API within ActivityPub that allows for this (the Lemmy/*bin implementations do have the API the browser/apps use, which must provide it, but that’s not ActivityPub). It actually looks to be cleverly designed to prevent it. It might look like backfilling is happening because old content appears, but there are reasons for that.

    Here’s how it works, from my experience (I did some work on kbin’s federation a year or so ago):

    • Instance A subscribes to community B hosted on Instance C.
    • Instance C notes this and does nothing. No previous content is sent, only future activities will be.
    • User on Instance D already subscribed to community B upvotes a comment on a post in community B.
    • Instance D sends the activity to Instance C.
    • Instance C sends the activity to Instance A.
    • Instance A gets notice of the upvote but realises it has no context for it. Luckily, the upvote carries the ID of the comment it relates to, so Instance A requests that comment from Instance C.
    • Instance A receives the response from Instance C, but it turns out that comment was a reply to another comment. The comment contains the ID of its parent, so Instance A requests that one too (and any further parents, until it reaches the parent post).
    • By now Instance A has the information about the like, plus every comment from the liked comment up to the post. These are saved to the database and will appear on the local system.
    • For each of the likes, comments and posts, if the author isn’t known locally, their profile is also fetched from their home instance and stored.

    And so old posts and comments begin to appear as activities linked to them happen. But there isn’t a method to ask for “all the posts in community X” using ActivityPub. I remember, because I was specifically looking for this a year or so ago. It lets you see the parent object, but not any children.
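
    To make the parent-walking concrete, here’s a minimal sketch of what an instance does when an activity references an unknown comment. The function name is mine, and a real server would also need HTTP signatures, caching and depth limits; inReplyTo and the application/activity+json media type are standard ActivityPub, though.

    ```python
    import requests

    AP_ACCEPT = "application/activity+json"  # standard ActivityPub media type

    def fetch_context(object_id: str) -> list[dict]:
        """Walk inReplyTo links from a comment up to its root post.

        Sketch only: a real server adds HTTP signatures, retries,
        caching and a depth limit.
        """
        chain, seen = [], set()
        while object_id and object_id not in seen:
            seen.add(object_id)
            resp = requests.get(object_id, headers={"Accept": AP_ACCEPT}, timeout=10)
            resp.raise_for_status()
            obj = resp.json()
            chain.append(obj)
            # Comments carry inReplyTo; the root post has none, which ends the walk.
            object_id = obj.get("inReplyTo")
        return chain  # liked comment first, root post last
    ```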

    Maybe Mastodon etc. does it differently? No idea.

    And all of this is moot, because if I block a User-Agent, or an AS number/IP block, they’re not getting anything, whether via ActivityPub or scraping, unless they change User-Agent, AS number, or both.



  • But they aren’t. They’re not after ActivityPub specifically; they’re scraping the whole internet, most of them using clear bot User-Agents. So I routinely block their bots, because the AI ones usually hit you multiple times a second, non-stop. If they started running fake ActivityPub nodes they wouldn’t be scraping as bots, and they’d specifically want fediverse data. Important to note here, though: an ActivityPub node doesn’t “collect” data. It subscribes (to Mastodon users/hashtags, or to communities) and then gets new data delivered to it, so it wouldn’t get the old stuff.

    Having said that, I’ve seen some obvious bots using genuine browser User-Agents on IP addresses belonging to certain very large Chinese companies. For those I just blocked the whole AS number.
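
    For reference, the User-Agent blocking can be as simple as the nginx sketch below (nginx because it’s a common front end for *bin/Lemmy; the bot names are just examples, check your own logs).

    ```nginx
    # Flag known AI-crawler User-Agents (example names; adjust to your logs).
    map $http_user_agent $is_ai_bot {
        default      0;
        ~*GPTBot     1;
        ~*ClaudeBot  1;
        ~*CCBot      1;
        ~*Bytespider 1;
    }

    server {
        # ... existing listen/server_name/location config ...
        if ($is_ai_bot) {
            return 403;
        }
    }
    ```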




  • I don’t think it’s rose-tinted glasses, really. I think it’s just the change in dynamic. It was definitely different during the “real” classic times (I’d say classic through Wrath).

    In 2005, when I started playing, you really needed to group up to get things done. When you did, you met people. You talked; not with a microphone, but you would be talking. You’d get to know people, they’d invite you to dungeon groups and vice versa, it would widen both of your in-game circles, and so on.

    When I got to the position to raid, I was on an RP-PvP realm, and while there were raiding guilds, many people were in smaller guilds, either role-playing guilds or guilds of friends, so there were often cross-guild raiding groups. I was in one of these, and we had our own guild-chat-esque channel that everyone in the group could chat through; raids, of course, were mandatory voice, because you generally did need communications to raid. This widened your in-game circle too.

    I still speak now, on social media in various forms, to some of the people I played the game with in 2005-2010. Some I met in person, others I never did. I’ve not really played retail much for a while now, but it’s not the same. To an extent, neither is classic now.

    Now, a probably unpopular opinion: I think a lot of people believe Blizzard’s actions led to this change in community spirit. I actually think it’s the other way round. They saw their player base changing and adjusted the game to suit. The side effect was that it put off some of those with a more social gaming mindset for good, but it would have happened anyway.

    Times change, and they just rolled with it.



  • OK, look back at the original picture this thread is based on.

    We have two situations.

    The first is a dedicated system providing navigation and other subsystems for a very specific purpose, with very specific, very limited hardware: an 8-bit CPU with a clearly known RISC-esque instruction set, 4 KB of RAM and a bus to connect devices.

    The second is a modern computer system with unknown hardware: one of many CPUs offering the same instruction set but with differing extensions, and a lot of memory attached.

    You are going to write software very differently for these two systems. You cannot realistically abstract on the first system; in reality you can’t even use libraries directly, though maybe you can borrow code from one at best. On the second system you MUST abstract, because you don’t know whether the target system will run an Intel or AMD CPU, what the GPU might be, what other hardware is in place, etc.

    And this is why my original comment said you just cannot compare these systems. One MUST use abstraction; the other must not. And abstractions DO produce overhead (which is an inefficiency), but we NEED that, and it’s not a bad thing.
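
    As a toy illustration of that overhead (Python, with invented layer names): each extra layer of indirection costs a little on every single call, which is exactly the trade a portable system accepts.

    ```python
    import timeit

    def add(a, b):          # the "bare metal" version
        return a + b

    def hal_add(a, b):      # pretend hardware-abstraction layer
        return add(a, b)

    def api_add(a, b):      # pretend public API on top of the HAL
        return hal_add(a, b)

    # Same work, more layers: the abstracted call is measurably slower per call.
    print(timeit.timeit("add(1, 2)", globals=globals(), number=1_000_000))
    print(timeit.timeit("api_add(1, 2)", globals=globals(), number=1_000_000))
    ```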


  • r00ty@kbin.life to Programmer Humor@lemmy.ml · Progress! · 27 days ago

    Exactly my point, though. My original point was that you cannot compare these, and the main reason is the abstraction required for modern development (which happens both at the development level and in the operating system you run it on).

    The Apollo software was machine code running on known bare metal, interfacing with known hardware, with no requirement to deal with abstraction, libraries, unknown hardware, etc.

    This was why my original comment made it clear: you just cannot compare the two.

    Oh, one quick edit to say: I don’t in any way mean to take away from the amazing achievement of the Apollo developers. That was amazing software. I just think it’s not fair to compare apples with oranges.


  • r00ty@kbin.life to Programmer Humor@lemmy.ml · Progress! · 27 days ago

    It does. It definitely does.

    If I write software for fixed hardware, with my own operating system designed for that fixed hardware, and you write software for a generic operating system that can work with many hardware configurations, mine runs faster every time. Every single time. That doesn’t make either better.

    This is my whole point. You cannot compare the Apollo software with a program written for a modern system. You just cannot.


  • r00ty@kbin.life to Programmer Humor@lemmy.ml · Progress! · 27 days ago

    Wait a second. When did I say abstraction was bad? It’s needed now. But when you compare 8-bit machine code written for specific hardware against modern programming, where you MUST handle multiple x86/x86_64 CPUs and multiple hardware combinations (either in the exe or in the libraries that handle the abstraction), of course there is an overhead. If you want to tell me there’s no overhead, then I’m going to tell you where to go right now.

    It’s a necessary evil we must have in the modern world. I feel like the people hating on what I say are misunderstanding the point I’m making, which is WHY we cannot compare these two things!


  • r00ty@kbin.life to Programmer Humor@lemmy.ml · Progress! · 27 days ago

    Except it’s not nonsense. I’ve worked in development through both eras. You need to develop in an abstracted way because there are so many variations of hardware to deal with.

    There is bloat, for sure, and of course a lot of it is because it’s usually much better to use an existing library than to reinvent the wheel, and the library needs to cover many more use cases than your own. I encountered this myself, where I used a web library to work with releases on Forgejo, had it working generally, then saw there was a dedicated library for it. The boilerplate to make that library work was more than what I’d written to just make the web requests directly (a rough sketch of the direct approach is below).
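
    Something like this sketch was all the direct approach needed (hypothetical host, owner, repo and token; the /api/v1/repos/{owner}/{repo}/releases route is the standard Gitea/Forgejo one).

    ```python
    import requests

    # Direct Forgejo API call instead of pulling in a client library.
    # Host, owner, repo and token are placeholders.
    BASE = "https://forgejo.example.com/api/v1"
    resp = requests.get(
        f"{BASE}/repos/some-owner/some-repo/releases",
        headers={"Authorization": "token YOUR_API_TOKEN"},
        timeout=10,
    )
    resp.raise_for_status()
    for release in resp.json():
        print(release["tag_name"], release.get("name", ""))
    ```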

    But that’s mostly size. The bloat in terms of speed is mostly in the operating system, I think, and in hardware abstraction, not in libraries by and large.

    I’m also going to say that papering over legacy systems doesn’t always make things slower. Where I work, I’ve worked on our legacy system for decades, but on the current product for probably the past 5-10 years. We still sell both. The legacy system is not the slower system.