A Slint fanboy from Berlin.

  • 4 Posts
  • 63 Comments
Joined 2 years ago
Cake day: June 12th, 2023



  • Yes, I should not have said “impossible”: nothing is ever impossible to breach. All you can do is make a breach more expensive to accomplish.

    Those separate TPM chips are getting rare… most of the time they are built into the CPU (or firmware) nowadays. That makes sniffing harder, but probably opens other attack vectors.

    Anyway: Using a TPM chip makes it more expensive to extract your keys than not using such a chip. So you win by using one.


  • Tobias Hunger@programming.dev to Privacy@lemmy.ml · Do you use TPM ? · 2 months ago

    A TPM is a very slow and dumb chip: It can hash data somebody sends to it and it can encrypt and decrypt data slowly. That’s basically it. There is no privacy concern there that I can see. That chip can not read or write memory nor talk to the network.

    Together with early boot code in the firmware/bootloader/initrd and later user space that chip can do quite a few cool things.

    That code will use the TPM to measure data (create a hash of it) before loading it and transferring control over to it, and then unlock secrets only if the measurements match expected values. There is no way to extract those secrets on any system with different measurements (like a different computer, or even a different OS on the same computer). I find that pretty interesting and would love to use that, but most distributions do not offer that functionality yet :-(

    Using the TPM to unlock the disks is about as secure as leaving the booted computer unattended somewhere: If you trust the machine not to let random people log in, then TPM-based unlocking is fine. If you do not: Stay away.

    Extracting the keys locked to a TPM is supposed to be impossible, so you do not need to worry about somebody stealing your keys. That alone makes TPMs very interesting… your own little FIDO token built right into your machine :-)
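
    To make the measurement part concrete, here is a minimal sketch of the extend rule a PCR follows (new value = hash of old value concatenated with the hash of the next stage). This is plain Rust using the sha2 crate, purely an illustration rather than real TPM access, and the stage names are made up:

    ```rust
    // Minimal sketch of a PCR-style measurement chain -- illustration only,
    // this does not talk to a real TPM. Requires the `sha2` crate.
    use sha2::{Digest, Sha256};

    fn sha256(data: &[u8]) -> Vec<u8> {
        Sha256::digest(data).to_vec()
    }

    /// Extend the register: new = SHA256(old || SHA256(stage)).
    fn extend(pcr: &[u8], stage: &[u8]) -> Vec<u8> {
        let mut h = Sha256::new();
        h.update(pcr);
        h.update(sha256(stage));
        h.finalize().to_vec()
    }

    fn main() {
        let mut pcr = vec![0u8; 32]; // PCRs start out all zeroes
        for stage in ["firmware", "bootloader", "kernel + initrd"] {
            pcr = extend(&pcr, stage.as_bytes());
        }
        // The TPM only releases a sealed secret if the final value matches
        // the one recorded when the secret was sealed; change any stage and
        // the whole chain changes.
        let hex: String = pcr.iter().map(|b| format!("{b:02x}")).collect();
        println!("final measurement: {hex}");
    }
    ```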





  • Not only that: It protects your data. The Unix security model is unfortunately stuck in the 1970s: It protects users from each other. That is a wonderful property, but in today’s world you also need to protect users from the applications they are running: Anything running as your user has access to all your data. And on most computer systems the interesting data is the users’ data: cryptographic keys, login information, financial information, … . Typically users are much more upset about losing their data than about some virus infecting the OS files; those are trivial to fix.

    Running anything as another user stops that application from having access to most of your data.
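
    A trivial sketch of that point in Rust: run as your own user this reads your private data without any exploit at all, run as a different user it fails. The path is only an example and may not exist on your machine:

    ```rust
    // Not an exploit, just a normal program: anything running as *your* user
    // can read whatever your user can read.
    use std::fs;

    fn main() {
        let home = std::env::var("HOME").unwrap_or_else(|_| ".".to_string());
        let key = format!("{home}/.ssh/id_ed25519"); // hypothetical example path
        match fs::read(&key) {
            Ok(bytes) => println!("read {} bytes of private key material", bytes.len()),
            Err(err) => println!("could not read {key}: {err}"),
        }
        // Run the same binary as a *different* user and the read fails with
        // "Permission denied" -- that separation between users is all the
        // classic Unix model gives you.
    }
    ```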



  • Any of the many immutable distros (Vanilla OS, Fedora Silverblue, Bluefin, Aeon, Endless OS, PureOS, …) will obviously work.

    Most of your customizations will live in your home directory anyway, so the details of the host OS do not matter too much. As long as it comes with the UI you like, you will be mostly fine. And you said you like GNOME: that installs many apps from Flathub anyway, and they work just fine from there.

    For development work you just set up a distrobox/toolbox container and are ready to go with everything you need. I much prefer that over working on the “real system”, as I can have different environments for different projects and do not have to pollute my system with all kinds of dependencies that the rest of my system has no use for.

    NixOS is of course also an option and is quasi-immutable, but it is also much more complicated to manage.






  • That depends a lot on how you define “correct C”.

    It is harder to write Rust code that the compiler will accept than to write C code that the compiler will accept. But it is IMHO easier to write Rust code than to write correct C code, in the sense of C code that only uses well-defined constructs from the C standard.

    The difference is that the Rust compiler is much stricter, so you need to know a lot about the details of the memory model, etc. to get your code past the compiler. In C you need the same knowledge to debug the program later.
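
    A small sketch of the kind of bug meant here: returning a pointer to a local. The C version and the rejected Rust version appear only in the comments; the code that compiles is what rustc pushes you towards:

    ```rust
    // C: compiles without complaint, but returns a dangling pointer (UB):
    //
    //     const char *greeting(void) {
    //         char buf[16] = "hello";
    //         return buf;            // buf is gone once the function returns
    //     }
    //
    // The naive Rust translation is rejected, because the returned reference
    // would outlive the local variable it points to:
    //
    //     fn greeting() -> &'static str {
    //         let buf = String::from("hello");
    //         &buf                   // error: cannot return reference to local `buf`
    //     }
    //
    // So the compiler forces you to write the version that is actually correct:
    fn greeting() -> String {
        String::from("hello") // ownership moves to the caller, nothing dangles
    }

    fn main() {
        println!("{}", greeting());
    }
    ```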


  • That depends on how you decide which bucket something gets thrown into.

    The C++ community values things like RAII and other features that developers can use to prevent classes of bugs. When that is your yardstick, C and C++ are not in the same bucket.

    These papers are about memory safety guarantees and not much else. C and C++ are firmly in the same bucket according to this metric. So they get grouped together in these papers.


  • It’s just a git repo, so it does not replace a forge. A forge provides a lot of services around the repo and makes the project discoverable for potential users. None of that is covered by this thing.

    I frankly see little value in wrapping a decentralized version control system into layers of cryptography that hide where the data is actually stored (and for how long it is going to be stored). Just mirror the repo a couple of times and you have pretty good protection against the code going offline again, and you are done. No cryptography needed, and you get a lot of extras, too.

    If you do not like github: Use other forges. Self-host something, go to Codeberg or sourcehut, use something other than git like pijul or fossil, or whatever tickles your fancy. Unfortunately you will miss out on a lot of potential contributors and users there :-(


  • The GPL affects “derived works”. So if your code is derived from proprietary code, you can not use the GPL, as you would need to re-license the proprietary code and you can’t do that (assuming you do not hold the copyright to the proprietary code). The LGPL and permissive licenses are probably fine, though.

    Now what exactly is a “derived work”? That is unfortunately up to interpretation, and different organizations draw the line in slightly different places. We’d need people to go to court to get that line nailed down more firmly.



  • > Then how do you not see the point of a distributed sourceforge?

    But this is no forge, it is just a git repo.

    > Again, have you even opened the webpage?

    Yeap, I even put a repo into it. That’s why I am so certain that it is useless.

    Hosting a git repo is not a problem. Having a discoverable forge is. And this does not help with that in any way.

    > So github is not a problem?

    Whether or not github is a problem has no bearing on whether this is a solution.

    > And regarding crypto, show me where in the code it forces you to use crypto. Show me the rad command that inhibits you from doing a normal git operation by bringing up crypto.

    There is lots of needless crypto(graphy) going on all over the place. It is entirely useless for code hosting in a git repo.


  • No, I would prefer a world where not everything is concentrated on github, but that is the world we have to work with :-)

    But how does this address any of the problems you brought up?

    Do you think a project will be more discoverable when you say: “Clone foo/bar from github” or when you say “install this strange crypto-BS, then clone rad:xyhdhsjsjshhhfuejthhh just like you normally would”?

    Apart from discoverability, github gives you a known workflow for contributors, a CI, and a bug tracker. Coincidentally, those make it hard for projects to switch away from github… how does this address any of that? “Use this workflow, which is even weirder than any of the other github alternatives!” and “just set up a server yourself”?

    Sorry, this is just yet another crypto-bro solution in search of a problem. Technically interesting, I’ll give you that, but useless.