• 0 Posts
  • 182 Comments
Joined 2 years ago
Cake day: June 19th, 2023


  • Here’s a tip on good documentation: try to write the documentation first. Use it as your planning process, to spec out exactly what you’re going to build. Show the documentation to people (on GitHub or on a mailing list or on lemmy or whatever), get feedback, change the documentation to clarify any misunderstandings and/or add any good ideas people suggest.

    Only after the docs are in a good state, then start writing the code.

    And any time you (or someone else) finds the documentation doesn’t match the code you wrote… that should usually be treated as a bug in the code. Don’t change the documentation, change the code to make them line up.
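    The same idea works at the function level too. Here’s a generic sketch (mine, not from any particular project - the function and the SoldOutError are made up): write the docstring as the spec, get it reviewed, and only then fill in the body:

    ```python
    def allocate_seats(booking_id: str, party_size: int) -> list[str]:
        """Assign party_size adjacent seats to the given booking.

        Returns the seat labels, e.g. ["14A", "14B"].
        Raises SoldOutError (hypothetical) if no adjacent block is available.
        """
        # The spec above gets reviewed and agreed before this body exists.
        # If behaviour ever drifts from the docstring, the body is the bug.
        raise NotImplementedError
    ```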


  • Sure we can make a different ticket for that to move this along, but we’re getting product to agree first.

    Ooof, I’m glad I never worked there.

    QA’s job should be to make sure the bugs are known/documented/prioritised. It should never be a roadblock that interrupts work while several departments argue over what to do with a ticket.

    Seriously, who cares if the current ticket is left open with a “still need to do XYZ” or it gets closed and a new one is opened: “need to do XYZ”. Both are perfectly valid; do whichever one you think is appropriate and don’t waste anyone’s time discussing it.


  • what else do I get with something like CARROT that the default doesn’t offer

    More control over what data is highlighted as the primary metrics at the top of the report (or on widgets).

    Where I live the actual temperature and “feels like” temperature are often really far apart. Apps like Carrot can be configured to show “feels like” as the main temperature, but Apple only shows it if you scroll all the way down past a bunch of nearly useless stats, like the sunset time (spoiler: it will be the same as yesterday) and how the current temperature compares to the historical average.

    Also, I live near the beach and want to know the tides. That’s almost more important than the temperature.


  • Some of us don’t like watching beloved musical instruments destroyed. We also don’t like how so many people think watching TikTok on an iPad is “music”.

    When my father died, my sister didn’t give a shit about the house. She just wanted the guitar - which our father (a drummer) inherited when the lead guitarist in his band died. The guitarist had two dozen guitars, but this one was his favourite.

    It’s close to a century old, nobody knows what trade secrets the luthier who created it used to get that sound, and no other instrument sounds the same. It’s been used on stage in countless live performances on every continent in the world and has been used to record over a hundred songs in professional recording studios. It was used to play music at the funerals of both of its previous owners and it’s literally impossible to replace.

    I get it, not every instrument is that special… but this instrument wasn’t that special either when the first guitarist ever picked it up. Nearly all instruments have the potential to become that special… and Apple created a video dedicated to destroying a bunch of them while also implying that listening to an MP3 is as good as an actual instrument. No way.


    1. Write down who your customers are.
    2. Write down what problem your customers have, which can be solved with your product.
    3. Write down how your product solves that problem.
    4. Figure out how you can achieve that goal (this needs to be separate from step 3 - you’re essentially tackling the same problem from a different perspective, which helps you see things that might not be visible from the other angle).
    5. Anything that does not bring your product closer to the goal(s), remove it from your product.
    6. Anything that would bring you closer, but isn’t achievable (not enough resources, going to take too long, etc), remove those as well.

    Those are not singular items. You have multiple customers (hopefully thousands). Your customers have multiple problems your product can solve. Actually solving all of those problems will require multiple things.

    If that list sounds functional, that’s because good design is functional. Aesthetics matter. Why do you choose a black shirt over an orange shirt with rainbow unicorns? Take the same approach for the colours on your app icon. Why do you choose jeans that are a specific length? Take that same approach for deciding how many pixels should separate two buttons on the screen.


    You said you struggle with looking at a blank page. Everyone does. If you follow the steps I outlined above, then you won’t be working with a blank page.



  • I guess if you spend all your time working with a laptop on a kitchen counter, this product can help with that.

    … but WTF are you doing working on a kitchen counter? Get yourself a proper desk. Seriously. And if you’ve got a proper desk, two or three large displays will provide better pixel density and a more comfortable work environment for a lot less money.

    I can get behind using it for meditation, gaming, watching videos, etc… but no way am I going to spend this kinda money on any of those use cases. I look forward to a future version that is an order of magnitude cheaper.


  • iTunes didn’t start life as a first-party app though. Apple bought a third-party app (SoundJam MP) and hired the entire development team. That team shipped iTunes shortly after, and at least one of them still works for Apple today.

    While iTunes 1.0 was quite different from SoundJam, it’s likely they were working on a major redesign when Apple bought them and they simply finished it off - I’d guess Apple’s only real contribution was the “glass” user interface elements which eventually became systemwide.

    iTunes got progressively less logical/intuitive with every release after the initial purchase.

    Here’s SoundJam MP, iTunes 1.0, and iTunes 10.0 — which was either the best or worst version (best because it had the most features; worst because do you really want a social network and movies/TV shows in your music player?). From iTunes 11 onwards they finally cut features, but threw out the baby with the bathwater.


  • Sure - for example we migrated all our stuff from MySQL to MariaDB.

    It was completely painless, because all of the source code and many of the people who wrote that code migrated to MariaDB at the same time. They made sure the transition was effortless. We spent months second-guessing ourselves, weighing all of our options, checking and triple-checking our backups, verifying everything worked smoothly afterwards… but the actual transition itself was a very short shell script that ran in a few seconds.

    I will never use a proprietary database unless it’s one I wrote myself and I’d be extremely reluctant to do that. You’d need a damned good reason to convince me not to pick a good open source option.

    My one exception to that rule is Backblaze B2. I do use their proprietary backup system, because it’s so cheap. But it’s only a backup and it’s not my only backup, so I could easily switch.

    I’m currently mid-transition from MariaDB to SQLite. That one is more complex, but not because we did anything MariaDB specific. It’s more that SQLite is so different we have a completely different database design (for one thing, we have hundreds of databases instead of just one database… some of those databases are less than 100KB, so the server just reads the whole thing into RAM, and queries that were slow on our old monolithic database take less than 1 millisecond with this new system).
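    To make that concrete, here’s a minimal sketch of the many-small-databases pattern using Python’s built-in sqlite3 module. The directory layout, the one-file-per-customer split, and the records table are my assumptions for illustration, not the commenter’s actual schema:

    ```python
    import sqlite3
    from pathlib import Path

    # Hypothetical layout: one small SQLite file per customer.
    DB_DIR = Path("/var/data/customers")

    def get_connection(customer_id: str) -> sqlite3.Connection:
        """Open this customer's own database. Files this small end up
        cached in RAM by the OS after the first read, so queries are
        effectively in-memory."""
        conn = sqlite3.connect(DB_DIR / f"{customer_id}.sqlite3")
        conn.row_factory = sqlite3.Row
        return conn

    # Usage: every query touches only one customer's tiny database.
    conn = get_connection("customer-42")
    row = conn.execute(
        "SELECT value FROM records WHERE id = ?", ("invoice-7",)
    ).fetchone()
    ```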

    never use anything vendor specific like stored procedures, vendor specific datatypes or meta queries

    Yeah, we don’t do anything like that. All the data in our database uses JSON types (string, number, boolean, null), with the exception of binary data (primarily images). It doesn’t even distinguish between float/int - though our code obviously does. All of the queries we run are simple “get this row by primary key” or “find all rows matching these simple where clauses”. I don’t even use joins.

    Stored procedures/etc are done in the application layer. For example we don’t do an insert query anywhere. We have a “storage” object with simple read/write functions, and on top of that there’s an object for each model. That model does all kinds of things, such as writing the same data in different places (with different indexes) and catching “row not found” failures with an “ok, let’s check if it’s in this other place”. That’s also the layer where we enforce constraints, including complex business rules such as “even if this data is invalid — we will record it anyway, and flag it for a human to follow up on”.
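    As a rough illustration of that layering (the class names and the business rule here are invented for the example, not taken from the comment): a storage object that only knows get/put by key, with a model on top doing the fallback reads and soft constraints:

    ```python
    from typing import Optional

    class Storage:
        """Dumb persistence layer: read/write by primary key, nothing else."""
        def __init__(self) -> None:
            self._rows: dict[str, dict] = {}

        def get(self, key: str) -> Optional[dict]:
            return self._rows.get(key)

        def put(self, key: str, row: dict) -> None:
            self._rows[key] = row


    class OrderModel:
        """Application-layer 'stored procedure' logic lives here."""
        def __init__(self, primary: Storage, archive: Storage) -> None:
            self.primary = primary
            self.archive = archive

        def find(self, order_id: str) -> Optional[dict]:
            # "Row not found" in the primary store falls back to another place.
            return self.primary.get(order_id) or self.archive.get(order_id)

        def save(self, order_id: str, order: dict) -> None:
            if order.get("total", 0) < 0:
                # Soft constraint: record invalid data anyway, flag for a human.
                order["needs_review"] = True
            self.primary.put(order_id, order)  # primary copy
            self.archive.put(order_id, order)  # same data, second place/index
    ```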



  • Is there some huge benefit that I’m missing?

    For example I recently fixed a bug where a function would return an integer 99.9999% of the time, but the other 0.0001% of the time it returned a float. The actual value came from an HTTP request, so it started out as a string, and the code was relying on dynamic typing to convert that string to a type that could be operated on with math.

    In testing, the code only ever encountered integer values. About two years later, I discovered customer credit cards were charged the wrong amount of money if it was a float value. There was no exception, there was nothing visible in the user interface, it just charged the card the wrong amount.

    Thankfully I’m experienced enough to have seen errors like this before - and I had code in place comparing the actual amount charged to the amount on the customer invoice… and that code did throw an exception. But still, it took two years for the first exception to be thrown, and then about a week for me to prioritise the issue, track down the line of code that was broken, and deploy a fix.

    In a statically typed language, my IDE would have flagged the line of code in red as I was typing it. I would’ve been like “oh… right” and fixed it in two seconds.
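    A contrived Python sketch of that class of bug (the names are invented, not the actual code): the annotation promises an int, the rare input path returns a float, and a static checker like mypy flags the line immediately, while a dynamic runtime just charges the card the wrong amount:

    ```python
    def parse_amount_cents(raw: str) -> int:
        """Parse an amount from an HTTP request, e.g. "1299" -> 1299 cents."""
        if "." in raw:
            # The 0.0001% path: "12.99" sneaks through as a float.
            # mypy: error: Incompatible return value type (got "float", expected "int")
            return float(raw) * 100  # and 12.99 * 100 isn't even exactly 1299.0
        return int(raw)
    ```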

    Yes — there are times when typing is a bit of a headache and requires extra busywork casting values and such. But that is more than made up for by time saved fixing mistakes as you write code instead of fixing mistakes after they happen in production.


    Having said that, I don’t use TypeScript, because I think it’s only recently become mature enough to be a good choice… and WASM is so close to being in the same state, which will allow me to use even better typed languages. Ones that were designed to be statically typed from the ground up instead of having types bolted onto an existing dynamically typed language.

    I don’t see much point in switching things now, I’ll wait for WASM and use Rust or Swift.


  • The big difference is Memcached is multi-threaded, while Redis is single-threaded.

    That makes Redis more efficient - it doesn’t have to waste time with locks, and assuming the server isn’t overloaded, any individual operation should be faster in Redis. Potentially a lot faster.

    But obviously Memcached shares the load across threads, which can theoretically allow higher throughput under heavy load… if your task is suited to multithreading and doesn’t involve a shedload of contested locks.

    Which one is a better choice will depend on your task. I’d argue don’t limit yourself to either one: consider both, and pick the one that aligns best with your needs.
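    The client APIs are similar enough that trying both against your real workload is cheap. A minimal Python sketch using the redis and pymemcache packages (assuming servers running on their default local ports):

    ```python
    import redis
    from pymemcache.client.base import Client as MemcacheClient

    # Redis: single-threaded server, so no per-operation lock overhead.
    r = redis.Redis(host="localhost", port=6379)
    r.set("session:42", "alice")
    print(r.get("session:42"))  # b'alice'

    # Memcached: multi-threaded server, scales across cores under load.
    mc = MemcacheClient(("localhost", 11211))
    mc.set("session:42", "alice")
    print(mc.get("session:42"))  # b'alice'
    ```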




  • have there been any improvements in this area?

    Um… what rock have you been living under?

    For simple code generation, I use GitHub Copilot.

    In both cases (Copilot here, and ChatGPT below), I’m essentially writing the code “from scratch” myself every time, but now I can type “write a person class” then “add a name property”, etc. Best of both worlds - the control of hand-written code, and the efficiency of not having to type it all out.

    When your code is really repetitive, you don’t even need to give it any prompts at all. You can usually just start a new empty line and it will guess what line goes there. For example if you have a firstName property, it will predict you’re about to add lastName.
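    For example (a hypothetical illustration of the kind of completion you’d see, not captured Copilot output):

    ```python
    from dataclasses import dataclass

    @dataclass
    class Person:
        first_name: str
        last_name: str  # after typing first_name, this is the obvious next suggestion
        email: str      # ...and completions like this usually follow
    ```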

    When it’s more complex, for example if I haven’t figured out how to structure the code yet, I use ChatGPT+. That’s more of a conversation approach, similar to bouncing ideas off a colleague… “how would you do this; what about that edge case; etc”.




  • Any chance work will issue you with a Mac? This would be so much easier if they would. You’d be able to use the iPad as an external display for the Mac, and could run Mac note taking apps (which save all their data on the Mac) on the iPad screen, with full touchscreen and pencil support for drawing/etc. The iPad basically becomes a Cintiq.

    I think the only way Windows can connect to an iPad over USB is with “iTunes File Sharing”, which requires installing iTunes on Windows - then it will be able to access some data on the iPad. It used to be pretty widespread for note taking apps to support that, but I’m not sure how common it is now; almost everyone uses cloud sync these days.


  • Let’s have a look at the memory speed on your 2012 Mac:

    • RAM: 25 GB/s
    • HDD: 0.1 GB/s
    • SSD: 0.7 GB/s if it’s a really good one; a cheap one might be closer to the HDD

    Now compare that to the latest MacBook Air:

    • RAM: 100 GB/s
    • SSD: 5 GB/s

    And aside from bandwidth, there are also latency improvements that are even more impressive.


    These are the numbers that you are actually going to notice in everyday life - they are far more important than CPU speed. They are also far more important than whether or not the software you’re using is native or emulated - because modern emulation usually works quite well (I run Intel software all day every day on my M1 MacBook Air, which is a lot slower than the computers you’re considering).

    The SSD being an order of magnitude faster than on your old 2012 model also means a lot of things that historically needed to be stored in RAM no longer need to be in RAM. That’s particularly true for Photoshop and iMovie, which will both use all of the memory you have and use swap if they need more than that. In practice, you won’t notice when they use swap - because what used to be a three-second beachball in Photoshop is now zero seconds.

    Another thing to consider is modern versions of macOS will compress some of your RAM, which is incredibly effective. Windows and Linux do that too - it’s an industry standard now, and it’s not just about saving memory. If you can store 2GB of data in 1GB of RAM, that effectively doubles your memory bandwidth (because compressing and decompressing takes almost no time on a modern CPU). Software memory, it turns out, is usually extremely compressible.

    Like you, I had 16GB on my 2012 MacBook Pro, and I still have 16GB today on my Apple Silicon Mac. It was all I could afford in 2012 and I wished I could have more. These days I can afford more, but I just don’t see the point in paying. 16GB is enough now*.

    (* although if you want to play with generative AI, then you’ll want more RAM)


    The primary difference between the MacBook Air and MacBook Pro is the GPU, but it doesn’t sound like that will be an issue for you. I recommend the MacBook Air - not just because it’s cheaper; it’s also smaller and lighter, with a bigger screen, etc.


    Regarding the Mac Mini, no, I don’t think 8GB is enough. Keep in mind the core operating system itself uses about 4GB of your RAM… so an 8GB Mac will have 4GB for the software you run, and a 16GB Mac will have 12GB available for your software. Personally I’ve configured Docker on my Mac to use 8GB on its own… but it obviously depends what containers (and how many) you are running.

    8GB probably would be just enough (your Docker containers sound smaller than mine), but my feeling is it’s a little too close for comfort and you would likely regret it in a couple of years’ time, when you run something that needs 16GB.