This is something I have thought about a lot recently, since I saw a project that didn’t care about this in the slightest and used vendor-specific MS SQL features all over the place, which brought real advantages in terms of performance optimization.

Basically everyone advises you to write your backend generically, using technologies like ODBC, JDBC, Hibernate, …, and to never use anything vendor-specific like stored procedures, vendor-specific datatypes or meta queries, the argument being that you can later switch your DBMS without much hassle.
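
Roughly what I mean, as a made-up Java/JDBC sketch (the procedure, table and column names are just placeholders, not from the project I saw):

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class OrderQueries {

    // Vendor-specific: JDBC escape syntax calling a (made-up) SQL Server stored procedure.
    // The logic lives in T-SQL, so it does not port to another DBMS as-is.
    static void recentOrdersViaProcedure(Connection conn, int customerId) throws SQLException {
        try (CallableStatement cs = conn.prepareCall("{call dbo.GetRecentOrders(?)}")) {
            cs.setInt(1, customerId);
            try (ResultSet rs = cs.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong("id") + " " + rs.getBigDecimal("total"));
                }
            }
        }
    }

    // Generic: plain JDBC with portable SQL that most DBMSs accept unchanged.
    static void recentOrdersPortable(Connection conn, int customerId) throws SQLException {
        String sql = "SELECT id, total FROM orders WHERE customer_id = ? ORDER BY created_at DESC";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, customerId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong("id") + " " + rs.getBigDecimal("total"));
                }
            }
        }
    }
}
```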

I really wonder whether this actually happens in the real world with production software, or whether it is just advice that makes sense on the surface but never pans out in practice. I personally haven’t seen any large piece of software switch to a different DBMS, even when there would be long-term advantages in doing so, because the risk and the work of retesting everything would be far too great.

The only examples I know of (like SAP) were really part of a much larger rewrite or update rather than “just” switching the DBMS.

  • Lmaydev@programming.dev · 9 months ago

    In 15 years I have never actually seen this happen.

    As you’ve said, writing generically can have big performance implications.

    Almost all projects I’ve seen end up locked in one way or another.

    A better approach, if you want to keep that option open, is to abstract the actual database access away from your main code.

    This way you can do whatever provider-specific stuff you need and still keep the option to rip it out with minimal refactoring.

    Your main code shouldn’t really care what provider you are using. Only code that interacts directly with the database should.
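
    Something like this is what I mean, as a hypothetical Java sketch where only the repository implementation knows it is talking to SQL Server (the interface, table and column names are made up):

    ```java
    import java.math.BigDecimal;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;
    import javax.sql.DataSource;

    // Hypothetical domain type and port: the rest of the application depends only on these.
    record Order(long id, BigDecimal total) {}

    interface OrderRepository {
        List<Order> recentOrdersFor(long customerId, int limit);
    }

    // The only class that knows it talks to SQL Server; it is free to use T-SQL
    // features (TOP, hints, stored procedures) without leaking them upward.
    final class SqlServerOrderRepository implements OrderRepository {
        private final DataSource dataSource;

        SqlServerOrderRepository(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        @Override
        public List<Order> recentOrdersFor(long customerId, int limit) {
            // TOP (?) is T-SQL specific; swapping vendors means rewriting only this class.
            String sql = "SELECT TOP (?) id, total FROM dbo.orders "
                       + "WHERE customer_id = ? ORDER BY created_at DESC";
            try (var conn = dataSource.getConnection();
                 var ps = conn.prepareStatement(sql)) {
                ps.setInt(1, limit);
                ps.setLong(2, customerId);
                try (var rs = ps.executeQuery()) {
                    List<Order> orders = new ArrayList<>();
                    while (rs.next()) {
                        orders.add(new Order(rs.getLong("id"), rs.getBigDecimal("total")));
                    }
                    return orders;
                }
            } catch (SQLException e) {
                throw new IllegalStateException("order lookup failed", e);
            }
        }
    }
    ```

    Switching to another DBMS then means writing and retesting a new implementation of OrderRepository, not combing through the whole code base.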