"UPDATE table_name SET w = $1, x = $2, z = $4 WHERE y = $3 RETURNING *",
does not do the same as
"UPDATE table_name SET w = $1, x = $2, y = $3, z = $4 RETURNING *",
It’s 2 am, my mind blanked out the WHERE and just wanted the numbers neatly in order: 1, 2, 3, 4.
idiot.
FML.
This is a hard lesson to learn. From now on, my guess is you will have dozens of backups.
And a development environment. And not touch production without running the exact code at least once and being well slept.
Fuck that, get shit housed and still do it right. That’s a pro.
That’s not pro, that’s just reckless gambling.
Totally right! You must set yourself up so a fool can run it in prod and produce the expected result. Which is the purpose of a test env.
Replied hastily, but the way to run db statements in prod while sleep deprived and having drunk too much is to run them a bunch of times in several test env scenarios, so you’re just copy-pasting to prod and it CAN confidently be done. Also use transactions and determine several valid smoke tests.
Edit: a -> several
And always use a transaction so you’re required to commit to make it permanent. See an unexpected result? Rollback.
Transactions aren’t backups. You can just as easily commit before fully realizing it. Backups, backups, backups.
Yes, but
- Begin transaction
- Update table set x='oopsie'
- Sees 42096 rows affected
- Rollback
Can prevent a restore, whereas doing the update with auto commit guarantees a restore on (mostly) every error you make
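In code (node-postgres, which is what OP’s $1-style placeholders suggest) that same safety net might look roughly like this - table, columns and the expected row count are all placeholders, not a real schema:

```js
const { Client } = require('pg');

// Rough sketch of the "see the rows affected, then decide" flow.
// Table, columns and parameter order ($1=w, $2=x, $3=y, $4=z) mirror OP's query.
async function cautiousUpdate(values) {
  const client = new Client(); // connection details come from PG* environment variables
  await client.connect();
  try {
    await client.query('BEGIN');
    const res = await client.query(
      'UPDATE table_name SET w = $1, x = $2, z = $4 WHERE y = $3 RETURNING *',
      values
    );
    if (res.rowCount !== 1) {
      // "42096 rows affected"? Back out instead of committing the oopsie.
      await client.query('ROLLBACK');
      return { committed: false, rowCount: res.rowCount };
    }
    await client.query('COMMIT');
    return { committed: true, rows: res.rows };
  } catch (err) {
    await client.query('ROLLBACK'); // any error: leave the data untouched
    throw err;
  } finally {
    await client.end();
  }
}
```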
Can prevent a restore, whereas doing the update with auto commit guarantees a restore on (mostly) every error you make
Exactly. Restores often result in system downtime and may take hours and involve lots of people. The backup might not have the latest data either, and restoring to a single table you screwed up may not be feasible or come with risk of inconsistent data being loaded. Even if you just created the backup before your statement, what about the transaction coming in while you’re working and after you realize your error? Can you restore without impacting those?
You want to avoid all of that if possible. If you’re mucking with data that you’ll have to restore if you mess up, production or not, you should be working with an open transaction. As you said… if you see an unexpected number of rows updated, easy to rollback. And you can run queries after you’ve modified the data to confirm your table contains data as you expect now. Something surprising… rollback and re-think what you’re doing. Better to never touch a backup and not shoot yourself in the foot and your data in the face all due to a stupid, easily preventable mistake.
Backups are for emergencies.
Transactions are for oopsies.
I’ve read something like “there are two kinds of people: those who backup and those who are about to”
This is the way
This doesn’t help you but may help others. I always run my updates and deletes as selects first, validate the results are what I want including their number and then change the select to delete, update, whatever
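Roughly, in node-postgres terms (table and column names made up), that habit looks like:

```js
// Sketch of "run it as a SELECT first, then reuse the exact same WHERE clause".
// Expects an already-connected pg Client; table/columns are hypothetical.
async function previewThenUpdate(client, newStatus, customerId) {
  const where = 'WHERE customer_id = $1';

  // 1. Dry run: same WHERE clause, touches nothing.
  const preview = await client.query(`SELECT * FROM orders ${where}`, [customerId]);
  console.log(`Would touch ${preview.rowCount} rows`, preview.rows);

  // 2. Only after the preview looks right, run the real thing with the identical WHERE.
  if (preview.rowCount > 0 && preview.rowCount < 100) { // sanity bound, adjust to taste
    return client.query(`UPDATE orders SET status = $2 ${where}`, [customerId, newStatus]);
  }
  throw new Error('Preview looked wrong, refusing to update');
}
```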
I learned this one very early on in my career as a physical security engineer working with access control databases. You only do it to one customer ever. 🤷‍♂️
That’s an easy one to recover from:
Simply fake your own death and become a goat herder in Guatemala.
I still remember that time (hours ago) when “fake your own death” was the top voted recommendation for recovering from a SQL mistake.
Sign me up!
Pro tip: transactions are your friend
Completely agree, transactions are amazing for this kind of thing. In a previous team we also had a policy of always pairing if you need to do any db surgery in prod so you have a second pair of eyes + rubber duck to explain what you’re doing.
They are - until you leave them open and go home…
Temporarily locked overnight >>> broken stuff in prod
This is the way.
Postgres has a useful extension, pg_safeupdate
https://github.com/eradman/pg-safeupdate
It helps reduce these possibilities by requiring a where clause for updates or deletes.
I guess if you get into a habit of adding where 1=1 to the end of your SQL, it kind of defeats the purpose.
MySQL (and by extension, MariaDB) has an even better option:
mysql --i-am-a-dummy
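For application code the same guard can, as far as I know, be switched on per session with the sql_safe_updates variable; a rough sketch with the mysql2 driver (connection details made up):

```js
const mysql = require('mysql2/promise');

// Sketch: application-side equivalent of --safe-updates / --i-am-a-dummy.
async function main() {
  const conn = await mysql.createConnection({ host: 'localhost', user: 'app', database: 'appdb' });

  // Ask the server to reject UPDATE/DELETE without a key-based WHERE (or a LIMIT).
  await conn.query('SET SESSION sql_safe_updates = 1');

  // This should now fail with a "safe update mode" error instead of rewriting every row.
  await conn.query('UPDATE table_name SET x = ?', ['oopsie']);
}

main().catch(console.error);
```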
Amazing! These are going in my.cnf ASAP.
Transactions help more, IMO. The 1=1 becomes a real habit.
All databases (doesn’t seem like MsSQL supports it, which I thought was a pretty basic feature) have special configuration that warns or throws an error when you try to UPDATE or DELETE without a WHERE. Use it.
I tried to find this setting for postgres and Ms SQLserver, the two databases I interact with. I wasn’t able to find any settings to that effect, do you happen to know them?
for postgres and Ms SQLserver
It’s not really a SQL Language feature, more an IDE feature. So to tell you where the settings are, we’d have to know which IDE you’re using.
For example, in DataGrip (which I think you can use both for postgres and MSSQL), there’s “Show warning before running potentially unsafe queries”
If you forgot to put the WHERE clause in DELETE and UPDATE statements, DataGrip displays a notification to remind you about that. If you omitted the WHERE clause intentionally, you can execute current statements as you planned.
That would be SQL management studio and psql on the command line.
The best I could find was some plugins for SQL Management Studio (SSMSBoost) and disabling automatic commits for psql.
I didn’t mean this as IDE thing, there is an extension to postgres and server configuration for mysql/mariadb. Posted the links above
--i-am-a-dummy 😂
I didn’t mean this as IDE thing
Well, the link you’ve posted is specifically for the MySQL CLI Client - maybe I should have said “Client” instead of “IDE” - but if he uses a different IDE/Client besides MySQL-CLI it’s probably a different setting
It’s supported in MySQL and MariaDB out of box:
https://dev.mysql.com/doc/refman/8.0/en/mysql-command-options.html#option_mysql_safe-updates
In Postgres there is an extension for it:
https://supabase.com/docs/guides/database/extensions/pg-safeupdate
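If the server has the extension installed, it can apparently be loaded per connection; a rough node-postgres sketch (details assumed from the README, untested):

```js
const { Pool } = require('pg');

// Sketch: enable pg_safeupdate for every connection handed out by the pool.
// Assumes the safeupdate shared library is installed on the server (see links above).
const pool = new Pool();
pool.on('connect', (client) => {
  // With the plugin loaded, UPDATE/DELETE without a WHERE clause get rejected for this session.
  client.query("LOAD 'safeupdate'").catch((err) => {
    console.error('could not enable safeupdate:', err.message);
  });
});

// Later, somewhere in the app: this should now error out instead of updating the whole table.
// await pool.query('UPDATE table_name SET x = $1', ['oopsie']);
```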
You’re not the first. You won’t be the last. I’m just glad my DB of choice uses transactions by default, so I can see “rows updated: 3,258,123” and back the fuck out of it.
I genuinely believe that UPDATE and DELETE without a WHERE clause should be considered a syntax error. If you want to do all rows for some reason, it should have been something like UPDATE table SET field=value ALL.
Because I’m relatively new at this type of thing, how does that appear on the front end? I’m using a js/html front end and a jsnode backend. Would I just see a popup before I make any changes?
No idea. My tools connect directly to the DB server, rather than going though any web server shenanigans.
If you’re asking about the information about the number of rows, oracle db clients do that. For nodejs, oracle’s library will provide this number in the response to a dml statement execution. So you can retrieve it in your backend code. You have to write additional code to bring this message to the front-end.
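For a node-postgres backend (which OP’s $1 placeholders suggest) the equivalent field is result.rowCount; a rough sketch with a hypothetical Express route passing it to the frontend:

```js
const express = require('express');
const { Pool } = require('pg');

const app = express();
const pool = new Pool(); // connection details from PG* environment variables
app.use(express.json());

// Hypothetical route: run the update and report how many rows it touched,
// so a "3,258,123 rows updated" surprise at least shows up somewhere visible.
app.put('/api/things/:id', async (req, res) => {
  try {
    const { w, x, z } = req.body;
    const result = await pool.query(
      'UPDATE table_name SET w = $1, x = $2, z = $4 WHERE y = $3 RETURNING *',
      [w, x, req.params.id, z]
    );
    res.json({ rowsAffected: result.rowCount, rows: result.rows });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

app.listen(3000);
```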
Awesome, thanks for the info. Definitely super useful for debug mode whilst I’m fixing and tampering!
this folks, is why you don’t raw dog sql like some caveman
Me only know caveman. Not have big brain only smooth brain
Yep. If you’re in a situation where you have to write SQL on the fly in prod, you have already failed.
Me doing it for multiple years in a Bank…Uhm…
(let’s just say I am not putting my money near them… and not just because of that but other things…)
Tell that to my former employer…
Yeah, I swear it’s part of the culture at some places. At my first full-time job, my boss dropped the production database the week before I started. They lost at least a day of records because of it and he spent most of the first day telling me why writing sql in prod was bad.
it’s time to commit sqlpukku
But the adrenaline man… some of us are junkies for adrenaline but we are too afraid of anything more physically dangerous…
You may be interested in suicide linux then. it’s a distro that wipes your entire hard drive if you mistype a command
Raw dog is the fastest way to finish a task.
- productivity
- risk
It’s a trade-off
There’s no way you’re endorsing the way OP handled their data right?
No, but people are sometimes forced to do these things because of pressure from management and/or lack of infrastructure to do it in any other way.
Definitely don’t endorse it but I have done it. Think of an “Everything is down” situation that can be fixed in 1 minute with SQL.
Got it. I’m with you.
Always SELECT first. No exceptions.
Better yet… Always use a transaction when trying new SQL/doing manual steps and have backups.
mind explaining?
By running a select query first, you get a nice list of the rows you are going to change. If the list is the entire set, you’ll likely notice.
If it looks good, you run the update query using the same where clause.
But that’s for manual changes. OP’s update statement looks like it might be generated from code, in which case this wouldn’t have helped.
I did when I made the query a year ago. Dumdum sleep deprived brain thought it would look more organised this way
I once dropped a table in a production database.
I never should have had write permissions on that database. You can bet they changed that when clinicians had to redo four days of work because the hosting company or whatever only had weekly backups, not daily.
So, I feel your pain.
There is still the journal you could use to recover the old state of your database. I assume you committed after your update query, so you would first need to copy the journal, remove the updates from it, and reconstruct the db from the altered journal.
This might be harder than what I’m saying and heavily depends on which db you used, but if it was a transactional one it has to have a journal (not sure about nosql ones).
It was only after the event that I found out that postgres’ WAL journalling is off by default 🙃
You all run queries against production from your local? Insanity.
Everyone has a production system. Some may even have a separate testing environment!
The distinctions get blurry if you’re the sole user.
My only education is a super helpful guy from Reddit who taught me the basics of setting up a back end with nodejs and postgres. After that it’s just been me, the references and stack overflow.
I have NO education about actual practises and protocol. This was just a tool I made to make my work easier and faster, which I check in and update every few months to make it better.
I just open vscode, run node server.js to get started, and within server.js is a direct link to my database using the SQL above. It works, has worked for a year or two, and I don’t know any other way I should be working. Happy to learn though!
(but of course this has set me back so much it would have been quicker not to make the tool at all)
With that amount of instruction you’ve done well
There’s probably lots of stuff you don’t even know you don’t know.
Automated testing is a big part of professional software development, for example, and helps you catch things like this issue before they go live.
I’m up to 537 lines of server code, 2278 lines in my script, and 226 in my API interfacing, I’m actually super proud of it haha.
But you’re totally right, there are things I read that I just have no clue what they even mean or if I should know it, and probably use all the wrong terminology. I feel like I should probably go back to the start and find a course to teach me properly. I’ve probably learned so many bad habits. It doesn’t help that I learned JS before ES6 so I need to force myself not to use var and force myself to understand and use arrow functions.
I absolutely know that the way I’ve written the program will make some people cringe, but I don’t know any better. There are a few sections where I’m like “would that actually be what a real, commercial web app would do, or have I convoluted everything?”
For example, the entire thing is just one 129-line html file. I just hide and unhide divs when I need a new page or anything gets changed. I’m assuming that’s a bad thing, but it works, it looks good, and I don’t know any better!
Have a look at an ORM, if you are indeed executing plain SQL like I’m assuming from your comment. Sequelize might be nice to start with. What it does is create a layer between your application and your database, in which you define what a database object looks like (similar to a class) and call functions on it. For instance, if you’re creating a library, you could do book.update(), library.addBook(), etc. Since it adds a layer in between, it also helps you prevent common vulnerabilities such as SQL injection, because you aren’t writing the SQL queries in the first place. If you want to know more, let me know.
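A minimal sketch of that library example, assuming Sequelize v6 (connection string and fields made up):

```js
const { Sequelize, DataTypes } = require('sequelize');

// Hypothetical connection and model, just to show the shape of the API.
const sequelize = new Sequelize('postgres://user:pass@localhost:5432/library');

const Book = sequelize.define('Book', {
  title: { type: DataTypes.STRING, allowNull: false },
  checkedOut: { type: DataTypes.BOOLEAN, defaultValue: false },
});

async function returnBook(bookId) {
  // Sequelize builds the UPDATE ... WHERE itself and escapes the values;
  // it also refuses a bulk update without a `where` option, which fits this thread nicely.
  const [affected] = await Book.update({ checkedOut: false }, { where: { id: bookId } });
  return affected; // number of rows changed
}
```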
Thanks, I’ll look into it! I’m interested in why you got downvoted though! 😅
I didn’t downvote, but some people just ideologically dislike ORMs. The reasons I’ve heard are usually “I can write better SQL by hand”, “I don’t want to use/learn another library”, “it has some limitations”
Those things can be true. Writing better SQL by hand definitely is a big “it depends”, though.
I can see why people might dislike them. Adds some bloat perhaps. But at the same time, I like the idea that my input is definitely sanitised since the ORM was written by people who know what they’re doing. That’s not to say it won’t have any vulnerabilities at all, but the chance of them existing is a lot lower than when I write the queries by hand. A lapse of judgement is all it takes. Even more relevant for beginning developers who might not be aware of such vulnerabilities existing.
For a personal tool that runs locally I can handle some bloat in the name of safety!
Short story, haters gonna hate ¯\_(ツ)_/¯ Long story, see my comment to the commenter below you. :)
Periodic, versioned backups are the ultimate defense against bugs.
Periodic, versioned and tested backups.
It absolutely, totally, never ever happened to me that I had a bunch of backups available that turned out to be effectively unrestorable the moment I needed them. 😭
The only feeling worse than realizing you don’t have a backup is realizing your backup archives are useless.
Or like that time gitlab found out that none of its 5 backup/replications worked and lost 6 hours of data.
WHO, WHAT, WHERE, WHEN, WHY, HOW
Who thought it was a good idea to make the WHERE condition in SQL syntax only correct after the SET?? Disaster waiting to happen.
The people designing SQL, not having learned from the mistakes of COBOL, thought that having the syntax as close to English as possible will make it more readable.