

DLC Quest comes to mind. It’s not even a parody anymore 😭
“New” and “All” disagree. Respectfully.
I’m your neighbor.
The lines between fact (…) and opinion can be blurry at times
Are they though?
Funny tangent: I remember Windows HRESULTs containing E_SUCCESS (“error success”) and something along the lines of S_FAILURE (“success failure”). I’m a little fuzzy on that second one, so someone else can correct me if I have the wrong name for it.
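For what it’s worth, I believe the real codes being half-remembered here are ERROR_SUCCESS (a Win32 error code whose value literally means “no error”) and S_FALSE (an HRESULT that counts as a *success* while reporting “false”). A minimal sketch of the quirk, with the constant values copied from the Windows SDK so it runs anywhere:

```python
# Values as defined in the Windows SDK headers.
ERROR_SUCCESS = 0x00000000  # a Win32 "error" code that means: no error at all
S_OK          = 0x00000000  # the ordinary HRESULT success
S_FALSE       = 0x00000001  # an HRESULT that SUCCEEDS while meaning "false"
E_FAIL        = 0x80004005  # a genuine failure HRESULT

def succeeded(hr: int) -> bool:
    """Mirror of the SUCCEEDED() macro: the sign bit (bit 31) of the
    32-bit HRESULT decides failure, not comparison with S_OK."""
    return (hr & 0x80000000) == 0

# S_FALSE passes the success check even though it sounds like a failure:
print(succeeded(S_OK), succeeded(S_FALSE), succeeded(E_FAIL))
```

The punchline is that code checking `hr == S_OK` and code checking `SUCCEEDED(hr)` disagree about S_FALSE, which has bitten a lot of COM programmers.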
This is sounding oddly specific. OP, please don’t kill your family member.
Edit: Actually, I’d like to add something. If you were to suffocate someone with a pillow or poison them, both would point to foul play, which would instantly pull in a list of possible suspects. In that list, the ones with motive would be investigated a bit more: i.e., anyone who had a spat with the victim before the death. And from then on, it’s a matter of matching evidence; once an investigator knows what they’re looking for, it won’t take long to put two and two together.
Oh. Missed that lol.
It won’t but you also won’t be disappointed by it if you never play them!
That’s the fun part about being in a place where you can hold a discussion. Some people don’t agree with you, but they can still see the benefits of the option you are talking about or even agree that they are a great solution for now.
I don’t have a great solution for this particular problem.
However, any solution you come up with has to be resilient enough that the nodes that execute such a scenario are always available.
You don’t just want a system with high availability; you want a system that will stand the test of time. For example, it might trigger 30 or 50 years from now. You might not want to use AWS or Google or Azure or any system like that; they don’t seem to keep their offerings available for that long. So you’ll need to host something yourself and make sure it’s resilient to the multitude of scenarios that might bring the “back end” down.
You’d also need to set up some sort of test to make sure the system is still running and will do what you want it to. Maybe it runs every 3 months or so, like a fire drill.
Honestly, the trigger can be something as simple as you hitting a button connected to your system every week, with a way for it to ping and prompt you if you haven’t “reset” the counter in a timely fashion.
I would probably do something like that with a weekly cadence and a whole other week to make sure I don’t miss the reset.
You probably also want to be able to set it to different modes if you think you will be away for a while. Like a vacation mode or oh shit I’m in the hospital mode.
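The weekly reset, the extra grace week, and a vacation/hospital mode could be sketched like this (all names and timings are illustrative, taken from the cadence described above, not from any real service):

```python
from datetime import datetime, timedelta

CHECK_IN_INTERVAL = timedelta(weeks=1)  # you press the button weekly
GRACE_PERIOD = timedelta(weeks=1)       # a whole extra week of reminders

class DeadMansSwitch:
    def __init__(self):
        self.last_reset = datetime.now()
        self.paused_until = None  # "vacation mode": suppress the alarm

    def reset(self):
        """Owner pressed the weekly button."""
        self.last_reset = datetime.now()
        self.paused_until = None

    def pause(self, until):
        """Vacation / hospital mode: don't alarm before `until`."""
        self.paused_until = until

    def status(self, now=None):
        now = now or datetime.now()
        if self.paused_until and now < self.paused_until:
            return "paused"
        elapsed = now - self.last_reset
        if elapsed <= CHECK_IN_INTERVAL:
            return "ok"
        if elapsed <= CHECK_IN_INTERVAL + GRACE_PERIOD:
            return "remind"  # ping the owner to reset
        return "alarm"       # notify trusted contacts / authorities
```

The `status` check is what your quarterly “fire drill” would exercise, and `pause` covers the away-for-a-while modes.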
Additionally, I wouldn’t be as fatalistic as sending goodbyes to everyone. I would use it more as a system to sound an alarm that I’m not okay, that something has happened to me, and to communicate that to people who could do something about it: verify whether I’m alive, or contact local authorities to file a missing persons report.
This same notification system could also let people close to me trigger an “oh shit, I’m dead” mode, which would then execute whatever is in that idea of yours.
These are all situations where you would want to alert your loved ones, though. And the power-outage scenario will hopefully be resolved faster than your switch triggers.
Me on basically any post that has incorrect information from blahaj.
What? Ballmer hasn’t had anything to do with msft since 2014 man.
Software engineer here, but not llm expert. I want to address one of the questions you had there.
Why doesn’t it occasionally respond with a hundred-thousand-word response? Many of the texts it’s trained on are longer than its usual responses.
An LLM like ChatGPT does some rudimentary pattern matching when it analyzes training data, and that’s why it won’t generate a giant blurb of text unless you ask it to.
Let’s say, for example, one of its training inputs is a transcription of a conversation. That will be tagged “conversation” by a person, and the model will see that tag while analyzing hundreds of input texts that are conversations. Finally, the training algorithm effectively records that “conversations” have responses of 1–2 sentences with x% likelihood, because that’s what the transcripts did. Now if another of the training sets is “best-selling novels,” it’ll store that “best-selling novels” have responses that are very long.
ChatGPT will probably insert a couple of tokens before your question to help it figure out what it’s supposed to do: “respond to the user as if you are in a casual conversation.”
This makes the model more likely to output short answers rather than a giant wall of text. However, it’s still possible for the model to respond with a giant wall of text if you ask something that contradicts the original instructions (hence why jailbreaking models is possible).
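That instruction-prepending idea can be sketched like this. The message shape loosely follows common chat-completion APIs; the instruction text is the hypothetical one from above, not ChatGPT’s actual hidden prompt:

```python
# Hypothetical sketch: a hidden "system" instruction gets prepended
# before the user's question, biasing the model toward short,
# conversational replies instead of novel-length output.
def build_prompt(user_question: str) -> list[dict]:
    system_instruction = {
        "role": "system",
        "content": "Respond to the user as if you are in a casual conversation.",
    }
    return [system_instruction, {"role": "user", "content": user_question}]

messages = build_prompt("Why is the sky blue?")
# The model conditions on the instruction first, so "conversation"-shaped
# (short) continuations become the most likely ones.
```

A jailbreak, in these terms, is just a user message crafted so that continuing it plausibly outweighs the prepended instruction.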