No clue but maybe you confused it with this one?
https://www.privacytools.io
Using IP laws to legislate this could also lead to disastrous consequences, like the monopolization of effective AI. If only those with billions in capital can make use of these tools, while free or open source models become illegal to distribute, it could mean a permanent power grab - the capitalists would end up controlling the “means of generation” while we common folk can’t use it.
I imagine that theoretically you could use algorithms or machine learning to calibrate this - like playing test sounds to measure how the sound diffuses, and then filtering it out.
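Hypothetically, something like this toy numpy sketch (every name and number here is made up for illustration): play a known test signal, record what the room sends back, and let a classic LMS adaptive filter learn the room’s response so the echo can later be predicted and subtracted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical room impulse response the calibration should discover.
true_room = np.array([0.8, 0.3, -0.2, 0.1])

# Play a known white-noise test signal; "recorded" is what the mic hears
# after the room has diffused the sound.
test_signal = rng.standard_normal(5000)
recorded = np.convolve(test_signal, true_room)[: len(test_signal)]

def lms_calibrate(x, d, taps=4, mu=0.01):
    """Estimate the room's impulse response with the LMS algorithm."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        window = x[n - taps + 1 : n + 1][::-1]  # newest sample first
        error = d[n] - w @ window               # how wrong the current guess is
        w += 2 * mu * error * window            # nudge the guess toward the truth
    return w

estimate = lms_calibrate(test_signal, recorded)
# "estimate" converges toward true_room, so the room's echo could be
# predicted with np.convolve(signal, estimate) and subtracted out.
```

Real room-correction systems (think speaker calibration mics) do a fancier version of this, but the idea is the same: known test sound in, measured response out, filter in between.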
Yeah, I think paying creators generously and allowing them to make a good living is how TikTok got off the ground so fast.
I really love the Vine 6-sec sketch format, but I only ever watched compilations on YouTube. It’s like a box of chocolates, you never know what you’ll get, but eat enough of them… :D PS: Man, this makes me nostalgic for those ancient times when everything wasn’t going to shit yet.
It would be nice to have a Patreon-like monthly support model and then open accounting - so we know how the money is split between development, instance server hosting costs, and maybe admin wages. Or maybe we could vote on it. I think the fediverse is only the first step; we’re going to need some kind of global non-profit, funded by users, to create federated software and content for users.
The error message says “.exe” and looks like a .NET namespace.
Hmm 😇 The afterlife might be a good way to make it up. Have you seen “The Good Place”?
And how do you determine who falls in this category? Again, by a set of parameters which we’ve chosen.
Sure, that is my argument, that we choose to make social progress based on our nature and scientific understanding. I never claimed some 100% objective morality, I’m arguing that even though that does not exist, we can make progress. Basically I’m arguing against postmodernism / materialism.
For example: If we can scientifically / objectively show that some people are born in the wrong body and it’s not some mental illness, and this causes suffering that we can alleviate, then moral arguments against this become invalid. Or like the gif says “can it”.
I’m not arguing that some objective ground truth exists, but that the majority of healthy human beings - if they are not tainted - share certain values that, if reinforced, gravitate towards some sort of social progress.
You needn’t argue for the elimination of meaning, because meaning isn’t a substance present in reality - it’s a value we ascribe to things and thoughts.
Does mathematics exist? Is money real? Is love real?
If nobody is left to think about them, they do not exist. If nobody is left to think about an argument, it becomes meaningless or “nonsense”.
I’m not arguing for “one single 100% objective morality”. I’m arguing for social progress - maybe towards one of an infinite number of meaningful, functioning moralities that are objectively better than what we have now. Like optimizing or approximating a function that we know has no precise solution.
And “objective” can’t mean some kind of ground truth handed down by e.g. a divine creator. But you can have objective statistical measurements, for example about happiness or suffering, or an objective determination of whether something is likely to lead to extinction or not.
I agree somewhat with that, but only if the starting conditions were completely random. If you instead set the conditions to be similar to what we know about humanity, you’d have to anticipate cooperation as well as competition and parasitic behavior leading to wars and atrocities. And that also assumes they actually get a chance to grow up, for the suffering to have any meaning. If you just turn off your science experiment at some point, you have invalidated the argument.
Either way, when you’re playing god you’d have to morally justify yourself. Imagine you create a universe that eventually becomes an eternal hell where trillions of sentient beings are tortured, something like “I Have No Mouth, and I Must Scream”.
You’d look at things like the Holocaust or a million other atrocities and say “this is fine”. Also, you can’t assume they’d die out naturally in 5 billion years; they might colonize other planets and go on and on and on until you pull the switch. They might have created beautiful art and things and preserved much of their history for future generations - and then poof, all gone. What if they found out? Would you say “I created them, therefore I own them and can do with my toys as I please”? Really?
My main argument would be that it would be incredibly unethical. Any civilization intelligent and powerful enough to create a simulation like this would more likely than not be ethical - and one this unethical would be unlikely to exist for long. Those are two potential reasons why the “infinite regress” in simulation theory is unlikely.
Star Maker is an interesting exploration of simulation theory.
You misrepresent or misunderstand my argument.
Comrade pinko barbie!
There’s no such thing as 100% objective morality.
Maybe not - maybe there is an infinite variation of objective moralities. There will always be broken people with pathologies like sociopathy or narcissism who wouldn’t agree. But the vast majority, like 95% of people, would agree for example on the universal human rights - at least if they had the rights and freedoms to express themselves, and the education to understand and not be brainwashed. Basically, given the option of a variety of moralities and the right circumstances (safety/not in danger, a modicum of prosperity, education), you would get an overwhelming consensus on a large basis of human rights or “truths”. The argument would be: just because a complex machine is forever running badly doesn’t mean there can’t be an inherent, objective ideal of how it should run - even if perfection isn’t desirable, and the machine and the ideal have to be constantly improved.
There is another way to argue for a moral starting point: A civilization that is on the way to annihilate itself is “doing something wrong” - because any ideology or morality that argues for annihilation (even if that is not the intention, but the likely outcome) is at the very least nonsensical since it destroys meaning itself. You cannot argue for the elimination of meaning without using meaning itself, and after the fact it would have shown that your arguments were meaningless. So any ideology or philosophy that “accidentally” leads to extermination is nonsensical at least to a degree. There would still be an infinity of possible configurations for a civilization that “works” in that sense, but at least you can exclude another infinity of nonsense.
“Who watches the watchers” is of course the big practical problem, because every system so far has been corrupted over time - objectively perverted from its original setup and intended outcome. But that does not mean it cannot be solved, or at least improved. A basic problem is that those who desire power/money above all else, and focus solely on maximizing those two, are statistically the most likely to achieve them. That is adapted or natural sociopathy. We don’t really have many words or thoughts about this, and we completely ignore it in our systems. But you could design government systems that rely on pure random sampling of the population (a “randocracy”). This could eliminate much of the political selection filtering, bias, and manipulation. Yet there seems to be very little discussion on how to improve our democracies.
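The statistical core of the “randocracy” idea is just the law of large numbers: a uniform random sample mirrors the population it was drawn from, with no campaign or career filter in between. A toy sketch (the population split and group names are invented for illustration):

```python
import random
from collections import Counter

random.seed(42)

# Hypothetical population with a 60/40 split on some attribute
# (opinion, demographic, whatever you want the assembly to reflect).
population = ["group_a"] * 600_000 + ["group_b"] * 400_000

# A "randocracy" assembly: a uniform random sample, like jury duty.
assembly = random.sample(population, k=1000)

# Sample shares land close to the population's 0.6 / 0.4 split.
shares = {g: n / len(assembly) for g, n in Counter(assembly).items()}
```

With 1000 seats, the sampling error on each share is roughly ±1.5 percentage points - far less distortion than any electoral selection filter.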
Another rather hypothetical argument could come from scientific observation of other intelligent (alien) civilizations. Just like certain physical phenomena like stars, planets, organic life are naturally emergent from physical laws, philosophical and moral laws could naturally emerge from intelligent life (e.g. curiosity, education, rules to allow stability and advancement). Unfortunately it would take a million years for any scientific studies on that to conclude.
Nick Bostrom talks a bit about the idea of a singleton here, but of course there be dragons too.
It is quite possible that it’s too late now, or practically impossible to advance our social progress because of the current overwhelming forces at work in our civilization.
The eye tracking is very interesting. Would this support OpenVR?
Maybe that is what we need to do: “decide” certain moral questions based on the best scientific data, our values, and sound arguments - and then stop debating them, unless new scientific evidence challenges those moral edicts.
Somehow we keep going round in circles as a civilization.
You forgot the journalists who frame narratives and the intellectuals who secrete the ideology that makes it all possible.
I recently read a comment saying the great Chinese firewall somehow “learns” that you are using a VPN. So people do quick tests (“yep, the VPN works”), but a little later it doesn’t work anymore. No clue if that is true though.