I believe the answer is, unfortunately, no.
Long answer: In the past, an ML researcher trying to do this would have used either manual labels (for example, a dictionary mapping each word to its part of speech) or multiple sub-models trained to solve each sub-problem before combining them into a full prediction model, and even then performance was not great.
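To make that concrete, here is a minimal sketch of what that older pipeline style looked like: hand-built linguistic features (a toy part-of-speech dictionary here) feeding a simple classifier. The dictionary, the task, and the data are all made up for illustration; real systems used far larger lexicons and several stacked sub-models.

```python
# Sketch of the "old" approach: hand-labeled linguistic knowledge turned
# into features for a simple classifier. Everything here is a toy example.
from sklearn.linear_model import LogisticRegression

POS = {  # toy hand-built dictionary: word -> part of speech
    "the": "DET", "a": "DET", "dog": "NOUN", "cat": "NOUN",
    "runs": "VERB", "sleeps": "VERB", "quickly": "ADV",
}

def features(sentence: str) -> list[float]:
    """Count each POS tag in the sentence, plus unknown words."""
    tags = [POS.get(w.lower(), "UNK") for w in sentence.split()]
    return [float(tags.count(t)) for t in ["DET", "NOUN", "VERB", "ADV", "UNK"]]

# Tiny hand-labeled training set: 1 = looks grammatical, 0 = does not.
sentences = ["the dog runs", "a cat sleeps quickly", "dog the the", "runs runs cat"]
labels = [1, 1, 0, 0]

clf = LogisticRegression().fit([features(s) for s in sentences], labels)
print(clf.predict([features("the cat runs quickly")]))  # hopefully [1]
```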
However, once models grew to billions of parameters, it turned out that none of this external linguistic knowledge was necessary: the model can learn it all on its own. But it takes billions to trillions of training examples to fit all those weights, which means a double hit to training time: each step is slower because there are more parameters, and more steps are needed to get through the full dataset.
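For a rough sense of scale, a commonly used heuristic puts total training compute at about 6 × N × D floating-point operations, where N is the parameter count and D is the number of training tokens. The model size, token count, and GPU throughput below are assumptions I picked for illustration, so treat the result as an order-of-magnitude guess:

```python
# Back-of-the-envelope estimate of the "double hit": compute grows with both
# parameter count and dataset size. All numbers below are assumptions.
N = 1e9          # a "small" 1B-parameter model
D = 20e9         # 20B training tokens (far less than frontier models use)
flops = 6 * N * D

gpu_flops_per_s = 50e12   # assumed sustained throughput of one consumer GPU
seconds = flops / gpu_flops_per_s
print(f"~{flops:.1e} FLOPs, ~{seconds / 86400:.0f} days on one GPU")
# ~1.2e+20 FLOPs, ~28 days on one GPU -- ignoring data loading, memory
# limits, and whether 1B parameters even fit in VRAM during training.
```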
None of these models are trainable without a cluster of GPUs, which massively parallelizes the training process.
That doesn’t mean you can’t try, but my results training a small toy model from scratch for 20-30 hours on a consumer GPU have been underwhelming. You get some nearly-grammatical sentences but also a lot of garbage, repetition, and incoherence.
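If you do want to try anyway, the kind of toy experiment I mean looks roughly like this: a tiny character-level language model trained from scratch in PyTorch. The corpus file name, model size, and hyperparameters are placeholders, and the output will be of the "nearly grammatical plus a lot of garbage" variety:

```python
# Minimal sketch of a from-scratch toy language model. Corpus path and all
# hyperparameters are placeholders, not a recipe for good results.
import torch
import torch.nn as nn

text = open("corpus.txt").read()             # placeholder: any plain-text corpus
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text])

class CharLM(nn.Module):
    def __init__(self, vocab, dim=256):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.rnn = nn.LSTM(dim, dim, num_layers=2, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, x):                    # x: (batch, seq) of char ids
        h, _ = self.rnn(self.emb(x))
        return self.out(h)                   # logits: (batch, seq, vocab)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = CharLM(len(chars)).to(device)
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
seq_len, batch = 128, 64

for step in range(10_000):                   # hours of this, even on a GPU
    ix = torch.randint(len(data) - seq_len - 1, (batch,)).tolist()
    x = torch.stack([data[i:i + seq_len] for i in ix]).to(device)
    y = torch.stack([data[i + 1:i + seq_len + 1] for i in ix]).to(device)
    loss = nn.functional.cross_entropy(
        model(x).reshape(-1, len(chars)), y.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
```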
I doubt anyone you are talking to is opposed to all human rights; that sounds very much like a straw man. Reasonable people can disagree about whether any particular right should be protected by law.
The reason is simple: any legally protected right you have stands in direct opposition to some other right that I could have. Your right not to be defamed, for example, limits my right to free speech.
No right is, or is meant to be, absolute, and not all good government policy is based on rights. Turning a policy argument into one about human rights is not generally going to win the other person over; it's akin to calling someone a racist because of their position on affirmative action. There's no rational discussion to be had after that point.