Hi! I am Creesch, also creesch on other platforms :)
For real, it almost felt like an LLM written article the way it basically said nothing. Also, the way it puts everything in bullet points is just jarring to read.
True, though that isn’t all that different from people posting knee-jerk responses on the internet…
I am not claiming they are perfect, but for the steps I described a human aware of the limitations is perfectly able to validate the outcome, while still having saved a bunch of time and effort on the initial search pass.
All I am saying is that it is fine to be critical of LLM and AI claims in general, as there is a lot of hype going on. But some people seem to lean towards the “they just suck, period” extreme end of the spectrum, which is no longer being critical but just being a reverse fanboy/girl/person.
I don’t know how to say this in a less direct way. If this is your take, then you probably should look to get slightly more informed about what LLMs can do. Specifically, what they can do if you combine them with some code to fill the gaps.
Things LLMs can do quite well:

- Take a question and turn it into a focused search query.
- Summarize a chunk of text and pull out the bits relevant to a question.
- Rephrase and restructure text they are given.
These are all the building blocks for searching on the internet. If you are talking about local documents and such, retrieval-augmented generation (RAG) can be pretty damn useful.
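To make that a bit more concrete, here is a minimal sketch of what a RAG-style lookup over local documents could look like. The `embed()` and `ask_llm()` functions are placeholders for whatever embedding model and LLM you pick; nothing here is tied to a specific product, it is just the shape of the idea.

```python
# Minimal sketch of retrieval-augmented generation (RAG) over local documents.
# embed() and ask_llm() are placeholders for whatever embedding model and LLM
# you actually use; they are assumptions, not any specific product's API.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: return an embedding vector for the given text."""
    raise NotImplementedError

def ask_llm(prompt: str) -> str:
    """Placeholder: send a prompt to an LLM and return its answer."""
    raise NotImplementedError

def answer_with_rag(question: str, documents: list[str], top_k: int = 3) -> str:
    # Embed the question and every document.
    q_vec = embed(question)
    doc_vecs = [embed(doc) for doc in documents]

    # Rank documents by cosine similarity to the question.
    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    ranked = sorted(
        zip(documents, doc_vecs),
        key=lambda pair: cosine(q_vec, pair[1]),
        reverse=True,
    )

    # Stuff the most relevant documents into the prompt, so the human
    # can still check which sources the answer was based on.
    context = "\n\n".join(doc for doc, _ in ranked[:top_k])
    prompt = (
        "Answer the question using only the context below. "
        "Mention which snippet you used.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)
```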
You are glossing over a lot of infrastructure and development, but boiled down to the basics you are right. So it is basically a question of getting enough users to have that app installed, which is not impossible given that we do have initiatives like OpenStreetMap.
It depends on the platform you are using. But for platforms like GitHub and GitLab, there are extensions for popular IDEs and editors available that allow you to review all changes in the editor itself.
This at the very least allows you to simply do the diffing in your own editor without having to squash or anything like that.
At least for the instance this was posted on: the February 2024 Beehaw Financial Update
Long-term wearing of VR headsets might indeed not be all that good. Though the article is light on actual information and is mostly speculation, which for the Apple Vision Pro can only be the case, as it hasn’t been out long enough to conduct anything more than a short-term experiment. So that leaves very little in the way of long-term data points.
As for the experiment they did, there was some information provided (although not much). From what was provided, this bit stood out to me.
The team wore Vision Pros and Quests around college campuses for a couple of weeks, trying to do all the things they would have done without them (with a minder nearby in case they tripped or walked into a wall).
I wonder why the Meta Quests were not included in the title. If it is the Meta Quest 3, it is fairly capable as far as passthrough goes, but not nearly as good as I understand the Apple Vision Pro’s passthrough to be. I am not saying the Apple Vision Pro is perfect; in fact, it isn’t perfect if the reviews I have seen are any indicator. It is still very good, but there is still distortion around the edges of vision, etc.
But given the price difference between the two, I am wondering if the majority of the participants actually used Quests, as then I’d say that the next bit is basically a given:
They experienced “simulator sickness” — nausea, headaches, dizziness. That was weird, given how experienced they all were with headsets of all kinds.
VR nausea is a known thing that even experienced people will get. Truly walking around with these devices, with the distorted views you get, is bound to trigger that. Certainly with the distortion in passthrough I have seen in Quest 3 videos. I’d assume there are no Quest 2 headsets in play, as the passthrough there is just grainy black-and-white video. :D
Even Apple, with all their fancy promo videos, mostly shows people using the Vision Pro sitting down or indoors walking short distances.
So yeah, certainly with the current state of technology I am not surprised there are all sorts of weird side effects and distorted views of reality.
What I’d be more interested in, but what is not really possible to test yet, is what the effects will be when these devices become even better. To the point where there is barely a perceivable difference in having them on or off. That would be, I feel, the point where some speculated downsides from the article might actually come into play.
They’re for different needs.
Yes… but also extremely no. Superficially you are right, but a lot of the reasons why many new distros are created come down to human nature. This covers everything from infighting over inane issues to more pragmatic reasons. A lot of them, probably even a majority, don’t provide enough actual differentiators to honestly claim that they exist because of different needs. In the end it all boils down to the fact that people can just create a new distro when they feel like it.
Which is a strength in one way, but not with regard to fragmentation.
What do you mean by “it”? The ChatGPT interface? Could be, but then you are also missing the point I am making.
After all, ChatGPT is just one possible implementation of LLMs, and indeed not perfect in how some things like search were implemented. In fact, I do think they shot themselves in the foot by implementing search through Bing and implementing it poorly. It basically is nothing more than a proof-of-concept tech demo.
That doesn’t mean that LLMs are useless for tasks like searching; it just means that you need to properly implement the functionality to make it possible. It certainly is possible to implement search functionality around LLMs that is both capable and can be reviewed by a human user to make sure it is not fucking up.
Let me demonstrate. I am doing some steps that you would normally automate with conventional code:
I started by asking ChatGPT a simple question.
It then responded with:
The following step I did manually, but it is something you would normally have automated. I put the suggested query into Google, quickly grabbed the first five links, and then put the following into ChatGPT.
It then proceeded to give me the following answer:
Going over the search results myself seems to confirm this list. Most importantly, except for the initial input, all of this can be automated. And of course, a lot of it can be done better, as I didn’t want to spend too much time on it.
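If you wanted to automate those manual steps, a rough sketch could look something like the code below. The `ask_llm()` and `search()` functions are placeholders for whatever LLM API and search API you would actually use, and the prompts are only illustrative; this is the shape of the idea, not the exact setup from the demonstration above.

```python
# Rough sketch of automating the steps from the demonstration:
# 1. have the LLM turn a question into a search query,
# 2. fetch the top results from a search engine,
# 3. have the LLM answer based on those results, keeping the links
#    so a human can verify the answer against the sources.
# ask_llm() and search() are placeholders, not a real product's API.

def ask_llm(prompt: str) -> str:
    """Placeholder: call your LLM of choice and return the text response."""
    raise NotImplementedError

def search(query: str, limit: int = 5) -> list[dict]:
    """Placeholder: call a search API and return a list of
    {'title': ..., 'url': ..., 'snippet': ...} dictionaries."""
    raise NotImplementedError

def answer_via_search(question: str) -> str:
    # Step 1: let the LLM phrase a search query for the question.
    query = ask_llm(f"Give me a single good web search query for: {question}")

    # Step 2: grab the first few results, like manually copying the top 5 links.
    results = search(query, limit=5)
    sources = "\n".join(
        f"- {r['title']} ({r['url']}): {r['snippet']}" for r in results
    )

    # Step 3: have the LLM answer based on those results, citing the URLs
    # so the human reviewing the answer can check them.
    prompt = (
        "Based only on these search results, answer the question "
        "and cite the URLs you used.\n\n"
        f"Search results:\n{sources}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)
```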