  • This is one team that disagrees out of many that agree.

    To explain what you are seeing: the image above is the inverse Fourier transform (FT) of the different sinusoidal spatial frequencies that compose an image.

    Very long baseline interferometry (VLBI), as applied in the Event Horizon Telescope (EHT), uses different telescopes all over the world, in a technique called interferometry, to achieve high enough resolution to observe the different frequencies in Fourier space that make up an image. If you observed all of them, you could recreate the full image perfectly. They did not, but they observed for a long time and thus collected a hefty amount of these “spatial” frequencies. Then they applied techniques that constrain the image to physical reality (e.g. no negative intensities/fluxes) and cleaned it of artefacts, before transforming it back to image space (via the inverse FT).

    Thereby, they get an actual image that approximates reality. There is no AI used at all. The researchers from Japan argued for a different approach to the data, getting a slightly different inclination in that image. That may well be because the data are still too sparse to determine the shape with 100 % certainty, but to me it looks more like they chose very different assumptions (which many other researchers do not agree with).

    Edit: They did use ML for simulations to compare their sampling of the Fourier space to.
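    The sparse-Fourier-sampling idea above can be sketched in a few lines of numpy. This is my own toy illustration, not the actual EHT pipeline: keep only a random subset of an image's spatial frequencies (mimicking sparse interferometric coverage), enforce one simple physical-reality constraint (non-negative intensity), and invert back to image space.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # "True" image: a bright ring, loosely like a black hole shadow.
    n = 64
    y, x = np.mgrid[:n, :n] - n // 2
    ring = np.exp(-((np.hypot(x, y) - 15) ** 2) / 8.0)

    # Full Fourier transform, then keep only ~20% of the spatial
    # frequencies, standing in for sparse telescope coverage.
    ft = np.fft.fft2(ring)
    mask = rng.random((n, n)) < 0.2
    sparse_ft = np.where(mask, ft, 0)

    # Naive reconstruction: inverse FT of the sparse data...
    dirty = np.fft.ifft2(sparse_ft).real
    # ...then clip negative intensities, one simple physical constraint.
    recon = np.clip(dirty, 0, None)

    # Even this crude reconstruction correlates with the true image.
    corr = np.corrcoef(ring.ravel(), recon.ravel())[0, 1]
    ```

    The real pipelines use far more sophisticated constraints and deconvolution (e.g. CLEAN-style algorithms), but the basic loop of sample-constrain-invert is the same.
    
    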






  • To be honest, I feel like what you describe in the second part (the monkey analogy) is more of a genetic algorithm than a machine learning one, but I get your point.

    Quick side note: I wasn’t including a discussion about energy consumption at all, and on that front ML-based algorithms, whatever form they take, will mostly consume more energy (assuming the “classical” algorithms aren’t completely inefficient). I admit I am not sure how much more (especially after training), but at least LLMs, with their large vector/matrix-based approaches, eat a lot (I mean in cases like cross-checking tokens across different vectors). Non-LLM ML may be much more power efficient.

    My main point, however, was that people only remember AI from ~2022 onwards and forget the things from before (e.g. non-LLM ML algorithms) that were actively used in code completion. Obviously, there are tools like ruff and clang-tidy (as you rightfully mentioned) and more that can work without any machine learning. Although I didn’t check whether they use literally none, I assume so.

    On the point of game “AI”, as in AI opponents, I wasn’t talking about that at all (though since DeepMind, those have tended to be a bit more ML-based too, and better at games, see StarCraft II, instead of only cheating to get an advantage).


  • How so? A Large Language Model is usually a transformer-based approach nowadays, right (correct me if that’s outdated)?

    AI means artificial intelligence, a term that has been used and abused for many different things, none of which are intelligent right now (among others, it is used for machine learning).

    Machine learning is based on linear algebra, e.g. linear regression or other methods depending on what you want to do.

    An algorithm is, by definition, anything that follows a recipe, so to speak.

    All of these things, bar transformers and newer in-development approaches like spiking neural networks or liquid neural networks, are fairly basic, no?

    EDIT: typos
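    To make the “linear algebra” point above concrete, here is a tiny example of my own: plain linear regression solved purely with the least-squares machinery of linear algebra, no “intelligence” involved.

    ```python
    import numpy as np

    # Four points lying exactly on the line y = 2x + 1.
    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = 2.0 * x + 1.0

    # Design matrix with a bias column, then solve min ||A w - y||^2
    # via least squares.
    A = np.column_stack([x, np.ones_like(x)])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)

    print(w.round(6))  # slope ~2, intercept ~1
    ```
    
    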


  • I am not talking about what it does, I am talking about what it is.

    And all tools do tend to replace human labor. For example, tractors replaced many farmhands.

    The thing we face nowadays, and this is by no means limited to things like AI, is that new tools create fewer jobs than the old ones they destroy (in my earlier example, a tractor still needs mechanics and such).

    The definition of something is entirely disconnected from its usage (mainly).

    And even though everyone now calls LLMs AI, there is plenty of scientific literature on things that were called AI before. As of now, when it boils down to it, all of these are algorithms.

    The thing with machine learning is just that it is an algorithm that fine-tunes itself (which is often blackbox-ish, btw). And strictly speaking, LLMs, commonly referred to as AI, are a subclass of ML built on new technology.

    I did not make, and am not making, any statement about the value of that technology or my stance on it.
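    The “algorithm that fine-tunes itself” bit can be sketched in a few lines, again as my own toy illustration rather than anything from a real ML library: gradient descent adjusting a single parameter to fit y = 3x, with no human choosing the final value.

    ```python
    import numpy as np

    # Data generated by the "true" rule y = 3x.
    x = np.array([1.0, 2.0, 3.0])
    y = 3.0 * x

    w = 0.0    # initial guess for the parameter
    lr = 0.02  # learning rate
    for _ in range(200):
        # Gradient of the mean squared error with respect to w.
        grad = 2 * np.mean((w * x - y) * x)
        w -= lr * grad  # the algorithm adjusts its own parameter

    print(round(w, 3))  # prints 3.0
    ```

    The same loop, scaled up to billions of parameters and run over text, is essentially what training an LLM amounts to, which is why it sits inside ML as a subclass.
    
    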