  • A simple line of code that goes “if moisture < 0.25 then loaddone” or “water = weight * 0.43” isn’t AI, true.
    But when you start stacking enough of them, with the goal and result being “we could get a chef to check how the pizza is doing every few seconds and control all of the different temperatures of this oven until it’s perfectly done, but we have made a computer algorithm that manages to do that instead”, then it’s quite hard to argue it isn’t software that is “performing a task typically associated with human intelligence, such as … perception, and decision-making.”

    Especially if that algorithm was (I have no idea if it was in this case, btw) built not by just stacking those if-clauses and testing things manually until it worked, but by using machine learning to analyze a mountain of baking data and train a neural network that does it itself. Because at that point it definitely is artificial intelligence: not an artificial general intelligence, which many people think is the only kind of “true AI”, but an AI. (A rough sketch of the rule-stacking version follows below.)
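
    For illustration only, here is a minimal Python sketch of that kind of rule stacking; the sensor names, thresholds, and commands are all invented, not anything from the actual oven:

```python
# Hypothetical rule-stacked oven controller in the spirit of the comment
# above; all sensor names, thresholds, and commands are invented for
# illustration, not taken from any actual product.

def control_step(moisture: float, crust_temp: float, deck_temp: float) -> dict:
    """One control tick: read sensor values, return actuator commands."""
    commands = {"deck_heater": "hold", "top_heater": "hold", "done": False}

    # Each rule alone is a trivial if-clause; stacked together they start
    # to approximate the "check every few seconds and adjust" job a chef
    # would otherwise do.
    if moisture < 0.25:
        commands["done"] = True          # load is done, stop baking
    elif crust_temp > 220.0:
        commands["top_heater"] = "off"   # crust is browning too fast
    elif deck_temp < 180.0:
        commands["deck_heater"] = "up"   # base needs more heat

    return commands

print(control_step(moisture=0.31, crust_temp=231.0, deck_temp=205.0))
```

    A machine-learned controller would keep the same interface (sensor readings in, heater commands out) but replace the hand-tuned thresholds with a model fitted to logged baking data.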

  • X11.

    One notable difference between the X11 and W3C color names is the case of “Gray” and its variants. In HTML, “Gray” is specifically reserved for the 128 triplet (50% gray). In X11, however, “gray” was assigned to the 190 triplet (74.5%), which is close to W3C “Silver” at 192 (75.3%), and came with “Light Gray” at 211 (83%) and “Dark Gray” at 169 (66%) counterparts. As a result, the combined CSS Color Level 3 list that prevails on the web today renders “Dark Gray” as a significantly lighter tone than plain “Gray”, because “Dark Gray” descends from X11 (it existed in neither HTML nor CSS level 1) while “Gray” descends from HTML. Even in the current CSS Color Level 4 draft, dark gray remains a lighter shade than gray. Some browsers, such as Netscape Navigator, insisted on an “a” in any “Gray” except for “Light Grey”.
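
    For reference, the relevant values side by side; the hex triplets are the standard CSS Color Level 3/4 definitions, and the snippet itself is just an illustrative sketch:

```python
# Named grays as 8-bit sRGB gray levels (standard CSS definitions).
# (X11's own "gray", #BEBEBE at 190, never made it into CSS.)
grays = {
    "gray":      128,  # #808080, 50.2% - the HTML value
    "darkgray":  169,  # #A9A9A9, 66.3% - inherited from X11
    "silver":    192,  # #C0C0C0, 75.3% - the HTML value
    "lightgray": 211,  # #D3D3D3, 82.7% - inherited from X11
}

for name, v in sorted(grays.items(), key=lambda kv: kv[1]):
    print(f"{name:<10} #{v:02X}{v:02X}{v:02X}  {v / 255:.1%}")

# The output makes the quirk visible: "darkgray" is a lighter
# level (169) than plain "gray" (128).
```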

  • LLMs have a perfect track record of doing exactly what they were designed to do: take an input and produce a plausible output that looks like it was written by a human. What they completely lack is the part in the middle that properly understands the input and makes sure the output is factually correct, because an LLM that had that wouldn’t be an LLM any more; it would be an AGI.
    The “artificial” in AI also carries the sense of “fake”: something that looks and feels intelligent, but actually isn’t.
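
    As a toy analogy (a bigram chain is vastly simpler than a real LLM, but it has the same shape: fluent continuation with no notion of truth anywhere in the loop):

```python
import random
from collections import defaultdict

# Toy bigram text generator: a drastically simplified stand-in for
# "predict a plausible next token". It models which word tends to
# follow which, and nothing else - there is no fact-checking step.
corpus = "the oven bakes the pizza and the chef checks the pizza".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

random.seed(0)
word = "the"
output = [word]
for _ in range(8):
    word = random.choice(follows[word] or corpus)  # fall back on dead ends
    output.append(word)

print(" ".join(output))  # reads fluently, but nothing verified it
```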

  • They already did: AGI, artificial general intelligence.

    The thing is, AGI and AI are different things. Take your “LLMs aren’t real AI” claim: large language models are a type of machine learning model, and machine learning is a field of study within artificial intelligence.
    LLMs are AI. Search engines are AI. Recommendation algorithms are AI. Siri, Alexa, self-driving cars, Midjourney, Elevenlabs, every single video game with computer players: they are all AI, because the term “artificial intelligence” by itself is extremely loose, and it includes the kinds of narrow AI all of those are.
    Which then get hit by the AI Effect and become “just another thing computers can do now”, and therefore “not AI”.