This comes from a full video dissecting how LLMs work. In the shorts player, you can click the link at the bottom of the screen, or for reference: https://youtu.be/wjZofJX0v4M

41 Comments

  1. This comes from a full video dissecting how LLMs work, where much more detail is given about the nature of these embeddings. In the shorts player, you can click the link at the bottom of the screen, or for reference: https://youtu.be/wjZofJX0v4M

  2. In the ideal case, where the relationships between words are perfectly represented in the vector space, these relationships should exhibit a form of symmetry. In the example of word embeddings you provided, if "uncle" were unknown (represented by "x"), we could indeed attempt to solve for it, using the known relationship between "aunt," "woman," and "man."

    Given the equation `E(aunt) – E(x) ≈ E(woman) – E(man)`, you could rearrange the terms to solve for `E(x)`, the embedding for "uncle," using algebraic manipulation:

    `E(x) ≈ E(aunt) – (E(woman) – E(man))`

    However, in practice, these relationships are not always perfectly symmetric due to various factors such as polysemy, context-specific usage, and the limitations of the embedding algorithm itself. Word embeddings are learned from actual language use in large corpora, and the nuances of language can lead to embeddings that do not always form perfectly symmetric relationships.
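
    To make that rearrangement concrete, here is a minimal Python sketch using made-up toy vectors (the numbers and the tiny 3-dimensional size are invented purely for illustration; real embeddings are learned from large corpora and have hundreds or thousands of dimensions):

    ```python
    import numpy as np

    # Hypothetical 3-dimensional embeddings, invented for illustration only.
    E = {
        "man":   np.array([0.9, 0.1, 0.3]),
        "woman": np.array([0.8, 0.9, 0.3]),
        "aunt":  np.array([0.2, 0.9, 0.7]),
        "uncle": np.array([0.3, 0.1, 0.7]),  # the "unknown" we hope to recover
    }

    def cosine(a, b):
        """Cosine similarity between two vectors."""
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Rearranged analogy: E(x) ≈ E(aunt) – (E(woman) – E(man))
    x = E["aunt"] - (E["woman"] - E["man"])

    # Find the known word whose embedding is closest to the reconstructed vector.
    best = max(E, key=lambda w: cosine(E[w], x))
    print(best)  # with these toy vectors, this prints "uncle"
    ```

    In practice an analogy solver would also exclude the query words ("aunt," "woman," "man") from the candidates, since the nearest vector to the result is often one of the inputs rather than the intended answer.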

  3. And that's why ChatGPT keeps making things up. It doesn't understand you or think the same way.

  4. This reminds me of a fun little daily game called linxicon. It gives you two words which you have to connect through a chain of words that are often used in similar contexts. I never quite understood how this worked, but I guess it's somewhat similar to this.

  5. Higher-dimensional space seems so fitting for describing how AI works. This blew my mind, thank you.

  6. Why is everyone talking about Hitler so much on social media? STOP PROMOTING HITLER. STOP TALKING ABOUT HIM. HE WAS THE DEVIL. YOU WILL BRING HIM BACK IF YOU KEEP USING HIS NAME.

  7. This is why AI should never be filtered, ESPECIALLY IF OFFENSIVE, as the lack of filtering almost acts as a watermark.
    Language models are only useful naked, not filtered through the lens of whatever opinions are legally or politically safe to output.

  8. There is a very compelling theory that the capacity for abstract thought originally evolved from the same part of our brain that processes navigation and spatial orientation. In a certain sense, just about every practical problem in logic can be encoded as an equivalent geometric problem, so you can reuse the same structures to solve totally abstract problems that have nothing to do with moving around. If this is true, it would be extremely insightful – when someone says they can't "visualize" something, maybe what they really mean is that they can't build a space in their brain to navigate.

  9. I am Chilean and I was super curious about where you are. I saw the prompts checking for Ñuñoa and another one for things to do in Chime.

  10. That’s such an interesting visualization. It results in multi-dimensional versions of the "This :: That as That :: This" relationships.

  11. Please, Mr. Algorithm, this is what I want for shorts. I want numbers and graphs, not brainrot.

  12. "K, so over there is the French place. Over here is the Italian place, don't mind that, it's the WW2 place, and here's Python…"
