41 Comments
This comes from a full video dissecting how LLMs work, where much more detail is given about the nature of these embeddings. In the shorts player, you can click the link at the bottom of the screen, or for reference: https://youtu.be/wjZofJX0v4M
Oh I kinda get this actually. Sick.
this is awesome – explaining it in terms that are natural to me and intuitive.
Does that work with code?
I LOVE THAT YOU MAKE SHORTS NOW
answer to the first question is Mussolini
What is the furthest thing on the Hitler axis, though?
In the ideal case, where the relationships between words are perfectly represented in the vector space, these relationships should exhibit a form of symmetry. In the example of word embeddings you provided, if "uncle" were unknown (represented by "x"), we could indeed attempt to solve for it, using the known relationship between "aunt," "woman," and "man."
Given the equation `E(aunt) – E(x) ≈ E(woman) – E(man)`, you could rearrange the terms to solve for `E(x)`, the embedding for "uncle," using algebraic manipulation:
E(x) ≈ E(aunt) – (E(woman) – E(man))
However, in practice, these relationships are not always perfectly symmetric due to various factors such as polysemy, context-specific usage, and the limitations of the embedding algorithm itself. Word embeddings are learned from actual language use in large corpora, and the nuances of language can lead to embeddings that do not always form perfectly symmetric relationships.
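The rearrangement above can be checked numerically. Below is a minimal sketch using invented toy 3-dimensional vectors (real embeddings have hundreds of dimensions, and these numbers are made up purely to illustrate the arithmetic): we compute `E(aunt) – (E(woman) – E(man))` and find the nearest known word by cosine similarity.

```python
import numpy as np

# Toy "embeddings" -- the values are invented for illustration only;
# real word vectors are learned from large corpora.
E = {
    "man":   np.array([ 1.0, 0.0, 0.3]),
    "woman": np.array([-1.0, 0.0, 0.3]),
    "uncle": np.array([ 1.0, 1.0, 0.2]),
    "aunt":  np.array([-1.0, 1.0, 0.2]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Solve E(x) ≈ E(aunt) - (E(woman) - E(man)) and see which word it lands on.
x = E["aunt"] - (E["woman"] - E["man"])
best = max(E, key=lambda w: cosine(E[w], x))
print(best)  # -> uncle
```

With real embeddings the reconstructed point rarely coincides with a word vector exactly; the standard trick is exactly this nearest-neighbor lookup, which is what tolerates the imperfect symmetry described above.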
And that's why ChatGPT keeps making things up. It doesn't understand you or think the same way.
They should update Infinite Craft to work this way. Literal infinite crafting
Never would have thought of analogies as arithmetic
This is what Wittgenstein explained in his Tractatus Logico-Philosophicus
So if we ask it what is Putin-Russia+USA I wonder what it'd predict
Fun fact: you can ask ChatGPT to show the vectors it generates from your prompt
Beautifully explained. It can't be any better.
This reminds me of a fun little daily game called linxicon. It gives you two words which you have to connect by using words, often used in similar contexts. I never quite understood how this worked but I guess it's somewhat similar to this.
Higher dimensional space seems so fitting for describing how AI works, this blew my mind, thank you.
The fascism axis
It looks a lot like computing the divisors of a curve except it's over a field rather than integers.
Why is everyone talking about Hitter so much on social media? STOP PROMOTING HILTER. STOP TALKING ABOUT HIM. HE WAS THE DEVIL. YOU WILL BRING HIM BACK IF YOU KEEP USING HIS NAME.
🪄✨
This is why AI should never be filtered ESPECIALLY IF OFFENSIVE as that almost shows as a watermark the lack of filtering.
Language model's are only useful naked, not filtered through the lens of opinions legal or politically safe to output.
That…actually kind of makes sense. Fascinating.
Actually really interesting
The funny part is Mussolini was my answer to the question too, because he's an Italian fascist (in fact, he's the guy who coined the word)
AI is cancer 👎
Quantum physics please
Damn, can't believe i actually got that right 😂
It's insanely cool that this idea worked out this way.
gibberish
There is a very compelling theory that the capacity for abstract thought originally evolved from the same part of our brain that processes navigation and spatial orientation. In a certain sense, just about every practical problem in logic can be encoded as an equivalent geometric problem, so you can reuse the same structures to solve totally abstract problems that have nothing to do with moving around. If this is true, it would be very extremely insightful – when someone says they can't "visualize" something, maybe what they really mean is that they can't build a space in their brain to navigate.
I am Chilean and I was super curious about where you are. I saw the prompts checking for Ñuñoa and another one for things to do in Chime.
I'm waiting for the feminists to complain why E(man) is higher than E(woman)…
Hitler is to Germany as Mussolini is to Italy
Hitler + Italy – Germany 💀💀💀
That’s such an interesting visualization. It results in multi-dimensional versions of the "This :: That as That :: This" relationships.
Please, Mr. Algorithm, this is what I want for shorts. I want numbers and graphs, not brainrot
A delayed introduction is just the better way to get someone's attention 😂
How do I do this by myself?
Bro you okay?
"K, so over there is the French place. Over here is the Italian place, don't mind that, it's the WW2 place, and here's Python…"