Two Precursors to this Post:
1. My old blog post about physics and relationships.
2. You know how sometimes you have to hear something twice in order to really understand it? I had watched the really great 3B1B explainer video about LLMs (all screenshots below are from that video) a few months ago, which primed me for this story.
And now on to the post:
My brother and I were visiting my parents over the MLK Day weekend when he began to talk about AI, LLMs, and where the meaning of things seems to be embedded in language. After he explained tokenization:
(that's when human-speak is translated into mathematical objects):
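If you want to see that idea without the screenshots, here is a minimal sketch of it in code. The vocabulary, the whitespace splitting, and the particular ID numbers are all made up for illustration; real LLMs use learned subword tokenizers, but the "text in, integers out" shape is the same:

```python
# Toy illustration: turning text into numbers. Real tokenizers use learned
# subword vocabularies, but the idea is the same: text in, integers out.
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4, "<unk>": 5}

def tokenize(text):
    # Split on whitespace and map each word to its integer ID,
    # falling back to an "unknown" token for anything not in the vocab.
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

print(tokenize("The cat sat on the mat"))  # [0, 1, 2, 3, 0, 4]
```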
He then explained that the embedding of these tokens into an abstract multidimensional phase space (which is what all the training of an LLM is primarily doing... constructing the right encoding for these tokens) actually contains more information than the words themselves.
Where these various tokens lie in this multidimensional phase space is meaningful: the relationships between the tokens are geometric, and those relationships are what make them useful to perform operations on.
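Here is a small sketch of what "geometric relationships" means in practice, using invented 4-dimensional vectors (real embedding spaces have hundreds or thousands of dimensions, and these particular numbers are mine, not from the video). The classic example is that "king" minus "man" plus "woman" lands near "queen":

```python
import numpy as np

# Invented low-dimensional "embeddings"; real models learn these during training.
emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.3]),
    "queen": np.array([0.9, 0.1, 0.8, 0.3]),
    "man":   np.array([0.1, 0.9, 0.1, 0.2]),
    "woman": np.array([0.1, 0.1, 0.9, 0.2]),
}

def cosine(a, b):
    # Cosine similarity: closer to 1.0 means "pointing the same direction".
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The analogy as arithmetic on vectors: king - man + woman ~ queen.
target = emb["king"] - emb["man"] + emb["woman"]
for word, vec in emb.items():
    print(f"{word:>6}: {cosine(target, vec):.3f}")  # "queen" scores highest
```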
As he was explaining, I had two simultaneous realizations: this is how the universe itself works, and this may be how thinking happens in the human mind. In the universe, the properties of an object only become significant or meaningful at all in relation to other objects (how would you even know something was electrically charged if there were no other charges in the universe?). And when you are thinking subconsciously and having feelings about things... how does all that nonverbal thinking happen? We have actual multidimensional representations of concepts in the weights of and connections between the neurons in our brains.
Of course, this second realization seemed a lot less insightful when I remembered that human neuronal connections were the inspiration for neural nets, which (once you add backpropagation to train them) become LLMs, which everyone is now calling AI.
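For the curious, here is what "training with backpropagation" boils down to, shrunk to a single artificial neuron. The toy task and the numbers are mine, not from my brother's explanation or the video; real LLMs do this same nudge-the-weights loop over billions of connections:

```python
import numpy as np

# One neuron learning y = 2x by nudging its weight against the gradient
# of its squared error: forward pass, backward pass, small update, repeat.
rng = np.random.default_rng(0)
w = rng.normal()          # the single "connection strength" we will train
lr = 0.1                  # learning rate

for step in range(50):
    x = rng.uniform(-1, 1)        # a training input
    y_true = 2.0 * x              # the target the neuron should produce
    y_pred = w * x                # forward pass
    error = y_pred - y_true
    grad = 2 * error * x          # backward pass: d(error^2)/dw
    w -= lr * grad                # gradient descent update

print(f"learned weight: {w:.3f} (target 2.0)")
```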
For everyone who thinks that humans are so special with our ability to think and communicate, what are LLMs demonstrating about our vaunted abilities? So rather than be impressed with how AI-like LLMs are, maybe we should just be less impressed with ourselves filling in the next word in the sentences we are constructing in our heads...
In the end, maybe the machines are not being promoted but we are being demoted?
I used to try to correct people who say "AI" when they mean "LLM," but now I'm just fine with it, because I am going to start telling people who say or do something smart that they are a fairly effective LLM rather than saying they are naturally intelligent.