Whenever I see the direction AI is going, one thing always annoys me, because I was interested in AI before it was cool, and now it’s becoming uncool again: none of these AI models really seems to be all that intelligent. AIs have to have gigantic corpora of text, pictures, videos, or whatever their medium is fed into them, and even then people are still smarter than they are, since the models frequently hallucinate and can be jailbroken easily by nefarious or merely rambunctious agents.
I’ve always thought AI should use some form of basically universal grammar, in the spirit of Wilhelm von Humboldt’s line that “language is the infinite use of finite means.” The fact is that when children learn language, they barely have to hear anything before they start picking it up, while these LLMs ingest more language than most people hear in a lifetime and still can’t use it very well. Why don’t we program the generative model into the AI itself?
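To give a feel for what “the infinite use of finite means” looks like in code, here is a toy generative grammar. This is entirely my own illustration, not a claim about what universal grammar actually contains: a handful of rewrite rules, one of them recursive, generates an unbounded set of sentences.

```python
import random

# Toy grammar: finite rules, unbounded output. The NP rule can embed a
# whole sentence (S) inside itself, which is where the infinity comes from.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "S"]],  # recursive option
    "VP": [["V", "NP"], ["V"]],
    "N":  [["child"], ["dog"], ["idea"]],
    "V":  [["sees"], ["chases"], ["sleeps"]],
}

def generate(symbol="S"):
    """Expand a symbol by recursively picking one of its rewrite rules."""
    if symbol not in GRAMMAR:
        return [symbol]  # terminal word, emit as-is
    rule = random.choice(GRAMMAR[symbol])
    return [word for part in rule for word in generate(part)]

print(" ".join(generate()))  # e.g. "the dog that the child sees sleeps"
```

The generative machinery is tiny; the unboundedness comes from the recursion, not from the size of any corpus.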
Has ChatGPT refuted Noam Chomsky? – Philosophy of Linguistics (wordpress.com)
Why did language evolve? – Philosophy of Linguistics (wordpress.com)
This paper, which one of my friends helped write, really gets at that same idea: they build an AI on a seven-year-old personal computer by programming in the rules of the game Othello itself instead of just throwing examples at the computer and trying to get it to predict things (a sketch of what that looks like in code follows the links below). What I think would need to be done differently for something like language, though, is that universal grammar is not the same as programming in one specific language: the built-in machinery has to generalize, so that once one language has been acquired from sparse examples, others can be acquired the same way. In linguistics, the observation that children learn far more than their limited input seems to contain is called the poverty of the stimulus.
Applications of Neural Architecture Search to Deep Reinforcement Learning Methods (tamu.edu)
Poverty of the stimulus - Wikipedia
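To make that concrete, here is a minimal sketch of what “programming in the rules” means; the board representation and function names are my own illustration, not the paper’s actual code. Instead of training a network to predict legal Othello moves from millions of example games, you simply write the move-generation rule down.

```python
EMPTY, BLACK, WHITE = ".", "B", "W"
# The eight compass directions a capturing line can run in.
DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def legal_moves(board, player):
    """Return every square where `player` may legally move (8x8 Othello)."""
    opponent = WHITE if player == BLACK else BLACK
    moves = set()
    for r in range(8):
        for c in range(8):
            if board[r][c] != EMPTY:
                continue
            # A move is legal if some direction holds a run of opponent
            # discs capped by one of the player's own discs.
            for dr, dc in DIRS:
                rr, cc = r + dr, c + dc
                seen_opponent = False
                while 0 <= rr < 8 and 0 <= cc < 8 and board[rr][cc] == opponent:
                    rr, cc = rr + dr, cc + dc
                    seen_opponent = True
                if seen_opponent and 0 <= rr < 8 and 0 <= cc < 8 \
                        and board[rr][cc] == player:
                    moves.add((r, c))
                    break
    return moves

# Standard starting position: four discs in the centre, Black to move.
board = [[EMPTY] * 8 for _ in range(8)]
board[3][3], board[4][4] = WHITE, WHITE
board[3][4], board[4][3] = BLACK, BLACK
print(sorted(legal_moves(board, BLACK)))  # the four classic opening moves
```

Nothing here was learned from data; the rule is the knowledge, which, as I understand it, is the move the paper makes against the pure example-feeding approach.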
AI is probably the greatest argument ever made for the idea of the poverty of the stimulus, because the computer has a gigantic corpus of information and still can’t do what a person can with a tiny one.
So, to the McKuen mutants who read my blog, where I mostly try to argue for the importance of bringing back some parts of German idealism: who would like to help work on some generative AIs?
I think the most important neglected aspect is the input system for learning itself; that is the main thing I personally think needs to be addressed. For example, babies mostly learn words in contexts where the thing the word refers to is present or being pointed at, though how that extends to abstract concepts is admittedly murky, even if that is how it happens. How can a computer learn what a word is? Instead of giving it a database of bare text, maybe it should get an image, sounds, and so on at the same time. How can this information be encoded for it?
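One way to pose the encoding question concretely is as a data shape: each word arrives as a grounded episode, together with whatever sensory context accompanied it. The field names here are hypothetical, and this says nothing about the learning algorithm itself; it is only a sketch of the input format.

```python
from dataclasses import dataclass

@dataclass
class GroundedUtterance:
    word: str                       # the token the learner hears
    image: bytes | None = None      # raw pixels of the scene, if any
    audio: bytes | None = None      # the acoustic signal itself
    pointing_at: str | None = None  # the object being indicated, if any

# The learner would then induce word meanings from a handful of such
# episodes rather than from billions of context-free tokens.
episode = GroundedUtterance("dog", image=b"<pixels>", pointing_at="dog")
```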
I feel half-tempted to try to figure out how to selectively feed computers thoughts in some of the non-invasive ways I keep considering. That also makes me doubly ashamed of people like Chippy the Elongated Muskrat, worst of the ex-X-Men, drilling holes in the skulls of people with disabilities and permanently connecting them to the Internet from inside their heads, when having your cell phone next to your ear with all your personal data on it is already enough of a concern for most people. Though I guess, to be fair, that’s one way to show Professor X you’re really mad at him and out to get him for kicking you out of the X-Men. I can think of lots of non-invasive ways to pick up most of the relevant information, and it should be about the same level of effort as making an AI on your seven-year-old personal computer at home. For example, synesthesia can encode information and seems like one of the bases of language in the first place; a toy sketch of that idea follows the links below.
This paper seems to have been removed from Twitter, but it’s still available to read:
The Bouba–Kiki effect is predicted by sound properties but not speech properties (springer.com)
[PDF] The Sign is not Arbitrary | Semantic Scholar
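As a toy illustration of that sound-symbolism idea (the bouba–kiki effect discussed in the papers above), here is a crude scorer for how “round” a word sounds. The letter classes and weights are my own stand-ins for the acoustic properties the first paper actually measures.

```python
# Crude sound-symbolism heuristic: rounded vowels, nasals, and voiced
# stops read as "round"; voiceless stops and high front vowels as "spiky".
ROUND_SOUNDS = set("bmoulw")
SPIKY_SOUNDS = set("ktipez")

def roundness(word: str) -> float:
    """Score in [-1, 1]: positive means round-sounding, negative spiky."""
    score = sum((ch in ROUND_SOUNDS) - (ch in SPIKY_SOUNDS)
                for ch in word.lower())
    return score / max(len(word), 1)

print(roundness("bouba"))  #  0.8 -> predicted to pair with round shapes
print(roundness("kiki"))   # -1.0 -> predicted to pair with spiky shapes
```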
Additionally, since AI is apparently just a buzzword anyway, you can turn people’s brainwaves into images without much extra processing, because the brain already seems to work much the same way the model does.
Researchers translate brain waves into scarily accurate images using Stable Diffusion AI | PC Gamer
On a less related but funnier note, caffeine also helps fuel cells get more energy out, so it appears similar causes often do lead to similar effects across a lot of nature.
Caffeine makes fuel cells more efficient, cuts energy cost • The Register
Of course, this goes against a tremendous amount of received so-called “wisdom” in computer science, since it basically says the brain isn’t a Turing machine and that computers aren’t best thought of as Turing machines either. How has that been working out for everyone, considering Dijkstra has been deceased for a while now and this was his criticism too? His solution was basically also a generative model whose way of working was programmed into it; he just thought about it in different terms.