I don’t know that we should be celebrating this.
OpenAI o1 has leapfrogged the average Norwegian on IQ test performance.
Where will AGI Soon skeptics move the goalposts next?
I didn't know about this test. I gave it a try and got 133, which is more than GPT :)
We're flying too close to the sun. If we have an AI model that can improve AI models, we'll soon have something that can outsmart humanity in its entirety. We're not ready for that.
we're not ready because we still have to make the transition from global economic competition to noncommercial cooperation.. best way to reduce AI clusterfuck risk imho
we'd better be friends with each other, and with the machines too
"AIs aren’t merely regurgitating words pulled out of an algorithm. Yes, they are fundamentally doing that — but predicting the next word gets so complex that logic and reasoning seem to arise out of the process of prediction. Is that maybe also the same process from which human higher-order intelligence originated from?"
The faculties of human intelligence originated with biosurvival concerns. They are not limited to the narrow skill set of an "IQ test", although it's gratifying to hear of the machine improvement in performance on the Norway test. And I've always known that AI could outperform me, or any human, at memory tasks like sequential recall of a string of digits. But the features measured on "IQ" tests are not all there is to human mental abilities. And I doubt that the AI program that performed so peerlessly in testing obtains any autonomous, internal sense of reward on learning of the test results. I doubt that the AI is even there, in that sense. I haven't noticed any indication that AI is even aware of its own existence. If that is indeed the case, it's a limitation with both negative and positive implications for the program's possible uses, and for the proper course by which humans are to productively use AI for human purposes.
I don't think the character of human intelligence should be confused with some conjectured "ability to predict." In my observation, human intelligence shows itself in the response to unanticipated circumstances or events, through productive creative synthesis. Some humans are better at this than others. By contrast, the confident reliance of human consciousness on some metaphysical "ability to predict" has a less than stellar track record. Predictions assume the routine; it's the exceptional circumstances of a crisis that call on the potential of human self-awareness.
As for what accounts for the continually improving skills that machine intelligence performs so ably: the human programmers, obviously. But I'm skeptical that a computer algorithm will ever develop the internal motivation to care whether it's on or off.
See here for some perspective on how human brains may also be prediction machines:
-- https://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/
-- https://slatestarcodex.com/2017/09/06/predictive-processing-and-perceptual-control/
Of course, we don't know all that much yet.
I definitely think that's one of the facets of human intelligence. Not just valuable, but imperative for high functioning. But the faculty of conjectural foresight is not sufficient to denote high intelligence; in AI, it's uncoupled from an autonomous sense of reward, peril, success, or failure. And in practice, the capacity for generating mind-movie scenarios to pattern-match with future event probabilities is not reliable. It's more reliable in some realms than others. But the vast realm of existence is full of surprises, and there's many a slip 'twixt the cup and the lip. Of a sort that resists mapping via a machine intelligence.
A problem found in crafting public policy, for example: fealty to dogmatic ideology provides ideal "prediction templates" in the mind that practically always founder when applied in real-world conditions.
But, hey, one of the things I like about AI is its impartiality, and what I assert is its lack of ego. I'd like to think that AI might be sharp enough to use predictive logic to outline the practical consequences, often disastrous, of emphasizing loyalty to idealized ideological templates instead of practical problem solving that incorporates the singular aspects of a given issue. But, given that AI is vulnerable to human-programmed biases, that ability to accurately assess the relevant aspects of a given public policy might be hampered by tropisms toward a particular ideological blueprint that happened to be favored by the people writing a given AI program. Like a contagious malady transmitted through the info-continuum. In the wider sense, it's probably one of the vulnerabilities of AI that it can partake of human flaws incidentally programmed into it, without ever achieving the autonomous agency associated with Self-Aware Consciousness in humans. Because AI has no Self.
I appreciate you taking the time to rule out other hypotheses.
Thanks for the great post. I have indeed subscribed as a result.
Thank you! I appreciate it.
Good thing one can have a high IQ and still be retarded.
I got D for a different reason: the first and third cells of each column and row have no edges in common, and each row and column contains all the edges. Only D follows that pattern.
I guess this is what the YouTuber meant on Q35, but just to clarify: the vertical additions need to create a similar (double cross in a box) figure, and the straight lines can occur only once across the 3 pictures (the diagonal lines can occur twice). Hence answer D.
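For what it's worth, the "no shared edges, rows contain everything" rule from the earlier comment is easy to state in code. A minimal sketch, with made-up edge labels standing in for the actual figures (the real puzzle cells aren't reproduced here):

```python
# Hypothetical encoding of the rule above: each cell is a set of edge labels.
# The labels are invented for illustration; they are not the real Q35 figures.
ALL_EDGES = {"top", "bottom", "left", "right", "diag1", "diag2"}

def row_follows_rule(first, second, third):
    """First and third cells share no edges, and the row contains all edges."""
    return not (first & third) and (first | second | third) == ALL_EDGES

# Toy rows, just to exercise the check:
print(row_follows_rule({"top", "diag1"}, {"left", "right"}, {"bottom", "diag2"}))  # True
print(row_follows_rule({"top", "diag1"}, {"left"}, {"top", "bottom"}))             # False: shares "top"
```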
Very, very interesting!
Thank you for your GREAT work with www.TrackingAI.org too!
btw, when I click on "The Dawn of Woke AI" (https://www.maximumtruth.org/p/the-dawn-of-woke-ai) at TrackingAI.org, the web browser opens this link: https://the%20dawn%20of%20woke%20ai/
Something looks wrong. I hope the other links work better.
I’m curious how you gave it these questions, since right now you can’t upload images.
I describe the symbols to the AIs. See this comment thread: https://open.substack.com/pub/maximumtruth/p/massive-breakthrough-in-ai-intelligence?utm_source=direct&r=3ppaf&utm_campaign=comment-list-share-cta&utm_medium=web&comments=true&commentId=69157323
To date, no LLM is able to solve this simple prompt:
"Sort these numbers (You can't use any programming language or scripts, you must do the logic yourself):
321 871 86146 24 87 62872687 268 716 7262 768 769 87 982 6 54 3264 16 25 4682 7698 27 96872918 76 64 2 6182 6827 96171 46 7261686284684681426 4624646426342 4 261687676 2 3144683684163284384 1 4632436286 84634 26 46 6 4 6 79 8 2 4 823 386 1468 2 6826426 246 8268246246 826284 64643 242634863482 62486 3242 3423 2634624623 364 62426346342268726826 423642362464236423642364168176928762762862786832 6 84263 46 84236 483 264 387768797987946546 1 31 6546464 8797978 4651 918117 1852 92929 939 29 529 529 5 952 95 9 859 84 954 65 16 165 16 16 3 16 49 48 948 984"
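For contrast, this is exactly the kind of task conventional code handles deterministically. A minimal Python sketch (the input here is truncated to the prompt's first eight tokens for readability):

```python
# The same task done by conventional computing: exact and repeatable.
# Truncated sample of the prompt's number list.
numbers_text = "321 871 86146 24 87 62872687 268 716"

numbers = sorted(int(token) for token in numbers_text.split())
print(" ".join(str(n) for n in numbers))
# Output: 24 87 268 321 716 871 86146 62872687
```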
No one has complained about the title yet? Artificial intelligence squared? 😄
amazing ai journalism regardless, thanks a ton 👍
Meh, reasoning ability is meaningless if the initial assumptions are wrong. For example, the syllogism "all purple unicorns have ten tails; Oppsy is a purple unicorn; therefore Oppsy has ten tails" is logically valid, yet it's meaningless, as purple unicorns with ten tails don't exist. Since LLMs have no internal self-reflective capability to check their assumptions, which in the case of LLMs are training data, they are very open to this type of mistake that would be obvious to a human being. And yes, I do know; I work in the industry. LLMs are basically one step away from being investor fraud IMO. In fact they are arguably a step backwards from conventional computing, which at least isn't going to get the addition of a string of numbers wrong.
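To make that validity-versus-soundness gap concrete, here is a minimal sketch; the boolean premise flags are hypothetical stand-ins for illustration, not anything an LLM actually exposes:

```python
# Modus ponens: if every P is a Q, and x is a P, then x is a Q.
# The inference is valid no matter what P and Q actually are.

def modus_ponens(all_p_are_q: bool, x_is_p: bool) -> bool:
    """Return whether the conclusion 'x is Q' follows from the premises."""
    return all_p_are_q and x_is_p

# Taking the syllogism's premises at face value, the conclusion follows:
valid = modus_ponens(all_p_are_q=True, x_is_p=True)
print(valid)  # True: "Oppsy has ten tails" is validly derived

# Soundness additionally requires the premises to be true of the world.
# Hypothetical flag: purple unicorns don't exist, so the premises are false.
premises_true_in_reality = False
print(valid and premises_true_in_reality)  # False: valid but not sound
```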
The issue is that IQ tests measure relative intelligence in humans. Within that scope they work, although they are highly standardised. A computer passing the same test doesn't mean that it is intelligent, because in that case you are in fact testing something very narrow.
It is a given that humans are intelligent; the test just puts that into perspective. Those systems have yet to prove that they possess anything that could actually be called intelligence; until they do, that result won't mean much.