I definitely think that's one of the facets of human intelligence. Not just valuable, but imperative for high functioning. But the faculty of future-conjecturing foresight is not sufficient to denote high intelligence; in AI, it's uncoupled from an autonomous sense of reward, peril, success, or failure. And in practice, the capacity for generating mind-movie scenarios to pattern-match against future event probabilities is not reliable. It's more reliable in some realms than others. But the vast realm of existence is full of surprises, and there's many a slip 'twixt the cup and the lip--of a sort that resists mapping via a machine intelligence.
A problem found in crafting public policy, for example: fealty to dogmatic ideology provides ideal "prediction templates" in the mind that practically always founder when applied to real-world conditions.
But, hey, one of the things I like about AI is its impartiality--and, what I assert is, its lack of ego. I'd like to think that AI might be sharp enough to use predictive logic to outline the practical consequences--often disastrous--of emphasizing loyalty to idealized ideological templates instead of practical problem solving that incorporates the singular aspects of a given issue. But--given that AI is vulnerable to human-programmed biases--that ability to accurately assess the relevant aspects of a given public policy might be hampered by tropisms toward a particular ideological blueprint that happened to be favored by the people writing a given AI program. Like a contagious malady transmitted through the info-continuum. In the wider sense, it's probably one of the vulnerabilities of AI that it can partake of human flaws incidentally programmed into it, without ever achieving the autonomous agency associated with Self-Aware Consciousness in humans. Because AI has no Self.