Several up-and-coming AI workers (plus Ray Kurzweil) have predicted that human-level AI will be achieved within the next 10 to 20 years. Where is the progress they can point to in order to justify these claims? Within their own imaginations, apparently. According to some of the thinkers and AI pioneers quoted at the link above, the problem of AI is not being approached correctly. One of the most piercing criticisms of the field seems to be that it is focusing on tactics rather than strategy.
...clearly, the AI problem is nowhere near being solved. Why? For the most part, the answer is simple: no one is really trying to solve it. This may come as a surprise to people outside the field. What have all those AI researchers been doing all these years? The reality is that they have largely given up on the grand ambitions of AI and are instead working on increasingly specialized subproblems: not just machine learning or natural-language understanding, say, but issues within those areas, like classifying objects or parsing sentences. _Technology Review

This is clearly true. What is tragic is that a large proportion of the current crop of top-level AI researchers do not seem to understand the difference between strategy and tactics in the context of AI. Strategy in AI calls for much deeper thinking than tactics -- thinking that may in fact be beyond the capacity of most researchers, and even beyond the range of philosophers such as Dan Dennett, who has relatively low expectations for human-level AI in the foreseeable future.
As brilliant as earlier AI researchers such as John McCarthy, Marvin Minsky, and Seymour Papert may be (and may once have been) in their heyday, the extent of the problem of human-level AI was too poorly defined for anyone to grasp the challenge.
Modern researchers who make boastful claims about achieving human-level AI in 10-20 years do not have that excuse. Clearly neither Kurzweil nor the other AI hopefuls truly has a grasp of the problem as a whole. And that failure will be their undoing.
What would it take to succeed at human-level AI? A closely-knit, multidisciplinary team of thinkers willing to try outlandish and counter-intuitive approaches to the problem -- well out of the mainstream. To achieve human-type machine intelligence, high-level expertise in one's field is but the barest beginning, hardly a single step in the journey of a thousand miles. One must also have a child-like approach to the world, be both brilliant and incredibly humble in the face of the ineffable, and be able to integrate complex ideas from both within and without one's own field of expertise into a new, working holism.
Of course it sounds like psycho-babble, but when approaching something far too complex for words, such failures of communication are inevitable. Al Fin cognitive engineers recommend that the concepts of "embodied cognition" and "preverbal metaphor" be kept foremost in the minds of any hopeful AI developers.
For everyone else, don't get your hopes up too high. Higher education in the advanced world -- particularly in North America -- is moving into serious difficulty, which means that research facilities and funding are likely to be cut back, perhaps severely. The economies of the advanced nations, and the US in particular, are being badly mismanaged, which means that private sector efforts will also likely be cut back. The problem may require the emergence of an individual with the brilliance of Einstein, the persistence of Edison, and the wide-ranging coherent creativity of a da Vinci.
In other words, people need to become smarter -- at least some people. Then they can learn to evolve very smart machines, and perhaps to interface with networks of these very smart machines. Then, possibly, to use this symbiotic intelligence to design even smarter people.
Because let's face it: Humans -- especially PC-cultured humans -- will only take us to the Idiocracy, sooner or later. And the Idiocracy will be not only stupid, but downright brutal.