
On May 30, world-class influencer Elon Musk tweeted:

2029 is a pivotal year, and I would be surprised if we haven’t achieved Artificial General Intelligence (AGI) by then. Hopefully, the same goes for people on Mars.

Not long after Musk tweeted, Gary Marcus, the well-known artificial intelligence researcher and professor of psychology at New York University, wrote a blog post challenging him: it sets out to “educate” Musk on general artificial intelligence from five angles, giving Marcus’s reasons why AGI will not be possible by 2029.


Musk has not responded to Gary Marcus’ challenge.

Melanie Mitchell, an AI researcher at the Santa Fe Institute, suggested hosting the bet on longbets.org; Marcus says that as long as Musk is willing to bet, he will show up.


Below are the five angles from which Gary Marcus pushes back against Musk, as compiled by AI Technology Review:

Musk is a “big talk” prophet

First, Musk’s predictions about time are always inaccurate.

In 2015, Musk said that true self-driving cars were still two years away; he’s said the same thing every year since, but true self-driving cars have yet to emerge.


Musk doesn’t focus on edge-case challenges

Second, Musk should focus more on the challenges of edge cases (aka outliers, or unusual situations) and think about what those outliers might mean for predictions.

Because of the long tail, it is easy to assume the AI problem is much simpler than it actually is. We have enormous amounts of data about everyday situations, which current techniques handle easily, and this can mislead us into a false sense of progress; for rare events, we have very little data, and current techniques struggle to handle them.

We humans possess a wealth of skills for reasoning from imperfect information, perhaps precisely to cope with the long tail of life. But for today’s popular AI techniques, which lean on big data rather than reasoning, the long tail is a very serious problem.
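To make the long-tail point concrete, here is a minimal Python sketch; the Zipf-like distribution and every number in it are assumptions invented for this illustration, not data from Marcus’s post:

```python
import collections
import random

random.seed(0)

# Toy model of a long-tailed world: driving situation i occurs with
# probability proportional to 1/i. Situation 1 might be "car ahead brakes";
# situation 50,000 might be "jet parked across the lane".
NUM_SITUATIONS = 50_000
weights = [1 / i for i in range(1, NUM_SITUATIONS + 1)]

# Gather a large "training set" of a million observed situations.
observed = random.choices(range(1, NUM_SITUATIONS + 1), weights=weights, k=1_000_000)
counts = collections.Counter(observed)

top100_share = sum(counts[i] for i in range(1, 101)) / len(observed)
never_seen = NUM_SITUATIONS - len(counts)  # situation types with no data at all

print(f"Share of data covered by the 100 most common situations: {top100_share:.0%}")
print(f"Situation types with zero training examples: {never_seen:,}")
```

Even with a million examples, a large share of the data describes a handful of common situations, while thousands of rare situation types never appear in the training set at all.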

In a 2016 interview titled “Is Big Data Taking Us Closer to the Deeper Questions in Artificial Intelligence?” Gary Marcus tried to sound the alarm. Here’s what he said at the time:

Despite a lot of hype around AI and a lot of money going into it, I feel the field is headed in the wrong direction. In the particular direction of deep learning and big data there is a lot of low-hanging fruit. People are very excited about big data and what it gives them right now, but I’m not sure it brings us any closer to the deeper questions in AI, like how we understand language or how we reason about the world.

Think again about driverless cars. In general, driverless cars are great: put one in sunny Palo Alto and it will perform really well. But put it somewhere with snow or rain, or somewhere it has never seen before, and there are bound to be problems. Steven Levy wrote an article about Google’s autonomous-car lab in which he noted that, by late 2015, the team had finally gotten the system to recognize leaves.

The system can indeed identify leaves, but for less common situations there simply is not as much data. Humans bridge such gaps with common sense: we can try to figure out what a thing is and how it got there. All the system can do is memorize, and that is its real limit.


▲ Tesla autopilot crashes into $3 million jet

Unexpected situations have always been the scourge of contemporary AI, and will likely remain so until a genuine breakthrough arrives. This is why Marcus is confident that Musk will not launch a Level 5 self-driving car this year or next.

Outliers are not completely unsolvable, but they remain a significant problem with no known robust solution to date. Marcus believes people must move away from their heavy reliance on existing techniques such as deep learning. 2029 is seven years away, and seven years is a long time; but if AGI is to be achieved by the end of this decade, the field needs to invest in other ideas. Otherwise, outliers alone are enough to defeat the goal of achieving AGI.

General artificial intelligence is broad

The third thing Musk needs to consider is that AGI is a broad problem, because intelligence itself spans a wide range of abilities. Here Marcus quotes Chaz Firestone and Brian Scholl:

There is no one way the mind works, because the mind is not one thing. Rather, the mind has parts, and its different parts work in different ways: seeing a color works differently than planning a vacation, which works differently than understanding a sentence, moving a limb, remembering a fact, or feeling an emotion.

For example, deep learning does fairly well at recognizing objects, but less so at planning, reading, or language comprehension. This situation can be represented by the following diagram:

▲ Marcus’s pie chart of AI progress across different cognitive domains

Current AI does well at some forms of perception but still falls short in others. Even within perception, 3D perception remains a challenge, and scene understanding is unsolved. For domains such as common sense, reasoning, language, and analogy, there are still no stable or trustworthy solutions. In fact, Marcus has been using this pie chart for five years, and the situation has hardly changed.

In Marcus’ 2018 article “Deep Learning: A Critical Appraisal,” he concluded:

Despite my questions, I don’t think we should abandon deep learning.

Instead, we need to reconceptualize deep learning: not as a universal solvent, but as one tool among many, alongside which we also need hammers, wrenches, and pliers, not to mention chisels, drills, voltmeters, logic probes, and oscilloscopes.

Four years on, many people still hope deep learning will be a panacea; Marcus still considers this unrealistic and still believes we need more kinds of techniques. Practically speaking, seven years may not be long enough to invent those tools (if they don’t already exist) and to get them out of the lab and into production.

Marcus also reminded Musk of “production hell” (Musk’s own 2018 term for the brutal ramp-up of Model 3 production). Integrating, in less than a decade, a set of technologies that has never been fully integrated before would be extremely demanding.

Marcus said, “I don’t know what Musk intends Optimus [Tesla’s humanoid robot] to do, but I can guarantee that the intelligence required for a general-purpose home robot far exceeds that of a car; after all, driving on one road is more or less like driving on another.”

Complex cognitive systems have yet to be built

The fourth thing Musk needs to realize is that we still do not have a sound methodology for building complex cognitive systems.

Complex cognitive systems have too many moving parts, which means the people building things like self-driving cars end up playing a giant game of whack-a-mole: solve one problem and another pops up. Patch upon patch sometimes works and sometimes doesn’t. Marcus doesn’t think AGI is reachable without addressing this methodological problem, and he doesn’t think anyone has a good proposal yet.

Debugging a deep learning system is very difficult, because no one really understands how it works or how to fix it beyond collecting more data, adding more layers, and so on. Debugging as it is understood in classical programming does not apply here: deep learning systems are so inscrutable that one cannot reason the same way about what the program is doing, nor run the usual process of elimination. Instead, the current deep learning paradigm involves a great deal of trial and error, retraining and retesting, not to mention plenty of data cleaning and data-augmentation experiments. A recent report from Facebook candidly recounts the many troubles encountered in training its large language model, OPT.
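As a caricature of that loop, here is a minimal Python sketch; every function and number in it is an invented placeholder (no real pipeline looks exactly like this), but it captures the “retrain and retest until the metric looks right” workflow the paragraph describes:

```python
import random

# Stubs standing in for an expensive training run and a held-out evaluation.
def train(dataset, learning_rate, depth):
    return ("model", learning_rate, depth, len(dataset))

def evaluate(model, test_set):
    return random.uniform(0.80, 0.99)  # pretend accuracy on held-out data

def clean_and_augment(dataset):
    return dataset  # pretend data work

def debug_by_trial_and_error(dataset, test_set, target=0.95):
    learning_rate, depth = 1e-3, 8
    while True:
        model = train(dataset, learning_rate, depth)
        if evaluate(model, test_set) >= target:
            return model                      # "the answers look right"
        # No principled diagnosis is available, so stir the pile:
        learning_rate *= 0.5                  # guess: maybe training diverged
        depth += 2                            # guess: maybe the model underfits
        dataset = clean_and_augment(dataset)  # guess: maybe the data is bad

debug_by_trial_and_error(dataset=list(range(1000)), test_set=list(range(100)))
```

Note what is missing compared with classical debugging: there is no breakpoint, no stack trace, and no step where the cause of a failure is actually identified.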

Sometimes this is more like alchemy than science, as in the picture below:

▲ “Is this your machine learning system?”

“Yeah, you pour the data into this big pile of linear algebra and collect the answers on the other side.”

“What if the answer is wrong?”

“Then you mess around with the pile until the answers start looking right.”

Program verification may eventually help, but there are as yet no tools for writing verifiable code in the deep learning world. If Musk wants to win the bet, this too may have to be solved, and soon.
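To see the contrast Marcus is drawing, here is a small Python sketch of what a checkable specification looks like for classical code; the sorting example is our illustration, not something from his post:

```python
# For classical code we can write down a complete specification and check it;
# formal tools can even prove it holds for all inputs.

def insertion_sort(xs: list[int]) -> list[int]:
    out: list[int] = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

def satisfies_spec(inp: list[int], outp: list[int]) -> bool:
    # The full spec of sorting: output is ordered and is a permutation of input.
    ordered = all(a <= b for a, b in zip(outp, outp[1:]))
    permutation = sorted(inp) == sorted(outp)
    return ordered and permutation

# The spec can be checked on any input, independently of how the code works.
assert satisfies_spec([3, 1, 2], insertion_sort([3, 1, 2]))

# For "is there a pedestrian in this image?" no satisfies_spec(image, verdict)
# can be written down, which is one reason verified deep learning is so hard.
```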

The standard for the bet

The last thing Musk needs to consider is the standard for the bet: if you want to bet, you need ground rules. The term AGI is rather vague. As Marcus said on Twitter the other day:

I define AGI as “flexible and general intelligence with resourcefulness and reliability that rivals or exceeds human intelligence.”


Marcus has also proposed taking the bet with Musk under specific ground rules. At the request of the people behind Metaculus, he and Ernie Davis wrote the following five predictions:

  • By 2029, AI will still be unable to watch a movie and tell you accurately what is going on (what Marcus called the “comprehension challenge” in The New Yorker in 2014): who the characters are, what their conflicts and motivations are, and so on.

  • By 2029, AI will still be unable to read novels and accurately answer questions about plot, characters, conflict, motivation, and more.

  • By 2029, AI will still not be able to be a competent chef in any kitchen.

  • By 2029, AI will still be unable to reliably write more than 10,000 lines of bug-free code based on natural language specifications or through interactions with non-expert users. (Gluing together code from existing libraries doesn’t count.)

  • By 2029, AI will still be unable to take arbitrary proofs from the mathematical literature, written in natural language, and convert them into a symbolic form suitable for symbolic verification (see the sketch after this list).
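As an illustration of that last item, here is a minimal example (ours, not Marcus and Davis’s) of what “symbolic form suitable for symbolic verification” means, written in Lean 4 with a deliberately trivial theorem:

```lean
-- Natural-language statement: "the sum of two even numbers is even,"
-- restated symbolically so a proof checker can verify it mechanically.
theorem even_add_even (m n : Nat)
    (hm : ∃ k, m = 2 * k) (hn : ∃ k, n = 2 * k) :
    ∃ k, m + n = 2 * k :=
  match hm, hn with
  | ⟨a, ha⟩, ⟨b, hb⟩ =>
    -- Witness: m + n = 2a + 2b = 2(a + b).
    ⟨a + b, by rw [ha, hb, Nat.mul_add]⟩
```

The prediction is about doing this translation automatically, for real proofs from the literature rather than toy statements like this one.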

If Musk (or anyone else) manages to break at least three of the predictions by 2029, he wins; if only one or two are broken, AGI cannot be said to have arrived, and the winner is Marcus.

Marcus is eager to make this bet, and asked Musk: “Want to bet? How about $100,000?”

What do you think? Who do you think will win? (Grab your popcorn.)

Reference links:

https://garymarcus.substack.com/p/dear-elon-musk-here-are-five-things?s=w

https://www.ted.com/talks/elon_musk_elon_musk_talks_twitter_tesla_and_how_his_brain_works_live_at_ted2022

https://arxiv.org/abs/1801.00631

https://www.wsj.com/articles/elon-musk-races-to-exit-teslas-production-hell-1530149814
