Is a self-driving car's reasoning ability worse than that of a seven-month-old baby?


Tencent Technology News, September 5. By the time they are seven months old, most infants realize that things still exist even when they cannot be seen. If you put a toy under a blanket, a child of that age knows it is still there and can reach underneath and pull it out. This understanding of “object permanence” is not just a normal milestone of a child’s development; it is also one of the basic facts about reality.
Today’s self-driving cars, however, lack this ability, and acquiring it remains a formidable challenge for the technology. Autonomous vehicles are getting better and better, but they still do not understand the world the way human beings do. To a self-driving car, a bicycle that is momentarily hidden by a passing van is a bicycle that no longer exists.
This failure points to a shortcoming of today’s widely used computing discipline, which goes by the slightly misleading name of artificial intelligence (AI). Current AI works by building elaborate statistical models of the world, but it lacks any deeper understanding of reality. How to give AI at least a surface version of that understanding, namely the reasoning ability of a seven-month-old child, is now an active subject of research.
Modern AI is built on the concept of machine learning. If an engineer wants a computer to recognize a stop sign, he does not try to write thousands of lines of code describing every pixel pattern that might indicate such a sign. Instead, he writes a program that teaches itself, and shows it thousands of pictures of stop signs. Over repeated rounds of training, the program gradually discovers the features all those pictures have in common.
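That workflow can be sketched in a few lines of code. The example below is purely illustrative and is not taken from any of the systems mentioned in this article: the “images” are synthetic pixel arrays and the model is a simple off-the-shelf classifier, but the principle is the one described above, showing a self-learning program many labelled examples and letting it find their common features itself.

```python
# Minimal sketch of learning to recognise a "stop sign" from labelled images.
# Everything here (image size, blob position, model choice) is an assumption
# made for illustration; a real system would use photographs and a deep network.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_image(is_stop_sign: bool) -> np.ndarray:
    """Fake 32x32 grayscale image; stop signs get a bright central blob."""
    img = rng.normal(0.2, 0.05, (32, 32))
    if is_stop_sign:
        img[8:24, 8:24] += 0.6        # crude stand-in for the sign's shape
    return img.ravel()                 # flatten pixels into a feature vector

labels = rng.integers(0, 2, size=2000)             # 1 = stop sign, 0 = not
images = np.stack([make_image(bool(y)) for y in labels])

X_train, X_test, y_train, y_test = train_test_split(images, labels, random_state=0)

# "Repeated training" = fitting the model to thousands of labelled examples.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```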
Similar techniques are used to train self-driving cars. Cars have thus learned how to follow lane markings, how to avoid other vehicles, how to brake at red lights, and so on. But they do not understand many things a human driver takes for granted: that other vehicles on the road have engines and four wheels, or that they obey traffic rules and the laws of physics most of the time. Nor do they understand object permanence.
Mehul Bhatt of Örebro University in Sweden is also the founder of CoDesign Lab, a start-up that is commercializing his ideas. In a paper recently published in the journal Artificial Intelligence, Dr. Bhatt and his colleagues describe a different approach. They took existing AI programs of the kind used by some self-driving cars and plugged in a piece of software called a symbolic inference engine.
Rather than approaching the world probabilistically, as machine learning does, this software is programmed to apply basic physical concepts to the output of the programs that process the car’s sensor signals. The modified output is then fed to the software that actually drives the vehicle. The concepts involved include the idea that discrete objects continue to exist over time, that they stand in spatial relations to one another, such as “in front of” and “behind”, and that they can be fully or partially visible, or completely hidden by another object.
The method worked. In tests, when one vehicle temporarily blocked the view of another, the reasoning-enhanced software continued to track the hidden vehicle, predicted when and where it would reappear, and took evasive action if necessary. The gain was modest: on a standard benchmark, Dr. Bhatt’s system scored about 5% better than existing software. But it proves the principle, and it yields something else besides. Unlike a machine-learning algorithm, an inference engine can also tell you why it did what it did.
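To make that concrete, here is a toy sketch of object permanence in a tracker. It is only an illustration, not the inference engine from Dr. Bhatt’s paper, and its names and numbers are invented. The single rule it encodes is the one described above: if a tracked object stops being detected while something else is blocking the view, assume it still exists, keep extrapolating its motion, and predict where it should reappear.

```python
# Toy object-permanence rule (illustrative only; not Dr. Bhatt's system).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Track:
    x: float              # position along the road, in metres
    vx: float             # estimated speed, in metres per second
    occluded: bool = False

def update(track: Track, detection: Optional[float],
           occluder_present: bool, dt: float) -> Track:
    if detection is not None:
        # Object is visible: refresh speed and position from the measurement.
        track.vx = (detection - track.x) / dt
        track.x = detection
        track.occluded = False
    elif occluder_present:
        # No detection, but something is blocking the view: object permanence
        # says the track stays alive, coasting forward at its last known speed.
        track.x += track.vx * dt
        track.occluded = True
    return track

# A cyclist is seen once, then hidden behind a passing van for two seconds.
cyclist = Track(x=0.0, vx=0.0)
cyclist = update(cyclist, detection=4.0, occluder_present=False, dt=1.0)
for _ in range(2):
    cyclist = update(cyclist, detection=None, occluder_present=True, dt=1.0)

# The planner can still reason about the hidden cyclist and brake if needed.
print(f"predicted position when the cyclist reappears: {cyclist.x:.1f} m")
```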
You can, for example, ask a car fitted with an inference engine why it braked, and it will tell you that it judged that a bicycle hidden by a van was about to enter the junction ahead. A machine-learning program cannot do that. Dr. Bhatt believes this kind of information will help not only engineers improve the design, but also regulators and insurance companies, and may therefore speed up public acceptance of self-driving cars.
Dr. Bhatt’s work is part of a long-standing debate in the field of AI. As early as the 1950s, researchers used this kind of pre-programmed reasoning, with partial success. Since the 1990s, however, machine learning has improved dramatically thanks to better programming techniques, more powerful computers and the availability of far more data. Today almost all AI is built on it.
Nevertheless, Dr. Bhatt is not the only skeptic. Gary Marcus, who studies psychology and neuroscience at New York University and heads an AI and robotics company called Robust.AI, agrees with him. To support his view, Dr. Marcus cites a well-known result, albeit one from eight years ago. At the time, engineers at DeepMind, which was still an independent company, wrote a program that learned to play the video game Breakout, which involves hitting a moving virtual ball with a virtual paddle, without being given any hints about the rules.
DeepMind’s program became an excellent player. But when another group of researchers tinkered with Breakout’s code, for instance moving the paddle’s position by just a few pixels, its performance dropped sharply. The program could not generalize what it had learned in one particular situation even to a slightly different one.

For Dr. Marcus, this example highlights the brittleness of machine learning. Others, however, argue that it is symbolic reasoning that is brittle, and that machine learning still has plenty of scope to improve. Jeff Hawke is one of the technical vice-presidents of Wayve, a self-driving-car company in London. The company’s approach is to train all of a vehicle’s software components together, rather than separately. In demonstrations, Wayve’s cars have made sound decisions while navigating the narrow, busy streets of London, which give many human drivers headaches.
As Dr. Hawke puts it: “Most real-world tasks are more complex than hand-written rules can handle. It is well known that specialist systems built from rules struggle with complex problems, no matter how carefully the formal logic is thought through or how well structured it is.”
Such a system might, for example, include a rule that cars should stop at red lights. But traffic lights are designed differently in different countries, some lights are meant for pedestrians rather than cars, and in some situations a driver may need to run a red light, for instance to make way for a fire engine. “The beauty of machine learning,” Dr. Hawke says, “is that all these factors and concepts can be discovered and learned automatically from data. And with more data it continues to learn and becomes more intelligent.”
Nicholas Rhinehart, who studies robotics and artificial intelligence at the University of California, Berkeley, also backs machine learning. Dr. Bhatt’s method does show that the two approaches can be combined, he says, but he is not convinced it is necessary. In his own work and that of others, machine-learning systems alone have been able to predict what might happen over the next few seconds, such as whether another car is likely to give way, and to make contingency plans based on those predictions.
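A rough sketch of that predict-then-plan idea is shown below. It is my illustration, not Dr. Rhinehart’s code: it assumes a learned model has already produced a handful of possible futures with probabilities, and the planner then picks the first action that remains safe under every sufficiently likely outcome.

```python
# Illustrative contingency planning over learned predictions (not real research code).

# (probability, does the other car give way?), as a learned predictor might output.
predicted_futures = [
    (0.7, True),
    (0.3, False),
]

def is_safe(action: str, other_car_yields: bool) -> bool:
    # Toy safety check: merging is only safe if the other car actually yields.
    if action == "merge":
        return other_car_yields
    return True   # waiting is always safe in this toy example

def plan(actions, futures, risk_tolerance=0.05):
    for action in actions:
        # Probability mass of the futures in which this action would be unsafe.
        risk = sum(p for p, yields in futures if not is_safe(action, yields))
        if risk <= risk_tolerance:
            return action          # first acceptable action in priority order
    return "wait"                  # contingency: fall back to the safest choice

print(plan(["merge", "wait"], predicted_futures))   # -> "wait": merging is too risky
```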
Dr. Bhatt counters that you can train a car on millions of kilometres of accumulated driving data and still not be sure you have covered all the necessary situations. In many cases, it may be simpler and more effective to program certain rules in from the start.
For the advocates of both strategies, what is at stake is not just self-driving cars but the future of AI itself. “I don’t think we’re taking the right approach,” Dr. Marcus says. Machine learning has proved very useful for things such as speech recognition, he argues, but it is not, by itself, the answer to AI; the problem of intelligence has not really been solved. Either way, seven-month-old babies, it seems, still have plenty to teach machines. (Tencent Technology, reviewed by Jinlu)