Ray Kurzweil: Accelerate the development of policies and regulations to guide artificial intelligence


This article was produced by NetEase Smart Studio (public account smartman163). Focus on AI and read the next big era!

[NetEase Smart News, Nov. 22] Inventor and author Ray Kurzweil has recently been working with Google's Gmail team on automatic email replies. He spoke at the Council on Foreign Relations with Nicholas Thompson, editor of Wired magazine. The following is an edited transcript of their conversation.

[Nicholas Thompson]: Our conversation begins with an explanation of the law of accelerating returns, one of the fundamental ideas underpinning your writing and work.

[Ray Kurzweil]: Halfway through the Human Genome Project, seven years in, 1% of the genome had been collected. Mainstream critics said, "We told you this wouldn't work. It took you seven years to reach 1%; it will take 700 years." My response was: "Wow, we finished 1%? Then we're almost done." Because 1% is only seven doublings away from 100%, and the amount sequenced had been doubling every year. Indeed, that continued, and seven years later the project was complete. The trend has continued since the genome project ended: sequencing a genome used to cost a billion dollars, and the cost has now dropped to a thousand dollars.
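The arithmetic behind that reply is simple exponential doubling: a quantity at 1% that doubles every year exceeds 100% after seven doublings, since 2^7 = 128. A minimal Python sketch (the function name is just illustrative):

```python
def years_to_complete(fraction_done: float) -> int:
    """Count annual doublings until an exponentially growing
    quantity (here, the fraction of the genome sequenced)
    reaches or exceeds 100%."""
    years = 0
    while fraction_done < 1.0:
        fraction_done *= 2  # progress doubles each year
        years += 1
    return years

# 1% done, doubling yearly: 2, 4, 8, 16, 32, 64, 128 percent
print(years_to_complete(0.01))  # 7
```

The linear extrapolation the critics used (1% in 7 years, therefore 700 years total) is exactly the mistake the law of accelerating returns warns against.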

I want to mention one implication of the law of accelerating returns, because it has many ripple effects. It is the reason behind the remarkable digital revolution we are seeing: information technology has a deflation rate of about 50%. I can get the same computation, communication, genetic sequencing, and brain data as a year ago for half the price. That is why you can buy an iPhone or an Android phone today for half of what it cost two years ago, with twice the performance. The improvement in price-performance shows up partly in the price and partly in the performance of the phone. So when a girl in Africa buys a smartphone for $75, it counts as $75 of economic activity, even though that device would have represented perhaps a trillion dollars of computing around 1960 and about a billion dollars around 1980. It comes with millions of dollars' worth of free information apps, one of which is an encyclopedia far better than the one I saved up for as a teenager. All of that counts as zero economic activity because it is free. So we really don't count the value of these products.

All of this will change. We will print clothing with 3-D printers. Right now 3-D printing is at a kind of hype stage, but by the early 2020s we will be able to print clothing. There will be many cool open-source designs you can download for free. We will still have a fashion industry, just as we still have music, film, and book industries: free, open-source products that are first-rate will coexist with proprietary products. We will be able to produce very inexpensive food with vertical agriculture: hydroponic fruits and vegetables, and muscle tissue cloned in vitro. The first hamburger produced this way has already been consumed. It was expensive, costing hundreds of thousands of dollars, but it was good. All these different resources will become information technologies. Recently, Lego-style modules that came out of 3-D printers in Asia were snapped together into a three-story office building in a few days. That will be the nature of construction in the 2020s. The 3-D printer will print out what we need.

[Nicholas Thompson]: Let's talk about intelligence, starting with the cell phone in my pocket. It is smarter than I am and more capable than I am at many things. When will it be better than me at conversation? When will it interview you instead of me?

[Ray Kurzweil]: We do have technologies that can carry on a conversation. My team created Smart Reply on Gmail, so we are writing millions of emails. It has to understand the meaning of the email it is responding to, even though the suggestions it makes are brief. But your question is essentially the Turing test. I believe the Turing test is a valid test of the full range of human intelligence: you need the full flexibility of human intelligence to pass a valid Turing test, and no simple natural-language-processing tricks can do it. If human judges cannot tell the difference, then we consider the AI to have human-level intelligence, which is exactly what you are asking about. That is one of my key predictions: I have been saying 2029. In my 1989 book "The Age of Intelligent Machines," I put the range between the early 2020s and the late 2030s, and in the 1999 book "The Age of Spiritual Machines" I said 2029. Stanford's artificial intelligence department found that daunting, so they convened a conference, at which the consensus among AI experts was that it would take hundreds of years; 25% thought it would never happen. The consensus, or median, view of AI experts has been moving closer and closer to mine, and not because I have been changing my view.

In 2006, Dartmouth convened a conference called "AI@50." The consensus then was 50 years; at the time I was saying 23 years. We just organized an AI ethics conference at Asilomar; the consensus there was 20 to 30 years, and I was saying 13. I am still more optimistic than the consensus, but not by as much, and more and more people think I am too conservative.

One key point about the law of accelerating returns that I have not yet mentioned is that not only hardware but also software grows exponentially. I feel more and more confident, and I think the AI community is growing more confident, that we are not far from that milestone.

We are going to merge with artificial intelligence and make ourselves smarter. It is already happening: these devices are brain extenders, and people now think of them that way, which is new. Just a few years ago, people did not regard their smartphones as brain extenders. By one definition they will have to go inside our bodies and brains to count, but I think that is an arbitrary distinction. Even though they are outside our bodies and brains, they are already extensions of our brains, and they will make us smarter and more interesting.

[Nicholas Thompson]: Please lay out a framework for policy makers: how they should think about this accelerating technology, what they should do, and what they should not do.

[Ray Kurzweil]: There is a lot of concern about artificial intelligence and how to keep the technology safe, and the discussion is polarized, like many discussions today. I have actually been talking about promise and peril for a long time. Technology has always been a double-edged sword: fire kept us warm and cooked our food, but it also burned down our houses. These technologies are far more powerful than fire. The way I see it, there are three stages. First, delight at the opportunity to overcome age-old problems: poverty, disease, and so on. Then alarm that these technologies can be destructive, and may even pose existential risks. Finally, I think we come to recognize that we have a moral imperative to continue the progress, because despite the progress we have made, there is still a great deal of human suffering to overcome. People think the world is getting worse, but it is actually getting better. Only continued progress, particularly in artificial intelligence, will let us keep overcoming poverty, disease, and environmental degradation, while at the same time we attend to the dangers.

There is a good framework for this. Forty years ago, visionary people saw both the promise and the peril of biotechnology, which was fundamentally changing biology and setting it on a path of rapid progress. So they held a conference at the Asilomar conference center and drew up ethical guidelines and strategies for keeping these technologies safe. It is now forty years later, and we are seeing clinical impact from biotechnology. Today it is a trickle; over the next decade it will be a flood. In all that time, the number of people who have been harmed by abuse of biotechnology has been zero. That is a good model for how to proceed.

We just held the first Asilomar conference on artificial intelligence ethics. Many of those earlier ethical guidelines, particularly in biotechnology, have been written into law, and I think that is the goal here. The extreme positions are "let's ban the technology" or "let's slow it down." That is really not the right approach. We should guide it constructively. There are strategies for doing that, though that is another complicated discussion.

[Nicholas Thompson]: You could imagine Congress saying that everyone working in a certain technology field must disclose their data, or at least be willing to share their data sets, given that data is such a powerful competitive advantage that markets may not overcome it on their own. You could imagine the government saying, "Actually, we will have a large government-funded project, like OpenAI, but run by the government." You could imagine a huge national infrastructure effort to develop this technology, so that at least someone accountable to the public interest controls part of it. Do you have any recommendations?

[Ray Kurzweil]: I think open-sourcing data and algorithms is generally a good idea. Google has released its AI algorithms as open source in TensorFlow. I think the combination of open source and the law of accelerating returns will bring us closer to the ideals we want. There are many issues, such as privacy, that it is critical to maintain, and I think people in this field are generally quite concerned about them. It is not clear what the right answers are. I think we want to keep making progress, but when you have such great power, even with good intentions there will be abuses.

[Nicholas Thompson]: What worries you? You seem very optimistic about the future, but what are you worried about?

[Ray Kurzweil]: I have been accused of being an optimist. As an entrepreneur you have to be an optimist, because if you appreciated all the problems you were going to encounter, you would never start any project. But, as I said, I have long been concerned about, and have written about, the downsides. These technologies are very powerful, so I do worry, even though I am an optimist. I am optimistic that we will ultimately weather the storm; I am not optimistic that we will avoid difficult episodes. Fifty million people died in the Second World War, and the power of technology at the time amplified that toll. Still, I think it is important for people to recognize that we are making progress. A recent poll of 24,000 people in 26 countries asked: "Has poverty worldwide gotten better or worse?" Ninety percent said it has gotten worse, which is the wrong answer. Only 1% gave the correct answer, which is that the number of people in poverty worldwide has fallen by at least 50%.

[Nicholas Thompson]: What should the people in this audience do about their careers? They are about to enter a world where career choices will be shaped by completely different technologies. What advice would you give them?

[Ray Kurzweil]: It is old advice, but it is to follow your passion, because there really is no field that will not be affected or that is not part of this story. We are going to merge with the cloud, in effect simulating new neocortex, so we will keep getting smarter. I do not think artificial intelligence is going to displace us; it is going to enhance us. It is doing so already. Who could do their job without the brain extenders we have today? And that will continue. People say, "Well, only the wealthy will have these tools," and I say, "Yes, like smartphones, of which three billion people already have one." I used to say two billion, but I just read that it is probably three billion, and in a few years it will be six billion, because of the explosive growth in price-performance. So find where your passion lies. Some people's passions are not easily categorized, so find a way to use the tools that are available in a way that makes you feel you can change the world. My own reason for developing the law of accelerating returns was to time my technology projects, so that I could start working on them a few years before they became feasible and anticipate where technology was heading. Just a few years ago we had small devices that looked like smartphones, but they did not work very well, and the mobile-app revolution barely existed five years ago. Five years from now the world will be completely different, so try to time your projects so that they meet the train at the station.

Audience question: There has been so much emphasis on the good side of human nature, science, and exploration, and I am curious about the next steps with our robot partners. What about the dark side? What about wars, war machines, and violence?

[Ray Kurzweil]: We are learning how these platforms can be used to amplify human tendencies, and much of this is information we are only now absorbing. Artificial intelligence learns from examples; there is a maxim in the field that life begins at a billion examples. The best place to get examples is from people, so AI very often learns from humans. Not always: AlphaGo Zero just learned by playing against itself, but that approach is not always feasible, especially when you are trying to handle messier real-world problems. There is a major effort in the field, at all the big companies and in open-source research, to eliminate bias in artificial intelligence, to overcome gender and racial bias, because if AI learns from biased humans it will pick up their biases. As humans we absorb biases from everything we see, much of it unconsciously; as educated people we learn to recognize bias and try to overcome it, and there can be conflict in our own minds about it. There is an entire research area devoted to removing bias from AI and overcoming the bias it picks up from people. So this is a problem of machine intelligence we can overcome, and in that respect machine intelligence can end up less biased than the humans it learned from. Overall, although social media mixes together all kinds of promise and peril, I think it is on balance very beneficial. I walk through an airport and every child over the age of two has her own device. Social media has become a worldwide community, and I think the current generation feels, more than any before it, that they are citizens of the world, because they are in touch with all the world's cultures.

[Nicholas Thompson]: Over the last year, the United States and the rest of the world have not grown much closer, and many people would say our democracy has not gotten better. Is this a twist in the road of continued human progress, or is that reading mistaken?

[Ray Kurzweil]: The political polarization in the United States and elsewhere is unfortunate, but I do not think it bears on the trends we are talking about today. I mean, the world has been through enormous upheavals: the Second World War was a considerable setback, yet it did not actually deflect these trends. There may be things we do not like about this or that official or government, but there is one point worth making: we are not living in a totalitarian era in which we cannot express our views. If we were moving in that direction I would be much more worried, but I do not think that will happen. So I do not want to belittle the importance of government and of who holds power, but it operates at a different level; the trends we have been discussing are not affected by those things. What I worry about are the risks that exist because technology is a double-edged sword.

Audience question: My question concerns inequality. For most of human history, economic inequality has been quite high, through many stages. I wonder whether you think the 20th century was an anomaly, and how the proliferation of technology will affect inequality.

[Ray Kurzweil]: Economic equality is moving in a good direction. According to the World Bank, poverty in Asia has fallen by more than 90% over the past 20 years, as countries moved from primitive agrarian economies to thriving information economies. Economic growth rates in Africa and South America are much higher than in the developed world. You can find serious inequality anywhere you look, but things are moving in the right direction: over the past 20 years, the number of people in poverty worldwide has fallen by 50%, and there are many other measures like it. At any given moment there is severe inequality and there are people in real hardship, but the trend is in a good direction.

Audience question: I gather from your comments that you are predicting we are 12 years away from the next stage of artificial intelligence; you have mentioned it several times. Although you are optimistic, you are concerned about the risks. Could you elaborate on what you mean, and on what technologists should be doing to reduce those risks?

[Ray Kurzweil]: What I mean is that there are risks that threaten the survival of our civilization. The first existential risk humanity faced of this kind was nuclear proliferation: we became capable of destroying all human life. With these new technologies, it is not hard to imagine scenarios in which they are extremely destructive and could wipe out all of humanity. Take biotechnology. We now have the ability to reprogram biology away from disease; immunotherapy, for example, is a very exciting breakthrough in cancer treatment that I think is quite revolutionary, though it is just getting started. It reprograms the immune system to go after cancer, which it normally does not do. But a bioterrorist could reprogram a virus to be deadlier, more contagious, and stealthier, creating a superweapon. That is the specter that drove the first Asilomar conference 40 years ago. Those recurring conferences have made the ethical guidelines, safety protocols, and strategies ever more sophisticated, and so far they have worked. But we keep making the technology more sophisticated, so we have to revise the guidelines again and again. We just held our first conference on artificial intelligence ethics, where we drew up a set of ethical principles and all signed them. Many of them are still vague, but I think this is a very important issue. We are finding that we must build ethical values into the software. The classic example is the self-driving car. The whole motivation for self-driving cars is to eliminate 99% of the two million deaths caused by human drivers, but a car will find itself in situations where it has to make a moral decision: should it steer toward a baby stroller, toward an elderly couple, or into a wall, perhaps killing its passenger? Should it have an ethical guideline never to kill its own passenger? In that moment, the car cannot send an email to its software designers and ask, "My God, what should I do?"
The ethics have to be built into the software ahead of time. These are practical problems, and there is an entire field of AI devoted to them.
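As a toy illustration of what "building ethics into the software" can mean, here is a minimal sketch in Python. The maneuver names, harm scores, and minimize-harm rule are hypothetical stand-ins, not any real autonomous-vehicle policy; the point is only that the decision rule must be fixed at design time, before the emergency occurs.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_harm: float  # estimated severity of harm, 0.0 to 1.0

def least_harm(maneuvers: list[Maneuver]) -> Maneuver:
    """Pick the maneuver with the lowest estimated harm.
    The policy is decided at design time: at the moment of
    crisis the car cannot consult a human."""
    return min(maneuvers, key=lambda m: m.expected_harm)

# Hypothetical options in an unavoidable-collision scenario
options = [
    Maneuver("swerve_into_wall", 0.6),  # endangers the passenger
    Maneuver("brake_straight", 0.3),    # may not stop in time
]
print(least_harm(options).name)  # brake_straight
```

Whether "least expected harm" is even the right rule, and how harm should be estimated, are exactly the open questions the ethical guidelines discussed above are meant to address.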

But how do we deal with the more near-term risk of AI weapons? Defense departments around the world are applying artificial intelligence. Recently there was an open letter calling for a ban on autonomous weapons, which sounds like a good idea; the example given was, "We banned chemical weapons, so why not autonomous AI weapons?" It is a bit more complicated, because nobody needs anthrax and nobody needs smallpox, so banning chemical and biological weapons is feasible. But autonomous weapons are a dual-use technology: the Amazon drone that delivers your frozen waffles, or medicine to a hospital in Africa, could also deliver a weapon. It is the same technology, and it is already out in the world. So it is a more complicated issue, but the goal is the same: realize the promise while controlling the peril. There is no simple algorithm for this, no subroutine we can drop into our AI and say, "OK, include this subroutine and it will keep your AI benign." Intelligence is inherently uncontrollable. My strategy, which is not foolproof, is to practice in our own human society the ethics, morality, and values we want to see in the world, because the future society will not be an invasion of intelligent machines from Mars. It is emerging from our civilization today, and it will be an enhancement of ourselves. So if we practice the values we cherish today, that is the best strategy for a future world that embodies those values. (Source: Wired; translated by NetEase; reviewed by Xue Yaqin)

