Why AI will be a threat to humanity – 为什么人工智能必将威胁我们的文明



My background is in astrophysics; as a scientist who has worked at the Observatory for over ten years, I have some understanding of science, but I do not do research on artificial intelligence myself. As for artificial intelligence, I am entirely opposed to it. I believe that humanity is currently playing with fire on two or three fronts, and artificial intelligence is one of the most dangerous, but most people don't seem to realise that.

We may consider the threat of artificial intelligence in the short term, mid-term and long term.

In the short term, the main concerns are large-scale unemployment and large-scale military applications. In the mid-term, the risk is that, as the public comes to understand the issue better, it will begin to fear that artificial intelligence is getting out of control and rebelling. But many people are not even thinking of the ultimate long-term threat.

For every type of danger, there is a lot of misperception. The most common is this: artificial intelligence is still very weak now, so if we worry about rebellion and things getting out of control, aren't we looking too far ahead? This reasoning is absurd: if we were talking about a tiger, would you argue that 'the tiger is still small' and consider that reassuring?

Let's look at the short-term dangers first. In this area, artificial intelligence experts offer very naive reasoning. First they assure us that in the foreseeable future, 95% or more of jobs could be replaced by artificial intelligence. If this is truly so, it means that the majority of people in our society will be unemployed, with only a minority of workers remaining to feed everyone else. Such a social structure has never been seen before in human history.

The unemployment rate today is 10 to 15%; we call that a high unemployment rate, and consider it a dangerous social pattern. If we reach a point where 85 or 95% of people no longer need to work, what will they do? You could say that we won't call this unemployment: if we let the robots do the work, they can also look after us, and we can just stay home and eat, drink and be merry. But be careful here: apart from eating, drinking and being merry, people want one more thing: revolution. If 95% of workers can be replaced by robots, we can imagine that we might let the robots take care of revolution as well. But once we start imagining a robot revolution, as in many sci-fi films, like the ending of 'I, Robot', where the public square fills up with robots, this kind of society becomes a very dangerous one.

Experts working on artificial intelligence say that, throughout human history, whenever a major technological revolution occurred, some jobs were lost but new jobs were also created. This reasoning has a logical flaw: if robots can take over 95% of existing jobs, why wouldn't they take over the newly created jobs as well?

In fact, the substitution of artificial intelligence for human intelligence will lead to a situation completely unlike previous ones. We used to be able to say that after the rise of the car, it didn't matter that the carriage driver lost his job; he could become a car driver. But now the car itself has become intelligent and no longer needs a person to drive it, so mass unemployment is actually a very serious threat. I believe we shouldn't turn to robots merely because they cost less. If you switch to robots as part of an industrial upgrade, that may be worth considering, although it is not good in the long run. But if you put large numbers of people out of work simply to cut costs, the result is not a stable society.

Putting robots to military use will certainly be very effective. We all know Asimov's three laws of robotics, and any military use is a direct violation of them. But then again, the three laws are not really laws; they are just the invention of a novelist. Moreover, the military use of robots raises further ethical issues. Today, if we decide to drop bombs somewhere, that decision is made by people. But now that there are intelligent unmanned aerial vehicles and the like, which must decide on the spot whether or not to attack a target, the decision to kill a person becomes the robot's, and from an ethical perspective this is a radical change from previous forms of war.

So if we start using artificial intelligence in the military today, I think we can only adopt the attitude of the Americans when they launched the Manhattan Project: our opponents are working on it too, we can't afford to lose, so we have no choice but to press ahead. That's why Tesla's CEO, Elon Musk, proposed that all countries negotiate an agreement banning the military use of artificial intelligence.

The mid-term issue is the rebellion of artificial intelligence. Some industry experts still say that you can always unplug the robots. But in fact, once artificial intelligence is combined with the Internet, it no longer needs any physical form. Before the Internet, an individual artificial intelligence had limitations: even if it were made entirely of chips, it faced limits in storage and computation. But once it is combined with the Internet, it no longer needs any physical structure of its own: simply by placing orders and purchasing services over the network, it could carry out the destruction of society. So by that time there would be no way to pull the plug, because there would be no single plug to pull.

On this question, experts often say: we write the programs on the robots' chips, so we won't let them go bad. The simplest objection is this: you can't even guarantee that your own children won't go bad, so how could you guarantee it for artificial intelligence? So far, nobody has been able to give a clear guarantee that a rebellion of artificial intelligence could be prevented.

But the most dangerous threat is the long-term one; this is the ultimate threat. Suppose artificial intelligence does not rebel at all, and serves us omnipotently, loyally and whole-heartedly: once everything is done for us by artificial intelligence, what would we live for? We would become walking corpses, and the human species would rapidly degenerate, in both physical and intellectual capacity. This would be utterly meaningless for humanity; in fact, it would amount to digging our own grave.

So we should recognise artificial intelligence as a weapon of mass destruction. The best outcome would be for the world's great powers to sit down in future and negotiate limits on it. At most we might retain some low-level artificial intelligence, but we must find ways to prevent its evolution.

In fact, we already use a great deal of artificial intelligence in daily life: when you buy a train or plane ticket online, for instance, the system behind it is artificial intelligence. This kind of artificial intelligence is not a great problem as long as it does not evolve. But because artificial intelligence may evolve extremely fast, and this evolution is very hard for us to control, all we can do is guard strictly against its evolution; of course, how far we can succeed is hard to say.

What we can be certain of now is that scientists are bent on developing artificial intelligence; what we cannot be certain of is whether artificial intelligence will end up destroying us.



Source: 南方周末
Image source: http://www.gstv.com.cn/folder10/folder65/2016-12-27/336932.html


About Michael Broughton