Artificial IntelligenceComputers & InternetRobotics

7 Scary Facts About Artificial Intelligence

We are now in the fourth industrial revolution, characterized by advances in robotics and autonomous car technology, the proliferation of smart home appliances, and more. At the forefront of all this is artificial intelligence (AI), which is the development of automated computer systems that could match or even surpass humans in intelligence.

Artificial intelligence is seen as the next big thing, so big that future technologies will depend on it. But then, do we really know where we stand? Here are ten scary facts about artificial intelligence.

7) Your Self-Driving Car Might Be Programmed To Kill You

Suppose you’re driving on a road. Then a group of children suddenly appear in front of your car. You brake, but they don’t walk. Now you have two options: The first is to run over the children and save your life. The second is to swerve into a nearby wall or bollard, saving the children but killing you. What would you choose?

Self-Driving Car Might Be Programmed To Kill You

Most people agree that they will swerve and kill themselves.

Now imagine your car is self-driving, and you are the passenger. Do you still want him to swerve and kill you? Most people who agreed that they would swerve into the bollard if they were the driver also agreed that they would not want their self-driving car to swerve into the bollard and kill them. In fact, they would not even buy such a car if they knew it would deliberately endanger them in an accident.

This brings us to another question: What would the cars do?

The cars would do what they were programmed to do. As it stands, the independent car manufacturers are not talking. Most, like Apple, Ford, and Mercedes-Benz, are tactfully dodging the question in each case. An executive of Daimler AG (the parent company of Mercedes-Benz) once said that their autonomous cars would “protect [the] passenger at all costs”. However, Mercedes-Benz refuted this, stating that their vehicles are built to ensure that such a dilemma never occurs. That is ambiguous because we all know that such situations will occur.

Google came clean on this and said that its autonomous cars would avoid hitting unprotected road users and moving things around. That means the car would hit the bollard and kill the driver. Google further stated that in the event of an imminent accident, its autonomous cars would hit the smaller of two vehicles. In fact, Google’s autonomous cars could seek to be closer to small objects at all times.

6) Robots could demand rights just like humans do

Robots Might Demand Rights Just Like Humans

With the latest trends in AI, it is very possible for robots to attain a stage of self-realization. When this happens, they can demand their rights as if they were humans. In other words, they will need housing and health care benefits and will demand to be allowed to vote, serve in the military and obtain citizenship. In return, governments would make them pay taxes.

That is what is happening according to a joint study by the Horizon Digitization Centre of the Office of Science and Innovation in the United Kingdom. This research was reported by the BBC in 2006 when AI was much less advanced, and it was conducted to speculate on the technological advances they might see in 50 years.

5) Automatic Killer Robots Are In Use

When we say “killer robots”, we mean robots that can kill without human influence. Drones do not qualify because they’re controlled by people. One of the automatic killing robots we are talking about is the SGR-A1, a sentinel gun developed jointly by Samsung Techwin (now called Hanwha Techwin) and Korea University. The SGR-A1 looks like a huge surveillance camera, except that it has a high-powered machine gun that can automatically lock on and kill any target of interest.

SGR-A1 from Ayro AM on Vimeo.

The SGR-A1 is already in use in Israel and South Korea, which has installed several units along its Demilitarized Zone (DMZ) with North Korea. South Korea denies activating the automatic mode that allows the machine to decide who to kill and who not to kill.

4) War-robots can change sides

In 2011, Iran has captured a top-secret RQ-170 Sentinel stealth drone intact. This last word is necessary because it means that the drone was not shot down. Iran said it forced the drone to land after spoofing its GPS signal and making it believe it was in friendly territory. Some American experts say that this is not true, but then the drone was not shot down. So what did happen?

For all we know, Iran could be telling the truth. Drones, GPS, and robots are all based on computers, and as we all know, computers are hacked. War robots would be no different if they went to the battlefield. In fact, there’s every chance that the enemy army would try to hack them and use them against the same army that’s lining them up.

Autonomous killing robots are not yet widely used, so we’ve never seen a hacker. However, imagine an army of robots that suddenly make allegiance on the battlefield and turn against their own masters. Or imagine North Korea hacking those SGR-A1 machine guns at the DMZ and using them against South Korean soldiers.

3) Russia uses bots to spread propaganda on Twitter

 

Russia is still in the news to use these bots to sow discord among U.S. voters and influence them toward voting for Donald Trump in the 2016 election. Another little-reported incident is Russia using these bots to influence British voters to vote to leave the European Union in the 2016 Brexit referendum.

A few days before the referendum, more than 150,000 Russian bots, which had previously focused on tweets related to the war in Ukraine and Russia’s annexation of the Crimea, suddenly began to formulate pro-Brexit tweets encouraging the UK to leave the EU. These bots sent out around 45,000 pro-Brexit tweets within two days of the referendum, but tweets dropped to almost zero immediately after the referendum.

2) Machines Will Take Our Jobs

No doubt the machines will take over our jobs one day. But what we do not realize is when they will take over, and to what extent? Well, as we’re about to find out, it’s to a large extent.

According to the consulting and auditing firm PricewaterhouseCoopers (PwC), robots will account for more than 21 percent of jobs in Japan, 30 percent of jobs in the United Kingdom, 35 percent of jobs in Germany and 38 percent of jobs in the United States by 2030. 6] By the next century, they will have taken over more than half of the jobs available to humans.

The sector most affected will be transport and storage, where 56 percent of the jobs will be in the transport and storage sector. percent of the workforce will be machined. Next are manufacturing and retail, where machinery will account for 46% and 44% of the workforce. of all available jobs.

The “when” assumes that machines will be behind the wheel of trucks by 2027 and in retail stores by 2031. By 2049, they will be shortening books, and by 2053, they will be performing surgery. Only a few occupations will be exempt from the machine incursion. One is the role of a church minister, who would remain free not because a machine cannot run a church, but because most people will not approve of being preached to by a robot.

1) Robots have learned to be misleading

In experimentation, a robot was given some resources to keep. He often checked the resources but started visiting fake places every time he detected the presence of another robot in the area. This experiment was sponsored by the U.S. Office for Naval Research, which means it could have military applications. Robots guarding military supplies could change their patrol routes if they noticed they were being watched by enemy forces.

In another experiment, this time at the Ecole Polytechnique Fédérale de Lausanne in Switzerland, scientists created 1,000 robots and divided them into ten groups. The robots were required to look for a “good resource” in a designated area, while they avoided hanging around a “bad resource”. Each robot had a blue light, which it flashed to attract other members of its group whenever it found the right resource. The top 200 robots were removed from this first experiment, and their algorithms were “crossed” to create a new generation of robots.

The robots improved to find the right resource. However, this led to the congestion that other robots crowded around the price. In fact, things became so bad that the robot that found the resource was sometimes put off by its discovery. 500 generations later, robots have learned to turn off their lights every time they found the right resource. The idea was to avoid congestion and the likelihood of them being fired if other members of the group joined them. At the same time, other robots evolved to find the lying robots by looking for areas where the robots converged with their lights off, which is the exact opposite of what they were programmed to do.

SOURCE

Tags
Show More

Related Articles

Leave a Reply

Your email address will not be published. Required fields are marked *

Back to top button
Close