German media: EU proposes legislation to limit abuse of artificial intelligence

According to a report published on April 13 on the website of the German newspaper Frankfurter Allgemeine Zeitung, the advance of artificial intelligence is unstoppable. The COVID-19 pandemic in particular has drawn attention to the value of such self-learning systems in organizing healthcare. Experts believe artificial intelligence can also play an important role in tackling climate change.

On the other hand, the technology has also come under scrutiny. Should artificial intelligence decide who gets a job in the future? And what about automatic identification technology? Using it to unlock a phone may be harmless, but it is another matter if companies use it to monitor people at every turn.

The European Commission therefore now wants to set clear rules for the use of artificial intelligence in sensitive areas, and certain applications are to be banned outright. For example, it would be prohibited to use artificial intelligence to influence the behavior, opinions or decisions of individuals or groups to their disadvantage or harm, or to use it to identify vulnerabilities that could be exploited for that purpose.

In addition, the use of artificial intelligence for indiscriminate mass surveillance of individuals would also be prohibited.

The European Commission will formally present its proposal next week, the report said. The Frankfurter Allgemeine Zeitung has already obtained the 81-page draft regulation, which would apply directly in all EU member states.

The report pointed out that violations of the regulation would carry heavy penalties. The draft provides for corporate fines of up to 4% of global annual turnover, capped at 20 million euros. However, these figures have not been finalized and could still change before next week.

There are exceptions to the bans, but only where the maintenance of public order requires them, and clear rules must govern such cases. Didier Reynders, the EU commissioner responsible for the legislation, stressed a few days ago that in situations such as terrorist attacks there should be exceptions, at least for a limited period.

High-risk applications in sensitive areas must meet certain minimum standards before they can be used in the European single market. These high-risk applications primarily cover identification technologies used in public places, but also include the use of artificial intelligence to assess creditworthiness, hire or promote employees, grant access to social benefits, or prosecute crime.

In all of these cases, the report emphasizes, humans should retain ultimate control over the decisions. It should also be ensured that the data fed to the AI is neutral, so that certain groups are not discriminated against.

The European Commission noted that the list of high-risk applications will be revised regularly to ensure that it always covers applications that can have serious and, in extreme cases, irreversible consequences for people.

For other, comparatively harmless AI applications, it should at least be made clear to people that they are dealing with an AI rather than a real person, for example when a chatbot answers a hotline, since such deception is otherwise difficult to detect in context.

The European Commission said the proposals are intended to build confidence in the safe use of artificial intelligence and to give businesses planning certainty. Only in this way, it argued, can the EU exploit the technology's enormous potential.

The European Parliament has criticized the draft. Green MEP Alexandra Geese said: “This draft law from the European Commission is not sharp enough on some key points.” Geese pointed out that if people are evaluated and managed by machines, discrimination will be everywhere.

According to the report, the regulation still needs the approval of the European Parliament and the EU Council of Ministers before it can enter into force.
