A regulatory vision of the AI-cybersecurity link

In what has been called the Fourth Industrial Revolution, artificial intelligence (AI) is radically transforming global economies at a pace that regulators are struggling to match.

In the European Union (EU), the draft regulation establishing harmonized rules on artificial intelligence (“the AI Act”) is the most comprehensive piece of AI legislation to date. While the AI Act will have far-reaching implications across many sectors, the greatest impact will fall on AI applications that the EU deems high risk.

See also: European Parliament discusses AI law with deal in sight

As stated in the proposal, the law “imposes regulatory burdens only when an AI system is likely to pose high risks to fundamental rights and safety. For other, non-high-risk AI systems, only very limited transparency obligations are imposed.”

Lawmakers’ main concern with high-risk systems is the need for robust security measures, given the growing economic and military-intelligence threat that cyberattacks pose to the EU.

As is often the case with AI, the technology is both the problem and the solution for financial services. As fraudsters and hackers deploy an ever more sophisticated range of AI-powered Trojans, ransomware and DDoS attacks, equally sophisticated cybersecurity and fraud prevention tools are using the same technology to protect consumers.

Read more: AI could help financial institutions fight crime and evade regulators

Much of the deployment of AI in the financial services industry is happening behind the scenes, where it is used by the anti-money laundering and anti-fraud departments of banks and other financial institutions to monitor and block suspicious transactions. But further upstream in European payment systems, AI is also changing the way consumers verify their identity.

Related: 5 European startups making waves in the AML tech space

Using biometrics and behavioral analytics, banks and payment service providers are increasingly able to authenticate users without the need for passwords, text messages or card verification methods.

In a recent PYMNTS report, Micheal Sheehy, Chief Compliance Officer at Payoneer, discussed the importance of biometrics in the fight against money laundering and identity-based fraud.

He said that due to data breaches, “we can expect most individuals’ traditional personal identifying information to be obtainable somewhere on the dark web.” Consequently, “the biometric information […] becomes the best way to ensure that the person you are dealing with is that person.”

Read the report: The future of cross-border trade: how AI and biometrics are transforming global risk management

Importantly for financial institutions in the EU, the application of AI to biometric identification is classified as high risk and will therefore be subject to the AI Act’s enhanced reporting and transparency obligations.

Although most of the Act’s biometrics concerns relate to facial recognition in public places, a short passage (Article 80) dealing with the use of AI by financial institutions essentially leaves it to the European Central Bank to determine how best to interpret the relevant laws and regulations where they overlap.

Learn more: How Face ID can power end-to-end verification

Join the dots

The growing importance of cybersecurity to EU defense and stability means the AI Act is emerging as part of a regulatory architecture that includes the Data Act, the second Network and Information Security Directive (NIS2), the Digital Services Package and the Cyber Resilience Act.

See also: EU Cyber Resilience Law Could Set New Global Standards

Together, these pieces of legislation, which are at various stages of negotiation and ratification, aim to streamline and clarify the EU’s approach to digital technologies, including AI. But in solving some of the challenges currently facing the bloc, the emerging regulatory framework also poses new ones.

A recent report on the “AI-cybersecurity nexus” by the Brussels-based think tank Carnegie Europe argues that to strengthen its security on all fronts, the EU needs to further integrate the various laws being rolled out and the various agencies in charge of enforcing and implementing cybersecurity standards.

As the report states, “The EU pursues the dual objectives of establishing a robust cybersecurity architecture across the bloc and harnessing the benefits of AI for societal and economic (cyber)security and defense purposes. Yet if the objective is to ensure the cyber-secure deployment of AI systems and services […] it is essential to connect the dots between the various initiatives, processes and stakeholders.”
