Loading

Lecture: Security of Artificial Intelligence

Loading Events

« All Events

  • This event has passed.

Lecture: Security of Artificial Intelligence

June 27 @ 7:00 pm - 8:00 pm

Welcome to our upcoming event on the Security of Artificial Intelligence! Join us for an insightful discussion on the importance of protecting AI systems from potential threats and vulnerabilities. Our expert speakers will delve into the world of cybersecurity and AI, highlighting key strategies to safeguard these technologies. This in-person lecture promises to be engaging and educational, offering valuable insights for both professionals and enthusiasts in the field. Don’t miss this opportunity to learn more about the security challenges facing Artificial Intelligence today!

Please register here: www.eventbrite.de/e/lecture-security-of-artificial-intelligence-registration


SECURITY OF LANGUAGE MODELS: Compromising Large Language Models (LLMs) at large scape

Large language models (LLMs) like ChatGTP and others are currently used widely and intensively – but they are prone to attacks. While data protection is already considered (at least partially), the real security issues are at the moment still widely ignored.

Indirect Prompt Injection enables large-scale remote takeover of LLM applications. An attacker smuggles hidden instructions into the dialog context of a language model via external sources (websites, documents, etc.) and gets the dialog under his control. The user doesn’t notice anything about it. This vulnerability was published and demonstrated by sequire technology in February 2023.

There were detailed discussions with affected providers, such as Microsoft, OpenAI and Google. In the ranking of the most dangerous vulnerabilities in language models (OWASP Top 10), prompt injection was listed as the top 1 threat; The Federal Office for Information Security (BSI) published a warning based on sequire’s work.

In this lecture, Dr. Christoph Endres explains the threats to large language models, provides details on Indirect Prompt Injection, gives examples of current and future attacks and explains why current defensive measures do not work or will at least not be sufficient.

Lecturer: Dr. Christoph Endres, born in 1971, is a computer scientist and originally an AI researcher. After completing his doctorate in the field of intelligent driver assistance systems with Prof. Wolfgang Wahlster (DFKI), he switched to the field of cybersecurity and has been managing director of sequire technology GmbH, which he co-founded, since 2021. He is part of the team that found and analyzed the Indirect Prompt Injection vulnerability in language models.

Affiliation: sequire technology GmbH has been part of the N4 group of companies since 2021 and deals with the topic of IT security. The company portfolio includes individual software development, penetration tests, consulting, training and a product for secure communication through a firewall (sequinox). In 2023, sequire technology took the lead in discovering and analyzing a vulnerability in LLMs and now also advises on the topic of LLM Safety and offers an enterprise solution for the secure operation of LLMs and other AI systems.

Organizer

International Collaboration Hub
Email
hello@collaborations.earth
View Organizer Website