Cyber Security Testing Using Large Language Models (LLMs)

As industrial control systems (ICS) and edge-to-cloud technologies evolve, so do the cybersecurity challenges they face. This BSc/MSc thesis work explores how artificial intelligence, particularly large language models (LLMs), can be applied to enhance automated security testing for ICS environments. The goal is to develop innovative methods that improve resilience to cyber-attacks and support continuous security testing efforts. This approach not only strengthens the protection of industrial operations but also equips security teams with advanced tools for identifying and managing vulnerabilities.

Student Target Groups:

  • Students of ICE/Telematics;
  • Students of Computer Science;
  • Students of Software Engineering.

Thesis Type:

  • Bachelor Thesis / Master Project

Goal and Tasks:

The goal of this thesis is to apply advanced LLM methods to a defined ICS setup based on ROS/ROS2 in order to detect potential vulnerabilities in such systems. This will involve leveraging the MITRE ATT&CK® framework, a widely recognized knowledge base of adversary tactics and techniques derived from real-world observations. The project will focus on developing specific threat models and methodologies, which will then be implemented using LLMs for automated security testing.

  • Conduct a comprehensive literature review on LLM applications in cybersecurity and ICS environments;
  • Identify and select appropriate LLM techniques for automated testing in ROS/ROS2-based ICS environments;
  • Implement and evaluate the selected LLM methods;
  • Summarize the findings in a written thesis and deliver an oral presentation of the results.
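To make the intended pipeline a bit more concrete, the following is a minimal illustrative sketch, not a required deliverable or a prescribed design. It assumes an OpenAI-compatible chat API (the openai Python package, an API key in the environment, and a placeholder model name) and a hypothetical list of ROS2 topics collected beforehand, e.g. with `ros2 topic list -t`. The LLM is simply asked to map the exposed interfaces to MITRE ATT&CK for ICS techniques and to propose automatable test cases; an actual thesis implementation would replace the hard-coded topic list with live introspection of the target system and would validate the generated tests.

# Minimal illustrative sketch (assumptions: `openai` package installed,
# OPENAI_API_KEY set in the environment, placeholder model name, and a
# hypothetical example topic list instead of live ROS2 introspection).
from openai import OpenAI

# Hypothetical ROS2 interface description, e.g. collected with `ros2 topic list -t`.
ROS2_TOPICS = """\
/cmd_vel [geometry_msgs/msg/Twist]
/joint_states [sensor_msgs/msg/JointState]
/emergency_stop [std_msgs/msg/Bool]
"""

PROMPT = (
    "You are assisting with authorized security testing of a ROS2-based "
    "industrial control system. For the exposed topics below, list the most "
    "relevant MITRE ATT&CK for ICS techniques (with technique IDs) and outline "
    "one concrete, automatable test case per technique.\n\nTopics:\n" + ROS2_TOPICS
)

def propose_test_cases() -> str:
    client = OpenAI()  # reads the API key from the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model could be used
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0.2,  # low temperature for more stable suggestions
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(propose_test_cases())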

Recommended Prior Knowledge:

  • Programming skills in Python;
  • Prior experience with deep learning frameworks is desirable (preferably PyTorch);
  • Interest in the topic.

Start:

  • a.s.a.p.

Contact: