Large Language Models (LLMs)– The power of words

Application of Large Language Models: opportunities and risks

The introduction of Large Language Models, such as chatbots like ChatGPT and others, has generated significant hype and is widely applied in various everyday contexts. But how secure is this technology, and is its application feasible in the B2B context? We delved into this question with experts Dr. Christoph Endres, CEO of sequire technology, and Prof. Dr. Eugen Staab, Professor of Business Informatics at the Hochschule Kaiserslautern.

What exactly are Large Language Models (LLMs), their application areas, and why are they currently making such a big impact?

EUGEN: Large Language Models are a technology of artificial intelligence. They are powered by “artificial neural networks” trained on massive amounts of text. Through this training, they develop a comprehensive understanding of language, enabling, for the first time, natural language communication between humans and computers.

CHRISTOPH: The most well-known version currently is GPT (Generative Pre-Trained Transformer), a generative AI. Since the release of ChatGPT, it has made a significant impact because it is highly accessible and user-friendly. Many people enjoy experimenting with this new technology. The application areas are also quite versatile: you can generate text from a few keywords, as well as summarize, rephrase, or translate large texts. So, it’s capable of anything related to text processing.

What opportunities arise from the use, and what needs to be considered while using them?

EUGEN: For the first time, we can develop extensive, natural language human-machine interfaces in IT. We can converse with the computer like talking to other people. The LLM doesn’t have to possess all knowledge; it can also find answers through queries to external information systems. Such integration is referred to as ‘Retrieval Augmented Generation (RAG),’ and it is likely to play a significant role in using LLMs in the future. It’s essential to note that the generated texts may contain errors or even ‘hallucinations.’ When the model cannot find suitable information, it often fabricates responses in current language models and does so convincingly. Therefore, texts must always be critically assessed when using them.

CHRISTOPH: A clear opportunity: By delegating tasks, workflows can be more efficient. Even if the generated content needs to be corrected, time is saved just by generating the text. As mentioned, the threshold for interacting with AI or a program is lowered significantly, as no programming knowledge is required to give instructions to the model. However, another serious issue is the bias in the training data, which can replicate and amplify societal prejudices.

What should be considered when integrating LLMs into a company's IT system landscape?

EUGEN: Firstly, employees, especially when using open cloud solutions, must be trained not to input sensitive content as a ‘Prompt.’ Security and privacy requirements are lower if a company opts for a self-hosted solution. I would recommend this approach. The question should not be whether to let a language model make autonomous decisions. A human should always make the decision, with the LLM serving as an assistant for guidance.

christoph_endres

"As part of our research with CISPA, we have found that attackers or malicious actors can take control of language models, subsequently extracting information from users. In this regard, one should always handle sensitive data cautiously."

Weakness: Indirect Prompt Injection

In February 2023, employees of sequire technology, in cooperation with the Helmholtz Center for Information Security (CISPA), published a research paper onvulnerabilities in Large Language Models (LLMs).In their work, the experts discovered that attackers could manipulate the data accessed by LLMs and place undesirable instructions there. These so-called ‘Indirect Prompt Injections’ can be executed without the knowledge of the chatbot user.

How can companies use LLMs safely?

EUGEN: When companies integrate LLMs into their applications, which they may even provide to their customers, it’s crucial to seek expert consultation on potential risks beforehand. Only after conducting a proper risk analysis can the risk-benefit ratio be assessed.

CHRISTOPH: Consultation is extremely important before implementing a language model. However, I believe one should start even earlier and ask what one wants to protect against. The second step involves figuring out how to protect against those threats. It is necessary to define what critical information is and what risks could arise to determine how to establish security measures accordingly.

eugen_staab

"We need awareness of how language models work so that people question them critically."

What does the future look like? Can we discover new, innovative business models through LLMs?

EUGEN: I believe two significant areas could be fascinating. Firstly, the integration of LLMs with external information systems, such as the internet or other AI algorithms, and internal, company-specific software systems. This opens up new possibilities and associated business models. Secondly, the retraining, also known as ‘Fine-Tuning,’ of language models with company-specific data, potentially turning the model into an expert within the company. In my assessment, these systems will establish themselves as assistants, enabling us to engage in natural language interactions with the computer. This will help us more efficiently manage the vast amounts of data we deal with today.

CHRISTOPH: I believe that there will be significant progress in many societal domains through LLMs or Artificial Intelligence (AI) in general, especially when considering the expert systems mentioned. In my opinion, there are many areas where they can be meaningfully utilized.

christoph_endres

DR. CHRISTOPH ENDRES
CEO
sequire technology

eugen_staab

PROF. DR. EUGEN STAAB
PROF. DR. EUGEN STAAB
University of Kaiserslautern

Other articles that might be interesting for you