Hands-on introduction to Large Language Models for developers
This training provides a solid, practice‑oriented introduction to developing applications and workflows based on Large Language Models (LLMs). Participants get a structured overview of key LLM concepts, technologies, and use cases, and learn how to integrate them systematically into applications and work processes. The focus is on methodological foundations and the hands‑on application of core techniques for building LLM‑based applications and automated workflows.
In concrete terms, we work with:
- Prompt Engineering
- Tool/Function Calling
- RAG Pipelines (vector search via Postgres + pgvector)
- Stateful agent and workflow graphs (LangChain / LangGraph)
- Production‑oriented deployment with Python + FastAPI
- Additionally: Local inference with Transformers and efficient fine‑tuning (LoRA/QLoRA) using Unsloth
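As a taste of the RAG topic listed above, here is a minimal sketch of vector retrieval, the core step behind a pgvector‑backed pipeline. The documents and their embedding vectors are invented toy data for illustration; in a real pipeline the vectors would come from an embedding model and be stored in a Postgres table with a pgvector column.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy in-memory "embedding store" (hypothetical data; a real RAG pipeline
# would query Postgres + pgvector instead of a Python dict).
documents = {
    "LLMs generate text token by token.":       [0.9, 0.1, 0.0],
    "FastAPI serves Python web APIs.":          [0.1, 0.9, 0.1],
    "pgvector adds vector search to Postgres.": [0.2, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    """Return the k documents whose vectors are most similar to the query."""
    ranked = sorted(
        documents,
        key=lambda doc: cosine_similarity(query_vec, documents[doc]),
        reverse=True,
    )
    return ranked[:k]

# A query vector close to the third document's embedding:
print(retrieve([0.1, 0.2, 0.95]))  # ['pgvector adds vector search to Postgres.']
```

The retrieved passages would then be placed into the prompt so the model can answer grounded in them; that prompt-assembly step is what the training covers in the RAG module.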
The training combines theoretical foundations with hands‑on exercises and covers:
- Introduction to Large Language Models (history, relevant providers, typical application scenarios)
- Developing LLM‑based applications (prompt engineering, fine‑tuning, domain adaptation)
- Workflows & production‑oriented usage (Retrieval‑Augmented Generation (RAG), agents, automation, basic MLOps concepts)
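To illustrate the tool/function‑calling pattern from the outline: the model emits a structured "function call" (typically JSON), and the application validates it and dispatches to real code. The tool names, signatures, and JSON shape below are invented for illustration and not tied to any specific provider's API.

```python
import json

# Hypothetical tools the model is allowed to call.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stub: a real tool would query an external service

def add(a: float, b: float) -> float:
    return a + b

TOOLS = {"get_weather": get_weather, "add": add}

def dispatch(model_output: str):
    """Parse a model's JSON 'function call' and execute the matching tool."""
    call = json.loads(model_output)
    fn = TOOLS.get(call["name"])
    if fn is None:
        raise ValueError(f"Unknown tool: {call['name']}")
    return fn(**call["arguments"])

# In practice this string would come from the LLM's tool-calling response.
print(dispatch('{"name": "add", "arguments": {"a": 2, "b": 3}}'))  # 5
```

Keeping the registry explicit (rather than calling arbitrary names) is the usual safeguard: the model can only trigger functions the developer has deliberately exposed.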
The training “Development of LLM‑Based Applications and Workflows – Basic” provides a structured and hands‑on introduction to working with Large Language Models. It is aimed at developers and technical beginners who want to understand LLMs and systematically apply them in applications and workflows.
Through a combination of foundational knowledge and practical exercises, participants acquire the key competencies needed to begin working with modern LLM‑based application scenarios.
Fraunhofer Institute for Open Communication Systems