Security Testing with Fuzzing

Find security vulnerabilities before others do – efficiently, reproducibly, and in compliance with regulatory requirements

Your software works. But is it robust against unusual or faulty inputs?
This is where fuzzing comes in: automated tests generate large numbers of unexpected, sometimes invalid inputs to uncover vulnerabilities early. This allows you to close gaps that traditional tests often miss, while also meeting current regulatory requirements (e.g., the Cyber Resilience Act).

This training is designed for professionals who develop or test software and want to introduce or expand security testing in a practical, scalable way.

Why now? 

  • Attackers exploit unexpected inputs – exactly the cases fuzzing systematically covers.
  • Modern software consists of first‑party, third‑party, and open‑source components – fuzzing helps cover the entire stack with security tests.
  • Regulations require regular, traceable security tests – fuzzing provides scalable evidence.

How we work in this course

Format: Blended learning – 2 days in person + 1 online session (~3 hours) for deepening and practice.
Learning style: concise inputs, plenty of hands‑on work with real fuzzing tools (open source & commercial), guided exercises, checklists & templates.

After the seminar you will be able to…

…identify where fuzzing has the highest impact (APIs, protocols, formats, services).

…set up, run, and prioritize your own fuzzing tests efficiently.

…analyze, reproduce, and correctly triage crashes and findings (false positives vs. actual vulnerabilities).

…define and justify stopping criteria (when is “enough” testing?).

…combine fuzzing with existing tests (unit, integration, E2E) and integrate it into CI/CD pipelines.

…document results in a regulatory‑compliant way (e.g., for the Cyber Resilience Act, incl. open source software/third‑party components).

Course Content

  • Test methods: static and dynamic
  • Introduction to fuzzing
  • Fuzzing approaches and techniques (black‑box, gray‑box, and white‑box fuzzing; mutation‑based, generation‑based, model‑based, coverage‑guided, and directed fuzzing)
  • Stopping conditions
  • Combination with static analysis
  • Use of self‑developed fuzzing tools
  • Search‑based fuzzing
  • Fuzzing in different programming languages
  • Fuzzing for certification purposes

Overview of the training course »Security Testing with Fuzzing«

Event type: Open format (also available as an in-house training upon request)
Format: Blended learning (in-person and online sessions)
Duration: 2 days in person + 0.5 day online
Dates:
  • 05.05.2026 - 06.05.2026 in Berlin (registration until 20.04.2026)
  • 23.06.2026 - 24.06.2026 in Berlin (registration until 08.06.2026)
  • 29.09.2026 - 30.09.2026 in Berlin (registration until 14.09.2026)
  • 24.11.2026 - 25.11.2026 in Berlin (registration until 09.11.2026)
Language: German (English upon request)
Prerequisites: Knowledge and basic experience in software development, programming, IT security, and testing
Target audience: Developers, testers, test engineers, test managers
Location: Online and at Fraunhofer FOKUS, Kaiserin-Augusta-Allee 31, 10589 Berlin
Price: EUR 1,625 per person (VAT exempt according to § 4 No. 22 letter a of the German VAT Act)
Trainers

Martin Schneider is Head of the Testing Group in the Quality Engineering business unit at the Fraunhofer Institute FOKUS. He is an expert in quality assurance and security testing for networked software systems. He earned his diploma in Computer Science from TU Berlin in 2012 and has more than 10 years of experience in applied research and teaching, as well as in coordinating, designing, and delivering customer‑specific training courses, including Security Testing Basic. He is co‑author of the technical book “Basiswissen Sicherheitstests” (dpunkt‑Verlag) and co‑author of a fuzzing guideline for the German Federal Office for Information Security (BSI).

Ramon Barakat completed his Master’s degree in Computer Science at Technische Universität Berlin. He works at the Fraunhofer Institute for Open Communication Systems (FOKUS), where he contributes to several industry and research projects in the areas of software quality assurance, security testing, NG112, and artificial intelligence.

FAQ – Security Testing with Fuzzing

What is fuzzing in software testing, and what is it used for?

Fuzzing is an automated testing method in which software is deliberately tested with unusual or faulty inputs.
The goal is to find security vulnerabilities and stability issues that traditional tests often fail to detect. Fuzzing is particularly effective for APIs, parsers, protocols, and complex input formats.
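To make the principle concrete: a mutation‑based fuzzer in its simplest form repeatedly corrupts a valid seed input and feeds the result to the code under test, recording every input that triggers a crash. The sketch below is plain Python and purely illustrative – `parse_record` is a hypothetical target standing in for real parsing code, not material from the course:

```python
import random

def parse_record(data: bytes) -> None:
    """Hypothetical target: expects b'LEN:payload' with a matching length."""
    head, sep, payload = data.partition(b":")
    if sep:
        length = int(head.decode("ascii", "replace"))  # may raise ValueError
        assert length == len(payload)                  # may raise AssertionError

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Flip a bit, insert a byte, or delete a byte at a random position."""
    data = bytearray(seed)
    pos = rng.randrange(len(data) + 1)
    op = rng.choice(("flip", "insert", "delete"))
    if op == "insert" or not data:
        data.insert(pos, rng.randrange(256))
    elif op == "flip":
        data[pos % len(data)] ^= 1 << rng.randrange(8)
    else:  # delete
        del data[pos % len(data)]
    return bytes(data)

def fuzz(seed: bytes, iterations: int = 10_000) -> list:
    """Feed mutated seeds to the target; record every crashing input."""
    rng = random.Random(0)  # fixed seed makes the run reproducible
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed, rng)
        try:
            parse_record(candidate)
        except Exception:
            crashes.append(candidate)  # the stored input reproduces the crash
    return crashes

found = fuzz(b"7:payload")
print(f"{len(found)} crashing inputs recorded")
```

Real‑world tools such as AFL++ or libFuzzer build on this basic loop with coverage feedback, corpus management, and smarter mutation strategies.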

Why do I need fuzzing if I already have unit and integration tests?

Because traditional tests check expected inputs—fuzzing checks unexpected ones.
Fuzzing covers edge cases, faulty payloads, and unusual system states that often lead to crashes, denial‑of‑service issues, or security vulnerabilities. It therefore complements existing test methods in a meaningful way.

Who benefits from a fuzzing training?

Developers, testers, and QA professionals with basic knowledge of software testing.
The training is explicitly designed for people without prior security testing experience who want to apply fuzzing effectively or meet regulatory requirements.

How can I get started with fuzzing if I have no experience?

With a clear test objective, a suitable tool, and defined stopping criteria.
In the training, participants learn step by step how to set up, run, and evaluate fuzzing tests—including common pitfalls and proven best practices from real‑world projects.

Which security vulnerabilities can be found through fuzzing?

Typical findings include memory errors, parser bugs, crashes, and denial‑of‑service issues.
Fuzzing is especially effective in identifying zero‑day vulnerabilities, faulty input validation, and protocol edge cases—even in third‑party and open‑source components.

How do I integrate fuzzing effectively into CI/CD pipelines?

By running automated fuzz tests with defined time and resource budgets.
Short fuzzing jobs can run on every build, deeper tests on a scheduled basis. The training explains how to store, deduplicate, and process results in a reproducible way.
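As an illustration of the time‑budget and deduplication idea (a sketch only – `run_fuzz_budget` and the `target` function are hypothetical names, not tools covered in the course): a CI fuzz job runs until a wall‑clock deadline and buckets crashes so that repeated findings do not flood the report.

```python
import hashlib
import random
import time

def run_fuzz_budget(target, seed: bytes, budget_seconds: float = 2.0) -> dict:
    """Mutate `seed` and call `target` until the time budget is spent;
    deduplicate crashes into buckets keyed by exception type and message."""
    rng = random.Random(0)                        # fixed seed: reproducible job
    buckets = {}                                  # dedup key -> first crashing input
    deadline = time.monotonic() + budget_seconds
    while time.monotonic() < deadline:
        data = bytearray(seed)
        for _ in range(rng.randrange(1, 4)):      # apply 1-3 random byte writes
            data[rng.randrange(len(data))] = rng.randrange(256)
        try:
            target(bytes(data))
        except Exception as exc:
            key = hashlib.sha1(f"{type(exc).__name__}:{exc}".encode()).hexdigest()
            buckets.setdefault(key, bytes(data))  # keep one input per bucket
    return buckets

def target(data: bytes) -> None:
    """Hypothetical code under test: cannot handle embedded NUL bytes."""
    if b"\x00" in data:
        raise ValueError("embedded NUL byte")

report = run_fuzz_budget(target, seed=b"hello", budget_seconds=0.5)
print(f"{len(report)} unique crash bucket(s)")
```

In practice, the dedup key would typically be derived from a stack trace rather than the exception message, and the crashing inputs would be stored as build artifacts so each finding can be reproduced.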

How do I know when I’ve done “enough” fuzzing?

When defined stopping criteria – such as a time budget, a code‑coverage plateau, or crash stagnation – are reached.
Without clear criteria, fuzzing quickly becomes inefficient. In the training, participants learn when tests should reasonably end and how to prioritize results.
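One common stopping criterion, crash stagnation, can be expressed as a simple rule: stop once a fixed window of recent iterations has produced nothing new. A minimal sketch (the helper name is hypothetical, assuming the fuzzer records a per‑iteration "new finding" flag):

```python
def should_stop(history: list, window: int = 1000) -> bool:
    """Crash-stagnation rule: `history` holds one boolean per iteration,
    True when that iteration produced a new unique finding. Stop once the
    last `window` iterations produced nothing new."""
    if len(history) < window:
        return False          # not enough evidence yet to stop
    return not any(history[-window:])
```

Coverage plateaus can be handled the same way, by flagging iterations that reach previously unseen code instead of iterations that find new crashes.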

Does fuzzing help meet the requirements of the Cyber Resilience Act (CRA)?

Yes, fuzzing is a key component of demonstrable security testing.
The CRA requires regular security evaluations, including for third‑party and open‑source software. Fuzzing provides reproducible results that can be documented in an audit‑ready manner.

Why should companies care about fuzzing at all?

Because traditional software tests do not reliably uncover security‑relevant weaknesses.
Fuzzing finds issues caused by unusual or faulty inputs—the same kinds of triggers used in real attacks. Companies can reduce security risks before deployment and avoid costly incidents later.

What business risks does fuzzing reduce specifically?

Fuzzing lowers the risk of security incidents, liability issues, and reputational damage.
Undetected vulnerabilities in interfaces, protocols, or open‑source components are among the most common causes of security‑critical incidents. Early fuzzing acts as a measurable and preventative safeguard.

Is fuzzing relevant for regulatory compliance, such as the Cyber Resilience Act?

Yes. Fuzzing is an effective part of verifiable security testing under the CRA.
The Cyber Resilience Act requires regular and documented security tests—even for third‑party and open‑source components. Fuzzing generates reliable results that can be structured into compliant documentation.

Is fuzzing alone sufficient to fulfill regulatory requirements?

No, but it is a central building block of a robust security testing strategy.
Fuzzing enhances existing development and testing processes and increases their effectiveness. The key is to combine fuzzing with other measures—the training covers how to do this efficiently.

How much effort is required to introduce and operate fuzzing?

The initial effort is manageable, and the benefits scale with maturity.
Even time‑limited test runs produce valuable results. With clear stopping criteria and CI/CD integration, the ongoing effort remains predictable and controlled—without large new projects.

Do we need security specialists or additional roles for fuzzing?

No. Fuzzing can be implemented by existing development and testing teams.
After the training, teams can apply fuzzing independently, interpret results, and forward them appropriately. Specialists are only needed for highly specific edge cases.

How quickly will we see measurable benefits from fuzzing?

Very quickly—often during the first test runs.
Fuzzing frequently identifies crashes or weaknesses early on. These findings can be prioritized and fixed immediately, making the return on investment visible early.

What does the training deliver to the company in concrete terms?

It enables a controlled, efficient, and traceable use of fuzzing.
Participants learn how to use fuzzing purposefully, evaluate results correctly, and document them in an audit‑ready way. This reduces risk, improves test quality, and provides regulatory assurance—without unnecessary overhead.

Contact

Press / Media

Anne Halbich

Fraunhofer Institute for Open Communication Systems
Kaiserin-Augusta-Allee 31
10589 Berlin, Germany

Phone: +49 30 3463-7346
