Definition
Conceptually, a fuzzing test starts by generating massive numbers of normal and abnormal inputs for the target application, then tries to detect exceptions by feeding the generated inputs to the target and monitoring its execution state.
Comparison
There are several other vulnerability discovery techniques: static analysis, dynamic analysis, and symbolic execution.
| Technique | Easy to start? | Accuracy | Scalability |
|---|---|---|---|
| Static analysis | easy | low | relatively good |
| Dynamic analysis | hard | high | uncertain |
| Symbolic execution | hard | high | bad |
| Fuzzing | easy | high | good |
However, fuzzing also has notable disadvantages, such as low efficiency and low code coverage.
The process
```mermaid
graph TD
    A[Start] --> B[Testcase generation]
    B --> C[Program execution]
    C --> D{Violation?}
    D --> |Yes| F[Bugs]
    D --> |No| B
```
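The generate → execute → monitor loop above can be sketched in a few lines. This is a minimal illustration, not a real fuzzer: the target is a Python callable standing in for the program under test, and an uncaught exception stands in for a crash or violation. `toy_target`, its planted bug trigger `b"FUZZ"`, and `random_bytes` are all hypothetical names for demonstration.

```python
import random

def fuzz(target, generate, rounds=1000):
    """Run `rounds` iterations of the generate -> execute -> monitor loop.

    `target` is a callable standing in for the program under test; any
    uncaught exception is treated as a violation (a detected bug).
    """
    bugs = []
    for _ in range(rounds):
        data = generate()          # testcase generation
        try:
            target(data)           # program execution
        except Exception:          # violation observed by the monitor
            bugs.append(data)      # record the crashing testcase
    return bugs

# Toy target with a planted bug: crashes when input starts with b"FUZZ".
def toy_target(data):
    if data[:4] == b"FUZZ":
        raise RuntimeError("planted bug reached")

# Naive generator: 8 random bytes, no knowledge of the input format.
def random_bytes():
    return bytes(random.randrange(256) for _ in range(8))
```

Note how low the hit rate of purely random generation is against even this 4-byte check, which is exactly the low-efficiency problem mentioned above.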
Types of fuzzers
- Generation based and mutation based
| | Easy to start? | Prior knowledge | Coverage | Ability to pass validation |
|---|---|---|---|---|
| Generation based | hard | needed, hard to acquire | high | strong |
| Mutation based | easy | not needed | low, affected by initial inputs | weak |
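A mutation-based generator can be sketched as a bit-flipping pass over a seed input. This is a simplified illustration (real mutators also splice, insert, and replace bytes); the function name and the `n_flips` parameter are assumptions made for the example.

```python
import random

def mutate(seed: bytes, n_flips: int = 4) -> bytes:
    """Mutation-based generation: flip a few random bits in a seed input.

    No knowledge of the input format is required, but outputs stay close
    to the seed, so coverage depends heavily on the initial corpus.
    """
    buf = bytearray(seed)
    for _ in range(n_flips):
        pos = random.randrange(len(buf))   # pick a random byte
        buf[pos] ^= 1 << random.randrange(8)  # flip one of its bits
    return bytes(buf)
```

Because each mutant differs from the seed by only a few bits, a mutated input usually fails any format or checksum validation the target performs, which is the "weak" entry in the table above.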
- White box, gray box and black box: according to how much internal information about the target program is available to the fuzzer.
- Directed and coverage-based:
  - Directed: cover target code and target paths of the program.
  - Coverage-based: cover as much code of the program as possible.
  - Directed fuzzers aim for a faster test of specific program locations, while coverage-based fuzzers aim for a more thorough test that detects as many bugs as possible.
- Smart and dumb: according to whether there is feedback between the monitoring of the program execution state and testcase generation.
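The feedback loop of a "smart" fuzzer can be sketched as follows. The `target(data)` interface returning a set of covered code locations is an assumption made for the example; real coverage-guided fuzzers such as AFL obtain this feedback from compile-time instrumentation (e.g. edge counters). `coverage_guided_fuzz` and `mutate` are illustrative names, not a real API.

```python
import random

def mutate(seed: bytes, n_flips: int = 2) -> bytes:
    # Simple bit-flip mutator (see the mutation-based sketch above).
    buf = bytearray(seed)
    for _ in range(n_flips):
        buf[random.randrange(len(buf))] ^= 1 << random.randrange(8)
    return bytes(buf)

def coverage_guided_fuzz(target, seeds, rounds=200):
    """Sketch of a 'smart' loop: execution feedback steers generation.

    `target(data)` is assumed to return the set of code locations the
    input covered. Inputs that reach new coverage join the corpus and
    are mutated further; inputs with no new coverage are discarded.
    """
    corpus = list(seeds)
    seen = set()
    for _ in range(rounds):
        parent = random.choice(corpus)
        child = mutate(parent)
        covered = target(child)
        if covered - seen:           # feedback: new coverage reached
            seen |= covered
            corpus.append(child)     # keep the interesting input
    return corpus, seen
```

A dumb fuzzer is the same loop with the `if covered - seen` branch removed: every testcase is generated blindly, with no signal flowing back from execution monitoring.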
Key challenges in fuzzing
- How to mutate seed inputs effectively.
- How to achieve higher code coverage.
- How to pass the target's input validation.