CS-725 Topics in Language-based Software Security

From Sanitization to Mitigation

Mathias Payer -- Fall semester 2018, 2 credit course


Course overview

Unsafe languages like C/C++ are widely used for their promise of performance. Unfortunately, these languages are prone to a large class of memory and type errors that enable attack vectors such as code reuse, privilege escalation, or information leaks. At a high level, memory and type safety would solve all of these problems, and safe languages can enforce these properties relatively cheaply. Unfortunately, the same guarantees come at a high cost when retrofitted onto existing languages.

When working with unsafe languages, three fundamental approaches exist to protect against software flaws: formal verification (proving the absence of bugs), software testing (finding bugs), and mitigation (protecting against the exploitation of bugs). In this seminar, we will primarily focus on the latter two approaches. Formal verification, while giving strong guarantees, struggles to scale to large software.

This seminar explores three areas: the understanding of attack vectors, approaches to software testing, and mitigation strategies. First, you need to understand what kinds of software flaws exist in low-level software and how they can be exploited.

Each student will pick one topic (one specific testing approach, mitigation, or attack vector) from the list of topics below. The student is expected to organize the material and prepare a presentation of the topic for the other students. In addition, students will work through the practical course projects. The main goals of this seminar are:

  1. understanding and defining the security policy and the corresponding guarantees/trade-offs implemented by a given work;
  2. reasoning about the power and effectiveness of different security policies (completeness with regard to the attack vectors covered and strength of the guarantees) and being able to compare them;
  3. reasoning about the computational and resource cost of mechanisms and their possible downsides;
  4. reasoning about alternative implementations of the policy at other levels of abstraction;
  5. developing skills to present a technical topic in computer science to an audience of peers;
  6. learning how to identify possible research topics and articulate differences to existing related work.

Your grade is based on:

  1. a technical presentation of your topic, together with a one-page written summary submitted after your presentation (60%);
  2. active participation in class (10%);
  3. three class projects (30%).

Except by prior arrangement, or an extension granted by the professor before the deadline, missing or late work will be counted as a zero/fail.

Topic presentations

The length of presentations for research papers should be between 20 and 30 minutes. You can structure the presentation as follows:

  1. Motivation of the paper (1-2 slides, ~3 minutes)
  2. Presentation of the core design and implementation of the research paper (4-8 slides, ~10 minutes)
  3. Evaluation of the security policy (2-3 slides, ~4 minutes)
  4. Material for discussion: advantages, disadvantages, limitations of the approach (2-3 slides, ~5 minutes)
  5. Summary slide of the paper: policy, defense property (at which point in the memory model it intervenes), and implementation (language, compiler, or runtime)

Course project

Throughout this course we will have three (small) practical projects that will allow you to deepen your understanding of software flaws, how to find them, and how to protect against their exploitation.

  1. Project 1: fuzzing. Given different programs (standalone binaries or libraries), we will play with different fuzzing approaches to discover hidden vulnerabilities. FuzzLab project files and FuzzLab slides. Submit the report by November 13 (3 weeks from now).
  2. Project 2: code sanitization. We will explore different sanitizers and use them to detect bugs in our programs. SaniLab project files. See the README.md in the archive for instructions. Submit the report by December 04 (3 weeks from now).
  3. Project 3: code reuse under CFI. CFI is a promising mitigation; we will explore how different CFI policies protect against control-flow hijacking, and what their limitations are, by developing different exploits. MitiLab project files. Submit the report by December 21 (1 1/2 weeks from now).

For each project, you will write up your results and discuss the bugs you discovered in a short report in ACM SIGPLAN double-column format. Your write-up should state which policy you were testing for, which bugs you found, and your results. For projects one and two, simply listing crashes is not enough: you must also discuss how you triaged the crashes, i.e., how you identified unique vulnerabilities. For project three, you will discuss how CFI could be extended to protect against the classes of attack vectors you found.

Each project will require about 5-10 hours of work.


This list of topics is non-exhaustive; it may be adapted during class, and students may suggest other policies they are interested in. The open book Software Security: Principles, Policies, and Protection [1] provides an overview of many topics but does not cover each policy in depth.

Software Flaws

Language Safety and Formal Verification

Software Testing



The seminar meets Tuesdays from 15:15 to 17:00 in BC02. Office hours are Tuesdays from 14:15 to 15:00 in BC106. A draft schedule follows, but remember that no plan survives contact with reality!

Date | Topic                                                            | Presenter(s)               | Reading Material
9/18 | Course administration and Eternal War in Memory [2]              | Mathias Payer              | [3] [4]
     | Presentation techniques; Return Oriented Programming             | Mathias Payer, Adrien Gosh |
     | Control-Flow Bending; Counterfeit Object Oriented Programming    |                            |
     | tauCFI: Type-Assisted Control-Flow Integrity for x86-64 Binaries | Paul Muntean               |
     |                                                                  |                            | [10], [11]
     | Rowhammer for flash; AFL, Fuzzing evaluation lab                 | Anil Kurmus, Mathias Payer | [12], [13]

(Mathias is traveling; Wouter Lueks and Philipp Jovanovic will help.)

     | Sanitizers (Start of sanitizer lab); DangSan, DangNull           | Mathias Payer              | [18], [19]
     |                                                                  |                            | [27] [28]
     |                                                                  |                            | [22] [23]
     |                                                                  |                            | [24] [25]

(Mathias is traveling; Giovanni Chubin will help.)

     | Shadow Stacks                                                    |                            |
     | Diversity (SoK)                                                  |                            |
[1] Software Security: Principles, Policies, and Protection. Mathias Payer.
[2] Eternal War in Memory slides.
[3] SoK: Eternal War in Memory. Laszlo Szekeres, Mathias Payer, Tao Wei, and Dawn Song. In Oakland'13: Proc. Int'l Symp. on Security and Privacy, 2013.
[4] Smashing The Stack For Fun And Profit. Aleph1. In Phrack 49.
[5] The Geometry of Innocent Flesh on the Bone: Return-into-libc without Function Calls (on the x86). Hovav Shacham. In CCS'07.
[6] Control-Flow Bending: On the Effectiveness of Control-Flow Integrity. Nicholas Carlini, Antonio Barresi, Mathias Payer, David Wagner, and Thomas R. Gross. In SEC'15.
[7] Counterfeit Object-Oriented Programming: On the Difficulty of Preventing Code Reuse Attacks in C++ Applications. Felix Schuster, Thomas Tendyck, Christopher Liebchen, Lucas Davi, Ahmad-Reza Sadeghi, and Thorsten Holz. In SP'15.
[8] CCured: Type-Safe Retrofitting of Legacy Software. George C. Necula, Jeremy Condit, Matthew Harren, Scott McPeak, and Westley Weimer. In POPL'02 (extended TOPLAS version).
[9] Cyclone: A Safe Dialect of C. Trevor Jim, Greg Morrisett, Dan Grossman, Michael Hicks, James Cheney, and Yanling Wang. In Usenix ATC.
[10] SoftBound: Highly Compatible and Complete Spatial Memory Safety for C. Santosh Nagarakatte, Jianzhou Zhao, Milo M. K. Martin, and Steve Zdancewic. In PLDI'09.
[11] CETS: Compiler-Enforced Temporal Safety for C. Santosh Nagarakatte, Jianzhou Zhao, Milo M. K. Martin, and Steve Zdancewic. In ISMM'10.
[12] American Fuzzy Lop. Michal Zalewski. Technical Report'14.
[13] Evaluating Fuzz Testing. George Klees, Andrew Ruef, Benji Cooper, Shiyi Wei, and Michael Hicks. In CCS'18.
[14] Driller: Augmenting Fuzzing Through Selective Symbolic Execution. Nick Stephens, John Grosen, Christopher Salls, Audrey Dutcher, Ruoyu Wang, Jacopo Corbetta, Yan Shoshitaishvili, Christopher Kruegel, and Giovanni Vigna. In NDSS'16.
[15] T-Fuzz: Fuzzing by Program Transformation. Hui Peng, Yan Shoshitaishvili, and Mathias Payer. In Oakland'18.
[16] MemCheck: Using Valgrind to Detect Undefined Value Errors with Bit-Precision. Julian Seward and Nicholas Nethercote. In Usenix ATC'05.
[17] AddressSanitizer: A Fast Address Sanity Checker. Konstantin Serebryany, Derek Bruening, Alexander Potapenko, and Dmitry Vyukov. In Usenix ATC'12.
[18] DangSan: Scalable Use-after-free Detection. Erik van der Kouwe, Vinod Nigade, and Cristiano Giuffrida. In EuroSys'17.
[19] Preventing Use-after-free with Dangling Pointers Nullification. Byoungyoung Lee, Chengyu Song, Yeongjin Jang, and Tielei Wang. In NDSS'15.
[20] HexType: Efficient Detection of Type Confusion Errors for C++. Yuseok Jeon, Priyam Biswas, Scott A. Carr, Byoungyoung Lee, and Mathias Payer. In CCS'17.
[21] LAVA: Large-scale Automated Vulnerability Addition. Brendan Dolan-Gavitt, Patrick Hulin, Engin Kirda, Tim Leek, Andrea Mambretti, Wil Robertson, Frederick Ulrich, and Ryan Whelan. In SP'16.
[22] Address Space Layout Randomization. PaX Team.
[23] Data Space Randomization. Sandeep Bhatkar and R. Sekar. In DIMVA'08.
[24] DieHard: Probabilistic Memory Safety for Unsafe Languages. Emery D. Berger and Benjamin G. Zorn. In PLDI'06.
[25] DieHarder: Securing the Heap. Gene Novark and Emery D. Berger. In CCS'10.
[26] The Performance Cost of Shadow Stacks and Stack Canaries. Thurston H.Y. Dang, Petros Maniatis, and David Wagner. In AsiaCCS'15.
[27] Control-Flow Integrity. Martin Abadi, Mihai Budiu, Ulfar Erlingsson, and Jay Ligatti. In CCS'05.
[28] Control-Flow Integrity: Precision, Security, and Performance. Nathan Burow, Scott A. Carr, Joseph Nash, Per Larsen, Michael Franz, Stefan Brunthaler, and Mathias Payer. In CSUR'17.
[29] Securing Software by Enforcing Data-Flow Integrity. Miguel Castro, Manuel Costa, and Tim Harris. In OSDI'06.
[30] Code-Pointer Integrity. Volodymyr Kuznetsov, Mathias Payer, Laszlo Szekeres, George Candea, R. Sekar, and Dawn Song. In OSDI'14.
[31] SoK: Automated Software Diversity. Per Larsen, Andrei Homescu, Stefan Brunthaler, and Michael Franz. In SP'14.