15th IEEE Workshop on Offensive Technologies
May 27, 2021, co-located with IEEE S&P and in cooperation with USENIX
The Workshop on Offensive Technologies (WOOT) aims to present a broad picture of offense and its contributions, bringing together researchers and practitioners across all areas of computer security. Offensive security has changed from a hobby to an industry. No longer an exercise for isolated enthusiasts, offensive security is today a large-scale operation managed by organized, capitalized actors. Meanwhile, the landscape has shifted: software used by millions is built by startups less than a year old, delivered on mobile phones and surveilled by national signals intelligence agencies. In the field's infancy, offensive security research was conducted separately by industry, independent hackers, or in academia. Collaboration between these groups was difficult. Since 2007, the Workshop on Offensive Technologies (WOOT) has been bringing those communities together.
Call for Papers
Computer security exposes the differences between the actual mechanisms of everyday trusted technologies and the models of those technologies used by developers, architects, academic researchers, owners, operators, and end users. While inherently focused on practice, security also poses questions such as "what kinds of computations are and aren't trusted systems capable of?", which hark back to the fundamentals of computability. State-of-the-art offense explores these questions pragmatically, gathering material for generalizations that lead to better models and more trustworthy systems.
WOOT provides a forum for high-quality, peer-reviewed work discussing tools and techniques for attacks. Submissions should reflect the state of the art in offensive computer security technology, exposing poorly understood mechanisms, presenting novel attacks, highlighting the limitations of published attacks and defenses, or surveying the state of offensive operations at scale. WOOT '21 accepts papers in both an academic security context and more applied work that informs the field about the state of security practice in offensive techniques. The goal for these submissions is to produce published works that will guide future work in the field. Submissions will be peer reviewed and shepherded as appropriate. Submission topics include, but are not limited to, attacks on and offensive research into:
- Hardware, including software-based exploitation of hardware vulnerabilities
- Virtualization and the cloud
- Network and distributed systems
- Operating systems
- Browser and general client-side security (runtimes, JITs, sandboxing)
- Application security
- Analysis of mitigations and automated techniques for bypassing them
- Automated software testing, such as fuzzing, for novel targets
- Internet of Things
- Machine Learning
- Cyber-physical systems
- Cryptographic systems (practical attacks on deployed systems)
- Malware design, implementation and analysis
- Offensive applications of formal methods (solvers, symbolic execution)
The presenters will be authors of accepted papers. There will also be a keynote speaker and a selection of invited speakers. WOOT '21 will feature a Best Paper Award and a Best Student Paper Award.
Note that WOOT '21 and the other IEEE S&P workshops will be held virtually.
WOOT '21 welcomes submissions without restrictions of origin. Submissions from academia, independent researchers, students, hackers, and industry are welcome. Are you planning to give a cool talk at Black Hat in August? Got something interesting planned for other non-academic venues later this year? This is exactly the type of work we'd like to see at WOOT '21. Please submit—it will also give you a chance to have your work reviewed and to receive suggestions and comments from some of the best researchers in the world. More formal academic offensive security papers are also very welcome.
Systematization of Knowledge
Continuing the tradition of past years, WOOT '21 will be accepting "Systematization of Knowledge" (SoK) papers. The goal of an SoK paper is to encourage work that evaluates, systematizes, and contextualizes existing knowledge. These papers will prove highly valuable to our community but would not be accepted as refereed papers because they lack novel research contributions. Suitable papers include survey papers that provide useful perspectives on major research areas, papers that support or challenge long-held beliefs with compelling evidence, or papers that provide an extensive and realistic evaluation of competing approaches to solving specific problems. Be sure to select "Systematization of Knowledge paper" in the submissions system to distinguish it from other paper submissions.
Due to the short review timeframe, we require an abstract registration this year. Abstract registration allows reviewers to bid on the papers they are interested in and allows authors to continue finalizing their papers until the paper submission deadline. When submitting, be sure to provide a descriptive abstract.
- Abstract registration deadline:
Wednesday, January 27, 2021, 11:59 pm AoE (Anywhere on Earth)
- Paper submission deadline:
Friday, January 29, 2021, 11:59 pm AoE (Anywhere on Earth)
- Notification date:
Monday, March 01, 2021
- Workshop date:
Thursday, May 27, 2021
What to Submit
Submissions must be in PDF format. Papers should be succinct but thorough in presenting the work. The contribution needs to be well motivated, clearly exposed, and compared to the state of the art. Typical research papers are at least 4 pages and at most 10 pages long (not counting bibliography and appendices). However, papers whose length is incommensurate with their contribution will be rejected.
The submission should be formatted in two columns, using 10-point Times Roman type on 12-point leading, in a text block of 6.5" x 9". Please number the pages. Authors must use the IEEE templates; for LaTeX papers this is IEEEtran.cls version 1.8b.
Note that paper format rules may be clarified. Stay tuned.
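For LaTeX authors, a minimal preamble satisfying these requirements might look like the sketch below (the class options shown are standard IEEEtran options; the title and author placeholders are illustrative, and the anonymous author block reflects the double-blind policy):

```latex
% Minimal skeleton using the IEEEtran class (v1.8b): two-column
% conference layout with 10pt body text, as the guidelines require.
\documentclass[conference,10pt]{IEEEtran}
\pagestyle{plain} % restore page numbers (conference mode omits them by default)

\begin{document}

\title{Anonymized Submission Title}
\author{\IEEEauthorblockN{Anonymous Author(s)}} % double-blind: no identifying info
\maketitle

\begin{abstract}
A descriptive abstract, matching the one registered at abstract registration.
\end{abstract}

% ... paper body ...

\end{document}
```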
Submissions are double-blind: they should be anonymized and avoid obvious self-references (authors are allowed to release technical reports and to present their work elsewhere, such as at DEF CON or Black Hat). Submit papers using the submission form.
Authors of accepted papers must provide a version for the proceedings that follows the above guidelines. A shepherd may be assigned to ensure the quality of the proceedings version of the paper.
If your paper should not be published prior to the event, please notify the chairs. Submissions accompanied by non-disclosure agreement forms will not be considered. Accepted submissions will be treated as confidential prior to publication on the WOOT '21 website; rejected submissions will be permanently treated as confidential.
Policies and Contact Information
Simultaneous submission of the same work to multiple competing academic venues, submission of previously published work without substantial novel contributions, or plagiarism constitutes dishonesty or fraud. Note: Work presented by the authors at industry conferences, such as Black Hat, is not considered to have been "previously published" for the purposes of WOOT '21. We strongly encourage the submission of such work to WOOT '21, particularly work that is well suited to a more formal and complete treatment in a published, peer-reviewed setting. In your submission, please do note any previous presentations of the work.
If the submission describes, or otherwise takes advantage of, newly identified vulnerabilities (e.g., software vulnerabilities in a given program or design weaknesses in a hardware system) the authors should disclose these vulnerabilities to the vendors/maintainers of affected software or hardware systems prior to the CFP deadline. When disclosure is necessary, authors should include a statement within their submission and/or final paper about steps taken to fulfill the goal of disclosure.
Submissions that describe experiments on human subjects, that analyze data derived from human subjects (even anonymized data), or that otherwise may put humans at risk should:
- Disclose whether the research received an approval or waiver from each of the authors’ institutional ethics review boards (e.g., an IRB).
- Discuss steps taken to ensure that participants and others who might have been affected by an experiment were treated ethically and with respect.
Note that if a paper raises significant ethical or legal concerns, including in its handling of personally identifiable information (PII) or other kinds of sensitive data, it might be rejected based on these concerns.
WOOT '21 Artifact Evaluation
A scientific paper consists of a constellation of artifacts that extend beyond the document itself: software, hardware, evaluation data and documentation, raw survey results, mechanized proofs, models, test suites, benchmarks, and so on. In some cases, the quality of these artifacts is as important as that of the document itself, yet many of our conferences offer no formal means to submit and evaluate anything but the paper. To address this shortcoming, WOOT will run an optional artifact evaluation process, inspired by similar efforts at software engineering and security conferences.
All deadlines are 23:59 AoE (Anywhere on Earth):
- March 1: Invitation to authors of accepted papers to submit artifacts
- March 12: Artifact submission deadline
- March 13–March 26: Authors must be reachable for questions during this period
- March 27: Notification
Authors are expected to submit the following:
- A PDF with an abstract for the artifact, which specifies the core idea, the focus of the artifact, and what the evaluation should check
- A PDF of the most recent version of the accepted paper
- Documentation for the artifact (how to reproduce the contributions of the paper)
- A link to the artifact, which must be accessible anonymously (artifact evaluation is single-blind)
Please submit your artifacts to firstname.lastname@example.org. Do not include any binary programs as attachments; link to them where needed.
The AEC evaluates whether the artifact conforms to the expectations set by the paper. We expect artifacts to be:
- consistent with the paper
- as complete as possible
- well documented
- easy to reuse, facilitating further research
We believe the dissemination of artifacts benefits our science and engineering as a whole, as well as the authors submitting them. Their availability improves replicability and reproducibility and enables authors to build on each other's work. It can also help resolve questions about cases not considered by the original authors. The authors receive recognition, leading to higher-impact papers, and benefit themselves from making their code reusable.
Artifact evaluation is a separate process from paper reviews, and authors will be asked to submit their artifacts only after their papers have been (conditionally) accepted for publication at WOOT.
After artifact submission, at least one member of the AEC will download and install the artifact (where relevant) and evaluate it. Since we anticipate small glitches with installation and use, reviewers may communicate with authors to help resolve glitches while preserving reviewer anonymity. The AEC will complete its evaluation and notify authors of the outcome.
For the camera-ready version, authors whose artifacts have successfully passed the evaluation process will receive dedicated badges on their papers to show that the paper has passed this additional evaluation. We also ask authors to make their artifacts available so that others can replicate the results.
To avoid excluding some papers, the AEC will try to accept any artifact that authors wish to submit: software, hardware, data sets, survey results, test suites, mechanized proofs, and so on. Based on the experience of other communities, we decided not to accept paper proofs in the artifact evaluation process: the AEC lacks the time, and often the expertise, to review them carefully. Obviously, the better an artifact is packaged, the more likely the AEC can actually work with it during the evaluation process.
While we encourage open research, submission of an artifact does not imply tacit permission to make its content public. All AEC members will be instructed that they may not publicize any part of your artifact during or after the evaluation, nor retain any part of it afterwards. You are therefore free to include, e.g., models, data files, or proprietary binaries in your artifact. Also note that participating in artifact evaluation does not oblige you to publish your artifacts later, though of course we strongly encourage you to do so.
We recognize that some artifacts may attempt to perform malicious operations by design. Such cases should be boldly and explicitly flagged in detail in the README so that AEC members can take appropriate precautions before installing and running the artifact. The evaluation of exploits and similar results may raise additional hurdles; we are still gathering experience on how best to handle such cases. Please contact us if you have concerns, for example when submitting bug-finding tools or other artifacts with special requirements.
The AEC will consist of about 5–10 members: a combination of senior graduate students, postdocs, and researchers. We seek to include a broad cross-section of the WOOT community on the AEC.
If you are interested in joining the AEC, or supervise PhD students who might be, please contact us at email@example.com.
Workshop chairs
- Mathias Payer (EPFL)
- Fangfei Liu (Intel)
- Daniel Gruss (TU Graz)
Artifact evaluation committee
- Chair: Erik van der Kouwe (VU)
- Ateeq Sharfuddin (SCYTHE)
- Brian Chapman (SCYTHE)
- Daniel Uroz (University of Zaragoza)
- Mohsen Ahmadi (Arizona State University)
- Ricardo J. Rodríguez (University of Zaragoza)
- Victor Duta (Vrije Universiteit Amsterdam)
Program committee
- Johanna Amann (International Computer Science Institute)
- Daniele Antonioli (EPFL)
- Cornelius Aschermann (Facebook)
- Jean-Philippe Aumasson (Taurus Group)
- Dana Baril (Microsoft)
- Lejla Batina (Radboud University, The Netherlands)
- Sarani Bhattacharya (imec-COSIC, ESAT, KU Leuven)
- Kevin Borgolte (TU Delft)
- Juan Caballero (IMDEA Software Institute)
- Yueqiang Cheng (NIO Security Research)
- Chitchanok Chuengsatiansup (The University of Adelaide, Australia)
- Jiska Classen (TU Darmstadt, Secure Mobile Networking Lab)
- Lucas Davi (University of Duisburg-Essen)
- Jennifer Fernick (NCC Group)
- Andrea Fioraldi (EURECOM)
- Yanick Fratantonio (Cisco Talos)
- Christina Garman (Purdue)
- Alexandre Gazet (Airbus)
- Mariano Graziano (Cisco Talos)
- Daniel Gruss (Graz University of Technology)
- Christophe Hauser (Information Sciences Institute, University of Southern California)
- Sean Heelan (Optimyze)
- Rich Johnson (Fuzzing IO)
- Marina Krotofil (Hamburg University of Technology)
- Anil Kurmus (IBM Research Europe)
- Pierre Laperdrix (CNRS, University of Lille, Inria)
- Martina Lindorfer (TU Wien)
- Matt Miller (Microsoft)
- Veelasha Moonsamy (Ruhr University Bochum)
- Asuka Nakajima (NTT Secure Platform Laboratories)
- Yossi Oren (Ben Gurion University of the Negev, Israel)
- Sara Rampazzi (University of Florida)
- Eyal Ronen (Tel Aviv University)
- Christian Rossow (CISPA Helmholtz Center for Information Security)
- Michael Schwarz (CISPA Helmholtz Center for Information Security)
- Kostya Serebryany (Google)
- Natalie Silvanovich (Google)
- Maddie Stone (Google Project Zero)
- Thomas Unterluggauer (Intel Labs)
- Gabrielle Viala (Quarkslab)
- Lukas Weichselbaum (Google)
- Wenyuan Xu (Zhejiang University)
- Stefano Zanero (Politecnico di Milano)
Steering committee
- Aurélien Francillon (EURECOM)
- Dan Boneh (Stanford)
- Yuval Yarom (University of Adelaide and Data61)
- Clémentine Maurice (CNRS)
- Sarah Zennou (Airbus)
- Collin Mulliner (Cruise)
- Michael Bailey (University of Illinois, Urbana-Champaign)