Publications
2024
Watermark Stealing in Large Language Models
Nikola Jovanović, Robin Staab, Martin Vechev
ICML
2024
R2-FM@ICLR24 Oral
Instruction Tuning for Secure Code Generation
Jingxuan He*, Mark Vero*, Gabriela Krasnopolska, Martin Vechev
ICML
2024
* Equal contribution
Prompt Sketching for Large Language Models
Luca Beurer-Kellner, Mark Niklas Müller, Marc Fischer, Martin Vechev
ICML
2024
Guiding LLMs The Right Way: Fast, Non-Invasive Constrained Generation
Luca Beurer-Kellner, Marc Fischer, Martin Vechev
ICML
2024
A Synthetic Dataset for Personal Attribute Inference
Hanna Yukhymenko, Robin Staab, Mark Vero, Martin Vechev
arXiv
2024
Code Agents are State of the Art Software Testers
Niels Mündler, Mark Niklas Müller, Jingxuan He, Martin Vechev
ICML Workshop on LLMs and Cognition (also at Workshop on Foundation Models in the Wild)
2024
ConStat: Performance-Based Contamination Detection in Large Language Models
Jasper Dekoninck, Mark Niklas Müller, Martin Vechev
arXiv
2024
Beyond Memorization: Violating Privacy Via Inference with Large Language Models
Robin Staab, Mark Vero, Mislav Balunović, Martin Vechev
ICLR
2024
Spotlight, 2024 PPPM-Award
Self-contradictory Hallucinations of Large Language Models: Evaluation, Detection and Mitigation
Niels Mündler, Jingxuan He, Slobodan Jenko, Martin Vechev
ICLR
2024
Black-Box Detection of Language Model Watermarks
Thibaud Gloaguen, Nikola Jovanović, Robin Staab, Martin Vechev
arXiv
2024
Exploiting LLM Quantization
Kazuki Egashira, Mark Vero, Robin Staab, Jingxuan He, Martin Vechev
arXiv
2024
NextGenAISafety@ICML24 Oral
Controlled Text Generation via Language Model Arithmetic
Jasper Dekoninck, Marc Fischer, Luca Beurer-Kellner, Martin Vechev
ICLR
2024
Spotlight
Large Language Models are Advanced Anonymizers
Robin Staab, Mark Vero, Mislav Balunović, Martin Vechev
arXiv
2024
Evading Data Contamination Detection for Language Models is (too) Easy
Jasper Dekoninck, Mark Niklas Müller, Maximilian Baader, Marc Fischer, Martin Vechev
arXiv
2024
2023
Large Language Models for Code: Security Hardening and Adversarial Testing
Jingxuan He, Martin Vechev
ACM CCS
2023
Distinguished Paper Award
LMQL Chat: Scripted Chatbot Development
Luca Beurer-Kellner*, Marc Fischer*, Martin Vechev
Neural Conversational AI Workshop (TEACH) -- ICML
2023
* Equal contribution
Large Language Models are Zero-Shot Multi-Tool Users
Luca Beurer-Kellner*, Marc Fischer*, Martin Vechev
Knowledge and Logical Reasoning Workshop -- ICML
2023
* Equal contribution
Prompting Is Programming: A Query Language for Large Language Models
Luca Beurer-Kellner, Marc Fischer, Martin Vechev
PLDI
2023