Domain: lamps-ccs.com
More information on this domain is available in AlienVault OTX.
DNS Resolutions

Date        IP Address
2024-06-07  18.165.53.36 (Class C)
2024-09-24  3.160.212.53 (Class C)
2026-01-10  3.169.173.102 (Class C)
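The "(Class C)" annotation on each resolution refers to the /24 network containing the address. As a minimal sketch (standard library only; the table values are copied from above, and the helper name `class_c` is ours), those networks can be derived like this:

```python
import ipaddress

# Historical A-record resolutions for lamps-ccs.com (from the table above).
resolutions = {
    "2024-06-07": "18.165.53.36",
    "2024-09-24": "3.160.212.53",
    "2026-01-10": "3.169.173.102",
}

def class_c(ip: str) -> str:
    """Return the /24 ("Class C") network that contains `ip`."""
    return str(ipaddress.ip_network(f"{ip}/24", strict=False))

for date, ip in resolutions.items():
    print(f"{date}  {ip:<15} {class_c(ip)}")
```

Each resolution falls in a different /24, which is typical for a domain fronted by a CDN (consistent with the CloudFront `Server` header below) rather than a single fixed origin host.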
Port 80
HTTP/1.1 301 Moved Permanently
Server: CloudFront
Date: Sat, 10 Jan 2026 19:25:47 GMT
Content-Type: text/html
Content-Length: 167
Connection: keep-alive
Location: https://lamps-ccs.com/
X-Cache: Redirect from cloudfront
Via: 1.1 7ad3d6571deff4c3c83d7e4476fcc6d0.cloudfront.net (CloudFront)
X-Amz-Cf-Pop: HIO52-P4
X-Amz-Cf-Id: MMN7BdOwaYn2aUeQ8txrJf3S8-Hp_wXsA8YILn9C0pJKE6uxxTzGQw

<html><head><title>301 Moved Permanently</title></head>
<body><center><h1>301 Moved Permanently</h1></center>
<hr><center>CloudFront</center>
</body></html>
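The captured banner is a status line followed by `Name: value` header fields, then a blank line and the body. A small sketch of parsing that head-of-response format (the `raw` string below is a hand-typed, abbreviated version of the Port 80 banner above, not the full capture, and `parse_head` is our own helper):

```python
# Abbreviated copy of the Port 80 response head shown above.
raw = (
    "HTTP/1.1 301 Moved Permanently\r\n"
    "Server: CloudFront\r\n"
    "Content-Type: text/html\r\n"
    "Location: https://lamps-ccs.com/\r\n"
    "\r\n"
)

def parse_head(raw: str):
    """Split a response head into (status_code, reason, header dict)."""
    head, _, _body = raw.partition("\r\n\r\n")
    status_line, *header_lines = head.split("\r\n")
    _version, code, reason = status_line.split(" ", 2)
    headers = dict(line.split(": ", 1) for line in header_lines)
    return int(code), reason, headers

code, reason, headers = parse_head(raw)
print(code, headers["Location"])  # 301 https://lamps-ccs.com/
```

The `Location` header shows the only job of the port-80 listener here: redirect every plain-HTTP request to `https://lamps-ccs.com/`.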
Port 443
HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 62186
Connection: keep-alive
Date: Sat, 10 Jan 2026 19:25:49 GMT
Last-Modified: Mon, 07 Oct 2024 01:35:36 GMT
x-amz-server-side-encryption: AES256
Accept-Ranges: bytes
Server: AmazonS3
ETag: 4371e794679132b4515fa44ce4f095fb
Via: 1.1 a454a679efa1e16833b77cb6af61e11c.cloudfront.net (CloudFront), 1.1 5f7d374d92b73172fce43b7879076d1c.cloudfront.net (CloudFront)
X-Cache: Miss from cloudfront
X-Amz-Cf-Pop: HIO52-P4
X-Amz-Cf-Id: GX6FJfJTuakPHgoPylcDx04ZV-oOiEkscg9jGBFxhKcYT9gb7OxsdQ

Response body (the CCS-LAMPS 2024 workshop site, rendered as text):

Title: CCS-LAMPS24
Description: Large AI Systems and Models with Privacy and Safety Analysis Workshop (LAMPS)
Keywords: Deep Learning, Machine Learning, Security, Adversarial Examples, Attacks, Intrusion Detection, Program Analysis, Malware, Botnets, Vulnerability, Phishing, Forensics, Neural Networks, Recurrent Networks, Generative Adversarial Networks, AISec
Author: CCS-LAMPS Chairs
Generator: HTML Tidy for HTML5 for Apple macOS, version 5.6.0
Resources: Bootstrap 5.3.0-alpha3 (jsDelivr CDN), Font Awesome, Google Fonts (K2D), ./css/agency.css, Google Analytics tag G-QDQDHN7F62

Navigation: Home | Keynotes | Programme | Accepted Papers | Call for Papers | Committee | ACM CCS (https://www.sigsac.org/ccs/CCS2024/)

1st ACM Workshop on Large AI Systems and Models with Privacy and Safety Analysis
October 14, 2024, Salt Lake City, U.S.A.
Co-located with the 31st ACM Conference on Computer and Communications Security

Keynotes

Keynote 1: Progress and Challenges in Detecting Generative "AI Art"
Ben Y. Zhao, Professor, University of Chicago (https://people.cs.uchicago.edu/~ravenben/)

Bio: Prof. Zhao is a Neubauer Professor of Computer Science at the University of Chicago. Over the years, he has worked on a number of areas: P2P networks, online social networks, cognitive radios/dynamic spectrum, graph mining and modeling, and user behavior analysis. Since 2016, he has focused on security and privacy in machine learning and wearable systems. Since 2022, he has worked primarily on adversarial machine learning and on tools to mitigate the harms of generative AI models against human creatives in different industries. His primary research venues are CCS/Oakland/USENIX Security. In the past, he has published at a range of top conferences, including NeurIPS/CVPR, IMC/WWW, CHI/CSCW, and SIGCOMM/NSDI/MobiCom.

Abstract: Generative AI models are adept at producing images that mimic visual art created by human artists. Beyond mimicking individual artists and their styles, text-to-image diffusion models are often used to commit fraud against individuals and commercial entities interested in licensing or purchasing human art. In this talk, I will discuss the challenges of distinguishing generative AI images from visual art produced by human artists, and why it is an important problem to solve for both human artists and AI model trainers. I will present our recent results from a large experimental study evaluating the practical efficacy of different genAI image detectors, including supervised classifiers, diffusion-specific detectors, and humans (via a user study involving more than 4,000 artists). We find that there are no ideal solutions, and perhaps a hybrid of artists and ML models is our best hope moving forward.

Keynote 2: Emergent Threats in the Era of Large Language Models
Chaowei Xiao, Assistant Professor, University of Wisconsin-Madison (https://xiaocw11.github.io/)

Bio: Dr. Chaowei Xiao is an Assistant Professor at the University of Wisconsin-Madison and a research scientist at NVIDIA Research. He is currently most interested in exploring safety and security problems in (multimodal) large language models and systems, as well as in studying the role of LLMs in different application domains. He has received multiple Best Paper Awards at top-tier security and systems conferences such as USENIX Security, MobiCom, and ESWN, along with the ACM Gordon Bell Special Prize for COVID research and an Amazon Faculty Award. His research has been featured in multiple media outlets, including Nature, Wired, Fortune, and The New York Times. One of Dr. Xiao's research outputs is also on display at the London Science Museum.

Abstract: In recent years, Large Language Models (LLMs) have garnered significant attention for their extraordinary ability to comprehend and process a wide range of textual information. Despite their vast potential, they still face safety challenges that hinder their practical application. In this talk, our journey starts by exploring two safety challenges of existing LLMs: jailbreak attacks and prompt injection attacks. I will introduce the principles for red-teaming LLMs by automatically generating jailbreak and prompt injection threats. I will then discuss mitigation strategies that can be employed to defend against such attacks, at the alignment, inference, and system stages.

[Commented out in the page source, apparently carried over from a previous edition:]

Programme (commented out; times on CET, UTC+1):
09:00-09:15  Opening and Welcome
09:15-10:00  Keynote 1: When decentralization, security, and privacy are not friends. Carmela Troncoso, Associate Professor @ EPFL
10:00-10:20  Coffee break
10:20-11:00  Spotlights:
  - When Side-Channel Attacks Break the Black-Box Property of Embedded Artificial Intelligence. Benoit Coqueret (Univ. Rennes, Inria), Mathieu Carbone (Thales ITSEF), Olivier Sentieys (Univ. Rennes, Inria), Gabriel Zaid (Thales ITSEF)
  - Lookin' Out My Backdoor! Investigating Backdooring Attacks Against DL-driven Malware Detectors. Mario D'Onghia (Politecnico di Milano), Federico Di Cesare (Politecnico di Milano), Luigi Gallo (Cyber Security Lab, Telecom Italia), Michele Carminati (Politecnico di Milano), Mario Polino (Politecnico di Milano), Stefano Zanero (Politecnico di Milano)
  - Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection. Sahar Abdelnabi (CISPA Helmholtz Center for Information Security), Kai Greshake (Saarland University; sequire technology GmbH), Shailesh Mishra (Saarland University), Christoph Endres (sequire technology GmbH), Thorsten Holz (CISPA), Mario Fritz (CISPA)
  - Canaries and Whistles: Resilient Drone Communication Networks with (or without) Deep Reinforcement Learning. Chris Hicks (The Alan Turing Institute), Vasilios Mavroudis (The Alan Turing Institute), Myles Foley (Imperial College London), Thomas Davies (The Alan Turing Institute), Kate Highnam (Imperial College London), Tim Watson (The Alan Turing Institute)
11:00-12:00  Poster session 1
12:00-13:30  Lunch
13:30-14:15  Keynote 2: Emerging challenges in securing frontier AI systems. Mikel Rodriguez, AI Red Teaming @ Google DeepMind
14:15-14:45  Break
14:45-15:30  Keynote 3: Trustworthy AI and A Cybersecurity Perspective on Large Language Models. Mario Fritz, Faculty @ CISPA Helmholtz Center for Information Security
15:30-16:30  Poster session 2
16:30-16:45  Closing remarks

Accepted Papers (commented out; proceedings at https://dl.acm.org/doi/proceedings/10.1145/3605764):

Privacy-Preserving Machine Learning (Poster session 1)
- Differentially Private Logistic Regression with Sparse Solutions. Amol Khanna (Booz Allen Hamilton), Fred Lu (Booz Allen Hamilton; University of Maryland, Baltimore County), Edward Raff (Booz Allen Hamilton; UMBC), Brian Testa (Air Force Research Laboratory)
- Equivariant Differentially Private Deep Learning: Why DP-SGD Needs Sparser Models. Florian A. Hölzl, Daniel Rueckert, and Georgios Kaissis (all Artificial Intelligence in Medicine, Technical University of Munich)
- Probing the Transition to Dataset-Level Privacy in ML Models Using an Output-Specific and Data-Resolved Privacy Profile. Tyler LeBlond, Joseph Munoz, Fred Lu, Maya Fuchs, Elliot Zaresky-Williams, Edward Raff (all Booz Allen Hamilton), Brian Testa (Air Force Research Laboratory)
- Information Leakage from Data Updates in Machine Learning Models. Tian Hui, Farhad Farokhi, Olga Ohrimenko (all The University of Melbourne)
- Membership Inference Attacks Against Semantic Segmentation Models. Tomas Chobola (Helmholtz AI), Dmitrii Usynin (Imperial College London; TUM), Georgios Kaissis (TUM; Helmholtz Zentrum München; Imperial College London)
- Utility-preserving Federated Learning. Reza Nasirigerdeh, Daniel Rueckert, Georgios Kaissis (all Technical University of Munich)

Machine Learning for Cybersecurity (Poster session 1)
- Certified Robustness of Static Deep Learning-based Malware Detectors against Patch and Append Attacks. Daniel Gibert (CeADAR, University College Dublin), Giulio Zizzo (IBM Research Europe), Quan Le (CeADAR, University College Dublin)
- AVScan2Vec: Feature Learning on Antivirus Scan Data for Production-Scale Malware Corpora. Robert J. Joyce (Booz Allen Hamilton; University of Maryland, Baltimore County), Tirth Patel (UMBC), Charles Nicholas (UMBC), Edward Raff (Booz Allen Hamilton; UMBC)
- Drift Forensics of Malware Classifiers. Theo Chow (King's College London), Zeliang Kan (King's College London), Lorenz Linhardt (Technische Universität Berlin), Lorenzo Cavallaro (University College London), Daniel Arp (TU Berlin), Fabio Pierazzi (King's College London)
- Lookin' Out My Backdoor! Investigating Backdooring Attacks Against DL-driven Malware Detectors. (Authors as listed under Spotlights above.)
- Reward Shaping for Happier Autonomous Cyber Security Agents. Elizabeth Bates, Vasilios Mavroudis, Chris Hicks (all The Alan Turing Institute)
- Raze to the Ground: Query-Efficient Adversarial HTML Attacks on Machine-Learning Phishing Webpage Detectors. Biagio Montaruli (SAP Security Research; EURECOM), Luca Demetrio (Università degli Studi di Genova), Maura Pintor (University of Cagliari), Battista Biggio (University of Cagliari), Luca Compagna (SAP Security Research), Davide Balzarotti (EURECOM)

Machine Learning Security (Poster session 2)
- Certifiers Make Neural Networks Vulnerable to Availability Attacks. Tobias Lorenz (CISPA Helmholtz Center for Information Security), Marta Kwiatkowska (University of Oxford), Mario Fritz (CISPA)
- Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection. (Authors as listed under Spotlights above.)
- Canaries and Whistles: Resilient Drone Communication Networks with (or without) Deep Reinforcement Learning. (Authors as listed under Spotlights above.)
- The Adversarial Implications of Variable-Time Inference. Dudi Biton (Ben Gurion University of the Negev), Aditi Misra (University of Toronto), Efrat Levy (BGU), Jaidip Kotak (BGU), Ron Bitton (BGU), Roei Schuster (Wild Moose), Nicolas Papernot (University of Toronto and Vector Institute), Yuval Elovici (BGU), Ben Nassi (Cornell Tech)
- Dictionary Attack on IMU-based Gait Authentication. Rajesh Kumar (Bucknell University), Can Isik (Syracuse University), Chilukuri Mohan (Syracuse University)
- When Side-Channel Attacks Break the Black-Box Property of Embedded Artificial Intelligence. (Authors as listed under Spotlights above.)
- Task-Agnostic Safety for Reinforcement Learning. Md Asifur Rahman, Sarra Alqahtani (both Wake Forest University)
- Broken Promises: Measuring Confounding Effects in Learning-based Vulnerability Discovery. Erik Imgrund (SAP Security Research), Tom Ganz (SAP Security Research), Martin Härterich (SAP Security Research), Niklas Risse (Max Planck Institute for Security and Privacy), Lukas Pirch (Technische Universität Berlin), Konrad Rieck (TU Berlin)
- Measuring Equality in Machine Learning Security Defenses: A Case Study in Speech Recognition. Luke E. Richards (University of Maryland, Baltimore County), Edward Raff (UMBC; Booz Allen Hamilton), Cynthia Matuszek (UMBC)

Program - Oct 14, 2024 (Full-day Workshop)

All times are in the local time zone.

9:20-9:30    Opening Remarks
9:30-10:30   Keynote Speech 1: Ben Zhao (Professor, University of Chicago)
10:30-11:00  Morning Coffee Break
11:00-11:30  Session I: Cybersecurity Threat Intelligence
  11:00  ThreatKG: An AI-Powered System for Automated Online Threat Intelligence. Peng Gao (Virginia Tech), Xiaoyuan Liu (University of California, Berkeley), Edward Choi (UC Berkeley), Sibo Ma (UC Berkeley), Xinyu Yang (Virginia Tech), and Dawn Song (UC Berkeley)
  11:10  Mitigating Unauthorized Speech Synthesis for Voice-Activated Systems. Zhisheng Zhang (Beijing University of Posts and Telecommunications), Qianyi Yang (BUPT), Derui Wang (CSIRO's Data61), Pengyang Huang (BUPT), Yuxin Cao (National University of Singapore), Kai Ye (The University of Hong Kong), and Jie Hao (BUPT)
  11:20  How to Efficiently Manage Critical Infrastructure Vulnerabilities? Toward Large Code-graph Models. Hongying Zhang, Gaolei Li, Shenghong Li, Hongfu Liu, Shuo Wang, and Jianhua Li (all Shanghai Jiao Tong University)
11:30-12:00  Session II: Adversarial Attacks and Robustness
  11:30  Adversarial Attacks to Multi-Modal Models. Zhihao Dou (Duke University), Xin Hu (The University of Tokyo), Haibo Yang (Rochester Institute of Technology), Zhuqing Liu (The Ohio State University), and Minghong Fang (Duke University)
  11:40  TrojFair: Trojan Fairness Attacks. Jiaqi Xue (University of Central Florida), Mengxin Zheng (UCF), Yi Sheng (George Mason University), Lei Yang (George Mason University), Qian Lou (UCF), and Lei Jiang (Indiana University Bloomington)
  11:50  PromptRobust: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts. Kaijie Zhu (Institute of Automation, Chinese Academy of Sciences), Jindong Wang (Microsoft Research), Jiaheng Zhou (Institute of Automation, CAS), Zichen Wang (Institute of Automation, CAS), Hao Chen (Carnegie Mellon University), Yidong Wang (Peking University), Linyi Yang (Westlake University), Wei Ye (Peking University), Yue Zhang (Westlake University), Neil Gong (Duke University), and Xing Xie (Microsoft)
12:00-14:00  Lunch
14:00-15:00  Keynote Speech 2: Chaowei Xiao (NVIDIA and University of Wisconsin, Madison)
15:00-15:30  Afternoon Coffee Break
15:30-16:00  Session III: Large Language Model Security
  15:30  Have You Merged My Model? On The Robustness of Merged Machine Learning Models. Tianshuo Cong (Tsinghua University), Delong Ran (Tsinghua), Zesen Liu (Xidian University), Xinlei He (The Hong Kong University of Science and Technology (Guangzhou)), Jinyuan Liu (Tsinghua), Yichen Gong (Tsinghua), Qi Li (Tsinghua), Anyu Wang (Tsinghua), and Xiaoyun Wang (Tsinghua)
  15:40  Prompter Says: A Linguistic Approach to Understanding and Detecting Jailbreak Attacks Against Large-Language Models. Dylan Lee, Shaoyuan Xie, Shagoto Rahman, Kenneth Pat, David Lee, and Qi Alfred Chen (all University of California, Irvine)
  15:50  Towards Large Language Model (LLM) Forensics Using Feature Extraction. Maxim Chernyshev, Zubair Baig, and Robin Ram Mohan Doss (all Deakin University)
16:00-16:20  Session IV: Secure Learning and Model Attribution
  16:00  CryptoTrain: Fast Secure Training on Encrypted Data. Jiaqi Xue (University of Central Florida), Yancheng Zhang (UCF), Yanshan Wang (University of Pittsburgh), Xueqiang Wang (UCF), Hao Zheng (UCF), and Qian Lou (UCF)
  16:10  Detection and Attribution of Diffusion Model of Character Animation Based on Spatio-Temporal Attention. Fazhong Liu, Yan Meng, Tian Dong, Guoxing Chen, and Haojin Zhu (all Shanghai Jiao Tong University)
16:20-16:30  Concluding Remarks

[Commented out in the page source: Keynote Speech 1 was previously listed as Zico Kolter (Professor, Carnegie Mellon University).]

Call for Papers

Important Dates
- Paper and talk submission deadline: July 18th, 2024, 11:59 PM (all deadlines are AoE, UTC-12)
- Acceptance notification: August 14th, 2024
- Camera-ready due: September 8th, 2024
- Workshop day: October 14th, 2024

Overview

As Large AI Systems and Models (LAMs) become increasingly pivotal in a wide array of applications, their potential impact on the privacy and cybersecurity of critical infrastructure becomes a pressing concern. LAMPS is dedicated to addressing these unique challenges, fostering a dialogue on the latest advancements and ethical considerations in enhancing the privacy and cybersecurity of LAMs, particularly in the context of critical infrastructure protection.

LAMPS will bring together global experts to dissect the nuanced privacy and cybersecurity challenges posed by LAMs, especially in critical infrastructure sectors. The workshop will serve as a platform to unveil novel techniques, share best practices, and chart the course for future research, with a special emphasis on the delicate balance between advancing AI technologies and securing critical digital and physical systems.

Topics of Interest

Topics of interest include (but are not limited to):

Secure Large AI Systems and Models for Critical Infrastructure
- AI-Enhanced Threat Intelligence and Detection
- Automated Security Orchestration and Incident Response
- Large AI Models in Vulnerability Assessment and Penetration Testing
- AI-Driven Network Security Management
- AI-Enabled Security Awareness and Education
- Collaborative AI for Global Cyber Threat Intelligence Sharing
- Regulatory Compliance and AI in Cybersecurity

Large AI Systems and Models Privacy and Security Vulnerabilities
- Advanced Threat Landscape
- Holistic Security and Privacy Frameworks
- Innovations in Privacy Preservation
- Secure Computation in AI

Data Anonymization and Synthetic Data
- Advancements in Data Protection
- Cross-Border Data Flow and Cooperation
- Intellectual Property Protection
- Combatting Deepfakes

Human-Centric Large AI Systems and Models
- User Vulnerability and Defense Mechanisms
- Equity and Inclusivity in AI
- Participative Large AI Governance
- Enhancing Explainability and Trust
- Designing for Security and Usability
- Ethics and Decision-Making in AI
- Frameworks for Responsible AI Governance

Submission Guidelines

Submitted papers must not substantially overlap with papers that have been published or are simultaneously submitted to a journal or a conference with proceedings. Short submissions should be at most 4 pages in the ACM double-column format. Full submissions should be at most 10 pages in the ACM double-column format, excluding well-marked appendices, and at most 12 pages in total. Systematization of Knowledge (SoK) submissions may be at most 15 pages, excluding well-marked appendices, and at most 17 pages in total. Submissions are not required to be anonymized.

Submission Site

Submission link: https://ccs24-lamps.hotcrp.com

Only PDF files will be accepted. Submissions not meeting these guidelines risk rejection without consideration of their merits. Authors of accepted papers must guarantee that one of the authors will register and present the paper at the workshop. Proceedings of the workshop will be available on a CD to the workshop attendees and will become part of the ACM Digital Library.

The archival papers will be included in the workshop proceedings. Due to time constraints, accepted papers will be selected for presentation as either a talk or a poster based on their review scores and novelty. Nonetheless, all accepted papers should be considered of equal importance.

Authors are responsible for obtaining appropriate publication clearances. Attendance and presentation by at least one author of each accepted paper at the workshop are mandatory for the paper to be included in the proceedings.

For any questions, please contact one of the workshop organizers at jason.xue@data61.csiro.au or wangshuosj@sjtu.edu.cn.

[Commented out in the page source: a Best Paper Award section ("We will award the best paper, selected by the reviewers among all the submitted papers.").]

Committee

Workshop Chairs
- Bo Li, University of Chicago, USA (https://aisecure.github.io/)
- Wenyuan Xu, Zhejiang University, China (https://sites.google.com/view/xuwenyuan/main)
- Jieshan Chen, CSIRO's Data61, Australia (https://chenjshnn.github.io/)
- Yang Zhang, CISPA, Germany (https://yangzhangalmo.github.io/)
- Jason Xue, CSIRO's Data61, Australia (https://people.csiro.au/x/j/jason-xue)
- Shuo Wang, Shanghai Jiao Tong University, China (https://www.wang-shuo.com/)
- Guangdong Bai, The University of Queensland, Australia (https://baigd.github.io/)
- Xingliang Yuan, The University of Melbourne, Australia (https://xyuancs.github.io/)

Program Committee
- Chong Xiang, Princeton University, United States of America
- Derui Wang, CSIRO's Data61, Australia
- Giovanni Apruzzese, University of Liechtenstein, Liechtenstein
- Jamie Hayes, Google DeepMind, United Kingdom
- Jinyuan Jia, The Pennsylvania State University, United States of America
- Konrad Rieck, TU Berlin, Germany
- Kristen Moore, CSIRO's Data61, Australia
- Mainack Mondal, Indian Institute of Technology, Kharagpur, India
- Mathias Humbert, University of Lausanne, Switzerland

(The Program Committee table is truncated here in the captured response.)
td classtg-0pky>Minghong/td> td classtg-0pky>Fang/td> td classtg-0pky>Duke University/td> td classtg-0pky>United States of America/td> /tr> tr> td classtg-btxf>Peng/td> td classtg-btxf>Gao/td> td classtg-btxf>Virginia Tech/td> td classtg-btxf>United States of America/td> /tr> tr> td classtg-0pky>Pin-Yu/td> td classtg-0pky>Chen/td> td classtg-0pky>IBM Research/td> td classtg-0pky>United States of America/td> /tr> tr> td classtg-btxf>Sagar/td> td classtg-btxf>Samtani/td> td classtg-btxf>Indiana University/td> td classtg-btxf>United States of America/td> /tr> tr> td classtg-0pky>Sai Teja/td> td classtg-0pky>Peddinti/td> td classtg-0pky>Google/td> td classtg-0pky>United States of America/td> /tr> tr> td classtg-btxf>Shiqing/td> td classtg-btxf>Ma/td> td classtg-btxf>University of Massachusetts Amherst/td> td classtg-btxf>United States of America/td> /tr> tr> td classtg-0pky>Shuang/td> td classtg-0pky>Hao/td> td classtg-0pky>University of Texas at Dallas/td> td classtg-0pky>United States of America/td> /tr> tr> td classtg-btxf>Stjepan/td> td classtg-btxf>Picek/td> td classtg-btxf>Radboud University/td> td classtg-btxf>Netherlands/td> /tr> tr> td classtg-0pky>Tian/td> td classtg-0pky>Dong/td> td classtg-0pky>Shanghai Jiao Tong University/td> td classtg-0pky>China/td> /tr> tr> td classtg-btxf>Tianshuo/td> td classtg-btxf>Cong/td> td classtg-btxf>Tsinghua University/td> td classtg-btxf>China/td> /tr> tr> td classtg-0pky>Torsten/td> td classtg-0pky>Krauß/td> td classtg-0pky>University of Wuerzburg/td> td classtg-0pky>Germany/td> /tr> tr> td classtg-btxf>Varun/td> td classtg-btxf>Chandrasekaran/td> td classtg-btxf>University of Illinois Urbana-Champaign/td> td classtg-btxf>United States of America/td> /tr> tr> td classtg-0pky>Xiaoning/td> td classtg-0pky>Du/td> td classtg-0pky>Monash University/td> td classtg-0pky>Australia/td> /tr> tr> td classtg-btxf>Xinlei/td> td classtg-btxf>He/td> td classtg-btxf>The Hong Kong University of Science and Technology (Guangzhou)/td> td 
classtg-btxf>China/td> /tr> tr> td classtg-0pky>Yanjiao/td> td classtg-0pky>Chen/td> td classtg-0pky>Zhejiang University/td> td classtg-0pky>China/td> /tr> tr> td classtg-btxf>Yinzhi/td> td classtg-btxf>Cao/td> td classtg-btxf>Johns Hopkins University/td> td classtg-btxf>United States of America/td> /tr>/tbody>/table> !-- p>We are currently looking for reviewers. Contact a hrefmailto:TBD@xxx.xx>TBD@xxx.xx/a> if you want to be involved. /p> --> /div> /div> /div> /section> !-- Footer --> footer> div classcontainer> div classrow> div classcol-md-6> span classcopyright>img srclogo.jpg altLAMPS width42 height42>. Copyright © CCS-LAMPS 2024 img srcCCS-logo.png altLAMPS width42 height42>/span> br> /div> !-- div classcol-md-6> Support kindly provided by the a hrefhttps://www.unica.it/unica/en/homepage.page/ target_blank>University of Cagliari/a> and by the a hrefhttps://elsa-ai.eu/ target_blank> ELSA project /a> . br> img src./temp_files/unica_800_black.png height50em stylemargin: 10px;> img src./temp_files/elsa_logo_RGB_twocolor.jpg height50em stylemargin: 10px;> /div> --> /div> /div> /footer> !-- script src./temp_files/jquery.slim.min.js.download integritysha384-DfXdz2htPH0lsSSs5nCTpuj/zy4C+OGpamoFVy38MVBnE+IbbVYUew+OrCXaRkfj crossoriginanonymous>/script> script src./temp_files/bootstrap.bundle.min.js.download integritysha384-ENjdO4Dr2bkBIFxQpeoTz1HIcje39Wm4jDKdf19U8gI4ddQ3GYNS7NTKfAdVQSZe crossoriginanonymous>/script> script src./temp_files/agency.min.js.download>/script> -->/body>/html>