Preamble is a U.S.-based artificial intelligence security (AI) startup company, founded in 2021, notable for discovering prompt injection attacks in large language models. Preamble is known for their contributions to identifying and mitigating prompt injection attacks in large language models (LLMs).

Preamble, C-corp
Company typePrivate
IndustryArtificial intelligence
Founded2021; 3 years ago (2021)
Founders
  • Jonathan Cefalu
  • Jeremy McHugh
HeadquartersPittsburgh, Pennsylvania, U.S.
Websitepreamble.com

Notability

edit

Preamble is particularly notable for its early discovery of vulnerabilities in widely used AI models, such as GPT-3, with a primary discovery of the prompt injection attacks, a critical security issue for AI systems.[1][2][3] These findings were first reported privately to OpenAI in 2022 and have since been the subject of numerous studies in the field.

Preamble has entered a partnership with nvidia boost AI safety and risk mitigation for enterprises.[4] They are a part of the Air Force security program as a notable Pittsburgh AI hub[5]

Research

edit

Preamble's research revolves around artificial intelligence security, AI ethics, privacy and policy regulations. In May 2022, Preamble's researchers discovered critical vulnerabilities in GPT-3, which allowed malicious actors to manipulate the model’s outputs through prompt injections.[6][3] The resulting paper investigated the vulnerability of large pre-trained language models (PLMs), such as GPT-3 and BERT, to adversarial attacks. These attacks are designed to manipulate the models' outputs by introducing subtle perturbations in the input text, leading to incorrect or harmful outputs, such as generating hate speech or leaking sensitive information.[citation needed]

References

edit
  1. ^ Kosinski, Matthew; Forrest, Amber. "What is a prompt injection attack?". IBM.com.
  2. ^ Rossi, Sippo; Michel, Alisia Marianne; Mukkamala, Raghava Rao; Thatcher, Jason Bennett (2024-01-31). "An Early Categorization of Prompt Injection Attacks on Large Language Models". arXiv:2402.00898.
  3. ^ a b Rao, Abhinav Sukumar; Naik, Atharva Roshan; Vashistha, Sachin; Aditya, Somak; Choudhury, Monojit (2024). "Tricking LLMs into Disobedience: Formalizing, Analyzing, and Detecting Jailbreaks". In Calzolari, Nicoletta; Kan, Min-Yen; Hoste, Veronique; Lenci, Alessandro; Sakti, Sakriani; Xue, Nianwen (eds.). Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) (PDF). Torino, Italia: ELRA and ICCL. pp. 16802–16830.
  4. ^ Doughty, Nate (2023-08-08). "Nvidia selects AI safety startup Preamble for its business development program". Pittsburgh Business Times. Retrieved 2024-08-15.
  5. ^ Dabkowski, Jake (2024-05-17). "Pittsburgh-area companies aim to make AI for businesses more secure". Pittsburgh Business Times. Retrieved 2024-08-15.
  6. ^ Rossi, Sippo; Michel, Alisia Marianne; Mukkamala, Raghava Rao; Thatcher, Jason Bennett (2024-01-31). "An Early Categorization of Prompt Injection Attacks on Large Language Models". arXiv:2402.00898.
edit