Azure Open AI is widely used in industry but there are number of security aspects that must be taken into account when using the technology. Luckily for us, Audrey Long, a Software Engineer at Microsoft, security expert and renowned conference speaker, gives us insights into securing LLMs and provides various tips, tricks and tools to help developers use these models safely in their applications.  Media file: https://azpodcast.blob.core.windows.net/episodes/Episode502.mp3 YouTube: https://youtu.be/64Achcz97PI Resources: AI Tooling: Azure AI Tooling Announcing new tools in Azure AI to help you build more secure and trustworthy generative AI applications | Microsoft Azure Blog Prompt Shields to detect and block prompt injection attacks, including a new model for identifying indirect prompt attacks before they impact your model, coming soon and now available in preview in Azure AI Content Safety. Groundedness detection to detect “hallucinations” in model outputs, coming soon. Safety system messagesto steer your model’s behavior toward safe, responsible outputs, coming soon. Safety evaluations to assess an application’s vulnerability to jailbreak attacks and to generating content risks, now available in preview.  Risk and safety monitoring to understand what model inputs, outputs, and end users are triggering content filters to inform mitigations, coming soon, and now available in preview in Azure OpenAI Service. AI Defender for Cloud AI Security Posture Management AI security posture management (Preview) - Microsoft Defender for Cloud | Microsoft Learn AI Workloads Enable threat protection for AI workloads (preview) - Microsoft Defender for Cloud | Microsoft Learn     AI Red Teaming Tool Announcing Microsoft’s open automation framework to red team generative AI Systems | Microsoft Security Blog AI Development Considerations:  AI Assessment from Microsoft Conduct an AI assessment using Microsoft’s Responsible AI Impact Assessment Template Responsible AI Impact Assessment Guide for detailed instructions Microsoft Responsible AI Processes Follow Microsoft’s Responsible AI principles: fairness, reliability, safety, privacy, security, inclusiveness, transparency, and accountability Utilize tools like the Responsible AI Dashboard for continuous monitoring and improvement Define Use Case and Model Architecture Determine the specific use case for your LLM Design the model architecture, focusing on the Transformer architecture  Content Filtering System How to use content filters (preview) with Azure OpenAI Service - Azure OpenAI | Microsoft Learn Azure OpenAI Service includes a content filtering system that works alongside core models, including DALL-E image generation models. This system uses an ensemble of classification models to detect and prevent harmful content in both input prompts and output completions The filtering system covers four main categories: hate, sexual, violence, and self-harm Each category is assessed at four severity levels: safe, low, medium, and high Additional classifiers are available for detecting jailbreak risks and known content for text and code. JailBreaking Content Filters Red Teaming the LLM Plan and conduct red teaming exercises to identify potential vulnerabilities Use diverse red teamers to simulate adversarial attacks and test the model’s robustness Microsoft AI Red Team building future of safer AI | Microsoft Security Blog Create a Threat Model with OWASP Top 10 owasp.org/www-project-top-10-for-large-language-model-applications/assets/PDF/OWASP-Top-10-for-LLMs-2023-slides-v1_1.pdf Develop a threat model and implement mitigations based on identified risks  Other updates: Los Angeles Azure Extended Zones Carbon Optimization App Config Ref GA OS SKU In-Place Migration for AKS Operator CRD Support with Azure Monitor Managed Service Azure API Center Visual Studio Code Extension Pre-release Azure API Management WordPress Plugin Announcing a New OpenAI Feature for Developers on Azure
Azure Open AI is widely used in industry but there are number of security aspects that must be taken into account when using the technology. Luckily for us, Audrey Long, a Software Engineer at Microsoft, security expert and renowned conference speaker, gives us insights into securing LLMs and provides various tips, tricks and tools to help developers use these models safely in their applications.  Media file: https://azpodcast.blob.core.windows.net/episodes/Episode502.mp3 YouTube: https://youtu.be/64Achcz97PI Resources: AI Tooling: Azure AI Tooling Announcing new tools in Azure AI to help you build more secure and trustworthy generative AI applications | Microsoft Azure Blog Prompt Shields to detect and block prompt injection attacks, including a new model for identifying indirect prompt attacks before they impact your model, coming soon and now available in preview in Azure AI Content Safety. Groundedness detection to detect “hallucinations” in model outputs, coming soon. Safety system messagesto steer your model’s behavior toward safe, responsible outputs, coming soon. Safety evaluations to assess an application’s vulnerability to jailbreak attacks and to generating content risks, now available in preview.  Risk and safety monitoring to understand what model inputs, outputs, and end users are triggering content filters to inform mitigations, coming soon, and now available in preview in Azure OpenAI Service. AI Defender for Cloud AI Security Posture Management AI security posture management (Preview) - Microsoft Defender for Cloud | Microsoft Learn AI Workloads Enable threat protection for AI workloads (preview) - Microsoft Defender for Cloud | Microsoft Learn     AI Red Teaming Tool Announcing Microsoft’s open automation framework to red team generative AI Systems | Microsoft Security Blog AI Development Considerations:  AI Assessment from Microsoft Conduct an AI assessment using Microsoft’s Responsible AI Impact Assessment Template Responsible AI Impact Assessment Guide for detailed instructions Microsoft Responsible AI Processes Follow Microsoft’s Responsible AI principles: fairness, reliability, safety, privacy, security, inclusiveness, transparency, and accountability Utilize tools like the Responsible AI Dashboard for continuous monitoring and improvement Define Use Case and Model Architecture Determine the specific use case for your LLM Design the model architecture, focusing on the Transformer architecture  Content Filtering System How to use content filters (preview) with Azure OpenAI Service - Azure OpenAI | Microsoft Learn Azure OpenAI Service includes a content filtering system that works alongside core models, including DALL-E image generation models. This system uses an ensemble of classification models to detect and prevent harmful content in both input prompts and output completions The filtering system covers four main categories: hate, sexual, violence, and self-harm Each category is assessed at four severity levels: safe, low, medium, and high Additional classifiers are available for detecting jailbreak risks and known content for text and code. JailBreaking Content Filters Red Teaming the LLM Plan and conduct red teaming exercises to identify potential vulnerabilities Use diverse red teamers to simulate adversarial attacks and test the model’s robustness Microsoft AI Red Team building future of safer AI | Microsoft Security Blog Create a Threat Model with OWASP Top 10 owasp.org/www-project-top-10-for-large-language-model-applications/assets/PDF/OWASP-Top-10-for-LLMs-2023-slides-v1_1.pdf Develop a threat model and implement mitigations based on identified risks  Other updates: Los Angeles Azure Extended Zones Carbon Optimization App Config Ref GA OS SKU In-Place Migration for AKS Operator CRD Support with Azure Monitor Managed Service Azure API Center Visual Studio Code Extension Pre-release Azure API Management WordPress Plugin Announcing a New OpenAI Feature for Developers on Azure