Automate Routine Operational Workflows With Generative AI
GenAI has the potential to revolutionize how organizations approach enterprise security, compliance, identity, and management processes.
When you think about your day-to-day responsibilities across security, compliance, identity, and management, how much of that work follows a repeatable process? How much more efficient could you be if those processes were automated through generative artificial intelligence (GenAI)?
GenAI has the power to greatly streamline operational workflows and democratize knowledge across the entire security team, regardless of an analyst's experience level or familiarity with a specific technology or threat vector. Rather than manually researching information, security operations center (SOC) analysts can use natural language processing (NLP) embedded within GenAI models to ask questions and receive answers in a more natural format. NLP also gives GenAI the flexibility to "understand" what a user is asking and adapt to their style or preferences.
However, it's important to recognize that GenAI is not intended to replace human expertise. Rather, it should help analysts respond to threats more efficiently by assisting them with guided recommendations and best practices based on the organization's own security data, known threat intelligence, and existing processes. Here's how.
Establish Trust Through Transparency
Before a security, compliance, identity, or management workflow can be automated, teams first need to be confident that all of the information at their disposal is complete and accurate. Routine back-end work is an ideal candidate for automation because it is both predictable and easily verified. Rather than having analysts spend their time responding to simple help-desk tickets or writing incident reports, why not leverage NLP and GenAI to automate those tasks? This way, analysts can dedicate their time to more business-critical work.
For this to work effectively, GenAI models must be transparent. Analysts should be able to understand the sources that the AI model pulled from and easily validate that information to ensure the AI is providing accurate recommendations.
At Microsoft, we've defined, published, and implemented ethical principles to guide our AI work. And we've built out constantly improving engineering and governance systems to put these principles into practice. Transparency is one of the foundational principles of our Responsible AI framework, alongside fairness, reliability and safety, privacy and security, inclusiveness, and accountability.
How to Deploy GenAI in Your Environment
A number of repeatable, multistep processes across security, compliance, identity, and management are primed for automation.
For example, when investigating incidents, analysts often have to examine scripts, command-line arguments, or suspicious files that may have been executed on an endpoint. Rather than manually researching this information, analysts can simply provide the script they observed and ask the AI model to break it down using a prompt book, a collection of prompts put together to accomplish a specific security-related task. Each prompt book requires a specific input, such as a code snippet or a threat actor name.
The prompt book then explains the script step by step and asks the AI model whether the script may be malicious. From there, if any network indicators are present, they are correlated against threat intelligence, and the relevant results are summarized in the findings. The AI can also provide recommendations based on the script's actions and generate a report that summarizes the session for nontechnical audiences.
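To make the flow concrete, here is a minimal sketch of what a script-analysis prompt book might look like when its steps are chained together in code. It is written in Python and is purely illustrative: the ask_model and lookup_indicator helpers are hypothetical placeholders for whatever GenAI and threat-intelligence APIs your platform exposes, not actual product calls.

```python
import re

# Hypothetical helper wrapping your GenAI provider's API; illustrative only.
def ask_model(prompt: str) -> str:
    raise NotImplementedError("Connect this to your GenAI provider")

# Hypothetical lookup against your threat-intelligence source.
def lookup_indicator(indicator: str) -> str:
    raise NotImplementedError("Connect this to your threat-intel feed")

# Rough pattern for network indicators (URLs and IPv4 addresses) in a script.
IOC_PATTERN = re.compile(r"https?://\S+|(?:\d{1,3}\.){3}\d{1,3}")

def script_analysis_prompt_book(script: str) -> str:
    """Chain the steps described above: explain, assess, enrich, summarize."""
    explanation = ask_model(
        f"Explain step by step what this script does:\n{script}"
    )
    verdict = ask_model(
        f"Based on this explanation, could the script be malicious?\n{explanation}"
    )
    # Correlate any network indicators found in the script against threat intel.
    intel = [lookup_indicator(ioc) for ioc in IOC_PATTERN.findall(script)]
    return ask_model(
        "Write a short summary of this analysis for a nontechnical audience, "
        "including recommended next steps.\n"
        f"Explanation: {explanation}\nVerdict: {verdict}\nThreat intel: {intel}"
    )
```

Because each step's prompt and output are visible to the analyst, the process stays transparent and repeatable, which is what makes it useful for upskilling less experienced team members.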
Using AI this way provides two core benefits. First, the AI can automatically upskill users who may not understand the complexities of analyzing a script or file, walking them through a transparent, repeatable process. Second, it saves time by having the model assist with common follow-up actions, such as correlating any indicators to threat intelligence and writing a summary report.
Another GenAI use case is device management and compliance through conditional access policies. If devices don't meet specific policies, they are restricted from accessing company resources. This can lead to machines being locked out and users filing internal tickets to resolve the issue. In this scenario, IT operations or help-desk support staff can use natural language prompts to submit the unique device identifier and quickly understand the compliance status of the device. The AI can then explain in plain language why the device is noncompliant and provide step-by-step instructions for resolving the issue in the appropriate tool. This is powerful because someone without direct experience in a particular tool can now perform the task, avoiding the need to escalate.
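As a rough sketch of how that help-desk flow could be wired up, consider the Python outline below. Again, the get_device_compliance and ask_model functions are hypothetical stand-ins for your device-management and GenAI APIs; the point is the shape of the workflow, not a specific product integration.

```python
from dataclasses import dataclass, field

@dataclass
class ComplianceResult:
    device_id: str
    compliant: bool
    failed_policies: list[str] = field(default_factory=list)

# Hypothetical call into your device-management (MDM) tool's API.
def get_device_compliance(device_id: str) -> ComplianceResult:
    raise NotImplementedError("Connect this to your management tool")

# Hypothetical helper wrapping your GenAI provider's API.
def ask_model(prompt: str) -> str:
    raise NotImplementedError("Connect this to your GenAI provider")

def explain_noncompliance(device_id: str) -> str:
    """Look up a device's compliance status and translate any failures
    into plain-language, step-by-step remediation guidance."""
    result = get_device_compliance(device_id)
    if result.compliant:
        return f"Device {device_id} is compliant; no action needed."
    return ask_model(
        "A managed device failed these conditional access policy checks: "
        + "; ".join(result.failed_policies)
        + ". Explain in plain language why the device is noncompliant and "
        "give step-by-step instructions a help-desk technician can follow "
        "to bring it back into compliance."
    )
```

Hiding the tool-specific details behind a single natural-language answer is what lets a technician resolve the ticket without escalating to a specialist.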
Ultimately, GenAI has the potential to revolutionize the way we approach enterprise security, compliance, identity, and management processes. By extending our thinking on how to apply GenAI in operational roles, we can save practitioners time, equip them with new skills, and ensure their time is spent on what matters most.
— Read more Partner Perspectives from Microsoft Security