Microsoft’s top new security tool wants to help keep your shiny new generative AI systems safe for good


Microsoft has unveiled a new security tool aimed at keeping generative AI tools secure, and safe to use.
PyRIT, short for Python Risk Identification Toolkit for generative AI, will help developers respond to growing threats facing businesses of all sizes from criminals looking to take advantage of new tactics.
As most of you already know by now, generative AI tools such as ChatGPT are being used by cybercriminals to quickly create code for malware, to generate (and proofread) phishing emails, and more.
Manual work still needed
Developers of those tools responded by changing how they respond to different prompts and by somewhat limiting their capabilities, and Microsoft has now decided to take things a step further.
Over the past year, the company red teamed “several high-value generative AI systems” before they hit the market, and during that time, it started building one-off scripts. “As we red teamed different varieties of generative AI systems and probed for different risks, we added features that we found useful,” Microsoft explained. “Today, PyRIT is a reliable tool in the Microsoft AI Red Team’s arsenal.”
The Redmond software giant also stresses that PyRIT is by no means a replacement for manual red teaming of generative AI systems. Instead, the company hopes other red teams can use the tool to eliminate tedious tasks and speed things up.
“PyRIT shines light on the hot spots of where the risk could be, which the security professional then can incisively explore,” Microsoft further explains. “The security professional is always in control of the strategy and execution of the AI red team operation, and PyRIT provides the automation code to take the initial dataset of harmful prompts provided by the security professional, then uses the LLM endpoint to generate more harmful prompts.”
The tool is also adaptable, Microsoft stresses, as it’s capable of changing its tactics depending on the generative AI system’s response to previous queries. It then generates the next input, and continues the loop until the red team members are happy with the results.
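The loop Microsoft describes can be pictured as follows. This is only an illustrative sketch of that workflow, not actual PyRIT code: the functions `send_to_target`, `score_response`, and `mutate_prompt` are hypothetical stand-ins for the target model call, the risk scorer, and the LLM-driven prompt generator.

```python
# Illustrative sketch (NOT real PyRIT API) of the adaptive red-teaming loop:
# seed prompts go to the target model, responses are scored, and adapted
# follow-up prompts are generated until the run finishes.

def send_to_target(prompt: str) -> str:
    """Hypothetical stand-in for querying the generative AI system under test."""
    return f"response to: {prompt}"

def score_response(response: str) -> float:
    """Hypothetical stand-in scorer: rates how risky a response looks (0.0-1.0)."""
    return 1.0 if "harmful" in response else 0.0

def mutate_prompt(prompt: str, response: str) -> str:
    """Hypothetical stand-in for LLM-generated follow-up prompts."""
    return prompt + " (rephrased)"

def red_team_loop(seed_prompts, max_rounds=3, risk_threshold=0.5):
    """Probe the target in rounds; collect prompts that triggered risky output."""
    findings = []
    frontier = list(seed_prompts)
    for _ in range(max_rounds):
        next_frontier = []
        for prompt in frontier:
            response = send_to_target(prompt)
            if score_response(response) >= risk_threshold:
                # A "hot spot" for the human security professional to explore
                findings.append((prompt, response))
            else:
                # Adapt the prompt based on the previous response and retry
                next_frontier.append(mutate_prompt(prompt, response))
        frontier = next_frontier
    return findings
```

The key design point the article highlights is that the human stays in control: the automation only surfaces candidate hot spots, and the security professional decides strategy and follows up on each finding manually.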