How AI remediation will impact developers


Developers are under the gun to generate code faster than ever – with constant demands for greater functionality and seamless user experiences – leading to a general deprioritization of cybersecurity, with inevitable vulnerabilities making their way into software. These vulnerabilities include privilege escalations, backdoor credentials, injection exposure and unencrypted data.
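To make the injection class above concrete, here is a minimal Python sketch of the vulnerable pattern and its remediation – the kind of quick fix an AI remediation tool might suggest. The `find_user_unsafe` and `find_user_safe` helpers are hypothetical names for illustration only:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: attacker-controlled input is concatenated into the SQL text,
    # so a crafted username can rewrite the query itself
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_safe(conn, username):
    # Remediated: a bound parameter keeps the input out of the SQL grammar
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

With a payload like `' OR '1'='1`, the unsafe version returns every row in the table, while the parameterized version treats the payload as an ordinary (non-matching) name.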
This pain point has existed for decades; however, artificial intelligence (AI) is poised to lend considerable support. A growing number of developer teams are using AI remediation tools to suggest quick vulnerability fixes throughout the software development lifecycle (SDLC).
Such tools can bolster developers’ defensive capabilities, easing the path to a “security-first” mindset. But – like any new and potentially impactful innovation – they also raise issues that teams and organizations should explore. Here are three of them, with my initial perspectives in response:
Co-Founder and CEO, Secure Code Warrior.
No. If effectively deployed, the tools will allow developers to gain a greater awareness of the presence of vulnerabilities in their products, and then create the opportunity to eliminate them. Yet, while AI can detect some issues and inconsistencies, human insights are still required to understand how AI recommendations align with the larger context of a project. Elements like design and business logic flaws, insight into compliance requirements for specific data and systems, and developer-led threat modeling practices are all areas in which AI tooling will struggle to provide value.
In addition, teams cannot blindly trust the output of AI coding and remediation assistants. “Hallucinations,” or incorrect answers, are quite common, and typically delivered with a high degree of confidence. Humans must conduct a thorough vetting of all answers – especially those that are security-related – to ensure recommendations are valid, and to fine-tune code for safe integration. As this technology space matures and sees more widespread use, inevitable AI-borne threats will become a significant risk to plan for and mitigate.
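One lightweight way to operationalize that vetting is a gate that flags suggested changes containing patterns known to need human scrutiny before they are merged. The sketch below is purely illustrative – `review_suggestion` and the pattern list are hypothetical, and a real review process would combine static analysis, tests, and a developer who knows the codebase:

```python
import re

# Hypothetical reviewer gate: patterns in an AI-suggested diff that should
# trigger a closer human look. Illustrative only, not an exhaustive list.
RISKY_PATTERNS = {
    r"\beval\s*\(": "dynamic code execution",
    r"\bverify\s*=\s*False\b": "TLS verification disabled",
    r"['\"]\s*\+\s*\w+\s*\+\s*['\"]": "string-built query or command",
}

def review_suggestion(diff_text):
    """Return descriptions of risky patterns found in a suggested change."""
    return [desc for pat, desc in RISKY_PATTERNS.items()
            if re.search(pat, diff_text)]
```

A suggestion containing `requests.get(url, verify=False)`, for example, would be flagged for disabling TLS verification rather than accepted on faith.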
Ultimately, we will always need the “people perspective” to anticipate and protect code from today’s sophisticated attack techniques. AI coding assistants can lend a helping hand on quick fixes and serve as formidable pair programming partners, but humans must take on the “bigger picture” responsibilities of designating and enforcing security best practices. To that end, developers must also receive adequate and frequent training to ensure they are equipped to share the responsibility for security.
Training needs to evolve to encourage developers to pursue multiple pathways for educating themselves on AI remediation and other security-enhancing AI tools, as well as comprehensive, hands-on lessons in secure coding best practices.
It is certainly handy for developers to learn how to use tools that enhance efficiency and productivity, but it is critical that they understand how to deploy them responsibly within their tech stack. The question we always need to ask is: how can we ensure AI remediation tools are leveraged to help developers excel, rather than to overcompensate for a lack of foundational security training?
Developer training should also evolve by implementing standard measurements for developer progress, with benchmarks to compare over time how well they’re identifying and removing vulnerabilities, catching misconfigurations and reducing code-level weaknesses. If used properly, AI remediation tools will help developers become increasingly security-aware while reducing overall risk across the organization. Moreover, mastery of responsible AI remediation will be seen as a valuable business asset and enable developers to advance to new heights with team projects and responsibilities.
The software development landscape is changing all the time, but it is fair to say that the introduction of AI assistive tooling into the standard SDLC represents a rapid shift to essentially a new way of working for many software engineers. However, it perpetuates a familiar problem: poor coding patterns that can be exploited more quickly, and at greater volume, than at any other time in history.
In an environment operating in a constant state of flux, training must keep pace and remain as fresh and dynamic as possible. In an ideal scenario, developers would receive security training that mimics the issues faced in their workday, in the formats they find most engaging. Additionally, modern security training should place emphasis on secure design principles and account for the deep need to apply critical thinking to any AI output. That, for now, remains the domain of a highly skilled, security-aware developer who knows their codebase better than anyone else.
It all comes down to innovation. Teams will thrive with solutions that expand the visibility of issues and resolution capabilities during the SDLC, yet do not slow down the software development process.
AI cannot step in to “do security for developers,” just as it is not entirely replacing them in the coding process itself. No matter how many more AI advancements emerge, these tools will never deliver 100 percent foolproof answers about vulnerabilities and fixes. They can, however, perform critical roles within the greater picture of a total “security-first” culture – one that depends equally on technology and human perspectives. Once teams undergo required training and on-the-job knowledge-building to reach this state, they will indeed find themselves creating products swiftly, effectively and safely.
It must also be said that, similar to online resources like Stack Overflow or Reddit, if a programming language is less popular or common, this will be reflected in the availability of data and resources. You’re unlikely to struggle to find answers to security questions in Java or C, but data may be lacking or conspicuously absent when trying to solve complex bugs in COBOL or even Golang. LLMs are trained on publicly available data, and they are only as good as the dataset.
This is, again, a key area in which security-aware developers fill a void. Their own hands-on experience with more obscure languages – coupled with formal and continuous security learning outcomes – should help fill a distinct knowledge gap and reduce the risk of implementing AI output on faith alone.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro