The Changing Expectations for Developers in an AI-Coding Future
AI's proficiency at creating software code won't put developers out of a job, but the job will change to one focused on security, collaboration, and "mentoring" AI models.
COMMENTARY
The relentless rise of generative AI (GenAI) in software creation has foisted a new reality on software engineers. They face a future in which writing code — the traditional territory of software developers for as long as software has existed — will be diminished, if not expunged altogether. Though that future may feel uncertain, especially for those looking to enter the field, developers still have an integral place in it. It's just one that will likely involve less code writing and more security, mentorship, and collaboration.
Security-aware developers who demonstrate expertise in safely leveraging AI tools will eventually be able to take on new roles as AI guardians or mentors, working alongside AI to ensure that only safe code makes it into the codebase.
For their part, enterprises must support the developer cohort in becoming AI's responsible older sibling, a senior partner holding the reins of a very talented, if sometimes erratic, AI upstart. This will require full executive buy-in, careful integration of AI into the existing tech stack, and adoption of secure-by-design principles as part of a security-first culture that refuses to cut corners on the rollout.
And it will require precise training of developers in secure coding practices, along with opportunities to apply that training in their development environments.
Teams Are Using AI, but They Need to Understand the Risks
Since the arrival of tools built on large language models (LLMs), such as ChatGPT, GitHub Copilot, and OpenAI Codex, developers have shown enthusiasm for AI-assisted coding. A GitHub survey conducted in the spring of 2023 — seven months after ChatGPT's seismic first appearance — found 92% of developers already using AI tools both inside and outside of work. And 70% said the tools would improve code quality, accelerate completion times, and help them resolve issues more quickly.
However, significant security issues are being overlooked in the process. A more recent Snyk survey of software engineering and security team members and leaders, 96% of whom said they use AI coding tools, found that a large majority of developers ignore AI code security policies even though AI tools regularly generate insecure code.
Although nearly 76% of survey respondents said they think that AI code is more secure than code created by humans, more than half — 56.4% — nevertheless said AI code introduces security issues either sometimes or frequently. Eighty percent said they skip AI code security policies during development.
And because AI models, which are trained on vast amounts of existing code, are not adept at recognizing flaws in the code they draw on, those flaws can easily spread through the software ecosystem.
Organizations need a new approach if they are to reap the benefits in speed, efficiency, and code quality that AI offers while mitigating the risks of AI coding tools and avoiding overreliance on AI. They must establish security as a priority in code development, automate processes more thoroughly, and educate teams on using AI securely. For developers, this new approach means the focus of their jobs must shift.
What a Developer's Future Job Could Look Like
For all the benefits that AI coding tools bring, the bottom line is that they can't be trusted to work entirely on their own. Their propensity to reproduce insecure code without spotting its flaws, to introduce errors of their own, and to operate with no contextual awareness of how the code will function within the rest of the codebase requires that their work be carefully checked before it goes into production. The job of looking over an AI's shoulder will fall to developers.
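To make that review work concrete, consider a hypothetical example of the kind of flaw a security-aware developer would catch. The function names and schema below are invented for illustration; the insecure pattern, building SQL from interpolated user input, is one AI assistants have been known to reproduce, and the parameterized version is the fix a reviewer would insist on.

    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        # The kind of code an AI assistant might plausibly suggest:
        # user input interpolated straight into SQL -- an injection risk.
        query = f"SELECT id, email FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str):
        # What a security-aware reviewer would require instead:
        # a parameterized query, so the driver handles escaping.
        query = "SELECT id, email FROM users WHERE name = ?"
        return conn.execute(query, (username,)).fetchall()

Static analysis can flag the unsafe pattern automatically, but it still takes a developer with security context to judge whether the fix is right for the surrounding codebase.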
For companies that are serious about putting security first, this job dovetails with a broader focus on bringing security into the development process from the beginning. Whether companies see it as shifting left or simply starting left, developers must be trained in secure coding best practices.
Beyond writing secure code themselves and assessing the code output of AI tools, developers' jobs will change in other ways. As they accumulate knowledge about secure coding and AI's tendencies, they will be responsible for helping to instill secure coding best practices on an ongoing basis. They will train greener developers and their teams on how to leverage AI responsibly. Developers will also be involved in setting parameters for the data that enterprise AI tools train on, ensuring that training data is comprehensive with regard to the subject matter and as free from flaws and vulnerabilities as possible.
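What shaping that training data might look like in practice will vary widely by organization. As a minimal, hypothetical sketch (all names and patterns here are invented for illustration), a developer might help define an automated screen that keeps obviously unsafe snippets out of a fine-tuning corpus; a real pipeline would lean on proper static analysis rather than regexes alone.

    import re

    # Illustrative deny-list of patterns that often signal insecure code.
    INSECURE_PATTERNS = [
        re.compile(r"\beval\s*\("),                   # arbitrary code execution
        re.compile(r"verify\s*=\s*False"),            # disabled TLS verification
        re.compile(r"(password|secret)\s*=\s*['\"]"), # hard-coded credentials
    ]

    def screen_snippet(code: str) -> list[str]:
        """Return the insecure patterns found in a candidate training snippet."""
        return [p.pattern for p in INSECURE_PATTERNS if p.search(code)]

    def filter_corpus(snippets: list[str]) -> list[str]:
        """Keep snippets that pass the screen; flagged ones go to human review."""
        return [s for s in snippets if not screen_snippet(s)]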
Expectations for developers, and the measures of their success, will change. For example, security will soon be among the key performance indicators (KPIs) developers must measure up to. As developers grow into their new security-focused roles, they will work with AppSec teams to align with "security at speed" goals.
Companies and other organizations will need to support the transition with precisely targeted, hands-on training designed to help developers solve real-world problems, with materials delivered in a variety of formats and scheduled to fit how developers work. A security-first culture will also give developers room to sharpen their critical thinking skills, ensuring they act with a security mindset, especially when assessing the potential threats that vulnerable code from their AI assistants could introduce.
Given the potency and sophistication of the current threat landscape, developers' jobs have already been moving toward a security mindset in many organizations. Moreover, secure software is something boardrooms increasingly support, and something the growth of AI coding could threaten without the right guidance and guardrails. Thankfully, with proper training, developers can become the first line of defense against AI coding errors, allowing organizations to reap AI's many benefits while mitigating its considerable shortcomings.