
Generative AI Adoption Surges Among Developers Despite Security Concerns, Survey Finds

Generative AI sees rapid adoption among developers despite widespread security concerns, highlighting the need for regulation and responsible implementation.
Published on September 12, 2023

A new survey from Sonatype reveals that nearly all software developers are rapidly adopting generative AI tools like ChatGPT into their workflows, even as most acknowledge heightened security risks.

The survey of 800 developer operations (DevOps) and application security operations (SecOps) leaders found that 97% are currently using generative AI, with 74% feeling pressure to adopt the technology despite potential vulnerabilities. Further, 45% of SecOps respondents have already fully implemented generative AI, compared to just 31% of DevOps respondents.

While both groups saw benefits like increased productivity and faster development times, they differed on the extent of these gains. Significantly more SecOps leaders reported saving 6+ hours per week with generative AI compared to DevOps leaders.

Perhaps most troubling, over 75% of DevOps respondents felt generative AI would introduce more vulnerabilities into open source software. Surprisingly, SecOps leaders were less concerned, at 58%.

This disparity underscores how early-stage the technology remains, as organizations work out how to balance productivity with security. Brian Fox, CTO of Sonatype, noted parallels to the rise of open source software, where new innovations inevitably introduced new risks.

Proper oversight and specialized training will be key to ensuring generative AI safely benefits software development, according to Fox. Most survey respondents agreed, with 78% of SecOps and 59% of DevOps leaders saying government and companies should share responsibility for regulation.

The survey also highlighted uncertainty around the ownership of AI-generated code. With no clear copyright laws yet established, 40% felt developers should own the copyright to code produced by generative models. An overwhelming majority in both groups agreed developers should be compensated if their work appears in AI training data.

Here too, many survey respondents saw a need for regulation: 42% of DevOps and 40% of SecOps leaders worried that unclear policies could discourage open source contributions.

As generative AI becomes further enmeshed in the development process, practitioners face a critical balancing act. While the productivity gains are clear, responsible adoption demands proactive security measures and evolving best practices.

For both DevOps and SecOps leaders, the survey makes clear that generative AI marks a pivotal new era - one whose full risks and rewards remain to be seen. Careful governance and specialized training will help maximize benefits while mitigating the new attack surfaces these tools may introduce.
