
60% of Developers Using Untested Code Generated by ChatGPT

Published on November 3, 2023

A new survey commissioned by Sauce Labs and conducted by OnePoll has uncovered concerning trends in software development practices that could put enterprise applications and data at risk. The survey of 500 U.S.-based developers found that over two-thirds (67%) admitted to pushing code changes into production environments without testing. More alarmingly, over a quarter (28%) reported doing so regularly.

The survey also found that over 60% of developers are using untested AI-generated code from services like ChatGPT in enterprise applications. Over a quarter (26%) reported using such unvetted code regularly.
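The survey does not say what "untested" looks like in practice, but the gap is easy to picture: a snippet pasted in from an AI assistant ships without even a smoke test. As a minimal, hypothetical sketch (the slugify helper and its tests below are illustrative, not taken from the survey), a couple of unit tests run before merging is exactly the kind of vetting the findings suggest is being skipped:

```python
import re


# Hypothetical helper of the kind a developer might paste in from ChatGPT.
# Neither the function nor the tests come from the survey; they illustrate
# the minimal vetting step the findings suggest is often skipped.
def slugify(title: str) -> str:
    """Convert a title to a URL-friendly slug."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")


def test_slugify_basic():
    assert slugify("Hello, World!") == "hello-world"


def test_slugify_edge_cases():
    # Inputs the generated code was likely never prompted about.
    assert slugify("") == ""
    assert slugify("---") == ""


if __name__ == "__main__":
    test_slugify_basic()
    test_slugify_edge_cases()
    print("all checks passed")
```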

According to Jason Baum, Sauce Labs' director of developer community, these findings point to a brewing crisis as more code of uncertain provenance enters production systems. “While new AI capabilities hold tremendous promise, the reality today is that much of this auto-generated code has not been properly evaluated for quality and security before being integrated into mission-critical systems,” said Baum.

The survey also uncovered risky source code management practices. Over two-thirds (68%) of respondents admitted to merging their own pull requests without a review, with over a quarter (28%) doing so routinely.

Most concerning were the survey findings around security practices. Three-quarters of respondents admitted to bypassing security controls such as multi-factor authentication or VPN connections to complete tasks more quickly, with over a third (39%) reporting doing so regularly. Another 70% confessed to using coworkers' credentials to get around access controls and restrictions.

According to Baum, these practices directly conflict with modern DevSecOps best practices and indicate major gaps in governance and controls. “While in some cases developers are just trying to move quickly, routinely bypassing security reviews and controls creates substantial risk,” said Baum. “It only takes one overlooked vulnerability to lead to a major breach.”

Baum believes the report highlights an emerging crisis in software quality, security, and governance driven by two colliding trends. First, organizations are shifting more testing responsibility to developers without providing adequate tooling, automation, and oversight. Second, new AI capabilities are rapidly increasing the volume of code being produced while not yet delivering enterprise-grade quality and security.

“Generating code faster doesn’t matter if you’re generating bigger problems down the line,” said Baum. “We need to evolve processes, best practices and governance to account for these new capabilities and risks.”

Baum recommends that organizations automate testing earlier in development workflows to catch issues sooner. They should also implement automated code scanning and reviews to govern the quality and security of AI-generated code. Most importantly, security policies and controls must be enforced uniformly across development teams.
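As a minimal sketch of what the first two recommendations might look like in practice, the script below gates a merge on passing tests and a static security scan. It is illustrative only: it assumes a Python project with pytest installed and the Bandit scanner available, and the tests/ and src/ paths are hypothetical.

```python
#!/usr/bin/env python3
"""Hypothetical pre-merge gate: run tests and a security scan before merging.

A sketch only -- it assumes pytest and the Bandit security scanner are
installed, and that tests live under tests/ and source under src/.
"""
import subprocess
import sys

# Each gate is a command plus a human-readable label for the log output.
CHECKS = [
    (["pytest", "tests/"], "unit tests"),
    (["bandit", "-r", "src/", "-q"], "security scan (Bandit)"),
]


def main() -> int:
    for cmd, label in CHECKS:
        print(f"Running {label}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"BLOCKED: {label} failed; fix the issues before merging.")
            return result.returncode
    print("All gates passed; merge may proceed.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Run from CI or a Git pre-push hook, a gate like this moves testing and scanning ahead of the merge rather than after it.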

“The risks identified in this report underscore the critical need for continuous testing and automated governance to keep pace with modern software practices,” concluded Baum. “Otherwise these trends could lead to dangerous levels of technical debt accumulation and latent security risks.”
