New Black Duck Research Finds Majority of DevSecOps Teams Not Confident About Securing AI-Generated Code
Black Duck Software, Inc. announced the publication of its "Global State of DevSecOps 2024" report examining the trends, challenges, and opportunities impacting software security. According to the data, a wave of AI adoption is radically shifting how software moves from ideation to deployment. Nearly all survey respondents - over 90% - said they are using AI in some capacity in their software development process, underscoring how crucial it is for organizations to take proper security measures throughout the entire development lifecycle. Yet 67% of respondents said they are concerned about securing AI-generated code.

Organizations across the Technology, Cybersecurity, Fintech, Education, Banking/Financial, Healthcare, Media, Insurance, Transportation, and Utilities sectors reported similarly high adoption, underscoring the importance of having seamless security mechanisms in place. Even in the Nonprofit sector, which is traditionally slower to adopt new technology due to constrained resources, at least half of the organizations surveyed reported that they were using AI. Unsurprisingly, the larger the organization, the more likely it is to have adopted some facet of AI in its software development.

"AI is a technology enabler that should be invested in, not feared, so long as the proper guardrails are being prioritized," said Jason Schmitt, CEO of Black Duck. "For DevSecOps teams, that means finding sensible uses to implement AI into the software development process and layering the proper governance strategy on top of it to protect the heart and soul of an organization - its data."

The new report from Black Duck is based on a survey conducted by Censuswide that polled more than 1,000 IT professionals - software developers, AppSec professionals, CISOs, and DevOps engineers - across multiple countries and industries. Key findings from the report include:

  • AI is the standard, but security pros aren't fully convinced. A large majority (85%) of survey respondents noted that they have at least some measures in place to address the challenges posed by AI-generated code, such as the potential IP, copyright, and license issues that an AI tool may introduce into proprietary software. However, fewer than a quarter (24%) are "very confident" in their policies and processes for testing this code (a minimal sketch of one such gate appears after this list).
  • Security is still a barrier to speed. More than half of respondents (61%) said that security testing moderately or severely slows down development. Fifty percent of those who feel this way also say that most projects are still added to security testing manually.
  • A broad proliferation of tools is leading to high levels of testing inconsistency. Fully 82% of organizations are using between 6 and 20 different security testing tools, which makes it difficult to integrate and correlate results across platforms and pipelines and to distinguish genuine issues from false positives (see the correlation sketch below).
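To make the first finding concrete, below is a minimal, hypothetical sketch of a merge-time gate for AI-generated code. Nothing in it comes from the Black Duck report: the AI-ASSISTED file marker and the findings.json output of an upstream license scanner are both assumptions, chosen only to illustrate how a team might route AI-assisted files through an extra IP/license check.

```python
#!/usr/bin/env python3
"""Hypothetical merge gate for AI-assisted files (illustrative only).

Assumptions, not from the report:
- teams tag AI-assisted source files with an "AI-ASSISTED" marker comment;
- an upstream SCA/license scanner has written its results to findings.json
  as a list of {"file": ..., "type": ..., "rule": ...} objects.
"""
import json
import pathlib
import sys

MARKER = "AI-ASSISTED"                    # hypothetical in-file tag
FINDINGS = pathlib.Path("findings.json")  # hypothetical scanner output


def tagged_files(paths):
    """Yield the changed files whose contents carry the marker."""
    for p in map(pathlib.Path, paths):
        try:
            if MARKER in p.read_text(errors="ignore"):
                yield str(p)
        except OSError:
            continue  # deleted or unreadable file; skip it


def main(changed):
    flagged = set(tagged_files(changed))
    if not flagged:
        return 0  # no AI-assisted files in this change set
    findings = json.loads(FINDINGS.read_text()) if FINDINGS.exists() else []
    # Fail the gate if any license finding touches a flagged file.
    blocking = [f for f in findings
                if f.get("file") in flagged and f.get("type") == "license"]
    for f in blocking:
        print(f"license issue in AI-assisted file {f['file']}: {f['rule']}")
    return 1 if blocking else 0


if __name__ == "__main__":
    # Typically fed the output of `git diff --name-only <base>...HEAD`.
    sys.exit(main(sys.argv[1:]))
```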
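On the third finding, one common way to correlate output from many scanners is to normalize everything to SARIF (the OASIS Static Analysis Results Interchange Format, which many security tools can emit) and deduplicate results by a shared fingerprint. The sketch below is a deliberate simplification - the rule/file/line fingerprint is an assumption, and real pipelines use richer fingerprints - but it shows the basic mechanics:

```python
#!/usr/bin/env python3
"""Sketch: merge SARIF 2.1.0 reports from several scanners, deduplicated.

The coarse fingerprint (rule, file, start line) is an illustrative
assumption; production correlation uses richer identity signals.
"""
import json
import sys
from collections import defaultdict


def fingerprint(result):
    """Coarse identity for a SARIF result: rule, file, start line."""
    loc = result.get("locations", [{}])[0].get("physicalLocation", {})
    art = loc.get("artifactLocation", {}).get("uri", "?")
    line = loc.get("region", {}).get("startLine", 0)
    return (result.get("ruleId", "?"), art, line)


def merge(sarif_paths):
    seen = defaultdict(list)  # fingerprint -> names of tools reporting it
    for path in sarif_paths:
        with open(path) as fh:
            doc = json.load(fh)
        for run in doc.get("runs", []):
            tool = run["tool"]["driver"]["name"]
            for result in run.get("results", []):
                seen[fingerprint(result)].append(tool)
    return seen


if __name__ == "__main__":
    merged = merge(sys.argv[1:])
    print(f"{len(merged)} unique findings across {len(sys.argv) - 1} reports")
    # Findings corroborated by multiple tools are less likely false positives.
    for fp, tools in merged.items():
        if len(tools) > 1:
            print("corroborated:", fp, "by", sorted(set(tools)))
```

Corroboration across tools is one pragmatic answer to the false-positive problem respondents cited: a finding flagged by two or three independent scanners is a far stronger signal than any single tool's output.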
Published Wednesday, October 09, 2024 8:54 AM by David Marshall