Virtualization Technology News and Information
Perforce 2024 Predictions: Open source, testing quality, AGI and AI bias


Industry executives and experts share their predictions for 2024.  Read them in this 16th annual series exclusive.

Open source, testing quality, AGI and AI bias

By Rod Cope, CTO of Perforce

2023 was a whirlwind of evolution in AI and open source, and software leaders and companies are set to continue that rapid pace of transformation in 2024.

As we look ahead into the new year, my fellow leaders at Perforce and I share predictions of what to expect in 2024.

The Rise of Open Source Program Offices 

Last year, OSS communities witnessed growth in the use and adoption of open source software across industries, creating a need for dedicated enterprise leadership and strategy as more organizations embed themselves in OSS projects.

Javier Perez, Chief Open Source Evangelist at Perforce, finds that due to this increased usage and need for guidance, organizations will adopt OSS Program Offices in 2024 for successful OSS management, fostering a resilient relationship between the community, enterprise stakeholders, and developers.   

"Under the direction of an executive, such as a Chief Open Source Officer or similar high-level position, these offices will require software bills of materials (SBOMs), identify and remedy open source license compliance issues, address security vulnerabilities that need patching, and more, promoting responsible, secure, and strategic use of OSS at the enterprise level. As usage continues to grow, having one expert is no longer enough; organizations need an established group of experts who oversee the entire organization's OSS strategy. Forming these offices will allow organizations to remain agile in 2024 and approach OSS from a more strategic point of view, helping maintain the growth of OSS at the enterprise level."
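One routine task Perez mentions, checking SBOM entries for license compliance, can be sketched in a few lines. The allow-list, the CycloneDX-style structure used here, and the component names are hypothetical examples, not a real policy:

```python
# Minimal sketch of an OSS Program Office policy check: scan a
# CycloneDX-style SBOM for components whose licenses fall outside an
# approved allow-list. Allow-list and components are hypothetical.

ALLOWED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}

def find_license_violations(sbom: dict) -> list[str]:
    """Return names of components using licenses not on the allow-list.

    Components with no declared license are also flagged, since an
    unknown license is itself a compliance risk.
    """
    violations = []
    for component in sbom.get("components", []):
        licenses = {
            entry["license"]["id"]
            for entry in component.get("licenses", [])
            if "license" in entry and "id" in entry["license"]
        }
        if not licenses or not licenses <= ALLOWED_LICENSES:
            violations.append(component["name"])
    return violations

example_sbom = {
    "bomFormat": "CycloneDX",
    "components": [
        {"name": "requests", "licenses": [{"license": {"id": "Apache-2.0"}}]},
        {"name": "some-gpl-lib", "licenses": [{"license": {"id": "GPL-3.0-only"}}]},
    ],
}

print(find_license_violations(example_sbom))  # flags the GPL-licensed component
```

In practice an OSS Program Office would run a check like this in CI against SBOMs generated by build tooling, with the allow-list maintained as policy rather than hard-coded.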

AI's Evolution in Software Testing  

There are positive and negative effects of leveraging AI in software testing and development. Stephen Feloney, Vice President of Products, Continuous Testing, at Perforce, emphasizes that teams will reap the biggest rewards when they know how to leverage AI effectively and can study and learn from how it carried out what it was asked to do.

"From a testing point of view, AI can do more accurate testing at a faster speed than humans alone. In 2024, we'll see more companies take advantage of generative AI in multiple aspects of testing. For example, we'll see generative AI being used in the visual analysis of the test itself. One of the biggest challenges testers deal with is gathering test data to run tests - this is an area where teams will leverage generative AI, as it can help generate test data. Over the next year, we'll also see an increased use of generative AI in analytics reporting and in understanding posttest logs and actionable analytics. AI will pinpoint where teams should focus their time and effort and how to fix any problems.  

"AI will also assist in test development, specifically in analyzing tests to ensure the accuracy of what the team is trying to test. This will reduce, and could even eliminate, the amount of pretest and posttest analysis humans must do manually. All this to say, AI still generates too many errors to completely take over test creation, but on a five-year horizon, AI could solely own the testing process. In the future, AI will be able to scan developers' code and understand what it is going to do. From its scan, the AI model will catch security problems with the code and can highlight bugs faster. For now, AI still needs a level of human intervention, as it's not perfect and is still prone to errors. But we could see AI auto-generating tests within the next five or more years."
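The test-data generation Feloney describes can be illustrated with a small stand-in. A generative model would learn realistic field distributions from production-like data; this hypothetical sketch uses a seeded random generator to show the same contract, producing many schema-conforming records so testers don't assemble them by hand:

```python
import random

# Hypothetical stand-in for AI-assisted test-data generation: produce
# many schema-conforming user records. A generative model would supply
# realistic values; seeded randomness keeps this sketch reproducible.

def generate_test_users(n: int, seed: int = 42) -> list[dict]:
    rng = random.Random(seed)  # fixed seed so failing tests can be replayed
    domains = ["example.com", "test.org"]
    return [
        {
            "id": i,
            "name": f"user{i}",
            "email": f"user{i}@{rng.choice(domains)}",
            "age": rng.randint(18, 90),
        }
        for i in range(n)
    ]

users = generate_test_users(100)
print(len(users))
```

The key property either way is determinism under a fixed seed: a test that fails on generated data must be reproducible, or the generated data is worthless for debugging.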

The next phase of AI: from Generative AI to AGI

Within the AI world, there is an apparent shift in the direction of Generative AI. Kapil Tandon, VP of Product Management for Perforce, says the focus is increasingly centered on artificial general intelligence (AGI) and the rise of intelligent agents. For agents, two parts will be critical in the world of AIOps and MLOps.

"One is purely around learning control and infrastructure management, with agents ensuring automated configuration management and drift protection. The learning agent needs to understand how to make improvements, perform, give feedback, and determine how performance should be modified. This practice applies to AI infrastructure management, ensuring the infrastructure is built and tested for the tasks the agent deploys. Looking at the near future, workplace trends, most notably at bigger companies, will be associated with AI, and organizations will need to control the agents. Organizations cannot let AI become autonomous without proper infrastructure.

"For the next phase of AI to advance from Generative AI to AGI, infrastructure needs to be set in place first and foremost, and embedding platform engineering will be important to accelerate the delivery of applications. Organizations need configurations to work no matter where learning systems run (hybrid or private cloud)."
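The drift protection Tandon mentions boils down to comparing desired state against observed state. A minimal sketch, with hypothetical configuration keys and values, might look like this:

```python
# Minimal sketch of agent-driven configuration drift detection: compare
# the desired configuration against the observed state and report every
# deviation. Keys and values below are hypothetical examples.

def detect_drift(desired: dict, observed: dict) -> dict:
    """Map each drifted key to its desired and observed values."""
    drift = {}
    for key, want in desired.items():
        have = observed.get(key)
        if have != want:
            drift[key] = {"desired": want, "observed": have}
    return drift

desired = {"replicas": 3, "tls": True, "log_level": "info"}
observed = {"replicas": 2, "tls": True, "log_level": "debug"}

print(detect_drift(desired, observed))
```

A real agent would then reconcile the drift (reapply the desired value) or alert a human, which is the control loop Tandon argues must exist before agents are given more autonomy.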

AI bias is in everything, everywhere all at once 

And finally, my own prediction on AI is centered around bias. The growth of AI will have an impact on healthcare, finance, and government as the technology becomes more ingrained in highly regulated industries. Organizations need to set a foundation for proper use to avoid lawsuits.

It's a double-edged sword. Despite the excitement surrounding AI advancements, there's fear over its use and potential bias beyond human interference that may lead to distrust or, worse, lawsuits. For example, what if a bank uses AI to decide who gets a loan, and the data the model was trained on discriminates against a specific gender or race? Companies must be prepared to protect customers and stakeholders from bias, which extends to data protection to avoid misusing personal information in training data, for example.  

Anticipating the next phase of control, this shift will give tech players more authority to back up their claims and safeguard personal and proprietary data in AI. If companies join the AI race without a clear plan, they're unlikely to succeed, leaving themselves vulnerable to mistakes and breaches. Detecting bias in AI and ML can be challenging as it's heavily dependent on data. To effectively recognize and root out the bias, tech companies must establish foundational models, thoroughly analyze the bias and exercise control by applying anti-bias features and introducing safe data collection practices before proceeding. Even then, it's not easy to eradicate bias. Having more foundational and anti-bias models showcases a company's commitment to the highest standard of care.
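To make the loan example concrete, one simple bias check is demographic parity: compare approval rates across groups and flag a large gap. This is only one of many fairness metrics, and the decision data below is fabricated purely for illustration:

```python
# Sketch of one simple fairness check for a loan-approval model:
# demographic parity difference, the gap in approval rates between two
# groups. The decision records below are fabricated for illustration.

def approval_rate(decisions: list[tuple[str, bool]], group: str) -> float:
    """Fraction of applicants in `group` that were approved."""
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

def parity_gap(decisions: list[tuple[str, bool]], a: str, b: str) -> float:
    """Absolute difference in approval rates between groups a and b."""
    return abs(approval_rate(decisions, a) - approval_rate(decisions, b))

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

gap = parity_gap(decisions, "group_a", "group_b")
print(f"{gap:.2f}")  # a large gap is a signal worth investigating, not proof of bias
```

A check like this is cheap to run on model outputs before deployment; the hard part, as noted above, is tracing a flagged gap back to the training data and correcting it.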



Rod Cope 

As founder and CTO of OpenLogic, Rod Cope drives the technology vision for OpenLogic and heads the product management organization. Rod has over 20 years of experience in software development spanning a number of industries including telecommunications, aerospace, healthcare, and manufacturing. Rod holds both Bachelor's and Master's degrees in Software Engineering from the University of Louisville.

Published Wednesday, January 17, 2024 7:36 AM by David Marshall