WLP353 Promises and Perils of Generative AI in Software Development

In today’s episode, tech journalist Jennifer Riggins talks about the impact of generative AI on the world of software development, focusing on the challenges DevSecOps teams face in keeping up with the accelerated pace of code creation enabled by GenAI tools. She raises important questions about the risks posed by generative AI, such as the introduction of vulnerabilities and intellectual property issues, and emphasises the need for organisations to establish clear AI governance policies to mitigate those risks.

Today’s guest is Jennifer Riggins, tech storyteller, journalist and feature writer at The New Stack, where she published the article "Will Generative AI Kill DevSecOps?" on 15 February 2024. The article was based on a panel discussion she attended at State of Open Con, a conference organised by OpenUK, a non-profit supporting open source in the UK.

State of Open Con, now in its second year, brings together experts and practitioners to discuss a wide range of open source topics. This year's event had a strong focus on AI, reflecting the technology's significant impact on the software development landscape. Jen hosted several panels at the conference, contributing to the broader conversation around the opportunities and challenges presented by generative AI.

In her writing, Jen explores the impact of technology on the humans creating that technology, so she has recently focused on layoffs, developer experience, and the increasing adoption of generative AI tools in the tech industry. Tools like ChatGPT and GitHub Copilot are revolutionising the way developers work, enabling them to streamline processes and focus on more creative and problem-solving tasks.

Jen explains how platform engineering supports developers: platform teams create a "golden path" or "yellow brick road" that lets developers concentrate on their core work while security and deployment are handled effectively, making for a more efficient and productive development process.

Developers are leveraging generative AI tools for various tasks, such as code generation, error checking, testing, and documentation. While acknowledging the significant benefits these tools offer, Jen also warns us about the challenges they present. She discusses the potential for inaccurate code, lack of context, and the ongoing need for human oversight to ensure the quality and reliability of AI-generated code.

“A chat bot's response is based on the probability of being accepted. So that doesn't mean it's accurate, it just means it wants to be right. So it's going to give you the answer you want to hear.”

(If you’re interested in an example of Jen’s quote above, check out Pilar’s latest Spiralling Creativity blog post.)

DevSecOps teams need to keep pace with the rapid development these tools enable, a challenge compounded by the scarcity of AI governance policies and their low adoption among companies. Without proper governance, organisations are left exposed to risks such as the introduction of vulnerabilities and intellectual property issues.

There are risks in relying on AI-generated code, particularly for less experienced developers who may not have the expertise to identify and mitigate potential issues. Jennifer and Pilar discuss how generative AI can introduce vulnerabilities and intellectual property risks if not managed properly. On the other hand, AI can also be harnessed to enhance security measures, detect anomalies and automate issue summaries, ultimately strengthening an organisation's security posture.

Jennifer stresses the importance of fostering a blameless culture within organisations adopting generative AI. She advocates for conducting consequence scanning exercises to proactively identify and mitigate risks associated with AI-driven development. By creating an environment that encourages transparency, collaboration, and continuous improvement, organisations can navigate the challenges and opportunities presented by generative AI more effectively.

In line with the episode’s theme, Pilar asked Claude AI to create some show notes for this episode. The above is a summary written by the bot and edited by Pilar; below is a breakdown of the conversation. True to Jen’s comment that “a chat bot's response is based on the probability of being accepted. So that doesn't mean it's accurate, it just means it wants to be right. So it's going to give you the answer you want to hear”, Claude added some made-up time codes to produce a piece of text that looked like traditional show notes. Here they are:

00:00 - Introduction and Jennifer's background

02:30 - The focus of Jennifer's recent work: layoffs, developer experience, and the rise of generative AI tools

05:00 - Exploring the concept of platform engineering and its role in supporting developers

08:00 - How developers are using generative AI tools for code generation, error checking, testing, and documentation

10:30 - The challenges of generative AI: inaccurate code, lack of context, and the need for human intervention

13:00 - Security concerns and the struggle for DevSecOps teams to keep up with the speed of code creation

16:00 - The importance of AI governance policies and the current lack of adoption among companies

18:30 - The risks of relying on AI-generated code, especially for less experienced developers

21:00 - The potential for generative AI to introduce vulnerabilities and intellectual property risks

23:30 - Using AI to enhance security measures, detect anomalies, and automate issue summaries

25:00 - The need for a blameless culture and consequence scanning to identify and mitigate risks

27:00 - The European Union's approach to categorising AI and the importance of evaluating AI tools

29:00 - The resistance to banning AI tools and the impact on developer productivity

31:00 - Emerging themes from the State of Open Con, including the need to support open-source maintainers

33:00 - Conclusion and where to find more of Jennifer's work

(You can read more about this exchange in the Spiralling Creativity blog.)


If you like the podcast, you'll love our monthly round-up of inspirational content and ideas:
(AND right now you’ll get our brilliant new guide to leading through visible teamwork when you subscribe!)

Pilar Orti