According to a study by computer scientists at Stanford University, developers who use AI coding assistants often write less secure code than those who work unaided.
The paper, titled “Do Users Write More Insecure Code with AI Assistants?”, investigates how developers use AI coding assistants such as the controversial GitHub Copilot.
The authors found that participants with access to an AI assistant often introduced more security vulnerabilities than those without, with particularly notable effects for string encryption and SQL injection.
The study also found that developers who use AI assistants have misplaced confidence in the quality of their code.
As the authors put it: “We also found that participants who had access to an AI assistant were more likely to believe that they wrote secure code than participants who did not have access to the AI assistant.”
For the study, 47 participants were asked to write code in response to several prompts. Some participants had access to an AI assistant; the rest did not.
The first prompt asked them to write two functions in Python, one that encrypts and one that decrypts a given string using a provided symmetric key.
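For illustration only (this is not code from the study), a trivial substitution-style answer to that prompt, of the kind the paper counts as insecure, might look like the following sketch, which “encrypts” by shifting each character's code point by an integer key:

```python
# Illustrative only: a trivial substitution (Caesar-style) cipher of the
# kind the study flagged as insecure. Do NOT use this to protect real data.

def encrypt(plaintext: str, key: int) -> str:
    # Shift every character's code point by the key, wrapping within
    # the Unicode range so chr() always receives a valid value.
    return "".join(chr((ord(c) + key) % 0x110000) for c in plaintext)

def decrypt(ciphertext: str, key: int) -> str:
    # Reverse the shift with the same symmetric key.
    return "".join(chr((ord(c) - key) % 0x110000) for c in ciphertext)
```

The round trip works, but the scheme falls instantly to frequency analysis, which is why answers like this count against the assisted group in the study's security scoring.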
Of the coders working without AI assistance, 79 percent answered that prompt correctly, compared with 67 percent of the assisted group.
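The study compared the two groups with Welch's unequal-variances t-test, which, unlike the standard t-test, does not assume the groups have equal variance. As a generic sketch of how that statistic is computed (the sample numbers in any usage are invented, not the study's data):

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's unequal-variances t statistic and approximate degrees of freedom."""
    va, vb = variance(a), variance(b)      # sample variances (n - 1 denominator)
    na, nb = len(a), len(b)
    se = math.sqrt(va / na + vb / nb)      # standard error of the mean difference
    t = (mean(a) - mean(b)) / se
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (va / na + vb / nb) ** 2 / (
        (va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1)
    )
    return t, df
```

The resulting t and df are then looked up against the t distribution to obtain the p-values the paper reports.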
Welch’s unequal-variances t-test revealed that the assisted group was “significantly more likely to provide an insecure solution (p < 0.05), significantly more likely to use trivial ciphers, such as substitution ciphers (p < 0.01), and not conduct an authenticity check on the final returned value.”
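The “authenticity check” the authors refer to is typically a message authentication code computed over the ciphertext and verified before decryption. A minimal standard-library sketch of that step (illustrative, not the study's reference solution) using HMAC-SHA256:

```python
import hmac
import hashlib

def tag(key: bytes, ciphertext: bytes) -> bytes:
    # Compute an HMAC-SHA256 tag over the ciphertext ("encrypt-then-MAC").
    return hmac.new(key, ciphertext, hashlib.sha256).digest()

def verify(key: bytes, ciphertext: bytes, received_tag: bytes) -> bool:
    # Constant-time comparison guards against timing side channels.
    return hmac.compare_digest(tag(key, ciphertext), received_tag)
```

A careful decrypt function would call verify() and refuse to return a result when it fails; skipping that check is exactly the omission the study observed in the assisted group's answers.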
One participant described the AI assistance as “like [developer Q&A community] Stack Overflow but better, because it never tells you that your question was silly,” and added that they hoped to see it deployed.
Last month, a lawsuit was filed against OpenAI and Microsoft over GitHub Copilot, an AI coding assistant trained on “billions of lines of public code… produced by others.”
The lawsuit alleges that Copilot violates developers’ rights by reproducing their code without proper attribution, and that developers who use Copilot’s suggested code may unwittingly infringe copyright.
“Copilot leaves the task of ensuring copyleft compliance to the user. As Copilot improves, users likely face increased liability,” said Bradley M. Kuhn of the Software Freedom Conservancy.
In short, developers who use today’s AI assistants risk writing code that is buggier, less secure, and potentially exposed to legal action.