AI Pair Programmers: The Future Copilots for No-code?🔗
The increasing demand for software is at the heart of significant changes taking place in the software industry. The number of developers joining the workforce each year cannot keep up with the sharp rise in demand for developer talent. The result is a developer shortage that many people are trying to solve.
Our no-code platform, Code2, like several other platforms, is itself the product of an initiative to solve this developer shortage.
"Helping coders with the boring, tedious parts of their work so that they can focus on things where their creativity will be better employed…"
This has been a common theme running through the mission statements of many no-code platforms. It looks like no-code tools are not alone in this, as illustrated by Greg Brockman, CTO of OpenAI, who hopes that his firm's latest product will "help solve programmer shortage in the U.S."
In this blog post, we will take a look at two prominent AI pair programming tools with a lot of hype around them. Do OpenAI Codex and Github Copilot represent a breakthrough in software engineering? Or are they tools that overpromise and underdeliver like many others?
Headquartered in San Francisco, OpenAI has recently been making headlines with the launch of OpenAI Codex, its AI-based programming assistant. Codex is a machine learning model that can work with both programming languages and natural language. It is based on OpenAI's former sensation, GPT-3, a 175-billion-parameter model trained on publicly available text from the internet.
Having ingested huge volumes of open-source code, machine learning models like Codex calculate what the next token should be in any given sequence. It is strictly an exercise in statistical pattern matching: the system analyzes existing code to come up with the most plausible continuation.
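A back-of-the-envelope way to see this "predict the next token" idea is a tiny bigram frequency model. The toy corpus and helper below are purely illustrative assumptions; real models like Codex use neural networks with billions of parameters, but the core exercise, picking the statistically most likely continuation, is the same:

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus standing in for scraped open-source code.
corpus = "for i in range ( n ) : total += i".split()

# Count which token follows each token (a simple bigram model).
following = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    following[cur][nxt] += 1

def predict_next(token):
    """Return the statistically most frequent next token, or None."""
    counts = following.get(token)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("in"))  # -> 'range'
```

Given "in", the model suggests "range" simply because that pairing dominates the training data, with no understanding of what a loop actually does.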
However, Codex differs from its predecessor in its larger memory, which lets it handle tasks that GPT-3 could not. Codex can work in a dozen programming languages and even understands commands given in everyday English. It copes with metaphors and nuanced expressions and can still carry out tasks despite the ambiguity in the commands. Whatever this all means for the future of software engineering, it undoubtedly makes for impressive demos. Check this out.
Copilot is powered by OpenAI's Codex machine learning model. For now, Copilot is limited to being a coding assistant that autocompletes lines of code in Visual Studio Code. However, it is of little help on a new project, where it cannot learn from past patterns of code.
A study conducted at New York University illustrates this path-dependency problem of AI-assisted code generation. When GitHub Copilot was given intentionally compromised code as its starting scenario, it generated code containing some form of weakness or vulnerability almost half the time. Copilot's success therefore depends on two factors:
- The previously written code in the project it will work on,
- The code quality of the open-source repository it will use as training data.
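The NYU paper is not quoted here, so as a generic illustration of the kind of weakness involved, consider the classic SQL-injection pattern: an assistant primed with code that builds SQL queries by string concatenation tends to complete new code in the same vulnerable style. This sketch contrasts the unsafe pattern with the parameterized query a reviewer would want instead (function names and data are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# Vulnerable pattern an assistant may reproduce if the surrounding
# project already builds SQL by string concatenation.
def find_user_unsafe(name):
    query = "SELECT name FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

# A crafted input changes the query's meaning (SQL injection)
# and returns every row instead of none.
print(find_user_unsafe("x' OR '1'='1"))

# Safe, parameterized version: the driver treats the input as data,
# never as SQL syntax.
def find_user_safe(name):
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user_safe("x' OR '1'='1"))  # -> []
```

If the training data or the surrounding project is full of the first pattern, a statistical autocompleter has every incentive to suggest more of it, which is exactly the path dependency the study points to.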
Copilot set out to reduce manual work for professional coders and to lower the entry barriers for people from non-technical backgrounds. Nevertheless, questions about its true value as an AI-based assistant remain. It may at times produce broken code or code that is hard to understand. Moreover, Copilot's suggestions may actually slow coders down, as they have to weigh different options while they write. These two problems dent Copilot's reputation as the next big technological breakthrough.
Apart from these initial shortcomings, which may well be fixed soon, the real hurdle facing AI pair programming tools is likely to be ethical rather than technical.
These machine learning models train on open-source code scraped from well-known code repositories for the profit of corporate organizations. This fact is bound to rub some members of the coding community the wrong way. Although GitHub describes its use of publicly available code as "fair use," not many people agree. Combined with the verbatim copying of pieces of code in some instances, this practice fuels the debate over code ownership: Who owns the code? The person who first wrote it and put it out there for public use? The AI coding assistant that changed it and suggested it to a customer? Or the customer who paid for the services of platforms like Codex or Copilot?
Another problem with AI pair programming tools is the unfiltered nature of their output, which can be biased, discriminatory, or offensive to some cultures, communities, or religions. The problem stems from the way these platforms learn from whatever is available on the internet. Machine learning models owe their scalability to the lack of human intervention; without it, it is only natural that they spit out material closely resembling what is already out there. "Garbage in, garbage out," as they say. Wary of getting drawn into a scandal, the executives behind both GPT-3 and Copilot have already vowed to take the necessary steps to eliminate this kind of output.
Glimpses of a future collaboration🔗
The rise of AI-based programming assistants corresponds to a particular point in the evolution of software. For now, the outputs these tools produce are for coders only, who can use AI pair programming tools to fine-tune the code they write. It is still too risky to put AI-generated code to use without some form of human review.
Some experts have a different take on the issue, though. They regard no-code as the only solution capable of bridging the gap between the limited number of data scientists experienced in machine learning and the growing demand for their services. Such practitioners will leverage no-code technology to expand the reach of machine learning in the business environment, much as "citizen developers" use no-code tools to build internal tools.
The next real breakthrough in the evolution of software will come when AI-based tools can produce blocks of code that can be dragged and dropped in a no-code framework, catering to people with no knowledge of coding. AI pair programming assistants will then become true force multipliers, capable of producing code for a citizen developer who would otherwise have to depend on a coder.
With the introduction of AI-assisted no-code platforms, it is not difficult to imagine developers assuming more responsibility as designers or system architects. With more time on their hands, developers will be able to tackle strategic matters like system complexity, functionality, and the ethical concerns that will demand ever more attention in the future.