Soon AI Will Write All the Code

Several big-picture technologies will change how we all approach technology in the future: the end of Moore’s Law, the explosion of data, and the evolution of machine learning.

We are already seeing an explosion of machine learning models and advances in AI that are taking us by surprise.

Consider GPT-3, a language-generation model released by OpenAI in 2020. It was trained on hundreds of billions of words mined from across the internet, and it is designed to ingest gigabytes of text, learn from it, and automatically generate paragraphs that can be hard to distinguish from human-written prose.


The Rise of AI-Assisted Programming

In 2019, Microsoft invested $1 billion in OpenAI, and in 2020 it exclusively licensed GPT-3. Under this partnership, OpenAI began developing another language model: a programmer AI that would take GPT-3’s general language skills and apply them to code instead.

One result of this partnership is a product recently released by GitHub, Microsoft’s code-hosting subsidiary, dubbed GitHub Copilot.

GitHub Copilot is powered by Codex, a descendant of GPT-3. Codex was trained on publicly available code from GitHub repositories and other sources. It essentially translates natural language into code, which makes it possible to build natural-language interfaces to existing applications.

Though still only accessible to select users, GitHub Copilot functions like a pair programmer. It can complete lines of code, convert descriptive comments into code, autofill repetitive code, and even write unit tests for your methods.
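
For instance, a developer can type nothing more than a descriptive comment and a function signature and let the tool propose the body. The snippet below is a minimal sketch of that workflow; it illustrates the kind of suggestion such a tool makes and is not an actual Copilot output.

```python
# A developer types only the comment and the signature; a tool like Copilot
# proposes the body. This completion is illustrative, not a real Copilot output.
from collections import Counter


# Return the n most frequent words in a piece of text.
def top_words(text: str, n: int = 10) -> list[tuple[str, int]]:
    words = text.lower().split()
    return Counter(words).most_common(n)


if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog the fox"
    print(top_words(sample, 3))  # [('the', 3), ('fox', 2), ('quick', 1)]
```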

GitHub Copilot is expected to free engineers from mundane tasks and help them focus on more interesting work. However, it’s not 100% reliable, having been trained on a mixture of code containing insecure patterns, bugs, and references to outdated APIs.


How Long Until AI Can Write All the Code?

When GPT-3 was released in 2020, its ability to write code from simple prompts caught most of us by surprise. But those little demonstrations were nothing close to real-life programming.

At the time, GPT-3’s notable feat was that it could translate a handful of words into code. Yet the field has grown by leaps and bounds since then: it took only a year from GPT-3’s release to the creation of GitHub Copilot.

But the question remains: does the evolution of machine learning mean that soon developers will be out of work?

Consider a simple request posed to an AI assistant: “Buy me toilet paper.”

There are assumptions baked into this request, leading to different results if constraints are not defined in advance. For instance, how do you factor in attributes like price, delivery date, softness, or quantity of toilet paper?

Normally, a human programmer would have to determine the attributes and values that make for good toilet paper, then write a logical approach to “buying toilet paper.” Writing the code forces them to express the structure of the problem down to the very last detail.
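
A minimal sketch of what that looks like in code, with the implicit constraints written out explicitly (all product fields and thresholds below are invented for illustration):

```python
# Making the hidden constraints in "buy me toilet paper" explicit.
# All fields and thresholds here are invented for illustration.
from dataclasses import dataclass
from datetime import date


@dataclass
class Listing:
    name: str
    price_per_roll: float
    rolls: int
    delivery: date
    softness: int  # 1 (rough) to 5 (plush)


def pick_listing(listings, max_price_per_roll=0.50, min_rolls=12,
                 needed_by=date(2021, 12, 1), min_softness=3):
    """Return the cheapest listing that satisfies every constraint, or None."""
    candidates = [
        l for l in listings
        if l.price_per_roll <= max_price_per_roll
        and l.rolls >= min_rolls
        and l.delivery <= needed_by
        and l.softness >= min_softness
    ]
    return min(candidates, key=lambda l: l.price_per_roll, default=None)
```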

In the same way, AI-generated code can only be relied on if it factors in the constraints and context that a programmer has to consider to solve a problem. At the moment, it can’t do so, and some human input is still needed.


The Future of AI-Assisted Programming

Using AI to write flawless code within a single function is one thing; doing the same for an entire app is much harder.

Researchers at MIT have shown that you can deceive an AI model trained on code by making a few well-calculated changes to its input. Something as simple as renaming key variables can push the model toward producing harmful programs.
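
To give a sense of how small such a change can be, the two stubs below are functionally identical and differ only in a parameter name, which is exactly the kind of cue a completion model keys on when deciding what to suggest next. This is an illustrative example, not one taken from the MIT study:

```python
# Two functionally identical stubs. Only the parameter name differs, but a
# completion model prompted after the second is more easily nudged toward an
# insecure continuation (e.g. storing the value unhashed). Illustrative only.

def save_credentials(password: str) -> None:
    """Original version: neutral naming."""
    print(f"storing credentials ({len(password)} chars)")


def save_credentials_perturbed(plaintext_to_write: str) -> None:
    """Perturbed version: same behaviour, adversarially chosen name."""
    print(f"storing credentials ({len(plaintext_to_write)} chars)")
```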

Additionally, these models are trained on human-written code, so their suggestions can be as buggy and insecure as untested human code.

Models like Codex are trained on so much data that they can generate code about almost anything. The scope of their output is effectively unlimited, unlike human beings, who are bounded by domain expertise. That breadth makes AI-generated code difficult to test, given the sheer number of possible outputs.

Until we find a way to test AI-generated code, it will be hard to trust it in reliable systems. At the moment, tools like GitHub Copilot are best used for low-risk tasks such as creating static pages or autocompleting code, with human input still needed to optimize for business constraints and users’ desires.
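
In practice, that means treating a suggestion as a draft and wrapping it in human-written checks. The sketch below assumes a small utility function standing in for assistant-suggested code, with a unit test as the human’s contribution:

```python
# Sketch of the "low-risk plus human verification" workflow. The function body
# stands in for code an AI assistant might autocomplete; the test is the
# human-written check that decides whether the suggestion is kept.
import re
import unittest


def slugify(title: str) -> str:
    """Turn a page title into a URL slug (assistant-suggested draft)."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")


class SlugifyTest(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Soon AI Will Write All the Code!"),
                         "soon-ai-will-write-all-the-code")

    def test_collapses_punctuation(self):
        self.assertEqual(slugify("  Hello,   World?! "), "hello-world")


if __name__ == "__main__":
    unittest.main()
```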

At present, technologies like GPT-3 are mostly useful for rudimentary tasks, such as identifying likely bugs by looking for patterns that the language model finds surprising. It makes sense for tech companies to automate the writing of repetitive code with AI, but it’s unlikely to entirely replace human programmers.
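
A rough sketch of that bug-spotting idea: score each line of a snippet with a causal language model and flag the line the model finds most surprising. GPT-2 is used here purely as a stand-in for a code-trained model, and scoring line by line is a simplifying assumption:

```python
# Rough sketch of "flag the code the model finds surprising". GPT-2 stands in
# for a code-trained model; per-line scoring and flagging only the single most
# surprising line are simplifying assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()


def line_surprisal(line: str) -> float:
    """Average negative log-likelihood of the line's tokens under the model."""
    ids = tokenizer(line, return_tensors="pt").input_ids
    if ids.shape[1] < 2:
        return 0.0
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean NLL over predicted tokens
    return loss.item()


snippet = [
    "for i in range(len(items)):",
    "    total += items[i]",
    "    total = 0  # suspicious: reset inside the loop",
]

scores = {line: line_surprisal(line) for line in snippet}
flagged = max(scores, key=scores.get)
print(f"Most surprising line: {flagged!r} (score {scores[flagged]:.2f})")
```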



A More Collaborative Future

Software engineers at startups do more than write code. They write and review tickets, evaluate the user experience of their products, interview potential hires, and discuss the constraints on hypothetical features at length.

In that sense, software engineers are generalists, and it's still not possible for AI to take over the entire scope of their work. We're still far from a future where AI writes all the code. For now, it has to work under the close supervision of humans with domain expertise.
