Matt Asay
Contributing Writer

The AI singularity is here

The time to figure out how to use generative AI and large language models in your code is now.


Mea culpa: I was wrong. The artificial intelligence (AI) singularity is, in fact, here. Whether we like it or not, AI isn't something that will possibly, maybe impact software development in the distant future. It's happening right now. Today.

No, not every developer is taking advantage of large language models (LLMs) to build or test code. In fact, most aren't. But for those who are, AI is dramatically changing the way they build software. It's worth tuning into how they're employing LLMs like ChatGPT to get some sense of how you can use such tools to make yourself or your development teams much more productive.

AI-driven ambition

One of the most outspoken advocates for LLM-enhanced development is Simon Willison, founder of the Datasette open source project. As Willison puts it, AI "allows me to be more ambitious with my projects." How so? "ChatGPT (and GitHub Copilot) save me an enormous amount of 'figuring things out' time. For everything from writing a for loop in Bash to remembering how to make a cross-domain CORS request in JavaScript—I don't need to even look things up anymore, I can just prompt it and get the right answer 80% of the time."
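The lookups Willison describes are small but frequent. His Bash for loop is a good example: trivial once written, easy to fumble from memory. A minimal sketch of the kind of snippet an LLM hands back instantly (the filenames and rename task here are illustrative, not from the article):

```shell
#!/bin/sh
# Rename every .txt file in the current directory to .bak --
# the sort of one-off loop developers used to stop and look up.
for f in *.txt; do
  [ -e "$f" ] || continue        # skip the literal "*.txt" when nothing matches
  mv -- "$f" "${f%.txt}.bak"     # strip the .txt suffix, append .bak
done
```

Nothing here is hard; the point is that it no longer costs a search-and-skim cycle to get it right.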

For Willison and other developers, dramatically shortening the "figuring out" process means they can focus more attention on higher-value development rather than low-grade trial and error.

For those concerned about the imperfect code LLMs can generate (or outright falsehoods), Willison says in a podcast not to worry. At least, not to let that worry overwhelm the productivity gains developers can achieve. Despite these non-trivial problems, he says, "You can get enormous leaps ahead in productivity and in the ambition of the kinds of projects that you take on if you can accept both things are true at once: It can be flawed and lying and have all of these problems … and it can also be a massive productivity boost."

The trick is to invest time learning how to manipulate LLMs so they do what you need. Willison stresses, "To get the most value out of them—and to avoid the many traps that they set for the unwary user—you need to spend time with them and work to build an accurate mental model of how they work, what they are capable of, and where they are most likely to go wrong."

For example, LLMs such as ChatGPT can be useful for generating code, but they can perhaps be even more useful for testing code (including code created by LLMs). This is the point that GitHub developer Jaana Dogan has been making. Again, the trick is to put LLMs to use, rather than just asking the AI to do your job for you and waiting on the beach while it completes the task. LLMs can help a developer with her job, not replace the developer in that job.
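What Dogan's point looks like in practice: paste a small function into the model and ask for edge-case checks. A hedged sketch, where both the function and its tests stand in for LLM output (the `slugify` helper and its cases are invented for illustration):

```shell
#!/bin/sh
# A small function, perhaps itself LLM-drafted: lowercase a string
# and collapse runs of spaces into single hyphens.
slugify() {
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | tr -s ' ' '-'
}

# ...and the kind of edge-case checks an LLM can draft against it.
assert_eq() {
  [ "$1" = "$2" ] || { echo "FAIL: got '$1', want '$2'"; exit 1; }
}

assert_eq "$(slugify 'Hello World')"       "hello-world"
assert_eq "$(slugify 'Multiple   Spaces')" "multiple-spaces"
assert_eq "$(slugify '')"                  ""
echo "all checks passed"
```

The developer still decides which cases matter and whether the assertions are right; the model just removes the drudgery of writing them out.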

"The biggest thing since the World Wide Web"

Sourcegraph developer Steve Yegge is willing to declare, "LLMs aren't just the biggest change since social, mobile, or cloud—they're the biggest thing since the World Wide Web. And on the coding front, they're the biggest thing since IDEs and Stack Overflow, and may well eclipse them both." Yegge is an exceptional developer, so when he says, "If you're not pants-peeingly excited and worried about this yet, well … you should be," it's time to take LLMs seriously and figure out how to make them useful for ourselves and our companies.

For Yegge, one of the biggest concerns with LLMs and software is also the least persuasive. I, for one, have wrung my hands that developers relying on LLMs still have to take responsibility for the code, which seems problematic given how imperfect the code is that emerges from LLMs.

Except, Yegge says, this is a ridiculous concern, and he's right:

All you crazy m——s are completely overlooking the fact that software engineering exists as a discipline because you cannot EVER under any circumstances TRUST CODE. That's why we have reviewers. And linters. And debuggers. And unit tests. And integration tests. And staging environments. And runbooks. And all of … Operational Excellence. And security checkers, and compliance scanners, and on, and on and on! [emphasis in original]

The point, to follow Willison's argument, isn't to create pristine code. It's to save a developer time so that she can spend more time trying to build that pristine code. As Dogan might say, the point is to use LLMs to generate tests and reviews that discover all the flaws in our not-so-pristine code.

Yegge summarizes, "You get the LLM to draft some code for you that's 80% complete/correct [and] you tweak the last 20% by hand." That's a five-times productivity boost. Who doesn't want that?

The race is on for developers to learn how to query LLMs to build and test code, but also to learn how to feed LLMs context (like code samples) to get the best possible outputs. When you get it right, you'll sound like Higher Ground's Matt Bateman, gushing, "I feel like I got a small army of competent hackers to both do my bidding and to teach me as I go. It's just pure delight and magic." This is why AWS and other companies are scrambling to devise ways to enable developers to be more productive with their platforms (feeding training material into the LLMs).

Stop imagining a future without LLM-enabled software development and instead get started today.

Matt Asay

Matt Asay runs developer marketing at Oracle. Previously Asay ran developer relations at MongoDB, and before that he was a Principal at Amazon Web Services and Head of Developer Ecosystem for Adobe. Prior to Adobe, Asay held a range of roles at open source companies: VP of business development, marketing, and community at MongoDB; VP of business development at real-time analytics company Nodeable (acquired by Appcelerator); VP of business development and interim CEO at mobile HTML5 start-up Strobe (acquired by Facebook); COO at Canonical, the Ubuntu Linux company; and head of the Americas at Alfresco, a content management startup. Asay is an emeritus board member of the Open Source Initiative (OSI) and holds a JD from Stanford, where he focused on open source and other IP licensing issues. The views expressed in Matt's posts are Matt's, and don't represent the views of his employer.
