Like an impatient commuter on the 7:45 to London Waterloo, it’s time for me to cram myself onto the unreasonably overloaded hype train that is AI blog posts. I’ve long wanted to write something on the topic, but the tidal wave of hyperbole — both glowing and gloomy — has made me hesitate. I just don’t want to feed into that.
So anyway… time to feed into that!
The catalyst that finally made me write this is simple: I see a lot of fear over the pace of change, and I also see a lot of people with wholly unrealistic expectations. I want to offer reassurance to those who are worried, and a dose of reality to those who are overly optimistic.
This post aims to be a grounded[1] view of what AI is, what I think it’s useful for, and where I see it heading. Each section is just a snippet of my views on various AI-related topics. I may cover some of the ideas touched on here in more depth in future posts, but for now this is just a fleeting tour across the landscape of AI and some of my more fringe takes on the subject. No hate. No hype.
Never-ending exponential growth
AI is going to replace your job.
This was a very common phrase only a few months back, but these days most people have had enough experience with models to know, at least intuitively, that AI isn’t coming for most jobs any time soon.
As it became apparent that artificial general intelligence (AGI) perhaps wasn’t quite as close as Mr Altman and his ilk would have us believe, the phrase was quickly replaced by “AI isn’t coming for your job, but someone who knows how to use it is”. This is a phrase with perhaps some more substance to it.
Let me start off by saying this: LLMs are really cool tools, and you should use them. Experiment with Copilot, ChatGPT and Midjourney. These tools will have a fundamental impact on how a lot of jobs are done; it would be foolish to suggest otherwise.
There is one thing that really bugs me in a lot of conversations around AI[2] though, particularly when discussing where it’s going in the future: the assumption that progress with AI will be rapid and continuous.
I don’t think we’re seeing exponential progress now. Nor do I think we will see it in the near future.
There have been two main advances in AI in the last fifty years: backpropagation (1986) and the transformer architecture (2017)[3]. There have been smaller advances like dropout and ReLU/GELU, but aside from those two breakthroughs, the vast majority of progress has come from stuffing more training data into ever-larger models. It’s raw computational power and scale. Nothing more.
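For context, the heart of the transformer architecture is scaled dot-product attention. The sketch below is a minimal NumPy illustration of that single operation and nothing more; the shapes and the toy self-attention example are my own invention, not taken from any particular model or library.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core building block of the transformer (Vaswani et al., 2017).

    Q, K, V: arrays of shape (sequence_length, d_k) holding queries,
    keys and values. Purely illustrative; real models wrap this in
    multiple heads, masking and learned projections.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query/key similarity
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted sum of values

# Toy self-attention over 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(x, x, x).shape)  # (4, 8)
```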
The problem is, we’ve now run out of data.
We saw an exponential boom in AI capability in 2018, when the first transformer-based models were released to the general public. That was the exponential part. What followed were roughly linear improvements in model quality as the models grew larger. Now that we’re out of data, that growth will tail off - unless there is another major breakthrough on the scale of the transformer architecture. Progress has slowed and it will continue to do so. New technologies like agentic AI and deep research are effectively just tweaks and wrappers around the same types of models.
All that said – I do think there is a threat to some jobs.
It doesn’t actually matter how good the technology is in reality. What matters right now is how good it’s perceived to be by those in control of the hiring and firing at companies. AI does a wonderful job of looking very convincing at first glance; it’s not until you dig into the detail that the faults become clearer. When all you have is the 1000-foot view, it’s very difficult to see those faults, particularly when comparing the AI to the very expensive human who appears to be doing a similar job.
So whilst it may sound slightly downbeat, I do see less astute companies reducing head count in favour of AI in the short term under the banner of “increased efficiency”[4]. AI quality may catch up fast enough to hide those mistakes, but more likely those companies will be rehiring teams of people in a few years to patch over the messy codebases that result. This is especially true when it comes to backend development.
The good news is that this is also an opportunity for those willing to seek it out. Well planned and targeted use of AI by knowledgeable individuals can be a brilliant productivity boost. Although I wouldn’t advocate for wholesale replacement of humans in most roles, learning to augment your work with the right AI tools at the right time can really help you stand out in a tough job market.
Regulation and existential threats
Another way AI is currently dominating the news cycle is via the impending threat of human extinction.
This is a genuinely important thing for us to be talking about as a species[5]. But doesn’t it seem a little odd that those parroting lines about the potential existential threat to humankind are the very people making the technology?
It’s also notable that it’s the AI companies themselves that are calling for tighter regulations on AI. Why would that be?
Perhaps they’re beautifully altruistic people who want only the best for the world. Call me cynical, but I suspect there may be another reason…
To a lesser extent, it’s important to note that most (but not all) technology companies trade on their potential future value, not what they’re currently selling. This is why companies like Tesla have market caps that are orders of magnitude higher than their closest competitors: Tesla is not valued on the cars it sells, it’s valued on its future potential. It’s also why Musk repeatedly makes grandiose predictions and fails to meet them – it’s a mechanism for keeping perceived future value high.
This is important in the context of companies pushing AI for similar reasons. Claiming that your technology may be so colossally game-changing that it could be a threat to life itself is an indirect way of saying your company will have tremendous future value. It helps fuel investment. It helps pump the stock price. It helps gain media traction. It helps with funding rounds. This has been a very common play for major technology companies like Google, Meta and Microsoft over the past few decades.
But what about Anthropic and OpenAI? They’re even calling for tighter regulations too…
The second reason is about market control. It’s becoming fairly well understood that owning the main AI platform in the near future will be akin to Google owning internet search, Microsoft owning operating systems or Apple owning smartphones - an incredibly powerful, monopolistic position on which to build an extremely dominant company. Anthropic and OpenAI have built multi-billion dollar companies in a landscape of free markets and little-to-no regulation. They and other AI companies are making a play to become the de facto AI platform that people go to, and tighter regulation is an excellent way to make it harder for newer companies to appear and disrupt their market share. When you’re in the lead, regulation stops being a barrier to you and starts being a moat that protects you from would-be competitors.
We should take the potential threats of AI very seriously. But given the potential tail-off in technological progress I foresee, I read the calls for regulation as an attempt to cement market position and further the interests of the key companies in the AI space, rather than a genuine expression of concern.
Outsourcing thought
If you’ve made use of tools like Replit or any other agentic setup, you’ve probably recognised a certain pattern of working. You think a little about the problem whilst writing it down, send the prompt, then you disengage mentally whilst the agents do their thing.
It’s all very hands off.
I can see why it’s very appealing. Much like we outsourced our strength to machines during the industrial revolution, we’re starting to outsource the need to think to AI. This can have exceptional benefits – you can dabble in areas you never knew about without having to put in the hard yards to learn them properly. This is great if it’s something you don’t intend to learn. In effect, it allows everyone to be a generalist.
But what about things you really should learn?
To me it has all the trappings of doom scrolling – a little dopamine hit with no real exertion or effort. You get to feel like you’re progressing, but you’re skipping the important part: the learning. That’s fine if you’re dabbling, but if you need deep understanding later, it’s a shortcut that comes at a steep cost. It can be a dangerous and addictive path to go down – if you can skip the need to learn and go straight to the solution, why would you ever bother learning anything? Achieving whatever you like without taking the time and effort to learn may seem tempting, but AI is not (and likely never will be) perfect all of the time[6].
So then… what happens when the AI is wrong or can’t solve the problem, and you’ve outsourced all of your thinking to it for the past few years?
This skill erosion is a very real threat to companies in the near-to-mid-term and to the technology sector as a whole. If the current crop of junior developers grow up reliant on AI to do their thinking for them, how will they develop the experience and critical reasoning skills required to become the next generation of senior engineers? You only develop those skills through years of wrestling with difficult problems, reasoning about solutions and experimenting with different approaches to coding.
I can feel the immediate retort from any AI maximalists reading this – “this is no different from the advent of compilers”. It’s a common counterpoint, but this time is very different for three key reasons:
- Understanding - We understand wholly how compilers work and could therefore continually and deliberately refine their quality. This is not true of AI: we do not understand how it works, and the testing surface is impossibly large. We cannot know with any degree of exactitude whether AI models are getting better at writing code; ultimately we can only “feel” that they are improving.
- Unreliability - We are starting from a much lower bar. Short of outright defects, compilers never produced assembly that didn’t run. Models frequently produce code that simply does not compile, let alone do what was asked of it, which means AI requires drastically more human intervention than compilers ever did. Yes, a compiler may produce slower-running code that a programmer chooses to optimise, but the output always functions correctly. This is particularly important given the context, covered in the first section, of how models are likely to scale (or rather, not scale) in the coming years.
- Non-determinism - Models are, by their very nature, non-deterministic, and they become even less predictable as the problem space or the length of the task grows. This makes their usage unpredictable and can cause AI-only codebases to diverge drastically, even when performing very similar operations (see the sketch after this list).
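To make the non-determinism concrete, here is a minimal sketch of temperature-based sampling, which is how LLMs typically pick each next token. The vocabulary and logits below are invented purely for illustration, but the mechanism is why two identical prompts can yield different code.

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, rng=None):
    """Sample one token index from raw model scores (logits).

    Higher temperature flattens the distribution and increases randomness;
    as temperature approaches zero, sampling approaches greedy
    (deterministic) decoding.
    """
    if rng is None:
        rng = np.random.default_rng()
    scaled = np.array(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()                      # softmax over the vocabulary
    return rng.choice(len(probs), p=probs)

# Hypothetical next-token candidates after "def add(a, b): return "
vocab = ["a + b", "sum([a, b])", "b + a", "a - b"]
logits = [3.1, 1.4, 2.8, 0.2]

for run in range(3):
    # Each run can legitimately pick a different continuation.
    print(run, vocab[sample_next_token(logits)])
```

Chain thousands of these samples together over a long task and small early divergences compound, which is why two “identical” agentic runs can end up in very different places.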
Critical analysis skills will still be needed. Maintainability of code will still be needed. Good abstraction into shared functionality will still be needed. Even if writing code becomes cheaper, it’s still beneficial for a whole host of reasons to create good architecture, minimise duplication, and craft well-thought-out abstractions.
I’d highly encourage any developers reading this – particularly junior ones – to view AI as an augmentation and not as a replacement. Using AI is a means of gaining speed now at the cost of understanding later. If you’re never going to learn the thing, by all means use AI as a shortcut to get something half-decent. But if it’s a core skill you’ll need in the future, or the product you’re building needs to stand the test of time, take the time to learn it properly and don’t shortcut things with AI.
What can we do?
“The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.”[7] – Isaac Asimov
If there’s one thing I hope this post has made clear, it’s that AI isn’t something to worship – or to fear. It’s a tool. A powerful one, yes. A disruptive one, undoubtedly. But still just a tool. And one of many we have at our disposal. Tools are only ever as good as the craftsperson using them.
It’s tempting, in the face of relentless hype and lofty predictions, to think you have to adopt AI everywhere and for everything. That if you’re not using it all the time, you’re falling behind. But good engineering has never been about blind adoption. It’s about choosing the right approach, at the right time, for the right problem.
So try it. Break it. Question it. Use it where it makes sense – and don’t be afraid to walk away when it doesn’t. Experiment and learn how it behaves, where it fails, and where it truly shines. The best developers I know aren’t the ones who mindlessly adopt every new trend; they’re the ones who stay curious, who gain first-hand experience, and who apply critical thought.
AI might change the industry, but it doesn’t change what it means to be a good developer. Be rigorous. Be thoughtful. Be curious. That’s how we’ve always moved forward. That’s how we still will, no matter the tools we use.
[1] Perhaps using a terminator-style, evil-eyed robot for the banner image isn’t exactly the most impartial choice, but you’ve got to admit, it does a great job of conveying the impending sense of inevitability that a lot of people feel when it comes to AI. Hopefully this is counterbalanced by the fact that I generated said image using AI…

[2] To be clear, when I say AI in the rest of this article, I’m referring specifically to LLMs.

[3] In case you’re interested in the paper that really kicked off the current wave of AI, take a look at “Attention Is All You Need” by Vaswani et al.

[4] We’re already seeing this in some industries, such as non-fiction publishing and low-brow news sites. A good chunk of these companies are now trying to churn out fully AI-written content instead of employing human writers, with human editors just checking over it. The results I’ve witnessed so far leave something to be desired. No doubt you’ve also noticed the copious amounts of AI slop that permeates your social media feeds these days.

[5] If you’re interested in the philosophical side of AI safety, Robert Miles (no, not the DJ) has been speaking at length on this topic for almost a decade. He’s an excellent speaker and a seemingly impartial source.

[6] In fact, at the time of writing, the best-performing model in the world only succeeds at tasks that take humans an hour to complete around 50% of the time. That model, Claude 3.7, also happens to be the only one to reach that threshold; the next best model only hits the 50% mark on tasks that take humans around 30 minutes. It’s also important to add that “correct” does not mean bug-free, well architected, or in any way robust. It just means that the code compiles and that the golden path performs what was required of it. Details of one of the articles this comes from are found here.

[7] Obviously I couldn’t do an AI blog post without a cliché quote from Isaac Asimov…