London based software development consultant

  • 122 Posts
  • 40 Comments
Joined 5 months ago
Cake day: September 29th, 2025


  • There are some really good tips on delivery and best practice; in summary:

    Speed comes from making the safe thing easy, not from being brave about doing dangerous things.

    Fast teams have:

    • Feature flags so they can turn things off instantly
    • Monitoring that actually tells them when something’s wrong
    • Rollback procedures they’ve practiced
    • Small changes that are easy to understand when they break

    Slow teams are stuck because every deploy feels risky. And it is risky, because they don’t have the safety nets.
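    The first safety net in that list can be sketched in a few lines. This is a minimal, hypothetical in-memory flag store (production teams typically use a config service or a product such as LaunchDarkly or Unleash); the point it illustrates is that turning a feature off becomes a config change rather than a redeploy:

```python
class FlagStore:
    """Toy in-memory feature flag store (illustrative only).

    A real store would be backed by a config service so flags can be
    flipped at runtime without redeploying.
    """

    def __init__(self):
        self._flags = {}

    def enable(self, name):
        self._flags[name] = True

    def disable(self, name):
        # The instant "off switch": no build, no deploy, just a flag flip.
        self._flags[name] = False

    def is_enabled(self, name):
        # Unknown flags default to off, so a missing flag fails safe.
        return self._flags.get(name, False)


flags = FlagStore()
flags.enable("new_checkout")


def checkout(cart):
    # The risky new code path is guarded; the old path stays available.
    if flags.is_enabled("new_checkout"):
        return "new checkout flow"
    return "old checkout flow"
```

    Guarding the new path this way is also what makes small changes safe to ship: the change can reach production dark, then be enabled gradually.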


  • In fact, this garbage blogspam should go on the AI coding community that was made specifically because the subscribers of the programming community didn’t want it here.

    This article may mention AI coding but I made a very considered decision to post it in here because the primary focus is the author’s relationship to programming, and hence worth sharing with the wider programming community.

    Considering how many people have voted this up, I would take that as a sign I posted it in the appropriate community. If you don’t feel this post is appropriate in this community, I’m happy to discuss that.



  • Regardless of what the author says about AI, they are bang on with this point:

    You have the truth (your code), and then you have a human-written description of that truth (your docs). Every time you update the code, someone has to remember to update the description. They won’t. Not because they’re lazy, but because they’re shipping features, fixing bugs, responding to incidents. Documentation updates don’t page anyone at 3am.

    On a previous project, we had a manually maintained Swagger document that was supposed to be the source of truth for the API and kept in sync with the code. In practice, no one kept it in sync, except when I reminded them to.

    Based on that and other past experiences, I think it’s easier to make the code the source of truth and generate your API documentation from it.
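    As a minimal illustration of code-as-source-of-truth (a Python sketch using only the stdlib `inspect` module; the `create_user` function and `describe` helper are hypothetical, standing in for what frameworks like FastAPI do when generating OpenAPI docs), documentation derived from the function itself can never drift from the function:

```python
import inspect


def create_user(name: str, email: str, admin: bool = False) -> dict:
    """Create a user account and return its record."""
    return {"name": name, "email": email, "admin": admin}


def describe(func):
    """Build a doc entry from the function's own signature and docstring.

    Because the entry is derived from the code, renaming a parameter or
    changing a type annotation updates the docs automatically.
    """
    sig = inspect.signature(func)
    return {
        "name": func.__name__,
        "summary": inspect.getdoc(func),
        "parameters": {
            p.name: (p.annotation.__name__
                     if p.annotation is not inspect.Parameter.empty
                     else "any")
            for p in sig.parameters.values()
        },
    }
```

    Calling `describe(create_user)` yields an entry whose parameter names and types are read straight from the signature, which is exactly the property a hand-maintained Swagger file lacks.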





  • This quote on the abstraction tower really stood out for me:

    I saw someone on LinkedIn recently — early twenties, a few years into their career — lamenting that with AI they “didn’t really know what was going on anymore.” And I thought: mate, you were already so far up the abstraction chain you didn’t even realise you were teetering on top of a wobbly Jenga tower.

    They’re writing TypeScript that compiles to JavaScript that runs in a V8 engine written in C++ that’s making system calls to an OS kernel that’s scheduling threads across cores they’ve never thought about, hitting RAM through a memory controller with caching layers they couldn’t diagram, all while npm pulls in 400 packages they’ve never read a line of.

    But sure. AI is the moment they lost track of what’s happening.

    The abstraction ship sailed decades ago. We just didn’t notice because each layer arrived gradually enough that we could pretend we still understood the whole stack. AI is just the layer that made the pretence impossible to maintain.


  • I originally shared this after stumbling upon it in one of Martin Fowler’s posts.

    The article reminds me of how my mother used to buy dress patterns, blueprints if you will, for making her own clothes. This no-code library is much the same, because it offers blueprints you can use to build your own implementation.

    So the thing that interests me is what has more value: the code or the specifications? You could argue that in this age of AI-assisted coding, code is cheap, but business requirements still involve a lot of effort and research.

    To give a non-coding example, I’ve been wanting to get some cupboards built, and every time I contact a carpenter about this, it’s quite expensive to get something bespoke made. However, if I could buy blueprints that I could tweak, then in theory, I could get a handyman to build it for a lower cost.

    This is a very roundabout way of saying I do think there are some scenarios where the specifications would be more beneficial than the implementation.


  • I am not surprised that there are parallels between vibe coding and gambling:

    With vibe coding, people often report not realizing until hours, weeks, or even months later whether the code produced is any good. They find new bugs or they can’t make simple modifications; the program crashes in unexpected ways. Moreover, the signs of how hard the AI coding agent is working and the quantities of code produced often seem like short-term indicators of productivity. These can trigger the same feelings as the celebratory noises from the multiline slot machine.