(Editor’s note: this post was written by human hands. No AIs were harmed or employed in writing these words.)
I can’t say at the moment that I have any great revelation to add to the AI discussion. I am as impressed, excited, and scared as anyone.
Some of the best things I’ve read so far include:
- Bill Gates – The Age of AI has begun
- Steve Yegge – Cheating is All You Need
- Microsoft Research – Sparks of AGI (this one is pretty scary)
- OpenAI – GPT-4 technical report
I think I share the most common fear that I see being expressed: that things are developing very, very fast. Literally every day for the last two weeks, the Hacker News home page has been dominated by a new AI product release, a new result from an AI lab, or both. In the few weeks since GPT-4 was announced, we’ve already reached the point where anyone can run a competent LLM on their laptop, trained on a freely available dataset and fine-tuned with the output of a hundred-billion-plus-parameter LLM. It’s hard not to feel like we are only a year or two (if not merely months) away from having superhuman-capable AI models at commodity availability. That feels way too fast for the general population and policymakers to keep up.
Of course, I have also been thinking a lot about the impact that AI is going to have on software engineering. I do generally believe that we will still need programmers, and that AI tools will mostly enable everyone to operate at a higher level of abstraction. Usually when we get a technology unlock like this, it raises our ambitions for the software we are going to create, and that drives continued demand for great software builders.
At the moment I think everyone is mostly focused on how new AI tools will allow us to create the same kind of software so much more easily. But that is not how these technical advances go – our new capabilities will inevitably raise the bar of expectations for what we can build. Towards that end, I’m not above a little prognostication:
- A “natural language interface to everything” will be the obvious first step
- Looks like we will finally get real, autonomous agents that can learn enough about me to act on my behalf
- There is a large world of “plumbing code” that gets written and re-written to connect systems together. I think we can already see with GPT-4 that most of this integration work will get fully automated. It is amazing what we can do already by connecting systems together manually (just look at the leverage you get from using AWS) – imagine how it will look when systems can learn on their own how to coordinate with each other.
- If we play out that last bullet, I think what we may see is that “UX” will get detached from the underlying service. The AI will provide such a superior human interface that the primary responsibility of something like a “SaaS” app will be to provide a set of services via API that the AI can use. Because those APIs will be described in natural language, they will be super easy for an AI to consume. And as a human – why would I bother logging into my bank portal to check my balance when I can just ask my AI assistant to do it for me? (A sketch of this pattern follows the list.)
- On the UI front, I expect we may move into a world of the “dynamic interface”. An AI will be able to construct an interface, on the fly, for whatever function we need. Maybe really complex, rich interfaces, like Figma or a spreadsheet, will still be built by hand. But the typical SaaS “buttons and forms” interface will simply be constructed as needed by your AI (see the second sketch below).
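
To make the “AI as the UX, SaaS as an API” bullet concrete, here is a minimal sketch of what that loop could look like. Everything in it is hypothetical: the bank endpoint, the tool schema, and callModel (a stand-in for a real LLM call) are made up for illustration, not any real product’s API.

```typescript
// Hypothetical shapes for a tool-using assistant.
type ToolCall = { name: string; args: Record<string, string> };

interface Tool {
  name: string;
  // Natural-language description: this is the "API doc" the model reads.
  description: string;
  invoke(args: Record<string, string>): Promise<string>;
}

// The SaaS app reduced to its service layer: no portal, just an API.
// api.examplebank.invalid is a placeholder hostname.
const bankTools: Tool[] = [
  {
    name: "get_balance",
    description: "Return the current balance for the given accountId.",
    invoke: async (args) => {
      const res = await fetch(
        `https://api.examplebank.invalid/accounts/${args.accountId}/balance`,
      );
      return res.text();
    },
  },
];

// Stand-in for a real model call. A real implementation would send the
// transcript plus the tool descriptions and parse the model's reply.
async function callModel(
  transcript: string[],
  tools: Tool[],
): Promise<{ answer?: string; toolCall?: ToolCall }> {
  if (transcript.some((line) => line.startsWith("tool "))) {
    return { answer: "Your balance is shown in the tool output above." };
  }
  return { toolCall: { name: "get_balance", args: { accountId: "123" } } };
}

// The assistant loop: the human talks to the AI; the AI talks to the API.
async function assist(userRequest: string): Promise<string> {
  const transcript = [userRequest];
  for (let step = 0; step < 5; step++) {
    const reply = await callModel(transcript, bankTools);
    if (reply.answer) return reply.answer;
    if (!reply.toolCall) break;
    const tool = bankTools.find((t) => t.name === reply.toolCall!.name);
    if (!tool) break;
    const result = await tool.invoke(reply.toolCall.args);
    transcript.push(`tool ${reply.toolCall.name} returned: ${result}`);
  }
  return "Sorry, I couldn't complete that request.";
}
```

The inversion worth noticing: the natural-language tool description, not a hand-built screen, becomes the product’s interface contract.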
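
And here is an equally hypothetical sketch of the “dynamic interface” idea: the model emits a small JSON spec for whatever form the task needs, and one generic renderer replaces all the hand-built screens. The spec shape and renderToHtml are invented for illustration.

```typescript
// A tiny field vocabulary the model could target when emitting a UI spec.
type Field =
  | { kind: "text"; label: string; name: string }
  | { kind: "number"; label: string; name: string }
  | { kind: "select"; label: string; name: string; options: string[] };

type UiSpec = { title: string; fields: Field[]; submitLabel: string };

// In this scenario, JSON like this would come back from the model in
// response to a request like "I need to schedule a wire transfer".
const spec: UiSpec = {
  title: "Schedule a wire transfer",
  fields: [
    { kind: "select", label: "From account", name: "from", options: ["Checking", "Savings"] },
    { kind: "text", label: "Recipient", name: "to" },
    { kind: "number", label: "Amount (USD)", name: "amount" },
  ],
  submitLabel: "Send",
};

// One generic renderer stands in for every hand-built "buttons and forms" screen.
function renderToHtml(ui: UiSpec): string {
  const rows = ui.fields.map((f) => {
    const input =
      f.kind === "select"
        ? `<select name="${f.name}">${f.options.map((o) => `<option>${o}</option>`).join("")}</select>`
        : `<input type="${f.kind}" name="${f.name}">`;
    return `<label>${f.label} ${input}</label>`;
  });
  return `<form><h2>${ui.title}</h2>${rows.join("")}<button>${ui.submitLabel}</button></form>`;
}

console.log(renderToHtml(spec));
```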
Almost assuredly, what actually happens will be very different from any of these predictions. Gonna be wild.

