One word. Autonomous.
With autonomous systems, humans (ICs) can be tasked with larger problems without being bottlenecked by human capabilities and constraints. Focus time can be extended, work hours increased. These systems should eventually scale better than a fleet of humans: cheaper, smarter, and more efficient. As AI’s capability grows, so does the individual contributor’s. Painfully writing code without an editor, without an LSP, without a fast build system, on slow internet, etc. doesn’t necessarily make you a better programmer. The outcomes of the problems you solve define what quality of engineer you are; the means of achieving those outcomes is, in actuality, quite uninteresting.* Sure, the painful experience might leave you with a better understanding of the system you’re working on, but not all knowledge is equal.
Philosophical reasoning aside, the utility I am seeking today is legacy-system knowledge retrieval that yields effective coding agents capable of completing daily software tasks. This leaves the IC responsible for designing systems, steering, and validation.
I want to change the SDLC. I really only have experience of this from a big-tech perspective, but I think it could apply to smaller and mid-sized companies and teams as well. We all solve tasks in our domain that are boring, take time, and take some domain expertise. Let’s call them what they are: boring problems, busy work, but stuff that needs to get done by something.** These I want to reduce, to reserve time for innovation. If we have more time to think about systems without having to steer agents or be in every loop, then I’d argue there is more time for research and exploratory work. The typical IC/SWE is no longer a code monkey. They are exploratory human agents solving hard problems, and effective delegators.
Next question: Can I design this so it’s extensible for companies to use?
(*) I am saying the means to the outcome is uninteresting, yet I am calling for the means to be agents. I see the contradiction. My point is that people are reluctant to give agency to something other than a human. Stop caring so much about the how; I don’t care if you wrote the code in notepad++ and only read the source code and docs. Let’s try different means to an end and push it. If it ends up failing, so be it. Let’s learn from it. And on the off chance it performs better than the previous norms of software engineering, let’s redefine it entirely.
(**) Notice how I didn’t say someone ;)
<<This is a journal entry written before boarding a flight back to Seattle. I’ve been journaling a lot lately because I see AI as a moment in my career that could be an inflection point.>>