Innovation · 11 min read
AI is speeding up software engineering, but how do we use that speed?

Lukáš Volf
Mar 9, 2026

At our recent event in London, we brought together engineering and product experts to talk about how AI is changing the way software gets built. The conversation quickly moved beyond the usual headlines. Less about whether AI can write code, more about what actually changes when it becomes part of everyday workflows.
To guide this conversation, we were joined by an expert panel featuring Macs Dickinson (Director of Engineering at LHV Bank), Uma Kala (Staff Software Engineer and Team Lead at ComplyAdvantage), Ian Cheng (Senior Digital Strategy Executive at Jaguar Land Rover), and our very own Applifting UK CEO Jan Hauser.
A few themes stood out.
Ambiguity is the real bottleneck
AI is incredibly capable, but it relies on clear input. Vague specifications lead to unpredictable output. Precise intent, on the other hand, produces remarkably useful results.
That’s why teams are rediscovering the value of well-defined problems and strong architecture. Structured thinking and solid practices remain crucial when using AI. Without them, the technology only exposes where good engineering discipline is missing.
"If you are vaguely giving instructions and using AI for the sake of using it, you are only creating a mess for yourself to clean up later."
— Uma Kala
The teams that benefit the most aren’t necessarily those with the fanciest tools, but the ones who know exactly what they’re trying to build. In that sense, engineering maturity has become a true force multiplier.

Feedback arrives earlier in the process
Many people assume AI in engineering mainly speeds up code generation. In reality, one of its most powerful effects is shortening the feedback loop.
In a typical workflow, certain gaps or misalignments only surface during later code reviews, sometimes hours or days after the work was done. AI can detect these issues as soon as the work is submitted, referencing the original specifications to flag inconsistencies or missing pieces immediately.
“Treat it like a human and think about what humans are good at and what humans are not so good at. And as humans, we're really rubbish at remembering a big list of things that we need to check.”
— Macs Dickinson
For instance, an AI agent can be used to automatically review Jira tickets against a project's "definition of ready" or "definition of done" as soon as they are pulled in. Previously, a poorly defined ticket might suffer a three-day turnaround before a human finally picked it up and realised critical information was missing. Now, the person creating the ticket gets immediate feedback to fix the gaps right away.
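To make the idea concrete, here is a minimal sketch of such a readiness check. The ticket fields and checklist below are hypothetical, not a specific Jira integration; in practice an agent would pull the ticket via an API and use a model to judge fuzzier criteria, but the shape of the check is the same:

```python
# Hypothetical sketch: validate a ticket against a "definition of ready"
# checklist and surface the gaps immediately, instead of days later.

REQUIRED_FIELDS = {
    "acceptance_criteria": "Acceptance criteria are missing.",
    "user_story": "The user story is missing.",
    "estimate": "The ticket has no estimate.",
}

def definition_of_ready_gaps(ticket: dict) -> list[str]:
    """Return human-readable gaps; an empty list means the ticket is ready."""
    return [
        message
        for field, message in REQUIRED_FIELDS.items()
        if not ticket.get(field)  # missing or empty counts as a gap
    ]

# A ticket created without acceptance criteria or an estimate:
ticket = {"user_story": "As a user, I want to export my data."}
for gap in definition_of_ready_gaps(ticket):
    print(gap)
```

The point is not the checklist itself but the timing: the author sees the gaps while writing the ticket, not after a three-day turnaround.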
AI doesn’t replace human review. But it helps engineers catch issues early, while the context is still fresh.

Speed alone doesn’t tell you if you’re building the right thing
There’s a lot of talk about how AI helps teams ship software faster. And while speed matters—engineering has always measured productivity by cycle time—faster delivery only counts if it actually delivers value.
“It is now very easy to develop new features, but we should not create problems just to deliver more features. The problems should still come from customer validation.”
— Jan Hauser
Even the most efficient processes won’t help if you’re building the wrong features. Metrics like velocity or output volume don’t tell you whether people actually use what you create. Ultimately, the question is whether customers care.
AI can help teams answer that question. With better access to data and insights, it’s easier to understand usage patterns, behaviour, and friction points. This creates an opportunity to shift focus from output to outcomes. From measuring how much was built to understanding what truly makes a difference.
Tools expire, but principles remain
With the landscape of AI shifting so rapidly, it is incredibly difficult for engineering teams to standardise their workflows. As the panel pointed out, the best-in-class AI model changes on a week-by-week basis. Trying to go all-in on a single provider often means missing out on the next major breakthrough just days later.
Because of this rapid churn, the most resilient organisations aren't just teaching their engineers how to use specific AI prompts—they are doubling down on foundational governance, security, and ethics.
"Tooling will be great to have, but it will expire and be replaceable. The principle is the everlasting, evergreen thing."
— Ian Cheng
When models are constantly swapped out and updated, a developer's understanding of the underlying problem remains their most valuable asset. It doesn't matter which AI agent writes the code, so long as the engineer understands the ethical boundaries, security risks, and the ultimate purpose of what they are building.

The "Google Maps" effect in engineering
As AI takes over more of the heavy lifting, a new concern is emerging among engineering leaders: cognitive decline. When developers offload both coding and critical thinking to an AI agent, they risk losing the foundational skills required to understand the systems they are building.
"It's the Google Maps effect. Google Maps came out, and suddenly no one knows directions."
— Macs Dickinson
This dynamic is similar to the transition an engineer makes when moving into management. The further removed you are from the day-to-day details of the codebase, the faster your practical knowledge fades. If developers rely entirely on AI to generate complex architecture, they will eventually lack the deep, granular understanding required to fix those systems when things inevitably break.
To combat this, engineers will need to treat coding like a martial art—practising "coding katas" and solving problems from scratch, not because they have to, but to keep their minds sharp and their skills intact.

Navigating AI adoption: Expert answers on model costs, security, and engineering maturity
We wrapped up with a Q&A, digging into the reality of what actually works with AI in engineering. Here are the audience questions and our answers:
Do you imagine a fully AI-developed and run company in the near future?
AI-driven agentic companies already exist, built on the DAO principles we recognise from the crypto era. Take a look at Virtuals as a prime example of this concept. There are also autonomous hedge funds already managed by AI. We’re actually living this reality right now.
What are your views on cognitive decline caused by over-reliance on AI, considering that both code and critical thinking can, and may, be offloaded to it?
Interesting point. It points toward a broader dynamic: if we continue to degrade as humans, AI will also grow 'dumber', because there won't be enough creative, human-led content being produced. On a philosophical level, we should use AI to encourage critical thinking rather than lean on it entirely. That's often tricky, given our general tendency toward laziness, but we shouldn't stop thinking or over-rely on bots.
I also don't see much point in observing 'bot-only' social networks like Moltbook, where AI develops its own threads. It’s an interesting experiment, but despite its bizarre nature, it isn’t something that personally enlightens me.
How are you managing the hidden model costs of AI tools like Claude and Cursor?
From a vendor perspective, I wouldn’t say costs are 'hidden'—Anthropic and Cursor are quite transparent—but they can certainly be unpredictable. You’re paying for the seat and the usage, but the true hidden factor is Context Bloat. Every time you prompt, these tools often send relevant project files back to the model. Managing spend isn't just about the monthly bill; it’s about how you manage your context.
Be intentional with the context you provide and the rules or skills you use to adjust the agent's behaviour. Also, exercise caution with MCP (Model Context Protocol) servers: enabling several of them simultaneously can bloat the context window before you even start prompting.
The most effective way to save is simply using the right brain for the right job:
Opus 4.6 / GPT-5.2 (The Architect): Reserve these for high-level design, complex debugging, and system architecture. Use expensive models only when the cost of a mistake outweighs the cost of the tokens.
Sonnet 4.5 (The Workhorse): This is your daily driver. It’s the sweet spot for coding, UI building, and logic implementation.
Haiku / Mini Models (The Utility): Perfect for 'grunt work' like unit tests, boilerplate, or simple scripts. They are nearly free and near-instant.
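The tiering above can be sketched as a simple routing rule. This is an illustrative stand-in, not any vendor's API; the tier and task labels are made up for the example, and a real router would classify tasks with more nuance:

```python
# Illustrative sketch: send each task to the cheapest model tier that can
# handle it. Tier names and task labels are hypothetical.

MODEL_TIERS = {
    "architect": "opus",    # expensive: design, complex debugging
    "workhorse": "sonnet",  # daily driver: coding, UI, logic
    "utility": "haiku",     # near-free: tests, boilerplate, scripts
}

def pick_model(task_kind: str) -> str:
    """Route a task to a model tier; default to the mid-tier sweet spot."""
    if task_kind in {"architecture", "complex_debugging"}:
        return MODEL_TIERS["architect"]
    if task_kind in {"unit_tests", "boilerplate", "script"}:
        return MODEL_TIERS["utility"]
    return MODEL_TIERS["workhorse"]

print(pick_model("boilerplate"))    # cheap tier for grunt work
print(pick_model("architecture"))   # expensive tier only when mistakes cost more than tokens
```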
Finally, maintain 'Context Hygiene.' If a thread gets too long, the tool sends the entire history back with every new prompt, burning tokens fast. Start fresh threads often—it keeps the model sharp and the bill low.
What happens when Claude Code goes down?
You follow your backup plan: switch to Cursor and lean back on your own coding skills. But the reality is that the question is very similar to asking, 'What if AWS goes down?'
How should organisations support the Engineer AI maturity journey, and can you share an ideal maturity distribution across an engineering department?
AI maturity is a nuanced topic; it’s highly individual and depends heavily on the codebase you’re managing. A legacy system requires a different skillset and setup than a fresh greenfield project using cutting-edge tech.
As a rule of thumb, it’s best to have your most mature engineers leading AI adoption. However, these should be experts who don’t use AI blindly—they’ve built systems from the ground up and understand the architecture even without the tooling. It's also worth remembering that the SDLC isn't just about coding; there is immense value in AI for Product, HR, Sales, and general process automation.
Regarding the AI maturity journey: create a playful environment. Encourage your engineers to experiment with tooling and workflows. They’ll never become more efficient if you, as an entrepreneur or lead, don’t give them the space to grow. Juniors are a special case; as discussed in the panel, companies must invest in them now, or there simply won't be enough seniors on the market later. Focus on step-by-step growth—there are no shortcuts.
How do you think (if at all) AI usage within your work and UK-based businesses in general would be impacted if the UK began heavily regulating AI like the EU?
I’m not a regulatory expert, but in my view, any regulation will either support growth or stifle opportunity. It all depends on the intent: is it an instrument for market control, or was it co-created with businesses and the public to foster a clear, safe environment that protects consumers while driving innovation and business growth?
The EU AI Act is largely about defining risk appetite, though the broad definition of AI within the Act is concerning. It raises the question of whether the timing is right. So far, I haven’t noticed a massive discrepancy between the EU and UK AI startup ecosystems, but it’s worth noting that iterative cycles for AI products currently tend to move faster in the UK.
If intellectual property is being developed and considered a competitive advantage for your business, is it not a concern to send it over to Claude, etc.?
Security is a common concern, especially for enterprises that previously banned AI. However, the tide has turned. Establishing clear AI policies is now the preferred path; it prevents employees from inadvertently sharing company IP with private chatbots.
Once a company-wide policy is in place, you can generally trust that legal and risk departments have vetted the licensing and security factors. Most enterprise-grade AI providers guarantee that data is not used for training and is stored in secure, licensed environments only for as long as necessary. Ultimately, it’s about weighing these manageable risks against the tangible value that AI workflow automation brings to the business.
How do you balance AI regulation with adoption? And do you think we should 'pause' at some point to better understand what we're building with AI?
A clear AI policy is the foundation for safe, effective adoption. As for the idea of a 'pause' to better understand what we’re building: I expect natural delays in development anyway. These will likely stem from hardware constraints like storage availability, capital market shifts, or simply the organic pace of scientific discovery.
None of these are inherently bad; they act as a form of self-regulation. While the speed of Generative AI research can feel daunting, there’s no need for 'doomer' rhetoric or calls to halt progress—at least that’s my view.
How has your business changed now that it seems like you need a level 3+ AI engineer and way fewer engineers in general?
Market demand did not slow down; it’s quite the opposite. I don’t think needing fewer engineers is a reality at this stage. It’s more about the fact that the iteration cycles of products and businesses have accelerated, and we’re able to deliver even more value and higher quality, faster.
It’s more of an enabler for our engineers to build their personal projects and test them on the market in parallel with their daily jobs, which creates more opportunities.
If we merge all roles, doesn't that mean that a lot of people will be without work?
I don’t think so, although some sectors will be impacted more than others. Look at the interpreter and translator market; the amount of high-quality products and translation automation in this space is enormous.
As we discussed during the keynote, required skills will merge, and different skill sets will be considered of higher importance. We’ll start focusing on soft skills more, and problem-solving will be super important. Roles will merge toward multi-skilled individuals, and we need to start this change at schools and universities to prepare the new generation of the workforce.
Will there be agents that are highly dedicated to specific functions, such as a platform engineer agent? Is it possible to combine agents together?
Oh, it’s already here. You can already define a specific focus and the tooling available for a specific agent. You can also combine them: select the right agent for a specific task and ask a different one to review it.
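A toy sketch of that write-then-review pattern follows. The two "agents" here are plain functions standing in for real model calls with different focuses; the point is only the shape of the pipeline, where one agent's output becomes another agent's input:

```python
# Toy sketch of combining agents: one drafts, a different one reviews.
# Both functions are hypothetical stand-ins for real model calls.

def coder_agent(task: str) -> str:
    # Stand-in for a code-generation agent.
    return f"def solve():\n    # TODO: {task}\n    pass"

def reviewer_agent(code: str) -> list[str]:
    # Stand-in for a review agent with a narrower, checklist-style focus.
    issues = []
    if "TODO" in code:
        issues.append("Unresolved TODO left in the code.")
    if "pass" in code:
        issues.append("Function body is empty.")
    return issues

draft = coder_agent("parse the config file")
for issue in reviewer_agent(draft):
    print(issue)
```

In real setups, each agent is typically given its own instructions and tool access, so the reviewer is not just the same model re-reading its own work.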
During the usage of various models, is the output similar when developing an app?
We touched on where it’s wise to use a fully AI-driven flow and where it isn't. We recommend using your own knowledge to define boundaries that prevent models, AI, or agents from going off-track. The human engineer remains in control, though focusing more on the specification and review phases rather than writing every line of code.
If you create the right spec and use a suitable model—especially now, when various business models and tools are highly comparable—you can achieve excellent, consistent results.
How do you teach fundamentals, whilst also leaning into agentic tooling?
We facilitate internal AI talks within our academy platform and focus on finding the right ambassadors inside the company—essentially driving multi-level adoption through key knowledge stakeholders. We have also defined a clear company-wide AI policy.
Additionally, we offer an AI Adoption Program as a service for our customers. This framework is built on six pillars:
DX Change Management
AI Adoption Strategy
Systematic AI Education
Metrics & Governance
Custom AI SDLC (Software Development Life Cycle)
AI Process Automation Exploration
Let us know if you would like further details on any of these. You can find more at dxheroes.io.
What are the biggest challenges you face when scaling AI adoption in your company?
During the panel, Macs correctly pointed out that some developers may be extremely resistant to AI tooling—referring to them as 'Level -1' in terms of AI maturity. This is a real challenge that can block an entire workstream from getting onboarded and up to speed.
Another hurdle is that AI adoption can feel overwhelming, particularly for junior developers, creating a significant mental block. The solution is to leverage AI evangelists within the company to demonstrate practical, real-world examples. Scaling adoption through these internal advocates is the most effective way to break down those barriers.
How much is AI used to actually ideate, as opposed to reinvent the wheel, recreating resources that already exist?
This is an interesting question. AI chatbots are fantastic tools for brainstorming, though they usually aren’t the ones providing true creativity or innovation—that’s still on you. However, they act as a 'healthy opponent' for your ideas. I personally use AI as a sounding board to bounce off quick concepts and to validate whether my thoughts correctly reflect reality.
How does your engineering team keep on top of what is going on when the rate of change in the product is so vast?
We keep on top of things by following the right articles and testing cutting-edge releases of the tooling immediately. The biggest enthusiasts do this because they’re passionate about the topic; they’ve jumped on the wave of AI transformation and have no intention of getting off. Personally, I go to them and ask questions about what I’ve missed while focusing on other areas. My advice is to find those inspiring individuals around you.
Is YouTube now a training resource?
This question is two-sided. If you’re asking if it’s a training resource for people, the answer is obvious—it’s been one for a long time. However, it's important to maintain a sense of critical thinking. As for whether video content is used for AI training: absolutely, though it’s currently a battle over content terms. YouTube content, owned by Google, is definitely used to train Gemini models, and they try to restrict other AI giants from doing the same.
An interesting take here is: what level of development will AI achieve when it begins training on AI-generated content? Will this cause model degradation or stagnation in the long term? Who knows.