A discussion with our CTO yesterday about best practices in Claude-based coding somehow turned into a very interesting conversation about how AI-agent-driven software engineering is changing the SAFe ways of working we have for planning and follow-ups.
We know that the SAFe manifesto, with all its ceremonies and roles, is built upon one assumption: that the bottleneck of software engineering is the capacity of our developers, their availability, productivity, and ability to scale.
Now, with agentic AI-based coding, that bottleneck is being lifted for the first time. Given unlimited token and computing resources, most products could conceptually be built within a week: producing code, tests, and validation, as well as debugging, can be done in minutes and hours with the right design and orchestration, no longer in days and months.
What does this mean for SAFe-style estimation, backlog refinement, and breakdown? Does the product owner have enough capacity to validate and produce requirements as fast and as precisely as an AI agent produces outcomes? The dramatically reduced lead time in code delivery and testing also casts a shadow over the traditional long planning cycle with its increments and PIs. It is becoming clear that there is a fundamental mismatch between the SAFe methodology and the way production works in a hybrid model of AI agents and human developers.
The blog post from Steve Jones points out even more gaps and contradictions.
We are living in an age of dramatic change. It was just 6-7 years ago that people started to throw away the traditional waterfall methodology for managing projects and initiatives, dismiss the ITIL processes for operational governance in IT, and embrace SAFe as the cure for all problems.
Away with documentation, away with workflows and clear boundaries of roles and responsibilities.
In with self-governance and complete transparency, and trust in team collaboration built on intensive day-to-day communication;
In with town-hall-scale gatherings for days-long collaboration between teams on estimation and dependency mapping;
In with strict backlog and resource guardianship, so that teams determine the development pace from their own availability and vacation planning.
And those premises no longer hold in the age of agentic AI. As agentic AI tools and technologies evolve quickly month by month, the tide of transformation in ways of working and governance will arrive, this time probably sooner than the arrival of Scrum did last time. SAFe is no longer safe.
The new keywords for governance are: precision and clarity in requirement documentation, guardrails for AI agents, token budget planning, and validation and acceptance criteria.
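To make "token budget planning" a little more concrete, here is a minimal sketch of what a budget guardrail for an agent task could look like. All names (`TokenBudget`, `record`, the 50k figure) are hypothetical illustrations, not a real tool or API:

```python
from dataclasses import dataclass

@dataclass
class TokenBudget:
    """Tracks token spend for one agent task against a planned budget (illustrative)."""
    limit: int      # planned token budget for the task
    spent: int = 0  # tokens consumed so far

    def record(self, tokens: int) -> None:
        """Record the token usage reported by one agent call."""
        self.spent += tokens

    @property
    def remaining(self) -> int:
        return max(self.limit - self.spent, 0)

    def exceeded(self) -> bool:
        """Guardrail check: should the orchestrator halt this task?"""
        return self.spent > self.limit

# Example: a refactoring task budgeted at 50k tokens
budget = TokenBudget(limit=50_000)
budget.record(18_000)  # first agent iteration
budget.record(24_000)  # second iteration
print(budget.remaining)   # 8000
print(budget.exceeded())  # False; one more large iteration would trip the guardrail
```

The point is not the code itself but the governance shift it represents: the scarce resource being planned is no longer developer hours but compute spend per outcome.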
Below are some summaries Claude provided when I confronted it with this question, and I think they are pretty sensible.
The emerging consensus from practitioners is not "throw SAFe away" but rather a significant reorientation:
• Estimation is shifting from effort-based (story points) toward outcome-based metrics and AI-assisted forecasting
• PI Planning is evolving toward intent-setting and dependency mapping, with less focus on capacity allocation
• The human premium moves decisively toward problem definition, outcome validation, stakeholder alignment, and ethical governance of AI outputs
• New hybrid team patterns are forming where humans orchestrate fleets of agents rather than write every line themselves
• Governance and quality gates (the Definition of Done, architecture guardrails, security reviews) become more important, not less, because agents produce volume that humans must still be accountable for
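The last point suggests that quality gates become machine-checkable rather than ceremonial. As a rough illustration, and with every gate name and threshold below being a hypothetical stand-in rather than an established standard, a Definition of Done for an agent-produced change could be expressed as an executable checklist:

```python
# Hypothetical quality gates: each entry is an illustrative stand-in for a
# real Definition-of-Done item (tests, coverage, security scan, human review).
GATES = {
    "tests_pass":          lambda r: r["failed_tests"] == 0,
    "coverage_ok":         lambda r: r["coverage"] >= 0.80,
    "security_scan_clean": lambda r: r["critical_vulns"] == 0,
    "human_review_done":   lambda r: r["reviewer"] is not None,
}

def definition_of_done(results: dict) -> list[str]:
    """Return the gates an AI-generated change still fails (empty means done)."""
    return [name for name, check in GATES.items() if not check(results)]

# An agent-produced change that passed all automated checks but skipped human review
results = {"failed_tests": 0, "coverage": 0.91,
           "critical_vulns": 0, "reviewer": None}
print(definition_of_done(results))  # ['human_review_done']
```

When agents can produce changes faster than humans can read them, a gate like this is what keeps the human accountable for the volume without having to hand-inspect every line.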