Estonia Is Making AI Feasibility Assessment Mandatory in Its Legislative Process
Estonia has proposed an amendment to its Good Legislative Practice and Legislative Drafting Rules (HÕNTE), the framework governing how the government prepares laws and regulations. Among the changes is a new obligation requiring policymakers to assess, every time a new legislative initiative is prepared, whether automation or AI could be part of the solution.
The amendment is currently in public consultation, with the changes scheduled to take effect on 1 May 2026.
This is not a strategy document or an innovation white paper. It is an amendment to the core procedural rules of how government drafts legislation. That distinction matters.
The proposal
The requirement lands at the earliest stage of Estonia's legislative process: the pre-legislative impact assessment (VTK), which a ministry prepares to outline the problem and possible solutions before any draft law is written. At that point, officials will be required to consider whether the problem could be addressed through automation or AI-based solutions. The explanatory memorandum describes this as a "control mechanism" ensuring all legislative solutions are fit for a digital society.
The requirement is not that every solution must be technological. The obligation is to analyse whether it could be.
Additionally, when a new law requires IT system development, the explanatory memorandum must now explicitly address whether AI capabilities can be used. And "impact on digital society" becomes an entirely new impact assessment category, alongside existing domains such as social, economic and environmental impact.
Why this approach matters
Many governments have published AI strategies and ethics guidelines. Estonia is doing something different: writing the AI question directly into the procedural rules that civil servants must follow when drafting laws. This is the machinery of government, not a policy declaration.
The VTK is the very first formal step in Estonia's legislative process. By placing the AI assessment here, the requirement catches every initiative at inception, before any draft law exists. And the obligation is to assess feasibility, not to adopt technology. The answer can be "no."
Think about what this means in practice for financial services. Every new piece of insurance, banking or payments legislation would need to be assessed for AI and automation potential. Every supervisory framework too. Could regulatory reporting use AI? Should a new consumer protection rule be designed with RegTech implementation in mind from day one? On the supervisory side: could SupTech tools help supervisors monitor compliance more effectively than manual processes? These are questions that today get asked late, if at all. Under this framework, they would be asked at the start.
The EU AI Act regulates AI systems and their providers. DORA addresses digital operational resilience in financial services. But neither addresses a more basic question: how do governments systematically decide where AI should and should not be used in public administration and policy implementation? Rather than creating yet another separate process, Estonia chose to build the AI question into the legislative quality framework that already exists.
What we do not know yet
The explanatory memorandum acknowledges this is new territory. Detailed requirements for how to conduct the AI feasibility analysis are deliberately not prescribed, because the field is evolving rapidly. There is an honest acknowledgment that it is difficult to predict how the obligation will play out in practice, and that it will increase workload for civil servants. Guidance materials and training are planned, and a built-in transition period suggests pragmatism over perfection.
What I find interesting is the method. Not a grand AI strategy, but a quiet change to the procedural rules that civil servants open every Monday morning when they sit down to draft the next piece of legislation. That is where systemic change actually happens. And it is exactly the kind of thing the European Commission and other governments could do tomorrow. If you want AI to be considered seriously in public administration, stop writing strategy documents and start changing the rules that people actually follow.