<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:media="http://search.yahoo.com/mrss/"><channel><title>Blog - SparkFabrik Website</title><link>https://www.sparkfabrik.com/en/blog/</link><description>Signals from the field: reflections on Cloud Native, AI and modern platforms, born from real projects and daily production.</description><generator>Hugo -- gohugo.io</generator><language>en</language><lastBuildDate>Fri, 26 Jan 2024 17:00:00 +0000</lastBuildDate><atom:link href="https://www.sparkfabrik.com/en/blog/feed.xml" rel="self" type="application/rss+xml"/><image><url>https://www.sparkfabrik.com/images/logo/sparkfabrik-logo.png</url><title>Blog - SparkFabrik Website</title><link>https://www.sparkfabrik.com/en/blog/</link></image><item><title>Why CTOs choose Drupal: AI, sovereignty, and platform engineering</title><link>https://www.sparkfabrik.com/en/blog/why-ctos-choose-drupal-enterprise/</link><pubDate>Fri, 10 Apr 2026 00:00:00 +0000</pubDate><author>SparkFabrik Team</author><guid>https://www.sparkfabrik.com/en/blog/why-ctos-choose-drupal-enterprise/</guid><description>Modern enterprise architectures require solid foundations to manage critical data and complex integrations. Drupal is evolving beyond its role as a simple CMS to become a pillar of platform engineering. Discover how to integrate artificial intelligence while ensuring full digital sovereignty.</description><content:encoded><![CDATA[<div class="tldr">
  <span class="tldr__label">TL;DR</span>
  <div class="tldr__body">
    Drupal has evolved from a simple CMS into an enterprise application framework, offering CTOs a robust solution for managing complex workflows and advanced API integrations. By adopting platform engineering practices and cloud-native architectures, companies can ensure digital sovereignty and integrate artificial intelligence securely. This strategic approach transforms the platform into a durable asset, overcoming the limitations of closed SaaS systems and protecting corporate information assets.
  </div>
</div>
<p>The web development market has undergone an irreversible fracture. On one hand, the proliferation of AI-based tools has reduced the creation of basic websites to a low-cost commodity. On the other, complex enterprise architectures require increasingly solid engineering foundations, and it is in this scenario that Drupal&rsquo;s strategic repositioning fits. Choosing a core technology is no longer a tactical marketing decision, but a strategic imperative to support deep integrations and critical data flows.</p>
<p>The market for simple websites—those we define as showcase sites or brochureware—has been completely commoditized. Between closed visual site builders and the ability of GenAI to produce frontend code, the landscape has changed decisively. The value of building a simple site with a robust framework has collapsed.</p>
<p><strong>As the low-end market saturates, a gap is opening at the high end.</strong> This is where the need to manage complex digital infrastructures, structured data, and business-critical processes arises. For years, the market treated content management systems as generic tools, but today that vision is obsolete.</p>
<p>This bifurcation now permeates every strategic discussion in the industry. We have seen it firsthand at all the major conferences of recent months.</p>
<p>It emerged as a central theme at <strong>Drupal Pivot EU</strong>, the exclusive unconference held in Ghent in January 2026: an invitation-only event that brought together leading European technology leaders to redefine the role of open source systems in enterprise architectures. We at SparkFabrik actively participated in the working groups with our CTO <strong>Paolo Mainardi</strong>, helping to chart the course for the coming years.</p>
<p>We saw the same common thread at <a href="/en/blog/drupal4goveu-sovranita-digitale-e-open-source-per-la-pa/"><strong>Drupal4GovEU</strong></a>, the event focused on open source for the European Public Administration. And it obviously appeared in several talks presented at major events, such as the last <a href="/en/blog/drupalcon-vienna-2025/"><strong>DrupalCon in Vienna</strong></a> and the very recent <a href="/en/blog/sviluppo-drupal-e-ai-il-nuovo-approccio-agentic-first/"><strong>DrupalCon Chicago</strong></a>.</p>
<p>The conclusion reached is unequivocal: continuing to treat content management platforms as simple web page providers is a calculation error that generates technical debt.</p>
<p>Indeed, Drupal is repositioning itself not as a simple CMS, but as the framework of choice for ambitious <a href="/en/landing/guida-drupal/"><strong>Digital Experience Platforms (DXP)</strong></a>. We are no longer talking about managing web pages, but about governing APIs, digital identities, and complex workflows in a secure environment.</p>
<p>Forward-thinking companies are repositioning their investments. They are shifting budget from ephemeral frontends toward backend infrastructures that are governable, secure, and designed to last.</p>
<p>For a business decision-maker, understanding these dynamics is fundamental to correctly allocating IT budget. Continuing to treat Drupal exclusively as a content manager means underestimating a strategic asset essential for digital resilience, especially when combined with cloud-native technologies and AI.</p>
<div class="hs-cta-embed hs-cta-simple-placeholder hs-cta-embed-210799301439"
  style="max-width:100%; max-height:100%;" data-hubspot-wrapper-cta-id="210799301439">
  <a href="https://cta-service-cms2.hubspot.com/web-interactives/public/v1/track/redirect?encryptedPayload=AVxigLKPeBdhP5H92EG3KQHdBroqU9y6CN20ZEm021IdI1Klqq%2FZp9wKEKM1Idu%2FXn9MUzzUlc6u1vfoPXyBGq6SpFKU%2FHZ1g9t0x%2B4ix9%2BCDns7T0zaP2RLTBbEOht83liFG9DIqx%2Byi9DtCHRKbYM2JgrfqJUg%2BYiMdoj64T9hvEtZZLb3lzSvCLxYUgM3FY%2BHWE9N%2BfyreXk%3D&webInteractiveContentId=210799301439&portalId=6897318" target="_blank" rel="noopener" crossorigin="anonymous">
    <img alt="Drupal: From CMS to DXP &nbsp; Transform your CMS from a simple repository into a competitive advantage: a comprehensive Digital Experience Platform. &nbsp;" loading="lazy" src="https://no-cache.hubspot.com/cta/default/6897318/interactive-210799301439.png" style="height: 100%; width: 100%; object-fit: fill"
      onerror="this.style.display='none'" />
  </a>
</div>
<h2 id="drupal-as-a-business-enabler-from-cms-to-business-application-framework">Drupal as a business enabler: from CMS to Business Application Framework</h2>
<p>Drupal has evolved from a simple open source CMS into a true application framework for the enterprise market. It functions as a business enabler by providing the architectural infrastructure necessary to manage complex workflows, advanced API integrations, and structured data, overcoming the functional limits of simple showcase sites.</p>
<p>During the working groups in Ghent, the discussion highlighted the need for an ontological redefinition. <strong>Market perception must align with the platform&rsquo;s actual technical capabilities.</strong></p>
<p>Traditionally, a CMS is viewed as a repository for text and images, but this vision is limiting. We no longer sell a packaged product. With Drupal, we provide a relational engine capable of orchestrating an entire company&rsquo;s digital experience, thanks to an extremely flexible entity architecture.</p>
<p>This shift in perspective transforms the software from a cost center into a true <strong>Business Application Framework</strong>, capable of modeling unique business logic without forcing internal processes to adapt to pre-set software.</p>
<p>What does this mean in practical terms for the business? It means transforming the platform into the backbone for:</p>
<ul>
<li><strong><a href="/en/servizi/by-industry/enterprise-intranet/">Complex enterprise intranets</a></strong>: Granular permission management, multi-level approval workflows, and integration with corporate Identity Providers. (<strong>Discover the <a href="/en/case-studies/cnp-vita/">CNP Vita Intranet case study</a></strong>)</li>
<li><strong>Service portals</strong>: Where data security and accessibility are mandatory non-functional requirements.</li>
<li><strong>Headless Content Hub</strong>: Drupal acts as a single source of truth, decoupling the backend from the frontend and distributing content via REST or GraphQL APIs to various consumer applications.</li>
<li><strong>Data Management platforms</strong>: Native modeling of complex data relationships without the need to write SQL queries or manage manual schema migrations.</li>
<li><strong>LMS systems</strong> (Learning Management System): Proprietary e-learning platforms where progress tracking, skills certification, and protection of educational materials are non-negotiable requirements.</li>
</ul>
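<p>To make the &ldquo;Headless Content Hub&rdquo; point concrete: Drupal core ships a JSON:API module that exposes entities at predictable paths. The Python sketch below shows how a consumer application might build a collection URL and read titles out of a response; the base URL and the sample payload are hypothetical, not taken from a real project.</p>

```python
# Sketch: reading content from a headless Drupal backend via JSON:API.
# Drupal core's JSON:API module exposes entities at /jsonapi/{entity}/{bundle};
# the base URL, bundle, and sample payload below are illustrative.

def collection_url(base: str, entity: str, bundle: str, published_only: bool = True) -> str:
    """Build a JSON:API collection URL, optionally filtering to published content."""
    url = f"{base}/jsonapi/{entity}/{bundle}"
    if published_only:
        url += "?filter[status]=1"
    return url

def extract_titles(payload: dict) -> list[str]:
    """Pull the title attribute out of each resource object in a JSON:API response."""
    return [item["attributes"]["title"] for item in payload.get("data", [])]

if __name__ == "__main__":
    print(collection_url("https://cms.example.com", "node", "article"))
    sample = {"data": [{"type": "node--article",
                        "attributes": {"title": "Platform engineering 101"}}]}
    print(extract_titles(sample))
```

<p>Any consumer (a mobile app, a JavaScript frontend, another backend) can read the same endpoints, which is precisely what makes the backend a single source of truth.</p>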
<p>We see a concrete example in the manufacturing sector, where Drupal is adopted as middleware to aggregate data from ERP and CRM systems, exposing them in unified dashboards. This is a pure application framework use case, not that of a simple web page manager.</p>
<p>For IT decision-makers, understanding this evolution means entering a <a href="/en/blog/drupal-cms-la-nuova-era-del-content-management-per-il-business/">new era of content management for business</a>, where content management is just a subset of much broader capabilities. There is now an unbridgeable gap between quick solutions and engineered platforms.</p>
<p>The difference between the two approaches manifests in several critical areas:</p>
<ul>
<li><strong>Distinction between &ldquo;disposable code&rdquo; and &ldquo;durable infrastructure.&rdquo;</strong> AI, visual site builders, and modern frontend frameworks excel at rapid interface creation but introduce a high rate of obsolescence. Generated landing pages have a lifecycle measurable in months and do not survive corporate pivots.</li>
<li><strong>Long-term maintainability.</strong> The backend, business logic, and data model must ensure lasting stability. A Drupal-based infrastructure is designed for decade-long lifecycles, absorbing business evolutions.</li>
<li><strong>Isolated data vs. centralized hubs.</strong> Closed systems fragment information into inaccessible silos. A framework-first approach natively exposes every entity via REST or GraphQL APIs, acting as a single source of truth for <a href="/en/blog/drupal-headless/">omnichannel ecosystems</a>.</li>
<li><strong>Standard logic vs. custom logic.</strong> SaaS products impose their own operational workflows. An open architecture allows for mapping granular permissions and approval flows exactly to existing corporate hierarchies.</li>
</ul>
<p>Choosing this path means investing in a technology that scales with business complexity, ensuring solid foundations for the future. (To learn more: <a href="/en/blog/guides/vantaggi-di-drupal/">Complete Guide - Why choose Drupal for complex corporate sites</a>)</p>
<h2 id="is-drupal-the-ideal-solution-for-corporate-digital-sovereignty">Is Drupal the ideal solution for corporate digital sovereignty?</h2>
<p>The advantages of Drupal for digital sovereignty lie in total control over architecture and data. Being open source, it eliminates the vendor lock-in typical of SaaS platforms, allowing CTOs to implement secure artificial intelligence models and maintain full infrastructure compliance without ceding control to third parties.</p>
<p>In the European enterprise market, the concept of sovereignty is often reduced to a mere matter of regulatory compliance and geographic server localization to respect GDPR. This vision is limiting. <strong>True technological sovereignty</strong> is only achieved when an organization possesses the unconditional ability to inspect, modify, and migrate its software stack without asking third parties for permission.</p>
<p>When critical infrastructure rests on closed cloud services, the company cedes control of its technological roadmap to the decisions of an external provider. <strong>Vendor lock-in</strong> represents the most underestimated operational risk in IT budgets today. Arbitrary changes to pricing models, sudden deprecation of fundamental APIs, or corporate acquisitions can paralyze a company&rsquo;s digital operations.</p>
<p>Adopting open standards neutralizes this risk at the root. It returns bargaining power to management and the freedom to choose where and how to run their workloads, whether on hyperscaler cloud providers or private infrastructure.</p>
<p>This independence becomes crucial in the age of artificial intelligence. Companies possess invaluable information assets that cannot be fed into public language models. <strong>Sovereign AI</strong> requires platforms capable of orchestrating open source models or private instances within the corporate perimeter, allowing the use of artificial intelligence without exposing sensitive data to external networks.</p>
<p>The Drupal community shares this vision, embracing a vendor-agnostic approach that is easily adaptable. By using an open framework, it is possible to <a href="/en/blog/drupal-cms-sicurezza-compliance-settori-regolamentati/">ensure security and compliance for corporate data</a> by implementing Retrieval-Augmented Generation systems that query internal databases without ever exposing intellectual property on uncontrolled networks.</p>
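<p>As a minimal illustration of the RAG pattern just described, the following Python sketch ranks internal documents against a query and applies a per-user permission filter before anything reaches a model. The bag-of-words &ldquo;embeddings&rdquo; are a deliberate simplification: a production system would use a real embedding model and a vector database.</p>

```python
# Minimal RAG retrieval sketch: rank internal documents against a query and
# enforce per-user access permissions before anything is sent to an LLM.
# Bag-of-words vectors stand in for real embeddings, purely for illustration.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[dict], user_roles: set[str], k: int = 2) -> list[str]:
    """Return the top-k documents this user is allowed to see, by similarity."""
    allowed = [d for d in docs if d["roles"] & user_roles]   # permission filter first
    ranked = sorted(allowed, key=lambda d: cosine(embed(query), embed(d["text"])),
                    reverse=True)
    return [d["text"] for d in ranked[:k]]

if __name__ == "__main__":
    docs = [
        {"text": "vacation policy for employees", "roles": {"staff"}},
        {"text": "board meeting minutes", "roles": {"executive"}},
        {"text": "employee vacation request form", "roles": {"staff"}},
    ]
    print(retrieve("vacation policy", docs, user_roles={"staff"}))
```

<p>The key design choice is that access control is applied <em>before</em> retrieval, so a document the user cannot read can never leak into the model&rsquo;s context window.</p>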
<h2 id="how-does-platform-engineering-transform-drupal-into-a-durable-infrastructure">How does platform engineering transform Drupal into a durable infrastructure?</h2>
<p>Platform engineering transforms Drupal into a durable infrastructure by applying cloud-native practices that ensure maximum reliability and scalability. By standardizing operations through an internal platform, development teams reduce cognitive load, cut technical debt, and significantly accelerate the time-to-market for new features.</p>
<p>The strategic transformation of Drupal into a business-critical application would not be possible without an evolution of the underlying infrastructure. Abandoning old monolithic hosting paradigms is the prerequisite for operating at an enterprise scale, in favor of a Cloud Native approach and Platform Engineering practices.</p>
<blockquote>
<p>SparkFabrik&rsquo;s vision is based on a rigorous engineering principle: the value of software is inseparable from the quality of the infrastructure that hosts it.</p>
</blockquote>
<p>This is not a stylistic choice, but a reliability requirement. When an application becomes central to business processes, downtime or performance bottlenecks are intolerable. Investing in the underlying platform is the only proven method to mitigate obsolescence and ensure operational continuity.</p>
<p>To <strong>transform a CMS into a true cloud-native application</strong>, it is essential to <a href="/en/blog/platform-engineering-perch%C3%A9-adottarlo/">create an internal developer platform to standardize operations</a>. This methodological approach shifts the focus from manual server management to process automation, offering tangible business benefits:</p>
<ol>
<li><strong>Immutability and reliability through containerization.</strong> The use of containers and modern orchestrators (e.g., Docker, Kubernetes) allows infrastructure to be managed as code. Production environments are not &ldquo;updated&rdquo; manually, but replaced entirely with every deploy. This eliminates &ldquo;configuration drift&rdquo; and ensures that the development environment is identical to the production one, reducing unexpected bugs.</li>
<li><strong>Horizontal scalability, resilience, and self-healing.</strong> Business applications have variable workloads. An architecture based on orchestrators like Kubernetes allows Drupal to scale horizontally (adding pods/nodes) in response to real traffic. This ensures high availability and self-healing, automatically restoring failed processes without human intervention and guaranteeing uptime.</li>
<li><strong>Operational standardization and reduction of cognitive load with the Internal Developer Platform (IDP).</strong> By providing developers with standard paths and preconfigured resources, the need to manage complex infrastructure configurations is eliminated. Teams can thus focus exclusively on writing business logic. This accelerates time-to-market while maintaining centralized control over infrastructure governance.</li>
<li><strong>Security integrated into the supply chain.</strong> By shifting security checks to the early stages of development, automated CI/CD pipelines block vulnerabilities before they reach production environments. This proactive approach is essential to meet enterprise security standards, from the very first line of code.</li>
</ol>
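<p>Points 1 and 2 above can be sketched in a single Kubernetes manifest: replicas provide horizontal scale, the versioned container image is replaced wholesale on each deploy instead of being patched in place, and the liveness probe enables self-healing. The names, image tag, and probe path below are illustrative, not a production configuration.</p>

```yaml
# Illustrative (hypothetical) Kubernetes Deployment for a Drupal workload.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drupal
spec:
  replicas: 3                      # horizontal scalability
  selector:
    matchLabels:
      app: drupal
  template:
    metadata:
      labels:
        app: drupal
    spec:
      containers:
        - name: drupal
          image: registry.example.com/drupal:1.4.2   # immutable, versioned artifact
          ports:
            - containerPort: 80
          livenessProbe:           # self-healing: failed containers are restarted
            httpGet:
              path: /health
              port: 80
            initialDelaySeconds: 30
            periodSeconds: 10
```

<p>Because the manifest lives in version control, every environment is reproduced from the same declaration, which is what eliminates the &ldquo;configuration drift&rdquo; mentioned above.</p>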
<p>For IT decision-makers, the investment is not just in application software: the application and the platform must be designed in symbiosis. Without modern infrastructure, even the best Drupal code risks becoming unmanageable technical debt.</p>
<h2 id="the-impact-of-ai-and-agents-in-drupal-collaboration-or-replacement">The impact of AI and agents in Drupal: collaboration or replacement?</h2>
<p>Artificial intelligence does not replace software architecture, but enhances it by accelerating its tactical execution. Drupal provides the structure, validation rules, and data truth upon which generative models can operate, ensuring long-term governance essential for protecting corporate information assets.</p>
<p>The most common misconception among IT decision-makers is viewing AI as an alternative to traditional backend systems. On the contrary, <strong>advanced language models need structured platforms</strong> to avoid generating hallucinations or uncontrollable output. The integration between these two technologies pushes the framework up the corporate value chain, transforming it into the conductor of automated interactions.</p>
<p>The real leap in quality lies in <strong>Agentic AI</strong>: the ability to orchestrate autonomous agents that operate within a governed perimeter. For development teams, this means moving from simple prompt writing to designing complex system instructions that allow agents to interact securely with platform APIs (agentic coding).</p>
<div class="hs-cta-embed hs-cta-simple-placeholder hs-cta-embed-201809539912"
  style="max-width:100%; max-height:100%;" data-hubspot-wrapper-cta-id="201809539912">
  <a href="https://cta-service-cms2.hubspot.com/web-interactives/public/v1/track/redirect?encryptedPayload=AVxigLIGqoaThHLkjROAOhZeIFB08kBo5PvlQk%2FvPNJswBdt3qae1%2Ft%2BLOIX80TJoq5wO8%2ByKbpX%2FithCRZ4lTdrcahw2Utes2fsHqKFTN96RfPxoNPxuAIMbm6%2F99dTKWVHS1B%2FO98t84%2BlT2wSKMfZ25pRN0xd22x%2FZDRL6V%2FX7UVXYwlw6laTavImXTy9mGXX9h6z0XIAnSpecZybHS4xs7MTrjQjd0%2BX2E68tney9%2BBvHcTryIM%3D&webInteractiveContentId=201809539912&portalId=6897318" target="_blank" rel="noopener" crossorigin="anonymous">
    <img alt="The AI agents transforming&nbsp;business processes &nbsp; New intelligent, scalable, and secure systems you can apply in your company today. &nbsp;" loading="lazy" src="https://no-cache.hubspot.com/cta/default/6897318/interactive-201809539912.png" style="height: 100%; width: 100%; object-fit: fill"
      onerror="this.style.display='none'" />
  </a>
</div>
<p>CTOs must <a href="/en/blog/sviluppo-drupal-e-ai-il-nuovo-approccio-agentic-first/">adopt a new agentic-first approach to development</a>, ensuring that the infrastructure provides the validation rules, semantic context, and operational limits within which artificial intelligence can move without corrupting corporate data.</p>
<p>AI enhances Drupal and, in turn, Drupal provides the perfect structured and orchestrated context in which AI can thrive. This &ldquo;<strong>technological synergy</strong>&rdquo; (widely debated at all strategic events, from the Drupal Pivot Unconference in Ghent to the major DrupalCons) manifests through practical applications that redefine team efficiency:</p>
<ul>
<li><strong>AI content governance:</strong> Models generate massive volumes of information, content, and metadata, and Drupal acts as a control layer. It is the framework that orchestrates AI agents and imposes approval workflows, ensuring that every output respects brand guidelines and legal requirements before publication.</li>
<li><strong>RAG (Retrieval-Augmented Generation):</strong> In corporate contexts, AI must provide answers based on secure internal data. The platform acts as a central hub to orchestrate corporate data toward vector databases, allowing AI agents to answer user queries based exclusively on certified corporate documentation, accessing only pertinent information and strictly respecting individual access permissions.</li>
<li><strong>Development acceleration:</strong> Generating pages with GenAI, visual building, and automating repetitive tasks frees up valuable engineering resources. This allows technical teams to focus on architecture, complex integrations, and security.</li>
</ul>
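<p>The governance point above implies one simple but strict rule: AI-generated drafts never go straight to publication. The Python sketch below shows such a control layer with hypothetical brand and legal rules; it stands in for, and is not, an actual Drupal moderation workflow.</p>

```python
# Sketch of an editorial control layer for AI-generated content: every draft
# starts unpublished and must pass automated gates before a human can approve it.
# The banned terms and length limit are hypothetical examples of brand/legal rules.
BANNED_TERMS = {"guaranteed returns", "miracle"}
MAX_TITLE_LEN = 70

def review(draft: dict) -> dict:
    """Run automated gates; AI output never moves directly to 'published'."""
    issues = [t for t in BANNED_TERMS if t in draft["body"].lower()]
    if len(draft["title"]) > MAX_TITLE_LEN:
        issues.append("title too long")
    # Clean drafts still require a human: they only advance to review, not publish.
    draft["state"] = "needs_review" if not issues else "rejected"
    draft["issues"] = issues
    return draft

if __name__ == "__main__":
    ok = review({"title": "Quarterly update", "body": "Results improved."})
    bad = review({"title": "Invest now", "body": "Guaranteed returns for everyone!"})
    print(ok["state"], bad["state"], bad["issues"])
```
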
<p>To fully understand how to implement these hybrid architectures in your business processes, we recommend consulting our <a href="/en/blog/drupal-ai-panoramica-novita-visione-di-sparkfabrik/">overview of Drupal AI and the SparkFabrik vision</a>, where we analyze the most effective integration strategies.</p>
<p>The future sees technical teams focusing on data architecture and security. In our experience at SparkFabrik, adopting AI reduces development time for repetitive tasks by 30%, but only if the underlying framework imposes strict rules that prevent generated code from compromising system stability in production.</p>
<h2 id="why-is-the-pivot-necessary-now">Why is the &ldquo;Pivot&rdquo; necessary now?</h2>
<p>Drupal&rsquo;s strategic repositioning is necessary today because the market has split sharply. While basic online presence is now commoditized, the demand for deep integrations is growing. CTOs need flexible platforms to solve the build vs. buy dilemma without ceding architectural control.</p>
<p>IT budget optimization requires ruthless choices, even more so today, in a market where code and online presence have become commodities. Financing custom development for low-impact projects is a waste of engineering resources, as automated tools can cover those needs at a fraction of the cost. Think, for example, of the myriad SaaS products, &ldquo;no-code&rdquo; solutions, and AI builders for creating landing pages. In these cases, a technology like Drupal is clearly overkill.</p>
<p>However, applying the same cost-saving logic to core systems generates technical debt that is paid for by the inability to scale. Enterprise companies no longer ask for digital showcases, but transactional ecosystems. Needs have shifted toward deep integration.</p>
<p>This level of complexity manifests in <strong>advanced use cases that exceed the capabilities of traditional CMS</strong>s. Structured, complex business cases like customer portals, intranets, e-learning platforms, and data repositories require solid foundations.</p>
<p>In this scenario, technology leaders constantly face the <strong>Build vs. Buy</strong> dilemma. Buying a finished product guarantees initial speed, but SaaS black boxes show their limits as soon as business processes deviate from the vendor&rsquo;s intended standard. Serious structural rigidities arise, such as non-modifiable data schemas, API limits, dependence on development roadmaps, and vendor lock-in.</p>
<p>Building everything from scratch, on the other hand, entails unsustainable maintenance costs. A mature application framework offers the optimal middle ground: solid foundations already written and tested, combined with absolute freedom to customize business logic.</p>
<p>The theme of digital sovereignty should be read from this engineering perspective even before a regulatory one. It is not just a matter of data residency, but of <strong>control over architecture</strong>. European companies need platforms where database access, business logic, and integrations are not bound by black boxes.</p>
<p><strong>Drupal positions itself as an open source framework</strong> that guarantees full access to the stack, allowing data to be modeled exactly as required by the business. It offers the flexibility of custom code and the robustness of an enterprise framework, leaving the simple site market to automated tools.</p>
<h3 id="what-are-the-next-steps-for-it-decision-makers">What are the next steps for IT decision-makers?</h3>
<p>Owning your technology in an era of uncertainty represents the ultimate competitive advantage for enterprise companies. The transition to robust solutions requires an in-depth audit of current infrastructure and the adoption of open source platforms governed by rigorous engineering practices, capable of supporting business growth in the long term.</p>
<p>Here is a strategic checklist:</p>
<ol>
<li><strong>Technical debt audit</strong>: Analyze your digital properties. Which are simple showcase sites and which are critical applications? Identify where you need control over data and software longevity. Those are the candidates for the new enterprise Drupal approach.</li>
<li><strong>Technological independence assessment</strong>: Do your current systems allow you to extract and migrate data without friction? If the answer is no, you are accumulating operational risk. Adopting open standards and open source platforms is the most effective technical mitigation against vendor lock-in.</li>
<li><strong>Platform-First roadmap</strong>: Stop funding siloed projects. Invest in the underlying platform. A cloud-native base has an initial setup cost, but it reduces the marginal cost of every subsequent application and ensures uniform security standards.</li>
<li><strong>Partner selection</strong>: Modern challenges require skills that go beyond traditional CMS development. You need a partner with expertise in distributed architectures, application security, and DevOps practices. Look for proven expertise in SRE and software lifecycle management.</li>
</ol>
<p>The dividing line in the IT market is now clearly drawn. On one side, we find companies that continue to squander budget on application silos and closed platforms, accumulating operational risks. On the other, industry leaders who invest in open, scalable ecosystems ready for the secure integration of artificial intelligence.</p>
<p>Drupal Pivot and other strategic events have confirmed that technological maturity is not measured by the quantity of features, but by the ability to govern complexity. Drupal has chosen to position itself as the tool for those who build durable digital assets, offering total control over their technology.</p>
<p>We invite technology leaders to <strong>evaluate their current infrastructure with a critical eye</strong>. Is it ready to support business-critical processes for the next decade? Or is it time to &ldquo;pivot&rdquo; toward more robust solutions?</p>
<p>If your architecture does not guarantee data sovereignty and operational agility, we invite you to <a href="/en/servizi/drupal/">discover our Drupal development and consulting services</a> and <a href="/en/contatti/">contact our experts</a> to design a future-proof cloud-native platform together.</p>
<div class="hs-cta-embed hs-cta-simple-placeholder hs-cta-embed-192504234572"
  style="max-width:100%; max-height:100%;" data-hubspot-wrapper-cta-id="192504234572">
  <a href="https://cta-service-cms2.hubspot.com/web-interactives/public/v1/track/redirect?encryptedPayload=AVxigLIOqIahxa9CWGP8KCUetLEMH8ErDPgkx5SaCOierE6wBkjprfaPJZGhcXFxv4Ja5X92A6ipXzlENWU6kRLHxZeq1rdQLhG0oEge%2FT2gI89j6irc1mY4tmMWSkfVWhQJzDqH0r6uuzJi1vBmCgplZQDrepnfBRrENIscncskXYJNHuomAPf9F5ODC7lArf2QJaJB&webInteractiveContentId=192504234572&portalId=6897318" target="_blank" rel="noopener" crossorigin="anonymous">
    <img alt="Drupal Development and Consulting. Tell us about your Project" loading="lazy" src="https://no-cache.hubspot.com/cta/default/6897318/interactive-192504234572.png" style="height: 100%; width: 100%; object-fit: fill"
      onerror="this.style.display='none'" />
  </a>
</div>
]]></content:encoded><media:content url="https://www.sparkfabrik.com/images/blog/perche-i-cto-scelgono-drupal-ai-sovranita-e-platform-engineering/featured-en.webp" medium="image"/><enclosure url="https://www.sparkfabrik.com/images/blog/perche-i-cto-scelgono-drupal-ai-sovranita-e-platform-engineering/featured-en.webp" type="image/webp"/><category>Drupal</category><category>Cloud Native</category><category>Digital Transformation</category><category>AI</category></item><item><title>Drupal AI 1.3: security, governance, maturity and new tools</title><link>https://www.sparkfabrik.com/en/blog/drupal-ai-1-3-security-governance/</link><pubDate>Tue, 07 Apr 2026 00:00:00 +0000</pubDate><author>SparkFabrik Team</author><guid>https://www.sparkfabrik.com/en/blog/drupal-ai-1-3-security-governance/</guid><description>The adoption of LLMs in CMSs requires solid architectures to avoid privacy risks and hallucinations. The new release of the Drupal AI module successfully tackles these challenges. Discover the governance features that transform experiments into production-ready platforms.</description><content:encoded><![CDATA[<div class="tldr">
  <span class="tldr__label">TL;DR</span>
  <div class="tldr__body">
    Release 1.3 of the Drupal AI module transforms the CMS into a secure enterprise platform thanks to advanced governance features. The implementation of bidirectional Guardrails prevents sensitive data leaks while keeping latency low. Furthermore, the integration of semantic Re-ranking enhances the relevance of results and reduces false positives in RAG architectures, while native support for OpenTelemetry enables real-time monitoring of the costs and usage of language models.
  </div>
</div>
<p>Today, Drupal represents not only the best enterprise-grade CMS solution, but also the one that integrates artificial intelligence in the most mature way. The feature-rich <strong>1.3.0 release of the Drupal AI module</strong> marks an important transition from an experimental integration to a production-ready platform. The adoption of LLMs in corporate CMSs has so far been held back by tangible risks related to data privacy, model hallucinations, and a lack of observability over background operations. This version addresses these structural weaknesses, introducing advanced governance features, standardized telemetry flows, and orchestration tools that transform experiments into solid architectures.</p>
<p>The SparkFabrik team played an <strong>active role</strong> in the development of the core module, leading the design of the security and advanced search systems. The Cloud Native engineering approach made it possible to apply security-by-design and DevSecOps principles directly to artificial intelligence, ensuring that every interaction is traceable and secure.</p>
<p>To understand the extent of this ecosystem, it is useful to consult our <a href="/en/blog/drupal-ai-panoramica-novita-visione-di-sparkfabrik/">complete overview of AI features in Drupal</a>. As illustrated in the in-depth video by Marcus Johansson, Tech Lead of the Drupal AI Initiative, the update provides architectural foundations for complex operations. We detailed the journey of these implementations in our dedicated article on <a href="/en/blog/drupal-ai-contributions-2025/">how we shaped the future of Drupal AI in 2025</a>, demonstrating how the integration of language models requires cross-functional skills spanning CMS development and distributed infrastructures.</p>
<h2 id="why-ai-guardrails-transform-drupal-into-a-secure-enterprise-platform">Why AI Guardrails transform Drupal into a secure enterprise platform</h2>
<p>AI Guardrails transform Drupal into a secure platform by acting as <strong>bidirectional filters that intercept requests and validate the responses of Large Language Models</strong>. This governance system blocks the leak of sensitive data and prevents hallucinations, ensuring the necessary compliance for enterprise applications in production.</p>
<p>Implementing these policies strengthens compliance across both outbound and inbound interactions with LLMs. To delve deeper into the design of these components, you can explore the architectural strategies for mitigating the risks of language models in the <a href="/en/blog/guardrails-ai-in-drupal-agenti-e-gestione-avanzata/">detailed article on Guardrails in Drupal AI</a>.</p>
<p>During the <a href="https://www.youtube.com/playlist?list=PLSD9hiOyso87bv6Ay3g1ns0mkSo3cgSBH">Drupal X Business</a> event, <strong>Luca Lusso</strong> presented in detail how these protection mechanisms work. In his talk, he highlighted how the implementation of strict rules shifts artificial intelligence from an experimental paradigm to a governable tool. The Cloud Native approach adopted in the design ensures that these controls operate efficiently, keeping latency very low to avoid negatively impacting editorial processes.</p>
<h3 id="interception-and-masking-of-sensitive-data">Interception and masking of sensitive data</h3>
<p>Guardrails operate at a deep level of the architecture, preventing the leak of sensitive data before the HTTP request leaves the corporate servers. This includes blocking PII (Personally Identifiable Information), financial data, or intellectual property. The system can entirely block the request or mask the input using regular expressions or external validation services like AWS Bedrock.</p>
<p>A practical example illustrates the effectiveness of this approach. If an editor enters the code name of a classified product into a prompt, such as the internal project &ldquo;MDX 250&rdquo;, the configured Guardrail immediately intercepts the text. Or, much more simply, if a user sends their personal data or credit card number in the prompt, the system blocks or obscures them before sending them to public LLMs like those of OpenAI or Anthropic.</p>
<p>This preventive validation guarantees the security of the data supply chain, ensuring that the corporate infrastructure does not become a vehicle for the dispersion of trade secrets. The application of these filters happens in real time and provides immediate feedback to the user. By explaining exactly which security policy was violated, the system maintains a high level of awareness among editorial teams.</p>
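<p>As a rough sketch of the outbound filtering described above, a regex-based masking pass might look like the following. The pattern set and the <code>mask_prompt()</code> helper are hypothetical illustrations, not the Drupal AI module&rsquo;s actual API:</p>

```python
import re

# Hypothetical outbound guardrail: these patterns and this helper are
# illustrative only, not the Drupal AI module's API.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "classified_codename": re.compile(r"\bMDX\s?250\b", re.IGNORECASE),
}

def mask_prompt(prompt: str) -> tuple[str, list[str]]:
    """Mask sensitive spans before the prompt leaves the corporate servers.

    Returns the sanitized prompt plus the list of policies that fired,
    so the editor gets immediate feedback on which rule was violated.
    """
    violations = []
    for policy, pattern in PATTERNS.items():
        if pattern.search(prompt):
            violations.append(policy)
            prompt = pattern.sub(f"[REDACTED:{policy}]", prompt)
    return prompt, violations
```

<p>A stricter policy could raise an exception instead of masking, blocking the request entirely before any network call is made.</p>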
<h3 id="agnostic-architecture-and-bidirectional-validation">Agnostic architecture and bidirectional validation</h3>
<p>The Guardrails system is designed with a strictly agnostic architecture, meaning it is not tied to a single vendor or a specific language model. This independence allows organizations to define centralized security policies. These rules remain valid even if you decide to migrate from one cloud provider to another, cutting down refactoring costs.</p>
<p>The protection offered by the module is structured on three distinct levels of intervention:</p>
<ul>
<li><strong>Preventive blocking of the outbound request</strong>, which analyzes the user&rsquo;s prompt and the provided context to identify violations of corporate policies before any network communication.</li>
<li><strong>Reformatting or blocking of the inbound response</strong>, which analyzes the output generated by the language model to intercept inappropriate content, offensive language, or responses that violate ethical guidelines.</li>
<li><strong>Prevention of hallucinations and maintenance of the corporate tone of voice</strong>, ensuring that the model does not invent non-existent facts or use a communication style foreign to the brand guidelines.</li>
</ul>
<p>This bidirectional validation ensures that the CMS maintains final authority over the content. Artificial intelligence is treated as a service provider that must be constantly supervised by strict business logic.</p>
<h2 id="how-semantic-reranking-improves-rag-architectures-on-drupal-ai">How semantic Reranking improves RAG architectures on Drupal AI</h2>
<p>Semantic Reranking improves RAG architectures on Drupal by introducing a second evaluation pass based on artificial intelligence. After the initial vector filtering, a specialized model reorders the retrieved documents by analyzing their actual contextual relevance, ensuring that Large Language Models receive precise information to generate responses.</p>
<p>The <strong>new Reranking operation type</strong> represents another area of strong contribution by SparkFabrik, essential for implementing effective Retrieval-Augmented Generation architectures. The adoption of these advanced techniques requires a <a href="/en/blog/sviluppo-drupal-e-ai-il-nuovo-approccio-agentic-first/">new architectural approach oriented towards AI agents</a>, where the precision of information retrieval directly determines the quality of the final output.</p>
<p>Development teams frequently run up against the limits of pure vector search, which often retrieves similar but not contextually relevant documents. By implementing reranking, we observed a 70% reduction in &ldquo;false positives&rdquo; during complex document queries and a more effective ordering of results. This deep semantic filter ensures that the language model receives only the strictly necessary context, also optimizing token consumption.</p>
<h3 id="overcoming-the-limits-of-standard-vector-search">Overcoming the limits of standard vector search</h3>
<p>A standard vector database returns results based exclusively on the mathematical distance between the coordinates of the texts in multidimensional space. Although this method is fast and useful for sifting through large volumes of data, it does not always understand the linguistic nuances or the real intent behind a complex query. The order of the documents provided as context to an LLM significantly affects the quality of the final response. In fact, models tend to give more weight to the information presented first, despite increasingly large context windows.</p>
<p><strong>Reranking operates as a key second pass.</strong> After the vector search engine has retrieved an initial set of documents, for example, the first fifty results, a specialized model analyzes this subset. The model evaluates the actual semantic relevance of each document with respect to the specific question, assigning a new relevance score.</p>
<p><strong>This process reorders the results</strong>, bringing to the top the documents that actually contain the answer, even if mathematically they were not the closest to the original query. The result is an optimized context that is then passed to the generative language model, drastically reducing the error rate.</p>
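<p>Conceptually, the two-stage flow can be sketched as follows. Both scoring functions here are naive stand-ins; a real deployment would use embedding similarity for recall and a cross-encoder for the reranking pass:</p>

```python
# Illustrative two-stage retrieval pipeline; the scoring heuristics are
# deliberately naive stand-ins for an embedding model and a cross-encoder.
def vector_search(query: str, corpus: list, top_n: int = 50) -> list:
    # Stage 1: cheap, broad recall (here: shared-word count as fake similarity).
    query_words = set(query.lower().split())
    return sorted(corpus,
                  key=lambda doc: len(query_words & set(doc.lower().split())),
                  reverse=True)[:top_n]

def rerank(query: str, candidates: list, top_k: int = 5) -> list:
    # Stage 2: precise pairwise scoring of (query, document); in production
    # this is a specialized reranking model, not substring matching.
    return sorted(candidates,
                  key=lambda doc: 1.0 if query.lower() in doc.lower() else 0.0,
                  reverse=True)[:top_k]
```

<p>Only the reordered <code>top_k</code> documents reach the generative model, which keeps the context window small and the token bill predictable.</p>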
<h3 id="native-integration-between-vector-database-and-llm">Native integration between Vector Database and LLM</h3>
<p>The technical flow implemented in version 1.3 involves a solid integration between an advanced search engine, such as <strong>Typesense</strong> (for which we are <a href="https://www.drupal.org/project/search_api_typesense">maintainers of the Drupal module</a>) or a relational database with a vector extension, and the chosen AI provider. Drupal orchestrates this communication transparently. First, it queries the database to obtain the candidates, then it sends the results to the reranking service, and finally, it passes the reordered documents to the generative model.</p>
<p>This two-stage architecture increases the reliability of conversational systems. In internal document chatbots, employees get precise answers based on the correct corporate procedures. On e-commerce platforms, semantic searches return products that match the user&rsquo;s purchase intent, significantly improving conversion rates.</p>
<p>By treating reranking as an agnostic operation, the infrastructure allows the use of specialized models (e.g., cross-encoders) trained specifically for the semantic reordering task. These models are architecturally different and much more precise in this phase compared to using a generative LLM. The most powerful and expensive generative LLMs can thus be reserved only for the final text generation. This separation of tasks optimizes operational costs and decreases the overall latency times of the system.</p>
<div class="hs-cta-embed hs-cta-simple-placeholder hs-cta-embed-189641220106"
  style="max-width:100%; max-height:100%; width:502px;" data-hubspot-wrapper-cta-id="189641220106">
  <a href="https://cta-service-cms2.hubspot.com/web-interactives/public/v1/track/redirect?encryptedPayload=AVxigLKWfLVOas%2FtKZu2odmBK4vvZXoxKC4uqQtJn1JPkZc9NMawSAgt9v2XiefMWJbtZWkg7xsnuc5EkqtoWCe3UBGN%2BSpqqiZYzR%2BqWf64ar1pywI8Gm5B327Am2HVLfWKtbrMFPYqPJDb5WNqZKCrpdCXcZpNakWmuEr0NNozPM6MREH85z3XT1%2F15uhE8FtreV60Jcf206mphic%3D&webInteractiveContentId=189641220106&portalId=6897318" target="_blank" rel="noopener" crossorigin="anonymous">
    <img alt="Custom AI Development. We develop tailored AI solutions for your business and integrate them into your systems." loading="lazy" src="https://no-cache.hubspot.com/cta/default/6897318/interactive-189641220106.png" style="height: 100%; width: 100%; object-fit: fill"
      onerror="this.style.display='none'" />
  </a>
</div>
<h2 id="cloud-native-observability-and-advanced-api-management">Cloud Native observability and advanced API management</h2>
<p>Cloud Native observability in the Drupal AI module takes shape through the integration of the OpenTelemetry standard. This architecture allows tracking every single request to the language models, monitoring crucial metrics in real time such as latency, token consumption, and operational costs, treating artificial intelligence as a measurable microservice.</p>
<p>The implementation of artificial intelligence in production environments requires <strong>precise metrics and strict control over resources</strong>. This need explicitly connects to SparkFabrik&rsquo;s engineering experience, where AI is not seen as a black box, but as a distributed component. To fully understand this architectural philosophy, it is useful to explore the <a href="/en/blog/guides/guida-completa-cloud-native/">fundamentals and advantages of the Cloud Native approach</a>.</p>
<p>Version 1.3 of Drupal AI introduces native support for <strong>OpenTelemetry</strong>, allowing the tracking of the entire lifecycle of an autonomous agent. This level of transparency is necessary to diagnose bottlenecks and optimize performance. Having exact visibility into the costs per single AI transaction allows CTOs to justify technological investments to corporate stakeholders with irrefutable data.</p>
<h3 id="distributed-tracing-with-opentelemetry">Distributed tracing with OpenTelemetry</h3>
<p>The standardized export of metrics, spans, and traces allows operations teams to analyze every single AI request with high granularity. When an editor requests the generation of a summary, the system records exactly how long the provider took to respond. It also tracks how many context tokens were sent and how many completion tokens were generated.</p>
<p>These data allow calculating operational costs in real time, associating the expense with specific site features or certain editorial flows. The open-standards-based approach guarantees compatibility with the most popular market tools, such as Honeycomb, Grafana, and Datadog. DevOps teams can visualize the performance of artificial intelligence on the same dashboards used to monitor the database or Kubernetes clusters.</p>
<p>The adoption of OpenTelemetry avoids lock-in to the proprietary monitoring tools of individual cloud vendors. Regardless of whether the infrastructure uses models hosted on AWS, Google Cloud, or local solutions, the format of the observability data remains consistent. This unified approach greatly simplifies the management of IT operations on a large scale.</p>
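<p>To make the idea concrete, the sketch below computes the kind of per-request attributes such a trace would carry. The attribute names, prices, and helper are assumptions for illustration; the actual module exports equivalent data through the OpenTelemetry SDK:</p>

```python
import time

# Hypothetical per-request telemetry record. The attribute names and the
# per-token prices are illustrative; Drupal AI exports equivalent data as
# OpenTelemetry spans consumable by Grafana, Honeycomb, or Datadog.
PRICE_PER_1K_TOKENS = {"prompt": 0.003, "completion": 0.015}  # example USD rates

def traced_llm_call(provider_call, prompt_tokens: int) -> dict:
    start = time.monotonic()
    completion_tokens = provider_call()  # the provider request being traced
    return {
        "llm.usage.prompt_tokens": prompt_tokens,
        "llm.usage.completion_tokens": completion_tokens,
        "llm.latency_ms": (time.monotonic() - start) * 1000,
        "llm.cost_usd": prompt_tokens / 1000 * PRICE_PER_1K_TOKENS["prompt"]
                        + completion_tokens / 1000 * PRICE_PER_1K_TOKENS["completion"],
    }
```

<p>Aggregating <code>llm.cost_usd</code> by route or editorial workflow is what lets a CTO attribute AI spend to specific site features.</p>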
<h3 id="rate-limiting-failover-and-normalized-metadata">Rate limiting, failover and normalized metadata</h3>
<p>The advanced API management in version 1.3 transforms the way Drupal communicates with external providers. The system introduces <strong>rate limit thresholds and timeouts for HTTP requests</strong> that can be configured directly from the user interface. This new feature eliminates the need to write custom code to handle the limitations imposed by cloud services.</p>
<p>This evolution brings with it significant architectural advantages for platform stability:</p>
<ul>
<li>Implementation of <strong>automatic failover logic</strong>, which diverts traffic to a secondary model or an alternative provider when the primary service reaches the allowed request limit.</li>
<li><strong>Secure management of asynchronous calls</strong>, allowing AI agents to perform complex tasks in the background without blocking the main web server processes or causing timeouts for users.</li>
<li><strong>Normalization of metadata</strong>, which provides consistent information on model costs and technical capabilities regardless of the chosen provider, facilitating the switch from one vendor to another.</li>
</ul>
<p>These protection mechanisms ensure that a spike in requests to the site&rsquo;s intelligent features does not compromise the overall availability of the CMS. The platform degrades in a controlled manner, keeping critical services operational and ensuring enterprise-grade business continuity.</p>
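<p>The failover logic in the first bullet above can be sketched in a few lines. The provider callables and the <code>RateLimitError</code> class are placeholders; in version 1.3 the equivalent thresholds are configured from the UI rather than in code:</p>

```python
# Hypothetical failover chain; the error class and provider callables are
# placeholders standing in for configured AI providers.
class RateLimitError(Exception):
    """Raised when a provider has exhausted its allowed request quota."""

def complete_with_failover(prompt: str, providers: list) -> str:
    """Try each provider in order, falling through on rate limits."""
    last_error = None
    for provider in providers:
        try:
            return provider(prompt)
        except RateLimitError as exc:
            last_error = exc  # degrade gracefully to the next provider
    raise RuntimeError("all providers rate-limited") from last_error
```

<p>The platform thus degrades in a controlled manner: a saturated primary model diverts traffic instead of taking the feature down.</p>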
<h2 id="the-automation-ecosystem-from-editorial-workflows-to-ai-moderation">The automation ecosystem: from editorial workflows to AI moderation</h2>
<p>The Drupal AI automation ecosystem optimizes editorial workflows by integrating decision-making capabilities directly into the content management interface. Through tools like Field Widget Actions and automated moderation, the system reduces the cognitive load on teams, transforming complex manual tasks into fluid and immediate processes.</p>
<p>These automation features make the offering of an AI software development company immediately tangible for the business. The impact translates into a return on investment based on time savings and the reduction of human errors (an estimated saving of over 20 hours per week of manual review and moderation for medium-sized editorial teams). The goal is to simplify content management by transparently integrating AI into daily operations.</p>
<p>User interface improvements include a <strong>new Markdown editor for drafting system prompts</strong>. This technical choice is particularly useful since language models interpret the Markdown format much more efficiently than HTML or plain text, and it also makes prompts easier to structure well.</p>
<p>Last but not least, the <strong>Context Control Center</strong> allows you to define tone of voice, audience, policies, and specific corporate details just once, in a single environment. The various parts of the context can then be used by the different editorial teams to support their activities.</p>
<p>The CCC also supports the auto-completion of variables and tokens, thus allowing administrators to dynamically &ldquo;inject&rdquo; the data of the current user or node into the instructions sent to the model. This increases the overall precision of the context, consequently improving the quality of the LLM&rsquo;s output as well.</p>
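<p>A minimal sketch of this kind of token resolution is shown below. The <code>[node:title]</code>-style placeholders echo Drupal&rsquo;s token syntax, but the resolver itself is a simplified stand-in for the Context Control Center&rsquo;s implementation:</p>

```python
import re

# Simplified token injection: unknown tokens are left untouched so a
# misconfigured placeholder stays visible in the final prompt.
def inject_tokens(template: str, context: dict) -> str:
    def resolve(match: re.Match) -> str:
        return str(context.get(match.group(1), match.group(0)))
    return re.sub(r"\[([\w:]+)\]", resolve, template)
```

<p>Called with the current node and user data, this turns a generic instruction into a fully contextualized prompt.</p>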
<p>And, tying back to observability, the CCC also has usage tracking, logging, and agent debugging features, as well as a wide range of automated tests that cover all its features—fundamental aspects for engineers.</p>
<h3 id="content-moderation-and-object-detection">Content moderation and Object Detection</h3>
<p><strong>Content moderation based on artificial intelligence</strong> introduces a level of automated control over the texts entered by users or editors. The system analyzes the content in real time and can autonomously alter the moderation state of the node. For example, if inappropriate language is detected, the state automatically changes from published to flagged, requiring the intervention of a human supervisor.</p>
<p>In parallel, the integration of <strong>Object Detection</strong> expands the analysis capabilities to multimedia content. Using computer vision models or deep learning algorithms, often run locally or through platforms like Hugging Face, the system recognizes specific objects within the uploaded images. This technology returns the exact coordinates of the identified elements, allowing for complex validations.</p>
<p>A typical use case involves blocking uploads that do not comply with corporate guidelines. The system can prevent the upload of an image if it does not detect the presence of a specific safety device in a construction site photo. Or it can reject images that contain competitors&rsquo; logos, automating a quality control process that would require hours of manual work.</p>
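<p>A gate of this kind reduces to a simple set check over the model&rsquo;s detections. The label names and the shape of the detection list are assumptions about the vision model&rsquo;s output, not the module&rsquo;s real schema:</p>

```python
# Hypothetical upload gate over object-detection output; the labels and
# the detection format are illustrative assumptions.
def validate_upload(detections: list,
                    required=frozenset({"hard_hat"}),
                    banned=frozenset({"competitor_logo"}),
                    min_confidence: float = 0.6) -> bool:
    """Accept an image only if every required object is present and no
    banned object appears above the confidence threshold."""
    labels = {d["label"] for d in detections if d["confidence"] >= min_confidence}
    return required <= labels and not (banned & labels)
```

<p>The same structure extends naturally to the coordinate checks mentioned above, since the detections also carry bounding boxes.</p>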
<h3 id="field-widget-actions-for-structured-data">Field Widget Actions for structured data</h3>
<p><strong>Field Widget Actions</strong> represent the deepest integration of automation within the editorial experience, bringing generative capabilities directly to individual Drupal fields. Instead of using a generic chatbot, the editor has contextual buttons that perform specific operations on the data during entry.</p>
<p>The technical use cases introduced (or improved) in version 1.3 cover a wide spectrum of operational needs:</p>
<ul>
<li>Extraction of physical addresses from unstructured text, normalizing geographical information through integrations with services like Google Places to automatically populate map fields.</li>
<li>Generation of optimized SEO meta tags, creating catchy titles and relevant descriptions based on the analysis of the node&rsquo;s textual content.</li>
<li>Conversion of raw textual data into strictly structured JSON formats, an essential step for exposing consistent information via APIs in headless architectures.</li>
<li>Automatic creation of FAQ sections starting from long documents and text-to-speech operations that transform textual articles into audio files associated with media fields.</li>
</ul>
<p>These actions transform the CMS from a simple information container into an active assistant. By forcing language models to respect predefined data schemas, the system ensures that the generated output is immediately usable by the site&rsquo;s display logic. This approach drastically reduces the need for manual cleaning and formatting interventions. Finally, these functions are usable by anyone, with no particular technical skills required.</p>
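<p>The schema enforcement described above can be sketched as a validation step between the model and the field storage. The address schema and the <code>parse_structured_output()</code> helper are hypothetical, not the module&rsquo;s actual validation code:</p>

```python
import json

# Illustrative schema gate for a Field Widget Action: reject the model's
# output unless it matches the structure the field expects.
ADDRESS_SCHEMA = {"street": str, "city": str, "postal_code": str}

def parse_structured_output(raw: str, schema: dict) -> dict:
    """Accept the model's JSON only if every field has the expected type."""
    data = json.loads(raw)
    for key, expected_type in schema.items():
        if not isinstance(data.get(key), expected_type):
            raise ValueError(f"missing or mistyped field: {key}")
    return data
```

<p>Anything that fails the check never reaches the field, which is what makes the generated output immediately safe for the display logic.</p>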
<h2 id="conclusion">Conclusion</h2>
<p>Drupal AI 1.3 represents a major update that highlights the maturity of the ecosystem. By providing advanced tools like Guardrails for security, Reranking for semantic precision, and integration with OpenTelemetry for observability, the module offers the necessary infrastructure to operate in complex and regulated enterprise environments.</p>
<p>The secure integration of language models within corporate processes requires cross-functional skills that go well beyond the simple installation of a plugin. <strong>A deep understanding of CMS development, Cloud Native distributed architectures, and strict data governance practices is necessary.</strong> Only an integrated approach, guided by method and experience, guarantees that technological innovation does not compromise the security or performance of the system.</p>
<p>To orchestrate these technologies in a scalable and secure way, the added value lies in relying on technological partners with proven experience in open-source dynamics and in <a href="/en/servizi/ai-development/">custom artificial intelligence software development</a>. The evolution of Drupal demonstrates that the future of content management belongs to platforms capable of combining editorial flexibility with engineering rigor. <a href="/en/contatti/">Contact our experts</a> and tell us about your challenges to discover how to implement these solutions in your architecture.</p>
<div class="hs-cta-embed hs-cta-simple-placeholder hs-cta-embed-192504234572"
  style="max-width:100%; max-height:100%; width:600px;" data-hubspot-wrapper-cta-id="192504234572">
  <a href="https://cta-service-cms2.hubspot.com/web-interactives/public/v1/track/redirect?encryptedPayload=AVxigLKrxnzIEqGOx2Iu7ofqOnnbNTY8dGocvHaB54jmibayL17r8owQlkfNFX31KR1DmOE1hY2pmu9oXbJJ4mrIpCln%2FWLMoxDY6hSU9mv%2FGTur4RgZb6piedwxvuWzJanKvvbJLVeN891Q%2BXaQmXlVr8e7vY2EQju%2BknmT%2Be4lVlMTLYELWj7KrRqn0BSkYhHGNDNg&webInteractiveContentId=192504234572&portalId=6897318" target="_blank" rel="noopener" crossorigin="anonymous">
    <img alt="Drupal Development and Consulting. Tell us about your Project" loading="lazy" src="https://no-cache.hubspot.com/cta/default/6897318/interactive-192504234572.png" style="height: 100%; width: 100%; object-fit: fill"
      onerror="this.style.display='none'" />
  </a>
</div>
]]></content:encoded><media:content url="https://www.sparkfabrik.com/images/blog/drupal-ai-1-3-sicurezza-governance-maturita-e-nuovi-tools/featured-en.webp" medium="image"/><enclosure url="https://www.sparkfabrik.com/images/blog/drupal-ai-1-3-sicurezza-governance-maturita-e-nuovi-tools/featured-en.webp" type="image/webp"/><category>AI</category><category>Drupal</category><category>Security</category><category>Cloud Native</category></item><item><title>Drupal development and AI: the new agentic-first approach</title><link>https://www.sparkfabrik.com/en/blog/drupal-ai-agentic-first-approach/</link><pubDate>Tue, 31 Mar 2026 00:00:00 +0000</pubDate><author>SparkFabrik Team</author><guid>https://www.sparkfabrik.com/en/blog/drupal-ai-agentic-first-approach/</guid><description>Source code is becoming a disposable resource. The true value in Drupal development is shifting toward defining specifications and system architecture. The CTO's role is transforming into an AI orchestrator. Discover the new agentic-first approach.</description><content:encoded><![CDATA[<div class="tldr">
  <span class="tldr__label">TL;DR</span>
  <div class="tldr__body">
    Dries Buytaert&rsquo;s keynote at DrupalCon Chicago 2026 redefines Drupal development through a new agentic-first approach. The ecosystem evolves with DrupalCMS 2.1, the Context Control Centre, the Canvas visual builder, and version 1.3 of the Drupal AI module, shifting value from manual coding to the definition of specifications and the orchestration of autonomous agents.
  </div>
</div>
<p>Artificial intelligence has transformed coding into a commodity: this is the inescapable truth for CTOs. And it is the central point of Dries Buytaert&rsquo;s keynote at DrupalCon Chicago 2026, during the event celebrating 25 years of the Drupal open-source project. Source code is literally becoming a disposable resource. The true value of software development is rapidly shifting from mere coding to the rigorous definition of specifications and system architecture design. The result? A massive shift from manual programming to the orchestration of autonomous agents.</p>
<p><img src="/images/blog/sviluppo-drupal-e-ai-il-nuovo-approccio-agentic-first/inline-1.webp" alt="25 years of Drupal - Driesnote DrupalCon Chicago 2026"></p>
<p>The observations that emerged perfectly reflect a core principle of the SparkFabrik Playbook. <strong>Ephemeral code</strong> frees up resources, but artificial intelligence does not replace software engineering; it exposes it ruthlessly. <strong>If a company has a clear vision and solid requirements, AI multiplies operational efficiency; in the absence of strategic direction, it merely amplifies errors on a large scale.</strong></p>
<p>In this scenario, our <a href="/en/servizi/drupal/">Drupal development and consulting services</a> are evolving radically, positioning us as strategic partners for the governance of digital processes and the implementation of <a href="/en/risorse/hot-topics/ai-enterprise-solutions/">enterprise-grade AI solutions</a>.</p>
<p>Let&rsquo;s explore in detail the main evolutions that emerged from the event:</p>
<ul>
<li>The architecture of DrupalCMS 2.1 and the new site templates that lower barriers to entry and accelerate time-to-market.</li>
<li>The Context Control Centre, which allows you to configure tone, audience, and company policies just once for every AI interaction.</li>
<li>The evolution of visual building with Canvas and the creation of production-ready pages via AI.</li>
<li>The update of the Drupal AI module to version 1.3, featuring major innovations including the guardrails system contributed by SparkFabrik.</li>
<li>The agentic-first approach and the redefined role of code in the AI era.</li>
</ul>
<h2 id="what-are-the-new-features-introduced-by-drupalcms-21-for-the-enterprise-ecosystem">What are the new features introduced by DrupalCMS 2.1 for the enterprise ecosystem?</h2>
<p>The main innovations introduced by DrupalCMS 2.1 for the enterprise ecosystem include an advanced architecture based on <strong>core 11.3, capable of reducing database queries by 50% for uncached pages</strong>. Additionally, the native marketplace is updated with 11 industry-specific site templates, specifically designed to drastically cut release times for complex corporate platforms.</p>
<p>The technological infrastructure presented in Chicago redefines performance expectations for large organizations. The <strong>DrupalCMS 2.1</strong> engine does not merely update system dependencies; it rewrites the deep logic of data access. This structural optimization translates into immediate savings on cloud computational resources. Metrics show that corporate infrastructures can now handle intense traffic spikes with a fraction of the load on traditional servers, optimizing operational costs according to FinOps principles.</p>
<p>Alongside raw power, the engineering focus shifts to the speed of operational implementation. The new marketplace introduces <strong>11 site templates specific to vertical sectors</strong>, from healthcare and education to financial services and public administration, available directly in the integrated marketplace.</p>
<p><img src="/images/blog/sviluppo-drupal-e-ai-il-nuovo-approccio-agentic-first/inline-2.webp" alt="11 site templates - Driesnote DrupalCon Chicago 2026"></p>
<p>This modular architecture radically transforms the classic conception of Drupal development. The months of work typically required for the initial setup of standard business logic and content modeling are eliminated.</p>
<p>For IT decision-makers, adopting this platform represents entry into a <a href="/en/blog/drupal-cms-la-nuova-era-del-content-management-per-il-business/">new era of content management for business</a>. The technical <strong>advantages</strong> over legacy architectures and proprietary solutions are tangible and measurable:</p>
<ul>
<li>Drastic reduction in <strong>time-to-market</strong> thanks to pre-assembled configurations for specific industries.</li>
<li>Optimization of the infrastructural load with a sharp drop in database queries and more aggressive caching.</li>
<li>Native integration with GenAI services and solutions to streamline complex editorial workflows.</li>
<li>Standardization of security best practices inherited directly from the evolution of core 11.3.</li>
<li>The adoption of a fully open-source solution reduces the Total Cost of Ownership and eliminates vendor lock-in.</li>
</ul>
<p>The impact of these innovations on IT budgets is direct and quantifiable. When engineering teams no longer have to spend tens or hundreds of hours configuring basic roles, permissions, and publishing workflows, the budget can be entirely redirected toward core system integration and advanced customization.</p>
<h3 id="the-role-of-the-context-control-centre-in-data-governance">The role of the Context Control Centre in data governance</h3>
<p>The <strong>Context Control Centre</strong> (CCC) is the new native subsystem designed to solve the problem of hallucinations in language models applied to the enterprise. Without this tool, every time AI is used, you start from scratch, forcing teams to re-explain the brand, correct the output, and redo the work. The CCC eliminates this inefficiency by allowing you to define tone of voice, audience, policies, and design just once.</p>
<p>Through the CCC, IT teams <strong>encode the corporate context</strong> via guidelines, brand voice, tone of voice, design systems, analytical data, and regulatory requirements (even in different languages). And they do it only once, directly in the CCC.</p>
<p>When artificial intelligence queries the CMS to create new content, the CCC ensures that the final output is perfectly aligned with corporate compliance standards. In this way, any deviation from the authorized communication perimeter is blocked at the source.</p>
<p><img src="/images/blog/sviluppo-drupal-e-ai-il-nuovo-approccio-agentic-first/inline-3.webp" alt="Context Control Centre - Driesnote DrupalCon Chicago 2026"></p>
<p>But the corporate context is not static. Products evolve, metrics fluctuate, information becomes obsolete. The CCC development team is exploring the concept of <strong>dynamic context</strong>: the ability to update the context as it evolves over time and to connect external data sources (like Google Analytics) directly to the orchestration engine.</p>
<p>The goal is an <strong>autonomously self-monitoring system</strong>. Imagine, for example, a sudden drop in key metrics, or pages with obsolete details that no longer reflect the current features of a product or service. With a dynamic context, the system would be able to detect these anomalies.</p>
<p>The direction is clear: <strong>moving from a context defined once to a context that evolves with the company itself</strong>. A CMS that doesn&rsquo;t just produce brand-aligned content, but proactively flags when that content requires an update and proposes contextualized corrections. Of course, it is still in an embryonic phase, but it represents the natural frontier of AI orchestration applied to enterprise content management.</p>
<h2 id="canvas-and-display-builder-how-does-visual-creation-change-in-drupal-development">Canvas and Display Builder: how does visual creation change in Drupal development?</h2>
<p>Visual creation in Drupal development is changing radically through the use of AI agents capable of transforming text documents into production-ready pages. Tools like Canvas (the flagship tool promoted by the AI Initiative) allow for rapid prototyping guided by artificial intelligence, while more mature solutions like Display Builder ensure the rigorous application of complex design systems on a large scale.</p>
<p>The latest practical demonstration of <a href="https://www.drupal.org/project/canvas"><strong>Canvas</strong></a>&rsquo;s capabilities, in the plenary session in Chicago, shows a constant evolution of the tool, combining visual building and AI generation capabilities. A raw text document, containing only product specifications and unformatted copy, was converted into a complete web page in a matter of minutes.</p>
<p><img src="/images/blog/sviluppo-drupal-e-ai-il-nuovo-approccio-agentic-first/inline-4.webp" alt="Canvas - Driesnote DrupalCon Chicago 2026"></p>
<p>This level of automation firmly positions the Drupal ecosystem at the top of AI-powered tools for accelerating the delivery of complex interfaces.</p>
<p>Unlike disposable prototypes created by external tools, Canvas operates natively within the CMS. Language models interpret the creator&rsquo;s intent and map the content onto the visual components available in the system, keeping the permission structure, translation logic, cross-linking, and SEO metadata intact. The result is not a mockup to be rebuilt, but a production-ready page integrated into the corporate editorial workflow.</p>
<p>The new AI-assisted workflow transforms traditional frontend operations:</p>
<ol>
<li>Loading text specifications or product briefs directly into the CMS interpretation engine.</li>
<li>Semantic analysis by AI to identify the logical structure, including headings, calls to action, and structured data.</li>
<li>Automatic generation of the visual layout by applying predefined components and Canvas typographic rules.</li>
<li>Human intervention for final accessibility validation, aesthetic refinement, and formal approval for publication.</li>
</ol>
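The four steps above can be sketched as a simple pipeline. Canvas itself is a Drupal (PHP) module whose internals are not reproduced here; the following Python sketch is purely illustrative, with the AI semantic-analysis step stubbed by a crude heuristic and all function and component names invented for the example.

```python
# Illustrative sketch of a Canvas-style pipeline: raw brief -> structured page.
# All names are hypothetical; the real Canvas module is PHP inside Drupal.

def analyze_brief(raw: str) -> list[dict]:
    """Stub for step 2 (AI semantic analysis): classify each line of a brief."""
    blocks = []
    for line in filter(None, (l.strip() for l in raw.splitlines())):
        if line.endswith("!") and len(line.split()) <= 6:
            blocks.append({"type": "cta", "text": line})        # call to action
        elif len(line.split()) <= 8:
            blocks.append({"type": "heading", "text": line})    # short line -> heading
        else:
            blocks.append({"type": "paragraph", "text": line})  # body copy
    return blocks

# Step 3: map logical block types onto predefined design-system components.
COMPONENT_MAP = {"heading": "hero_title", "paragraph": "body_text", "cta": "button_primary"}

def build_page(blocks: list[dict]) -> list[dict]:
    """Generate the visual layout; every block is flagged for human review (step 4)."""
    return [{"component": COMPONENT_MAP[b["type"]],
             "content": b["text"],
             "needs_review": True}
            for b in blocks]

brief = ("Acme Widget Pro\n"
         "The Widget Pro cuts deployment time in half for cloud native teams.\n"
         "Request a demo!")
page = build_page(analyze_brief(brief))  # step 1: load the brief into the engine
```

The key point the sketch illustrates is the final flag: nothing is published until a human validates accessibility and approves the result.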
<p>For IT directors and product managers, understanding the <a href="/en/blog/drupalcon-vienna-2025/">integration of Canvas and native design systems</a> becomes fundamental to evaluating the trade-off between execution speed and global visual standardization. While Canvas excels at rapidly generating new views, large enterprise architectures often require a higher level of architectural control over design tokens.</p>
<h3 id="the-solid-alternative-display-builder-and-design-system-integration">The solid alternative: Display Builder and design system integration</h3>
<p>In contrast to the generative and prototyping-focused approach, the open-source ecosystem offers solutions specifically designed for visual governance on a global scale. During <strong>our Drupal X Business event</strong>, Michael Fanini presented <a href="https://www.drupal.org/project/display_builder"><strong>Display Builder</strong></a>, a more mature visual builder developed to meet the most stringent needs of large omnichannel organizations.</p>
<p>Unlike tools focused on instantaneous speed, Display Builder offers <strong>deep and native support for complex corporate design systems</strong> (and is fully integrated with the <a href="https://www.drupal.org/project/ui_suite">UI Suite</a> ecosystem of modules and themes). This feature ensures that every single component inserted into the page meticulously respects brand constraints, a non-negotiable requirement when providing development services and solutions for enterprises and institutions, such as multinational pharmaceutical companies or banking institutions.</p>
<p>To delve deeper into the potential of this enterprise visual architecture, we invite you to <a href="https://www.youtube.com/watch?v=jjYtRA0uIUY">watch the full talk on our YouTube channel</a>.</p>
<h2 id="how-does-the-drupal-ai-13-module-guarantee-corporate-data-security">How does the Drupal AI 1.3 module guarantee corporate data security?</h2>
<p>The Drupal AI 1.3 module guarantees corporate data security through a native Guardrails system that intercepts and filters communications with Large Language Models. This architecture applies pre- and post-processing validation rules, blocking the exposure of sensitive information and ensuring total regulatory compliance before publication.</p>
<p>The maturity reached by the open-source ecosystem transforms the Drupal CMS into a true enterprise-grade platform for the secure orchestration of language models. With the <strong>release of version 1.3 of the Drupal AI module</strong>, the community has established a new de facto standard for organizations seeking reliable architectures in the field of AI software development.</p>
<p><strong>This release tackles head-on the main security issues plaguing CTOs in the AI space</strong>: the concrete risk of data leaks and the consequent loss of control over proprietary information, the hallucinations of probabilistic models, and the potential reputational damage of AI output not aligned with the brand.</p>
<p>The core of this infrastructural security is represented by the <strong>Guardrails system</strong>, a fundamental architectural component developed and contributed directly by the SparkFabrik team (discover <a href="/en/blog/drupal-ai-contributions-2025/">all our contributions to Drupal AI</a>).</p>
<p>As detailed in the article <a href="/en/blog/guardrails-ai-in-drupal-agenti-e-gestione-avanzata/">Guardrails AI in Drupal</a>, we designed this protection layer to act as a <strong>bidirectional and real-time semantic firewall</strong>. Before a request is sent to external providers, the system proactively verifies the absence of personally identifiable information (PII), access credentials, or trade secrets.</p>
<p>Similarly, the post-processing phase analyzes the generated output to ensure <strong>compliance with current regulations, internal policies, and copyright restrictions</strong>. This architectural approach demonstrates how modern solutions must integrate robust safety nets, observable through standards like OpenTelemetry, around generative models.</p>
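The two-way filtering described above can be sketched in a few lines. The actual Guardrails system is PHP code inside the Drupal AI module and its API is not reproduced here; this Python sketch only illustrates the shape of a bidirectional filter, with invented pattern lists and function names.

```python
import re

# Illustrative sketch of a bidirectional guardrail layer. Not the actual
# Drupal AI Guardrails API; patterns and names are hypothetical examples.

PII_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),   # email addresses
    re.compile(r"\b(?:\d[ -]?){13,19}\b"),        # card-like digit runs
]
BLOCKED_OUTPUT = ["guaranteed returns"]  # e.g. claims forbidden by policy

def guarded_call(prompt: str, model) -> str:
    # Pre-processing: refuse to send possible PII to the external provider.
    if any(p.search(prompt) for p in PII_PATTERNS):
        raise ValueError("prompt blocked: possible PII detected")
    output = model(prompt)
    # Post-processing: filter non-compliant output before publication.
    if any(term in output.lower() for term in BLOCKED_OUTPUT):
        raise ValueError("output blocked: policy violation")
    return output

# Stand-in for an LLM provider call, for demonstration only.
echo = lambda prompt: "Here is a safe summary."
safe = guarded_call("Summarize our Q3 cloud strategy.", echo)
```

A real implementation would log both rejections for observability (for instance via OpenTelemetry spans, as mentioned above) rather than simply raising an error.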
<p><strong>Data security</strong> is no longer an optional add-on to be evaluated at the end of a project, but the indispensable foundation upon which to build any corporate automation initiative. Once the data security perimeter is locked down, companies can finally focus on the true value multiplier: the strategic orchestration of autonomous agents.</p>
<h2 id="why-does-the-agentic-first-approach-redefine-the-role-of-ai-software-development-companies">Why does the agentic-first approach redefine the role of AI software development companies?</h2>
<p>The agentic-first approach redefines the role of development companies, transforming them from code executors to orchestrators of intelligent systems. Artificial intelligence does not replace engineers, but amplifies their architectural capabilities, allowing a single experienced professional to generate the qualitative and quantitative output of an entire team.</p>
<p>The practical implementation of this <strong>agentic-first model</strong> implies the integration of Artificial Intelligence as a native architectural component. The operational center of gravity is shifted from manual programming to the <strong>orchestration of AI models and agents</strong> and the configuration of automated workflows. And this requires precise technical knowledge gained from experience in real projects, <a href="/en/blog/guida-allo-spec-driven-development/">Spec Driven Development</a> practices, and rigorous data governance to ensure scalability and security.</p>
<p>Instead of writing individual functions, IT teams define the rules of engagement for multiple AI agents collaborating to solve complex tasks, from code refactoring to generating automated tests. This means being able to explore <a href="/en/landing/agentic-ai-scenari-reali/">concrete application scenarios based on agentic AI</a> that reduce bottlenecks in software releases, ensuring previously unimaginable operational scalability.</p>
<div class="hs-cta-embed hs-cta-simple-placeholder hs-cta-embed-207352844150"
  style="max-width:100%; max-height:100%; width:700px;height:252.9375px" data-hubspot-wrapper-cta-id="207352844150">
  <a href="https://cta-service-cms2.hubspot.com/web-interactives/public/v1/track/redirect?encryptedPayload=AVxigLKx3UYunbEIsMJ21Q1sjzrZpgoGBgnWGCfvEUtf79q8QrHUdmR%2F6b40z005PTxH8yUDg1ao9rHmrBKj3ZFbqUW040bWWzLJ2GTEeFsAjSjdNvtj8wks8Rdxm4jkBfVuispKn4ja3QZ8j2NYouRfn5KlJsBf8nsAdTciPO0qDENhO%2BH%2B%2F6Zz4BrttG66bhPa6nUhiq1Szya2CgTQ&webInteractiveContentId=207352844150&portalId=6897318" target="_blank" rel="noopener" crossorigin="anonymous">
    <img alt="Stop wondering what AI will do in the future. Discover what it can do for your business today.&nbsp; Agentic AI: 6 application scenarios you can implement right away &nbsp;" loading="lazy" src="https://no-cache.hubspot.com/cta/default/6897318/interactive-207352844150.png" style="height: 100%; width: 100%; object-fit: fill"
      onerror="this.style.display='none'" />
  </a>
</div>
<p>SparkFabrik&rsquo;s vision embraces this structural transformation. Treating Artificial Intelligence as a simple external API enormously limits a platform&rsquo;s potential. Conversely, designing systems where autonomous agents operate within a secure perimeter allows for the automation of entire business processes.</p>
<p>In our daily operational framework, we encode this transformation with an unequivocal principle: <strong>artificial intelligence does not replace you, it exposes you</strong>.</p>
<p><strong>If you know what you want, it multiplies; if you don&rsquo;t know, it amplifies errors.</strong></p>
<p><img src="/images/blog/sviluppo-drupal-e-ai-il-nuovo-approccio-agentic-first/inline-5.webp" alt="Don&rsquo;t submit code you don&rsquo;t understand - Driesnote DrupalCon Chicago 2026"></p>
<p>The most striking and well-documented demonstration of this augmented productivity came from the work of developer Jurgen Haas on the ECA (Event-Condition-Action) module. Assisted by advanced artificial intelligence tools, a single senior developer wrote, validated, and documented 90,000 lines of code in just six weeks.</p>
<p>This volume of work shows that individual output can scale dramatically, but only if you know what you want and start from a solid base of skills that lets you orchestrate the work while holding the reins firmly.</p>
<p>To successfully implement the <strong>agentic-first approach</strong>, the architecture is based on three crucial phases:</p>
<ul>
<li>The design and implementation of centralized orchestration systems to robustly yet flexibly manage skills, system prompts, agent profiles, MCP protocols, and custom tools.</li>
<li>The integration of guardrails and advanced security systems, rigorously applying DevSecOps practices to protect corporate data flows.</li>
<li>The application of automated governance policies that validate the output of AI agents through automated testing prior to publication.</li>
</ul>
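The third phase, automated governance, can be illustrated with a minimal publication gate: agent output reaches production only if every policy check passes. The checks below are invented examples, not part of any real Drupal or SparkFabrik API.

```python
# Hypothetical governance gate for agent output (phase 3 above).
# Check names and artifact shape are illustrative only.

def no_todo_markers(artifact: dict) -> bool:
    """Reject code the agent left visibly unfinished."""
    return "TODO" not in artifact["code"]

def has_tests(artifact: dict) -> bool:
    """Require the agent to ship automated tests with its code."""
    return bool(artifact.get("tests"))

CHECKS = [no_todo_markers, has_tests]

def may_publish(artifact: dict) -> bool:
    """Output is published only if ALL policy checks succeed."""
    return all(check(artifact) for check in CHECKS)

ok = may_publish({"code": "print('hi')", "tests": ["test_hi"]})
bad = may_publish({"code": "# TODO: finish this", "tests": []})
```

In practice each check would run a real pipeline stage (linters, test suites, security scanners) rather than a string inspection, but the gate structure is the same.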
<p>Software development companies that limit themselves to selling manual programming hours are destined for rapid obsolescence. The enterprise market exclusively rewards those who know how to govern systemic complexity and orchestrate ecosystems of intelligent agents.</p>
<h3 id="spec-driven-development-and-the-harmony-between-skills-and-relationships">Spec Driven Development and the harmony between skills and relationships</h3>
<p>In an ecosystem driven by Artificial Intelligence, the quality of the generated output depends entirely on the precision of the initial specifications. SparkFabrik&rsquo;s operational strategy is firmly based on <strong>Spec Driven Development</strong>. Language models operate exclusively within the boundaries outlined by system prompts and architectural rules. An ambiguous requirement, which in the past would have required clarification among developers, today translates into a large-scale hallucination or an application outage.</p>
<p>Consequently, the role of the CTO and VP of Engineering is increasingly focused on validating information architecture and data security. The value of technical management shifts from source code review to the definition of unassailable API contracts and the verification of access policies. The success of an enterprise Drupal development project is measured today by the robustness of its specifications, which act as the true source code for AI agents.</p>
<p>The market is clearly rewarding entities capable of bridging this gap, transforming agencies from simple labor providers into strategic consultants. Fundamental to the transition, however, is understanding that the commoditization of code is not something to be feared, but a profound change to be managed with clear strategy and governance.</p>
<p><strong>AI automates execution, but strategy requires empathy and a deep understanding of the client&rsquo;s business on the one hand, and training and change management internally on the other.</strong> As we openly declare <a href="https://playbook.sparkfabrik.com/ai-development/where-we-are">in our company Playbook</a>, technology changes at a dizzying pace, but our founding principles remain steadfast.</p>
<blockquote>
<p>&ldquo;What won&rsquo;t change is why this company exists. Our vision has always been harmony between skills and human relations.&rdquo;</p>
</blockquote>
<p>The future of IT belongs to those who can balance the computational power of autonomous agents with the irreplaceable human ability to build lasting relationships of trust.</p>
<h2 id="drupalcon-chicago-2026-what-are-the-impacts-and-takeaways">DrupalCon Chicago 2026: what are the impacts and takeaways?</h2>
<p>What should we take away from DrupalCon Chicago 2026? The message for decision-makers is clear: modernizing enterprise systems no longer means endlessly rewriting code by hand; it means adopting an agentic approach.</p>
<p>Contemporary Drupal development represents the true vanguard in the orchestration of autonomous agents within an intrinsically secure, scalable framework governed by clear rules. From the optimized performance of core 11.3 to the rigorous management of semantic context via the Context Control Centre, the open-source project confirms itself as the platform of choice for large organizations that reject the vendor lock-in of proprietary models.</p>
<p>SparkFabrik does not limit itself to observing market trends or passively using these new generative tools. As demonstrated by the release of the Guardrails system and other contributions, <strong>we are actively committed to forging the technologies that define the new global standards for security, governance, and development</strong>. We position ourselves as the ideal strategic partner to guide companies through the treacherous complexities of application modernization and the secure adoption of artificial intelligence models.</p>
<p>Explore <a href="/en/risorse/hot-topics/ai-enterprise-solutions/">our custom AI solutions</a> and <a href="/en/contatti/">speak with our experts</a> for tailored architectural consulting, designed to solve the specific challenges of your organization.</p>
<div class="hs-cta-embed hs-cta-simple-placeholder hs-cta-embed-192504234572"
  style="max-width:100%; max-height:100%;" data-hubspot-wrapper-cta-id="192504234572">
  <a href="https://cta-service-cms2.hubspot.com/web-interactives/public/v1/track/redirect?encryptedPayload=AVxigLLhar8Mf5jwyDl5DioC5dKEy0x2hyvxzB2UgqDEE3Q%2Fe2nwjbeO3cDdf9RlEe4kpj6nUBfqAcnHMLvdJJrXbWqIHEJz%2FaDUKRFpFPAl1CzdUkkykJN1MJalLlCikCcxmuG03dp9HeFctmREUdGZGWsPv9eEqWLwocYEwHBnK5or6oNztUqG4C6jy%2F%2FHWYAJleaB&webInteractiveContentId=192504234572&portalId=6897318" target="_blank" rel="noopener" crossorigin="anonymous">
    <img alt="Drupal Development and Consulting. Tell us about your Project" loading="lazy" src="https://no-cache.hubspot.com/cta/default/6897318/interactive-192504234572.png" style="height: 100%; width: 100%; object-fit: fill"
      onerror="this.style.display='none'" />
  </a>
</div>
]]></content:encoded><media:content url="https://www.sparkfabrik.com/images/blog/sviluppo-drupal-e-ai-il-nuovo-approccio-agentic-first/featured-en.webp" medium="image"/><enclosure url="https://www.sparkfabrik.com/images/blog/sviluppo-drupal-e-ai-il-nuovo-approccio-agentic-first/featured-en.webp" type="image/jpeg"/><category>AI</category><category>Drupal</category></item><item><title>Drupal4GovEU: Digital Sovereignty and Open Source for Public Administration</title><link>https://www.sparkfabrik.com/en/blog/drupal4gov-digital-sovereignty-open-source-pa/</link><pubDate>Tue, 10 Mar 2026 00:00:00 +0000</pubDate><author>SparkFabrik Team</author><guid>https://www.sparkfabrik.com/en/blog/drupal4gov-digital-sovereignty-open-source-pa/</guid><description>Digital sovereignty is essential for Public Administration, protecting sensitive data and critical national infrastructure. Open source emerges as a cornerstone.</description><content:encoded><![CDATA[<div class="tldr">
  <span class="tldr__label">TL;DR</span>
  <div class="tldr__body">
    The Drupal4GovEU conference in Brussels highlighted how digital sovereignty is now a concrete priority for European public administration. Governments must control the data, infrastructure, and source code behind public services. Open source — and Drupal in particular — is the key tool to eliminate vendor lock-in, ensure security and accessibility, and maintain technological independence. The core message: free software must be treated as public infrastructure, and institutions must shift from passive users to active contributors within open source communities.
  </div>
</div>
<p>Have you ever sat down at your computer one evening to book a medical appointment through your region&rsquo;s portal, or to pay your municipality&rsquo;s waste tax? You enter your personal details, provide sensitive information about your health or assets, and click &ldquo;submit&rdquo;. But have you ever wondered where that data physically ends up? Who owns the servers where it&rsquo;s stored? And above all, who wrote the code that manages such delicate information?</p>
<p>These questions are no longer mere speculations for industry insiders, but represent the core of a fundamental debate for our future. On January 29, 2026, the city of Brussels hosted the <strong>first edition of <a href="https://drupal4gov.eu">Drupal4Gov EU</a></strong>. This event, organized during the <a href="https://opensourceweek.eu">Open Source Week</a> by the European Commission&rsquo;s Drupal Community of Practice and the EUIBAs, proved to be a crucial moment for defining the guidelines for tomorrow&rsquo;s public services.</p>
<p>During the conference, an unequivocal truth emerged. <strong>Digital sovereignty</strong> is not an abstract concept or a bureaucratic whim, but a practical and urgent necessity. European governments and institutions are addressing this challenge through the strategic adoption of open technologies. In this scenario of profound transformation, <strong>open source platforms like Drupal</strong> are at the forefront, offering the necessary tools to build a secure, transparent, and truly independent public infrastructure.</p>
<h2 id="what-is-digital-sovereignty-and-why-does-it-concern-all-of-us">What is digital sovereignty and why does it concern all of us?</h2>
<p>Digital sovereignty is the <strong>ability of a State or institution to exercise full control over its technological infrastructure, citizens&rsquo; data, and the software used</strong>. Without this autonomy, government bodies depend on external providers, losing decision-making power over essential public services.</p>
<p>To understand this concept, we can use a metaphor very close to everyday life: the difference between renting and owning a home. An institution that does not control its technology is exactly like a tenant. It can use the apartment, but it doesn&rsquo;t have the keys to change the lock, it can&rsquo;t decide to renovate the rooms, and, even worse, the landlord might decide to drastically increase the rent or evict them with little notice. Building services for citizens on closed, proprietary platforms effectively means handing over the keys to the public house to private entities.</p>
<p>To ensure true independence, control must be articulated on three fundamental levels:</p>
<ul>
<li><strong>Control over data:</strong> This concerns the physical location where information is saved. Citizens&rsquo; health, tax, and personal data must reside on servers subject to European regulations, protected from interference or surveillance by third-party nations.</li>
<li><strong>Control over operations:</strong> This defines who physically manages the systems day-to-day. Administrations must be guaranteed that critical infrastructures are not interrupted or altered by corporate decisions made on the other side of the world.</li>
<li><strong>Control over technology:</strong> This concerns who writes, inspects, and modifies the source code. Only by having access to the internal mechanisms of the software is it possible to verify the absence of hidden vulnerabilities and adapt the tools to the real needs of the community.</li>
</ul>
<p>The impact of all this on citizens is direct and tangible. <strong>Privacy protection</strong> cannot exist without mathematical certainty of how data is processed. National security requires government systems to withstand technological blackmail or cyberattacks. Finally, the continuity of essential public services must be guaranteed at all times, ensuring that a hospital, a court, or a municipality can always operate without depending on the commercial fate of a single software provider.</p>
<h2 id="how-does-open-source-ensure-the-digital-sovereignty-of-public-administration">How does open source ensure the digital sovereignty of Public Administration?</h2>
<p><strong>Open source ensures the digital sovereignty of Public Administration</strong> by eliminating vendor lock-in, which is the forced dependence on a single technological provider. By adopting open code, governments retain the freedom to inspect, modify, and transfer their systems without being subject to commercial constraints or external technical limitations.</p>
<p>The concept of vendor lock-in is one of the most serious risks for a public body. When an administration purchases closed software, whose internal mechanisms are secret, it becomes inextricably linked to the company that produces it. If that company decides to double license prices, discontinue technical support, or change product features, the entity has no alternatives. Migrating to a new system would cost too much time and money, forcing the government to accept unfavorable conditions paid with taxpayers&rsquo; money. Open code breaks this chain, restoring total freedom of choice and maneuver to institutions.</p>
<p>During the event in Brussels, an extremely effective analogy was used to explain this dynamic: the aqueduct. A government has an absolute duty to ensure that the drinking water reaching citizens&rsquo; homes is safe, clean, and free of pathogens. To do this, it cannot merely trust the word of a private supplier; it must be able to inspect the pipes, analyze the sources, and check the filters.</p>
<p>The exact same principle applies to technology. A government must independently know and verify the software on which citizen services are based. If the code is secret, inspection is impossible. It is also a matter of responsibility: a government must know which open source projects are safe and reliable to use, and the only real way to know that is to be directly involved in how these projects are created and maintained.</p>
<p>This requires a profound cultural and financial paradigm shift on the part of institutions, similar to the <a href="/en/blog/digital-transformation-e-resilienza-cosa-ci-insegna-il-coronavirus/">accelerated digital transformation during the pandemic</a>. It is no longer enough to purchase licenses as one buys stationery. As a company strongly committed to the development and promotion of the open source ecosystem, we are well aware of this dynamic.</p>
<p>As our <strong>CTO Paolo Mainardi</strong> precisely highlighted during the reflections arising from the conference:</p>
<blockquote>
<p><em><strong>&ldquo;Open Source is a public good that must be supported and funded in a new, modern way: like public infrastructure&rdquo;</strong></em>.</p>
</blockquote>
<p>Free software must be treated, funded, and maintained exactly as highways, bridges, or, indeed, public aqueducts are.</p>
<h2 id="drupal4goveu-digital-sovereignty-lessons-from-the-heart-of-europe">Drupal4GovEU: Digital sovereignty lessons from the heart of Europe</h2>
<p>The first edition of Drupal4GovEU demonstrated that to achieve true digital sovereignty, <strong>European institutions must stop being mere consumers of software and become active contributors</strong>. Participating in the open source ecosystem is the only way to ensure the security and continuous evolution of public services.</p>
<p>Our direct observations from the conference confirm an unequivocal trend. Administrations that merely download and use open code gain only a partial benefit. To truly govern technology, it is necessary to sit at the decision-making tables of communities, propose changes, and invest resources in shared development. For those who wish to delve deeper into the individual presentations, the <a href="https://www.youtube.com/playlist?list=PLNubpNMwP36QH5Y3RlbOiV4f9hjlrxCOo">official event playlist on YouTube</a> is available, a valuable resource for understanding the direction of European public innovation.</p>
<h3 id="the-active-role-of-governments-and-local-artificial-intelligence">The active role of governments and local artificial intelligence</h3>
<p>The shift from passive users to active creators was the focus of <strong>Sachiko Muto&rsquo;s keynote</strong>, titled <em>&ldquo;Unlocking Public Sector Contributions to Open Source&rdquo;</em>. Her speech clarified how governments must structure themselves to be effectively involved in open source projects.</p>
<p>Being directly involved, funding development, and allowing their internal developers to write code for open projects is the only way to fully understand how these high-public-interest projects (really) work. Only by contributing directly can institutions ensure that the software precisely meets the complex needs of the public administrative machine.</p>
<blockquote>
<p><em><strong>“Public institutions should take part in open-source projects not only by providing funding, but also by actively contributing to them.”</strong></em></p>
</blockquote>
<p>This need for control becomes even more pressing when it comes to new technologies. Josef Kruckenberg illustrated an illuminating practical case in his presentation <em>&ldquo;How AI is Supporting End Users and Editors at the Canton of Basel-Stadt&rdquo;</em>. The Canton of Basel has implemented an <strong>AI-based chatbot to help citizens handle bureaucratic procedures</strong> quickly and intuitively. The true innovation, however, lies in the system&rsquo;s architecture.</p>
<p>To keep sensitive data secure and ensure digital sovereignty, the entire artificial intelligence model is hosted in <strong>Swiss data centers</strong>, such as those provided by Infomaniak. This approach demonstrates that <strong>it is possible to combine the most advanced technological innovation with rigorous protection of local data</strong>, without ceding information to overseas providers.</p>
<h3 id="accessibility-and-scalability-for-european-citizens">Accessibility and scalability for European citizens</h3>
<p>Beyond security, public platforms must handle immense traffic volumes while maintaining <strong>structural consistency and accessibility</strong>. Sandro d&rsquo;Orazio and Massimiliano Molinari recounted the European Commission&rsquo;s successful journey in creating a centralized solution for the <strong>Europa.eu domain</strong>. Using <strong>a Drupal-based architecture</strong>, they managed to consolidate hundreds of fragmented websites into a coherent ecosystem, drastically improving security, scalability, and user experience for millions of European citizens.</p>
<p>But a scalable service is useless if it&rsquo;s not usable by everyone. Mike Gifford&rsquo;s talk addressed <strong>accessibility not as a mere technical requirement, but as a fundamental right</strong>. Gifford explained the practical impact of the <strong>Web Accessibility Directive</strong> (WAD) and the <strong>European Accessibility Act</strong> (EAA).</p>
<p>Building an accessible government website, which allows people with visual, motor, or cognitive disabilities to navigate without obstacles, is not just an obligation to avoid legal penalties. It is an essential civic duty. Open platforms allow communities to develop modules and themes already compliant with these directives, facilitating the work of administrations in ensuring total digital inclusion.</p>
<h2 id="why-is-drupal-the-engine-of-innovation-for-complex-institutions-and-highly-regulated-industries">Why is Drupal the engine of innovation for complex institutions and highly regulated industries?</h2>
<p><strong>Drupal has established itself as the engine of innovation for complex institutions</strong> thanks to its flexible architecture, its high security standards, and the support of a vast global community. This open source platform makes it possible to manage enormous volumes of data while ensuring full adherence to regulations.</p>
<p>During the Brussels sessions, it became clear that this CMS (Content Management System) is no longer considered just one option among many, but the <strong>platform of choice for high-level government portals</strong>. A concrete example of this excellence is the official European Union portal, Europa.eu, which manages vital information for millions of citizens in dozens of different languages.</p>
<p>Drupal&rsquo;s strength lies in its ability to model extremely complex information architectures, typical of ministries or large public agencies. Furthermore, the open nature of the code allows thousands of developers worldwide to identify and resolve potential vulnerabilities with a speed that proprietary software cannot match.</p>
<p>Security and regulatory compliance are non-negotiable pillars for the public sector. An architecture based on open technologies greatly facilitates adherence to stringent regulations. As we analyzed in our in-depth look at <a href="/en/blog/nis2-dora-impatto-sulla-cybersecurity-nel-cloud-native/">the impact of NIS2 and DORA on cybersecurity in Cloud Native</a>, institutions must ensure proactive resilience against cyberattacks. Drupal integrates perfectly into modern cloud ecosystems, allowing the application of rigorous security policies and maintaining full control over who accesses critical information.</p>
<p>Our team&rsquo;s direct experience confirms this potential. At SparkFabrik, we design and develop solutions for organizations that cannot afford the slightest margin of error. <strong>We have carried out complex projects in areas where security and stability are vital</strong>, providing <a href="/en/servizi/by-industry/financial-services/">digital solutions for financial services</a>. We are talking about critical platforms for clients of the caliber of <strong>London Stock Exchange</strong> and <strong>Borsa Italiana/Euronext</strong> that are based on robust architectures requiring levels of reliability comparable, if not superior, to those of governments.</p>
<p>Similarly, we manage large-scale modernizations in the education sector, as demonstrated by our work for <a href="/en/case-studies/la-scuola-sei/">La Scuola</a>, where we implemented a secure and scalable infrastructure based on Drupal 10. These experiences demonstrate that open technologies are ready to support the most critical and challenging workloads.</p>
<div class="hs-cta-embed hs-cta-simple-placeholder hs-cta-embed-192504234572"
  style="max-width:100%; max-height:100%;" data-hubspot-wrapper-cta-id="192504234572">
  <a href="https://cta-service-cms2.hubspot.com/web-interactives/public/v1/track/redirect?encryptedPayload=AVxigLKlmKzf2Hwn55UNYXdIUXyflC%2FAHRYJmg6vs7FkrjDd%2BepXRoEaL9nbqDMCpspkF2kl7nvupcyxbwNfVK63o6rSmbdaCfGo5%2F0dlt01OF%2FUyiKgdUZw8ABn4civy7LbBv0ak4g6D5GmLsNwIycEJZQ%2FG6Bja8XCQdYVSmSbsVS4dw26YHIRlsu69RTrcBXmawqW&webInteractiveContentId=192504234572&portalId=6897318" target="_blank" rel="noopener" crossorigin="anonymous">
    <img alt="Drupal Development and Consulting. Tell us about your Project" loading="lazy" src="https://no-cache.hubspot.com/cta/default/6897318/interactive-192504234572.png" style="height: 100%; width: 100%; object-fit: fill"
      onerror="this.style.display='none'" />
  </a>
</div>
<h2 id="conclusion">Conclusion</h2>
<p>The first edition of Drupal4GovEU has drawn a clear line for the future of European digital services. Digital sovereignty, uncompromising accessibility, and the strategic adoption of open source are no longer theoretical concepts, but the three pillars on which to build a modern, efficient, and truly citizen-centric Public Administration. We have seen how control over data and code is the only effective shield against vendor lock-in and how active participation in development communities is vital for national security.</p>
<p>This paradigm shift requires courage and vision. Public decision-makers, project managers, and innovation leaders within complex organizations must <strong>prioritize the adoption of open and secure technologies</strong> for their institutional portals.</p>
<p>But the transition to technological independence is a journey that should not be undertaken alone. SparkFabrik positions itself as a key technological partner in this transition, thanks to our proven experience in open source contribution, the development of secure Cloud Native architectures, and deep technical and strategic expertise in Drupal.</p>
<p><a href="/en/contact-us/">Contact us for a personalized consultation with our experts</a>: it&rsquo;s the first step to transforming regulatory challenges into extraordinary innovation opportunities.</p>
<hr>
]]></content:encoded><media:content url="https://www.sparkfabrik.com/images/blog/drupal4goveu-sovranita-digitale-e-open-source-per-la-pa/featured-en.webp" medium="image"/><enclosure url="https://www.sparkfabrik.com/images/blog/drupal4goveu-sovranita-digitale-e-open-source-per-la-pa/featured-en.webp" type="image/jpeg"/><category>Open Source</category><category>Drupal</category><category>Digital Transformation</category></item><item><title>AI guardrails in Drupal: agents and advanced management</title><link>https://www.sparkfabrik.com/en/blog/ai-guardrails-drupal-advanced-management/</link><pubDate>Fri, 06 Mar 2026 00:00:00 +0000</pubDate><author>SparkFabrik Team</author><guid>https://www.sparkfabrik.com/en/blog/ai-guardrails-drupal-advanced-management/</guid><description>Integrating language models in Drupal requires architectural strategies to mitigate the risks of non-deterministic systems.</description><content:encoded><![CDATA[<div class="tldr">
  <span class="tldr__label">TL;DR</span>
  <div class="tldr__body">
    AI guardrails are security mechanisms that filter LLM inputs and outputs to prevent hallucinations, prompt injection, and sensitive data leaks. This article shows how to implement them in Drupal via the Drupal AI module, configure validation policies, and safely orchestrate autonomous agents.
  </div>
</div>
<p>Integrating <strong>Large Language Models</strong> (LLMs) into production environments introduces unprecedented architectural challenges for software engineering teams. Generative artificial intelligence models are inherently non-deterministic systems, meaning the same input can produce different outputs over time.</p>
<p>This variability exposes enterprise applications to critical risks, including hallucinations, the unintentional disclosure of sensitive data (PII), and the generation of content in direct conflict with company policies.</p>
<p><a href="https://www.youtube.com/watch?v=-d9mPU1Ghoc&amp;t=107s"><img src="/images/blog/guardrails-ai-in-drupal-agenti-e-gestione-avanzata/inline-0.webp" alt="Luca Lusso, developer at SparkFabrik, introduces the topic of Artificial Intelligence applied to enterprise systems during his presentation."></a></p>
<p class="video-timestamp"><a href="https://www.youtube.com/watch?v=-d9mPU1Ghoc&t=107s">▶ Watch this segment in the video</a></p>
<p>To mitigate these risks, relying on accurate prompt engineering is not enough. It is necessary to implement a structural control layer that acts as an intermediary between the user, the content management system (CMS), and the language model. During the talk &ldquo;Drupal X Business: Next-Gen Digital Experiences&rdquo;, Luca Lusso, developer at SparkFabrik, illustrated how the Drupal ecosystem is tackling this challenge. The goal is to transform AI from a potential risk into a governable and secure asset.</p>
<p>In this article, we explore the practical implementation of these security barriers within the CMS, from the unique perspective of direct contributors to the guardrails system in Drupal. We will analyze how to configure validation policies, orchestrate data flows via autonomous agents, and what architectural strategies to adopt to protect the brand. For a broader view on how we are driving this transformation, we invite you to read <a href="/en/blog/drupal-ai-contributions-2025/">how we shaped the future of Drupal AI in 2025</a> and our <a href="/en/blog/drupal-ai-panoramica-novita-visione-di-sparkfabrik/">comprehensive overview of Drupal AI news and SparkFabrik&rsquo;s vision</a>.</p>
<p>Adopting a rigorous methodological approach is the only way to take artificial intelligence out of the experimental phases and integrate it into mission-critical processes. Platform Engineering teams and developers must collaborate to build pipelines where security is guaranteed by the infrastructure&rsquo;s design itself.</p>
<h2 id="what-are-ai-guardrails-and-why-do-they-protect-the-business">What are AI guardrails and how do they protect the business?</h2>
<p><strong>AI guardrails</strong> are an architectural security infrastructure designed to intercept, validate, and filter communications between users and Large Language Models (LLMs) in real time. They operate on a <strong>security-by-design</strong> approach to ensure that the generated outputs strictly comply with company policies and privacy regulations.</p>
<p>These tools are not limited to being simple keyword-based filters. They represent a true intelligent middleware layer that semantically analyzes the context of conversations. When a user sends a request, the system evaluates it before it reaches the external provider, blocking manipulation attempts or out-of-context requests.</p>
<p><a href="https://www.youtube.com/watch?v=-d9mPU1Ghoc&amp;t=198s"><img src="/images/blog/guardrails-ai-in-drupal-agenti-e-gestione-avanzata/inline-1.webp" alt="Architectural diagram showing the positioning of guardrails as an intermediate security layer between the user application and the Large Language Model."></a></p>
<p class="video-timestamp"><a href="https://www.youtube.com/watch?v=-d9mPU1Ghoc&t=198s">▶ Watch this segment in the video</a></p>
<p><strong>Brand protection</strong> is the main business driver for adopting these technologies. An unconstrained language model exposes the company to enormous reputational damage, as it can generate responses that are out of line with the corporate tone of voice, or worse, offensive or discriminatory content. Furthermore, accidentally sending personal data to third-party providers constitutes a serious violation of regulations like the GDPR.</p>
<p>Implementing a robust validation system translates into an immediate competitive advantage. Companies that manage to govern the unpredictability of LLMs can scale the use of artificial intelligence across all business processes, from internal customer care to automated content generation. This approach transforms a potential legal and image risk into a reliable and certifiable automation tool.</p>
<h3 id="anatomy-of-an-ai-control-system">Anatomy of an AI control system</h3>
<p>A modern validation architecture typically consists of two fundamental logical elements: Checkers and Correctors. Checkers are specialized algorithms or models that analyze the payload in transit, verifying the presence of anomalies, malicious patterns, or violations of configured policies. Their sole task is to issue a verdict on data compliance.</p>
<p>Correctors come into play subsequently, applying the necessary mitigation actions. Depending on the severity of the violation, they can mask parts of the text, rewrite the response into a safe format, or block the transaction entirely by returning a predefined error message. This separation of responsibilities facilitates rule maintenance.</p>
<p>In Cloud Native architectures managed by Platform Engineering teams, these components are often deployed as independent microservices or sidecar containers within a <strong>Kubernetes</strong> cluster. This isolation ensures that validation operations, which can be computationally intensive, do not impact the performance of the main application and can scale horizontally based on the request load.</p>
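<p>The Checker/Corrector split described above can be sketched in a few lines of Python. Note that all class names and the topic rule below are hypothetical illustrations of the pattern, not the actual Drupal AI plugin API (which is PHP):</p>

```python
from dataclasses import dataclass


@dataclass
class Verdict:
    """A checker's only output: a compliance verdict. Checkers never mutate the payload."""
    ok: bool
    reason: str = ""


class TopicChecker:
    """Checker: flags payloads that mention forbidden topics (naive substring match)."""
    def __init__(self, forbidden: list[str]):
        self.forbidden = [t.lower() for t in forbidden]

    def check(self, text: str) -> Verdict:
        for topic in self.forbidden:
            if topic in text.lower():
                return Verdict(ok=False, reason=f"forbidden topic: {topic}")
        return Verdict(ok=True)


class BlockCorrector:
    """Corrector: applies the mitigation -- here, replacing the whole response."""
    def __init__(self, fallback: str):
        self.fallback = fallback

    def correct(self, text: str, verdict: Verdict) -> str:
        return self.fallback


def guard(text: str, checker: TopicChecker, corrector: BlockCorrector) -> str:
    """Run the checker; only hand the payload to the corrector on a violation."""
    verdict = checker.check(text)
    return text if verdict.ok else corrector.correct(text, verdict)
```

<p>Because the verdict and the mitigation are separate objects, a new blocking rule only requires a new checker; the corrector (and its fallback message) stays untouched.</p>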
<h2 id="how-ai-guardrails-work-input-output-and-agents">How AI guardrails work: input, output, and agents</h2>
<p>The operation of AI guardrails is based on continuous bidirectional control. In the input phase, they apply prompt filtering to block malicious injections or forbidden topics. In the output phase, they perform response filtering to censor hallucinations, inappropriate language, and prevent the exposure of sensitive data.</p>
<p>The security of an LLM-based application requires that neither direction is neglected. If you only filter the input, the model could still produce hallucinations based on past training data. If you only filter the output, you expose the infrastructure to unnecessary computational costs to process malicious prompts that should have been discarded upstream.</p>
<p>To better understand the dynamics of this control, it is useful to analyze the three main areas of application:</p>
<ul>
<li><strong>Input filtering (Prompt Filtering):</strong> Analyzes the user&rsquo;s intentions to prevent <strong>prompt injection</strong> attacks, where the user tries to overwrite the model&rsquo;s system instructions. It also serves to keep the conversation confined to topics relevant to the company&rsquo;s business.</li>
<li><strong>Output filtering (Response Filtering):</strong> Evaluates the response generated by the model before showing it to the user. It detects and blocks toxic language, responses inconsistent with the provided context, or information that violates corporate compliance directives.</li>
<li><strong>Sensitive data management (PII Redaction):</strong> Identifies personally identifiable information, such as email addresses, phone numbers, or tax codes, within the user&rsquo;s prompt and replaces them with secure placeholders before sending them to the model.</li>
</ul>
<p>Managing sensitive information is perhaps the most critical aspect from a regulatory standpoint. During processing, an automatic redaction system intercepts strings like &ldquo;<a href="mailto:test@example.com">test@example.com</a>&rdquo; and converts them into anonymous tokens like &ldquo;[EMAIL]&rdquo;. In this way, the model processes the request without ever &ldquo;seeing&rdquo; the real data, ensuring total compliance with data privacy requirements.</p>
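<p>A minimal sketch of this redaction step, assuming simple regex patterns (production systems typically combine regexes with NER models to also catch names and addresses):</p>

```python
import re

# Hypothetical redaction table: pattern -> anonymous token.
# Deliberately simplistic; real PII detection is broader than two regexes.
PII_PATTERNS = {
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"): "[EMAIL]",
    re.compile(r"\+?\d[\d\s-]{7,}\d"): "[PHONE]",
}


def redact(prompt: str) -> str:
    """Replace PII with placeholder tokens before the prompt leaves the perimeter."""
    for pattern, token in PII_PATTERNS.items():
        prompt = pattern.sub(token, prompt)
    return prompt
```

<p>With this in place, the external model receives <code>write to [EMAIL]</code> instead of <code>write to test@example.com</code>: it processes the request without ever seeing the real data.</p>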
<p><a href="https://www.youtube.com/watch?v=-d9mPU1Ghoc&amp;t=672s"><img src="/images/blog/guardrails-ai-in-drupal-agenti-e-gestione-avanzata/inline-2.webp" alt="Chat Generation Explorer interface illustrating the interception of an email address and its replacement with a placeholder to protect user privacy."></a></p>
<p class="video-timestamp"><a href="https://www.youtube.com/watch?v=-d9mPU1Ghoc&t=672s">▶ Watch this segment in the video</a></p>
<p>These security policies are not limited to chat interfaces exposed to human users. They take on even greater importance when managing automated workflows. If you want to dive deeper into how to orchestrate these complex architectures, you can read our guide on how to develop AI-powered cloud native applications, from code review to multi-agent systems.</p>
<p>In modern systems, autonomous agents constantly communicate with each other and with third-party APIs to perform complex tasks. In these scenarios, validation systems act as true semantic firewalls between the various nodes of the system. They ensure that an agent with database access does not inadvertently transmit the entire schema to an external model during a query generation request.</p>
<h2 id="how-are-ai-guardrails-configured-in-drupal">How are AI guardrails configured in Drupal?</h2>
<p>To implement guardrails in Drupal, the Drupal AI module is used, which allows configuring validation policies via a graphical interface, orchestrating complex workflows with Flowdrop AI, and automating operations via Runner APIs. This centralized approach ensures rigorous control over autonomous agents and data flows.</p>
<p>The main advantage of the Drupal ecosystem is the ability to manage complex validation logic directly from the back office, without having to write custom code for every new rule. The base module provides the necessary infrastructure, while additional modules expand the types of controls available, allowing system administrators to react quickly to new threats or business requirements.</p>
<p><a href="https://www.youtube.com/watch?v=-d9mPU1Ghoc&amp;t=620s"><img src="/images/blog/guardrails-ai-in-drupal-agenti-e-gestione-avanzata/inline-3.webp" alt="Native configuration screen in Drupal showing the addition and management of validation policies for AI models."></a></p>
<p class="video-timestamp"><a href="https://www.youtube.com/watch?v=-d9mPU1Ghoc&t=620s">▶ Watch this segment in the video</a></p>
<p>Configuring a security policy follows a well-defined logical process, designed to integrate with the existing workflows of site builders and developers. Here are the fundamental steps to activate an operational control:</p>
<ol>
<li><strong>Creating the individual policy:</strong> Access the dedicated section in the back office and select the desired validation plugin (for example, a control based on external cloud services or a local filter).</li>
<li><strong>Defining blocking rules:</strong> Instruct the system on specific parameters, such as configuring the &ldquo;Restrict to Topic&rdquo; plugin to prevent the model from generating responses regarding a direct competitor.</li>
<li><strong>Setting the fallback message:</strong> Define the exact text the system should return to the user when the policy is breached, ensuring a controlled user experience (e.g., &ldquo;I am not authorized to discuss this topic&rdquo;).</li>
<li><strong>Assigning to a Guardrail Set:</strong> Group the created policies into logical sets, specifying which rules to apply in the input phase (pre-generation) and which in the output phase (post-generation).</li>
</ol>
<p><a href="https://www.youtube.com/watch?v=-d9mPU1Ghoc&amp;t=1106s"><img src="/images/blog/guardrails-ai-in-drupal-agenti-e-gestione-avanzata/inline-4.webp" alt="Practical example of blocking in action: the system intercepts a question about the disallowed topic &lsquo;WordPress&rsquo; and returns the configured fallback message."></a></p>
<p class="video-timestamp"><a href="https://www.youtube.com/watch?v=-d9mPU1Ghoc&t=1106s">▶ Watch this segment in the video</a></p>
<p>In addition to configuring static rules, advanced management requires visual orchestration tools. This is where <strong>Flowdrop AI</strong> comes in, an innovative solution that allows drawing logical workflows through a node-based interface. This tool is essential for development teams that need to build pipelines where the output of one model becomes the input of another, with intermediate validation steps.</p>
<p><a href="https://www.youtube.com/watch?v=-d9mPU1Ghoc&amp;t=1511s"><img src="/images/blog/guardrails-ai-in-drupal-agenti-e-gestione-avanzata/inline-5.webp" alt="Node-based interface of Flowdrop AI used to visually design and orchestrate complex workflows based on autonomous agents."></a></p>
<p class="video-timestamp"><a href="https://www.youtube.com/watch?v=-d9mPU1Ghoc&t=1511s">▶ Watch this segment in the video</a></p>
<p>Through Flowdrop AI, it is possible to visually map the entire data lifecycle. Validation nodes can be strategically inserted to verify that an intermediate task, such as extracting metadata from a PDF, does not contain sensitive information before being passed to the node tasked with generating a public summary.</p>
<p>The true potential of this architecture is expressed when AI is not limited to generating text but performs actions on the system. Drupal&rsquo;s <strong>Runner APIs</strong> allow an AI agent to execute complex operations on the CMS starting from a natural language prompt. An authorized user could ask the agent to &ldquo;create a new content type for Events with fields for date and location&rdquo;.</p>
<p><a href="https://www.youtube.com/watch?v=-d9mPU1Ghoc&amp;t=1695s"><img src="/images/blog/guardrails-ai-in-drupal-agenti-e-gestione-avanzata/inline-6.webp" alt="Demonstration of using the Runner APIs, where an AI agent processes a natural language request to perform structural operations on the CMS."></a></p>
<p class="video-timestamp"><a href="https://www.youtube.com/watch?v=-d9mPU1Ghoc&t=1695s">▶ Watch this segment in the video</a></p>
<p>The Runner APIs translate the textual intent into structured API calls, creating the entities in the database. In this scenario, security policies become fundamental to validate the agent&rsquo;s permissions and ensure that the requested operations do not compromise the integrity of the site&rsquo;s information architecture, while maintaining a complete audit log of all executed actions.</p>
<h3 id="declarative-management-of-validation-policies-as-code">Declarative management of validation policies as code</h3>
<p>For DevOps Engineers managing infrastructure through declarative approaches, security policies can be defined and versioned as code. This approach ensures that business rules are identically replicable across all environments, from staging to production.</p>
<p>Instead of operating only from the interface, it is possible to <strong>structure a YAML file</strong> that defines a set of rules to mask sensitive data and block inappropriate language. Within this declarative configuration, unique identifiers and descriptions for the policy are established, then dividing the controls into two distinct phases.</p>
<p>In the <strong>pre-generation phase</strong>, plugins for PII redaction are activated, specifying entities such as emails, credit cards, or phone numbers to be masked with appropriate characters, along with filters for offensive language in restrictive mode.</p>
<p>In the <strong>post-generation phase</strong>, hallucination control is configured, setting a tolerance threshold and a fallback message in case the generated information is not present in the company documents.</p>
<p>This structure allows security rules to be easily integrated into Continuous Integration pipelines, validating policies before every infrastructural release.</p>
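<p>Under those assumptions, such a file might look like the following sketch. The keys and plugin names here are illustrative, not the module's actual configuration schema:</p>

```yaml
id: brand_safety_set
label: "Brand safety guardrails"
pre_generation:            # input phase: runs before the prompt reaches the LLM
  - plugin: pii_redaction
    entities: [email, credit_card, phone_number]
    mask_character: "*"
  - plugin: profanity_filter
    mode: restrictive
post_generation:           # output phase: runs on the model's response
  - plugin: hallucination_check
    threshold: 0.8
    fallback_message: "I could not find this information in the company documents."
```

<p>Versioning a file like this alongside the application code is what makes the policies reviewable in pull requests and replicable across environments.</p>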
<h2 id="what-are-the-alternatives-and-the-technological-ecosystem">What are the alternatives in the technological ecosystem?</h2>
<p>Alternatives to guardrails in Drupal include managed cloud services like Amazon Bedrock, open-source frameworks like Guardrails AI and LangChain, or on-premise Small Language Models (SLMs) for maximum security of sensitive data. The choice depends on compliance requirements, budget, and team skills.</p>
<p>The technological ecosystem for validating language models offers diversified approaches that can be combined to create a layered defense architecture.</p>
<p>Managed services from major cloud providers offer the fastest adoption path. <strong>Amazon Bedrock</strong>, for example, provides pre-configured policies to filter toxic content, block specific topics, and remove PII. The main advantage of these solutions is native scalability and the reduction of operational load for internal teams, who do not have to worry about updating block dictionaries or maintaining the validation infrastructure.</p>
<p>For development teams needing more granular control, the open-source landscape offers powerful tools; exploring GitHub for AI guardrail projects reveals flexible solutions for every stack. <strong>Guardrails AI</strong>, for example, is a Python framework that lets developers write complex custom validation logic, often orchestrated via LangChain to validate autonomous agent pipelines. With these tools, a developer can enforce structural controls on the output, ensuring, for instance, that the model always returns valid JSON conforming to a specific corporate schema, and blocking the pipeline otherwise.</p>
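<p>The idea of a structural output control can be sketched without any framework at all. The required keys below stand in for a hypothetical corporate schema, and Guardrails AI itself offers far richer validators than this stdlib sketch:</p>

```python
import json

# Hypothetical corporate schema: every generated item must carry these keys.
REQUIRED_KEYS = {"title", "summary", "tags"}


def validate_output(raw: str) -> dict:
    """Reject the pipeline step unless the model returned schema-conformant JSON."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"model output is not valid JSON: {exc}")
    if not isinstance(data, dict):
        raise ValueError("model output is not a JSON object")
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"model output missing keys: {sorted(missing)}")
    return data
```

<p>In a real pipeline, the raised <code>ValueError</code> would trigger the corrector: retry the generation, or fall back to a safe default.</p>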
<p>In the case of highly confidential data, sending information to a public LLM, however protected by filters, might not be acceptable. In these scenarios, the best architectural strategy involves adopting specialized Small Language Models (SLMs). These models, which are lighter and focused on specific tasks, can be run entirely within the company&rsquo;s infrastructure.</p>
<p>Hosting an <strong>SLM</strong> on a proprietary Kubernetes cluster ensures that sensitive data never leaves the company&rsquo;s network perimeter. To effectively manage this infrastructural complexity, it is essential to adopt modern provisioning practices. In this regard, we suggest exploring the benefits of Infrastructure as Code in Cloud Native development, an indispensable approach to automate and scale the deployments of on-premise AI models in a secure and reproducible way.</p>
<p><strong>Choosing the correct approach</strong> depends on several factors, primarily the company&rsquo;s compliance requirements, the available budget, and the skills of the team of engineers and architects. There is no one-size-fits-all solution, but rather a range of options that can be combined to create a layered defense architecture.</p>
<h2 id="sparkfabriks-contribution-to-the-drupal-ai-initiative">SparkFabrik&rsquo;s contribution to the Drupal AI Initiative</h2>
<p>SparkFabrik actively contributes to the <strong>Drupal AI Initiative</strong> by developing core components for integrating enterprise-grade artificial intelligence into the CMS. Our approach combines Platform Engineering practices with application development, ensuring Cloud Native solutions that are scalable, secure by design, and ready for complex multi-cloud environments.</p>
<p>As a Kubernetes Certified Service Provider (KCSP) and an active member of the Cloud Native Computing Foundation (CNCF), our vision goes beyond the simple implementation of features. We believe that AI adoption must be based on solid infrastructural foundations, where observability, software supply chain security, and the operational resilience of services are guaranteed from the earliest design phases.</p>
<p><strong>Our commitment to Open Source translates into concrete contributions</strong> to Drupal&rsquo;s source code, as discussed in our report on DrupalCon Vienna 2025. Developers on our team, such as Luca Lusso and Roberto Peruzzo, work daily to extend the capabilities of the AI module, introducing advanced features that respond to the real needs of the enterprise market. To discover in detail the technical innovations we have introduced, we invite you to read the article on <a href="/en/blog/drupal-ai-contributions-2025/">how we shaped the future of Drupal AI in 2025</a>.</p>
<p><a href="https://www.youtube.com/watch?v=-d9mPU1Ghoc&amp;t=2326s"><img src="/images/blog/guardrails-ai-in-drupal-agenti-e-gestione-avanzata/inline-7.webp" alt="Reranking, a method for optimizing vector searches by sorting results by relevance"></a></p>
<p class="video-timestamp"><a href="https://www.youtube.com/watch?v=-d9mPU1Ghoc&t=2326s">▶ Watch this segment in the video</a></p>
<p>Our architectural vision considers the CMS no longer as an isolated monolith, but as an intelligent hub within a distributed ecosystem. When we implement reranking features to improve vector searches, or develop orchestration systems for autonomous agents, we do so thinking about how these processes will behave under stress in a containerized production environment.</p>
<p>This holistic approach allows us to support companies in creating Internal Developer Platforms (IDPs) where artificial intelligence is integrated natively and securely. The ultimate goal is not just to provide a smarter CMS, but to equip IT teams with governable tools that accelerate time-to-market without ever compromising the stability and security of the corporate infrastructure.</p>
<h2 id="conclusions-and-next-steps">Conclusions and next steps</h2>
<p>Implementing AI guardrails represents a crucial juncture for the technological evolution of digital platforms. These security barriers should not be interpreted as a brake on innovation, but rather as the technical and architectural prerequisite that makes artificial intelligence effectively usable in mission-critical and highly regulated contexts.</p>
<p>The future roadmap for the Drupal ecosystem envisions even more advanced developments. The next steps will focus on the automated and intelligent generation of entire pages, based on complex and contextualized prompts. Furthermore, context management will become increasingly sophisticated, allowing autonomous agents to fully understand brand guidelines and the semantic structure of the site before proposing or executing any content modifications.</p>
<p>For Tech Leads, DevOps Engineers, and CTOs, the time to act is now. Integrating AI into your business processes requires careful architectural planning and specific skills in Platform Engineering.</p>
<p>Discover how SparkFabrik can help you implement enterprise-grade AI guardrails in your Drupal platform. Contact our team of certified architects for a personalized consultation.</p>
<div class="hs-cta-embed hs-cta-simple-placeholder hs-cta-embed-189639856783"
  style="max-width:100%; max-height:100%;" data-hubspot-wrapper-cta-id="189639856783">
  <a href="https://cta-service-cms2.hubspot.com/web-interactives/public/v1/track/redirect?encryptedPayload=AVxigLKPyxJ6hw2R%2B%2FlEoBdmgbjBHRaJHxqLXjCN1EvRMIM%2BGYAE%2FuNOOqwh3I1KedIFT%2BoDLABLbY0gCGvPC3lFg2UyEefC%2FrU%2BFDjFU4Lk2V8Teg0mF%2BzhA9hM%2BAAFIWdGIlKSqReoFWs%2FKA1zoZue0QtRy%2BtzDL1LSif6HHda5Tmg0meN0ICg1oe7rxltAB9tW3RJVjgI0SfJew%3D%3D&webInteractiveContentId=189639856783&portalId=6897318" target="_blank" rel="noopener" crossorigin="anonymous">
    <img alt="Custom AI Development. Customized artificial intelligence solutions." loading="lazy" src="https://no-cache.hubspot.com/cta/default/6897318/interactive-189639856783.png" style="height: 100%; width: 100%; object-fit: fill"
      onerror="this.style.display='none'" />
  </a>
</div>
<hr>
<h2 id="fonte-video">Video source</h2>
<p>This article is based on the video &ldquo;Guardrails e altre novità dal mondo Drupal AI&rdquo;.</p>
<div class="video-embed" style="position:relative; padding-bottom:56.25%; height:0; overflow:hidden; margin:1.5rem 0; border-radius:12px;"><iframe src="https://www.youtube.com/embed/-d9mPU1Ghoc" style="position:absolute; top:0; left:0; width:100%; height:100%;" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></div>
]]></content:encoded><media:content url="https://www.sparkfabrik.com/images/blog/guardrails-ai-in-drupal-agenti-e-gestione-avanzata/featured-en.webp" medium="image"/><enclosure url="https://www.sparkfabrik.com/images/blog/guardrails-ai-in-drupal-agenti-e-gestione-avanzata/featured-en.webp" type="image/jpeg"/><category>AI</category><category>Drupal</category></item><item><title>Guides</title><link/><pubDate>Thu, 05 Mar 2026 00:00:00 +0000</pubDate><guid/><description/><content:encoded></content:encoded></item><item><title>How to Choose the Right Cloud Provider for Kubernetes</title><link>https://www.sparkfabrik.com/en/blog/choosing-kubernetes-cloud-provider/</link><pubDate>Thu, 05 Mar 2026 00:00:00 +0000</pubDate><author>SparkFabrik Team</author><guid>https://www.sparkfabrik.com/en/blog/choosing-kubernetes-cloud-provider/</guid><description>A technical comparison of GKE, EKS, and AKS to choose the best Kubernetes cloud provider. Explore architectures, integrations, and costs.</description><content:encoded><![CDATA[<div class="tldr">
  <span class="tldr__label">TL;DR</span>
  <div class="tldr__body">
    Choosing a cloud provider for Kubernetes directly impacts workload performance, costs, and operations. This guide analyzes the technical integration between Kubernetes and cloud infrastructure, compares GKE, EKS, and AKS across networking, storage, security, and pricing, and covers alternatives like DigitalOcean and OVHcloud.
  </div>
</div>
<p>In a landscape where <strong><a href="/en/blog/guides/kubernetes-guida-completa-orchestrazione-container/">Kubernetes</a></strong> has become the de facto standard for container orchestration, the choice of cloud provider directly impacts how your workloads are executed and managed. In this guide, we analyze the technical aspects that truly matter when discussing <strong>Kubernetes cloud providers</strong>, from infrastructure integration to a head-to-head comparison of <strong><a href="https://cloud.google.com/kubernetes-engine">GKE</a></strong>, <strong><a href="https://aws.amazon.com/eks/">EKS</a></strong>, and <strong><a href="https://azure.microsoft.com/en-us/products/kubernetes-service">AKS</a></strong>.</p>
<h2 id="how-kubernetes-integrates-with-cloud-infrastructure">How Kubernetes Integrates with Cloud Infrastructure</h2>
<p>When it comes to <strong>choosing a cloud provider for Kubernetes</strong>, it is essential to understand that the provider is not simply &ldquo;the place where you install the cluster.&rdquo; It is a deep, bidirectional integration between Kubernetes and the underlying infrastructure, allowing the cluster to natively leverage services such as load balancers, persistent storage, virtual nodes, and cloud-native networking. The key component is the <strong>Cloud Controller Manager (CCM)</strong>, which acts as a bridge between the Kubernetes API and the provider&rsquo;s APIs. It translates, for example, a Service LoadBalancer into a concrete cloud resource (e.g., AWS ELB, Azure LB, GCP CLB).</p>
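<p>Concretely, this manifest is all a developer declares; the CCM watches Services of this type and provisions the matching provider-specific load balancer behind the scenes:</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-frontend
spec:
  type: LoadBalancer   # the CCM translates this into a cloud LB (ELB, Azure LB, ...)
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```

<p>The manifest itself is fully portable; only the CCM's translation of it is cloud-specific.</p>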
<p>Historically, integrations were <strong>in-tree</strong>: cloud-specific code included in the Kubernetes core. This approach created maintenance issues, feature delays, and integration barriers for new providers. Since 2017, Kubernetes has moved toward an <strong>out-of-tree</strong> model (the current standard): the CCM is an external component, maintained by the provider or the community, and deployed separately.</p>
<p>With Kubernetes v1.31 (2024), the old in-tree implementation code was completely removed, and today the <em>--cloud-provider=external</em> flag is mandatory for any provider (except for bare metal/on-prem clusters or those without cloud integrations).</p>
<p><strong>The main advantages of the out-of-tree approach are:</strong></p>
<ul>
<li>CCM updates independent of the Kubernetes release cycle</li>
<li>Faster innovation (new services, quick fixes)</li>
<li>A lighter, vendor-agnostic core</li>
<li>Reduced platform lock-in</li>
</ul>
<p>In parallel with this Cloud Controller Manager evolution, Kubernetes has standardized other critical interfaces to further separate the core from vendor-specific implementations:</p>
<ul>
<li><strong>CSI (Container Storage Interface)</strong>: external storage drivers (replacing the old in-tree plugins)</li>
<li><strong>CNI (Container Network Interface)</strong>: networking plugins (e.g., Amazon VPC CNI, Azure CNI, Cilium)</li>
</ul>
<p><strong>In summary</strong>: before comparing GKE, EKS, and AKS, it makes sense to understand how much each provider invests in the quality of these integrations.</p>
<p>Therefore, it is important <strong>when evaluating a cloud provider for a production-grade Kubernetes workload</strong> to look beyond the high-level &ldquo;managed&rdquo; features. You need to verify the maturity and update frequency of their <strong>Cloud Controller Manager</strong>, the quality and performance of the <strong>CSI driver</strong>, and the efficiency of the <strong>CNI implementation</strong>. These are the building blocks that determine how smoothly Kubernetes &ldquo;perceives&rdquo; and leverages the underlying infrastructure, directly affecting reliability, costs, and scaling capabilities.</p>
<p>Before moving on, we recommend a resource if you are evaluating <strong>how to introduce K8s in your organization in a structured way</strong>. In our <strong><a href="https://landing.sparkfabrik.com/it/guida-all-adozione-di-kubernetes?hsLang=it-it">Guide to Kubernetes Adoption</a></strong> you can explore requirements, roadmaps, and mistakes to avoid: download the free ebook and use the checklist to plan your adoption journey.</p>
<h2 id="architectures-compared-control-plane-and-node-management">Architectures Compared: Control Plane and Node Management</h2>
<p>Now that we have clarified how Kubernetes communicates with the cloud, the next step is to <strong>understand &ldquo;who does what&rdquo; between you and the provider</strong> in cluster management. When choosing a cloud provider for Kubernetes, the control plane architecture and worker node management determine the level of abstraction, operational overhead, and flexibility. Here is a direct comparison among the major hyperscalers: <strong>GKE (Google), EKS (AWS), and AKS (Azure).</strong></p>
<h3 id="control-plane-level-of-abstraction">Control Plane: Level of Abstraction</h3>
<ul>
<li><strong>GKE Autopilot:</strong> Fully managed and abstracted. Google entirely controls the control plane and nodes. No visibility or direct management of underlying nodes. Ideal for workload focus (extended SLA on control plane + compute).</li>
<li><strong>GKE Standard</strong>: Control plane managed by Google (multi-zone HA), but worker nodes managed by the customer (provisioning, manual scaling or with autoscaler).</li>
<li><strong>EKS</strong>: Control plane always managed by AWS (multi-AZ, self-healing, endpoint via NLB). Native high availability, but no fully &ldquo;serverless&rdquo; mode for nodes (options: managed node groups, self-managed, Fargate, EKS Auto Mode for advanced automation).</li>
<li><strong>AKS</strong>: Control plane managed by Azure (free, HA). From 2025, AKS Automatic introduces a fully-managed mode similar to Autopilot, with dynamic autoscaling via Karpenter-style node provisioning and automatic patching.</li>
</ul>
<p>The result of this comparison highlights how <strong>Autopilot / AKS Automatic drastically reduce operational toil</strong>, but limit the level of possible customization. <strong>EKS and GKE Standard offer more control, at the cost of greater management overhead.</strong></p>
<h3 id="worker-node-lifecycle-management">Worker Node Lifecycle Management</h3>
<p>This is a topic that deeply resonates with operations teams: <strong>how much manual work is required to keep the node fleet healthy</strong>? Let&rsquo;s compare.</p>
<table>
<thead>
<tr>
<th>Provider</th>
<th>Provisioning</th>
<th>Updates &amp; Patching</th>
<th>Self-healing &amp; Scaling</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>GKE Autopilot</strong></td>
<td>Automatic (based on pod requests)</td>
<td>Automatic (release channel)</td>
<td>Pod-driven auto-scaling, optimized bin-packing</td>
</tr>
<tr>
<td><strong>GKE Standard</strong></td>
<td>Manual or node pool autoscaler</td>
<td>Automatic or managed (surge upgrades)</td>
<td>Cluster Autoscaler + node pool autoscaler</td>
</tr>
<tr>
<td><strong>EKS</strong></td>
<td>Managed node groups, self-managed, Fargate, Auto Mode</td>
<td>Managed: automatic; Self: manual</td>
<td>Cluster Autoscaler + Karpenter (recommended)</td>
</tr>
<tr>
<td><strong>AKS</strong></td>
<td>User node pools or Automatic (Karpenter-style)</td>
<td>Automatic (orchestrated upgrade)</td>
<td>Cluster Autoscaler + VMSS; Automatic: dynamic</td>
</tr>
</tbody>
</table>
<h3 id="node-customization-options">Node Customization Options</h3>
<p>Let&rsquo;s continue the comparison by analyzing the <strong>level of node customization</strong> offered by each provider.</p>
<ul>
<li><strong>GKE Autopilot:</strong> Very limited customization. No direct choice of instance type, OS, or custom GPU: resources are requested via the pod spec and Google optimizes placement. Supports specialized chips (GPU/TPU) on request.</li>
<li><strong>GKE Standard</strong>: High customization: custom VM types (n2, c3, tau), OS (Container-Optimized OS), taints/labels, preemptible/spot, custom machine types.</li>
<li><strong>EKS:</strong> Very high customization. Granular control through EC2 instance types, custom AMIs (Bottlerocket, AL2), Launch Templates, spot instances, GPU/ARM, Fargate for serverless approaches.</li>
<li><strong>AKS</strong>: High customization: VM sizes (Standard_D, Fsv2, etc.), Azure Linux/Windows, spot/low-priority, custom images, GPU/InfiniBand.</li>
</ul>
<p>In practice, <strong>the higher the abstraction level</strong> (Autopilot/AKS Automatic), <strong>the more operational responsibility you offload to the provider</strong>, giving up some fine-tuning levers for optimization. The &ldquo;<strong>sweet spot</strong>&rdquo; depends on how much you want to standardize and how much you are willing to invest in internal management.</p>
<p><strong>READ ALSO</strong></p>
<ul>
<li><a href="/en/blog/kubernetes-architecture-guida-ai-componenti/">Kubernetes architecture: a guide to the main components</a></li>
<li><a href="/en/blog/kubernetes-operator-cosa-sono/">Kubernetes operators: what they are and examples</a></li>
</ul>
<h2 id="networking-integration-service-ingress-and-cni-compared">Networking Integration: Service, Ingress, and CNI Compared</h2>
<p>Once you have chosen the control plane and node model, the next topic is <strong>how traffic enters, exits (Service, Ingress)</strong> and moves within the cluster (CNI). Networking integration is another fundamental element when choosing a <strong>Kubernetes cloud provider</strong>: it impacts latency, costs, scalability, and IP management.</p>
<p>To compare GKE, EKS, and AKS, it makes sense to look at three distinct but interconnected layers:</p>
<ol>
<li>How you expose Services (LoadBalancer)</li>
<li>How you manage HTTP/HTTPS routing (Ingress)</li>
<li>Which CNI governs internal traffic between pods</li>
</ol>
<h3 id="1-loadbalancer-service-implementation">1. LoadBalancer Service Implementation</h3>
<p>The <strong>Service controller</strong> (part of the external CCM) creates a cloud load balancer when a Service of <em>type: LoadBalancer</em> is defined. The question here is: what type of LB do you get &ldquo;out of the box&rdquo;, and what are the implications for performance and costs?</p>
<ul>
<li><strong>GKE</strong>: Uses a <strong>Passthrough Network Load Balancer</strong> (external/internal) operating at Layer 4 (TCP/UDP). The &ldquo;passthrough&rdquo; aspect matters because it preserves the client&rsquo;s original IP address. It also supports subsetting, which improves backend scalability by sending traffic only to nodes that actually host active pods for that service, instead of to every node in the cluster.</li>
<li><strong>EKS</strong>: Here, the <strong>AWS Load Balancer Controller</strong> acts as the default controller for LoadBalancer-type services and, when such a service is created, automatically provisions a Network Load Balancer (NLB) (Layer 4, high throughput, static IPs, UDP/TCP). ALB for Ingress (Layer 7) and other load balancers are also supported (Gateway LB, and the now-legacy Classic LB).</li>
<li><strong>AKS</strong>: Creates an <strong>Azure Standard Load Balancer</strong> by default (Layer 4) to handle both public and private traffic. It also automatically manages outbound cluster traffic (outbound type LoadBalancer for egress). A dedicated annotation allows creating an internal LB. An important feature is backend flexibility. The backend pool (the set of traffic recipients) can be configured with nodeIP (sending to nodes hosting the Pods) or podIP (sending directly to individual Pods, eliminating intermediate network hops in Azure and maximizing performance).</li>
</ul>
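<p>In practice, the manifest itself is portable and the provider differences surface mainly through annotations. A minimal sketch, using the AKS internal-LB annotation as an example (GKE and EKS have their own annotation keys for the same purpose):</p>
<pre><code class="language-yaml">apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    # AKS: provision an internal (private) Standard Load Balancer
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
</code></pre>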
<p><strong>GKE and EKS prioritize passthrough/performance, while AKS offers simpler integration with Azure networking (NSG, outbound rules)</strong>. In general, NLB/ALB on AWS tend to be more expensive for L7 traffic than Azure, though Azure offers a more limited feature set overall. If your workloads are heavily exposed externally or handle high traffic volumes, this mix of LB type, features, and pricing becomes an important selection criterion.</p>
<h3 id="2-recommended-or-integrated-ingress-solutions">2. Recommended or Integrated Ingress Solutions</h3>
<p>Ingress manages HTTP/HTTPS routing (Layer 7). Here, not only load balancing and request routing come into play, but also security topics such as SSL/TLS certificates, WAF firewalls, and integration with the provider&rsquo;s security and observability services.</p>
<ul>
<li><strong>GKE</strong>: The <strong>default Ingress controller</strong> creates a <strong>Google Cloud Application Load Balancer</strong> (classic or internal) to manage traffic. It supports container-native LBs via NEGs (communicating directly with individual containers without intermediate hops to maximize speed). GKE was the first to natively support the new Gateway API for even more precise traffic management (advanced routing and easy multi-cluster traffic management). It is a recommended solution for global HTTP workloads.</li>
<li><strong>EKS</strong>: The <strong>AWS Load Balancer Controller creates an</strong> Application Load Balancer (ALB) for Ingress. It manages traffic based on address or path (path/host routing). The main advantage is the deep, turnkey integration with AWS services: it uses WAF for security, ACM TLS for SSL certificates, and CloudWatch for logs. Alternative: NGINX/Traefik, but ALB is native and the preferred option for AWS service integration.</li>
<li><strong>AKS</strong>: Offers the <strong>Application routing add-on</strong>, which is the recommended method because it is simple and integrated. For advanced security needs, it also offers the <strong>Application Gateway Ingress Controller (AGIC)</strong> for <strong>Azure Application Gateway</strong> (which includes WAF, SSL offload, path-based routing). Gateway API is supported.</li>
</ul>
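<p>To make the Ingress layer concrete, here is a minimal sketch for EKS with the AWS Load Balancer Controller (the hostname and service name are illustrative; on GKE or AKS you would reference their respective Ingress classes instead):</p>
<pre><code class="language-yaml">apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  annotations:
    # AWS Load Balancer Controller: public ALB, targets registered by pod IP
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
</code></pre>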
<p>The components natively provided by the providers (ALB, App Gateway, Google ALB) reduce management overhead and offer built-in security features (WAF), while open-source controllers (e.g., NGINX) trade those managed integrations for portability and operational consistency across clouds. The choice, therefore, is between deeper integration with the cloud ecosystem (managed LBs) and greater portability (open-source Ingress controllers). It is also worth noting that many Ingress solutions are evolving toward the Kubernetes Gateway API, which offers finer-grained control than classic Ingress.</p>
<h3 id="3-default-cni-plugins-and-implications">3. Default CNI Plugins and Implications</h3>
<p>The <strong>CNI</strong> manages pod networking, IP allocation, and policies. Unlike Service and Ingress, which deal with traffic &ldquo;toward&rdquo; the cluster, here we focus on internal traffic between pods, IP address assignment to Pods, and the rules for them to communicate with each other.</p>
<table>
<thead>
<tr>
<th>Provider</th>
<th>Default CNI</th>
<th>IP Management</th>
<th>Performance &amp; Implications</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>GKE</strong></td>
<td><strong>Dataplane V2</strong> (eBPF/Cilium-based)</td>
<td>VPC-native: pod IP from VPC range, routable</td>
<td>High (eBPF bypasses iptables), native and more integrated network policy, native logging, IPv6/dual-stack, multi-network</td>
</tr>
<tr>
<td><strong>EKS</strong></td>
<td><strong>Amazon VPC CNI</strong></td>
<td>Direct pod IP from VPC/ENI (prefix delegation for scaling)</td>
<td>Good throughput, security group per pod; ENI/prefix delegation limits</td>
</tr>
<tr>
<td><strong>AKS</strong></td>
<td><strong>Azure CNI</strong> (advanced)</td>
<td>Pod IP from VNet (flat or overlay)</td>
<td>Good, NSG per pod; kubenet fallback (overlay)</td>
</tr>
</tbody>
</table>
<p>For networking, a reasonable default is the provider&rsquo;s native CNI for simplicity and integration, with an alternative such as Cilium when you need advanced observability, a pure eBPF data plane, or multi-cloud consistency. This choice directly affects IP scalability, operational complexity, and internal traffic visibility: three variables to balance based on the type of workloads you need to manage and the level of control you want to maintain over the data plane.</p>
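<p>Whichever CNI you pick, pod-to-pod rules are expressed with the standard NetworkPolicy resource; enforcement is the CNI&rsquo;s job. A minimal sketch (labels and namespace are illustrative):</p>
<pre><code class="language-yaml">apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: prod
spec:
  podSelector:
    matchLabels:
      app: api            # the policy applies to the api pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080
</code></pre>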
<h2 id="persistent-storage-csi-driver-analysis-and-performance">Persistent Storage: CSI Driver Analysis and Performance</h2>
<p>Networking aside, the real litmus test for many Kubernetes clusters comes <strong>when databases, queues, and stateful components enter the picture</strong>. Persistent storage is a critical pillar when choosing a cloud provider for Kubernetes: it influences latency, throughput, and IOPS (speed), costs, and reliability for services such as databases, stateful apps, and AI/ML workloads.</p>
<p>All major providers use mature CSI (Container Storage Interface) drivers for dynamic provisioning of block volumes mountable by a single node (ReadWriteOnce) and shared file systems accessible from multiple nodes (ReadWriteMany), completely replacing the old in-tree plugins.</p>
<p>The <strong>CSI driver</strong> allows automatic creation of PersistentVolumes from a PersistentVolumeClaim (PVC), managing attach/detach, expansion, snapshots, and reclaim. In other words, the CSI driver automates the disk lifecycle: when an app requests space (via a PVC), the driver creates, attaches, and manages the volume without manual intervention, and also supports resizing and backup functions. To choose disk speed, you simply select a <strong>StorageClass</strong>. Each provider offers predefined or custom StorageClasses across different performance tiers, suited to different use cases.</p>
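<p>As a sketch of how this looks in practice (EKS shown; the commented provisioners are the GKE and AKS equivalents, and the gp3 parameter values are illustrative):</p>
<pre><code class="language-yaml">apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-gp3
provisioner: ebs.csi.aws.com   # GKE: pd.csi.storage.gke.io, AKS: disk.csi.azure.com
parameters:
  type: gp3
  iops: "6000"                 # gp3: IOPS provisioned independently of disk size
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast-gp3
  resources:
    requests:
      storage: 100Gi
</code></pre>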
<p>Below is a direct comparison table of block (disk) and file options, with major tiers, indicative performance, and typical use cases.</p>
<table>
<thead>
<tr>
<th>Provider</th>
<th>Block Storage (CSI Driver)</th>
<th>File Storage (CSI Driver)</th>
<th>Main Tiers &amp; Performance</th>
<th>Typical Use Cases</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>GKE</strong></td>
<td>Compute Engine Persistent Disk CSI</td>
<td>Filestore CSI / Managed Lustre CSI</td>
<td>- <strong>pd-balanced</strong> (SSD): baseline 6-30k IOPS, 240-1200 MiB/s - <strong>pd-ssd</strong> (Premium): up to 120k IOPS, 2.4 GB/s - <strong>Hyperdisk Balanced/Extreme</strong>: independent IOPS/throughput (e.g., 300k+ IOPS, 4.8+ GB/s)</td>
<td>Databases, AI/ML training, HPC (Hyperdisk)</td>
</tr>
<tr>
<td><strong>EKS</strong></td>
<td>Amazon EBS CSI</td>
<td>Amazon EFS CSI</td>
<td>- <strong>gp3</strong> (default): baseline 3k IOPS / 125 MiB/s, provisionable up to 16k IOPS / 1 GB/s - <strong>io2/io2 Block Express</strong>: up to 256k IOPS, 4 GB/s - io1 (legacy)</td>
<td>Transactional databases, log-heavy, general-purpose</td>
</tr>
<tr>
<td><strong>AKS</strong></td>
<td>Azure Disk CSI</td>
<td>Azure Files CSI</td>
<td>- <strong>Premium SSD v2</strong>: independent IOPS/throughput (up to 80k+ IOPS, 1.2+ GB/s) - <strong>Ultra Disk</strong>: up to 160k+ IOPS, 2 GB/s (provisioned) - <strong>Premium SSD</strong>: up to 20k-80k IOPS (bursting)</td>
<td>Mission-critical databases, high-IOPS apps</td>
</tr>
</tbody>
</table>
<p>Here, the selection criterion is not just &ldquo;who offers more IOPS,&rdquo; but also how your real workloads map to the different tiers: OLTP databases, log management, AI/ML &ndash; each has very different access patterns that can reveal significant advantages of one provider over another.</p>
<p><strong>READ ALSO</strong></p>
<ul>
<li><a href="/en/blog/scegliere-il-cloud-provider-confronto-aws-azure-gcp-alibaba/">Choosing a cloud provider: comparing AWS, Azure, GCP, and Alibaba</a></li>
</ul>
<h2 id="beyond-orchestration-cloud-native-services-and-added-value">Beyond Orchestration: Cloud-Native Services and Added Value</h2>
<p>So far, we have focused on &ldquo;<strong>core Kubernetes</strong>&rdquo;: control plane, nodes, networking, storage. But in day-to-day practice, much of the <strong>value of a Kubernetes cloud provider comes from the services surrounding it.</strong> Beyond Kubernetes cluster management, a cloud provider&rsquo;s true added value emerges from <strong>native integration</strong> with the managed services in its ecosystem, which reduce operational complexity and accelerate the adoption of enterprise best practices. Each hyperscaler enriches Kubernetes with an entire integrated ecosystem across three key areas:</p>
<ul>
<li><strong>Identity and Security</strong>: Integration with identity systems enables federated workload identities and least-privilege access without static keys. These are cornerstone concepts of modern security: instead of &ldquo;injecting&rdquo; secrets and passwords into pods, the pod itself becomes a cloud IAM identity and, through its identity, can access other resources (databases, buckets, etc.) without additional credentials. Each hyperscaler offers its own system: AWS IRSA (IAM Roles for Service Accounts), Azure Workload Identity with Microsoft Entra ID, Google Workload Identity Federation with Google IAM. This eliminates the risk of credential leakage and simplifies cross-service RBAC permissions.</li>
<li><strong>Observability</strong>: Native logging and monitoring are ready to use and significantly reduce setup effort. Amazon CloudWatch Container Insights collects metrics, logs, and traces from pods and nodes; Azure Monitor with Container Insights offers Kube-state, Prometheus scraping, and Log Analytics; Google Cloud Operations Suite (formerly Stackdriver) provides structured logging, Prometheus metrics, and distributed tracing with native OpenTelemetry. All support centralized alerting and ready-to-use dashboards. In short, each supports monitoring optimized for its own ecosystem, with pre-configured dashboards and alerts, without needing to install external systems.</li>
<li><strong>Service Mesh &amp; Advanced Networking</strong>: To manage internal (east-west) traffic securely and observably, providers offer managed or plug-and-play solutions: Google Anthos Service Mesh (Istio-based, with integrated policy and telemetry), AWS App Mesh (serverless-friendly, with X-Ray tracing), Azure Service Mesh (Istio-based in the AKS add-on). These reduce the need to manage Istio/Linkerd from scratch (which is complex), providing automatic inter-service encryption (mTLS), facilitating gradual releases (canary rollouts), and offering out-of-the-box observability.</li>
</ul>
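<p>To make the identity point tangible: with EKS IRSA, the binding is a single annotation on the ServiceAccount (the account ID and role name below are hypothetical):</p>
<pre><code class="language-yaml">apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
  namespace: prod
  annotations:
    # Pods running as this ServiceAccount assume the IAM role via OIDC federation,
    # with no static credentials injected into the pod
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/app-s3-reader
</code></pre>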
<p>From a design perspective, these services often carry as much weight (if not more) as the pure Kubernetes differences: choosing a provider also means choosing its ecosystem &ldquo;around the cluster.&rdquo;</p>
<p><strong>READ ALSO</strong></p>
<ul>
<li><a href="/en/blog/errori-comuni-kubernetes/">3 mistakes to avoid when adopting Kubernetes</a></li>
<li><a href="/en/blog/container-e-kubernetes-aziende-che-li-usano-con-successo/">Containers and Kubernetes: 3 companies using them successfully</a></li>
</ul>
<h2 id="decision-framework-choosing-a-provider-by-cost-scenarios-and-alternatives">Decision Framework: Choosing a Provider by Cost, Scenarios, and Alternatives</h2>
<p>At this point, the picture is clear but complex: each provider has areas of excellence and trade-offs. To navigate this, you need to connect technical characteristics to concrete scenarios. The choice of a <strong>Kubernetes cloud provider</strong> depends on clear priorities and several important factors: time-to-market, operational control, ecosystem integration, total costs, and potentially regulatory constraints.</p>
<ul>
<li><strong>Google Kubernetes Engine (GKE)</strong>: Best for hybrid/multicloud ecosystems, AI/ML, and global workloads. Excels in simplicity (especially Autopilot), performant networking (Dataplane V2), flexible storage (Hyperdisk), and Anthos for hybrid/on-prem. Ideal when rapid time-to-market and Google innovation (TPU, BigQuery federation) are needed.</li>
<li><strong>Amazon EKS</strong>: Best for maximum configurability and deep AWS integration, creating synergy with other AWS services and reducing operational overhead when you are in the Amazon ecosystem. Wins on complex enterprise workloads, intensive use of EC2 spot, Fargate serverless, App Mesh, and IRSA. The natural choice if your organization is already heavily invested in AWS. A free tier for the control plane has been introduced for small clusters under 50 nodes, an attractive option for small businesses.</li>
<li><strong>Azure Kubernetes Service (AKS)</strong>: Best for Microsoft-centric enterprises and competitive storage/networking costs. Strong on Entra ID integration, Azure Monitor, Application Gateway, and .NET workloads and beyond. Great for those seeking a balance between simplicity and control. Often an economically viable option for generalist workloads under 100 nodes thanks to the free control plane and competitive networking/storage costs.</li>
</ul>
<h3 id="cost-models-beyond-the-per-node-price">Cost Models (Beyond the Per-Node Price)</h3>
<ul>
<li><strong>Control plane</strong>: Free on AKS in the free tier, but variable cluster management costs in higher tiers and other SKUs. GKE/EKS ~$73/month (typically an hourly fee per cluster). From 2026, EKS has also introduced a free tier for small clusters (under 50 nodes).</li>
<li><strong>Egress</strong>: Often the dominant cost item, depending on region and tier. AWS is more expensive (~$0.09/GB), Azure/Google are similar.</li>
<li><strong>Load Balancer</strong>: Costs per LB instance; on AWS, NLB/ALB cumulative costs are high &ndash; with many microservices using individual load balancers, costs explode (&ldquo;NLB proliferation&rdquo;). On Azure/Google, costs are more predictable; a single IP address is often used for multiple services, consolidating costs.</li>
<li><strong>Storage</strong>: AWS EBS gp3 is the benchmark for price/performance, and is economical compared to competitors especially for small but fast disks, because it allows configuring IOPS and throughput independently of disk size. Google Persistent Disk and Azure Managed Disk are priced similarly, but buying extra IOPS costs more there because performance is often tied to disk size.</li>
</ul>
<p>Under these aspects, AKS wins on multiple clusters, EKS has a pricing model that can lead to cost blow-ups on LB/egress, and GKE Autopilot greatly simplifies management but costs more.</p>
<h3 id="viable-alternatives">Viable Alternatives</h3>
<ul>
<li><strong>DigitalOcean Kubernetes</strong>: Suitable for startups and limited budgets: simple pricing, free control plane, one of the simplest interfaces on the market. A solid alternative for those who want to run apps without a degree in AWS networking.</li>
<li><strong>OVHcloud</strong>: EU data sovereignty, full GDPR compliance, protection from extra-EU regulations. A major advantage: zero egress costs (outbound data traffic) for most cases in Europe.</li>
<li><strong>Others (Linode, Scaleway, Hetzner)</strong> for low-cost scenarios and EU presence (bare metal, low-power ARM instances).</li>
</ul>
<p>Ultimately, there is no &ldquo;<strong>best Kubernetes cloud provider</strong>&rdquo; in absolute terms: there is the provider <strong>best suited to your technical context</strong>, your cost constraints, and your product roadmaps. The choice involves a combined reading of cluster architecture, infrastructure integration, managed services, and pricing models.</p>
<p>If you want to evaluate in a structured way which direction to take &ndash; or how to design a portable Kubernetes architecture across multiple clouds &ndash; <strong>SparkFabrik can support you from the analysis phase through to production</strong>, helping you transform these technical variables into solid strategic decisions. Take a look at our <strong><a href="https://www.sparkfabrik.com/en/services/cloud-native-services/kubernetes-consultancy/">Kubernetes Consultancy</a></strong> service and <strong><a href="https://www.sparkfabrik.com/en/contacts/">contact us</a></strong>.</p>]]></content:encoded><media:content url="https://www.sparkfabrik.com/images/blog/kubernetes-cloud-provider-guida-alla-scelta/featured.jpg" medium="image"/><enclosure url="https://www.sparkfabrik.com/images/blog/kubernetes-cloud-provider-guida-alla-scelta/featured.jpg" type="image/jpeg"/><category>Cloud Native</category><category>Cloud Management</category></item><item><title>Spec driven development: a guide to moving beyond vibe-coding with AI</title><link>https://www.sparkfabrik.com/en/blog/spec-driven-development-guide/</link><pubDate>Fri, 27 Feb 2026 00:00:00 +0000</pubDate><author>SparkFabrik Team</author><guid>https://www.sparkfabrik.com/en/blog/spec-driven-development-guide/</guid><description>Discover spec driven development, the paradigm that turns LLMs into true allies. Learn how to guide AI with precise specifications for higher-quality code.</description><content:encoded><![CDATA[<div class="tldr">
  <span class="tldr__label">TL;DR</span>
  <div class="tldr__body">
    Spec Driven Development (SDD) moves beyond the limits of vibe-coding by making executable specifications (understandable by both humans and machines) the core of AI-powered development. This article covers the four operational phases (Specify, Plan, Tasks, Implement), key tools (Spec Kit, OpenSpec, Kiro, Tessl), and the practical challenges to watch out for.
  </div>
</div>
<p>Over the last few months, AI-powered software development has moved from early curiosity experiments to a daily practice for many teams. Tools like <strong>Copilot</strong> and other <strong>LLMs for developers</strong> let you generate code from increasingly rich prompts, but as projects grow, the clear limits of plain vibe-coding start to show. This is where <strong>Spec Driven Development (SDD)</strong> comes in: a paradigm that tries to bring order to the way you use artificial intelligence to write software.</p>
<h2 id="why-isnt-vibe-coding-enough-anymore">Why isn&rsquo;t vibe-coding enough anymore?</h2>
<p>Let&rsquo;s look at a typical scenario. You write a prompt for your AI assistant:</p>
<p><em>&ldquo;Implement a React registration form with real-time validation for email and password, error handling, a POST request to /api/register, a modern Tailwind style, state management via Zustand, and HTTP calls with Axios.&rdquo;</em></p>
<p>In a matter of seconds the LLM generates a complete component. It looks well-structured, has validations, visual feedback, even a small success toast. You integrate it, test the happy path, everything works. Commit and deploy. Then reality comes knocking, and the quality team flags an endless list of issues.</p>
<p>This is an example of AI-driven software development based entirely on gut-feel prompts (in other words, vibe-coding). The concrete limits of vibe-coding become obvious pretty quickly:</p>
<ul>
<li><strong>Imprecision and semantic ambiguity.</strong> Generic expressions are interpreted differently with every generation.</li>
<li><strong>Lack of reproducibility.</strong> The same prompt, run multiple times, produces semantically different implementations.</li>
<li><strong>Complex maintenance and refactoring</strong>, because there&rsquo;s no consistent standard for how the code is generated.</li>
<li><strong>Latent security and performance risks</strong>, for instance: client-side-only validation, unsafe parsing of untrusted inputs, and issues that go unnoticed during generation but become costly when the product scales.</li>
</ul>
<p>In short, <strong>vibe-coding</strong> has played (and still plays) an important role: it accelerated prototyping, lowered the barrier to entry, and helped many teams discover the real potential of <a href="/en/blog/ai-for-developers-the-open-source-revolution/">AI coding agents and AI at the service of developers</a>. But it&rsquo;s no longer sufficient when it comes to <strong>software that needs to operate in enterprise contexts</strong>.</p>
<p>To go beyond gut-feel prompt engineering, you need a <strong>shift in paradigm.</strong> The perspective has to change: from code driven by intuition and appearance, to <strong>code driven by explicit, verifiable, and shared specifications.</strong> That&rsquo;s exactly where Spec Driven Development fits in.</p>
<h2 id="spec-driven-development-setting-the-rules-of-the-game-for-ai">Spec driven development: setting the rules of the game for AI</h2>
<p><strong>Spec Driven Development (SDD)</strong> means treating specifications as the central and most important element of a project (the true source of truth) rather than the generated code. But this isn&rsquo;t about classic specs written in Word, PDFs, or Confluence pages that go stale the moment a release ships.</p>
<p>We&rsquo;re talking about <strong>executable specifications</strong>, built for AI-powered software development, <strong>written in a language understandable by both humans and machines</strong>, that describe the system&rsquo;s purpose, usage scenarios, and constraints in a precise and unambiguous way.</p>
<p>If you&rsquo;re familiar with <strong>Test Driven Development (TDD)</strong> and/or <strong>Behavior Driven Development (BDD)</strong>, here&rsquo;s how the analogy works. Think of SDD as TDD taken to the next level and made collaborative with AI:</p>
<ul>
<li><strong>TDD:</strong> you write the test first, then the minimum code needed to make it pass.</li>
<li><strong>BDD:</strong> you specify the expected behaviour in structured natural language.</li>
<li><strong>SDD:</strong> you do the same, but push the format even further toward the machine, making the specification the primary artefact from which everything else is derived.</li>
</ul>
<p>In other words, the developer&rsquo;s intent (your intent) becomes the real &ldquo;spec&rdquo; that governs the LLM.</p>
<h2 id="the-operational-phases-from-idea-to-guided-implementation">The operational phases: from idea to guided implementation</h2>
<p>To understand how this approach changes the way you work, it helps to map out the typical workflow of a developer adopting Spec Driven Development. The phases below give you a useful mental model.</p>
<h3 id="specify">Specify</h3>
<p>In the <strong>specification phase</strong>, the developer describes the expected behaviour of the system in structured natural language: the user journey step by step, business goals, the main use cases alongside relevant edge cases, and all non-negotiable validation rules or constraints.</p>
<p>Your work here is primarily one of analysis and intent formalisation, not code writing.</p>
<p>Starting from this narrative, the AI generates a formal, detailed specification: JSON schemas for requests and responses, validation rules with concrete examples, expected and failure cases in Given-When-Then format, business invariants, and mocks for any external services.</p>
<p>Importantly, this is not a one-shot activity. It&rsquo;s not simply a matter of the developer defining things upfront and the AI then refining and structuring them. Rather, it&rsquo;s an iterative process in which the developer works alongside the AI, defining the specifications: the <em>what</em> and the <em>why</em> of the project.</p>
<p>Equally important, specifications must be maintained over time so they don&rsquo;t become stale as the actual implementation progresses and the project naturally evolves.</p>
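<p>Spec formats vary from tool to tool, but to make the idea concrete, here is a hypothetical fragment for the registration-form example from the beginning of the article (the structure is illustrative, not a specific tool&rsquo;s schema):</p>
<pre><code class="language-yaml"># Hypothetical executable-spec fragment (illustrative format)
feature: user-registration
scenarios:
  - name: rejects malformed email
    given: the registration form is rendered
    when: the user submits "not-an-email" as the email
    then:
      - an inline validation error is shown
      - no POST request is sent to /api/register
invariants:
  - passwords are validated server-side as well as client-side
  - passwords are never logged or echoed back
</code></pre>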
<h3 id="plan">Plan</h3>
<p>In the <strong>planning phase</strong>, the developer defines the non-negotiable technical constraints: the technology stack, preferred architecture, libraries to use for validation, state management, HTTP calls and testing, plus any performance or bundle size limits.</p>
<p>The AI then produces a complete and realistic technical plan, covering the folder structure, the main components with their respective responsibilities, the data flow, the chosen libraries and the reasoning behind them, the error and loading state management strategy, and the mocks needed for testing.</p>
<p>At this stage, prompting becomes genuine prompt engineering: you&rsquo;re no longer asking &ldquo;do X for me&rdquo;, but guiding an AI agent with clear, verifiable constraints.</p>
<h3 id="breakdown--tasks">Breakdown / Tasks</h3>
<p>In the <strong>task breakdown phase</strong>, the AI takes the approved plan and splits it into atomic, ordered, and independent tasks. Each task is designed to be small, independently testable, and tied to clear acceptance criteria, often linked directly to one or more examples from the specification.</p>
<p>This step makes the work feel much closer to a classic agile backlog, but one generated and kept consistent with the spec.</p>
<h3 id="implement">Implement</h3>
<p>Finally, there&rsquo;s the <strong>implementation phase</strong>. The AI generates code one task at a time, always respecting the agreed specification and plan. The developer reads and validates small, targeted code chunks: checking that the code handles both typical cases and the expected edge cases, running local tests or previewing the result, and either approving the output or requesting targeted corrections. The cycle is fast, usually 5 to 15 minutes per task.</p>
<p>This way, the LLM becomes a collaborator you&rsquo;re in control of, not a generator of monolithic blocks that are hard to understand.</p>
<p>At this point you have a complete flow: from specification to implementation, with the AI following clear rules instead of improvising. <strong>The next step is choosing the right tools</strong> to support this way of working.</p>
<h2 id="essential-tools-from-spec-kit-to-emerging-alternatives">Essential tools: from Spec Kit to emerging alternatives</h2>
<p><a href="https://github.com/github/spec-kit"><strong>GitHub Spec Kit</strong></a> remains the open source reference point. It&rsquo;s a modular toolkit that lets you write specifications in structured markdown (with JSON schemas, Given-When-Then examples, and invariants), validate them automatically, and generate code via contextualised prompts to any LLM.</p>
<p>Its strength lies in its simplicity, transparency, and the fact that it doesn&rsquo;t lock you into a single provider: you can use it with Claude, GPT, Gemini, or local models. It&rsquo;s the ideal starting point for teams that want to experiment with SDD without vendor lock-in.</p>
<p><a href="https://openspec.dev/"><strong>OpenSpec</strong></a> is an open source framework for SDD with coding agents. What makes it stand out is how well it suits brownfield contexts with legacy code. Rather than high-level requirements, its strength is defining operational requirements for individual agents, starting from the context of an existing codebase and feeding it a single issue.</p>
<p>It&rsquo;s easy to get started with: just run <em>openspec init</em> to integrate it into a codebase, and it installs only three commands: <em>proposal</em> (proposes a new change), <em>apply</em> (implements it), <em>archive</em> (archives it and updates the specs). One interesting aspect is how it keeps current specifications (the &ldquo;source of truth&rdquo;) separate from proposed changes, in two distinct folders that are reconciled when changes are archived.</p>
<p>This keeps diffs manageable and tracked at all times, a key requirement in many contexts. It&rsquo;s also very lightweight, using only markdown files, and supports all the major coding assistants you&rsquo;re probably already using (Claude Code, GitHub Copilot, <a href="https://opencode.ai/">OpenCode</a>, Cursor, Windsurf&hellip;).</p>
<p>There are also numerous other, more niche SDD frameworks with specific capabilities. One example is <a href="https://github.com/Priivacy-ai/spec-kitty"><strong>Spec-Kitty</strong></a>, designed for those who want high orchestration capabilities (parallel execution of multiple agents without conflicts) combined with full visibility into what&rsquo;s actually happening and what each agent is working on. Its most distinctive feature is a visual dashboard that automatically tracks all progress, letting you view planned, in-progress, under review, and completed tasks in a kanban-style board, as well as see which agents are working on which task.</p>
<p><strong>In the video &ldquo;<a href="https://www.youtube.com/live/-kHCGTTFbZE?si=H3DSJ1y8i3NkXdvb&amp;t=7230">So you think you know Copilot?</a>&rdquo;</strong>, we also go into a practical deep dive showing the interaction with the AI agent, highlighting how a well-executed Spec Driven Development approach changes the way you use tools like GitHub Copilot.</p>
<p>For those looking for a more integrated experience than classic Copilot, the most interesting alternatives are <a href="https://kiro.dev/"><strong>Kiro</strong></a> and <a href="https://tessl.io/"><strong>Tessl</strong></a>. <strong>Kiro</strong> focuses on a collaborative workflow with &ldquo;constitutions&rdquo; (style and architecture rules enforced at the project level) and automatic checklists for every generation. It&rsquo;s particularly useful in large teams where consistency is critical.</p>
<p><strong>Tessl</strong>, on the other hand, represents the most radical approach: it&rsquo;s the first true <strong>spec-as-source</strong> tool. The specification is the only modifiable artefact, code is regenerated from scratch with every change, and the project&rsquo;s history lives almost entirely in the spec itself. It&rsquo;s the choice for those who want to push to the maximum the idea that code is a derived output, not the source of truth.</p>
<p>If you want to see how AI can also support the <strong>DevOps</strong> and delivery side, you can explore the topic further in <strong>this article on <a href="/en/blog/ai-devops-artificial-intelligence/">AI, DevOps and Platform Engineering</a></strong>.</p>
<h2 id="from-executor-to-orchestra-conductor-the-developers-new-role">From executor to orchestra conductor: the developer&rsquo;s new role</h2>
<p>SDD doesn&rsquo;t reduce the developer to a simple &ldquo;reviewer of AI-generated code&rdquo;. On the contrary, it elevates them to a <strong>strategic, high-value role</strong>: from someone who wrote every line, to someone who defines, orchestrates, and guarantees the quality of the entire system.</p>
<p>In the traditional model, the developer was often an executor of requirements, translating technical details and business logic into code once those had been partially formalised. With Spec Driven Development, the focus shifts decisively upward:</p>
<ul>
<li>Formalising intent and constraints with surgical precision</li>
<li>Designing sustainable and scalable architectures</li>
<li>Choosing the right tools and patterns within the context of the real product</li>
<li>Validating that generated code meets both functional and non-functional requirements</li>
<li>Reasoning about complex trade-offs that no AI can decide on its own</li>
</ul>
<p>In practice, you write far less boilerplate code, but invest significantly more in critical thinking, system design, clear communication of requirements, and deep validation skills. These are rare competencies, difficult to automate, and increasingly in demand on the market.</p>
<p>At <strong>SparkFabrik</strong>, we see this shift as a tremendous opportunity. Our mission isn&rsquo;t to replace development teams with artificial intelligence, but to <strong>help your team evolve</strong> toward roles of greater impact and value.</p>
<h2 id="concrete-benefits-when-to-adopt-a-spec-driven-approach">Concrete benefits: when to adopt a spec-driven approach</h2>
<p>Spec Driven Development comes with several concrete advantages:</p>
<ul>
<li>Higher-quality code with fewer bugs</li>
<li>Implementations that are more faithful to actual requirements</li>
<li>Much simpler maintenance, thanks to clear specifications that serve as living, always up-to-date documentation</li>
<li>Real alignment between business and development, through explicit expression of intent before generation</li>
<li>Faster throughput and a drastic reduction of AI-introduced errors, thanks to systematic validation and Human-in-the-Loop</li>
</ul>
<p>Of course, not every context benefits equally. SDD delivers its maximum value <strong>in three main scenarios.</strong></p>
<h3 id="greenfield-projects">Greenfield Projects</h3>
<p><em>Greenfield</em> refers to projects started from scratch, with no legacy constraints, pre-existing architectures, or accumulated technical debt. In this context, Spec Driven Development reaches its full potential: specifications become the true origin point of the system and guide the entire process from the very first commit.</p>
<p>The architecture is born already aligned to functional and non-functional requirements, technical decisions are traceable and motivated, and the risk of divergence between project vision and concrete implementation is drastically reduced.</p>
<p>Specifications are not just initial documentation: they become a structural backbone that reduces surprises, prevents evolutionary inconsistencies, and limits the build-up of early technical debt, creating solid foundations for the system&rsquo;s future scalability.</p>
<h3 id="brownfield-projects">Brownfield Projects</h3>
<p><em>Brownfield</em> refers to contexts where you&rsquo;re working on existing systems: legacy platforms, architectures layered over time, complex ecosystems already in production. In these scenarios, Spec Driven Development isn&rsquo;t about &ldquo;building from scratch&rdquo;, but about making explicit what is often only implicit: architectural constraints, dependencies, integration contracts, emergent behaviours, and structural limits of the system.</p>
<p>Specifications become a tool for formalising the real context, precisely defining touch points, compatibility rules, and functional boundaries. This enables the AI to generate code that is truly contextualised, reducing the risk of regressions, integration errors, and systemic inconsistencies.</p>
<p>The result is increased evolutionary throughput, greater confidence in releases, and a tangible simplification of long-term maintenance, even in highly complex software ecosystems.</p>
<h3 id="legacy-evolution">Legacy Evolution</h3>
<p>In this case, SDD makes the transition gradual and controlled: the desired behaviours of the new code are described first, reverse engineering is tackled in a structured way, specifications are redefined or updated, and obsolete portions are replaced. This way you can reduce regressions, manual overhead, and the risk of breakage, while requiring an initial investment to contextualise the existing system.</p>
<h2 id="the-challenges-of-spec-driven-development-how-to-avoid-common-pitfalls">The challenges of spec driven development: how to avoid common pitfalls</h2>
<p>Like any emerging paradigm, Spec Driven Development brings real benefits but also concrete <strong>risks</strong>. Here are the most common traps to watch out for.</p>
<h3 id="verschlimmbesserung-making-things-worse-in-an-attempt-to-make-them-better">Verschlimmbesserung: making things worse in an attempt to make them better</h3>
<p>Elaborate workflows with dozens of markdown files, checklists, and constitutions can create overhead that outweighs the benefit, turning a small fix into a heavy bureaucratic process. Or worse, the attempt to improve can end up making the initial situation worse (literally &ldquo;worsening-improvement&rdquo;).</p>
<h3 id="overly-verbose-specifications">Overly verbose specifications</h3>
<p>AI-generated specs tend to be redundant, repetitive, and tedious to review. Instead of providing clarity, they increase cognitive load: more text to read than code to write.</p>
<p>It&rsquo;s important to avoid treating SDD as the exhaustive but &ldquo;hollow&rdquo; writing of requirements that nobody reads, as a form of bureaucracy creation, or as &ldquo;waterfall planning&rdquo;: a sterile exercise in extensive upfront planning that tries to account for every eventuality and the entire future of development.</p>
<p>Instead, Spec Driven Development is about making technical decisions explicit and reviewable, as well as easy to understand (not just for machines, but especially for people) and straightforward to evolve.</p>
<h3 id="wrong-level-of-detail">Wrong level of detail</h3>
<p>Too vague, and the AI misunderstands and generates incorrect code. Too rigid, and the process becomes inflexible, impossible to adapt to rapid changes or brownfield contexts. Finding the right balance takes practice and iteration.</p>
<h3 id="false-sense-of-control">False sense of control</h3>
<p>Even with detailed specs, checklists, and large context windows, AI often ignores instructions, duplicates existing code, or over-applies rules. Non-determinism persists: the same spec can produce different output on every regeneration.</p>
<p>This is the central challenge of the new programming paradigm, no longer deterministic, but tied to probabilistic AI tools. This paradigm shift is explored in depth in the <a href="https://www.youtube.com/watch?v=f-bFIb7ao2s&amp;list=PLSD9hiOyso85HJ9IKTA5z1b8qMtzdL-rO&amp;index=4">talk by Enrico Zimuel</a> at our GenAI x Business event.</p>
<h3 id="amnesia">Amnesia</h3>
<p>In complex codebases or particularly long working sessions, agents can lose track of part of the context: implicit relationships between components, decisions already made, or changes previously applied. Without a continuous anchor to the specifications, this can lead to inconsistencies, duplications, or unintentional regressions.</p>
<h3 id="general-limits-of-sdd">General limits of SDD</h3>
<p>For trivial fixes, the overhead is disproportionate; for very complex or ambiguous features it often isn&rsquo;t enough; and introducing it on legacy codebases requires a high initial investment. Moreover, if the specification isn&rsquo;t kept up to date it becomes a more dangerous source of confusion than the code itself, repeating the historical mistakes of model-driven development: rigidity combined with unpredictability.</p>
<p>Being aware of these limits lets you apply Spec Driven Development where it actually makes sense, and avoid turning it into a new dogma.</p>
<h2 id="the-future-of-development-spec-driven-development-spec-as-source-always-on-agents">The future of development: spec-driven development, spec-as-source, always-on agents</h2>
<p>Spec Driven Development represents a new frontier of AI-driven development, a direction in which many teams are experimenting with different approaches. Within this context, an even more radical evolution is taking shape: <strong>spec-as-source</strong> development.</p>
<p>Under this approach, the specification becomes the only stable and modifiable artefact. When requirements change, the tech stack shifts, or a better LLM comes along, you update only the spec, and the plan, tasks, and code are regenerated accordingly, automatically. The project&rsquo;s history lives in the spec itself, including the &ldquo;commit history&rdquo;. Code loses its central role: it becomes a derived output, temporary and regenerable.</p>
<p>Tools like Tessl are already pushing in this direction, though they are still very experimental (currently limited to a one-to-one spec-to-code relationship), while <strong>Agent Skills</strong> (originally from Claude and now open source, donated to the Agentic AI Foundation) point to a parallel trend: autonomous agents executing complex tasks under guardrails defined by specifications. One example is <a href="https://github.com/obra/superpowers">Superpowers by Obra</a> on GitHub for development tasks, while <a href="https://skills.sh/">Skills.sh</a> brings together thousands of Skills covering the widest range of areas, from frontend design to brainstorming to copywriting.</p>
<p>This paradigm profoundly changes the developer&rsquo;s role. An inexperienced person stays in vibe-coding mode, relying on generic prompts (and getting generic results). A senior developer, on the other hand, unlocks explosive potential: providing precise specifications, rigorous guardrails, and solid architecture, transforming AI from a random generator into a reliable and tireless executor.</p>
<p>SDD in the context of AI is still young, with nuances still evolving and best practices still being discovered. The direction, however, is clear: it won&rsquo;t be &ldquo;here&rsquo;s my prompt, run it, I&rsquo;ll wait&rdquo; anymore, but &ldquo;here are the tasks, the rules, and the direction, keep working&rdquo;.</p>
<p>Looking further ahead, we&rsquo;ll see a &ldquo;workforce shift&rdquo;, with machines and agents operating 24/7 while humans supervise, define intent, specifications, architecture, and other high-value aspects. Writing code will become less central; writing clear, verifiable, and durable specifications will become a core skill for the modern developer.</p>]]></content:encoded><media:content url="https://www.sparkfabrik.com/images/blog/guida-allo-spec-driven-development/featured.png" medium="image"/><enclosure url="https://www.sparkfabrik.com/images/blog/guida-allo-spec-driven-development/featured.png" type="image/jpeg"/><category>AI</category><category>DevOps</category></item><item><title>Multilingual strategies in Drupal in the GenAI era</title><link>https://www.sparkfabrik.com/en/blog/drupal-multilingual-ai/</link><pubDate>Wed, 21 Jan 2026 00:00:00 +0000</pubDate><author>SparkFabrik Team</author><guid>https://www.sparkfabrik.com/en/blog/drupal-multilingual-ai/</guid><description>Drupal per l'enterprise multilingua: unisci velocità qualità e governance con l'AI. Scopri come Lara Translate e TMGMT ottimizzano traduzioni e workflow.</description><content:encoded><![CDATA[<div class="tldr">
  <span class="tldr__label">TL;DR</span>
  <div class="tldr__body">
    This article explains how to manage enterprise translations in Drupal by combining native multilingual architecture, the TMGMT workflow module, and Lara Translate (an LLM specialized in professional-quality translations). SparkFabrik built the TMGMT Lara Translate module: a university case study reduced multilingual publication times by 80% while maintaining editorial control through a human-in-the-loop approach.
  </div>
</div>
<p>Launching <strong>a multilingual website is a strategic decision</strong> that opens doors to new markets, increases user trust, and strengthens your brand&rsquo;s identity globally. At the same time, managing a multilingual digital ecosystem has always been a balancing act.</p>
<p>Anyone who has managed an enterprise platform knows that the challenge doesn&rsquo;t lie so much in the translation technology itself, but in orchestrating the processes: exponentially growing content volumes, review cycles that slow down time-to-market, data governance, and operational costs.</p>
<p>Today, <strong>Generative AI</strong> (GenAI) has brought brutal acceleration to this scenario. The promise of instant, near-zero-cost translations is seductive, but brings with it <strong>new risks</strong>: <strong>loss of brand consistency</strong>, <strong>hallucinations</strong> from probabilistic models, <strong>quality levels not always up to par</strong> from generalist models, and the <strong>difficulty of maintaining rigorous editorial control</strong> over thousands of auto-generated pages.</p>
<p>At SparkFabrik, we work daily on <a href="/en/drupal-cms-digital-experience?hsLang=en">complex Drupal-based projects</a>, serving clients who manage <strong>large digital ecosystems</strong>, from universities and public institutions that need to publish important tenders and official information, to enterprise companies with broad product portfolios and a global presence.</p>
<p>For these organizations, <strong>linguistic precision</strong> isn&rsquo;t an aesthetic detail or a matter of simply &ldquo;looking and sounding good&rdquo;: it&rsquo;s a <strong>requirement for brand identity and reputation</strong> (and, in certain contexts, also for <strong>compliance</strong>).</p>
<p>In this scenario, Drupal confirms itself not only as a solid choice, but as the enterprise CMS best positioned to transform the GenAI revolution into a concrete operational advantage, even for multilingual needs, and without sacrificing quality.</p>
<h2 id="the-multilingual-challenge">The Multilingual Challenge</h2>
<p>Let&rsquo;s address the central issue right away: <strong>multilingualism is not a trivial matter of translating words from language A to language B</strong>.</p>
<p>If it were that simple, a Google Translate plugin would suffice. Multilingualism is strategy. It&rsquo;s international technical SEO, it&rsquo;s cultural adaptation (localization), it&rsquo;s evolutionary maintenance of content that must remain synchronized over time.</p>
<p>In short, having a multilingual presence is a <strong>multifaceted strategic decision for the brand</strong>. And in this area, the <strong>choice of Content Management platform</strong> is the founding decision for any internationalization strategy.</p>
<p><strong>Drupal stands out in the enterprise CMS landscape</strong>, excelling in the structural management of these complexities thanks to an architecture that conceives of <strong>multi-language as a native attribute of data</strong>.</p>
<p>At the same time, a major &ldquo;Achilles&rsquo; heel&rdquo; of any multilingual system has always been the automation of translation workflows, in terms of balancing costs and quality. Traditional methods, such as sending files via email to agencies or using old-generation automatic translators, are now obsolete for the pace and quality levels required by today&rsquo;s market.</p>
<p>Our thesis is clear: the only viable path for modern organizations is the intelligent use of GenAI, but rigorously accompanied by strategic human control.</p>
<h3 id="why-has-multilingualism-become-a-current-topic-again">Why has multilingualism become a current topic again?</h3>
<p>To understand the scope and relevance of the multilingual content topic, it&rsquo;s necessary to look at the historical context we&rsquo;re experiencing.</p>
<p>First of all, in the digital landscape that has formed in recent years, we&rsquo;ve witnessed a convergence that has ultimately led to an <strong>explosive increase in the quantity of content and translations</strong>:</p>
<ul>
<li>The maturation of Generative AI (GenAI) technologies.</li>
<li>An explosion in digital content production, fueled mainly by GenAI which has empowered teams of all sizes.</li>
<li>A consequent increase in the demand for localization of produced content (a <a href="https://www.smartling.com/blog/smartling-unveils-2024-state-of-translation-report-highlighting-industry-trends-and-ai-driven-efficiencies">recent report</a> indicates a surge in enterprise translation demand of 30% annually). Moreover, translations fit into a more general market trend towards consistent and personalized content for end users (we discussed this in the context of <a href="/en/drupal-headless?hsLang=en">omnichannel with Drupal</a>).</li>
</ul>
<p>But it&rsquo;s not just about content quantity; there&rsquo;s an equally important increase in <strong>pressure on speed</strong>. Marketing campaigns, communications, and other content must be released simultaneously in all languages: there are no longer weeks to spare for manual localization.</p>
<p>Third, the <strong>need for quality</strong>. Enterprise organizations face a crossroads: continue to rely on manual processes, now unsustainable in terms of costs and time, or embrace automation while risking compromising brand reputation with low-quality translations (not just literal, but in terms of brand tone-of-voice).</p>
<p>GenAI can represent the &ldquo;Holy Grail&rdquo; that balances quantity, speed, and quality in this area. At the same time, however, there&rsquo;s the <strong>need for control</strong>: in a world where content is machine-generated, editorial governance becomes the last bastion of brand identity. Both fine-tuning AI systems according to each brand&rsquo;s identity and human supervision and review become essential.</p>
<p>It&rsquo;s also worth considering the impact of GenAI on editorial teams: content management teams should not be replaced, but empowered, freeing them from repetitive tasks to focus on creativity and qualitative supervision, including in terms of localization (in this sense, Drupal fully embraces this approach to AI).</p>
<p>Last but not least, making (or keeping) a brand multilingual is a strategic decision that opens doors to new markets and strengthens the brand internationally. The appeal of this strategy for brands is evident, and it is now significantly more accessible to organizations of all sizes thanks to GenAI.</p>
<h2 id="drupal-and-multilingualism-what-works-whats-changing">Drupal and multilingualism: what works, what&rsquo;s changing</h2>
<p>Drupal needs no introduction when it comes to multilingual capabilities; in fact, Drupal&rsquo;s centrality in the enterprise sector is largely attributable to its architectural maturity regarding multilingual data structures.</p>
<p>Unlike other CMSs that require heavy plugins to manage translations, <strong>Drupal handles multilingualism at the <em>Core</em> level</strong>. This means that every entity (from nodes to content blocks, from taxonomies to menus) is natively translatable.</p>
<p>However, the ability to store translations is useless without an <strong>efficient operational process</strong> to create and manage them. This is the domain of modules like the Translation Management Tool (TMGMT).</p>
<p>Let&rsquo;s analyze in more detail the multilingual aspects in Drupal&rsquo;s Core and in TMGMT.</p>
<h3 id="multilingual-content-and-localization-in-drupal-core">Multilingual content and localization in Drupal Core</h3>
<p>Drupal incorporates multilingualism into its main Core, at the deepest level of its application framework. This means that <strong>robustness and scalability</strong> are guaranteed, not depending on third-party plugins that can break at any moment.</p>
<p>More specifically, Drupal integrates language support at the <strong>Entity</strong> and <strong>Field</strong> level. Every content element is an entity (be it a page, a block, a taxonomy term, a menu, or a media asset). The native translation system allows creating language variants for each entity while maintaining a single unique ID.</p>
<p>At the same time, you can configure which specific fields of content must be translated (e.g., product titles and descriptions) and which should remain unchanged (e.g., product codes, numeric technical specifications, global images). This not only optimizes translation costs by reducing word volume but also ensures the integrity of technical data across markets.</p>
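<p>As an illustrative sketch (file name and field names are hypothetical), per-field translatability lives in Drupal&rsquo;s field configuration, where the <em>translatable</em> key is simply switched off for fields that must stay identical across languages:</p>

```yaml
# Illustrative Drupal field configuration
# (e.g. field.field.node.product.field_product_code.yml):
# the product code is a global technical value, so translation is disabled.
langcode: en
id: node.product.field_product_code
field_name: field_product_code
entity_type: node
bundle: product
label: 'Product code'
translatable: false
```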
<p>Drupal&rsquo;s linguistic architecture therefore operates on four levels:</p>
<table>
<thead>
<tr>
<th><strong>Translation Level</strong></th>
<th><strong>Description</strong></th>
<th><strong>Enterprise Implication</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Content Translation</strong></td>
<td>Translation of nodes, articles, products, and base pages.</td>
<td>Enables localization of marketing messages and product information.</td>
</tr>
<tr>
<td><strong>Configuration Translation</strong></td>
<td>Translation of views, fields, menus, and system settings.</td>
<td>Ensures that the site infrastructure &ldquo;speaks&rdquo; the user&rsquo;s language, not just the content.</td>
</tr>
<tr>
<td><strong>Interface Translation</strong></td>
<td>Translation of user interface strings and modules.</td>
<td>Essential for user experience (UX) and for editorial teams distributed across various countries.</td>
</tr>
<tr>
<td><strong>Entity Translation</strong></td>
<td>Translation of complex entities such as taxonomies, media, and user profiles.</td>
<td>Enables complex architectures and localized categorizations for SEO and navigation.</td>
</tr>
</tbody>
</table>
<p>Furthermore, organizations can choose whether to maintain a <strong>symmetric structure</strong> (every page exists in all languages) <strong>or asymmetric</strong> (specific content for local markets), managing everything within a single instance or through a centrally governed multisite architecture. The logic that determines which variant to serve to the user is also configurable: URL prefixes (e.g., /it/), top-level domains, authenticated user preferences, or browser settings.</p>
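<p>The URL-prefix variant of this negotiation can be sketched in a few lines (Drupal implements this internally; the function below is purely illustrative):</p>

```typescript
// Sketch of URL-prefix language negotiation: the first path segment is used
// as the language code if it matches an enabled language, else a fallback.
// Purely illustrative — Drupal handles this in its own negotiation layer.
function languageFromPath(path: string, enabled: string[], fallback: string): string {
  const prefix = path.split("/").filter(Boolean)[0] ?? "";
  return enabled.includes(prefix) ? prefix : fallback;
}

languageFromPath("/it/chi-siamo", ["en", "it", "fr"], "en"); // "it"
languageFromPath("/blog/post", ["en", "it", "fr"], "en");    // "en"
```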
<p>Equally important, Drupal&rsquo;s <strong>granular permission management</strong> is a fundamental aspect for more structured organizations, allowing precise role-based permissions and review, approval, and publication pipelines for each language or region to be set.</p>
<p>In short, Drupal supports flexibility essential to support the most complex international product, content, and SEO strategies.</p>
<h4 id="drupals-architectural-superiority-compared-to-competitors">Drupal&rsquo;s Architectural Superiority compared to competitors</h4>
<p>When <a href="/en/drupal-cms-a-comparison-with-the-main-alternatives?hsLang=en">compared with alternatives</a> like WordPress or Adobe Experience Manager (AEM), Drupal&rsquo;s native architecture offers indisputable business advantages.</p>
<ul>
<li><strong>Comparison with WordPress:</strong> WordPress typically requires plugins like WPML or Polylang. These often store translations as separate posts linked by metadata, which can lead to database bloat and query inefficiency at scale. Drupal&rsquo;s entity-based translation stores translations within the same entity record, optimizing performance, simplifying API queries, and ensuring greater data consistency.</li>
<li><strong>Comparison with Adobe Experience Manager (AEM):</strong> While AEM offers robust &ldquo;Language Copies,&rdquo; it comes with high licensing costs and often requires heavy customization for complex workflows. Drupal offers comparable enterprise capabilities (granular permissions, workflow integration, multi-site management) without licensing fees, significantly reducing Total Cost of Ownership (TCO) and allowing budget to be reinvested in innovation and content quality.</li>
<li><strong>Comparison with Headless CMS:</strong> The evolution of digital architectures towards &ldquo;<a href="/en/composable-architecture-with-drupal-cms?hsLang=en">Composable</a>&rdquo; and &ldquo;<a href="/en/drupal-headless?hsLang=en">Headless</a>&rdquo; models has made a CMS&rsquo;s ability to act as a central repository for multilingual content even more critical. Drupal, thanks to its API-first approach, natively exposes translated content via JSON:API and GraphQL. Importantly, data is exposed in a structured format consumable by any frontend (React, Vue, Angular), facilitating omnichannel distribution without complex middleware for language logic management.</li>
</ul>
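<p>For a decoupled frontend, requesting content in a given language can be as simple as building a language-prefixed JSON:API URL. The base URL and bundle below are hypothetical; the <em>/{langcode}/jsonapi</em> path pattern follows Drupal&rsquo;s language-prefix negotiation, and <em>fields[...]</em> is JSON:API&rsquo;s sparse-fieldsets syntax:</p>

```typescript
// Hedged sketch: building a language-aware JSON:API request URL for a
// decoupled frontend. Base URL and bundle name are hypothetical.
function jsonApiUrl(base: string, langcode: string, bundle: string): string {
  return `${base}/${langcode}/jsonapi/node/${bundle}?fields[node--${bundle}]=title,body`;
}

jsonApiUrl("https://cms.example.com", "it", "article");
// → "https://cms.example.com/it/jsonapi/node/article?fields[node--article]=title,body"
```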
<h3 id="tmgmt-as-a-workflow-orchestrator">TMGMT as a workflow orchestrator</h3>
<p>While Drupal Core provides the ability to store translations, it doesn&rsquo;t fully manage the <strong>operational translation process</strong>. This is where the <a href="https://www.drupal.org/project/tmgmt"><strong>Translation Management Tool (TMGMT)</strong></a> comes in.</p>
<p>Used by over 10,000 high-traffic sites, it&rsquo;s a suite of tools that standardize the translation process. In enterprise contexts, and for anyone managing advanced editorial workflows, TMGMT truly becomes the beating heart of the system.</p>
<p>Manual translation management (export copy-paste via email) is the main bottleneck for scalability. TMGMT solves this problem by introducing an abstraction and automation layer.</p>
<p>First of all, TMGMT allows <strong>completely decoupling the content source from the translation provider</strong>. We can therefore see two levels:</p>
<ol>
<li><strong>Sources:</strong> TMGMT can extract text from any Drupal element (Nodes, Blocks, I18n Strings). It doesn&rsquo;t matter if the content resides in a paragraph, a custom field, or a configuration string; TMGMT normalizes it into a translation-ready format.</li>
<li><strong>Translators:</strong> Thanks to its plugin architecture, TMGMT is agnostic about <em>who</em> performs the translation. It can be a human user, an external agency connected via XLIFF files, or an automatic translation service. Today, LLMs are also included among translators, offered by various providers (OpenAI, Gemini, Ollama, Lara…).</li>
</ol>
<p>The advantage of this flexibility is clear: it allows <strong>changing translation providers without having to rewrite code or retrain editorial staff</strong> , drastically reducing vendor lock-in risk.</p>
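<p>Conceptually (sketched here in TypeScript, not TMGMT&rsquo;s actual PHP plugin API), the decoupling looks like this: sources produce translatable items, and any translator plugin can consume them, so swapping providers means swapping plugins while the editorial workflow stays unchanged.</p>

```typescript
// Conceptual sketch of TMGMT's source/translator decoupling — NOT its real
// PHP API. A translator plugin consumes normalized items from any source.
interface TranslationItem { key: string; sourceText: string; }

interface TranslatorPlugin {
  name: string;
  translate(items: TranslationItem[], targetLang: string): TranslationItem[];
}

// A stand-in "translator" that only tags text with the target language,
// just to show the interchangeable-plugin shape.
const demoTranslator: TranslatorPlugin = {
  name: "demo",
  translate: (items, lang) => items.map(i => ({ ...i, sourceText: `[${lang}] ${i.sourceText}` })),
};
```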
<p><strong>Governance features</strong> are another central added value of TMGMT. It allows <strong>assigning translation jobs to specific users</strong>, managing granular progress states (&ldquo;pending&rdquo;, &ldquo;translated&rdquo;, &ldquo;reviewed&rdquo;, &ldquo;accepted&rdquo;), and keeping an overview of what has and hasn&rsquo;t been translated. This structured approach ensures that translations aren&rsquo;t published blindly, but flow through proper review and validation pipelines.</p>
<p>Finally, an advanced functionality (particularly useful for high-update-volume sites) is <strong>Continuous Translation Jobs</strong>.</p>
<p>This feature reverses the traditional paradigm: instead of waiting for an editor to manually create a translation &ldquo;package,&rdquo; the system proactively monitors content. When content is created or updated, TMGMT detects it and the new content is automatically added to a Job, then sent to the translation provider.</p>
<p>This mechanism eliminates &ldquo;dead times&rdquo; and the risk of <em>drift</em> between original and translated content, essential for maintaining consistency in e-commerce ecosystems or real-time news.</p>
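<p>The governance states and review pipeline described above can be modeled as a small state machine. The state names come from this article; the transition table itself is an illustrative sketch, not TMGMT&rsquo;s implementation.</p>

```python
# Hedged sketch of a review pipeline with the states named above.
# The transition rules are illustrative, not TMGMT's actual code.

ALLOWED = {
    "pending": {"translated"},
    "translated": {"reviewed"},
    "reviewed": {"accepted", "translated"},  # a reviewer may send it back
    "accepted": set(),                       # terminal: ready to publish
}

def advance(state, new_state):
    """Move a job item to a new state, enforcing the allowed transitions."""
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

state = "pending"
for step in ("translated", "reviewed", "accepted"):
    state = advance(state, step)
print(state)  # accepted
```

<p>A continuous-translation setup simply feeds this machine automatically: every content update creates or extends a job item in the &ldquo;pending&rdquo; state instead of waiting for an editor to build a package by hand.</p>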
<p>Until recently, however, there was a structural limitation: the options were polarized. On one hand, <strong>manual translation</strong> (high quality, high costs and time); on the other, <strong>classic Machine Translation</strong> (low quality, low cost). An effective &ldquo;bridge&rdquo; to services capable of combining automation speed with enterprise-grade publication quality was missing.</p>
<p><strong>GenAI</strong> is changing this paradigm, fitting exactly into this space and enabling <strong>hybrid workflows</strong> that were previously unthinkable.</p>
<h2 id="ai-translations--human-in-the-loop-speed-is-important-but-not-at-the-expense-of-quality">AI translations + human-in-the-loop: speed is important, but not at the expense of quality</h2>
<p>Language automation based on LLMs (Large Language Models) today allows managing translation volumes that would have been humanly and economically impossible just a few years ago. Think of translating thousands of product sheets, technical knowledge bases, or historical news archives.</p>
<p>However, speed cannot become an excuse for quality degradation.</p>
<p>For institutional, strategic, or core business-related content, human input remains essential. AI, however advanced, can lack sensitivity to specific cultural context or may misunderstand tone nuances crucial to the brand. The winning strategy we&rsquo;re observing isn&rsquo;t replacement, but the <strong>hybrid approach</strong>: <strong>AI + Review (Human-in-the-loop)</strong>.</p>
<p>Here a <strong>critical problem</strong> arises: many try to solve the issue by connecting Drupal to generalist models like ChatGPT or Gemini via generic APIs. While technically possible, this approach is often ineffective for enterprise use. <strong>Generalist models</strong> are &ldquo;know-it-alls&rdquo;: they translate a poem with the same statistical machinery as a technical manual, often introducing hallucinations or losing the required terminological consistency.</p>
<p>Enterprise and Academic clients cannot afford these risks. A legal term translated approximately or an overly colloquial tone in institutional communication can create real damage.</p>
<p>When quality is a fundamental KPI, relying on generalist systems means shifting cost from translation to massive review, canceling the economic advantage.</p>
<p>If we want to leverage GenAI&rsquo;s power in contexts where accuracy is central, we need a specialized AI model. We need a technology partner that has solved the quality problem at the root. It&rsquo;s in this scenario that we introduce <strong>Lara Translate</strong>.</p>
<h2 id="integration-with-lara-translate-why-we-built-it">Integration with Lara Translate: why we built it</h2>
<p>While Drupal Core provides the ability to store translations and TMGMT provides the logistical infrastructure and integration with providers, <strong>output quality depends on the translation engine</strong>.</p>
<p>While generic Large Language Models (LLMs) have demonstrated impressive fluency, they often lack the domain specificity and terminological consistency required for enterprise use.</p>
<p>This is where specialized language models like <a href="https://laratranslate.com/about-lara"><strong>Lara Translate</strong></a> stand out. Lara is an AI model created by the Italian company Translated, which specializes vertically in translation and high-quality AI technologies.</p>
<p>Our choice to integrate it into Drupal stems from the <strong>specific need of an institutional client to integrate a quality translation provider</strong>. From an in-depth analysis of available market solutions, Lara consistently positions itself a step above standard automatic translation, approaching the performance of the best human professional translators.</p>
<p>But what differentiates Lara from other solutions? The difference lies in the project&rsquo;s DNA. Lara is the LLM developed by <a href="https://translated.com/">Translated</a>, a company operating in the professional translation sector since 1999.</p>
<p>Unlike generalist models trained on the entire web (including low-quality content), Lara was trained and fine-tuned on a proprietary dataset of millions of professional translations.</p>
<p>We&rsquo;re talking about decades of work done by over 500,000 professional linguists for 397,000 enterprise clients, in more than 200 languages, for a total of <strong>over 25 million real professional translations</strong>.</p>
<p>Lara &ldquo;learned&rdquo; to translate by watching how the best humans work, not by reading online forums. This specialization in training data is what guarantees superior output.</p>
<p><img src="/images/blog/drupal-multilingual-ai/Lara_20Translate_20-_20Translation_20quality_20chart.png" alt="Lara Translate - Translation quality chart"></p>
<p>To bring this power into our projects, at SparkFabrik we developed and released the <a href="https://www.drupal.org/project/tmgmt_laratranslate"><strong>TMGMT Lara Translate</strong></a> module, a plugin that introduces Lara as a translation provider for all content in Drupal.</p>
<p>The plugin allows editorial teams to send content to Lara and receive translations directly in the Drupal interface, keeping intact all TMGMT&rsquo;s governance, review, and workflow functionalities.</p>
<p>The result is a fluid process: no more copy-paste, all the advantages of Drupal&rsquo;s multilingual system, combined with high quality delivered automatically. To reach this quality level, several distinctive features have been developed in Lara (and are fully supported in Drupal).</p>
<p>Additionally, <strong>Translated</strong> also offers the possibility to integrate <a href="https://laratranslate.com/ai-human-translation"><strong>professional human review</strong></a> <strong>(human-in-the-loop)</strong> for those translations requiring an extra layer of guarantee. As seen, Lara is a highly performant GenAI model in translation tasks precisely thanks to Translated&rsquo;s human-centric philosophy, which led to training based on millions of human professional translations (you can <a href="https://laratranslate.com/ai-human-translation">learn more here</a>).</p>
<h3 id="distinctive-features-of-lara-translate-integrated-in-drupal">Distinctive features of Lara Translate integrated in Drupal</h3>
<ul>
<li><strong>Translation styles.</strong><br>
Companies don&rsquo;t communicate in a single register: a legal contract requires absolute precision, while a marketing campaign requires creativity. Accordingly, Lara doesn&rsquo;t translate flatly but natively integrates three distinct translation styles.
<ul>
<li><em>Faithful:</em> Ideal for technical manuals, legal contracts, and content where terminological precision is vital.</li>
<li><em>Fluid:</em> Perfect for general editorial content, blog posts, and news.</li>
<li><em>Creative:</em> Designed for marketing and storytelling, where AI takes the liberty to adapt the message to maximize emotional impact.</li>
</ul>
</li>
</ul>
<table>
<thead>
<tr>
<th><strong>Style</strong></th>
<th><strong>Description</strong></th>
<th><strong>Enterprise Use Case</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Faithful</strong></td>
<td>Absolute priority to literal and terminological accuracy.</td>
<td>Contracts, technical manuals, safety sheets, financial reports.</td>
</tr>
<tr>
<td><strong>Fluid</strong></td>
<td>Balance between accuracy and flow naturalness.</td>
<td>Internal communications, emails, blog articles, news.</td>
</tr>
<tr>
<td><strong>Creative</strong></td>
<td>Freedom in structure to capture emotional intent and tone.</td>
<td>Advertising slogans, marketing copy, brand storytelling.</td>
</tr>
</tbody>
</table>
<ul>
<li><strong>Context awareness and document coherence.</strong><br>
Unlike old systems that translated sentence by sentence losing the thread of discourse, Lara analyzes the entire document. It understands relationships between sentences, maintains consistency of grammatical gender and references throughout the text, ensuring natural flow.</li>
<li><strong>Glossaries.</strong><br>
Allow specifying correct translations for specific terms and phrases that are crucial for your particular context. This ensures Lara applies the right terminology consistently across all translations.</li>
<li><strong>Trust Attention.</strong><br>
Lara uses a proprietary mechanism to &ldquo;weigh&rdquo; information. During generation, it prioritizes data from verified professional translations over less reliable sources. These also include revisions, corrections, and &ldquo;error memory.&rdquo;</li>
<li><strong>Lara Feedback.</strong><br>
Thanks to its dataset that also includes real corrections, Lara is able to &ldquo;explain&rdquo; its translation choices, providing an unprecedented level of transparency for an AI system (the so-called &ldquo;AI Explainability&rdquo;).</li>
<li><strong>Access to experts.</strong><br>
Translated&rsquo;s ecosystem makes it possible, when AI isn&rsquo;t enough (for example, for ultra-sensitive content), to activate professional human translator services through the same pipeline. The transition from AI translation to on-demand professional human translation is thus made immediate.</li>
</ul>
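<p>To make the glossary guarantee concrete, here is a minimal sketch of the kind of check a glossary enables: flagging mandated target-language terms that the engine (or a reviewer) must enforce. The helper function is hypothetical and not part of Lara&rsquo;s API.</p>

```python
# Hypothetical helper, not Lara's API: verify that glossary-mandated
# target terms actually appear in a translation.

GLOSSARY = {"machine learning": "apprendimento automatico"}

def glossary_gaps(source, translation, glossary):
    """Return glossary entries whose source term appears in the source
    text but whose mandated target term is missing from the translation,
    so the engine or a reviewer can enforce them."""
    missing = []
    for src_term, tgt_term in glossary.items():
        if src_term in source.lower() and tgt_term not in translation.lower():
            missing.append((src_term, tgt_term))
    return missing

issues = glossary_gaps(
    "An intro to Machine Learning",
    "Un'introduzione al machine learning",  # engine kept the English term
    GLOSSARY,
)
print(issues)
```

<p>A real engine applies the terminology at generation time, of course; the point of the sketch is that a glossary turns terminology from a stylistic preference into a checkable constraint.</p>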
<h3 id="how-to-use-lara-as-a-translation-provider-in-drupal">How to use Lara as a translation provider in Drupal</h3>
<p>If you&rsquo;re already familiar with TMGMT, getting started with Lara is immediate. If you&rsquo;re new to it, here&rsquo;s a quick overview of the procedure (common to other providers).</p>
<ol>
<li>First, make sure you&rsquo;ve installed and enabled <a href="https://www.drupal.org/project/tmgmt">TMGMT</a> together with the <a href="https://www.drupal.org/project/tmgmt_laratranslate">TMGMT Lara Translate</a> plugin.</li>
<li>Go to <em>Translation Management → Providers</em> and create a provider instance for Lara by adding your API credentials (a Lara account is required). The settings let you tailor the module to your specific context, for example by selecting the default style and linking glossaries.</li>
<li>Through TMGMT, choose the entities to be translated (nodes, paragraphs, etc.) and necessary languages. You thus create Jobs to send content to Lara.</li>
<li>Lara automatically translates and returns output to Drupal. Here you see Lara&rsquo;s quality: translations respect specific context, tone, and terminology.</li>
<li>Typically, output requires minimal human editing. Additionally, Lara supports review by highlighting ambiguities and providing explanations.</li>
<li>Once approved, translations are automatically published.</li>
</ol>
<h3 id="the-hybrid-approach-native-but-modern">The hybrid approach, native but modern</h3>
<p>As you may have noticed from the procedure, using Lara feels completely native in Drupal, especially if you&rsquo;ve already worked on a multilingual site with TMGMT. What&rsquo;s different is the &ldquo;engine&rdquo; behind the scenes: a highly specialized LLM.</p>
<p>Even with Lara as the basis of the automatic translation process, the human role in the process isn&rsquo;t eliminated or diminished. This is the concept of <strong>&ldquo;Human in the Loop&rdquo;</strong> (HITL), which here takes on a dual meaning.</p>
<ul>
<li><strong>Quality AI as base.</strong> Lara provides a high-quality &ldquo;first translation&rdquo; that is often already final, drastically reducing editing time.</li>
<li><strong>Editorial control in Drupal.</strong> Thanks to TMGMT, the human editor can review the translation directly in the CMS before publication and manually edit the content. Thanks to output quality, these are typically minor interventions, especially if Lara is correctly configured with glossaries and brand tone. The reviewer is thus empowered and transformed into a strategic supervisor.</li>
<li><strong>Professional translations.</strong> For more specific and particular cases, it&rsquo;s possible to request professional translator services from Translated, the parent company behind Lara.</li>
</ul>
<p>Adopting this technology stack generates immediate and measurable economic impact: the company can reduce translation budget by up to 80% or, with the same budget, translate 5 times more content, opening new markets previously unreachable due to cost limitations.</p>
<p>Indeed, 2025 market data highlights an enormous disparity between human and AI translation costs, and the hybrid approach allows having the best of both worlds: the following table offers an indicative estimate (see details <a href="https://www.weglot.com/blog/ai-translation-vs-human-translation">here</a> and <a href="https://seatongue.com/blog/insights/translation-inflation-localization-budget-2025/">here</a>).</p>
<table>
<thead>
<tr>
<th><strong>Method</strong></th>
<th><strong>Estimated Cost (per word)</strong></th>
<th><strong>Time (10k words)</strong></th>
<th><strong>Notes</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Human Translation</strong></td>
<td>€0.08 - €0.25</td>
<td>~1 Week</td>
<td>High quality, but slow and expensive; not scalable for large volumes. Standard human productivity is 2,000-2,500 words per day.</td>
</tr>
<tr>
<td><strong>Lara Translate (AI, API usage)</strong></td>
<td>~€0.0001 - €0.0002</td>
<td>~Minutes</td>
<td>&ldquo;Near-Human&rdquo; quality. Fractional cost, unlimited scalability.</td>
</tr>
<tr>
<td><strong>Hybrid Model (Lara + Review)</strong></td>
<td>~€0.005 - €0.08</td>
<td>~Hours, at most 1-2 Days</td>
<td>The &ldquo;sweet spot&rdquo; and optimal enterprise compromise: guaranteed quality, minimal review, 60-80% lower costs, fast turnaround, high scalability. A careful review proceeds at roughly 1,000-1,500 words/hour; an extremely fast review of low-risk content can reach 5,000-6,000 words/hour.</td>
</tr>
</tbody>
</table>
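<p>A quick back-of-the-envelope check of the table&rsquo;s per-word ranges, applied to a 10,000-word batch:</p>

```python
# Arithmetic on the table's indicative per-word figures (EUR) for a
# 10,000-word batch.

WORDS = 10_000
scenarios = {
    "human":  (0.08, 0.25),      # EUR per word, low-high estimate
    "ai":     (0.0001, 0.0002),
    "hybrid": (0.005, 0.08),
}

for name, (lo, hi) in scenarios.items():
    print(f"{name}: EUR {WORDS * lo:,.2f} - {WORDS * hi:,.2f}")
```

<p>Even the top of the hybrid range (EUR 800) only matches the bottom of the human range, which is what makes the &ldquo;sweet spot&rdquo; claim credible at scale.</p>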
<p>But the advantages of this approach don&rsquo;t stop at economic aspects. Equally relevant are:</p>
<ul>
<li><strong>Time-to-Market acceleration</strong>, with a consequent increase not only in speed but also in competitiveness in local markets</li>
<li><strong>Brand consistency</strong>, through the use of correct terminology and a unified tone of voice, otherwise difficult to achieve with fragmented human teams. We discussed consistency extensively in terms of <a href="/en/design-system-and-drupal-cms?hsLang=en">Design System</a>, but consistency in textual content is equally important.</li>
<li><strong>Operational scalability</strong>: the marketing team (or external support figures) doesn&rsquo;t need to grow linearly with the amount of content and number of supported languages. It&rsquo;s possible to automate translation of &ldquo;low-risk&rdquo; content and focus human attention on sensitive content, strategy, and other high-value work.</li>
</ul>
<h2 id="use-cases">Use cases</h2>
<p>Adopting this architecture (Drupal + TMGMT + Lara Translate) isn&rsquo;t a theoretical exercise, but a practical solution to real problems. Not surprisingly, this integration was born from a client&rsquo;s request in a real business case.</p>
<p>It&rsquo;s the <strong>ideal configuration for high-content-volume sites</strong> that cannot afford the costs of a traditional agency for every single word, but also cannot accept the poor quality of raw machine translation.</p>
<p>Think of <strong>projects where tone of voice, consistency, and clarity are non-negotiable assets</strong> : international marketing portals, technical product documentation, legal or institutional sites. In these contexts, automation must be intelligent.</p>
<p>An immediate example? Think of an enterprise e-commerce with 50,000 SKUs: it can automatically translate product descriptions (in Fluid style) and technical specifications (with Faithful style), reserving human budget for reviewing technical details, marketing campaign pages and the home page, maximizing ROI.</p>
<h3 id="business-case-the-digital-university">Business Case: The Digital University</h3>
<p>Let&rsquo;s look in more detail at a specific business case. A concrete example of the value of this solution is the work done for a prestigious Italian University (a real client for whom we originally developed the module).</p>
<ul>
<li><strong>The context.</strong><br>
A University is a huge editorial machine with hundreds of people working in different languages: institutional sites, department sites, news, research highlights, competition announcements, regulations, course program descriptions, administrative information&hellip; are just part of the content managed by university editorial teams. And typically, they must be published in different languages. In the education context, Drupal proves to be the ideal CMS.</li>
<li><strong>The problem.</strong><br>
Manual translation times are incompatible with the speed of news. Solutions like Google Translate and continuous copy-paste (which also breaks formatting) are now unthinkable. Even generalist LLMs showed notable quality limitations, requiring a significant resource investment in the review phase. What was needed was a high-quality alternative integrated directly into Drupal, in a workflow familiar to operators and able to guarantee full governance.</li>
<li><strong>The solution.</strong><br>
After thorough research, Lara was identified as the provider, and we implemented the TMGMT Lara Translate module.</li>
<li><strong>The new workflow.</strong><br>
Today, University editors create content in Italian (or the initial language) on Drupal. With one click, they select target languages and send the job to Lara directly from the editing interface. Lara returns a high-quality translation, respecting academic terminology (thanks to specific training, use of glossaries, and customized instructions) and keeping HTML tags intact. The content returns to Drupal in the &ldquo;To be reviewed&rdquo; state. The editor takes a quick look, approves, possibly optimizes, and publishes.</li>
<li><strong>The result.</strong><br>
Multilingual publication times have been reduced by 80%. Translation costs have plummeted, allowing many more contents to be translated with the same budget and high quality. Editorial control has remained firmly in the hands of the University, without duplications or data loss.</li>
</ul>
<h2 id="conclusions-recommendations-and-next-steps">Conclusions, recommendations, and next steps</h2>
<p>GenAI has had a disruptive impact on the entire content world. Yet, despite how it may seem, <strong>the GenAI era doesn&rsquo;t ask us to choose between automation and human quality, but to orchestrate them to leverage the best parts of both</strong>.</p>
<p>Managing a multilingual ecosystem is a strategic lever that directly impacts growth, Time-to-Market, and brand reputation. In a world of tool abundance, some fundamental details make the difference: <strong>quality, workflow, supervision</strong>.</p>
<p>The <strong>combination of Drupal CMS</strong>, with its solid, API-first, and inherently secure architecture, <strong>TMGMT</strong>, to effectively manage the localization process, <strong>and Lara Translate</strong>, with its specialized contextual intelligence, finally offers a concrete answer.</p>
<p>Brands are no longer forced to sacrifice quality on the altar of speed, nor to drain operational budgets to ensure terminological consistency on a global scale. The identified hybrid solution and the &ldquo;Human-in-the-Loop&rdquo; approach (validated through real case studies) are the ideal compromise. Editorial teams can free themselves from repetitive, low-value &ldquo;linguistic data entry&rdquo; work and elevate themselves to curators of global strategy, focusing on the cultural and communicative nuances that make brands unique in every market.</p>
<h3 id="recommendations-for-decision-makers">Recommendations for Decision Makers</h3>
<p>For decision makers who intend to transform this vision into operational reality, the recommended roadmap is articulated in four essential steps:</p>
<ol>
<li><strong>Audit current flows:</strong> Map the existing &ldquo;content-translation&rdquo; lifecycle. Identify bottlenecks caused by human intervention, manual file management, or email exchanges. In practice, how much time passes from the creation of master content in Italian to its actual publication in Chinese, German, or Arabic? If the answer is still measured in weeks rather than hours, the competitive gap is widening.</li>
<li><strong>Adopt the structural stack:</strong> Implement multilingual management in Drupal with the TMGMT module. For enterprise sites, it&rsquo;s not optional, but an architectural requirement necessary to &ldquo;decouple&rdquo; content creation from its translation.</li>
<li><strong>Opt for specialized AI:</strong> Start an initial pilot on non-critical segments, replacing generalist LLMs or manual processes with Lara Translate. Leverage the model&rsquo;s unique ability to understand the entire document context and programmatically adhere to your brand&rsquo;s style (&ldquo;Faithful&rdquo;, &ldquo;Fluid&rdquo;, &ldquo;Creative&rdquo;) to drastically reduce the time and cost of human review.</li>
<li><strong>Define governance:</strong> Establish clear guidelines on which types of content require human post-editing versus AI-only translation, using TMGMT workflow states to enforce these rules. For critical content, consider maintaining manual intervention by localization professionals.</li>
</ol>
<p>By shifting the focus from manual translation to <strong>strategic supervision</strong> of reliable and contextual AI, companies can overcome language barriers with unprecedented speed and quality.</p>
<p>SparkFabrik, through its deep technical and strategic expertise in Drupal and the development of tools like the <a href="https://www.drupal.org/project/tmgmt_laratranslate">Lara connector for Drupal</a>, positions itself as a key technology partner to guide organizations in this transition, transforming the challenge of linguistic complexity into a structural competitive advantage.</p>
<hr>
<p>If your organization is exploring <strong>adopting Drupal as a CMS</strong> that&rsquo;s robust, reliable, and customizable, introducing <strong>multilingual strategies</strong>, or pursuing <strong>AI integration</strong> for its digital initiatives, we invite you to:</p>
<ol>
<li>Explore our <a href="https://www.sparkfabrik.com/en/success-stories/">case studies</a> of enterprise Drupal implementations</li>
<li><a href="https://www.sparkfabrik.com/en/contact-us/">Contact our team</a> for an assessment of your specific needs</li>
<li>Discover how our <a href="https://www.sparkfabrik.com/en/services/drupal/">Drupal services suite</a> can support your AI strategy</li>
</ol>
<hr>
<p>This article is part of our series dedicated to Drupal CMS. To explore other aspects of the platform, we invite you to consult our previous articles on <a href="/en/drupal-cms-the-new-era-of-enterprise-content-management?hsLang=en">features and benefits</a>, <a href="/en/drupal-cms-a-comparison-with-the-main-alternatives?hsLang=en">comparison with alternatives</a>, <a href="/en/migration-to-drupal-cms-complete-guide-for-a-successful-transition?hsLang=en">migration strategies</a>, <a href="/en/drupal-cms-security-compliace-regulated-sector?hsLang=en">security and compliance</a>, <a href="/en/composable-architecture-with-drupal-cms?hsLang=en">composable architecture</a>, <a href="/en/design-system-and-drupal-cms?hsLang=en">Design System</a>, <a href="/en/drupal-headless?hsLang=en">Drupal headless omnichannel</a>, and <a href="/en/drupal-ai-overview-news-vision?hsLang=en">Drupal AI overview and news</a>.</p>
]]></content:encoded><media:content url="https://www.sparkfabrik.com/images/blog/drupal-multilingual-ai/Drupal_20Strategie_20Multilingua_20SparkFabrik_20Lara_20Translate_20-_20Featured_20Image.png" medium="image"/><enclosure url="https://www.sparkfabrik.com/images/blog/drupal-multilingual-ai/Drupal_20Strategie_20Multilingua_20SparkFabrik_20Lara_20Translate_20-_20Featured_20Image.png" type="image/jpeg"/><category>Drupal</category><category>AI</category></item><item><title>How we shaped the future of Drupal AI in 2025</title><link>https://www.sparkfabrik.com/en/blog/drupal-ai-contributions-2025/</link><pubDate>Thu, 15 Jan 2026 00:00:00 +0000</pubDate><author>SparkFabrik Team</author><guid>https://www.sparkfabrik.com/en/blog/drupal-ai-contributions-2025/</guid><description>Here's how we helped build the future of Drupal in 2025 (explained with a technical slant, but also designed for those who have to make decisions)</description><content:encoded><![CDATA[<div class="tldr">
  <span class="tldr__label">TL;DR</span>
  <div class="tldr__body">
    Overview of SparkFabrik&rsquo;s contributions to the Drupal AI Initiative in 2025: DDEV development environment for AI, Guardrails security framework, async AI Agent Runner on Symfony Messenger, RAG with Typesense, MCP integration for agentic workflows, and the TMGMT Lara Translate module. These contributions make Drupal an enterprise-grade CMS ready for AI in production.
  </div>
</div>
<p>For the Drupal community, January is the month when the candles are blown out: today, January 15, 2026, <strong>we celebrate 25 years</strong> of a technology that has shaped the web. 🎂</p>
<p><strong>But in Open Source, the best way to honor a project is to build its future and tell its story.</strong> What better time, then, to stop, tidy up, and share our contributions to the ecosystem?</p>
<p>2025 was the year when artificial intelligence went from &ldquo;interesting feature&rdquo; to <strong>infrastructure</strong>. A bit like the cloud a decade ago: first curiosity, then experimentation, finally inevitability. In between, a question that those working on digital platforms can no longer postpone: <em>&ldquo;How do we bring AI into production… without losing control, security, and quality?&rdquo;</em></p>
<p>This is exactly where the <a href="/en/drupal-ai-overview-news-vision?hsLang=en"><strong>Drupal AI Initiative</strong></a> comes in: a project born to transform the (very real) energy of the community into a coordinated vision, with the goal of making Drupal not just &ldquo;AI-compatible,&rdquo; but an <strong>enterprise-grade CMS even when AI enters the heart of editorial and business workflows</strong>.</p>
<p>At SparkFabrik, we didn&rsquo;t just stand by and watch. We understood that we didn&rsquo;t want to be mere users of this new technology, but &ldquo;makers.&rdquo; After all, Open Source isn&rsquo;t a &ldquo;marketing strategy&rdquo; for us—it&rsquo;s part of our history and our DNA.</p>
<p>We chose to be there, with real contributions, in areas we identified as decisive for real adoption: <strong>developer experience</strong>, <strong>governance &amp; security (guardrails)</strong>, <strong>RAG &amp; search</strong>, <strong>agentic workflows (MCP and toolchain)</strong>, <strong>enterprise integrations</strong>, and, something often underestimated, <strong>communication and community building</strong>.</p>
<p>Here&rsquo;s how we contributed to building the future of Drupal in 2025 (explained with a technical angle, but designed also for decision-makers).</p>
<p>And speaking of contribution: what better occasion to announce that <strong>on Saturday, January 31</strong>, we will host the <strong>Drupal Contribution Day</strong> in Milan? It&rsquo;s now a recurring and unmissable event at our offices, a time to meet, write code together (but not only!), and &ldquo;give back&rdquo; to the community. <a href="/en/eventi/drupal-contribution-day-2026/"><strong>Register here to join us!</strong></a></p>
<h2 id="what-is-the-drupal-ai-initiative">What is the Drupal AI Initiative</h2>
<p>The Drupal AI Initiative is a strategic project aimed at integrating AI into Drupal effectively, evolving the system to position it as the best open-source &ldquo;<strong>agentic CMS</strong>.&rdquo;</p>
<p>The Drupal ecosystem had been talking about AI for some time, with features, integrations, and modules that were growing and providing a taste of what was possible. However, it became clear that to bring real strategic impact, it was necessary to go beyond fragmented contributions and channel efforts.</p>
<p>With this awareness, <a href="https://www.drupal.org/about/starshot/initiatives/ai">the Drupal AI Initiative was launched on June 9, 2025</a>, with the goal of bringing structure, strategy, and shared direction to innovation (and a common vision of AI that supports people, is secure, and fully governable even in enterprise environments). In other words: not just &ldquo;AI modules,&rdquo; but a common direction, a framework, and an ecosystem capable of growing without losing governability.</p>
<p>For decision-makers (CTOs, CEOs, digital managers, marketing leads), this point is huge: it means being able to bring AI into digital experiences without vendor lock-in, without having to rebuild everything from scratch, and with the governance backbone typical of Drupal.</p>
<h2 id="first-of-all-who-contributed-people-not-just-code">First of all: who contributed (people, not just code)</h2>
<p>Innovation is driven by people. Our contribution wouldn&rsquo;t have been possible without a significant structural investment: since June 2025, we&rsquo;ve dedicated <strong>50% of the working time</strong> of two of our best technical talents exclusively to the Drupal AI Initiative.</p>
<p>We&rsquo;re not talking about spare time, but constant, daily commitment. An amount of resources justified by our internal strategic vision, and often further fueled by personal passion that goes well beyond office hours.</p>
<table>
<thead>
<tr>
<th><strong>Contributor</strong></th>
<th><strong>Role &amp; Focus</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://www.drupal.org/u/lussoluca"><strong>Luca Lusso</strong></a></td>
<td><em>Lead Developer &amp; Architect</em>. Drupal contributor and speaker, with experience in modules and advanced integrations (including WebProfiler, Monolog, Symfony Messenger, Search API Typesense). He worked on infrastructural foundations (Runner, Guardrails) and complex architectures (DDEV development environment).</td>
</tr>
<tr>
<td><a href="https://www.drupal.org/u/robertoperuzzo"><strong>Roberto Peruzzo</strong></a></td>
<td><em>Senior Developer &amp; Maintainer</em>. Contributor to various modules and integrations (Iubenda, Search API Typesense, Panther, TMGMT Lara Translate, MCP Client). He focused on interoperability (MCP), RAG, and vertical integrations.</td>
</tr>
</tbody>
</table>
<p>The team&rsquo;s prior experience was fundamental, because many of our 2025 contributions weren&rsquo;t theoretical exercises, but technical choices made by those who have already seen <strong>what works</strong> (and what breaks) when a project has to live in the real world.</p>
<h3 id="how-we-worked-governance-consistency-and-alignment">How we worked: governance, consistency and alignment</h3>
<p>Guiding the strategic vision was our CTO, <strong>Paolo Mainardi</strong>, who coordinated activities with precise methodology to ensure we produced value continuously (not only for the ecosystem and community, but also to address the real enterprise challenges of our clients).</p>
<p>In 2025, we worked in sprints, with dedicated issues and milestones, aligning with internal weekly coordination calls. Additionally, we always participated in the community&rsquo;s weekly asynchronous alignments to bring our perspective. This model allowed us to:</p>
<ul>
<li>choose priorities and &ldquo;cut&rdquo; what didn&rsquo;t bring value</li>
<li>explore and deepen various solutions, beyond contributions</li>
<li>define and evolve PoCs into reusable solutions</li>
<li>maintain a constant flow of contributions, despite the pressure of commercial projects</li>
<li>create a stable bridge between technical work and communication.</li>
</ul>
<p>Regarding the last point, an important aspect was working together with the marketing team: <strong>Stefano Mainardi (CEO)</strong> and <strong>Alessandro De Vecchi</strong> worked constantly to tell the story of Drupal, the AI Initiative, and what we were building.</p>
<p>Because open source really works when you <strong>build</strong> and <strong>share</strong>.</p>
<h2 id="ai-development-environment-in-drupal-our-ddev-add-on-to-lower-the-barrier-to-entry-ddev-development-environment">AI Development Environment in Drupal: our DDEV add-on to lower the barrier to entry (DDEV development environment)</h2>
<p>The first major challenge we faced was infrastructural. In June, at the beginning of the Drupal AI Initiative, contributing to development was complex and slowed down by a fundamental bottleneck in terms of Developer Experience (DevEx).</p>
<p>Our first major contribution was therefore a <a href="https://www.drupal.org/project/ai/issues/3532795"><strong>DDEV-based development environment</strong></a>, designed to make local setup replicable and fast, and to allow working on multiple modules simultaneously without going crazy.</p>
<p>More specifically, the <strong>main problem</strong> was the complexity in configuring a local development environment to work effectively in a context like Drupal AI, where you need to work and test multiple modules simultaneously. Dependency management was decidedly inflexible (with the risk of having to reinstall them every time), only one project at a time could be cloned (making many manual git clones necessary when working on multiple modules), and the need to add (and maintain) some DDEV-specific configuration files made everything tedious and not scalable.</p>
<p>Our effort was therefore directed at lowering these barriers to entry and accelerating the innovation cycle for the entire community, providing developers with a pre-configured local AI development environment that is up and running in a very short time.</p>
<p>Our strategy evolved in two phases, with a general and a specialized solution. The first step was the proposal of a new DDEV add-on (the <a href="https://github.com/lussoluca/ddev-drupal-suite"><strong>DDEV Drupal Suite add-on</strong></a>), a generic tool that can be used by any contrib module, drastically simplifies setup, and makes contributing to multiple modules simultaneously streamlined.</p>
<p><a href="https://github.com/lussoluca/ddev-drupal-suite"><img src="/images/blog/drupal-ai-contributions-2025/DDev_20Development_20Enviroment.png" alt="DDEV Development Environment"></a></p>
<p>Based on this, we first developed a &ldquo;Drupal recipe&rdquo; (<a href="https://www.drupal.org/project/ai_dev_recipe">AI Dev Recipe</a>) that installs and configures a minimal set of AI modules, then also an interactive CLI wizard (<a href="https://github.com/Drupal-AI/ddev-drupal-ai"><strong>DDEV Drupal AI Add-on</strong></a>) that orchestrates the entire AI functionality configuration, automatically manages dependencies, config and install, and is also easily extensible with simple YAML instructions.</p>
<p>Furthermore, development continued to make the solution even more flexible, for example including support for ready-to-use vector databases (PostgreSQL, pgvector), but also optional support for high-performance scenarios (Milvus) and quality assurance tools (GrumPHP).</p>
<p>Today, this solution is used by the global community and is publicly available as an add-on. In parallel, to further facilitate adoption, we also proposed transferring it to the official DDEV namespace, so as to make it the de facto standard for AI development in Drupal. It was one of our first contributions, but it laid the foundation for everything else.</p>
<p><strong>Why does this contribution matter for business too?</strong> Because when AI enters an enterprise CMS, the impact is as much functional as organizational. The speed with which a team can do tests, fixes, iterations, and releases determines the real time-to-market. And a standardized development environment is often a first, powerful productivity multiplier.</p>
<h2 id="guardrails-bringing-security-governance-and-control-to-drupal-ai">Guardrails: bringing security, governance, and control to Drupal AI</h2>
<p>Every time we talk about AI in production, sooner or later we get here: <strong>trust</strong>.</p>
<p>AI can create content, summarize, classify, orchestrate actions, call external tools. But it can also &ldquo;hallucinate&rdquo; (with great confidence), deviate from company policies, expose sensitive data, generate unsafe or inappropriate output.</p>
<p>Think for example: how can we ensure that a chatbot doesn&rsquo;t provide inappropriate responses, suggesting a competitor&rsquo;s product, offending the user, or inventing false information? And how can we prevent sensitive data (PII) from being sent to third-party providers (through our prompts, or through information collected by autonomous agents)?</p>
<p>The risks are real and, in the enterprise world, an AI that &ldquo;hallucinates&rdquo; or responds inappropriately isn&rsquo;t just a bug, it&rsquo;s an unacceptable reputational risk. And if it shares sensitive data, it&rsquo;s an even more serious violation. While many focused on text generation, we focused on <strong>control</strong>.</p>
<p>For this reason, we worked on <a href="https://www.drupal.org/project/ai/issues/3518963"><strong>defining so-called &ldquo;Guardrails&rdquo;</strong></a>, intelligent rules that control and guide AI behavior, effectively limiting it to ensure it respects the guidelines, values, and objectives of each organization.</p>
<p>In our vision, <strong>guardrails are not optional</strong>: they are a strategic requirement to make AI <strong>deployable</strong> in real contexts.</p>
<h3 id="definition-of-guardrails-in-drupal-ai">Definition of guardrails in Drupal AI</h3>
<p>Luca&rsquo;s contribution on this front was substantial, starting with the <strong>very definition of the Guardrails concept in Drupal</strong> (which revealed a much greater breadth than initially hypothesized).</p>
<ul>
<li><strong>Guardrails are in fact necessary as a cross-cutting layer for all interactions with LLMs</strong>: modules, agents, chatbots, content generators, etc. Importantly, guardrails are also essential for securing <strong>communications with other external systems</strong>, protecting the exchange of parameters and sensitive data, such as those managed through MCP.</li>
<li><strong>They must analyze both user input</strong>, sanitizing data before it reaches the LLM (e.g., removing personal data or blocking forbidden topics), <strong>and AI output</strong>, analyzing the response before it&rsquo;s shown to the user.</li>
<li>In case of a failed check, a guardrail can <strong>completely block the execution</strong> of a request (&ldquo;Sorry, I can&rsquo;t respond due to policy violation&rdquo;) <strong>or rewrite input/output by removing problematic data</strong> (&ldquo;Here&rsquo;s the response omitting personal information&rdquo;).</li>
<li>A single filter is typically not sufficient. Rather, a &ldquo;<strong>Set of Guardrails</strong>&rdquo; is needed: multiple guardrails that operate simultaneously, each specialized in different types of controls (e.g., one dedicated to personal information, one to forbidden content, one dedicated to user permissions).</li>
</ul>
<p>Our contribution culminated in the creation of a <strong>new plugin architecture to manage guardrails</strong>. The solution is highly customizable: it not only supports the configuration of individual guardrails, but also the ability to combine multiple controls in different sets.</p>
<p>Both controls on user inputs (pre-LLM controls) and on AI output (post-LLM controls) are supported, and it&rsquo;s possible to define different checks in the two phases, for different needs. Last but not least, guardrails of two distinct types are implemented: deterministic type (regex) and non-deterministic type (LLM-based topic detection).</p>
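<p>The mechanics of a guardrail set can be sketched in a few lines of Python. This is a conceptual illustration only: the actual Drupal AI implementation is a PHP plugin architecture, and every class and function name below is invented for the example. It shows the two guardrail types described above (deterministic regex vs. topic-based) and the two possible reactions (rewriting vs. blocking):</p>
<pre><code>import re
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    text: str  # possibly rewritten input/output

class RegexPIIGuardrail:
    """Deterministic guardrail: masks e-mail addresses instead of blocking."""
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

    def check(self, text: str) -> Verdict:
        return Verdict(True, self.EMAIL.sub("[REDACTED]", text))

class ForbiddenTopicGuardrail:
    """Blocks execution outright when a forbidden topic appears.
    (In the real module this kind of check can also be LLM-based.)"""
    def __init__(self, forbidden):
        self.forbidden = [w.lower() for w in forbidden]

    def check(self, text: str) -> Verdict:
        if any(w in text.lower() for w in self.forbidden):
            return Verdict(False, "Sorry, I can't respond due to policy violation.")
        return Verdict(True, text)

def run_guardrail_set(guardrails, text):
    """Apply each guardrail in turn; a single failure stops the pipeline,
    otherwise each guardrail may rewrite the text for the next one."""
    for guardrail in guardrails:
        verdict = guardrail.check(text)
        if not verdict.allowed:
            return verdict
        text = verdict.text
    return Verdict(True, text)
</code></pre>
<p>The same <code>run_guardrail_set</code> pipeline can be configured once for the pre-LLM phase (on user input) and once, with different checks, for the post-LLM phase (on AI output).</p>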
<p>For decision-makers, the value of this contribution is immense: guardrails transform Drupal AI from a promising technological experiment to a reliable, secure, and enterprise-ready platform, where AI always works <em>for</em> the business, and never against it.</p>
<h3 id="support-for-bedrock-and-data-sovereignty">Support for Bedrock and data sovereignty</h3>
<p>During experiments, we explored the main Guardrails solutions already available, with a particular focus on AWS Bedrock, one of the most powerful solutions on the market.</p>
<p>Our tests confirmed excellent compatibility with many languages (including Italian), particularly for the PII masking functions: specialized guardrails that remove personal information without blocking execution.</p>
<p>But it&rsquo;s important to tell the reality honestly: <strong>an external (extra-European) service introduces privacy implications, compliance questions, and technological dependence (vendor lock-in).</strong></p>
<p>These concerns are particularly relevant in regulated contexts (or in public administration), where it must be seriously evaluated which data is transmitted externally, how it&rsquo;s processed, what residual risk remains.</p>
<p>For this reason, while supporting Bedrock, <strong>our architecture is designed to be agnostic</strong>. It allows integrating other solutions in the future (such as Azure, Google Cloud, <a href="http://guardrails.ai/">Guardrails.ai</a>) and, a very interesting alternative, also local models, to ensure data sovereignty.</p>
<h3 id="streaming-guardrails-evolutions">Streaming, guardrails, evolutions</h3>
<p>Another technical point we addressed is the <strong>complexity of guardrails in the presence of streaming responses</strong>, i.e., when the AI sends each piece of the response as it generates it, and not just the complete output at the end of generation.</p>
<p>For user experience, streaming is a highly appreciated feature. However, in the presence of streaming, it&rsquo;s not enough to perform a final output check, because the user might already have been exposed to inappropriate information. Rather, guardrail control must be performed at each update of the response from the LLM (at each new output token).</p>
<p>Validating output token-by-token, managing tool calls in the flow, and simultaneously activating guardrail sets at each update is a non-trivial issue.</p>
<p>The temporary solution was to <strong>disable streaming when guardrails are active</strong>, pending a more robust approach. It&rsquo;s an important detail for usability, moving from a &ldquo;safe&rdquo; solution to &ldquo;safe and pleasant to use.&rdquo;</p>
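<p>To see why streaming makes this hard, here is a minimal Python sketch of per-token validation. It is purely illustrative (the function names and the truncation message are invented, not the module&rsquo;s behavior): the accumulated output is re-checked at every new token, so nothing unsafe is ever shown to the user, at the cost of running the check on every update:</p>
<pre><code>def guarded_stream(token_stream, check):
    """Wrap a token stream: re-validate the accumulated output at every
    new token and stop emitting on the first failed check."""
    buffer = ""
    emitted = 0
    for token in token_stream:
        buffer += token
        if not check(buffer):
            # The part already emitted passed earlier checks; we only
            # withhold the offending continuation.
            yield " [response truncated by guardrail]"
            return
        # Emit only the newly validated portion.
        yield buffer[emitted:]
        emitted = len(buffer)
</code></pre>
<p>Running a whole set of guardrails (some of them LLM-based, hence slow) inside that per-token loop, while also handling tool calls mid-stream, is exactly the non-trivial part described above.</p>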
<p>And there&rsquo;s another front that will need to be addressed in 2026: <strong>guardrails for multimedia content</strong> (images, videos). It&rsquo;s a topic still to be explored in depth, but we already know it can&rsquo;t be left to chance.</p>
<h2 id="ai-agents-in-production-asynchronous-tasks-and-streaming-with-symfony-messenger-to-overcome-php-limitations">AI Agents in production: asynchronous tasks and streaming with Symfony Messenger (to overcome PHP limitations)</h2>
<p>AI Agents are a completely different entity from simple chatbots of the past. A complex agent plans, executes multiple steps, calls tools, handles errors, queries databases, processes data. In short, Agents require time to &ldquo;reason,&rdquo; with processes that can last even several minutes.</p>
<p>And this is where the classic synchronous model of a PHP web request shows its limits: timeouts, sessions that close, UI that interrupts execution, difficulty with retry/monitoring.</p>
<p>The problem is therefore: how do we run complex agents on a Drupal site, without the page timing out? A real problem, which also emerged on a project for one of our clients.</p>
<p>To address this limitation, we developed a solution and a PoC of <a href="https://www.drupal.org/project/ai/issues/3493260"><strong>&ldquo;AI Agent Runner&rdquo; based on Symfony Messenger</strong></a>, with two clear objectives:</p>
<ol>
<li><strong>asynchronous execution</strong> of agentic tasks (robust, retryable, monitorable), decoupling agent execution from the user&rsquo;s web request.</li>
<li><strong>streaming of responses</strong> to the frontend when a fluid conversational experience is needed, while continuing to support synchronous execution with (almost) the same code.</li>
</ol>
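<p>The pattern behind objective 1 &mdash; a thin dispatcher that queues the task and returns immediately, with a background worker that retries on failure &mdash; can be illustrated with Python&rsquo;s standard-library queue. This is a conceptual stand-in for Symfony Messenger with invented names, not the PoC&rsquo;s actual code:</p>
<pre><code>import queue
import threading

task_queue = queue.Queue()
results = {}

def agent_worker():
    """Consume agent tasks off the queue; failures are retried a few
    times instead of killing the user's web request."""
    while True:
        task_id, run, attempts = task_queue.get()
        try:
            results[task_id] = run()
        except Exception:
            if attempts < 3:
                task_queue.put((task_id, run, attempts + 1))  # retry later
            else:
                results[task_id] = "failed"
        finally:
            task_queue.task_done()

threading.Thread(target=agent_worker, daemon=True).start()

def dispatch(task_id, run):
    """The 'web request' side: enqueue and return immediately, while the
    agent keeps working (for minutes, if needed) in the background."""
    task_queue.put((task_id, run, 0))
    return {"status": "queued", "task_id": task_id}
</code></pre>
<p>In the real architecture the queue is a Symfony Messenger transport and the worker is a separate long-running process, which is what makes the execution monitorable and independent of PHP request timeouts.</p>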
<p><a href="https://www.youtube.com/watch?v=gnuEwL1S9Gc"><img src="/images/blog/drupal-ai-contributions-2025/AI_20Agents_20on_20Symfony_20Messenger_20PoCs_20-_20Drupal_20AI_20Initiative_20Webinar_205-7_20screenshot.png" alt="AI Agents on Symfony Messenger PoCs - Drupal AI Initiative Webinar 5-7 screenshot"></a></p>
<p>This contribution was also brought to a public context: a <a href="https://www.youtube.com/watch?v=gnuEwL1S9Gc"><strong>technical webinar</strong></a> organized together with the team coordinating the Drupal AI Initiative, in which Luca showed how Symfony Messenger can become a key piece to overcome the architectural limitations of Drupal and PHP when talking about agents.</p>
<p>For a decision maker, the point is very simple: with synchronous execution alone, agents remain beautiful but limited and fragile prototypes. With an asynchronous architecture, <strong>automation can become complex and reliable</strong>.</p>
<h2 id="rag-and-typesense">RAG and Typesense</h2>
<p>One of the quickest ways to lose trust in AI is to ask it something it should know… and see a plausible but wrong answer. RAG (Retrieval-Augmented Generation) was born for this: to anchor responses to a real and controllable knowledge base.</p>
<p>SparkFabrik has historically been the maintainer of the <a href="https://www.drupal.org/project/search_api_typesense"><strong>Search API Typesense</strong></a> module, which has given us deep experience both in &ldquo;classic&rdquo; search and in the evolution towards semantic search and vector databases. With the advent of AI, it was natural for us to evolve this tool.</p>
<p>Roberto Peruzzo worked on implementing <a href="https://www.drupal.org/project/search_api_typesense/issues/3543841"><strong>Typesense support for RAG in the AI module</strong></a>, with an approach aimed at improving relevance and efficiency.</p>
<p><img src="/images/blog/drupal-ai-contributions-2025/Typsesense_20RAG_20Agent.png" alt="Typesense RAG Agent"></p>
<p>One of the most interesting elements we introduced is the concept of the &ldquo;Router Agent,&rdquo; a <strong>sub-agent</strong> that determines &ldquo;where to search.&rdquo; In a complex system with many indexes (e.g., Technical Manuals, Blog, Products), querying everything is inefficient. Given a user input, the main agent activates the sub-agent, which selects the most relevant collection to query on Typesense (based on intent), before formulating the final response.</p>
<p><img src="/images/blog/drupal-ai-contributions-2025/Typesense_20RAG_20sub-agent.png" alt="Typesense RAG sub-agent"></p>
<p>This reduces &ldquo;noise&rdquo; (irrelevant data sent to AI) and hallucinations, lowers token costs, and drastically increases response accuracy. Furthermore, dynamic routing avoids having to hardcode the mapping of collections and adding new collections does not require new code.</p>
<p>In short, you get a <strong>better user experience, scalable code and cost savings</strong>.</p>
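<p>As a rough illustration of the control flow: in the real module the routing decision is made by an LLM sub-agent, but this Python sketch replaces it with naive keyword matching (collection names and keywords invented for the example) just to show the idea of picking one collection before querying:</p>
<pre><code># Hypothetical mapping of Typesense collections to intent keywords.
# In the real Router Agent an LLM infers the intent instead.
COLLECTIONS = {
    "technical_manuals": ["install", "configure", "error"],
    "blog": ["news", "announcement", "opinion"],
    "products": ["price", "buy", "catalog"],
}

def route(query: str) -> str:
    """Pick the single collection whose keywords best match the query,
    so only relevant documents are retrieved and sent to the LLM."""
    scores = {
        name: sum(kw in query.lower() for kw in kws)
        for name, kws in COLLECTIONS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "blog"  # fallback collection
</code></pre>
<p>Because the mapping is data (or, in the real case, inferred by the LLM), adding a new collection doesn&rsquo;t require new routing code.</p>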
<h2 id="mcp-drupal-becoming-context-and-toolchain-for-agents">MCP: Drupal becoming context and toolchain for agents</h2>
<p>MCP (Model Context Protocol) is one of the &ldquo;pieces&rdquo; that make Drupal conversant with external systems, tools, and agents in a standardized way. It&rsquo;s one of the next big frontiers, and in 2025 we worked and contributed on MCP in multiple phases.</p>
<h3 id="exploration-and-direction">Exploration and direction</h3>
<p>We started with an exploration of the state of the Drupal MCP ecosystem: understanding what was already possible, what was missing, and where it made sense to invest to generate value also for our real projects. In this phase, we also hypothesized the use of <strong>JSON:API via MCP</strong> to build decoupled frontends.</p>
<h3 id="poc-generating-a-react-frontend-via-jsonapi--tools">PoC: generating a React frontend via JSONAPI + Tools</h3>
<p>After exploration, we carried out a more concrete experiment with the goal of testing the effectiveness of the approach and the potential of the protocol: creating a complete React app through an AI connected to the Drupal MCP server, capable of reading its structure and content (content types, fields…).</p>
<p>Realized through a very detailed prompt divided into different phases (&ldquo;spec-driven development&rdquo;), the experiment was a success: it demonstrated that Drupal can expose its structure (&ldquo;intelligence&rdquo;) to external systems, allowing AI to act not only as a text generator, but as a <strong>junior developer</strong> guided by the CMS context (and supervised).</p>
<h3 id="brainstorming-on-concrete-use-cases-and-work-in-progress">Brainstorming on concrete use cases and work in progress</h3>
<p>Subsequently, we did an internal brainstorming to define solid and repeatable use cases, for different stakeholders. Two emerged:</p>
<ul>
<li>for Drupal developers: starting from an SDC model and generating entities, paragraphs, and fields via MCP</li>
<li>for frontend developers/site builders: building a decoupled frontend using only the MCP server</li>
</ul>
<p>We chose to deepen the first use case, still ongoing today with exploration on automating backend structure generation starting from SDC templates.</p>
<h3 id="improving-mcp-in-drupal-and-becoming-maintainer">Improving MCP in Drupal and becoming maintainer</h3>
<p>In parallel, we also contributed directly to improving the implementation of MCP in Drupal.</p>
<p>Roberto became maintainer of the <a href="https://www.drupal.org/project/mcp_client"><strong>Drupal MCP Client</strong></a> module, modernizing it to support the new <strong>Tool API</strong> and improving security with support for authentication <em>headers</em> (such as <em>bearer token</em>). Last but not least, Roberto also contributed bugfixes to the <a href="https://www.drupal.org/project/mcp">Drupal MCP</a> module to make Drupal an MCP server.</p>
<p>Why does MCP matter also for those looking at business? Because it&rsquo;s an &ldquo;enabling&rdquo; technology. It&rsquo;s what allows agents to not just be conversational interfaces, but <strong>operators</strong> that know how to use tools, interact with APIs, execute actions, and automate processes.</p>
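<p>The &ldquo;operator&rdquo; idea boils down to a registry of tools that a model can invoke with structured arguments. This toy Python sketch uses hypothetical names (it is not the MCP protocol wire format, nor the Drupal modules&rsquo; API) purely to show how a server routes a model&rsquo;s tool call to an action:</p>
<pre><code>TOOLS = {}

def tool(name):
    """Decorator that registers a function as a callable tool."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("create_content_type")
def create_content_type(machine_name, fields):
    # Hypothetical action an agent could trigger on the CMS side.
    return {"created": machine_name, "fields": fields}

def handle_tool_call(name, arguments):
    """Server side: route the model's tool call to the registered handler."""
    if name not in TOOLS:
        return {"error": f"unknown tool: {name}"}
    return TOOLS[name](**arguments)
</code></pre>
<p>In the real protocol the server also advertises each tool&rsquo;s schema, which is what lets an agent discover Drupal&rsquo;s structure before acting on it.</p>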
<h2 id="lara-translate-integration-enterprise-ai-translations-superior-multi-lingual-content">Lara Translate Integration: enterprise AI translations, superior multi-lingual content</h2>
<p>Translation is one of the most immediate and concrete use cases of AI in enterprise content systems: multi-country sites, catalogs, knowledge bases, documentation, compliance.</p>
<p>Parallel to infrastructural work, SparkFabrik responded to a specific client need by developing the <a href="https://www.drupal.org/project/tmgmt_laratranslate"><strong>TMGMT Lara Translate module</strong></a>. Lara is an AI model specialized in translation tasks, with qualitatively superior output to generalist LLMs (ChatGPT, Gemini, Llama…) while maintaining lexical and stylistic consistency.</p>
<p>The module integrates Lara as a translation provider within Drupal&rsquo;s TMGMT (Translation Management Tool) system, supporting all of Lara&rsquo;s unique features and with attention also to practical details such as effective text splitting and error handling logic.</p>
<p>An interesting aspect is that this contribution also triggered direct contact, opening collaboration opportunities and demonstrating how project needs can transform into valuable contributions for the community.</p>
<h2 id="beyond-code-content-outreach-and-community">Beyond code: content, outreach and community</h2>
<p>Contributing doesn&rsquo;t just mean writing code. It also means <strong>telling the story</strong>: explaining what we&rsquo;re doing, what&rsquo;s ready, what&rsquo;s evolving, and especially <em>why</em> it&rsquo;s worth investing. In 2025, we&rsquo;re really proud of the collaborative work between the technical and marketing teams to translate our work into shared culture.</p>
<h3 id="content-drupal-ai-logs-and-blog-articles">Content: Drupal AI Logs and blog articles</h3>
<p>We told the story of some of the contributions also on our social channels through the <a href="https://www.linkedin.com/search/results/content/?keywords=%23DrupalAILog&amp;origin=FACETED_SEARCH&amp;sid=qeq"><strong>Drupal AI Logs</strong></a> series, making the contribution process transparent, transforming technical updates into usable insights and (we hope) inspiring other contributors and teams to experiment.</p>
<p>A great effort was also dedicated to producing <a href="/it/tag/drupal?hsLang=en"><strong>blog articles dedicated to Drupal and Drupal AI</strong></a> (and in our <a href="https://tech.sparkfabrik.com/en/">tech blog</a>), sharing our perspectives and promoting the ecosystem of our favorite CMS (in 2025 alone we shared 13 vertical articles on Drupal).</p>
<h3 id="drupalcamp-italy-2025-two-talks-two-complementary-perspectives">DrupalCamp Italy 2025: two talks, two complementary perspectives</h3>
<p>In November 2025, we didn&rsquo;t just contribute to organizing <a href="https://www.drupalcampitaly.it/">DrupalCamp Italy</a>. We also brought the Drupal AI Initiative to the stage, with two talks that, together, tell our approach well.</p>
<ul>
<li><strong>The state of the Drupal AI Initiative.</strong><br>
Luca Lusso shared the overall state of the initiative and some of our contributions, offering an insider vision, realistic and hype-free. The goal: to inspire the Italian community and promote the initiative. (<a href="https://www.youtube.com/watch?v=UJI4ThU2Izg&amp;list=PL9purqp7U2jxr0mE-Q5TA-8eThiGQ7gIv&amp;index=2">Talk recording</a> and <a href="https://docs.google.com/presentation/d/1EeVKxJj8LEH-96fq-1rppw-0sMj0rCymqreZSAcUF_I/edit?slide=id.g3a192d937c0_0_52#slide=id.g3a192d937c0_0_52">slides</a>).</li>
<li><strong>AI Agents in Drupal and MCP (with PoC)</strong><br>
Roberto showed how to implement AI agents in Drupal, both via UI and via code. In the talk, he showed a <strong>PoC of an AI Customer Assistant for e-commerce</strong>: an agent that converses with the user, searches for products, suggests options, adds to cart, and notifies events in Slack via MCP. (<a href="https://www.youtube.com/watch?v=GzsSWgq1ioA&amp;list=PL9purqp7U2jxr0mE-Q5TA-8eThiGQ7gIv&amp;index=4">Talk recording</a> and <a href="https://docs.google.com/presentation/d/1mKu7HGiuIAPsSIKIoGp0zZVgsjJC2wmXqoo0wpMYcTA/edit?usp=sharing">slides</a>).</li>
</ul>
<p><a href="https://docs.google.com/presentation/d/1mKu7HGiuIAPsSIKIoGp0zZVgsjJC2wmXqoo0wpMYcTA/edit?usp=sharing"><img src="/images/blog/drupal-ai-contributions-2025/AGENTI_20AI_20IN_20DRUPAL_20-_20Roberto_20Peruzzo_20-_20DrupalCamp_20Italy_202025.png" alt="AI Agents in Drupal - Roberto Peruzzo - DrupalCamp Italy 2025"></a><a href="https://docs.google.com/presentation/d/1EeVKxJj8LEH-96fq-1rppw-0sMj0rCymqreZSAcUF_I/edit?slide=id.g3a192d937c0_0_52#slide=id.g3a192d937c0_0_52"><img src="/images/blog/drupal-ai-contributions-2025/Where_20are_20we_20with_20the_20AI_20__initiative__20-_20Luca_20Lusso_20-_20DrupalCamp_20Italy_202025.png" alt="Where are we with the AI initiative - Luca Lusso - DrupalCamp Italy 2025"></a></p>
<h3 id="technical-webinar-ai-agents-on-symfony-messenger-for-robust-ai-agents">Technical webinar: AI Agents on Symfony Messenger for &ldquo;robust&rdquo; AI agents</h3>
<p>As mentioned above, we also contributed to an <a href="https://www.youtube.com/watch?v=gnuEwL1S9Gc"><strong>international technical webinar</strong></a>, organized with the team coordinating the Drupal AI Initiative (James Abrahams and Marcus Johansson).</p>
<p>Luca showed his PoC based on Symfony Messenger, demonstrating agents capable of executing asynchronous tasks and supporting message streaming (even in case of tool invocation).</p>
<p>The goal was to show how Symfony Messenger can become a key piece to overcome the architectural limitations of Drupal and PHP when talking about agents. Interestingly, in the same webinar a second demo focused on FlowDrop was shared, and other community members joined the discussion.</p>
<h3 id="an-extra-that-comes-from-the-initiative-workshop-on-github-copilot">An extra that comes from the initiative: workshop on GitHub Copilot</h3>
<p>In the study and R&amp;D path related to the Drupal AI Initiative, we also invested in training and sharing on AI development tools, such as GitHub Copilot.</p>
<p>We shared our experience in a technical and practical workshop held by Luca Lusso, to explain in practice how to integrate Copilot as a partner in the development cycle in VS Code.</p>
<p><a href="https://www.youtube.com/live/-kHCGTTFbZE?si=CfcdFsifyqsAeOAM&amp;t=1701"><strong>The Copilot workshop</strong></a> is free and freely accessible, not vertical on Drupal but intentionally more open to the development world.</p>
<p><a href="/en/eventi/workshop-copilot/"><img src="/images/blog/drupal-ai-contributions-2025/SparkFabrik_20Connect_20-_20Workshop_20Copilot_20Cover.png" alt="SparkFabrik Connect - Workshop Copilot Cover"></a></p>
<h2 id="a-year-of-innovation-and-a-look-to-the-future">A year of innovation and a look to the future</h2>
<p>2025 closed with an extremely positive balance for the Drupal ecosystem, a year of construction in which SparkFabrik contributed to laying the fundamental bricks: a solid development environment, a security framework (Guardrails), a robust execution engine (Async Runner on Symfony Messenger), and interoperability protocols (MCP).</p>
<p>Under the guidance of our CTO Paolo Mainardi, <strong>our contributions focused on high-value areas and critical nodes to bring AI into real client projects</strong>, with strong emphasis on governance and security, and on autonomous AI agents capable of performing complex actions (from site building to configuration).</p>
<p>We also recognized (and promoted) the need to support completely <strong>open source and local stacks</strong> (technological agnosticism), anticipating the digital sovereignty and data privacy needs of European enterprise clients, subject to regulations such as GDPR, the AI Act, and the Cyber Resilience Act.</p>
<p>**Summary of main technical contributions 2025 (Drupal AI)</p>
<hr>
<p><strong>Area</strong> |  <strong>Main Contribution</strong> |  <strong>Impact / Result</strong><br>
<strong>Infrastructure</strong> |  Unified DDEV development environment |  Standardization of setup for all global contributors.<br>
<strong>Security</strong> |  Guardrail Agents Architecture |  Framework for AI input/output security, viewable and configurable.<br>
<strong>Performance</strong> |  AI Agents on Symfony Messenger: Async Runner &amp; Streaming |  Execution of complex agents without timeouts, synchronous and asynchronous execution, reactive UX thanks to real-time streaming.<br>
<strong>Search</strong> |  RAG and Router Agent |  Intelligent system to route queries to the correct index, reducing costs and noise.<br>
<strong>Interoperability</strong> |  Drupal MCP Server &amp; Client |  POC for generating React frontend from Drupal via AI with MCP; modernization of MCP client.<br>
<strong>Localization</strong> |  TMGMT Lara Translate Provider |  Integration of professional translation service into Drupal editorial workflow.</p>
<p>Looking to 2026, our vision is clear. Drupal is no longer &ldquo;just a CMS.&rdquo; It&rsquo;s positioning itself as the ideal platform for <strong>Enterprise AI</strong> : a place where data is structured, secure, and accessible, and where artificial intelligence isn&rsquo;t a toy just for &ldquo;wow effect,&rdquo; but a tool governed by precise compliance and security rules. It&rsquo;s exactly the type of AI needed for production, not for demos.</p>
<p>SparkFabrik will continue to be at the forefront. We won&rsquo;t just use AI; we&rsquo;ll continue to write the code that makes it possible, open, and secure for everyone, for real projects.</p>
<hr>
<p>At SparkFabrik, we combine deep technical expertise in Drupal with advanced skills in AI integration, composable architectures, and enterprise governance. Our <a href="https://www.sparkfabrik.com/en/services/drupal/">Drupal development services</a> cover the entire spectrum: from strategic consulting on the AI readiness of your current architecture, to implementation of custom AI-powered solutions, through to security, ongoing support, and optimization.</p>
<p>If you&rsquo;re evaluating how to integrate AI into Drupal (or into a broader enterprise ecosystem), and you want to do it with a partner who has really put their hands in it (at the product, community, and delivery level), <a href="https://www.sparkfabrik.com/en/contact-us/">let&rsquo;s talk</a>.</p>
<hr>
<p>This article is part of our series dedicated to Drupal. To explore other aspects of the platform, we invite you to consult our previous articles on <a href="/en/drupal-cms-the-new-era-of-enterprise-content-management?hsLang=en">features and benefits</a>, <a href="/en/drupal-cms-a-comparison-with-the-main-alternatives?hsLang=en">comparison with alternatives</a>, <a href="/en/blog/migrazione-a-drupal-cms-guida-completa/">migration strategies</a>, <a href="/en/blog/drupal-cms-sicurezza-compliance-settori-regolamentati/">security and compliance</a>, <a href="/en/blog/architettura-composable-con-drupal-cms/">composable architecture</a>, <a href="/en/blog/guides/design-system-ux-accessibilita-ai/">Design System</a>, <a href="/en/blog/drupal-headless/">Drupal headless omnichannel</a>, and <a href="/en/blog/drupal-ai-panoramica-novita-visione-di-sparkfabrik/">overview and news of Drupal AI</a>.</p>
]]></content:encoded><media:content url="https://www.sparkfabrik.com/images/blog/drupal-ai-contributions-2025/SparkFabrik_20Drupal_20AI_202025_20Contributions_20-_20Featured_20Image.png" medium="image"/><enclosure url="https://www.sparkfabrik.com/images/blog/drupal-ai-contributions-2025/SparkFabrik_20Drupal_20AI_202025_20Contributions_20-_20Featured_20Image.png" type="image/jpeg"/><category>Drupal</category><category>AI</category></item><item><title>AWS vs Azure vs GCP: a guide for choosing your business cloud</title><link>https://www.sparkfabrik.com/en/blog/cloud-provider-comparison-aws-azure-gcp-alibaba/</link><pubDate>Tue, 13 Jan 2026 00:00:00 +0000</pubDate><author>SparkFabrik Team</author><guid>https://www.sparkfabrik.com/en/blog/cloud-provider-comparison-aws-azure-gcp-alibaba/</guid><description>Complete comparison of AWS, Azure, and GCP for costs, security, and performance. Discover which cloud provider is the strategic choice for your business.</description><content:encoded><![CDATA[<div class="tldr">
  <span class="tldr__label">TL;DR</span>
  <div class="tldr__body">
    A comprehensive comparison of AWS, Azure, GCP, and Alibaba Cloud covering costs, features, security, and use cases. AWS excels in scalability, Azure in Microsoft integration, GCP in AI/ML and analytics. Pricing is similar across the three hyperscalers: the choice depends on organizational context, business priorities, and digital maturity.
  </div>
</div>
<p>In a market that rewards agility, rapid adaptation, and the ability to scale when demand grows, <strong>adopting the cloud</strong> is no longer simply an IT decision, but a <strong>strategic choice that directly influences competitiveness and business results</strong>.</p>
<p>As we explored in depth in our <a href="/it/blog/guides/cloud-transformation-vantaggi-per-le-aziende/"><strong>guide on cloud transformation</strong></a>, companies that successfully integrate cloud technologies into their processes achieve <strong>concrete results</strong> in terms of <strong>time-to-market speed, flexibility in resource allocation, and support for continuous innovation</strong>.</p>
<h2 id="a-strategic-choice-cloud-models-and-key-market-players">A strategic choice: cloud models and key market players</h2>
<p>To navigate the market correctly, it&rsquo;s useful to first understand <strong>the three main cloud computing models</strong>: Public Cloud, Private Cloud, and Hybrid Cloud. Let&rsquo;s start with this simple distinction.</p>
<ul>
<li><strong>Public Cloud</strong> means the infrastructure resides on shared platforms managed by an external provider, making it ideal for startups or e-commerce businesses that need to scale rapidly without high initial investments.</li>
<li><strong>Private Cloud</strong>, on the other hand, is reserved for a single company (or group) that manages the infrastructure directly or outsources it: this option becomes relevant when compliance, security, or absolute control of data are business priorities.</li>
<li><strong>Hybrid Cloud</strong> combines the two models, allowing critical workloads to be managed in a private environment while leveraging the benefits of public cloud for variable loads or innovative services: a fantastic mix that, however, requires skills and orchestration.</li>
</ul>
<p>Having clarified the different infrastructure models (you can explore them further <a href="/it/blog/hybrid-vs-public-vs-private-cloud-guida-alla-scelta-in-azienda/">in this article</a>), the next obvious question is: <strong>which provider should you choose?</strong> In the global market, players like <strong>AWS</strong>, <strong>Azure</strong>, and <strong>Google Cloud Platform</strong> stand out, offering a wide range of services, a widespread geographical presence, and the solidity of a mature ecosystem. But how do you navigate these options without getting lost in technicalities?</p>
<p>Alongside the big names, <strong>Alibaba Cloud</strong> also deserves attention: the undisputed leader in the Asian market and a strategic partner for companies looking with interest at <strong>internationalization in China and the Asia-Pacific region</strong>. A concrete example? <strong>Caleffi</strong>, a company that, with the goal of strengthening its digital presence in China, chose a multi-cloud approach together with the SparkFabrik team (details can be found in our case study: <a href="https://www.sparkfabrik.com/en/success-stories/caleffi-china/"><strong>Caleffi Hydronic Solutions</strong></a>).</p>
<h2 id="cost-analysis-comparing-pricing-models">Cost analysis: comparing pricing models</h2>
<p>Once you&rsquo;ve understood the models and main providers, an often decisive aspect of the choice comes into play: <strong>the cost structure</strong>. Understanding the <strong>different pricing models</strong> is essential, both for those evaluating cloud migration and for more mature companies wanting to expand their infrastructure consciously. The topic concerns IT Managers, CTOs, and even Marketing Managers of e-commerce businesses planning large-scale digital investments.</p>
<p>The <strong>three main models</strong> worth keeping in mind are pay-as-you-go, reserved instances, and volume-based discounts. Here&rsquo;s how they differ:</p>
<ul>
<li>The <strong>pay-as-you-go model</strong> allows you to pay only for resources actually consumed, ideal for startups or projects with variable demand.</li>
<li><strong>Reserved instances</strong>, with one- or multi-year commitments, offer significant discounts if you anticipate constant usage.</li>
<li>Finally, <strong>volume-based discounts</strong> reward those who reach high usage or purchase thresholds through Enterprise contracts.</li>
</ul>
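<p>To make the trade-off between the first two models concrete, here is a minimal sketch in Python. The hourly rates below are purely illustrative assumptions for the example, not actual provider pricing:</p>

```python
# Illustrative break-even comparison between pay-as-you-go and a
# one-year reserved instance. The rates below are invented for the
# example and are NOT actual provider pricing.

ON_DEMAND_HOURLY = 0.10   # $/hour, pay-as-you-go (assumed)
RESERVED_HOURLY = 0.06    # $/hour effective rate with a 1-year commitment (assumed)
HOURS_PER_YEAR = 24 * 365

def yearly_on_demand_cost(utilization: float) -> float:
    """Pay-as-you-go: you only pay for the hours actually used."""
    return ON_DEMAND_HOURLY * HOURS_PER_YEAR * utilization

def cheaper_option(utilization: float) -> str:
    """Reserved instances bill for the full year regardless of usage."""
    reserved = RESERVED_HOURLY * HOURS_PER_YEAR  # paid even if idle
    return "reserved" if reserved < yearly_on_demand_cost(utilization) else "pay-as-you-go"

print(cheaper_option(0.9))  # steady workload → "reserved"
print(cheaper_option(0.3))  # spiky, low average utilization → "pay-as-you-go"
```

<p>The pattern the sketch captures is general: the higher and steadier the average utilization, the more a commitment pays off; volatile workloads favor pay-as-you-go.</p>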
<p>Beyond pricing models, price, as mentioned, is one of the aspects that attracts the most attention when choosing a business cloud. So which is the most cost-effective provider: AWS, Azure, or GCP? In reality, <strong>in most cases, pricing doesn&rsquo;t represent a real differentiating factor</strong>. For equivalent usage scenarios, the main platforms offer very similar rates, at least in the most consolidated markets.</p>
<p>However, some variations emerge looking eastward. Alibaba Cloud, for example, offers significant advantages in the Asian context or for specific workloads, confirming itself as a strategic choice for those targeting those markets.</p>
<p>It&rsquo;s important to emphasize that simply comparing prices of individual resources isn&rsquo;t enough to determine overall cost-effectiveness: <strong>pricing must always be part of a broader evaluation</strong>, taking into account both technical needs and anticipated growth dynamics.</p>
<p>On this note, before continuing with the provider comparison, you might be interested in exploring the <a href="/it/landing/soluzioni-cloud-transformation/"><strong>ebook</strong></a> <strong>we created based on our experience in Cloud Transformation projects for SMEs and Enterprise companies</strong>. Discover <strong>which cloud solutions to choose</strong> to remain competitive and keep pace with continuous market demands (ebook in Italian).</p>
<h2 id="main-features-and-services-what-each-provider-offers">Main features and services: what each provider offers</h2>
<p><strong>Features</strong> are fully part of the &ldquo;broader picture&rdquo; to evaluate beyond price. To facilitate navigation among the many possibilities, the following tables offer a concise comparison of the three main providers (AWS, Azure, and GCP), divided by functional area.</p>
<h3 id="compute-macchine-virtuali-serverless"><strong>Compute (Virtual Machines, Serverless)</strong></h3>
<table>
<thead>
<tr>
<th><strong>Provider</strong></th>
<th><strong>Strength</strong></th>
<th><strong>Business need</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>AWS</strong></td>
<td>Wide variety of instances and configurations, strong scalability</td>
<td>Startups scaling rapidly or e-commerce with variable demand peaks</td>
</tr>
<tr>
<td><strong>Azure</strong></td>
<td>Native integration with Windows/Office and Microsoft-centric environments</td>
<td>Companies already on the Microsoft stack wanting to extend to the cloud without disruption</td>
</tr>
<tr>
<td><strong>GCP</strong></td>
<td>Highly customizable Compute Engine, optimized for containers and microservices</td>
<td>CTO focused on rapid modernization, agile deployment, minimal latency</td>
</tr>
</tbody>
</table>
<h3 id="storage--database">Storage &amp; Database</h3>
<table>
<thead>
<tr>
<th><strong>Provider</strong></th>
<th><strong>Strength</strong></th>
<th><strong>Business need</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>AWS</strong></td>
<td>S3 for historical storage, vast database offering (RDS, Aurora, DynamoDB)</td>
<td>IT Manager wanting a consolidated and reliable data infrastructure</td>
</tr>
<tr>
<td><strong>Azure</strong></td>
<td>Blob Storage with high durability, integrated SQL Server/Cosmos DB</td>
<td>Microsoft-centric companies wanting to cover relational and NoSQL scenarios without radical changes</td>
</tr>
<tr>
<td><strong>GCP</strong></td>
<td>Differentiated storage tiers (Standard/Nearline/Coldline), big-data-friendly databases</td>
<td>Data-driven CEOs/CTOs or e-commerce wanting customization and analytical capabilities</td>
</tr>
</tbody>
</table>
<h3 id="networking">Networking</h3>
<table>
<thead>
<tr>
<th><strong>Provider</strong></th>
<th><strong>Strength</strong></th>
<th><strong>Business need</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>AWS</strong></td>
<td>Global presence of regions and availability zones, VPC services, Direct Connect</td>
<td>Global companies requiring resilience and international coverage</td>
</tr>
<tr>
<td><strong>Azure</strong></td>
<td>Virtual Networks + ExpressRoute with Microsoft ecosystem</td>
<td>Enterprise with Microsoft on-premises infrastructure wanting smooth cloud connection</td>
</tr>
<tr>
<td><strong>GCP</strong></td>
<td>High-performance global network, optimized for low latency</td>
<td>Digital marketing and commerce with distributed users requiring smooth experience</td>
</tr>
</tbody>
</table>
<h3 id="ai--machine-learning">AI &amp; Machine Learning</h3>
<table>
<thead>
<tr>
<th><strong>Provider</strong></th>
<th><strong>Strength</strong></th>
<th><strong>Business need</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>AWS</strong></td>
<td>SageMaker, Comprehend, Rekognition: broad ML/AI coverage</td>
<td>Startup or IT wanting to build data-science capabilities internally</td>
</tr>
<tr>
<td><strong>Azure</strong></td>
<td>Cognitive Services + strong integration in Microsoft ecosystem</td>
<td>Enterprises less mature in ML wanting to start with integrated tools</td>
</tr>
<tr>
<td><strong>GCP</strong></td>
<td>Recognized leader in AI/ML with Vertex AI, AutoML, TensorFlow support</td>
<td>E-commerce or startup focused on advanced personalization and data-driven models</td>
</tr>
</tbody>
</table>
<p>What emerges from this comparison? In summary, the choice of ideal provider depends on context and business objectives.</p>
<p><strong>AWS stands out for scalability and variety of services</strong>: it&rsquo;s often the preferred solution for startups, e-commerce with demand peaks, and companies seeking reliable, resilient, and global infrastructure.</p>
<p><strong>Azure</strong>, for its part, <strong>represents the natural choice for organizations already structured on the Microsoft ecosystem</strong>, focused on continuity, native integration, and a smooth transition from on-premises to cloud.</p>
<p><strong>Google Cloud Platform</strong> (GCP), finally, <strong>offers advantages in customization, advanced analytics, and data-driven solutions</strong>.</p>
<h2 id="security-and-compliance-how-providers-protect-your-business-data">Security and compliance: how providers protect your business data</h2>
<p>And security? When it comes to the cloud, security is in all respects an essential requirement that impacts, among other things, business continuity. A fundamental concept to understand here is the <strong>shared responsibility model</strong>: the <strong>cloud provider</strong> assumes responsibility for the security of the physical infrastructure, network, and virtualization layer, while <strong>the customer company</strong> remains responsible for security <em>in</em> the cloud, that is, data, applications, configurations, and user access.</p>
<p>But how does this translate into practice? The main cloud providers continuously invest to offer <strong>tools, certifications, and standards</strong> that help companies ensure security and compliance, each with its peculiarities and strengths:</p>
<ul>
<li><strong>Amazon Web Services (AWS)</strong> supports over 140 standards and certifications, including PCI-DSS, HIPAA/HITECH, FedRAMP, GDPR, and FIPS 140-3.</li>
<li><strong>Microsoft Azure</strong> has broad support for regional and local requirements, and places strong emphasis on data compliance and integrated identity within its ecosystem.</li>
<li><strong>Google Cloud Platform</strong> (GCP) has certifications in privacy areas like GDPR/CCPA and offers well-documented &ldquo;shared responsibility,&rdquo; with native security tools to help customers meet their share.</li>
</ul>
<p>Beyond standards, it&rsquo;s fundamental to evaluate <strong>where data resides</strong>: the ability to choose the region or data center where workloads run is now a decisive point for meeting European regulations (and beyond). All three providers offer the ability to select European regions, including specifically in Italy.</p>
<p>To explore best practices and guidelines for protecting business data in the cloud, we refer you to the <a href="/it/blog/guides/cloud-security-come-proteggere-i-dati-nell-era-del-cloud/"><strong>guide on Cloud Security</strong></a>, where we answer an increasingly urgent question: <strong>how do you protect data in the cloud era?</strong></p>
<h2 id="performance-and-scalability-to-support-your-growth">Performance and scalability to support your growth</h2>
<p>Once data security and compliance in the cloud are guaranteed, the next step concerns <strong>the ability to grow your infrastructure without compromising performance.</strong> When a company needs to handle more users, launch new services, or face traffic peaks, it&rsquo;s fundamental that the chosen cloud offers global presence and the ability to scale: this is where concepts like <strong><em>number of regions</em></strong> and <strong><em>availability zones</em></strong> represent the operational foundation. A <em>region</em> is a distinct geographical area, while a <em>zone</em> is an isolated sub-area within the region, connected with reduced latency and designed to ensure high availability.</p>
<p>If the company uses a cloud with geographically distributed presence, it can place services and applications in the region closest to its users to ensure low latency, or in multiple zones to guarantee resilience and higher uptime.</p>
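<p>The placement logic can be illustrated with a minimal sketch in Python. The region names and latency figures below are invented for the example, not real provider measurements:</p>

```python
# Illustrative sketch: choosing the region closest to a user base from
# measured round-trip latencies. Region names and latency values are
# invented for the example, not real provider data.

LATENCY_MS = {
    "europe-west": {"Milan": 12, "Frankfurt": 18, "New York": 95},
    "us-east":     {"Milan": 98, "Frankfurt": 90, "New York": 8},
}

def best_region(user_city: str) -> str:
    """Return the region with the lowest latency for the given city."""
    return min(LATENCY_MS, key=lambda region: LATENCY_MS[region][user_city])

print(best_region("Milan"))     # → europe-west
print(best_region("New York"))  # → us-east
```

<p>For resilience, the same idea extends to zones: deploying replicas across multiple zones in the chosen region protects uptime against a single-zone failure.</p>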
<p>In terms of geographical and infrastructure coverage, <strong>here&rsquo;s how the three main providers position themselves</strong>:</p>
<ul>
<li><strong>AWS</strong> reports a network composed of over 120 Availability Zones distributed across 38 global geographical regions.</li>
<li><strong>Microsoft Azure</strong> boasts availability in more than 46 regions (with additional regions coming) and support for Availability Zones designed for latency below 2 ms between zones in the same region.</li>
<li><strong>Google Cloud Platform</strong> operates in locations in the Americas, Europe, Asia Pacific, Middle East, and Australia, with regions and zones designed to offer scalability and availability.</li>
</ul>
<p>In summary, <strong>AWS is currently the provider with the widest geographical coverage</strong> globally, thanks to the high number of regions and availability zones already operational. <strong>Azure</strong> follows closely with constant growth and widespread presence, especially in Europe and rapidly developing areas. Google Cloud Platform offers broad international distribution, particularly appreciated for its ability to bring performance-driven services to all major areas of the world.</p>
<p>This vast coverage allows all three platforms to offer high performance globally, but AWS, on paper, remains the reference point for those seeking maximum geographical distribution and enterprise-level resilience.</p>
<h2 id="how-to-choose-the-right-provider-our-field-experience">How to choose the right provider: our field experience</h2>
<p>After exploring the main evaluation criteria in detail, it becomes evident <strong>how complex the comparison between cloud providers is</strong>, requiring a broad vision and deep technical and business knowledge. Precisely for this reason, relying on <strong>a specialized company like SparkFabrik</strong> can make the difference: our role as a strategic partner allows us to support clients both in the objective evaluation of the solutions offered by AWS, Azure, and Google Cloud Platform, and in choosing the path most coherent with the real <strong>needs and digital transformation objectives of the organization</strong>.</p>
<p>In our approach, provider selection is never limited to a &ldquo;technological&rdquo; choice, but also takes into account <strong>organizational context</strong>, <strong>business priorities</strong>, and <strong>level of digital maturity</strong>. <strong>Here are three practical examples</strong> where we supported our clients in choosing the best solution. In the tables below you&rsquo;ll find three different scenarios, each paired with the most suitable provider and the <strong>case study</strong> that tells the story of the project we delivered.</p>
<h3 id="scenario-1-provider-aws">Scenario 1: AWS</h3>
<table>
<thead>
<tr>
<th>Ideal Use Scenario</th>
<th>Why This Choice</th>
<th>Case Study</th>
</tr>
</thead>
<tbody>
<tr>
<td>Rapid scalability and a mature ecosystem that supports fast and variable growth (e.g., growing startups, e-commerce with peaks)</td>
<td>AWS offers a very broad set of services, wide global presence, and operational maturity that reduce technological risk</td>
<td>A concrete example is the <a href="https://www.sparkfabrik.com/en/success-stories/il-giornale/">project realized with Il Giornale On Line srl</a>, which saw the cloud-native migration of infrastructure, optimization of operating costs, and increased resilience on AWS</td>
</tr>
</tbody>
</table>
<h3 id="scenario-2-microsoft-azure">Scenario 2: Microsoft Azure</h3>
<table>
<thead>
<tr>
<th>Ideal Use Scenario</th>
<th>Why This Choice</th>
<th>Case Study</th>
</tr>
</thead>
<tbody>
<tr>
<td>Enterprise and hybrid environments, when the company already has a strong Microsoft ecosystem or on-premises infrastructure</td>
<td>Azure excels in integration with enterprise environments, hybrid cloud, and centralized governance</td>
<td>A <a href="https://www.sparkfabrik.com/en/success-stories/cloud-native-luxury-fashion/">project we realized in the luxury fashion sector</a> is an example: a journey toward enterprise cloud native, with application containerization and implementation on Azure of a code management platform and CI/CD pipelines, thus ensuring flexibility and operational continuity</td>
</tr>
</tbody>
</table>
<h3 id="scenario-3-google-cloud-platform">Scenario 3: Google Cloud Platform</h3>
<table>
<thead>
<tr>
<th>Ideal Use Scenario</th>
<th>Why This Choice</th>
<th>Case Study</th>
</tr>
</thead>
<tbody>
<tr>
<td>Cloud native projects, data analytics, AI / ML, management of large data flows, or data-driven startups</td>
<td>GCP is recognized for its analytics capabilities, data-driven infrastructure, and advanced tools</td>
<td>A concrete example is the <a href="https://www.sparkfabrik.com/en/case-studies/la-scuola-sei/">project developed for Gruppo Editoriale La Scuola</a>, which saw end-to-end digital modernization: the adoption of Drupal and Angular on GCP allowed reaching new levels of scalability and performance</td>
</tr>
</tbody>
</table>
<h2 id="the-hybrid-and-multicloud-approach-the-flexibility-of-aws-azure-and-gcp">The Hybrid and Multicloud approach: the flexibility of AWS, Azure, and GCP</h2>
<p>Do you really have to choose just one provider? The answer, of course, is no. In business reality, the cloud choice doesn&rsquo;t have to be &ldquo;all or nothing.&rdquo; <strong>Often the most effective strategy involves a mix</strong>, combining a primary provider with on-premises extensions or other clouds.</p>
<p>If you want to explore the advantages, risks, and real use cases of these strategies, you&rsquo;ll find a complete analysis in our guide <a href="/en/blog/pro-e-contro-del-multi-cloud-alla-tua-azienda-conviene/"><strong>Pros and Cons of Multi-Cloud: Is It Right for Your Company?</strong></a> A parallel analysis is also available in the guide <a href="/it/blog/cose-hybrid-cloud-quando-sceglierlo-esempi-vantaggi/"><strong>What is Hybrid Cloud: When to Choose It, Examples, Advantages</strong></a>. If, instead, you&rsquo;re interested in understanding how to orchestrate a multicloud environment efficiently, you can consult the in-depth article <a href="/it/blog/multi-cloud-orchestration-consigli/"><strong>Multi-cloud Orchestration: Tips</strong></a>.</p>
<p>Returning to the AWS vs Azure vs GCP comparison: below you&rsquo;ll find, for each provider, the main offering in terms of hybrid/multicloud and how these solutions respond to enterprise IT needs for scalability, governance, and integration.</p>
<h3 id="aws">AWS</h3>
<table>
<thead>
<tr>
<th>Hybrid / Multicloud Solution</th>
<th>What It Offers Concretely</th>
<th>For Which Business Need</th>
</tr>
</thead>
<tbody>
<tr>
<td>AWS Outposts - local extension of AWS infrastructure</td>
<td>Allows running AWS services on-premises with the same APIs, tools, and management as public cloud; ideal for low latency, data residency, integration with local systems</td>
<td>Companies with existing infrastructure or latency/data residency requirements wanting a gradual transition toward cloud or hybrid</td>
</tr>
</tbody>
</table>
<h3 id="azure">Azure</h3>
<table>
<thead>
<tr>
<th>Hybrid / Multicloud Solution</th>
<th>What It Offers Concretely</th>
<th>For Which Business Need</th>
</tr>
</thead>
<tbody>
<tr>
<td>Azure Arc - consistent management of on-premises, edge, and multicloud resources</td>
<td>Allows managing virtual machines, Kubernetes clusters, and databases wherever they are, unifying governance, security, and automation</td>
<td>Enterprises that already have a Microsoft ecosystem or on-premises/hybrid infrastructure and want to extend to cloud with centralized governance</td>
</tr>
</tbody>
</table>
<h3 id="google-cloud-platform">Google Cloud Platform</h3>
<table>
<thead>
<tr>
<th>Hybrid / Multicloud Solution</th>
<th>What It Offers Concretely</th>
<th>For Which Business Need</th>
</tr>
</thead>
<tbody>
<tr>
<td>Anthos - software platform for applications distributed across multiple clouds</td>
<td>Allows deploying and managing Kubernetes applications on GCP, AWS, or on-premises; facilitates rapid modernization, container migration, and data-driven applications</td>
<td>Startups or digital-first companies wanting to leverage data analytics, AI / ML, containerization and wanting to avoid lock-in to a single provider</td>
</tr>
</tbody>
</table>
<h2 id="beyond-comparison-the-strategic-partner-for-your-cloud-transformation">Beyond comparison: the strategic partner for your cloud transformation</h2>
<p>Choosing the most suitable cloud provider, as we&rsquo;ve seen in the comparison between AWS, Azure, GCP, and Alibaba, is only the starting point of a broader cloud transformation journey. The real value doesn&rsquo;t end with the chosen platform: it is built over time through an <strong>articulated strategy of adoption, management, and optimization</strong> capable of adapting as your business evolves.</p>
<p>To face this challenge, relying on specialized skills like those of the SparkFabrik team can facilitate not only solution selection, but also the management of all subsequent phases: from architecture design to daily operations, to continuous optimization of performance and costs. Approaches like <a href="https://www.sparkfabrik.com/en/services/cloud-native-services/managed-services/"><strong>Managed Cloud Services</strong></a> and the adoption of <strong>dedicated DevOps platforms</strong> help reduce complexity and free up resources for the most strategic business areas.</p>
<p>If you want to evaluate a cloud transformation path tailored to your company, you can request a consultation and explore the options most suited to your scenario.</p>
]]></content:encoded><media:content url="https://www.sparkfabrik.com/images/blog/cloud-provider-comparison-aws-azure-gcp-alibaba/AWS_2c_20Azure_20GCP_20e_20Alibaba_20Cloud_20-_20Blog_20Featured_20Image.png" medium="image"/><enclosure url="https://www.sparkfabrik.com/images/blog/cloud-provider-comparison-aws-azure-gcp-alibaba/AWS_2c_20Azure_20GCP_20e_20Alibaba_20Cloud_20-_20Blog_20Featured_20Image.png" type="image/jpeg"/><category>Digital Transformation</category><category>Cloud Management</category></item><item><title>What is code refactoring: techniques, AI tools &amp; corporate strategies</title><link>https://www.sparkfabrik.com/en/blog/code-refactoring-guide/</link><pubDate>Wed, 10 Dec 2025 00:00:00 +0000</pubDate><author>SparkFabrik Team</author><guid>https://www.sparkfabrik.com/en/blog/code-refactoring-guide/</guid><description>Discover what code refactoring is, the techniques, and how to use AI tools like GitHub Copilot for your application modernization. A strategic guide.</description><content:encoded><![CDATA[<p>In the software lifecycle, there always comes a moment when code needs a review. It can happen, for example, that during the project&rsquo;s growth, temporary solutions were adopted that need fixing, or that time and modifications have made the code too complex. In these cases, <strong>code refactoring</strong> comes into play: a practice often underestimated, but fundamental for maintaining applications that are efficient, high-quality and sustainable over time.</p>
<h2 id="what-is-code-refactoring-and-why-its-a-strategic-lever-for-business">What is code refactoring and why it&rsquo;s a strategic lever for business</h2>
<p><strong>Code refactoring</strong> is the process of restructuring existing source code without changing its external behavior. It&rsquo;s not about fixing bugs, as happens in debugging, nor about introducing new features: <strong>its goal is to improve the internal quality of the software</strong>, making it simpler to understand, test and evolve over time. In other words, it&rsquo;s an investment in code health, not a cosmetic intervention.</p>
<p>The real impact of refactoring emerges when we observe it as <strong>a strategic lever for business</strong>. Every application accumulates <strong>technical debt</strong> over time (those necessary quick choices or temporary solutions that, if not managed, eventually start to slow down development). Through systematic refactoring, this debt is reduced, freeing up resources and time to focus on innovation.</p>
<p>Cleaner and more coherent code allows teams to work with greater speed and confidence, fostering <strong>organizational agility</strong>, that is, the ability to respond quickly to new market or customer needs. Moreover, clear and easily understandable code translates into a reduction of regression risks (that is, the risk that code modifications damage other functionalities or introduce new bugs) and a reduction in lead time for each new release.</p>
<p>But refactoring, as we&rsquo;ll explore further in this article, is not just a technical practice; it&rsquo;s also a key element in <strong>keeping development activities sustainable and fast</strong>, in situations where code quality becomes a growth accelerator and not, as sometimes happens, an obstacle.</p>
<h2 id="the-main-refactoring-techniques-a-practical-approach">The main refactoring techniques: a practical approach</h2>
<p>Doing <strong>refactoring is not a one-shot activity</strong> but rather a series of <strong>targeted and systematic</strong> interventions that, over time, simplify the complexity of the codebase and make it easier to evolve.</p>
<p>One of the most widespread techniques is <strong>Composing Method</strong>, which consists of breaking down functions or methods that are too long into smaller, more readable blocks. Let&rsquo;s take an example. Imagine a function that, in the context of <strong>an e-commerce application</strong>, handles calculating the total price of an order: it applies any discounts, adds taxes and finally returns a report. If all these operations are enclosed in a single long and complex function, understanding the role of each step becomes difficult both for those writing the code and for those who will need to modify or test it in the future. Through refactoring, each operation becomes a distinct function, with a clear name and precise responsibility. The result is code that&rsquo;s more linear, clearer and more easily isolatable for a potential unit test.</p>
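<p>The order-total example above can be sketched in Python as follows; all function and variable names are illustrative, not taken from a real codebase:</p>

```python
# Composing Method: one long order-total function broken into small
# functions, each with a clear name and a single responsibility.
# All names and rates here are illustrative.

def apply_discount(subtotal: float, discount_rate: float) -> float:
    """Return the subtotal after applying the discount percentage."""
    return subtotal * (1 - discount_rate)

def add_taxes(amount: float, tax_rate: float) -> float:
    """Return the amount with taxes added."""
    return amount * (1 + tax_rate)

def build_report(subtotal: float, total: float) -> str:
    """Return a one-line summary of the order."""
    return f"subtotal={subtotal:.2f} total={total:.2f}"

def order_total(subtotal: float, discount_rate: float, tax_rate: float) -> str:
    # The top-level function now reads like a description of the steps.
    discounted = apply_discount(subtotal, discount_rate)
    total = add_taxes(discounted, tax_rate)
    return build_report(subtotal, total)

print(order_total(100.0, 0.10, 0.22))  # → subtotal=100.00 total=109.80
```

<p>Each extracted function can now be unit-tested in isolation, which is exactly the benefit the technique aims for.</p>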
<p>Another essential technique is <strong>conditional simplification</strong>, which aims to make decision logic more readable. Nested expressions like &ldquo;if X is true and Y is false or Z is null&rdquo; not only slow down reading, but increase the probability of errors. Refactoring in this case leads to introducing intermediate variables with explanatory names, or replacing complex conditions with methods that clearly express the intention: &ldquo;isEligibleForDiscount&rdquo; is much more immediate than a long chain of logical operators.</p>
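<p>A minimal Python sketch of conditional simplification, with eligibility rules invented for the example:</p>

```python
# Conditional simplification: a nested condition replaced by a predicate
# with an explanatory name. The eligibility rules are invented for the
# example.

def is_eligible_for_discount(customer: dict) -> bool:
    """A customer qualifies if active, with 5+ orders or a loyalty card."""
    is_active = customer.get("status") == "active"
    is_frequent = customer.get("orders", 0) >= 5
    has_loyalty_card = bool(customer.get("loyalty_card", False))
    return is_active and (is_frequent or has_loyalty_card)

# The call site now states the intention instead of the mechanics:
customer = {"status": "active", "orders": 7, "loyalty_card": False}
if is_eligible_for_discount(customer):
    print("discount applied")
```

<p>The intermediate variables double as documentation: a future reader sees at a glance which business rules the condition encodes.</p>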
<p>Finally, with <strong>Abstract Refactoring</strong> we intervene on duplicated elements or common concepts distributed in the code, extracting them into a reusable class or function. For example, if different parts of the application calculate a commission or validate an input in similar ways, creating a shared abstraction reduces redundancy and simplifies future maintenance. Every modification thus becomes centralized and coherent throughout the system, strengthening the overall stability of the application.</p>
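<p>The commission example can be sketched in Python like this; the rates and function names are illustrative assumptions:</p>

```python
# Abstract Refactoring: a commission formula previously duplicated in
# several modules is extracted into one shared function, so a change to
# the rule happens in a single place. Rates are illustrative.

def commission(amount: float, rate: float = 0.05) -> float:
    """Single, shared definition of how a commission is computed."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return round(amount * rate, 2)

# Both call sites now depend on the one abstraction:
def marketplace_fee(order_amount: float) -> float:
    return commission(order_amount)               # default rate

def partner_payout_fee(payout_amount: float) -> float:
    return commission(payout_amount, rate=0.03)   # partner-specific rate

print(marketplace_fee(200.0))     # → 10.0
print(partner_payout_fee(200.0))  # → 6.0
```

<p>Any future change to rounding or validation now propagates consistently to every caller, which is the stability gain the paragraph describes.</p>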
<h2 id="how-to-do-refactoring-with-ai-a-guide-to-github-copilot">How to do refactoring with AI: a guide to GitHub Copilot</h2>
<p><strong>Artificial intelligence</strong> is becoming a valuable ally in <strong>code refactoring</strong> as well, and among the most effective tools there&rsquo;s undoubtedly <strong>GitHub Copilot</strong>: an assistant that interprets the code context and proposes cleaner, more efficient or idiomatic rewrites. Used methodically, Copilot can significantly accelerate software maintenance and improvement work.</p>
<p>Do you already know this tool? Whether the answer is yes or no, we suggest diving deeper with the workshop: <a href="https://www.youtube.com/watch?v=-kHCGTTFbZE"><strong>So you think you know Copilot</strong></a>?</p>
<p>But how does Copilot concretely integrate into the refactoring process? The first step consists of <strong>understanding the existing code</strong>, leveraging Copilot Chat commands like /explain. This function generates a textual explanation of what a code block does, clarifying intents, dependencies and potential critical points. It&rsquo;s a quick way to orient yourself in complex or poorly documented code portions, before intervening. You can also ask to add comments to the code to make it clearer even in future readings.</p>
<p>Once the necessary understanding is acquired, it&rsquo;s possible to <strong>ask for generic improvement suggestions</strong>, for example with prompts like &ldquo;How can I make this method more readable?&rdquo; or &ldquo;Is there a more efficient way to handle this logic?&rdquo;. Copilot will propose alternatives that respect the semantics of the original code, but simplify its structure or improve its performance.</p>
<p>Finally, the real value emerges when moving to <strong>targeted refactoring instructions</strong>. You can explicitly ask &ldquo;Reorganize this code applying the Composing Method pattern&rdquo; or &ldquo;Extract the validation logic into a separate function&rdquo;. Copilot will generate a refactored version consistent with the request, keeping the software behavior intact.</p>
<p>It&rsquo;s fundamental however to remember that Copilot is an <strong>intelligent assistant, not a human substitute</strong>. Proposals must always be reviewed, understood and validated by the development team. The expert judgment of the programmer remains the balance point between automation and quality. Because, on closer inspection, every refactoring must not only be technically correct, it must also respect the architecture and business objectives.</p>
<p>Speaking of artificial intelligence, have you already downloaded our <a href="https://landing.sparkfabrik.com/gli-agenti-ai-che-trasformano-i-processi-aziendali?hsLang=en"><strong>white paper</strong></a>?</p>
<h2 id="agentic-refactoring-claude-code-and-the-model-context-protocol-mcp">Agentic refactoring: Claude Code and the Model Context Protocol (MCP)</h2>
<p>In the White Paper you&rsquo;ll find all the potential of AI agents, and yes, it also concerns refactoring. <strong>Agentic refactoring</strong> indeed represents the new frontier in the evolution of AI-assisted development.</p>
<p>This is demonstrated by the evolution of tools like <strong>GitHub Copilot</strong> itself, which today no longer limits itself to suggesting improvements or code snippets, but integrates different interaction modes graduated by task complexity (you&rsquo;ll find everything in our <a href="https://www.youtube.com/watch?v=-kHCGTTFbZE"><strong>Workshop on Copilot</strong></a>). In particular, while <strong>Ask mode</strong> is perfect for brainstorming without altering the code and <strong>Edit</strong> allows supervised, targeted modifications to open files, it is with <strong>Agent</strong> mode that things really change pace.</p>
<p><strong>Agentic systems act directly on the code: they operate autonomously across multiple files and invoke external tools on their own to reach the goal</strong>, understanding the project&rsquo;s context, dependencies and global structure. AI is no longer just a suggester, but an active collaborator that can execute complex transformations in an autonomous yet controlled way.</p>
<p>In this scenario, <strong>Claude Code</strong> comes into play: one of the most advanced tools currently available for agentic coding and refactoring. Based on a next-generation language model and integrated with the <strong>Model Context Protocol (MCP)</strong>, Claude Code can semantically access the entire codebase of a project. Thanks to MCP, the agent truly understands the application context: it knows how the different parts of the software connect to each other, which modules depend on which, and where to intervene without compromising system stability. This makes deep refactoring possible - such as reorganizing entire components or replacing architectural patterns - while preserving the safety and coherence of the code.</p>
<p>As <strong>Enrico Zimuel</strong> emphasized in our event <a href="https://www.youtube.com/watch?v=f-bFIb7ao2s&amp;t=1272s"><strong>Talk on my machine: GenAI x business</strong></a>: the ability of AI to understand code written by other developers (and to explain it to other developers) is the first crucial step toward truly effective collaboration between human and machine. Agentic refactoring is born precisely from this deep understanding: no longer a set of specific suggestions, but an intelligent process that interprets the team&rsquo;s intent and translates it into concrete actions on the code, accelerating the development cycle and increasing the overall quality of the software.</p>
<p><a href="https://www.youtube.com/watch?v=f-bFIb7ao2s&amp;t=1272s">Watch the video: Software, uncertainty and generative artificial intelligence</a></p>
<h2 id="when-to-do-refactoring-and-above-all-when-to-avoid-it">When to do refactoring (and, above all, when to avoid it)</h2>
<p>Understanding <strong>when to do refactoring</strong> is as important as knowing how to execute it. Not every moment is suitable, and intervening in the wrong way can transform a good intention into a risk for project stability. Thinking of refactoring as a form of preventive maintenance helps to insert it into the development cycle with criteria, avoiding perfectionist drifts or waste of time.</p>
<p><strong>When to do it:</strong> the ideal moment is <strong>before adding new features</strong>, especially if these need to integrate with dated or barely readable code. Refactoring at this stage allows building on solid foundations, reducing the probability of introducing bugs. The period <strong>after launching a project</strong> is another excellent occasion: once the software is in production, the team has a clearer view of the problematic areas and can optimize them in a targeted way. Another frequent case is when a <strong>module becomes excessively complex</strong> or hard to maintain; here, refactoring restores linearity and favors collaboration between teams.</p>
<p><strong>When to avoid it:</strong> refactoring should not be tackled under the <strong>pressure of tight deadlines</strong>. In those cases, the goal must remain the delivery of functional value, not code perfection. It is also discouraged on <strong>obsolete software or software slated for decommissioning</strong>: improving something that will soon be abandoned means investing time without return. Finally, it should never be started if the code <strong>is not covered by tests</strong>: without a safety net, every modification risks introducing regressions that are difficult to detect.</p>
<p>A good criterion, ultimately, is to consider refactoring as a <strong>measurable investment</strong>: it should be executed when the benefits in terms of stability, development speed or technical debt reduction clearly exceed the cost of the intervention.</p>
<h2 id="from-refactoring-to-app-modernization-our-strategy-the-5-rs">From refactoring to app modernization: our strategy (the 5 Rs)</h2>
<p><strong>Refactoring</strong> is not just a code cleanup practice, but one of the central levers of a broader path: <strong>Application Modernization</strong>, the strategic path that evolves IT architectures toward greater efficiency, scalability and innovation. (If you want to explore this process in depth, read the article <a href="https://blog.sparkfabrik.com/it/application-modernization-cose-vantaggi?hsLang=en"><strong>Application modernization: what it is and what are the benefits</strong></a>.)</p>
<p>When an organization decides to evolve its digital ecosystem, it doesn&rsquo;t just optimize what already exists: it defines an overall strategy to make applications more agile, scalable and ready for change. In this context, <strong>refactoring</strong> becomes the <strong>balance point between preserving existing value and the push toward innovation</strong>.</p>
<p>Among modernization strategies, there&rsquo;s also replatforming, often the ideal choice for those who want to migrate from legacy applications to a new cloud platform, without having to rewrite all the software from scratch. To learn more: <a href="/en/replatforming-guide?hsLang=en"><strong>Replatforming: from legacy to a new cloud ecosystem (+ examples)</strong></a></p>
<p>Regarding app modernization, at SparkFabrik we adopt a structured approach based on the <strong>5 Rs</strong> framework: <strong>Rehosting</strong>, <strong>Refactoring</strong>, <strong>Rearchitecting</strong>, <strong>Rebuilding</strong> and <strong>Replacing</strong>. Each &ldquo;R&rdquo; represents a different level of intervention, from simple movement of the application to a cloud infrastructure to its complete reconstruction. Refactoring, in particular, occupies an intermediate position: it allows improving the architecture and code without disrupting the overall functioning of the system, making it more maintainable, performant and ready for further evolutions and for cloud optimization.</p>
<p>In this sense, refactoring is often the <strong>first concrete step toward modernization</strong>, the one that prepares the ground for deeper initiatives like containerization or transition toward a microservices architecture. It&rsquo;s the phase in which quality is sown that will allow subsequent transformations to take root in a stable and lasting way.</p>
<p>To explore our approach in depth and discover how the <strong>5 Rs</strong> guide modernization strategies at SparkFabrik, visit the page dedicated to the <a href="https://www.sparkfabrik.com/it/servizi/application-modernization/"><strong>Application Modernization service</strong></a>.</p>
<h2 id="reducing-technical-debt-a-measurable-benefit-of-refactoring">Reducing technical debt: a measurable benefit of refactoring</h2>
<p>We&rsquo;ve already mentioned <strong>technical debt</strong>, which is the IT equivalent of a variable-rate loan: it allows gaining speed in the short term, but generates interest that becomes increasingly heavy over time. Every shortcut taken in development (duplicated logic, an outdated dependency, a poorly scalable structure) accumulates until it slows down the pace of innovation and increases maintenance costs. Teams find themselves dedicating more time to understanding and correcting code than to developing new functionalities, with a direct impact on productivity and time-to-market.</p>
<p><strong>Refactoring</strong> is the main tool for reducing this debt in a systematic and sustainable way. By improving the internal quality of the software, it increases its readability and coherence, reducing errors and simplifying future evolution. In business terms, it means <strong>cutting down the hidden costs of development</strong> (those related to complexity and slowness) and returning to the team the ability to innovate quickly. It&rsquo;s an investment that doesn&rsquo;t produce immediate value, but builds the foundations for <strong>more solid technological and organizational growth in the long term.</strong></p>
<h2 id="choosing-the-right-partner-for-your-application-transformation">Choosing the right partner for your application transformation</h2>
<p>Modernizing applications doesn&rsquo;t just mean updating technology: it means carrying out <strong>a strategic transformation</strong> that aligns your company&rsquo;s digital assets with business objectives, making the company more flexible, scalable and ready for future challenges. Tackling this transition requires <strong>specialized</strong> skills and, above all, <strong>an overall vision</strong> that unites cloud-native architectures, DevOps practices, process automation, and modular, API-first design strategies.</p>
<p>At <strong>SparkFabrik</strong> we work with a <strong>structured approach</strong>, guided by application modernization best practices and our experience in the field. We follow every phase of the journey: from rehosting to the adoption of microservices, artificial intelligence integration and UX/UI redesign. The goal? Transform legacy applications so they truly respond to business needs.</p>
<p>Want to see how this approach can make a difference in your company too? Explore our <strong>application modernization services</strong> or write to us: sometimes a new point of view changes the direction of the project.</p>
]]></content:encoded><media:content url="https://www.sparkfabrik.com/images/blog/guida-al-code-refactoring/featured-image.webp" medium="image"/><enclosure url="https://www.sparkfabrik.com/images/blog/guida-al-code-refactoring/featured-image.webp" type="image/webp"/><category>Digital Transformation</category></item><item><title>NIS2 and DORA: Strategies and Best Practices for Cloud Native</title><link>https://www.sparkfabrik.com/en/blog/nis2-dora-strategies-best-practices-cloud-native/</link><pubDate>Fri, 05 Dec 2025 00:00:00 +0000</pubDate><author>SparkFabrik Team</author><guid>https://www.sparkfabrik.com/en/blog/nis2-dora-strategies-best-practices-cloud-native/</guid><description>Practical strategies for implementing NIS2 and DORA requirements in cloud-native architectures. DevSecOps approach and best practices for compliance.</description><content:encoded><![CDATA[<div class="tldr">
  <span class="tldr__label">TL;DR</span>
  <div class="tldr__body">
    Practical strategies for implementing NIS2 and DORA requirements in Cloud Native environments: DevSecOps framework, shift-left security, automated security controls in CI/CD pipelines (SAST, SCA, DAST), operational resilience with chaos engineering and structured backups. The article includes 5 best practices and a 6-phase roadmap for incremental, sustainable compliance.
  </div>
</div>
<p><strong>The new NIS2 and DORA regulations</strong> have marked a transformation in <strong>cybersecurity</strong> in Europe. Adapting to this new standard is a complex challenge for thousands of companies, particularly those operating in <strong>Cloud Native environments</strong>. However, it&rsquo;s not just about meeting an obligation; it&rsquo;s about seizing the opportunity to strengthen operational resilience and gain a competitive advantage.</p>
<p>In this second deep dive of our series on NIS2 and DORA, we explore the concrete strategies and <strong>best practices</strong> we&rsquo;ve identified for effective implementation, starting with a <strong>DevSecOps</strong> approach and a culture of security. If you haven&rsquo;t yet, you can start with our <a href="/en/blog/nis2-dora-impatto-sulla-cybersecurity-nel-cloud-native/">first article</a> to discover the characteristics of the regulations and the specific challenges for the world of Cloud Native architectures.</p>
<h2 id="devsecops-the-framework-for-integrating-security">DevSecOps: The Framework for Integrating Security</h2>
<p><a href="/en/blog/cloud-devsecops/">The DevSecOps approach</a> represents a solid foundation for implementing the requirements of NIS2 and DORA in Cloud Native environments, thanks to its ability to integrate security into every phase of the software development lifecycle, including the initial stages of development.</p>
<h3 id="how-to-implement-security-shift-left-in-practice">How to Implement Security Shift-Left in Practice</h3>
<p><strong>Considering security aspects from the earliest stages of the development cycle</strong> is a significant change for organizations and a major improvement in the security score of any application.</p>
<p>This approach aligns perfectly with the <strong>proactive risk management requirements</strong> of the new European regulations. Furthermore, it allows for identifying and resolving vulnerabilities not only more promptly but also when the cost of remediation is still low, thus providing benefits in terms of both security and business.</p>
<p>Some best practices for effectively implementing security shift-left are:</p>
<ul>
<li><strong>Threat Modeling</strong>: Adopt a structured internal process to identify potential threats and define appropriate countermeasures. Through dedicated sessions at the beginning of a project, security can be integrated directly into the application&rsquo;s design (“security by design”). Such sessions should be conducted not only in the initial phase but also before new functionalities or significant architectural changes. Furthermore, threat modeling becomes truly effective when it involves all relevant stakeholders (developers, architects, security specialists, and potentially other stakeholders). This way, not only are varied perspectives considered, but a shared understanding of security risks and consequent mitigation strategies is fostered.</li>
<li><strong>Security requirements as user stories</strong>: Integrate security requirements into the normal development workflow, treating them as user stories in the product backlog, with clear and measurable acceptance criteria. In other words, security aspects should be considered on par with other &ldquo;business&rdquo; functionalities and as integral parts of the product. This also helps to reflect and solidify a &ldquo;security by design&rdquo; culture, where security is a &ldquo;normal&rdquo; piece of the development process, not a later consideration.</li>
<li><strong>Security Champions</strong>: Identify developers with an interest in security and invest in their specific training. These champions will be the promoters of security best practices within the team and the entire organization, fostering their understanding and adoption. Identifying Security Champions is a particularly effective strategy for distributed teams or organizations with limited resources to dedicate to security topics.</li>
</ul>
<h3 id="automating-security-controls-in-the-development-cycle">Automating Security Controls in the Development Cycle</h3>
<p>As in many other areas, automation in the world of cybersecurity is one of the most effective ways to streamline operations and processes. With increasingly complex applications and infrastructures, integrating automated security controls is now an essential best practice.</p>
<p>A mature security approach involves implementing different types of automated controls to reduce risks and errors, avoid delays in the development cycle, identify vulnerabilities in a timely manner, and intervene quickly. The main security tools that can be integrated into development pipelines are summarized in the following table.</p>
<table>
<thead>
<tr>
<th><strong>Type of control</strong></th>
<th><strong>Objective</strong></th>
<th><strong>Common Tools</strong></th>
<th><strong>When to Apply It</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Static Application Security Testing (SAST)</strong></td>
<td>Identify vulnerabilities in source code without running the application, detecting security flaws such as SQL injection or cross-site scripting (XSS) from the earliest stages of development.</td>
<td>SonarQube, Checkmarx, Snyk Code</td>
<td>During build phases</td>
</tr>
<tr>
<td><strong>Software Composition Analysis (SCA)</strong></td>
<td>Detect vulnerabilities in dependencies and third-party libraries. Crucial for supply chain security, as it verifies that external components are secure.</td>
<td>OWASP Dependency-Check, Snyk, WhiteSource</td>
<td>When dependencies are updated</td>
</tr>
<tr>
<td><strong>Container Security</strong></td>
<td>Analyze container images to detect vulnerabilities, misconfigurations, and exposed secrets. Essential for preventing attacks in containerized environments.</td>
<td>Trivy, Clair, Anchore</td>
<td>Before deployment</td>
</tr>
<tr>
<td><strong>Infrastructure as Code (IaC) Security</strong></td>
<td>Verify infrastructure-as-code configurations, i.e. the management of IT infrastructure through code, preventing incorrect and insecure configurations before they are applied.</td>
<td>Checkov, Terrascan, tfsec</td>
<td>During the deployment phase</td>
</tr>
<tr>
<td><strong>Dynamic Application Security Testing (DAST)</strong></td>
<td>Test running applications by simulating attacks, identifying vulnerabilities that only emerge at runtime, such as authentication or session management errors.</td>
<td>OWASP ZAP, Burp Suite</td>
<td>In a staging environment</td>
</tr>
</tbody>
</table>
<p>Integrating these controls into CI/CD pipelines requires a <strong>balance between security and speed</strong>. While these tools act automatically, it is also important to configure them precisely to minimize false positives and prioritize vulnerabilities based on their severity and context. A gradual approach to implementation, starting with the most critical controls and then progressively extending coverage, can facilitate team adoption and integration into development processes.</p>
<p>Best practices also suggest defining clear policies on which vulnerabilities block the pipeline and which can be managed as non-blocking warnings, also formalizing the process for managing exceptions. This approach allows for maintaining the right balance between development agility and security, ensuring that critical vulnerabilities are addressed promptly.</p>
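<p>Such a policy can itself be expressed as code. The Python sketch below assumes a simplified finding format (real scanners such as Trivy or Snyk emit much richer JSON): it separates pipeline-blocking vulnerabilities from non-blocking warnings and models the formalized exception process as an allowlist of accepted finding IDs.</p>

```python
# Assumed policy: CRITICAL and HIGH findings block the pipeline.
# Severity names and the finding structure are illustrative only.
BLOCKING_SEVERITIES = {"CRITICAL", "HIGH"}


def gate(findings, allowlist=frozenset()):
    """Split scanner findings into pipeline-blocking issues and warnings.

    Each finding is a dict with 'id' and 'severity' keys (a simplification
    of real scanner output). IDs in `allowlist` represent formally accepted
    exceptions and are skipped entirely.
    """
    blocking, warnings = [], []
    for finding in findings:
        if finding["id"] in allowlist:
            continue  # documented, accepted risk
        bucket = blocking if finding["severity"] in BLOCKING_SEVERITIES else warnings
        bucket.append(finding)
    return blocking, warnings
```

<p>A CI job would run the scanner, feed its report through a gate like this, and fail the build only when the blocking list is non-empty, leaving warnings visible but non-blocking.</p>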
<p>The importance of these controls also extends to <strong>supply chain security</strong>, a crucial aspect for Cloud Native architectures. Supply chain security refers to the security of the software and hardware components that make up an application, including all dependencies, libraries, and container images that are not developed internally but come from external sources. Protecting the supply chain means ensuring that these components do not introduce vulnerabilities or malicious code into our systems, a fundamental requirement for compliance with regulations like NIS2 and DORA.</p>
<p>For a deeper understanding of this topic, we invite you to watch the <a href="https://www.youtube.com/watch?v=KPHtRtSyL_A">talk by our CTO Paolo Mainardi</a> (in Italian) who explains its importance and how to manage it effectively. For more insights on common vulnerabilities, mitigation strategies, case studies, and best practices, the recordings of our <a href="https://www.youtube.com/playlist?app=desktop&amp;list=PLSD9hiOyso87DPMDrFWpl_i83mXAi6Mzk">Talks On My Machine event dedicated to Supply Chain Security</a> are also available (in Italian).</p>
<h3 id="monitoring-and-threat-detection">Monitoring and Threat Detection</h3>
<p>NIS2 and DORA regulations place particular <strong>emphasis on the ability to identify, classify, and respond to security incidents in a timely manner</strong>. Cloud Native environments, being by their very nature distributed, ephemeral, and dynamic, require advanced approaches to monitoring and threat detection.</p>
<p>An effective monitoring system in a Cloud Native environment must certainly include:</p>
<ul>
<li>
<p><strong>Log centralization</strong>: Collect logs from all components of the ecosystem (applications, containers, orchestrators, infrastructure) and aggregate them into a single central repository. This provides complete visibility and facilitates event correlation, allowing for the identification, mapping, and patching of vulnerabilities and security breaches. To manage the volume and complexity of logs generated in Cloud Native environments, dedicated log management tools are available, including Elasticsearch, Splunk, or Loki. These solutions allow for storing and indexing complex, large-scale logs, and are equipped with advanced search and automatic analysis functionalities that facilitate the identification of anomalies.</p>
</li>
<li>
<p><strong>Runtime security monitoring</strong>: Identify anomalous behaviors that could indicate an ongoing attack, such as privilege escalation attempts, abnormal access to sensitive resources, or unauthorized communications. Runtime monitoring through dedicated tools is particularly important in containerized environments, given that the ephemeral nature of the components makes it difficult to identify anomalous behaviors.</p>
</li>
<li>
<p><strong>Network security</strong>: Implementing a zero-trust approach, where no communication is considered secure by default, is particularly suitable for Cloud Native environments. Other best practices include network segmentation, in-transit encryption, and granular controls over communications between services.</p>
</li>
<li>
<p><strong>Compliance dashboards</strong>: It is a good practice to create specific visualizations that track relevant metrics such as open vulnerabilities by severity, remediation times, and the implementation status of security controls required by regulations. A shared dashboard also helps to hold the team accountable for security and compliance issues.</p>
</li>
</ul>
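<p>For the log-centralization point above, a common prerequisite is emitting structured logs that an aggregator such as Loki or Elasticsearch can parse and index. A minimal Python sketch, where the logger and service names are placeholders:</p>

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line, so a central
    collector can parse, index and correlate events across services."""

    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "service": "checkout",  # placeholder: set per deployment
        })


handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("orders")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("payment authorized")  # emitted as a single JSON line
```

<p>Once every service emits one JSON object per line, correlation across the distributed system becomes a query rather than an archaeology exercise.</p>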
<h2 id="operational-resilience-and-business-continuity">Operational Resilience and Business Continuity</h2>
<p><strong>Operational resilience</strong>, the ability to maintain services even in the presence of incidents or disruptions, is a fundamental pillar of both NIS2 and DORA. Both regulations explicitly require it in order to ensure the continuity of key services in the European economy, with particular emphasis on the financial sector. Let&rsquo;s look at the main best practices for maximizing resilience.</p>
<h3 id="fault-tolerant-architectures">Fault-Tolerant Architectures</h3>
<p>Cloud Native architectures offer inherent advantages in terms of resilience: by nature, they have the ability to distribute workloads across different infrastructures and scale dynamically. However, conscious design is still necessary to fully exploit their features and functionalities. It is also essential to anticipate inevitable failure scenarios and the consequent mitigation strategies.</p>
<ul>
<li><strong>Geographical distribution</strong> of workloads across multiple availability zones or regions</li>
<li><strong>Autoscaling mechanisms</strong> to adapt dynamically to load variations</li>
<li><strong>Resilience patterns</strong> like circuit breakers and bulkheads to prevent cascading failures</li>
<li><strong>Automatic failover systems</strong> to minimize downtime</li>
</ul>
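<p>Of these patterns, the circuit breaker is the easiest to show in a few lines. The Python sketch below (thresholds and timeout values are arbitrary examples) fails fast once a downstream dependency has failed repeatedly, instead of letting every caller wait and pile up:</p>

```python
import time


class CircuitBreaker:
    """Minimal circuit-breaker sketch (thresholds are illustrative).

    After `max_failures` consecutive errors the circuit opens and calls
    fail fast until `reset_timeout` seconds pass; one trial call is then
    allowed through (the classic half-open state).
    """

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

<p>Production systems would typically rely on a battle-tested library or a service mesh rather than a hand-rolled breaker, but the state machine (closed, open, half-open) is the same.</p>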
<h3 id="chaos-engineering-for-testing-resilience">Chaos Engineering for Testing Resilience</h3>
<p>Chaos Engineering, initially adopted by advanced technology organizations, is becoming an increasingly mainstream practice as a tool to verify that systems respond as expected in case of problems. This methodology allows for proactively identifying weak points that might remain hidden until a real incident occurs.</p>
<ul>
<li><strong>Controlled experiments</strong> of deliberate failure to test recovery capabilities</li>
<li><strong>Incremental approach</strong> starting with simple tests in non-production environments</li>
<li><strong>Monitoring results</strong> to identify weak points in the architecture</li>
<li><strong>Continuous improvement</strong> based on experiment results</li>
</ul>
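<p>A controlled experiment can start as small as a failure-injection wrapper. The sketch below (function names and the retry policy are invented for illustration) makes a dependency fail with a configurable probability, so the team can verify that callers actually recover:</p>

```python
import random


def with_chaos(fn, failure_rate=0.2, rng=None):
    """Wrap `fn` so it raises ConnectionError with the given probability,
    simulating an unreliable downstream service in a controlled way."""
    rng = rng or random.Random()

    def wrapped(*args, **kwargs):
        if rng.random() < failure_rate:
            raise ConnectionError("injected failure")
        return fn(*args, **kwargs)

    return wrapped


def fetch_status():
    return "ok"


def call_with_retry(fn, attempts=3):
    """A naive recovery strategy to test against the injected failures."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # recovery failed: the experiment found a weak point
```

<p>Dedicated tools (Chaos Monkey, Litmus, Chaos Mesh) industrialize this idea at the infrastructure level, but the principle of deliberate, bounded failure is identical.</p>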
<h3 id="structured-backup-and-recovery">Structured Backup and Recovery</h3>
<p>A true constant since the dawn of IT, a solid backup and recovery strategy always remains a fundamental element of any cybersecurity strategy, even in Cloud Native architectures. Backups protect against human errors and technical malfunctions. Furthermore, they are also an important line of defense against ransomware attacks, which are an increasingly widespread and sophisticated threat.</p>
<ul>
<li><strong>Clear and complete backup policies</strong> (frequency, retention, coverage)</li>
<li><strong>Periodic recovery tests</strong> to verify the effectiveness of procedures</li>
<li><strong>Security measures</strong> to protect backups from unauthorized access</li>
<li><strong>Automation</strong> of backup processes to reduce the risk of human error</li>
</ul>
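<p>The &ldquo;periodic recovery tests&rdquo; point deserves emphasis: a backup that is never restored is only a hope. The toy Python sketch below (an in-memory dict stands in for real, access-controlled backup storage) shows the core idea of pairing every backup with a checksum and refusing a restore that no longer matches:</p>

```python
import hashlib


def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def backup(data: bytes, store: dict) -> None:
    """Save the payload together with its checksum (a dict stands in
    for real backup storage)."""
    store["payload"] = data
    store["sha256"] = checksum(data)


def verify_restore(store: dict) -> bytes:
    """Periodic recovery test: a restore is trusted only if the stored
    checksum still matches; otherwise the backup is corrupt."""
    data = store["payload"]
    if checksum(data) != store["sha256"]:
        raise RuntimeError("backup corrupted: checksum mismatch")
    return data
```

<p>Real backup tooling adds encryption, retention and off-site copies, but the habit of automatically restoring and verifying is what turns a backup policy into actual resilience.</p>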
<h3 id="documented-incident-response-procedures">Documented Incident Response Procedures</h3>
<p>Documenting incident response procedures, in addition to being an explicit requirement of the regulations, is essential for ensuring a rapid and effective response in case of an incident. Periodic drills allow for verifying the effectiveness of procedures and help build practical experience, which is necessary to respond effectively in stressful and truly emergency situations.</p>
<ul>
<li><strong>Detailed playbooks</strong> for different incident scenarios</li>
<li><strong>Clearly defined roles and responsibilities</strong></li>
<li><strong>Communication templates</strong> for internal and external interactions</li>
<li><strong>Periodic drills</strong> to test the effectiveness of procedures</li>
</ul>
<h2 id="5-best-practices-for-implementing-compliance">5 Best Practices for Implementing Compliance</h2>
<p>The effective implementation of NIS2 and DORA requirements requires not only appropriate tools and technologies but also a cultural and organizational approach that integrates security as a fundamental value and a constant objective.</p>
<ol>
<li><strong>Adopt a security by design approach.</strong> By integrating security into the DNA of the development process, organizations can prevent many problems that would otherwise require costly and complex interventions. Security by design allows for identifying vulnerabilities and security flaws as early as possible, intervening promptly and reducing the cost and complexity of late remediations.<br>
Best practices:
<ul>
<li>Integrate security from the initial design phases</li>
<li>Define security principles that guide architectural decisions</li>
<li>Use already security-hardened patterns and reference architectures</li>
<li>Conduct threat modeling sessions regularly</li>
</ul>
</li>
<li><strong>Automate as much as possible.</strong> Automation plays a fundamental role in making compliance sustainable over time. Investing in automation may seem costly initially, but it offers significant returns in the medium to long term, reducing the manual workload and minimizing the risk of human error.<br>
Best practices:
<ul>
<li>Implement security controls in CI/CD pipelines</li>
<li>Automate the generation of compliance documentation</li>
<li>Use policy-as-code for automatic enforcement</li>
<li>Implement automated alerts and remediations where possible</li>
</ul>
</li>
<li><strong>Invest in training and culture.</strong> Training and organizational culture are often underestimated but crucial elements for success. By creating an environment where security is everyone&rsquo;s responsibility, not just that of the specialized team, a significantly higher level of protection is achieved. Security Champions are an effective way to promote a security culture.<br>
Best practices:
<ul>
<li>Train all teams on regulatory requirements</li>
<li>Create security champions in every team</li>
<li>Incentivize the reporting of security problems</li>
<li>Promote a culture of shared responsibility for security</li>
</ul>
</li>
<li><strong>Adopt a risk-based approach.</strong> This approach allows for allocating resources efficiently, concentrating efforts where they can have the greatest impact. Always relevant, it becomes fundamental in contexts with limited resources where it is necessary to maximize the return on security investments.<br>
Best practices:
<ul>
<li>Evaluate the criticality and sensitivity of systems and data</li>
<li>Allocate resources based on risk assessment</li>
<li>Define controls proportional to the value to be protected</li>
<li>Implement more robust protections for mission-critical systems</li>
</ul>
</li>
<li><strong>Document systematically.</strong> Documentation provides an always-updated snapshot of assets and resources, along with internal best practices that make it quick to understand how to act in an emergency.<br>
Best practices:
<ul>
<li>Maintain an updated inventory of assets and resources</li>
<li>Document architectural decisions and risk mitigations</li>
<li>Prepare audit-ready documentation</li>
<li>Implement an effective document management system</li>
</ul>
</li>
</ol>
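<p>The &ldquo;policy-as-code&rdquo; practice mentioned above can be sketched in a few lines of Python. Real implementations use dedicated engines such as Open Policy Agent with their own policy languages; here the policies, resource fields and messages are invented purely for illustration:</p>

```python
# Hypothetical policies declared as data, so they can be versioned,
# reviewed and enforced automatically like any other code.
POLICIES = [
    {
        "id": "no-privileged",
        "check": lambda r: not r.get("privileged", False),
        "message": "containers must not run privileged",
    },
    {
        "id": "image-pinned",
        "check": lambda r: ":" in r.get("image", "")
        and not r.get("image", "").endswith(":latest"),
        "message": "images must be pinned to a specific tag",
    },
]


def evaluate(resource):
    """Return the messages of every policy the resource violates."""
    return [p["message"] for p in POLICIES if not p["check"](resource)]
```

<p>Running such checks in the pipeline turns compliance rules from documents into automatically enforced gates, which is precisely what makes compliance sustainable over time.</p>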
<h2 id="the-path-to-compliance-where-to-start">The Path to Compliance: Where to Start</h2>
<p>Implementing compliance, especially for organizations approaching the security requirements introduced or strengthened by NIS2 and DORA for the first time, requires a structured and incremental approach that balances the urgency of meeting regulatory requirements with the need to maintain business operations. An effective implementation path typically consists of the following phases:</p>
<ol>
<li><strong>Awareness and initial training.</strong> The first step is to create awareness at all levels of the organization. It is important that training involves not only developers but also management and other functions. Only in this way is it possible to lay the foundations for a widespread security culture.<br>
Best practices:
<ul>
<li>Introductory workshops on regulatory requirements and their impact</li>
<li>Awareness programs for technical teams</li>
<li>Executive briefings for strategic management alignment</li>
<li>Industry benchmarks to understand how other organizations are addressing compliance</li>
</ul>
</li>
<li><strong>Assessment and gap analysis.</strong> In any methodology, a fundamental step is always a thorough evaluation that allows for understanding the current state and identifying priority areas for intervention.<br>
Best practices:
<ul>
<li>Security posture assessment to evaluate the current state</li>
<li>Analysis of all relevant aspects: architecture, processes, technologies</li>
<li>Identification of gaps against regulatory requirements</li>
<li>Definition of a baseline to measure progress</li>
</ul>
</li>
<li><strong>Roadmap definition.</strong> Planning is crucial for effective implementation. The implementation roadmap must be realistic and balanced, considering not only the regulatory urgency but also the operational impact of the interventions.<br>
Best practices:
<ul>
<li>Prioritize interventions based on risk and business impact</li>
<li>Identify &ldquo;quick wins&rdquo; that can be delivered early, invaluable for creating momentum and visibility for the entire compliance program</li>
<li>Plan more complex and long-term interventions, with clear and measurable objectives</li>
<li>Allocate resources and define realistic timelines</li>
</ul>
</li>
<li><strong>Fundamental controls.</strong> Even if the initial state and gaps vary for each organization, there are certainly some fundamental controls that every organization should implement with the highest priority.<br>
Best practices:
<ul>
<li>Access management: implementation of robust access controls and strict application of the Principle of Least Privilege (PoLP)</li>
<li>Vulnerability management: structured process of identification and remediation</li>
<li>Security monitoring: implementation of basic detection systems</li>
<li>Incident response: definition of initial incident response procedures, a requirement explicitly provided by NIS2, including the obligation to notify authorities (notification within 24 hours, full report within 30 days)</li>
</ul>
</li>
<li><strong>Incremental implementation.</strong> Cybersecurity is constantly evolving; after implementing the fundamentals, a process of continuous improvement is necessary. The incremental approach is particularly suitable in complex contexts like Cloud Native environments: it allows the effectiveness of solutions to be tested on a small scale before extending them to the entire organization, significantly reducing risk. But it&rsquo;s not just about features; it&rsquo;s also about culture and experience in the face of real emergencies: a maturity that is built over time.<br>
Best practices:
<ul>
<li>Adopt an iterative approach with implementation-verification-adaptation cycles</li>
<li>Initial focus on fundamental and high-impact elements</li>
<li>Expansion of implemented controls</li>
<li>Involvement of key stakeholders in every phase</li>
<li>Adaptation of the roadmap based on feedback and results</li>
</ul>
</li>
<li><strong>Verification and continuous improvement.</strong> Security practices cannot be limited to the initial development phase of a project or to the moment the journey towards regulatory compliance begins. Instead, security must play a central role and be considered on par with &ldquo;business&rdquo; features, providing for continuous monitoring and improvement. From this perspective, security itself becomes a &ldquo;business feature&rdquo; in every respect.<br>
Best practices:
<ul>
<li>Conduct regular internal audits</li>
<li>Periodic vulnerability assessments</li>
<li>Review and update procedures</li>
<li>Adapt to new threats and the evolution of regulatory requirements</li>
</ul>
</li>
</ol>
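<p>The notification deadlines listed among the fundamental controls above are a natural candidate for automation. As a minimal, purely illustrative sketch (assuming the figures cited above, notification within 24 hours and full report within 30 days, and an ISO-8601 detection timestamp), an incident-tracking tool could compute them like this:</p>

```javascript
// Minimal sketch: compute the NIS2 notification deadlines cited above
// from the moment an incident is detected. Purely illustrative.
function nis2Deadlines(detectedAt) {
  const detected = new Date(detectedAt);
  const HOUR = 60 * 60 * 1000;
  return {
    // Notify the competent authority within 24 hours of detection.
    notification: new Date(detected.getTime() + 24 * HOUR),
    // Submit the full report within 30 days.
    fullReport: new Date(detected.getTime() + 30 * 24 * HOUR),
  };
}

const d = nis2Deadlines('2025-03-01T10:00:00Z');
console.log(d.notification.toISOString()); // 2025-03-02T10:00:00.000Z
console.log(d.fullReport.toISOString());   // 2025-03-31T10:00:00.000Z
```

<p>In practice these deadlines would feed alerts in the incident-response tooling, so the clock starts automatically rather than depending on someone remembering the regulation under pressure.</p>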
<p>Cybersecurity is a vast and delicate subject that requires skills and expertise built over time. In the initial phase of a security program, it is particularly important to balance regulatory urgency with operational sustainability: trying to implement every functionality and control at once can be counterproductive, leading to superficial implementations and dangerous organizational resistance.</p>
<p>The incremental approach, instead, delivers tangible results in a reasonable time while building the skills and culture needed to implement the most advanced controls effectively. Starting with &ldquo;quick wins&rdquo; secures visible early victories; each success then builds momentum and support, both for the subsequent phases of the adaptation journey and for the ongoing commitment to continuous improvement.</p>
<h2 id="strategic-approach-to-compliance">Strategic Approach to Compliance</h2>
<p>By adopting a modern and strategic perspective on compliance, it is possible to transform a regulatory obligation into a competitive advantage. It&rsquo;s a mental, cultural, and organizational shift that turns a simple mandatory fulfillment into an opportunity to improve our processes and architectures.</p>
<p>The <strong>security enablement</strong> approach, in contrast to traditional security enforcement, fits perfectly into this vision. Rather than imposing controls from above that could be perceived as obstacles, with security enablement, the focus shifts to empowering teams to implement security, providing them with tools, knowledge, resources, and support.</p>
<p>A methodology that is also effective in the security field is <strong>continuous improvement</strong>. Security is integrated into existing processes, together with ongoing controls that verify the protection measures already adopted and evaluate new ones. This cycle of continuous improvement makes it possible to adapt to the constant evolution of cyber threats and to react faster to new regulatory requirements.</p>
<p>Within the <strong>organizational culture</strong>, there must be a shift towards considering security on the same level as every other functionality. But not only that: for a true &ldquo;<strong>security culture</strong>&rdquo; to take root, transparency and collaboration must be fundamental values. Only then is it possible to create an environment of open communication about security problems, based on incident analysis and learning rather than blame, and on spaces for discussion and sharing.</p>
<p>Last but not least, a sustainable approach to security, one that maintains compliance over time without sacrificing agility and innovation, cannot do without automation. <strong>Automation</strong> reduces manual workload, integrates security into daily workflows, enables continuous monitoring, and allows prompt intervention: all essential aspects of effective cybersecurity.</p>
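<p>To make the idea concrete, here is a minimal sketch of one such automation: a severity gate that fails a CI pipeline when scan findings exceed a threshold. The findings format is hypothetical and deliberately simplified; real scanners such as Trivy or Grype produce much richer schemas.</p>

```javascript
// Hypothetical, simplified CI security gate: block the pipeline when any
// vulnerability finding meets or exceeds a configured severity threshold.
const SEVERITY_ORDER = ['LOW', 'MEDIUM', 'HIGH', 'CRITICAL'];

function securityGate(findings, threshold = 'HIGH') {
  const min = SEVERITY_ORDER.indexOf(threshold);
  const blocking = findings.filter(
    (f) => SEVERITY_ORDER.indexOf(f.severity) >= min
  );
  return { pass: blocking.length === 0, blocking };
}

// Example with fake findings: one CRITICAL entry blocks the build.
const result = securityGate([
  { id: 'CVE-0000-0001', severity: 'MEDIUM' },
  { id: 'CVE-0000-0002', severity: 'CRITICAL' },
]);
console.log(result.pass); // false
```

<p>In a real pipeline the script would exit with a non-zero code on failure, so the CI system stops the deployment automatically instead of relying on someone reading a report.</p>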
<h2 id="conclusion-and-next-steps">Conclusion and next steps</h2>
<p>As we have seen, implementing the NIS2 and DORA requirements in Cloud Native environments represents a significant challenge, especially for organizations facing these aspects for the first time. However, with the right approach, it becomes an opportunity to improve security, operational resilience, skills, and organizational culture.</p>
<p>Organizations that adopt a structured approach, with an emphasis on automation, training, and continuous improvement, will be able not only to meet regulatory requirements but also to <strong>gain competitive advantages in terms of reliability, security, and innovation capability</strong>.</p>
<p>To start the compliance journey, we recommend:</p>
<ol>
<li>Evaluate the current state through an initial assessment</li>
<li>Define a realistic roadmap with clear priorities</li>
<li>Implement fundamental controls and proceed incrementally, starting with “quick wins”</li>
<li>Measure progress and celebrate successes</li>
<li>Adopt a continuous improvement approach</li>
</ol>
<p>At SparkFabrik, we follow the evolution of security regulations and their application in Cloud Native contexts with great attention and interest. Our <strong>CTO, Paolo Mainardi</strong>, personally leads the company&rsquo;s commitment to security, acting as a <strong>Security Champion</strong> and promoting internal specialization in <strong>supply chain security</strong> and <strong>DevSecOps</strong>. This focus is also reflected in our active participation in international communities, as we are members of key organizations such as <strong>CNCF, Linux Foundation Europe</strong>, and <strong>OpenSSF</strong>.</p>
<p>Our experience in the sector allows us to offer strategic consulting and support in the implementation of solutions that balance compliance, security, and innovation. To support organizations on their compliance journey, we have also created the <a href="https://go.sparkfabrik.com/nis2-dora-compendium/en"><strong>NIS2 &amp; DORA Compendium</strong></a>, a complete and free guide to help you navigate the regulatory requirements and challenges of Cloud Native environments.</p>
<p>To learn more about these topics or discuss the specific needs of your organization, we invite you to explore our service offerings or contact us directly:</p>
<ul>
<li><a href="https://www.sparkfabrik.com/en/services/cloud-native-services/supply-chain-security/">Supply Chain Security</a></li>
<li><a href="https://www.sparkfabrik.com/en/cloud-native-journey/">Cloud Native Journey</a></li>
<li><a href="https://www.sparkfabrik.com/en/services/cloud-native-services/devops-automation/">DevOps &amp; Automation</a></li>
<li><a href="https://www.sparkfabrik.com/en/services/cloud-native-services/kubernetes-consultancy/">Kubernetes Consultancy</a></li>
<li><a href="https://www.sparkfabrik.com/en/services/cloud-native-services/managed-services/">Managed Services</a></li>
<li><a href="https://www.sparkfabrik.com/en/services/cloud-native-services/cloud-migration/">Cloud Migration</a></li>
</ul>
<p>Or <a href="https://www.sparkfabrik.com/en/contact-us/">contact us directly</a> for a personalized consultation on your specific context.</p>
]]></content:encoded><media:content url="https://www.sparkfabrik.com/images/blog/nis2-dora-strategies-best-practices-cloud-native/NIS2_20e_20DORA_20strategie_20e_20best_20practices_20Featured_20Image_20-_20Sparkfabrik.jpg" medium="image"/><enclosure url="https://www.sparkfabrik.com/images/blog/nis2-dora-strategies-best-practices-cloud-native/NIS2_20e_20DORA_20strategie_20e_20best_20practices_20Featured_20Image_20-_20Sparkfabrik.jpg" type="image/jpeg"/><category>Digital Transformation</category><category>Security</category></item><item><title>DrupalCon Vienna 2025: what we learned (and what changes for you)</title><link>https://www.sparkfabrik.com/en/blog/drupalcon-vienna-2025/</link><pubDate>Mon, 10 Nov 2025 00:00:00 +0000</pubDate><author>SparkFabrik Team</author><guid>https://www.sparkfabrik.com/en/blog/drupalcon-vienna-2025/</guid><description>Report from DrupalCon Vienna 2025: Canvas in production, enterprise-grade AI, native design systems. Practical lessons for CTOs: what works, what to avoid.</description><content:encoded><![CDATA[<div class="tldr">
  <span class="tldr__label">TL;DR</span>
  <div class="tldr__body">
    Practical report from DrupalCon Vienna 2025: Canvas is production-ready, enterprise AI now has real governance via the Context Control Center, and Drupal is positioning itself as the first &ldquo;design system native&rdquo; CMS. This article provides concrete action items for teams planning 2026 Drupal projects, from composable architectures to the site template marketplace.
  </div>
</div>
<p>We spent four days at <a href="https://events.drupal.org/vienna2025">DrupalCon Vienna 2025</a> (October 14-17) attending over twenty technical sessions, workshops, and BOFs. The strongest message? <strong>Drupal isn&rsquo;t chasing trends but consolidating technical leadership</strong> in three critical areas: enterprise-grade AI with real governance, Canvas as the first &ldquo;design system native&rdquo; CMS, and operational maturity for projects at scale.</p>
<p>This isn&rsquo;t a marketing report but a <strong>practical account</strong> of what we saw working in production, which problems remain open, and where you should invest attention over the next six months. For Italian teams planning 2026 projects, there are decisions to make now, not later.</p>
<p>October in Vienna has that dry cold that makes everything sharper. Perfect for four days of total immersion in what turned out to be one of the most technically dense DrupalCons in recent years. It wasn&rsquo;t a conference of sensationalist announcements, but of working demos and field-tested solutions, as well as that rare combination of strategic vision and operational pragmatism that distinguishes mature communities from those in the hype phase.</p>
<p><em>All session videos will soon be available in the official DrupalCon Vienna 2025 playlist.</em></p>
<p><a href="https://www.youtube.com/watch?v=lTBim0nMD5s">Dries Buytaert&rsquo;s keynote</a> set the tone: five pillars (Canvas, AI, Orchestration, Site Templates, Marketplace) and a clear statement. <strong>AI is a technology that&rsquo;s here to stay</strong>, regardless of whether the financial bubble bursts sooner or later. <strong>Drupal is one of the best-positioned open-source CMSs to leverage it.</strong> No defensiveness, no &ldquo;we have AI too,&rdquo; but a clear strategic implementation roadmap.</p>
<p>What struck us most? The distance between announcements and reality was minimal. Canvas isn&rsquo;t vaporware: it&rsquo;s in working alpha with agencies already running pilots with real clients. AI isn&rsquo;t a tacked-on chatbot: it&rsquo;s architecture designed for enterprise governance with production-grade observability. Site templates have moved beyond the concept phase: the first ones are already here.</p>
<p>We&rsquo;re organizing this report by thematic areas, not chronologically. What matters isn&rsquo;t &ldquo;what was said on Tuesday&rdquo; but &ldquo;what changes for your 2026 projects.&rdquo;</p>
<h2 id="canvas-demo-vera-risposte-vere">Canvas: real demo, real answers</h2>
<p>Drupal Canvas (the new name for Experience Builder) is <strong>Drupal&rsquo;s brand new visual builder</strong> that promises to <strong>simplify and streamline</strong> how we design and build pages in Drupal. We&rsquo;ve already discussed it in our article on <a href="/en/drupal-cms-all-innovations-of-2025?hsLang=en">Drupal CMS 2.0 innovations</a> and in the <a href="/en/drupal-ai-overview-news-vision?hsLang=en">comprehensive overview of Drupal AI</a>, where we analyzed the architecture and strategic positioning. In Vienna, we saw what it means to bring it to production, and especially where gaps still exist.</p>
<p>The &ldquo;Drupal Canvas Unleashed&rdquo; session sold out with over 400 participants. The key message: Canvas combines Single Directory Components (SDC) and blocks as &ldquo;backend&rdquo; components, but with Canvas you no longer need to know how they work once they&rsquo;ve been developed.</p>
<p>Settings in Canvas map to the properties of SDCs, while a left panel controls slots, a new &ldquo;entity&rdquo; that manages the component tree in layers, potentially nested within each other. Additionally, the theme configuration includes a setting that lets Canvas manage global regions like the header and footer, importing them into every new page with just a few clicks. Truly effective and intuitive: anyone familiar with Figma or other visual builders like Framer and Webflow will feel right at home.</p>
<p><img src="/images/blog/drupalcon-vienna-2025/01_20_20DrupalCon_20Vienna_202025_20-_20DrupalCon_20Vienna_202025_20Driesnote_20Canvas_2025-6_20screenshot.png" alt="01  DrupalCon Vienna 2025 - Driesnote, Drupal Canvas"></p>
<p>But the most interesting feature? <strong>Code Components</strong>, which support writing React components directly in the browser. These components can use elements and CSS components defined in Canvas, and they accept props for dynamic data (for example, a title). They can also query the backend via JSON:API to build dynamic components.</p>
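<p>As a minimal, hypothetical sketch of the JSON:API side (the <code>/jsonapi/{entity}/{bundle}</code> path and the <code>filter</code> and <code>page[limit]</code> query parameters follow Drupal&rsquo;s standard JSON:API module; the field values here are made up), a code component might build its request URL like this:</p>

```javascript
// Hypothetical helper a Canvas code component could use to build a
// Drupal JSON:API collection URL (standard /jsonapi/{entity}/{bundle} path).
function jsonApiUrl(base, entity, bundle, { filters = {}, limit } = {}) {
  const params = new URLSearchParams();
  for (const [field, value] of Object.entries(filters)) {
    params.set(`filter[${field}]`, value); // JSON:API shorthand filter
  }
  if (limit) params.set('page[limit]', String(limit));
  const query = params.toString();
  return `${base}/jsonapi/${entity}/${bundle}${query ? `?${query}` : ''}`;
}

// A React code component would then fetch and render the result, e.g.:
// const res = await fetch(jsonApiUrl('https://example.com', 'node', 'article',
//   { filters: { status: '1' }, limit: 5 }));
console.log(
  jsonApiUrl('https://example.com', 'node', 'article',
    { filters: { status: '1' }, limit: 5 })
);
```

<p>The point is less the helper itself than the pattern: the component stays a plain piece of frontend code, and all content comes from the backend through a stable, documented API.</p>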
<p><strong>What already works:</strong></p>
<ul>
<li>Content Templates for defining structured content type layouts (perfect for landing pages)</li>
<li>Global regions and reusable components</li>
<li>AI (in beta) that allows building code components directly via prompts</li>
<li>Views natively supported</li>
</ul>
<p><strong>Gaps still open:</strong></p>
<ul>
<li>Paragraphs and layouts are still work in progress</li>
<li>Some contrib modules that only work on nodes aren&rsquo;t compatible</li>
<li>Some contrib field types don&rsquo;t work yet</li>
<li>Multilingual isn&rsquo;t fully supported</li>
<li>Server-side rendering isn&rsquo;t implemented (blocked by security implications, support isn&rsquo;t guaranteed for all server providers)</li>
<li>APIs to extend Canvas (not just module integration, but also web apps embedded in Canvas) and offer truly rich user experiences are still missing</li>
<li>Standard permissions system (e.g., to create new components or use existing ones)</li>
</ul>
<p>The &ldquo;Strategies for Integrating Drupal Canvas in Your Existing Platform&rdquo; session provided practical guidance. Canvas creates a new &ldquo;canvas pages&rdquo; content type, so modules that only work on nodes might have compatibility issues. Canvas is a React app on the frontend, compilation and rendering in the editor happen in the browser in real time.</p>
<p>For frontend developers, the &ldquo;JavaScript Frontend Development with Drupal Canvas: Beyond Decoupling&rdquo; session showed advanced workflows. It&rsquo;s possible to develop external JS components and sync them with dedicated utilities, and a bidirectional workflow moves components from Drupal to Storybook and vice versa. Canvas supports adding global CSS for preview. Each component passes data to Drupal, the component is rendered client-side with Nuxt, and external components are standard Vue components.</p>
<p><strong>Practical takeaway:</strong> If you&rsquo;re planning Drupal projects for Q1-Q2 2026, Canvas is production-ready (stable release November 2025, default in Drupal CMS 2.0 from January 2026). In the budget, it is essential to account for component library development time, rather than individual page implementation. The ROI is immediate for teams with high page creation volume.</p>
<h2 id="native-design-system-finally-a-cms-that-understands-design">Native design system: finally a CMS that understands design</h2>
<p>The &ldquo;Drupal, the first design-system native CMS&rdquo; session by Pierre Dureau (Beyris) presented a paradigm shift that deserves attention.</p>
<p>Indeed, in Drupal <strong>themes present significant historical problems</strong>: they&rsquo;re not shareable (a theme is built for a specific project), they&rsquo;re not plug-and-play (there&rsquo;s always a missing template), and they have an unfriendly DX.</p>
<p>The proposed solution: <em>business agnostic coding</em>. Like the backend, the frontend must be decoupled from business through plugins—that is, a design must be conceived independently of business and brand. This is where well-structured design systems come in, enabling a major strength: &ldquo;one design, many products.&rdquo;</p>
<p>And <strong>Drupal is an ecosystem that leverages a design system particularly effectively</strong>, as a structured, organized, and well-described design is one that Drupal can easily understand and use (we also discussed this in our recent <a href="/en/design-system-and-drupal-cms?hsLang=en">Design System and Drupal CMS article</a>).</p>
<p>Very interestingly, the talk introduced a method that inverts the traditional Drupal workflow (site builder / backend dev → templates → frontend dev). Now it&rsquo;s the frontend developer who provides modular frontend plugins, which also means ownership for the frontend developer and YAML as the primary working tool.</p>
<p>More than a mere stylistic exercise, supporting this method and this shift are concrete features already available directly in Drupal core:</p>
<ul>
<li>breakpoints.yml for responsive image breakpoints</li>
<li>layouts.yml for layout grid system (layouts in Layout Builder)</li>
<li>SDC for UI components</li>
<li>icons.yml for icon packs</li>
</ul>
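<p>For instance, the breakpoints file follows a simple declarative format. A hypothetical <code>mytheme.breakpoints.yml</code> might look like this (see the Drupal core breakpoint documentation for the full schema):</p>

```yaml
# Hypothetical mytheme.breakpoints.yml: declares the theme's breakpoints
# so responsive image styles and other subsystems can reuse them.
mytheme.mobile:
  label: Mobile
  mediaQuery: ''
  weight: 0
  multipliers:
    - 1x
mytheme.wide:
  label: Wide
  mediaQuery: 'all and (min-width: 1024px)'
  weight: 1
  multipliers:
    - 1x
    - 2x
```

<p>This is exactly the &ldquo;business agnostic&rdquo; idea in miniature: the breakpoints are declared once, in plain YAML, and every consumer (image styles, layouts, components) reads them instead of hardcoding its own.</p>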
<p>Certainly, feature coverage is still quite limited. However, coming in Drupal 11.3 and 11.4 are important new design-dedicated APIs that will significantly raise the bar and further separate theming from the Drupal app:</p>
<ul>
<li><strong>Styles API</strong> with Utilities &amp; Helpers (set of mutually exclusive, self-descriptive, single-purpose, and universal HTML attributes like Typography, Borders, Colors, Spacing, Elevation) and Themes &amp; Modes (predefined branding switch, color scheme, accessibility settings)</li>
<li><strong>Design Tokens API</strong> with scoped values available for local or global overrides, which become CSS variables only at runtime</li>
</ul>
<p>Despite several gaps remaining, this talk emphasizes an ambitious but achievable goal for the first time in Drupal&rsquo;s history: the possibility of a fully automatable design workflow, from the design phase in Figma to final rendering in the browser.</p>
<p><img src="/images/blog/drupalcon-vienna-2025/02_20DrupalCon_20Vienna_202025_20-_20Drupal_2c_20the_20first_20design-system_20native_20CMS_2019-5_20screenshot.png" alt="02 DrupalCon Vienna 2025 - Drupal, the first design-system native CMS"></p>
<p>But it wasn&rsquo;t the only talk focused on the winning Design System + Drupal pair. Truly remarkable was the session on Nestlé&rsquo;s scalable multi-brand design system, which showed a real implementation. This case study covered the reorganization and design of a design system on three levels: core, UI components, and brand overrides.</p>
<p>This structure proved fundamental both for managing the complexity of dozens of different brands and for giving overall coherence to the brand ecosystem. The Drupal theming system based on starterkit ensured rapid deployments, efficient updates, and the ability to instantiate new sites in days instead of weeks. Now, over 100 sites developed in Drupal adopt this approach.</p>
<p><strong>Practical takeaway:</strong> If you&rsquo;re building or rethinking your design system, consider a &ldquo;design system first&rdquo; approach instead of &ldquo;Drupal theme first.&rdquo; The initial investment is higher but scalability and maintainability are orders of magnitude better. For multi-brand organizations, it&rsquo;s practically mandatory.</p>
<h2 id="digital-accessibility-eaa-and-ai">Digital accessibility: EAA and AI</h2>
<p>One of the main advantages of a design system is optimization and consistency of user experience. But today, <a href="/it/guides/design-system-ux-accessibilita-ai?hsLang=en"><strong>the design system is also a strategic tool for complying with the European Accessibility Act</strong></a>. Two talks addressed the accessibility theme, and in particular the &ldquo;AI in EAA&rdquo; session explored how AI can support compliance with accessibility standards, increasingly critical with the EAA in effect since June 2025.</p>
<p>AI can accelerate <strong>QA teams</strong> (automated crawling), <strong>content editors</strong> (summaries, automatic alt-text, intelligent text editors), <strong>designers</strong> (color and contrast analysis, object recognition, ARIA property and state suggestions), and <strong>developers</strong> (linting tools, IDE extensions, autocomplete to create accessible components) in catching accessibility issues.</p>
<p>However, <strong>it cannot replace genuine human testing</strong>. Even in this area, it&rsquo;s essential to mitigate the risks of over-reliance on AI, maintaining human oversight to overcome the limitations and biases of automated tools, such as cultural sensitivity, linguistic nuance, and misinterpretations of context.</p>
<p><img src="/images/blog/drupalcon-vienna-2025/03_20DrupalCon_20Vienna_202025_20-_20AI_20in_20EAA__2017-31_20screenshot.png" alt="03 DrupalCon Vienna 2025 - AI in EAA"></p>
<p>Fundamental in any case is <strong>understanding accessibility issues</strong>: no longer an accessory element, accessibility is both an <a href="/en/eaa-european-accessibility-act-digital-accessibility?hsLang=en">obligation and a business opportunity</a>. We&rsquo;ve explored the topic in two key resources: the whitepaper <a href="/en/landing/accessibilita-design-system/">Accessibility and Design System</a> and <a href="https://eaa.sparkfabrik.com/">the operational accessibility checklist</a>.</p>
<p><strong>Practical takeaway:</strong> For many organizations, accessibility isn&rsquo;t yet a clear priority. Automated tools and AI can offer significant support, but education on requirements, internal training, and communication are indispensable pillars. Genuine compliance, with accessibility embedded in the company DNA, requires an approach that integrates accessibility from the design phase: a <strong>&ldquo;design-to-code&rdquo; method</strong> that combines technical interventions, training, and culture, which only a specialized partner can orchestrate.</p>
<h2 id="enterprise-ai-governance-first">Enterprise AI: governance first</h2>
<p>We&rsquo;ve already written an <a href="/en/drupal-ai-overview-news-vision?hsLang=en">in-depth article on Drupal AI</a> analyzing features like content generation, context management, autonomous agents, and observability. In Vienna, we saw concrete implementations and discovered patterns that work (and some that don&rsquo;t).</p>
<p>The &ldquo;The AI Agent Swarm has come to Drupal Canvas&rdquo; session showed practical integrations. The Canvas template agent can build entire landing pages by assembling components, and the AI beta configured in Canvas allows building code components via prompts (some settings are preconfigured to accelerate adoption, but everything is customizable).</p>
<p>The session also explored how <strong>agents can be used everywhere in Drupal for small or large tasks</strong> , even building sites from scratch with external tools via MCP (Model Context Protocol), all without writing a line of code.</p>
<p>For effective LLM use, &ldquo;context is king.&rdquo; In our previous article, we examined the ability to define context for specific actions callable through Field Widget Actions. This system was still quite cumbersome and embryonic.</p>
<p>The <strong>Context Control Center</strong> represents a real leap forward here: it allows all context information, such as your brand, persona, and topic, to be defined centrally (much like what happens in Claude Code or Copilot through claude.md or agents.md files, but directly in the Drupal UI).</p>
<p>Furthermore, centralization of contexts enables decidedly more effective governance. The various contexts are then easily callable and usable by different Drupal AI features and agents. If you want to delve deeper into Context Engineering, we refer you to the very interesting <a href="https://www.youtube.com/watch?v=f-bFIb7ao2s&amp;list=PLSD9hiOyso85HJ9IKTA5z1b8qMtzdL-rO">talk by Enrico Zimuel during our GenAI-focused event</a>.</p>
<p><img src="/images/blog/drupalcon-vienna-2025/04_20DrupalCon_20Vienna_202025_20-_20DrupalCon_20Vienna_202025_20Driesnote_20Context_20CC_2044-37_20screenshot.png" alt="04 DrupalCon Vienna 2025 - Driesnote Context Control Center"></p>
<p><strong>Pattern that works:</strong> AI for content generation with human supervision for approval (Human in the Loop): AI creates a draft, a person reviews and approves before publication. This approach resolves the trade-off between speed and control, and aligns with Drupal&rsquo;s philosophy where AI empowers people rather than replacing them.</p>
<p><strong>Pattern that doesn&rsquo;t work well yet:</strong> Full automation without human supervision. Even with a well-configured Context Control Center, AI can generate content that technically respects guidelines but has wrong nuances that only a human can catch. Full automation therefore risks not aligning with brand quality level, and human supervision is strongly recommended.</p>
<p><strong>Practical takeaway:</strong> Invest in AI to accelerate execution (drafting, research, assembly), not to replace strategic decision-making. The Context Control Center is the key differentiator: without centralized governance, enterprise AI scales poorly and introduces risks. Budget time for robust initial Context Control Center configuration, don&rsquo;t think it&rsquo;s &ldquo;plug and play.&rdquo;</p>
<h2 id="devops-and-release-management-lessons-from-the-field">DevOps and Release Management: lessons from the field</h2>
<p>Some of the most practically valuable sessions were on DevOps, CI/CD, and release management: topics that truly make the difference between projects that scale and projects that collapse.</p>
<h3 id="github-actions--docker--cypress--cicd-nirvana">GitHub Actions + Docker + Cypress = CI/CD Nirvana</h3>
<p>The talk describes an ideal configuration (the &ldquo;CI/CD Nirvana&rdquo;) for the development process: tools for local development that automate QA (grumPHP), visual regression testing (BackstopJS), e2e testing (Cypress), and security audits. These same tools must also be used in CI to ensure safe code release without regressions.</p>
<p>On GitHub, this is possible with <strong>GitHub Actions</strong>. Ideally, write your own Actions, tailored to the specific use case. The advantages are evident: you avoid Docker images bloated with unnecessary software, you can add specific tooling (e.g., for debugging), and you have full control over what gets deployed.</p>
<p>But it&rsquo;s imperative to pay close attention to <strong>security implications</strong>. GitHub Actions can also act on production code, so access must be limited, SSH keys must be kept secret, and close attention must be paid to third-party Actions and database dumps. As always, it&rsquo;s essential to schedule regular code audits and reviews.</p>
<p><img src="/images/blog/drupalcon-vienna-2025/05_20DrupalCon_20Vienna_202025_20-_20GitHub_20Actions_20__20Docker_20__20Cypress_20__20CI_CD_20Nirvana_204-36_20screenshot.png" alt="05 DrupalCon Vienna 2025 - GitHub Actions + Docker + Cypress = CI_CD Nirvana"></p>
<h3 id="mastering-the-release-flow-5-years-of-continuous-improvement">Mastering the Release Flow: 5 years of continuous improvement</h3>
<p>A very instructive case study: a Drupal instance distributing content to 5 React apps, 20 connected external services, 40 countries with up to 3 languages each (multilingual was absolutely essential in this project), and 13,000 test steps distributed across 700 Behat feature files.</p>
<p>The tests in particular followed a very rigorous approach: they were structured around identified user journeys, separated production from non-production tests and core functionality from country-specific behavior, and ran every night. The goal? Identify bugs and anomalies before users (and the client) do.</p>
<p>Despite the meticulousness, some emerging problems made it evident that it still wasn&rsquo;t sufficient. In particular, the problems encountered by the team were:</p>
<ul>
<li>Behat is no longer actively developed</li>
<li>The Mink extension had a critical bug with the latest Chrome versions that went unfixed for 3 months</li>
<li>Nightly tests on AWS had grown to more than 8 hours</li>
</ul>
<p>To overcome these limitations, the team created a new framework called <a href="https://www.npmjs.com/package/@cuppet/core">Cuppet</a> that combines Cucumber (to avoid rewriting tests) and Puppeteer (actively maintained by Google), based on NodeJS.</p>
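<p>One reason the migration avoided rewriting tests is that Behat and Cucumber both consume the same Gherkin syntax. A purely illustrative scenario in the user-journey style described above (step wording and paths are our invention, not the project&rsquo;s):</p>
<pre><code class="language-gherkin">Feature: Country homepage
  # Illustrative only: steps and URLs are invented for this example.
  Scenario: Visitor sees localized content
    Given I am on "/fr/home"
    When I open the main navigation
    Then I should see the menu in "French"
</code></pre>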
<p><strong>Our takeaways from this case study?</strong></p>
<ul>
<li>If it&rsquo;s &ldquo;painful,&rdquo; automate it or solve it</li>
<li>Operational excellence is not optional</li>
<li>Make it work → make it right → make it fast (in this order; in this case study, the process lasted a full 5 years of iterative improvements)</li>
<li>Reliability is more important than new features (address technical debt)</li>
<li>Being a &ldquo;good person&rdquo; is more important than technical skills, even more so in a world where everyone interacts with AI, but where developers and end users are always people.</li>
</ul>
<p>This last point emerged in several sessions. The keynote &ldquo;Neurodiversity: An Underrated Superpower in Business&rdquo; opened the second day with a discussion of neurodiversity and how the distinctive capabilities of people who are often sidelined can bring valuable contributions to teams and organizations.</p>
<h3 id="testing-in-the-ai-era">Testing in the AI Era</h3>
<p>The &ldquo;Test All the Things&rdquo; session emphasized that <strong>with AI generating more and more code, testing takes on even greater importance</strong>. LLMs make coding accessible to everyone, even those without prior skills, which fosters a false sense of security. Growing reliance on these tools increases the probability of defects or unexpected behaviors that are difficult to identify.</p>
<p>The overview of testing approaches across different environments was particularly interesting, for example:</p>
<ul>
<li>Running tests in an environment that replicates production ensures high fidelity, but it&rsquo;s a complex process, with privacy implications and long run times, especially with large databases</li>
<li>Installing a site with test content is instead much faster and more manageable, ideal for agile development and verification cycles, though necessarily at the expense of production fidelity</li>
</ul>
<p><strong>Static code analysis</strong> (PHPStan, Psalm) is a valid tool for preventing bugs and keeping overall code quality under control. <strong>The workflow described in the talk is very similar to the one we adopt at SparkFabrik</strong>, which confirms its quality, with the difference that this year we adopted Symfony Panther for behavioral tests, replacing Behat. Also interesting is the mention of Pa11y for accessibility analysis, an often underestimated tool that provides great support for ensuring standards compliance and inclusive experiences.</p>
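<p>For reference, a minimal PHPStan configuration for a Drupal codebase might look like the following (the level and paths are illustrative assumptions, not taken from the talk):</p>
<pre><code class="language-neon">parameters:
    # Strictness level 0-10; raise it gradually on an existing codebase.
    level: 6
    paths:
        - web/modules/custom
        - web/themes/custom
</code></pre>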
<p><strong>Practical takeaway:</strong> If you&rsquo;re adopting AI for code generation, invest proportionally more in test automation and static analysis. AI-generated code often works but has unexpected edge cases that only robust tests will catch.</p>
<h2 id="security-by-design-from-requirement-to-culture">Security by Design: from requirement to culture</h2>
<p>At SparkFabrik, security is particularly close to our hearts. Beyond being a regulatory requirement for many companies, security is fundamental for protecting data, user trust, and companies&rsquo; online reputation.</p>
<p>The &ldquo;Secure by Design: Integrating Security into Drupal Development&rdquo; session presented a valid <strong>overview of the &ldquo;secure by design&rdquo; approach</strong>, with theoretical and practical guidance on strategies to implement to ensure the security of sites and portals developed in Drupal, from business requirements to technical implementation.</p>
<p>The starting assumption: <strong>security must be understood as a core business requirement</strong>, not as an &ldquo;afterthought.&rdquo;</p>
<p>And in Drupal, this assumption is taken very seriously, with a dedicated Security team that regularly publishes security advisories. October 2023 also saw the launch of the Security by Design Initiative in the Drupal community, supported by the U.S. Cybersecurity and Infrastructure Security Agency along with international partners, including European countries such as Germany and the UK.</p>
<p>The talk shared practical guidance (some of it very technical and specific), including:</p>
<ul>
<li>Implement security from the requirements phase</li>
<li>Periodic code review with security focus</li>
<li>Automated security scanning in CI/CD</li>
<li>Regular penetration testing for critical projects</li>
<li>Security training for the entire team (not just developers)</li>
</ul>
<p>Also worth mentioning is the &ldquo;Better Debugging with Xdebug&rdquo; session, which presented advanced debugging capabilities, including experimental features like &ldquo;control sockets&rdquo; for debugging running processes and the concept of &ldquo;time traveling&rdquo; to &ldquo;go back in the execution process.&rdquo;</p>
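<p>As a baseline for step debugging (a common Xdebug 3 setup, not specific to the experimental features mentioned in the session), your php.ini might contain:</p>
<pre><code class="language-ini">; Enable step debugging, but only when a trigger (cookie, env var, or
; query parameter) is present, to avoid slowing down every request.
xdebug.mode=debug
xdebug.start_with_request=trigger
xdebug.client_host=localhost
xdebug.client_port=9003
</code></pre>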
<p>If you want to delve deeper into security topics, check out our articles on <a href="/en/drupal-cms-security-compliace-regulated-sector?hsLang=en">security and compliance with Drupal CMS</a>, on <a href="/it/guides/software-security-best-practice?hsLang=en">Software Security</a>, <a href="/it/guides/cloud-security-come-proteggere-i-dati-nell-era-del-cloud?hsLang=en">Cloud Security</a>, <a href="/en/cloud-devsecops?hsLang=en">DevSecOps</a> and on the impact of <a href="/en/nis2-dora-impact-on-cybersecurity-in-cloud-native?hsLang=en">NIS2 and DORA in Cloud Native</a> (we also created a <a href="https://go.sparkfabrik.com/nis2-dora-compendium/en">complete operational guide</a>).</p>
<p><strong>Practical takeaway:</strong> For enterprise projects, it&rsquo;s important to budget specific time and resources for security activities (threat modeling, security testing, security reviews), ideally 10-15% of total project time. It may seem like a lot, but it&rsquo;s a minimal fraction of the cost of a security incident.</p>
<h2 id="marketplace-and-site-templates-real-economics">Marketplace and Site Templates: real economics</h2>
<p>Ryan Szrama from Centarro presented &ldquo;How to Sell Drupal Site Templates&rdquo; with rare honesty.</p>
<p>First, <strong>what is a site template?</strong> It&rsquo;s a combination of Drupal CMS, frontend theme, possibly also a backend theme, recipes that provide functionality, and predefined content that, together, create a Drupal installation tailored for a specific purpose and allow quickly starting a new project. Think for example of a Template for SaaS companies, or an ecommerce site: use cases with specific needs that can be packaged.</p>
<p>Commerce Kickstart is precisely one of the first available Site Templates. From a Drupal distribution, it was converted into a site template, enriched with eCommerce recipes, a modern checkout experience, various configuration options, and a simplified installation.</p>
<p>The appeal of Site Templates is quite clear: <strong>provide a specific solution to a specific problem</strong>. Even from a sales perspective, it should be a &ldquo;better&rdquo; sales strategy, or at least more immediate and direct, compared to a project idea, an &ldquo;abstract solution.&rdquo; This is where the most interesting and pragmatic points of the talk emerge: the critical points and the experience so far.</p>
<p><img src="/images/blog/drupalcon-vienna-2025/06_20DrupalCon_20Vienna_202025_20-_20How_20to_20sell_20Drupal_20site_20templates_209-47_20screenshot.png" alt="06 DrupalCon Vienna 2025 - How to sell Drupal site templates"></p>
<p><strong>The three most critical points</strong>:</p>
<ul>
<li>In practice, <strong>how and where do you publish a site template</strong>? Currently, there&rsquo;s no centralized marketplace on drupal.org, no transaction management system, and no file distribution either. For templates to take off, it&rsquo;s fundamental to find an operational answer, avoiding having to manage purchases manually and individually.</li>
<li><strong>How to protect intellectual property</strong> when distributing the entire product as code? Very hot topic, and here too there&rsquo;s no clear answer.</li>
<li>But the priority point is: <strong>how do you convince a real client to buy a Site Template?</strong> The focus is on this point, testing sales in the field. The value proposition being leveraged is monetary and time savings for the end client, accepting however the trade-off of limited customization (an &ldquo;off the shelf&rdquo; product). <strong>Important disclaimer:</strong> as of today, no sale has been completed yet (prospects weren&rsquo;t clear on what they were buying, and requests veered toward coaching needs, customization, and dedicated services).</li>
</ul>
<p>For Site Template sales to be sustainable, three things are therefore fundamental: a way to distribute them effectively, a strategy for protecting intellectual property, and clear communication about what the package includes.</p>
<p>In this regard, the &ldquo;Decision-making at Scale: Drupal Marketplace Process Behind the Scene&rdquo; session showed how a committee of 12 people was tasked with deciding whether Drupal should create a marketplace for site templates (a process that lasted 14 weeks). The answer is positive, with a gradual rollout.</p>
<p>On the official page of the <a href="https://www.drupal.org/about/starshot/marketplace-initiative">Drupal Marketplace Initiative</a>, the <strong>overall plan on Site Templates and Drupal Marketplace</strong> is detailed:</p>
<ul>
<li>by DrupalCon Vienna, a pilot with the simple release of 1-2 templates;</li>
<li>by DrupalCon Chicago (March 2026), release of an MVP of the Marketplace, with 10-15 templates, both free and paid;</li>
<li>initially, Site Template Makers will be limited to some selected Drupal Certified Partners, then progressively expand to other makers;</li>
<li>paid templates must meet requirements of transparent pricing, guaranteed maintenance, and clear support terms;</li>
<li>the MVP will include experiments in terms of pricing models and revenue division;</li>
<li>it&rsquo;s hypothesized that initially 10% of revenues will go to the Drupal Association;</li>
<li>in future iterations, the marketplace will directly manage transactions and revenue division could change to 60% to the creator, 30% to DA, 10% to a fund supporting the ecosystem.</li>
</ul>
<p><strong>Practical takeaway:</strong> The marketplace will officially launch in the coming months and will evolve gradually but quickly. For agencies: consider developing vertical site templates (e-commerce, non-profit, education) for your target market as soon as possible (and prepare to sell them as products, not as solutions). The economic model is still emerging, but the timing is ideal for early movers.</p>
<h2 id="orchestration-and-composable-architectures">Orchestration and Composable Architectures</h2>
<p>The &ldquo;From CMS to Platform: How to Build Future-Proof Digital Ecosystems with Drupal&rdquo; session presented an important vision. When a client asks for a website, the request often hides the need for a <strong>complete digital ecosystem</strong>, composed of websites, applications, and integrations with third-party systems for specific services.</p>
<p>In the <a href="https://www.drupal.org/project/drupal/issues/3533440">Drupal Core Strategy published in July</a>, <strong>Dries explicitly defined Drupal as a &ldquo;platform&rdquo; because talking about CMS is now limiting</strong>. Drupal has potential to serve as the heart of a digital platform for various touchpoints.</p>
<p><img src="/images/blog/drupalcon-vienna-2025/07_20DrupalCon_20Vienna_202025_20-_20From_20CMS_20to_20Platform__20How_20to_20Build_20Future-Proof_20Digital_20Ecosystems_20with_20Drupal_2017-36_20screenshot.png" alt="07 DrupalCon Vienna 2025 - From CMS to Platform_ How to Build Future-Proof Digital Ecosystems with Drupal"></p>
<p>This talk also presented an example, a project based on NodeHive, but the basic architecture can be generalized to any platform that leverages Drupal as a single headless backend, with an orchestration layer that manages the various touchpoints.</p>
<p>As we discussed in our article on <a href="/en/composable-architecture-with-drupal-cms?hsLang=en">composable architecture</a>, this approach is increasingly relevant for organizations that need to serve experiences across web, mobile apps, digital signage, voice assistants, and more, simultaneously and consistently.</p>
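<p>A minimal sketch of the content-hub idea: every touchpoint consumes the same JSON:API document (Drupal core exposes it at <code>/jsonapi</code> by default) and normalizes it to its own shape. The payload and the <code>field_summary</code> field name below are invented for illustration:</p>
<pre><code class="language-javascript">// Normalize a Drupal JSON:API collection into a flat shape that any
// touchpoint (web, mobile app, signage) can render.
function normalizeArticles(doc) {
  return doc.data.map((item) =&gt; ({
    id: item.id,
    title: item.attributes.title,
    summary: item.attributes.field_summary ?? "", // hypothetical field name
  }));
}

// Illustrative payload in JSON:API shape (not real project data).
const sample = {
  data: [
    {
      id: "a1",
      type: "node--article",
      attributes: { title: "Hello Vienna", field_summary: "Recap" },
    },
  ],
};

console.log(normalizeArticles(sample));
</code></pre>
<p>The same normalizer can back a web frontend, a mobile app, or a signage display, which is exactly what makes the single-backend architecture pay off.</p>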
<p>Finally, in terms of workflow orchestration, an important new feature is Drupal&rsquo;s support for no-code/low-code automation tools such as Activepieces (an open-source orchestration platform licensed under MIT). This was announced during Driesnote and discussed in detail in <a href="https://dri.es/the-orchestration-shift">Dries&rsquo; dedicated article</a>.</p>
<p><strong>Practical takeaway:</strong> Don&rsquo;t think &ldquo;Drupal site&rdquo; anymore but &ldquo;Drupal as content hub.&rdquo; Architect from the start for multi-channel content delivery. The initial cost is slightly higher but future flexibility is incomparably greater.</p>
<h2 id="managing-scope-change-the-art-of-saying-no-without-saying-no">Managing scope change: the art of saying NO without saying NO</h2>
<p>Beyond the technical talks, the &ldquo;Navigating Scope Creep&rdquo; session was particularly appreciated by project managers. It addressed a universal theme in software projects: <strong>scope creep</strong>, the uncontrolled expansion of a project&rsquo;s requirements, features, and deliverables beyond what was initially approved.</p>
<p><img src="/images/blog/drupalcon-vienna-2025/08_20DrupalCon_20Vienna_202025_20-_20Taming_20the_20Beast__20Navigating_20Scope_20Creep_20for_20Project_20Success_203-42_20screenshot.png" alt="08 DrupalCon Vienna 2025 - Taming the Beast_ Navigating Scope Creep for Project Success"></p>
<p>Typically, scope creep manifests during certain phases: requirements gathering, client feedback, near project completion (last-minute changes are particularly risky), and when stakeholders change (for example, new team members, who can bring new ideas).</p>
<p><strong>The main causes?</strong> Unclear objectives or requirements, inadequate communication, desire to avoid conflict (probably the most common problem), but also constantly evolving regulatory and market frameworks.</p>
<p>The talk proposes a clear approach: <strong>accept scope change, but do it effectively</strong>. The approach is based on four pillars:</p>
<ol>
<li><strong>Awareness:</strong> it&rsquo;s fundamental to know the project in detail, monitor stakeholder behavior and the change request process.</li>
<li><strong>Alignment:</strong> from the early phases, create a shared vision of objectives, align priorities, discuss expectations, formalize scope documentation.</li>
<li><strong>Clear processes:</strong> implement clear Change Management processes, don&rsquo;t accept verbal requests, document requests and decisions, set clear limits.</li>
<li><strong>Avoid misunderstandings:</strong> communicate openly and transparently (even expressing your disagreement, with motivation), involve the client in testing, and explain using wireframes and prototypes.</li>
</ol>
<p>Cherry on top: four concrete tactics were shared in conclusion.</p>
<ul>
<li><strong>The alternative offer:</strong> we can work on the new request, but we&rsquo;ll need to adjust the timeline and budget.</li>
<li><strong>Priority shift:</strong> adding the request means removing something else from the backlog, what should we deprioritize?</li>
<li><strong>Future release:</strong> let&rsquo;s plan this addition for phase two, after completing core features.</li>
<li><strong>Data-driven approach:</strong> based on our analysis, this addition would bring little value, especially compared to development cost (if you have statistics against a feature, bring them to the table).</li>
</ul>
<p><strong>Practical takeaway:</strong> Invest in formalizing the Change Management Process from kickoff. It seems like an initial cost, but it prevents hours (or weeks) of rework and tensions with stakeholders. Use tools like an AI notetaker for automatic meeting notes that become a &ldquo;single source of truth&rdquo; shared by everyone.</p>
<h2 id="real-talk-drupal-vs-storyblok">Real Talk: Drupal vs Storyblok</h2>
<p>Another session was a breath of fresh air: &ldquo;Why We Left Drupal, Tried Storyblok, and What Happened Next&rdquo;. An agency <strong>attempted the switch from Drupal to Storyblok</strong>, driven by the desire to diversify its tools and by this alternative CMS&rsquo;s massive marketing. <strong>In short? The new solution didn&rsquo;t measure up.</strong></p>
<p>The main frustration was the <strong>absence of basic features</strong> almost taken for granted in Drupal, like building modules, straightforward URL and redirect management, and configuration management (with versioning) in Git. The team essentially found itself having to innovate autonomously to replicate basic features already available in Drupal (&ldquo;<em>We literally rebuilt Drupal&rsquo;s Configuration Management for Storyblok</em>&rdquo;).</p>
<p>No less important, the <strong>limited tech stack</strong>. The team had chosen Storyblok + Next.js, given their expertise, but Storyblok was optimized for Vue.js. As a result, the agency had to build workarounds autonomously, even to leverage features of the new Next.js version released during the project.</p>
<p>Features aside, <strong>Storyblok&rsquo;s support</strong> itself didn&rsquo;t meet expectations, with slow responses and unclear guidance that caused significant delays. Nor can you rely on the <strong>community</strong>, which is extremely small and poorly engaged. <strong>Essentially, every problem requires a custom solution.</strong></p>
<p><img src="/images/blog/drupalcon-vienna-2025/09_20DrupalCon_20Vienna_202025_20-_20Why_20we_20left_20Drupal_2c_20tried_20Storyblok_2c_20and_20what_20happened_20next_2021-44_20screenshot.png" alt="09 DrupalCon Vienna 2025 - Why we left Drupal, tried Storyblok, and what happened next"></p>
<p><strong>The discovery:</strong> A technology like Storyblok can be useful in very specific contexts, but for enterprise solutions, Drupal continues to represent a high standard. Solid, tested over 20+ years of community work, with proven solutions we risk taking for granted, forgetting that not all CMSs have them.</p>
<p>A memorable tagline: &ldquo;<strong>Product before marketing</strong>.&rdquo; This resonates with the general theme of the conference. Drupal isn&rsquo;t doing aggressive marketing but is consolidating its positioning as an excellent product in critical areas that truly matter for business. As we discussed in our <a href="/en/drupal-cms-a-comparison-with-the-main-alternatives?hsLang=en">CMS comparison</a>, Drupal excels where complexity, governance, and longevity are primary requirements.</p>
<p><strong>Practical takeaway:</strong> If you&rsquo;re evaluating alternative CMSs, look beyond feature lists on paper and shiny UI. Ask hard questions: how do you handle multi-brand? How do you manage configurations and versioning? How do you ensure granular access control? How do you migrate when you&rsquo;ll inevitably need to? How is the support and community around the ecosystem? Drupal has proven answers, alternatives often have promises.</p>
<h2 id="performance-http3-and-network-level-optimizations">Performance: HTTP/3 and network-level optimizations</h2>
<p>As the title &ldquo;TCP Fast Open and HTTP/3: Network-Level Optimizations for Lightning-Fast Drupal&rdquo; suggests, this was an interesting examination of how HTTP/3 and the TCP stack work and how to optimize them for performance. That is, <strong>how to save precious milliseconds with the right configurations</strong>, while paying attention to possible attack vectors.</p>
<p>According to shared data, among the top 1000 sites, 60% support HTTP/3, of which 85% via CDN (Cloudflare, Fastly, etc.) and 15% directly. For Drupal specifically, Drupal.org, Acquia Cloud, and Platform.sh/Upsun support HTTP/3.</p>
<p><strong>A technical talk, but the key message is clear:</strong> It&rsquo;s time to enable HTTP/3.</p>
<p><img src="/images/blog/drupalcon-vienna-2025/10_20DrupalCon_20Vienna_202025_20-_20TCP_20Fast_20Open_20and_20HTTP_3__20Network-Level_20Optimizations_20for_20Lightning-Fast_20Drupal_2030-53_20screenshot.png" alt="10 DrupalCon Vienna 2025 - TCP Fast Open and HTTP_3_ Network-Level Optimizations for Lightning-Fast Drupal"></p>
<p><strong>Practical takeaway:</strong> If you haven&rsquo;t enabled HTTP/3 yet, now is the time. Performance gains are significant and measurable, especially for mobile connections and applications requiring low latency. Check with your hosting provider if it&rsquo;s available (Acquia, Platform.sh, Pantheon all support it).</p>
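<p>On self-managed infrastructure, enabling it can be as small as the following nginx (1.25+) sketch; the certificate paths are placeholders and your nginx build must include HTTP/3 support:</p>
<pre><code class="language-nginx">server {
    # HTTP/3 runs over QUIC (UDP); keep TCP listeners for HTTP/1.1 and /2.
    listen 443 quic reuseport;
    listen 443 ssl;
    http2 on;
    ssl_certificate     /etc/ssl/example.pem;   # placeholder paths
    ssl_certificate_key /etc/ssl/example.key;
    # Advertise HTTP/3 so browsers can upgrade on subsequent requests.
    add_header Alt-Svc 'h3=":443"; ma=86400';
}
</code></pre>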
<h2 id="conclusion-concrete-action-items-for-your-projects">Conclusion: concrete action items for your projects</h2>
<p>Four days in Vienna confirmed a thesis: Drupal is going through a moment of profound technical renewal without losing the enterprise reliability that characterizes it. This is a rare and valuable combination.</p>
<p>For Italian teams planning 2026 projects, here are concrete action items to consider carefully.</p>
<p><strong>If you&rsquo;re planning a new Drupal project:</strong></p>
<ul>
<li>Budget for Canvas as the authoring layer (stable since November 2025, default since January 2026)</li>
<li>Architect for design system first, not for Drupal theme first</li>
<li>Include AI Readiness Assessment in the project discovery phase (introducing AI features like Context Control Center requires planning)</li>
<li>Plan for composable architecture even if starting with a single touchpoint</li>
</ul>
<p><strong>If you have Drupal in production:</strong></p>
<ul>
<li>Review CI/CD pipelines to include automated security scanning</li>
<li>Evaluate a pilot with Canvas, limiting yourself to one section of the site (not full migration immediately)</li>
<li>Check compliance for accessibility in view of new EAA requirements (but also as a new opportunity)</li>
<li>Verify HTTP/3 support</li>
</ul>
<p><strong>If you&rsquo;re an agency or system integrator:</strong></p>
<ul>
<li>Consider developing vertical site templates for your target market as soon as possible (and prepare to sell them as products, not as solutions)</li>
<li>Invest in design systems, reusable and consistent across projects</li>
<li>Evaluate partnerships to integrate AI solutions and features into your offering</li>
</ul>
<p><strong>If you have an in-house team:</strong></p>
<ul>
<li>Propose a Canvas pilot, to facilitate future creation of high volumes of pages and content</li>
<li>If you need content generation, experiment and define business cases to set up in the Context Control Center</li>
<li>Review your change management process (scope creep costs more than the effort of formalizing the process)</li>
<li>Plan intensive training on the new stack of features and tools (Canvas, AI tools, design system approach)</li>
</ul>
<p>The feeling leaving Vienna? Drupal made architectural choices in 2015 (structured content, API-first, rigorous configuration management) that at the time seemed <em>overkill</em>. In 2025, those choices are proving winning. They&rsquo;re exactly the infrastructure needed for enterprise-grade AI, visual page building without sacrificing governance, and scalable composable architecture.</p>
<p>As Dries said in the keynote: &ldquo;AI is the storm, but it&rsquo;s also the way through it.&rdquo; Drupal isn&rsquo;t avoiding the storm. It&rsquo;s navigating better than anyone else.</p>
<hr>
<p>At SparkFabrik, we combine deep technical expertise in Drupal with advanced skills in AI integration, composable architectures, and enterprise governance. Our <a href="https://www.sparkfabrik.com/en/services/drupal/">Drupal development services</a> cover the entire spectrum: from strategic consulting on the AI readiness of your current architecture, to implementation of custom AI-powered solutions, through to security, ongoing support, and optimization.</p>
<p>If your organization is considering adopting AI for its digital initiatives, we invite you to:</p>
<ol>
<li>Explore our <a href="https://www.sparkfabrik.com/en/success-stories/">case studies</a> of enterprise Drupal implementations</li>
<li><a href="https://www.sparkfabrik.com/en/contact-us/">Contact our team</a> for an assessment of your specific needs</li>
<li>Discover how our <a href="https://www.sparkfabrik.com/en/services/drupal/">suite of Drupal services</a> can support your AI strategy</li>
</ol>
<p>This article is part of our series dedicated to Drupal CMS. To explore other aspects of the platform, we invite you to consult our previous articles on <a href="/en/drupal-cms-the-new-era-of-enterprise-content-management?hsLang=en">features and benefits</a>, <a href="/en/drupal-cms-a-comparison-with-the-main-alternatives?hsLang=en">comparison with alternatives</a>, <a href="/en/migration-to-drupal-cms-complete-guide-for-a-successful-transition?hsLang=en">migration strategies</a>, <a href="/en/drupal-cms-security-compliace-regulated-sector?hsLang=en">security and compliance</a>, <a href="/en/composable-architecture-with-drupal-cms?hsLang=en">composable architecture</a>, <a href="/en/design-system-and-drupal-cms?hsLang=en">Design System</a>, <a href="/en/drupal-headless?hsLang=en">Drupal headless omnichannel</a>, and <a href="/en/drupal-ai-overview-news-vision?hsLang=en">overview and news of Drupal AI</a>.</p>
]]></content:encoded><media:content url="https://www.sparkfabrik.com/images/blog/drupalcon-vienna-2025/DrupalCon_20Vienna_20-_20What_20we_20learned_20-_20SparkFabrik_20Featured.png.png" medium="image"/><enclosure url="https://www.sparkfabrik.com/images/blog/drupalcon-vienna-2025/DrupalCon_20Vienna_20-_20What_20we_20learned_20-_20SparkFabrik_20Featured.png.png" type="image/jpeg"/><category>Drupal</category><category>AI</category></item><item><title>10 AI Tools for UI/UX that are redefining design</title><link>https://www.sparkfabrik.com/en/blog/ui-ux-ai-tools-for-designers/</link><pubDate>Wed, 22 Oct 2025 00:00:00 +0000</pubDate><author>SparkFabrik Team</author><guid>https://www.sparkfabrik.com/en/blog/ui-ux-ai-tools-for-designers/</guid><description>Discover the 10 best AI tools that are helping and empowering UX/UI designers. A strategic guide to adding AI to your design workflows.</description><content:encoded><![CDATA[<div class="tldr">
  <span class="tldr__label">TL;DR</span>
  <div class="tldr__body">
    A curated overview of 10 AI tools for UI/UX designers, from big players (Figma AI, Adobe Firefly, Google Workspace AI, Microsoft Copilot) to hybrid design-development platforms (Lovable, Replit, VS Code) and visual CMS tools (Framer, Webflow, Builder.io). Also covers Drupal&rsquo;s AI integration and how LLMs support UX strategy.
  </div>
</div>
<p>Every day a new AI tool is born promising to revolutionize our habits. The software we use is filling up with <strong>intelligent features</strong>, social feeds are exploding with demos, and we, in all of this, need to understand what&rsquo;s really worth knowing and testing.</p>
<p>It&rsquo;s not easy to navigate a new world that&rsquo;s still evolving. Between genuine hype and fleeting illusions, between tools that actually accelerate workflow and others that end up complicating it, a reasoned approach is needed.</p>
<p>That&rsquo;s why we thought of an overview that shines a light on <strong>what both</strong> the <strong>tech giants</strong> and the <strong>emerging names</strong> on everyone&rsquo;s lips are doing. To have a map that brings together all those tools that, in one way or another, will end up redefining how we think about and create design.</p>
<h2 id="why-were-talking-about-ai-revolution-in-the-design-world">Why we&rsquo;re talking about AI revolution in the design world</h2>
<p>This isn&rsquo;t the first time design has changed its skin, but this one has all the makings of a much deeper transformation. The arrival <strong>of Artificial Intelligence is revolutionizing the way we design</strong>. Not just for the speed at which we work, but for how we think about design, which phases of the process we can automate, and which still need human intuition, sensitivity, and experience.</p>
<p>This change is already reality and doesn&rsquo;t only concern experimental tools but <strong>the platforms we use every day</strong>, from Figma to Google, which integrate AI functionality directly into the workflow. And they&rsquo;re not just doing it to keep up, but because AI is really changing how we think, organize, and produce design.</p>
<p>We&rsquo;re in a <strong>totally open, exploratory phase</strong>, as if it were a big wild west. Some tools are in beta, others work in fits and starts, still others need to be trained, tested, adapted to your own context. But waiting isn&rsquo;t always the best strategy. Those who start exploring AI possibilities now can make more conscious choices, avoid the risk of chasing fleeting trends, and, above all, fine-tune a process that really works for their team.</p>
<p>Before moving forward, let&rsquo;s talk for a second about another topic that today more than ever is fundamental for a designer (and anyone working in the digital field) to know well: <strong>accessibility</strong>. We&rsquo;ve created a <a href="/en/landing/accessibilita-design-system/"><strong>white paper</strong></a> (in Italian) to help teams adapt and work with an <em>accessible by design</em> approach, save it for later!</p>
<h2 id="ai-for-ux-or-how-to-empower-the-designer-and-not-replace-them">AI for UX, or how to empower the designer and not replace them</h2>
<p>If we had earned a cent every time someone asked us: &ldquo;<em>But will AI steal our jobs?</em>&rdquo;, by now we could buy all of Silicon Valley.</p>
<p>Our short answer is: &ldquo;<em>No</em>&rdquo;.</p>
<p>The designer&rsquo;s answer is: &ldquo;<em>It depends</em>&rdquo;. It depends on how we choose to use it.</p>
<p>The truth, though, is that we&rsquo;re getting caught up in panic. If we thought about it more calmly, the right question to ask ourselves would be: &ldquo;<em><strong>How do we want AI to help us in our work?</strong></em>&rdquo;.</p>
<p>Because AI, if used critically, can really become <strong>our creative copilot</strong>. It should be the secret weapon to unblock us when we&rsquo;re stuck, to show us alternatives, to help us see more clearly in chaos, not to take our place.</p>
<p>Imagine it as a colleague always in a good mood and available for brainstorming, suggesting a structure or helping you order your thoughts. And yes, also for taking on the most boring activities like transcribing, summarizing interviews, extracting insights or generating a wireframe skeleton. But deciding the direction, giving meaning and coherence, making the choices that matter always remains with flesh-and-blood people.</p>
<p>This is exactly what&rsquo;s happening in the development world. The arrival of tools like GitHub Copilot hasn&rsquo;t replaced coders but has made their work faster and more productive. This is what&rsquo;s now called <em><strong>vibe coding</strong></em>, <strong>whose mantra states: less repetitive code and more creative problem solving.</strong> Similarly, in design, we&rsquo;re witnessing the birth of its natural parallel, <em><strong>vibe designing</strong></em> which could be translated as less pixel pushing and more strategy.</p>
<p>The goal isn&rsquo;t to automate everything, but to enhance human ingenuity, free up time, create space for reflection and experimentation. AI can do a lot, but it still can&rsquo;t, and perhaps never will, replace the ability to read between the lines (especially of project briefs), to intuit people&rsquo;s deep desires, to imagine solutions where there aren&rsquo;t any yet.</p>
<p><strong>ALSO READ:</strong> <a href="/it/ux-developer-chi-e-cosa-fa?hsLang=it-it"><strong>UX Developer: chi è e cosa fa?</strong></a> (Italian)</p>
<h2 id="10-uxui-ai-tools-that-every-design-team-should-know">10 UX/UI AI tools that every design team should know</h2>
<p>To discover AI tools with the greatest impact, there&rsquo;s no need to go far. <strong>All the big players in the sector are already integrating intelligent features</strong> directly into their products, gradually transforming our daily work.</p>
<p>These are no longer isolated experiments or plugins for tech enthusiasts: it&rsquo;s a profound change, because it&rsquo;s happening within workflows we already know. Behind familiar interfaces, there are already radical innovations.</p>
<p>Alongside the giants, platforms built for digital design that make AI their main lever are emerging, as well as hybrid tools that experiment right on the border between design and development. Here we&rsquo;re not talking about add-on features, but about tools built around AI from the start.</p>
<h3 id="the-big-players-how-ai-enters-everyday-tools">The Big Players: How AI Enters Everyday Tools</h3>
<h4 id="1-figma-ai">1. Figma AI</h4>
<p><a href="https://www.figma.com/"><strong>Figma</strong></a> didn&rsquo;t just integrate an AI assistant, it rethought the role of artificial intelligence in collaborative design. Today its ecosystem moves in three directions: accelerating creative flows, expanding automatic generation, and simplifying publishing.</p>
<p><strong>Main features:</strong></p>
<ul>
<li><a href="https://www.figma.com/figjam/ai/"><strong>FigJam AI</strong></a>: useful for automatic clustering of ideas, workshop summaries, creating mind maps and auto-layout for collaborative boards.</li>
<li><a href="https://www.figma.com/it-it/make/"><strong>Figma Make</strong></a>: one of the most promising innovations. It allows you to create UIs and complete flows starting from text prompts, using existing components and logic.</li>
<li><a href="https://www.figma.com/it-it/sites/"><strong>Figma Sites</strong></a> <strong>(in rollout)</strong>: born to generate responsive landing pages starting from content already present in the file, optimizing layout and structure for web publishing. In its current state, the code underlying the generated design isn&rsquo;t yet fully optimized or accessible, but improvements are constant.</li>
</ul>
<p><strong>Ideal for</strong>: UX/UI designers who want to move faster from wireframe to prototype, distributed product teams working iteratively on shared boards, and content designers or marketing managers who need to quickly validate layouts and landing pages.</p>
<p>In our <a href="https://youtu.be/wQ14WyfTycE?si=ixB6mUXNAfaDnFk1&amp;t=1172">talk dedicated to AI&rsquo;s role as the designer&rsquo;s copilot</a> (in Italian), we illustrate the entire process in Figma to move from the wireframing phase, to defining the design system library, to generating a prototype. A realistic example of effective use of artificial intelligence&rsquo;s generative functionality, supervised and governed by the expert hand of a designer.</p>
<h4 id="2-adobe-firefly--sensei">2. Adobe Firefly &amp; Sensei</h4>
<p><a href="https://www.adobe.com/"><strong>Adobe</strong></a> has chosen a systemic approach: AI becomes part of the creative flow, reducing the distance between the initial idea and the final output of assets. The integration is designed to enhance human work, not replace it, and is guided by the logic of helping designers and creatives realize complex ideas faster, without sacrificing quality or visual coherence.</p>
<p>Firefly works on generating images, effects, graphic elements and branded styles from text prompts. Sensei instead optimizes and speeds up with automatic selections, intelligent crops, assisted fills and colorings. Together they form a pair that covers the entire flow, from experimentation to execution.</p>
<p><strong>Main features:</strong></p>
<ul>
<li><a href="https://www.adobe.com/it/products/firefly.html"><strong>Adobe Firefly</strong></a>: now an integral part of Photoshop and Illustrator, generates images, vector elements, effects and branded styles from text prompts. It was trained on Adobe Stock to ensure commercially safe outputs.</li>
<li><a href="https://www.adobe.com/it/sensei/generative-ai.html"><strong>Adobe Sensei</strong></a>: AI system that powers functions like intelligent object removal, automatic selections, composition and color suggestions. Includes Firefly-powered technologies like <em>Generative Fill</em> for image modifications via text prompts and <em>Generative Recolor</em> for rapid colorings of vector works.</li>
</ul>
<p><strong>Ideal for</strong>: visual designers and art directors creating original and branded assets, graphic designers managing color variants and revisions in rapid times, and creative teams that need to maintain visual coherence across campaigns, sites, and multiplatform materials.</p>
<h4 id="3-google-workspace-ai">3. Google Workspace AI</h4>
<p><strong>Google</strong> has transformed its productivity suite into an AI-powered ecosystem that supports every phase of the design process. Under the Gemini umbrella and with experimental projects from Google Labs, tools are multiplying to support research, analysis, writing, and prototyping.</p>
<p><strong>Main features:</strong></p>
<ul>
<li><a href="https://workspace.google.com/products/notebooklm/"><strong>NotebookLM</strong></a>: AI research notebook for synthesis and insights from documents and multimedia content (texts, PDFs, audio), ideal for summarizing and organizing large quantities of content, like user interviews, or doing competitive analysis.</li>
<li><strong>Gemini AI for Workspace</strong>: integrated assistant in Gmail, Docs, Sheets, Slides, Chat and Meet that helps with writing, brainstorming, summaries, visual generation and collaboration in daily workflow.</li>
<li><a href="https://labs.google/fx/tools/image-fx/unsupported-country"><strong>ImageFX</strong></a> <strong>(currently not available in Italy)</strong>: prompt-driven image generator for moodboards and visual exploration. Let&rsquo;s not forget technologies like Nano Banana (image editing via prompt) and Veo3 (video generation), which complete Google&rsquo;s offering.</li>
<li><a href="https://stitch.withgoogle.com/"><strong>Stitch</strong></a> (in beta): experimental tool for rapid prototyping, to go from prompt to UI in a flash.</li>
<li><a href="https://aistudio.google.com/"><strong>AI Studio</strong></a>: Google&rsquo;s free web platform that brings together various AI technologies, models and tools, with advanced features. It allows you to explore and develop prompts with Gemini models, test creative ideas and generate text, code or images starting from personalized prompts.</li>
</ul>
<p><strong>Ideal for</strong>: UX teams doing user-centric research, designers or product managers who need to prototype quickly, create visual assets, manage large quantities of content or documents, and make collaboration super fluid.</p>
<p><a href="https://youtu.be/wQ14WyfTycE?si=PTgNbt15AsC_Xg3m&amp;t=831">In our talk</a>, we offer a concrete example of how AI supports the designer in the Discovery phase. With the support of NotebookLM, as well as a good dose of prompt engineering, it&rsquo;s actually possible to synthesize a large quantity of information (reviews, interviews, analytics, raw data), build mind maps that support exploration, and generate complete reports to share with the team and the client.</p>
<h4 id="4-microsoft-copilot">4. Microsoft Copilot</h4>
<p><strong>Microsoft</strong> has integrated <a href="https://copilot.microsoft.com/"><strong>Copilot</strong></a> natively into the Microsoft 365 suite, offering AI features that support business workflows, between content production, automation and design.</p>
<p><strong>Main features:</strong></p>
<ul>
<li><strong>Copilot in PowerPoint and Word</strong>: helps generate presentations from prompts or reference documents, rewrite texts, create intelligent summaries and apply consistent layouts.</li>
<li><a href="https://designer.microsoft.com/"><strong>Microsoft Designer</strong></a>: tool for creating graphics and marketing assets with AI assistance, using templates, styles and visual/textual inputs.</li>
<li><strong>Power Automate with AI</strong>: allows building automated flows starting from natural language, diagnosing and repairing errors in workflows or adding generative actions in business processes.</li>
</ul>
<p><strong>Ideal for</strong>: enterprise teams, product managers and project managers who need to prototype presentations and internal documentation quickly, but above all to automate recurring internal processes.</p>
<h3 id="vibe-designing-and-vibe-coding-the-future-is-hybrid">Vibe designing and vibe coding: the future is hybrid</h3>
<p>If there&rsquo;s one point where design and development are really merging, this is it. New hybrid platforms are transforming text prompts, sketches and flows into real digital products, navigable, functional. Here AI isn&rsquo;t just support, it&rsquo;s the main tool to create, iterate, validate and publish faster.</p>
<h4 id="5-lovable">5. Lovable</h4>
<p><a href="https://lovable.dev/"><strong>Lovable</strong></a> is a low-code platform designed for those who want to create functional web applications starting from a text description. Just a simple prompt and in a few seconds you have a first navigable draft, with interface, flows and interactions already ready to test.</p>
<p>The logic is &ldquo;design while you build&rdquo;: perfect for projects in exploratory phase or for small teams that want to immediately understand if an idea holds up. Great for gathering feedback on something already tangible, without starting from scratch every time.</p>
<h4 id="6-replit">6. Replit</h4>
<p>Born for those who write code, <a href="https://replit.com/"><strong>Replit</strong></a> today is an AI-powered playground perfect also for designers, strategists and those working in discovery. With Ghostwriter (its AI assistant), you can test components, try micro-interactions and explore alternatives in real time.</p>
<p>It&rsquo;s a highly collaborative tool for working with many hands on the same prototype, seeing what happens when you change something, and receiving suggestions from the model. It&rsquo;s a bridge between those who design and those who develop, perfect for running quick experiments without blocking the sprint.</p>
<h4 id="7-visual-studio-code">7. Visual Studio Code</h4>
<p><a href="https://code.visualstudio.com/"><strong>Visual Studio Code</strong></a>, now an indispensable editor for those who develop (even design-driven projects), today integrates powerful AI features thanks to GitHub Copilot and dedicated extensions. It&rsquo;s possible to generate, correct and document code starting from natural language prompts and receive real-time suggestions while working on UI and front-end flows.</p>
<p>Various integrations, like Figma to Code extensions or the use of MCP servers, allow transforming layouts and interfaces created in Figma directly into ready-to-use code, reducing times and error risks between design and development.</p>
<p>Among emerging alternatives, it&rsquo;s worth mentioning editors like <a href="https://cursor.com/">Cursor</a> and <a href="https://www.windsurf.dev/">Windsurf</a>, which follow the same AI-driven approach. They&rsquo;re ideal tools for teams collaborating between design and dev, and for those who want to automate the writing of UI components or quickly test new ideas starting from prototypes.</p>
<h3 id="ai-powered-visual-cms">AI-powered visual CMS</h3>
<p>In recent years, new visual CMS have raised the bar by integrating AI to simplify and accelerate the design, development and publishing of sites and applications. Platforms like Framer, Webflow and Builder.io allow even those who don&rsquo;t write code to create, iterate and put online professional digital products, combining an evolved visual editor with AI components for layouts, copy, images and automations.</p>
<h4 id="8-framer-ai">8. Framer AI</h4>
<p><a href="https://www.framer.com/ai/"><strong>Framer AI</strong></a> is a tool designed to shorten the distance between concept and publishable result. With AI features you can generate complete layouts starting from text prompts, set visual hierarchies, animations, content and interactive elements in a few steps. Ideal for designers and teams who want to test ideas fluidly, explore alternatives without touching the code again and validate prototypes directly in the field.</p>
<h4 id="9-webflow-ai">9. Webflow AI</h4>
<p><a href="https://webflow.com/ai"><strong>Webflow</strong></a> is establishing itself as the king of no-code for designers, and AI now adds another level. With Webflow AI you can ask for suggestions on layouts, texts, page structure. It&rsquo;s possible to modify elements in natural language and see the result in real time. Its real strength is in precision: the result isn&rsquo;t just a draft, but a base already ready to be published or refined by hand. Perfect for those who want to maintain visual control but lighten the more technical part.</p>
<h4 id="10-builderio">10. Builder.io</h4>
<p><a href="https://www.builder.io/"><strong>Builder.io</strong></a> isn&rsquo;t a simple visual editor: it&rsquo;s an AI-powered visual development platform that unites design, code and content. Its AI engine, called Visual Copilot, intervenes in the existing flow, supporting designers and developers in automating the most mechanical parts, leaving them creative control.</p>
<h2 id="what-if-we-told-you-that-even-drupal-is-ai-driven">What if we told you that even Drupal is AI-Driven?</h2>
<p>When talking about hybrid platforms that merge design and development (the so-called <em>vibe designing</em>), it&rsquo;s easy to think of tools born in recent years. But AI innovation is also involving the most robust <strong>enterprise CMS platforms</strong>.</p>
<p>This is the case with <a href="/en/guides/drupal-advantages?hsLang=en"><strong>Drupal</strong></a>, historically known for enterprise stability and scalability, as well as for its open-source nature, which is taking giant steps in integrating AI directly into the workflow, including those involving UX/UI. In other words, Drupal today offers designers a modern playground as much as other emerging tools. Let&rsquo;s see the main aspects.</p>
<h3 id="experience-builder-and-component-generation">Experience Builder and Component Generation</h3>
<p>If the future of design is composable, AI must operate within clear rules. This is where Drupal&rsquo;s new Experience Builder (XB) comes in, a drag-and-drop visual editor that allows building interfaces and layouts by composing sections and components, embracing modern frontend technologies.</p>
<p>This is where the <strong>AI Assistant</strong> comes into play (in active development), which allows the designer to create entire templates from text prompts (&ldquo;Create a homepage template for a university&rdquo;, &ldquo;Create a product page to launch this new product&rdquo;, &ldquo;Add a section with two paragraphs and a vertical slider of five images&rdquo;).</p>
<p>The most important point: AI doesn&rsquo;t invent code from scratch, but <strong>reuses and orchestrates Single Directory Components (SDC)</strong> already approved by the design system. This way, the output is always consistent, accessible and adherent to company standards.</p>
<p>In Drupal, AI doesn&rsquo;t replace the designer, but multiplies governance effectiveness, freeing up time from &ldquo;pixel pushing&rdquo; to focus on strategy and creativity (the real added value of designers).</p>
<h3 id="design-system-and-visual-coherence">Design System and Visual Coherence</h3>
<p>A design system is the backbone of any scalable project (regardless of the AI factor). Drupal integrates natively with <strong>Storybook</strong>, the de facto standard for developing and documenting UI components (<a href="/en/design-system-and-drupal-cms?hsLang=en">we talked about it here</a>). This allows:</p>
<ul>
<li>Developing UI components in an isolated environment</li>
<li>Guaranteeing the <em>visual contract</em> between designer and developer</li>
<li>Speeding up QA and prototyping</li>
<li>Always keeping the component library updated</li>
</ul>
<p>To push automation further, specific AI-based add-ons for Storybook can help automatically generate documentation (stories) as well as quality tests and checks, ensuring that the entire component catalog (the components that XB&rsquo;s AI will use to compose pages) is always up to date and precise. An integration that, once it reaches full maturity, will turn GenAI&rsquo;s operational speed into governed and coherent output.</p>
<h3 id="drupal-mcp-server">Drupal MCP Server</h3>
<p>A perhaps less visible but truly revolutionary aspect is Drupal&rsquo;s ability to become a source of strategic context for LLMs. Thanks to its support for the <strong>Model Context Protocol (MCP)</strong>, Drupal can expose its data (content nodes, taxonomies, information architecture) as <strong>Resources</strong> and its functions as <strong>Tools</strong> that external LLMs can use directly.</p>
<p>Translated into practice: an LLM can query the structure of Drupal&rsquo;s content in real time to <strong>analyze user paths</strong>, suggest microcopy or CTA optimizations, or propose UX changes based on the live, up-to-date context of the site.</p>
<p>But the possibilities of Drupal&rsquo;s MCP support look virtually unlimited: it allows tools and resources to be connected to AI agents across the most diverse processes, benefiting not only designers but many other business functions.</p>
<p>Drupal&rsquo;s AI features are truly in continuous evolution: many functions are already available, some aspects are still in development, others are experimental, still others only sketched out. But the future looks brighter than ever, with rapid development driven by an extremely dedicated and determined community (SparkFabrik actively contributes too!).</p>
<h2 id="how-llms-can-support-ux-strategy">How LLMs Can Support UX Strategy</h2>
<p>When we think of AI tools for design, visual tools often come to mind. But <strong>Large Language Models (LLM) like ChatGPT and Claude</strong> are becoming fundamental allies also for those working on research, strategy and information architecture.</p>
<p>They don&rsquo;t design interfaces, but they can help us think them through better.</p>
<p>These models are particularly useful in the initial phase of the process, when you need to gather, rework and connect a lot of information, often under tight deadlines. Here are some concrete use scenarios:</p>
<ul>
<li><strong>Analysis of usability heuristics</strong>: you can ask an LLM to evaluate a page or interface according to Nielsen&rsquo;s 10 principles. It doesn&rsquo;t replace a real UX review, but can help you do a first quick and reasoned check.</li>
<li><strong>Analysis of key user paths</strong>: feed it the site map or a user flow and ask for an analysis of possible frictions or weaker calls to action.</li>
<li><strong>Synthesis of user interviews or tests</strong>: when you provide transcripts, the model can help you summarize pain points, recurring insights and suggestions (as we did with NotebookLM <a href="https://youtu.be/wQ14WyfTycE?si=PTgNbt15AsC_Xg3m&amp;t=831">in our talk</a>).</li>
<li><strong>Writing and testing microcopy</strong>: you can rapidly iterate on interface texts (titles, CTAs, error messages) and evaluate tone and clarity alternatives.</li>
<li><strong>Support for UX documentation</strong>: generation of personas, use scenarios, flow descriptions, even just as a first draft to then refine by hand.</li>
</ul>
<p>Even those who build AI, like the Anthropic team, use it every day to simplify their work. Claude, their language model, isn&rsquo;t only used to write code or generate texts, but also to do research, think about products, organize ideas.</p>
<p>In an internal document, <strong>the Anthropic team</strong> tells of using it to write UX project plans, reformulate value propositions, reorganize insights gathered in interviews and improve product documentation. A concrete example of <em>human-AI collaboration</em> that improves efficiency and strategic depth.</p>
<p><strong>ALSO READ:</strong> <a href="/it/ux-strategy-cosa-e-e-importanza-per-la-tua-azienda?hsLang=en"><strong>UX Strategy: l&rsquo;usabilità è al servizio del tuo brand</strong></a> (Italian)</p>
<h2 id="how-to-introduce-these-tools-in-your-team-strategically">How to introduce these tools in your team, strategically</h2>
<p>The way we see it, AI shouldn&rsquo;t be a race to keep up with the latest trend, embraced at all costs, but an opportunity to rethink the way we collaborate, create and test. And doing it effectively requires a clear, shared and scalable strategy.</p>
<p>The most classic advice is also the most effective: <strong>start with a pilot project</strong>, perhaps internal or low-risk. Experiment in a controlled context, measure what works (and what doesn&rsquo;t), then expand adoption. AI can accelerate processes, but without a solid upstream system it risks creating only confusion. If you already have a well-structured design system (more details in our <a href="/it/guides/design-system-ux-accessibilita-ai?hsLang=en">design system deep dive</a>), use it as a guide to select and configure tools: components, naming, tone of voice and accessibility must remain consistent.</p>
<p>And then create <strong>sharing moments</strong>: adoption works better if it&rsquo;s participatory. It&rsquo;s important to leave space for individual experimentation, but also to plan moments when the team can compare what they&rsquo;ve tried and discovered. AI is a new tool and company culture is also built this way: experimenting and talking openly together.</p>
<p><strong>ALSO READ:</strong> <a href="/it/guides/design-system-ux-accessibilita-ai?hsLang=en"><strong>Design system: guida strategica per coerenza, UX e accessibilità</strong></a> (Italian)</p>
<h2 id="how-we-integrate-ai-tools-in-our-projects">How we integrate AI Tools in our projects</h2>
<p>To understand what really works you need to get your hands dirty. At SparkFabrik we do it in the field: we experiment on internal activities, test tools and processes and bring to client projects only what generates real value.</p>
<p>An example is <a href="https://eaa.sparkfabrik.com/"><strong>EAA</strong></a>, a <strong>site dedicated to the European Accessibility Act</strong>. Here we used Replit to accelerate the cycle between design and development: create, test, improve. The site was designed to be accessible, readable and sustainable, and AI helped us reduce times without losing coherence with our design system.</p>
<p>Another interesting case is <a href="https://www.drupalcampitaly.it/"><strong>DrupalCamp Italy</strong></a>, created with Lovable to prototype and iterate while always keeping our designers&rsquo; vision alive. And we emphasize that it&rsquo;s not a matter of &ldquo;doing faster&rdquo; but of testing a new way of working, more fluid, more collaborative, closer to how we imagine the design of the future.</p>
<p>In both cases it wasn&rsquo;t about replacing our work, but making it more fluid, fast and connected. Designing with our method, with our style.</p>
<p>If you&rsquo;re thinking of introducing AI tools in your workflows, we can help you do it the right way: starting from your priorities, respecting your processes, and choosing together what can really improve your team&rsquo;s daily work. Ask our <a href="https://www.sparkfabrik.com/en/services/consultancy-design/design-unit/">Design Unit</a> how you can walk in balance between method, creativity and technology.</p>
]]></content:encoded><media:content url="https://www.sparkfabrik.com/images/blog/ui-ux-ai-tools-for-designer/UX_20UI_20AI_20-_20Blog_20Featured_20Image.png" medium="image"/><enclosure url="https://www.sparkfabrik.com/images/blog/ui-ux-ai-tools-for-designer/UX_20UI_20AI_20-_20Blog_20Featured_20Image.png" type="image/jpeg"/><category>UX Design</category></item><item><title>What is KaaS (Kubernetes as a Service)? Advantages and best practices</title><link>https://www.sparkfabrik.com/en/blog/kubernetes-as-a-service/</link><pubDate>Wed, 24 Sep 2025 00:00:00 +0000</pubDate><author>SparkFabrik Team</author><guid>https://www.sparkfabrik.com/en/blog/kubernetes-as-a-service/</guid><description>Discover what Kubernetes as a Service (KaaS) is, its advantages, challenges, and best practices for DevOps and AI. Complete guide to Managed Control Plane.</description><content:encoded><![CDATA[<p><a href="/en/guides/kubernetes-comprehensive-guide?hsLang=en"><strong>Kubernetes</strong></a> is now the de facto standard for <a href="/en/orchestration-vs-choreography?hsLang=en"><strong>container orchestration</strong></a> and modern management of <a href="/en/guides/how-to-make-cloud-native-applications?hsLang=en"><strong>Cloud Native applications</strong></a>. More and more companies are wondering how to simplify its adoption. If you&rsquo;re also considering managed cloud solutions to reduce risks and free up your team, you&rsquo;re in the right place.</p>
<p>In this <strong>guide to Kubernetes as a Service (KaaS)</strong>, you&rsquo;ll find everything you need: from tangible benefits to best practices for security and management, all the way to DevOps and AI scenarios, without neglecting the limitations, plus practical advice for truly effective adoption drawn from SparkFabrik&rsquo;s deep experience.</p>
<h2 id="what-is-kubernetes-as-a-service-kaas-and-how-does-it-work">What is Kubernetes as a Service (KaaS) and How Does it Work?</h2>
<p><strong>Kubernetes as a Service (KaaS) is a managed cloud solution</strong> that provides ready-to-use Kubernetes clusters, such as AWS EKS, Google GKE, or Azure AKS. The main feature of KaaS is the so-called <strong>managed Control Plane</strong>: everything needed to coordinate and &ldquo;orchestrate&rdquo; the cluster&rsquo;s operation is managed directly by the provider, so the IT team can focus solely on their applications.</p>
<p>A quick note before we continue: to better understand KaaS, it&rsquo;s helpful to have some knowledge of the <strong>basic components of Kubernetes</strong>. If you want to delve into the Control Plane, Worker Nodes, and other key elements of this technology, check out our <a href="/it/kubernetes-architecture-guida-ai-componenti?hsLang=en"><strong>guide to Kubernetes architecture</strong></a> (Italian).</p>
<p>Let&rsquo;s take a closer look at how this platform works. In a traditional cluster, teams have to handle the installation, configuration, and updates of Kubernetes&rsquo; central elements. With the managed Control Plane of KaaS, however, all these responsibilities are in the hands of the provider: the <strong>API server, etcd, controller-manager, and scheduler</strong> are automatically replicated and updated by the service itself. The advantage? High availability, resilience, and security are guaranteed, with no extra effort for the internal team.</p>
<p>Another key aspect is the <strong>configuration of internal cluster services</strong>. KaaS allows you to easily set up different types of Kubernetes services, such as <strong>ClusterIP</strong> for internal communication, <strong>NodePort</strong> for external exposure, <strong>LoadBalancer</strong> to balance cloud traffic, and <strong>ExternalName</strong> to connect to external DNS. Furthermore, the <strong>Ingress controller</strong> (often NGINX or Traefik) simplifies the entry of HTTP/S traffic into the infrastructure, correctly routing requests and automatically managing TLS security, eliminating the need to manually configure a LoadBalancer for each service.</p>
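<p>As a sketch of how this looks in practice, here is a minimal manifest pair for a hypothetical <code>web</code> application: a ClusterIP Service (the default type) for internal communication, and an Ingress that routes external HTTPS traffic to it. The names, host and <code>ingressClassName</code> are illustrative assumptions to adapt to your own cluster.</p>

```yaml
# Hypothetical "web" Service: ClusterIP (the default) makes it
# reachable only from inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80          # port exposed by the Service
      targetPort: 8080  # port the pods actually listen on
---
# An Ingress routes external HTTP/S traffic to the Service,
# with TLS terminated by the ingress controller (e.g. NGINX).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx   # depends on the controller installed
  tls:
    - hosts:
        - www.example.com
      secretName: web-tls   # TLS certificate stored as a Secret
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```

<p>On a KaaS platform, the ingress controller itself and often the TLS certificate referenced here are provisioned for you by the provider or by an add-on such as cert-manager.</p>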
<p>To summarize: compared to manual Kubernetes management, <strong>KaaS eliminates the complexity of cluster installation and day-to-day maintenance</strong>, automates patches and updates, reduces risks and errors, and speeds up deployment (we&rsquo;re talking about going from weeks to a few hours). This way, company teams can avoid <a href="/en/blog/errori-comuni-kubernetes/"><strong>the typical mistakes made with Kubernetes</strong></a>. Most importantly, they can focus on their applications, with significant savings in time and resources.</p>
<h2 id="the-main-advantages-of-adopting-a-kaas-solution">The Main Advantages of Adopting a KaaS Solution</h2>
<p>Adopting a Kubernetes as a Service solution brings numerous concrete advantages: first, <strong>deployment and scaling become immediate</strong> and reliable operations thanks to the native integration of features like the Horizontal Pod Autoscaler (HPA) and the Cluster Autoscaler. These automatically adjust the number of pods or nodes based on real demand, preventing waste or overload and reducing provisioning times from hours to minutes.</p>
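<p>A minimal Horizontal Pod Autoscaler sketch illustrates the idea (the <code>web</code> Deployment name and the thresholds are purely illustrative): the cluster keeps between 2 and 10 replicas, adding pods whenever average CPU utilization exceeds 70%.</p>

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:          # the workload being scaled
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% avg CPU
```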
<p>Secondly, <strong>automated cluster management is completely delegated to the provider</strong> (including updates, security patches, backups, and recovery), with tools like Azure AKS or GKE offering integrated SLAs and restore tools, which relieves the DevOps team of these responsibilities.</p>
<p>Another important benefit is <strong>access to other advanced features</strong>, which would require complex manual setups in a DIY context. In addition to the more common HPA and Cluster Autoscaler, you can easily leverage:</p>
<ul>
<li><strong>Vertical Pod Autoscaler (VPA)</strong>, which optimizes resource allocation for each individual pod, ensuring that every application has exactly the resources it needs to function efficiently.</li>
<li><strong>Event-based scaling systems</strong> like KEDA (Kubernetes Event-Driven Autoscaling). Instead of relying only on metrics like CPU usage, these systems allow for autoscaling in response to specific events, such as messages in a Kafka queue or database activity, making them particularly effective for asynchronous workloads.</li>
<li><strong>Scaling to zero</strong>, a feature that reduces the number of replicas of an application to zero when there are no requests or events to handle. It&rsquo;s ideal for serverless workloads or applications that are not used continuously, allowing you to completely eliminate resource costs when they&rsquo;re not needed.</li>
<li><strong>Custom metrics</strong> to monitor performance and make autoscaling more flexible and targeted.</li>
</ul>
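<p>To make the event-driven and scale-to-zero ideas concrete, here is a hypothetical KEDA <code>ScaledObject</code> for a Kafka consumer. The deployment name, broker address, topic and threshold are made-up examples; KEDA itself must be installed in the cluster.</p>

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: orders-consumer
spec:
  scaleTargetRef:
    name: orders-consumer   # the Deployment to scale
  minReplicaCount: 0        # scale to zero when the queue is empty
  maxReplicaCount: 20
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: kafka:9092
        consumerGroup: orders
        topic: orders
        lagThreshold: "50"  # add replicas as consumer lag grows
```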
<p>The result of all this? <strong>Cost savings</strong> through a <strong>more efficient use of resources</strong>: think of 24/7 services with variable loads that scale up during peak hours and scale back down afterwards, so you pay only for real consumption. For example, companies that adopt KaaS in production <strong>report a 20-30% reduction in infrastructure costs</strong>, simply by enabling autoscaling and managed backups.</p>
<p>Of course, there&rsquo;s a flip side. Later, we&rsquo;ll discuss the potential <strong>disadvantages of KaaS</strong>. However, we&rsquo;ll also show how an <strong>expert technology partner</strong> like SparkFabrik can effectively mitigate these drawbacks.</p>
<h2 id="best-practices-for-effective-use-of-kubernetes-as-a-service">Best Practices for Effective Use of Kubernetes as a Service</h2>
<p>To effectively adopt Kubernetes as a Service, you need to rely on specific <strong>best practices</strong>. First, it&rsquo;s crucial to <strong>plan regular updates and patches for the control plane</strong> and nodes, preferably by enabling the automatic updates offered by providers like GKE or AKS, to reduce the attack surface and immediately benefit from the latest patches.</p>
<p>In parallel, an effective access management strategy must be implemented. Best practices involve strictly applying the <strong>Principle of Least Privilege</strong> (PoLP), which assigns each user only the minimum necessary permissions. This can be applied in conjunction with Role-Based Access Control (RBAC), defining specific roles and bindings for users, service accounts, and applications, avoiding excessive permissions like cluster-admin and periodically reviewing policies.</p>
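<p>A minimal RBAC sketch of least privilege, with hypothetical names: a namespaced <code>Role</code> that only allows reading Deployments, bound to a single user, instead of a broad grant like <code>cluster-admin</code>.</p>

```yaml
# Read-only access to Deployments, limited to one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: staging
  name: deployment-reader
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch"]   # no create/update/delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: staging
  name: read-deployments
subjects:
  - kind: User
    name: jane@example.com   # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: deployment-reader
  apiGroup: rbac.authorization.k8s.io
```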
<p><strong>Network Policies</strong> become essential for isolating traffic between Pods, reducing the risk of lateral movement within a network in case of compromise. In a microservices architecture, an attacker could exploit the compromise of a single pod (e.g., the frontend) to reach other pods (like the backend or a database) that would otherwise be protected. Network Policies address this risk by setting precise rules on which pods can communicate with each other.</p>
<p>To ensure availability and scalability, it is essential to <strong>configure HPA/VPA</strong> to automatically adjust replicas and resources based on real metrics. In the same way, you must <strong>define Pod Disruption Budgets</strong> to maintain a minimum number of active Pods during maintenance or updates.</p>
<p><strong>Monitoring and logging</strong> must move to centralized stacks, with Prometheus/Grafana for metrics, EFK/ELK for logs, and alerting configured to track errors, anomalous resource usage, and unauthorized access attempts.</p>
<p>The <strong>security of container images</strong> is obviously fundamental to prevent vulnerabilities and malware from spreading in the cluster. This must be managed by adopting several strategies:</p>
<ul>
<li>Automatic scanning of images (using tools like Trivy or Clair) to identify known vulnerabilities, misconfigurations, and obsolete dependencies. Scanning is integrated directly into CI/CD pipelines to automatically block the distribution of insecure images.</li>
<li>Use of reliable, secure, and private registries that offer access control and protection functions.</li>
<li>Minimal images, stripped down to the essentials, reducing the amount of potentially vulnerable software in the container and thereby shrinking the &ldquo;attack surface.&rdquo;</li>
<li>Immutable signatures/digests, unique identifiers that guarantee the integrity of the image and prevent unauthorized modifications after creation.</li>
</ul>
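<p>The CI/CD integration mentioned above can look like this sketch of a GitLab CI job (stage and image name are hypothetical) that runs Trivy against the freshly built image and fails the pipeline on serious findings:</p>

```yaml
# Hypothetical GitLab CI job: block deployment if the image
# contains HIGH or CRITICAL vulnerabilities.
scan-image:
  stage: test
  image:
    name: aquasec/trivy:latest
    entrypoint: [""]
  script:
    - trivy image --exit-code 1 --severity HIGH,CRITICAL "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
```

A non-zero exit code from the scanner stops the job, so an insecure image never reaches the registry promotion or deploy stages.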
<p>No less important is the <strong>protection of the etcd datastore</strong>, a critical Kubernetes component that stores the configuration and state of the entire cluster. Its compromise would have extremely serious consequences, which is why protecting it is of utmost importance. It must be encrypted in transit and at rest (protecting data from interception or direct access) and isolated behind TLS, which ensures secure communication. It must also be backed up frequently (using native tools like etcdctl or solutions with snapshots in object storage); but having backups is not enough: it&rsquo;s essential to regularly test the restore procedures as well.</p>
<p>Finally, introducing <strong>Admission Controllers</strong> (such as OPA/Gatekeeper, components that intercept and can reject requests directed at the cluster), setting resource requests/limits to prevent oversubscription and overloads, and ensuring readiness/liveness probes (periodic checks that monitor a pod&rsquo;s &ldquo;health status&rdquo;) all increase security and resilience.</p>
<p>Thanks to all these measures, a <strong>KaaS cluster becomes highly secure, performant</strong>, and ready to handle incidents, providing a solid foundation on which to build mission-critical applications.</p>
<p>If you want to delve deeper into Kubernetes security, check out our insights on <a href="/en/container-security-how-to?hsLang=en"><strong>Container Security</strong></a> or more generally on <a href="/it/guides/cloud-security-come-proteggere-i-dati-nell-era-del-cloud?hsLang=en"><strong>Cloud security</strong></a> (Italian).</p>
<h2 id="kaas-for-devops-optimizing-the-application-lifecycle">KaaS for DevOps: Optimizing the Application Lifecycle</h2>
<p>As we&rsquo;ve seen, Kubernetes as a Service (KaaS) revolutionizes the DevOps application lifecycle by offering a robust infrastructure abstraction that allows teams to focus on code, automation, and release quality. <strong>Native support for CI/CD</strong> is clearly evident: Kubernetes allows you to <strong>integrate pipelines with tools like GitLab CI/CD, Jenkins, or Azure DevOps</strong>, enabling automatic builds, tests, and deployments directly on clusters, with rollbacks and updates without downtime (blue/green, canary).</p>
<p><strong>Containerization</strong> ensures consistency across environments and prevents classic &ldquo;works-on-my-machine&rdquo; problems, improving productivity and predictability and accelerating time-to-market. Thanks to <strong>automatic scalability</strong> (HPA, VPA, cluster autoscaler, KEDA), pipelines can dynamically adjust resources based on load, ensuring efficiency and optimized costs.</p>
<p>Kubernetes maintains the &ldquo;desired state&rdquo; by automatically replacing downed pods and supporting end-to-end resilience, freeing DevOps from the overhead of managing standard infrastructure. The infrastructure abstraction that characterizes KaaS means that DevOps teams no longer have to worry about VM provisioning or networking: they can <strong>operate in a declarative GitOps environment</strong> , where every change is traceable, versioned, and deployable via commit.</p>
<p>Furthermore, the operational impact is drastically reduced: DevOps teams can spin up on-demand runners for CI, test in isolated namespaces, and then destroy them afterward, eliminating the cost of servers kept running for occasional jobs. <strong>KaaS therefore allows for the creation of robust, resilient, and agile CI/CD and IaC pipelines</strong>, with a total focus on application value, not on the underlying infrastructure.</p>
<p>We&rsquo;ve mentioned many concepts that deserve further exploration. To do so, you can check out the resources we&rsquo;ve created on <a href="/en/infrastructure-as-code-what-is-it-and-its-benefits?hsLang=en">Infrastructure as Code</a>, <a href="/en/gitops-and-kubernetes?hsLang=en">GitOps</a>, and <a href="/en/what-are-continuous-integration-delivery-deployment?hsLang=en">Continuous Integration and Delivery</a>.</p>
<h2 id="kubernetes-as-a-service-and-artificial-intelligence-scenarios-and-opportunities">Kubernetes as a Service and Artificial Intelligence: Scenarios and Opportunities</h2>
<p>Finally, we must mention the context of Artificial Intelligence. Indeed, Kubernetes as a Service is proving to be <strong>a strategic platform for the deployment and scaling of AI/ML applications</strong> , thanks to infrastructure abstraction and built-in features like HPA, VPA, and Cluster Autoscaler.</p>
<p>Tools like the ones just mentioned allow for <strong>dynamically allocating resources based on load</strong> and GPU or CPU requirements, while the Cluster Autoscaler extends scalability to the node level, preventing waste and ensuring optimal performance for training and inference. In addition, services like Azure AKS offer native integration with GPUs and spot nodes to reduce training costs, while separate Node Pools allow for differentiated workloads (e.g., training vs. inference).</p>
<p>Added to this are <strong>auto-scaling solutions based on predictive AI</strong>, capable of anticipating traffic spikes and proactively allocating resources, reducing lag during spikes and <strong>cutting costs by up to 40-50%</strong>. These results are also confirmed by real cases, such as <strong>Alibaba CS with AHPA (Adaptive HPA), which increased CPU usage by 10%</strong> (reducing wasted or idle CPU capacity) <strong>and overall reduced costs by 20%</strong> through more efficient use of cloud resources.</p>
<p>Furthermore, KaaS supports complete AI/ML ecosystems through <strong>platforms like Kubeflow</strong>, which offer interactive notebooks (writing and running code to experiment with models), training pipelines (automating model training and data preparation), operators for frameworks (which allow popular ML frameworks like TensorFlow and PyTorch to be run directly on K8s), and serving via KServe (to make the model usable in production via APIs). In short, these are complete platforms with a suite of specialized tools that provide everything data scientists and ML engineers need to orchestrate the entire model lifecycle on Kubernetes, from the initial experiment to production deployment, without having to worry about the underlying infrastructure.</p>
<p>On the operational front, there are also <strong>intelligent plugins and operators</strong> (e.g., KubeAI, kgateway, Kubectl-AI) enhanced by AI that simplify the operational management of Kubernetes, for example by generating YAML manifests or providing automated cluster introspection.</p>
<p>Other solutions like <strong>kubernetes-sigs/lws</strong> (LeaderWorkerSet API) are designed to simplify the deployment and scaling of complex AI models, especially multi-host and multi-node ones, by allowing a group of pods to be treated as a single unit of replication.</p>
<p>To concretely and easily serve LLMs on Kubernetes, there&rsquo;s also <strong>vLLM</strong>, a tool that allows LLM models to be run in a distributed and scalable way, supporting deployment on both CPUs and GPUs, optimizing resource usage across multiple nodes, and ensuring high availability and resilience in inference. Our Cloud Native Engineers delve into these aspects in the talk &ldquo;<a href="https://www.youtube.com/watch?v=0Hcz0v10SnY&amp;list=PLSD9hiOyso85HJ9IKTA5z1b8qMtzdL-rO&amp;index=2"><strong>Deploy, scale, serve: gestire motori di Inference AI su Kubernetes</strong></a>&rdquo; (in Italian).</p>
<p>In summary, <strong>KaaS for AI/ML offers a scalable, predictable, secure, and economically efficient foundation</strong> for managing complex workflows, with the added benefit of integrated predictive features to optimize costs and performance—a competitive advantage that more and more teams are adopting.</p>
<h2 id="challenges-and-disadvantages-to-consider-with-kaas-the-sparkfabrik-perspective">Challenges and Disadvantages to Consider with KaaS: The SparkFabrik Perspective</h2>
<p>After dwelling on the benefits, it&rsquo;s time to talk about the limitations of the KaaS solution. In fact, although Kubernetes as a Service (KaaS) offers numerous advantages in terms of simplification and automation, it&rsquo;s essential to approach this technology with a full awareness of the potential challenges you may face.</p>
<p>So let&rsquo;s look at the main issues and how SparkFabrik&rsquo;s consultative approach and specialized services can help mitigate them.</p>
<h3 id="the-not-always-easy-shared-responsibility-for-security">The (Not Always Easy) Shared Responsibility for Security</h3>
<p>With KaaS, the <strong>security of the Kubernetes infrastructure is inevitably shared</strong> between the provider and the customer. Fully understanding where the provider&rsquo;s responsibilities end and the user&rsquo;s begin is fundamental, but not always immediate.</p>
<p><strong>The SparkFabrik Approach:</strong> To prevent this risk from becoming a threat, we work alongside our clients to clearly define the perimeters of responsibility. Through <strong>personalized</strong> <a href="https://www.sparkfabrik.com/en/services/cloud-native-services/kubernetes-consultancy/"><strong>Kubernetes consulting</strong></a> and the structured path of the <a href="https://www.sparkfabrik.com/en/cloud-native-journey/"><strong>Cloud Native Journey</strong></a>, we guide teams in implementing the security best practices that are the company&rsquo;s responsibility (and not the provider&rsquo;s). This includes adopting the Principle of Least Privilege (PoLP) with RBAC, configuring strict network policies, secure container image management, continuous monitoring, and much more.</p>
<h3 id="risks-of-vendor-lock-in">Risks of Vendor Lock-in</h3>
<p>Relying on a single KaaS provider can, unfortunately, create a <strong>strong technological dependency</strong>, making future migrations to other providers or to on-premise solutions complex and costly.</p>
<p><strong>The SparkFabrik Approach:</strong> At SparkFabrik, we favor solutions based on open standards and Open Source technologies precisely to ensure maximum portability and flexibility. Our goal is to transfer knowledge and tools to make the team autonomous, <strong>avoiding the creation of technological or contractual dependencies (no lock-in)</strong>. Therefore, our consulting is aimed at designing architectures that, while leveraging the benefits of KaaS, minimize the risk of lock-in, for example through the use of standard Kubernetes configurations and the adoption of DevOps practices that facilitate hybrid or multi-cloud management.</p>
<h3 id="limited-control-over-infrastructure-and-network-configurations">Limited Control Over Infrastructure and Network Configurations</h3>
<p>KaaS, by offering a &ldquo;Managed Control Plane,&rdquo; abstracts much of the infrastructural complexity. By its nature, this leads to <strong>less direct control over the underlying hardware</strong> and some advanced network configurations compared to a self-managed Kubernetes deployment.</p>
<p><strong>The SparkFabrik Approach:</strong> Every organization has different control needs, and we know this well. That&rsquo;s why we support you in <strong>selecting the KaaS provider</strong> and the service level that best aligns with your specific needs for governance and customization. We work to <strong>optimize the available configurations</strong> on the chosen platform, maximizing your control within the service&rsquo;s limits. If more granular control is an essential requirement, don&rsquo;t worry. We have all the experience necessary to guide you in the implementation and management of dedicated Kubernetes clusters or hybrid solutions, balancing the advantages of abstraction with the need for flexibility and direct control.</p>
<p>To conclude, we can say that addressing these challenges requires <strong>expertise, experience, and method</strong>. Exactly what we can offer you to transform the potential pitfalls of KaaS into opportunities for growth and optimization.</p>
]]></content:encoded><media:content url="https://www.sparkfabrik.com/images/blog/kubernetes-as-a-service/Kubernetes_20as_20a_20Service_20-_20Featured_20Image.jpg" medium="image"/><enclosure url="https://www.sparkfabrik.com/images/blog/kubernetes-as-a-service/Kubernetes_20as_20a_20Service_20-_20Featured_20Image.jpg" type="image/jpeg"/><category>Cloud Native</category></item><item><title>Drupal Headless: the Omnichannel CMS for a unified experience</title><link>https://www.sparkfabrik.com/en/blog/drupal-headless/</link><pubDate>Wed, 10 Sep 2025 00:00:00 +0000</pubDate><author>SparkFabrik Team</author><guid>https://www.sparkfabrik.com/en/blog/drupal-headless/</guid><description>Implementing Drupal headless: the omnichannel CMS to distribute content on web, mobile, and IoT, unifying experiences. With case studies and best practices</description><content:encoded><![CDATA[<div class="tldr">
  <span class="tldr__label">TL;DR</span>
  <div class="tldr__body">
    Drupal headless separates backend and frontend to distribute content across web, mobile, IoT, and digital signage from a single centralized hub. This article covers the API ecosystem (JSON:API, GraphQL), omnichannel implementation patterns, real case studies (retail and media) with measurable results, and best practices for tackling preview, layout, and operational complexity challenges.
  </div>
</div>
<p>With an ever-growing number of digital channels and touchpoints, organizations face an increasingly complex challenge: creating, managing, and distributing <strong>consistent and personalized content across a constantly growing ecosystem of platforms</strong>. From websites and mobile apps to digital kiosks, digital signage, voice assistants, and IoT devices, the proliferation of channels requires a radically new approach to content management.</p>
<p>In <a href="/en/tag/drupal?hsLang=en">previous articles in our series</a>, we explored the general features of Drupal CMS, its advantages over alternatives, migration strategies, security aspects, integration with Design Systems, and solutions for specific vertical sectors. In this article, we examine one of the most innovative and promising architectures: Drupal in headless mode, which enables the creation of a truly omnichannel CMS.</p>
<h2 id="omnichannel-cms-beyond-the-traditional-web">Omnichannel CMS: Beyond the Traditional Web</h2>
<p>Before exploring the technical aspects, it is important to understand what we mean by &ldquo;omnichannel CMS&rdquo; and why this approach is becoming increasingly crucial for managing content across a myriad of different channels.</p>
<p>The omnichannel approach is based on the idea of a <strong>cohesive and continuous user experience</strong>. Rather than simply adapting content for different channels, it is managed in a unified way to ensure consistency and relevance at every point of contact.</p>
<h3 id="the-evolution-of-digital-needs">The Evolution of Digital Needs</h3>
<p>The concept of a CMS (Content Management System) has undergone a profound evolution in recent years. We can identify three macro-phases, shaped both by technological evolution and by changing user needs, expectations, and behaviors.</p>
<ul>
<li><strong>First phase (1990-2010)</strong>: CMS were monolithic systems focused almost exclusively on managing websites.</li>
<li><strong>Second phase (2010-2020)</strong>: Multichannel CMS became widespread, with extensions to manage mobile apps and other channels, while still maintaining a predominantly web-centric approach.</li>
<li><strong>Current phase (2020+)</strong>: The paradigm is evolving towards omnichannel CMS, which act as a single, central content hub capable of feeding any channel, present and future.</li>
</ul>
<p>According to Gartner research, by 2025 over 75% of medium and large-sized organizations will adopt an omnichannel approach to content management, up from 40% today.</p>
<p>This trend is driven not only by the proliferation of channels but also by a growing expectation of personalization and consistency in the user experience.</p>
<h3 id="difference-between-multichannel-and-omnichannel">Difference Between Multichannel and Omnichannel</h3>
<p>It&rsquo;s important to distinguish between a merely multichannel approach and a truly omnichannel one:</p>
<ul>
<li><strong>Multichannel</strong>: Content is adapted and distributed across different channels, but these often operate as separate silos. Consequently, there is significant duplication of effort and content.</li>
<li><strong>Omnichannel</strong>: Content is created in a &ldquo;channel-agnostic&rdquo; way and managed centrally. It is then orchestrated to provide a consistent but personalized experience across all touchpoints. The brand can thus offer a unified and cohesive experience across all platforms, while also meeting the personalization needs typical of modern users.</li>
</ul>
<p>The omnichannel approach requires a fundamental rethinking of CMS architecture, moving from monolithic systems to headless or decoupled solutions that clearly separate content management from its presentation.</p>
<h2 id="drupal-headless-key-concepts">Drupal Headless: Key Concepts</h2>
<p>Drupal in headless mode represents a significant evolution from the traditional approach. But what exactly does it consist of?</p>
<h3 id="what-is-headless-architecture">What is Headless Architecture</h3>
<p>Headless architecture, which literally means &ldquo;without a head,&rdquo; is based on the separation between the backend, which manages content, and the frontend, which handles its display. In this model, a headless CMS involves:</p>
<ul>
<li>The &ldquo;<strong>backend</strong>&rdquo; (in this case Drupal) acts as a centralized content hub. It deals exclusively with content management, its structuring, and business logic.</li>
<li>The &ldquo;<strong>frontend</strong>&rdquo; (how our content looks and how the user interacts with it) is completely separated and implemented with specialized technologies (React, Vue, Angular, Swift, Kotlin, etc.).</li>
<li>The <strong>communication between backend and frontend</strong>, which happens exclusively via APIs.</li>
</ul>
<p>This &ldquo;<strong>API-first</strong>&rdquo; approach allows Drupal to be used as a central content repository that can potentially power any digital channel or touchpoint. This extreme flexibility in content usage and delivery is one of the main advantages of this model.</p>
<h3 id="drupals-apis-a-mature-ecosystem">Drupal&rsquo;s APIs: A Mature Ecosystem</h3>
<p>In a headless architecture, the API link is the fundamental element that makes the separation between content and presentation possible.</p>
<p>Drupal stands out in the CMS landscape for the <strong>maturity and completeness of its API ecosystem</strong>. This leadership is the result of an &ldquo;API-First&rdquo; initiative launched by the Drupal community back in 2016, long before many other CMS began to seriously consider the headless approach.</p>
<p>Drupal&rsquo;s API framework includes:</p>
<ul>
<li><strong>JSON:API</strong>: Integrated into Drupal&rsquo;s core, it offers a complete implementation of the JSON:API specification, an industry standard for REST APIs.</li>
<li><strong>GraphQL</strong>: Supported by stable modules, it allows for precise and optimized queries.</li>
<li><strong>Custom REST APIs</strong>: The ability to create custom REST endpoints for specific needs.</li>
</ul>
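<p>To make the JSON:API style concrete, here is an illustrative sketch in Python (a hypothetical helper, not part of Drupal or any official client) that builds a collection URL using the spec&rsquo;s <em>sparse fieldsets</em> and <em>include</em> parameters:</p>

```python
from urllib.parse import urlencode

def jsonapi_url(base, resource_type, fields=None, include=None):
    """Build a Drupal JSON:API collection URL.

    `fields` maps a resource type to the attributes to return
    (sparse fieldsets); `include` lists related resources to
    embed in the same response, avoiding extra round trips.
    """
    # Drupal exposes e.g. node--article at /jsonapi/node/article
    path = f"{base}/jsonapi/{resource_type.replace('--', '/')}"
    params = {}
    for rtype, names in (fields or {}).items():
        params[f"fields[{rtype}]"] = ",".join(names)
    if include:
        params["include"] = ",".join(include)
    query = urlencode(params)
    return f"{path}?{query}" if query else path

url = jsonapi_url(
    "https://example.com",
    "node--article",
    fields={"node--article": ["title", "body"]},
    include=["field_image"],
)
print(url)
# https://example.com/jsonapi/node/article?fields%5Bnode--article%5D=title%2Cbody&include=field_image
```

Requesting only `title` and `body` while embedding `field_image` in one response is exactly the kind of payload trimming that matters on slow mobile or IoT channels.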
<p>According to a 2023 Forrester report, Drupal is ranked among the leaders in the enterprise headless CMS segment, precisely because of the maturity of its API offering. A recognition that demonstrates it is a reliable choice for complex projects.</p>
<h2 id="advantages-of-the-headless-approach-with-drupal">Advantages of the Headless Approach with Drupal</h2>
<p>Adopting Drupal in headless mode offers numerous strategic advantages for organizations aiming for an omnichannel strategy. The headless approach is not just a technical solution, but an opportunity to optimize processes, improve the user experience, and future-proof the investment.</p>
<h3 id="flexibility-and-future-proofing">Flexibility and Future-Proofing</h3>
<p>The <strong>clear separation between content and presentation</strong> is undoubtedly one of the main advantages of headless architecture, ensuring unparalleled flexibility. This advantage is reflected in several ways:</p>
<ul>
<li>Content editors can focus on creating quality content, regardless of how and where it will be displayed.</li>
<li>Frontend developers can use the most suitable technologies for each channel and touchpoint.</li>
<li>The addition of new channels does not require changes to the backend, guaranteeing the longevity of the investment.</li>
</ul>
<p>As Dries Buytaert, the founder of Drupal, observed: &ldquo;With a headless approach, you get a content repository that can last for decades, while frontends can evolve rapidly following emerging technologies.&rdquo;</p>
<h3 id="performance-and-scalability">Performance and Scalability</h3>
<p>Load times are a crucial aspect of any digital experience. Empirical evidence shows that every extra second of loading leads to a worsening of fundamental metrics such as bounce rate and conversion rate.</p>
<p>Modern users expect speed, and attention is won by offering instant experiences.</p>
<p>Headless architecture allows for <strong>significant performance improvements</strong> thanks to a series of optimizations:</p>
<ul>
<li>The ability to implement <strong>aggressive caching strategies</strong> on Drupal APIs to drastically reduce response times.</li>
<li>The use of <strong>Content Delivery Networks (CDNs)</strong> and <em>edge computing</em> to distribute content more efficiently, serving it to users from the closest geographical location.</li>
<li><strong>Client-side rendering</strong> to reduce the load on the server, freeing up resources for other critical operations.</li>
<li>The <strong>reduced size of payloads</strong>, which transfer only the data that is strictly necessary, speeds up loading and reduces bandwidth consumption.</li>
</ul>
<p>These combined factors lead to a significantly faster user experience. In our headless projects, we have seen improvements in Core Web Vitals of up to 45% compared to traditional implementations.</p>
<h3 id="advanced-user-experience">Advanced User Experience</h3>
<p>The complete separation of the frontend allows for the development of <strong>richer and more interactive user experiences</strong>. Development teams can free themselves from the need to adapt to a monolithic framework, instead gaining the freedom to <strong>choose the most suitable frontend technology for each scenario</strong>.</p>
<p>This approach favors the development of <strong>Progressive Web Apps</strong> (PWAs) with offline functionality, which ensure service continuity even without a connection.</p>
<p><strong>Interfaces</strong> can integrate smooth transitions, complex animations, and latest-generation visual effects, while the use of technologies like WebSockets allows for <strong>real-time content updates</strong>, for a dynamic and always up-to-date experience.</p>
<p>The absence of technological constraints on the frontend therefore allows for the customization of the interface and interactions to adapt perfectly to every device and context, offering a tailored experience for users. This flexibility is fundamental in an omnichannel context, where each touchpoint has specific characteristics and constraints.</p>
<h2 id="implementing-drupal-headless-for-omnichannel-scenarios">Implementing Drupal Headless for Omnichannel Scenarios</h2>
<p>To successfully implement Drupal headless in real-world scenarios, it is necessary to consider various practical aspects that go beyond the simple separation of backend and frontend.</p>
<p>First and foremost, the separation of data and frontend makes careful content structuring essential. Serving this content requires accurate API design, while on the frontend side a strategic approach is needed to keep the user experience consistent across channels, for example through the definition of a Design System.</p>
<h3 id="content-structuring-for-omnichannel">Content Structuring for Omnichannel</h3>
<p>The first fundamental step for an omnichannel implementation is to rethink the approach to <strong>content structuring from a channel-agnostic perspective</strong>.</p>
<p>In fact, instead of creating content specific to each channel, the strength of this approach is the centralized definition of content, which is then orchestrated across the various touchpoints. This approach therefore involves:</p>
<ul>
<li><strong>Structured content modeling</strong>: Creating highly structured and semantic content types that facilitate reuse.</li>
<li><strong>Content/presentation separation</strong>: Avoiding references to layout or visual aspects within the content itself.</li>
<li><strong>Rich metadata</strong>: Adding metadata that allows for the selection and personalization of content for different channels.</li>
<li><strong>Atomic content</strong>: Breaking content down into components that can be reused across different channels.</li>
</ul>
<p>In a recent project for a retail client, we implemented a &ldquo;content atoms&rdquo; model in Drupal that allowed the same content to be reused on the website, mobile app, in-store kiosks, and digital signage, with an estimated saving of 60% in editorial work.</p>
<h3 id="strategic-api-design">Strategic API Design</h3>
<p>APIs are the bridge connecting the backend to the frontends, and their effectiveness determines the flexibility and scalability of the entire ecosystem. <strong>API design</strong> is a critical aspect that requires a strategic approach, clearly defining specifications before even starting implementation:</p>
<ul>
<li><strong>API contracting</strong>: Defining API contracts before implementation, including rules, specifications, and expectations, ensures clarity, consistency, and stability throughout development.</li>
<li><strong>Versioning</strong>: A clear strategy for API versioning ensures backward compatibility and orderly management of changes.</li>
<li><strong>Response optimization</strong>: Configuring <em>sparse fieldsets</em> and <em>includes</em> reduces payload sizes, optimizing performance and data transfer.</li>
<li><strong>Caching</strong>: Implementing advanced caching strategies allows content to be served more quickly and efficiently.</li>
<li><strong>Authentication and authorization</strong>: Adopting robust and granular security mechanisms is essential to protect data and prevent API abuse.</li>
</ul>
<p>In our approach, we use a &ldquo;design-first&rdquo; process for APIs, with tools like OpenAPI to document and validate API contracts even before implementation begins.</p>
<h3 id="frontend-architecture-for-omnichannel">Frontend Architecture for Omnichannel</h3>
<p>Managing <strong>multiple frontend implementations</strong> requires a well-structured architectural approach. Just as content duplication is avoided in the backend, reusable elements can also be defined on the frontend for different contexts and devices, making development more efficient and reducing technical debt:</p>
<ul>
<li><strong>Shared component library</strong>: Developing validated and reusable UI components across different platforms.</li>
<li><strong>Cross-platform Design System</strong>: Defining principles and patterns that adapt to different contexts while maintaining the brand&rsquo;s identity.</li>
<li><strong>Centralized state management</strong>: Consistent management of application state, including customizations for each user and channel.</li>
<li><strong>Unified authentication strategy</strong>: Single Sign-On across different touchpoints.</li>
</ul>
<p>A pattern we have found particularly effective is the implementation of an &ldquo;API middleware&rdquo; that acts as an orchestration layer between Drupal and the various frontends, managing caching, personalization, and channel-specific transformations.</p>
<p>Equally strategic is defining a Design System that guarantees visual and functional consistency, while also reducing development times. Read our dedicated resources to find out more: the article on how to implement a <a href="/en/design-system-and-drupal-cms?hsLang=en">Design System with Drupal CMS</a>, the <a href="/it/guides/design-system-ux-accessibilita-ai?hsLang=en">guide on Design System and UX</a> (Italian), and our <a href="/en/landing/accessibilita-design-system/">downloadable guide on Accessibility and Design System</a> (Italian).</p>
<h2 id="case-study-omnichannel-implementations-with-drupal-headless">Case Study: Omnichannel Implementations with Drupal Headless</h2>
<p>To illustrate the omnichannel headless approach in practice, let&rsquo;s examine some real-world cases implemented by SparkFabrik. The implementation of this architecture makes it possible to meet the flexibility and performance needs of both editorial teams and complex e-commerce platforms, generating tangible and measurable business results.</p>
<h3 id="omnichannel-retail-integrated-in-store-and-digital-experience">Omnichannel Retail: Integrated In-Store and Digital Experience</h3>
<p>For a major retailer, we implemented an omnichannel platform based on Drupal headless that powers:</p>
<ul>
<li>An e-commerce site with a React frontend.</li>
<li>A native mobile app for iOS and Android.</li>
<li>Touchscreen kiosks in stores.</li>
<li>In-store digital signage and screens.</li>
</ul>
<p>The tangible results obtained in this project include:</p>
<ul>
<li>A 70% reduction in the time needed to launch new cross-channel marketing campaigns.</li>
<li>A consistent user experience with a 35% improvement in Net Promoter Score (NPS).</li>
<li>Optimization of operational costs thanks to centralized content management.</li>
</ul>
<p><em>&ldquo;The headless approach with Drupal has radically transformed our ability to offer consistent and personalized omnichannel experiences. The separation of content and presentation has given us unprecedented flexibility to innovate across different touchpoints.&rdquo;</em> - Digital Director</p>
<h3 id="media-and-publishing-efficient-multichannel-distribution">Media and Publishing: Efficient Multichannel Distribution</h3>
<p>For a publishing group, we implemented a content hub based on Drupal headless. This backend distributes content to:</p>
<ul>
<li>A responsive website with a Next.js frontend.</li>
<li>A mobile news app.</li>
<li>Personalized newsletters.</li>
<li>Feeds for smart speakers and voice assistants.</li>
<li>Integrations with social platforms.</li>
</ul>
<p>The results of the headless approach include:</p>
<ul>
<li>A 40% increase in editorial productivity.</li>
<li>Simultaneous publication across all channels with one click.</li>
<li>A 25% improvement in mobile device loading times.</li>
</ul>
<p><em>&ldquo;Our editorial team can now focus on creating quality content, knowing that it will be optimally distributed across all our channels. A headless implementation of Drupal has allowed us to evolve our digital offering quickly without having to constantly rethink the backend.&rdquo;</em> - Chief Digital Officer</p>
<h2 id="challenges-and-solutions-in-adopting-drupal-headless">Challenges and Solutions in Adopting Drupal Headless</h2>
<p>While there are many advantages, adopting a headless approach with Drupal also presents specific challenges that are important to address proactively to ensure project success.</p>
<h3 id="content-preview">Content Preview</h3>
<p>One of the main challenges in any headless implementation is previewing content before it is published.</p>
<p>Unlike a traditional CMS, in a headless architecture there is no native interface to preview how content will look on different channels before publication.</p>
<p>To solve this problem, we implemented a &ldquo;headless preview&rdquo; system that allows content editors to see in real time how the content will appear on various channels directly from the Drupal interface, using simulated renderings of the different frontends.</p>
<h3 id="layout-and-presentation-management">Layout and Presentation Management</h3>
<p>In a purely headless approach, editors lose the ability to control layout and presentation aspects of the content, which can be a limiting factor in some contexts.</p>
<p>We have developed a &ldquo;content-as-configuration&rdquo; approach that allows layout structures and presentation rules to be defined in Drupal and exposed via APIs, which are then interpreted by the various frontends, giving control back to editors without compromising the benefits of the headless architecture.</p>
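<p>A sketch of what &ldquo;content-as-configuration&rdquo; can look like on the frontend side, under illustrative assumptions: Drupal exposes a layout as a list of regions referencing component names, and each frontend maps those names onto its own components (the names and shapes below are hypothetical):</p>

```typescript
// A layout definition as Drupal might expose it via its API (hypothetical shape).
interface LayoutRegion {
  region: "hero" | "main" | "sidebar";
  component: string;                    // frontend component name chosen by editors
  props: Record<string, string | number>;
}

// Frontend-side registry mapping component names to render functions.
// In a real React/Next.js frontend these would be actual components.
const registry: Record<string, (p: Record<string, string | number>) => string> = {
  Banner: (p) => `<banner>${p.text}</banner>`,
  ArticleList: (p) => `<articles limit="${p.limit}"></articles>`,
};

function renderLayout(layout: LayoutRegion[]): string {
  return layout
    .map((r) => {
      const render = registry[r.component];
      // Unknown components are skipped, so a channel that does not support
      // a component simply omits it instead of breaking.
      return render ? render(r.props) : "";
    })
    .join("\n");
}
```

<p>The editorial team controls the layout data in Drupal; each frontend decides how (and whether) to render each component, preserving the separation that makes the headless approach valuable.</p>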
<h3 id="operational-complexity">Operational Complexity</h3>
<p>Headless architecture inevitably introduces greater operational complexity. Managing multiple frontends, APIs, and environments requires more sophisticated DevOps processes and a more diverse skill set within the team.</p>
<p>To address this challenge, we implement automated CI/CD pipelines that manage the entire ecosystem as a single unit, with end-to-end testing that verifies the integrity of the experience across all touchpoints. In addition, we provide detailed documentation and centralized monitoring tools that offer visibility into the entire ecosystem.</p>
<h2 id="best-practices-for-success-with-omnichannel-drupal-headless">Best Practices for Success with Omnichannel Drupal Headless</h2>
<p>From our experience in implementing headless projects with Drupal, we have distilled some fundamental best practices that go beyond simple technology and involve project strategy and organization.</p>
<h3 id="1-adopt-an-api-first-approach-at-every-stage">1. Adopt an &ldquo;API-First&rdquo; Approach at Every Stage</h3>
<p>The success of a headless implementation depends heavily on a true <strong>&ldquo;API-First&rdquo; approach</strong> that must permeate all stages of the project. This approach involves several aspects:</p>
<ul>
<li><strong>Content Strategy</strong>: Define the content strategy from an API perspective before thinking about implementation.</li>
<li><strong>Design Thinking</strong>: Include API aspects in the project&rsquo;s ideation and design phase.</li>
<li><strong>Development</strong>: Use methodologies like test-driven development (TDD) for APIs.</li>
<li><strong>Documentation</strong>: Invest in comprehensive and actively maintained API documentation.</li>
</ul>
<p>This approach ensures that APIs are not an afterthought but the foundation of the entire architecture.</p>
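<p>In practice, test-driven development for APIs means writing the contract check before the endpoint exists. A minimal sketch, assuming a hypothetical article payload shape (adjust to your own content model):</p>

```typescript
// Hypothetical contract for an article resource exposed by the backend.
interface ArticlePayload {
  id: string;
  title: string;
  publishedAt: string; // ISO 8601 date string
}

// Runtime type guard: the "test" that the API response must satisfy.
// Written first, it drives the implementation of the endpoint.
function isArticlePayload(x: unknown): x is ArticlePayload {
  if (typeof x !== "object" || x === null) return false;
  const o = x as Record<string, unknown>;
  return (
    typeof o.id === "string" &&
    typeof o.title === "string" &&
    typeof o.publishedAt === "string" &&
    !Number.isNaN(Date.parse(o.publishedAt))
  );
}
```

<p>The same guard can run in CI against the real endpoint and in each frontend at runtime, so a contract regression is caught before it reaches any channel.</p>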
<h3 id="2-build-a-scalable-and-maintainable-architecture">2. Build a Scalable and Maintainable Architecture</h3>
<p><strong>Scalability</strong> and <strong>maintainability</strong> are critical aspects of an omnichannel ecosystem. In this context, the <strong>Cloud Native approach</strong> addresses both, for example:</p>
<ul>
<li>Consider a <strong>microservices</strong> architecture for critical components.</li>
<li>Manage the entire infrastructure as code (<strong>Infrastructure as Code</strong>) to ensure consistency and scalability.</li>
<li>Implement comprehensive <strong>monitoring and alerting</strong> systems to keep performance and anomalies under control.</li>
<li>Define (and monitor) <strong>performance budgets</strong> for each channel.</li>
</ul>
<p>In a recent project, we implemented a &ldquo;micro-frontends&rdquo; architecture that allowed different teams to work autonomously on different channels, while maintaining consistency and performance.</p>
<h3 id="3-invest-in-tools-for-content-editors">3. Invest in Tools for Content Editors</h3>
<p>The long-term success of any platform depends heavily on <strong>adoption by content editors</strong>. Investing in <strong>tools for editorial teams</strong> can make the difference between enthusiastic adoption and resistance to change.</p>
<p>These tools include effective preview systems, such as simulators that show a preview of content for various channels and devices, and the integration of cross-channel analytics to measure the performance and effectiveness of each piece of editorial content.</p>
<h3 id="4-plan-for-evolution-and-growth">4. Plan for Evolution and Growth</h3>
<p>An omnichannel ecosystem is, by definition, in constant evolution. For effective evolution, a clear strategy is necessary. This involves several aspects:</p>
<ul>
<li><strong>Versioning Strategy</strong>: Define the API versioning strategy in advance.</li>
<li><strong>Deprecation Policy</strong>: Have clear policies for deprecating features.</li>
<li><strong>Canary Testing</strong>: Implement mechanisms to test new features on subsets of users.</li>
<li><strong>Feature Flagging</strong>: Use feature flags to enable/disable functionality on different channels.</li>
</ul>
<p>This planning allows the ecosystem to evolve smoothly, without interruptions and while maintaining compatibility with existing channels.</p>
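<p>Feature flagging and canary testing combine naturally: a flag can target specific channels and expose the feature to only a percentage of users. A minimal sketch, with an illustrative flag shape and a simple deterministic bucketing function (not a specific product&rsquo;s API):</p>

```typescript
interface FeatureFlag {
  name: string;
  enabledChannels: string[]; // channels where the feature is active
  rolloutPercent: number;    // canary: share of users (0-100) who see it
}

// Deterministic hash into 0..99, so a given user always lands in the
// same bucket and sees a consistent experience across sessions.
function bucket(userId: string): number {
  let h = 0;
  for (const c of userId) h = (h * 31 + c.charCodeAt(0)) % 100;
  return h;
}

function isEnabled(flag: FeatureFlag, channel: string, userId: string): boolean {
  if (!flag.enabledChannels.includes(channel)) return false;
  return bucket(userId) < flag.rolloutPercent;
}
```

<p>Raising <code>rolloutPercent</code> gradually (10 &rarr; 50 &rarr; 100) turns a risky cross-channel launch into a controlled canary rollout, and channel targeting lets the same feature ship to the web frontend before the kiosks.</p>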
<h2 id="the-future-of-omnichannel-with-drupal-headless">The Future of Omnichannel with Drupal Headless</h2>
<p>Looking ahead, we can identify several emerging trends that will further shape the evolution of the omnichannel headless approach with Drupal, making it even more effective and powerful. The Drupal ecosystem is constantly evolving, and these innovations are designed to anticipate market needs.</p>
<h3 id="composable-dxp-and-the-evolution-of-drupal">Composable DXP and the Evolution of Drupal</h3>
<p>The concept of a &ldquo;Composable Digital Experience Platform&rdquo; (DXP) is gaining ground, with Drupal positioning itself as a central component in composable architectures.</p>
<p>Instead of a single monolithic platform, the Composable DXP is based on the combination of specialized tools that work together in an integrated ecosystem.</p>
<p>Drupal acts as a central hub that manages and distributes content in an agnostic way, allowing organizations to build customized technology stacks composed of the best solutions (<strong>&ldquo;best of breed&rdquo;</strong>) for every aspect of the digital experience, such as e-commerce, personalization, and data analytics.</p>
<p>Learn more in our article <a href="/en/composable-architecture-with-drupal-cms?hsLang=en">Composable architecture with Drupal CMS</a> and in our downloadable guide <a href="/it/landing/guida-drupal/">Drupal as a Marketing Asset, from CMS to DXP</a> (in Italian).</p>
<h3 id="ai-and-omnichannel-personalization">AI and Omnichannel Personalization</h3>
<p>Artificial intelligence is establishing itself as a central element in the Drupal ecosystem. The community has launched the Drupal AI Initiative to accelerate the development of AI features, an initiative that SparkFabrik is also actively contributing to.</p>
<p>The <strong>AI capabilities in Drupal</strong> are growing at a rapid pace and offer significant opportunities for personalization and automation, even in omnichannel contexts. Examples include:</p>
<ul>
<li><strong>Content Intelligence</strong>: Automatic analysis and tagging of content.</li>
<li><strong>Predictive Personalization</strong>: Personalization based on the cross-channel behavior of each user or user segment.</li>
<li><strong>Automated Content Transformation</strong>: Automatic adaptation of content for different channels.</li>
<li><strong>Conversational Interfaces</strong>: Integration with conversational interfaces and virtual assistants.</li>
</ul>
<p>In a previous article, we provided an <a href="/en/drupal-ai-overview-news-vision?hsLang=en">overview of AI in Drupal</a>, mentioning many other features such as assisted content generation, automatic tagging, and smart SEO optimization.</p>
<p>Drupal, with its flexible architecture and robust APIs, provides an ideal foundation for integrating these emerging AI technologies, supporting different AI models and allowing any functionality to be extended with AI capabilities.</p>
<h3 id="edge-computing-and-new-delivery-models">Edge Computing and New Delivery Models</h3>
<p>Edge computing, which involves processing data as close as possible to its source to increase speed and reduce latency, is redefining how content is distributed to users.</p>
<p>This approach <strong>dramatically improves performance</strong>, distributing content and logic closer to the end user to optimize rendering &ldquo;at the edge&rdquo;, including personalization. In this way, specific and personalized functionality can be delivered faster and closer to users, while higher-level functionality can reside in other geographical areas.</p>
<p><em>Edge-first</em> architectures, such as those that combine so-called &ldquo;static generation&rdquo; and &ldquo;dynamic hydration&rdquo; (e.g., Jamstack and SSG), offer an optimal mix of performance and flexibility, combining the advantages of static systems with those of dynamic systems. This geographical distribution also improves operational resilience, a critical factor for companies operating globally.</p>
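<p>At the heart of most edge delivery setups is a stale-while-revalidate policy: serve the statically generated page from the edge cache while it is fresh, serve it stale while a background revalidation runs, and only fall back to the origin on a true miss. A sketch of that decision, with illustrative TTLs:</p>

```typescript
interface CacheEntry {
  body: string;     // statically generated HTML held at the edge
  storedAt: number; // timestamp (ms) when the entry was stored
}

// Illustrative TTLs; real values depend on how often content changes.
const FRESH_MS = 60_000;       // within this age: serve as-is
const STALE_MS = 10 * 60_000;  // within this age: serve stale, revalidate

type Decision = "fresh" | "stale-revalidate" | "miss";

function decide(entry: CacheEntry | undefined, now: number): Decision {
  if (!entry) return "miss";
  const age = now - entry.storedAt;
  if (age <= FRESH_MS) return "fresh";
  if (age <= STALE_MS) return "stale-revalidate";
  return "miss";
}
```

<p>The &ldquo;stale-revalidate&rdquo; branch is what makes the static/dynamic mix work: users always get an immediate response from the nearest edge node, while the dynamic refresh happens out of the request path.</p>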
<h2 id="conclusion-and-next-steps">Conclusion and next steps</h2>
<p>The headless approach with Drupal represents a strategic response to the modern challenges of omnichannel. By clearly separating content management from its presentation, Drupal headless offers the flexibility, scalability, and agility needed to effectively manage a constantly expanding ecosystem of touchpoints.</p>
<p>As we have seen in the case studies, organizations that adopt this approach can achieve significant benefits:</p>
<ul>
<li>Greater operational efficiency in content management.</li>
<li>Richer, faster, and more consistent user experiences across different channels.</li>
<li>Reduced time-to-market for new digital initiatives.</li>
<li>Protection of investment in a rapidly evolving technological landscape.</li>
</ul>
<p>At SparkFabrik, we combine deep technical expertise in Drupal with advanced skills in API architecture and modern frontend development, positioning ourselves as the ideal partner for organizations that intend to implement omnichannel strategies based on Drupal headless.</p>
<p>If your organization is considering a headless approach for its omnichannel initiatives, we invite you to:</p>
<ol>
<li>Explore our <a href="https://www.sparkfabrik.com/en/success-stories/">case studies</a> of headless implementations.</li>
<li><a href="https://www.sparkfabrik.com/en/contact-us/">Contact our team</a> for an assessment of your specific needs.</li>
<li>Discover how our <a href="https://www.sparkfabrik.com/en/services/drupal/">Suite of Drupal services</a> can support your omnichannel strategy.</li>
</ol>
<hr>
<p>This article is part of our series dedicated to Drupal CMS. To explore other aspects of the platform, we invite you to consult our previous articles on the <a href="/en/drupal-cms-the-new-era-of-enterprise-content-management?hsLang=en">features and advantages</a> of Drupal CMS, its <a href="/en/drupal-cms-a-comparison-with-the-main-alternatives?hsLang=en">comparison with alternatives</a>, <a href="/en/migration-to-drupal-cms-complete-guide-for-a-successful-transition?hsLang=en">migration strategies</a> from other systems, <a href="/en/drupal-cms-security-compliace-regulated-sector?hsLang=en">security and compliance</a> with a focus on regulated sectors, <a href="/en/composable-architecture-with-drupal-cms?hsLang=en">composable architecture</a>, <a href="/en/design-system-and-drupal-cms?hsLang=en">Design System</a>, and <a href="/en/drupal-ai-overview-news-vision?hsLang=en">overview and news on Drupal AI</a>.</p>
]]></content:encoded><media:content url="https://www.sparkfabrik.com/images/blog/drupal-headless/Drupal_20CMS_20-_20Headless_20_26_20Omnichannel_20-_20Featured_20Image.png" medium="image"/><enclosure url="https://www.sparkfabrik.com/images/blog/drupal-headless/Drupal_20CMS_20-_20Headless_20_26_20Omnichannel_20-_20Featured_20Image.png" type="image/jpeg"/><category>Drupal</category></item><item><title>Drupal AI: Overview, Latest News and SparkFabrik's Vision</title><link>https://www.sparkfabrik.com/en/blog/drupal-ai-overview-news-and-sparkfabrik-vision/</link><pubDate>Wed, 30 Jul 2025 00:00:00 +0000</pubDate><author>SparkFabrik Team</author><guid>https://www.sparkfabrik.com/en/blog/drupal-ai-overview-news-and-sparkfabrik-vision/</guid><description>A comprehensive deep dive into AI features in Drupal, the latest news, the important AI Initiative coordinating innovation, and SparkFabrik's contribution.</description><content:encoded><![CDATA[<div class="tldr">
  <span class="tldr__label">TL;DR</span>
  <div class="tldr__body">
    The Drupal AI module provides a complete framework for integrating artificial intelligence into any Drupal site, with multi-provider support, AI Automators, semantic search, AI agents, and content generation. The Drupal AI Initiative coordinates and funds this development, while SparkFabrik actively contributes with a dedicated half-FTE and direct community participation.
  </div>
</div>
<p>Artificial Intelligence is redefining how websites and digital experiences are built, managed, and experienced. This transformation shifts the focus from manual workflows to results-oriented AI orchestration, where users define goals and AI contributes to achieving them. In this dynamic context, <strong>Drupal positions itself as a proactive leader in AI integration</strong>, distinguished by its open-source and responsible approach and its robust community.</p>
<p>AI adoption in Drupal has seen remarkable acceleration. During <a href="https://www.youtube.com/watch?v=XaYhTO9iCUo">DrupalCon Atlanta 2025</a>, Dries Buytaert, founder of Drupal, highlighted how AI framework adoption has tripled in just one year, with large organizations such as the European Commission and the United Nations actively experimenting with Drupal and AI. Furthermore, the <a href="https://www.drupal.org/association/blog/drupal-launches-new-ai-initiative-to-democratize-intelligent-digital-experiences-for-everyone">AI Initiative announcement</a> highlights that over 290 AI modules are already available.</p>
<p>This trend is not about replacing human capabilities, but rather enhancing them, freeing up time for creative and strategic work while AI handles routine activities, suggests improvements, and responds to feedback in real-time.</p>
<p>After outlining the main innovations of the ecosystem in our article &ldquo;<a href="/en/drupal-cms-all-innovations-of-2025?hsLang=en">The future of Drupal CMS 2.0: All Innovations expected in 2025</a>&rdquo;, this deep dive focuses on the latest innovations in the AI field for Drupal, exploring in detail the functionalities of the Drupal AI module and the progress of the strategic Drupal AI Initiative. We will also outline SparkFabrik&rsquo;s active role in this evolution, demonstrating constant commitment and a clear vision for the future of the ecosystem.</p>
<h2 id="the-drupal-ai-module-the-heart-of-innovation">The Drupal AI Module: The Heart of Innovation</h2>
<p>At the center of Artificial Intelligence integration in Drupal lies the <a href="https://www.drupal.org/project/ai">Drupal AI module</a>, a unified solution that enables the use of various AI technologies within the platform. This module provides a <strong>complete framework for easily integrating AI into any Drupal site</strong>, supporting a wide range of models and providers. Its primary goal is to offer a suite of modules and an API that serve as the foundation for generating text, images, audio, video, and translations, as well as entire components and sections, and much more.</p>
<p><img src="/images/blog/drupal-ai-overview-news-vision/drupal_20AI_20logo-color.jpg" alt="drupal AI logo-color"></p>
<p><strong>The AI module&rsquo;s strength lies in its abstraction layer</strong>, which enables seamless integrations with third-party AI providers like OpenAI (ChatGPT, DALL-E), Anthropic (Claude, including Claude Code), Google (Gemini, Vertex), Perplexity, Fireworks, and Mistral, or more specialized ones such as DeepL and ElevenLabs. It&rsquo;s also possible to use <strong>open-source models hosted on user-controlled servers</strong> through integrations with Ollama, LM Studio, and Hugging Face, among others. This functionality is particularly appreciated by organizations operating in regulated sectors.</p>
<p>This flexibility ensures that the AI module is a powerful tool both for site-builders, who can create complex applications without writing code (<a href="/it/low-code-platform-e-no-code-platform-il-futuro-dello-sviluppo-con-drupal?hsLang=en">we discussed Drupal&rsquo;s first-class low-code/no-code capabilities</a> in a previous article), and for developers and administrators, who benefit from simplified integration and immediate support for multiple providers.</p>
<p>The Drupal AI module aims to be the foundation of AI functionality, offering a set of standard tools to use directly or to build custom integrations on. Indeed, the <strong>core functionalities are already extensible</strong> with a series of sub-modules, recipes, and other compatible modules, including:</p>
<ul>
<li>
<p><strong>AI Core:</strong> Provides access to common AI models (ChatGPT, Claude, Gemini&hellip;) and is extensible to any needed model.</p>
</li>
<li>
<p><strong>AI Automators:</strong> Various sub-modules that allow populating and modifying any field in Drupal. Each Automator has specific functionality, such as automation, web scraping, OCR extraction, chart data extraction, summary generation, transcript production, email address extraction, etc. This versatility makes it possible to build complex AI applications in which different prompts and automators are chained into custom workflows.</p>
</li>
<li>
<p><strong>AI Assistants API + Chatbot:</strong> A framework for configuring chatbot functionality, enabling advanced forms of AI-powered and conversational search.</p>
</li>
<li>
<p><strong>AI Content:</strong> Offers assistance tools for content creation and editing, such as precise tone-of-voice adjustment, summary creation, taxonomy suggestions, and moderation checks. This level of flexibility and control is fundamental for quality output that respects your brand guidelines.</p>
</li>
<li>
<p><strong>AI Translate:</strong> Provides one-click AI translations, ideal for multilingual sites. Several alternative translation modules are also available, depending on your needs and configuration.</p>
</li>
<li>
<p><strong>AI Search:</strong> Improves traditional search with semantic understanding, allowing the search engine to understand the meaning of users&rsquo; terms. To achieve maximum results and reduce hallucinations, it&rsquo;s necessary to use vector databases and provide access to your own data (RAG). SparkFabrik also proposes a <a href="https://www.drupal.org/project/search_api_typesense">semantic search solution based on Typesense</a>.</p>
</li>
<li>
<p><strong>Vector Databases:</strong> Enhance semantic search, chatbots, assistants, and other features. They extend LLM knowledge to proprietary business data, which is vectorized and stored in the database.</p>
</li>
</ul>
<p>LLMs can then recall this data and generate more relevant responses through the so-called Retrieval Augmented Generation (RAG) process. Various vector databases are supported, including Postgres, Milvus, Pinecone, Azure, and SQLite.</p>
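<p>The retrieval step of RAG reduces to a nearest-neighbor search over embeddings. A toy in-memory sketch using cosine similarity (real setups delegate embedding to the AI provider and storage to a vector database such as Postgres with pgvector, Milvus, or Pinecone):</p>

```typescript
// Cosine similarity between two equal-length embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

interface Doc {
  id: string;
  embedding: number[]; // precomputed by an embedding model
  text: string;
}

// Retrieve the top-k most similar documents; their text is then prepended
// to the LLM prompt (the "augmented generation" step of RAG).
function retrieve(query: number[], docs: Doc[], k: number): Doc[] {
  return [...docs]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}
```

<p>Because the answer is grounded in the retrieved business documents rather than in the model&rsquo;s general training data, hallucinations drop and responses stay specific to your content.</p>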
<ul>
<li>
<p><strong>AI Agents:</strong> A framework for creating custom AI agents (currently, text-to-action type) capable of manipulating the website based on provided instructions, both on the content side (e.g., text and image generation) and on the configuration side (e.g., creating taxonomies or new page types).</p>
</li>
<li>
<p><strong>ai_image_alt_text:</strong> Automatically generates alternative text for images, increasing accessibility and improving SEO. Relevant and descriptive alt-text is also a fundamental accessibility requirement to ensure compliance with the European Accessibility Act.</p>
</li>
<li>
<p><strong>ai_seo:</strong> Provides SEO analysis and reports for individual nodes. Thanks to targeted feedback, administrators, content managers, and marketing teams can optimize both technical SEO and semantic SEO of their content.</p>
</li>
</ul>
<p>The AI-powered analysis covers not only the basics, such as structure, headings, meta tags, image optimization, and URLs, but also more advanced aspects like topic authority, topic depth, keyword usage and natural language, responsiveness, and loading times. In short, a 360° analysis that also includes the new SEO factors of the AI era.</p>
<ul>
<li>
<p><strong>ai_text:</strong> Allows easily creating an <em>ai.txt</em> file to give instructions to AI systems that interact with the site (entirely analogous to <em>robots.txt</em> for web crawlers).</p>
</li>
<li>
<p><strong>AI Providers:</strong> To enable AI functionality, you obviously need to choose one or more providers and enable the related module, for example Anthropic, OpenAI, Perplexity, ElevenLabs, or DeepL Translate. Many providers are supported, from the most well-known to local models, ensuring maximum flexibility.</p>
</li>
</ul>
<p>These integrations (only some among all those already available or under development) demonstrate the depth and breadth of AI capabilities that Drupal can offer, covering a vast spectrum of digital needs.</p>
<h2 id="recent-evolutions-drupal-ai-110-and-120">Recent Evolutions: Drupal AI 1.1.0 and 1.2.0</h2>
<p>The Drupal AI module is constantly evolving, with releases that introduce increasingly sophisticated and user-friendly functionalities, add AI providers, and resolve issues. In the spirit of sharing typical of the Drupal community, on the occasion of major releases, the Drupal AI Initiative team details the latest news.</p>
<p>In particular, after the initial launch, we have already enthusiastically witnessed <strong>two main releases of Drupal AI</strong> : <a href="https://www.drupal.org/about/starshot/initiatives/ai/blog/drupal-ai-110-is-out-and-brings-major-new-features">1.1.0</a> in June and <a href="https://www.drupal.org/about/starshot/initiatives/ai/blog/drupal-ai-120-alpha1-is-out-and-ready-to-be-tested">1.2.0</a> in July. Recent innovations aim to further simplify interaction with AI for editors and developers, enhancing content creation, automation, and site management. Let&rsquo;s see the main ones in detail.</p>
<h3 id="content-power-up-between-creation-and-content-management">Content Power-up, Between Creation and Content Management</h3>
<p>The new functionalities make content creation and editing more intuitive and AI-assisted:</p>
<ul>
<li><strong>Field Widget Actions:</strong> It&rsquo;s now possible to call an AI function through special buttons that can be added to any field in the interface (entity form).</li>
</ul>
<p>These allow editors to easily interact with AI: in one click they can generate content to populate fields and, if not satisfied, generate again or intervene manually, always maintaining control over the final output. This functionality integrates with AI Content Suggestions, AI Automators, and AI Agents modules, supporting a wide range of uses.</p>
<p>For example, it&rsquo;s possible to rewrite an attachment&rsquo;s filename so that it&rsquo;s relevant to the content (finally goodbye to dozens of &ldquo;image01.png&rdquo; in your system) or generate alt-text for an image. Furthermore, you can request assistance in generating titles and summaries, based on page content and adopting your personal style thanks to a custom prompt. It&rsquo;s also possible to select or generate system tags to classify your content, or extract specific information like email addresses.</p>
<p><img src="/images/blog/drupal-ai-overview-news-vision/Drupal_20AI_20Form_20Widget_20Actions.png" alt="Drupal AI Form Widget Actions"><br>
Importantly, for each of these widgets it&rsquo;s possible to provide specific instructions and context, enabling the generation of truly personalized content.</p>
<p>To ensure flexibility for the most diverse use cases, multiple widgets can also be connected to each field (for example, imagine having two buttons to generate a 50-character summary or a 200-character one, or to generate images with different styles).</p>
<p>In this video, a <a href="https://youtu.be/G62GgvD_Imw?t=1334">complete overview of Field Widgets</a> is available, including their configuration.</p>
<ul>
<li><strong>Content Suggestions extended to other entities:</strong> Content suggestion functionalities, previously available only on Nodes, have been extended to Block and Taxonomy Terms entities as well.</li>
</ul>
<p>Additionally, AI content suggestions can be generated based on an entity&rsquo;s rendered HTML, that is, on its final appearance, role, and content (for example, for a button, a CTA will be suggested rather than a multi-line paragraph).</p>
<ul>
<li><strong>New types of Automators:</strong> AI Automators now include new types, such as &ldquo;Image Alt Text&rdquo; (which analyzes an image and its context to generate appropriate alt text), &ldquo;Image Filename Rewrite&rdquo; (to rename image files for SEO purposes), and &ldquo;Summary Generation for Text with Summary&rdquo; (to generate summaries from the main text).</li>
</ul>
<p>Thanks to the Field Widgets Actions we saw earlier, Automators can also be easily activated by editors, directly from the user interface.</p>
<h3 id="agentic-framework-and-developer-tools">Agentic Framework and Developer Tools</h3>
<p>The keyword of AI&rsquo;s latest frontiers is &ldquo;Agents&rdquo;. The introduction of the new agentic framework in Drupal and advanced tools for developers accelerates innovation and customization:</p>
<ul>
<li><strong>New Agentic Framework:</strong> The big news of v1.1.0 is the introduction of the Agentic Framework, which allows anyone to build AI agents without writing a single line of code.</li>
</ul>
<p>The AI Agents module simplifies and optimizes the creation of every type of AI agent, including &ldquo;text-to-action&rdquo; agents that can create or modify content types, fields, and taxonomies based on natural language instructions. This transforms users&rsquo; simple words into concrete actions within Drupal.</p>
<p>A notable aspect is that agents are stored as configurations, making them exportable and reusable across different systems. They can be activated from various points, including Chatbots, CLI, widgets, and through APIs, making Drupal an ideal platform for agent execution thanks to its flexibility and stability.</p>
<p>Furthermore, <strong>Drupal&rsquo;s native support for the Model Context Protocol (MCP)</strong> allows agents to connect to potentially any tool, exponentially extending their capabilities. Equally importantly, human intervention will always be possible (the so-called &ldquo;human in the loop&rdquo;), ensuring human governance.</p>
<ul>
<li><strong>Prompt Library:</strong> Like agents, all prompts can now also be distributed as configurations, facilitating the provision of suggested prompts from third-party modules.</li>
</ul>
<p>This is crucial for a well-functioning AI ecosystem: prompt engineering is a complex field, and a predefined library with best practices streamlines processes, empowers users, and reduces uncertainty and dependence on maintainers.</p>
<ul>
<li>
<p><strong>Mocking Library for testing and replay of AI requests:</strong> A new functionality designed specifically to facilitate development. Instead of making a real request to an AI provider, it&rsquo;s now possible to simulate one and replay it during development or automated testing. This avoids both the long waits and the high costs associated with prompt processing by AI providers.</p>
</li>
<li>
<p><strong>Support for more file types for AI models:</strong> The AI Core module base has been modified to support any file type (including PDFs and videos) to provide to AI models, in response to the evolution of LLM models.</p>
</li>
<li>
<p><strong>Recipes, abstraction from Providers and Vector Databases:</strong> Recipes are predefined, reusable configurations that simplify the implementation of specific features, such as automating module installation or configuration implementation (we go into more detail <a href="/en/drupal-cms-the-new-era-of-enterprise-content-management?hsLang=en">in this article</a>).</p>
</li>
</ul>
<p>Previously, Recipes that added AI features such as &ldquo;chat with your documents&rdquo; required you to specify exactly which providers and vector databases were used, as well as certain settings. With the latest update, AI providers and vector databases (along with some common features) have been abstracted: they no longer need to be specifically defined at the Recipe level. It&rsquo;s a small change, but it has made AI Recipes much more modular, versatile, maintainable, and reusable.</p>
<ul>
<li><strong>AI Agents Testing Tool:</strong> A new visual tool for testing AI agents has been added, allowing complex scenarios to be configured and retested repeatedly without needing to be a developer.</li>
</ul>
<h3 id="interface-creation-ai--experience-builder">Interface Creation: AI + Experience Builder</h3>
<p>Experience Builder is the evolution of all previous Drupal site building solutions. Modern features like completely visual and drag-and-drop interface, integrated previews, and on-the-fly creation of new components make it a <strong>truly advanced solution for creating new digital experiences</strong> (we already talked about it <a href="/en/blog/drupal-cms-tutte-le-innovazioni-2025/">here</a> and <a href="/en/blog/drupal-cms-la-nuova-era-del-content-management-per-il-business/">here</a>).</p>
<p><img src="/images/blog/drupal-ai-overview-news-vision/experience-builder-state-of-drupal.jpg" alt="Drupal Experience Builder"></p>
<p>The integration of AI with Experience Builder has the potential to even more profoundly revolutionize not only the authoring experience, but also the design and development processes themselves in Drupal.</p>
<p>During the <a href="https://www.youtube.com/watch?v=XaYhTO9iCUo&amp;t=2302">March 2025 Driesnote</a>, it was demonstrated how an image from Figma can be transformed into a UI component in real-time. The user can interact with the AI agent through chat, requesting iterative changes and refinements to colors, button shapes, and other elements, with changes immediately visualized in a code editor integrated into Experience Builder.</p>
<p>Creating a single component is just the first step: the vision is to create entire pages or sites, ideally even with migration from other systems. In this vision, AI will help users build and perfect sites faster, more intuitively and creatively, while maintaining human oversight.</p>
<p><img src="/images/blog/drupal-ai-overview-news-vision/Drupal_20AI_20_26_20Experience_20Builder_2c_20from_20Figma_20Image_20to_20component._20DrupalCon_20Atlanta_20opening_20keynote.png" alt="Drupal AI &amp; Experience Builder, from Figma Image to component. DrupalCon Atlanta opening keynote"></p>
<h3 id="enhanced-semantic-search">Enhanced Semantic Search</h3>
<p>Search functionality in Drupal has been profoundly improved thanks to AI. The AI Search sub-module elevates traditional keyword-based search with semantic understanding, <strong>allowing the site to grasp users&rsquo; intent rather than limiting itself to typed words</strong>. This translates into more relevant results, even when search terms don&rsquo;t directly match the content.</p>
<p>The overall impact can be truly significant, both in terms of user experience and economic return for the organization (think for example of product search in ecommerce, or information search on a university portal by a potential student).</p>
<p>This functionality uses two search indexes, one traditional and one semantic, powered by vector databases like Milvus or Pinecone, fundamental for reducing AI &ldquo;hallucinations,&rdquo; those situations where AI produces inaccurate information but presents it with confidence. These <strong>vector databases contain data and information specific to your organization</strong>, chunked and vectorized. LLMs can then use this data to provide more precise and contextualized answers (or search results).</p>
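<p>Conceptually, a search built on two indexes blends a keyword score with a vector-similarity score. The Python sketch below is purely illustrative, not the Drupal AI Search implementation (which is written in PHP and delegates vector search to the database engine); the embeddings, weighting, and function names are hypothetical:</p>

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, text):
    # Naive stand-in for a traditional keyword index:
    # the fraction of query terms present in the text.
    terms = query.lower().split()
    return sum(t in text.lower() for t in terms) / len(terms)

def hybrid_search(query, query_vec, docs, alpha=0.5):
    # Blend the semantic (vector) score with the keyword score.
    # docs: list of dicts with "text" and a precomputed "vec" embedding.
    ranked = sorted(
        docs,
        key=lambda d: alpha * cosine(query_vec, d["vec"])
                      + (1 - alpha) * keyword_score(query, d["text"]),
        reverse=True,
    )
    return [d["text"] for d in ranked]
```

<p>In production the embeddings come from an LLM provider and the similarity search runs inside the vector database itself; the point here is only how the two indexes combine to capture intent beyond the typed words.</p>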
<p>An example of technology that integrates with Drupal to enable these advanced functionalities is <strong>Typesense</strong>, an open-source search engine that offers AI-powered semantic search capabilities, providing faster, more accurate, and personalized results. The <a href="https://www.drupal.org/project/search_api_typesense">Typesense module</a> is developed and maintained by the SparkFabrik team. You can explore this solution in the following presentations.</p>
<p><a href="https://drive.google.com/file/d/1Z-9VR6oBOyOk0NHa_VuMw_RyLIcKPfP4/view?usp=sharing"><img src="/images/blog/drupal-ai-overview-news-vision/AD_4nXeDSuOlj4vJ78Dm6XsblnYPjJHLfKy0_eGzWtNdW45s7wCB8KmCvtwvkVLDlNLkBX6jSo3nRtVFJSca6E33u44N2JUTkGqT31eRUPnImmcUIFokl1BbmXnaVJK9XGTJj2sqy0oHnw" alt="Presentation: AI-Powered Semantic Search in Drupal with Typesense"></a><a href="https://drive.google.com/file/d/1VNPjsywxQ7ajV3YNXz0TVhVFC2b8ilwY/view?usp=drive_link"><img src="/images/blog/drupal-ai-overview-news-vision/AD_4nXeXOB3n12rabddSiGKA32erUXz5RMt1vyERS85c00R2EZCQuNMTkFpyE0FIbjWgBYmejhh3R5THPqqVdMaph45xp6mou0r0eOQQaBXR-IQPnTJzLRmzgZcCyruHeZIh07nH0-k" alt="Presentation: Search API meets Typesense"></a></p>
<p>Overall, these innovations demonstrate a continuous commitment to improving usability, efficiency, and versatility of AI within Drupal, facilitating developers, making it accessible to a broader audience, and supporting increasingly complex use scenarios.</p>
<h2 id="the-drupal-ai-initiative-a-coordinated-drive-for-the-future">The Drupal AI Initiative: A Coordinated Drive for the Future</h2>
<p>The Drupal AI Initiative represents an important strategic step for Drupal&rsquo;s future, channeling community energy in a coordinated and funded direction.</p>
<p>As we have seen, the ecosystem of Artificial Intelligence functionalities, integrations, and modules in Drupal is growing rapidly. Yet the community recognized that delivering real strategic impact to the market requires going beyond voluntary contributions: building a truly powerful and competitive AI ecosystem calls for a professional, dedicated team focused solely on Drupal AI.</p>
<p>From this awareness was born the <strong>Drupal AI Initiative</strong>, officially <a href="https://dri.es/accelerating-ai-innovation-in-drupal">launched on June 9, 2025</a>, whose goal is to <strong>bring structure, strategy, and a shared direction to AI innovation in Drupal</strong>. By ensuring continuity and quality, it guarantees that Drupal doesn&rsquo;t fall behind the rapid evolution of AI in the market.</p>
<p>The central strategy is to fund a <strong>team of full-time contributors</strong> to accelerate AI innovation in the Drupal ecosystem. Over $100,000 has already been allocated entirely to the initiative, but the aim is also to attract sponsors, defined as &ldquo;AI Makers.&rdquo; These <strong>sponsors can contribute not only financially, but also by dedicating full-time human resources</strong> (at least ½ FTE, Full-Time Equivalent), with the expectation that the companies involved commit fully to the coordinated development of the ecosystem.</p>
<p>The initiative leverages a clear organizational structure, with a Leadership Team dedicated to guiding product direction, fundraising, and collaboration between different work areas, fundamental aspects for ensuring coordinated effort at a global level. A funded Delivery Team, equivalent to several full-time roles, is dedicated to execution, including technical leads, UX, and project managers. Active work tracks cover key sectors like AI Core, AI Products, AI Marketing, and AI UX.</p>
<p>Initial progress has seen the initiative <strong>gain momentum</strong>, primarily through the latest major releases of the Drupal AI module. On the marketing front, dedicated pages have been created on Drupal.org and collaborations have been initiated for a series of <a href="https://www.drupal.org/about/starshot/initiatives/ai/blog/introducing-a-free-drupal-ai-webinar-series-in-partnership-with-the-european-commission">webinars with the European Commission</a>. The team also participated in the AI Summit London, an important artificial intelligence conference, to present Drupal AI to a broader audience.</p>
<p>One goal is to organize regular webinars to keep the community informed about progress and raise awareness among potential clients and end users about Drupal AI&rsquo;s potential. The community is also constantly aligned through asynchronous meetings held every Monday in the #ai-contribute Slack channel, where all contributors can participate and align through messages for 24 hours, with conversations subsequently published publicly on Drupal.org.</p>
<p>Importantly, <strong>the initiative fully embraces Drupal&rsquo;s philosophy regarding AI</strong>, built on fundamental principles that guide innovation. The heart of everything is the conception of AI itself, whose role is to enhance human capabilities, not replace them. Based on this, the aim is to implement a framework that promotes responsible AI management, ensuring oversight and approval of workflows (Human in the Loop), audit trails, and compliance tools.</p>
<p>Furthermore, in pure Drupal spirit, innovation must be guided by the community&rsquo;s real needs, not by roadmaps and a priori decisions. For example, ensuring users full freedom of choice in terms of AI solution providers is considered fundamental, without providing constraints to specific vendors.</p>
<p>In addition, a <a href="https://www.drupal.org/about/starshot/initiatives/ai/blog/co-designing-the-future-share-your-views-on-our-drupal-ai-roadmap">recent survey</a> invited end users to share their needs, helping direct development resources toward functionalities that real users actually need. At the same time, the community and agencies are the foundation of Drupal&rsquo;s success: developing a <a href="https://dri.es/ai-and-the-great-digital-agency-unbundling">platform that grows in synergy with both end users and the agencies that support it</a> therefore proves essential.</p>
<h3 id="sparkfabriks-role-in-the-drupal-ai-ecosystem">SparkFabrik&rsquo;s Role in the Drupal AI Ecosystem</h3>
<p>SparkFabrik is not just an observer of AI innovations in Drupal, but an <strong>active and strategic participant in their development</strong>. With over a decade of significant contributions to Drupal&rsquo;s open-source ecosystem, SparkFabrik has always positioned itself at the forefront of its evolution.</p>
<p>The advent of artificial intelligence is clearly driving a radical transformation of the entire development process, which is becoming increasingly oriented toward outcomes rather than time spent. At the same time, it is opening entirely new perspectives across the ecosystem, at unprecedented speed.</p>
<p>This is why <strong>we have launched an internal strategic initiative dedicated precisely to the Drupal and AI combination</strong>, allocating ½ FTE for all of 2025 (50% of our expert Drupal developer Luca Lusso&rsquo;s time) specifically to projects in this area. This concrete commitment to strengthening and expanding the ecosystem fits perfectly with the global AI Maker Initiative and translates into constant and deep collaboration with the community.</p>
<p>In our <strong>first contribution sprint</strong>, we worked on two aspects we consider fundamental for the Drupal AI ecosystem: proposing an effective local development environment based on DDEV, and starting to explore how to integrate a &ldquo;guardrail&rdquo; system directly within LLM integration flows.</p>
<p>More in detail, regarding the local development environment, we noticed that several contrib modules adopt a pattern based on a DDEV add-on that replicates the behavior of the GitLab CI pipeline on Drupal.org. For many modules contributed by SparkFabrik (Monolog, WebProfiler, &hellip;), we have chosen the same approach. The broader community has not yet decided whether this is the best path to follow, and for the Drupal AI ecosystem this way of working could bring more complications than benefits.</p>
<p>Indeed, the need to add some DDEV-specific configuration files within the project repository and the way Drupal installation occurs locally make this approach less ideal in a context like Drupal AI, consisting of dozens of modules that potentially need to be tested and developed in parallel.</p>
<p>The solution proposed by Luca Lusso, and accepted by the community, is instead a new generic DDEV add-on that any contrib module can use, capable of simplifying contributions to multiple modules simultaneously. <strong>SparkFabrik&rsquo;s solution resolves the previous limitations, streamlining development and testing for all future contributions.</strong></p>
<p>In parallel, we began exploring the integration of &ldquo;guardrail&rdquo; systems directly into flows involving Large Language Models (LLMs). This aspect is crucial for the responsible and safe use of AI within Drupal, guaranteeing that model output stays aligned with preset values and objectives.</p>
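<p>As a rough illustration of the idea (not the actual Drupal implementation, which lives in PHP modules), a guardrail layer can be modeled as a chain of checks that a model&rsquo;s output must pass before being returned to the user. The function names and rules below are hypothetical:</p>

```python
def apply_guardrails(output, checks):
    # Run raw LLM output through a chain of guardrail checks;
    # the first failing check blocks the response.
    for check in checks:
        ok, reason = check(output)
        if not ok:
            return {"allowed": False, "reason": reason, "output": None}
    return {"allowed": True, "reason": None, "output": output}

def no_banned_terms(banned):
    # Guardrail: reject output containing any banned term.
    def check(text):
        hits = [t for t in banned if t.lower() in text.lower()]
        return (not hits, f"banned terms: {hits}" if hits else None)
    return check

def max_length(limit):
    # Guardrail: reject output longer than the allowed size.
    def check(text):
        ok = len(text) <= limit
        return (ok, None if ok else f"output exceeds {limit} characters")
    return check
```

<p>The same pattern extends to checks for tone, grounding against organizational data, or compliance rules, keeping a human-defined policy between the model and the end user.</p>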
<p>Another tangible example of SparkFabrik&rsquo;s commitment is <strong>active participation in the asynchronous meetings</strong> held in the #ai-contribute channel on Slack. These meetings, which take place every Monday and last 24 hours, allow global alignment through messages, and all conversations are subsequently published openly on Drupal.org.</p>
<p>Luca Lusso&rsquo;s constant presence in these discussions (for example, see conversations from <a href="https://www.drupal.org/project/ai/issues/3533024">June 30, 2025</a>, <a href="https://www.drupal.org/project/ai/issues/3533872">July 7, 2025</a>, and <a href="https://www.drupal.org/project/ai/issues/3534867">July 14, 2025</a>) provides SparkFabrik with direct access to cutting-edge information and strategic discussions, allowing us to anticipate future developments and influence Drupal AI&rsquo;s roadmap.</p>
<p>In this scenario of rapid evolution, SparkFabrik&rsquo;s commitment to developing the Drupal ecosystem and to researching AI and its practical applications is not just a technological investment. It is a clear affirmation of our position as a strategic partner and thought leader, well beyond the role of a simple service provider.</p>
<h2 id="conclusion-drupal-ai-an-intelligent-open-and-collaborative-future">Conclusion: Drupal AI, an Intelligent, Open and Collaborative Future</h2>
<p>Drupal is actively leading the Artificial Intelligence revolution in the world of Content Management Systems, adopting an approach that is inherently open-source, human-centered, and highly flexible.</p>
<p>Starting from an already excellent foundation, <strong>the Drupal AI Initiative is significantly accelerating the development of powerful and responsible AI tools</strong>, thanks to a clear strategic vision, dedicated funding, and a rapidly growing community.</p>
<p>From the unified AI module, with its vast range of sub-modules and supported providers, to innovations introduced in recent versions, Drupal offers cutting-edge functionalities that transform content creation, site management, and optimization of digital experiences.</p>
<p>These developments not only improve efficiency and quality of web projects, but also reinforce Drupal&rsquo;s fundamental principle of maintaining people&rsquo;s centrality and human governance, ensuring that AI is a tool for enhancement and not replacement.</p>
<p><strong>SparkFabrik is proud to be an integral part of this transformation</strong>, not limiting ourselves to adopting new technologies, but actively and strategically contributing to their development and strengthening our position as thought leaders in the Drupal ecosystem.</p>
<p>Our dedicated internal initiative, our significant commitment of resources, our participation in the community and its strategic discussions, and our expert Drupal developers&rsquo; direct contributions to the code all demonstrate our vision, ensuring that SparkFabrik is not only at the forefront but also helps define the future direction of AI in Drupal.</p>
<p>If you wish to explore AI&rsquo;s potential for your Drupal site, or if you are considering implementing a Drupal environment, SparkFabrik is available to support you in navigating and excelling in this new and dynamic digital landscape.</p>
<p>We invite you to:</p>
<ol>
<li>Explore <a href="https://www.sparkfabrik.com/en/services/drupal/">SparkFabrik’s Drupal services</a></li>
<li>Consult our <a href="https://www.sparkfabrik.com/en/success-stories/">case studies</a> that illustrate implementations in various sectors</li>
<li><a href="https://www.sparkfabrik.com/en/contact-us/">Contact our team</a> for an evaluation of your specific context and objectives</li>
</ol>
<hr>
<p>Explore more features and aspects of the Drupal platform in our <a href="/en/tag/drupal?hsLang=en">dedicated articles</a>.</p>
]]></content:encoded><media:content url="https://www.sparkfabrik.com/images/blog/drupal-ai-overview-news-vision/Drupal_20AI_20-_20Overview_2c_20News_20e_20Ruolo_20di_20SparkFabrik_20-_20Featured_20Image.png" medium="image"/><enclosure url="https://www.sparkfabrik.com/images/blog/drupal-ai-overview-news-vision/Drupal_20AI_20-_20Overview_2c_20News_20e_20Ruolo_20di_20SparkFabrik_20-_20Featured_20Image.png" type="image/jpeg"/><category>Drupal</category></item><item><title>Design System and Drupal CMS: the Link between Designers &amp; Developers</title><link>https://www.sparkfabrik.com/en/blog/design-system-and-drupal-cms/</link><pubDate>Fri, 11 Jul 2025 00:00:00 +0000</pubDate><author>SparkFabrik Team</author><guid>https://www.sparkfabrik.com/en/blog/design-system-and-drupal-cms/</guid><description>Discover how to integrate Design Systems and Drupal CMS to create consistent digital experiences, accelerate development, strengthen designers-devs collab</description><content:encoded><![CDATA[<div class="tldr">
  <span class="tldr__label">TL;DR</span>
  <div class="tldr__body">
    Design Systems integrated with Drupal CMS bridge the gap between designers and developers, enabling consistent and scalable digital experiences. Through component-based theming, Layout Builder, Design Tokens, and tools like Storybook and ZeroHeight, organizations can reduce development time by up to 50% while maintaining brand consistency across all touchpoints.
  </div>
</div>
<p>User experience has evolved into a crucial differentiator in the digital world, and effective collaboration between designers and developers has become a primary challenge for organizations. The fragmentation of digital channels, the continuous evolution of user expectations, the need to maintain visual and functional consistency across different touchpoints, and the pressure to adapt promptly to technological innovations and new regulatory requirements have all made the management of digital interfaces increasingly complex.</p>
<p>In this complex scenario, Design Systems emerge as a strategic solution, acting as a bridge between the world of design and that of development. When integrated with flexible and robust platforms like Drupal CMS, Design Systems can radically transform how organizations create, manage, and evolve their digital experiences.</p>
<p>In our <a href="https://www.youtube.com/watch?v=tbAA51o3RyU">talk presented at Talks on my Machine</a> (available in Italian only), we explored this topic in depth. In this article, we delve further into the concept of Design Systems and their integration with Drupal CMS, highlighting how this synergy not only creates a common language among teams but also generates real and substantial value for organizations.</p>
<p>This article is part of our series dedicated to <a href="/en/tag/drupal">Drupal CMS</a>: we invite you to read the previous articles for further insights, from its advantages to alternatives, security aspects, and architectural considerations.</p>
<h2 id="what-is-a-design-system-and-why-is-it-important">What is a Design System and why is it important</h2>
<p>A Design System is much more than a simple component library or a style guide. It is a complete and living ecosystem, a cohesive framework that includes guiding principles, interaction patterns, reusable components, and clear guidelines. These elements collectively define the visual and interactive language of a brand across all its digital products, ensuring consistency and scalability on a large scale.</p>
<h3 id="the-three-pillars-of-an-effective-design-system">The three pillars of an effective Design System</h3>
<p>A well-structured Design System rests on interconnected elements that work in synergy to ensure consistency and scalability of digital experiences:</p>
<ol>
<li><strong>Principles and guidelines</strong>: This pillar defines the design philosophy that guides every single decision. It sets the brand&rsquo;s tone of voice, its personality, and its approach to user interaction. It acts as a compass that guides every aesthetic and functional choice, ensuring that the design is not only aesthetically pleasing but also strategically aligned with business objectives.</li>
<li><strong>Component library</strong>: This is an organized collection of reusable user interface elements (such as buttons, forms, <em>headers</em>, and <em>cards</em>) and their variations. Each component is meticulously documented, including usage examples, implementation code, and specific use cases. This granularity facilitates reuse, ensures consistency, and reduces development time in every context.</li>
<li><strong>Design patterns</strong>: These define consolidated and tested solutions for common interaction scenarios (e.g., <em>login</em> flows, shopping carts, search systems). Patterns logically and functionally combine different components from the library, providing optimized schemes to solve specific user needs efficiently and consistently.</li>
</ol>
<p>As Nathan Curtis, a Design System expert, effectively stated: <em><strong>&ldquo;A Design System is not a project but a product, which serves other products.&rdquo;</strong></em> This vision highlights the evolutionary and service nature of a Design System, which must constantly grow and adapt to the evolving needs of the organization and the dynamics of a constantly transforming market.</p>
<h3 id="the-tangible-benefits-of-a-design-system">The tangible benefits of a Design System</h3>
<p>Implementing a Design System brings significant advantages at various organizational levels, directly impacting ROI and operational efficiency.</p>
<p><strong>For the organization:</strong></p>
<ul>
<li><strong>Up to 50% reduction in implementation time</strong> for new interfaces and functionalities. This translates into significantly accelerated <em>time-to-market</em> for new initiatives, allowing the company to react with greater agility to new market opportunities.</li>
<li><strong>Greater consistency of user experience</strong> across different touchpoints. This strengthens brand identity, improves recognition, and contributes to building a perception of professionalism and reliability among the public.</li>
<li><strong>Significant optimization of maintenance and update costs.</strong> Changes to a Design System component propagate to all instances that use it, reducing technical debt and simplifying the long-term management of a digital project.</li>
</ul>
<p><strong>For designers:</strong></p>
<ul>
<li><strong>Less time spent on repetitive and manual tasks</strong>, freeing up time for innovation and creative problem-solving, which is the true added value of the design team.</li>
<li><strong>Faster decision-making process</strong> due to pre-existing patterns, clear guidelines, and pre-validated components that reduce uncertainty.</li>
<li><strong>Facilitated collaboration with developers</strong>, establishing a common language that reduces misunderstandings and review cycles, improving overall efficiency.</li>
</ul>
<p><strong>For developers:</strong></p>
<ul>
<li><strong>Reusable and standardized code</strong> that accelerates development, reduces the likelihood of introducing bugs or inconsistencies, and increases software quality.</li>
<li><strong>Reduced technical debt</strong>, keeping the codebase cleaner, more modular, and easier to maintain in the long run.</li>
<li><strong>Faster implementation and fewer bugs</strong> thanks to pre-validated and documented components that minimize errors during development.</li>
</ul>
<p>As observed during our talk, organizations that have adopted a mature Design System have seen <strong>improvements in development speed ranging from 20% to 50%</strong>, with a parallel reduction in user interface-related bugs. These results are not purely theoretical but translate into a direct impact on operational costs and the company&rsquo;s ability to innovate.</p>
<h2 id="integrating-design-system-and-drupal-cms">Integrating Design System and Drupal CMS</h2>
<p>Drupal CMS, with its <a href="/en/composable-architecture-with-drupal-cms">flexible and composable architecture</a>, stands as an ideal platform for implementing and managing a Design System over time. This integration can occur at various levels, each offering specific benefits.</p>
<h3 id="1-advanced-theming-with-a-component-based-approach">1. Advanced theming with a Component-Based Approach</h3>
<p>The evolution of theming in Drupal has seen a progressive and strategic shift towards a <em>component-based</em> model, perfectly aligned with Design System philosophy. This approach ensures greater flexibility, granularity, and reusability in the front-end.</p>
<ul>
<li><strong>Twig Components</strong>: The use of modular Twig templates allows for direct mapping to Design System components. This ensures that every visual element of your Design System has an exact counterpart in Drupal&rsquo;s code, guaranteeing design fidelity, aesthetic consistency, and ease of maintenance.</li>
<li><strong>Single Responsibility</strong>: Each component has a specific and well-defined responsibility. This principle ensures that changes to one element do not have unintended side effects on other system components, improving development stability and predictability, and fostering resilience.</li>
<li><strong>Cross-page reuse</strong>: Components can be easily reused in different contexts within Drupal, such as different pages, content types, or even across multiple sites within a multisite architecture. This maximizes efficiency and brand consistency, even at a large scale.</li>
</ul>
<p>This approach, known as &ldquo;Component-Driven Development&rdquo;, fosters a direct correspondence between the components defined in the Design System and their technical implementation in Drupal, significantly reducing the gap between design and code and optimizing the development workflow.</p>
<h3 id="2-deep-integration-with-layout-builder">2. Deep integration with Layout Builder</h3>
<p>One of the most significant innovations in Drupal CMS is the Layout Builder, which can be extended to natively integrate Design System components. This new Layout Builder enables greater autonomy for editors and marketing teams.</p>
<ul>
<li><strong>Custom Block Types</strong>: It allows the creation of custom block types that directly represent Design System components. This empowers non-technical users to assemble complex pages using an intuitive drag-and-drop functionality.</li>
<li><strong>Layout Libraries</strong>: The definition of predefined layouts, based on Design System patterns, allows users to choose validated and accessible structural options, accelerating the creation of new pages or sections while promoting consistency.</li>
<li><strong>Visual configuration</strong>: The drag-and-drop interface, combined with Design System components, enables highly intuitive visual page composition. This reduces reliance on technical resources for content creation and modifications, enabling a true <a href="/en/low-code-platform-and-no-code-platform-the-future-of-development-with-drupal">no-code approach</a> for editorial teams.</li>
</ul>
<p>This integration opens up exciting possibilities for promoting the use of the Design System by any role, allowing editors and content managers to create consistent experiences without needing advanced technical skills, while streamlining publishing times and workflows.</p>
<h3 id="3-design-tokens-as-a-single-source-of-truth">3. Design Tokens as a single source of truth</h3>
<p>Design Tokens represent a fundamental concept in modern Design System implementation, acting as a &ldquo;single source of truth&rdquo; for all visual and stylistic attributes of a brand. Their adoption is crucial to ensure consistency across all platforms:</p>
<ul>
<li><strong>Atomic values</strong>: Centralized definition of colors, typography, spacing, shadows, and other stylistic elements. For example, instead of defining a color as a hex code (<code>#EB0000</code>, also known as our own Spark Red), it is semantically labeled (e.g., <code>primary-heading-color</code>). This makes global changes quick and error-free, automatically propagating updates to all components that use that token.</li>
<li><strong>Platform independence</strong>: Design Tokens are agnostic to the implementation platform. This means that Tokens can be used in both design tools (Figma, Sketch) and development environments (CSS, JavaScript, Twig), ensuring perfect stylistic consistency across all channels.</li>
<li><strong>Single Source of Truth</strong>: They establish a unique and shared source for visual attributes. A genuine single source of truth eliminates the inconsistencies that inevitably arise from multiple, separate sources.</li>
</ul>
<p>The implementation of Design Tokens in Drupal can occur through Sass variables, CSS custom properties, or more sophisticated systems that automatically synchronize tokens between design tools and code.</p>
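<p>As a minimal sketch of this synchronization (the token names and the generator itself are hypothetical, not part of any specific Drupal module or design tool), a single token file can be rendered as CSS custom properties that a Drupal theme then consumes:</p>

```python
def tokens_to_css(tokens, selector=":root"):
    # Render a flat dict of design tokens as CSS custom properties,
    # so design tools and the theme share one source of truth.
    lines = [f"{selector} {{"]
    for name, value in tokens.items():
        lines.append(f"  --{name}: {value};")
    lines.append("}")
    return "\n".join(lines)
```

<p>Running it over <code>{"primary-heading-color": "#EB0000"}</code> emits <code>--primary-heading-color: #EB0000;</code>, which templates reference via <code>var(--primary-heading-color)</code> instead of hardcoding the hex value, so a token change propagates everywhere in one step.</p>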
<p>As demonstrated in our talk, this approach effectively manages &ldquo;<strong>design drift</strong>&rdquo;, the tendency for technical implementations to progressively diverge from the original design, thus maintaining perfect design-to-code consistency.</p>
<h2 id="tools-and-workflows-for-effective-collaboration">Tools and workflows for effective collaboration</h2>
<p>The integration of Design System and Drupal CMS requires an ecosystem of tools and workflows that facilitate collaboration between designers and developers, overcoming traditional operational silos. Here are some must-haves.</p>
<h3 id="storybook-the-bridge-between-design-and-development">Storybook: the bridge between design and development</h3>
<p><a href="https://storybook.js.org/">Storybook</a> has become a de facto standard for UI component development, documentation, and testing, acting as a crucial link:</p>
<ul>
<li><strong>Isolated environment</strong>: Allows for independent component development from the application context, working on individual UI components in an isolated environment. This accelerates development, simplifies debugging, and ensures predictable functionality regardless of the usage context.</li>
<li><strong>Living documentation</strong>: Generates interactive and always up-to-date component documentation, which is always accessible to foster understanding and alignment.</li>
<li><strong>Visual testing</strong>: Automated verification of the visual correctness of components, helping to maintain high quality of the final product.</li>
</ul>
<p>Integrating Storybook with Drupal allows for developing components in isolation, thoroughly testing them, and only then integrating them into the CMS platform. This approach significantly improves code quality and reduces overall development time, leading to a faster time-to-market for new implementations and features.</p>
<h3 id="zeroheight-centralizing-design-system-documentation">ZeroHeight: centralizing Design System documentation</h3>
<p><a href="https://zeroheight.com/">ZeroHeight</a> is an excellent and widely used solution for centralizing and sharing Design System documentation, making it easily accessible to all organizational stakeholders:</p>
<ul>
<li><strong>Single Source of Truth</strong>: Creates a centralized repository accessible to the entire organization, serving as a single source for all Design System documentation. Guidelines, principles, tokens, components&hellip; everything is consolidated in one place.</li>
<li><strong>Integration with design tools</strong>: Automatically synchronizes with leading design tools like Figma, Sketch, and Adobe XD, maintaining alignment without manual effort.</li>
<li><strong>Versioning and history</strong>: Advanced tracking of Design System evolution over time through versioning and history, which is particularly crucial for large organizations with multiple products and projects.</li>
</ul>
<p>This type of platform greatly facilitates Design System adoption at the organizational level, providing a clear, always up-to-date, and accessible reference point for designers, developers, content managers, and other business stakeholders.</p>
<h3 id="cicd-for-design-system">CI/CD for Design System</h3>
<p>The application of <a href="/en/what-are-continuous-integration-delivery-deployment">Continuous Integration/Continuous Delivery</a> (CI/CD) practices, typical of the <a href="/en/guides/devops-how-to-adopt-it">DevOps</a> field, to Design Systems creates an approach known as &ldquo;<strong>DesignOps</strong>&rdquo;. This elevates the Design System to a higher operational level, ensuring unprecedented continuity, consistency, and speed in updates.</p>
<ul>
<li><strong>Automated testing</strong>: Automated verification of component compliance is integrated, not only in terms of quality standards but also crucial regulations like the EAA, which requires adherence to WCAG 2.1 AA standards.</li>
<li><strong>Continuous integration of changes</strong>: Continuous integration of Design System changes into the codebase. Development teams can always access the latest valid component versions, resolving version conflicts.</li>
<li><strong>Automated deployment</strong>: Once approved in CI/CD pipelines, Design System changes are automatically deployed to all products using it. The entire ecosystem can be consistently aligned, without manual intervention or delays.</li>
</ul>
<p>This &ldquo;DesignOps&rdquo; approach extends the benefits of DevOps methodologies (such as speed, reliability, and automation) into the design world, guaranteeing quality, speed, and consistency in the evolution of the digital experience.</p>
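<p>As an illustration of the automated compliance checks mentioned above, the following sketch verifies that a foreground/background color pair meets the WCAG 2.1 AA contrast threshold of 4.5:1 for normal text. This is the kind of assertion a CI pipeline can run against design tokens on every change; in practice, teams typically rely on dedicated tools such as axe-core, and the function names here are illustrative.</p>

```typescript
// Sketch of an automated compliance check a CI pipeline could run against
// design tokens: WCAG 2.1 contrast ratio, which must be >= 4.5:1 for
// normal text at level AA. Function names are illustrative.
function luminance(hex: string): number {
  const n = parseInt(hex.replace('#', ''), 16);
  // sRGB channel -> linear value, per the WCAG relative luminance formula
  const toLinear = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  const r = toLinear((n >> 16) & 0xff);
  const g = toLinear((n >> 8) & 0xff);
  const b = toLinear(n & 0xff);
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

function contrastRatio(fg: string, bg: string): number {
  // Ratio of the lighter to the darker luminance, offset by 0.05 each
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// Black on white is the maximum contrast, 21:1, well above the AA minimum.
console.log(contrastRatio('#000000', '#ffffff').toFixed(1)); // 21.0
```

<p>A pipeline step can iterate over all token pairs used for text and fail the build when any ratio drops below 4.5, turning an accessibility guideline into an enforced gate.</p>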
<h2 id="successful-case-studies-design-system-in-action-with-drupal-cms">Successful Case Studies: Design System in action with Drupal CMS</h2>
<p>To concretely illustrate the benefits of integrating Design Systems and Drupal CMS, we examine some real cases successfully implemented by the SparkFabrik team. These examples aim to inspire and demonstrate that the strategic adoption of a Design System is not just a theoretical best practice, but a concrete lever for achieving business objectives.</p>
<h3 id="zambon-group-brand-consistency-across-multiple-touchpoints">Zambon Group: brand consistency across multiple touchpoints</h3>
<p>For <a href="https://www.sparkfabrik.com/en/success-stories/zambon/">Zambon Group</a>, a pharmaceutical multinational with an extensive and complex digital presence, we implemented a Design System integrated with Drupal that allowed them to:</p>
<ul>
<li>Ensure visual and functional consistency across their corporate website and dozens of product microsites. Before implementation, each new site required significant design and development effort and often led to inconsistencies. Today, the Design System ensures that every new initiative is immediately aligned with the brand.</li>
<li>Effectively support a complex multilingual and multicountry context. It is now possible to centrally manage content and design for over 40 corporate and product sites, available in more than 20 languages and countries, maintaining a unified global brand identity.</li>
<li>Significantly reduce implementation times for new digital initiatives, with resulting optimization of time-to-market and significant cost reduction.</li>
</ul>
<p>The Design System created a unified visual language that globally reflects the Zambon brand identity, while allowing the necessary flexibility to meet the specific needs of different markets and product lines.</p>
<p><a href="https://www.sparkfabrik.com/en/success-stories/zambon/"><img src="/images/blog/design-system-e-drupal-cms/cs-zambon.webp" alt="case study zambon"></a></p>
<h3 id="caleffi-cohesive-and-scalable-digital-experiences">Caleffi: cohesive and scalable digital experiences</h3>
<p>For <a href="https://www.sparkfabrik.com/en/success-stories/caleffi-new-website/">Caleffi</a>, a leading multinational player in the plumbing sector, implementing a Design System integrated with Drupal allowed them to:</p>
<ul>
<li>Create a consistent user experience that integrates complex editorial content with a vast product catalog, comprising 12 catalogs and over 20,000 products.</li>
<li>Ensure responsiveness and consistency across all devices (desktop, tablet, mobile), in 12 countries, and 18 languages.</li>
<li>Significantly accelerate time-to-market for new digital initiatives.</li>
<li>Facilitate continuous evolution of the digital experience, rapid and efficient construction of new pages and sections, and timely response to market needs through new digital initiatives.</li>
</ul>
<p>The Design System defined not only visual components but also specific interactive patterns for e-commerce and catalog management, creating an intuitive navigation experience that enhances both informational and product content and directly contributes to Caleffi&rsquo;s business objectives.</p>
<p>[Video: Case Study Caleffi New Website]</p>
<h3 id="bocconi-university-a-unified-visual-language">Bocconi University: a unified visual language</h3>
<p>For <a href="https://www.unibocconi.it/it">Bocconi University</a>, a prestigious international educational institution, implementing a Design System integrated with Drupal allowed them to:</p>
<ul>
<li>Unify the digital experience across dozens of sites and microsites, from the university&rsquo;s main site to multiple departmental portals and microsites dedicated to events and initiatives.</li>
<li>Maintain brand consistency in a multilingual context, crucial for promoting the institution&rsquo;s international prestige, while supporting unified governance for all properties.</li>
<li>Reduce the implementation time for new sections by 40% and improve the consistency of institutional communication.</li>
</ul>
<p>The Design System defined not only visual components but also specific interactive patterns for the educational context, such as course navigation, faculty presentation, and enrollment procedures, creating an intuitive and consistent user experience for all user types: students, faculty, administrative staff, and external stakeholders.</p>
<h2 id="our-experience-lessons-learned-and-value-delivered">Our experience: lessons learned and value delivered</h2>
<p>After years of implementing Design Systems and over a decade of experience with enterprise-grade Drupal platforms, we have gained a unique perspective on what truly works and how to maximize value for our clients. We would like to share some personal reflections that go beyond theoretical best practices.</p>
<h3 id="when-a-design-system-truly-makes-a-difference">When a Design System truly makes a difference</h3>
<p>Our experience has taught us that not all projects benefit equally from a Design System. The value is maximized, and the investment most justified, when:</p>
<p><strong>There is real multi-touchpoint and multilingual complexity</strong>. In Zambon&rsquo;s case, with dozens of product sites in different languages and a global presence requiring coordinated management, the Design System literally transformed their way of working. Before its implementation, each new product site required several weeks or months of work and inevitably led to visual and functional inconsistencies. Today, launching a new product site takes days or a couple of weeks, and consistency is guaranteed regardless of the involved team.</p>
<p><strong>The client has a medium-to-long-term vision and a commitment to digital evolution</strong>. We have observed that clients like Bocconi University, with a strategic vision for their digital ecosystem and a willingness to invest in an asset like the Design System over time, have seen exponential benefits. The initial investment in the Design System is paying off not only in terms of operational efficiency but also in the ability to evolve their digital presence consistently and increasingly faster.</p>
<h3 id="1-adopt-an-incremental-approach">1. Adopt an incremental approach</h3>
<ul>
<li><strong>Start with a core set of &ldquo;essential&rdquo; components</strong>, high-impact and frequently used in the user experience (e.g., buttons, form inputs, typography). This allows for obtaining initial benefits quickly.</li>
<li><strong>Implement an MVP</strong> (Minimum Viable Product) and let it grow organically. Launching a minimal but functional Design System allows for gathering real feedback from internal users (designers, developers), and then letting it grow based on needs. This agile approach reduces risk and maximizes adoption and internal learning.</li>
<li><strong>Clearly define priorities and an evolution roadmap</strong>, taking into account business needs and available resources. The introduction of a Design System is not an endpoint: it is essential to keep it alive, evolve it, and enrich it over time to prevent it from becoming obsolete.</li>
</ul>
<p>This approach not only allows for obtaining value more quickly but also enables validating initial choices and evolving the Design System agilely, based on concrete feedback rather than hypotheses or initial requirements that might change.</p>
<h3 id="2-establish-clear-governance-and-ownership">2. Establish clear governance and ownership</h3>
<p>A Design System is a &ldquo;product that serves other products,&rdquo; and as such, it also requires continuous management and maintenance. Without clear governance, its effectiveness can quickly diminish, leading to inconsistencies, frustrations, errors, and, ultimately, obsolescence:</p>
<ul>
<li><strong>Define precise roles and responsibilities</strong>. Who is responsible for defining design principles, developing components, maintaining them, and promoting the Design System internally? Clear roles are essential to avoid overlaps or gaps in responsibility.</li>
<li><strong>Establish processes for Design System evolution</strong>. How are new components proposed? How are existing ones modified? What is the review, approval, integration, and documentation process? Well-defined and documented workflows are crucial for consistency, quality, and scalability over time.</li>
<li><strong>Create structured feedback loop mechanisms</strong> between Design System users (designers, developers, content editors, other stakeholders) and maintainers. This enables continuous improvement based on real needs and ensures the Design System remains relevant and useful within the organization.</li>
</ul>
<p>As explained, governance is often the determining factor between a successful Design System (adopted, understood, valued) and one that is progressively abandoned.</p>
<h3 id="3-invest-in-documentation-and-continuous-training">3. Invest in documentation and continuous training</h3>
<p>Documentation and training are not optional additions but integral parts of the Design System, crucial for its adoption and long-term effectiveness. Indeed, a Design System is only as useful as its documentation and the team&rsquo;s ability to use it effectively.</p>
<ul>
<li><strong>Document not only &ldquo;what&rdquo; but also &ldquo;why&rdquo; design decisions were made</strong>. Explaining the underlying principles, guidelines, and reasons helps users understand how and when to use components appropriately and autonomously.</li>
<li><strong>Create specific usage guides for different roles</strong> (designers, developers, content editors). This way, each guide focuses on the most relevant information for each role, facilitating learning and adoption.</li>
<li><strong>Organize practical training sessions and workshops regularly</strong>. As with any other area, active training is much more effective than simply distributing documents: it helps overcome initial resistance, transfers skills, and promotes the Design System culture.</li>
</ul>
<p>As clearly emerged in our talk, even the best Design System, technically impeccable, has limited value if the organization does not know how to use it effectively or is not supported in practical adoption.</p>
<h3 id="4-measure-and-communicate-value">4. Measure and communicate value</h3>
<p>To ensure continuous support for the Design System, it is essential to measure and communicate its impact at all levels of the organization:</p>
<ul>
<li><strong>Track quantitative metrics</strong>, key indicators such as development time for new functionalities (e.g., a new landing page), bug reduction, existing component reuse rates.</li>
<li><strong>Collect qualitative feedback</strong> from stakeholders and users, understand challenges encountered and areas for improvement, ensuring the Design System and its evolutions effectively meet real needs.</li>
<li><strong>Regularly communicate successes and lessons learned</strong>, such as results achieved, savings generated, successful implementations that demonstrate the tangible value of the Design System.</li>
</ul>
<p>This data-driven approach, focused on transparent communication, helps solidify the Design System as a strategic asset within the organization and its culture. By positioning it as a strategic investment rather than a mere cost center, its sustainability and evolution over time are ensured.</p>
<h2 id="the-future-of-design-system-integration-with-drupal-cms">The future of Design System integration with Drupal CMS</h2>
<p>Looking ahead, we can identify several emerging trends that will further shape the integration between Design Systems and Drupal, innovations that will make the CMS platform even more powerful, flexible, and responsive to business needs.</p>
<h3 id="1-advanced-design-to-code-automation">1. Advanced design-to-code automation</h3>
<p>Tools that automatically transform design components created in platforms like Figma into functional code implementations in Drupal are becoming increasingly sophisticated. This progress promises to further reduce the gap between design and code and is inherently linked to the concept of low-code/no-code that Drupal CMS is increasingly embracing.</p>
<h3 id="2-multi-experience-design-system">2. Multi-Experience Design System</h3>
<p>The evolution of Design Systems is extending beyond mere web development to new digital experiences. The inclusion of voice user interfaces (Voice UI), augmented reality, chatbots, IoT devices, and other emerging channels will require greater flexibility in Drupal implementations. The Design System will become an even more central hub for ensuring aesthetic and functional consistency across all touchpoints.</p>
<h3 id="3-ai-assisted-design-system">3. AI-Assisted Design System</h3>
<p>Artificial intelligence is beginning to play an increasingly pervasive role in Design System creation and maintenance. Advanced AI functionalities are expected to offer intelligent suggestions for optimal component combinations, proactively identify stylistic and functional inconsistencies, and even generate component variations based on specific parameters or styles. This will further optimize the design and development process, always under human governance, increasing efficiency and speed.</p>
<h3 id="4-cross-platform-design-system">4. Cross-Platform Design System</h3>
<p>With the emergence of standardized technologies like Web Components, Design System implementation can become increasingly platform-independent. This finally allows for true cross-platform reuse, not only within the Drupal ecosystem but across all technologies used by the organization.</p>
<h2 id="conclusions">Conclusions</h2>
<p>The integration of Design System and Drupal CMS represents much more than a technological choice: it is a strategic decision that can transform how an organization creates, manages, and evolves its digital experiences.</p>
<p>Our experience has demonstrated that, with the right approach, this integration produces tangible and measurable benefits: greater consistency, faster development, reduced costs, and a superior user experience. The Design System, when treated as a strategic product rather than a cost, becomes a true accelerator of innovation.</p>
<p>To further explore the topic, we recommend:</p>
<ol>
<li>Explore our <a href="https://www.youtube.com/watch?v=tbAA51o3RyU">complete talk on Design Systems and Drupal</a> for in-depth and practical insights (in Italian only).</li>
<li>Consult our <a href="https://www.sparkfabrik.com/en/success-stories/">case studies</a> illustrating successful implementations in various complex sectors and contexts.</li>
<li><a href="https://www.sparkfabrik.com/en/contact-us/">Contact our team</a> for a personalized assessment of your specific context and objectives.</li>
</ol>
<hr>
<p>This article is part of our series on Drupal CMS. To explore other aspects of the platform, we invite you to consult our previous articles on <a href="/en/drupal-cms-the-new-era-of-enterprise-content-management">Drupal CMS features and advantages</a>, <a href="/en/drupal-cms-a-comparison-with-the-main-alternatives">comparison with main alternatives</a>, <a href="/en/migration-to-drupal-cms-complete-guide-for-a-successful-transition">migration strategies</a>, <a href="/en/drupal-cms-security-compliace-regulated-sector">security and compliance aspects</a>, <a href="/en/drupal-cms-all-innovations-of-2025">ecosystem innovation roadmap</a>, and <a href="/en/composable-architecture-with-drupal-cms">composable architecture</a>.</p>
]]></content:encoded><media:content url="https://www.sparkfabrik.com/images/blog/design-system-e-drupal-cms/featured-image.webp" medium="image"/><enclosure url="https://www.sparkfabrik.com/images/blog/design-system-e-drupal-cms/featured-image.webp" type="image/jpeg"/><category>Drupal</category></item><item><title>Composable Architecture with Drupal CMS: Flexible Digital Ecosystems</title><link>https://www.sparkfabrik.com/en/blog/composable-architecture-with-drupal-cms/</link><pubDate>Fri, 04 Jul 2025 00:00:00 +0000</pubDate><author>SparkFabrik Team</author><guid>https://www.sparkfabrik.com/en/blog/composable-architecture-with-drupal-cms/</guid><description>Discover how to implement a composable architecture with Drupal CMS: patterns, best practices, case studies to build flexible, scalable digital ecosystems</description><content:encoded><![CDATA[<div class="tldr">
  <span class="tldr__label">TL;DR</span>
  <div class="tldr__body">
    How to implement composable architecture with Drupal CMS, based on Gartner&rsquo;s 4 principles (modularity, autonomy, orchestration, discovery). The article presents 3 implementation patterns (content hub, federated DXP, low-code platform), real case studies (Zambon, Caleffi), and best practices for API strategy, CI/CD, and governance. Result: up to 40% faster time-to-market.
  </div>
</div>
<p>Sudden changes in business needs, new channels to support, integration with diverse systems, new regulatory requirements, and the necessity to rapidly adapt to technological innovations. This is the vast and rapidly evolving digital landscape in which organizations operate and face increasingly complex challenges.</p>
<p>In this context, the traditional monolithic approach to digital platforms (that of the &ldquo;single, large and immutable platform&rdquo;) is showing its limitations, giving way to a new paradigm: composable architecture.</p>
<p>In previous articles of our series on Drupal CMS, we explored the <a href="/en/drupal-cms-the-new-era-of-enterprise-content-management?hsLang=en">innovative features of the platform</a>, analyzed its <a href="/en/drupal-cms-a-comparison-with-the-main-alternatives?hsLang=en">advantages over alternatives</a>, delved into <a href="/en/migration-to-drupal-cms-complete-guide-for-a-successful-transition?hsLang=en">migration strategies</a>, discussed <a href="/en/drupal-cms-security-compliace-regulated-sector?hsLang=en">security and compliance aspects</a>, as well as the ambitious <a href="/en/drupal-cms-all-innovations-of-2025?hsLang=en">innovation roadmap</a> of the ecosystem.</p>
<p>Now, in this article, we examine how Drupal CMS represents an ideal foundation for implementing composable, flexible architectures capable of effectively responding to change.</p>
<h2 id="composable-architecture-a-new-approach-to-building-digital-ecosystems">Composable Architecture: A New Approach to Building Digital Ecosystems</h2>
<p>Composable architecture represents an approach to building digital ecosystems based on the flexible combination of modular and interoperable components. Instead of a single, giant, and monolithic system that is difficult to manage and maintain, in the composable approach, our architecture is composed of many components that are assembled, similar to Lego bricks, each of which is independent, performs its function, and can be updated or replaced without impacting the entire system.</p>
<p>More specifically, the &ldquo;composable&rdquo; paradigm, formalized by Gartner, is based on four fundamental principles:</p>
<ol>
<li><strong>Modularity</strong>: Systems are composed of independent and interchangeable components. This autonomy makes each part easily manageable and replaceable, without compromising the overall system.</li>
<li><strong>Autonomy</strong>: Each component operates independently, with well-defined interfaces, reducing the risk of problems propagating between the different elements of the architecture.</li>
<li><strong>Orchestration</strong>: Components can be organized and reorganized flexibly, combined and adapted dynamically according to operational and strategic needs.</li>
<li><strong>Discovery</strong>: Components are easily identifiable and configurable, with relevant benefits in terms of implementation and configuration times.</li>
</ol>
<p>This approach allows organizations to build highly adaptable digital ecosystems, where individual components can be replaced or updated without rebuilding the entire system. The result, in addition to technical benefits, is a significant acceleration of innovation and a reduction in time-to-market, fundamental aspects for competitiveness.</p>
<h2 id="why-composable-architecture-is-relevant-today">Why Composable Architecture is Relevant Today</h2>
<p>The interest in and adoption of composable architectures are growing rapidly. This is not merely a technological trend, but a concrete response to a market that is constantly accelerating. Let&rsquo;s understand in more detail the reasons, which directly impact the core of every business:</p>
<p><strong>Speed of market change</strong>: In a world where customer needs and digital trends change at a rapid pace, the ability to adapt quickly has become a fundamental competitive advantage.</p>
<p><strong>Proliferation of channels and touchpoints</strong>: Today, it is no longer enough to simply be on the web. Organizations must offer consistent experiences across an increasing number of channels and devices, and this requires agile platforms.</p>
<p><strong>Personalization expectations</strong>: Users expect increasingly personalized experiences that understand them, anticipate their desires, and align with their expectations. Such tailored experiences require flexibility, responsiveness, and agility in digital platforms.</p>
<p><strong>Heterogeneous technological ecosystems</strong>: Organizations often operate with multiple systems, some of which may be legacy, that need to integrate efficiently. Composable architecture facilitates the connection between different technologies.</p>
<p>A recent <a href="https://www.gartner.com/en/doc/predicts-2023-composable-applications-accelerate-business-innovation">Gartner study</a> predicts that by 2026, organizations that have adopted a composable architecture will outperform the competition by 80% in the speed of implementing new functionalities. This prediction highlights the urgency and strategic value of this approach.</p>
<h2 id="drupal-cms-as-a-foundation-for-composable-architectures">Drupal CMS as a Foundation for Composable Architectures</h2>
<p>Drupal CMS stands out in the landscape of content management systems as a true &ldquo;champion of modularity,&rdquo; with native features that make it particularly suitable as a base for composable architectures. The API-first approach has been integrated into Drupal for several versions, and with Drupal CMS, this vision reaches a high level of maturity.</p>
<h3 id="api-first-by-design">API-first by design</h3>
<p>Drupal CMS has been designed with an API-first architecture that natively supports key protocols for modern multichannel and omnichannel implementations:</p>
<ul>
<li><strong>Native JSON:API</strong>: Integrated into the core, it offers a complete RESTful API that complies with JSON:API specifications. This provides a robust bridge for Drupal to communicate with any other system.</li>
<li><strong>Natively supported GraphQL</strong>: Allows for complex and targeted queries that reduce overhead and optimize performance.</li>
<li><strong>Flexible Web Services</strong>: Natively supports multiple integration protocols (REST, GraphQL, JSON-RPC), ensuring flexibility and integrability with a wide range of systems.</li>
<li><strong>OAuth and JWT Authentication</strong>: Provides robust mechanisms for API security and advanced authentication and authorization management. In Drupal, security is not an option but a fundamental value that characterizes the entire ecosystem.</li>
</ul>
<p>These features enable Drupal CMS to function effectively as both a headless backend and a hybrid system, where traditional server-side rendering coexists with JavaScript frontend components. This ensures maximum architectural flexibility, allowing you to bring your digital vision to life without compromise.</p>
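<p>To make the API-first approach concrete, here is a small sketch that builds a Drupal JSON:API request URL using sparse fieldsets (<code>fields[TYPE]</code>) and relationship includes (<code>include</code>), two mechanisms of the JSON:API specification that let a decoupled frontend fetch exactly the data it needs. The base URL, bundle, and field names are illustrative.</p>

```typescript
// Sketch: composing a Drupal JSON:API URL with sparse fieldsets and
// includes. The resource path, types, and field names are illustrative.
function jsonApiUrl(
  base: string,
  resource: string, // e.g. "node/article"
  opts: { fields?: Record<string, string[]>; include?: string[] } = {},
): string {
  const params = new URLSearchParams();
  // Sparse fieldsets: request only the listed fields per resource type
  for (const [type, names] of Object.entries(opts.fields ?? {})) {
    params.set(`fields[${type}]`, names.join(','));
  }
  // Includes: embed related resources in the same response
  if (opts.include?.length) params.set('include', opts.include.join(','));
  const qs = params.toString();
  return `${base}/jsonapi/${resource}${qs ? `?${qs}` : ''}`;
}

const url = jsonApiUrl('https://example.com', 'node/article', {
  fields: { 'node--article': ['title', 'created'] },
  include: ['field_tags'],
});
// Bracket and comma characters are percent-encoded by URLSearchParams, e.g.:
// https://example.com/jsonapi/node/article?fields%5Bnode--article%5D=title%2Ccreated&include=field_tags
console.log(url);
```

<p>Because every consumer (website, app, digital signage) builds requests against the same contract, the content hub can serve many touchpoints without channel-specific backends.</p>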
<h3 id="flexible-and-structured-content-model">Flexible and Structured Content Model</h3>
<p>The secret to a composable architecture is having well-organized and easily manageable content. Drupal CMS&rsquo;s <em>entity</em> and <em>fields</em> system offers a powerful foundation for modeling structured content, making it easy to use and manage for any frontend or system:</p>
<ul>
<li><strong>Customizable Content Types</strong>: You can define complex content structures and articulated relationships between them, tailored to your information.</li>
<li><strong>Taxonomy System</strong>: Allows for advanced content categorization with hierarchical vocabularies.</li>
<li><strong>Extensible Field API</strong>: Natively supports complex content types, such as media, references, and structured data.</li>
<li><strong>Entity Reference</strong>: Enables the creation of sophisticated relationships and connections between different content pieces, interlinking information within your database.</li>
</ul>
<p>This range of features enables enormous flexibility in implementing complex content models tailored to specific needs. Such flexibility is fundamental for the composable approach, where different parts of the system can access and manipulate content consistently and structurally, through those well-defined APIs we&rsquo;ve just discussed.</p>
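<p>As an illustrative sketch of how such a structured content model surfaces in a typed, decoupled frontend: entity references become typed links between resources. The interfaces below mirror a hypothetical "article" content type with taxonomy tags, not a fixed Drupal schema.</p>

```typescript
// Sketch: a typed view of a structured content model on the consumer side.
// These interfaces are illustrative, modeled on a hypothetical "article"
// content type; they are not a canonical Drupal schema.
interface TaxonomyTerm {
  id: string;
  name: string; // term from a hierarchical vocabulary
}

interface Article {
  id: string;
  title: string;
  body: string;
  tags: TaxonomyTerm[];      // entity references to taxonomy terms
  relatedArticles: string[]; // entity reference IDs, resolved on demand
}

const article: Article = {
  id: 'a1',
  title: 'Composable architectures',
  body: 'Body text',
  tags: [{ id: 't1', name: 'Drupal' }],
  relatedArticles: ['a2'],
};
console.log(article.tags[0].name); // Drupal
```

<p>Sharing such types across frontends keeps every channel in agreement about what an "article" is, which is precisely the consistency the composable approach depends on.</p>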
<h3 id="native-modular-system">Native Modular System</h3>
<p>Drupal was conceived with a deeply modular architecture, where each module adds specific functionality. This modularity perfectly aligns with the principles of composable architecture:</p>
<ul>
<li><strong>Extensible Module Ecosystem</strong>: Thousands of ready-to-use components are available in the Drupal ecosystem to extend core functionalities, capable of meeting virtually any need.</li>
<li><strong>Hook System and Event Dispatcher</strong>: Standardized mechanisms that allow for further extension and customization of Drupal without needing to modify core code.</li>
<li><strong>Plugin System</strong>: An infrastructure for interchangeable components with well-defined interfaces, providing enormous freedom and flexibility.</li>
<li><strong>Dependency Injection</strong>: A modern architecture that promotes decoupled and easily testable components, ensuring that your system remains robust and reliable.</li>
</ul>
<p>This modularity is not limited to the internal architecture but extends to the broader ecosystem, allowing for the construction of solutions where Drupal CMS can effectively integrate with other specialized systems, becoming the cornerstone of a &ldquo;best-of-breed&rdquo; solution (an example of which is below).</p>
<h2 id="implementing-composable-architectures-with-drupal-cms">Implementing Composable Architectures with Drupal CMS</h2>
<p>Let&rsquo;s now examine how Drupal CMS can be implemented in various composable architectural patterns, each suited to specific scenarios, requirements, and needs. We are not discussing theory, but rather projects we have successfully delivered.</p>
<h3 id="pattern-1-drupal-cms-as-a-central-content-hub">Pattern 1: Drupal CMS as a Central Content Hub</h3>
<p>In this pattern, Drupal CMS acts as a central content hub that feeds multiple touchpoints via APIs:</p>
<ul>
<li><strong>Unified Content Repository</strong>: Drupal manages all structured content within the organization.</li>
<li><strong>Multi-channel Distribution</strong>: From your website to your mobile app, from digital signage to emails, and various other channels, content is distributed everywhere from the central hub.</li>
<li><strong>Centralized Governance</strong>: Policies, workflows, and permissions are managed uniformly directly within Drupal.</li>
<li><strong>Decoupled Frontend</strong>: Specialized frontend implementations are tailored for different channels and use cases, but all draw from the same source.</li>
</ul>
<p>This approach is particularly effective for organizations with complex content management needs and multiple channels to support, where content consistency is paramount.</p>
<h4 id="case-study-zambon-group">Case study: Zambon Group</h4>
<p>The <a href="https://www.sparkfabrik.com/en/success-stories/zambon/">Zambon project</a> represents an exemplary implementation of this pattern. The platform utilizes Drupal CMS as the central content hub, the core that feeds the main corporate website, numerous product microsites in various languages, and all digital marketing initiatives.</p>
<p>The composable architecture allowed us to:</p>
<ul>
<li>Centralize content management with unified governance.</li>
<li>Distribute consistent content across dozens of different touchpoints.</li>
<li>Implement optimized frontends for various use cases.</li>
<li>Reduce time-to-market for new digital initiatives by 40%.</li>
</ul>
<h3 id="pattern-2-drupal-cms-as-a-component-of-a-federated-dxp">Pattern 2: Drupal CMS as a Component of a Federated DXP</h3>
<p>Sometimes, you need the &ldquo;best of every world.&rdquo; In this pattern, Drupal CMS acts as one of the specialized components within a federated Digital Experience Platform (DXP):</p>
<ul>
<li><strong>Best-of-breed approach</strong>: Each component of the ecosystem specializes in its function. We choose the most performant solutions for each specific function.</li>
<li><strong>API orchestration</strong>: An integration layer that coordinates different systems, making them work in harmony.</li>
<li><strong>Specialized Microservices</strong>: Dedicated components for specific, agile, and independent functionalities.</li>
<li><strong>Unified Experience Layer</strong>: A frontend that aggregates (federates) content and functionalities from various sources, offering a fluid and unified user experience, even if many different systems operate behind the scenes.</li>
</ul>
<p>This approach allows organizations to select the best solutions for each functional area, avoiding the vendor lock-in typical of monolithic DXPs.</p>
<h4 id="case-study-caleffi">Case study: Caleffi</h4>
<p>The <a href="https://www.sparkfabrik.com/en/success-stories/caleffi-new-website/">new Caleffi website</a> implements this composable architecture pattern. Drupal CMS manages editorial content, while other specialized systems handle the product catalog, e-commerce, and CRM.</p>
<p>The benefits of this architecture include:</p>
<ul>
<li>Flexibility in selecting the best solutions for each function, and freedom from vendor lock-in.</li>
<li>Ability to evolve individual components without impacting the entire ecosystem.</li>
<li>Consistent integration of data from different systems (data federation).</li>
<li>Unified user experience despite the diversity of backends.</li>
</ul>
<h3 id="pattern-3-drupal-cms-as-a-low-code-development-platform">Pattern 3: Drupal CMS as a Low-Code Development Platform</h3>
<p>Drupal CMS is also evolving into a powerful base for rapidly building digital applications with a <a href="/it/low-code-platform-e-no-code-platform-il-futuro-dello-sviluppo-con-drupal?hsLang=en"><em>low-code/no-code</em></a> approach. This pattern includes:</p>
<ul>
<li><strong>Advanced Layout Builder</strong>: A visual interface for creating complex layouts with simple drag-and-drop.</li>
<li><strong>Webform System</strong>: Interactive forms and workflows, without writing code.</li>
<li><strong>Views Module</strong>: A visual query builder for creating customized data views, showing only what you need, the way you prefer.</li>
<li><strong>Rules and Automation</strong>: Business logic implemented through visual interfaces, automating processes and streamlining work.</li>
</ul>
<p>This approach allows organizations to significantly accelerate the development of digital applications, making the creation of experiences more accessible and reducing reliance on development resources.</p>
<h2 id="lessons-learned-best-practices-for-composable-implementations-with-drupal-cms">Lessons Learned: Best Practices for Composable Implementations with Drupal CMS</h2>
<p>Based on our extensive experience with numerous Drupal and composable architecture projects, we can identify several best practices that maximize the benefits of this approach and can make a difference in project success.</p>
<h3 id="1-define-a-consistent-api-strategy">1. Define a Consistent API Strategy</h3>
<p>APIs are the heart of composable architectures. Having a well-defined API strategy is fundamental for the success of a composable architecture:</p>
<ul>
<li><strong>Standardize formats and conventions</strong>: REST, GraphQL, JSON:API—choose your standards and adhere to them.</li>
<li><strong>Implement API versioning</strong>: Ensure the operational continuity of your applications even when you update the APIs.</li>
<li><strong>Clearly define the authentication and authorization model</strong>: Precisely define who can do what with your APIs. Security comes first.</li>
<li><strong>Document APIs comprehensively with tools like OpenAPI</strong>: Good documentation reduces costs and frustration, facilitates new integrations, and simplifies maintenance.</li>
</ul>
<p>A clear API strategy, rooted in standardization and documentation, facilitates integration and significantly reduces maintenance costs over time.</p>
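<p>To make the standardization point concrete, here is a minimal Python sketch that extracts data from a JSON:API document shaped like what Drupal core's JSON:API module returns from an endpoint such as <code>/jsonapi/node/article</code>. The sample payload, titles, and UUIDs are invented for illustration:</p>

```python
import json

# Trimmed example of a JSON:API document, shaped like the response of
# Drupal's core JSON:API module for GET /jsonapi/node/article.
# Titles and UUIDs below are invented sample data.
sample = json.loads("""
{
  "jsonapi": {"version": "1.0"},
  "data": [
    {
      "type": "node--article",
      "id": "9b7f4e0a-0000-4000-8000-000000000001",
      "attributes": {"title": "Composable architectures", "status": true}
    },
    {
      "type": "node--article",
      "id": "9b7f4e0a-0000-4000-8000-000000000002",
      "attributes": {"title": "API-first Drupal", "status": false}
    }
  ]
}
""")

def published_titles(document: dict) -> list[str]:
    """Extract the titles of published nodes from a JSON:API document."""
    return [
        item["attributes"]["title"]
        for item in document.get("data", [])
        if item["attributes"].get("status")
    ]

print(published_titles(sample))  # ['Composable architectures']
```

<p>Because JSON:API prescribes the <code>data</code>/<code>attributes</code> envelope, the same parsing code works for any resource type, which is exactly the maintenance benefit of committing to a standard.</p>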
<h3 id="2-adopt-a-design-first-approach">2. Adopt a Design-First Approach</h3>
<p>Instead of reactively developing APIs, design their implementation upfront. A design-first approach ensures greater consistency and usability:</p>
<ul>
<li><strong>Define API contracts in detail before implementation</strong>: What they should do, what they return, and in what format and structure.</li>
<li><strong>Design APIs with consumer use cases in mind</strong>: Make them intuitive and easy for their actual users.</li>
<li><strong>Validate the design with potential consumers</strong>: Early feedback allows for course correction while costs are still contained.</li>
<li><strong>Use mocking tools for early testing</strong>: Identify problems while they are still small, and keep testing throughout all development phases.</li>
</ul>
<p>This approach improves API quality and reduces the need for modifications during implementation, thereby containing remediation costs.</p>
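<p>A design-first workflow can be sketched in a few lines: the contract exists before any backend code, and a mocked response is checked against it. The field names and the mock below are hypothetical, and a real project would use an OpenAPI document and tooling rather than a hand-rolled validator:</p>

```python
# Hypothetical API contract for an article endpoint, written before any
# implementation exists. Field names are illustrative assumptions.
ARTICLE_CONTRACT = {
    "id": str,
    "title": str,
    "published": bool,
}

def validate(payload: dict, contract: dict) -> list[str]:
    """Return a list of contract violations (empty list = conforming)."""
    errors = []
    for field, expected_type in contract.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"wrong type for {field}")
    return errors

# A mocked response lets consumers give feedback before a single line
# of backend code is written.
mock = {"id": "42", "title": "Hello", "published": True}
assert validate(mock, ARTICLE_CONTRACT) == []

# A non-conforming payload is caught immediately, while fixing it is cheap.
assert validate({"id": 42}, ARTICLE_CONTRACT) == [
    "wrong type for id",
    "missing field: title",
    "missing field: published",
]
```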
<h3 id="3-implement-advanced-cicd">3. Implement Advanced CI/CD</h3>
<p>A composable architecture requires a mature DevOps approach, with robust development and release pipelines:</p>
<ul>
<li><strong>Full automation</strong> of build and deploy processes.</li>
<li><strong>Automated testing</strong> at the API level.</li>
<li><strong>Incremental deployment</strong> strategies to reduce risks.</li>
<li><strong>Continuous end-to-end monitoring</strong> of performance and availability.</li>
</ul>
<p>These practices ensure that individual components can evolve independently without compromising the stability of the overall ecosystem, which is continuously monitored automatically.</p>
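<p>The "automated testing at the API level" point can be illustrated with a small smoke check of the kind a pipeline gate might run. The <code>Response</code> stand-in below keeps the sketch runnable without a live instance; in a real pipeline these checks would run with an HTTP client against a review environment:</p>

```python
from dataclasses import dataclass

# Minimal stand-in for an HTTP response, so the check below is runnable
# without a live backend. A real pipeline would issue actual requests.
@dataclass
class Response:
    status_code: int
    headers: dict

def check_api_health(response: Response) -> list[str]:
    """API-level smoke checks suitable as a CI/CD pipeline gate."""
    problems = []
    if response.status_code != 200:
        problems.append(f"unexpected status {response.status_code}")
    ctype = response.headers.get("Content-Type", "")
    if "json" not in ctype:
        problems.append(f"unexpected content type {ctype!r}")
    return problems

ok = Response(200, {"Content-Type": "application/vnd.api+json"})
broken = Response(503, {"Content-Type": "text/html"})

assert check_api_health(ok) == []          # gate passes
assert len(check_api_health(broken)) == 2  # gate fails with two findings
```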
<h3 id="4-consider-governance-from-the-outset">4. Consider Governance from the Outset</h3>
<p>Governance is a critical aspect of a composable architecture. Without clear governance, flexibility can turn into chaos:</p>
<ul>
<li>Define <strong>clear responsibilities</strong> for each component.</li>
<li>Implement <strong>discovery mechanisms and service catalogs</strong>, a clear and easily accessible list of all services.</li>
<li><strong>Establish quality and security standards</strong>.</li>
<li><strong>Monitor</strong> API usage and performance.</li>
</ul>
<p>Effective governance ensures that the flexibility of the composable architecture does not result in management complexity or inconsistencies.</p>
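<p>At its simplest, a service catalog answers "who owns what, and where does it live". The sketch below keeps the catalog as plain data with invented service names and URLs; real setups typically rely on a dedicated tool such as an internal developer portal, but the governance question it answers is the same:</p>

```python
# A minimal, illustrative service catalog kept as data. Service names,
# owners, and URLs are hypothetical examples.
CATALOG = {
    "content-api":  {"owner": "cms-team",      "base_url": "https://cms.example.com/jsonapi"},
    "search-api":   {"owner": "platform-team", "base_url": "https://search.example.com/v1"},
    "commerce-api": {"owner": "shop-team",     "base_url": "https://shop.example.com/api"},
}

def owner_of(service: str) -> str:
    """Answer the first governance question: who is responsible?"""
    try:
        return CATALOG[service]["owner"]
    except KeyError:
        raise LookupError(f"service {service!r} is not in the catalog") from None

assert owner_of("content-api") == "cms-team"
```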
<h2 id="overcoming-the-challenges-of-composable-architecture">Overcoming the Challenges of Composable Architecture</h2>
<p>While the benefits of composable architecture are significant, it is important to acknowledge and address the challenges this approach entails. Here are the main ones:</p>
<ul>
<li><strong>Integration Complexity.</strong> Managing multiple components and services inevitably introduces greater complexity.
<ul>
<li><strong>Solution</strong> : Implement a centralized API gateway to manage routing, authentication, and monitoring.</li>
<li><strong>Drupal Approach</strong> : Use modules like Subrequests and Decoupled Router to simplify integration.</li>
</ul>
</li>
<li><strong>User Experience Consistency.</strong> With potentially decoupled frontends and backends, maintaining a consistent user experience can be a challenge.
<ul>
<li><strong>Solution</strong> : Implement shared design systems and cross-platform component libraries.</li>
<li><strong>Drupal Approach</strong> : Use Drupal as a repository for UI components accessible via API, ensuring consistency everywhere.</li>
</ul>
</li>
<li><strong>End-to-End Performance.</strong> In distributed architectures, overall performance depends on multiple systems. Ensuring high performance in all circumstances requires specialized skills.
<ul>
<li><strong>Solution</strong> : Implement advanced caching at different architectural levels.</li>
<li><strong>Drupal Approach</strong> : Leverage Drupal&rsquo;s built-in caching system, integrating it with CDNs and edge caching for stable and top-tier performance.</li>
</ul>
</li>
<li><strong>Diversified Skill Set.</strong> Implementing composable architectures requires diverse and specialized technical skills, from data architecture to performance, from APIs to security, as well as design and frontend for decoupled solutions. In this scenario, an experienced partner with transversal skills who can collaborate with your team proves to be a winning strategic asset for all future challenges.
<ul>
<li><strong>Solution</strong> : Cross-functional teams with complementary skills.</li>
<li><strong>SparkFabrik Approach</strong> : Our team combines expertise in Drupal CMS, API design, modern frontend, DevOps, and supply chain security, supporting your resources, transferring know-how, and helping you achieve your business objectives.</li>
</ul>
</li>
</ul>
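<p>As a sketch of the routing challenge above, the Decoupled Router module lets a decoupled frontend resolve a human-readable path to the underlying entity via its <code>/router/translate-path</code> endpoint. The base URL and the sample response below are illustrative, and the exact response keys can vary by module version:</p>

```python
import json
from urllib.parse import quote

DRUPAL_BASE = "https://cms.example.com"  # hypothetical backend URL

def translate_path_url(path: str) -> str:
    """Build the Decoupled Router lookup URL for a frontend route."""
    return f"{DRUPAL_BASE}/router/translate-path?path={quote(path)}&_format=json"

# Trimmed example of a translate-path response; treat the shape as
# illustrative, since keys can differ between module versions.
sample = json.loads("""
{
  "resolved": "https://cms.example.com/node/12",
  "entity": {"type": "node", "bundle": "article",
             "uuid": "9b7f4e0a-0000-4000-8000-000000000001"}
}
""")

def jsonapi_url(translated: dict) -> str:
    """Map a router result onto the corresponding JSON:API resource."""
    entity = translated["entity"]
    return f"{DRUPAL_BASE}/jsonapi/{entity['type']}/{entity['bundle']}/{entity['uuid']}"

assert "translate-path?path=/about" in translate_path_url("/about")
assert jsonapi_url(sample).endswith(
    "/jsonapi/node/article/9b7f4e0a-0000-4000-8000-000000000001"
)
```

<p>This two-step lookup is what lets the frontend keep clean, editor-managed URLs while the backend remains free to reorganize its internal routing.</p>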
<h2 id="a-look-ahead-the-future-of-composable-architecture-with-drupal-cms">A Look Ahead: The Future of Composable Architecture with Drupal CMS</h2>
<p>Looking ahead, we can identify some emerging trends and innovations in the ecosystem that will make Drupal CMS even more effective and powerful as a base for composable architectures.</p>
<h3 id="composable-experience-builder">Composable Experience Builder</h3>
<p>Drupal&rsquo;s Layout Builder is evolving into a true &ldquo;Experience Builder&rdquo; capable of enabling visual composition of experiences across multiple channels. A new, completely WYSIWYG interface, with integrated responsive previews that show real-time rendering on different devices, finally eliminates the boundary between authoring and preview, allowing contextual modifications in real-time.</p>
<p>Furthermore, the flexibility and composability that distinguish Drupal are maintained, allowing for the creation and reuse of custom components within an intuitive interface, the orchestration of content from various sources, and the real-time personalization of the experience based on user data and behavior.</p>
<h3 id="ai-integration-for-content-orchestration">AI Integration for Content Orchestration</h3>
<p>Artificial intelligence is transforming content management, and there is great excitement in the Drupal ecosystem, so much so that the <a href="https://www.drupal.org/about/starshot/initiatives/ai">Drupal AI Initiative</a> was recently launched (SparkFabrik is also actively contributing to accelerate AI innovation in Drupal).</p>
<p>AI features are evolving at a rapid pace. Among others, intelligent suggestions for component combination, automatic optimization of layouts and content, predictive personalization based on behavioral patterns, and automation of tagging and categorization are expected thanks to Agentic AI. Moreover, the modular architecture allows support for various AI models (both cloud-based and on-premise), while the unified API enables extending any functionality with AI capabilities.</p>
<h3 id="edge-rendering-and-distribution">Edge Rendering and Distribution</h3>
<p>A composable architecture can greatly benefit from edge rendering, allowing for the creation of high-performance web applications worldwide.</p>
<p>The evolution towards <em>edge-first</em> architectures will bring content and logic closer to the end user, with server-side rendering &ldquo;at the edge&rdquo; to optimize performance, particularly for dynamic content and contextual personalization based on local data. Specific, personalized functionality can thus be delivered faster and closer to users, while higher-level functionality can reside in other geographical areas. This geographical distribution also improves operational resilience.</p>
<p>These innovations will further strengthen Drupal CMS&rsquo;s position as an ideal platform for implementing composable architectures that combine flexibility, performance, and faster time-to-market.</p>
<h2 id="conclusions">Conclusions</h2>
<p>Composable architecture represents a radical shift in how digital ecosystems are built, offering organizations the flexibility and agility needed to tackle a rapidly evolving market. Drupal CMS, with its API-first approach, flexible content model, and native modular architecture, positions itself as an ideal foundation for implementing this new paradigm.</p>
<p>Organizations adopting a composable approach with Drupal CMS can expect concrete and significant benefits, both at a technical level and for their business objectives:</p>
<ul>
<li>A <strong>significantly accelerated time-to-market</strong> for all new digital initiatives, including marketing campaigns.</li>
<li>Greater <strong>flexibility</strong> in the evolution of their technological ecosystem.</li>
<li>Reduced <strong>Total Cost of Ownership (TCO)</strong> thanks to the ability to update and evolve specific components without overhauling the entire system.</li>
<li>Improved <strong>resilience</strong>, thanks to an architecture that is by definition more modular and adaptable.</li>
</ul>
<p>With an experienced partner like SparkFabrik, which combines in-depth expertise in Drupal CMS, API architecture, and modern frontend, organizations can successfully undertake this transition. We build enabling digital ecosystems that not only meet today&rsquo;s needs but are ready to continuously evolve to address future challenges.</p>
<h3 id="next-steps">Next Steps</h3>
<p>If your organization is considering adopting a composable architecture based on Drupal CMS, we invite you to:</p>
<ol>
<li>Explore our <a href="https://www.sparkfabrik.com/en/services/drupal/">Drupal Services Suite by SparkFabrik</a> with a focus on the API-first approach.</li>
<li>Watch the <a href="https://www.youtube.com/watch?v=hygzAGmK__0&amp;list=PLSD9hiOyso86-4F8ZFnRTbRpJ6qn_5U9j">talk comparing different architectures presented by Luca Lusso</a>, which delves into the pros and cons of various architectural approaches (Italian only).</li>
<li><a href="https://www.sparkfabrik.com/en/contact-us/">Contact us</a> for an assessment of your specific use case and an analysis of composable implementation opportunities.</li>
</ol>
<hr>
<p>This article is part of our series dedicated to Drupal CMS. To explore other aspects of the platform, we invite you to consult our previous articles on <a href="/en/drupal-cms-the-new-era-of-enterprise-content-management?hsLang=en">Drupal CMS features and benefits</a>, its <a href="/en/drupal-cms-a-comparison-with-the-main-alternatives?hsLang=en">comparison with the main alternatives</a>, <a href="/en/migration-to-drupal-cms-complete-guide-for-a-successful-transition?hsLang=en">migration strategies</a> from other systems, <a href="/en/drupal-cms-security-compliace-regulated-sector?hsLang=en">security and compliance</a> with particular attention to regulated sectors, and the ambitious <a href="/en/drupal-cms-all-innovations-of-2025?hsLang=en">innovation roadmap</a> with all the news from the ecosystem.</p>
]]></content:encoded><media:content url="https://www.sparkfabrik.com/images/blog/composable-architecture-with-drupal-cms/Drupal_20CMS_20-_20Composable_20Architecture_20-_20Featured_20Image-1.png" medium="image"/><enclosure url="https://www.sparkfabrik.com/images/blog/composable-architecture-with-drupal-cms/Drupal_20CMS_20-_20Composable_20Architecture_20-_20Featured_20Image-1.png" type="image/jpeg"/><category>Drupal</category></item></channel></rss>