
What’s coming to our GitHub Actions 2026 security roadmap


Why this matters right now

Software supply chain attacks aren’t slowing down. Over the past year, incidents targeting projects like tj-actions/changed-files, Nx, and trivy-action show a clear pattern: attackers are targeting CI/CD automation itself, not just the software it builds.

The playbook is consistent:

  • Vulnerabilities allow untrusted code execution
  • Malicious workflows run without observability or control
  • Compromised dependencies spread across thousands of repositories
  • Over-permissioned credentials get exfiltrated via unrestricted network access

Today, too many of these vulnerabilities are easy to introduce and hard to detect. We’re working to address this gap.

What we’re building

Our 2026 roadmap focuses on securing GitHub Actions across three layers:

  1. Ecosystem: deterministic dependencies and more secure publishing
  2. Attack surface: policies, secure defaults, and scoped credentials
  3. Infrastructure: real-time observability and enforceable network boundaries for CI/CD runners

This isn’t a rearchitecture of Actions; it’s a shift toward making secure behavior the default, so teams don’t have to be CI/CD security experts to stay safe.

Here’s what’s coming next, and when.

1. Building a more secure Actions ecosystem

The current challenge

Action dependencies are not deterministic and are resolved at runtime. Workflows can reference a dependency by various mutable references including tags and branches.

That means what runs in CI isn’t always fixed or auditable. Action maintainers, for instance, typically manage updates through mutable tags that point to the latest commit of a major or minor release.

Using immutable commit SHAs helps, but it’s hard to manage at scale and transitive dependencies remain opaque.

That mutability has real consequences. When a dependency is compromised, the change can propagate immediately across every workflow that references it.

As recent supply chain incidents have shown, we can’t rely on the security posture of every maintainer and repository in the ecosystem to prevent the introduction of malicious code.

What’s changing: workflow-level dependency locking

We’re introducing a dependencies: section in workflow YAML that locks all direct and transitive dependencies to specific commit SHAs.

Think of it as Go’s go.mod + go.sum, but for your workflows, with complete reproducibility and auditability.

[Figure: example workflow YAML showing a dependencies section that pins each action to a commit SHA]
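Since the feature hasn’t shipped, the exact syntax is unknown. As a purely hypothetical sketch (every field name and SHA below is invented for illustration, not the announced format), a locked workflow might look something like this:

```yaml
# Hypothetical sketch only -- the final `dependencies:` syntax is not published yet.
name: CI
on: [push]

# Lock section: every direct and transitive action pinned to a commit SHA
# (illustrative values, generated by tooling rather than written by hand).
dependencies:
  actions/checkout@v4:
    sha: 8f4b7f84864484a7bf31766abe9204da3cbe65b3
  my-org/composite-release@v1:
    sha: 1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f9a0b
    transitive:
      actions/cache@v4:
        sha: 0c45773b623bea8c8e75f6c82b208c3cf94ea4f9

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # resolved against the lock above at run time
```

The point of the sketch is the go.sum-style property: a tag in a step resolves only to the SHA recorded in the lock, and a mismatch fails the run before any job starts.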

What this changes in practice:

  • Deterministic runs: every workflow executes exactly what was reviewed.
  • Reviewable updates: dependency changes show up as diffs in pull requests.
  • Fail-fast verification: hash mismatches stop execution before jobs run.
  • Full visibility: composite actions no longer hide nested dependencies.

In your workflows, this means you will be able to:

  • Resolve dependencies via GitHub CLI
  • Commit the generated lock data into your workflow
  • Update by re-running resolution and reviewing diffs

Our current milestones for lock files are as follows:

  • Public preview: 3-6 months
  • General availability: 6 months

Future: hardened publishing with immutable releases

Beyond consumption, we’ll harden how workflows are published into the Actions ecosystem. On the publishing side, we’re moving away from mutable references and towards immutable releases with stricter release requirements.

Our goal is to:

  • Make it clearer how and when code enters the ecosystem
  • Create a central enforcement point for detecting and blocking malicious code

2. Reducing attack surface with secure defaults

The current challenge

GitHub Actions is flexible by design. Workflows can run:

  • In response to many events
  • Triggered by various actors
  • With varying permissions

But as organizations scale, the relationship between repository access and workflow execution needs more granularity. Different workflows, teams, and enterprises need very different levels of exposure, and today’s coarse model leads to over-permissioned workflows, unclear trust boundaries, and configurations that are easy to get wrong.

Attacks like Pwn Requests show how subtle differences in event triggers, permissions, and execution contexts can be abused to compromise sensitive environments. Managing that risk across thousands of repositories and contributors requires centralized policy.

What’s changing: policy-driven execution

We’re introducing workflow execution protections built on GitHub’s ruleset framework.

Instead of reasoning about security across individual YAML files, you define central policies that control:

  • Who can trigger workflows
  • Which events are allowed

This shifts the model from distributed, per-workflow configuration that’s difficult to audit and easy to misconfigure, to centralized policy that makes broad protections and restrictions visible and enforceable in one place.

Our core policy dimensions include:

  • Actor rules specify who can trigger workflows, such as individual users, roles like repository admins, or trusted automation like GitHub Apps, GitHub Copilot, or Dependabot.
  • Event rules define which GitHub Actions events are permitted, such as push, pull_request, workflow_dispatch, and others.

For example, an organization could restrict workflow_dispatch execution to maintainers, preventing contributors with write access from manually triggering sensitive deployment or release workflows. Separately, they could prohibit pull_request_target events entirely and only allow pull_request, ensuring workflows triggered by external contributions run without access to repository secrets or write permissions.
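A policy like that could be sketched as follows. This is a pure invention for illustration: the actual ruleset schema for workflow execution protections has not been published, so every field name here is hypothetical.

```yaml
# Invented illustration -- not the actual ruleset schema.
workflow-execution-policy:
  actor-rules:
    workflow_dispatch:
      allow-roles: [maintain, admin]    # contributors with write access cannot trigger manually
  event-rules:
    allow: [push, pull_request, workflow_dispatch]
    deny: [pull_request_target]         # external contributions run without secrets or write access
```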

These protections scale across repositories without per-workflow configuration. Enterprises apply consistent policies organization-wide using rulesets and repository custom properties, reducing operational risk and governance overhead.

Why this matters for attack prevention:

Many CI/CD attacks depend on:

  • Confusing event behavior
  • Unclear permission boundaries
  • Unexpected execution contexts

Execution protections reduce this attack surface by ensuring that workflows that don’t meet policy never run.

Safe rollout: evaluate mode

To help teams adopt these protections safely, workflow execution rules support evaluate mode. In evaluate mode, rules are not enforced, but every workflow run that would have been blocked is surfaced in policy insights (similar to repository rulesets). This lets organizations assess the impact of new policies before activating enforcement, identifying affected workflows, validating coverage, and building confidence without disrupting existing automation.

Milestones:

  • Public preview: 3-6 months
  • General availability: 6 months

Scoped secrets and improved secret governance

The current challenge

Secrets in GitHub Actions are currently scoped at the repository or organization level. This makes secrets difficult to use safely, particularly with reusable workflows where credentials flow broadly by default. Teams need finer-grained controls to bind credentials to specific execution contexts.

What’s changing: scoped secrets

Scoped secrets introduce fine-grained controls that bind credentials to explicit execution contexts. Secrets can be scoped to:

  • Specific repositories or organizations
  • Branches or environments
  • Workflow identities or paths
  • Trusted reusable workflows without requiring callers to pass secrets explicitly

What this changes

  • Secrets are no longer implicitly inherited
  • Access requires matching an explicit execution context
  • Modified or unexpected workflows won’t receive credentials
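As a loose illustration of the idea (scoped secrets have not shipped, so this structure is entirely hypothetical), binding a credential to an execution context might look like this:

```yaml
# Hypothetical illustration of binding a credential to an execution context.
secret: DEPLOY_TOKEN
scope:
  repository: my-org/app        # illustrative repository
  environment: production
  branch: main
  reusable-workflow: my-org/shared/.github/workflows/deploy.yml@main
# A run that doesn't match every bound dimension never receives the secret.
```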

Reusable workflow secret inheritance

Reusable workflows enable powerful composition, but implicit secret inheritance has caused friction within platform teams. When secrets automatically flow from a calling workflow into a reusable workflow, trust boundaries blur, and credentials can be exposed to execution paths that were never explicitly approved.

With scoped secrets:

  • Secrets are bound directly to trusted workflows
  • Callers don’t automatically pass credentials
  • Trust boundaries are explicit

Permission model changes for Action Secrets

We’re separating code contributions from credential management.

That means write access to a repository will no longer grant secret management permissions and helps us move toward least privilege by default.

This capability will instead be available through a dedicated custom role and will remain part of the repository admin, organization admin, and enterprise admin roles.

Together, these changes make it possible to ensure credentials are only issued when both the workflow and the execution context are explicitly trusted.

Milestones:

  • Scoped secrets & reusable workflow inheritance: public preview in 3-6 months, general availability in 6 months
  • Secrets permission model: general availability in 3-6 months

Our future goal: building a unified policy-first security model

Longer term, our goal is fewer implicit behaviors, fewer per-workflow configurations, and more centralized, enforceable policy.

We want to give enterprises the ability to define clear trust boundaries for workflow execution, secret access, and event triggers without encoding complex security logic into every workflow file.

This includes expanding policy coverage, introducing richer approval and attestation gates, and consolidating today’s fragmented controls into a single governance surface.

3. Endpoint monitoring and control for CI/CD infrastructure

The current challenge

CI/CD infrastructure is critical infrastructure. GitHub Actions runners execute untrusted code, handle sensitive credentials, and interact with external systems and input.

But historically:

  • Visibility is limited
  • Controls are minimal
  • Investigation is reactive

When something goes wrong, organizations often have limited insight into what executed, where data flowed, or how a compromise unfolded.

Recent attacks have shown how unrestricted execution environments amplify impact, enabling secret exfiltration, unauthorized publishing, and long dwell times. Securing CI/CD requires treating its workloads as a first-class security domain with explicit controls and continuous visibility.

What’s changing

We’re introducing enterprise-grade endpoint protections for GitHub Actions, starting with the Actions Data Stream (visibility) and the native egress firewall (control).

Increased visibility with Actions Data Stream

CI/CD visibility today is fragmented, with limited insight or monitoring. As automation becomes more powerful and more targeted, organizations need the ability to observe execution behavior continuously, not just investigate after an incident.

The Actions Data Stream provides:

  • Near real-time execution telemetry
  • Centralized delivery to your existing systems

Supported destinations:

  • Amazon S3
  • Azure Event Hub / Data Explorer

Events are delivered in batches with at-least-once delivery guarantees, using a common schema that allows reliable indexing and correlation in your chosen platform.
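For illustration only (the published schema may differ; all field names and values below are invented), a single telemetry event could carry correlation fields like these:

```json
{
  "event_type": "workflow_job.completed",
  "delivered_at": "2026-01-15T12:00:00Z",
  "organization": "my-org",
  "repository": "my-org/app",
  "workflow": ".github/workflows/ci.yml",
  "run_id": 123456789,
  "job": "build",
  "conclusion": "success"
}
```

Because delivery is at-least-once, consumers should treat a field like run_id as a deduplication key when indexing.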

What you can observe:

  • Workflow and job execution details across repositories and organizations
  • Dependency resolution and action usage patterns
  • (Future) Network activity and policy enforcement outcomes

Why this matters

Without centralized telemetry, anomalies go unnoticed, detection happens after an incident, and responses are delayed.

The Actions Data Stream solves this problem by making CI/CD observable like any other production system.

Milestones:

  • Public preview: 3-6 months
  • General availability: 6-9 months

Native egress firewall for GitHub-hosted runners

The current challenge

GitHub-hosted runners currently allow unrestricted outbound network access. That means:

  • Easy data exfiltration
  • No restrictions on what package registries can be used to obtain dependencies
  • Unclear distinctions between expected and unexpected network traffic

What’s changing

We’re building a native egress firewall for GitHub-hosted runners, treating CI/CD infrastructure as critical infrastructure with enforceable network boundaries.

The firewall operates outside the runner VM at Layer 7. It remains immutable even if an attacker gains root access inside the runner environment. Organizations define precise egress policies, including:

  • Allowed domains and IP ranges
  • Permitted HTTP methods
  • TLS and protocol requirements
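A hypothetical policy sketch (the real configuration format hasn’t been announced; the fields below are illustrative only):

```yaml
# Invented illustration of an egress policy -- not the actual format.
egress:
  mode: monitor              # observe first; flip to `enforce` once the allowlist is stable
  allow:
    - domain: github.com
    - domain: registry.npmjs.org
      methods: [GET, HEAD]   # fetch-only access to the package registry
  require:
    tls-min-version: "1.2"
```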

The firewall provides two complementary capabilities:

  1. Monitor: Organizations can monitor all outbound network traffic from their runners, with every request automatically audited and correlated to the workflow run, job, step, and initiating command. This visibility gives teams the data they need to understand what their workflows connect to, build informed allowlists, and assess the impact of restrictions before enforcing them.
  2. Enforce: Organizations can enforce egress policies that block any traffic not explicitly permitted, ensuring that only approved destinations are reachable from the build environment.

Together, monitoring and enforcement create a safe adoption path: observe traffic patterns first, develop precise allowlists based on real data, then activate enforcement with confidence.

Milestones:

  • Public preview: 6-9 months

Our future goal: treating runners as protected endpoints

Runners shouldn’t be treated as disposable black boxes. We’re expanding toward:

  • Process-level visibility
  • File system monitoring
  • Richer execution signals
  • Near real-time enforcement

What this means in practice

CI/CD has become part of the critical infrastructure for enterprises and open source. The failures we’ve seen around dependency management, complex and implicit trust boundaries, secret handling, and observability have led to an increase in attacks across the software supply chain.

The 2026 GitHub Actions roadmap responds directly. We’re shifting the platform toward secure-by-default, verifiable automation with a focus on disrupting these attacks.

That means:

  • Workflows become deterministic and reviewable
  • Secrets are explicitly scoped and not broadly inherited
  • Execution is governed by policy, not YAML alone
  • Runners become observable and controllable systems

GitHub Actions remains flexible. Our roadmap is designed to move Actions toward a secure-by-default, auditable automation platform without requiring every team to rebuild their CI/CD model from scratch.

Join the discussion in the GitHub community to tell us what you think.

The post What’s coming to our GitHub Actions 2026 security roadmap appeared first on The GitHub Blog.


That one XKCD thing, now interactive

jwz
This is so much fun... Craig S. Kaplan:

In my online undergraduate P5.js course, students are about to begin the module on motion and physics, including a bit of physics simulation using Matter.js. It suddenly occurred to me that I had never seen anybody put together this particular demo before, and I realized it had to be done. Messy source code here.

Previously, previously, previously, previously, previously, previously.


Preventing a health data disaster (“Gesundheitsdatengau verhindern”)

The lawsuit against the centralized health data collection in the Forschungsdatenzentrum (research data center), dormant for years because of a missing IT security concept, will now proceed. The health data of 73 million people with statutory health insurance are stored there centrally and made accessible for research purposes. We demand a right to object.

Migrating to Modular Monolith using Spring Modulith and IntelliJ IDEA


As applications grow in complexity, maintaining a clean architecture becomes increasingly challenging. The traditional package-by-layer approach of organizing code into controllers, services, repositories, and entities packages often leads to tightly coupled code that’s hard to maintain and evolve.

Spring Modulith, combined with IntelliJ IDEA’s excellent tooling support, offers a powerful solution for building well-structured modular monoliths.

In this article, we will use a bookstore sample application as an example to demonstrate Spring Modulith features.

If you are interested in building a modular monolith using Spring and Kotlin, check out Building Modular Monoliths With Kotlin and Spring.

1. The Problem with Monoliths and Package-by-Layer

Many Spring Boot applications are organized by technical layer rather than by business capability. A typical layout looks like this:

bookstore
  |-- config
  |-- entities
  |-- exceptions
  |-- models
  |-- repositories
  |-- services
  |-- web

This package-by-layer style causes several problems.

The Code Structure Doesn’t Express What the Application Does

When you open the project, you see “repositories,” “services,” and “web,” but not “catalog,” “orders,” or “inventory.” The domain is hidden behind technical folders, which makes it harder for developers to find feature-related code and understand boundaries.

Everything Tends to Become Public

In a layer-based layout, types in one package are often used from many others. To allow that, classes are made public, which effectively exposes them to the whole application. There is no clear “public API” per feature, and hence anything can depend on anything.

Tight Coupling and Spaghetti Code

With no explicit boundaries, services and controllers from different features depend on each other’s internals. For example, order logic might call catalog’s ProductService directly or reuse internal DTOs. Over time this turns into a tightly coupled “big ball of mud” where changing one feature risks breaking others.

Fragile Changes

Adding or changing a feature often forces you to touch code in repositories, services, and web at once, with no clear “module” to test or reason about. Refactoring becomes risky because the impact is hard to see.

In short: package-by-layer encourages a single, undivided monolith with weak boundaries and unclear ownership. Spring Modulith addresses this by turning your codebase into an explicit set of modules with clear APIs and enforced boundaries.

2. What Benefits Spring Modulith Brings

Spring Modulith helps you build modular monoliths: one deployable application, but with clear, domain-driven modules and enforced structure.

Explicit Module Boundaries

Modules are direct sub-packages of your application’s base package (e.g. com.example.bookstore.catalog, com.example.bookstore.orders). Spring Modulith treats each as a module and checks that:

  • Other modules do not depend on internal types unless they are explicitly exposed.
  • There are no circular dependencies between modules.
  • Dependencies between modules are declared (e.g. via allowedDependencies), so the architecture stays intentional.

Clear Public APIs

Each module can define a provided interface (public API): a small set of types and beans that other modules are allowed to use. Everything else is internal. This reduces coupling and makes it obvious how modules interact.

Event-Driven Communication

Spring Modulith encourages events for cross-module communication (e.g. OrderCreatedEvent). It provides:

  • @ApplicationModuleListener for module-aware event handling.
  • Event publication registry (e.g. JDBC) so events can be persisted and processed reliably.
  • Externalized events (e.g. AMQP, Kafka) to integrate with message brokers and other applications.

This keeps modules loosely coupled and makes it easier to later extract a module into a separate service.

Testability

You can test one module at a time with @ApplicationModuleTest, controlling which modules and beans are loaded. You mock other modules’ APIs instead of pulling in the whole application, which speeds up tests and keeps them focused.
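A sketch of such a test, using the bookstore types from this article (OrderService, OrderCreateRequest, OrderCreatedEvent, and the CatalogApi facade introduced below). The Scenario API comes from spring-modulith-starter-test; on Spring Boot versions before 3.4, @MockBean plays the role of @MockitoBean:

```java
package com.sivalabs.bookstore.orders;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.modulith.test.ApplicationModuleTest;
import org.springframework.modulith.test.Scenario;
import org.springframework.test.context.bean.override.mockito.MockitoBean;

import com.sivalabs.bookstore.catalog.CatalogApi;

// Bootstraps only the 'orders' module; beans from other modules are not loaded,
// so the catalog module's public API is mocked instead of pulled in.
@ApplicationModuleTest
class OrdersModuleTests {

    @Autowired OrderService orderService;

    @MockitoBean CatalogApi catalogApi;   // public API of the catalog module

    @Test
    void creatingAnOrderPublishesEvent(Scenario scenario) {
        // Stimulate the module and wait for the expected application event
        scenario.stimulate(() -> orderService.create(new OrderCreateRequest(/* ... */)))
                .andWaitForEventOfType(OrderCreatedEvent.class)
                .toArrive();
    }
}
```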

Documentation and Verification

Spring Modulith can:

  • Verify modular structure in tests via ApplicationModules.of(...).verify().
  • Generate C4-style documentation from the same model.


So the documented architecture and the actual code stay in sync.

Gradual Migration Path

You can introduce Spring Modulith into an existing Spring Boot monolith step by step: first refactor to package-by-module, then add the Spring Modulith dependencies and ModularityTest, and fix violations one by one. You don’t need to rewrite the application.

3. How to Add Spring Modulith to a Spring Boot Project

Add the Dependencies

Use the Spring Modulith BOM and add the core and test starters:

<properties>
    <spring-modulith.version>2.0.3</spring-modulith.version>
</properties>

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.modulith</groupId>
            <artifactId>spring-modulith-bom</artifactId>
            <version>${spring-modulith.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <!-- other dependencies -->
    
    <dependency>
        <groupId>org.springframework.modulith</groupId>
        <artifactId>spring-modulith-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>

Enable IntelliJ IDEA Support

Spring Modulith support is bundled in IntelliJ IDEA with the Ultimate Subscription and is enabled by default once the Spring Modulith dependencies are on the classpath.

To confirm the plugin is enabled:

  1. Open Settings (Ctrl+Alt+S / Cmd+,).
  2. Go to Plugins → Installed.
  3. Search for Spring Modulith and ensure it is checked.

You can then use module indicators in the project tree, the Structure tool window, and Modulith-specific inspections and quick-fixes.

Add a Modularity Test

Add a test that verifies your modular structure so that violations are caught in CI:

package com.sivalabs.bookstore;

import org.junit.jupiter.api.Test;
import org.springframework.modulith.core.ApplicationModules;

class ModularityTest {
    static ApplicationModules modules = ApplicationModules.of(BookStoreApplication.class);

    @Test
    void verifiesModularStructure() {
        modules.verify();
    }
}

After refactoring to package-by-module, this test will fail until all boundary and dependency rules are satisfied. Fixing those failures is the main migration work.

4. Converting a Monolith into a Modulith: Refactoring to Package-by-Module

Let’s see how we can convert a monolith application into a modular monolith one step at a time.

Step 1: Reorganize to Package-by-Module

Move from layer-based packages to module-based (package-by-module) packages. Each top-level package becomes a module.

Target structure (example):

bookstore
  |- config
  |- common
  |- catalog
  |- orders
  |- inventory

Practical steps:

  • Create the new package structure (e.g. catalog, orders, inventory, common with subpackages like domain, web, etc).
  • Move classes from entities, repositories, services, web into the appropriate feature package. Prefer package-private (no modifier) for types that should stay internal.
  • Replace a single GlobalExceptionHandler with module-specific exception handlers (e.g. CatalogExceptionHandler, OrdersExceptionHandler) in each module’s web (or equivalent) package.
  • Move and adjust tests to match the new structure.


After this, the code is organized by feature, but Spring Modulith is not yet enforcing boundaries. Adding the dependency and running ModularityTest will surface the next set of issues.

Step 2: Fix Module Boundary Violations

When you run ModularityTest, you’ll see errors such as:

  • Module ‘catalog’ depends on non-exposed type … PagedResult within module ‘common’!
  • Module ‘inventory’ depends on non-exposed type … OrderCreatedEvent within module ‘orders’!
  • Module ‘orders’ depends on non-exposed type … ProductService within module ‘catalog’!

Fixing these errors is where module types, named interfaces, and public APIs come in.

Add the following dependency to use Spring Modulith features such as module types and named interfaces:

<dependency>
    <groupId>org.springframework.modulith</groupId>
    <artifactId>spring-modulith-starter-core</artifactId>
</dependency>

Use OPEN for Shared “Common” Modules

If a module (e.g. common) is meant to be used by many others and doesn’t need a strict API, mark it as OPEN so all its types are considered exposed:

@ApplicationModule(type = ApplicationModule.Type.OPEN)
package com.sivalabs.bookstore.common;

import org.springframework.modulith.ApplicationModule;

Add this in package-info.java in the module’s root package.

Expose Specific Packages with @NamedInterface

When only certain types (e.g. events or DTOs) should be used by other modules, expose that package via a named interface:

@NamedInterface("order-models")
package com.sivalabs.bookstore.orders.domain.models;

import org.springframework.modulith.NamedInterface;

Then other modules can depend on orders::order-models (or the whole module) in their allowedDependencies.

Introduce a Public API (Provided Interface)

When another module needs to call your module’s logic, don’t expose the internal service. Expose a facade or API class in the module’s root package (or a dedicated API package):

package com.sivalabs.bookstore.catalog;

import java.util.Optional;

import org.springframework.stereotype.Service;

@Service
public class CatalogApi {
    private final ProductService productService;

    public CatalogApi(ProductService productService) {
        this.productService = productService;
    }

    public Optional<Product> getByCode(String code) {
        return productService.getByCode(code);
    }
}

Then in the orders module, depend on CatalogApi instead of ProductService. Spring Modulith will treat CatalogApi as the provided interface and ProductService as internal.

Step 3: Declare Explicit Module Dependencies (Optional but Recommended)

By default, a module may depend on any other module that doesn’t create a cycle. To make dependencies explicit, list allowed targets in package-info.java:

@ApplicationModule(allowedDependencies = {"catalog", "common"})
package com.sivalabs.bookstore.orders;

import org.springframework.modulith.ApplicationModule;

If the orders module later uses something from a module not in this list (e.g. inventory), modules.verify() will fail and IntelliJ will show a violation. This keeps the dependency graph intentional and documented.

Step 4: Prefer Event-Driven Communication

For cross-module side effects (e.g. “when an order is created, update inventory”), prefer events instead of direct calls:

  • Publishing module (e.g. orders): publishes OrderCreatedEvent via ApplicationEventPublisher.
  • Consuming module (e.g. inventory): handles it with @ApplicationModuleListener (and optionally event persistence or externalization).


This avoids the consuming module depending on the publisher’s internals and keeps the path open for later extraction to a separate service or messaging.

Add the following dependency:

<dependency>
    <groupId>org.springframework.modulith</groupId>
    <artifactId>spring-modulith-events-api</artifactId>
</dependency>

Publish events using ApplicationEventPublisher and implement event listener using @ApplicationModuleListener as follows:

//Event Publisher
@Service
class OrderService {
    private final ApplicationEventPublisher publisher;

    OrderService(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    void create(OrderCreateRequest req) {
        // ... create and persist the order ...
        var event = new OrderCreatedEvent(...);
        // ApplicationEventPublisher's method is publishEvent
        publisher.publishEvent(event);
    }
}

//Event Listener
@Component
class OrderCreatedEventHandler {
    @ApplicationModuleListener
    void handle(OrderCreatedEvent event) {
        log.info("Received order created event: {}", event);
        // ...
    }
}

Event Publication Registry

The events can be persisted in a persistence store (e.g. a database) so that they can be processed reliably without losing them on application failures.

Add the following dependency:

<dependency>
   <groupId>org.springframework.modulith</groupId>
   <artifactId>spring-modulith-starter-jdbc</artifactId>
</dependency>

Configure the following properties to initialize the events schema and control event-processing behaviour:

spring.modulith.events.jdbc.schema-initialization.enabled=true
# completion-mode options: update | delete | archive
spring.modulith.events.completion-mode=update
spring.modulith.events.republish-outstanding-events-on-restart=true

When the application publishes events, they are first stored in a database table; after successful processing they are deleted, marked as completed, or archived, depending on the configured completion mode.
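The registry is also accessible programmatically. As a small sketch (assuming the spring-modulith-events-api dependency above, and a hypothetical component name), publications left incomplete after a crash can be resubmitted:

```java
import java.time.Duration;

import org.springframework.modulith.events.IncompleteEventPublications;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Periodically resubmits event publications that never completed,
// e.g. because the application crashed while a listener was running.
@Component
class StuckEventResubmitter {

    private final IncompleteEventPublications incompletePublications;

    StuckEventResubmitter(IncompleteEventPublications incompletePublications) {
        this.incompletePublications = incompletePublications;
    }

    @Scheduled(fixedDelay = 60_000)
    void resubmitStuckEvents() {
        // Only retry publications older than a minute to avoid racing in-flight listeners
        incompletePublications.resubmitIncompletePublicationsOlderThan(Duration.ofMinutes(1));
    }
}
```

This complements republish-outstanding-events-on-restart, which only runs at startup.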

5. How does IntelliJ IDEA Help with Inspections and Quick Fixes?

Spring Modulith violations don’t cause compilation or runtime errors by themselves; they fail Modulith-specific tests (e.g. ModularityTest). IntelliJ IDEA’s Spring Modulith support turns these into editor-time feedback with inspections and quick-fixes so you can fix structure issues as you code.

Inspections and Severity

IntelliJ runs a set of inspections that check your code against Spring Modulith’s rules. By default, they are configured as errors (red underlines), even though the project still compiles. This helps you treat modularity as a first-class constraint.

You can adjust severity in Settings → Editor → Inspections under the Spring Modulith group if you want to start with warnings.

Violations Shown in the Editor

As soon as you introduce a dependency that breaks module boundaries, IntelliJ highlights it. For example:

  • A class in catalog module using PagedResult from common without common being OPEN or exposing that type.
  • A class in orders using catalog’s internal ProductService instead of the public CatalogApi.
  • A class in inventory using orders’ internal OrderCreatedEvent type before it is exposed via a named interface.


You don’t have to run the full test suite to see these issues; they appear as you write or refactor code.

Quick-Fixes (Alt+Enter)

When the cursor is on a Modulith violation, Alt+Enter (or the lightbulb) opens quick-fixes that align the code with the modular structure. Typical options:

  1. Annotate the type with @NamedInterface: expose the class (or its package) as a named interface so other modules can use it.
  2. Open the module that contains the type: IntelliJ creates or updates package-info.java in that module and marks it as @ApplicationModule(type = ApplicationModule.Type.OPEN), exposing all its types.
  3. Move the component to the base package: move the bean to the application’s root package so it’s outside any module (use sparingly).

Choosing the right fix depends on your design: use OPEN for shared utility modules, NamedInterface for a few shared types (e.g. events), and public API classes for behavioral dependencies.

Bean Injection and Module Boundaries

IntelliJ’s Spring bean autocompletion is aware of module boundaries. If you try to inject a bean that belongs to another module and is not part of that module’s public API, the completion list can show a warning icon next to that bean. This helps you avoid introducing boundary violations when wiring dependencies.

Undeclared Module Dependencies

When a module has explicit allowedDependencies (e.g. orders allows only catalog and common) but you use a type from another module (e.g. inventory), IntelliJ reports a violation: the dependency is not declared.

Quick-fix: Add the missing module (or the required named interface) to allowedDependencies in the module’s package-info.java. IntelliJ can suggest adding the dependency.

Working with allowedDependencies

In package-info.java, when you edit allowedDependencies = {"..."}, IntelliJ provides:

  • Completion (Ctrl+Space) with:
    • module — dependency on the whole module.
    • module::interface — dependency on a specific named interface.
    • module::* — dependency on all named interfaces of that module.
  • Validation: if a listed module or interface doesn’t exist, IntelliJ highlights the reference so you can fix it before running tests or starting the app.
  • Navigation: Ctrl+B on a module name in allowedDependencies jumps to that module in the Project view.
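The three completion forms above map directly onto entries in the allowedDependencies array. A minimal sketch of such a declaration, with hypothetical module names (orders, catalog, inventory, common):

```java
// Hypothetical file: orders/package-info.java
// Restricts what the orders module may depend on: the whole catalog
// and common modules, plus only the "events" named interface of inventory.
@org.springframework.modulith.ApplicationModule(
    allowedDependencies = { "catalog", "common", "inventory::events" }
)
package com.example.shop.orders;
```

Any reference from orders to a module or interface not listed here is then flagged both by IntelliJ and by the Spring Modulith verification.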

Circular Dependencies

Spring Modulith’s verification detects cycles between modules, e.g.:

Cycle detected: Slice catalog ->
                Slice orders ->
                Slice catalog

To fix this, you need to break the cycle in code: remove one direction of the dependency (e.g. catalog → orders) by using events, moving shared types to common, or redefining which module owns which responsibility.
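The event-based fix works because the publishing module no longer references the subscribing module's types at all. Here is a framework-free sketch of that idea (in a real application Spring's ApplicationEventPublisher and @ApplicationModuleListener play these roles; the EventBus class and ProductDiscontinued event are purely illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class EventDecouplingSketch {

    // Event type owned by the publishing module (catalog).
    record ProductDiscontinued(String productId) {}

    // Tiny in-process event bus: catalog depends only on this,
    // never on orders, so the catalog -> orders edge disappears.
    static class EventBus {
        private final List<Consumer<Object>> listeners = new ArrayList<>();
        void subscribe(Consumer<Object> listener) { listeners.add(listener); }
        void publish(Object event) { listeners.forEach(l -> l.accept(event)); }
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();

        // orders reacts to catalog's event without catalog knowing about orders.
        bus.subscribe(event -> {
            if (event instanceof ProductDiscontinued e) {
                System.out.println("orders: cancel open orders for " + e.productId());
            }
        });

        // catalog publishes; no direct call into orders remains.
        bus.publish(new ProductDiscontinued("P-42"));
    }
}
```

Because the only shared vocabulary is the event type, the previous cycle collapses into a one-way dependency on the event (or on a common module that hosts it).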

Visualizing Modules in IntelliJ IDEA

Project tool window (Alt+1): Top-level modules are marked with a green lock; internal (non-exposed) components can be marked with a red lock. This gives a quick visual of boundaries.

Structure tool window (Alt+7): With the main @SpringBootApplication class selected, open Structure and use the Modules node to see the list of application modules, their IDs, allowed dependencies, and named interfaces.

Using both views helps you understand and fix dependency and boundary issues quickly.

6. Verifying and Evolving Your Modular Structure

Keep Running ModularityTest

After each refactoring step, run ModularityTest. It should pass once the following conditions are met:

  • All cross-module references go to exposed types (OPEN modules, named interfaces, or public API classes).
  • There are no circular dependencies.
  • Any explicit allowedDependencies declarations include all modules (and named interfaces) that are actually used.

Generate Documentation

You can extend the test to generate C4-style documentation so the architecture is visible and up to date:

@Test
void verifiesModularStructure() {
    modules.verify();
    new Documenter(modules).writeDocumentation();
}

Output is written under target/spring-modulith-docs.

Test Modules in Isolation

Use @ApplicationModuleTest to load only one module (and optionally its dependencies) and mock beans from other modules:

@ApplicationModuleTest(mode = BootstrapMode.STANDALONE)
@Import(TestcontainersConfiguration.class)
@AutoConfigureMockMvc
class OrderRestControllerTests {
    @MockitoBean
    CatalogApi catalogApi;
    // ...
}

Bootstrap modes control how much of the application is loaded, making tests faster and more focused.

  • STANDALONE (default): Load only the module being tested
  • DIRECT_DEPENDENCIES: Load the module and its direct dependencies
  • ALL_DEPENDENCIES: Load all transitive dependencies

7. Conclusion

Building a modular monolith with Spring Modulith improves long-term maintainability and prepares the codebase for possible extraction of modules into separate services. The main ideas:

  • Avoid package-by-layer: Organize by feature/module (package-by-feature) so that the structure reflects the domain.
  • Define clear boundaries: Use OPEN for shared utility modules, named interfaces for shared types (e.g. events), and public API classes for cross-module behavior.
  • Declare dependencies: Use allowedDependencies so the intended dependency graph is explicit and violations are caught early.
  • Prefer events for cross-module side effects to keep coupling low.
  • Verify continuously with ModularityTest and optional documentation generation.

IntelliJ IDEA’s Spring Modulith support turns modularity into a day-to-day concern: module indicators, Modulith inspections, quick-fixes, and dependency completion help you respect boundaries and fix common issues without leaving the editor. For more detail, see IntelliJ IDEA’s Spring Modulith documentation.

Start by refactoring one area to package-by-feature, add Spring Modulith and a modularity test, then fix violations step by step using IntelliJ IDEA’s feedback to guide the way.

Read the whole story
jhunorss
17 days ago

CCC calls for the data retention law (VDS) to finally be buried

1 Share
The German federal government is planning a massive, indiscriminate data trove that practically invites user profiling: the blanket retention of IP addresses along with accompanying data. Such a far-reaching surveillance measure is and remains disproportionate and dangerous. And the ideas coming out of Brussels are even worse.
Read the whole story
jhunorss
41 days ago

Hungry Horrors is a unique deck-builder about feeding monsters out now

1 Share
Gather ingredients, make dishes and feed all the strange creatures in a deck-builder that manages to set itself apart from the others in Hungry Horrors.


Read the full article on GamingOnLinux.

Read the whole story
jhunorss
73 days ago