Microsoft for Java Developers
https://devblogs.microsoft.com/java/
News, updates, and insights for Java development with Microsoft tools, Azure services, and OpenJDK.

Introducing Azure Performance Diagnostics Tool for Java: Automated Java Performance Analysis in Kubernetes via Azure SRE Agent
https://devblogs.microsoft.com/java/introducing-azure-performance-diagnostics-tool-for-java-automated-java-performance-analysis-in-kubernetes-via-azure-sre-agent/
Tue, 20 Jan 2026 22:03:46 +0000

We’re excited to announce that the Azure Performance Diagnostics Tool for Java is now available for preview as part of the Azure SRE Agent platform, bringing intelligent, automated Java performance diagnoses. It currently supports Java workloads deployed to Azure Kubernetes Service (AKS) clusters.

What is Azure Performance Diagnostics Tool for Java via Azure SRE Agent?

The Azure Performance Diagnostics Tool for Java is a powerful new capability within Azure SRE Agent, an AI-powered service that automatically responds to site reliability issues. This feature enables development and operations teams to monitor, analyze, and troubleshoot Java Virtual Machine (JVM) performance issues with unprecedented ease.

Azure Performance Diagnostics Tool for Java can identify and diagnose common JVM problems, including:

  • Garbage collection inefficiencies and pauses
  • CPU resource utilization issues (both under- and over-utilization)
  • Excessive I/O operations impacting application performance
  • Thread contention

An example of a report highlighting a garbage collection issue:

A diagnosis from SRE Agent showing a garbage collection issue

How Does It Work?

When a customer tasks Azure SRE Agent with solving a performance issue and the agent suspects that a JVM-related issue is the cause, it immediately initiates a comprehensive diagnostic report. This allows your team to understand the root cause of the performance issue.

Teams can also manually request diagnostics through the Azure SRE Agent chat interface. Simply ask for a performance analysis of any Java service; you can even build your own Sub-Agent and integrate the AKS performance functions as part of that Sub-Agent. You can then directly ask your agent to perform Java diagnoses:

A user requesting an analysis from the sub-agent

Take a look at it in action in the video below.

The Java Performance Diagnostic Process

When Azure SRE Agent suspects a performance issue (or is manually invoked to perform a Java performance investigation in your container), it:

  1. Spins up an ephemeral container within your pod
  2. Attaches to the target Java container without disrupting service
  3. Collects detailed performance telemetry using Java Flight Recorder (JFR)
  4. Analyzes the data and generates actionable insights
  5. Closes down the ephemeral container once the analysis is complete

This approach ensures zero downtime while providing deep visibility into JVM behavior.
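Step 3 above relies on Java Flight Recorder, which ships with OpenJDK. As a rough illustration of the kind of telemetry involved (this is a generic JFR sketch, not the agent's actual implementation; the class name and workload are our own), an in-process recording can be captured and dumped like this:

```java
import jdk.jfr.Configuration;
import jdk.jfr.Recording;

import java.nio.file.Files;
import java.nio.file.Path;

public class JfrSketch {
    // Capture a short JFR recording of some allocation churn and dump it to a temp file.
    public static Path record() throws Exception {
        Path out = Files.createTempFile("diagnostics", ".jfr");
        try (Recording recording = new Recording(Configuration.getConfiguration("default"))) {
            recording.start();
            byte[][] churn = new byte[64][];
            for (int i = 0; i < 10_000; i++) {
                churn[i % 64] = new byte[4096]; // generate some allocation pressure to record
            }
            recording.stop();
            recording.dump(out); // the .jfr file is what analysis tooling consumes
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("JFR recording written to " + record());
    }
}
```

Tools such as JDK Mission Control (or the `jfr print` command) can then inspect the recording for GC pauses, allocation hot spots, and thread contention.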

NOTE: For auditability, the Kubernetes API retains visibility of terminated ephemeral containers. As a result, when inspecting a pod (for instance with kubectl describe pods), the ephemeral containers remain visible. To avoid generating excessive noise in your environment, we have limited each pod to five diagnostic containers.

Getting Started

Setting up JVM Diagnostics is straightforward. Here are the requirements:

  • Java applications deployed in AKS – Your Java services must be running within an Azure Kubernetes Service cluster
  • Azure SRE Agent configuration – Ensure your Azure SRE Agent service is created and has appropriate access to your AKS cluster
  • Pod annotation – Add the languageStack=java annotation to your pods to enable Azure Performance Diagnostics Tool for Java

Note: At the time of writing, the Java profiling feature is in early access. To enable it, open the SRE Agent UI, browse to Settings > Basics, and select Early access to features. The feature will progress to the mainline experience in the coming month.

Adding the required annotation is as simple as updating your pod specification:


apiVersion: v1
kind: Pod
metadata:
  name: your-java-app
  annotations:
    languageStack: java
spec:
  containers:
    - name: app
      image: your-java-app:latest

Alternatively, you can apply an annotation on the command line using kubectl:


kubectl annotate pod your-java-app languageStack=java

Annotating your pods indicates that they are running Java applications and that you consent to having them diagnosed with Azure Performance Diagnostics Tool for Java. As with any monitoring, although the diagnostic process is designed to be as non-intrusive as possible, running the Java Flight Recorder profiler and the diagnostic container adds a small amount of overhead. We recommend trying this feature in non-production environments first to confirm that the diagnostic process does not interfere with your application.

Build your own Sub-Agent

You can also create a custom Sub-Agent within Azure SRE Agent, and delegate to the AKS diagnostic analysis tools, in order to create an agent specifically for how you wish to respond to AKS diagnostic needs. You can also delegate to this agent when responding to an alert. The following is an example of how to configure a Sub-Agent which includes the Java Performance Diagnostic capability via the SRE Agent GUI:

An example of the Sub-Agent builder user interface

Below is an example YAML configuration that can be pasted into the YAML input dialog of the SRE Sub-Agent Builder UI:


api_version: azuresre.ai/v1
kind: AgentConfiguration
spec:
  name: AKSDiagnosticAgent
  system_prompt: >-
    Take the details of a diagnostic analysis to be performed on an AKS container and hand off to the
    appropriate diagnostic tool. If you need to find the resource to diagnose, use the
    SearchResourceByName and ListResourcesByType tools
  tools:
    - GetCPUAnalysis
    - GetMemoryAnalysis
    - SearchResourceByName
    - ListResourcesByType
    - AnalyzeJavaAppInAKSContainer
  handoff_description: >-
    When the user has requested a diagnostic analysis, or it has been determined an AKS diagnostic
    analysis is required
agent_type: Autonomous

You can then interact with this Sub-Agent via the Azure SRE Agent chat interface to request JVM performance analyses as needed, for instance:


/agent AKSDiagnosticAgent I am having a performance issue in:
pod: illuminate-test-petclinic
container: spring-petclinic-rest
namespace: illuminate-test
aks instance: jeg-aks

This will trigger the Azure Performance Diagnostics Tool for Java process and return a detailed report of findings and recommendations.

Java at Microsoft: 2025 Year in Review
https://devblogs.microsoft.com/java/java-at-microsoft-2025-year-in-review/
Wed, 31 Dec 2025 18:28:14 +0000

A breakthrough year for modernization, AI‑assisted development, Agentic AI development, and platform innovation

2025 was one of the most significant years yet for Java at Microsoft. From the arrival of OpenJDK 25 as the newest Long‑Term Support (LTS) release, to AI‑powered modernization workflows with GitHub Copilot app modernization, to Agentic AI development in Microsoft AI Foundry with Java frameworks like LangChain4j, Spring AI, Quarkus AI, and Embabel, to major Visual Studio Code and Azure platform investments, Microsoft deepened its commitment across the entire Java ecosystem.

Java 25: A New LTS Era Begins

2025 delivered a historic milestone: OpenJDK 25 officially shipped, and with it Microsoft Build of OpenJDK 25 arrived as the next Long‑Term Support (LTS) release, setting the foundation for the next multi‑year cycle of enterprise Java workloads.

For developers who have not been following advancements in the Java language, it may not look like Java at first glance, but the code below is a Rock Paper Scissors implementation in Java 25 that can be put inside a Game.java file and executed with “$ java Game.java” on a JDK 25 installation.

code sample image
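The screenshot itself is not reproduced here. As a stand-in, here is a minimal Rock Paper Scissors sketch in conventional syntax that also runs as a single source file with `java Game.java` (the blog's screenshot presumably shows newer Java 25 syntax such as compact source files and instance main methods; the class design and output wording below are our own):

```java
import java.util.List;
import java.util.Random;

public class Game {
    static final List<String> MOVES = List.of("rock", "paper", "scissors");

    // Decide a round. Each move beats the previous one in the cycle rock -> paper -> scissors.
    public static String winner(String p1, String p2) {
        int a = MOVES.indexOf(p1), b = MOVES.indexOf(p2);
        if (a < 0 || b < 0) throw new IllegalArgumentException("moves must be rock, paper, or scissors");
        if (a == b) return "Draw";
        return (a - b + 3) % 3 == 1 ? "Player 1 wins" : "Player 2 wins";
    }

    public static void main(String[] args) {
        String player = args.length > 0 ? args[0] : "rock";
        String computer = MOVES.get(new Random().nextInt(3));
        System.out.println("You: " + player + " vs computer: " + computer + " -> " + winner(player, computer));
    }
}
```

Launching a single source file directly (without a separate javac step) has worked since JDK 11; the Java 25 version in the screenshot simply needs far less ceremony.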

To run this code, Microsoft released binaries, container images, and updated Azure Platform services, providing:

 

  • Cross‑platform availability on Linux, macOS, and Windows for both x64 and AArch64/Apple Silicon
  • Azure Platform services App Service and Functions with managed JDK 25
  • Container images via the Microsoft Container Registry
  • Production‑ready quality gates, including JCK compatibility, Eclipse AQAvit verification, and Microsoft’s internal performance hardening
  • AI-assisted upgrade and modernization path from JDK 8, 11, 17, and 21 through GitHub Copilot app modernization

With Java 25, enterprises gain language and runtime improvements, performance upgrades, memory optimizations, and new developer‑facing capabilities, giving organizations strong justification to plan migrations earlier in the LTS cycle rather than waiting several years. To learn about some of the new features in JDK 25, check our announcement.

GitHub Copilot across Java IDEs: Eclipse & IntelliJ

GitHub Copilot’s continued parity and agentic capabilities across Visual Studio Code, IntelliJ IDEA, and Eclipse IDE ensure Java teams can adopt AI assistance without changing IDEs, which is crucial for regulated environments and large estates. The advantage of GitHub Copilot is that it brings developers all the best coding models through a single subscription.

jetbrains github copilot model selection

In IntelliJ IDEA, the official GitHub Copilot plugin delivers chat, agentic capabilities, MCP support, inline completions, and more. Fast nightly updates keep pace with IDE releases across the JetBrains family. GitHub Copilot is also available on the Eclipse Marketplace with code completions, Copilot Chat, and agentic workflows powered by Agent Mode and MCP integrations. Agent preview lets developers delegate tasks from Eclipse and track jobs that open draft PRs and queue reviews.

 

GitHub Copilot CLI: beyond IDEs

A screenshot of a computer terminal screen showing Copilot CLI

For Java developers who prefer working directly from the terminal, the GitHub Copilot CLI brings the same AI‑assisted power found in IDEs straight to your shell. With Copilot CLI, you can run development tasks and upgrade, migration, and deployment workflows end‑to‑end without switching tools, ideal for developers who live in Bash, Zsh, or PowerShell. Copilot CLI supports interactive and batch scenarios, making it possible to develop Java applications, upgrade Java versions, modernize Spring or Jakarta EE apps, or deploy to Azure entirely via command‑line tasks.

GitHub Copilot App Modernization: A Breakthrough for Java and Frameworks Upgrades

2025 was a breakout year for Java modernization via GitHub Copilot, now providing end‑to‑end support for assessments, planning, code transformation, testing, and deployment. For a video introduction, watch Modernize Java apps in days with GitHub Copilot on YouTube.

Modernization formulas, rules, and recipes encode expert migration guidance for core Java APIs, the Spring Framework, the Jakarta EE platform, and hundreds of related scenarios including logging, identity, secret management, messaging, database, and overall cloud readiness.

Key modernization capabilities:

  • Deep codebase analysis (framework versions, deprecated APIs, dependency issues)
  • AI‑generated modernization plans
  • Automated and deterministic code transformations and refactoring
  • Security CVE detection, build remediation, and test generation
  • Azure‑optimized deployment paths

GitHub Copilot app modernization is available on Visual Studio Code, IntelliJ, and in the Copilot CLI.

Azure Command Launcher for Java: Intelligent JVM Defaults and More

2025 was also the year when JVM tuning became a no-brainer for cloud‑native Java teams. Now in Public Preview, the Azure Command Launcher for Java is a drop‑in replacement for the java JVM launcher that:

  • Eliminates the need for manual JVM tuning
  • Applies smarter, optimized JVM flags and configurations automatically
  • Reduces memory waste and inconsistent tuning across your fleet
  • Improves startup, GC behavior, and resource efficiency
  • Properly detects environment memory and CPU limits
  • Supports OpenJDK 8 and later
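The "properly detects environment memory and CPU limits" point builds on signals the JVM itself exposes. A small illustrative probe (our own example, not part of jaz) shows the CPU and memory budget the JVM has detected, which inside a container reflects cgroup limits on modern JDKs:

```java
public class RuntimeLimits {
    // CPUs the JVM believes it may use (container-aware on modern JDKs).
    public static int cpus() {
        return Runtime.getRuntime().availableProcessors();
    }

    // Maximum heap the JVM will grow to, derived from available memory unless -Xmx is set.
    public static long maxHeapBytes() {
        return Runtime.getRuntime().maxMemory();
    }

    public static void main(String[] args) {
        System.out.println("CPUs visible to the JVM: " + cpus());
        System.out.println("Max heap: " + maxHeapBytes() / (1024 * 1024) + " MiB");
    }
}
```

A launcher like jaz can take these detected limits further, choosing heap and GC settings appropriate for the actual container budget rather than the host machine.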

Large organizations like Bradesco Bank validated operational gains, demonstrating measurable efficiency enhancements, performance consistency, and operational peace of mind across hundreds of thousands of JVMs. The tool provides an immediate path for teams modernizing to the new LTS without having to learn completely new tuning heuristics. Let us do the JVM tuning for you.

Once installed, using Azure Command Launcher for Java is as simple as replacing the “java” command with the “jaz” command:

dockerfile image
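The Dockerfile in the screenshot is not reproduced here; a minimal sketch of the swap might look like the following (the base image tag and jar path are hypothetical, and jaz must first be installed in the image per its documentation):

```dockerfile
FROM mcr.microsoft.com/openjdk/jdk:21-ubuntu
COPY target/app.jar /app/app.jar
# Before: ENTRYPOINT ["java", "-jar", "/app/app.jar"]
ENTRYPOINT ["jaz", "-jar", "/app/app.jar"]
```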

For more information on the performance benefits, check the article Beyond Ergonomics: How the Azure Command Launcher for Java Improves GC Stability and Throughput on Azure VMs.

 

The roadmap for the Azure Command Launcher for Java has exciting ideas, and we are eager to connect with developers and customers willing to experiment. Let us help you get the most out of advanced JVM features like App CDS, Project Leyden, GC log analysis, and more!

Azure SRE Agent: Intelligent Reliability for Java Apps

Announced this year and currently in Preview, the Azure SRE Agent is soon adding deeper operational intelligence for Java workloads. Java teams running at scale gain a powerful assistant that reduces MTTR and elevates reliability practices. To learn more, you can watch this presentation at InfoQ Dev Summit Boston 2025: Fix SLO Breaches Before They Repeat to see a demo.

Modernizing Spring Boot + Azure Cosmos DB with GitHub Copilot

A standout moment was the Reactor session on modernizing Spring Boot apps from relational databases to Azure Cosmos DB, using GitHub Copilot to accelerate every step. This presentation also demonstrates features in Visual Studio Code with GitHub Copilot for customizing agentic AI instructions and prompts.

Developers learned how Copilot can:

  • Identify relational data access code
  • Generate Cosmos DB‑compatible repositories, entity models, partitioning annotations
  • Apply schema reasoning using AI
  • Create migration tasks and unit tests automatically
  • Validate migration paths from local dev to cloud deployment

This brings database modernization into the same workflow as app upgrades—critical for Java cloud migrations. Watch the replay on the Microsoft Reactor page. For more on context engineering and custom instructions, you can also watch this other presentation Context Engineering for Java Ecosystem.

AI and Java for Beginners

Experts are pushing the boundaries of AI development, whether with AI-assisted tools, agentic AI coding, or building custom agents. But we all must start somewhere, and for beginners it is important to have a solid grasp of the fundamentals and core tools. This is why the Microsoft Developer Relations team for Java built and published the Java and AI for beginners series.

We walk you through foundational ideas first and then move into hands-on examples:

  • Getting started fast – Spin up your first AI-powered app using GitHub Codespaces.
  • Core generative AI techniques – Learn the basics behind completions and chat flows. See how function calling connects models to real tools and services. Get an introduction to Retrieval-Augmented Generation (RAG) for document-aware applications.
  • Simple, focused applications – Explore small projects that illustrate different capabilities, such as combining text and image generation, running models locally with the Azure AI Foundry Local experience, and wiring tools with the Model Context Protocol (MCP).
  • Responsible AI – Apply safety features from GitHub Models and Azure services. We cover content filtering, bias awareness, and practical checks you can add before deployment.
  • MCP in Java – Understand the Model Context Protocol and how it fits Java workflows. Learn what it means to implement an MCP server, connect a Java client, and use tools through a consistent protocol.
  • Context engineering for Java – Improve results with clean prompts, structured context, and simple evaluation steps. We discuss when to persist context and when to compute it on the fly.
  • Modernization with AI assistance – See how the GitHub Copilot App Modernization experience helps upgrade and migrate Java applications. Then follow a guided flow to deploy to Azure with AI-assisted configuration.
  • LangChain4j essentials – Start a basic project that targets OpenAI-compatible endpoints, then build a small agent with tools and memory to understand the moving parts.
  • Running GenAI in containers – Review when to use on-demand GPUs for inference and training. Learn how dynamic sessions in Azure Container Apps support code interpreters and short-lived, cost-aware execution.

Each video is short and focused. Watch them in order if you are new to the space or skip into the topics that match your immediate needs.

Developer Voice: Microsoft JDConf, JavaOne 2025

The year was full of opportunities to share ideas. Everyone had something to say, especially about AI. Of course, Microsoft also had a few ideas to share, and that is why we were present at dozens of conferences around the world in 2025: DevNexus, Devoxx, JavaLand, JavaOne, JavaZone, SpringOne, and others.

Meeting other developers at conferences, whether virtual or in person, remains one of the best ways to share ideas and learn from others. This year, we continued our own event, Microsoft JDConf, so Microsoft experts and community speakers could participate and share their ideas. In addition, we took part in key events like Oracle’s flagship Java developer conference, JavaOne. In 2026, we will be there again.

Microsoft JDConf

The 2025 edition focused on our opportunity to Code the Future with AI. There were 22 technical sessions across Spring, Quarkus, agentic AI development, core Java principles, modern tooling, and code modernization. With strong global community engagement and the presence of luminaries and Java Champions like Josh Long and Lize Raes, JDConf 2025 was a milestone for our community engagement, giving speakers the space and amplification to share their ideas. Watch the recordings.

For 2026, we are excited for what’s to come! The Microsoft JDConf call for papers is up and running, and the conference will be back on April 8-9 with the usual live streams across three time zones so everyone can engage and learn.

JavaOne

This year we were at Oracle’s JavaOne conference where we shared what developers can get with Microsoft tools and services for modern Java development with AI. We also had exciting breakout sessions on AI, modernization, and cloud-native Java, where thousands of developers engaged in person and online. Needless to say, we will be back at JavaOne 2026, so stay tuned!

In the meantime, watch again the Microsoft keynote at JavaOne 2025 and our two breakout sessions, Next-Level AI Mastery for Java Developers, and From RAG to Enterprise AI Agents: Building Intelligent Java Apps.

Open Source Contributions

Microsoft teams continued collaborating with key projects in the Java ecosystem. A few highlights go to OpenJDK contributions by Microsoft’s Java Engineering Group, and contributions by the Microsoft Developer Relations team to frameworks like Spring AI, Quarkus, and LangChain4j.

LangChain4j has become a de facto standard in building intelligent Java applications, and integrations with Microsoft AI services are key for enabling customers to leverage the latest AI models and capabilities into their systems. To learn more about LangChain4j and our contributions to it, check out the blog Microsoft and LangChain4j: A Partnership for Secure, Enterprise-Grade Java AI Applications.

The Road Ahead — 2026 and Beyond

With OpenJDK 25 now the active LTS, the ecosystem is entering a new multi‑year cycle. Microsoft will continue investing in:

  • Deeper GitHub Copilot agent workflows
  • Adaptive JVM tuning intelligence
  • Expanded modernization formulas
  • Broader Azure service integrations
  • Next‑gen Spring, Quarkus, LangChain4j, & Java AI tooling

The mission remains unchanged: empower every Java developer to build intelligent applications, modernize legacy codebases, and operate applications with world‑class tooling, AI assistance, and cloud‑native excellence.

Beyond Ergonomics: How the Azure Command Launcher for Java Improves GC Stability and Throughput on Azure VMs
https://devblogs.microsoft.com/java/beyond-ergonomics-how-the-azure-command-launcher-for-java-improves-gc-stability-and-throughput-on-azure-vms/
Tue, 16 Dec 2025 19:17:27 +0000

In our previous blog we introduced Azure Command Launcher for Java (jaz), a safe, resource-aware way to launch the JVM without hand-tuning dozens of flags. This follow-up shares performance results, focusing on how jaz affects G1 behavior, heap dynamics, and pause characteristics under a long-running, allocation-intensive workload: SPECjbb 2015 (JBB).

Test bed: 4-vCPU, 16-GB Azure Linux/Arm64 VM running the Microsoft Build of OpenJDK.

JDKs exercised: Validated on JDK 17 (17.0.17), 21 (21.0.9), and 25 (25.0.1); all figures in this post are from the JDK 17 runs. Trends on 21/25 matched the 17 results.

How we ran it:

# baseline
java -jar specjbb.jar

# with jaz
jaz -jar specjbb.jar

Controls: Same JBB workload config, OS settings, and JVM flags for both runs—the launcher was the only change.

SPECjbb 2015 (JBB) is a SPEC benchmark; we report relative trends only and do not publish raw scores.

Why JBB Is the Right Stress Test

JBB exercises high allocation rate, object churn, humongous-allocation behavior, generational sizing, region pressure, concurrent-mark sustainability, GC scheduling, and pause-time predictability. Because it is both bandwidth-intensive and latency-sensitive, JBB is ideal for validating heap ergonomics and GC policies in the cloud.

As a capacity-planning tool it helps explore sustained throughput limits, GC headroom before SLA violations, warm-up behavior under load, and how a given VM size (4 cores, 16 GB) holds up under continuous allocation pressure.
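To make those workload characteristics concrete, here is a toy allocation-churn loop (our own illustration, unrelated to the actual SPECjbb code) that mimics the pattern JBB stresses: a stream of short-lived small objects punctuated by occasional multi-megabyte allocations, with only a small window of objects kept live:

```java
import java.util.ArrayDeque;

public class ChurnDemo {
    // Toy allocation churn: short-lived small buffers plus an occasional large allocation.
    // Returns total bytes allocated so the pressure is measurable.
    public static long churn(int iterations) {
        ArrayDeque<byte[]> recent = new ArrayDeque<>();
        long allocated = 0;
        for (int i = 0; i < iterations; i++) {
            byte[] buf = (i % 1000 == 0)
                    ? new byte[5 * 1024 * 1024]  // occasional multi-megabyte allocation
                    : new byte[8 * 1024];        // steady stream of small objects
            allocated += buf.length;
            recent.addLast(buf);
            if (recent.size() > 32) recent.removeFirst(); // most objects die young
        }
        return allocated;
    }

    public static void main(String[] args) {
        System.out.println("Allocated ~" + churn(10_000) / (1024 * 1024) + " MiB total");
    }
}
```

Running a loop like this under -Xlog:gc shows how frequently Young collections fire and whether the large allocations take G1's humongous path.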

For detailed methodology and the JBB phase guide, see Appendix A and Appendix B. A GC refresher and figure legend are in Appendix C and Appendix D.

Performance Summary: Baseline vs jaz

| Metric | Baseline | With jaz | Improvement |
|---|---|---|---|
| Peak Throughput | Baseline | +22% | Higher max-jOPS |
| SLA Performance | Baseline | +15% | Higher critical-jOPS |
| Total GC Events | 3,777 | 2,526 | −33% (1,251 fewer) |
| Young GC Count | 1,100 | 1,596 | +45% (handles higher load) |
| Mixed GC Count | 778 | 265 | −66% (513 fewer) |
| Young GC Overhead | 1.41% | 2.60% | Higher but efficient |
| Mixed GC Overhead | 0.96% | 0.39% | −59% reduction |
| Old Gen Pattern | Flat plateau (600–900) | Deep sawtooth (200–1000) | Dynamic sizing active |

Key Insight: jaz achieves 22% higher throughput by keeping Young GC efficient—objects die in Eden/Survivor instead of promoting prematurely to Old gen, dramatically reducing expensive Mixed GC work.

Baseline Behavior: Where the Wild Things Are

Microsoft Build of OpenJDK with default G1 GC ergonomics. Long JBB run on a 4-core VM.

Region Dynamics: Tight Band with Sustained Old Gen Plateau

  • Collection cadence is crowded: Young promotes excessively to Old gen; Mixed GC runs continuously but can’t get Old occupancy down.
  • Eden (before GC) sits in high, tightly bound band; each Young GC arrives with Eden already large.
  • Old (after GC) settles into a crowded, high plateau (~600–900 regions) with a slight upward drift, indicating continued promotion pressure.
  • Humongous-triggered Concurrent Start (markers labeled Humongous) denote very large allocations that force a new concurrent marking cycle. They appear clustered, intermittent and align with heavier Old gen activity.
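As a back-of-the-envelope check on why these humongous markers appear on a 16 GB heap: G1 sizes regions so that the heap holds on the order of 2,048 of them, clamped to a power of two between 1 MiB and 32 MiB, and any allocation of half a region or more takes the humongous path. The helper below models that arithmetic (a deliberate simplification of G1's actual ergonomics):

```java
public class G1Regions {
    // Simplified model of G1's region-size ergonomics: aim for ~2048 regions,
    // clamped to [1 MiB, 32 MiB] and rounded down to a power of two.
    public static long regionSize(long heapBytes) {
        long target = heapBytes / 2048;
        long clamped = Math.max(1L << 20, Math.min(32L << 20, target));
        return Long.highestOneBit(clamped);
    }

    // Allocations at or above half a region are treated as humongous.
    public static long humongousThreshold(long heapBytes) {
        return regionSize(heapBytes) / 2;
    }

    public static void main(String[] args) {
        long heap = 16L << 30; // the 16 GB VM used in this post
        System.out.println("Region size: " + (regionSize(heap) >> 20)
                + " MiB, humongous threshold: " + (humongousThreshold(heap) >> 20) + " MiB");
    }
}
```

Under this model the 16 GB test VM works out to 8 MiB regions and a roughly 4 MiB humongous threshold, which is why multi-megabyte allocations can force the Concurrent Start cycles seen in the figures.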

GC Pause Envelope: Heavy Mixed GC Response to JBB Phase Shifts

  • Young pauses (teal): early noisy cluster (spikes ~300 ms), then a long, flatter band around ~80–130 ms; moderate variance as Eden fill → evacuate → refill repeats.
  • Overlay (Concurrent-Start (magenta) + Prepare-Mixed (gray) + Mixed (blue)): the combined envelope includes the magenta points (Concurrent Start) followed by effectively continuous periods of Prepare/Mixed (~50–150 ms with occasional higher outliers) and dense bands. Old gen’s elevated plateau drives frequent concurrent cycles and Mixed activity.

With jaz: Beyond the Wild Rumpus

Same VM/JDK/workload; only the launcher changed (jaz resource-aware defaults).

Region Dynamics: Dynamic Sizing with Dramatic Old Gen Sawtooth

  • Handling higher throughput: jaz achieves 22% higher max-jOPS, driving higher allocation rate. Both Eden and Old show dramatic oscillation (200-1000 regions) – wider variation reflects increased load and dynamic heap sizing, not the tight bands of baseline.
  • Eden (before-GC): Wider oscillation reflects the higher allocation rate from increased throughput; dynamic sizing adapts to load.
  • Old (after-GC): Dramatic sawtooth pattern is the key insight:
    • Deep troughs (~200 regions): Mixed GC efficiently reclaims Old gen, bringing occupancy down to minimal levels
    • Gradual rises (200 → 1000): Steady, controlled promotion over many cycles
    • Sharp drops: Mixed cycles reclaim aggressively, restoring headroom for next load phase
  • Humongous-triggered Concurrent Starts: rare and isolated, most avoiding Old gen spikes tied to large allocations.

GC Pause Envelope: Narrow and Predictable Early

  • Young pauses (teal): Wavy pattern with periodic oscillations ~15–250 ms; working to handle the increased throughput load.
    • Key insight: Higher Young GC frequency (1,596 vs 1,100) keeps pace with the higher allocations and ages objects in the Young gen where collection is cheap, preventing premature promotion to Old gen
  • Overlay (Concurrent-Start (magenta) + Prepare-Mixed (gray) + Mixed (blue)): Shows the episodic nature clearly:
    • Concurrent Start begins marking cycle (114 events vs baseline’s 439)
    • Cleanup (which precedes Prepare-Mixed) finalizes old-region candidates
    • Prepare-Mixed transitions to Mixed GC phase
    • Mixed pauses reclaim old regions aggressively when needed
  • Overall pattern: Young GC works harder (2.60% overhead vs 1.41%) but keeps promotions low, resulting in 59% less Mixed GC overhead (0.39% vs 0.96%).

Side-by-Side: Baseline vs jaz

Region Dynamics — Before GC (Eden)

Fig 1 vs Fig 2. Comparison: Baseline shows tight, stable Eden band (700-900 regions). jaz shows wider oscillation because it’s handling 22% higher throughput—increased allocation rate from higher max-jOPS.

Baseline Before-GC region timeline showing high, tightly bounded Eden and Survivor bands with early spikes; Old remains elevated across the run.

Figure 1: G1 Region States Over Time — Before GC (baseline)

jaz Before-GC region timeline with small, regular Eden rises and low amplitude; bands are smooth and evenly spaced.

Figure 2: G1 Region States Over Time — Before GC (with jaz)

Region Dynamics — After GC (Old)

Fig 3 vs Fig 4. Comparison: Baseline keeps Old gen at elevated plateau (600-900 regions) continuously. jaz‘s dramatic sawtooth (200-1000 regions) proves efficient Mixed GC reclamation—deep troughs demonstrate productive old gen cleanup, restoring headroom. Result: 265 Mixed GCs vs 778 (−66%) despite 22% higher throughput.

Baseline After-GC region timeline where Old remains at a high plateau with frequent peaks; Survivors persist at non-trivial levels.

Figure 3: G1 Region States Over Time — After GC (baseline)

jaz After-GC region timeline featuring deep Old troughs around ~200–250 regions and gradual rises to peaks before being reclaimed again.

Figure 4: G1 Region States Over Time — After GC (with jaz)

GC Pause Envelope — Young

Fig 5 vs Fig 6. Comparison: Baseline shows 1,100 Young GCs with early spikes then ~80-130ms band. jaz shows 1,596 Young GCs (+45%) with wavy pattern ~15-250ms. More Young GC activity is positive—it’s handling 22% higher throughput while keeping objects from promoting prematurely.

Baseline Young-GC scatter with early high-variance cluster, mid-run wavy band near ~80–130 ms, and a small burst near the end.

Figure 5: Young-only pauses (baseline)

jaz Young-GC scatter showing an early, tight band around ~70–110 ms with waviness and small end-of-run taper.

Figure 6: Young-only pauses (with jaz)

GC Pause Envelope — Overlay (Concurrent Start + Prepare-Mixed + Mixed)

Fig 7 vs Fig 8. Comparison: Baseline shows dense, continuous Mixed activity (778 events) driven by Old gen’s elevated plateau. jaz shows episodic pattern with quiet stretches (265 events = −66%)—efficient GC prevents Old gen buildup proactively, resulting in 59% lower Mixed overhead.

Baseline scatter of Mixed (blue), Prepare-Mixed (gray), and Concurrent-Start evacuation (magenta) showing dense activity and periodic higher outliers.

Figure 7: Prepare-Mixed + Mixed pauses with Concurrent Start overlay (baseline)

jaz scatter of Mixed and Prepare-Mixed with lower heights and clear gaps between clusters; only occasional concurrent-start evacuation markers.

Figure 8: Prepare-Mixed + Mixed pauses with Concurrent Start overlay (with jaz)

GC Pause Envelope — Humongous Starts Concurrent Marking

Fig 9 vs Fig 10. Comparison: Baseline shows clustered Humongous Starts Concurrent Marking events at warm-up and tail. jaz shows only ~10 isolated events—large allocations are absorbed, avoiding humongous-triggered marking, post-marking work, and Old gen pressure.

Baseline diamonds marking humongous-triggered Concurrent Start events clustered early and at end of run.

Figure 9: Humongous-trigger events (baseline). Early and tail clusters align with mixed-GC activity

jaz diamonds for humongous-triggered Concurrent Start events appearing only as a few isolated points early and late.

Figure 10: Humongous-trigger events (with jaz)—rare, non-disruptive

Comparison Matrix

Aspect | Baseline | With jaz | What Changed
Throughput | Baseline | jaz | +22% peak throughput; +15% at SLA
Total GC Events | 3,777 cycles | 2,526 cycles | −33% (1,251 fewer events)
Regions: Before GC (Eden), Fig. 1 → Fig. 2 | Tight band (~700–900 regions); Eden already large at Young GC arrival | Wider oscillation; reflects higher allocation rate | Handling 22% more work; dynamic sizing active
Regions: After GC (Old), Fig. 3 → Fig. 4 | Flat plateau (~600–900 regions); always elevated | Dramatic sawtooth (~200–1,000 regions); deep troughs restore headroom | Efficient Young GC keeps promotions low; Old cleanup prevents buildup
Young GC Count, Fig. 5 → Fig. 6 | 1,100 events; 1.41% overhead | 1,596 events (+45%); 2.60% overhead | More Young GC is good—handles higher throughput; handles transients efficiently in the Young gen
Mixed GC Count, Fig. 7 → Fig. 8 | 778 events; 0.96% overhead; continuous pattern | 265 events (−66%); 0.39% overhead (−59%); episodic pattern | Massive reduction in Old gen work; cadenced GC prevents reactive storms
Humongous Events, Fig. 9 → Fig. 10 | 41 clustered bursts at warm-up and tail | ~10 isolated events; sparse, absorbed | Large allocations don't trigger excessive marking cycles

Baseline Implications

In baseline, Young GCs dump large volumes of live data into Old gen. Premature promotions lead to continuous Mixed GC work. Old gen then stays high, thresholds trip early and often, and Mixed/concurrent activity becomes dense. JBB hits the system throughput ceiling early.

  • High stop-the-world (STW) frequency across phases: Young dominates count; Mixed/Prepare are effectively continuous once load stabilizes—no long quiet stretches.
  • Premature promotion tax: 2,677 Concurrent Start, Remark, Cleanup, Prepare-Mixed, and Mixed GC events represent continuous Old gen collection work.
  • Pattern: Stable but limited—GC keeps up with load but cannot scale to higher throughput. G1 is catching up, not cruising.

The jaz Breakthrough: Efficient Young GC Enables Higher Throughput

jaz achieves 22% higher peak throughput through resource-aware defaults that provide the capacity to handle increased load, combined with efficient GC that keeps it sustainable.

How jaz Works

  1. Resource-aware defaults provide capacity for higher throughput:
    • Larger heap sizing based on available VM memory (16 GB)
    • Dynamic heap management adapts to load phases
    • More Eden headroom → can handle higher allocation rate from increased operations/sec
    • Result: System can sustain 22% higher max-jOPS without choking on memory pressure
  2. Efficient Young GC keeps it sustainable:
    • 1,596 Young GCs vs baseline’s 1,100 (+45% more cycles)
    • 2.60% overhead vs 1.41% (+1.19 percentage points)
    • But: Handling 22% higher throughput—more work per unit time
    • Objects die in Eden/Survivor instead of promoting to Old
  3. Dynamic sizing + cadenced GC maintain headroom:
    • Creates breathing room for episodic Mixed GC to reclaim aggressively
    • Prevents Old gen buildup, avoiding continuous Mixed GC tax seen in baseline
    • Sawtooth pattern shows efficient heap usage: expand → promote → reclaim → repeat
    • Result: Old gen sawtooth drops to ~200 regions (vs baseline’s 600-900 plateau)
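The GC overhead percentages quoted above (1.41% vs 2.60% for Young GC) are simply total stop-the-world time as a share of run time. A minimal sketch of that arithmetic, using made-up average pause and run-length values (the study reports only counts and percentages):

```java
public class GcOverhead {
    // Overhead = total STW time / total run time, expressed as a percent.
    static double overheadPercent(long pauseCount, double avgPauseMs, double runSeconds) {
        return (pauseCount * avgPauseMs) / (runSeconds * 1000.0) * 100.0;
    }

    public static void main(String[] args) {
        // Illustrative only: 1,596 Young pauses averaging ~100 ms over a ~6,000 s run
        System.out.printf("%.2f%%%n", overheadPercent(1596, 100.0, 6000.0)); // prints 2.66%
    }
}
```

The same formula explains why more Young GCs can still mean a small absolute cost: even +45% more cycles adds only about a percentage point of overhead.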

jaz Takeaways

  • 22% higher peak throughput: jaz scales where baseline hits ceiling
  • 15% better SLA performance: steadier latency under load
  • 66% fewer Mixed GCs: 265 vs 778 events—massive reduction in expensive Old gen work
  • 33% fewer total GC events: 2,526 vs 3,777 cycles despite handling more work
  • Efficient Young GC strategy: More Young cycles (1,596 vs 1,100) but keeps promotions low

Conclusion: jaz Unlocks Higher Throughput on Azure VMs

This performance study shows that jaz is more than a convenience wrapper—it’s a resource-aware optimization pipeline that delivers measurable, significant improvements in real-world workloads:

  • Sizes heap and generations appropriately, avoiding reactive warm-up churn.
  • Stabilizes early GC behavior, tightening pause bands sooner.
  • Reduces humongous-triggered marking moments, easing Mixed-GC pressure.
  • Maintains GC cadence as load steps up, preventing premature promotions and high plateaus.
  • Lifts overall throughput/SLA metrics—with the launcher as the only change.

On Azure Linux/Arm VMs with the Microsoft Build of OpenJDK, jaz consistently delivered:

  • Faster warm-up
  • Higher sustained throughput
  • Lower p99 response-time tails
  • Tamed Old gen that repeatedly returns to a low post-GC watermark

What’s Next

We’re extending jaz beyond a great default into a continuously adaptive launcher:

  • JVM configuration profiles: pre-vetted, resource-aware profiles for common VM and container shapes.
  • Continuous tuning: light-touch runtime feedback to stay stable under shifting pressure—no app changes.
  • Telemetry: opt-in summaries that inform on-the-fly decisions and explain “why jaz chose X.”
  • AppCDS: optional archive generation/consumption to shorten warm-up and smooth early allocation/JIT behavior.
  • Leyden alignment: play nicely with Leyden’s startup/profile optimizations so jaz can pick the right combo per workload.

Stay tuned: jaz is becoming a foundation for self-optimizing Java runtimes on Azure.

Appendices

Appendix A — Test Environment Preparation & Methodology (Reproducibility)

To ensure clean, comparable results across baseline and jaz runs, each iteration followed this protocol.

Cache Reset Per Run

sync                                # force pending disk writes
echo 3 > /proc/sys/vm/drop_caches   # drop page cache, dentries, and inode caches

We reset Linux page cache before each trial to remove cross-run noise from warm caches. drop_caches does not discard dirty data; sync persists it first. (Run as root / via sudo.)

Repeated Trials

Both configurations (baseline vs jaz) were executed multiple times across Microsoft Build of OpenJDK versions. Runs showed highly consistent GC behavior, region-state evolution, and throughput trends.

Appendix B — From JBB Phases to GC Pauses

JBB is a complex tool designed to simulate a 3-tier system and measure the performance of the JVM on a given OS + hardware combination. A full run cycles through several distinct operational phases, each with unique performance and memory characteristics that performance engineers must understand for effective tuning. Let's take a quick look at how JBB phases shape allocation and promotion pressure.

How JBB Drives Load and GC (Phase Guide)

Why this matters: JBB pushes the JVM through distinct load phases that shift allocation and promotion pressure. Understanding these phases helps when analyzing GC logs or perf telemetry, because JVM behavior changes dramatically from startup to peak load and final shutdown.

Phase 1: Warm-up / HBIR Search

This initial phase is all about getting the system ready and estimating capacity.

  • Activity: Threads come online, the JVM performs JIT compilation, and profiling begins. The benchmark searches for the High Bound Injection Rate (HBIR), a preliminary estimate of the maximum throughput.
  • Memory behavior: This phase is characterized by high bursts of object allocation and moderate, but rising, promotion pressure as the app code is loaded, initialized, and begins processing initial transactions.
Phase 2: The RT-Curve Build

This is the core measurement phase where the benchmark systematically increases the load to build the Response-Throughput (RT) curve.

  • Activity: The load (Injection Rate or IR) increases stepwise. Performance metrics are rigorously collected at each step.
  • Memory behavior: The system experiences sustained and rising allocation pressure. More transient (short-lived) objects are created, and the promotion pressure increases significantly as the system approaches maximum capacity.

The Relationship Between jOPS and Memory Pressure

  • Higher throughput ⇒ higher allocation rate: jOPS counts ops/sec; more ops create more objects per unit time.
  • Promotion pressure rises with load: to sustain higher IR, GC runs more often; survivors are promoted to Old sooner.
  • The memory subsystem must handle this churn without excessive pauses or fragmentation.
  • Key SPEC metrics:
    • max-jOPS — highest throughput at the last successful IR level before the first RT-curve failure.
    • critical-jOPS — geometric mean of jOPS at p99 response time across five SLA points (10, 25, 50, 75, 100 ms).
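As a worked example of the critical-jOPS definition above (a geometric mean over the five SLA points), here is a small sketch; the jOPS values are invented for illustration:

```java
public class CriticalJops {
    // Geometric mean: exp of the average of logs, i.e. the n-th root of the product.
    static double geometricMean(double[] jops) {
        double logSum = 0.0;
        for (double v : jops) logSum += Math.log(v);
        return Math.exp(logSum / jops.length);
    }

    public static void main(String[] args) {
        // Invented jOPS at the 10, 25, 50, 75, and 100 ms SLA points
        double[] slaJops = {5000, 6200, 7400, 8100, 8600};
        System.out.printf("critical-jOPS ~= %.0f%n", geometricMean(slaJops));
    }
}
```

Because a geometric mean is dragged down by any weak point, critical-jOPS rewards configurations that stay fast at the tightest SLA levels, not just at the easy ones.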
Phase 3: Validation / Tail

The final phase winds down the benchmark run and validates the data collected.

  • Activity: The run concludes with report/validation segments following the RT phase.
  • Memory behavior: Allocation pressure tapers; promotion pressure declines, which is typical of a ramp-down in workload.

Appendix C — GC Cheat Sheet (Quick Primer)

Overarching GC goal

Get out of the way — maximize mutator time, keep pauses predictable and short, avoid premature promotion/copying, and reclaim promptly.

  • Eden: where most new objects are born. It fills quickly and is emptied during Young (and later Mixed) GCs.

    GC goal

    Keep Eden large enough that most short-lived objects die there, but not so large that evacuations must copy a big live set, overflow Survivor or force premature tenure.
  • Survivor: short-term holding for objects that just survived a Young GC (they “age” here).

    GC goal

    Let objects age briefly and avoid premature promotion to Old.
  • Old: promoted medium/long-lived objects. Growth here drives Mixed GCs.

    GC goal

    Keep only the long-lived live data set (LDS); minimize promotion churn and lower region waste/fragmentation.
  • Humongous: very large objects (≥50% of a region) allocated directly into Old as one or more contiguous humongous regions, bypassing Young.

    Gotcha

    Short-lived/bursty humongous allocations can fragment Old or force extra GC/cycle work to find contiguous space; reclaiming them eagerly is key.

Appendix D — What the Figures Show (Baseline, jaz)

Region Composition Over Time

These plots count regions by role and reveal how the regions react across JBB phases.

  • Figs 1 and 3 (baseline): Before GC, After GC
  • Figs 2 and 4 (jaz): Before GC, After GC

“Before GC” = right before a collection (peaks). “After GC” = immediately after (valleys).

Pause-time Envelope (STW Pauses Over Time)

These plots show frequent STW events across the run, with variance shifting as JBB moves through Warm-up → SLA → Tail.

  • Young (teal): Fig 5 (baseline), Fig 6 (jaz)
  • Overlay (Concurrent-Start + Prepare-Mixed + Mixed markers): Fig 7 (baseline), Fig 8 (jaz)
  • Concurrent-Start due to humongous allocation (diamond markers): Fig 9 (baseline), Fig 10 (jaz)

Each dot is an STW pause (y-axis = ms, x-axis = runtime (s)). These pairings let you compare pause frequency/ceilings and variance shifts across the same JBB phases.

Note on Remark/Cleanup

These are short closing STW phases of a concurrent marking cycle. Remark finalizes marking bookkeeping while Cleanup tidies metadata and remembered sets and completes any leftover work before the next Mixed GCs begin. Once marking stabilizes, they’re typically tiny and flat in these runs, so we omit separate plots for brevity.
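Pause scatter plots like Figures 5–10 can be built by bucketing pause lines from unified GC logging. A minimal sketch of the counting step; the sample lines are abridged but follow the real `-Xlog:gc` output shape:

```java
import java.util.*;
import java.util.regex.*;

public class PauseHistogram {
    // Matches the pause kind in G1 unified-logging lines, e.g.
    // "[3.6s][info][gc] GC(2) Pause Young (Mixed) (G1 Evacuation Pause) 700M->140M(4096M) 15.0ms"
    static final Pattern PAUSE =
        Pattern.compile("Pause Young \\((Normal|Concurrent Start|Prepare Mixed|Mixed)\\)");

    static Map<String, Integer> count(List<String> lines) {
        Map<String, Integer> hist = new TreeMap<>();
        for (String line : lines) {
            Matcher m = PAUSE.matcher(line);
            if (m.find()) hist.merge(m.group(1), 1, Integer::sum);
        }
        return hist;
    }

    public static void main(String[] args) {
        List<String> sample = List.of(
            "[1.2s][info][gc] GC(0) Pause Young (Normal) (G1 Evacuation Pause) 512M->128M(4096M) 9.8ms",
            "[3.4s][info][gc] GC(1) Pause Young (Prepare Mixed) (G1 Evacuation Pause) 600M->150M(4096M) 11.2ms",
            "[3.6s][info][gc] GC(2) Pause Young (Mixed) (G1 Evacuation Pause) 700M->140M(4096M) 15.0ms");
        System.out.println(count(sample)); // {Mixed=1, Normal=1, Prepare Mixed=1}
    }
}
```

Extending the regex to capture the trailing pause duration in milliseconds yields the per-event points plotted in the figures.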

The post Beyond Ergonomics: How the Azure Command Launcher for Java Improves GC Stability and Throughput on Azure VMs appeared first on Microsoft for Java Developers.

From Complexity to Simplicity: Intelligent JVM Optimizations on Azure https://devblogs.microsoft.com/java/from-complexity-to-simplicity-intelligent-jvm-optimizations-on-azure/ Thu, 20 Nov 2025 16:34:32 +0000

Introduction

As cloud-native architectures scale across thousands of containers and virtual machines, Java performance tuning has become more distributed, complex, and error-prone than ever. As highlighted in our public preview announcement, traditional JVM optimization relied on expert, centralized operator teams manually tuning flags and heap sizes for large application servers. This approach simply doesn’t scale in today’s highly dynamic environments, where dozens—or even hundreds—of teams deploy cloud-native JVM workloads across diverse infrastructure.

To address this, Microsoft built Azure Command Launcher for Java (jaz), a lightweight command-line tool that wraps and invokes java (e.g., jaz -jar myapp.jar). This drop-in command automatically tunes your JVM across dedicated or resource-constrained environments, providing safe, intelligent, and observable optimization out of the box.

Why Automated Tuning Matters

Designed for dedicated cloud environments—whether running a single workload in a container or on a virtual machine—Azure Command Launcher for Java acts as a fully resource-aware optimizer that adapts seamlessly across both deployment models.

The goal is to make JVM optimization effortless and predictable by replacing one-off manual tuning with intelligent, resource-aware behavior that just works. Where traditional tuning demanded deep expertise, careful experimentation, and operational risk, the tool delivers adaptive optimization that stays consistent across architectures, environments, and workload patterns.

The Complexity of Manual JVM Tuning

Traditional Approach | jaz Approach
Build-time optimization and custom builds | Runtime tuning that adapts to any Java app
Requires JVM expertise and manual experimentation | Intelligent heuristics detect and optimize safely
Configuration drift across environments | Consistent, resource-aware behavior
High operational risk | Safe rollback and zero configuration risk

By replacing static configuration with dynamic, resource-aware tuning, jaz simplifies Java performance optimization across all Azure compute platforms.

How It Works: Safe, Smart, and Observable

Safe by Default

The tool preserves user-provided JVM configuration wherever applicable.

By default, the environment variable JAZ_IGNORE_USER_TUNING is set to 0, which means user-specified flags such as -Xmx, -Xms, and other tuning options are honored.

If JAZ_IGNORE_USER_TUNING=1 is set, the tool ignores most options beginning with -X to allow full optimization. As an exception, selected diagnostic and logging flags (e.g. -Xlog) are always passed through to the JVM.

Example (default behavior with user tuning preserved):

jaz -Xmx2g -jar myapp.jar

In this mode, the tool:

  • Detects user-provided JVM flags and avoids conflicts
  • Applies optimizations only when safe
  • Guarantees no behavioral regressions in production

Smart Optimization

Beyond preserving user input, the tool performs resource-aware heap sizing using system or cgroup memory limits. It also detects the JDK version to enable vendor- and version-specific optimizations—including enhancements available in Microsoft Build of OpenJDK, Eclipse Temurin, and others.

This ensures consistent, cross-platform behavior across both x64 and ARM64 architectures.
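jaz's detection code is not public, but the cgroup-aware part of the idea can be illustrated. On cgroup v2, a launcher can read the container memory limit directly; the file path is the standard v2 location, while the fallback choice here is an assumption for the sketch:

```java
import java.nio.file.*;

public class CgroupMemory {
    // cgroup v2 exposes the memory limit in /sys/fs/cgroup/memory.max as a
    // byte count, or the literal string "max" when the container is unlimited.
    static long parseLimit(String raw, long fallbackBytes) {
        String v = raw.trim();
        return v.equals("max") ? fallbackBytes : Long.parseLong(v);
    }

    public static void main(String[] args) throws Exception {
        Path limitFile = Path.of("/sys/fs/cgroup/memory.max");
        long hostBytes = 16L * 1024 * 1024 * 1024; // pretend a 16 GiB host for the demo
        long budget = Files.exists(limitFile)
                ? parseLimit(Files.readString(limitFile), hostBytes)
                : hostBytes; // not in a v2 cgroup: fall back to host memory
        System.out.println("memory budget (bytes): " + budget);
    }
}
```

Modern JVMs perform an equivalent detection internally (container support), which is why a launcher can size `-Xmx` consistently whether the workload lands on a VM or in a container.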

Observability Without Overhead

The tool introduces adaptive telemetry designed to maximize inference while minimizing interference.

Core principle: more insight, less intrusion.

Telemetry is used internally today to guide safe optimization decisions. The system automatically adjusts data collection depth across runtime phases (startup and steady state), providing production-grade visibility without impacting application performance.

Future phases will extend this with event-driven telemetry, including anomaly-triggered sampling for deeper insight when needed.

Evolution of Memory Management

The tool’s memory-management approach has evolved beyond conservative, resource-aware heuristics such as the default ergonomics in OpenJDK. It now applies an adaptive model that incorporates JDK-specific knowledge to deliver stable, efficient behavior across diverse cloud environments. Each stage builds on production learnings to improve predictability and performance.

Stage 1: Resource-Aware Foundations

The initial implementation established predictable memory behavior for both containerized and VM-based workloads.

  • Dynamically sized the Java heap based on system or cgroup memory limits
  • Tuned garbage-collection ergonomics for cloud usage patterns
  • Prioritized safety and consistency through conservative, production-first defaults
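The Stage 1 idea, deriving the heap from the detected memory budget rather than a static default, can be sketched as follows. The 75% ratio and the 256 MB floor are illustrative assumptions, not jaz's actual values:

```java
public class HeapSizer {
    // Hypothetical: size -Xmx from the memory budget, leaving headroom for
    // metaspace, code cache, thread stacks, and other native allocations.
    static long suggestHeapMb(long memoryLimitMb) {
        long heap = (long) (memoryLimitMb * 0.75); // assumed ratio, for illustration
        return Math.max(heap, 256);                // keep a small floor
    }

    public static void main(String[] args) {
        System.out.println("-Xmx" + suggestHeapMb(16_384) + "m"); // 16 GiB VM -> -Xmx12288m
    }
}
```

The point of a ratio-based rule is that it scales with the deployment: the same launcher produces sensible heaps on a 2 GB container and a 16 GB VM without per-service tuning.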

Stage 2: JDK-Aware Hybrid Optimization (Current)

The version available in Public Preview extends this foundation with JDK introspection and vendor-specific awareness.

  • Detects the JDK version and applies the corresponding tuning profile
  • Enables optimizations for Microsoft Build of OpenJDK when present in the path, leveraging its advanced memory-management features to enable pause-less memory reclamation, typically reclaiming 318–708 MB per cycle without adding latency
  • Falls back to proven, broadly compatible strategies for other JDK distributions, maintaining consistent results across platforms
  • Maintains stable throughput and predictable latency across VM and container environments by combining static safety heuristics with adaptive memory behavior

This integration also informs the tool’s telemetry model, where the same balance between visibility and performance guides data-collection strategy.

The Inference vs Interference Matrix

In observability systems, inference (the ability to gain insight) and interference (the performance impact of measurement) are always in tension. The tool’s telemetry model balances these forces by adjusting its sampling depth based on runtime phase and workload stability.

Telemetry Mode | Insight (Inference) | Runtime Impact (Interference) | Use Case
High-Frequency | Deep event correlation | Noticeable overhead | Startup diagnostics
Low-Frequency | Basic trend observation | Negligible overhead | Steady-state monitoring
Adaptive (tool) | Critical metrics collected on demand | Minimal overhead | Production optimization

A useful comparison is Native Memory Tracking (NMT) in the JDK. While NMT exposes multiple levels of visibility into native memory usage (summary, detail, or off), most production systems rely on summary mode for stability. Similarly, the tool embraces this tiered approach to observability but applies it dynamically by adjusting telemetry intensity rather than relying on static configuration.

Phase | Data Collection Depth | Inference Level | Interference Level | Comparable NMT Concept
Startup | Focused sampling of GC and heap sizing | High | Moderate | summary with higher granularity
Steady-State | Aggregated metrics and key anomalies | Medium | Low | summary
Future (Event-Driven) | Planned reactive sampling | Targeted | Minimal | Conceptually detail-on-demand

The following visualizations illustrate how telemetry tracks memory behavior in sync with its adaptive sampling cadence. As the runtime transitions from startup to steady state, the tool automatically increases the time between samples—reducing interference while preserving meaningful visibility into trends.
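One simple way to realize such a cadence (purely illustrative; the actual jaz schedule is not published) is geometric back-off with a steady-state cap:

```java
public class AdaptiveSampler {
    // Double the interval after each sample until a steady-state ceiling is reached.
    static long nextIntervalMs(long currentMs) {
        return Math.min(currentMs * 2, 60_000); // cap at one minute (assumed ceiling)
    }

    public static void main(String[] args) {
        long interval = 250; // assumed startup cadence in ms
        for (int i = 0; i < 10; i++) {
            System.out.print(interval + " ");
            interval = nextIntervalMs(interval);
        }
        System.out.println();
        // 250 500 1000 2000 4000 8000 16000 32000 60000 60000
    }
}
```

Dense early samples capture warm-up dynamics (JIT, heap growth), while the capped steady-state interval keeps interference negligible once behavior stabilizes.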

Figure 1 shows how the interval between telemetry samples (Sample Δt) increases as the runtime stabilizes. Both Process A and Process B exhibit the same pattern: a high-frequency sampling phase during startup followed by a steady-state phase with larger, consistent sampling intervals. This adaptive behavior reduces interference while maintaining meaningful visibility into runtime behavior.
Figure 1: Adaptive Telemetry Sampling Over Time


Scatter plot showing the resident set size (RSS) of the Java process in gigabytes over time. The chart displays two runs, each beginning with allocation growth during warmup and stabilizing at a steady-state plateau. The data points are collected using adaptive telemetry, which increases sampling intervals as the runtime stabilizes, providing visibility into memory behavior with minimal performance impact.
Figure 2: Java Process Memory Utilization Observed Through Adaptive Telemetry

Figure 2 shows the resident set size (RSS) of the Java process in GB, captured through adaptive telemetry. The visualization highlights the expected growth during warmup and stabilization during steady state. By adjusting sampling frequency intelligently, the tool provides production-grade observability of memory behavior without disrupting application performance or exceeding resource budgets.

Benchmark Validation

SPEC JBB2015 Results

All SPEC JBB measurements in this study were run on Azure Linux/Arm64 virtual machines.

Bar chart comparing performance improvements from using jaz versus default JVM ergonomics on SPEC JBB2015, showing ~17–18% peak throughput gains and ~6–10% SLA gains depending on JDK.
Figure 3: Relative Performance Improvement vs. Out-of-the-Box JVM Ergonomics (SPEC JBB2015)

  • The tool delivers double-digit peak throughput gains across all configurations tested—well over 17% relative to out-of-the-box JVM ergonomics

  • SLA-constrained performance improves as well, with the largest gains observed when paired with the Microsoft Build of OpenJDK (around 10% in our tests)

In SPEC JBB2015, SLA performance represents latency-sensitive throughput—how much work is sustained while meeting service-level response-time requirements.

  • No regressions or stability issues were observed across repeated trials and JDK versions

Spring PetClinic REST API Results

These measurements were run in containers with the tool applying cgroup-aware ergonomics under fixed CPU and memory limits.

The Spring PetClinic REST backend provides a lighter-weight request/response workload that complements SPEC JBB2015 rather than replacing it. It exposes CRUD endpoints for owners, pets, vets, visits, pet types, and specialties (GET/POST/PUT/DELETE), documented via Swagger/OpenAPI, and backed by H2 by default. The repository includes Apache JMeter test plans under src/test/jmeter, which we run headless to generate steady read/write traffic across the API surfaces.

To evaluate stability under resource contention, we also ran stress-ng in a companion container to introduce CPU and memory pressure alongside the JMeter-driven workload.

Performance Domain | Impact | Stability Assessment
Mean Response Time | 1–5% improvement | Consistent across scenarios
Tail Latency (90–99th) | Neutral/minimal | Maintained under stress—including stress-ng
Throughput Capacity | No degradation | Scales with resources
Stress Resilience | Excellent | Production-ready
Memory Efficiency | Resource-aware | Validated from 2–8 GB

In other words, the tool is effectively throughput-neutral on this microservice workload while delivering small improvements in mean response time and keeping tail latency and stability intact—even when additional load is introduced via stress-ng.

Taken together, the SPEC JBB2015 and Spring PetClinic REST results confirm that the tool enhances throughput, preserves tail latency, and maintains robust performance across both VM-based and containerized deployments—even under additional system pressure.

Enterprise Deployment Model

The tool supports a safe, incremental adoption strategy designed for production environments:

Phase | Approach | Command Example | Risk Level
Coexistence | Respect existing tuning | jaz -Xmx2g -jar myapp.jar | Minimal
Optimization | Remove manual flags | jaz -jar myapp.jar | Low
Validation | Verify tuning safely | JAZ_DRY_RUN=1 jaz -jar myapp.jar | Zero

This phased design lets teams adopt the tool at their own pace—and roll back at any time without risk.

Technical Architecture

When invoked, the launcher detects the runtime environment and JDK version, applies resource-aware JVM configuration, and launches the optimized JVM in which the application executes. Built-in telemetry operates with low overhead, providing observability without affecting startup or runtime performance.

Figure 4 illustrates the end-to-end lifecycle of jaz during application startup and execution. The tool performs an instant setup phase—detecting the environment and JDK version, applying resource-aware configuration, and preparing safe JVM arguments—all within milliseconds. It then launches the optimized JVM, after which the Java application begins execution in a tuned environment. During the continuous runtime phase, low-overhead, event-driven telemetry runs concurrently, providing observability with minimal interference.
Figure 4: The jaz runtime lifecycle showing instant setup, JVM bring-up, and continuous telemetry


Get Started

# Replace existing java command
java -Xmx512m -jar myapp.jar 

# With jaz onboarded
jaz -Xmx512m -jar myapp.jar

# Validate configuration without deployment impact
JAZ_DRY_RUN=1 jaz -Xmx512m -jar myapp.jar

# Override user-provided tuning flags and let jaz tune for you
JAZ_IGNORE_USER_TUNING=1 jaz -Xmx512m -jar myapp.jar

Azure Command Launcher for Java is available in public preview. Start simplifying Java performance tuning across your Azure environments today.

Key Benefits

  • Immediate throughput improvements (17%+)

  • Consistent, resource-aware behavior across JDKs

  • Safe, incremental adoption model

  • Foundation for adaptive, self-optimizing behavior through runtime awareness

While telemetry is currently used only to improve the launcher internally, it lays the groundwork for future self-healing features that can inform runtime components such as GC ergonomics and heap-sizing heuristics.


Azure Command Launcher for Java turns JVM optimization from a specialized task into a built-in capability—bringing Java simplicity, safety, and performance to the Azure cloud.

For installation, configuration, supported JDKs, and environment variables, see the Microsoft Learn documentation:
https://learn.microsoft.com/en-us/java/jaz/overview

A forthcoming performance analysis blog will present detailed results from extended performance testing, covering heap and GC behavior, and scaling trends across Azure VMs.

The post From Complexity to Simplicity: Intelligent JVM Optimizations on Azure appeared first on Microsoft for Java Developers.

Announcing the Public Preview of Azure Command Launcher for Java https://devblogs.microsoft.com/java/announcing-the-public-preview-of-azure-command-launcher-for-java/ Thu, 20 Nov 2025 16:33:49 +0000

The post Announcing the Public Preview of Azure Command Launcher for Java appeared first on Microsoft for Java Developers.

Today we are announcing the Public Preview of the Azure Command Launcher for Java, a new tool that helps developers, SREs, and infrastructure teams standardize and automate JVM configuration on Azure. The goal is to simplify tuning practices and reduce resource waste across Java workloads.

JVM Tuning in a Cloud-Native World

Before the rise of microservices, Java applications were typically deployed as Java EE artifacts (WARs or EARs) on managed application servers. Ops teams were responsible for configuring and tuning the JVM, often on powerful servers that hosted multiple applications on a single Java EE application server instance.

With the move to cloud-native microservices, every service now runs independently with its own JVM and in its own dedicated container or virtual machine. Each service defines its own CPU and memory boundaries, and with that, its JVM tuning parameters. This shift transferred tuning responsibilities from centralized Ops teams to individual developer teams, creating complexity and inconsistency across environments.

Bradesco Bank is one example among thousands of customers that have gone through this shift. One of the top five largest banks in Latin America, with over $300 billion (USD) in assets, Bradesco has built many of its backend systems on Java and the JVM and now runs significant back-end operations on Azure Red Hat OpenShift (ARO) environments. Bradesco Bank processes billions of transactions every day, supported by tens of thousands of JVMs running critical Java applications at scale.

“In our proof of concept, Azure Command Launcher for Java delivered exactly the kind of operational standardization we needed as we prepared to scale Java workloads on Azure. Early tests showed strong potential for reducing waste and simplifying performance tuning.” – Thiago Mendes, Solution Architect at Bradesco Bank

Without proper JVM tuning, development and operations teams like those at Bradesco Bank risk running into:

  • Resource waste due to low utilization in dedicated cloud environments
  • JVM tuning configuration drifts
  • Inconsistent behavior across deployments
  • Higher operational costs
  • Increased mean time to resolution

Introducing Azure Command Launcher for Java

Azure Command Launcher for Java, in Private Preview since May 2025, simplifies and automates JVM configuration for cloud workloads. It works as a drop-in replacement for the standard java command and is compatible with any Azure-supported JDK, version 8 or later.

Throughout the Private Preview of the Azure Command Launcher for Java, we met with several customers and found that about 20% of Java workloads on containers were being manually misconfigured in production. This led to significant resource waste, due to JVMs being tuned to values much lower than the resource limits provided to their Kubernetes deployments, resulting in unnecessary horizontal scaling to account for the increasing processing demands.

DevOps teams want consistent, battle-tested, and worry-free JVM tuning today. That’s where Azure Command Launcher for Java steps in. Without changing your code or adopting a new runtime, teams simply replace their usual “java -jar” command with Azure Command Launcher for Java and gain smarter, more efficient defaults plus standardized tuning across services. It’s a practical alternative for teams that want to preserve their existing JVM investments while bringing them under stronger operational control.

No code changes, no lock-in. Just replace:

java -Xmx1024m -jar myapp.jar

with:

jaz -jar myapp.jar

And Azure Command Launcher for Java manages the JVM configuration automatically.

Easy Onboarding and Rollback

By default, the tool respects any tuning flags the user provides. If it detects manual JVM settings, like -Xmx, it steps aside and does not apply its own tuning. For workloads with no tuning flags, the tool automatically uses its recommended configuration.

If operators want the tool to override manual tuning, they can enforce this behavior with:

JAZ_IGNORE_USER_TUNING=1

To return control to user-defined flags, set the variable back to:

JAZ_IGNORE_USER_TUNING=0

This approach keeps adoption safe, gradual, and fully reversible.
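The toggle described above can be exercised from a shell (in a Kubernetes pod spec, the same variable would be an `env:` entry on the container); the flag values here are illustrative:

```shell
# Sketch: toggling jaz tuning behavior via the documented environment variable.
export JAZ_IGNORE_USER_TUNING=1   # jaz overrides manual flags such as -Xmx
# jaz -Xmx512m -jar myapp.jar     # the manual -Xmx would be ignored here

export JAZ_IGNORE_USER_TUNING=0   # return control to user-defined flags
# jaz -Xmx512m -jar myapp.jar     # the manual -Xmx would be honored again
```

Because the switch is a single environment variable, a rollout can be staged per deployment and reverted without rebuilding images.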

Smarter Defaults for Cloud Workloads

Out of the box, Azure Command Launcher for Java applies sensible JVM defaults that are optimized for dedicated containerized and virtualized environments. These defaults are based on widely accepted best practices and insights gathered from real-world Java workloads on Azure.

This allows teams to start with a configuration that is more closely aligned with modern cloud deployment models, helping reduce manual setup and the risk of configuration drift across services.

To understand our approach for these choices, please see this article from our JVM Performance Architect, Monica Beckwith.

Adaptive Learning and AI-Assisted Tuning

While the Public Preview focuses on standardization and improved default configurations for immediate benefits, the roadmap includes more advanced and intelligent capabilities for the future.

Planned features include adaptive learning based on telemetry, where the tool will gradually analyze JVM telemetry over time and suggest further optimizations. This capability will be introduced in later releases after further validation with customers and partners.

We also plan to incorporate features like Application Class Data Sharing (JEP 310) so users can benefit automatically. Longer term, we will enable Project Leyden.

Built for Azure

Azure Command Launcher for Java works across Microsoft compute and developer services, including but not limited to:

  • Azure Kubernetes Service
  • Azure Container Apps
  • Azure App Service
  • Azure Functions
  • Azure Red Hat OpenShift
  • Azure Virtual Machines
  • Azure DevOps
  • GitHub Codespaces
  • GitHub Actions

Linux binaries are available for x64 and ARM64 architectures, and it also comes pre-bundled in the latest Microsoft Build of OpenJDK container images.

Get Started

The Public Preview is now available to all customers. To get started, visit the documentation page for more information on how to configure and use the tool.

The post Announcing the Public Preview of Azure Command Launcher for Java appeared first on Microsoft for Java Developers.

]]>
https://devblogs.microsoft.com/java/announcing-the-public-preview-of-azure-command-launcher-for-java/feed/ 0
Introducing Major New Agentic Capabilities for GitHub Copilot in JetBrains and Eclipse https://devblogs.microsoft.com/java/new-agentic-capabilities-for-github-copilot-in-jetbrains-and-eclipse/ https://devblogs.microsoft.com/java/new-agentic-capabilities-for-github-copilot-in-jetbrains-and-eclipse/#respond Tue, 18 Nov 2025 16:11:56 +0000 https://devblogs.microsoft.com/java/?p=232580 GitHub Copilot is taking a major step forward with expanded, deeply integrated support for JetBrains and Eclipse — bringing a new generation of agentic, intelligent capabilities directly into your favorite Java IDEs. This release strengthens Copilot’s cross-IDE experience, unifies agentic workflows, and unlocks more powerful automation to help developers code faster, modernize confidently, and stay […]

The post Introducing Major New Agentic Capabilities for GitHub Copilot in JetBrains and Eclipse appeared first on Microsoft for Java Developers.

]]>
GitHub Copilot is taking a major step forward with expanded, deeply integrated support for JetBrains and Eclipse — bringing a new generation of agentic, intelligent capabilities directly into your favorite Java IDEs. This release strengthens Copilot’s cross-IDE experience, unifies agentic workflows, and unlocks more powerful automation to help developers code faster, modernize confidently, and stay in flow. 

New Agentic Capabilities

This is the year of the agents. Developers need more control than ever—both in how they work with agents and how agents adapt to their workflows. After introducing Custom Agents in VS Code, we’re now bringing them to JetBrains and Eclipse. With new Custom Agents and Subagents, developers can set tailored instructions while subagents operate in clean, isolated contexts for focused, accurate execution. And with the new Plan Mode, developers can tackle complex problems through structured, step-by-step planning with seamless task handoffs. 

Custom Agents – tailor Copilot to your workflow

Custom Agents give developers the ability to shape Copilot’s behavior around their unique coding patterns, project requirements, or domain-specific rules. You can define your own instructions, constraints, and tools, turning Copilot into a configurable assistant that works the way you do — not the other way around. 

Custom Agent JetBrains

Isolated Subagents – focused, context-clean execution

Isolated Subagents bring a new level of precision to multi-step tasks. Each subagent operates in a clean context to deliver more accurate reasoning and fewer distractions. Whether you’re fixing tests, refactoring code, or generating documentation, subagents ensure Copilot stays laser-focused on the task at hand. 

JetBrains Subagent support

Plan Mode – structured, step-by-step task execution

Plan Mode elevates Copilot from a passive helper to an orchestrated problem-solver. It breaks complex tasks into clear, sequential steps — planning, executing, and validating as it goes. This ensures more reliable outcomes, better visibility into the solution path, and smoother handling of multi-stage engineering tasks. 

Plan mode

Core Experience Improvements

In addition to the new agentic capabilities, we are also bringing improvements to the fundamental experience. This ensures developers can perform their most essential daily coding tasks with GitHub Copilot. 

Next Edit Suggestions – expanded to Eclipse

Next Edit Suggestions, already available in JetBrains, is now available in Eclipse. This feature proactively surfaces the next best actions — code edits, improvements, or cleanup — helping developers maintain momentum and quickly apply iterative changes without losing context. 

Next Edit Suggestions in Eclipse

Coding Agent integration now in Eclipse

Eclipse now gains Coding Agent support, enabling developers to offload asynchronous coding tasks to an autonomous background agent. You can delegate fixes, transformations, or generation tasks, and Copilot will complete them while you continue working elsewhere in the IDE. 

Coding Agent in Eclipse

Finally, all of this is powered by a much smarter model—and we’re moving quickly to bring it to JetBrains and Eclipse. OpenAI’s GPT-5.1, GPT-5.1-Codex, and GPT-5.1-Codex-Mini (Preview)—the variants of GPT-5 optimized specifically for agentic software engineering—were rolled out last week across VS Code, JetBrains, Eclipse, Xcode, and the GitHub Copilot CLI, delivering significant quality improvements in chat, agents, and code operations. 

Together, these innovations deliver a truly adaptive Copilot experience—faster, smarter, and designed for the way Java developers work. 

How to get started

You can download our extensions from the following links:

  • GitHub Copilot for JetBrains
  • GitHub Copilot for Eclipse

Provide feedback 

Your feedback is essential to our product. Let us know how we can continue improving.

In-product feedback: Use the feedback options within your IDE 

Feedback Repositories by IDE

JetBrains

Eclipse


The post Introducing Major New Agentic Capabilities for GitHub Copilot in JetBrains and Eclipse appeared first on Microsoft for Java Developers.

]]>
https://devblogs.microsoft.com/java/new-agentic-capabilities-for-github-copilot-in-jetbrains-and-eclipse/feed/ 0
JDConf 2026 Is Coming With Modern Solutions for an Agentic World https://devblogs.microsoft.com/java/jdconf-2026-is-coming-with-modern-solutions-for-an-agentic-world/ https://devblogs.microsoft.com/java/jdconf-2026-is-coming-with-modern-solutions-for-an-agentic-world/#respond Tue, 04 Nov 2025 16:00:58 +0000 https://devblogs.microsoft.com/java/?p=232564 Technology is accelerating faster than ever, and developers are once again at the helm, shaping the future of applications, intelligence, and enterprise systems. With the rise of large language models (LLMs), agent-oriented architectures, and AI-driven development paradigms, Java developers find themselves in a uniquely powerful position to modernize code already powering critical systems, and to […]

The post JDConf 2026 Is Coming With Modern Solutions for an Agentic World appeared first on Microsoft for Java Developers.

]]>

Technology is accelerating faster than ever, and developers are once again at the helm, shaping the future of applications, intelligence, and enterprise systems. With the rise of large language models (LLMs), agent-oriented architectures, and AI-driven development paradigms, Java developers find themselves in a uniquely powerful position to modernize code already powering critical systems, and to build the software of tomorrow.

Java remains one of the world’s most trusted languages for enterprise, cloud, mobile and mission-critical systems. As James Governor, from developer analyst firm RedMonk, recently said, “Java has maintained relevance through all of the waves that we’ve seen over the last couple of decades – it is the exemplar of a general purpose programming language and runtime. […] The idea that somehow Java isn’t going to play well with AI doesn’t make any sense.”

At Microsoft JDConf 2026, we’ll explore how Java is evolving to power agentic, intelligent applications.

We are thrilled to announce that Microsoft JDConf 2026 will take place on April 8-9, 2026, with live-streaming across multiple time zones to support our global community.

Why attend JDConf 2026?

This edition is all about agents, intelligence, and modernization: how Java developers can modernize legacy systems with GitHub Copilot, then leverage LLMs, build intelligent and agentic features, integrate them into existing systems, and scale them in production with Agentic DevOps. We’ll dive into not just AI assistive tooling, but agentic applications that act, coordinate, and drive outcomes.

We are working on Microsoft JDConf 2026 with a focus on showcasing:

  • AI-Native Java and AI-Assisted Development: How Java developers build and code with AI. From AI-native applications using Spring AI, LangChain4J, or Azure OpenAI to AI-powered IDEs, Copilot workflows, and predictive coding.
  • App Modernization and Next-Generation Cloud: Modernizing Java workloads with containers, serverless, and cloud-native tools. Include migration patterns and the role of AI or LLMOps in modernization.
  • Tools, Automation, and Responsible AI Operations: AI in the build, test, and deployment lifecycle—CI/CD automation, observability, policy-as-code, and responsible AI practices for Java systems.
  • Sustainable, Secure, and Efficient Java: Improving performance, security, and sustainability. Cover GraalVM, native compilation, zero-trust design, and efficient runtime operations.
  • AI Success Stories and Customer Journeys: Case studies showing how teams combine Java, AI, and cloud to deliver measurable results.

Call for Speakers is Open

The heartbeat of JDConf is the community. We’re inviting Java developers, architects, engineers and thought leaders to submit proposals and share their experience with the world.

Why speak at JDConf 2026?

  • Reach a global audience: Engage with practitioners, influencers and enterprise developers around the world. The event will be streamed through Microsoft channels on YouTube, with hundreds of thousands of subscribers.
  • Share your impact: Contribute to shaping how the Java ecosystem evolves in the age of AI.
  • Showcase practical outcomes: Attendees value real-world case studies, lessons learned and actionable takeaways.

Head over to our speaker submission portal (coming soon) to submit your session. Stay tuned for submission deadlines, format guidelines and speaker benefits.

Let’s build the future together

JDConf 2026 offers a unique moment for Java developers to be at the forefront of the agentic AI wave. With the depth of the Java ecosystem, the power of modern tooling and the scale of the cloud, there’s no better time to innovate, to re-imagine, and to build intelligent applications that truly act.

Mark your calendar for April 8-9, 2026. Stay tuned for registration, the full agenda, sessions and more.

Let’s code the future of intelligent agents, in Java.

The post JDConf 2026 Is Coming With Modern Solutions for an Agentic World appeared first on Microsoft for Java Developers.

]]>
https://devblogs.microsoft.com/java/jdconf-2026-is-coming-with-modern-solutions-for-an-agentic-world/feed/ 0
Java OpenJDK October 2025 Patch & Security Update https://devblogs.microsoft.com/java/java-openjdk-oct-2025-patch-security-update/ https://devblogs.microsoft.com/java/java-openjdk-oct-2025-patch-security-update/#respond Wed, 29 Oct 2025 21:19:50 +0000 https://devblogs.microsoft.com/java/?p=232561 Hello Java customers! We are happy to announce the latest July 2025 patch & security update release for the Microsoft Build of OpenJDK. Download and install the binaries today. OpenJDK 25.0.1 OpenJDK 21.0.9 OpenJDK 17.0.17 OpenJDK 11.0.29 Check our release notes page for details on fixes and enhancements. The source code of our builds is […]

The post Java OpenJDK October 2025 Patch & Security Update appeared first on Microsoft for Java Developers.

]]>
Hello Java customers!

We are happy to announce the latest October 2025 patch & security update release for the Microsoft Build of OpenJDK. Download and install the binaries today.

  • OpenJDK 25.0.1
  • OpenJDK 21.0.9
  • OpenJDK 17.0.17
  • OpenJDK 11.0.29

Check our release notes page for details on fixes and enhancements. The source code of our builds is available now on GitHub for further inspection: jdk25u, jdk21u, jdk17u, jdk11u.

Microsoft Build of OpenJDK specific updates

OpenJDK25

  • No Microsoft-specific updates for this release.

OpenJDK21

  • No Microsoft-specific updates for this release.

OpenJDK17

  • No Microsoft-specific updates for this release.

OpenJDK11

  • No Microsoft-specific updates for this release.

Summary of Upstream Updates

OpenJDK 25

OpenJDK 21

OpenJDK 17

OpenJDK 11

OpenJDK 8

We continue to provide support on Azure and internally at Microsoft for OpenJDK 8 binaries of Eclipse Temurin built by the Eclipse Adoptium project. To facilitate its usage, we ship container images of OpenJDK 8 on top of Azure Linux and Ubuntu. Refer to our documentation.

OpenJDK 8 (latest)

Questions?

Contact openjdk-support@microsoft.com.

Amplify the news!

LinkedIn

The post Java OpenJDK October 2025 Patch & Security Update appeared first on Microsoft for Java Developers.

]]>
https://devblogs.microsoft.com/java/java-openjdk-oct-2025-patch-security-update/feed/ 0
Java and AI for Beginners: a practical video series for Java https://devblogs.microsoft.com/java/java-and-ai-for-beginners-a-practical-video-series-for-java/ https://devblogs.microsoft.com/java/java-and-ai-for-beginners-a-practical-video-series-for-java/#respond Tue, 28 Oct 2025 16:00:33 +0000 https://devblogs.microsoft.com/java/?p=232527 If you’re looking for a clear, no-nonsense path into generative AI on Java, this series is for you.  Microsoft’s Java and AI for Beginners video series is a set of short tutorials that introduce the concepts, tooling, and patterns you need to get started at a pace that respects your time and experience. What the series […]

The post Java and AI for Beginners: a practical video series for Java appeared first on Microsoft for Java Developers.

]]>

If you’re looking for a clear, no-nonsense path into generative AI on Java, this series is for you.  Microsoft’s Java and AI for Beginners video series is a set of short tutorials that introduce the concepts, tooling, and patterns you need to get started at a pace that respects your time and experience.

What the series covers

We help you through foundational ideas first and then move into hands-on examples:

  • Getting started fast – Spin up your first AI-powered app using GitHub Codespaces.

  • Core generative AI techniques – Learn the basics behind completions and chat flows. See how function calling connects models to real tools and services. Get an introduction to Retrieval-Augmented Generation (RAG) for document-aware applications.

  • Simple, focused application – Explore small projects that illustrate different capabilities, such as combining text and image generation, running models locally with the Azure AI Foundry Local experience, and wiring tools with the Model Context Protocol (MCP).

  • Responsible AI – Apply safety features from GitHub Models and Azure services. We cover content filtering, bias awareness, and practical checks you can add before deployment.

  • MCP in Java – Understand the Model Context Protocol and how it fits Java workflows. Learn what it means to implement an MCP server, connect a Java client, and use tools through a consistent protocol.

  • Context engineering for Java – Improve results with clean prompts, structured context, and simple evaluation steps. We discuss when to persist context and when to compute it on the fly.

  • Modernization with AI assistance – See how the GitHub Copilot App Modernization experience helps upgrade and migrate Java applications. Then follow a guided flow to deploy to Azure with AI-assisted configuration.

  • LangChain4j essentials – Start a basic project that targets OpenAI-compatible endpoints, then build a small agent with tools and memory to understand the moving parts.

  • Running GenAI in containers – Review when to use on-demand GPUs for inference and training. Learn how dynamic sessions in Azure Container Apps support code interpreters and short-lived, cost-aware execution.

Each video is short and focused. Watch them in order if you are new to the space, or skip into the topics that match your immediate needs.
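The context-engineering idea above comes down to assembling prompts from clearly separated parts rather than ad hoc string concatenation. A minimal sketch in plain Java (class and section names are illustrative, not from the series):

```java
import java.util.List;

// Minimal sketch of structured context assembly for a prompt:
// instructions, retrieved context, and the user question are kept
// in clearly labeled sections instead of one unstructured string.
public class PromptBuilder {
    public static String build(String instructions, List<String> contextDocs, String question) {
        StringBuilder sb = new StringBuilder();
        sb.append("## Instructions\n").append(instructions).append("\n\n");
        sb.append("## Context\n");
        for (String doc : contextDocs) {
            sb.append("- ").append(doc).append("\n");
        }
        sb.append("\n## Question\n").append(question);
        return sb.toString();
    }

    public static void main(String[] args) {
        String prompt = build(
            "Answer using only the context below.",
            List.of("Orders ship within 2 days.", "Returns accepted for 30 days."),
            "How long does shipping take?");
        System.out.println(prompt);
    }
}
```

Keeping sections explicit makes prompts easy to evaluate and diff, and the same structure works whether the context is persisted or computed on the fly.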

Integrations you will see

The series uses services and libraries that many Java teams already rely on:

Where it helps, we use the official OpenAI Java SDK to target both OpenAI and Azure OpenAI endpoints with consistent code paths and Azure-aware authentication and security options.

Microsoft’s investment and community partnership

This series reflects ongoing work with the Java open-source community. The Microsoft Java advocacy and engineering teams continue to contribute to projects like LangChain4j and Spring AI, improve Azure and OpenAI integrations, and provide open-source examples that run locally and on the cloud. Your feedback from conferences, meetups, issue threads, and customer projects shaped each of the outlines for these videos.

Get started

The playlist, code links, and references are available on the Microsoft Developer YouTube channel:

https://aka.ms/java-ai-beginners

Also subscribe to the Microsoft for Java Developers YouTube Channel

If you have suggestions or want a deeper dive on a specific area, let me know—your input will guide future installments.

The post Java and AI for Beginners: a practical video series for Java appeared first on Microsoft for Java Developers.

]]>
https://devblogs.microsoft.com/java/java-and-ai-for-beginners-a-practical-video-series-for-java/feed/ 0
MCP Registry and Allowlist Controls for Copilot in JetBrains and Eclipse Now in Public Preview https://devblogs.microsoft.com/java/mcp-registry-and-allowlist-controls-for-copilot-in-jetbrains-and-eclipse-now-in-public-preview/ https://devblogs.microsoft.com/java/mcp-registry-and-allowlist-controls-for-copilot-in-jetbrains-and-eclipse-now-in-public-preview/#respond Tue, 28 Oct 2025 16:00:33 +0000 https://devblogs.microsoft.com/java/?p=232540 MCP registry and allowlist controls for GitHub Copilot in JetBrains IDEs and Eclipse are now available in public preview in nightly/pre-release builds. What’s new MCP Registry An MCP Registry is a directory of Model Context Protocol (MCP) servers. For users of JetBrains IDEs and Eclipse, you can now configure your MCP Registry and browse available […]

The post MCP Registry and Allowlist Controls for Copilot in JetBrains and Eclipse Now in Public Preview appeared first on Microsoft for Java Developers.

]]>
MCP registry and allowlist controls for GitHub Copilot in JetBrains IDEs and Eclipse are now available in public preview in nightly/pre-release builds.


What’s new

MCP Registry

An MCP Registry is a directory of Model Context Protocol (MCP) servers. For users of JetBrains IDEs and Eclipse, you can now configure your MCP Registry and browse available MCP servers directly within your IDE. This greatly streamlines setup and provides a seamless experience for discovering and managing MCP servers right from the editor.

Allow List Controls

As an enterprise or organization owner, you can configure an MCP Registry URL along with an access control policy. These settings determine which MCP servers your developers can see and run in supported IDEs with GitHub Copilot.

When combined with the Registry only policy, this configuration blocks any MCP server not defined in the internal registry from running.

Set up your MCP Registry

In JetBrains IDEs:

  1. Sign in and open Copilot chat, then click the MCP Registry icon.
  2. After the MCP Registry loads, you can browse, install, or uninstall MCP servers from the registry.
  3. Optionally, click the Configure MCP Registry URL icon and specify your preferred registry endpoint, or use the default provided.

In Eclipse:

  1. In the top bar of the Copilot chat panel, click the MCP Registry icon.
  2. After the MCP Registry loads, you can browse, install, or uninstall MCP servers from the registry.
  3. Optionally, click the Configure Registry URL icon and specify your preferred registry endpoint, or use the default provided.

For admins: configure Allowlist Controls

Allow List Controls are available only for Copilot Business and Copilot Enterprise customers.

  1. In GitHub Enterprise settings → AI Controls tab → MCP (or at the org level: Organization settings → Policies → Copilot → Policies).
  2. Enable MCP servers in Copilot.
  3. Add your MCP Registry URL.
  4. Choose enforcement mode:
    • Allow all (default): Any MCP server can run; registry servers appear as recommended.
    • Registry only: Only servers from your registry can run; others are blocked at runtime with a clear warning.

For setup instructions and registry format details, see the official documentation.

Try it out

You can try these new features today in the nightly release of Copilot for JetBrains, and the pre-release versions of Copilot for Eclipse. Please install from:

You will also need to have a valid Copilot license.

Share your feedback

We value your feedback! Share your experience through the following channels:

Note: These features are currently in preview and are subject to change.

The post MCP Registry and Allowlist Controls for Copilot in JetBrains and Eclipse Now in Public Preview appeared first on Microsoft for Java Developers.

]]>
https://devblogs.microsoft.com/java/mcp-registry-and-allowlist-controls-for-copilot-in-jetbrains-and-eclipse-now-in-public-preview/feed/ 0