The post Introducing Azure Performance Diagnostics Tool for Java: Automated Java Performance Analysis in Kubernetes via Azure SRE Agent appeared first on Microsoft for Java Developers.
]]>The Azure Performance Diagnostics Tool for Java is a powerful new capability within Azure SRE Agent, an AI-powered service that automatically responds to site reliability issues. This feature enables development and operations teams to monitor, analyze, and troubleshoot Java Virtual Machine (JVM) performance issues with unprecedented ease.
Azure Performance Diagnostics Tool for Java can identify and diagnose common JVM problems, including:
An example of a report highlighting a garbage collection issue would be as follows:

When Azure SRE Agent is tasked with solving a performance issue by the customer and it suspects that a JVM related issue is the cause, the agent immediately initiates a comprehensive diagnostic report. This allows your team to understand the root cause of this performance issue.
Teams can also manually request diagnostics through the Azure SRE Agent chat interface. Simply ask for a performance analysis of any Java service; you can even build your own Sub-Agent and integrate the AKS performance functions as part of that Sub-Agent. You can then directly ask your agent to perform Java diagnoses:
Take a look at it in action in the video below.
When Azure SRE Agent suspects a performance issue (or is manually invoked to perform a Java performance investigation in your container), it:
This approach ensures zero downtime while providing deep visibility into JVM behaviour.
NOTE: For auditability reasons the Kubernetes API retains visibility of terminated ephemeral containers. As a result when looking at a pod for instance using kubectl describe pods the ephemeral containers will be visible, to prevent excessively generating noise within your environment, we have limited the pod to having run 5 diagnostic containers.
Setting up JVM Diagnostics is straightforward. Here are the requirements:
languageStack=java annotation to your pods to enable Azure Performance Diagnostics Tool for JavaNote: At time of writing the Java profiling feature is in the Early access features, to enable early access, within the SRE Agent UI, browse to Settings, Basics, and select Early access to features. The feature will progress to main line in the coming month.
Adding the required annotation is as simple as updating your pod specification:
apiVersion: v1
kind: Pod
metadata:
name: your-java-app
annotations:
languageStack: java
spec:
containers:
- name: app
image: your-java-app:latest
Alternatively, you can apply an annotation on the command line using kubectl:
kubectl annotate pod your-java-app languageStack=java
Annotating your pods indicates that they are running Java applications and that you consent to have them diagnosed using Azure Performance Diagnostics Tool for Java. As always with monitoring, although the diagnostic process is designed to be as non-intrusive as possible, there is a small amount of overhead involved when running the Java Flight Recorder profiler and the diagnostic container. It is recommended to first use this feature in non-production environments and ensure that the diagnostic process does not interfere with your application.
You can also create a custom Sub-Agent within Azure SRE Agent, and delegate to the AKS diagnostic analysis tools, in order to create an agent specifically for how you wish to respond to AKS diagnostic needs. You can also delegate to this agent when responding to an alert. The following is an example of how to configure a Sub-Agent which includes the Java Performance Diagnostic capability via the SRE Agent GUI:
Below is an example YAML configuration that can be pasted into the YAML input dialogue within the create a SRE Sub-Agent Builder UI:
api_version: azuresre.ai/v1
kind: AgentConfiguration
spec:
name: AKSDiagnosticAgent
system_prompt: >-
Take the details of a diagnostic analysis to be performed on a AKS container and hand off to the
appropriate diagnostic tool. If you need to find the resource to diagnose, use the
SearchResourceByName and ListResourcesByType tools
tools:
- GetCPUAnalysis
- GetMemoryAnalysis
- SearchResourceByName
- ListResourcesByType
- AnalyzeJavaAppInAKSContainer
handoff_description: >-
When the user has requested a diagnostic analysis, or it has been determined an AKS diagnostic
analysis is required
agent_type: Autonomous
You can then interact with this Sub-Agent via the Azure SRE Agent chat interface to request JVM performance analyses as needed, for instance:
/agent AKSDiagnosticAgent I am having a performance issue in:
pod: illuminate-test-petclinic
container: spring-petclinic-rest
namespace: illuminate-test
aks instance: jeg-aks
This will trigger the Azure Performance Diagnostics Tool for Java process and return a detailed report of findings and recommendations.
The post Introducing Azure Performance Diagnostics Tool for Java: Automated Java Performance Analysis in Kubernetes via Azure SRE Agent appeared first on Microsoft for Java Developers.
]]>The post Java at Microsoft: 2025 Year in Review appeared first on Microsoft for Java Developers.
]]>2025 was one of the most significant years yet for Java at Microsoft. From the arrival of OpenJDK 25 as the newest Long‑Term Support (LTS) release, to AI‑powered modernization workflows with GitHub Copilot app modernization, to Agentic AI development in Microsoft AI Foundry with Java frameworks like LangChain4j, Spring AI, Quarkus AI, and Embabel, with major Visual Studio Code and Azure platform investments. Microsoft deepened its commitment across the entire Java ecosystem.
2025 delivered a historic milestone: OpenJDK 25 officially shipped and with it, Microsoft Build of OpenJDK 25 as the next Long‑Term Support (LTS) release, setting the foundation for the next multi‑year cycle of enterprise Java workloads.
For developers who have not been following advancements in the Java language, it may not look, but the code below is a Rock Paper Scissors implementation in Java 25 that can be put inside a Game.java file and executed with “$ java Game.java” with a JDK 25 installation.
To run this code, Microsoft released binaries, container images, and updated Azure Platform services, providing:
With Java 25, enterprises gain language and runtime improvements, performance upgrades, memory optimizations, and new developer‑facing capabilities, giving organizations strong justification to plan migrations earlier in the LTS cycle rather than waiting several years. To learn about some of the new features in JDK 25, check our announcement.
GitHub Copilot’s continued parity and agentic capabilities across Visual Studio Code, IntelliJ IDEA, and Eclipse IDE ensure Java teams can adopt AI assistance without changing IDEs, which is crucial for regulated environments and large estates. The advantage of GitHub Copilot is to bring to developers all the best coding models through a single subscription.
In IntelliJ IDEA, the official GitHub Copilot plugin delivers chat, agentic capabilities, MCP support, inline completions, and more. Fast nightly updates keep pace with IDE releases across the JetBrains family. GitHub Copilot is also available on the Eclipse Marketplace with code completions, Copilot Chat, and agentic workflows powered by Agent Mode and MCP integrations. Agent preview lets developers delegate tasks from Eclipse and track jobs that open draft PRs and queue reviews.
For Java developers who prefer working directly from the terminal, the GitHub Copilot CLI brings the same AI‑assisted power found in IDEs straight to your shell. With Copilot CLI, you can run development tasks, upgrade, migration, and deployment workflowsend‑to‑end without switching tools, ideal for developers who live in Bash, Zsh, or PowerShell. Copilot CLI supports interactive and batch scenarios, making it possible to develop Java applications, upgrade Java versions, modernize Spring or Jakarta EE apps, or deploy to Azure entirely via command‑line tasks.
2025 was a breakout year for Java modernization via GitHub Copilot, now providing end‑to‑end support for assessments, planning, code transformation, testing, and deployment. For a video introduction, watch Modernize Java apps in days with GitHub Copilot on YouTube.
Modernization formulas, rules, and recipes encode expert migration guidance for core Java APIs, the Spring Framework, the Jakarta EE platform, and hundreds of related scenarios including logging, identity, secret management, messaging, database, and overall cloud readiness.
Key modernization capabilities:
GitHub Copilot app modernization is available on Visual Studio Code, IntelliJ, and in the Copilot CLI.
2025 was also the year when JVM tuning became a no-brainer for cloud‑native Java teams. Now in Public Preview, the Azure Command Launcher for Java is a drop‑in replacement for the java JVM launcher that:
Large organizations like Bradesco Bank validated operational gains, demonstrating measurable efficiency enhancements, performance consistency, and operational peace of mind across hundreds of thousands of JVMs. The tool provides an immediate path for teams modernizing to the new LTS without having to learn completely new tuning heuristics. Let us do the JVM tuning for you.
Once installed, using Azure Command Launcher for Java is as simple as replacing the “java” command with the “jaz” command:
For more information on the performance benefits, check the article Beyond Ergonomics: How the Azure Command Launcher for Java Improves GC Stability and Throughput on Azure VMs.
The roadmap for the Azure Command Launcher for Java has exciting ideas, and we are eager to connect with developers and customers willing to experiment. Let us help you get the most of advanced JVM features like App CDS, Project Leyden, GC log analysis, and more!
Announced this year and currently in Preview, the Azure SRE Agent is soon adding deeper operational intelligence for Java workloads. Java teams running at scale gain a powerful assistant that reduces MTTR and elevates reliability practices. To learn more, you can watch this presentation at InfoQ Dev Summit Boston 2025: Fix SLO Breaches Before They Repeat to see a demo.
A standout moment was the Reactor session on modernizing Spring Boot apps from relational databases to Azure Cosmos DB, using GitHub Copilot to accelerate every step. This presentation also demonstrates features in Visual Studio Code with GitHub Copilot for customizing agentic AI instructions and prompts.
Developers learned how Copilot can:
This brings database modernization into the same workflow as app upgrades—critical for Java cloud migrations. Watch the replay on the Microsoft Reactor page. For more on context engineering and custom instructions, you can also watch this other presentation Context Engineering for Java Ecosystem.
Experts are pushing the boundaries of AI development, whether with AI assisting tools, with Agentic AI coding, or building custom Agents. But we must start somewhere, and for beginners, it is important to have fundamentals and basic understanding of core tools. This is why the Microsoft Developer Relations team for Java, built and published the Java and AI for beginners series.
We help you through foundational ideas first and then move into hands-on examples:
Each video is short and focused. Watch them in order if you are new to the space or skip into the topics that match your immediate needs.
The year was full in terms of sharing ideas. Everyone had something to say, especially about AI. Of course, Microsoft also had a few ideas to share, and that is why we were present at dozens of conferences in 2025, all around the world: DevNexus, Devoxx, JavaLand, JavaOne, JavaZone, SpringOne, and others.
Meeting other developers at conferences, whether virtual or in-person, remain one of the best ways to share ideas and learn from others. This year, we gave continuity with our own space, Microsoft JDConf, so Microsoft experts and community speakers could participate and share theirs. In addition to that, we participated in key events like Oracle’s flagship Java developer conference, JavaOne. In 2026, we will be there again.
The 2025 edition focused on our opportunity to Code the Future with AI. There were 22 technical sessions across Spring, Quarkus, AI agentic development, core Java principles, modern tooling, and code modernization. With strong global community engagement and the presence of luminaries and Java Champions like Josh Long and Lize Raes, JDConf 2025 was a milestone for our community engagement, giving them the space and amplification to share their ideas. Watch the recordings.
For 2026, we are excited for what’s to come! Microsoft JDConf call for papers is up and running, and the conference will be back on April 8-9 with the usual three timezones live streams so everyone can engage and learn.
This year we were at Oracle’s JavaOne conference where we shared what developers can get with Microsoft tools and services for modern Java development with AI. We also had exciting breakout sessions on AI, modernization, and cloud-native Java, where thousands of developers engaged in person and online. Needless to say, we will be back at JavaOne 2026, so stay tuned!
In the meantime, watch again the Microsoft keynote at JavaOne 2025 and our two breakout sessions, Next-Level AI Mastery for Java Developers, and From RAG to Enterprise AI Agents: Building Intelligent Java Apps.
Microsoft teams continued collaborating with key projects in the Java ecosystem. A few highlights go to OpenJDK contributions by Microsoft’s Java Engineering Group, and contributions by the Microsoft Developer Relations team to frameworks like Spring AI, Quarkus, and LangChain4j.
LangChain4j has become a de facto standard in building intelligent Java applications, and integrations with Microsoft AI services are key for enabling customers to leverage the latest AI models and capabilities into their systems. To learn more about LangChain4j and our contributions to it, check out the blog Microsoft and LangChain4j: A Partnership for Secure, Enterprise-Grade Java AI Applications.
With OpenJDK 25 now the active LTS, the ecosystem is entering a new multi‑year cycle. Microsoft will continue investing in:
The mission remains unchanged: empower every Java developer to build intelligent applications, modernize legacy codebases, and operate applications with world‑class tooling, AI assistance, and cloud‑native excellence.
The post Java at Microsoft: 2025 Year in Review appeared first on Microsoft for Java Developers.
]]>The post Beyond Ergonomics: How the Azure Command Launcher for Java Improves GC Stability and Throughput on Azure VMs appeared first on Microsoft for Java Developers.
]]>jaz) —a safe, resource-aware way to launch the JVM without hand-tuning dozens of flags. This follow-up shares performance results, focusing on how jaz affects G1 behavior, heap dynamics, and pause characteristics under a long-running, allocation-intensive workload: SPECjbb 2015 (JBB).
Test bed: 4-vCPU, 16-GB Azure Linux/Arm64 VM running the Microsoft Build of OpenJDK.
JDKs exercised: Validated on JDK 17 (17.0.17), 21 (21.0.9), and 25 (25.0.1); all figures in this post are from the JDK 17 runs. Trends on 21/25 matched the 17 results.
How we ran it:
# baseline
java -jar specjbb.jar
# with jaz
jaz -jar specjbb.jar
Controls: Same JBB workload config, OS settings, and JVM flags for both runs—the launcher was the only change.
JBB exercises high allocation rate, object churn, humongous-allocation behavior, generational sizing, region pressure, concurrent-mark sustainability, GC scheduling, and pause-time predictability. Because it is both bandwidth-intensive and latency-sensitive, JBB is ideal for validating heap ergonomics and GC policies in the cloud.
As a capacity-planning tool it helps explore sustained throughput limits, GC headroom before SLA violations, warm-up behavior under load, and how a given VM size (4 cores, 16 GB) holds up under continuous allocation pressure.
jaz| Metric | Baseline | With jaz |
Improvement |
| Peak Throughput | Baseline | 22% | Higher max-jOPS |
| SLA Performance | Baseline | 15% | Higher critical-jOPS |
| Total GC Events | 3777 | 2526 | -33% (1251 fewer) |
| Young GC Count | 1100 | 1596 | +45% (handles higher load) |
| Mixed GC Count | 778 | 265 | -66% (513 fewer) |
| Young GC Overhead | 1.41% | 2.60% | Higher but efficient |
| Mixed GC Overhead | 0.96 | 0.39 | -59% reduction |
| Old Gen Pattern | Flat plateau (600-900) | Deep sawtooth (200-1000) | Dynamic sizing active |
Key Insight: jaz achieves 22% higher throughput by keeping Young GC efficient—objects die in Eden/Survivor instead of promoting prematurely to Old gen, dramatically reducing expensive Mixed GC work.
Microsoft Build of OpenJDK with default G1 GC ergonomics. Long JBB run on a 4-core VM.
jaz: Beyond the Wild RumpusSame VM/JDK/workload; only the launcher changed (jaz resource-aware defaults).
jaz achieves 22% higher max-jOPS, driving higher allocation rate. Both Eden and Old show dramatic oscillation (200-1000 regions) – wider variation reflects increased load and dynamic heap sizing, not the tight bands of baseline.jazFig 1 vs Fig 2. Comparison: Baseline shows tight, stable Eden band (700-900 regions). jaz shows wider oscillation because it’s handling 22% higher throughput—increased allocation rate from higher max-jOPS.
Figure 1: G1 Region States Over Time — Before GC (baseline)
Figure 2: G1 Region States Over Time — Before GC (withjaz)
Fig 3 vs Fig 4. Comparison: Baseline keeps Old gen at elevated plateau (600-900 regions) continuously. jaz‘s dramatic sawtooth (200-1000 regions) proves efficient Mixed GC reclamation—deep troughs demonstrate productive old gen cleanup, restoring headroom. Result: 265 Mixed GCs vs 778 (−66%) despite 22% higher throughput.
Figure 3: G1 Region States Over Time — After GC (baseline)
Figure 4: G1 Region States Over Time — After GC (with jaz)
Fig 5 vs Fig 6. Comparison: Baseline shows 1,100 Young GCs with early spikes then ~80-130ms band. jaz shows 1,596 Young GCs (+45%) with wavy pattern ~15-250ms. More Young GC activity is positive—it’s handling 22% higher throughput while keeping objects from promoting prematurely.
Figure 5: Young-only pauses (baseline)
Figure 6: Young-only pauses (with jaz)
Fig 7 vs Fig 8. Comparison: Baseline shows dense, continuous Mixed activity (778 events) driven by Old gen’s elevated plateau. jaz shows episodic pattern with quiet stretches (265 events = −66%)—efficient GC prevents Old gen buildup proactively, resulting in 59% lower Mixed overhead.
Figure 7: Prepare-Mixed + Mixed pauses with Concurrent Start overlay (baseline)
Figure 8: Prepare-Mixed + Mixed pauses with Concurrent Start overlay (with jaz)
Fig 9 vs Fig 10. Comparison: Baseline shows clustered Humongous Starts Concurrent Marking events at warm-up and tail. jaz shows only ~10 isolated events—large allocations are absorbed avoiding humongous-triggered marking, post-marking and Old gen pressure.
Figure 9: Humongous-trigger events (baseline). Early and tail clusters align with mixed-GC activity
Figure 10: Humongous-trigger events (with jaz)—rare, non-disruptive
| Aspect | Baseline | With jaz |
What Changed |
| Throughput | Baseline | jaz |
+22% peak throughput
+15% at SLA |
| Total GC Events | 3,777 cycles | 2,526 cycles | −33% (1251 fewer events) |
| Regions: Before GC (Eden)
Fig. 1 → Fig. 2 |
Tight band (~700-900 regions)
Eden already large at Young GC arrival |
Wider oscillation
Reflects higher allocation rate |
Handling 22% more work
Dynamic sizing active |
| Regions: After GC (Old)
Fig. 3 → Fig. 4 |
Flat plateau (~600–900 regions)
Always elevated |
Dramatic sawtooth (~200–1000 regions)
Deep troughs restore headroom |
Efficient Young GC keeps promotions low
Old cleanup prevents buildup |
| Young GC Count
Fig. 5 → Fig. 6 |
1,100 events
1.41% overhead |
1,596 events (+45%)
2.60% overhead |
More Young GC is good—handles higher throughput
Handles transients efficiently in the Young gen |
| Mixed GC Count
Fig. 7 → Fig. 8 |
778 events
0.96% overhead Continuous pattern |
265 events (−66%)
0.39% overhead (−59%) Episodic pattern |
Massive reduction in Old gen work
Cadenced GC prevents reactive storms |
| Humongous Events
Fig. 9 → Fig. 10 |
41 clustered bursts at warm-up and tail | ~10 isolated events Sparse, absorbed | Large allocations don’t trigger excessive marking cycles |
In baseline, Young GC dump large volumes of live data to Old gen. Premature promotions lead to continuous Mixed GC work. Old then stays high, thresholds trip early and often, and Mixed/concurrent activity becomes dense. JBB hits the system throughput ceiling early.
jaz Breakthrough: Efficient Young GC Enables Higher Throughputjaz achieves 22% higher peak throughput through resource-aware defaults that provide the capacity to handle increased load, combined with efficient GC that keeps it sustainable.
jaz Takeawaysjaz scales where baseline hits ceilingjaz Unlocks Higher Throughput on Azure VMsThis performance study shows that jaz is more than a convenience wrapper—it’s a resource-aware optimization pipeline that delivers measurable, significant improvements in real-world workloads:
On Azure Linux/Arm VMs with the Microsoft Build of OpenJDK, jaz consistently delivered:
We’re extending jaz beyond a great default into a continuously adaptive launcher:
jaz chose X.”jaz can pick the right combo per workload.Stay tuned—jaz is becoming a foundation for self-optimizing Java runtimes on Azure.
To ensure clean, comparable results across baseline and jaz runs, each iteration followed this protocol.
sync # force pending disk writes
echo 3 > /proc/sys/vm/drop_caches # drop page cache, dentries, and inode caches
We reset Linux page cache before each trial to remove cross-run noise from warm caches. drop_caches does not discard dirty data; sync persists it first. (Run as root / via sudo.)
Both configurations (baseline vs jaz) were executed multiple times across Microsoft Build of OpenJDK versions. Runs showed highly consistent GC behavior, region-state evolution, and throughput trends.
JBB is a complex tool designed to simulate a 3-tier system and measure the performance of the JVM on a given OS + hardware. A full run cycles through several distinct operational phases, each with unique performance and memory characteristics that are crucial for performance engineers to understand for effective tuning. Let’s get a quick look at how JBB phases shape allocation/promotion pressure.
Why this matters: JBB pushes the JVM through distinct load phases that shift allocation and promotion pressure. Understanding these phases helps when analyzing GC logs or perf telemetry, because JVM behavior changes dramatically from startup to peak load and final shutdown.
This initial phase is all about getting the system ready and estimating capacity.
This is the core measurement phase where the benchmark systematically increases the load to build the Response-Throughput (RT) curve.
The Relationship Between jOPS and Memory Pressure
The final phase winds down the benchmark run and validates the data collected.
Overarching GC goal
Get out of the way — maximize mutator time, keep pauses predictable and short, avoid premature promotion/copying, and reclaim promptly.GC goal
Keep Eden large enough that most short-lived objects die there, but not so large that evacuations must copy a big live set, overflow Survivor or force premature tenure.GC goal
Let objects age briefly and avoid premature promotion to Old.GC goal
Keep only the long-lived live data set (LDS); minimize promotion churn and lower region waste/fragmentation.Gotcha
Short-lived/bursty humongous allocations can fragment Old or force extra GC/cycle work to find contiguous space; reclaiming them eagerly is key.jaz)These plots count regions by role and reveal how the regions react across JBB phases.
jaz): Before GC, After GC“Before GC” = right before a collection (peaks). “After GC” = immediately after (valleys).
These plots show frequent STW events across the run, with variance shifting as JBB moves through Warm-up → SLA → Tail.
jaz)jaz)jaz)Each dot is an STW pause (y-axis = ms, x-axis = runtime (s)). These pairings let you compare pause frequency/ceilings and variance shifts across the same JBB phases.
Note on Remark/Cleanup
These are short closing STW phases of a concurrent marking cycle. Remark finalizes marking bookkeeping while Cleanup tidies metadata and remembered sets and completes any leftover work before the next Mixed GCs begin. Once marking stabilizes, they’re typically tiny and flat in these runs, so we omit separate plots for brevity.The post Beyond Ergonomics: How the Azure Command Launcher for Java Improves GC Stability and Throughput on Azure VMs appeared first on Microsoft for Java Developers.
]]>The post From Complexity to Simplicity: Intelligent JVM Optimizations on Azure appeared first on Microsoft for Java Developers.
]]>As cloud-native architectures scale across thousands of containers and virtual machines, Java performance tuning has become more distributed, complex, and error-prone than ever. As highlighted in our public preview announcement, traditional JVM optimization relied on expert, centralized operator teams manually tuning flags and heap sizes for large application servers. This approach simply doesn’t scale in today’s highly dynamic environments, where dozens—or even hundreds—of teams deploy cloud-native JVM workloads across diverse infrastructure.
To address this, Microsoft built Azure Command Launcher for Java (jaz), a lightweight command-line tool that wraps and invokes java (e.g., jaz -jar myapp.jar). This drop-in command automatically tunes your JVM across dedicated or resource-constrained environments, providing safe, intelligent, and observable optimization out of the box.
Designed for dedicated cloud environments—whether running a single workload in a container or on a virtual machine—Azure Command Launcher for Java acts as a fully resource-aware optimizer that adapts seamlessly across both deployment models.
The goal is to make JVM optimization effortless and predictable by replacing one-off manual tuning with intelligent, resource-aware behavior that just works. Where traditional tuning demanded deep expertise, careful experimentation, and operational risk, the tool delivers adaptive optimization that stays consistent across architectures, environments, and workload patterns.
| Traditional Approach | jaz Approach |
|---|---|
| Build-time optimization and custom builds | Runtime tuning that adapts to any Java app |
| Requires JVM expertise and manual experimentation | Intelligent heuristics detect and optimize safely |
| Configuration drift across environments | Consistent, resource-aware behavior |
| High operational risk | Safe rollback and zero configuration risk |
By replacing static configuration with dynamic, resource-aware tuning, jaz simplifies Java performance optimization across all Azure compute platforms.
The tool preserves user-provided JVM configuration wherever applicable.
By default, the environment variable JAZ_IGNORE_USER_TUNING is set to 0, which means user-specified flags such as -Xmx, -Xms, and other tuning options are honored.
If JAZ_IGNORE_USER_TUNING=1 is set, the tool ignores most options beginning with -X to allow full optimization. As an exception, selected diagnostic and logging flags (e.g. -Xlog) are always passed through to the JVM.
Example (default behavior with user tuning preserved):
jaz -Xmx2g -jar myapp.jar
In this mode, the tool:
Beyond preserving user input, the tool performs resource-aware heap sizing using system or cgroup memory limits. It also detects the JDK version to enable vendor- and version-specific optimizations—including enhancements available in Microsoft Build of OpenJDK, Eclipse Temurin, and others.
This ensures consistent, cross-platform behavior across both x64 and ARM64 architectures.
The tool introduces adaptive telemetry designed to maximize inference while minimizing interference.
Core principle: more insight, less intrusion.
Telemetry is used internally today to guide safe optimization decisions. The system automatically adjusts data collection depth across runtime phases (startup and steady state), providing production-grade visibility without impacting application performance.
Future phases will extend this with event-driven telemetry, including anomaly-triggered sampling for deeper insight when needed.
The tool’s memory-management approach has evolved beyond conservative, resource-aware heuristics such as the default ergonomics in OpenJDK. It now applies an adaptive model that incorporates JDK-specific knowledge to deliver stable, efficient behavior across diverse cloud environments. Each stage builds on production learnings to improve predictability and performance.
The initial implementation established predictable memory behavior for both containerized and VM-based workloads.
The version available in Public Preview extends this foundation with JDK introspection and vendor-specific awareness.
This integration also informs the tool’s telemetry model, where the same balance between visibility and performance guides data-collection strategy.
In observability systems, inference (the ability to gain insight) and interference (the performance impact of measurement) are always in tension. The tool’s telemetry model balances these forces by adjusting its sampling depth based on runtime phase and workload stability.
| Telemetry Mode | Insight (Inference) | Runtime Impact (Interference) | Use Case |
|---|---|---|---|
| High-Frequency | Deep event correlation | Noticeable overhead | Startup diagnostics |
| Low-Frequency | Basic trend observation | Negligible overhead | Steady-state monitoring |
| Adaptive (tool) | Critical metrics collected on demand | Minimal overhead | Production optimization |
A useful comparison is Native Memory Tracking (NMT) in the JDK. While NMT exposes multiple levels of visibility into native memory usage (summary, detail, or off), most production systems rely on summary mode for stability. Similarly, the tool embraces this tiered approach to observability but applies it dynamically by adjusting telemetry intensity rather than relying on static configuration.
| Phase | Data Collection Depth | Inference Level | Interference Level | Comparable NMT Concept |
|---|---|---|---|---|
| Startup | Focused sampling of GC and heap sizing | High | Moderate | summary with higher granularity |
| Steady-State | Aggregated metrics and key anomalies | Medium | Low | summary |
| Future (Event-Driven) | Planned reactive sampling | Targeted | Minimal | Conceptually detail-on-demand |
The following visualizations illustrate how telemetry tracks memory behavior in sync with its adaptive sampling cadence. As the runtime transitions from startup to steady state, the tool automatically increases the time between samples—reducing interference while preserving meaningful visibility into trends.

Figure 1 shows how the interval between telemetry samples (Sample Δt) increases as the runtime stabilizes. Both Process A and Process B exhibit the same pattern: a high-frequency sampling phase during startup followed by a steady-state phase with larger, consistent sampling intervals. This adaptive behavior reduces interference while maintaining meaningful visibility into runtime behavior.

Figure 2 shows the resident set size (RSS) of the Java process in GB, captured through adaptive telemetry. The visualization highlights the expected growth during warmup and stabilization during steady state. By adjusting sampling frequency intelligently, the tool provides production-grade observability of memory behavior without disrupting application performance or exceeding resource budgets.
All SPEC JBB measurements in this study were run on Azure Linux/Arm64 virtual machines.

- The tool delivers double-digit peak throughput gains across all configurations tested, well over 17% relative to out-of-the-box JVM ergonomics.
- SLA-constrained performance improves as well, with the largest gains (around 10% in our tests) observed when paired with the Microsoft Build of OpenJDK. In SPEC JBB2015, SLA performance represents latency-sensitive throughput: how much work is sustained while meeting service-level response-time requirements.
- No regressions or stability issues were observed across repeated trials and JDK versions.
These measurements were run in containers with the tool applying cgroup-aware ergonomics under fixed CPU and memory limits.
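To illustrate what cgroup-aware means here: a launcher can read the container's memory limit (for example /sys/fs/cgroup/memory.max on cgroup v2) and budget the heap as a fraction of it. The helper name and the 75% ratio below are assumptions for illustration, not the tool's published heuristic:

```shell
# Toy cgroup-aware heap sizing: derive -Xmx from a container memory limit.
# The 75% ratio is an illustrative assumption, not jaz's actual formula.
heap_from_limit() {
  limit_bytes=$1
  echo "-Xmx$(( limit_bytes * 75 / 100 ))"
}

# On cgroup v2 hosts the limit is readable from /sys/fs/cgroup/memory.max.
heap_from_limit $(( 2 * 1024 * 1024 * 1024 ))   # 2 GiB limit -> -Xmx1610612736
```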
The Spring PetClinic REST backend provides a lighter-weight request/response workload that complements SPEC JBB2015 rather than replacing it. It exposes CRUD endpoints for owners, pets, vets, visits, pet types, and specialties (GET/POST/PUT/DELETE), documented via Swagger/OpenAPI, and backed by H2 by default. The repository includes Apache JMeter test plans under src/test/jmeter, which we run headless to generate steady read/write traffic across the API surfaces.
To evaluate stability under resource contention, we also ran stress-ng in a companion container to introduce CPU and memory pressure alongside the JMeter-driven workload.
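A sketch of that harness, assuming the JMeter plan path and stress-ng intensities shown here (the plan filename and numeric values are illustrative, not the exact configuration used):

```shell
# Headless JMeter run against the PetClinic REST API; plan name is illustrative.
jmeter -n -t src/test/jmeter/petclinic.jmx -l results.jtl

# Companion container workload: CPU and memory pressure during the test window.
stress-ng --cpu 4 --vm 2 --vm-bytes 512M --timeout 10m
```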
| Performance Domain | Impact | Stability Assessment |
|---|---|---|
| Mean Response Time | 1–5% improvement | Consistent across scenarios |
| Tail Latency (90–99th) | Neutral/minimal | Maintained under stress—including stress-ng |
| Throughput Capacity | No degradation | Scales with resources |
| Stress Resilience | Excellent | Production-ready |
| Memory Efficiency | Resource-aware | Validated from 2–8GB |
In other words, the tool is effectively throughput-neutral on this microservice workload while delivering small improvements in mean response time and keeping tail latency and stability intact—even when additional load is introduced via stress-ng.
Taken together, the SPEC JBB2015 and Spring PetClinic REST results confirm that the tool enhances throughput, preserves tail latency, and maintains robust performance across both VM-based and containerized deployments—even under additional system pressure.
The tool supports a safe, incremental adoption strategy designed for production environments:
| Phase | Approach | Command Example | Risk Level |
|---|---|---|---|
| Coexistence | Respect existing tuning | jaz -Xmx2g -jar myapp.jar | Minimal |
| Optimization | Remove manual flags | jaz -jar myapp.jar | Low |
| Validation | Verify tuning safely | JAZ_DRY_RUN=1 jaz -jar myapp.jar | Zero |
This phased design lets teams adopt the tool at their own pace—and roll back at any time without risk.
When invoked, the launcher detects the runtime environment and JDK version, applies resource-aware JVM configuration, and launches the optimized JVM in which the application executes. Built-in telemetry operates with low overhead, providing observability without affecting startup or runtime performance.

jaz runtime lifecycle showing instant setup, JVM bring-up, and continuous telemetry

Figure 4 illustrates the end-to-end lifecycle of jaz during application startup and execution. The tool performs an instant setup phase (detecting the environment and JDK version, applying resource-aware configuration, and preparing safe JVM arguments), all within milliseconds. It then launches the optimized JVM, after which the Java application begins execution in a tuned environment. During the continuous runtime phase, low-overhead, event-driven telemetry runs concurrently, providing observability with minimal interference.
- Immediate throughput improvements (17%+)
- Consistent, resource-aware behavior across JDKs
- Safe, incremental adoption model
- Foundation for adaptive, self-optimizing behavior through runtime awareness
While telemetry is currently used only to improve the launcher internally, it lays the groundwork for future self-healing features that can inform runtime components such as GC ergonomics and heap-sizing heuristics.
Azure Command Launcher for Java turns JVM optimization from a specialized task into a built-in capability—bringing Java simplicity, safety, and performance to the Azure cloud.
For installation, configuration, supported JDKs, and environment variables, see the Microsoft Learn documentation:
https://learn.microsoft.com/en-us/java/jaz/overview
A forthcoming performance analysis blog will present detailed results from extended performance testing, covering heap and GC behavior, and scaling trends across Azure VMs.
The post From Complexity to Simplicity: Intelligent JVM Optimizations on Azure appeared first on Microsoft for Java Developers.
Before the rise of microservices, Java applications were typically deployed as Java EE artifacts (WARs or EARs) on managed application servers. Ops teams were responsible for configuring and tuning the JVM, often on powerful servers that hosted multiple applications on a single Java EE application server instance.
With the move to cloud-native microservices, every service now runs independently with its own JVM and in its own dedicated container or virtual machine. Each service defines its own CPU and memory boundaries, and with that, its JVM tuning parameters. This shift transferred tuning responsibilities from centralized Ops teams to individual developer teams, creating complexity and inconsistency across environments.
Bradesco Bank is one example amongst thousands of customers that have gone through this shift. One of the top five largest banks in Latin America with over $300 billion (USD) in assets, Bradesco has built much of its backend systems on Java and the JVM and now runs significant back-end operations on Azure Red Hat OpenShift (ARO) environments. Bradesco Bank processes billions of transactions every day, supported by tens of thousands of JVMs with critical Java applications at scale.
“In our proof of concept, Azure Command Launcher for Java delivered exactly the kind of operational standardization we needed as we prepared to scale Java workloads on Azure. Early tests showed strong potential for reducing waste and simplifying performance tuning.” – Thiago Mendes, Solution Architect at Bradesco Bank
Without proper JVM tuning, development and operations teams like those at Bradesco Bank risk running into:
Azure Command Launcher for Java, in Private Preview since May 2025, simplifies and automates JVM configuration for cloud workloads. It works as a drop-in replacement for the standard java command and is compatible with any Azure-supported JDK, version 8 or later.
Throughout the Private Preview of the Azure Command Launcher for Java, we met with several customers and found that about 20% of Java workloads on containers were being manually misconfigured in production. This led to significant resource waste, due to JVMs being tuned to values much lower than the resource limits provided to their Kubernetes deployments, resulting in unnecessary horizontal scaling to account for the increasing processing demands.
DevOps teams want consistent, battle-tested and worry-free JVM tuning today. That’s where Azure Command Launcher for Java steps in. Without changing your code or adopting a new runtime, teams simply replace their usual “java -jar” command with Azure Command Launcher for Java and gain efficient, smarter defaults plus standardized tuning across services. It’s a practical alternative for teams that want to preserve their existing JVM investments while bringing them under stronger operational control.
No code changes, no lock-in. Just replace:
java -Xmx1024m -jar myapp.jar
with:
jaz -jar myapp.jar
And Azure Command Launcher for Java manages the JVM configuration automatically.
By default, the tool respects any tuning flags the user provides. If it detects manual JVM settings, like -Xmx, it steps aside and does not apply its own tuning. For workloads with no tuning flags, the tool automatically uses its recommended configuration.
If operators want the tool to override manual tuning, they can enforce this behavior with:
JAZ_IGNORE_USER_TUNING=1
To return control to user-defined flags, set the variable back to:
JAZ_IGNORE_USER_TUNING=0
This approach keeps adoption safe, gradual, and fully reversible.
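The precedence rule above can be modeled as a tiny decision function. This is an illustrative sketch of the documented behavior, not jaz source code, and the flag patterns checked are assumptions:

```shell
# Sketch of jaz's documented precedence: respect manual tuning flags unless
# the operator sets JAZ_IGNORE_USER_TUNING=1; otherwise apply jaz defaults.
decide_tuning() {
  case "$1" in
    *-Xmx*|*-Xms*|*-XX:*)
      if [ "${JAZ_IGNORE_USER_TUNING:-0}" = "1" ]; then
        echo "jaz-managed"    # operator forced jaz to override manual flags
      else
        echo "user-managed"   # jaz steps aside and respects manual tuning
      fi
      ;;
    *)
      echo "jaz-managed"      # no manual flags: jaz applies its defaults
      ;;
  esac
}

decide_tuning "-Xmx1024m -jar myapp.jar"                                # user-managed
( JAZ_IGNORE_USER_TUNING=1; decide_tuning "-Xmx1024m -jar myapp.jar" )  # jaz-managed
decide_tuning "-jar myapp.jar"                                          # jaz-managed
```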
Out of the box, Azure Command Launcher for Java applies sensible JVM defaults that are optimized for dedicated containerized and virtualized environments. These defaults are based on widely accepted best practices and insights gathered from real-world Java workloads on Azure.
This allows teams to start with a configuration that is more closely aligned with modern cloud deployment models, helping reduce manual setup and the risk of configuration drift across services.
To understand our approach for these choices, please see this article from our JVM Performance Architect, Monica Beckwith.
While the Public Preview focuses on standardization and improved default configurations for immediate benefits, the roadmap includes more advanced and intelligent capabilities for the future.
Planned features include adaptive learning based on telemetry, where the tool will gradually analyze JVM telemetry over time and suggest further optimizations. This capability will be introduced in later releases after further validation with customers and partners.
Additionally, we will also incorporate features like Application Class Data Sharing (JEP 310) so users can benefit automatically. Long term, we will enable Project Leyden.
Azure Command Launcher for Java works with all Microsoft compute services, including but not limited to:
Linux binaries are available for x64 and ARM64 architectures, and the launcher also comes pre-bundled in the latest Microsoft Build of OpenJDK container images.
The Public Preview is now available to all customers. To get started, visit the documentation page for more information on how to configure and use the tool.
The post Announcing the Public Preview of Azure Command Launcher for Java appeared first on Microsoft for Java Developers.
This is the year of the agents. Developers need more control than ever—both in how they work with agents and how agents adapt to their workflows. After introducing Custom Agents in VS Code, we’re now bringing them to JetBrains and Eclipse. With new Custom Agents and Subagents, developers can set tailored instructions while subagents operate in clean, isolated contexts for focused, accurate execution. And with the new Plan Mode, developers can tackle complex problems through structured, step-by-step planning with seamless task handoffs.
Custom Agents give developers the ability to shape Copilot’s behavior around their unique coding patterns, project requirements, or domain-specific rules. You can define your own instructions, constraints, and tools, turning Copilot into a configurable assistant that works the way you do — not the other way around.
Isolated Subagents bring a new level of precision to multi-step tasks. Each subagent operates in a clean context to deliver more accurate reasoning and fewer distractions. Whether you’re fixing tests, refactoring code, or generating documentation, subagents ensure Copilot stays laser-focused on the task at hand.
Plan Mode elevates Copilot from a passive helper to an orchestrated problem-solver. It breaks complex tasks into clear, sequential steps — planning, executing, and validating as it goes. This ensures more reliable outcomes, better visibility into the solution path, and smoother handling of multi-stage engineering tasks.
In addition to the new agentic capabilities, we are also bringing improvements to the fundamental experience. This ensures developers can perform their most essential daily coding tasks with GitHub Copilot.
Next Edit Suggestions, already released in JetBrains, is now available in Eclipse. This feature proactively surfaces the next best actions — code edits, improvements, or cleanup — helping developers maintain momentum and quickly apply iterative changes without losing context.
Eclipse now gains Coding Agent support, enabling developers to offload asynchronous coding tasks to an autonomous background agent. You can delegate fixes, transformations, or generation tasks, and Copilot will complete them while you continue working elsewhere in the IDE.

Finally, all of this is powered by much smarter models, and we’re moving quickly to bring them to JetBrains and Eclipse. OpenAI’s GPT-5.1, GPT-5.1-Codex, and GPT-5.1-Codex-Mini (Preview), the variants of GPT-5 optimized specifically for agentic software engineering, were rolled out last week across VS Code, JetBrains, Eclipse, Xcode, and the GitHub Copilot CLI, delivering significant quality improvements in chat, agents, and code operations.
Together, these innovations deliver a truly adaptive Copilot experience—faster, smarter, and designed for the way Java developers work.
You can download our extensions from the following links:
Your feedback is essential to our product. Let us know how we can continue improving.
In-product feedback: Use the feedback options within your IDE
Feedback Repositories by IDE
JetBrains
Eclipse
The post Introducing Major New Agentic Capabilities for GitHub Copilot in JetBrains and Eclipse appeared first on Microsoft for Java Developers.
Technology is accelerating faster than ever, and developers are once again at the helm, shaping the future of applications, intelligence, and enterprise systems. With the rise of large language models (LLMs), agent-oriented architectures, and AI-driven development paradigms, Java developers find themselves in a uniquely powerful position to modernize code already powering critical systems, and to build the software of tomorrow.
Java remains one of the world’s most trusted languages for enterprise, cloud, mobile and mission-critical systems. As James Governor, from developer analyst firm RedMonk, recently said, “Java has maintained relevance through all of the waves that we’ve seen over the last couple of decades – it is the exemplar of a general purpose programming language and runtime. […] The idea that somehow Java isn’t going to play well with AI doesn’t make any sense.”
At Microsoft JDConf 2026, we’ll explore how Java is evolving to power agentic, intelligent applications.
We are thrilled to announce that Microsoft JDConf 2026 will take place on April 8-9, 2026, with live-streaming across multiple time zones to support our global community.
This edition is all about agents, intelligence, and modernization: how Java developers can modernize legacy systems with GitHub Copilot, then leverage LLMs, build intelligent and agentic features, integrate them into existing systems, and scale them in production with Agentic DevOps. We’ll dive into not just AI assistive tooling, but agentic applications that act, coordinate, and drive outcomes.
We are working on Microsoft JDConf 2026 with a focus on showcasing:
The heartbeat of JDConf is the community. We’re inviting Java developers, architects, engineers and thought leaders to submit proposals and share their experience with the world.
Head over to our speaker submission portal (coming soon) to submit your session. Stay tuned for submission deadlines, format guidelines and speaker benefits.
JDConf 2026 offers a unique moment for Java developers to be at the forefront of the agentic AI wave. With the depth of the Java ecosystem, the power of modern tooling and the scale of the cloud, there’s no better time to innovate, to re-imagine, and to build intelligent applications that truly act.
Mark your calendar for April 8-9, 2026. Stay tuned for registration, the full agenda, sessions and more.
Let’s code the future of intelligent agents, in Java.
The post JDConf 2026 Is Coming With Modern Solutions for an Agentic World appeared first on Microsoft for Java Developers.
We are happy to announce the latest October 2025 patch & security update release for the Microsoft Build of OpenJDK. Download and install the binaries today.
Check our release notes page for details on fixes and enhancements. The source code of our builds is available now on GitHub for further inspection: jdk25u, jdk21u, jdk17u, jdk11u.
We continue to provide support on Azure and internally at Microsoft for OpenJDK 8 binaries of Eclipse Temurin built by the Eclipse Adoptium project. To facilitate its usage, we ship container images of OpenJDK 8 on top of Azure Linux and Ubuntu. Refer to our documentation.
Contact openjdk-support@microsoft.com.
The post Java OpenJDK October 2025 Patch & Security Update appeared first on Microsoft for Java Developers.
We walk you through foundational ideas first and then move into hands-on examples:
- Getting started fast – Spin up your first AI-powered app using GitHub Codespaces.
- Core generative AI techniques – Learn the basics behind completions and chat flows. See how function calling connects models to real tools and services. Get an introduction to Retrieval-Augmented Generation (RAG) for document-aware applications.
- Simple, focused application – Explore small projects that illustrate different capabilities, such as combining text and image generation, running models locally with the Azure AI Foundry Local experience, and wiring tools with the Model Context Protocol (MCP).
- Responsible AI – Apply safety features from GitHub Models and Azure services. We cover content filtering, bias awareness, and practical checks you can add before deployment.
- MCP in Java – Understand the Model Context Protocol and how it fits Java workflows. Learn what it means to implement an MCP server, connect a Java client, and use tools through a consistent protocol.
- Context engineering for Java – Improve results with clean prompts, structured context, and simple evaluation steps. We discuss when to persist context and when to compute it on the fly.
- Modernization with AI assistance – See how the GitHub Copilot App Modernization experience helps upgrade and migrate Java applications. Then follow a guided flow to deploy to Azure with AI-assisted configuration.
- LangChain4j essentials – Start a basic project that targets OpenAI-compatible endpoints, then build a small agent with tools and memory to understand the moving parts.
- Running GenAI in containers – Review when to use on-demand GPUs for inference and training. Learn how dynamic sessions in Azure Container Apps support code interpreters and short-lived, cost-aware execution.
Each video is short and focused. Watch them in order if you are new to the space, or skip into the topics that match your immediate needs.
The series uses services and libraries that many Java teams already rely on:
- OpenAI and GitHub Models
- LangChain4j for building Java-based AI applications with open-source patterns
- Model Context Protocol to connect tools and services through a common protocol
Where it helps, we use the official OpenAI Java SDK to target both OpenAI and Azure OpenAI endpoints with consistent code paths and Azure-aware authentication and security options.
This series reflects ongoing work with the Java open-source community. The Microsoft Java advocacy and engineering teams continue to contribute to projects like LangChain4j and Spring AI, improve Azure and OpenAI integrations, and provide open-source examples that run locally and on the cloud. Your feedback from conferences, meetups, issue threads, and customer projects shaped each of the outlines for these videos.
The playlist, code links, and references are available on the Microsoft Developer YouTube channel:
https://aka.ms/java-ai-beginners
Also subscribe to the Microsoft for Java Developers YouTube Channel
If you have suggestions or want a deeper dive on a specific area, let me know—your input will guide future installments.
The post Java and AI for Beginners: a practical video series for Java appeared first on Microsoft for Java Developers.
An MCP Registry is a directory of Model Context Protocol (MCP) servers. For users of JetBrains IDEs and Eclipse, you can now configure your MCP Registry and browse available MCP servers directly within your IDE. This greatly streamlines setup and provides a seamless experience for discovering and managing MCP servers right from the editor.
As an enterprise or organization owner, you can configure an MCP Registry URL along with an access control policy. These settings determine which MCP servers your developers can see and run in supported IDEs with GitHub Copilot.
When combined with the Registry only policy, it prevents any usage of MCP servers (at runtime) that are not defined in the internal registry.
In JetBrains IDEs:
In Eclipse:
Allowlist Controls are available only for Copilot Business and Copilot Enterprise customers.
For setup instructions and registry format details, see the official documentation.
You can try these new features today in the nightly release of Copilot for JetBrains, and the pre-release versions of Copilot for Eclipse. Please install from:
You will also need to have a valid Copilot license.
We value your feedback! Share your experience through the following channels:
Note: These features are currently in preview and are subject to change.
The post MCP Registry and Allowlist Controls for Copilot in JetBrains and Eclipse Now in Public Preview appeared first on Microsoft for Java Developers.