This stability-focused release delivers significant performance improvements for high-concurrency deployments through singleflight deduplication and reduced lock contention, alongside critical fixes for rolling upgrades, inventory job stability, and data usage capacity reporting. Go 1.25.7 is included with security patches.
RELEASE.2026-02-07 is a performance and stability release. Singleflight deduplication eliminates thundering herd problems across Delta snapshots, table authorization, and LDAP lookups. Lock contention is reduced by replacing traditional mutex patterns with xsync.Map and sharded RWMutex across multiple hot paths. Critical bug fixes resolve rolling upgrade failures from the previous release, inventory job deadlocks, and capacity API returning zero values.
| Environment | Recommendation |
|---|---|
| Upgrading from 2026-02-02 | Upgrade immediately (rolling upgrade fix) |
| High-concurrency deployments | Upgrade soon (performance gains) |
| All other environments | Upgrade at next maintenance window |
Security Updates
Go 1.25.7 Security Update
What
Updated Go runtime to 1.25.7 (released 2026-02-04) with security fixes for:
- `crypto/tls` vulnerabilities
- `crypto/x509` vulnerabilities
- `go` command security issues
- Compiler bug fixes
Availability
Available in this release.
Why
Go runtime security patches must be applied promptly to address known vulnerabilities in TLS handling and certificate validation.
What This Means for Customers
- Security compliance — Latest CVE patches from Go 1.25.7
- No action required — Patches apply automatically with the upgrade
Performance Improvements
Singleflight Deduplication
What
Concurrent requests for the same resource now share a single storage read instead of duplicating work. Applied to:
- Delta snapshot cache — Prevents duplicate reads for the same snapshot
- Table authorization — Coalesces identical policy evaluations
- LDAP IAM store policy lookups — Eliminates redundant LDAP queries under load
Availability
Available in this release.
Why
Under high concurrency, multiple goroutines requesting the same resource simultaneously cause a “thundering herd”—each performs identical work. Singleflight coalescing ensures only one goroutine does the actual work while others wait for the shared result, eliminating cache stampede.
What This Means for Customers
- Lower latency — Reduced duplicate I/O under concurrent access
- Higher throughput — More efficient resource utilization at scale
- LDAP stability — Fewer redundant queries during authentication storms
- No configuration required — Applied automatically
Lock Contention Reduction
What
Replaced sync.RWMutex + map patterns with lock-free and sharded concurrent data structures across multiple hot paths:
| Component | Change |
|---|---|
| Bucket metadata | xsync.Map replaces mutex-guarded map |
| Metacache manager | xsync.Map for concurrent access |
| Tier metrics | Sharded RWMutex (shardlock) |
| Event target list | xsync.Map for target registry |
| Delta-sharing auth | Lock-free authentication state |
| Bucket targets | xsync.Map for target management |
Availability
Available in this release.
Why
Traditional sync.RWMutex patterns become bottlenecks under high concurrency—read locks still contend with write locks. xsync.Map provides lock-free reads and sharded writes, while shardlock distributes write contention across multiple shards.
What This Means for Customers
- Higher concurrency — Reduced lock contention in high-traffic paths
- Lower tail latency — Fewer goroutines blocked on locks
- Better scaling — Performance improves with more concurrent clients
Cached DNS SRV Lookups for LDAP
What
DNS SRV record lookups are now cached with a 5-minute TTL using singleflight-backed caching. Also migrated from deprecated ldap.Dial/DialTLS to ldap.DialURL.
Availability
Available in this release.
Why
LDAP authentication requires DNS SRV lookups to discover directory servers. During login storms, redundant DNS queries add latency and load on DNS infrastructure.
What This Means for Customers
- Faster LDAP authentication — Cached DNS eliminates per-request lookups
- Reduced DNS load — Fewer queries to DNS infrastructure
- Improved reliability — Less sensitive to DNS latency spikes
New Features
Kubernetes Deployment Metadata in Diagnostics
What
Health diagnostics via mc support diag now include deployment metadata for Kubernetes-managed clusters:
- Operator version — AIStor operator version running in the cluster
- Container images — Exact image tags in use
- Helm chart versions — Chart version used for deployment
The AIStor operator writes metadata to a Kubernetes secret, which is synced to MinIO pods and included in diagnostics output.
Availability
Available in this release. No additional configuration required — metadata is automatically collected when the AIStor operator writes to /tmp/minio/deployment.json in MinIO pods.
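For illustration only, the synced metadata file might look like the following. Every field name and value here is hypothetical — the operator's actual schema is not documented in this release note:

```json
{
  "operatorVersion": "v7.0.5",
  "helmChartVersion": "aistor-1.4.2",
  "images": [
    "minio/aistor:RELEASE.2026-02-07"
  ]
}
```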
Why
Troubleshooting Kubernetes deployments requires knowing the exact operator version, container images, and Helm chart versions. Previously this information had to be gathered manually from multiple sources.
What This Means for Customers
- Faster support resolution — All deployment context included in diagnostics
- Simplified troubleshooting — No manual gathering of K8s metadata
- Zero configuration — Works automatically with AIStor operator
Bug Fixes
| Issue | Fix |
|---|---|
| Rolling upgrade failures from handler ID reordering | Handler IDs now appended at end with stability test to prevent future reordering |
| DataUsageInfo API returning zero capacity values | JSON marshalling now occurs after capacity values are populated |
| LDAP/OpenID purge running before providers are ready | IAM purge routines wait for provider initialization |
| Inventory job deadlock and spurious context cancellation | periodicSaver lifecycle encapsulated with explicit Start/Stop methods |
| AppendObject failing in extended naming mode | Correctly uses encoded names for disk ops, original names for encryption |
| Config file parsing failing with spaces around `=` | MINIO_CONFIG_ENV_FILE now trims whitespace, matching systemd behavior |
| Webhook targets sending wrong content type | Now sets Content-Type: application/x-ndjson for log entries |
| Tables staged-to-live recovery using wrong semantics | Changed from roll-forward to roll-back when client never received success |
| v3 metrics bucket limit regression | Reverted bucket limit from 1000 back to 10 |
| Kubernetes JWT validation nil pointer risk | Added safe type assertion checks in ValidateServiceAccountJWT |
| Table metadata pointer cache not invalidated on delete | Cache entries properly cleared on delete operations |
Rolling Upgrade Fix — Critical
The most impactful bug fix: handler IDs in the internal RPC protocol were inadvertently reordered, causing Invalid Handler errors during rolling upgrades from RELEASE.2025-12-20 to RELEASE.2026-02-02. This release adds a stability test to prevent future handler ID reordering.
If you deferred upgrading to 2026-02-02 due to rolling upgrade issues, this release resolves the problem.
Summary
RELEASE.2026-02-07 focuses on operational stability and performance:
- Singleflight deduplication eliminates thundering herd across Delta snapshots, table auth, and LDAP
- Lock contention reduction via `xsync.Map` and sharded RWMutex across six hot paths
- Critical rolling upgrade fix resolves handler ID reordering that broke upgrades from 2025-12-20 to 2026-02-02
- 11 bug fixes addressing inventory deadlocks, capacity API, config parsing, and more
- Go 1.25.7 with security patches for crypto/tls and crypto/x509
For upgrade assistance, contact MinIO support through SUBNET.