Question
My MinIO uploads are running slower than expected. How do I diagnose the root cause and optimize upload performance? What are the common bottlenecks and how can I resolve them?
Answer
Slow upload performance in MinIO can stem from various bottlenecks including network, storage, CPU, or configuration issues. Here’s a systematic approach to diagnose and resolve performance problems:
1. Initial Performance Assessment
Establish Baseline Metrics
```bash
# Test upload performance with mc client
time mc cp large-file.bin myminio/test-bucket/

# Test with multiple files
time mc mirror local-folder/ myminio/test-bucket/ --overwrite

# Use dd to create test files of different sizes
dd if=/dev/zero of=test-1mb.bin bs=1M count=1
dd if=/dev/zero of=test-100mb.bin bs=1M count=100
dd if=/dev/zero of=test-1gb.bin bs=1M count=1024
```
Monitor Real-time Performance
```bash
# Generate a Prometheus scrape config for MinIO metrics
mc admin prometheus generate myminio > prometheus.yml

# Check active connections and throughput
mc admin trace myminio --verbose

# Monitor system resources
htop
iotop -ao
nethogs
```
2. Network Diagnostics
Bandwidth Testing
```bash
# Test network bandwidth between client and server
iperf3 -s                              # On MinIO server
iperf3 -c <minio-server-ip> -t 30      # On client

# Test with multiple streams
iperf3 -c <minio-server-ip> -P 4 -t 30

# Test UDP performance
iperf3 -c <minio-server-ip> -u -b 1G
```
Network Configuration Checks
```bash
# Check network interface settings
ethtool eth0

# Verify MTU settings
ip link show eth0

# Check for packet loss
ping -c 100 <minio-server-ip>

# Monitor network utilization
sar -n DEV 1 10
```
Network Optimization
```bash
# Increase network buffer sizes
echo 'net.core.rmem_max = 134217728' >> /etc/sysctl.conf
echo 'net.core.wmem_max = 134217728' >> /etc/sysctl.conf
echo 'net.ipv4.tcp_rmem = 4096 87380 134217728' >> /etc/sysctl.conf
echo 'net.ipv4.tcp_wmem = 4096 65536 134217728' >> /etc/sysctl.conf
echo 'net.core.netdev_max_backlog = 5000' >> /etc/sysctl.conf

# Apply changes
sysctl -p
```
3. Storage Performance Analysis
Disk I/O Testing
```bash
# Test sequential write performance
fio --name=seqwrite --ioengine=libaio --iodepth=32 --rw=write \
    --bs=1M --direct=1 --size=1G --numjobs=1 --runtime=60 \
    --group_reporting --filename=/opt/minio/data1/test

# Test random write performance
fio --name=randwrite --ioengine=libaio --iodepth=32 --rw=randwrite \
    --bs=4K --direct=1 --size=1G --numjobs=4 --runtime=60 \
    --group_reporting --filename=/opt/minio/data1/test

# Test multiple drives in parallel
for i in {1..4}; do
  fio --name=drive$i --ioengine=libaio --iodepth=32 --rw=write \
      --bs=1M --direct=1 --size=1G --numjobs=1 \
      --filename=/opt/minio/data$i/test &
done
wait
```
Storage Monitoring
```bash
# Monitor disk I/O
iostat -x 1 10

# Check disk utilization
df -h
du -sh /opt/minio/data*

# Monitor disk queues and latency
sar -d 1 10

# Check for storage errors
dmesg | grep -i error
```
4. MinIO Configuration Optimization
Server Configuration Tuning
```bash
# /etc/default/minio environment optimizations
# (availability of some settings varies by MinIO release)

# Cap concurrent S3 API requests and their queue deadline
MINIO_API_REQUESTS_MAX=10000
MINIO_API_REQUESTS_DEADLINE=10s

# Enable compression for better network utilization
MINIO_COMPRESS=on
MINIO_COMPRESS_EXTENSIONS=".txt,.log,.csv,.json,.tar"

# Disk cache (legacy/gateway releases only; deprecated in current MinIO)
MINIO_CACHE_DRIVES="/tmp/cache1,/tmp/cache2"
MINIO_CACHE_EXCLUDE="*.pdf,*.mp4"
```
Systemd Service Optimization
```ini
# /etc/systemd/system/minio.service optimizations

[Service]
# Increase file descriptor limits
LimitNOFILE=1048576

# Increase process limits
LimitNPROC=1048576

# Optional: real-time CPU scheduling (use with care on shared hosts)
CPUSchedulingPolicy=rr
CPUSchedulingPriority=99

# Memory settings (MemoryLimit= is deprecated in favor of MemoryMax=)
MemoryMax=16G
MemorySwapMax=0

# I/O settings (real-time I/O class; priority range is 0-7)
IOSchedulingClass=realtime
IOSchedulingPriority=4
```
5. Application-Level Optimization
Multipart Upload Configuration
```bash
# Register the MinIO endpoint (modern mc uses `mc alias set`
# instead of the legacy `mc config host add`)
mc alias set myminio http://localhost:9000 access-key secret-key
```
Note that mc does not expose per-alias multipart-threshold settings; part size and upload parallelism are controlled client-side, as in the SDK examples below.
Concurrent Upload Strategies
Go Example with Optimized Uploads:
```go
package main

import (
	"context"
	"fmt"
	"io"
	"sync"

	"github.com/minio/minio-go/v7"
)

func optimizedUpload(client *minio.Client, bucketName, objectName string, reader io.Reader, objectSize int64) error {
	options := minio.PutObjectOptions{
		PartSize:       64 * 1024 * 1024, // 64MB parts
		NumThreads:     4,                // Parallel part uploads
		ContentType:    "application/octet-stream",
		SendContentMd5: true,
	}

	_, err := client.PutObject(context.Background(), bucketName, objectName,
		reader, objectSize, options)
	return err
}

func concurrentUploads(client *minio.Client, files []string) {
	var wg sync.WaitGroup
	semaphore := make(chan struct{}, 10) // Limit concurrent uploads

	for _, file := range files {
		wg.Add(1)
		go func(filename string) {
			defer wg.Done()
			semaphore <- struct{}{}
			defer func() { <-semaphore }()

			// Upload logic here, e.g. a call to optimizedUpload
			fmt.Printf("Uploading %s\n", filename)
		}(file)
	}

	wg.Wait()
}
```
Python Example with Optimization:
```python
from minio import Minio
from concurrent.futures import ThreadPoolExecutor, as_completed
import os

def upload_file_optimized(client, bucket, file_path):
    """Upload file with optimized settings"""
    file_size = os.path.getsize(file_path)

    # Use a larger part size for better throughput on big files
    part_size = 64 * 1024 * 1024  # 64MB

    with open(file_path, 'rb') as file_data:
        client.put_object(
            bucket,
            os.path.basename(file_path),
            file_data,
            file_size,
            part_size=part_size
        )

def concurrent_upload(client, bucket, file_list, max_workers=10):
    """Upload multiple files concurrently"""
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        futures = {
            executor.submit(upload_file_optimized, client, bucket, file_path): file_path
            for file_path in file_list
        }

        for future in as_completed(futures):
            file_path = futures[future]
            try:
                future.result()
                print(f"Successfully uploaded {file_path}")
            except Exception as e:
                print(f"Failed to upload {file_path}: {e}")
```
6. System-Level Optimizations
CPU and Memory Tuning
```bash
# Set CPU governor to performance
echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

# Disable Intel turbo boost for consistent clock speeds (optional)
echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo

# Memory optimization: disable transparent huge pages
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag

# Swappiness optimization
echo 'vm.swappiness=1' >> /etc/sysctl.conf
sysctl -p
```
I/O Scheduler Optimization
```bash
# Set I/O scheduler for SSDs ("noop" on older kernels)
echo none > /sys/block/sda/queue/scheduler

# Set I/O scheduler for HDDs ("deadline" on older kernels)
echo mq-deadline > /sys/block/sdb/queue/scheduler

# Optimize queue depth
echo 32 > /sys/block/sda/queue/nr_requests

# Disable write barriers (only with UPS/battery-backed storage)
mount -o remount,nobarrier /opt/minio/data1
```
7. Load Balancer Optimization
HAProxy Configuration
```
# Optimized HAProxy configuration for MinIO
global
    # Buffer sizes (these tunables belong in the global section)
    tune.bufsize 32768
    tune.maxrewrite 8192

backend minio_servers
    balance roundrobin
    option httpchk GET /minio/health/live

    # Connection optimization
    option tcp-check
    option log-health-checks

    # Timeout settings
    timeout connect 5s
    timeout server 60s

    server minio1 minio-node-1:9000 check maxconn 1000
    server minio2 minio-node-2:9000 check maxconn 1000
    server minio3 minio-node-3:9000 check maxconn 1000
    server minio4 minio-node-4:9000 check maxconn 1000
```
Nginx Optimization
```nginx
upstream minio_servers {
    least_conn;
    server minio-node-1:9000 max_fails=2 fail_timeout=30s;
    server minio-node-2:9000 max_fails=2 fail_timeout=30s;
    server minio-node-3:9000 max_fails=2 fail_timeout=30s;
    server minio-node-4:9000 max_fails=2 fail_timeout=30s;
}

server {
    # Buffer optimization
    client_body_buffer_size 1M;
    client_max_body_size 1G;
    client_body_timeout 300s;

    # Proxy settings: stream uploads straight through to MinIO
    proxy_buffering off;
    proxy_request_buffering off;
    proxy_http_version 1.1;
    proxy_set_header Connection "";

    location / {
        proxy_pass http://minio_servers;
    }
}
```
8. Monitoring and Troubleshooting Tools
Performance Monitoring Script
```bash
#!/bin/bash
echo "=== MinIO Performance Monitor ==="
echo "Timestamp: $(date)"
echo

# System resources
echo "CPU Usage (user %):"
top -bn1 | grep "Cpu(s)" | awk '{print $2}'

echo "Memory Usage:"
free | awk 'NR==2{printf "%.2f%%\n", $3*100/$2}'

echo "Disk Usage:"
df -h | grep minio

echo "Network Connections:"
netstat -an | grep :9000 | wc -l

# MinIO-specific metrics (trace streams forever, so sample for 10s)
echo "MinIO API calls (10-second sample):"
timeout 10 mc admin trace myminio --json | jq -r '.api.name' | sort | uniq -c | sort -nr | head -10

echo "Active HTTP Requests:"
mc admin info myminio | grep "HTTP requests"
```
Automated Performance Testing
```bash
#!/bin/bash
BUCKET="perf-test"
TEST_FILES_DIR="/tmp/test-files"

# Create test files
mkdir -p "$TEST_FILES_DIR"
for size in 1M 10M 100M 1G; do
  dd if=/dev/zero of="$TEST_FILES_DIR/test-$size.bin" bs=$size count=1 2>/dev/null
done

# Performance tests
echo "=== Upload Performance Test ==="
for file in "$TEST_FILES_DIR"/*; do
  echo "Testing $(basename "$file"):"
  time mc cp "$file" myminio/$BUCKET/
done

# Cleanup
mc rm --recursive --force myminio/$BUCKET/
rm -rf "$TEST_FILES_DIR"
```
9. Common Performance Issues and Solutions
Issue: High Latency
- Cause: Network latency, disk I/O wait
- Solution: Use local caching, optimize network path, upgrade storage
Issue: Low Throughput
- Cause: Small part sizes, insufficient parallelism
- Solution: Increase part size, use concurrent uploads
Issue: CPU Bottleneck
- Cause: Encryption overhead, compression
- Solution: Hardware acceleration, distribute load
Issue: Memory Pressure
- Cause: Large concurrent uploads, caching
- Solution: Increase RAM, optimize buffer sizes
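To make the part-size advice concrete, here is a small, hypothetical Python helper (an illustrative sketch, not part of any MinIO SDK) that picks a part size large enough to keep an object within S3's 10,000-part multipart limit while staying at or above a 64 MB floor:

```python
# Sketch: choose a multipart part size for a given object size.
# The 10,000-part cap is the S3 multipart limit; the 64 MB floor
# and the helper itself are illustrative assumptions.

MAX_PARTS = 10_000
MIN_PART_SIZE = 64 * 1024 * 1024  # 64MB floor

def choose_part_size(object_size: int) -> int:
    """Return a part size in bytes that keeps part count <= MAX_PARTS."""
    # Ceiling division: smallest part size that fits within MAX_PARTS parts
    required = -(-object_size // MAX_PARTS)
    return max(MIN_PART_SIZE, required)

if __name__ == "__main__":
    gib = 1024 ** 3
    for size in (100 * 1024 * 1024, 1 * gib, 5 * 1024 * gib):
        part = choose_part_size(size)
        parts = -(-size // part)
        print(f"{size} bytes -> part_size={part}, parts={parts}")
```

A value computed this way can be passed as `PartSize` (Go) or `part_size` (Python) in the SDK examples above.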
10. Recommended Performance Targets
| File Size | Expected Throughput | Optimization Focus |
|---|---|---|
| < 1MB | 1000+ uploads/sec | Reduce latency |
| 1-100MB | 500-1000 MB/s | Optimize throughput |
| > 100MB | 1-10 GB/s | Maximize bandwidth |
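To compare your own measurements against these targets, a small helper (an illustrative sketch, not MinIO tooling; the function name is an assumption) can convert a timed upload into MB/s:

```python
# Sketch: compute upload throughput from bytes transferred and
# elapsed wall-clock time. Uses decimal megabytes (1 MB = 10^6 bytes).

def throughput_mb_s(bytes_transferred: int, seconds: float) -> float:
    """Return throughput in MB/s for one upload."""
    if seconds <= 0:
        raise ValueError("elapsed time must be positive")
    return bytes_transferred / 1_000_000 / seconds

if __name__ == "__main__":
    # Example: a 100 MB object uploaded in 0.2 s -> 500.0 MB/s
    print(throughput_mb_s(100_000_000, 0.2))
```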
Conclusion
Systematic performance optimization requires analyzing all components in the stack. Start with network and storage baseline tests, then optimize configuration, and finally tune application-level parameters. Regular monitoring and testing ensure sustained performance as your workload grows.
Monitor key metrics:
- Network utilization and latency
- Storage IOPS and throughput
- CPU and memory usage
- MinIO API response times
- Error rates and retry patterns
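For the retry patterns mentioned above, uploads are commonly wrapped in exponential backoff so transient errors don't skew throughput numbers. A minimal sketch (illustrative, not a MinIO SDK feature; names and the delay schedule are assumptions):

```python
import time

# Sketch: retry a flaky upload callable with exponential backoff.
# The attempt count and delay schedule are illustrative assumptions.

def upload_with_retry(upload_fn, max_attempts=4, base_delay=0.5):
    """Call upload_fn(), retrying on exception with doubling delays."""
    for attempt in range(max_attempts):
        try:
            return upload_fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: propagate the last error
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...

if __name__ == "__main__":
    calls = {"n": 0}

    def flaky():
        calls["n"] += 1
        if calls["n"] < 3:
            raise ConnectionError("transient failure")
        return "ok"

    # Succeeds on the third attempt
    print(upload_with_retry(flaky, base_delay=0.01))
```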
This methodical approach will help identify and resolve performance bottlenecks in your MinIO deployment.