I am running a load-test scenario with 3 bookies, dedicated SSDs for the journal and ledger disks, and a 5 GB JVM heap with G1GC enabled.
`jvm_opts: -Xmx5g -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:+ParallelRefProcEnabled -XX:+UnlockExperimentalVMOptions -XX:+AggressiveOpts -XX:+DoEscapeAnalysis -XX:ParallelGCThreads=32 -XX:ConcGCThreads=32 -XX:G1NewSizePercent=50 -XX:+DisableExplicitGC -XX:-ResizePLAB -XX:+PrintFlagsFinal -XX:+PrintGC -XX:+PrintGCCause -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps`
I am testing with a 1000-byte record size, ingesting around 50M records with the ingestion rate capped at 100K records/sec.
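For context, a quick back-of-the-envelope sketch of what that load profile works out to (computed purely from the test parameters above, ignoring replication and protocol overhead):

```python
record_size_bytes = 1000
total_records = 50_000_000
rate_limit_per_sec = 100_000

# Sustained ingest bandwidth and total run time implied by the rate limit
throughput_mb_per_sec = record_size_bytes * rate_limit_per_sec / 1_000_000
duration_sec = total_records / rate_limit_per_sec

print(f"{throughput_mb_per_sec:.0f} MB/s for {duration_sec:.0f} s")  # → 100 MB/s for 500 s
```

So each bookie is sustaining on the order of 100 MB/s of incoming entry data for roughly 500 seconds.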
I wanted to understand how the `stats` metric types are reported.
My understanding is that these metrics are recorded in microseconds in the BookKeeper code, and that the reporter (we use the Codahale statsD reporter to collect BookKeeper metrics and sink them into InfluxDB) converts `rates` to per-second values and `durations` to milliseconds.
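A minimal sketch of the conversion I believe is happening, assuming the raw stats are in microseconds and the reporter emits milliseconds (the 1000x factor is my assumption about the reporter's configuration, not something I have verified in the reporter source):

```python
MICROS_PER_MILLI = 1000.0

def duration_us_to_ms(value_us: float) -> float:
    """Convert a latency recorded in microseconds (BookKeeper internal unit,
    per my reading of the code) to the milliseconds I assume the reporter emits."""
    return value_us / MICROS_PER_MILLI

# Example: a raw reading of 2,500,000 us would render as 2500 ms (2.5 s) in the UI
print(duration_us_to_ms(2_500_000))  # → 2500.0
```

If this is right, then a graphed value of 2500 really does mean a 2.5-second latency rather than, say, 2.5 ms of raw microsecond data that was never converted.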
1) Can you confirm whether the final graph values I am seeing in the UI (attached) are in milliseconds or some other unit?
2) If they are in milliseconds, are these numbers in the expected range (see attached image)? To me, 2.5 seconds (2.5K ms) of latency for an add-entry request seems very high.
Any help to understand the metrics is much appreciated.