TWJUG@LINE Conference Notes: September 5, 2019
Source: Dev.to
Preface
Hello everyone, I am Evan Lin, a Technical Evangelist at LINE Taiwan.
On the evening of 2019‑09‑05 I was delighted to invite the TWJUG community to LINE’s Taipei office for another community gathering.
Speakers
- Shinya Yoshida – LINE Tokyo office
- Yuto Kawamura – Speaker at Kafka Summit 2017
Topics
- ZGC for Future LINE HBase
- Kafka Broker performance degradation by mysterious JVM pause
Event URL – KKTIX:
ZGC for Future LINE HBase – Shinya Yoshida
Shinya Yoshida, who is responsible for HBase‑related processing at LINE, shared how HBase is used in LINE’s services.
HBase is a widely‑used NoSQL store on the JVM that requires low latency and high availability. Because the JVM's stop‑the‑world (STW) garbage collection pauses all application threads, GC becomes a pain point for high‑throughput, low‑latency workloads. The talk covered performance‑tuning techniques and observations.
GC Basics
Garbage collection consists of two main phases (a short code sketch follows the list):
1. Finding the garbage
- Goal – Mark memory that can be reclaimed.
- Algorithms
- Reference counting – Count references to objects; when the count drops to zero the object is eligible for collection.
- Tracing (Mark) – Walk the object graph from GC roots; objects not reachable are considered garbage.
2. Collecting the garbage and defragmenting
- Goal – Reclaim memory and compact the heap.
- Algorithms
- Sweep / Compaction – Reclaim unreachable objects, then move live objects to eliminate fragmentation.
- Copying – Allocate a new region, copy live objects there, and discard the old region. This uses more memory but is usually faster.
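To make the two phases above concrete, here is a minimal, illustrative mark‑and‑sweep sketch in Java. The `HeapObject` model and all names are invented for the example (they are not from the talk): everything reachable from the roots is marked, and whatever remains unmarked is reclaimed.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy model of a heap object holding references to other objects.
class HeapObject {
    final String name;
    final List<HeapObject> references = new ArrayList<>();
    boolean marked = false;          // set during phase 1 ("finding the garbage")

    HeapObject(String name) { this.name = name; }
}

public class TracingGcSketch {
    // Phase 1 (mark): everything reachable from the GC roots is marked live.
    static void mark(List<HeapObject> roots) {
        Deque<HeapObject> worklist = new ArrayDeque<>(roots);
        while (!worklist.isEmpty()) {
            HeapObject obj = worklist.poll();
            if (!obj.marked) {
                obj.marked = true;
                worklist.addAll(obj.references);
            }
        }
    }

    // Phase 2 (sweep): anything left unmarked is garbage; only live objects are kept.
    static List<HeapObject> sweep(List<HeapObject> heap) {
        List<HeapObject> live = new ArrayList<>();
        for (HeapObject obj : heap) {
            if (obj.marked) {
                obj.marked = false;  // reset the mark for the next GC cycle
                live.add(obj);
            }                        // unmarked objects are simply not carried over
        }
        return live;
    }

    public static void main(String[] args) {
        HeapObject a = new HeapObject("a"), b = new HeapObject("b"), c = new HeapObject("c");
        a.references.add(b);                 // a -> b are reachable, c is not
        mark(List.of(a));                    // "a" is the only GC root
        List<HeapObject> live = sweep(List.of(a, b, c));
        System.out.println(live.size() + " objects survive");  // prints "2 objects survive"
    }
}
```

A reference‑counting collector would instead update a counter on every pointer write, and a copying collector would move the marked objects into a fresh region rather than sweeping in place.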
Modern GC Algorithms
Several GC algorithms are in common use today:
| Algorithm | Characteristics |
|---|---|
| G1GC | Region‑based, aims for predictable pause times. |
| ZGC | Low‑latency, scalable to multi‑terabyte heaps; still experimental in Java 11. |
| Shenandoah | Low‑pause concurrent collector (OpenJDK). |
| Parallel/Serial GC | “Old” collectors, higher pause times but simple. |
Choosing a GC
- Understand the trade‑offs of each collector (throughput vs. pause latency, memory overhead, hardware requirements).
- Match the collector to your hardware and workload (CPU cores, heap size, latency requirements); the flags for selecting each collector are listed below.
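For reference (this mapping is standard HotSpot usage, not a detail from the talk), the command‑line flags that select each collector in the table above are:

- Serial GC: `-XX:+UseSerialGC`
- Parallel GC: `-XX:+UseParallelGC`
- G1 (the default since JDK 9): `-XX:+UseG1GC`
- ZGC (experimental in JDK 11): `-XX:+UnlockExperimentalVMOptions -XX:+UseZGC`
- Shenandoah (only in some OpenJDK builds, experimental where present): `-XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC`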
Performance Comparison (ZGC vs. G1GC)
- On a large‑memory configuration (128 GB), ZGC delivered better update and read performance than G1GC.
- Because ZGC was still experimental in Java 11, LINE used it only for internal performance testing. Further experiments and official roll‑out plans will be shared later.
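For readers who want to run a similar comparison themselves, unified GC logging (JDK 9+) records pause durations for both collectors. The invocation below is only a sketch: the application name, log file names, and the 128 GB heap (chosen to echo the configuration above) are illustrative.

```
# G1 baseline on a 128 GB heap
java -Xmx128g -XX:+UseG1GC -Xlog:gc*:file=g1.log:time,uptime,level,tags MyApp

# ZGC, which must be unlocked explicitly on Java 11
java -Xmx128g -XX:+UnlockExperimentalVMOptions -XX:+UseZGC \
     -Xlog:gc*:file=zgc.log:time,uptime,level,tags MyApp
```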
Kafka Broker performance degradation by mysterious JVM pause – Yuto Kawamura
Overview
In this talk, Yuto Kawamura, a senior engineer at LINE, walked through the debugging of a problem that occurred in a live service. Kafka plays a central role in LINE's messaging backend, and more than sixty services rely on it (see this slide). The rest of these notes follow his debugging process for one Kafka‑related incident.
Phenomenon / Problem
Kafka had been processing messages smoothly, but for a period of time the following symptoms suddenly appeared:
- Response‑time degradation for the 99th‑percentile producer latency.
- Zookeeper session timeouts.
When the problem surfaced, the team observed:
- The CPU usage of each running thread spiked.
- GC (stop‑the‑world) pause times increased noticeably.
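As an aside (not part of the talk), the 99th‑percentile produce latency mentioned above is exposed by the broker itself over JMX; a small probe like the following, using a hypothetical host and JMX port, is one way to watch it.

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ProduceLatencyProbe {
    public static void main(String[] args) throws Exception {
        // Hypothetical broker host and JMX port; the broker must be started with JMX enabled.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Broker-side histogram of total Produce request time (queue, local
            // processing, and remote/replication time combined).
            ObjectName produceTime = new ObjectName(
                    "kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Produce");
            Object p99 = mbs.getAttribute(produceTime, "99thPercentile");
            System.out.println("Produce TotalTimeMs p99 = " + p99 + " ms");
        } finally {
            connector.close();
        }
    }
}
```

The same histogram also exposes the mean, max, and other percentiles, which helps distinguish a general slowdown from a long tail.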
Start Narrowing the Scope

Starting from these observations, the speaker walked through his debugging approach:
- Assumption – Some JVM‑level events were making the system too slow.
- Reproduction – He tried to recreate the same environment.
About STW (Stop‑The‑World)
Before a stop‑the‑world operation such as GC can run, the JVM performs two steps:
- Safepoint request – the VM signals that all application threads must stop at their next safepoint.
- Safepoint sync – the VM waits until every running thread has actually reached a safepoint and paused.
To test whether safepoint sync was the source of the excessive delays, the speaker wrote a tool built around a very long nested loop that keeps a thread from reaching a safepoint for a long time, forcing the rest of the JVM to wait. By observing the behavior he could confirm (or refute) his hypothesis and check whether the problem could be reproduced.
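The speaker's actual test code was not shown in these notes, but the underlying trick can be sketched: HotSpot's JIT has historically omitted safepoint polls inside counted `int` loops, so a thread stuck in a long nested counted loop cannot stop until both loops finish, and every safepoint sync has to wait for it. The class below is an illustrative sketch only (loop bounds and names are invented), and the exact behavior depends on the JDK version and on flags such as `-XX:+UseCountedLoopSafepoints`.

```java
public class SafepointDelaySketch {
    // A long nested counted loop: the JIT typically omits safepoint polls inside
    // int-counted loops, so this thread cannot reach a safepoint until both loops
    // finish, which inflates the "safepoint sync" time for the whole JVM.
    static long hotLoop() {
        long sum = 0;
        for (int i = 0; i < 100_000; i++) {
            for (int j = 0; j < 100_000; j++) {
                sum += (long) i * j;   // real work so the JIT cannot drop the loop
            }
        }
        return sum;
    }

    public static void main(String[] args) throws Exception {
        Thread worker = new Thread(() -> System.out.println("sum=" + hotLoop()));
        worker.start();
        // Trigger safepoint operations (System.gc() is a convenient one) while the
        // loop is running, then compare the reported sync times in the safepoint log.
        for (int i = 0; i < 5; i++) {
            Thread.sleep(1000);
            System.gc();
        }
        worker.join();
    }
}
```

Running it with `java -Xlog:safepoint SafepointDelaySketch` on JDK 9+ (or with `-XX:+PrintSafepointStatistics` on JDK 8) shows how long each safepoint took to synchronize while the loop is running.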
The process was iterative: continuously form hypotheses, write test tools, reproduce the issue, and finally verify the assumptions with low‑level observation tools.
What was the root cause? The speaker kept it a secret – readers are encouraged to view the slides for the full story.
Event Summary
The gathering provided an in‑depth look at JVM‑level debugging and Kafka performance issues. Attendees gained valuable knowledge and were invited to explore the slides further and discuss the findings.
Join the “LINE Developer Official Community” to receive first‑hand meetup updates and push notifications about the developer program.
Official account ID: @line_tw_dev
About the “LINE Developer Community Program”
LINE launched the LINE Developer Community Program in Taiwan at the beginning of this year. The program invests long‑term manpower and resources to host internal and external, online and offline developer gatherings, recruitment days, conferences, etc. Over 30 events are planned for the year.
Stay tuned for updates. For details, see the continuously updated schedule:
Prepared by the TWJUG community for the LINE meetup, 2019‑09‑05.