HBase Practice At Xiaomi

[Slide diagrams: RPC timelines for a blocking client vs. a non-blocking client (both single-threaded), and a fault scenario in which one stuck RegionServer leaves every client handler thread (Handler-1/2/3) blocked — availability drops from 66% to 0%.]

Why an async HBase client?
● RegionServer / Master stop-the-world (STW) GC
● Slow RPC to HDFS
● RegionServer crash
● High load
● Network failure
BTW: HBase clients may also suffer from faults …
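The failure mode above can be illustrated with a minimal sketch in plain Java (this is not the real HBase 2.x `AsyncConnection`/`AsyncTable` API, just the underlying idea): with futures, a request to a stuck server does not prevent the client thread from consuming responses from healthy servers. The server names and latencies are invented for the illustration.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Minimal sketch: why a non-blocking client keeps making progress when one
// server is stuck. "rs3" stands in for a RegionServer in a long STW GC pause.
public class AsyncClientSketch {
    static final ExecutorService pool = Executors.newCachedThreadPool();

    // Simulated RPC: returns a future that completes after `latencyMs`.
    static CompletableFuture<String> rpc(String server, long latencyMs) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                Thread.sleep(latencyMs);
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
            return server + ":ok";
        }, pool);
    }

    public static void main(String[] args) throws Exception {
        // Two healthy servers answer in 10 ms; the "stuck" one takes 500 ms.
        CompletableFuture<String> fast1 = rpc("rs1", 10);
        CompletableFuture<String> fast2 = rpc("rs2", 10);
        CompletableFuture<String> slow  = rpc("rs3", 500);

        // Non-blocking: the fast responses are usable long before rs3 returns.
        String r1 = fast1.get(100, TimeUnit.MILLISECONDS);
        String r2 = fast2.get(100, TimeUnit.MILLISECONDS);
        System.out.println(r1 + " " + r2 + " (rs3 still pending: " + !slow.isDone() + ")");

        slow.join();     // a blocking client would have been stalled here all along
        pool.shutdown();
    }
}
```

A blocking single-threaded client would issue these three RPCs serially, so the stuck `rs3` would delay every later request — exactly the availability collapse the slides depict.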
HBASE-21879: Read HFile's block into ByteBuffer directly

1. Background
To reduce the Java GC impact on p99/p999 RPC latency, HBase 2.x introduced an off-heap read and write path: the KVs are allocated from the JVM off-heap … Now consider the implementation. First, we need a global ByteBuffAllocator for RPC. When reading a block in an RPC, we:
1. allocate a ByteBuff from the ByteBuffAllocator and read the data from the HFile … BucketCache asynchronously (don't block the RPC). In theory, once the RPC has finished we should free the ByteBuff, but we can't: the ByteBuff can also be referenced by other RPCs, because it is still in the RAMCache and …
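The shared-ownership problem described above is what reference counting solves: the buffer is returned to the allocator only when the last holder (the reading RPC, another RPC, or the cache) releases it. Below is a minimal sketch of that idea — the class and method names are assumptions for illustration, not the real HBase `ByteBuff`/`RefCnt` API.

```java
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal reference-counting sketch: a buffer shared by an RPC and a cache is
// handed back to the allocator only when the last holder releases it.
public class RefCountedBuf {
    private final ByteBuffer buf;
    private final AtomicInteger refCnt = new AtomicInteger(1); // creator holds one ref
    private final Runnable recycler;    // e.g. return the buffer to the pool

    RefCountedBuf(ByteBuffer buf, Runnable recycler) {
        this.buf = buf;
        this.recycler = recycler;
    }

    // A new holder appears (another RPC, the RAMCache).
    RefCountedBuf retain() {
        if (refCnt.getAndIncrement() <= 0) throw new IllegalStateException("already freed");
        return this;
    }

    // A holder is done with the buffer; free it when nobody references it.
    void release() {
        if (refCnt.decrementAndGet() == 0) recycler.run();
    }

    int refCnt() { return refCnt.get(); }

    public static void main(String[] args) {
        boolean[] recycled = {false};
        RefCountedBuf b = new RefCountedBuf(ByteBuffer.allocateDirect(64 * 1024),
                                            () -> recycled[0] = true);
        b.retain();   // the RAMCache keeps a reference to the block
        b.release();  // the RPC that read the block finishes
        System.out.println("recycled after RPC release: " + recycled[0]);   // false: cache still holds it
        b.release();  // the block is evicted from the RAMCache
        System.out.println("recycled after cache release: " + recycled[0]); // true
    }
}
```

This is why "the RPC finished" alone is not a safe point to free the ByteBuff: freeing must be tied to the reference count reaching zero, not to any single holder's lifetime.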
HBase Read Path

[Slide diagrams: the client-side scan flow — a ScannerCallableWithReplicas (1) sends an RPC request to a RegionServer (RegionServer-0/1/2), (2) receives the RPC response, (3) regroups the cells into Results and enqueues them into the scanResultCache, and (4) returns a result to the caller; Step.1 + Step.2 + Step.3 together form loadCache. Follow-up diagrams contrast the row data on the RegionServer (Cell-1 … Cell-9 grouped into Result-1 … Result-5), the RPC responses received from the RegionServer, and the Results returned by scanner.next() — e.g. when the size of cell-1 exceeds 1 MB, or when scan.setCaching(2) limits how many Results each response carries.]
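The "regroup and enqueue" step can be sketched as follows — a toy version of what the client-side result cache does, with assumed names (`Cell`, `regroup`), not the real `scanResultCache` API: cells arriving in an RPC response are grouped back into one Result per row before being handed to `scanner.next()`.

```java
import java.util.ArrayList;
import java.util.List;

// Toy regrouping step: cells from an RPC response are grouped by row,
// preserving arrival order, so the caller sees one "Result" per row.
public class RegroupSketch {
    record Cell(String row, String value) {}

    static List<List<Cell>> regroup(List<Cell> cells) {
        List<List<Cell>> results = new ArrayList<>();
        String currentRow = null;
        for (Cell c : cells) {
            if (!c.row().equals(currentRow)) { // a new row starts a new Result
                results.add(new ArrayList<>());
                currentRow = c.row();
            }
            results.get(results.size() - 1).add(c);
        }
        return results;
    }

    public static void main(String[] args) {
        List<Cell> rpcResponse = List.of(
            new Cell("row1", "cell-1"), new Cell("row1", "cell-2"),
            new Cell("row2", "cell-3"), new Cell("row3", "cell-4"));
        System.out.println(regroup(rpcResponse).size()); // 3 rows -> 3 results
    }
}
```

The real cache is more involved — it must also stitch together *partial* Results when one row is split across several RPC responses (the oversized-cell case in the slides) — but the grouping invariant is the same.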
HBase Best Practices and Optimization (HBase最佳实践及优化)
Postgres Conference China 2016, China User Conference

HBase GC characteristics
● Garbage objects produced by a single RPC operation are short-lived
● The Memstore is comparatively long-lived and is allocated in 2 MB chunks
● The BlockCache is long-lived and is allocated in 64 KB units
● Efficiently reclaiming the temporary objects created by RPC operations is the focus of HBase GC tuning
● Keeping the HBase heap under 64 GB is recommended; beyond that, GC pressure is high and pauses run too long
… read performance

Write performance
● HBase's theoretical average write latency is < 10 ms, with O(1) time complexity
● No handler available to respond — consider adding handlers or hardware resources
● More commonly, 95%–99% of writes are fast, but some writes are extremely slow — even tens of thousands of times slower. The problem is usually on the server side:
  – slow writes into the Memstore
  – HLog write timeouts — check HDFS and the disks for anomalies
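The "Memstore is allocated in 2 MB chunks" point refers to the MSLAB idea: copying each incoming KV into a shared chunk keeps memstore data in a few large, same-sized blocks instead of many small objects, which eases old-generation fragmentation. A minimal sketch of that allocation pattern, with assumed names (not HBase's actual `MemStoreLAB` API):

```java
import java.nio.ByteBuffer;

// MSLAB-style chunk allocation sketch: cells are copied into fixed-size 2 MB
// chunks; a full chunk is simply replaced by a fresh one.
public class ChunkSketch {
    static final int CHUNK_SIZE = 2 * 1024 * 1024; // 2 MB, as in the slides
    private ByteBuffer chunk = ByteBuffer.allocate(CHUNK_SIZE);
    private int chunksUsed = 1;

    // Copy `kv` into the current chunk, rolling to a new chunk when full.
    ByteBuffer copyCellInto(byte[] kv) {
        if (kv.length > chunk.remaining()) {
            chunk = ByteBuffer.allocate(CHUNK_SIZE);
            chunksUsed++;
        }
        ByteBuffer slice = chunk.slice();       // view starting at current position
        slice.limit(kv.length);
        chunk.position(chunk.position() + kv.length);
        slice.put(kv).flip();
        return slice;
    }

    int chunksUsed() { return chunksUsed; }

    public static void main(String[] args) {
        ChunkSketch lab = new ChunkSketch();
        for (int i = 0; i < 3; i++) {
            lab.copyCellInto(new byte[1024 * 1024]); // three 1 MB cells
        }
        System.out.println(lab.chunksUsed()); // 2: the third cell rolled to a new chunk
    }
}
```

Because each cell lives inside a chunk, a flushed memstore frees a handful of uniform 2 MB regions rather than millions of scattered KV objects — which is exactly why chunked allocation helps the GC behavior described above.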
HBase Practice At XiaoMi
… (ClientSideRegionScanner)
❏ Constructs regions from the snapshot files
❏ Reads data without any HBase RPC requests
❏ Requires READ access to the reference files and HFiles
Snapshot ACL
❏ HDFS ACLs could grant …
5 results in total













