TiDB v5.4 Documentation
TiDB Lightning is a tool for importing a large amount of data and quickly initializing a specific table in a TiDB cluster. It supports checkpoints to store the import progress, so that tidb-lightning continues importing from where it left off. It is recommended that you understand how to handle checkpoints, and then choose the appropriate way to proceed according to your needs. Checkpoints: migrating a large volume of data usually takes hours or days. Fortunately, TiDB Lightning provides a feature called checkpoints, which makes TiDB Lightning save the import progress as checkpoints from time to time, so that an interrupted import task can be resumed.
PostgreSQL and Greenplum Database Troubleshooting (PostgreSQL和Greenplum数据库故障排查)
… the SQL statements that are executed. log_duration = off: records how long each SQL statement takes to complete; set it to on to find out which SQL statements take a long time. log_checkpoints = on: records checkpoint information. log_connections = off: whether to log connection attempts. max_wal_size: the maximum size to let the WAL grow to between automatic WAL checkpoints. checkpoint_timeout: the maximum time between automatic WAL checkpoints. Raising max_wal_size and checkpoint_timeout reduces how frequently checkpoints occur.
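The slide excerpt above only names the parameters. A minimal postgresql.conf sketch of how they might be set together follows; the values are illustrative and are not taken from the slides (max_wal_size exists only in PostgreSQL 9.5 and later, while the 8.3/9.0 manuals cited further down use checkpoint_segments instead):

    # Logging: find slow statements and observe checkpoint behavior
    log_duration = on            # log the elapsed time of every completed statement
    log_checkpoints = on         # log each checkpoint, with timing and buffer statistics
    log_connections = off        # set to on to record every connection attempt

    # Checkpoint spacing: larger values mean fewer, but bigger, checkpoints
    max_wal_size = 4GB           # how much WAL may accumulate between automatic checkpoints
    checkpoint_timeout = 15min   # maximum time between automatic checkpoints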
TiDB v5.3 Documentation
… based on binlog positions and the binlog file size, and store the replicated binlog positions as checkpoints. However, the official MySQL uses uint32 to store binlog positions, which means the binlog position … there is a chance that you need to reimport from scratch, because there is no guarantee that checkpoints work across versions. 11.8.4 TiDB Lightning Prechecks: starting from TiDB 5.3.0, TiDB Lightning … From the configuration template:
    ### … key of this service.
    ### key-path = "/path/to/lightning.key"
    [checkpoint]
    ### Whether to enable checkpoints.
    ### While importing data, TiDB Lightning records which tables have been imported, so …
TiDB v5.1 Documentation
… based on binlog positions and the binlog file size, and store the replicated binlog positions as checkpoints. However, the official MySQL uses uint32 to store binlog positions, which means the binlog position … there is a chance that you need to reimport from scratch, because there is no guarantee that checkpoints work across versions. 11.8.4 TiDB Lightning Configuration: this document provides samples for … From the configuration template:
    ### … key of this service.
    ### key-path = "/path/to/lightning.key"
    [checkpoint]
    ### Whether to enable checkpoints.
    ### While importing data, TiDB Lightning records which tables have been imported, so …
TiDB v5.2 Documentation
… based on binlog positions and the binlog file size, and store the replicated binlog positions as checkpoints. However, the official MySQL uses uint32 to store binlog positions, which means the binlog position … there is a chance that you need to reimport from scratch, because there is no guarantee that checkpoints work across versions. 11.8.4 TiDB Lightning Configuration: this document provides samples for … From the configuration template:
    ### … key of this service.
    ### key-path = "/path/to/lightning.key"
    [checkpoint]
    ### Whether to enable checkpoints.
    ### While importing data, TiDB Lightning records which tables have been imported, so …
TiDB v6.1 Documentation
Table of contents: 13.7.10 TiDB Lightning Checkpoints; 13.7.11 Use TiDB … TiDB Lightning is a tool for importing a large amount of data and quickly initializing a specific table in a TiDB cluster. It supports checkpoints to store the import progress, so that tidb-lightning continues importing from where it left off. It is recommended that you understand how to handle checkpoints, and then choose the appropriate way to proceed according to your needs. Checkpoints: migrating a large volume of data usually takes …
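The [checkpoint] excerpt repeated in the TiDB entries above is cut off mid-sentence. A minimal sketch of what a complete section might look like, using the option names documented for TiDB Lightning (treat the exact names and defaults as something to verify against the version you run):

    [checkpoint]
    # Whether to enable checkpoints. While importing data, TiDB Lightning records
    # which tables have been imported, so an interrupted task can resume from
    # where it left off instead of starting over.
    enable = true
    # Schema (database) name in which checkpoints are stored when the mysql driver is used.
    schema = "tidb_lightning_checkpoint"
    # Where to store checkpoints: "file" (a local file) or "mysql" (an external database).
    driver = "file"
    # For the file driver, the path of the checkpoint file; for the mysql driver,
    # the connection string of the target database.
    # dsn = "/tmp/tidb_lightning_checkpoint.pb"
    # Whether to keep the checkpoints after a successful import.
    # keep-after-success = false

A checkpoint stored in an external database can survive the loss of the machine running tidb-lightning, which is the main reason to prefer the mysql driver over a local checkpoint file.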
PostgreSQL 9.0 Documentation
18.5.2. Checkpoints … Temporarily increasing checkpoint_segments can make large data loads faster. This is because loading a large amount of data into PostgreSQL will cause checkpoints to occur more often than the normal checkpoint frequency (specified by the checkpoint_timeout configuration variable). Whenever a checkpoint occurs, all dirty pages must be flushed to disk. By increasing checkpoint_segments temporarily during bulk data loads, the number of checkpoints that are required can be reduced. 14.4.7. Disable WAL archival and streaming replication: When …
PostgreSQL 9.0 Documentation
18.5.2. Checkpoints … Temporarily increasing checkpoint_segments can make large data loads faster. This is because loading a large amount of data into PostgreSQL will cause checkpoints to occur more often than the normal checkpoint frequency (specified by the checkpoint_timeout configuration variable). Whenever a checkpoint occurs, all dirty pages must be flushed to disk. By increasing checkpoint_segments temporarily during bulk data loads, the number of checkpoints that are required can be reduced. Chapter 14, Performance Tips, 14.4.7. Disable WAL archival …
PostgreSQL 8.3 Documentation
18.5.2. Checkpoints … Temporarily increasing checkpoint_segments can make large data loads faster. This is because loading a large amount of data into PostgreSQL will cause checkpoints to occur more often than the normal checkpoint frequency (specified by the checkpoint_timeout configuration variable). Whenever a checkpoint occurs, all dirty pages must be flushed to disk. By increasing checkpoint_segments temporarily during bulk data loads, the number of checkpoints that are required can be reduced. 14.4.7. Turn off archive_mode: When loading large amounts of …
PostgreSQL 8.3 Documentation
18.5.2. Checkpoints … Temporarily increasing checkpoint_segments can make large data loads faster. This is because loading a large amount of data into PostgreSQL will cause checkpoints to occur more often than the normal checkpoint frequency (specified by the checkpoint_timeout configuration variable). Whenever a checkpoint occurs, all dirty pages must be flushed to disk. By increasing checkpoint_segments temporarily during bulk data loads, the number of checkpoints that are required can be reduced. 14.4.7. Turn off archive_mode: When loading large amounts of …
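For the 8.3/9.0 releases cited in these entries, the bulk-load advice above amounts to a handful of postgresql.conf settings. The sketch below is illustrative only: the values are chosen here rather than taken from the manuals, and checkpoint_completion_target is added for completeness even though the excerpts do not mention it. The settings should be restored after the load, and WAL archiving re-enabled (followed by a fresh base backup):

    # postgresql.conf -- temporary settings while running a large bulk load (8.3/9.0 era)
    checkpoint_segments = 30             # default is 3; allows more WAL between checkpoints
    checkpoint_timeout = 30min           # maximum time between automatic checkpoints
    checkpoint_completion_target = 0.9   # spread checkpoint I/O across more of the interval
    archive_mode = off                   # 14.4.7: disable WAL archiving during the load
                                         # (changing archive_mode requires a server restart)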












