mysqldump extended inserts
MySQL backups using mysqldump
When creating MySQL database dumps using mysqldump, the --extended-insert (-e) option is enabled by default. The dump file is created with multiple-row syntax that packs several VALUES lists into each INSERT statement. This results in a smaller dump file and speeds up inserts when the file is reloaded.
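To make the difference concrete, here is a sketch of the two styles side by side (the table name t1, its columns and its rows are made up purely for illustration):

```shell
# Default (--extended-insert): several rows packed into one statement.
multi_row='INSERT INTO t1 VALUES (1,10),(2,20),(3,30);'

# With --extended-insert=FALSE --complete-insert=TRUE: one statement per row,
# with the column list spelled out.
one_per_row='INSERT INTO t1 (id, qty) VALUES (1,10);
INSERT INTO t1 (id, qty) VALUES (2,20);
INSERT INTO t1 (id, qty) VALUES (3,30);'

printf '%s\n\n%s\n' "$multi_row" "$one_per_row"
```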
There are two MySQL variables that are of interest in this context:

net_buffer_length
The initial size of the buffer for client/server communication. When creating multiple-row INSERT statements (as with the --extended-insert or --opt option), mysqldump creates rows up to --net-buffer-length bytes long. If you increase this variable, ensure that the MySQL server's net_buffer_length system variable has a value at least this large.

max_allowed_packet
The maximum size of the buffer for client/server communication. The default is 24MB and the maximum is 1GB. I usually set max_allowed_packet quite high, e.g. 64M or 128M, since memory is allocated only as needed up to this maximum rather than in full up front: a connection will not allocate the whole 64M/128M if it does not need it. This variable is honoured by the mysql client but not by mysqldump. There is a MySQL bug about this that was reported way back in 2005; I am not sure whether it has been fixed.
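To check (and, if needed, raise) the server-side values, you can run statements like the following from a privileged client session. This is a sketch; the 64M figure simply follows the advice above:

```shell
# SQL to inspect the server-side buffer variables and raise the packet limit
# for all new connections (needs a privileged account; 64M is an example).
check_sql='SHOW VARIABLES LIKE "net_buffer_length";
SHOW VARIABLES LIKE "max_allowed_packet";
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;'
echo "$check_sql"
```

Note that SET GLOBAL only affects new connections; existing sessions keep their current value.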
You can use the following command to find out what values mysqldump
will use for these two variables:
$ mysqldump --help | grep -E "(net-buffer-length|max-allowed-packet)"
max-allowed-packet                25165824    (24MB)
net-buffer-length                 1046528     (1MB)
Similarly you can then check the values used by mysql
client when loading the dump:
$ mysql --help | grep -E "(net-buffer-length|max-allowed-packet)"
max-allowed-packet                16777216    (16MB)
net-buffer-length                 16384       (16KB)
The mysql client's net-buffer-length value above is smaller than the one used by mysqldump, so loading this dump will fail. Use a value that is at least as large as the one mysqldump used when creating the backup.
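One way to do that is to pass larger values explicitly to the mysql client when loading. A sketch, where the database name mydb and the file dump.sql are placeholders, and the numbers are the mysqldump defaults shown above:

```shell
# Load the dump with buffer sizes at least as large as the ones mysqldump
# used when the backup was created.
load_cmd='mysql --net-buffer-length=1046528 --max-allowed-packet=25165824 -u root -p mydb'
echo "$load_cmd < dump.sql"
```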
If you would like to have smaller lines – one per row (as opposed to multiple-row syntax) – use the --extended-insert=FALSE and --complete-insert=TRUE options with the mysqldump command as shown below:

mysqldump -u<userid> -p --extended-insert=FALSE --complete-insert=TRUE ...
The command above will create one INSERT statement per table row. The dump file will be bigger and load times will increase quite a lot. If you have a biggish data set, loading such a backup can take hours, so it is not really worth it.
Another way to create one line per row in the dump file (and also be able to load the data back really fast) would be:
- have mysqldump create CSV files, and
- load the dump back using LOAD DATA INFILE (increase bulk_insert_buffer_size to an appropriate value, e.g. 1G)
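A sketch of that route, assuming the server is allowed to write to /tmp/dump (the secure_file_priv setting must permit it; the database and table names here are made up):

```shell
# Step 1: mysqldump --tab writes one .sql (schema) and one .txt (tab-separated
# data) file per table into the given directory.
dump_cmd='mysqldump -u root -p --tab=/tmp/dump mydb'

# Step 2: reload each table's data file, after raising
# bulk_insert_buffer_size for the session.
load_sql='SET SESSION bulk_insert_buffer_size = 1024 * 1024 * 1024;
LOAD DATA INFILE "/tmp/dump/t1.txt" INTO TABLE t1;'

printf '%s\n%s\n' "$dump_cmd" "$load_sql"
```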
MySQL backups using MySQL Shell
You can also use MySQL Shell to create and load backups.
MySQL Shell uses multiple streams to create a per table backup for a schema and is really fast. Chunking support pushes the speeds even higher.
Backup loads are also fast – thanks to chunking and multiple streams.
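As a sketch, a schema dump and load using MySQL Shell's dump utilities (util.dumpSchemas and util.loadDump are available from MySQL Shell 8.0.21; the schema name, output path and thread count below are made-up examples):

```shell
# JavaScript commands to run inside mysqlsh (JS mode); the threads option
# controls the number of parallel streams, and chunking is on by default.
js_cmds='util.dumpSchemas(["mydb"], "/tmp/mydb-dump", {threads: 4});
util.loadDump("/tmp/mydb-dump", {threads: 4});'
echo "$js_cmds"
```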