mysqldump extended inserts
MySQL backups using mysqldump
When creating MySQL database dumps with mysqldump, --extended-insert (-e) is enabled by default. The dump file is created using multiple-row syntax, with several VALUES lists per INSERT statement. This results in a smaller dump file and speeds up inserts when the file is reloaded.
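As an illustrative sketch (not actual mysqldump output), for a hypothetical table t the two formats differ like this:

```sql
-- with --extended-insert (the default): several rows per statement
INSERT INTO `t` VALUES (1,'a'),(2,'b'),(3,'c');

-- with --extended-insert=FALSE: one statement per row
INSERT INTO `t` VALUES (1,'a');
INSERT INTO `t` VALUES (2,'b');
INSERT INTO `t` VALUES (3,'c');
```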
There are two MySQL variables that are of interest in this context:
net_buffer_length
The initial size of the buffer for client/server communication. When creating multiple-row INSERT statements (as with the --opt option), mysqldump creates rows up to --net-buffer-length bytes long. If you increase this variable, ensure that the MySQL server's net_buffer_length system variable has a value at least this large.
max_allowed_packet
The maximum size of the buffer for client/server communication. The default is 24MB and the maximum is 1GB. I usually set max_allowed_packet quite high, e.g. 64M or 128M, because memory is allocated as needed up to this maximum rather than in full at the configured value. This means a connection will not allocate the full 64M/128M if it is not needed. This variable is relevant to the mysql client but not to mysqldump. There is a MySQL bug about this that was reported way back in 2005; I am not sure whether it was ever fixed.
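To confirm the server side is sized appropriately, you can also query the server's own values. A sketch, assuming a running server and sufficient privileges (the server-side variable names use underscores):

```shell
# Show the server-side values of the two variables discussed above.
mysql -u<userid> -p -e "SHOW VARIABLES LIKE 'net_buffer_length';
                        SHOW VARIABLES LIKE 'max_allowed_packet';"

# Raise max_allowed_packet globally, e.g. to 128M, for new connections
# (requires the SUPER or SYSTEM_VARIABLES_ADMIN privilege):
mysql -u<userid> -p -e "SET GLOBAL max_allowed_packet = 128*1024*1024;"
```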
You can use the following command to find out what values mysqldump will use for these two variables:
$ mysqldump --help |grep -E "(net-buffer-length |max-allowed-packet )"
max-allowed-packet 25165824 (24MB)
net-buffer-length 1046528 (~1MB)
Similarly, you can check the values used by the mysql client when loading the dump:
$ mysql --help |grep -E "(net-buffer-length |max-allowed-packet )"
max-allowed-packet 16777216 (16MB)
net-buffer-length 16384 (16KB)
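Sanity-checking those numbers with plain shell arithmetic (the byte counts are the ones reported above):

```shell
# Convert the byte values reported by --help into human-friendly units.
# 1 MB = 1048576 bytes, 1 KB = 1024 bytes here.
echo "mysqldump max-allowed-packet: $((25165824 / 1048576)) MB"   # 24 MB
echo "mysqldump net-buffer-length:  $((1046528 / 1024)) KB"       # 1022 KB, i.e. ~1 MB
echo "mysql     max-allowed-packet: $((16777216 / 1048576)) MB"   # 16 MB
echo "mysql     net-buffer-length:  $((16384 / 1024)) KB"         # 16 KB
```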
The mysql client's net-buffer-length value is smaller than the one used by mysqldump, so loading this dump will fail. Use a value that is at least as large as the one mysqldump used when creating the backup.
If you would like smaller lines, one per row (as opposed to multiple-row syntax), disable extended inserts with --extended-insert=FALSE; the --complete-insert option additionally includes column names in each INSERT statement:
mysqldump -u<userid> -p --extended-insert=FALSE --complete-insert=TRUE ...
The command above will create one INSERT statement per table row. The resulting dump file will be bigger and load times will increase considerably. If you have a biggish data set, loading such a backup can take hours, so it is not really worth it.
Another way to create one line per row in the dump file (and still be able to load the data back really fast) would be:
- have mysqldump create CSV files, and
- load the dump back using LOAD DATA INFILE (increase bulk_insert_buffer_size to an appropriate value, e.g. 1G)
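A minimal sketch of that workflow, assuming a table t1 in schema mydb, output under /tmp/dump, and appropriate FILE privileges and secure_file_priv settings:

```shell
# Dump schema (.sql) and data (.txt) files per table; the data files are
# tab-separated by default, and --fields-terminated-by=',' makes them CSV-like.
mysqldump -u<userid> -p --tab=/tmp/dump --fields-terminated-by=',' mydb

# Load the data back; bump bulk_insert_buffer_size for the session first.
mysql -u<userid> -p mydb -e "
  SET SESSION bulk_insert_buffer_size = 1024*1024*1024;
  LOAD DATA INFILE '/tmp/dump/t1.txt'
  INTO TABLE t1
  FIELDS TERMINATED BY ',';"
```

Note that --tab writes the data files on the server host, so this approach assumes mysqldump runs on (or shares a filesystem with) the server.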
MySQL backups using MySQL Shell
You can also use MySQL Shell to create and load backups.
MySQL Shell uses multiple streams to create a per-table backup of a schema and is really fast. Chunking support pushes the speeds even higher.
Backup loads are also fast – thanks to chunking and multiple streams.
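A sketch using the MySQL Shell dump and load utilities (available in MySQL Shell 8.0.21 and later); the connection URI, schema name, output path, and thread count are placeholders:

```shell
# Dump one schema with 8 parallel threads (chunking is on by default).
mysqlsh root@localhost -e 'util.dumpSchemas(["mydb"], "/tmp/mydb-dump", {threads: 8})'

# Load it back, again with multiple threads.
# (The target server must have local_infile enabled.)
mysqlsh root@localhost -e 'util.loadDump("/tmp/mydb-dump", {threads: 8})'
```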
- mysqldump extended inserts 26th Jun 2018