mysqldump extended inserts

MySQL backups using mysqldump

When creating MySQL database dumps with mysqldump, the --extended-insert (-e) option is enabled by default. The dump file is written in multiple-row syntax, with several VALUES lists per INSERT statement. This results in a smaller dump file and speeds up inserts when the file is reloaded.

There are two MySQL variables that are of interest in this context:

  • net_buffer_length The initial size of the buffer for client/server communication. When creating multiple-row INSERT statements (as with the --extended-insert or --opt option), mysqldump creates rows up to --net-buffer-length bytes long. If you increase this variable, ensure that the MySQL server net_buffer_length system variable has a value at least this large.
  • max_allowed_packet The maximum size of the buffer for client/server communication. For mysqldump the default is 24MB and the maximum is 1GB. I usually set max_allowed_packet quite high, e.g. 64M or 128M, because this is only an upper limit: memory is allocated as needed up to this value, so a connection will not allocate the full 64M/128M if it doesn't need it. This variable is honoured by the mysql client but not by mysqldump. There's a MySQL bug about this that was reported way back in 2005; I'm not sure whether it was ever fixed.

You can use the following command to find out what values mysqldump will use for these two variables:
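A simple way to do this, assuming mysqldump is on your PATH, is to grep the values out of mysqldump --help, which prints a table of all variables and the values it will actually use after reading your option files:

```shell
# Show the net-buffer-length and max-allowed-packet values that
# mysqldump will use (defaults merged with my.cnf settings).
mysqldump --help | grep -E 'net-buffer-length|max-allowed-packet'
```

On a default install you should see values around 1MB for net-buffer-length and 24MB for max-allowed-packet, printed in bytes.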

Similarly you can then check the values used by mysql client when loading the dump:
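The same trick works for the mysql client, which prints its own variables table in its --help output:

```shell
# Show the values the mysql client will use when loading the dump.
mysql --help | grep -E 'net-buffer-length|max-allowed-packet'
```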

If the mysql client's net-buffer-length value is smaller than the one mysqldump used, the dump load will fail. Use a value that is at least as large as the one mysqldump used when creating the backup.
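Both variables can be raised on the mysql command line for just the load, without touching my.cnf. A sketch (the schema name mydb and the dump filename are placeholders):

```shell
# Raise the client buffers for this session only, then load the dump.
mysql --max-allowed-packet=128M --net-buffer-length=1M mydb < mydb-dump.sql
```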

If you would like to have smaller lines, one per row (as opposed to multiple-row syntax), use the --skip-extended-insert option, optionally combined with --complete-insert (which adds the column list to every INSERT), with the mysqldump command as shown below:
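A minimal sketch (mydb is a placeholder schema name):

```shell
# --skip-extended-insert writes one INSERT statement per row;
# --complete-insert also includes the column names in each statement.
mysqldump --skip-extended-insert --complete-insert mydb > mydb-dump.sql
```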

The command above will create one INSERT statement per tuple/table row. The dump file will be bigger and load times will increase quite a lot. If you have a biggish data set, loading such a backup can take hours, so it's not really worth it.

Another way to create one line per row in the dump file (and also be able to load back the data really fast) would be:
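The exact command isn't shown here, but one approach that matches the description (one line per row, and fast reloads) is mysqldump's --tab mode, which writes a .sql file with the table definition plus a tab-separated .txt data file per table; the data files reload quickly via LOAD DATA INFILE. A sketch, assuming a local server where /tmp/dump is writable by mysqld and permitted by secure_file_priv (directory and schema names are placeholders):

```shell
# Dump each table in mydb as table.sql (schema) and table.txt
# (one tab-separated line per row).
mysqldump --tab=/tmp/dump mydb

# Reload the data files; mysqlimport wraps LOAD DATA INFILE.
mysqlimport mydb /tmp/dump/*.txt
```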

MySQL backups using MySQL Shell

You can also use MySQL Shell to create and load backups.

MySQL Shell uses multiple threads to create a per-table backup of a schema and is really fast. Chunking support, which splits large tables into multiple files dumped in parallel, pushes the speeds even higher.
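A sketch using MySQL Shell's dump utility, assuming mysqlsh 8.0.21 or later and a local server (the account, schema name, output directory, and thread count are placeholders):

```shell
# Dump the mydb schema to /backups/mydb with 4 parallel threads;
# large tables are split into chunks automatically.
mysqlsh --js root@localhost -e "util.dumpSchemas(['mydb'], '/backups/mydb', {threads: 4})"
```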

Backup loads are also fast – thanks to chunking and multiple streams.
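The matching load, again a sketch with placeholder names; note that util.loadDump requires local_infile to be enabled on the target server:

```shell
# Load the dump back with 4 parallel threads.
mysqlsh --js root@localhost -e "util.loadDump('/backups/mydb', {threads: 4})"
```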