Yet another DDL dump script

Inspired by some recent posts about scripts for parsing mysqldump files, I thought I’d put my own little dump program out there for those looking for alternatives to mysqldump. It’s written in Perl and only uses the mysqldump utility to dump the data portion of the output; the rest is done via MySQL’s SHOW commands. Each database is dumped to its own directory, and each component of that database goes into one of five subdirs: schemas, routines, views, triggers & data. Inside these subdirs there is one file per table, one per procedure, and so on. The data section is slightly different: as I mentioned, it uses mysqldump to write bulk INSERT statements into a SQL file, one per table. So at any time, any or all components (including data) can be reloaded with a simple pipe back to the mysql client:

[user@svr123]$ cat /path/to/files/dbname/schemas/*.sql | mysql -D dbname
[user@svr123]$ cat /path/to/files/dbname/routines/*.sql | mysql -D dbname
[user@svr123]$ cat /path/to/files/dbname/views/*.sql | mysql -D dbname
[user@svr123]$ cat /path/to/files/dbname/triggers/*.sql | mysql -D dbname
[user@svr123]$ gunzip -c /path/to/files/dbname/data/*.sql.gz | mysql -D dbname
(data files are gzipped; cat is used rather than a redirect because the shell won’t redirect from a multi-file glob)

The script is self-explanatory, and since I’m pretty much a newbie at Perl, I’d love to know if there’s a better way to do any of this stuff. You can find the code here.
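For anyone curious about the approach without reading the full script, here’s a minimal sketch of the schema-and-data loop, assuming DBI with the mysql driver is available. The host, credentials, and paths are placeholders, not values from the real script:

#!/usr/bin/perl
use strict;
use warnings;
use DBI;
use File::Path qw(make_path);

# Placeholder connection settings -- substitute your own.
my ($host, $db, $user, $pass) = ('svr123', 'dbname', 'user', 'secret');
my $out = "/path/to/files/$db";

my $dbh = DBI->connect("DBI:mysql:database=$db;host=$host",
                       $user, $pass, { RaiseError => 1 });

make_path("$out/schemas", "$out/data");

# One DDL file per table, pulled from SHOW CREATE TABLE.
my $tables = $dbh->selectcol_arrayref('SHOW TABLES');
for my $t (@$tables) {
    my (undef, $ddl) = $dbh->selectrow_array("SHOW CREATE TABLE `$t`");
    open my $fh, '>', "$out/schemas/$t.sql" or die "$t.sql: $!";
    print $fh "$ddl;\n";
    close $fh;

    # Data goes through mysqldump, gzipped, one file per table.
    # (Password on the command line is for brevity only.)
    system("mysqldump --no-create-info -h$host -u$user -p$pass $db $t"
         . " | gzip > $out/data/$t.sql.gz") == 0
        or warn "mysqldump failed for $t\n";
}

$dbh->disconnect;

Routines, views & triggers follow the same pattern using SHOW CREATE PROCEDURE, SHOW CREATE FUNCTION, SHOW CREATE VIEW and SHOW CREATE TRIGGER.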

Federated tables bug

Scenario
I recently came across this bug when trying out the federated storage engine for the first time in MySQL 5.1. I had a security table with user information on a remote server & database that needed to be joined to a local table housing site-specific permissions but containing only user IDs. I definitely wanted to use the CREATE SERVER method for the new table, in case we later decided to link to a different table in the same remote database. A terrific little feature of the MySQL federated storage engine, to be sure.
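For context, the setup looked roughly like this. The server name and options match the slave output quoted below, but the table and column definitions are placeholders, not the real schema:

CREATE SERVER svr123
FOREIGN DATA WRAPPER mysql
OPTIONS (HOST 'svr123', DATABASE 'defaultdb', USER 'blah', PASSWORD 'blah');

-- Federated table pointing at that server; linking to a different
-- remote table later only means changing the CONNECTION string.
CREATE TABLE remote_users (
    user_id  INT NOT NULL,
    username VARCHAR(64) NOT NULL,
    PRIMARY KEY (user_id)
) ENGINE=FEDERATED CONNECTION='svr123/users';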

Problem
The local server was the master in a replication pair. After executing the CREATE SERVER statement on the master, I proceeded to create the new federated table pointing at the newly defined server. That’s when my MySQL replication monitoring script alerted me that the replication thread had stopped on the slave server. The error log complained:

100806 10:33:32 [Warning] Slave: Can't create/write to file 'server name: 'svr123' doesn't exist!' (Errcode: 1186271216) Error_code: 1

I could verify that the CREATE SERVER statement had succeeded on the slave:

>select * from servers\G
*************************** 1. row ***************************
Server_name: svr123
Host: svr123
Db: defaultdb
Username: blah
Password: blah
Port: 0
Socket:
Wrapper: mysql
Owner:

Workaround
For some weird reason, the slave doesn’t actually register the new remote server, even though it writes the entry to the servers table on the slave. To fix the slave (& replication), I just deleted the row from the mysql.servers table on the slave and ran the CREATE SERVER SQL there as well. A simple but very annoying workaround, as I don’t like running any SQL on a slave server if I can help it. After starting the slave, replication ran & created the new federated table with no other problems.
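In SQL terms, the fix on the slave was roughly this (server name & options taken from the output above):

DELETE FROM mysql.servers WHERE Server_name = 'svr123';

CREATE SERVER svr123
FOREIGN DATA WRAPPER mysql
OPTIONS (HOST 'svr123', DATABASE 'defaultdb', USER 'blah', PASSWORD 'blah');

START SLAVE;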
