Percona Repo Conflicts With Itself

On a RHEL 6.3 server, we are running MySQL 5.1 (not MariaDB, not the Percona patched version, plain old regular MySQL):

$ rpm -qa | egrep -i "percona|maria|mysql" | sort
mysql-5.1.61-4.el6.x86_64
mysql-libs-5.1.61-4.el6.x86_64
mysql-server-5.1.61-4.el6.x86_64
percona-toolkit-2.1.8-1.noarch
percona-xtrabackup-2.0.5-499.rhel6.x86_64
perl-DBD-MySQL-4.013-3.el6.x86_64
$

We went to do a system update today and ran into a dependency problem: the Percona repository wants to replace mysql-libs with Percona's shared libraries. How rude!

# yum update mysql-libs
Loaded plugins: downloadonly, rhnplugin, security
Setting up Update Process
Resolving Dependencies
--> Running transaction check
---> Package Percona-Server-shared-51.x86_64 0:5.1.67-rel14.3.506.rhel6 will be obsoleting
---> Package Percona-Server-shared-55.x86_64 0:5.5.29-rel29.4.401.rhel6 will be obsoleting
---> Package Percona-Server-shared-compat.x86_64 0:5.5.29-rel29.4.401.rhel6 will be obsoleting
---> Package mysql-libs.x86_64 0:5.1.61-4.el6 will be obsoleted
--> Finished Dependency Resolution

Dependencies Resolved

==============================================================================================================
 Package                        Arch      Version                      Repository    Size
==============================================================================================================
Installing:
 Percona-Server-shared-51       x86_64    5.1.67-rel14.3.506.rhel6     percona       2.8 M
     replacing  mysql-libs.x86_64 5.1.61-4.el6
 Percona-Server-shared-55       x86_64    5.5.29-rel29.4.401.rhel6     percona       787 k
     replacing  mysql-libs.x86_64 5.1.61-4.el6
 Percona-Server-shared-compat   x86_64    5.5.29-rel29.4.401.rhel6     percona       3.4 M
     replacing  mysql-libs.x86_64 5.1.61-4.el6

Transaction Summary
==============================================================================================================
Install       3 Package(s)

Total size: 7.0 M
Is this ok [y/N]: N

The reason I say this is rude is that the Percona repository does not have mysql-libs. It's not for the Percona repository to say that it's obsolete. If we were running the Percona packages, that would be fine, but we're not. We are running the MySQL packages, and we want to keep running them. Just because we run Percona Toolkit and XtraBackup doesn't mean we want the shared MySQL libraries from Percona as well.

The bizarre part is that the Percona repository recommends installing BOTH Percona-Server-shared-51 AND Percona-Server-shared-55, and those packages conflict with each other. So if I had said yes, I would have ended up with:

Transaction Check Error:
file /usr/lib64/libmysqlclient.so conflicts between attempted installs of Percona-Server-shared-51-5.1.67-rel14.3.506.rhel6.x86_64 and Percona-Server-shared-55-5.5.29-rel29.4.401.rhel6.x86_64
file /usr/lib64/libmysqlclient.so.16.0.0 conflicts between attempted installs of Percona-Server-shared-51-5.1.67-rel14.3.506.rhel6.x86_64 and Percona-Server-shared-compat-5.5.29-rel29.4.401.rhel6.x86_64
file /usr/lib64/libmysqlclient_r.so conflicts between attempted installs of Percona-Server-shared-51-5.1.67-rel14.3.506.rhel6.x86_64 and Percona-Server-shared-55-5.5.29-rel29.4.401.rhel6.x86_64
file /usr/lib64/libmysqlclient_r.so.16.0.0 conflicts between attempted installs of Percona-Server-shared-51-5.1.67-rel14.3.506.rhel6.x86_64 and Percona-Server-shared-compat-5.5.29-rel29.4.401.rhel6.x86_64

Error Summary
-------------

So in order to upgrade this machine, we had to remove the Percona repository, update the packages so that mysql-libs was properly updated from the right repository, put the Percona repository back, and run the update again to see if xtrabackup or percona-toolkit had updates to apply.
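
If you hit the same thing, the workaround can be scripted without deleting the repo file; here is a minimal sketch, assuming the repository id is "percona" (check yum repolist to confirm):

# One-off: run the update with the Percona repo disabled, so
# mysql-libs comes from the RHEL channel instead.
yum --disablerepo=percona update mysql mysql-libs mysql-server

# Permanent: keep yum from ever offering the shared packages, by
# adding an exclude line to the [percona] section of
# /etc/yum.repos.d/percona.repo:
#   exclude=Percona-Server-shared*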

And all is happy again:

$ rpm -qa | egrep -i "percona|maria|mysql" | sort
mysql-5.1.67-1.el6_3.x86_64
mysql-libs-5.1.67-1.el6_3.x86_64
mysql-server-5.1.67-1.el6_3.x86_64
percona-toolkit-2.1.8-1.noarch
percona-xtrabackup-2.0.5-499.rhel6.x86_64
perl-DBD-MySQL-4.014-1.el6.rfx.x86_64
$

IN Subqueries in MySQL 5.6 Are Optimized Away

In MySQL 5.6, it looks like IN() subqueries are optimized even better than they are in MariaDB 5.5. Here's a typical IN() subquery, using the sakila sample database (query taken from slide 6 of the presentation about new MySQL 5.6 optimizer statistics):

SELECT title FROM film WHERE film_id IN (SELECT film_id FROM film_actor)

Before there were any subquery optimizations (say, if you are using MySQL 5.1), you would have to rewrite this query as a JOIN to avoid the dreaded DEPENDENT SUBQUERY that shows up in the EXPLAIN:

mysql> explain SELECT title FROM film WHERE film_id IN (SELECT film_id FROM film_actor)\G
*************************** 1. row ***************************
id: 1
select_type: PRIMARY
table: film
type: index
possible_keys: NULL
key: idx_title
key_len: 767
ref: NULL
rows: 1025
Extra: Using where; Using index
*************************** 2. row ***************************
id: 2
select_type: DEPENDENT SUBQUERY
table: film_actor
type: index_subquery
possible_keys: idx_fk_film_id
key: idx_fk_film_id
key_len: 2
ref: func
rows: 1
Extra: Using index
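
For reference, the JOIN rewrite looks something like this; the DISTINCT is needed because a film has many rows in film_actor, and without it each title would repeat once per actor:

SELECT DISTINCT f.title
FROM film AS f
INNER JOIN film_actor AS fa
        ON fa.film_id = f.film_id;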

MariaDB 5.5 boasts subquery optimization, and rightfully so. It looks like MariaDB materializes the subquery:

MariaDB [sakila]> explain SELECT title FROM film WHERE film_id IN
-> (SELECT film_id FROM film_actor)\G
*************************** 1. row ***************************
id: 1
select_type: PRIMARY
table: film
type: index
possible_keys: PRIMARY
key: idx_title
key_len: 767
ref: NULL
rows: 1043
Extra: Using index
*************************** 2. row ***************************
id: 1
select_type: PRIMARY
table: <subquery2>
type: eq_ref
possible_keys: distinct_key
key: distinct_key
key_len: 2
ref: func
rows: 1
Extra:
*************************** 3. row ***************************
id: 2
select_type: MATERIALIZED
table: film_actor
type: index
possible_keys: idx_fk_film_id
key: idx_fk_film_id
key_len: 2
ref: NULL
rows: 4889
Extra: Using index

So MariaDB recognizes the subquery and optimizes it. But it is still optimized as a subquery. There are three rows in this plan; a new <subquery2> temporary table is used to help optimize the results.

In MySQL 5.6, the subquery is actually optimized away: the optimizer changes the query itself. You can see this in the EXPLAIN plan by looking at the select_type. In both the MySQL 5.1 and MariaDB 5.5 examples, the select_type is PRIMARY, which is used for the outer query in a subquery (or the first SELECT in a UNION, but that does not apply here). In MySQL 5.6, the select_type is SIMPLE for both rows. Note that MySQL 5.6 also does not have to add a third table as MariaDB does:

mysql> explain SELECT title FROM film WHERE film_id IN
-> (SELECT film_id FROM film_actor)\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: film
type: index
possible_keys: PRIMARY
key: idx_title
key_len: 767
ref: NULL
rows: 1000
Extra: Using index
*************************** 2. row ***************************
id: 1
select_type: SIMPLE
table: film_actor
type: ref
possible_keys: idx_fk_film_id
key: idx_fk_film_id
key_len: 2
ref: sakila.film.film_id
rows: 1
Extra: Using index; FirstMatch(film)
2 rows in set (0.00 sec)

In the presentation, the Oracle team says that for DBT3 Query #18, execution time reduces from days to seconds. With optimizations like this, I believe it!
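
If you want to poke at this yourself, the semi-join strategies behind it (FirstMatch above is one of them) are controlled by optimizer_switch in MySQL 5.6; a quick sketch for comparing plans:

-- see which strategies are enabled (look for semijoin, firstmatch,
-- loosescan, materialization in the output)
SELECT @@optimizer_switch;

-- turn semi-join off for the session and re-run the EXPLAIN to get
-- the old-style plan back
SET SESSION optimizer_switch = 'semijoin=off';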

New EXPLAIN Features in MySQL 5.6 – FORMAT=JSON actually adds more information!

When I heard that MySQL 5.6 was implementing EXPLAIN for writes (INSERT, UPDATE, DELETE, REPLACE), I was pretty excited. Then I heard that MySQL 5.6 also was implementing a JSON format for EXPLAIN, and my thought was, "I do not care about that!"

Boy, was I wrong. The JSON format does not just put the output into JSON format; it also gives extra information that's actually pretty useful! It can tell you when you are doing an implicit cast, which parts of a composite index are being used, and when index condition pushdown is being used. None of these are shown in regular EXPLAIN (which seems odd: why extend the JSON format but not put the information into the regular EXPLAIN format?), so using the JSON format is actually a good idea even if you do not care about what format your output is in.

As a note, MySQL Workbench's Visual Explain (go to Query->Visual Explain Current Statement) also gives this information.

attached_condition and implicit casts

In a talk I do about EXPLAIN, I use the Sakila sample database. Here is an example of a bad query:

mysql> EXPLAIN SELECT rental_id FROM rental WHERE DATE(rental_date) = '2006-02-14'\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: rental
type: index
possible_keys: NULL
key: rental_date
key_len: 10
ref: NULL
rows: 16005
Extra: Using where; Using index
1 row in set (0.00 sec)

This query is bad because it is doing a full index scan (type: index) instead of doing a range scan for just the range of dates we want (should be type: range). Ironically, the EXPLAIN does not actually explain why.
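
The culprit is the DATE() function: wrapping an indexed column in a function hides it from the range optimizer. The fix is to compare the bare column, as in the BETWEEN query used later in this post:

SELECT rental_id FROM rental
WHERE rental_date BETWEEN '2006-02-14 00:00:00' AND '2006-02-14 23:59:59';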

However, the JSON format does explain why:

mysql> EXPLAIN FORMAT=JSON SELECT rental_id FROM rental WHERE DATE(rental_date) = '2006-02-14'\G
*************************** 1. row ***************************
EXPLAIN: {
  "query_block": {
    "select_id": 1,
    "table": {
      "table_name": "rental",
      "access_type": "index",
      "key": "rental_date",
      "used_key_parts": [
        "rental_date",
        "inventory_id",
        "customer_id"
      ],
      "key_length": "10",
      "rows": 16005,
      "filtered": 100,
      "using_index": true,
      "attached_condition": "(cast(`sakila`.`rental`.`rental_date` as date) = '2006-02-14')"
    }
  }
}

Note that the attached_condition shows the implicit cast. This is MUCH more friendly to a developer or administrator who is trying to figure out why MySQL is not doing what they want it to do!

The visual EXPLAIN from MySQL Workbench also shows the implicit cast:

You may also notice it shows the filtered attribute, which is not in regular EXPLAIN but is part of EXPLAIN EXTENDED. filtered is the percentage of rows examined that are estimated to be returned; a higher number here is better. If it is low, it means that you are examining a lot of rows that you do not return.

used_key_parts

You may have noticed above that there is a used_key_parts array that does not show up in the traditional EXPLAIN. In a traditional EXPLAIN (or EXPLAIN EXTENDED), you do get to see the index length with the key_len field, so you can guess that only part of a composite index is used. Both the previous query and the following query use this index:

UNIQUE KEY `rental_date` (`rental_date`,`inventory_id`,`customer_id`)

Here is the traditional EXPLAIN. Note that it shows the rental_date index is used, and the key_len is 5, which implies that only the first field of the index, rental_date, is being used, not the other 2 id fields (the DATETIME takes 5 bytes; adding the 3-byte MEDIUMINT inventory_id and the 2-byte SMALLINT customer_id is what gave the key_len of 10 in the earlier query). But you have to deduce that for yourself:

mysql> EXPLAIN EXTENDED SELECT rental_id FROM rental WHERE rental_date BETWEEN '2006-02-14 00:00:00' and '2006-02-14 23:59:59'\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: rental
type: range
possible_keys: rental_date
key: rental_date
key_len: 5
ref: NULL
rows: 181
filtered: 100.00
Extra: Using where; Using index

Here is the JSON format, which shows the used_key_parts field, which reveals very clearly that only the first field of the index is used:

mysql> EXPLAIN FORMAT=JSON SELECT rental_id FROM rental WHERE rental_date BETWEEN '2006-02-14 00:00:00' and '2006-02-14 23:59:59'\G
*************************** 1. row ***************************
EXPLAIN: {
  "query_block": {
    "select_id": 1,
    "table": {
      "table_name": "rental",
      "access_type": "range",
      "possible_keys": [
        "rental_date"
      ],
      "key": "rental_date",
      "used_key_parts": [
        "rental_date"
      ],
      "key_length": "5",
      "rows": 181,
      "filtered": 100,
      "using_index": true,
      "attached_condition": "(`sakila`.`rental`.`rental_date` between '2006-02-14 00:00:00' and '2006-02-14 23:59:59')"
    }
  }
}

And here is the MySQL Workbench Visual EXPLAIN that shows the used_key_parts clearly:

Index condition pushdown is itself a new feature in MySQL 5.6, and I will talk about it in another blog post.

I am glad I took a second look at EXPLAIN FORMAT=JSON; the new features are awesome! My only complaint is that I think they should be added to either EXPLAIN or EXPLAIN EXTENDED as well. I also hope that tools like pt-query-digest will be updated to use the extra information.

MySQL 5.1 vs. MySQL 5.5: Floats, Doubles, and Scientific Notation

For the past few days, I have been upgrading a few servers. We are going from Percona’s patched MySQL 5.1 to MariaDB 5.5 (the subquery optimization is excellent, and we have lots of subqueries). Our process involves upgrading a slave first and letting it replicate for a while; if everything’s good, we update more slaves, then the master.

This has served us well in the past. And now that we are checking data integrity between masters and slaves with pt-table-checksum, the process involves checking before we start that there is data integrity. This is easy, as we checksum twice daily and have a Nagios alert if there are any discrepancies. After the upgrade, we checksum again, to be sure no data has been changed/corrupted in the process of doing a mysqldump export and import.*

Much to my surprise, after importing the data on one of our dev servers, I found that there were a lot of discrepancies. So I picked a chunk to do some comparisons on, and found something interesting:

On Server version: 5.1.65-rel14.0-log Percona Server (GPL), 14.0, Revision 475:
mysql> select float_field from db.tbl where id=218964;
+-------------+
| float_field |
+-------------+
| 9.58084e-05 |
+-------------+
1 row in set (0.04 sec)

On Server version: 5.5.28a-MariaDB-log MariaDB Server
MariaDB [(none)]> select float_field from db.tbl where id=218964;
+--------------+
| float_field  |
+--------------+
| 0.0000958084 |
+--------------+
1 row in set (0.24 sec)

Which of course causes a different checksum. I tried SELECTing the values, casting and converting them, but I could not get them to change in the database. MySQL 5.1 insists on storing in scientific notation, and MariaDB 5.5 (and MySQL 5.5, we tested it out) insists on storing without scientific notation.

Frankly, I’m surprised this has not come up before (I did lots of querying Google for MySQL 5.5 and scientific notation), since it radically changes how numbers look when they are stored and retrieved. I guess code does the right thing…except for pt-table-checksum, and I cannot really blame it.

In the end, I used the --ignore-columns option to pt-table-checksum, feeding it the result of:

SELECT GROUP_CONCAT(DISTINCT COLUMN_NAME) FROM INFORMATION_SCHEMA.COLUMNS WHERE DATA_TYPE IN ('float','double') AND TABLE_SCHEMA NOT IN ('mysql','information_schema','performance_schema');

In this way, I can get an accurate checksum to see if anything has changed, before I mark that the upgrade is complete on this slave server.
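
Wired together, the invocation looks something like this; a sketch, where the host and credentials are placeholders (and mind group_concat_max_len if you have a lot of float/double columns):

# build the comma-separated column list pt-table-checksum expects
IGNORE=$(mysql -BN -e "
  SELECT GROUP_CONCAT(DISTINCT COLUMN_NAME)
    FROM INFORMATION_SCHEMA.COLUMNS
   WHERE DATA_TYPE IN ('float','double')
     AND TABLE_SCHEMA NOT IN ('mysql','information_schema','performance_schema');")

# checksum everything except those columns
pt-table-checksum --ignore-columns="$IGNORE" h=localhost,u=checksum,p=XXXX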

* This is just on the first slave. After the first slave is upgraded, we use xtrabackup to copy the data to another server to upgrade it.


Open Database Camp at Northeast LinuxFest

I am happy and proud to announce that there will be an Open Database Camp held at this year's Northeast LinuxFest! The venue is Harvard University in Cambridge, MA ("our fair city"), and it will take place Saturday, March 16 and Sunday, March 17, 2013.

Northeast LinuxFest and Open Database Camp are both free, but there is no reciprocal membership. To register for Open Database Camp, just sign up with Eventbrite. We are also soliciting session ideas ahead of time, and attendees will choose sessions during the Saturday morning planning session, as usual for Open DB Camp.

If you are interested in sponsoring, do so directly through Northeast LinuxFest and let them know it's for Open Database Camp!

Open Database Camp is for all open databases, whether it's MySQL, Postgres, NoSQL, something that has been around for years, or something you're just thinking about. You can see previous session ideas at the OpenSQLCamp website.

December News from the Mozilla Database Team

Being the end of the quarter, there has been some planning going on this month about goals for Q1 2013 as well as meeting our goals for Q4 2012. Our biggest goal was to stop using MySQL 5.0, which we have successfully done. We only have one server left on MySQL 5.0, and that has a compatible MySQL 5.1 server waiting for a few developers to get back from their well-deserved vacations to migrate off. In December, we finished upgrading 2 servers to MySQL 5.1.

– Looked at the top 30 Bugzilla queries and started to give optimization tips for MySQL.
– Did our regular purge/defrag of TinderBox PushLog.
– Worked on integrating our datazilla code with chart.io features.
– Helped change the data model for datazilla.
– Moved some Bugzilla tables to a different partition when the data partition filled up. There is a plan to upgrade but we had an immediate need for the move.
– Upgraded one of the Bugzilla slaves to MariaDB 5.5.
– Refreshed the support staging database with production data.
– Added grants for metrics users to support new Bugzilla custom fields.
– Did some research on whether SSDs were good enough for the addons database or if we really needed Fusion I/O. (conclusion: SSDs are good enough! The driver for this was cost of larger Fusion I/O disks, and starting to worry about space on the database systems.)
– Found a bug in new build code for the builder that builds Firefox, that would effectively stop updated builds from being recorded in the builder database. The bug was found in development, the code itself is not in production yet, but there were several hours of database debugging to figure out the problem.
– Built a new database server for backups that does not depend on NFS.
– Implemented checksum checking on several more MySQL clusters to ensure the data on the master and slaves match.
– Created databases for Game On.
– Optimized a query for a Firefox build system component (clobberer).
– Installed larger disks on our production Postgres failover server. We will be testing failover and adding more disks to the current master server in Q1.
– Created a database cluster for the main Mozilla website for failover.
– Cleaned up replication on a cluster after a power problem caused the master to crash.
– Added a Nagios check that uses pt-config-diff to all our MySQL servers, to ensure that we know whenever the running MySQL configuration does not match the /etc/my.cnf file (a sketch of the underlying check is after this list).
– Dealt with a set of queries breaking replication due to not being inside a transaction.
– Dealt with a schema change for Bugzilla taking a long time, and took slaves out of the load balancer one at a time to let the ALTER TABLE complete without read queries getting locked and causing slow responsiveness on the database servers.
– Created read-only database logins for the administrators of Air Mozilla so they can better debug problems.
– Imported some data for Graphs.
– Audited the Persona/BrowserID databases to get them ready for prime time (these databases are not normally managed by the DB team).
– Did a security review for our Engagement team to get reports of Mozillians emails for sending out information to registered and vouched Mozillians.
– Added documentation for 11 Nagios checks related to MySQL and Postgres.
– Researched the Zero Day Exploits for MySQL to see if Mozilla was affected.
– Puppetized the postgresql.conf files for all our postgres servers.
– Puppetized our datazilla database servers.
– Puppetized database servers for web development and for internal tools.
– We sized MySQL database machines for the Platform as a Service (PaaS) platform that the webops team will be implementing. The next step is ordering the hardware!
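
The pt-config-diff check mentioned above is easy to reproduce, since the tool exits non-zero when the running configuration differs from the file; a minimal sketch, with a placeholder DSN:

pt-config-diff /etc/my.cnf h=localhost,u=nagios,p=XXXX \
  && echo "OK: running config matches /etc/my.cnf" \
  || echo "WARNING: running config differs from /etc/my.cnf"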

On the planning side, we did a lot in 2012 to stabilize our MySQL environment and build a good, sane, centralized puppet configuration to control MySQL packages, configuration files, scripts and backups. 2013 will be the year we do the same with Postgres:
– Stabilizing Postgres
– Streamlining Postgres configuration and installation and upgrading with puppet
– Reconfiguring Postgres logging
– Stabilizing Postgres backups

There are plenty of great things that will happen in 2013 from the Mozilla Database Team for both MySQL and Postgres databases!

2012 Mozilla DB Year in Graphs

I am not a wizard with infographics, but I can do a few pie charts. I copied the data to the right of the pie charts for those who want to see the numbers. Overall, there are almost 400 databases at Mozilla, in 11 different categories. Here is how each category fares in number of databases:

Here is how each category measures up with regards to database size; clearly, our crash-stats database (which is on Postgres, not MySQL) is the largest:

So here is another pie chart with the relative sizes of the MySQL databases:

I’m sure I’ve miscategorized some things (for instance, are metrics on AMO classified under AMO/Marketplace or internal tools?) but here are the categories I used:

Categories:
air.m.o: air.mozilla.org
AMO/Marketplace: addons/marketplace
blog/web page: it's a db behind a blog or mostly static webpage
bugzilla: Bugzilla
Crash-stats: Socorro, crash-stats.mozilla.com, where apps like Firefox send crash details
Internal tool: if the db behind this is down, moco/mofo people may not be able to do their work. This covers applications from graphs.mozilla.org to inventory.mozilla.org to the PTO app.
release tool: if this db is down, releases cannot happen (but this db is not a tree-closing db)
SUMO: support.mozilla.org
Tree-closing: if this db is down, the tree closes (and releases can't happen)
World-facing: if this db is down, non-moco/mofo people will notice. These are specifically tools that folks interact with, including the Mozilla Developer Network and sites like gameon.mozilla.org.
World-interfacing: this db is critical to tools we use to interface with the world, though not necessarily world-visible. basket.mozilla.org, Mozillians, etc.

The count of databases includes all production/dev/stage servers. The size is the size of the database on one of the production/dev/stage machines. For example, Bugzilla has 6 servers in use: 4 in production and 2 in stage. The size counted is the size of the master in production plus the master in stage. This way we have not grossly inflated the size of the database, even though technically speaking we do have to manage the data on each of the servers.

For next year, I hope to be able to gather this kind of information automatically, and have easily accessible comprehensive numbers for bandwidth, number of queries per day on each server, and more.
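
On the MySQL side, the size numbers at least are easy to automate straight out of INFORMATION_SCHEMA; a sketch of the kind of query that could feed next year's charts:

-- per-database footprint (data + indexes) in GB, largest first
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
  FROM INFORMATION_SCHEMA.TABLES
 GROUP BY table_schema
 ORDER BY size_gb DESC;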

Learn MySQL For Free During a Virtual Self-Study Group!

Back in the summer, I heard about how Gene Babon of Boston's PHP Meetup Group was brilliantly helping folks learn web programming through PHP Percolate, jQuery Jam and HTML5 Brunch. While he was explaining this, I thought to myself, "this needs to be done for MySQL!"

And so it will be. Starting in just over a month, on January 1st, 2013, the first MySQL Marinate group will happen. We already have over 50 people signed up for the event, and as of right now there are 85 people in the meetup group, with over 30% of the group being female.

I found a book that teaches MySQL with exercises that students can do: O'Reilly's Learning MySQL. All you need to do is sign up for the event and acquire a copy of the book: maybe you can find a used copy, maybe your library has it, or you can buy it new in paper or electronic form. Starting January 1st, read a chapter per week and do the exercises. We will go through chapter 12. The first few chapters cover database basics and installing and connecting to MySQL, so if you cannot start exactly on January 1st, that's fine. And you can do exercises ahead of time.

The best part is that each chapter (each week) has its own message thread, so if you have a question about something in the book, or are having problems, there is a place to connect with others. Gerry Narvaja will be leading a group in Seattle with some in-person discussions/office hours, and I will do the same in the Boston area. I will also do some live office hours online in IRC; details forthcoming.

It's a study group. For MySQL. Where a few top experts are around to help out. Sign up today; what do you have to lose?

About Master/Master Replication, from Books

It started with a tweet from a coworker asking if I could recommend reading for setting up a master/master MySQL pair. There are plenty of caveats about writing to only one master at a time, and about master/master not being write scaling, but I think I tackled the topic pretty well in the MySQL Administrator's Bible. It is not a very long topic, so I made a PDF of the relevant pages. High Performance MySQL also has a few pages that I would recommend reading; the third edition has similar information to the Bible, although it goes into more detail about why you might use master/master replication and what might go wrong. Unfortunately, I could not find a resource for the few pages of text from High Performance MySQL, and I am not sure this person needs the whole book for just a few pages.
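
To make one of those caveats concrete: if both masters can ever take writes, the usual safety net is to stagger auto-increment generation so the two servers never hand out colliding ids. A minimal my.cnf sketch (the values are illustrative):

# on master A:
auto_increment_increment = 2
auto_increment_offset    = 1

# on master B:
auto_increment_increment = 2
auto_increment_offset    = 2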

Mozilla DB News, Fri 26 October: puppetizing backups

This week was a great catch-up week. There is only one conference I am doing for the rest of the year, CodeConnexx, though submissions for conferences in the first quarter of 2013 are happening, too. We have some great candidates we are interviewing for our open Database Administrator position.

The database team has gotten a lot of great stuff done this week (I know, I say that all the time, but it's true!):

  • Audited and got rid of legacy MyISAM tables in our Addons database.
  • Decommissioned an old MySQL cluster that has not been in use for a while. It is the database that used to back the predecessor to Mozillians.
  • Moved half our backups in one data center to another machine in the same data center, as they were on a NetApp that's having problems with being overloaded. Our Storage Team is talking to NetApp, but for now we alleviated some of the problems by moving the backups to another head. We also opened the process to get hardware allocated so our backups aren't using NFS.
  • We took the move as an opportunity to puppetize the backup servers. Now all the backup scripts and backup instances are puppetized, with just a few more challenging items remaining: config files and startup scripts for each backup instance.
  • We have enabled a Nagios check for pt-config-diff so that we will be alerted (by e-mail only) when a running configuration on MySQL is different from the configuration file.
  • Fixed automatic slow query log copying from Addons to a machine our web developers use.
  • The entire IT team is working on documenting our Nagios checks: specifically, what to do for each Nagios check, so our oncall folks can handle more problems before we have to be called in. We have documented 6 checks so far.
  • Fixed a fascinating problem in which ulimits we put in place were not being read by Percona when we upgraded from MySQL 5.1 to Percona 5.1. (I have to blog about this, actually, with all the details.)
  • We upgraded the kernel for 3 different Addons database servers due to a crashing bug.
  • Finished work on one of our multi-use staging clusters: upgrading and converting to innodb_file_per_table.
  • Reduced the innodb_buffer_pool_size on one of our multi-use staging clusters, so that swapping and its corresponding cpu load would be reduced.
  • Loaded in missing data due to a failed cron job on our Crash Stats cluster.
  • Deleted some spam comments in Bugzilla.
  • Created two new read/write accounts for the development database cluster for Mozillians.
  • Moved our soon-to-be deprecated cacti database off an SSH jump host, which means the jump host no longer has a MySQL installation on it.
  • Ran a query to figure out how many Bugzilla e-mail addresses have a + or other special character in them, as a percentage of total emails (a sketch of such a query is after this list).
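
That last one is a simple query against Bugzilla's schema; a sketch, assuming the stock profiles table with its login_name column, and covering just the + case:

-- percentage of Bugzilla accounts whose e-mail address contains a +
SELECT 100.0 * SUM(login_name LIKE '%+%') / COUNT(*) AS pct_with_plus
  FROM profiles;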

Next week is Halloween! Are you ready?