Data Warehousing Tips and Tricks

It’s not easy to do a DW in MySQL — but it’s not impossible either. Easier to go to Teradata than to write your own.

DW characteristics:

1) Organic — evolves over time from OLTP systems — issues with locking, large queries, and # of users.

2) Starts as a copy of OLTP, but changes over time — schema evolution, replication lag, duplicate data issues

3) Custom — designed from the ground up for DW — issues with getting it started, growth, aggregations, backup.

4) How do you update the data in the warehouse? — write/update/read/delete, write/read/delete, or write only — which means that roll-out requires partitions or MERGE tables.

The secret to DW is partitioning — can be based on:
data — date, groups like department, company, etc.
functional — sales, HR, etc.
random — hash, mod on a primary key.

You can partition:
manually — unions, application logic, etc.
using MERGE tables and MyISAM
MySQL 5.1 using partitions
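
For example, a minimal sketch of the last two approaches (table and column names are assumed):

CREATE TABLE sales_2006 (saleID INT, saleDate DATE, amount DECIMAL(10,2)) ENGINE=MyISAM;
CREATE TABLE sales_2007 LIKE sales_2006;
CREATE TABLE sales_all (saleID INT, saleDate DATE, amount DECIMAL(10,2))
ENGINE=MERGE UNION=(sales_2006, sales_2007) INSERT_METHOD=LAST;

-- the same idea with MySQL 5.1 native partitioning:
CREATE TABLE sales (saleID INT, saleDate DATE, amount DECIMAL(10,2))
PARTITION BY RANGE (YEAR(saleDate)) (
  PARTITION p2006 VALUES LESS THAN (2007),
  PARTITION p2007 VALUES LESS THAN (2008)
);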

You can load, back up and purge by partition, so keep that logic simple — if it takes too much work to load a partition because you’ve grouped it oddly, then your partitioning scheme isn’t so great.

Make sure your partitioning is flexible — you need to plan for growth from day 1. Don’t just partition once and forget about it; make sure you can change the partitioning scheme without too much trouble. Hash and modulo partitioning aren’t very flexible — changing them means restructuring your data.

Use MyISAM for data warehousing — it’s 3-4 times faster than InnoDB, the data is 2-3 times smaller, MyISAM table files can be easily copied from one server to another, MERGE tables are available only over MyISAM tables (scans are 10-15% faster with MERGE tables), and you can make read-only tables (compressed, with indexes) to reduce data size further — i.e., compress older data (a year old, or a week old if it doesn’t change!)
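
A sketch of compressing last year’s table from the shell (paths are assumed; run it when the server isn’t writing to the table):

myisampack /var/lib/mysql/dw/sales_2006     # packs the data file and marks the table read-only
myisamchk -rq /var/lib/mysql/dw/sales_2006  # rebuilds the indexes after packing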

Issues with using MyISAM for DW — table locking for high volumes of real-time data (concurrent inserts are allowed only when there are ONLY insertions going on, not deletions). This is where partitioning comes in! REPAIR TABLE also takes a long time — better to back up frequently, saving tables, load sets and logs, and then instead of REPAIR TABLE do a point-in-time recovery. For a write-only DW, save your write loads and use them as part of your backup strategy.

Deletes will break concurrent inserts — delayed inserts still work, but they’re not as efficient. You also have to program that in, you can’t, say, replicate using INSERT DELAYED where the master had INSERT.

[Baron’s idea — take current data in InnoDB format, and UNION over other DW tables]

No index clustering for queries that need it — OPTIMIZE TABLE will fix this but it can take a long time to run.

When to use InnoDB — if you must have a high volume of realtime loads — InnoDB record locking is better.

Also use InnoDB if ALL of your queries can take advantage of index clustering — that is, most or all queries access the data using the primary key (the data is clustered together with the primary key, so primary key lookups are much faster than they are in MyISAM). BUT this means you want to keep your primary keys small. Plus, the more indexes you have, the slower your inserts are, more so because of the clustering.

MEMORY storage engine: Use it when you have smaller tables that aren’t updated very often; they’re faster and support hash indexes, which are better for doing single record lookups.

Store the data for the MEMORY engine twice — once in the MEMORY table and once in MyISAM or InnoDB — and add queries to the MySQL init script that copy the data from the disk tables to the MEMORY tables on restart, using --init-file=<file name>
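
A minimal sketch of such an init file, assuming a disk table region with a MEMORY copy region_mem:

-- MEMORY tables are empty after a restart, so reload them from the disk copy
INSERT INTO dw.region_mem SELECT * FROM dw.region;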

ARCHIVE storage engine — use to store older data. More compression than compressed MyISAM, fast inserts, 5.1 supports limited indexes, good performance for full table scans.

Nitro Storage Engine — very high INSERT rates w/ simultaneous queries. Ultra high performance on aggregate operations on index values. Doesn’t require 64-bit server, runs well on 32-bit machines. High performance scans on temporal data, can use partial indexes in ways other engines can’t. http://www.nitrosecurity.com

InfoBright Storage Engine — best compression of all storage engines — 10:1 compression, peak can be as high as 30:1 — includes equivalent of indexes for complex analysis queries. High batch load rates — up to 65GB per hour! Right now it’s Windows only, Linux and other to come. Very good performance for analysis type queries, even working with >5TB data. http://www.infobright.com

Backup — for small tables, just back up. The best option for large tables is copying the data files. If you have a write-only/roll-out DB you only need to copy the newly added tables, so you don’t need to keep backing up the same data — just back up the new stuff, or just save the load sets. Back up what changes, and partition smartly.

Tips:
Use INSERT ... ON DUPLICATE KEY UPDATE to build aggregate tables when the tables are very large and sorts go to disk, or when you need the aggregates in real time.
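
A sketch, with assumed names. Each new fact accumulates into its aggregate row in place, with no big sort:

INSERT INTO sales_by_day (saleDate, total)
VALUES ('2007-04-24', 19.95)
ON DUPLICATE KEY UPDATE total = total + VALUES(total);

(sales_by_day needs a PRIMARY or UNIQUE key on saleDate for this to work.)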

Emulating Star Schema Optimization & Hash Joins — MySQL doesn’t do these, except that MEMORY tables can use hash indexes. So use a MEMORY table with hash indexes and optimizer hints to do a star-schema-optimized hash join manually. Steps:

1) Create a query to filter the fact table.
To select all sales from weeks 1-5 and display them by region & store type:

SELECT D.week, S.totalsales, S.locationID, S.storeID
FROM sales S INNER JOIN date D USING (dateID)
WHERE D.week BETWEEN 1 AND 5;

Access only the tables you need for filtering the data, but select the foreign key IDs.

2) Join the result from step 1 with other facts/tables needed for the report

SELECT R.week, R.totalsales, L.region, ST.store_type
FROM (SELECT D.week, S.totalsales, S.locationID, S.storeID
FROM sales S INNER JOIN date D USING (dateID)
WHERE D.week BETWEEN 1 AND 5) AS R
INNER JOIN location AS L ON (L.locationID = R.locationID)
INNER JOIN store AS ST ON (ST.storeID = R.storeID);

3) Aggregate the results

SELECT R.week, L.region, ST.store_type, SUM(R.totalsales) AS total_sales
FROM (SELECT D.week, S.totalsales, S.locationID, S.storeID
FROM sales S INNER JOIN date D USING (dateID)
WHERE D.week BETWEEN 1 AND 5) AS R
INNER JOIN location AS L ON (L.locationID = R.locationID)
INNER JOIN store AS ST ON (ST.storeID = R.storeID)
GROUP BY R.week, L.region, ST.store_type;
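
To make step 2 behave like a hash join, step 1’s result can be materialized as a MEMORY table with hash indexes. A sketch, reusing the names above:

CREATE TABLE R ENGINE=MEMORY
SELECT D.week, S.totalsales, S.locationID, S.storeID
FROM sales S INNER JOIN date D USING (dateID)
WHERE D.week BETWEEN 1 AND 5;

ALTER TABLE R ADD INDEX USING HASH (locationID), ADD INDEX USING HASH (storeID);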

Critical configuration options for DW — sort_buffer_size — used for SELECT DISTINCT, GROUP BY, ORDER BY, and UNION DISTINCT (or just UNION).

Watch the value of sort_merge_passes (more than 1 per second, or 4-5 per minute, means trouble) to see if you need to increase sort_buffer_size. sort_buffer_size is a PER-CONNECTION parameter, so don’t be too greedy — but it can also be increased dynamically before running a large query and reduced afterwards.
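
For example, around one known-heavy query:

SET SESSION sort_buffer_size = 32*1024*1024;  -- 32M, for this connection only
-- ... run the big GROUP BY / ORDER BY query ...
SET SESSION sort_buffer_size = DEFAULT;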

key_buffer_size — use multiple key buffer caches. Use different caches for hot, warm & cold indexes. Preload your key caches at server startup. Try to use 1/4 of memory (up to 4G per key buffer) for your total key buffer space. Monitor the cache hit rate by watching:

Read hit rate = key_reads/key_read_requests
Write hit rate = key_writes/key_write_requests
Key_reads & key_writes per second are also important.

hot_cache.key_buffer_size = 1G
fred.key_buffer_size = 1G
fred.key_cache_division_limit = 80
key_buffer_size = 2G
key_cache_division_limit = 60
init-file = my_init_file.sql

in the init file:

CACHE INDEX T1, T2, T3 INDEX (I1, I2) IN hot_cache;
CACHE INDEX T4, T5, T3 INDEX (I3, I4) IN fred;
LOAD INDEX INTO CACHE T1, T3 IGNORE LEAVES; -- use when the cache isn’t big enough to hold the whole index
LOAD INDEX INTO CACHE T10, T11, T2, T4, T5;

http://dev.mysql.com/doc/refman/5.0/en/myisam-key-cache.html

This was implemented in MySQL 4.1.1

Temporary table sizes — monitor Created_tmp_disk_tables — more than a few per minute is bad; even one a minute could be bad, depending on the query. Temp tables start in memory and then go to disk. Increase tmp_table_size and max_heap_table_size — this can be done per session, for queries that need more than 64MB or so of space.
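
A per-session sketch for a report whose implicit temporary table outgrows the defaults:

SET SESSION tmp_table_size = 128*1024*1024;
SET SESSION max_heap_table_size = 128*1024*1024;  -- the smaller of the two is the effective in-memory limit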

ALWAYS turn on the slow query log! Rotate the logs and keep a few, and use mysqldumpslow to analyze queries daily. Best to have an automated script run mysqldumpslow and e-mail a report with the 10-25 worst queries.

Turn on log_queries_not_using_indexes unless your DW is designed to use explicit full-table scans.

Learn what the explain plan output means & how the optimizer works:
http://forge.mysql.com/wiki/MySQL_Internals_Optimizer

Other key status variables to watch
select_scan — # of joins that did a full scan of the first table
select_full_join — # of joins doing a full table scan because they don’t use indexes
sort_scan — # of sorts done by scanning the table
table_locks_waited
uptime

mysqladmin extended:
mysqladmin -u user -ppasswd ext -i60 -r | tee stats.log | grep -v ' 0 '

(runs every 60 seconds, displays only status variables that have changed, and logs full status to stats.log).

Japanese Character Set

There are too many Japanese characters to be able to use one byte to handle all of them.

Hiragana — over 50 characters

Katakana — over 50 characters

Kanji — over 6,000 characters

So the Japanese character set has to be multi-byte. JIS = Japanese Industrial Standard; this specifies it.

JIS X 0208 in 1990, updated in 1997 — covers widely used characters, not all characters
JIS X 0213 in 2000, updated in 2004

There are also vendor defined Japanese charsets — NEC Kanji and IBM Kanji — these supplement JIS X 0208.

Cellphone specific symbols have been introduced, so the # of characters is actually increasing!

For JIS X 0208, there are multiple encodings — Shift_JIS (all characters are 2 bytes), EUC-JP (most are 2 bytes, some are 3 bytes), and Unicode (all Japanese characters are 3 bytes in UTF-8, which makes people reluctant to use UTF-8 for Japanese data). Shift_JIS is the most widely used, but people are moving to Unicode gradually (Vista uses Unicode as its standard now). Each code mapping is different, with different hex values for the same character in different encodings.

Similarly, there are different encodings for the other charsets.

MySQL supports only some of these. (get the graph from the slides)

char_length() returns the length in # of characters; length() returns the length in # of bytes.
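
For example, assuming a utf8 connection, a 3-character Japanese string occupies 9 bytes:

SELECT CHAR_LENGTH('日本語'), LENGTH('日本語');  -- returns 3 and 9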

The connection charset and the server charset have to match otherwise…mojibake!

On Windows, Shift_JIS is the standard encoding; on Linux, EUC-JP is standard. So conversion may be needed.

MySQL Code Conversion algorithm — UCS-2 facilitates conversion between encodings. MySQL converts mappings to and from UCS-2. If client and server encoding are the same, there’s no conversion. If the conversion fails (ie, trying to convert to latin1), the character is converted to ? and you get mojibake.

You can set the my.cnf parameter skip-character-set-client-handshake, which ignores the charset the client requests and forces the use of the server-side charset.

Issues:

Unicode is supposed to support worldwide characters.

UCS-2 is 2-byte fixed length, covering 2^16 = 65,536 characters. This is the Basic Multilingual Plane (BMP). Some Japanese (and Chinese) characters are not covered by UCS-2. Windows Vista supports JIS X 0213:2004 as a standard character set in Japan (available for Windows XP with the right updates).

UCS-4 is 4-byte fixed length and can encode 2^31 characters (~2 billion). This covers many BMPs (32,768 of them).

UTF-16 is 2 or 4 bytes per character; all UCS-2 characters are mapped to 2 bytes. Not all UCS-4 characters are supported — about 1 million are. The supported UCS-4 characters are mapped to 4 bytes.

UTF-8 at 1-6 bytes is fully compliant with UCS-4, but that definition is out of date. 1-4 byte UTF-8 is fully compliant with UTF-16. At 1-3 bytes, UTF-8 is compliant with UCS-2.

MySQL internally handles all characters as UCS-2; UCS-4 is not supported. This is not enough. Plus, UCS-2 is not supported as a client encoding. UTF-8 support goes up to 3 bytes — this is not just a MySQL problem, though.

CREATE TABLE t1 (c1 VARCHAR(30)) CHARSET=utf8;
INSERT INTO t1 VALUES (0x6162F0A0808B63646566); -- this inserts ‘ab’, then a 4-byte UTF-8 character, then ‘cdef’

SELECT c1, HEX(c1) FROM t1;
If you get ab, 6162 back, it means the string was truncated at the invalid 4-byte character. MySQL does raise a warning for this.

Possible workarounds: use VARBINARY/BLOB types. These can store any binary data, but comparisons are always case-sensitive (and yes, Japanese characters do have case), FULLTEXT indexes are not supported, and application code may need to be modified to handle UTF-8 — e.g., String.getBytes may need a “UTF-8” parameter.
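
A sketch of the VARBINARY workaround, reusing the value above: the bytes that utf8 truncated survive intact.

CREATE TABLE t2 (c1 VARBINARY(30));
INSERT INTO t2 VALUES (0x6162F0A0808B63646566);
SELECT HEX(c1) FROM t2;  -- returns the full 6162F0A0808B63646566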

Alternatively, use UCS-2 for column encoding:

CREATE TABLE t1 (c1 VARCHAR(30)) CHARSET=ucs2;

INSERT INTO t1 VALUES (_utf8 0x6162F0A0808B63646566);

SELECT … now gives you ?? instead of truncating.

Another alternative: use Shift_JIS or EUC-JP. Code conversion of JIS X 0213:2004 characters is not currently supported.

Shift_JIS is the most widely used encoding; it is a 1- or 2-byte encoding. All ASCII and half-width katakana characters are 1 byte; the rest are 2 bytes. If the first byte value is between 0x00 and 0x7F it’s 1-byte ASCII; 0xA1 – 0xDF is 1-byte half-width katakana; all the rest are the first bytes of 2-byte characters.

The 2nd byte can fall in the ASCII graphic code range — 0x40, for example.

0x5C is the escape character (backslash in the US). Some Shift_JIS characters contain 0x5C in the 2nd byte. If the charset is specified incorrectly, you’ll end up getting different values — for instance, the byte sequence 0x5C6E will be interpreted as \n and converted to hex value 0x0A. A backslash at the end of a string (hex value 0x5C) will be removed (truncated) if the charset is specified incorrectly.

Native MySQL does not support FULLTEXT search in Japanese, Korean and Chinese (CJK issue).

Japanese words are not delimited by spaces, so standard FULLTEXT indexing can’t work. There are 2 ways to do this: dictionary-based indexing, dividing words using a pre-installed dictionary; and N-gram indexing, dividing text every N letters (N can be 1, 2, 3, etc). MySQL + Senna implements this, supported by Sumisho Computer Systems.

OurSQL Podcasts on DVD

If you can find me today during the MySQL Conference & Expo, I have a limited number of DVDs that contain all 15 podcasts. If you have been thinking you wanted to listen to the podcast but haven’t gotten around to downloading the episodes yet, here’s your chance! Just find me — today I’m in a red top and black skirt….

OurSQL Episode 14: The MySQL Conference & Expo

In this episode, we take a walk through the Expo part of the MySQL Conference and Expo. We spoke with 3 companies about their solutions for backup and reporting.

Subscribe to the podcast at:
http://feeds.feedburner.com/oursql

Download all podcasts at:
http://technocation.org/podcasts/oursql/

R1 Soft
http://r1soft.com

Actuate
http://www.actuate.com/birt
or
http://www.eclipse.org/birt

FiveRuns
http://www.fiveruns.com

Feedback:

Email podcast@technocation.org

call the comment line at +1 617-674-2369

use Odeo to leave a voice mail through your computer:
http://odeo.com/sendmeamessage/Sheeri

Or use the Technocation forums:
http://technocation.org/forum


MySQL 2007 Community Award

Tony (my fiancé) says it best — he’s an amazing writer. For some history, I work for a dating site that caters to gay men:

MySQL is database software. Whenever a computer program or system (like, say, a gay man’s online dating service) needs to randomly access, store, and keep track of a bunch of data “stuff” (like, say, a bunch of fruits, their personal information and, uh, “vital statistics”), it puts it into and maintains a database.

MySQL (http://www.mysql.com) is a very popular, very good database system. It’s the one Sheeri uses at her job, and my company is currently test-driving a new way to put together web sites that relies on a MySQL database.

An interesting feature of it is that it’s what’s called “open source” software. That means that the community of users is also largely the community of developers. Anyone using the software who says, “You know, I’d really like it if it did this better” and figures out a way to do it can actually change the software and then tell everyone, “Hey, I did this thing to it”. Or you can just say, “Hey, I figured out how to do this other thing with it” and everyone learns a new thing. It’s very socialist.

So, Sheeri works with MySQL, started up and runs the Boston MySQL users group, and runs a podcast detailing and sharing her expertise and experience using MySQL. Plus, she’s actually giving a couple of workshops at this conference she’s at:

http://mysqlconf.com/cs/mysqluc2007/view/e_spkr/2731

Hence, she is a MySQL community advocate. In fact, the 2007 MySQL Community Advocate of the Year. It didn’t come with an oversized novelty check, but it’s all very computer geek sexy. It means my wife-thing is very good at what she does and people respect her intelligence and her efforts. Meanwhile, her husband-lump-thing knows how to make playing cards appear in his pockets and occasionally leaves his shoes on the bed. But he’s pretty.

Eben Moglen: Freedom Businesses Protect Privacy

“What societies value is what they memorize. And how they memorize it and who has access to its memorized form determines who has power.”

We’re starting to become a society that “memorizes” private facts — not just public records being written down, but private thoughts, dreams and wishes.

“Living largely in a world of expensive written material and seeking to build a private database of things experienced and learned, early modern Europeans built in their minds memory palaces — imaginary rooms furnished with complex bric-a-brac and decorations. . . By walking through the rooms of the memory palace in their minds, [they] remembered things they needed to know.”

Photographs took a factual and emotional snapshot of experience and put it into a form that could be held and shared, unlike a memory palace.

“The private photograph isn’t private any more.”

I can’t do Eben’s speech justice. I will post the video when I get to it, but he’s really saying some great stuff about privacy.

MySQL Security Talk slides

For those wanting the slides for “Testing the Security of Your Site”, they’re at:

http://www.sheeri.com/presentations/MySQLSecurity2007_04_24.pdf — 108 K PDF file

http://www.sheeri.com/presentations/MySQLSecurity2007_04_24.swf — 56 K Flash file

and some code:

For the UserAuth table I use in the example to test SQL injection (see slides):

CREATE TABLE UserAuth (
  userId INT UNSIGNED AUTO_INCREMENT NOT NULL PRIMARY KEY,
  uname VARCHAR(20) NOT NULL DEFAULT '' UNIQUE KEY,
  pass VARCHAR(32) NOT NULL DEFAULT ''
) ENGINE=INNODB DEFAULT CHARSET=UTF8;

Populate the table:

INSERT INTO UserAuth (uname) VALUES ('alef'),('bet'),('gimel'),('daled'),('hay'),('vav'),('zayin'),('chet'),('tet'),('yud'),('kaf'),('lamed'),('mem'),('nun'),('samech'),('ayin'),('pe'),('tsadik'),('kuf'),('resh'),('shin'),('tav');
UPDATE UserAuth SET pass=MD5(uname) WHERE 1=1;

Test some SQL injection yourself:
go to Acunetix’s test site: http://testasp.acunetix.com/login.asp

Type any of the following as your password, with any user name:
anything' OR 'x'='x
anything' OR '1'='1
anything' OR 1=1
anything' OR 1/'0
anything' UNION SELECT 'a
anything'; SELECT * FROM Users; select '
1234' AND 1=0 UNION ALL SELECT 'admin', '81dc9bdb52d04dc20036dbd8313ed055

And perhaps some of the following:
ASCII/Unicode equivalents (CHAR(39) is single quote)
Hex equivalents (0x27, ie SELECT 0x27726F6F7427)
-- for comments

SQL Antipatterns — Bill Karwin

Well, I came late, so I missed the first one….so we start with #2

#2. Ambiguous GROUP BY —

Query the bugs table and include details from the corresponding products rows:

SELECT b.bug_id, p.product_name FROM bugs b NATURAL JOIN bugs_products NATURAL JOIN products p GROUP BY b.bug_id;

We use the GROUP BY to get one row per bug, but then you lose information.

Standard SQL requires that every column in the SELECT clause either appear in the GROUP BY clause or be inside an aggregate function. MySQL does not enforce this. If a column is in the SELECT clause but not in the GROUP BY clause, MySQL displays an arbitrary value from the group.

[my note, not said in the presentation: this fools people when they want the groupwise maximum — they think that selecting multiple columns and grouping means they get some particular row]

Solution 1: Restrict yourself to standard SQL — when you use GROUP BY, do not put columns in the SELECT clause unless they are in the GROUP BY clause or inside an aggregate function.

Solution 2: Use GROUP_CONCAT() to get a comma-separated list of distinct values in the row.

SELECT b.bug_id, GROUP_CONCAT(p.product_name) AS product_names FROM bugs b NATURAL JOIN bugs_products NATURAL JOIN products p GROUP BY b.bug_id;

Performance: no worse than doing a regular group function because the concat happens after the grouping is done.

3. EAV Tables — Entity-Attribute-Value Tables.

Example: a product catalog with attributes — too many to use one column per attribute, and not every product has every attribute. E.g., DVDs don’t have pages, and books don’t have a region encoding.

Most people make an “eav” table, that has the attribute name and value and the entity name. It associates the entity name (say, “spiderman DVD”) with an attribute (“region encoding”) and value (“region 1”)
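
A sketch of what such a table typically looks like (names assumed):

CREATE TABLE eav (
  entity VARCHAR(64) NOT NULL,
  attribute VARCHAR(64) NOT NULL,
  value VARCHAR(255),
  PRIMARY KEY (entity, attribute)
);
INSERT INTO eav VALUES ('spiderman DVD', 'region encoding', 'region 1');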

Why is this bad? It’s harder to apply constraints, because one value column has to hold many different kinds of values (e.g., # of pages should be a number, but region encoding is a string). This may be a sign of a bad data model.

So why is this bad?

EAV cannot require an attribute — if you had one column per attribute, you could specify NOT NULL (e.g., price). You could try to enforce it with TRIGGERs, but MySQL triggers cannot raise errors or abort the operation that spawned them — in other words, you can’t stop the row from being inserted, you can only react when a row is inserted.

EAV cannot have referential integrity to multiple lookup tables, or only for some values.

It’s also expensive and complex to find all the attributes for one entity. In order to get one row that looks like normalized data, you need one join per attribute, and you may not even know how many there are.
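
For instance, reconstructing just two attributes already takes a self-join per attribute (a sketch using the eav table above):

SELECT p.value AS price, r.value AS region
FROM eav p JOIN eav r USING (entity)
WHERE p.attribute = 'price'
  AND r.attribute = 'region encoding'
  AND entity = 'spiderman DVD';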

Solution: Try not to use EAV tables, defining your attributes in your data model (ie, one table per attribute type). If you do, application logic should enforce constraints. Don’t try to fetch attributes in a single row (that looks like normalized data); fetch multiple rows and use the application code to reconstruct the entity.

4. Letting users crash your server
Example: people request the ability to query the database flexibly. The antipattern is to give them access to run their own SQL.

Solution: give an interface which allows parameters to queries. But watch out for SQL injection!

Filter input by escaping strings, or use parameterized queries.
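
A sketch using MySQL’s server-side prepared statements: the user-supplied value is bound as a parameter, never spliced into the SQL text.

PREPARE stmt FROM 'SELECT b.bug_id FROM bugs b WHERE b.bug_id = ?';
SET @id = 1234;
EXECUTE stmt USING @id;
DEALLOCATE PREPARE stmt;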

6. Forcing primary keys to be contiguous

Example: managers don’t like gaps in invoice #’s. The antipattern is to reuse primary key values to fill in the gaps; another is to change values to close the gaps.

Solution — deal with it. Do not reuse primary keys. Also, do not use auto_increment surrogate keys for everything if you do not need to.

Easiest Application-Level MySQL Auditing

This article shows the easiest way to audit commands to a MySQL database, assuming all database access happens through an application. This will use a lot of storage and doubles the query load for each query, but it’s useful when you know you want to capture what someone is doing through the application.

The basic premise is simple. Logon to your nearest MySQL server and type the following:

SELECT CURRENT_USER(), USER();

Chances are the values are different. More on this later.

First, create a table:

CREATE TABLE `action` (
`user` varchar(77) NOT NULL default '',
`asuser` varchar(77) NOT NULL default '',
`db` varchar(64) NOT NULL default '',
`query` mediumtext NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8;

Why varchar(77)? Because the mysql.user table allows a maximum of 16 characters for the username and 60 characters for the hostname, plus the 1-character “@”. Similarly, database names are limited to varchar(64).

The “asuser” column is the grant record that the user is acting as. For instance, a connection with the username “sheeri” from the host “www.sheeri.com” has a user value of “sheeri@www.sheeri.com” but may have an asuser value of “sheeri@’%.sheeri.com'” — whatever the GRANT statement that applies to my current user is. This is the difference between CURRENT_USER() and USER().

Then, create the function — here’s a PHP example:
function my_mysql_query ($query, $dblink) {
    // log who ran what; escape the query so it can't break (or inject into) the INSERT itself
    $action = "INSERT INTO action (user, asuser, db, query) VALUES (CURRENT_USER(), USER(), DATABASE(), '" . mysql_real_escape_string($query, $dblink) . "')";
    mysql_query($action, $dblink);
    return mysql_query($query, $dblink);
}

Of course, we could also add in application specific information. For a web-based application where there is an overall password instead of a different password for each customer or user, this does not help. However in that case, a session username and client IP can be easily gotten from environment variables and used instead of the MySQL-specific “user@host”.

To use it, simply use my_mysql_query in place of mysql_query.
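
For instance (table name assumed):

$result = my_mysql_query("SELECT * FROM customers", $dblink);
// the SELECT runs as before, and a copy of the statement lands in the action table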

Note that this is the quick-and-dirty way to do it.

OurSQL Episode 13: The Nitty Gritty of Indexes

In this episode, we go through how a B-tree works. The next episode will use what we learn in this episode to explain why MySQL indexes work the way they do.

Direct play this episode at:
http://technocation.org/content/oursql-episode-13%3A-nitty-gritty-indexes-0

Download all podcasts at:
http://technocation.org/podcasts/oursql/

Subscribe to the podcast at:
http://feeds.feedburner.com/oursql

Register for the MySQL Conference now!:
http://www.mysqlconf.com

Quiz to receive a free certification voucher from Proven Scaling:
http://www.provenscaling.com/freecert

MySQL Full Reference Cards:
http://www.visibone.com/sql

About B-Trees:
http://www.semaphorecorp.com/btp/algo.html

http://perl.plover.com/BTree/article.txt

Feedback:

Email podcast@technocation.org

call the comment line at +1 617-674-2369

use Odeo to leave a voice mail through your computer:
http://odeo.com/sendmeamessage/Sheeri

Or use the Technocation forums:
http://technocation.org/forum
