Monday, 28 September 2009

Finding, for each time interval, how many records are "occurring" during that interval

This is a complex problem: you are mapping events (of some kind) with a start and end timestamp, but how do you know, for a specific interval [ti,tf] (a timeslot), how many of those records have start < ti and end > tf? The problem is tricky because there are no records defining the timeslot itself that could serve as a grouping or comparison field. It's the kind of problem I've seen people approach procedurally, and that's the big hurdle in understanding SQL, whose problems are typically set problems.

The main issue with this problem is that you need to count occurrences against a list you don't have. In my real scenario, there are some restrictions to keep in mind:

  • The data set is extremely large, so this analysis is generated daily, for one specific day.

  • Due to the above, the table is partitioned on a filtering field (stoptime below).



Immediately, some solutions pop into my head:

  • Use a summary table for each time slot: when a record is inserted, increment all the respective time slots by one (see the sketch right after this list). This is cool, but I'd like to avoid the insert delay. This solution also implies keeping a persistent row for every timeslot spanned by the whole set of records, right? That could be from 2009-08 to 2009-09, but it could just as well start at 1989-09 and run to 2009-09, which represents ~10.5M rows, some of them possibly zero.

  • Another option could be to use cursors to iterate through the selection of records that cross a specific minute, perhaps filling a temporary table with the results. Cursors are slow, this is a procedural approach, and it represents programming overhead.
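Just for illustration, here's a minimal sketch of what that first idea (the summary table) could look like - the table and column names are mine, purely hypothetical, not something that exists in this setup:

[mysql]
-- Hypothetical summary table: one row per minute slot.
CREATE TABLE calls_per_minute (
  tslot datetime NOT NULL,
  calls int unsigned NOT NULL DEFAULT 0,
  PRIMARY KEY (tslot)
) ENGINE=MyISAM;

-- Every insert into phone_calls would have to bump each minute slot
-- the call fully crosses, e.g. a call crossing '2009-08-03 11:33:00':
INSERT INTO calls_per_minute (tslot, calls)
VALUES ('2009-08-03 11:33:00', 1)
ON DUPLICATE KEY UPDATE calls = calls + 1;
[/mysql]

That per-insert work is precisely the insert delay (and the schema change) I'd rather avoid.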



But then again, these are both procedural solutions, and that's why they don't seem very effective - actually, the first is not quite like the second and is fairly widely used, but it requires extra effort and schema changes.
The solution I'm proposing is a set theory approach: IF we had a table of timeslots (minute slots), we could just join the two tables and apply the rules we want. But we don't have one. Perhaps, though, we can generate it. This idea came up after reading the brilliant examples in Roland Bouman's MySQL: Another Ranking trick and Shlomi Noach's SQL: Ranking without self join.



Let's build an example table:
[mysql]
mysql> CREATE TABLE `phone_calls` (
-> `starttime` datetime NOT NULL,
-> `stoptime` datetime NOT NULL,
-> `id` int(11) NOT NULL,
-> PRIMARY KEY (`id`),
-> KEY `idx_stoptime` (`stoptime`)
-> ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
Query OK, 0 rows affected (0.04 sec)
[/mysql]
Now manually insert some interesting records:
[mysql]
mysql> select * from phone_calls;
+---------------------+---------------------+----+
| starttime | stoptime | id |
+---------------------+---------------------+----+
| 2009-08-03 09:23:42 | 2009-08-03 09:24:54 | 0 |
| 2009-08-03 11:32:11 | 2009-08-03 11:34:55 | 2 |
| 2009-08-03 10:23:12 | 2009-08-03 10:23:13 | 1 |
| 2009-08-03 16:12:53 | 2009-08-03 16:20:21 | 3 |
| 2009-08-03 11:29:09 | 2009-08-03 11:34:51 | 4 |
+---------------------+---------------------+----+
5 rows in set (0.00 sec)
[/mysql]

As an example, you may verify that record id=2 crosses only the time slot '2009-08-03 11:33:00' and no other, and that record id=0 crosses none. These are perfectly legitimate call start and end timestamps.

Let's look at a couple of premises:


  • A record that fully crosses at least one minute slot can be described by this:
    MINUTESLOT(stoptime) - MINUTESLOT(starttime) >= 2

    You can think of MINUTESLOT(x) as the minute slot associated with field x in the record. It actually stands for CONCAT(LEFT(x,16),":00"), and the difference is really a TIMESTAMPDIFF() in minutes (a quick check against the sample data follows this list);

  • A JOIN gives you a product of records for each match, which means that if I could "know" a specific timeslot, I could multiply it by the number of records that cross it and then GROUP BY with a COUNT(1). But I don't have the timeslots...
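Before moving on, here's a quick sanity check of the first premise against the sample rows above (my own addition, not part of the original walkthrough):

[mysql]
SELECT id,
       CONCAT(LEFT(starttime,16),":00") AS start_slot,
       CONCAT(LEFT(stoptime,16),":00")  AS stop_slot,
       TIMESTAMPDIFF(MINUTE,
                     CONCAT(LEFT(starttime,16),":00"),
                     CONCAT(LEFT(stoptime,16),":00")) AS slot_diff
FROM phone_calls;
-- Expected slot_diff per id: 0 -> 1, 1 -> 0, 2 -> 2, 3 -> 8, 4 -> 5.
-- Only ids 2, 3 and 4 reach slot_diff >= 2, i.e. they fully cross
-- at least one minute slot.
[/mysql]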


As I've said, I'm generating this result set for a specific day, which is why these records all refer to 2009-08-03. Let's confirm I can select the record space I'm interested in:
[mysql]
mysql> SELECT starttime,stoptime
-> FROM phone_calls
-> WHERE
-> /* partition pruning */
-> stoptime >= '2009-08-03 00:00:00'
-> AND stoptime <= DATE_ADD('2009-08-03 23:59:59', INTERVAL 1 HOUR)
->
-> /* the real filtering:
/*> FIRST: only consider call where start+stop boundaries are out of the
/*> minute slot being analysed (seed.timeslot)
/*> */
-> AND TIMESTAMPDIFF(MINUTE, CONCAT(LEFT(starttime,16),":00"), CONCAT(LEFT(stoptime,16),":00")) >= 2
->
-> /* consequence of the broader interval that we had set to cope
/*> with calls taking place beyond midnight
/*> */
-> AND starttime <= '2009-08-03 23:59:59';
+---------------------+---------------------+
| starttime | stoptime |
+---------------------+---------------------+
| 2009-08-03 11:32:11 | 2009-08-03 11:34:55 |
| 2009-08-03 16:12:53 | 2009-08-03 16:20:21 |
| 2009-08-03 11:29:09 | 2009-08-03 11:34:51 |
+---------------------+---------------------+
3 rows in set (0.00 sec)
[/mysql]

These are the 'calls' that cross at least one whole minute of the selected day. I deliberately spelled out the individual restrictions so you can see the various aspects involved:

  • Partition pruning is fundamental, unless you want to scan the whole 500GB table. This means you are forced to limit the scope of the analysed records. Now, if you have a call starting at 23:58:00 and stopping at 00:01:02 the next day, pruning would leave that record out, so I've allowed 1 HOUR of margin to catch those records;

  • We had to allow stoptime values beyond the end of the day being analysed. That also means we might catch unwanted records that start between 00:00:00 and that 1 HOUR margin, so we'll need to filter them out;
  • Finally, there's also our rule about "crossing a minute".



In the end, some of these restrictions (WHERE clauses) may turn out to be redundant and removable.

Now let's see if we can generate a table of timeslots:
[mysql]
mysql> select CONVERT(@a,DATETIME) AS timeslot
-> FROM phone_calls_helper, (
-> select @a := DATE_SUB('2009-08-03 00:00:00', INTERVAL 1 MINUTE)) as init
-> WHERE (@a := DATE_ADD(@a, INTERVAL 1 MINUTE)) <= '2009-08-03 23:59:59'
-> LIMIT 1440;
+---------------------+
| timeslot |
+---------------------+
| 2009-08-03 00:00:00 |
| 2009-08-03 00:01:00 |
....
| 2009-08-03 23:58:00 |
| 2009-08-03 23:59:00 |
+---------------------+
1440 rows in set (0.01 sec)
[/mysql]

This is the exciting part: we generate the timeslots using user variables, and this may only be possible in MySQL. Notice that I need to resort to an existing table, since I can't produce rows out of thin air: its records are actually used as the driver of my join to generate what I want. You can use any table, as long as it has at least 1440 records (the number of minutes in a day). But you should also keep in mind the kind of access being made to that table, because it can translate into unnecessary I/O if you're not careful:
[mysql]
mysql> explain select CONVERT(@a,DATETIME) AS timeslot
-> FROM phone_calls_helper, (
-> select @a := DATE_SUB('2009-08-03 00:00:00', INTERVAL 1 MINUTE)) as init
-> WHERE (@a := DATE_ADD(@a, INTERVAL 1 MINUTE)) <= '2009-08-03 23:59:59'
-> LIMIT 1440;
+----+-------------+--------------------+--------+---------------+---------+---------+------+------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+--------------------+--------+---------------+---------+---------+------+------+--------------------------+
| 1 | PRIMARY | | system | NULL | NULL | NULL | NULL | 1 | |
| 1 | PRIMARY | phone_calls_helper | index | NULL | PRIMARY | 4 | NULL | 1440 | Using where; Using index |
| 2 | DERIVED | NULL | NULL | NULL | NULL | NULL | NULL | NULL | No tables used |
+----+-------------+--------------------+--------+---------------+---------+---------+------+------+--------------------------+
3 rows in set (0.00 sec)
[/mysql]
In my case I can see that scanning the 1440 records is done on the PRIMARY key, which is great. You should choose a table whose key cache has a high probability of being in RAM, so that the index scan doesn't go I/O bound either. Scanning 1440 PRIMARY KEY entries shouldn't be much of an I/O effort even on cold datasets, but if you can avoid it anyway, all the better.
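By the way, I never showed how phone_calls_helper itself was built; any table with an integer PRIMARY KEY and at least 1440 rows will do. A hypothetical way to create such a table from scratch (just a sketch, not the table used in my real setup):

[mysql]
CREATE TABLE phone_calls_helper (
  id int unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (id)
) ENGINE=MyISAM;

-- Seed one row, then keep doubling until there are at least 1440 rows
-- (11 self-inserts give 2^11 = 2048 rows).
INSERT INTO phone_calls_helper VALUES (NULL);
INSERT INTO phone_calls_helper SELECT NULL FROM phone_calls_helper;
-- ... repeat the statement above until COUNT(*) >= 1440.
[/mysql]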

At this moment you are probably starting to see the solution: whether the optimizer picks the first or the last table to drive the join, it's always a win-win, since the 1440 timeslots are RAM based. You can choose to think of the 1440 timeslots being generated and then multiplied by the number of records that cross each timeslot (Rc), or of the 3 records that cross any timeslot generating the timeslots that fall between the start/stop boundaries of each record (Tr). The mathematical result is the same either way:

sum of Tr over all records = sum of Rc over all timeslots (timeslots_per_records vs. records_per_timeslots)


Well, the two might not represent the same effort. Remember that the timeslots live in memory, and seeking back and forth through them is less costly than seeking back and forth through possibly I/O-bound data. However, due to our "imaginary" way of generating the timeslots (which aren't persisted anywhere by that subquery), we'd need to materialize them so we could seek on them. That would also give us the chance to optimize some other issues, like the CONVERT(), the DATE_ADD()s, etc., and to scan only the timeslots crossed by a specific call, which is optimal. On the other hand, if you're going to GROUP BY the timeslot, you could use an index on the timeslot table and fetch every record that crosses each timeslot. Tough decision, eh? I have both solutions; I won't benchmark them here, but since the "timeslots per record" approach made me materialize the table, I'll leave it here as an example:

[mysql]
mysql> CREATE TEMPORARY TABLE `phone_calls_helper2` (
-> `tslot` datetime NOT NULL,
-> PRIMARY KEY (`tslot`)
-> ) ENGINE=MEMORY DEFAULT CHARSET=latin1 ;
Query OK, 0 rows affected (0.00 sec)

mysql> insert into phone_calls_helper2 select CONVERT(@a,DATETIME) AS timeslot
-> FROM phone_calls_helper, (
-> select @a := DATE_SUB('2009-08-03 00:00:00', INTERVAL 1 MINUTE)) as init
-> WHERE (@a := DATE_ADD(@a, INTERVAL 1 MINUTE)) <= '2009-08-03 23:59:59'
-> LIMIT 1440;
Query OK, 1440 rows affected (0.01 sec)
Records: 1440 Duplicates: 0 Warnings: 0
[/mysql]

So now, the "timeslots per record" query should look like this:
[mysql]
mysql> explain SELECT tslot
-> FROM phone_calls FORCE INDEX(idx_stoptime)
-> JOIN phone_calls_helper2 FORCE INDEX (PRIMARY) ON
-> tslot > CONCAT(LEFT(starttime,16),":00")
-> AND tslot < CONCAT(LEFT(stoptime,16),":00")
->
-> WHERE
-> /* partition pruning */
-> stoptime >= '2009-08-03 00:00:00'
-> AND stoptime <= DATE_ADD('2009-08-03 23:59:59', INTERVAL 1 HOUR)
->
-> /* the real filtering:
/*> FIRST: only consider call where start+stop boundaries are out of the
/*> minute slot being analysed (seed.timeslot)
/*> */
-> AND TIMESTAMPDIFF(MINUTE, CONCAT(LEFT(starttime,16),":00"), CONCAT(LEFT(stoptime,16),":00")) >= 2
->
-> /* consequence of the broader interval that we had set to cope
/*> with calls taking place beyond midnight
/*> */
-> AND starttime <= '2009-08-03 23:59:59'
-> GROUP BY tslot;
+----+-------------+---------------------+-------+---------------+--------------+---------+------+------+------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+---------------------+-------+---------------+--------------+---------+------+------+------------------------------------------------+
| 1 | SIMPLE | phone_calls | range | idx_stoptime | idx_stoptime | 8 | NULL | 4 | Using where; Using temporary; Using filesort |
| 1 | SIMPLE | phone_calls_helper2 | ALL | PRIMARY | NULL | NULL | NULL | 1440 | Range checked for each record (index map: 0x1) |
+----+-------------+---------------------+-------+---------------+--------------+---------+------+------+------------------------------------------------+
2 rows in set (0.00 sec)
[/mysql]

It's interesting to see «Range checked for each record (index map: 0x1)» for which the manual states:
MySQL found no good index to use, but found that some of indexes might be used after column values from preceding tables are known.

I can't explain why it wouldn't just use the PRIMARY KEY - I tried wrapping the CONCAT()s in CONVERT() to ensure the same data type, but no luck - but I'm probably safe, as it will most likely end up using it. And this is the final result:
[mysql]
mysql> SELECT tslot,count(1) FROM phone_calls FORCE INDEX(idx_stoptime) JOIN phone_calls_helper2 FORCE INDEX (PRIMARY) ON tslot > CONVERT(CONCAT(LEFT(starttime,16),":00"),DATETIME) AND tslot < CONVERT(CONCAT(LEFT(stoptime,16),":00"),DATETIME) WHERE stoptime >= '2009-08-03 00:00:00' AND stoptime <= DATE_ADD('2009-08-03 23:59:59', INTERVAL 1 HOUR) AND TIMESTAMPDIFF(MINUTE, CONCAT(LEFT(starttime,16),":00"), CONCAT(LEFT(stoptime,16),":00")) >= 2 AND starttime <= '2009-08-03 23:59:59' GROUP BY tslot;
+---------------------+----------+
| tslot | count(1) |
+---------------------+----------+
| 2009-08-03 11:30:00 | 1 |
| 2009-08-03 11:31:00 | 1 |
| 2009-08-03 11:32:00 | 1 |
| 2009-08-03 11:33:00 | 2 |
| 2009-08-03 16:13:00 | 1 |
| 2009-08-03 16:14:00 | 1 |
| 2009-08-03 16:15:00 | 1 |
| 2009-08-03 16:16:00 | 1 |
| 2009-08-03 16:17:00 | 1 |
| 2009-08-03 16:18:00 | 1 |
| 2009-08-03 16:19:00 | 1 |
+---------------------+----------+
11 rows in set (0.00 sec)
[/mysql]

Notice that I already did the GROUP BY, and that it forces a temporary table and a filesort, so it's better to be careful about how many records this will generate. In my (real) case the grouping is done on more phone_calls fields, so I can probably reuse the index later. As for post-execution cleanup, since the helper table is TEMPORARY, everything is discarded automatically, with no further programming overhead.
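For completeness, the other formulation I mentioned - "records per timeslot", driving from the materialized timeslot table - could look roughly like this. This is my own reconstruction of that variant, not the exact query I use in production:

[mysql]
SELECT t.tslot, COUNT(1)
FROM phone_calls_helper2 t
STRAIGHT_JOIN phone_calls c
  ON  c.starttime <  t.tslot
  AND c.stoptime  >= DATE_ADD(t.tslot, INTERVAL 1 MINUTE)
WHERE c.stoptime  >= '2009-08-03 00:00:00'
  AND c.stoptime  <= DATE_ADD('2009-08-03 23:59:59', INTERVAL 1 HOUR)
  AND c.starttime <= '2009-08-03 23:59:59'
GROUP BY t.tslot;
[/mysql]

The join condition is the same "fully crossed" rule written directly on the timestamps: the call must start before the slot begins and stop at or after the following minute.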

I hope you can see that this approach opens up a whole range of set-based solutions to problems you might be tempted to solve in a procedural way - which is precisely why those solutions tend to become painful.

Sunday, 27 September 2009

Importing wikimedia dumps

We are trying to gather some particular statistics about Portuguese Wikipedia usage.
I volunteered to import the ptwiki-20090926-stub-meta-history dump, which is an XML file, and we'll be running some very heavy queries on it (it's my task to optimize them, somehow).

What I'd like to mention is that the importing mechanism seems to have been tremendously simplified. I remember testing a couple of tools in the past, without much success (or robustness). However, I gave mwdumper a try this time, and it seems to do the job. Note, however, that there have been schema changes since the last mwdumper release, so you should have a look at WMF Bug #18328: mwdumper java.lang.IllegalArgumentException: Invalid contributor, which carries a proposed fix that seems to work well. A special note on its memory efficiency: RAM is barely touched!

The xml.gz file is ~550MB, and was converted to a ~499MB sql.gz:

1,992,543 pages (3,458.297/sec), 15,713,915 revs (27,273.384/sec)


I've copied the schema from a running (updated!) MediaWiki to save some time. The tables default to InnoDB, so let's simplify I/O a bit (I'm on my laptop). This will also speed up loading times a lot:
[mysql]
mysql> ALTER TABLE `text` ENGINE=Blackhole;
Query OK, 0 rows affected (0.01 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> alter table page drop index page_random, drop index page_len;
Query OK, 0 rows affected (0.01 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> alter table revision drop index rev_timestamp, drop index page_timestamp, drop index user_timestamp, drop index usertext_timestamp;
Query OK, 0 rows affected (0.01 sec)
Records: 0 Duplicates: 0 Warnings: 0
[/mysql]

The important thing here is to avoid the heavier I/O you don't need at all. The text table holds the page/revision content, which I'm not interested in at all. As for MySQL's configuration (and as a personal note, anyway), the following settings will give you great InnoDB speeds:
[code]
key_buffer = 512K
sort_buffer_size = 16K
read_buffer_size = 2M
read_rnd_buffer_size = 1M
myisam_sort_buffer_size = 512K
query_cache_size = 0
query_cache_type = 0
bulk_insert_buffer_size = 2M

innodb_file_per_table
transaction-isolation = READ-COMMITTED
innodb_buffer_pool_size = 2700M
innodb_additional_mem_pool_size = 20M
innodb_autoinc_lock_mode = 2
innodb_flush_log_at_trx_commit = 0
innodb_doublewrite = 0
skip-innodb-checksum
innodb_locks_unsafe_for_binlog=1
innodb_log_file_size=128M
innodb_log_buffer_size=8388608
innodb_support_xa=0
innodb_autoextend_increment=16
[/code]

Now I'd recommend uncompressing the dump so it's easier to trace the whole process if it takes too long:
[code]
[myself@speedy ~]$ gunzip ptwiki-20090926-stub-meta-history.sql.gz
[myself@speedy ~]$ cat ptwiki-20090926-stub-meta-history.sql | mysql wmfdumps
[/code]

After a few minutes on a dual quad-core Xeon 2.0GHz, and with 2.4GB of datafiles, we are ready to rock! I will probably also need the user table later, which Wikimedia doesn't distribute, so I'll rebuild it now:
[mysql]
mysql> alter table user modify column user_id int(10) unsigned NOT NULL;
Query OK, 0 rows affected (0.12 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> alter table user drop index user_email_token, drop index user_name;
Query OK, 0 rows affected (0.03 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> insert into user(user_id,user_name) select distinct rev_user,rev_user_text from revision where rev_user <> 0;
Query OK, 119140 rows affected, 4 warnings (2 min 4.45 sec)
Records: 119140 Duplicates: 0 Warnings: 0

mysql> alter table user drop primary key;
Query OK, 0 rows affected (0.13 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> insert into user(user_id,user_name) values(0,'anonymous');
Query OK, 1 row affected, 4 warnings (0.00 sec)
[/mysql]
It's preferable to join on INTs rather than on a VARCHAR(255), which is why I rebuilt the user table. I actually dropped the PRIMARY KEY during the process and set it back afterwards. What happens is that there are users that have been renamed, so they appear with the same id but different user_name values. The query to list them all is this:
[mysql]
mysql> select a.user_id,a.user_name from user a join (select user_id,count(1) as counter from user group by user_id having counter > 1 order by counter desc) as b on a.user_id = b.user_id order by user_id DESC;
....
14 rows in set (0.34 sec)

mysql> update user a join (select user_id,GROUP_CONCAT(user_name) as user_name,count(1) as counter from user group by user_id having counter > 1) as b set a.user_name = b.user_name where a.user_id = b.user_id;
Query OK, 14 rows affected (2.49 sec)
Rows matched: 14 Changed: 14 Warnings: 0
[/mysql]

The duplicates were then removed manually (there are just 7). Now let's start digging deeper. I'm not worried about optimizing for now. What I wanted to run right away was the query I asked for on the Toolserver more than a month ago:

[mysql]
mysql> CREATE TABLE `teste` (
-> `rev_user` int(10) unsigned NOT NULL DEFAULT '0',
-> `page_namespace` int(11) NOT NULL,
-> `rev_page` int(10) unsigned NOT NULL,
-> `edits` int(1) unsigned NOT NULL,
-> PRIMARY KEY (`rev_user`,`page_namespace`,`rev_page`)
-> ) ENGINE=InnoDB DEFAULT CHARSET=latin1 ;
Query OK, 0 rows affected (0.04 sec)

mysql> insert into teste select r.rev_user, p.page_namespace, r.rev_page, count(1) AS edits from revision r JOIN page p ON r.rev_page = p.page_id GROUP BY r.rev_user,p.page_namespace,r.rev_page;
Query OK, 7444039 rows affected (8 min 28.98 sec)
Records: 7444039 Duplicates: 0 Warnings: 0

mysql> create table edits_per_namespace select straight_join u.user_id,u.user_name, page_namespace,count(1) as edits from teste join user u on u.user_id = rev_user group by rev_user,page_namespace;
Query OK, 187624 rows affected (3.65 sec)
Records: 187624 Duplicates: 0 Warnings: 0

mysql> select * from edits_per_namespace order by edits desc limit 5;
+---------+---------------+----------------+--------+
| user_id | user_name | page_namespace | edits |
+---------+---------------+----------------+--------+
| 76240 | Rei-bot | 0 | 365800 |
| 0 | anonymous | 0 | 253238 |
| 76240 | Rei-bot | 3 | 219085 |
| 1740 | LeonardoRob0t | 0 | 145418 |
| 170627 | SieBot | 0 | 121647 |
+---------+---------------+----------------+--------+
5 rows in set (0.09 sec)
[/mysql]

Well, that's funny: Rei-artur's bot beats all anonymous edits combined on the main namespace :) I still need to set up a way of discarding the bots; they usually don't count for stats. I'll probably set a flag on the user table myself (a sketch of what that could look like is below), but this is enough to get us started.
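A minimal sketch of that flag - the column name and the bot list below are just illustrative assumptions on my part:

[mysql]
ALTER TABLE user ADD COLUMN user_is_bot tinyint(1) NOT NULL DEFAULT 0;

-- Mark the known bots by hand (these two names are only examples):
UPDATE user SET user_is_bot = 1
WHERE user_name IN ('Rei-bot','SieBot');

-- ...and leave them out of the stats:
SELECT e.user_name, e.page_namespace, e.edits
FROM edits_per_namespace e
JOIN user u ON u.user_id = e.user_id
WHERE u.user_is_bot = 0
ORDER BY e.edits DESC
LIMIT 5;
[/mysql]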

Listing miscellaneous Apache parameters in SNMP

We recently had to look at a server that occasionally died under a DoS. I was manually monitoring a lot of stuff while watching a persistent, BIG Apache worker pop up occasionally and then disappear (probably being recycled). More rarely, I caught two of them at once. The machine was being flooded with blog spam from a botnet. I did the math and soon figured out that if the currently allowed number of workers filled up the way these did, the machine would start swapping like nuts. This seemed to be the cause.

After correcting the problem (many measures were taken, see below), I searched for Cacti templates that could make this behaviour visible. I found that neither ApacheStats nor the better Apache templates report Virtual Memory Size (VSZ) or Resident Set Size (RSS), which is explained by mod_status not reporting them either (and they fetch their data by querying mod_status).

So here's a simple way of monitoring these. Suppose there is a server running some Apache workers you want to monitor, and a machine to which you want to collect the data:

Edit your server's /etc/snmp/snmpd.conf
[code]
# .... other configuration directives
exec .1.3.6.1.4.1.111111.1 ApacheRSS /usr/local/bin/apache-snmp-rss.sh
[/code]

The '.1.3.6.1.4.1.111111.1' OID is a branch of '.1.3.6.1.4.1', which is assigned the meaning '.iso.org.dod.internet.private.enterprises' and is where an enterprise without an IANA-assigned code should place its OIDs. Anyway, you can use any sequence you want.

Create a file named /usr/local/bin/apache-snmp-rss.sh with following contents:
[code]
#!/bin/sh
WORKERS=4
ps h -C httpd -o rss | sort -rn | head -n $WORKERS
[/code]

Notice that httpd is Apache's process name in CentOS; in Debian, e.g., that would be apache. Give the script execution rights, then go to your poller machine, from where you'll run the SNMP queries:
[code]
[root@poller ~]# snmpwalk -v 2c -c public targetserver .1.3.6.1.4.1.111111.1.101
SNMPv2-SMI::enterprises.111111.1.101.1 = STRING: "27856"
SNMPv2-SMI::enterprises.111111.1.101.2 = STRING: "25552"
SNMPv2-SMI::enterprises.111111.1.101.3 = STRING: "24588"
SNMPv2-SMI::enterprises.111111.1.101.4 = STRING: "12040"
[/code]

So this is reporting the 4 most consuming workers (which is the value specified in the script variable WORKERS) with their RSS usage (that's the output of '-o rss' on the script).

Now, graphing these values is a bit more complicated, especially because graphs are usually created on a "fixed number of values" basis. That means the script has to cope whenever your number of workers increases or decreases. That's why there is filtering going on in the script: first we reverse-sort the workers by RSS size, then we take only the first 4 - this means you'll always be listing the most memory-consuming workers. To avoid having your graphs ask for more values than the script generates, the WORKERS variable should be set to the minimum number of Apache workers you'll ever have on your system - that should be the httpd.conf StartServers directive.

Now for the graphs: this is the tricky part, as I find Cacti a little overcomplicated. However, you should be OK with this Netuality post. You should create individual data sources for each of the workers and group the four of them in a Graph Template. This is the final result, after lots of struggling to get the correct values (I still didn't manage to get the right values, which are ~22KB):

cacti_apache_rss_stats

In this graph you won't notice the events I described at the beginning, because other measures were taken, including dynamic firewalling, Apache tuning, and auditing the blogs for comment and track/pingback permissions - we had a user wide open to spam, and that's when the automatic blog spam cleanup process was implemented. In any case, this graph will make any similar future situations visible, though I hope they're over.

I'll try to post the Cacti templates as well, as soon as I recover from the struggle :) Drop me a note if you're interested.

Friday, 25 September 2009

Side-effect of mysqlhotcopy and LVM snapshots on active READ server

I just came across a particular MySQL behaviour while inspecting a Query Cache that was being wiped out at backup time. Whenever you run FLUSH TABLES, the whole Query Cache gets flushed as well, even if you FLUSH TABLES for one particular table. And guess what: mysqlhotcopy issues FLUSH TABLES so that the tables get in sync on storage.
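If you want to reproduce this on a test server with the query cache enabled, a quick check could look like this (my own sketch; some_table stands for any table you have around):

[mysql]
-- Run any cacheable SELECT first, then:
SHOW STATUS LIKE 'Qcache_queries_in_cache';

FLUSH TABLES some_table;   -- naming a single table is enough

SHOW STATUS LIKE 'Qcache_queries_in_cache';
-- The counter drops to 0: the whole query cache was invalidated,
-- not just the entries that referenced some_table.
[/mysql]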

I actually noticed the problem on a server whose Query Cache was reported flushed at a [too] round time (backup time).

flush_tables_affects_query_cache

My first thought was «there's something wrong with mysqlhotcopy». But this is actually expected behaviour:

When no tables are named, closes all open tables, forces all tables in use to be closed, and flushes the query cache. With one or more table names, flushes only the given tables. FLUSH TABLES also removes all query results from the query cache, like the RESET QUERY CACHE statement.


I got curious about why the heck closing a table should invalidate the cache - maybe the "close table" mechanism is overly cautious?

Anyway, it's not mysqlhotcopy's fault. And since you should issue FLUSH TABLES for LVM snapshots as well, for consistency, that method is affected too, which makes both of them pretty counter-performance on a single production server compared to mysqldump, unless you run a post-backup warm-up process. For that, it would be interesting to be able to dump the QC contents and reload them after the backup - which is not possible at the moment... bummer...

Wednesday, 23 September 2009

Size and character set of VARCHAR fields, and their consequences

Although the declared size of VARCHAR fields does not influence index size, this data type (and similar ones) carries a set of characteristics that can seriously affect performance. Let's test the impact of these fields in two scenarios: with different character sets and with different sizes.

Considering that the MEMORY storage engine only works with fixed-length rows, and that this is the engine used for temporary tables (something to avoid, although not always possible), the consequences can be disastrous.

For this demonstration, let's set the maximum size of MEMORY tables to a value we can reach in a few seconds: the minimum.

[mysql]
mysql> set max_heap_table_size = 1;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> show variables like '%heap%';
+---------------------+-------+
| Variable_name | Value |
+---------------------+-------+
| max_heap_table_size | 16384 |
+---------------------+-------+
1 row in set (0.00 sec)
[/mysql]

We can see that the minimum we'll get is 16KB. Let's see what happens with VARCHAR fields (i.e., of [VAR]iable length):
[mysql]
mysql> CREATE TABLE `varchar_small` (
-> `id` varchar(36) NOT NULL
-> ) ENGINE=MEMORY DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.01 sec)

mysql> insert into varchar_small values('abcd');
Query OK, 1 row affected (0.00 sec)

mysql> -- Keep filling it up until it won't take any more...
mysql> insert into varchar_small select * from varchar_small;
ERROR 1114 (HY000): The table 'varchar_small' is full

mysql> CREATE TABLE `var_char` (
-> `id` varchar(36) NOT NULL
-> ) ENGINE=MEMORY DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.00 sec)

mysql> insert into var_char values('abcdefgh-ijkl-mnop-qrst-uvwxyzabcedf');
Query OK, 1 row affected (0.00 sec)

mysql> -- Same thing: keep filling until it won't take any more...
mysql> insert into var_char select * from var_char;
ERROR 1114 (HY000): The table 'var_char' is full

mysql> select count(1) from var_char;
+----------+
| count(1) |
+----------+
| 320 |
+----------+
1 row in set (0.00 sec)

mysql> select count(1) from varchar_small;
+----------+
| count(1) |
+----------+
| 320 |
+----------+
1 row in set (0.00 sec)
[/mysql]
Now then: what I did was fill the first table with content much shorter than 36 characters (only 4) and the second with content filling the whole field. What we can observe is that, in this storage engine, the space taken by a barely-filled field or a completely-filled field is always the same: the total size associated with the field. That is what fixed-length means, and the consequence is that the size of a VARCHAR field in MEMORY tables (read: temporary tables as well) is invariably the field's maximum size - which is why both tables stopped at the same 320 rows. So even though a maximum size of 255 doesn't influence the size of an index on that field, whenever that data has to be carried into a temporary table there can be an enormous waste of space (multiply that waste by the number of rows that have to be carried into the temporary table!).
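You can also see this without filling the tables at all (my own addition): the MEMORY engine reports the same fixed row length for both tables, however little of the VARCHAR is actually used:

[mysql]
SHOW TABLE STATUS WHERE Name IN ('varchar_small','var_char');
-- Both rows report the same Avg_row_length, because every row
-- reserves the full VARCHAR(36), regardless of its content.
[/mysql]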

But it doesn't stop here: besides care and good sense when defining the field size, you also need to pay attention to its encoding. The second point is the influence of the charset. We typically used to work in Latin1, but with the spread of localization (i18n) practically everything moved to UTF-8. All very well: a whole class of string formatting problems went away. But not everything is rosy: in UTF-8 a single character can consume up to 4 bytes (in MySQL, 3), whereas Latin1 used only 1:

[mysql]
mysql> CREATE TABLE `varchar_utf8` (
-> `id` varchar(36) NOT NULL
-> ) ENGINE=MEMORY DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.01 sec)

mysql> CREATE TABLE `varchar_latin1` (
-> `id` varchar(36) NOT NULL
-> ) ENGINE=MEMORY DEFAULT CHARSET=latin1;
Query OK, 0 rows affected (0.00 sec)
[/mysql]
The table's general encoding clause propagates to the columns as their default. In any case, the encoding could be specified for each column individually. To make sure we aren't being fooled by the current encoding of the mysql client, I used two PHP scripts to create a single row in each of these tables (my terminal's encoding is UTF-8, as is common these days):
[php]
mysql_connect('localhost','root','');
mysql_select_db('test');
mysql_query("SET NAMES UTF8");
mysql_query("INSERT INTO varchar_utf8 VALUES('ãããããããã-ãããã-ãããã-ãããã-ãããããããããããã')");
[/php]
[php]
mysql_connect('localhost','root','');
mysql_select_db('test');
mysql_query("SET NAMES Latin1");
mysql_query("INSERT INTO varchar_latin1 VALUES('ãããããããã-ãããã-ãããã-ãããã-ãããããããããããã')");
[/php]
I saved the two scripts above as varchar_utf8.php and varchar_latin1.php.temp, respectively. Finally, I converted the second script to Latin1 encoding (because no matter what I do, my terminal's encoding is UTF-8):
[code]
[root@speedy ~]# cat varchar_latin1.php.temp | iconv -f utf8 -t iso-8859-1 > varchar_latin1.php
[/code]
And ran both:
[code]
[root@speedy ~]# php -f varchar_utf8.php
[root@speedy ~]# php -f varchar_latin1.php
[/code]
OK, now I have 1 record in each table. Let's use that single record to fill each table until no more records can be added:
[mysql]
mysql> -- ...
mysql> insert into varchar_utf8 select * from varchar_utf8;
ERROR 1114 (HY000): The table 'varchar_utf8' is full

mysql> -- ...
mysql> insert into varchar_latin1 select * from varchar_latin1;
ERROR 1114 (HY000): The table 'varchar_latin1' is full

mysql> select count(1) from varchar_utf8;
+----------+
| count(1) |
+----------+
| 126 |
+----------+
1 row in set (0.00 sec)

mysql> select count(1) from varchar_latin1;
+----------+
| count(1) |
+----------+
| 320 |
+----------+
1 row in set (0.00 sec)
[/mysql]

And there it is, the second problem: in the MEMORY engine, far fewer rows fit in the first table (UTF-8) than in the second (Latin1), because the fields take their maximum possible size and, for VARCHAR in UTF-8, each character can take up to 3 bytes. To hold 36 characters, at least 36*3 = 108 bytes are needed! That is a 300% consumption that may not have been accounted for when sizing the memory allowed for temporary tables.

Both scenarios are quite frequent, especially in applications that use frameworks - and in those cases, given the variety of backends they try to support (MySQL, PostgreSQL, Oracle, etc.), the queries involved are usually not optimized for any engine in particular; with MySQL those queries often generate temporary tables unnecessarily, and optimizing this kind of query may require changes to the framework's core or, at the very least, exceptions to the rule.

The choice of column name in the examples above was not accidental: UUID() fields tend to be UTF-8 merely because the whole table is UTF-8 and yet, would you believe it, there is no character in a UUID() that isn't ASCII (in fact, nothing outside hexadecimal digits and the dash!):
[mysql]
mysql> select UUID();
+--------------------------------------+
| UUID() |
+--------------------------------------+
| 27d6a670-a64b-11de-866d-0017083bf00f |
+--------------------------------------+
1 row in set (0.00 sec)
[/mysql]
Since the character set of a UUID() is compatible across charsets, this is one of the optimizations you can easily make in your applications, contributing to better MySQL performance with no impact at all on your application.
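A hedged example of that optimization (the table and column names here are mine, purely illustrative): keep the table in UTF-8 for real text, but declare the UUID column with a 1-byte charset:

[mysql]
CREATE TABLE sessions (
  uuid    varchar(36) CHARACTER SET ascii NOT NULL,
  payload text,
  PRIMARY KEY (uuid)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-- In a MEMORY temporary table this column now costs 36 bytes
-- per row instead of 108.
[/mysql]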

Monday, 14 September 2009

News on the SugarCRM - IPBrick integration

For the GA release of SugarCRM 5.5, we are preparing a few surprises for IPBrick version 5.1:


  • Integrated support for any of the SugarCRM 5.2 and 5.5 versions (Community Edition, Professional, or Enterprise).


  • A new method for synchronizing/importing accounts and contacts. This method drastically reduces synchronization time: the larger the data set to import, the bigger the difference in speed.


  • Thanks to this new method, some level of bidirectional synchronization will also be possible. It is actually a merge - as far as possible - of two records changed on both sides. The user will be able to control how they want the results:

    • strict synchronization with IP Contacts

    • synchronization of the common records (between IPBrick and SugarCRM) while preserving the data that exists only in SugarCRM (i.e., not in IPBrick), allowing SugarCRM to develop autonomy

    • or synchronization of only the new data from IP Contacts, fully preserving SugarCRM's records.



  • Better integration with SugarCRM: the new version is much more robust regarding upgrade-safe changes, and DRI redid the integration precisely for that, which means new versions will be released more quickly.


  • An abstraction layer, making it possible to test directly and instantly against real customer data. A data obfuscation feature will also be developed to keep that data confidential.


  • The ability to run an automated battery of validation tests (unit testing) over the synchronization component. This will allow us to do quality control before each release of the module.


  • And, of course, no less important: a much smaller memory footprint (< 1MB on the command line);


  • The new administration interface, with a wizard that explains each step along the way, now offers the possibility of extracting synchronization reports (see below) and allows reviewing the final result before it is merged into SugarCRM:



Initial screen:
sugar-ipbrick-uirevamp1

The first step is importing the IP Contacts data and cross-checking it with the current SugarCRM data. At the end of the process it will be possible to review the operations:
sugar-ipbrick-uirevamp2

Finally, the last step is the merge:
sugar-ipbrick-uirevamp3


Other than that, we are still fine-tuning the last details of the unified communications integration module, which will also be adapted to the changes in IPBrick 5.2.

Saturday, 5 September 2009

Automatically cleaning up SPAM Wordpress comments

While doing maintenance on our blogs (Wordpress), I bumped into one that had fallen prey to an active botnet. It was receiving something like 5 or 6 spam comments per minute. It was practically the only one under such harassment, so I suspect the botnet loved it for being open to comments.

Since I activated reCaptcha I've been monitoring my "spam folder" and I'm really confident in its guesses, so I just wrote a STORED PROCEDURE to clean up these spam comments on a periodic basis, allowing a site-wide cleanup:

[mysql]
DELIMITER $$

DROP PROCEDURE IF EXISTS `our_blog_db`.`REMOVE_OLD_SPAM`$$
CREATE PROCEDURE `our_blog_db`.`REMOVE_OLD_SPAM` ()
MODIFIES SQL DATA
COMMENT 'remove comments flagged as SPAM'
BEGIN

    DECLARE done BIT(1) DEFAULT false;
    DECLARE commtbl VARCHAR(50);
    DECLARE comments_tbls CURSOR FOR SELECT TABLE_NAME
        FROM information_schema.TABLES
        WHERE TABLE_SCHEMA = 'our_blog_db' AND TABLE_NAME LIKE '%comments';
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = true;

    OPEN comments_tbls;

    REPEAT
        FETCH comments_tbls INTO commtbl;
        -- only run the DELETE when a table name was actually fetched
        IF NOT done THEN
            SET @next_tbl = CONCAT('DELETE FROM our_blog_db.',commtbl,'
                WHERE comment_approved = "spam"
                AND comment_date_gmt < DATE_SUB(UTC_TIMESTAMP(), INTERVAL 15 DAY)');
            PREPARE get_next_tbl FROM @next_tbl;
            EXECUTE get_next_tbl;
            DEALLOCATE PREPARE get_next_tbl;
        END IF;
    UNTIL done END REPEAT;

    CLOSE comments_tbls;

END$$

DELIMITER ;
[/mysql]

It's very easy to stick it into an EVENT, if you have MySQL 5.1 or newer and wish to run the cleanup daily and automatically:

[mysql]
DELIMITER $$

CREATE EVENT `EV_REMOVE_OLD_SPAM`
ON SCHEDULE EVERY 1 DAY STARTS '2009-08-01 21:00:00'
ON COMPLETION NOT PRESERVE ENABLE
COMMENT 'remove comments flagged as SPAM'
DO
BEGIN

    SELECT GET_LOCK('remove_spam',5) INTO @remove_spam_lock;

    IF @remove_spam_lock THEN
        CALL REMOVE_OLD_SPAM();
        -- release the lock as soon as we're done
        DO RELEASE_LOCK('remove_spam');
    END IF;

END$$

DELIMITER ;
[/mysql]
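One detail worth checking (my addition): the event only fires if the event scheduler is running, which it isn't by default on 5.1:

[mysql]
SHOW VARIABLES LIKE 'event_scheduler';
SET GLOBAL event_scheduler = ON;   -- requires the SUPER privilege
[/mysql]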

Enjoy!

Wednesday, 2 September 2009

About cloud computing

Last Sunday I commented on Pedro's opinion about cloud computing and thought I could give my blog a reversed trackback :) Here it goes:

I think Pedro's message is important. Cloud marketing and buzz seem to be targeted at business decision-making personnel. However, no matter what they try to look like, this is a technical decision, and I really think that companies simply following this marketing hype will eventually get caught by the small print of those contracts. As a technician, I agree with Pedro about the enterprise [not] moving its core to the cloud, and that the prices are [still] overrated.

However, for medium-to-large traffic platforms that require a complex setup (meaning >4 machines), the cloud can be a solution very similar to what could be called Hardware-as-a-Service. Unavoidably, you have to move this kind of platform outside the core, even if it sits on a DMZ. Moreover, you don't usually want to mix corporate traffic with specific platforms (e.g. a multinational's CRM, the company's website, etc.). In this context, the cloud adds as much value as a regular hosting company would, IMO. No more, no less.

Having said that, I still think it has lots of potential for intermediary companies (and again, this lives in the technical scope) to provide HW solutions to customers by clicking and adding "resources" to a [kind of] shopping cart and then splitting them according to their needs. That's pretty much how Amazon seems to work - not the VPS/sliced hosting we are getting used to. I also see a benefit for large hosting companies (now these could be the VPS/sliced ones :) ), because they can turn their income into a periodic stream that matches their periodic costs. From this intermediary's perspective, one of the great features of this cloud thing is that the providers have set up quite heterogeneous provisioning systems, which a regular company can't match - that is to say, you could set up a small/medium/full-blown pile of servers with a few clicks. Time also costs money.

Of course, this is all theoretical while the prices remain so high. From my searches it seemed even worse (although I confess I didn't explore in depth): you will pay much more in the cloud to have the same resources you can find on typical dedicated hosting servers - but it's also true that you rarely use them at 100%, so you may eventually get a better cost/performance ratio in the near future (because when you buy or rent hardware it's very difficult to recover the cost).

My conclusion is that the cloud is trying to attract customers on the hype, and that makes our technical advice more needed than ever: explain to the client how to plan, how to implement, how to scale, and where exactly the cloud fits in. To clients, my recommendation is this: being on the cloud just because "it's cool", or because it (seems) so simple that you won't need specialized IT staff, will eventually turn against you.