Picking the last record of each date in mysql

When we have stats, oftentimes a timestamp is involved. So, say we have a table like the following

CREATE TABLE IF NOT EXISTS `my_stats_table` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `created_datetime` datetime NOT NULL,
  `value1` int(11) NOT NULL,
  PRIMARY KEY (`id`)
)

Now, all I want is one row of data for each day. Of course, there could be many rows in a day, and there are many ways to handle that. In my case, I want the last row of each day since it's the most accurate. How to do that?

SELECT * 
FROM `my_stats_table`
WHERE created_datetime IN
  (SELECT max(created_datetime)
    FROM `my_stats_table`
    GROUP BY DATE(created_datetime)
  )

The magic here is GROUP BY DATE(created_datetime). Yea, we are not limited to grouping by bare columns only; expressions work too.
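If you prefer to keep the WHERE clause clean, here is a sketch of the same idea written as a join instead (last_per_day is just an alias I made up):

```sql
SELECT t.*
FROM `my_stats_table` t
JOIN (
  -- one row per day: the latest timestamp of that day
  SELECT MAX(created_datetime) AS max_dt
  FROM `my_stats_table`
  GROUP BY DATE(created_datetime)
) last_per_day ON t.created_datetime = last_per_day.max_dt
```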

What if I want to get the average of the rows of a particular day? Um…
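That one is actually simpler, since no subquery is needed. A sketch, averaging value1 per day:

```sql
SELECT DATE(created_datetime) AS stat_date, AVG(value1) AS avg_value1
FROM `my_stats_table`
GROUP BY DATE(created_datetime)
ORDER BY stat_date
```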

Installing mysql on osx

I often need to do this so I figure I will write it down to remind myself later.

First, download the mysql installation dmg file (version 5.5.20 as of now). osx already comes with mysql but I like the extra System Pref icon after the manual install. You can download at http://dev.mysql.com/downloads/mysql/. Pick Mac OS X 10.6 (x86, 64-bit), DMG Archive. Once you have the dmg file, open and install all three files (mysql-5.5.20-osx10.6-x86_64.pkg, MySQL.prefPane, MySQLStartupItem.pkg) in there.

Then in your ~/.bashrc file, put in the following to save typing.

alias mysql=/usr/local/mysql-5.5.20-osx10.6-x86_64/bin/mysql
alias mysqladmin=/usr/local/mysql-5.5.20-osx10.6-x86_64/bin/mysqladmin

You can surely add /usr/local/mysql-5.5.20-osx10.6-x86_64/bin/ to the $PATH variable in your ~/.bash_profile file. Yea, whatever works.

PATH=/usr/local/mysql-5.5.20-osx10.6-x86_64/bin/:$PATH

Then turn on your MySQL daemon. An easy way is to open up System Preferences, then find the MySQL icon, open it. Then start the server and check the auto-start option below. If everything goes well, you will see the green running text.

Then set the root password for this database. The following sets the root password to my_secret_password

$ mysqladmin -u root password my_secret_password

To change your root password, say from my_secret_password to abc123, do the following. Note there is no space between -p and the old password.

$ mysqladmin -u root -p'my_secret_password' password 'abc123'

Alright, mysql is now up and running!

Bonus: To make your life a little easier, download Sequel Pro. It’s a free osx mysql gui.

MySQL converting Text to Varchar

One incentive for converting a TEXT column to a VARCHAR column is that you can then index the whole column for quicker queries (a TEXT column can only take a prefix index).

Before converting, you want to make sure you won't be truncating anything. Run the following to check:

SELECT MAX( LENGTH(  target_column ) ) 
FROM  target_table

As long as the returned length is less than your varchar length, you are good to go. Note that LENGTH() counts bytes; if the column holds multi-byte text, check CHAR_LENGTH() as well.
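The conversion itself is a single ALTER TABLE. A sketch, assuming target_column should become VARCHAR(255) and is nullable (copy your column's real attributes into the MODIFY), plus the index that motivated the whole thing:

```sql
-- change the column type in place
ALTER TABLE `target_table`
  MODIFY `target_column` VARCHAR(255);

-- now the full column can be indexed
CREATE INDEX `idx_target_column` ON `target_table` (`target_column`);
```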

GROUP_CONCAT

GROUP_CONCAT is very useful when you are doing a GROUP BY, and want to capture the values from other columns. For example:

SELECT country, GROUP_CONCAT(city SEPARATOR ',') AS city_arr_str
FROM myTable
GROUP BY country
ORDER BY country

Then in say PHP, you can read the city names for each returned row:

foreach ($rows as $r){
  $country = $r["country"];
  $city_arr_str = $r["city_arr_str"];
  // parse the city names
  $city_arr = explode(",", $city_arr_str);
  // do your stuff...
}

Everything is fine and all, until later on your database gets much bigger.

Imagine now your db table gets lots of duplicates, and each country has more than 1000 cities. When you run the same query and code, you will notice something seems wrong… It turns out there is a length limit on what GROUP_CONCAT returns: the group_concat_max_len system variable, which defaults to 1024.

To change that limit, set the group_concat_max_len server variable to something higher than the default 1024. Below is the sql query; use GLOBAL to change it server-wide (this needs the SUPER privilege) or SESSION to change it for your current connection only.

SET SESSION group_concat_max_len = 1000000;

But if you don’t have the permission to make the above change, or you just want a quick fix, below is a way to get rid of the duplicates so that, hopefully, the returned string fits within 1024 characters.

SELECT country, GROUP_CONCAT(DISTINCT city ORDER BY city SEPARATOR ',') AS city_arr_str
FROM myTable
GROUP BY country
ORDER BY country

I threw in the “ORDER BY” so we don’t need to do a sort again in PHP. Anyhow, that’s the idea. Happy programming! 😉

storing blob into mysql through apache and php

This idea sounds easy and stuff. I agree. I assume you already got it working fine. But perhaps one day you may notice something is not working well, especially when you are dealing with bigger files. Below are a few things you want to look out for.

Change your /etc/php.ini

upload_max_filesize = 2M
post_max_size = 8M
memory_limit = 128M

The default upload file size limit is 2M, which could be too small. If your upload file is larger than this, you will receive an empty upload file on the server side, so increase this. If you are sending the file in a POST (say, via ajax), post_max_size caps the entire request body, so increase that limit too. Another thing to watch out for is memory_limit: if you somehow make copies of big blobs in your code, increasing PHP's memory limit is also a good idea.

After these changes, restart your apache server by: sudo apachectl restart

You can double check by calling the phpinfo() function. Your new settings should show up.

The next thing is in mysql. If you have been using addslashes to build your insert, that will eventually fail when the query exceeds mysql's max_allowed_packet limit. I hit that a few times, which got me all confused until I checked my logs. So, possibly change your insert or update statement to something like the following.

UPDATE my_table SET my_blob = LOAD_FILE('$filename') WHERE id = '$id';
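One thing to keep in mind: LOAD_FILE() reads the file on the database server, not on the client, and it silently returns NULL whenever it can't. A quick sanity check from the mysql prompt (the path here is just an example):

```sql
-- NULL means the file is unreadable, too big, or blocked by the server
SELECT LOAD_FILE('/tmp/my_upload.bin');

-- if this is set to a directory, LOAD_FILE only reads from there
SHOW VARIABLES LIKE 'secure_file_priv';
```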

Now, after you made all these changes you may still be loading null into your db. If so, check this. Grab your uploaded file on the server and check the file size; it had better be the right size. In code, I mean:

if (isset($_FILES["my_file"])){
	$file_obj = $_FILES["my_file"];
	$file_size = $file_obj["size"];
	print $file_size; // hopefully it's greater than 0

	// below is for later, skip it for now
	$file_tmp_name = $file_obj["tmp_name"];
	$file_content = file_get_contents($file_tmp_name);
}

Once you pass that, there is a chance the mysql server process doesn't have enough permission to read your uploaded file. Below is some code to create a new temp file that the mysql server can read.

// basically make a duplicate of your upload file
$tmpfname = tempnam("/tmp", "somePrefix");
$handle = fopen($tmpfname, "w");
fwrite($handle, $file_content);
fclose($handle);
chmod($tmpfname, 0666);  // this is so that the mysql process can read this file

// HERE, call your mysql LOAD_FILE statement. 

unlink($tmpfname); // clean up

If you are still loading null into your db, then it's also possible that your current mysql user doesn't have enough permission to load local files.

GRANT FILE ON *.* TO root@localhost;
SHOW GRANTS;

Assuming you are using root. I know, it’s bad. Shut up. Just an example.

And if you are still loading null, then it's also possible that your mysql server has a low max_allowed_packet limit, which caps how much data LOAD_FILE can return. Do this. Open a terminal, get a mysql prompt by: mysql -u root -p

set global net_buffer_length=1000000; 
set global max_allowed_packet=1000000000;

That should allow enough data to go through the LOAD_FILE command.
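You can double check that the new values took effect (note that GLOBAL changes only apply to connections opened afterwards):

```sql
SHOW VARIABLES LIKE 'net_buffer_length';
SHOW VARIABLES LIKE 'max_allowed_packet';
```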

I am good after all these. Hopefully your issues are solved by now. 😉

osx Lion mysql sock path

I just installed Lion and for some reason my internal website stopped working because the mysql sock file could not be found. It turns out the new osx installation changed the mysql sock file default path from /var/mysql/mysql.sock to /tmp/mysql.sock.

So, to re-enable everything, do:

  1. shut down your Web Sharing
  2. cp /etc/php.ini.default /etc/php.ini (for some reason my old php.ini got renamed to php.ini-5.2-previous…)
  3. modify /etc/php.ini, change ALL /var/mysql/mysql.sock to /tmp/mysql.sock
  4. enable Web Sharing

That will do it.